Implementation of the Master Curve method in ProSACC
International Nuclear Information System (INIS)
Feilitzen, Carl von; Sattari-Far, Iradj
2012-03-01
Cleavage fracture toughness data normally display a large amount of statistical scatter in the transition region. The cleavage toughness data in this region are specimen size-dependent and should be treated statistically rather than deterministically. The Master Curve methodology is a procedure for mechanical testing and statistical analysis of the fracture toughness of ferritic steels in the transition region. The methodology accounts for the temperature and size dependence of fracture toughness. Using the Master Curve methodology to evaluate fracture toughness in the transition region relieves the over-conservatism that has been observed in using the ASME KIC curve. One main advantage of the Master Curve methodology is the possibility of using small Charpy-size specimens to determine fracture toughness. A detailed description of the Master Curve methodology is given by Sattari-Far and Wallin [2005]. ProSACC is a suitable program for structural integrity assessments of components containing crack-like defects and for defect tolerance analysis. The program makes it possible to conduct assessments on deterministic or probabilistic grounds. The method utilized in ProSACC is based on the R6 method developed at Nuclear Electric plc, Milne et al. [1988]. The basic assumption in this method is that fracture in a cracked body can be described by two parameters, Kr and Lr. The parameter Kr is the ratio between the stress intensity factor and the fracture toughness of the material. The parameter Lr is the ratio between the applied load and the plastic limit load of the structure. The ProSACC assessment results are therefore highly dependent on the fracture toughness value applied in the assessment. In this work, the main options of the Master Curve methodology are implemented in the ProSACC program. Different options for evaluating Master Curve fracture toughness from standard fracture toughness testing data or impact testing data are considered. In addition, the
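The two-parameter Kr-Lr description above is the basis of a failure assessment diagram (FAD). As an illustrative sketch only (not ProSACC's actual implementation), the following Python fragment evaluates an assessment point against the widely used R6 Option 1 failure assessment line; the closed-form curve and the Lr cut-off value are assumptions made for this example.

```python
import math

def r6_option1_limit(Lr):
    """R6 Option 1 (material-independent) failure assessment line:
    f(Lr) = (1 - 0.14 Lr^2) * (0.3 + 0.7 exp(-0.65 Lr^6))."""
    return (1.0 - 0.14 * Lr ** 2) * (0.3 + 0.7 * math.exp(-0.65 * Lr ** 6))

def assess_point(K_I, K_mat, load, limit_load, Lr_max=1.0):
    """Return (Kr, Lr, acceptable) for a single assessment point.
    Kr = K_I / fracture toughness; Lr = load / plastic limit load.
    The point is acceptable if it lies inside the assessment line."""
    Kr = K_I / K_mat
    Lr = load / limit_load
    acceptable = Lr <= Lr_max and Kr <= r6_option1_limit(Lr)
    return Kr, Lr, acceptable
```

A point near the origin (low crack driving force, low load) falls safely inside the curve; increasing either ratio moves it toward the failure line.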
International Nuclear Information System (INIS)
Yang Wendou
2011-01-01
The new Master Curve method has been called a revolutionary advance in the assessment of reactor pressure vessel integrity in the USA. This paper explains the origin, basis and standardization of the Master Curve, starting from the reactor pressure-temperature limit curve that assures the safety of a nuclear power plant. Given that brittle fracture is highly sensitive to the microstructure, the theory and test method of the Master Curve, as well as the statistical behaviour that can be modeled using a Weibull distribution, are described in this paper. The meaning, advantages, applications and importance of the Master Curve, as well as its relation to nuclear power safety, are illustrated through the Weibull-based fitting formula for the fracture toughness database. (author)
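The Weibull statistical model mentioned above has a standard form in Master Curve practice (ASTM E1921): a three-parameter Weibull distribution with a fixed shape parameter of 4 and a threshold toughness of 20 MPa√m. A minimal sketch assuming that standard form:

```python
import math

K_MIN = 20.0  # MPa*sqrt(m); lower-bound toughness fixed in ASTM E1921

def failure_probability(K_Jc, K_0):
    """Cumulative cleavage failure probability at toughness K_Jc:
    P_f = 1 - exp(-((K_Jc - K_min)/(K_0 - K_min))^4),
    a three-parameter Weibull with fixed shape 4 and threshold K_MIN.
    K_0 is the scale parameter (63.2% failure probability level)."""
    if K_Jc <= K_MIN:
        return 0.0
    return 1.0 - math.exp(-(((K_Jc - K_MIN) / (K_0 - K_MIN)) ** 4))
```

By construction, a specimen tested exactly at K_Jc = K_0 has failure probability 1 − 1/e ≈ 0.632.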
Miranda Guedes, Rui
2018-02-01
Long-term creep of viscoelastic materials is experimentally inferred through accelerated testing techniques based on the time-temperature superposition principle (TTSP) or the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis, matching a master curve. These procedures enable the construction of creep master curves from short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. The SSM technique achieves a greater reduction in the required number of test specimens, since only one specimen is necessary, whereas the classical approach, using creep tests, demands at least one specimen per stress level to produce the set of creep curves upon which the TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations that reproduce the SSM tests based on two different viscoelastic models: one represents the viscoelastic behavior of a graphite/epoxy laminate and the other an adhesive based on epoxy resin.
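The TTSP/TSSP shifting described above can be sketched numerically: each short-term curve is translated along the log-time axis until it best overlaps the reference curve. The brute-force grid search below is an illustrative stand-in for the paper's analytical method; the function names and search grid are assumptions for the example.

```python
import numpy as np

def overlap_error(s, log_t, c, log_t_m, c_m):
    """Mean squared mismatch between a curve shifted by s in log-time
    and the reference curve, evaluated on their overlapping interval."""
    x = log_t + s
    mask = (x >= log_t_m[0]) & (x <= log_t_m[-1])
    if mask.sum() < 2:          # no usable overlap at this shift
        return float("inf")
    return float(np.mean((np.interp(x[mask], log_t_m, c_m) - c[mask]) ** 2))

def shift_factors(curves, ref_key, grid=np.arange(-3.0, 3.01, 0.01)):
    """Horizontal shift (log a_T) that maps each short-term curve onto
    the reference curve, found by brute-force grid search (illustrative).
    curves: {key: (log_time, compliance)} arrays at several conditions."""
    log_t_m, c_m = (np.asarray(a, float) for a in curves[ref_key])
    out = {ref_key: 0.0}
    for key, (log_t, c) in curves.items():
        if key == ref_key:
            continue
        log_t, c = np.asarray(log_t, float), np.asarray(c, float)
        out[key] = min(grid,
                       key=lambda s: overlap_error(s, log_t, c, log_t_m, c_m))
    return out
```

With synthetic data generated from one underlying curve, the recovered shift factor matches the known offset, which is the collapse property the superposition principles rely on.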
Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.
Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen
2017-11-01
A new method was developed and implemented into an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of a time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
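The horizontal-translation step described above can be sketched as follows. This is a simplified Python reconstruction, not the published MRCTools VBA code, and it assumes a monotonically decreasing master segment so that the vertex level can be located by linear interpolation.

```python
import numpy as np

def append_segment(master_t, master_h, seg_t, seg_h):
    """Horizontally translate a recession segment so that its vertex
    (its first, highest value) lands on the piecewise-linear curve
    defined by the preceding master-curve points, then append it."""
    master_t = np.asarray(master_t, float)
    master_h = np.asarray(master_h, float)
    seg_t, seg_h = np.asarray(seg_t, float), np.asarray(seg_h, float)
    # master_h decreases with time; reverse both arrays so np.interp
    # sees increasing x-values, then find the time at the vertex level
    t_at_vertex = np.interp(seg_h[0], master_h[::-1], master_t[::-1])
    shift = t_at_vertex - seg_t[0]
    return (np.concatenate([master_t, seg_t + shift]),
            np.concatenate([master_h, seg_h]))
```

For example, a segment starting at level 7 is translated so it begins where the preceding segment's connection line passes through 7, producing the overlapped MRC.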
Master sintering curves of two different alumina powder compacts
Directory of Open Access Journals (Sweden)
Vaclav Pouchly
2009-12-01
The concept of the Master Sintering Curve is a strong tool for optimizing a sintering schedule. The sintering behaviour can be predicted, and the sintering activation energy calculated, with the help of a few dilatometric measurements. In this paper an automatic procedure was used to calculate the Master Sintering Curves of two different alumina compacts. The sintering activation energies were determined as 640 kJ/mol for alumina with a particle size of 240 nm and 770 kJ/mol for alumina with a particle size of 110 nm. The possibility of predicting sintering behaviour with the help of the Master Sintering Curve was verified.
Master sintering curve: A practical approach to its construction
Directory of Open Access Journals (Sweden)
Pouchly V.
2010-01-01
The concept of a Master Sintering Curve (MSC) is a strong tool for optimizing the sintering process. However, constructing the MSC from sintering data involves complicated and time-consuming calculations. A practical method for the construction of an MSC is presented in the paper. With the help of a few dilatometric sintering experiments, the newly developed software calculates the MSC and finds the optimal activation energy of a given material. The software, which also enables sintering prediction, was verified by sintering tetragonal and cubic zirconia, and alumina of two different particle sizes.
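The MSC construction in both papers above rests on collapsing densification data against the thermal-history integral Θ(t, T(t)) = ∫ (1/T)·exp(−Q/RT) dt, so that density becomes a single-valued function of Θ regardless of heating schedule. A hedged numerical sketch of that integral (not the software described above; the example activation energy is taken from the alumina result quoted earlier):

```python
import numpy as np

R = 8.314  # J/(mol*K), gas constant

def msc_work(time_s, temp_K, Q):
    """Cumulative MSC abscissa Theta(t) = integral of (1/T) exp(-Q/RT) dt,
    evaluated by trapezoidal integration along a given thermal history.
    Q is the sintering activation energy in J/mol."""
    time_s = np.asarray(time_s, float)
    temp_K = np.asarray(temp_K, float)
    integrand = np.exp(-Q / (R * temp_K)) / temp_K
    dt = np.diff(time_s)
    inc = 0.5 * (integrand[1:] + integrand[:-1]) * dt
    # prepend 0 so Theta is available at every recorded time step
    return np.concatenate([[0.0], np.cumsum(inc)])
```

In practice, Q is chosen so that the measured density-versus-Θ curves from several dilatometric runs collapse onto one another; the resulting Θ axis then lets density be predicted for any new schedule.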
Application of Master Curve Methodology for Structural Integrity Assessments of Nuclear Components
Energy Technology Data Exchange (ETDEWEB)
Sattari-Far, Iradj [Det Norske Veritas, Stockholm (Sweden); Wallin, Kim [VTT, Esbo (Finland)
2005-10-15
The objective was to perform an in-depth investigation of the Master Curve methodology and, based on this method, to develop a procedure for fracture assessments of nuclear components. The project has sufficiently illustrated the capabilities of the Master Curve methodology for fracture assessments of nuclear components. Within the scope of this work, the theoretical background of the methodology and its validation on small and large specimens have been studied and presented to a sufficiently large extent, as well as the correlations between Charpy-V data and the Master Curve T0 reference temperature in the evaluation of fracture toughness. The work gives a comprehensive report of the background theory and the different applications of the Master Curve methodology. The main results of the work have shown that the cleavage fracture toughness is characterized by a large amount of statistical scatter in the transition region, is specimen size-dependent, and should be treated statistically rather than deterministically. The Master Curve methodology is able to make use of statistical data in a consistent way. Furthermore, the Master Curve methodology provides a more precise prediction of the fracture toughness of embrittled materials than the ASME KIC reference curve, which often gives over-conservative results. The procedure suggested in this study, concerning the application of the Master Curve method in fracture assessments of ferritic steels in the transition and lower-shelf regions, is valid for the temperature range T0-50 ≤ T ≤ T0+50 deg C. If only approximate information is required, the Master Curve may well be extrapolated outside this temperature range. The suggested procedure has also been illustrated with some examples.
Strain Rate Effects, Transition Behaviour and Master Curve Concept
Czech Academy of Sciences Publication Activity Database
Dlouhý, Ivo; Pluvinage, G.; Holzmann, Miloslav
č. 8 (2004), s. IV 16-IV 22 ISSN 1291-8199 R&D Projects: GA AV ČR IAA2041003; GA ČR GA106/01/0342 Institutional research plan: CEZ:AV0Z2041904 Keywords : ferritic steel * pressure vessel steel * master curve Subject RIV: JL - Materials Fatigue, Friction Mechanics
International Nuclear Information System (INIS)
2009-10-01
A series of coordinated research projects (CRPs), sponsored by the IAEA starting in the early 1970s, has focused on neutron radiation effects on reactor pressure vessel (RPV) steels. The purpose of the CRPs was to develop correlative comparisons to test the uniformity of results through coordinated international research studies and data sharing. The overall scope of the eighth CRP (CRP-8), Master Curve Approach to Monitor Fracture Toughness of Reactor Pressure Vessels in Nuclear Power Plants, evolved from previous CRPs that focused on fracture toughness related issues. The ultimate use of embrittlement understanding is its application to assure structural integrity of the RPV under current and future operation and accident conditions. The Master Curve approach for assessing the fracture toughness of a sampled irradiated material has been gaining acceptance throughout the world. This direct measurement of fracture toughness is technically superior to the correlative and indirect methods used in the past to assess irradiated RPV integrity. Several elements have been identified as focal points for Master Curve use: (i) the limits of applicability of the Master Curve at the upper range of the transition region, for loading rates from quasi-static to dynamic/impact; (ii) the effects of non-homogeneous material, or of changes due to environmental conditions, on the Master Curve, and how heterogeneity can be integrated into a more inclusive Master Curve methodology; (iii) how fracture mode differences and changes affect the Master Curve shape. The collected data in this report represent mostly results from non-irradiated testing, although some results from test reactor irradiations and plant surveillance programmes have been included as available. The results presented here should allow utility engineers and scientists to directly measure fracture toughness using small surveillance-size specimens and apply the results using the Master Curve approach.
Master-slave micromanipulator method
Energy Technology Data Exchange (ETDEWEB)
Morimoto, A.K.; Kozlowski, D.M.; Charles, S.T.; Spalding, J.A.
1999-12-14
A method is disclosed based on stacked precision X-Y stages. Attached to arms projecting from each X-Y stage is a set of two-axis gimbals. Attached to the gimbals is a rod, which provides motion along its axis and rotation around it. The result is a dual-planar apparatus that provides six degrees of freedom of motion, precise to within microns. Precision linear stages, along with precision linear motors, encoders, and controls, provide a robotics system. The motors can be remotized by incorporating a set of bellows, and the stages can be connected through a computer controller that allows one to act as a master and the other as a slave. Position information from the master can be used to control the slave. Forces of interaction of the slave with its environment can be reflected back to the motor control of the master to provide a sense of the force experienced by the slave. Forces imparted onto the master by the operator can be fed back into the control of the slave to reduce the forces required to move it.
International Nuclear Information System (INIS)
Wallin, K.; Rintamaa, R.
1999-01-01
Historically, the ASME reference curve concept assumes a constant relation between static fracture initiation toughness and crack arrest toughness. In reality, this is not the case. Experimental results show that the difference between KIc and KIa is material-specific: for some materials there is a big difference, while for others they nearly coincide. So far, however, no systematic study regarding a possible correlation between the two parameters has been performed. The recent Master Curve method, developed for brittle fracture initiation estimation, has enabled a consistent analysis of fracture initiation toughness data. The Master Curve method has been modified to also describe crack arrest toughness. Here, this modified 'crack arrest master curve' is further validated and used to develop a simple, yet (for safety assessment purposes) adequately accurate, correlation between the two fracture toughness parameters. The correlation enables the estimation of crack arrest toughness from small Charpy-sized static fracture toughness tests. The correlation is valid for low-nickel steels (≤ 1.2% Ni). If a more accurate description of the crack arrest toughness is required, it can either be measured experimentally or estimated from instrumented Charpy-V crack arrest load information. (orig.)
Use of the Master Curve methodology for real three dimensional cracks
International Nuclear Information System (INIS)
Wallin, Kim
2007-01-01
At VTT, development work has been in progress for 15 years to develop and validate testing and analysis methods applicable to fracture resistance determination from small material samples. The VTT approach is a holistic approach by which to determine static, dynamic and crack arrest fracture toughness properties, either directly or by correlations, from small material samples. The development work has produced a testing standard for fracture toughness testing in the transition region. The standard, known as the Master Curve standard, is in a way 'first of a kind', since it includes guidelines on how to properly treat the test data for use in structural integrity assessment. No standard, so far, has done this. The standard is based on the VTT approach, but presently the VTT approach goes beyond the standard. Key components in the standard are statistical expressions describing the data scatter and predicting the specimen size (crack front length) effect, and an expression (the Master Curve) for the temperature dependence of fracture toughness. The standard, and the approach it is based upon, can be considered to represent the state of the art of small specimen fracture toughness characterization. Normally, the Master Curve parameters are determined using test specimens with 'straight' crack fronts and a comparatively uniform stress state along the crack front. This enables the use of a single KI value and a single constraint value to describe the whole specimen. For a real crack in a structure, this is usually not the case: normally both KI and constraint vary along the crack front, and in the case of a thermal shock even the temperature will vary along the crack front. A proper means of applying the Master Curve methodology to such cases is presented here.
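For a crack front with varying KI and temperature, the Master Curve's weakest-link form lets each increment of front length contribute independently to the cleavage probability. The sketch below assumes the standard ASTM E1921 parameters (Kmin = 20 MPa√m, 25 mm reference crack-front length, K0 = 31 + 77·exp(0.019·(T − T0))) and a simple segment-wise discretisation; it illustrates the principle, not the VTT procedure itself.

```python
import math

K_MIN = 20.0   # MPa*sqrt(m), Weibull threshold
B_REF = 25.0   # mm, reference crack-front length (1T)

def K0(T, T0):
    """Weibull scale parameter of the Master Curve at temperature T."""
    return 31.0 + 77.0 * math.exp(0.019 * (T - T0))

def front_failure_probability(segments, T0):
    """Cleavage probability for a crack front discretised into segments
    (length_mm, K_I, T), with K_I and temperature varying along the
    front. Segments contribute independently (weakest-link summation):
    P_f = 1 - exp(-sum_i (l_i/B_REF) * ((K_Ii - Kmin)/(K0_i - Kmin))^4)."""
    s = 0.0
    for length_mm, K_I, T in segments:
        if K_I > K_MIN:
            s += (length_mm / B_REF) * (
                (K_I - K_MIN) / (K0(T, T0) - K_MIN)) ** 4
    return 1.0 - math.exp(-s)
```

A single 25 mm segment loaded to its local K0 recovers the familiar 63.2% failure probability, and segments below the threshold toughness contribute nothing.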
Use of the master curve methodology for real three dimensional cracks
International Nuclear Information System (INIS)
Wallin, K.; Rintamaa, R.
2005-01-01
At VTT, development work has been in progress for 15 years to develop and validate testing and analysis methods applicable to fracture resistance determination from small material samples. The VTT approach is a holistic approach by which to determine static, dynamic and crack arrest fracture toughness properties, either directly or by correlations, from small material samples. The development work has produced a testing standard for fracture toughness testing in the transition region. The standard, known as the Master Curve standard, is in a way 'first of a kind', since it includes guidelines on how to properly treat the test data for use in structural integrity assessment. No standard, so far, has done this. The standard is based on the VTT approach, but presently the VTT approach goes beyond the standard. Key components in the standard are statistical expressions describing the data scatter and predicting the specimen's size (crack front length) effect, and an expression (the Master Curve) for the temperature dependence of fracture toughness. The standard, and the approach it is based upon, can be considered to represent the state of the art of small specimen fracture toughness characterization. Normally, the Master Curve parameters are determined using test specimens with 'straight' crack fronts and a comparatively uniform stress state along the crack front. This enables the use of a single KI value and a single constraint value to describe the whole specimen. For a real crack in a structure, this is usually not the case: normally both KI and constraint vary along the crack front, and in the case of a thermal shock even the temperature will vary along the crack front. A proper means of applying the Master Curve methodology to such cases is presented here. (authors)
Use of Master Curve technology for assessing shallow flaws in a reactor pressure vessel material
International Nuclear Information System (INIS)
Bass, Bennett Richard; Taylor, Nigel
2006-01-01
In the NESC-IV project, an experimental/analytical program was performed to develop validated analysis methods for transferring fracture toughness data to shallow flaws in reactor pressure vessels subject to biaxial loading in the lower-transition temperature region. Within this scope, an extensive range of fracture tests was performed on material removed from a production-quality reactor pressure vessel. The Master Curve analysis of these data is reported, together with its application to the assessment of the project's feature tests on large beam test pieces.
Application and validation of the notch master curve in medium and high strength structural steels
Energy Technology Data Exchange (ETDEWEB)
Cicero, Sergio; Garcia, Tiberio [Universidad de Cantabria, Santander (Spain); Madrazo, Virginia [PCTCAN, Santander (Spain)
2015-10-15
This paper applies and validates the Notch Master Curve in two ferritic steels of medium (S460M) and high (S690Q) strength. The Notch Master Curve is an engineering tool that allows the fracture resistance of notched ferritic steels operating within their corresponding ductile-to-brittle transition zone to be estimated. It combines the Master Curve and the Theory of Critical Distances in order to take into account the temperature effect and the notch effect, respectively, assuming that the two effects are independent. The results, derived from 168 fracture tests on notched specimens, demonstrate the capability of the Notch Master Curve to predict the fracture resistance of medium and high strength ferritic steels operating within their ductile-to-brittle transition zone and containing notches.
Application of Bimodal Master Curve Approach on KSNP RPV steel SA508 Gr. 3
International Nuclear Information System (INIS)
Kim, Jongmin; Kim, Minchul; Choi, Kwonjae; Lee, Bongsang
2014-01-01
In this paper, the standard Master Curve (MC) approach and the bimodal Master Curve (BMC) approach are applied to the forging material of the KSNP (Korean Standard Nuclear Plant) RPV steel SA508 Gr. 3. A series of fracture toughness tests was conducted in the ductile-to-brittle transition region, with fracture toughness specimens extracted from four regions: the surface, 1/8T, 1/4T and 1/2T. Deterministic material inhomogeneity was reviewed through the conventional MC approach, and random inhomogeneity was evaluated by the BMC. The T0 determined by the conventional MC has a low value at the surface, as expected from the higher quenching rate there. However, more than about 15% of the KJC values lay above the 95% probability curves indexed with the standard MC T0 at the surface and 1/8T, which implies the existence of inhomogeneity in the material. To review the applicability of the BMC method, the deterministic inhomogeneity owing to the extraction location and quenching rate was treated as random inhomogeneity. Although the lower-bound and upper-bound curves of the BMC covered more KJC values than those of the conventional MC, there was no significant relationship between the BMC analysis lines and the measured KJC values in the higher toughness distribution, and the BMC and MC provided almost the same T0 values. Therefore, the standard MC evaluation method is appropriate for this material, even though the standard MC has a narrow upper/lower-bound curve range from the RPV evaluation point of view. The material is not homogeneous in reality; such inhomogeneity reflects the effects of specimen location, heat treatment, and the whole manufacturing process. The conventional Master Curve is limited when applied to largely scattered fracture toughness data, such as those from the weld region.
Econophysics: Master curve for price-impact function
Lillo, Fabrizio; Farmer, J. Doyne; Mantegna, Rosario N.
2003-01-01
The price reaction to a single transaction depends on transaction volume, the identity of the stock, and possibly many other factors. Here we show that, by taking into account the differences in liquidity for stocks of different size classes of market capitalization, we can rescale both the average price shift and the transaction volume to obtain a uniform price-impact curve for all size classes of firms for four different years (1995-98). This single-curve collapse of the price-impact function suggests that fluctuations from the supply-and-demand equilibrium for many financial assets, differing in economic sectors of activity and market capitalization, are governed by the same statistical rule.
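The collapse procedure can be illustrated generically: each size class's impact curve is rescaled by class-specific characteristic scales on both axes. The choice of mean volume and mean impact as those scales below is a simplification assumed for illustration; the study derives its rescaling factors from liquidity differences across capitalization classes.

```python
import numpy as np

def collapse(classes):
    """Rescale each size class's (volume, price-impact) curve by its
    own characteristic scales (here simply the class means, a stand-in
    for liquidity-based factors). If all classes share one underlying
    functional form, the rescaled curves fall on a single master curve."""
    out = {}
    for key, (v, r) in classes.items():
        v, r = np.asarray(v, float), np.asarray(r, float)
        out[key] = (v / v.mean(), r / r.mean())
    return out
```

Two synthetic classes drawn from the same power law with different volume and impact scales collapse exactly under this rescaling, which is the qualitative content of the single-curve result.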
Semi-empirical master curve concept describing the rate capability of lithium insertion electrodes
Heubner, C.; Seeba, J.; Liebmann, T.; Nickol, A.; Börner, S.; Fritsch, M.; Nikolowski, K.; Wolter, M.; Schneider, M.; Michaelis, A.
2018-03-01
A simple semi-empirical master curve concept describing the rate capability of porous insertion electrodes for lithium-ion batteries is proposed. The model is based on the evaluation of the time constants of lithium diffusion in the liquid electrolyte and in the solid active material. This theoretical approach is successfully verified by comprehensive experimental investigations of the rate capability of a large number of porous insertion electrodes with various active materials and design parameters. It turns out that the rate capability of all investigated electrodes follows a simple master curve governed by the time constant of the rate-limiting process. We demonstrate that the master curve concept can be used to determine optimum design criteria meeting specific requirements in terms of maximum gravimetric capacity for a desired rate capability. The model further reveals practical limits of the electrode design, confirming the empirically well-known and inevitable tradeoff between energy and power density.
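The time-constant comparison underlying such a master curve can be sketched as follows. The functional choices (diffusion time τ = L²/D, cycling time of 3600 s per unit C-rate) are generic textbook scalings assumed for illustration, not the paper's exact fitted expressions.

```python
def time_constants(L_coating_m, D_el_eff, r_particle_m, D_solid):
    """Characteristic diffusion time constants (s): electrolyte
    transport across the coating thickness and solid-state diffusion
    in the active-material particles; tau = (length scale)^2 / D."""
    tau_electrolyte = L_coating_m ** 2 / D_el_eff
    tau_solid = r_particle_m ** 2 / D_solid
    return tau_electrolyte, tau_solid

def normalized_rate(c_rate, tau_limiting_s):
    """Dimensionless rate: limiting time constant compared with the
    available cycling time (3600 s / C-rate). Plotting capacity
    retention against this variable is what collapses the curves."""
    return c_rate * tau_limiting_s / 3600.0
```

An electrode whose limiting time constant equals the cycling time sits at a normalized rate of 1, the crossover region where capacity begins to fall off.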
Fracture toughness evaluation of steels through master curve approach using Charpy impact specimens
International Nuclear Information System (INIS)
Chatterjee, S.; Sriharsha, H.K.; Shah, Priti Kotak
2007-01-01
The master curve approach can be used to evaluate the fracture toughness of all steels that exhibit a transition from brittle to ductile fracture with increasing temperature, and to monitor the extent of embrittlement caused by metallurgical damage mechanisms. This paper details the procedure followed to evaluate the fracture toughness of a typical ferritic steel used as a pressure vessel material. The potential of the master curve approach to overcome the inherent limitations of fracture toughness estimation using the ASME Code reference toughness is also illustrated. (author)
Master curve characterization of the fracture toughness behavior in SA508 Gr.4N low alloy steels
Energy Technology Data Exchange (ETDEWEB)
Lee, Ki-Hyoung, E-mail: shirimp@kaist.ac.k [Department of Materials Science and Engineering, KAIST, Daejeon 305-701 (Korea, Republic of); Kim, Min-Chul; Lee, Bong-Sang [Nuclear Materials Research Division, KAERI, Daejeon 305-353 (Korea, Republic of); Wee, Dang-Moon [Department of Materials Science and Engineering, KAIST, Daejeon 305-701 (Korea, Republic of)
2010-08-15
The fracture toughness properties of the tempered martensitic SA508 Gr.4N Ni-Mo-Cr low alloy steel for reactor pressure vessels were investigated using the master curve concept. The results were compared to those of the bainitic SA508 Gr.3 Mn-Mo-Ni low alloy steel, a commercial RPV material. The fracture toughness tests were conducted in 3-point bending with pre-cracked Charpy V-notch (PCVN) specimens according to the ASTM E1921-09c standard method. The temperature dependence of the fracture toughness was steeper than predicted by the standard master curve, while the bainitic SA508 Gr.3 steel fitted the standard prediction well. In order to properly evaluate the fracture toughness of the Gr.4N steels, the exponential coefficient of the master curve equation was changed, and the modified curve was applied to the fracture toughness test results of model alloys with various chemical compositions. It was found that the modified curve provided a better description of the overall fracture toughness behavior and an adequate T0 determination for the tempered martensitic SA508 Gr.4N steels.
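The modification described above acts on the exponential coefficient of the Master Curve median equation, KJc(med) = 30 + 70·exp(C·(T − T0)), where C = 0.019 per °C in the standard. A minimal sketch of the curve and its single-temperature inversion for T0 (illustrative only, not the authors' fitting procedure):

```python
import math

def kjc_median(T, T0, C=0.019):
    """Median 1T fracture toughness (MPa*sqrt(m)) from the Master
    Curve; the standard exponential coefficient C = 0.019 /degC may
    be modified for steels whose transition is steeper than the
    standard curve, as reported for SA508 Gr.4N."""
    return 30.0 + 70.0 * math.exp(C * (T - T0))

def t0_from_median(T, K_med, C=0.019):
    """Invert the median equation for the reference temperature T0,
    given one (temperature, median toughness) pair."""
    return T - math.log((K_med - 30.0) / 70.0) / C
```

At T = T0 the median toughness is 100 MPa√m by construction, which is the defining condition for the reference temperature.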
Extension of the master sintering curve for constant heating rate modeling
McCoy, Tammy Michelle
The purpose of this work is to extend the functionality of the Master Sintering Curve (MSC) such that it can be used as a practical tool for predicting sintering schemes that combine a constant heating rate with an isothermal hold. Rather than just predicting a final density for the object of interest, the extended MSC can model a sintering run from start to finish. Because the Johnson model does not incorporate this capability, the work presented is an extension of what has already been shown in the literature to be a valuable resource in many sintering situations. A predicted sintering curve that incorporates a combination of a constant heating rate and an isothermal hold is more indicative of what is found in real-life sintering operations. This research offers the possibility of predicting the sintering schedule for a material, thereby providing advance information about the extent of sintering, the time schedule for sintering, and the sintering temperature with a high degree of accuracy and repeatability. The research conducted in this thesis focuses on the development of a working model for predicting the sintering schedules of several stabilized zirconia powders having the compositions YSZ (HSY8), 10Sc1CeSZ, 10Sc1YSZ, and 11ScSZ1A. The compositions of the four powders were first verified using X-ray diffraction (XRD), and the particle size and surface area were verified using a particle size analyzer and BET analysis, respectively. The sintering studies were conducted on powder compacts using a double push-rod dilatometer. Density measurements were obtained both geometrically and using the Archimedes method. Each of the four powders was pressed into ¼" diameter pellets using a manual press with no additives, such as a binder or lubricant. Using a double push-rod dilatometer, shrinkage data for the pellets were obtained over several different heating rates. The shrinkage data were then converted to reflect the change in relative
Use of precracked Charpy and smaller specimens to establish the master curve
International Nuclear Information System (INIS)
Sokolov, M.A.; McCabe, D.E.; Nanstad, R.K.; Davidov, Y.A.
1997-01-01
The current provisions used in the U.S. Code of Federal Regulations for the determination of the fracture toughness of reactor pressure vessel steels employ an assumption that there is a direct correlation between the KIc lower-bound toughness and the Charpy V-notch transition curve. Such correlations are subject to scatter from both approaches, which weakens the reliability of fracture mechanics-based analyses. In this study, precracked Charpy and smaller specimens are used in three-point static bend testing to develop fracture mechanics-based KJc values. The testing is performed under carefully controlled conditions such that the values can be used to predict the fracture toughness performance of large specimens. The concept of a universal transition curve (master curve) is applied. The data scatter that is characteristic of commercial-grade steels and their weldments is handled by Weibull statistical modeling. The master curve is developed to describe the median KJc fracture toughness for 1T-size compact specimens. Size effects are modeled using weakest-link theory and are studied for different specimen geometries. It is shown that precracked Charpy specimens, when tested within their confined validity limits, follow the weakest-link size-adjustment trend and predict the fracture toughness of larger specimens. Specimens smaller than Charpy size (5 mm thick) exhibit some disparities relative to the weakest-link size-adjustment prediction, suggesting that application of such an adjustment to very small specimens may have some limitations.
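The weakest-link size adjustment mentioned above has a standard form (as codified in ASTM E1921), which converts a toughness measured with crack-front length B_test to the equivalent value for length B_target. A minimal sketch assuming the standard threshold Kmin = 20 MPa√m:

```python
def size_adjust(K_Jc, B_test_mm, B_target_mm, K_min=20.0):
    """Weakest-link thickness adjustment of a measured K_Jc value:
    K(target) = K_min + (K_Jc - K_min) * (B_test/B_target)^(1/4).
    Longer crack fronts sample more potential cleavage initiators,
    so the adjusted toughness decreases with increasing size."""
    return K_min + (K_Jc - K_min) * (B_test_mm / B_target_mm) ** 0.25
```

For example, adjusting a Charpy-size measurement (10 mm crack front) to the 1T reference (25.4 mm) lowers the toughness toward, but never below, the 20 MPa√m threshold.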
International Nuclear Information System (INIS)
Server, William; Rosinski, Stan; Lott, Randy; Kim, Charles; Weakland, Dennis
2002-01-01
The Master Curve fracture toughness approach has been used in the USA for better defining the transition temperature fracture toughness of irradiated reactor pressure vessel (RPV) steels for end-of-life (EOL) and EOL extension (EOLE) time periods. The first application was for the Kewaunee plant, in which the life-limiting material was a circumferential weld metal. Fracture toughness testing of this weld metal corresponding to EOL and beyond EOLE was used to reassess the PTS screening value, RT_PTS, and to develop new operating pressure-temperature curves. The NRC has approved this application using a shift-based methodology and higher safety margins than those proposed by the utility and its contractors. Beaver Valley Unit 1, a First Energy nuclear plant, has performed similar fracture toughness testing, but none of the testing has been conducted at EOL or EOLE at this time. Therefore, extrapolation of the life-limiting plate data to higher fluences is necessary, and the projections will be checked in the next decade by Master Curve fracture toughness testing of all of the Beaver Valley Unit 1 beltline materials (three plates and three welds) at fluences near or greater than EOLE. A supplemental surveillance capsule has been installed in the sister plant, Beaver Valley Unit 2, which has the capability of achieving a higher lead factor while operating under essentially the same environment. The Beaver Valley Unit 1 evaluation has been submitted to the NRC. This paper reviews the shift-based approach taken for the Beaver Valley Unit 1 RPV and presents the use of the RT_T0 methodology (which evolved out of Master Curve testing and was endorsed through two ASME Code Cases). The applied margin accounts for uncertainties in the various material parameters. Discussion of a direct measurement of RT_T0 approach, as originally submitted for the Kewaunee case, is also presented
Directory of Open Access Journals (Sweden)
Carvalho Humberto M.
2015-12-01
The aim of this paper was to outline a multilevel modeling approach to fit individual angle-specific torque curves describing concentric knee extension and flexion isokinetic muscular actions in Master athletes. The potential of the analytical approach to examine between-individual differences across the angle-specific torque curves was illustrated, including between-individual variation due to gender differences at a higher level. Torques in concentric muscular actions of knee extension and knee flexion at 60°·s⁻¹ were considered within a range of motion between 5° and 85° (considering only torques that are "truly" isokinetic). Multilevel time series models with autoregressive covariance structures were superior fits compared with standard multilevel models for repeated measures to fit angle-specific torque curves. Third and fourth order polynomial models were the best fits to describe angle-specific torque curves of isokinetic knee flexion and extension concentric actions, respectively. The fixed exponents allow interpretations for initial acceleration, the angle at peak torque and the decrement of torque after peak torque. Also, the multilevel models were flexible enough to illustrate the influence of gender differences on torque throughout the range of motion and on the shape of the curves. The presented multilevel regression models may afford a general framework to examine angle-specific moment curves obtained by isokinetic dynamometry, and add to the understanding of the mechanisms of strength development, particularly the force-length relationship, both related to performance and injury prevention.
GLOBAL AND STRICT CURVE FITTING METHOD
Nakajima, Y.; Mori, S.
2004-01-01
To find a global and smooth curve fitting, the cubic B-spline method and gathering-line methods are investigated. When segmenting and recognizing the contour curve of a character shape, a global method is required. If we want to connect contour curves around a singular point like crossing points,
International Nuclear Information System (INIS)
EricksonKirk, M.
2004-01-01
The report provides a framework for a basis to significantly reduce the degree of conservatism in RPV safety assessments according to NRC Regulations 10CFR50.61 and 10CFR50, Appendix G. Procedures and rationale are proposed to address regulatory concerns with the application of Master Curve-based fracture toughness characterization methodologies. This report addresses the continued development of strategies for using direct characterization of the fracture toughness of a reactor pressure vessel (RPV) for integrity assessments. The development of a fracture toughness-based integrity assessment framework will allow for a more realistic assessment of vessel integrity that provides greater operating flexibility while maintaining appropriate safety margins
International Nuclear Information System (INIS)
Viehrig, Hans-Werner; Zurbuchen, Conrad; Kalkhof, Dietmar
2010-06-01
The paper presents results of a research project funded by the Swiss Federal Nuclear Inspectorate concerning the application of the Master Curve approach in nuclear reactor pressure vessel integrity assessment. The main focus is put on the applicability of pre-cracked 0.4T-SE(B) specimens with short cracks, the verification of the transferability of MC reference temperatures T_0 from 0.4T-thick specimens to larger specimens, ascertaining the influence of the specimen type and the test temperature on T_0, investigation of the applicability of specimens with electroerosive notches for fracture toughness testing, and the quantification of the effect of loading rate and specimen type on T_0. The test material is a forged ring of steel 22 NiMoCr 3-7 from the uncommissioned German pressurized water reactor Biblis C. SE(B) specimens with different overall sizes (specimen thickness B = 0.4T, 0.8T, 1.6T, 3T, fatigue pre-cracked to a/W = 0.5 and 20% side-grooved) have comparable T_0; T_0 varies within the 1σ scatter band. The testing of C(T) specimens results in higher T_0 compared to SE(B) specimens. It can be stated that, except for the lowest test temperature allowed by ASTM E1921-09a, the T_0 values evaluated with specimens tested at different test temperatures are consistent. Testing in the temperature range of T_0 ± 20 K is recommended because it gave the highest accuracy. Specimens with a/W = 0.3 and a/W = 0.5 crack length ratios yield comparable T_0. The T_0 of EDM-notched specimens lie 41 K to 54 K below the T_0 of fatigue pre-cracked specimens. A significant influence of the loading rate on the MC T_0 was observed. The HSK AN 425 test procedure is a suitable method to evaluate dynamic MC tests. The reference temperature T_0 is eligible to define a reference temperature RT_T0 for the ASME K_Ic reference curve as recommended in ASME Code Case N-629. An additional margin has to be defined for the specific type of transient to be considered in the RPV integrity assessment
Inferring the temperature dependence of Beremin cleavage model parameters from the Master Curve
International Nuclear Information System (INIS)
Cao Yupeng; Hui Hu; Wang Guozhen; Xuan Fuzhen
2011-01-01
Research highlights: → The temperature dependence of Beremin model parameters is inferred by the Master Curve approach. → The Weibull modulus decreases while the Weibull stress scale parameter increases with increasing temperature. → Estimation of Weibull stress parameters from small numbers of specimens leads to considerable uncertainty. - Abstract: The temperature dependence of Beremin model parameters in the ductile-to-brittle transition region was addressed by employing the Master Curve. Monte Carlo simulation was performed to produce a large number of 1T fracture toughness data randomly drawn from the scatter band at a temperature of interest and thus to determine the Beremin model parameters. In terms of the experimental data of a C-Mn steel (the 16MnR steel in China), results revealed that the Weibull modulus, m, decreases with temperature over the lower transition range and remains constant in the lower-to-mid transition region. The Weibull scale parameter, σ_u, increases with temperature over the temperature range investigated. A small sample may lead to considerable uncertainty in estimates of the Weibull stress parameters. However, no significant difference was observed for the average of the Weibull stress parameters from different sample sizes.
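The Monte Carlo step described here can be sketched by sampling 1T K_Jc values from the three-parameter Weibull scatter band assumed by the Master Curve (Weibull slope 4, threshold 20 MPa√m, scale K0 per ASTM E1921); the function name and the chosen temperature are illustrative:

```python
import math, random

def sample_kjc_1T(T, T0, n, seed=0):
    """Draw n 1T K_Jc values (MPa*sqrt(m)) from the three-parameter
    Weibull distribution assumed by the Master Curve:
    shape m = 4, threshold K_min = 20, scale K0 = 31 + 77*exp(0.019(T-T0))."""
    K0 = 31.0 + 77.0 * math.exp(0.019 * (T - T0))
    rng = random.Random(seed)
    # inverse-CDF sampling: P = 1 - exp(-((K-20)/(K0-20))^4)
    return [20.0 + (K0 - 20.0) * (-math.log(1.0 - rng.random())) ** 0.25
            for _ in range(n)]

draws = sample_kjc_1T(T=-60.0, T0=-60.0, n=10000)
median = sorted(draws)[len(draws) // 2]
print(round(median))  # close to the theoretical median of ~100 MPa*sqrt(m)
```

Fitting Beremin parameters to such synthetic samples of varying size is then a straightforward way to quantify the estimation uncertainty the authors report.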
Method of construction spatial transition curve
Directory of Open Access Journals (Sweden)
S.V. Didanov
2013-04-01
Purpose. The movement of rail transport (speed of rolling stock, traffic safety, etc.) largely depends on the quality of the track. A special role is played by the transition curve, which ensures a smooth transition from a linear to a circular section of road. The article deals with modeling a spatial transition curve based on a parabolic distribution of curvature and torsion; this continues research conducted by the authors on the spatial modeling of curved contours. Methodology. The spatial transition curve is constructed by numerical methods for solving nonlinear integral equations, where the initial data are the coordinates of the starting and ending points of the future curve, together with the inclination of the tangent and the deviation of the curve from the tangent plane at these points. The system is solved numerically for the unknown parameters of the law of change of torsion and the length of the transition curve, using the partial derivatives of the equations with respect to those parameters. Findings. The parametric equations of the spatial transition curve are obtained by finding the unknown coefficients of the parabolic distribution of curvature and torsion, as well as the spatial length of the transition curve. Originality. A method for constructing the spatial transition curve is devised, and software for the geometric modeling of spatial transition curves of railway track with specified deviations of the curve from the tangent plane is based on it. Practical value. The resulting curve can be applied in any sector of the economy where it is necessary to ensure a smooth transition from a linear to a circular section of a curved spatial bypass. Examples include transition curves in the construction of railway lines, roads, pipes, profiles, flat sections of the working blades of turbines and compressors, ships, planes, cars, etc.
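A planar (2D) analogue of the construction conveys the idea: prescribe a parabolic curvature law along the arc length and integrate heading and position numerically. This is only an illustrative sketch; the paper's method is fully spatial, includes torsion, and solves for the unknown coefficients rather than fixing them:

```python
import math

def transition_curve(L, kappa_end, n=1000):
    """Trace a planar transition curve whose curvature grows
    parabolically from 0 to kappa_end over arc length L
    (illustrative 2D analogue of the spatial construction)."""
    ds = L / n
    x, y, theta = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for i in range(n):
        s = (i + 0.5) * ds
        kappa = kappa_end * (s / L) ** 2      # parabolic curvature law
        theta += kappa * ds                   # heading = integral of curvature
        x += math.cos(theta) * ds             # position = integral of heading
        y += math.sin(theta) * ds
        pts.append((x, y))
    return pts

# straight entry at the origin, easing toward a radius of 300
pts = transition_curve(L=100.0, kappa_end=1.0 / 300.0)
print(round(pts[-1][0], 2), round(pts[-1][1], 2))
```

The curve leaves the x-axis tangentially and drifts gently off it, exactly the "smooth insertion" behavior the abstract describes.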
Applicability of the fracture toughness master curve to irradiated reactor pressure vessel steels
International Nuclear Information System (INIS)
Sokolov, M.A.; McCabe, D.E.; Alexander, D.J.; Nanstad, R.K.
1997-01-01
The current methodology for determination of fracture toughness of irradiated reactor pressure vessel (RPV) steels is based on the upward temperature shift of the American Society of Mechanical Engineers (ASME) K_Ic curve from either measurement of Charpy impact surveillance specimens or predictive calculations based on a database of Charpy impact tests from RPV surveillance programs. Currently, the provisions for determination of the upward temperature shift of the curve due to irradiation are based on the Charpy V-notch (CVN) 41-J shift, and the shape of the fracture toughness curve is assumed not to change as a consequence of irradiation. The ASME curve is a function of test temperature (T) normalized to a reference nil-ductility temperature, RT_NDT, namely, T-RT_NDT. That curve was constructed as the lower boundary to the available K_Ic database and, therefore, does not consider probability matters. Moreover, to achieve valid fracture toughness data in the temperature range where the rate of fracture toughness increase with temperature is rapidly increasing, very large test specimens were needed to maintain plane-strain, linear-elastic conditions. Such large specimens are impractical for fracture toughness testing of each RPV steel, but the evolution of elastic-plastic fracture mechanics has led to the use of relatively small test specimens to achieve acceptable cleavage fracture toughness measurements, K_Jc, in the transition temperature range. Accompanying this evolution is the employment of the Weibull distribution function to model the scatter of fracture toughness values in the transition range. Thus, a probabilistic-based bound for a given data population can be made. Further, it has been demonstrated by Wallin that the probabilistic-based estimates of median fracture toughness of ferritic steels tend to form transition curves of the same shape, the so-called ''master curve'', normalized to one common specimen size, namely the 1T [i.e., 1.0-in
Comparison of power curve monitoring methods
Directory of Open Access Journals (Sweden)
Cambron Philippe
2017-01-01
Performance monitoring is an important aspect of operating wind farms. This can be done through power curve monitoring (PCM) of wind turbines (WT). In past years, important work has been conducted on PCM: various methodologies have been proposed, each with interesting results. However, it is difficult to compare these methods because each was developed using its own data set. The objective of this work is to compare some of the proposed PCM methods using common data sets. The metric used to compare the PCM methods is the time needed to detect a change in the power curve. Two power curve models are covered to establish the effect the model type has on the monitoring outcomes. Each model was tested with two control charts. Other methodologies and metrics proposed in the literature for power curve monitoring, such as areas under the power curve and the use of statistical copulas, are also covered. Results demonstrate that model-based PCM methods are more reliable at detecting a performance change than the other methodologies, and that the effectiveness of the control chart depends on the type of shift observed.
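The control-chart idea can be sketched generically: feed residuals between measured and modelled power into an EWMA chart and flag the first out-of-control sample. The parameter choices (lam, L, sigma) below are illustrative, not those of the compared studies:

```python
def ewma_detect(residuals, lam=0.2, L=3.0, sigma=1.0):
    """EWMA control chart over power-curve residuals (measured minus
    modelled power, in units of the residual standard deviation).
    Returns the index of the first out-of-control sample, or None."""
    z = 0.0
    for i, r in enumerate(residuals):
        z = lam * r + (1.0 - lam) * z          # EWMA statistic
        # time-varying control limit for the EWMA statistic
        var = sigma ** 2 * lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * (i + 1)))
        if abs(z) > L * var ** 0.5:
            return i
    return None

# healthy turbine: residuals fluctuate around zero -> no alarm
print(ewma_detect([0.1, -0.2, 0.05, -0.1, 0.15] * 10))   # -> None
# degraded turbine: a sustained downward shift is flagged within a few samples
print(ewma_detect([0.0] * 20 + [-1.5] * 20))
```

The detection delay (samples between the shift onset at index 20 and the alarm) is exactly the comparison metric the paper uses.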
Parallel control method for a bilateral master-slave manipulator
International Nuclear Information System (INIS)
Miyazaki, Tomohiro; Hagihara, Shiro
1989-01-01
In this paper, a new control method for a bilateral master-slave manipulator is proposed. The proposed method yields stable and fast response of the control system, which is essential to obtain precise position control and sensitive force reflection control. In the conventional position-force control method, the control loops of the master and the slave arms are connected in series to construct a bilateral control loop. Therefore the total phase lag through the bilateral control loop becomes twice as much as that of one-arm control. Such phase lag makes the control system unstable and degrades control performance. To improve the stability and the control performance, we propose a 'parallel control method.' In the proposed method, the control loops of the master and the slave arms are connected in parallel so that the total phase lag is reduced to as much as that of one arm. The stability condition of the proposed method is studied and it is proved that the stability of this method can be guaranteed independent of the rigidity of the reaction surface and the position/force ratio between the master and the slave arms, while the stability of the conventional method depends on them. (author)
Directory of Open Access Journals (Sweden)
M. Galimberti
2018-03-01
This work presents high surface area sp2 carbon allotropes as important tools to design and prepare lightweight materials. Composites were prepared based on either carbon black (CB), carbon nanotubes (CNT), or hybrid CB/CNT filler systems, with either poly(1,4-cis-isoprene) or poly(styrene-co-butadiene) as the polymer matrix. A correlation was established between the specific interfacial area (i.a., i.e. the surface made available by the filler per unit volume of composite) and the initial modulus of the composite (G′γmin), determined through dynamic mechanical shear tests. Experimental points could be fitted with a common line, a sort of master curve, up to about 30.2 and 9.8 mass% as CB and CNT content, respectively. The equation of this master curve allowed the modulus and density of the composite to be correlated. Thanks to the master curve, composites with the same modulus and lower density could be designed by substituting part of the CB with a lower amount of the carbon allotrope with the larger surface area, CNT. This work establishes a quantitative correlation as a tool to design lightweight materials and paves the way for large-scale application of innovative sp2 carbon allotropes in polymer matrices.
Methodical approaches in the Norwegian Master Plan for Water Resources
International Nuclear Information System (INIS)
Bowitz, Einar
1997-01-01
The Norwegian Master Plan for Water Resources instructs the management not to consider applications for concession to develop hydroelectric projects in the so-called category II of the plan. These are the environmentally most controversial projects or the most expensive projects. This report discusses the methods used in this Master Plan to classify the projects. The question whether the assessments of the environmental disadvantages of hydropower development are reasonable is approached in two ways: (1) Compare the environmental costs embedded in the Plan with direct assessments, and (2) Discuss the appropriateness of the methodology used for environmental evaluations in the Plan. The report concludes that (1) the environmental costs that can be derived from the ranking in the Plan are significantly greater than those following from direct evaluations, (2) the differences are generally so great that one may ask whether the methods used in the Plan overestimate the real environmental costs, (3) it seems to have been difficult to make a unified assessment of the environmental disadvantages, (4) the Plan has considered the economic impact on agriculture and forestry very roughly and indirectly, which may have contributed to overestimated environmental costs of hydropower development. 20 refs., 6 figs., 7 tabs
International Nuclear Information System (INIS)
Lucon, E.
2005-11-01
The latest IAEA Co-ordinated Research Project (CRP-8) focuses on the application of the Master Curve approach to monitor the fracture toughness of reactor pressure vessels in nuclear power plants. Three main work areas have been identified: (a) constraint and geometry effects on Master Curve T_0 values; (b) loading rate effects up to impact conditions; (c) potential changes of Master Curve shape for highly embrittled materials. After the kick-off meeting in Vienna in October 2004, the first Research Coordination Meeting was held in May 2005, hosted by AEKI Budapest. The present document focuses on the participation and contribution of SCK-CEN to Topic Area no. 2 (Loading rate effects on the Master Curve - Impact Loading), for which E. Lucon acts as co-task leader. A Round-Robin exercise is planned for early 2006, consisting of 10 tests per participant on precracked Charpy-V specimens of JRQ, tested dynamically using an instrumented pendulum; the results will be analysed using the Master Curve procedure (ASTM E1921-05) and compared to data obtained at other loading rates (quasi-static and/or dynamic). Guidelines and detailed specifications have been produced and circulated after the meeting in Budapest. SCK-CEN has also produced data reporting sheets in EXCEL97 format, which will be used for reporting all fracture toughness test results (at quasi-static, dynamic or impact loading rates) performed in the framework of the CRP-8. (author)
International Nuclear Information System (INIS)
McCabe, D.E.; Sokolov, M.A.; Nanstad, R.K.
1997-01-01
The primary objective of the Heavy-Section Steel Irradiation (HSSI) Program Tenth Irradiation Series was to develop a fracture mechanics evaluation of weld metal WF-70, which was taken from the beltline and nozzle course girth weld joints of the Midland Reactor vessel. This material became available when Consumers Power Company of Midland, Michigan, decided to abort plans to operate their nuclear power plant. WF-70 is classified as a low upper-shelf steel primarily due to the Linde 80 flux that was used in the submerged-arc welding process. The master curve concept is introduced to model the transition range fracture toughness when the toughness is quantified in terms of K_Jc values. K_Jc is an elastic-plastic stress intensity factor calculated by conversion from J_c, i.e., the J-integral at the onset of cleavage instability
Development of MR compatible laparoscope robot using master-slave control method
International Nuclear Information System (INIS)
Toyoda, Kazutaka; Jaeheon, Chung; Murata, Masaharu; Odaira, Takeshi; Hashizume, Makoto; Ieiri, Satoshi
2011-01-01
Recently, MRI-guided robotic surgery has been studied. This surgery uses MRI, a surgical navigation system and a surgical robot system intraoperatively for the realization of safer and more assured surgeries. We have developed an MR-compatible laparoscope robot and a 4-DOF master manipulator (master) independently. In this research we therefore report the system integration of the master and the laparoscope robot. The degrees of freedom of the master and the laparoscope robot are the same (4 DOF), so that the orientation relation between the master and the laparoscope robot is one to one. The network communication between the master and the laparoscope robot uses UDP within the TCP/IP protocol suite for reduction of communication delay. In future work we will conduct experiments on the operability of the master-slave laparoscope robot system. (author)
International Nuclear Information System (INIS)
Lee, Bong Sang; Yang, Won Jon; Hong, Jun Hwa
2000-12-01
This report summarizes the test results obtained from the Korean contribution to the integrity assessment of the low-toughness Beaver Valley reactor vessel by characterizing the fracture toughness of Linde 1092 (No. 305414) weld metal. 10 PCVN specimens and 10 1T-CT specimens were tested in accordance with the ASTM E 1921-97 standard, 'Standard test method for determination of reference temperature, T_0, for ferritic steels in the transition range'. These results can also be useful for the assessment of the Linde 80 low-toughness welds of Kori-1
Energy Technology Data Exchange (ETDEWEB)
Lucon, E.; Scibetta, M.; Puzzolante, L.
2008-10-15
In the framework of the 2006 Convention, we investigated the applicability of fatigue-precracked miniature Charpy specimens of KLST type (MPCC - B = 3 mm, W = 4 mm and L = 27 mm) for impact toughness measurements, using the well-characterized JRQ RPV steel. In the ductile-to-brittle transition region, MPCC tests analyzed using the Master Curve approach and compared to data previously obtained from PCC specimens had shown a more ductile behavior and therefore unconservative results. In the investigation presented in this report, two additional RPV steels have been used to compare the performance of impact-tested MPCC and PCC specimens in the transition regime: the low-toughness JSPS steel and the high-toughness 20MnMoNi55 steel. The results obtained (excellent agreement for 20MnMoNi55 and considerable differences between T_0 values for JSPS) are contradictory and do not presently allow qualifying the MPCC specimens as a reliable alternative to PCC samples for impact toughness measurements.
MATHEMATICAL METHODS TO DETERMINE THE INTERSECTION CURVES OF THE CYLINDERS
Directory of Open Access Journals (Sweden)
POPA Carmen
2010-07-01
The aim of this paper is to establish the intersection curves between cylinders by using the Mathematica program. This is achieved by deriving the curve equations and introducing them into the Mathematica program. The paper considers three right cylinders and another inclined at 45 degrees. The intersection curves can also be obtained by using the classical methods of descriptive geometry.
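For the simplest configuration of two perpendicular right cylinders, the intersection curve has a well-known closed-form parametrization; a sketch (in pure Python rather than Mathematica, upper branch only):

```python
import math

def intersection_curve(r, R, n=8):
    """Points on the intersection of the vertical cylinder
    x^2 + y^2 = r^2 with the horizontal cylinder y^2 + z^2 = R^2
    (r <= R). Parametrize the small cylinder by angle t and solve
    for z; only the upper (z >= 0) branch is returned."""
    pts = []
    for k in range(n + 1):
        t = 2.0 * math.pi * k / n
        x, y = r * math.cos(t), r * math.sin(t)
        z = math.sqrt(R * R - y * y)
        pts.append((x, y, z))
    return pts

for p in intersection_curve(1.0, 2.0, n=4):
    print(tuple(round(c, 3) for c in p))
```

Every returned point satisfies both cylinder equations simultaneously, which is the defining property of the intersection curve.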
Studying the method of linearization of exponential calibration curves
International Nuclear Information System (INIS)
Bunzh, Z.A.
1989-01-01
The results of a study of the method for linearization of exponential calibration curves are given. The calibration technique and a comparison of the proposed method with piecewise-linear approximation and power series expansion are given
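The usual linearization of an exponential calibration curve y = a·e^(bx) takes logarithms, ln y = ln a + b·x, and applies ordinary least squares to (x, ln y); a minimal sketch (assuming this is the transformation the study refers to):

```python
import math

def fit_exponential(xs, ys):
    """Fit y = a * exp(b * x) by linearizing: ln y = ln a + b * x,
    then ordinary least squares on (x, ln y)."""
    n = len(xs)
    ls = [math.log(y) for y in ys]
    sx, sl = sum(xs), sum(ls)
    sxx = sum(x * x for x in xs)
    sxl = sum(x * l for x, l in zip(xs, ls))
    b = (n * sxl - sx * sl) / (n * sxx - sx * sx)
    a = math.exp((sl - b * sx) / n)
    return a, b

# data lying exactly on y = 2 * exp(0.5 * x) is recovered exactly
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
a, b = fit_exponential(xs, ys)
print(round(a, 6), round(b, 6))  # -> 2.0 0.5
```

Note that least squares on the log scale weights relative rather than absolute errors, which is one of the trade-offs such a study would compare against piecewise-linear approximation.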
CURVE LSFIT, Gamma Spectrometer Calibration by Interactive Fitting Method
International Nuclear Information System (INIS)
Olson, D.G.
1992-01-01
1 - Description of program or function: CURVE and LSFIT are interactive programs designed to obtain the best data fit to an arbitrary curve. CURVE finds the type of fitting routine which produces the best curve. The types of fitting routines available are linear regression, exponential, logarithmic, power, least squares polynomial, and spline. LSFIT produces a reliable calibration curve for gamma-ray spectrometry by using the uncertainty value associated with each data point. LSFIT is intended for use where an entire efficiency curve is to be made, starting at 30 keV and continuing to 1836 keV. It creates calibration curves using up to three least squares polynomial fits to produce the best curve for photon energies above 120 keV, and a spline function to combine these fitted points with a best fit for points below 120 keV. 2 - Method of solution: The quality of fit is tested by comparing the measured y-value to the y-value calculated from the fitted curve. The fractional difference between these two values is printed for the evaluation of the quality of the fit. 3 - Restrictions on the complexity of the problem: maxima of 2000 data points in the calibration curve output (LSFIT), 30 input data points, and 3 least squares polynomial fits (LSFIT). The least squares polynomial fit requires that the number of data points used exceed the degree of fit by at least two
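The quality-of-fit test described under "Method of solution" reduces to a one-liner; a sketch:

```python
def fractional_differences(ys_measured, ys_fitted):
    """Quality-of-fit metric from the program description: the
    fractional difference between each measured y-value and the
    y-value calculated from the fitted curve."""
    return [(ym - yf) / ym for ym, yf in zip(ys_measured, ys_fitted)]

# a fit that is 2% low at the first point and 2% high at the second
print(fractional_differences([100.0, 50.0], [98.0, 51.0]))  # -> [0.02, -0.02]
```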
Methods for predicting isochronous stress-strain curves
International Nuclear Information System (INIS)
Kiyoshige, Masanori; Shimizu, Shigeki; Satoh, Keisuke.
1976-01-01
Isochronous stress-strain curves show the relation between stress and total strain at a given temperature with time as a parameter; they are drawn up from creep test results at various stress levels at a fixed temperature. The concept of isochronous stress-strain curves was proposed by McVetty in the 1930s and has been used for the design of aero-engines. Recently, the high temperature characteristics of materials have been presented as isochronous stress-strain curves in the design guides for nuclear energy equipment and structures used in the high temperature creep region. It is prescribed that these curves be used as the criteria for determining design stress intensity or as data for analyzing the superposed effects of creep and fatigue. For the isochronous stress-strain curves used in the design of nuclear energy equipment with very long service lives, it is impractical to determine the curves directly from the results of long time creep tests; accordingly, a method of predicting long time stress-strain curves from short time creep test results must be established. The method proposed by the authors, which uses creep constitutive equations taking the first and second creep stages into account, and the method using the Larson-Miller parameter were studied, and both methods were found to be reliable for the prediction. (Kako, I.)
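An isochronous curve can be generated from any creep constitutive law by fixing the time and sweeping the stress. The sketch below uses an illustrative elastic + Norton-Bailey law with made-up constants, not the authors' actual constitutive equations:

```python
def isochronous_curve(t, stresses, E=170e3, A=1e-14, n=5.0, m=0.5):
    """Isochronous stress-strain points at fixed time t (hours) from
    an illustrative elastic + Norton-Bailey primary-creep law:
        eps(sigma, t) = sigma / E + A * sigma**n * t**m
    E and stresses in MPa; A, n, m are made-up illustrative constants."""
    return [(s, s / E + A * s ** n * t ** m) for s in stresses]

# stress-total strain pairs at t = 10^4 hours
for s, e in isochronous_curve(t=1e4, stresses=[50.0, 100.0, 150.0]):
    print(s, round(e, 5))
```

Repeating the sweep for several values of t yields the family of curves with time as the parameter, which is exactly what the design guides tabulate.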
Qualitative Comparison of Contraction-Based Curve Skeletonization Methods
Sobiecki, André; Yasan, Haluk C.; Jalba, Andrei C.; Telea, Alexandru C.
2013-01-01
In recent years, many new methods have been proposed for extracting curve skeletons of 3D shapes, using a mesh-contraction principle. However, it is still unclear how these methods perform with respect to each other, and with respect to earlier voxel-based skeletonization methods, from the viewpoint
Modelling with the master equation solution methods and applications in social and natural sciences
Haag, Günter
2017-01-01
This book presents the theory and practical applications of the Master equation approach, which provides a powerful general framework for model building in a variety of disciplines. The aim of the book is to not only highlight different mathematical solution methods, but also reveal their potential by means of practical examples. Part I of the book, which can be used as a toolbox, introduces selected statistical fundamentals and solution methods for the Master equation. In Part II and Part III, the Master equation approach is applied to important applications in the natural and social sciences. The case studies presented mainly hail from the social sciences, including urban and regional dynamics, population dynamics, dynamic decision theory, opinion formation and traffic dynamics; however, some applications from physics and chemistry are treated as well, underlining the interdisciplinary modelling potential of the Master equation approach. Drawing upon the author’s extensive teaching and research experience...
Construction of molecular potential energy curves by an optimization method
Wang, J.; Blake, A. J.; McCoy, D. G.; Torop, L.
1991-01-01
A technique for determining the potential energy curves of diatomic molecules from measurements of diffuse or continuum spectra is presented. It is based on a numerical procedure which minimizes the difference between the calculated spectra and the experimental measurements and can be used in cases where other techniques, such as the conventional RKR method, are not applicable. With the aid of suitable spectral data, the associated dipole electronic transition moments can be obtained simultaneously. The method is illustrated by modeling the "longest band" of molecular oxygen to extract the E ³Σ_u⁻ and B ³Σ_u⁻ potential curves in analytical form.
Using the QUAIT Model to Effectively Teach Research Methods Curriculum to Master's-Level Students
Hamilton, Nancy J.; Gitchel, Dent
2017-01-01
Purpose: To apply Slavin's model of effective instruction to teaching research methods to master's-level students. Methods: Barriers to the scientist-practitioner model (student research experience, confidence, and utility value pertaining to research methods as well as faculty research and pedagogical incompetencies) are discussed. Results: The…
A systematic and efficient method to compute multi-loop master integrals
Directory of Open Access Journals (Sweden)
Xiao Liu
2018-04-01
We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. It can thus be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than the only existing systematic method, sector decomposition. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
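The numerical core of such an approach, solving a system of ODEs starting from simple boundary values, can be illustrated with a classical Runge-Kutta integrator on a toy two-function system. The real method derives its differential equations from integration-by-parts identities among Feynman integrals; this sketch shows only the generic solving step:

```python
import math

def rk4_solve(f, y0, s0, s1, steps=1000):
    """Classical 4th-order Runge-Kutta for a system dy/ds = f(s, y),
    integrating from s0 to s1 starting at y0."""
    h = (s1 - s0) / steps
    s, y = s0, list(y0)
    for _ in range(steps):
        k1 = f(s, y)
        k2 = f(s + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(s + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(s + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        s += h
    return y

# toy coupled system with a known closed form for checking:
# I1' = I2, I2' = -I1 with I(0) = (1, 0)  ->  (cos s, -sin s)
val = rk4_solve(lambda s, y: [y[1], -y[0]], [1.0, 0.0], 0.0, math.pi)
print(val)  # close to [cos(pi), -sin(pi)] = [-1, 0]
```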
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in the social and behavioral sciences. However, typical growth curve models assume that the errors are normally distributed, although non-normal data may be even more common than normal data. To avoid possible statistical inference problems caused by blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss of efficiency in standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
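The effect of the error distribution can be illustrated with a deliberately simplified, non-Bayesian maximum-likelihood sketch (the paper itself uses MCMC in SAS; the linear growth model, the t(3) error family and the contamination scheme below are all illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, t as student_t

rng = np.random.default_rng(7)

# Linear growth curve y = b0 + b1*time + error, with a few gross outliers
# standing in for heavy-tailed (non-normal) measurement error.
time = np.tile(np.arange(5.0), 30)               # 30 "subjects", 5 occasions
y = 10.0 + 2.0 * time + rng.normal(0.0, 1.0, time.size)
y[rng.choice(time.size, 8, replace=False)] += 12.0

def nll_normal(p):
    b0, b1, log_s = p
    z = y - b0 - b1 * time
    return -norm.logpdf(z, scale=np.exp(log_s)).sum()

def nll_t(p):                                    # Student-t(3) error model
    b0, b1, log_s = p
    z = (y - b0 - b1 * time) / np.exp(log_s)
    return -student_t.logpdf(z, df=3).sum() + y.size * log_s

fit_n = minimize(nll_normal, [0.0, 0.0, 0.0])
fit_t = minimize(nll_t, [0.0, 0.0, 0.0])
print("normal-error fit (b0, b1):", fit_n.x[:2])
print("t(3)-error fit   (b0, b1):", fit_t.x[:2])
```

With contaminated data the heavy-tailed error model typically recovers the growth rate more accurately than the normal-error fit, which is the kind of misspecification effect the abstract discusses.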
Curve fitting methods for solar radiation data modeling
Energy Technology Data Exchange (ETDEWEB)
Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]
2014-10-24
This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error is measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods are used as a starting point for constructing a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
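A two-term Gaussian fit of the kind the abstract describes can be sketched with SciPy; the data below are synthetic (the UTP measurements are not reproduced here), and the RMSE and R2 statistics are the same goodness-of-fit measures the paper cites:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2(x, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian model (sum of two Gaussian bumps)."""
    return (a1 * np.exp(-((x - b1) / c1) ** 2)
            + a2 * np.exp(-((x - b2) / c2) ** 2))

# Synthetic hourly "solar radiation" profile -- illustrative, not the UTP data.
x = np.linspace(7, 19, 48)                       # daylight hours
rng = np.random.default_rng(0)
y = gauss2(x, 800, 13, 3, 200, 11, 1.5) + rng.normal(0, 20, x.size)

p0 = [700, 13, 3, 100, 11, 2]                    # rough initial guess
popt, _ = curve_fit(gauss2, x, y, p0=p0, maxfev=10000)
resid = y - gauss2(x, *popt)

rmse = np.sqrt(np.mean(resid ** 2))              # root mean square error
r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSE = {rmse:.1f}, R2 = {r2:.4f}")
```

The same loop can be repeated with a two-term sine model or other candidate fits, selecting the model with the lowest RMSE and highest R2.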
Directory of Open Access Journals (Sweden)
S. Musto
2017-06-01
In this paper, master curves are reported for the crosslinking of a diene rubber with a sulphur-based system in the presence of either nano- or nano-structured carbon allotropes: carbon nanotubes (CNT), a nanosized graphite with high surface area (HSAG) and carbon black (CB). Poly(1,4-cis-isoprene) from Hevea brasiliensis was the diene rubber, and crosslinking was performed at temperatures ranging from 151 to 180 °C, with carbon allotropes below and above their percolation threshold. The carbon allotropes were characterized by different aspect ratios, surface areas and pH. In the crosslinking reaction, however, they revealed common behaviour. In fact, the specific interfacial area could be used to correlate crosslinking parameters such as the induction time (ts1) and the activation energy (Ea) calculated by applying the autocatalytic model. A monotonic decrease of ts1 and increase of Ea were observed, with points lying on master curves regardless of the nature of the carbon allotropes. Remarkable differences were, however, observed in the structure of the crosslinking network: above the percolation threshold, a much larger crosslinking density was obtained in the presence of CNT, whereas composites based on HSAG became soluble in hydrocarbon solvent after the reaction with a thiol. The proposed explanation of these results is based on the reactivity of carbon allotropes with sulphur and sulphur-based compounds, demonstrated through the reaction of 1-dodecanethiol and sulphur with CNT and HSAG and with a model substrate such as anthracene.
THE CPA QUALIFICATION METHOD BASED ON THE GAUSSIAN CURVE FITTING
Directory of Open Access Journals (Sweden)
M.T. Adithia
2015-01-01
The Correlation Power Analysis (CPA) attack is an attack on cryptographic devices, especially smart cards. The results of the attack are correlation traces. Based on the correlation traces, an evaluation is done to observe whether significant peaks appear in the traces or not. The evaluation is done manually, by experts. If significant peaks appear, the smart card is not considered secure, since it is assumed that the secret key is revealed. We develop a method that objectively detects peaks and decides which peaks are significant. We conclude that using the Gaussian curve fitting method, the subjective qualification of peak significance can be made objective, so that better decisions can be taken by security experts. We also conclude that the Gaussian curve fitting method is able to show the influence of peak size, especially width and height, on the significance of a particular peak.
Arctic curves in path models from the tangent method
Di Francesco, Philippe; Lapa, Matthew F.
2018-04-01
Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.
Aerodynamic calculational methods for curved-blade Darrieus VAWT WECS
Templin, R. J.
1985-03-01
Calculation of aerodynamic performance and load distributions for curved-blade wind turbines is discussed. Double multiple stream tube theory is considered, along with the uncertainties that remain in developing adequate methods further. The lack of relevant airfoil data at high Reynolds numbers and high angles of attack, and doubts concerning the accuracy of models of dynamic stall, are underlined. Wind tunnel tests of blade airbrake configurations are summarized.
Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods
Directory of Open Access Journals (Sweden)
aboalhasan fathabadi
2017-02-01
Introduction: In order to implement watershed practices that decrease soil erosion effects, the output sediment of the watershed must be estimated. The sediment rating curve is the most conventional tool for estimating sediment. Owing to sampling errors and short records, there are uncertainties in estimating sediment using sediment rating curves. In this research, the bootstrap and the Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads from sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km2. In this study, uncertainty in suspended sediment rating curves was estimated at four stations, Motorkhane, Miyane Tonel Shomare 7, Stor and Glinak, constructed on the Ayghdamosh, Ghrangho, GhezelOzan and Shahrod rivers, respectively. Data were randomly divided into a training set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest coefficient of correlation (R2). In the GLUE methodology, different parameter sets were sampled randomly from the prior probability distribution. For each station, using the sampled parameter sets and the selected suspended sediment rating curve equation, suspended sediment concentration values were estimated several times (100000 to 400000 times). With respect to the likelihood function and a certain subjective threshold, parameter sets were divided into behavioral and non-behavioral sets. Finally, using the behavioral parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated. In the bootstrap methodology, observed suspended sediment and discharge vectors were resampled with replacement B (set to
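The bootstrap part of the analysis can be sketched roughly as follows; the data are synthetic, and the power-law rating equation C = a·Q^b fitted in log space is an assumption standing in for whichever best-fit equation each station selected:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic discharge Q (m3/s) and suspended sediment concentration C (mg/L);
# an illustrative stand-in for gauged records at the stations.
n = 120
Q = rng.lognormal(mean=2.0, sigma=0.8, size=n)
C = 5.0 * Q ** 1.3 * rng.lognormal(0.0, 0.3, size=n)   # C = a*Q^b with noise

# Rating curve log C = log a + b log Q, refitted on B bootstrap resamples.
B = 2000
idx = rng.integers(0, n, size=(B, n))
b_hat = np.empty(B)
loga_hat = np.empty(B)
for k in range(B):
    s = idx[k]
    b_hat[k], loga_hat[k] = np.polyfit(np.log(Q[s]), np.log(C[s]), 1)

# 95% bootstrap confidence interval for predicted C at a given discharge.
Q0 = 20.0
pred = np.exp(loga_hat) * Q0 ** b_hat
lo, hi = np.percentile(pred, [2.5, 97.5])
print(f"C(Q={Q0}) 95% CI: [{lo:.1f}, {hi:.1f}] mg/L")
```

The GLUE variant replaces the resampling of observations with random sampling of the rating-curve parameters themselves, retaining only "behavioral" parameter sets that pass a likelihood threshold.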
A preliminary study on method of saturated curve
International Nuclear Information System (INIS)
Cao Liguo; Chen Yan; Ao Qi; Li Huijuan
1987-01-01
Determining the absorption coefficient of a sample directly, with matrix effect correction, is an effective method. The absorption coefficient is calculated from the relation between the characteristic X-ray intensity and the thickness of the sample (the saturated curve). The method directly reflects the features of the sample and corrects the enhancement effect under certain conditions, and it is not the same as the usual one, in which the absorption coefficient of the sample is determined by measuring the absorption of X-rays penetrating the sample. The sensitivity factor KI0 is discussed. The idea of determining KI0 by experiment, and a quasi-absolute measurement of the absorption coefficient μ, are proposed. Experimental results with correction under different conditions are shown
A volume-based method for denoising on curved surfaces
Biddle, Harry; von Glehn, Ingrid; Macdonald, Colin B.; Marz, Thomas
2013-01-01
We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.
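The diffusion half of the algorithm can be illustrated in the flat 2D setting. This sketch implements only the Perona-Malik edge-preserving step on a plane; the closest point representation and the interpolation step that make the method surface-intrinsic are omitted:

```python
import numpy as np

def perona_malik_step(img, dt=0.1, kappa=0.1):
    """One explicit step of Perona-Malik diffusion,
    I += dt * sum_dirs g(d) * d, with conductivity g(s) = 1/(1+(s/kappa)^2).
    Flat 2D version; on a curved surface the same step would be applied in a
    small 3D volume around the surface, alternated with closest-point
    interpolation as in the paper."""
    p = np.pad(img, 1, mode='edge')      # Neumann boundary via edge padding
    dn = p[:-2, 1:-1] - img              # difference to north neighbour
    ds = p[2:, 1:-1] - img               # south
    de = p[1:-1, 2:] - img               # east
    dw = p[1:-1, :-2] - img              # west
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    return img + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

# Noisy step edge: diffusion should reduce noise while preserving the edge.
rng = np.random.default_rng(3)
img = np.zeros((64, 64)); img[:, 32:] = 1.0
noisy = img + 0.1 * rng.normal(size=img.shape)
out = noisy.copy()
for _ in range(50):
    out = perona_malik_step(out, dt=0.15, kappa=0.3)
print("noise std before/after:", noisy[:, :20].std(), out[:, :20].std())
```

Because the conductivity g falls off for large gradients, the unit jump across the edge is barely diffused while the small-amplitude noise is smoothed away.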
The method of covariant symbols in curved space-time
International Nuclear Information System (INIS)
Salcedo, L.L.
2007-01-01
Diagonal matrix elements of pseudodifferential operators are needed in order to compute effective Lagrangians and currents. For this purpose the method of symbols is often used, which however lacks manifest covariance. In this work the method of covariant symbols, introduced by Pletnev and Banin, is extended to curved space-time with arbitrary gauge and coordinate connections. For the Riemannian connection we compute the covariant symbols corresponding to external fields, the covariant derivative and the Laplacian, to fourth order in a covariant derivative expansion. This allows one to obtain the covariant symbol of general operators to the same order. The procedure is illustrated by computing the diagonal matrix element of a nontrivial operator to second order. Applications of the method are discussed. (orig.)
Measuring the surgical 'learning curve': methods, variables and competency.
Khan, Nuzhath; Abboudi, Hamid; Khan, Mohammed Shamim; Dasgupta, Prokar; Ahmed, Kamran
2014-03-01
To describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency. A review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases. Variables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies. Measuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined. © 2013 The Authors. BJU International © 2013 BJU International.
Semiclassical methods in curved spacetime and black hole thermodynamics
International Nuclear Information System (INIS)
Camblong, Horacio E.; Ordonez, Carlos R.
2005-01-01
Improved semiclassical techniques are developed and applied to a treatment of a real scalar field in a D-dimensional gravitational background. This analysis, leading to a derivation of the thermodynamics of black holes, is based on the simultaneous use of (i) a near-horizon description of the scalar field in terms of conformal quantum mechanics; (ii) a novel generalized WKB framework; and (iii) curved-spacetime phase-space methods. In addition, this improved semiclassical approach is shown to be asymptotically exact in the presence of hierarchical expansions of a near-horizon type. Most importantly, this analysis further supports the claim that the thermodynamics of black holes is induced by their near-horizon conformal invariance
International Nuclear Information System (INIS)
Odette, G.R.; Donahue, E.; Lucas, G.E.; Sheckherd, J.W.
1996-01-01
The influence of loading rate and constraint on the effective fracture toughness as a function of temperature [Ke(T)] of the fusion program heat of V-4Cr-4Ti was measured using subsized three-point bend specimens. The constitutive behavior was characterized as a function of temperature and strain rate using small tensile specimens. Data in the literature on this alloy were also analysed to determine the effect of irradiation on Ke(T) and on the energy-temperature (E-T) curves measured in subsized Charpy V-notch tests. It was found that V-4Cr-4Ti undergoes "normal" stress-controlled cleavage fracture below a temperature marking a sharp ductile-to-brittle transition. The transition temperature is increased by higher loading rates, irradiation hardening and triaxial constraint. Shifts in a reference transition temperature due to higher loading rates and irradiation can be reasonably predicted by a simple equivalent yield stress model. These results also suggest that size and geometry effects, which mediate constraint, can be modeled by combining local critical stressed-area σ*/A* fracture criteria with finite element method simulations of crack tip stress fields. The fundamental understanding reflected in these models will be needed to develop Ke(T) curves for a range of loading rates, irradiation conditions, structural size scales and geometries, relying (in large part) on small specimen tests. Indeed, it may be possible to develop a master Ke(T) curve-shift method to account for these variables. Such reliable and flexible failure assessment methods are critical to the design and safe operation of defect-tolerant vanadium structures
Hong Shen
2011-01-01
The concepts of curve profile, curve intercept, curve intercept density, curve profile area density, intersection density in the containing intersection (or intersection density based on the intersection reference), curve profile intersection density in surface (or curve intercept intersection density based on the intersection of the containing curve), and curve profile area density in surface (AS) were defined. AS expressed the amount of curve profile area of the Y phase in the unit containing surface area, S...
Statistical re-evaluation of the ASME KIC and KIR fracture toughness reference curves
International Nuclear Information System (INIS)
Wallin, K.
1999-01-01
Historically the ASME reference curves have been treated as representing absolute deterministic lower bound curves of fracture toughness. In reality, this is not the case. They represent only deterministic lower bound curves to a specific set of data, which represents a certain probability range. A recently developed statistical lower bound estimation method, called the 'master curve', has been proposed as a candidate for a new lower bound reference curve concept. From a regulatory point of view, the master curve is somewhat problematic in that it does not claim to be an absolute deterministic lower bound, but corresponds to a specific theoretical failure probability that can be chosen freely based on the application. In order to be able to substitute the old ASME reference curves with lower bound curves based on the master curve concept, the inherent statistical nature (and confidence level) of the ASME reference curves must be revealed. In order to estimate the true inherent level of safety represented by the reference curves, the original database was re-evaluated with statistical methods and compared to an analysis based on the master curve concept. The analysis reveals that the 5% lower bound master curve has the same inherent degree of safety as originally intended for the KIC reference curve. Similarly, the 1% lower bound master curve corresponds to the KIR reference curve. (orig.)
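The master curve lower bounds discussed here are commonly written in the form standardized in ASTM E1921; a minimal sketch of the median and the 5% and 1% quantile curves is given below (the T0 value is an illustrative assumption, not taken from the paper):

```python
import numpy as np

def kjc_median(T, T0):
    """Median (p = 0.5) Master Curve toughness, MPa*sqrt(m), 1T size."""
    return 30.0 + 70.0 * np.exp(0.019 * (T - T0))

def kjc_quantile(T, T0, p):
    """Toughness at cumulative cleavage failure probability p,
    from the Weibull form of the Master Curve (ASTM E1921)."""
    return 20.0 + (np.log(1.0 / (1.0 - p))) ** 0.25 * (
        11.0 + 77.0 * np.exp(0.019 * (T - T0)))

T0 = -60.0                       # illustrative reference temperature, deg C
T = np.linspace(-150.0, 0.0, 7)
k50 = kjc_quantile(T, T0, 0.50)
k05 = kjc_quantile(T, T0, 0.05)  # 5% lower bound (~KIC level per the abstract)
k01 = kjc_quantile(T, T0, 0.01)  # 1% lower bound (~KIR level per the abstract)
for Ti, a, b, c in zip(T, k50, k05, k01):
    print(f"T={Ti:6.1f}  K50={a:6.1f}  K5%={b:6.1f}  K1%={c:6.1f}")
```

At T = T0 the median toughness is 100 MPa·√m by construction, and the 1% curve always lies below the 5% curve, mirroring the KIR/KIC ordering in the abstract.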
Statistical re-evaluation of the ASME KIC and KIR fracture toughness reference curves
International Nuclear Information System (INIS)
Wallin, K.; Rintamaa, R.
1998-01-01
Historically the ASME reference curves have been treated as representing absolute deterministic lower bound curves of fracture toughness. In reality, this is not the case. They represent only deterministic lower bound curves to a specific set of data, which represents a certain probability range. A recently developed statistical lower bound estimation method, called the 'Master curve', has been proposed as a candidate for a new lower bound reference curve concept. From a regulatory point of view, the Master curve is somewhat problematic in that it does not claim to be an absolute deterministic lower bound, but corresponds to a specific theoretical failure probability that can be chosen freely based on the application. In order to be able to substitute the old ASME reference curves with lower bound curves based on the master curve concept, the inherent statistical nature (and confidence level) of the ASME reference curves must be revealed. In order to estimate the true inherent level of safety represented by the reference curves, the original database was re-evaluated with statistical methods and compared to an analysis based on the master curve concept. The analysis reveals that the 5% lower bound Master curve has the same inherent degree of safety as originally intended for the KIC reference curve. Similarly, the 1% lower bound Master curve corresponds to the KIR reference curve. (orig.)
Functional methods for arbitrary densities in curved spacetime
International Nuclear Information System (INIS)
Basler, M.
1993-01-01
This paper gives an introduction to the technique of functional differentiation and integration in curved spacetime, applied to examples from quantum field theory. Special attention is drawn to the choice of functional integral measure. Following a suggestion by Toms, fields are chosen as arbitrary scalar, spinorial or vectorial densities. The technique developed by Toms for a purely quadratic Lagrangian is extended to the calculation of the generating functional with external sources. Included are two examples of interacting theories, a self-interacting scalar field and a Yang-Mills theory. For these theories the complete set of Feynman graphs depending on the weight of the variables is derived. (orig.)
Comparison of two methods to determine fan performance curves using computational fluid dynamics
Onma, Patinya; Chantrasmi, Tonkid
2018-01-01
This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain the performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with an experimental setup in accordance with the AMCA fan performance testing standard.
Rapid and convenient method for preparing masters for microcontact printing with 1-12 μm features
International Nuclear Information System (INIS)
Zilch, Lloyd W.; Husseini, Ghaleb A.; Lua, Y.-Y.; Lee, Michael V.; Gertsch, Kevin R.; Cannon, Bennion R.; Perry, Robert M.; Sevy, Eric T.; Asplund, Matthew C.; Woolley, Adam T.; Linford, Matthew R.
2004-01-01
Mechanical scribing can be employed to create surfaces with recessed features. Through replica molding, elastomeric copies of these scribed surfaces are created that function as stamps for microcontact printing. It is shown that this new method for creating masters for microcontact printing can be performed with a computer numerical control (CNC) milling machine, making the method particularly straightforward and accessible to a large technical community, since there is no need to work in a particle-free environment. Thus no clean room or other specialized equipment is required, as is commonly needed to prepare masters. Time-of-flight secondary ion mass spectrometry confirms surface patterning by this method. Finally, it is shown that the feature size in the scribed master can be controlled by varying the force on the tip during scribing
Directory of Open Access Journals (Sweden)
Sylvie Troncale
MOTIVATION: Reverse phase protein array (RPPA) is a powerful dot-blot technology that allows studying protein expression levels as well as post-translational modifications in a large number of samples simultaneously. Yet, correct interpretation of RPPA data has remained a major challenge for its broad-scale application and its translation into clinical research. Satisfactory quantification tools are available to assess a relative protein expression level from a serial dilution curve. However, appropriate tools allowing the normalization of the data for external sources of variation are currently missing. RESULTS: Here we propose a new method, called NormaCurve, that allows simultaneous quantification and normalization of RPPA data. For this, we modified the quantification method SuperCurve in order to include normalization for (i) background fluorescence, (ii) variation in the total amount of spotted protein and (iii) spatial bias on the arrays. Using a spike-in design with a purified protein, we test the capacity of different models to properly estimate normalized relative expression levels. The best performing model, NormaCurve, takes into account a negative control array without primary antibody, an array stained with a total protein stain, and spatial covariates. We show that this normalization is reproducible, and we discuss the number of serial dilutions and the number of replicates that are required to obtain robust data. We thus provide a ready-to-use method for reliable and reproducible normalization of RPPA data, which should facilitate the interpretation and the development of this promising technology. AVAILABILITY: The raw data, the scripts and the normacurve package are available at the following web site: http://microarrays.curie.fr.
A Novel Method for Detecting and Computing Univolatility Curves in Ternary Mixtures
DEFF Research Database (Denmark)
Shcherbakov, Nataliya; Rodriguez-Donis, Ivonne; Abildskov, Jens
2017-01-01
Residue curve maps (RCMs) and univolatility curves are crucial tools for analysis and design of distillation processes. Even in the case of ternary mixtures, the topology of these maps is highly non-trivial. We propose a novel method allowing detection and computation of univolatility curves...... of the generalized univolatility and unidistribution curves in the three dimensional composition – temperature state space lead to a simple and efficient algorithm of computation of the univolatility curves. Two peculiar ternary systems, namely diethylamine – chloroform – methanol and hexane – benzene...
Energy Technology Data Exchange (ETDEWEB)
Etim, E; Basili, C [Rome Univ. (Italy). Ist. di Matematica
1978-08-21
The lagrangian in the path integral solution of the master equation of a stationary Markov process is derived by application of the Ehrenfest-type theorem of quantum mechanics and the Cauchy method of finding inverse functions. Applied to the non-linear Fokker-Planck equation the authors reproduce the result obtained by integrating over Fourier series coefficients and by other methods.
Gordon, Stephen P.; Oliver, John
2015-01-01
The purpose of this study was to determine the value that graduate students place on different types of instructional methods used by professors in educational leadership preparation programs, and to determine if master's and doctoral students place different values on different instructional methods. The participants included 87 graduate…
Directory of Open Access Journals (Sweden)
Wenting Luo
2016-04-01
A pavement horizontal curve is designed to serve as a transition between straight segments, and its presence may cause a series of driving-related safety issues for motorists. Since traditional methods for curve geometry investigation are recognized as time consuming, labor intensive, and inaccurate, this study attempts to develop a method that can automatically conduct horizontal curve identification and measurement at the network level. The digital highway data vehicle (DHDV) was utilized for data collection, in which the three Euler angles, driving speed, and acceleration of the survey vehicle were measured with an inertial measurement unit (IMU). The 3D profiling data used for cross slope calibration were obtained with PaveVision3D Ultra technology at 1 mm resolution. In this study, curve identification was based on the variation of the heading angle, and the curve radius was calculated with the kinematic method, the geometry method, and the lateral acceleration method. In order to verify the accuracy of the three methods, an analysis of variance (ANOVA) test was applied using the control variable of curve radius measured by field test. Based on the measured curve radius, a curve safety analysis model was used to predict the crash rates and safe driving speeds at horizontal curves. Finally, a case study on a 4.35 km road segment demonstrated that the proposed method could efficiently conduct network-level analysis.
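The three radius-estimation methods named in the abstract can be sketched as follows; these are the textbook kinematic, lateral-acceleration and chord/middle-ordinate relations, which may differ in detail from the authors' implementations:

```python
import math

def radius_kinematic(speed_mps, heading_rate_rad_s):
    """Kinematic method: R = v / (d(heading)/dt), heading rate from the IMU."""
    return speed_mps / heading_rate_rad_s

def radius_lateral_accel(speed_mps, lateral_accel_mps2):
    """Lateral-acceleration method: a_lat = v^2 / R  =>  R = v^2 / a_lat."""
    return speed_mps ** 2 / lateral_accel_mps2

def radius_geometry(chord_m, middle_ordinate_m):
    """Geometry method from a chord c and middle ordinate m:
    R = c^2 / (8 m) + m / 2."""
    return chord_m ** 2 / (8.0 * middle_ordinate_m) + middle_ordinate_m / 2.0

# A vehicle at 20 m/s turning at 0.1 rad/s feels a_lat = v*w = 2 m/s^2,
# so both dynamic estimates give the same 200 m radius.
v, w = 20.0, 0.1
print(radius_kinematic(v, w), radius_lateral_accel(v, v * w))
```

For a consistency check, the chord/middle-ordinate formula inverts exactly: a 200 m circle cut by a 40 m chord has middle ordinate m = 200 − √(200² − 20²), and feeding (40, m) back into `radius_geometry` recovers 200 m.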
IPR CURVE CALCULATING FOR A WELL PRODUCING BY INTERMITTENT GAS-LIFT METHOD
Directory of Open Access Journals (Sweden)
Zoran Mršić
2009-12-01
The Master's degree thesis (Mršić Z., 2009) shows the detailed procedure of calculating the inflow performance curve for intermittent gas lift, based entirely on data measured at the surface. This article explains the detailed approach of the mentioned research and the essence of the results and observations acquired during the study. To evaluate the proposed method of calculating the average bottom hole flowing pressure (BHFP) as the key parameter of the inflow performance calculation, downhole pressure surveys were conducted in three producing wells at the Šandrovac and Bilogora oil fields: Šandrovac-75α, Bilogora-52 and Šandrovac-34. The absolute difference between measured and calculated values of average BHFP for the first two wells was Δp=0,64 bar and Δp=0,06 bar, while the calculated relative error was εr=0,072 and εr=0,0038, respectively. Due to a gas-lift valve malfunction in well Šandrovac-34, noticed during the downhole pressure survey, the value of calculated BHFP cannot be considered correct for comparison with the measured value. Based on the measured data, information was obtained about the actual values of certain intermittent gas lift parameters that are usually assumed from experience or calculated using empirical equations given in the literature. A significant difference was noticed for the parameter t2, the length of the minimum pressure period, for which the measured values were in the range of 10,74 min up to 16 min, while the empirical equation gives values in the range of 1,23 min up to 1,75 min. Based on the measured values of the above-mentioned parameter, a new empirical equation has been established (the paper is published in Croatian).
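The abstract does not give the thesis' actual inflow correlation; as an illustration only, an IPR curve can be sketched from a single rate/BHFP test point using the classic Vogel relationship for solution-gas-drive wells (all numbers below are invented):

```python
def vogel_rate(pwf, p_res, q_max):
    """Vogel's inflow performance relationship:
    q/q_max = 1 - 0.2*(pwf/p_res) - 0.8*(pwf/p_res)**2."""
    r = pwf / p_res
    return q_max * (1.0 - 0.2 * r - 0.8 * r * r)

# Calibrate q_max from one measured rate at a known flowing pressure
# (illustrative values, not from the Sandrovac/Bilogora wells).
p_res = 80.0                      # average reservoir pressure, bar
pwf_test, q_test = 40.0, 12.0     # one test point: BHFP and rate
r = pwf_test / p_res
q_max = q_test / (1.0 - 0.2 * r - 0.8 * r * r)

for pwf in range(0, 81, 20):      # tabulate the IPR curve
    print(f"pwf={pwf:3d} bar  q={vogel_rate(pwf, p_res, q_max):6.2f} m3/d")
```

By construction the curve passes through the test point, gives the maximum rate at zero BHFP, and gives zero rate when the flowing pressure equals the reservoir pressure.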
International Nuclear Information System (INIS)
Xu, Yanbin; Pei, Yang; Dong, Feng
2016-01-01
The L-curve method is a popular regularization parameter choice method for the ill-posed inverse problem of electrical resistance tomography (ERT). However, the method cannot always determine a proper parameter for all situations. An investigation into the situations where the L-curve method failed shows that a new corner point appears on the L-curve, and the parameter corresponding to this new corner can yield a satisfactory reconstructed solution. Thus an extended L-curve method, which determines the regularization parameter associated with either the global corner or the new corner, is proposed. Furthermore, two strategies are provided to determine the new corner: one is based on the second-order differential of the L-curve, and the other is based on the curvature of the L-curve. The proposed method is examined by both numerical simulations and experimental tests, and the results indicate that the extended method can handle the parameter choice problem even in cases where the typical L-curve method fails. Finally, in order to reduce the running time of the method, the extended method is combined with a projection method based on the Krylov subspace, which boosts the extended L-curve method. The results verify that the speed of the extended L-curve method is distinctly improved. The proposed method extends the application of the L-curve in the field of choosing the regularization parameter with an acceptable running time and can also be used in other kinds of tomography. (paper)
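The curvature-based corner strategy can be illustrated on a generic Tikhonov problem: sweep the regularization parameter, trace the L-curve in log-log coordinates, and pick the point of maximum curvature. This is a minimal sketch of the classical single-corner criterion, not the paper's extended method; the toy problem and the finite-difference curvature (taken with respect to the index of the log-spaced parameter grid) are assumptions:

```python
import numpy as np

def l_curve_corner(A, b, lambdas):
    """Pick the Tikhonov parameter at the point of maximum curvature of the
    L-curve (log residual norm vs. log solution norm)."""
    rho, eta = [], []
    for lam in lambdas:
        # Tikhonov solution: x = argmin ||Ax - b||^2 + lam^2 ||x||^2
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)
        rho.append(np.log(np.linalg.norm(A @ x - b)))
        eta.append(np.log(np.linalg.norm(x)))
    rho, eta = np.array(rho), np.array(eta)
    # Curvature of the parametric curve (rho, eta) by finite differences.
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    return lambdas[np.argmax(np.abs(kappa))]

# Mildly ill-conditioned toy problem (polynomial design matrix plus noise):
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 20), 8, increasing=True)
b = A @ np.ones(8) + 1e-3 * rng.standard_normal(20)
lam = l_curve_corner(A, b, np.logspace(-8, 1, 60))
```

The paper's extension adds detection of a second corner for the failure cases; the sweep-and-curvature skeleton stays the same.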
Statistical re-evaluation of the ASME K_IC and K_IR fracture toughness reference curves
Energy Technology Data Exchange (ETDEWEB)
Wallin, K.; Rintamaa, R. [Valtion Teknillinen Tutkimuskeskus, Espoo (Finland)
1998-11-01
Historically the ASME reference curves have been treated as representing absolute deterministic lower bound curves of fracture toughness. In reality, this is not the case. They represent only deterministic lower bound curves to a specific set of data, which represent a certain probability range. A recently developed statistical lower bound estimation method called the 'Master curve' has been proposed as a candidate for a new lower bound reference curve concept. From a regulatory point of view, the Master curve is somewhat problematic in that it does not claim to be an absolute deterministic lower bound, but corresponds to a specific theoretical failure probability that can be chosen freely based on application. In order to be able to substitute the old ASME reference curves with lower bound curves based on the Master curve concept, the inherent statistical nature (and confidence level) of the ASME reference curves must be revealed. In order to estimate the true inherent level of safety represented by the reference curves, the original database was re-evaluated with statistical methods and compared to an analysis based on the Master curve concept. The analysis reveals that the 5% lower bound Master curve has the same inherent degree of safety as originally intended for the K_IC reference curve. Similarly, the 1% lower bound Master curve corresponds to the K_IR reference curve. (orig.)
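The lower-bound idea can be made concrete with the standard three-parameter Weibull form of the Master Curve used in ASTM E1921 (for a 25 mm, 1T, reference specimen); the 5% and 1% curves discussed above are obtained simply by choosing the failure probability. A sketch, assuming the standard constants 20, 11 and 77 MPa·sqrt(m) and the 0.019/°C exponent:

```python
import math

def master_curve_kjc(temp_c: float, t0_c: float, pf: float) -> float:
    """Master Curve fracture toughness K_Jc (MPa*sqrt(m)) at cumulative
    failure probability pf for a 1T specimen, standard ASTM E1921 form:
    K_Jc = 20 + [ln(1/(1-pf))]^(1/4) * (11 + 77*exp(0.019*(T - T0)))."""
    return 20.0 + (math.log(1.0 / (1.0 - pf))) ** 0.25 * (
        11.0 + 77.0 * math.exp(0.019 * (temp_c - t0_c)))

# Median (pf = 0.5) and 5 % lower-bound toughness at T = T0:
k_med = master_curve_kjc(0.0, 0.0, 0.5)   # ~100 MPa*sqrt(m)
k_05 = master_curve_kjc(0.0, 0.0, 0.05)   # well below the median
```

The re-evaluation above amounts to asking which pf quantile of this family tracks the historical K_IC and K_IR data lower bounds.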
Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon
2015-01-01
The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned. It was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of measurement artifact. We tested this hypothesis in stems of olive (Olea europaea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that followed from the open vessel artifact hypothesis: shorter stems, with more open vessels, would be more vulnerable than longer stems; standard centrifuge-based curves would be more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. Experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered. Centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared to the dehydration curve. This divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves of samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artifacts and standardizing vulnerability curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.
Forcella, Davide; He, Yang-Hui; Zaffaroni, Alberto
2008-01-01
Supersymmetric gauge theories have an important but perhaps under-appreciated notion of a master space, which controls the full moduli space. For world-volume theories of D-branes probing a Calabi-Yau singularity X the situation is particularly illustrative. In the case of one physical brane, the master space F is the space of F-terms and a particular quotient thereof is X itself. We study various properties of F which encode such physical quantities as Higgsing, BPS spectra, hidden global symmetries, etc. Using the plethystic program we also discuss what happens at higher number N of branes. This letter is a summary and some extensions of the key points of a longer companion paper arXiv:0801.1585.
On the Shadow Simplex Method for Curved Polyhedra
D.N. Dadush (Daniel); N. Hähnle
2015-01-01
We study the simplex method over polyhedra satisfying certain “discrete curvature” lower bounds, which enforce that the boundary always meets vertices at sharp angles. Motivated by linear programs with totally unimodular constraint matrices, recent results of Bonifas et al
On the Shadow Simplex Method for curved polyhedra
D.N. Dadush (Daniel); N. Hähnle
2016-01-01
We study the simplex method over polyhedra satisfying certain “discrete curvature” lower bounds, which enforce that the boundary always meets vertices at sharp angles. Motivated by linear programs with totally unimodular constraint matrices, recent results of Bonifas et al. (Discrete
Core supervision methods and future improvements of the CORE MASTER/PRESTO system at KKB
International Nuclear Information System (INIS)
Lundberg, S.; Wenisch, J.; Teeffelen, W.V.
2000-01-01
Kernkraftwerk Brunsbuettel (KKB) is a KWU 806 MWe BWR located on the lower river Elbe, in Germany. The reactor has been in operation since 1976 and is now operating in its 14th cycle. The core supervision at KKB is performed with the ABB CORE MASTER system. This system mainly contains the 3-D simulator PRESTO supplied by Studsvik Scandpower A/S. The core supervision is performed by periodic PRESTO 3-D evaluations of the reactor operation state. The power distribution calculated by PRESTO is adapted with the ABB UPDAT program using the on-line LPRM readings. The thermal margins are based on this adapted power distribution. Related to core supervision, the function of the PRESTO/UPDAT codes is presented. The UPDAT method is working well and is capable of reproducing the true core power distribution. The quality of the 3-D calculation is, however, an important ingredient of the quality of the adapted power distribution. The adaptation method as such is also important for this quality. The data quality of this system during steady state and off-rated states (reactor manoeuvres) is discussed by presenting comparisons between PRESTO and UPDAT thermal margin utilisation from Cycle 13. Recently analysed asymmetries in the UPDAT-evaluated MCPR values are also presented and discussed. Improvements in the core supervision, such as the introduction of advanced modern nodal methods (PRESTO-2), are presented, and an alternative core supervision philosophy is discussed. An ongoing project with the goal to update the data and result presentation interface (GUI) is also presented. (authors)
Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al
International Nuclear Information System (INIS)
Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang
2015-01-01
A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an efficient alternative method for calculating the melting curves of materials.
Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.
Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang
2015-09-21
A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an efficient alternative method for calculating the melting curves of materials.
Comparative Study on Two Melting Simulation Methods: Melting Curve of Gold
International Nuclear Information System (INIS)
Liu Zhong-Li; Li Rui; Sun Jun-Sheng; Zhang Xiu-Lu; Cai Ling-Cang
2016-01-01
Melting simulation methods are of crucial importance to determining the melting temperature of materials efficiently. A high-efficiency melting simulation method saves much simulation time and computational resources. To compare the efficiency of our newly developed shock melting (SM) method with that of the well-established two-phase (TP) method, we calculate the high-pressure melting curve of Au using the two methods based on optimally selected interatomic potentials. Although we only use 640 atoms to determine the melting temperature of Au in the SM method, the resulting melting curve accords very well with the results from the TP method using many more atoms. This shows that a much smaller system size in the SM method can still achieve a fully converged melting curve compared with the TP method, implying the robustness and efficiency of the SM method. (paper)
US Agency for International Development — OPS Master is a management tool and database for integrated financial planning and portfolio management in USAID Missions. Using OPS Master, the three principal...
Statistical inference methods for two crossing survival curves: a comparison of methods.
Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng
2015-01-01
A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests.
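For reference, the baseline log-rank statistic whose power the paper examines can be computed from scratch; the adaptive Neyman smooth tests and the two-stage procedure recommended above are more involved and are not reproduced here. A minimal sketch with illustrative data:

```python
import numpy as np

def logrank_stat(time, event, group):
    """Two-sample log-rank chi-square statistic. `group` is 0/1 membership;
    `event` is 1 for an observed death, 0 for censoring."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    group = np.asarray(group, int)
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()                                 # total at risk at t
        n1 = (at_risk & (group == 1)).sum()               # group-1 at risk
        d = ((time == t) & (event == 1)).sum()            # deaths at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n                      # observed - expected
        if n > 1:
            var += d * (n1 / n) * (1.0 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var

# Illustrative proportional-hazards data: group 1 fails uniformly earlier,
# so the statistic exceeds the 5 % chi-square cutoff (3.84, 1 d.o.f.):
time = [1, 2, 3, 4, 5, 6, 7, 8]
event = [1] * 8
group = [1, 1, 1, 1, 0, 0, 0, 0]
chi2 = logrank_stat(time, event, group)
```

When the two survival curves cross, early and late differences cancel inside the observed-minus-expected sum, which is exactly why the paper finds the log-rank test underpowered there.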
Saaranen, Terhi; Vaajoki, Anne; Kellomäki, Marjaana; Hyvärinen, Marja-Leena
2015-02-01
This article describes the experiences of master students of nursing science in learning interpersonal communication competence through the simulation method. The exercises reflected challenging interactive situations in the field of health care. Few studies have been published on using the simulation method in the communication education of teachers, managers, and experts in this field. The aim of this study is to produce information which can be utilised in developing the simulation method to promote the interpersonal communication competence of master-level students of health sciences. This study used the qualitative, descriptive research method. At the Department of Nursing Science, the University of Eastern Finland, students majoring in nursing science specialise in nursing leadership and management, preventive nursing science, or nurse teacher education. Students from all three specialties taking the Challenging Situations in Speech Communication course participated (n=47). Essays on meaningful learning experiences, collected using the critical incident technique, underwent content analysis. Planning of teaching, carrying out different stages of the simulation exercise, participant roles, and students' personal factors were central to learning interpersonal communication competence. Simulation is a valuable method in developing the interpersonal communication competence of students of health sciences at the master's level. The methods used in the simulation teaching of emergency care are not necessarily applicable as such to communication education. The role of the teacher is essential in supervising students' learning in simulation exercises. In the future, it is important to construct questions that help students to reflect specifically on communication. Copyright © 2014 Elsevier Ltd. All rights reserved.
Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method
Verachtert, R.; Lombaert, G.; Degrande, G.
2018-03-01
This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method in determining the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found to be in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
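The geometric core of the circle fit method, recovering a circle from points in the Nyquist plane, can be sketched with a simple algebraic (Kåsa) least-squares fit; the modal-parameter extraction from the angular sweep described above is not reproduced here, and the synthetic data are an assumption:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: solves
    x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c) and converts to
    centre (cx, cy) and radius r."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r

# Noisy points on a circle of radius 2 centred at (1, -1), standing in for
# one Rayleigh mode traced in the Nyquist plot of the fk-spectrum:
rng = np.random.default_rng(1)
th = np.linspace(0, 2 * np.pi, 50)
x = 1 + 2 * np.cos(th) + 0.01 * rng.standard_normal(50)
y = -1 + 2 * np.sin(th) + 0.01 * rng.standard_normal(50)
cx, cy, r = fit_circle(x, y)
```

In the paper the fitted circle's central-angle sweep versus wavenumber, not the radius alone, carries the dispersion and attenuation information.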
Application of numerical methods in spectroscopy : fitting of the curve of thermoluminescence
International Nuclear Information System (INIS)
RANDRIAMANALINA, S.
1999-01-01
The method of nonlinear least squares is one of the mathematical tools widely employed in spectroscopy; it is used for the determination of the parameters of a model. On the other hand, the spline function is among the fitting functions that introduce the smallest error; it is used for the calculation of the area under the curve. We present an application of these methods, with the details of the corresponding algorithms, to the fitting of the thermoluminescence curve. [fr]
Surface charge method for molecular surfaces with curved areal elements I. Spherical triangles
Yu, Yi-Kuo
2018-03-01
Parametrizing a curved surface with flat triangles in electrostatics problems creates a diverging electric field. One way to avoid this is to have curved areal elements. However, charge density integration over curved patches appears difficult. This paper, dealing with spherical triangles, is the first in a series aiming to solve this problem. Here, we lay the groundwork for employing curved patches in applying the surface charge method to electrostatics. We show analytically how one may control the accuracy by expanding in powers of the arc length (multiplied by the curvature). To accommodate curved areal elements that are not extremely small, we have provided enough details to include the higher order corrections that are needed for better accuracy when slightly larger surface elements are used.
A graph-based method for fitting planar B-spline curves with intersections
Directory of Open Access Journals (Sweden)
Pengbo Bo
2016-01-01
Full Text Available The problem of fitting B-spline curves to planar point clouds is studied in this paper. A novel method is proposed to deal with the most challenging case, where multiple intersecting curves or curves with self-intersection are necessary for shape representation. A method based on Delaunay triangulation of the data points is developed to identify connected components, which is also capable of removing outliers. A skeleton representation is utilized to represent the topological structure, which is further used to create a weighted graph for deciding the merging of curve segments. Unlike existing approaches, which utilize local shape information near intersections, our method considers the shape characteristics of curve segments in a larger scope and is thus capable of giving more satisfactory results. By fitting each group of data points with a B-spline curve, we solve the problems of curve structure reconstruction from point clouds, as well as the vectorization of simple line-drawing images by reconstructing the drawn lines.
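The final fitting step, representing one group of ordered points by a parametric B-spline, can be sketched with SciPy's smoothing-spline routines (the segmentation, skeleton, and merging stages described above are not reproduced; the sample curve and smoothing factor below are assumptions):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Points sampled along one segmented branch, ordered along the curve:
t = np.linspace(0, np.pi, 60)
x, y = t, np.sin(t)

# Parametric cubic B-spline fit with a small smoothing factor:
tck, u = splprep([x, y], s=1e-4)
xs, ys = splev(np.linspace(0, 1, 200), tck)  # dense evaluation of the fit
```

In the paper's pipeline each connected group of points produced by the graph-based merging would be fitted this way, one B-spline per group.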
Comparison of McMaster and FECPAKG2 methods for counting nematode eggs in the faeces of alpacas.
Rashid, Mohammed H; Stevenson, Mark A; Waenga, Shea; Mirams, Greg; Campbell, Angus J D; Vaughan, Jane L; Jabbar, Abdul
2018-05-02
This study aimed to compare the FECPAK G2 and the McMaster techniques for counting gastrointestinal nematode eggs in the faeces of alpacas using two flotation solutions (saturated sodium chloride and sucrose solutions). Faecal egg counts from the two techniques were compared using Lin's concordance correlation coefficient and Bland-Altman statistics. Results showed moderate to good agreement between the two methods, with better agreement achieved when saturated sugar is used as a flotation fluid, particularly when faecal egg counts are less than 1000 eggs per gram of faeces. To the best of our knowledge this is the first study to assess agreement of measurements between the McMaster and FECPAK G2 methods for estimating faecal eggs in South American camelids.
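The agreement analysis mentioned above can be sketched: an eggs-per-gram value is the raw chamber count times the technique's multiplication factor (e.g. x25 for a McMaster detection limit of 25), and agreement between two methods is usually summarised by Lin's concordance correlation coefficient, which penalises constant offsets that ordinary correlation ignores. A minimal sketch with illustrative counts:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient: agreement between two
    measurement methods (equals 1.0 only for perfect agreement, unlike
    Pearson's r, which ignores constant bias)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = ((x - x.mean()) * (y - y.mean())).mean()
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# 12 eggs counted in a McMaster chamber at detection limit 25:
opg = 12 * 25   # eggs per gram

a = [100.0, 250.0, 400.0, 800.0, 1000.0]           # method 1 (OPG/EPG)
ccc_same = lins_ccc(a, a)                          # perfect agreement
ccc_offset = lins_ccc(a, [v + 200.0 for v in a])   # constant bias lowers CCC
```

Pearson's r would be 1.0 for both pairs above; the CCC drops under the 200-unit bias, which is what makes it the right summary for method comparison.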
Makowska, Joanna; Bagiñska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyñski, Lech; Scheraga, Harold A
2006-03-09
We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric titration curves of two models of peptides: Ac-K5-NHMe in 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH), and dimethyl sulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the electrostatically driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, molecular dynamics (MD) simulations are run with the AMBER force field and the generalized Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method. The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach and the titration curve in water calculated using the MD-based approach have smooth shapes characteristic of the titration of weak multifunctional acids with small differences
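For orientation, the simplest building block behind a computed titration curve is the Henderson-Hasselbalch relation for a single weak group; the EDMC and constant-pH MD machinery above goes far beyond this (coupled sites, conformational sampling), so the sketch below is only a baseline illustration with an assumed pKa:

```python
import math

def titration_ph(pka: float, frac_titrated: float) -> float:
    """Henderson-Hasselbalch pH of a single weak acid group titrated to a
    given fraction (0 < frac < 1) of its equivalence point."""
    return pka + math.log10(frac_titrated / (1.0 - frac_titrated))

# At half-equivalence the pH equals the pKa (4.76 assumed, acetic acid):
ph_half = titration_ph(4.76, 0.5)
ph_late = titration_ph(4.76, 0.9)   # later in the titration, higher pH
```

Multifunctional peptides such as XAO deviate from this single-site shape, which is exactly what the measured jumps in the DMSO curve reflect.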
A method of non-destructive quantitative analysis of the ancient ceramics with curved surface
International Nuclear Information System (INIS)
He Wenquan; Xiong Yingfei
2002-01-01
Generally the surface of the sample should be smooth and flat in XRF analysis, but ancient ceramics can hardly match this condition. Two simple approaches, within the fundamental parameter method and the empirical correction method of XRF analysis, are put forward, so that the analysis of small samples or samples with curved surfaces can be easily completed.
Light Curve Periodic Variability of Cyg X-1 using Jurkevich Method ...
Indian Academy of Sciences (India)
The Jurkevich method is a useful method to explore periodicity in unevenly sampled observational data. In this work, we applied the method to the light curve of Cyg X-1 from 1996 to 2012, and found that there is an interesting period of 370 days, which appears in both the low/hard and high/soft states.
Light Curve Periodic Variability of Cyg X-1 using Jurkevich Method
Indian Academy of Sciences (India)
The Jurkevich method is a useful method to explore periodicity in unevenly sampled observational data. In this work, we applied the method to the light curve of Cyg X-1 from 1996 to 2012, and found that there is an interesting period of 370 days, which appears in both the low/hard and high/soft states. That period may be ...
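The Jurkevich statistic itself is simple: fold the light curve at each trial period into m phase bins and sum the within-bin sums of squares; a true period produces a sharp minimum. A sketch on synthetic unevenly sampled data with a built-in 370-day period (the bin count and sampling are assumptions, not taken from the Cyg X-1 analysis):

```python
import numpy as np

def jurkevich_vm2(t, y, period, m=10):
    """Jurkevich statistic: fold the light curve at `period` into m phase
    bins and sum the within-bin sums of squared deviations. A deep minimum
    over trial periods marks a candidate periodicity."""
    phase = (np.asarray(t, float) % period) / period
    bins = np.minimum((phase * m).astype(int), m - 1)
    y = np.asarray(y, float)
    v2 = 0.0
    for k in range(m):
        yk = y[bins == k]
        if yk.size > 1:
            v2 += ((yk - yk.mean()) ** 2).sum()
    return v2

# Unevenly sampled sinusoid with a 370-day period over ~14 years:
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 5000, 400))
y = np.sin(2 * np.pi * t / 370.0)
trials = np.arange(300.0, 450.0, 1.0)
best = trials[np.argmin([jurkevich_vm2(t, y, p) for p in trials])]
```

Because the method needs no evenly spaced sampling, it suits long-baseline X-ray monitoring data of the kind analysed above.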
Wind turbine performance: Methods and criteria for reliability of measured power curves
Energy Technology Data Exchange (ETDEWEB)
Griffin, D.A. [Advanced Wind Turbines Inc., Seattle, WA (United States)
1996-12-31
In order to evaluate the performance of prototype turbines, and to quantify incremental changes in performance through field testing, Advanced Wind Turbines (AWT) has been developing methods and requirements for power curve measurement. In this paper, field test data is used to illustrate several issues and trends which have resulted from this work. Averaging and binning processes, data hours per wind-speed bin, wind turbulence levels, and anemometry methods are all shown to have significant impacts on the resulting power curves. Criteria are given by which the AWT power curves show a high degree of repeatability, and these criteria are compared and contrasted with current published standards for power curve measurement. 6 refs., 5 figs., 5 tabs.
International Nuclear Information System (INIS)
Purohit, D.N.; Goswami, A.K.; Chauhan, R.S.; Ressalan, S.
1999-01-01
A spectrophotometric method for the determination of stability constants making use of Job's curves has been developed. Using this method, the stability constants of Zn(II), Cd(II), Mo(VI) and V(V) complexes of hydroxytriazenes have been determined. For the sake of comparison, values of the stability constants were also determined using Harvey and Manning's method. The values of the stability constants obtained by the two methods agree well. This new method has been named Purohit's method. (author)
Experimental Method for Plotting S-N Curve with a Small Number of Specimens
Directory of Open Access Journals (Sweden)
Strzelecki Przemysław
2016-12-01
Full Text Available The study presents two approaches to plotting an S-N curve based on experimental results. The first approach is commonly used by researchers and presented in detail in many studies and standard documents. The model uses a linear regression whose parameters are estimated by the least squares method. A staircase method is used for the unlimited fatigue life criterion. The second model combines the S-N curve defined as a straight line with the random occurrence of the fatigue limit. A maximum likelihood method is used to estimate the S-N curve parameters. Fatigue data for C45+C steel obtained in the torsional bending test were used to compare the estimated S-N curves. For pseudo-random numbers generated using the Mersenne Twister algorithm, the S-N curve estimated from 10 experimental results with the second model predicts the fatigue life within a scatter band of factor 3. The result gives a good approximation, especially considering the time required to plot the S-N curve.
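The first model above, a linear S-N regression fitted by least squares, amounts to a straight-line fit in log-log coordinates (the Basquin form). A minimal sketch on synthetic data (the coefficients are illustrative, not from the C45+C tests):

```python
import numpy as np

def fit_sn_curve(stress, cycles):
    """Least-squares fit of the linear S-N model
    log10(N) = a + b * log10(S)  (Basquin form)."""
    logS, logN = np.log10(stress), np.log10(cycles)
    b, a = np.polyfit(logS, logN, 1)   # slope b, intercept a
    return a, b

def predict_life(a, b, stress):
    """Predicted cycles to failure at a given stress amplitude."""
    return 10.0 ** (a + b * np.log10(stress))

# Synthetic data following log10(N) = 20 - 6*log10(S) exactly:
S = np.array([300.0, 350.0, 400.0, 450.0, 500.0])
N = 10.0 ** (20.0 - 6.0 * np.log10(S))
a, b = fit_sn_curve(S, N)
```

The paper's second model keeps this straight line but adds a maximum-likelihood treatment of runouts and the random fatigue limit, which is what allows it to work with as few as 10 specimens.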
A new method for measuring coronary artery diameters with CT spatial profile curves
International Nuclear Information System (INIS)
Shimamoto, Ryoichi; Suzuki, Jun-ichi; Yamazaki, Tadashi; Tsuji, Taeko; Ohmoto, Yuki; Morita, Toshihiro; Yamashita, Hiroshi; Honye, Junko; Nagai, Ryozo; Akahane, Masaaki; Ohtomo, Kuni
2007-01-01
Purpose: Coronary artery vascular edge recognition on computed tomography (CT) angiograms is influenced by window parameters. A noninvasive method for vascular edge recognition independent of window setting with use of multi-detector row CT was contrived and its feasibility and accuracy were estimated by intravascular ultrasound (IVUS). Methods: Multi-detector row CT was performed to obtain 29 CT spatial profile curves by setting a line cursor across short-axis coronary angiograms processed by multi-planar reconstruction. IVUS was also performed to determine the reference coronary diameter. IVUS diameter was fitted horizontally between two points on the upward and downward slopes of the profile curves and Hounsfield number was measured at the fitted level to test seven candidate indexes for definition of intravascular coronary diameter. The best index from the curves should show the best agreement with IVUS diameter. Results: Of the seven candidates the agreement was the best (agreement: 16 ± 11%) when the two ratios of Hounsfield number at the level of IVUS diameter over that at the peak on the profile curves were used with water and with fat as the background tissue. These edge definitions were achieved by cutting the horizontal distance by the curves at the level defined by the ratio of 0.41 for water background and 0.57 for fat background. Conclusions: Vascular edge recognition of the coronary artery with CT spatial profile curves was feasible and the contrived method could define the coronary diameter with reasonable agreement
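The contrived edge definition can be sketched generically: cut the CT spatial profile curve at a fixed fraction of its peak Hounsfield value (0.41 for water background, 0.57 for fat, per the paper) and take the width between the two crossings. The Gaussian-shaped synthetic profile below is an assumption for illustration:

```python
import numpy as np

def diameter_from_profile(positions, hu, ratio):
    """Cut the CT spatial profile curve at `ratio` * peak value and return
    the width between the two crossings (with linear interpolation)."""
    positions = np.asarray(positions, float)
    hu = np.asarray(hu, float)
    level = ratio * hu.max()
    idx = np.where(hu >= level)[0]
    i0, i1 = idx[0], idx[-1]

    def cross(i_lo, i_hi):
        # linear interpolation of the crossing position between two samples
        x0, x1, y0, y1 = positions[i_lo], positions[i_hi], hu[i_lo], hu[i_hi]
        return x0 + (level - y0) * (x1 - x0) / (y1 - y0)

    left = positions[i0] if i0 == 0 else cross(i0 - 1, i0)
    right = positions[i1] if i1 == len(hu) - 1 else cross(i1, i1 + 1)
    return right - left

# Gaussian-like profile (peak 400 HU, sigma 1.2 mm), water-background cut 0.41:
x = np.linspace(-5.0, 5.0, 1001)
hu = 400.0 * np.exp(-x**2 / (2 * 1.2**2))
d = diameter_from_profile(x, hu, 0.41)
```

The ratio, rather than an absolute HU threshold, is what makes the definition independent of the window setting, which is the paper's point.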
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO 4 ·5H 2 O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, which can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
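The curve-fitting stage can be sketched with a two-Gaussian model of overlapping lines; the paper's error compensation then feeds the fitting residual back and refits, a refinement this clean synthetic example does not need after the first pass (the line positions, widths, and amplitudes below are illustrative, not the actual Cu-Fe lines):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def two_gauss(x, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian peaks, the model for one overlapping pair."""
    return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

# Synthetic overlapping lines near 324 and 325 nm:
wl = np.linspace(321.0, 327.0, 600)
spectrum = two_gauss(wl, 1.0, 324.0, 0.30, 0.6, 325.0, 0.35)

p0 = [0.8, 323.8, 0.4, 0.5, 325.2, 0.4]      # rough initial guess
popt, _ = curve_fit(two_gauss, wl, spectrum, p0=p0)
residual = spectrum - two_gauss(wl, *popt)   # fed back and refit in the paper
```

On real spectra the residual carries the unmodelled structure, so adding it back and refitting is what drives the reported 18-33% residual reduction.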
Bortoluzzi, C; Paras, K L; Applegate, T J; Verocai, G G
2018-04-30
Monitoring Eimeria shedding has become more important due to the recent restrictions on the use of antibiotics within the poultry industry. Therefore, there is a need for the implementation of more precise and accurate quantitative diagnostic techniques. The objective of this study was to compare the precision and accuracy of the Mini-FLOTAC and the McMaster techniques for quantitative diagnosis of Eimeria maxima oocysts in poultry. Twelve pools of excreta samples of broiler chickens experimentally infected with E. maxima were analyzed for the comparison between the Mini-FLOTAC and McMaster techniques, using the detection limits (dl) of 23 and 25, respectively. Additionally, six excreta samples were used to compare the precision of different dl (5, 10, 23, and 46) using the Mini-FLOTAC technique. For precision comparisons, five technical replicates of each sample (five replicate slides on one excreta slurry) were read for calculating the mean oocysts per gram of excreta (OPG) count, standard deviation (SD), coefficient of variation (CV), and precision of both aforementioned comparisons. To compare accuracy between the methods (McMaster, and Mini-FLOTAC dl 5 and 23), excreta from uninfected chickens was spiked with 100, 500, 1,000, 5,000, or 10,000 OPG; additional samples remained unspiked (negative control). For each spiking level, three samples were read in triplicate, totaling nine reads per spiking level per technique. Data were transformed using log10 to obtain normality and homogeneity of variances. A significant correlation (R = 0.74; p = 0.006) was observed between the mean OPG of the McMaster dl 25 and the Mini-FLOTAC dl 23. Mean OPG, CV, SD, and precision were not statistically different between the McMaster dl 25 and Mini-FLOTAC dl 23. Despite the absence of statistical difference (p > 0.05), Mini-FLOTAC dl 5 showed a numerically lower SD and CV than Mini-FLOTAC dl 23. The Pearson correlation coefficient revealed significant and positive
DEFF Research Database (Denmark)
Tatu, Aditya Jayant
This thesis deals with two unrelated issues, restricting curve evolution to subspaces and computing image patches in the equivalence class of Histogram of Gradient orientation based features using nonlinear projection methods. Curve evolution is a well known method used in various applications like...... tracking interfaces, active contour based segmentation methods and others. It can also be used to study shape spaces, as deforming a shape can be thought of as evolving its boundary curve. During curve evolution a curve traces out a path in the infinite dimensional space of curves. Due to application...... specific requirements like shape priors or a given data model, and due to limitations of the computer, the computed curve evolution forms a path in some finite dimensional subspace of the space of curves. We give methods to restrict the curve evolution to a finite dimensional linear or implicitly defined...
Residual stress measurement by X-ray diffraction with the Gaussian curve method and its automation
International Nuclear Information System (INIS)
Kurita, M.
1987-01-01
X-ray technique with the Gaussian curve method and its automation are described for rapid and nondestructive measurement of residual stress. A simplified equation for measuring the stress by the Gaussian curve method is derived, because in its previous form this method required laborious calculation. The residual stress can be measured in a few minutes, depending on the material, using an automated X-ray stress analyzer with a microcomputer developed in the laboratory. The residual stress distribution of a partially induction-hardened and tempered (at 280 °C) steel bar was measured with the Gaussian curve method. A sharp residual tensile stress peak of 182 MPa appeared just outside the hardened region, where fatigue failure is liable to occur.
Application of Glow Curve Deconvolution Method to Evaluate Low Dose TLD LiF
International Nuclear Information System (INIS)
Kurnia, E; Oetami, H R; Mutiah
1996-01-01
Thermoluminescence dosimeters (TLDs), especially of LiF:Mg,Ti material, are among the most practical personal dosimeters known to date. Dose measurement under 100 μGy using a TLD reader is very difficult at a high precision level. Software analysis is therefore used to improve the precision of the TLD reader. The objective of the research is to compare three TL glow-curve analysis methods for doses in the range of 5 to 250 μGy. The first method is manual analysis: dose information is obtained from the area under the glow curve between preselected temperature limits, and the background signal is estimated by a second readout following the first. The second method is deconvolution: the glow curve is separated mathematically into its component peaks, dose information is obtained from the area of peak 5, and the background signal is eliminated computationally. The third method is also deconvolution, but the dose is represented by the sum of the areas of peaks 3, 4 and 5. The results show that the sum of peaks 3, 4 and 5 improves reproducibility six times over manual analysis for a dose of 20 μGy, and reduces the MMD to 10 μGy, rather than 60 μGy with manual analysis or 20 μGy with the peak 5 area method. In linearity, the sum of peaks 3, 4 and 5 yields an exactly linear dose-response curve over the entire dose range
METHOD TO DEVELOP THE DOUBLE-CURVED SURFACE OF THE ROOF
Directory of Open Access Journals (Sweden)
JURCO Ancuta Nadia
2017-05-01
Full Text Available This work presents two methods for determining the development of a double-curved surface. The aim of this paper is a comparative study of methods for determining the sheet metal requirements for a complex roof cover shape. The first part of the paper presents the basic sketches and information about the roof shape, along with some well-known buildings that have a complex roof shape. The second part of the paper shows two methods for determining the development of the spherical roof. The graphical method is the first method used for developing the spherical shape; it uses a poly-cylindrical approximation to develop the double-curved surface. The second method is accomplished using dedicated CAD software.
The nuclear fluctuation width and the method of maxima in excitation curves
International Nuclear Information System (INIS)
Burjan, V.
1988-01-01
The method of counting maxima of excitation curves in the region of nuclear cross-section fluctuations is extended to the more realistic case of maxima defined as a sequence of five points, instead of the simpler and commonly used sequence of three points of an excitation curve. The dependence of the coefficient b^(5)(κ), relating the number of five-point maxima to the mean level width Γ of the compound nucleus, on the relative distance κ of excitation curve points is calculated. The influence of a random background on the coefficient b^(5)(κ) is discussed, and a comparison with the properties of the three-point coefficient b^(3)(κ) is made, also in connection with the contribution of the random background. The calculated values of b^(5)(κ) are well reproduced by data obtained from the analysis of artificial excitation curves. (orig.)
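As a concrete illustration of the counting itself (not of the b^(5)(κ) calculus), a five-point maximum can be taken as a point that rises monotonically over its two left neighbours and falls monotonically over its two right neighbours; this particular definition is an assumption for the sketch.

```python
def count_five_point_maxima(y):
    """Count maxima defined by five consecutive points of an excitation
    curve: a monotonic rise over two points, a peak, then a monotonic
    fall over two points."""
    return sum(
        1
        for i in range(2, len(y) - 2)
        if y[i - 2] < y[i - 1] < y[i] > y[i + 1] > y[i + 2]
    )

def count_three_point_maxima(y):
    """The simpler, commonly used three-point criterion."""
    return sum(1 for i in range(1, len(y) - 1) if y[i - 1] < y[i] > y[i + 1])
```

The five-point criterion is stricter, so it counts fewer maxima on a noisy curve; this is why the coefficient relating the count to the mean level width differs between the five-point and three-point cases.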
A method to enhance the curve negotiation performance of HTS Maglev
Che, T.; Gou, Y. F.; Deng, Z. G.; Zheng, J.; Zheng, B. T.; Chen, P.
2015-09-01
High temperature superconducting (HTS) Maglev has attracted more and more attention due to its self-stable characteristic, and much work has been done toward its practical application, but research on curve negotiation has not been systematic or comprehensive. In this paper, we focus on the change in the lateral displacements of the Maglev vehicle when going through curves at different velocities, and study the change in the electromagnetic forces through experimental methods. Experimental results show that setting an appropriate initial eccentric distance (ED), defined as the distance between the center of the bulk unit and the center of the permanent magnet guideway (PMG) when cooling the bulks, is favorable for the Maglev system's curve negotiation. This work provides some suggestions for improving the curve negotiation performance of the HTS Maglev system.
Creep curve modeling of hastelloy-X alloy by using the theta projection method
International Nuclear Information System (INIS)
Woo Gon, Kim; Woo-Seog, Ryu; Jong-Hwa, Chang; Song-Nan, Yin
2007-01-01
To model the creep curves of Hastelloy-X alloy, which is being considered as a candidate material for VHTR (Very High Temperature gas-cooled Reactor) components, full creep curves were obtained by constant-load creep tests at different stress levels at 950 °C. Using the experimental creep data, the creep curves were modeled by applying the Theta projection method. A number of nonlinear least-squares fitting (NLSF) computations were carried out to establish the suitability of the four Theta parameters. The results showed that the Θ1 and Θ2 parameters could not be optimized well, with large errors in fitting the full creep curves, whereas the Θ3 and Θ4 parameters were optimized without such errors. To find a suitable cutoff-strain criterion, the NLSF analysis was therefore performed with various cutoff strains for all the creep curves. The optimum cutoff strain for defining the four Theta parameters accurately was found to be 3%. At the 3% cutoff strain, the predicted curves coincided well with the experimental ones. The variation of the four Theta parameters as functions of stress showed good linearity, and the creep curves were modeled well at the low stress levels. The predicted minimum creep rate showed good agreement with the experimental data. Also, for design usage of Hastelloy-X alloy, the plot of log stress versus log time to 1% strain was predicted, and the creep-rate curves with time and a cutoff strain at 950 °C were constructed numerically for a wide range of stresses by using the Theta projection method. (authors)
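The Theta projection method represents a full creep curve as ε(t) = Θ1(1 − e^(−Θ2·t)) + Θ3(e^(Θ4·t) − 1), a decaying primary term plus an accelerating tertiary term, and the minimum creep rate follows in closed form from setting the derivative of the creep rate to zero. A minimal sketch of the model; the parameter values below are illustrative, not the fitted Hastelloy-X values from the study.

```python
import math

def theta_strain(t, th1, th2, th3, th4):
    """Creep strain: primary (decaying) term plus tertiary (accelerating) term."""
    return th1 * (1.0 - math.exp(-th2 * t)) + th3 * (math.exp(th4 * t) - 1.0)

def theta_rate(t, th1, th2, th3, th4):
    """Creep rate, the time derivative of theta_strain."""
    return th1 * th2 * math.exp(-th2 * t) + th3 * th4 * math.exp(th4 * t)

def t_min_rate(th1, th2, th3, th4):
    """Time of minimum creep rate, from d(rate)/dt = 0:
    t* = ln(th1*th2**2 / (th3*th4**2)) / (th2 + th4)."""
    return math.log(th1 * th2**2 / (th3 * th4**2)) / (th2 + th4)

# Illustrative parameters only
th = (0.02, 0.5, 0.001, 0.08)
tm = t_min_rate(*th)
```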
A non-iterative method for fitting decay curves with background
International Nuclear Information System (INIS)
Mukoyama, T.
1982-01-01
A non-iterative method for fitting a decay curve with background is presented. The sum of an exponential function and a constant term is linearized by the use of the difference equation and parameters are determined by the standard linear least-squares fitting. The validity of the present method has been tested against pseudo-experimental data. (orig.)
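A sketch of how such a non-iterative fit can work for equally spaced data (this follows the general difference-equation idea; the exact formulation of the paper may differ): for y = A·exp(−λt) + B, consecutive samples satisfy y[i+1] = r·y[i] + B(1 − r) with r = exp(−λΔt), so one ordinary linear least-squares fit on the pairs (y[i], y[i+1]) recovers all three parameters without iteration.

```python
import math

def fit_decay_with_background(t, y):
    """Non-iterative fit of y = A*exp(-lam*t) + B for equally spaced t,
    via the difference equation y[i+1] = r*y[i] + B*(1 - r)."""
    dt = t[1] - t[0]
    x, z = y[:-1], y[1:]
    n = len(x)
    sx, sz = sum(x), sum(z)
    sxx = sum(v * v for v in x)
    sxz = sum(a * b for a, b in zip(x, z))
    r = (n * sxz - sx * sz) / (n * sxx - sx * sx)  # regression slope
    c = (sz - r * sx) / n                          # regression intercept
    lam = -math.log(r) / dt
    B = c / (1.0 - r)
    # Recover A by projecting background-subtracted data on exp(-lam*t)
    e = [math.exp(-lam * ti) for ti in t]
    A = sum((yi - B) * ei for yi, ei in zip(y, e)) / sum(ei * ei for ei in e)
    return A, lam, B

# Exact synthetic data: the fit recovers A, lam, B to machine precision
t = [float(i) for i in range(10)]
y = [100.0 * math.exp(-0.3 * ti) + 5.0 for ti in t]
A, lam, B = fit_decay_with_background(t, y)
```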
A standard curve based method for relative real time PCR data processing
Directory of Open Access Journals (Sweden)
Krause Andreas
2005-03-01
Full Text Available Abstract Background Currently real-time PCR is the most precise method by which to measure gene expression. The method generates a large amount of raw numerical data, and processing may notably influence the final results. Data processing is based either on standard curves or on PCR efficiency assessment. At the moment, the PCR efficiency approach is preferred in relative PCR whilst the standard curve is often used for absolute PCR. However, there are no barriers to employing standard curves for relative PCR. This article provides an implementation of the standard curve method and discusses its advantages and limitations in relative real-time PCR. Results We designed a procedure for data processing in relative real-time PCR. The procedure completely avoids PCR efficiency assessment, minimizes operator involvement and provides a statistical assessment of intra-assay variation. The procedure includes the following steps. (I) Noise is filtered from raw fluorescence readings by smoothing, baseline subtraction and amplitude normalization. (II) The optimal threshold is selected automatically from regression parameters of the standard curve. (III) Crossing points (CPs) are derived directly from the coordinates of points where the threshold line crosses the fluorescence plots obtained after noise filtering. (IV) The means and their variances are calculated for CPs in PCR replicates. (V) The final results are derived from the CPs' means. The CPs' variances are traced to the results by the law of error propagation. A detailed description and analysis of this data processing is provided. The limitations associated with the use of parametric statistical methods and amplitude normalization are specifically analyzed and found fit for routine laboratory practice. Different options are discussed for aggregating data obtained from multiple reference genes. Conclusion A standard curve based procedure for PCR data processing has been compiled and validated. It illustrates that
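The standard-curve step itself can be sketched as a linear regression of crossing points (CPs) on log10 concentration, with unknowns read off the fitted line; the slope also implies the amplification efficiency. The function names and numbers below are illustrative, not the paper's implementation.

```python
import math

def fit_standard_curve(log10_conc, cp):
    """Linear regression of crossing point (CP) on log10 concentration;
    returns slope, intercept, and the implied amplification efficiency."""
    n = len(cp)
    sx, sy = sum(log10_conc), sum(cp)
    sxx = sum(v * v for v in log10_conc)
    sxy = sum(u * v for u, v in zip(log10_conc, cp))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    efficiency = 10.0 ** (-1.0 / slope) - 1.0  # 1.0 = perfect doubling
    return slope, intercept, efficiency

def conc_from_cp(cp, slope, intercept):
    """Read an unknown's concentration off the standard curve."""
    return 10.0 ** ((cp - intercept) / slope)

# Perfect-doubling synthetic standards: slope = -1/log10(2)
logc = [0.0, 1.0, 2.0, 3.0]
cps = [30.0 - (1.0 / math.log10(2.0)) * x for x in logc]
slope, intercept, eff = fit_standard_curve(logc, cps)
```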
International Nuclear Information System (INIS)
Bykova, L.N.; Chesnokova, O.Ya.; Orlova, M.V.
1995-01-01
The method for linearizing the potentiometric curves of precipitation titration is studied for application to the determination of halide ions (Cl⁻, Br⁻, I⁻) in dimethylacetamide and dimethylformamide, in which titration is complicated by additional equilibrium processes. It is found that the linearization method permits determination of the titrant volume at the end point of titration to high accuracy in the case of titration curves without a potential jump in the proximity of the equivalence point (5 × 10⁻⁵ M). 3 refs., 2 figs., 3 tabs
International Nuclear Information System (INIS)
Ros, F C; Sidek, L M; Desa, M N; Arifin, K; Tosaka, H
2013-01-01
The purposes of stage-discharge curves vary, from water quality and flood modelling studies to projecting climate change scenarios and so on. As the bed of the river often changes due to the annual monsoon seasons, which sometimes bring massive floods, the capacity of the river changes, causing shifting control to occur. This study uses historical flood event data from 1960 to 2009 to calculate the stage-discharge curve of Guillemard Bridge, located on Sg. Kelantan. Regression analysis was done to check the quality of the data and examine the correlation between the two variables, Q and H. The mean values of the two variables were then adopted to find 'a', the difference between zero gauge height and the level of zero flow, together with K and 'n', to fit the rating curve equation and finally to plot the stage-discharge rating curve. Regression analysis of the historical flood data indicates that 91 percent of the original uncertainty is explained by the analysis, with a standard error of 0.085.
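The rating-curve equation with parameters a, K and n is commonly written Q = K(H − a)^n; once 'a' is fixed, K and n follow from linear regression in log space. A sketch with synthetic stage-discharge pairs (not the Guillemard Bridge data):

```python
import math

def fit_rating_curve(H, Q, a):
    """Fit Q = K*(H - a)**n for a known gauge offset 'a' by linear
    regression in log space: log Q = log K + n*log(H - a)."""
    x = [math.log(h - a) for h in H]
    y = [math.log(q) for q in Q]
    m = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    n = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    K = math.exp((sy - n * sx) / m)
    return K, n

# Synthetic, exact stage-discharge pairs for illustration
H = [1.5, 2.0, 3.0, 4.5, 6.0]
a = 0.5
Q = [12.0 * (h - a) ** 1.8 for h in H]
K, n = fit_rating_curve(H, Q, a)
```

In practice 'a' is not known exactly; it can be varied and the value maximizing the log-space linearity retained.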
Directory of Open Access Journals (Sweden)
Esteban Pérez-López
2014-11-01
Full Text Available Because of the importance of quantitative chemical analysis in research, quality control, sale of services and other areas of interest, and because some instrumental analysis methods are limited to quantification with a linear calibration curve, sometimes by the short linear dynamic range of the analyte and sometimes by the technique itself, there is a need to investigate the convenience of using quadratic curves for analytical quantification, seeking to demonstrate that they are a valid calculation model for chemical analysis instruments. To this end, atomic absorption spectroscopy was taken as the analysis technique, in particular the determination of magnesium in a drinking water sample from the Tacares sector of northern Grecia, employing a nonlinear calibration curve, specifically one with quadratic behavior, which was compared with the test results obtained for the same analysis with a linear calibration curve. The results show that the methodology is valid for the determination in question, with full confidence, since the concentrations are very similar and, according to the hypothesis tests used, can be considered equal.
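The quadratic-versus-linear calibration comparison can be sketched as fitting both models to the same standards and inverting the quadratic for concentration. The absorbance values below are invented for illustration; only the idea mirrors the abstract.

```python
import numpy as np

# Synthetic calibration standards with curvature at higher concentration
conc = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8])              # mg/L
absb = np.array([0.000, 0.105, 0.198, 0.360, 0.492, 0.600])  # absorbance

quad = np.polyfit(conc, absb, 2)  # coefficients a, b, d of a*c**2 + b*c + d
lin = np.polyfit(conc, absb, 1)

def conc_from_abs(A, a, b, d):
    """Invert the quadratic calibration, taking the root on the rising
    branch of the curve (the physically meaningful one here)."""
    disc = np.sqrt(b * b - 4.0 * a * (d - A))
    return (-b + disc) / (2.0 * a)

# Sum of squared residuals: the quadratic should track the droop better
ss_quad = float(((np.polyval(quad, conc) - absb) ** 2).sum())
ss_lin = float(((np.polyval(lin, conc) - absb) ** 2).sum())
c_back = conc_from_abs(float(np.polyval(quad, 0.4)), *quad)
```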
International Nuclear Information System (INIS)
Gelido, G; Angiletta, S; Pujalte, A; Quiroga, P; Cornes, P; Craiem, D
2007-01-01
Measurement of peripheral arterial pressure using the oscillometric method is common among professionals as well as patients in their homes. This non-invasive automatic method is fast and efficient, and the required equipment is affordable. The measurement method consists of obtaining parameters from a calibrated decreasing curve that is modulated by heart beats, which appear when arterial pressure reaches the cuff pressure. Diastolic, mean and systolic pressures are obtained by calculating particular instants from the heart beat envelope curve. In this article we analyze the envelope of this amplified curve to find out whether its morphology is related to arterial stiffness in patients. We found, in 33 volunteers, that the envelope waveform width correlates with systolic pressure (r=0.4, p<0.05), with pulse pressure (r=0.6, p<0.05) and with pulse pressure normalized to systolic pressure (r=0.6, p<0.05). We believe that the morphology of the heart beat envelope curve obtained with the oscillometric method for peripheral pressure measurement depends on arterial stiffness and can be used to enhance pressure measurements
International Nuclear Information System (INIS)
Kim, J.W.
1980-01-01
Observed magnetic resonance curves are statistically reexamined. Typical models of resonance lines are Lorentzian and Gaussian distribution functions. In the case of metallic, alloy or intermetallic compound samples, observed resonance lines are superpositions of the absorption line and the dispersion line. Methods for analyzing superposed resonance lines are demonstrated. (author)
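A superposed metallic resonance line can be sketched as a weighted mix of Lorentzian absorption and dispersion components; the mixing parameter eta is an assumption for illustration, not a quantity from the abstract.

```python
def absorption(x, x0, w):
    """Lorentzian absorption component (symmetric about x0)."""
    return w / ((x - x0) ** 2 + w ** 2)

def dispersion(x, x0, w):
    """Lorentzian dispersion component (antisymmetric about x0)."""
    return (x - x0) / ((x - x0) ** 2 + w ** 2)

def superposed_line(x, x0, w, eta):
    """Observed line in metallic samples: absorption plus a dispersion
    admixture eta, which makes the overall line shape asymmetric."""
    return (1.0 - eta) * absorption(x, x0, w) + eta * dispersion(x, x0, w)
```

Fitting eta along with x0 and w is one way to separate the two components from a measured curve.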
International Nuclear Information System (INIS)
Ha-Kawa, Sang Kil; Suga, Yutaka; Kouda, Katsuyasu; Ikeda, Koshi; Tanaka, Yoshimasa
1997-01-01
We investigated a curve-fitting method for the rate of blood retention of 99mTc-galactosyl serum albumin (GSA) as a substitute for the blood sampling method. Seven healthy volunteers and 27 patients with liver disease underwent 99mTc-GSA scanning. After normalization of the y-intercept to 100 percent, a biexponential regression curve for the precordial time-activity curve provided the percent injected dose (%ID) of 99mTc-GSA in the blood without blood sampling. The discrepancy between %ID obtained by the curve-fitting method and that by multiple blood samples was minimal in normal volunteers (3.1±2.1%, mean±standard deviation, n=77 samplings). A slightly greater discrepancy was observed in patients with liver disease (7.5±6.1%, n=135 samplings). The %ID at 15 min after injection obtained from the fitted curve was significantly greater in patients with liver cirrhosis than in the controls (53.2±11.6%, n=13 vs. 31.9±2.8%, n=7), and correlated with the plasma retention rate for indocyanine green (r=-0.869). The curve-fitting method provided the rate of blood retention of 99mTc-GSA and could be a substitute for the blood sampling method. (author)
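The normalization-and-readout step can be sketched as follows: after a biexponential fit, the y-intercept C1 + C2 is scaled to 100 % and the %ID is read at any time, e.g. 15 min. The parameter values are illustrative, not fitted patient data.

```python
import math

def percent_id(t, c1, k1, c2, k2):
    """Blood retention (%ID) from a fitted biexponential
    y(t) = c1*exp(-k1*t) + c2*exp(-k2*t), y-intercept normalized to 100%."""
    y0 = c1 + c2
    return 100.0 * (c1 * math.exp(-k1 * t) + c2 * math.exp(-k2 * t)) / y0

# Illustrative fast/slow components; time in minutes
pid_15 = percent_id(15.0, c1=70.0, k1=0.15, c2=30.0, k2=0.01)
```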
Energy Technology Data Exchange (ETDEWEB)
Sokolov, Mikhail A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2017-05-01
Small specimens are playing a key role in evaluating properties of irradiated materials. The use of small specimens provides several advantages. Typically, only a small volume of material can be irradiated in a reactor at desirable conditions in terms of temperature, neutron flux, and neutron dose. A small volume of irradiated material also allows for easier handling of specimens. Smaller specimens reduce the amount of radioactive material, minimizing personnel exposures and waste disposal. However, the use of small specimens imposes a variety of challenges as well. These challenges are associated with properly accounting for size effects and with the transferability of small-specimen data to the real structures of interest. Any fracture toughness specimen that can be made out of the broken halves of standard Charpy specimens has exceptional utility for evaluation of reactor pressure vessels (RPVs), since it allows one to determine and monitor actual fracture toughness directly instead of requiring indirect predictions using correlations established with impact data. The Charpy V-notch specimen is the most commonly used specimen geometry in surveillance programs. Validation of the mini compact tension (mini-CT) specimen geometry has been performed on the previously well-characterized Midland beltline Linde 80 (WF-70) weld in the unirradiated condition. It was shown that the fracture toughness transition temperature, To, measured by these mini-CT specimens is almost the same as the To value derived from various larger fracture toughness specimens. Moreover, an international collaborative program has been established to extend the assessment and validation efforts to irradiated Linde 80 weld metal. The program is underway and involves the Oak Ridge National Laboratory (ORNL), the Central Research Institute of Electric Power Industry (CRIEPI), and the Electric Power Research Institute (EPRI).
The irradiated mini-CT specimens machined from the broken halves of previously tested Charpy specimens of the Midland beltline weld have just arrived at ORNL as part of this international collaboration. ORNL will initiate tests of the irradiated Linde 80 weld in FY2017, and the results of this international program will be reported in FY2018.
A new method for curve fitting to data with low statistics not using the χ²-method
International Nuclear Information System (INIS)
Awaya, T.
1979-01-01
A new method which does not use χ²-fitting is investigated in order to fit a theoretical curve to data with low statistics. The method is compared with the usual and modified χ²-fitting methods. The analyses are done for computer-generated data. It is concluded that the new method gives good results in all cases. (Auth.)
The strategy curve. A method for representing and interpreting generator bidding strategies
International Nuclear Information System (INIS)
Lucas, N.; Taylor, P.
1995-01-01
The pool is the novel trading arrangement at the heart of the privatized electricity market in England and Wales. This central role in the new system makes it crucial that it is seen to function efficiently. Unfortunately, it is governed by a set of complex rules, which leads to a lack of transparency, and this makes monitoring of its operation difficult. This paper seeks to provide a method for illuminating one aspect of the pool, that of generator bidding behaviour. We introduce the concept of a strategy curve, which is a concise device for representing generator bidding strategies. This curve has the appealing characteristic of directly revealing any deviation in the bid price of a genset from the costs of generating electricity. After a brief discussion about what constitutes price and cost in this context we present a number of strategy curves for different days and provide some interpretation of their form, based in part on our earlier work with game theory. (author)
Determination of Dispersion Curves for Composite Materials with the Use of Stiffness Matrix Method
Directory of Open Access Journals (Sweden)
Barski Marek
2017-06-01
Full Text Available Elastic waves used in Structural Health Monitoring systems have a strongly dispersive character. Therefore it is necessary to determine the appropriate dispersion curves in order to properly interpret the received dynamic response of an analyzed structure. The shape of the dispersion curves, as well as the number of wave modes, depends on the mechanical properties of the layers and the frequency of the excited signal. In the current work a relatively new approach is utilized, namely the stiffness matrix method. In contrast to the transfer matrix method or the global matrix method, this algorithm is considered numerically unconditionally stable and as effective as the transfer matrix approach. However, it will be demonstrated that in the case of hybrid composites, where the mechanical properties of particular layers differ significantly, obtaining results can be difficult. The theoretical relationships are presented for a composite plate of arbitrary stacking sequence and arbitrary direction of elastic wave propagation. As a numerical example, the dispersion curves are estimated for a lamina made of carbon fibers and epoxy resin. It is assumed that the elastic waves travel parallel, perpendicular, and at an arbitrary angle to the fibers in the lamina. Next, the dispersion curves are determined for the laminate [0°, 90°, 0°, 90°, 0°, 90°, 0°, 90°] and the hybrid [Al, 90°, 0°, 90°, 0°, 90°, 0°], where Al is the aluminum alloy PA38 and the remaining layers are made of carbon fibers and epoxy resin.
A new method for testing pile by single-impact energy and P-S curve
Xu, Zhao-Yong; Duan, Yong-Kang; Wang, Bin; Hu, Yi-Li; Yang, Run-Hai; Xu, Jun; Zhao, Jin-Ming
2004-11-01
By studying the pile-formula and stress-wave methods (e.g., the CASE method), the authors propose a new method for testing piles using single-impact energy and P-S curves. The vibration and wave figures are recorded, and the dynamic and static displacements are measured by different transducers near the top of the pile when it is impacted by a heavy hammer or micro-rocket. By observing the transformation coefficient of driving energy (total energy), the energy consumed by wave motion and vibration, and so on, the vertical bearing capacity of a single pile is measured and calculated. Then, using the vibration wave diagram, the dynamic relation curve between the force (P) and the displacement (S) is calculated and the yield points are determined. Using the static-loading test, the dynamic results are checked and the relative constants of the dynamic-static P-S curves are determined. Then the subsidence corresponding to the bearing capacity is determined. Moreover, the quality of the pile body shape can be judged from the form of the P-S curves.
Elastic-plastic fracture assessment using a J-R curve by direct method
International Nuclear Information System (INIS)
Asta, E.P.
1996-01-01
In elastic-plastic evaluation methods based on the J integral and tearing modulus procedures, an essential input is the material fracture resistance (J-R) curve. In order to simplify J-R determination, a direct method based on load versus load-point displacement records from single-specimen tests may be employed. This procedure has advantages such as avoiding the accuracy problems of crack-growth measuring devices and reducing testing time. This paper presents a structural integrity assessment approach for ductile fracture using the J-R curve obtained by a direct method from small single-specimen fracture tests. The J-R direct method was implemented in a computational program based on theoretical elastic-plastic expressions. A comparative evaluation between the direct-method J resistance curves and those obtained by the standard testing methodology on typical pressure vessel steels has been made. The J-R curves estimated by the direct method show acceptable agreement with the standard results, and the approach proposed in this study is reliable for engineering determinations. (orig.)
Prediction Method for the Complete Characteristic Curves of a Francis Pump-Turbine
Directory of Open Access Journals (Sweden)
Wei Huang
2018-02-01
Full Text Available Complete characteristic curves of a pump-turbine are essential for simulating the hydraulic transients and designing pumped storage power plants but are often unavailable in the preliminary design stage. To solve this issue, a prediction method for the complete characteristics of a Francis pump-turbine was proposed. First, based on Euler equations and the velocity triangles at the runners, a mathematical model describing the complete characteristics of a Francis pump-turbine was derived. According to multiple sets of measured complete characteristic curves, explicit expressions for the characteristic parameters of characteristic operating point sets (COPs, as functions of a specific speed and guide vane opening, were then developed to determine the undetermined coefficients in the mathematical model. Ultimately, by combining the mathematical model with the regression analysis of COPs, the complete characteristic curves for an arbitrary specific speed were predicted. Moreover, a case study shows that the predicted characteristic curves are in good agreement with the measured data. The results obtained by 1D numerical simulation of the hydraulic transient process using the predicted characteristics deviate little from the measured characteristics. This method is effective and sufficient for a priori simulations before obtaining the measured characteristics and provides important support for the preliminary design of pumped storage power plants.
International Nuclear Information System (INIS)
He, Kun; Wang, Xinying; Lu, Jiayu; Cui, Quansheng; Pang, Lei; Di, Dongxu; Zhang, Qiaogen
2015-01-01
To obtain the energy deposition curve is very important in the fields to which nanosecond pulse dielectric barrier discharges (NPDBDs) are applied. It helps the understanding of the discharge physics and fast gas heating. In this paper, an equivalent circuit model composed of three capacitances is introduced and a method of calculating the energy deposition curve is proposed for a nanosecond pulse surface dielectric barrier discharge (NPSDBD) plasma actuator. The capacitance C_d and the energy deposition curve E_R are determined by mathematically proving that the mapping from C_d to E_R is bijective and numerically searching for the C_d that satisfies the requirement that E_R be a monotonically non-decreasing function. It is found that the value of the capacitance C_d varies with the amplitude of the applied pulse voltage, due to the change of discharge area, and depends on the polarity of the applied voltage. The bijectiveness of the mapping from C_d to E_R in nanosecond pulse volumetric dielectric barrier discharge (NPVDBD) is demonstrated and the feasibility of applying the new method to NPVDBD is validated. This preliminarily shows a high possibility of developing a unified approach to calculate the energy deposition curve in NPDBD. (paper)
Learning curve for robotic-assisted surgery for rectal cancer: use of the cumulative sum method.
Yamaguchi, Tomohiro; Kinugasa, Yusuke; Shiomi, Akio; Sato, Sumito; Yamakawa, Yushi; Kagawa, Hiroyasu; Tomioka, Hiroyuki; Mori, Keita
2015-07-01
Few data are available to assess the learning curve for robotic-assisted surgery for rectal cancer. The aim of the present study was to evaluate the learning curve for robotic-assisted surgery for rectal cancer by a surgeon at a single institute. From December 2011 to August 2013, a total of 80 consecutive patients who underwent robotic-assisted surgery for rectal cancer performed by the same surgeon were included in this study. The learning curve was analyzed using the cumulative sum method. This method was used for all 80 cases, taking into account operative time. Operative procedures included anterior resections in 6 patients, low anterior resections in 46 patients, intersphincteric resections in 22 patients, and abdominoperineal resections in 6 patients. Lateral lymph node dissection was performed in 28 patients. Median operative time was 280 min (range 135-683 min), and median blood loss was 17 mL (range 0-690 mL). No postoperative complications of Clavien-Dindo classification Grade III or IV were encountered. We arranged operative times and calculated cumulative sum values, allowing differentiation of three phases: phase I, Cases 1-25; phase II, Cases 26-50; and phase III, Cases 51-80. Our data suggested three phases of the learning curve in robotic-assisted surgery for rectal cancer. The first 25 cases formed the learning phase.
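The CUSUM analysis of operative times can be sketched as the running sum of deviations from the overall mean; phase boundaries (such as the end of the learning phase around case 25) appear as changes in the slope of this curve. A minimal version with invented operative times:

```python
def cusum(values):
    """Cumulative sum of deviations from the overall mean.
    Rising segments mark cases slower than average (learning);
    falling segments mark cases faster than average (mastery)."""
    mean = sum(values) / len(values)
    out, running = [], 0.0
    for v in values:
        running += v - mean
        out.append(running)
    return out

# Invented operative times (min): slow early cases, faster later ones
times = [320, 310, 330, 300, 280, 270, 260, 250, 255, 245]
curve = cusum(times)
```

By construction the curve always returns to zero at the last case; what matters for phase detection is where its slope changes sign.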
Dispersion curve estimation via a spatial covariance method with ultrasonic wavefield imaging.
Chong, See Yenn; Todd, Michael D
2018-05-01
Numerous Lamb wave dispersion curve estimation methods have been developed to support damage detection and localization strategies in non-destructive evaluation/structural health monitoring (NDE/SHM) applications. In this paper, the covariance matrix is used to extract features from an ultrasonic wavefield imaging (UWI) scan in order to estimate the phase and group velocities of S0 and A0 modes. A laser ultrasonic interrogation method based on a Q-switched laser scanning system was used to interrogate full-field ultrasonic signals in a 2-mm aluminum plate at five different frequencies. These full-field ultrasonic signals were processed in three-dimensional space-time domain. Then, the time-dependent covariance matrices of the UWI were obtained based on the vector variables in Cartesian and polar coordinate spaces for all time samples. A spatial covariance map was constructed to show spatial correlations within the full wavefield. It was observed that the variances may be used as a feature for S0 and A0 mode properties. The phase velocity and the group velocity were found using a variance map and an enveloped variance map, respectively, at five different frequencies. This facilitated the estimation of Lamb wave dispersion curves. The estimated dispersion curves of the S0 and A0 modes showed good agreement with the theoretical dispersion curves. Copyright © 2018 Elsevier B.V. All rights reserved.
Hongyang, Yu; Zhengang, Lu; Xi, Yang
2017-05-01
Modular multilevel converters (MMCs) are more and more widely used in high voltage DC transmission systems and high power motor drive systems; the MMC is a major topological structure for high power AC-DC conversion. Due to the large number of modules, the complex control algorithm, and the high-power application background, the MMC model used for simulation should be as accurate as possible in reproducing the details of how the MMC works for dynamic testing of the MMC controller. But so far, there is no simple simulation MMC model which can reproduce the switching dynamic process. In this paper, a curve-embedded full-bridge MMC modeling method with detailed representation of IGBT characteristics is proposed. The method is based on switching-curve referencing and simple circuit calculation, and it is simple to implement. Based on simulation comparison tests in Matlab/Simulink, the proposed method is shown to be correct.
About the method of approximation of a simple closed plane curve with a sharp edge
Directory of Open Access Journals (Sweden)
Zelenyy A.S.
2017-02-01
Full Text Available As noted in the article, the problem of interpolation of a simple plane curve initially arose in the simulation of subsonic flow around a body, with subsequent calculation of the velocity potential using the vortex panel method. However, as it turned out, the practical importance of this method is much wider. The algorithm can be successfully applied in any task that requires a discrete set of points describing an arbitrary curve: the potential function method, flow around an airfoil with a trailing edge (airfoil, liquid drop, etc.), analytic expressions that are very difficult to obtain, font and logo creation, and some tasks in architecture and the garment industry.
S-curve networks and an approximate method for estimating degree distributions of complex networks
International Nuclear Information System (INIS)
Guo Jin-Li
2010-01-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using the S curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China. The results provide reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, the paper proposes a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of the individual nodes and is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research. (general)
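The logistic (S-curve) growth model underlying the forecast can be sketched as follows; the carrying capacity, growth rate, and midpoint below are illustrative assumptions, not the paper's fitted values.

```python
# Hedged sketch of the S-curve (logistic) growth law used for forecasting
# finite network growth. All parameter values are illustrative.
import math

def logistic(t, K, r, t0):
    """Logistic curve: size grows toward the finite limit K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Assumed parameters: growth limit K, growth rate r, midpoint year t0
K, r, t0 = 3.3e8, 0.6, 2010
sizes = [logistic(year, K, r, t0) for year in range(2004, 2016)]
```

The curve is strictly increasing, passes through K/2 at the midpoint year, and saturates at K, which is the "finitely growing limit" the abstract refers to.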
Interior design. Mastering the master plan.
Mesbah, C E
1995-10-01
Reflecting on the results of the survey, this proposed interior design master planning process addresses the concerns and issues of both CEOs and facility managers in ways that focus on problem-solving strategies and methods. Use of the interior design master plan process further promotes the goals and outcomes expressed in the survey by both groups. These include enhanced facility image, the efficient selection of finishes and furnishings, continuity despite staff changes, and overall savings in both costs and time. The interior design master plan allows administrators and facility managers to anticipate changes resulting from the restructuring of health care delivery. The administrators and facility managers are then able to respond in ways that manage those changes in the flexible and cost-effective manner they are striving for. This framework permits staff members to concentrate their time and energy on the care of their patients--which is, after all, what it's all about.
Ensemble Learning Method for Outlier Detection and its Application to Astronomical Light Curves
Nun, Isadora; Protopapas, Pavlos; Sim, Brandon; Chen, Wesley
2016-09-01
Outlier detection is necessary for automated data analysis, with specific applications spanning almost every domain from financial markets to epidemiology to fraud detection. We introduce a novel mixture-of-experts outlier detection model, which uses a dynamically trained, weighted network of five distinct outlier detection methods. After dimensionality reduction, individual outlier detection methods score each data point for "outlierness" in this new feature space. Our model then uses dynamically trained parameters to weigh the scores of each method, yielding a final outlier score. We find that the mixture-of-experts model performs, on average, better than any single expert model in identifying both artificially and manually picked outliers. This mixture model is applied to a data set of astronomical light curves, after dimensionality reduction via time series feature extraction. Our model was tested using three fields from the MACHO catalog and generated a list of anomalous candidates. We confirm that the outliers detected using this method belong to rare classes, such as novae, He-burning stars, and red giant stars; other outlier light curves identified have no available information associated with them. To elucidate their nature, we created a website containing the light-curve data and information about these objects. Users can attempt to classify the light curves, give conjectures about their identities, and sign up for follow-up messages about the progress made on identifying these objects. This user-submitted data can be used to further train our mixture-of-experts model. Our code is publicly available to all who are interested.
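A minimal sketch of the weighted-ensemble idea, assuming just two toy experts (a z-score expert and a median-distance expert) with fixed equal weights; the paper's five methods, dimensionality reduction, and dynamic weight training are not reproduced.

```python
# Hedged sketch of a weighted "mixture of experts" outlier score.
def zscore_scores(data):
    """Expert 1: absolute standardized deviation from the mean."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    sd = var ** 0.5 or 1.0
    return [abs(x - mean) / sd for x in data]

def median_scores(data):
    """Expert 2: absolute distance from the median."""
    s = sorted(data)
    med = s[len(s) // 2]
    return [abs(x - med) for x in data]

def ensemble_scores(data, weights=(0.5, 0.5)):
    """Combine expert scores with fixed weights (trained weights in the paper)."""
    experts = (zscore_scores(data), median_scores(data))
    return [sum(w * e[i] for w, e in zip(weights, experts))
            for i in range(len(data))]

data = [1.0, 1.1, 0.9, 1.05, 0.95, 8.0]   # last point is an obvious outlier
scores = ensemble_scores(data)
```

In the paper the weights themselves are learned dynamically; here they are fixed only to keep the sketch self-contained.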
Feasibility of the correlation curves method in calorimeters of different types
Grushevskaya, E. A.; Lebedev, I. A.; Fedosimova, A. I.
2014-01-01
A simulation of the development of cascade processes in calorimeters of different types, aimed at implementing energy measurement by the correlation curves method, is carried out. A heterogeneous calorimeter has significant transient effects, associated with the difference in critical energy between the absorber and the detector. The best option is a mixed calorimeter, which has a target block, leading to rapid development of the cascade, and a homogeneous measuring unit. Uncertainties of e...
Serôdio, João; Ezequiel, João; Frommlet, Jörg; Laviale, Martin; Lavaud, Johann
2013-11-01
Light-response curves (LCs) of chlorophyll fluorescence are widely used in plant physiology. Most commonly, LCs are generated sequentially, exposing the same sample to a sequence of distinct actinic light intensities. These measurements are not independent, as the response to each new light level is affected by the light exposure history experienced during previous steps of the LC, an issue particularly relevant in the case of the popular rapid light curves. In this work, we demonstrate the proof of concept of a new method for the rapid generation of LCs from nonsequential, temporally independent fluorescence measurements. The method is based on the combined use of sample illumination with digitally controlled, spatially separated beams of actinic light and a fluorescence imaging system. It allows the generation of a whole LC, including a large number of actinic light steps and adequate replication, within the time required for a single measurement (and therefore named "single-pulse light curve"). This method is illustrated for the generation of LCs of photosystem II quantum yield, relative electron transport rate, and nonphotochemical quenching on intact plant leaves exhibiting distinct light responses. This approach makes it also possible to easily characterize the integrated dynamic light response of a sample by combining the measurement of LCs (actinic light intensity is varied while measuring time is fixed) with induction/relaxation kinetics (actinic light intensity is fixed and the response is followed over time), describing both how the response to light varies with time and how the response kinetics varies with light intensity.
A neural network driving curve generation method for the heavy-haul train
Directory of Open Access Journals (Sweden)
Youneng Huang
2016-05-01
Full Text Available The heavy-haul train has a number of distinctive characteristics, such as its locomotive traction properties, its greater length, and the nonlinear train-pipe pressure during braking. When the train is running on a continuously long and steep downgrade, its safety is ensured by cycle braking, which puts high demands on the driving skills of the driver. In this article, a driving curve generation method for the heavy-haul train based on a neural network is proposed. First, in order to describe the nonlinear characteristics of train braking, the neural network model is constructed and trained on practical driving data. In the neural network model, various nonlinear neurons are interconnected for information processing and transmission. The target values of train braking pressure reduction and release time are obtained by modeling the braking process. The equation of train motion is then computed to obtain the driving curve. Finally, in four typical operation scenarios, the curve data generated by the method are compared with corresponding practical data from the Shuohuang heavy-haul railway line; the results show that the method is effective.
Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve
Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.
2009-04-01
The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. investigations of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for case 1 (1.63) and case 2 (1.33) than the Cresswell and Paydar (1996) method.
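Assuming the Tyler-Wheatcraft fractal form θ(ψ) = θs (ψa/ψ)^(3−D), with the suction ψ taken as a positive magnitude, both parameters can be solved in closed form from two measured points. This is a hedged sketch of that two-point idea; the functional form and the numbers below are assumptions for illustration, not the study's data.

```python
# Hedged sketch: two-point estimation of the Tyler-Wheatcraft parameters
# (fractal dimension D and air-entry value psi_a), assuming the power law
#   theta(psi) = theta_s * (psi_a / psi) ** (3 - D)   for psi >= psi_a.
import math

def fit_two_points(psi1, theta1, psi2, theta2, theta_s):
    """Solve D and psi_a exactly from two (suction, water content) points."""
    D = 3.0 + math.log(theta1 / theta2) / math.log(psi1 / psi2)
    psi_a = psi1 * (theta1 / theta_s) ** (1.0 / (3.0 - D))
    return D, psi_a

def theta(psi, theta_s, D, psi_a):
    """Water content at suction psi under the assumed fractal model."""
    return theta_s if psi <= psi_a else theta_s * (psi_a / psi) ** (3.0 - D)

# Illustrative points at 33 kPa and 1500 kPa (case 2 in the abstract)
D, psi_a = fit_two_points(33.0, 0.30, 1500.0, 0.15, theta_s=0.45)
```

By construction the fitted curve passes exactly through both input points, which is the essence of any two-point SWRC method.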
Synthesis mechanism of an Al-Ti-C grain refiner master alloy prepared by a new method
Zhang, B. Q.; Lu, L.; Lai, M. O.; Fang, H. S.; Ma, H. T.; Li, J. G.
2003-08-01
The mechanisms of in-situ synthesis of an Al-Ti-C grain-refiner master alloy, prepared by adding a powder mixture of potassium titanium fluoride and carbon into an aluminum melt, have been systematically studied. It was found that vigorous reactions occurred at the initial stage and then slowed down. After about 20 minutes, the reactions, which led to the formation of blocky titanium aluminides and submicron titanium carbides in the aluminum matrix, appeared to reach completion. Potassium titanium fluoride reacted with aluminum and carbon at 724 °C and 736 °C, respectively, resulting in the formation of titanium aluminides and titanium carbides in the aluminum matrix as well as in the formation of a low-melting-point slag of binary potassium aluminofluorides. The reaction between potassium titanium fluoride and carbon is believed to be the predominant mechanism in the synthesis of TiC by this method.
Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit
Tan, Jianbin
2018-02-01
In the engineering design of large-scale grid-connected photovoltaic power stations and in the research and development of simulation and analysis systems, the operating characteristic curves of photovoltaic array units must be drawn by computer; a good piecewise non-linear interpolation algorithm is proposed for this purpose. With component performance parameters as the main design basis, the computer can obtain five PV module performance characteristics. Combined with the series and parallel connection of the PV array, computer drawing of the performance curve of the PV array unit can then be realized. The calculated data can also be passed to modules of PV development software, improving practical application.
A study of potential energy curves from the model space quantum Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Ohtsuka, Yuhki; Ten-no, Seiichiro, E-mail: tenno@cs.kobe-u.ac.jp [Department of Computational Sciences, Graduate School of System Informatics, Kobe University, Nada-ku, Kobe 657-8501 (Japan)
2015-12-07
We report on the first application of the model space quantum Monte Carlo (MSQMC) to potential energy curves (PECs) for the excited states of C{sub 2}, N{sub 2}, and O{sub 2} to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs of MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for precise PECs in a wide range obviating problems concerning quasi-degeneracy.
A New Method of Preparing a Master Card from the "National Union Catalog"
Schertz, Morris; Shavit, David
1971-01-01
The University of Denver is employing a new method for producing copy from the "National Union Catalog" which has distinct advantages over other existing methods, particularly as far as cost per copy is concerned. (1 reference) (Author)
Directory of Open Access Journals (Sweden)
Van Than Dung
Full Text Available B-spline functions are widely used in many industrial applications such as computer graphic representation, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there have been demands, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and by deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth ones to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
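As background for the B-spline machinery above, here is a hedged, self-contained sketch of de Boor's algorithm for evaluating a B-spline curve. It illustrates the representation being fitted, not the paper's two-step knot-placement and optimization method.

```python
# Hedged sketch: evaluating a degree-p B-spline at parameter u with
# de Boor's algorithm (scalar control points, clamped knot vector).
def de_boor(p, u, knots, ctrl):
    """Evaluate a degree-p B-spline with the given knots and control points."""
    # find the knot span s with knots[s] <= u < knots[s+1],
    # clamped so the right endpoint u = knots[-1] is handled
    s = max(j for j in range(len(knots) - 1) if knots[j] <= u)
    s = min(s, len(ctrl) - 1)
    d = [ctrl[j + s - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            lo, hi = knots[j + s - p], knots[j + 1 + s - r]
            alpha = 0.0 if hi == lo else (u - lo) / (hi - lo)
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# midpoint of a quadratic Bézier arch (control values 0, 1, 0) → 0.5
mid = de_boor(2, 0.5, [0, 0, 0, 1, 1, 1], [0.0, 1.0, 0.0])
```

With a clamped knot vector a quadratic B-spline on a single span reduces to a Bézier curve, which gives an easy correctness check.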
Directory of Open Access Journals (Sweden)
Linning Ye
2018-01-01
Full Text Available The presence of outliers in tracer concentration-time curves derived from dynamic contrast-enhanced imaging can adversely affect the analysis of the tracer curves by model fitting. A computationally efficient method for detecting outliers in tracer concentration-time curves is presented in this study. The proposed method is based on a piecewise linear model and is implemented using a robust clustering algorithm. The method is noniterative and all parameters are estimated automatically. To compare the proposed method with existing Gaussian-model-based and robust-regression-based methods, simulation studies were performed using tracer concentration-time curves generated with the generalized Tofts model and kinetic parameters derived from different tissue types. Results show that the proposed method and the robust-regression-based method achieve better detection performance than the Gaussian-model-based method. Compared with the robust-regression-based method, the proposed method achieves similar detection performance with much faster computation.
Assessment of Estimation Methods for Stage-Discharge Rating Curve in Rippled Bed Rivers
Directory of Open Access Journals (Sweden)
P. Maleki
2016-02-01
in a flume located at the hydraulic laboratory of Shahrekord University, Iran. Baas (1993) [reported in Joep (1999)] determined an empirical relation between the median grain size, D50, and the equilibrium ripple length, L: L = 75.4 (log D50) + 197 (Eq. 1), where L and D50 are both given in millimeters. Raudkivi (1997) [reported in Joep (1999)] proposed another empirical relation to estimate the ripple length, with D50 given in millimeters: L = 245 (D50)^0.35 (Eq. 2). Flemming (1988) [reported in Joep (1999)] derived an empirical relation between mean ripple length and ripple height based on a large dataset: hm = 0.0677 L^0.8098 (Eq. 3), where hm is the mean ripple height (m) and L is the mean ripple length (m). Ikeda and Asaeda (1983) investigated the characteristics of flow over ripples. They found that there are separation areas and vortices in the lee of ripples, and that maximum turbulent diffusion occurs in these areas. Materials and Methods: In this research, the effects of two different types of ripples on the hydraulic characteristics of flow were studied experimentally in a flume located at the hydraulic laboratory of Shahrekord University, Iran. The flume is 0.4 m wide and deep and 12 m long. In total, 48 tests were conducted, with slopes varying from 0.0005 to 0.003 and discharges from 10 to 40 L/s. Velocity and shear stress were measured using an Acoustic Doppler Velocimeter (ADV). Two different types of ripples (parallel and flake ripples) were used. The stage-discharge rating curve was then estimated in different ways, such as Einstein-Barbarossa, Shen, and White et al. Results and Discussion: Statistical methods were used to analyze the test results. The White method gave the maximum values of α, RMSE, and average absolute error among the methods. The Einstein method underestimated the fitted discharge. Evaluation of the stage-discharge rating curve methods based on the results obtained from this research showed that the Shen method had the highest accuracy for developing the
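The empirical ripple-geometry relations quoted in the abstract (ripple length from median grain size, and ripple height from ripple length) can be written directly as functions. This is a hedged transcription: the attribution and units follow the abstract (millimetres for the grain-size relations, metres for the height relation), and the 0.25 mm example grain size is an illustrative assumption.

```python
# Hedged sketch of the empirical ripple-geometry relations quoted above.
import math

def ripple_length_baas(d50_mm):
    """Baas (1993), as quoted: L = 75.4 log10(D50) + 197, L and D50 in mm."""
    return 75.4 * math.log10(d50_mm) + 197.0

def ripple_length_raudkivi(d50_mm):
    """Raudkivi (1997), as quoted: L = 245 D50^0.35, D50 in mm."""
    return 245.0 * d50_mm ** 0.35

def ripple_height_flemming(length_m):
    """Flemming (1988), as quoted: hm = 0.0677 L^0.8098, both in metres."""
    return 0.0677 * length_m ** 0.8098

# For a 0.25 mm sand the two length relations agree closely:
l1 = ripple_length_baas(0.25)      # ≈ 151.6 mm
l2 = ripple_length_raudkivi(0.25)  # ≈ 150.8 mm
```

That the two independent relations give ripple lengths within a millimetre of each other for medium sand is a useful plausibility check on the reconstructed formulas.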
Directory of Open Access Journals (Sweden)
Wei Chen
2017-01-01
Full Text Available Automated tool trajectory planning for spray painting robots is still a challenging problem, especially for large complex curved surfaces. This paper presents a new method of trajectory optimization for spray painting robots based on the exponential mean Bézier method. The definition and three theorems of exponential mean Bézier curves are discussed. Then a spatial painting path generation method based on exponential mean Bézier curves is developed, and a new simple algorithm for trajectory optimization on complex curved surfaces is introduced. A golden-section method is adopted to compute the optimal parameter values. The experimental results illustrate that the exponential mean Bézier curves enhance the flexibility of path planning, and that the trajectory optimization algorithm achieves satisfactory performance. This method can also be extended to other applications.
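The golden-section step mentioned above is a standard one-dimensional search. Here is a hedged, generic sketch applied to a toy quadratic, not to the paper's painting objective.

```python
# Hedged sketch of golden-section search for a 1-D unimodal minimum.
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Locate the minimizer of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ≈ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            # minimum lies in [a, d]; reuse c as the new d
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # minimum lies in [c, b]; reuse d as the new c
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

x = golden_section_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

Each iteration shrinks the bracket by the constant factor 1/φ while reusing one interior evaluation, which is why the method is popular for derivative-free 1-D subproblems like the one the abstract mentions.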
Gompertz: A Scilab Program for Estimating Gompertz Curve Using Gauss-Newton Method of Least Squares
Directory of Open Access Journals (Sweden)
Surajit Ghosh Dastidar
2006-04-01
Full Text Available A computer program for estimating the Gompertz curve using the Gauss-Newton method of least squares is described in detail. It is based on the estimation technique proposed in Reddy (1985). The program is developed using Scilab (version 3.1.1), a freely available scientific software package that can be downloaded from http://www.scilab.org/. Data are fed into the program from an external disk file, which should be in Microsoft Excel format. The output contains the sample size, tolerance limit, a list of initial as well as final estimates of the parameters, standard errors, the values of the Gauss-Normal equations GN1, GN2 and GN3, the number of iterations, the variance (σ2), the Durbin-Watson statistic, goodness-of-fit measures such as R2, the D value, the covariance matrix, and residuals. It also displays a graphical output of the estimated curve vis-à-vis the observed curve. It is an improved version of the program proposed in Dastidar (2005).
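The Gauss-Newton iteration for the Gompertz curve y = a·exp(−b·exp(−c·t)) can be sketched in plain Python as below. This is a standalone illustration on synthetic, noise-free data with an assumed starting point; it is not the Scilab program itself and omits its standard errors and diagnostics.

```python
# Hedged sketch: Gauss-Newton least-squares fit of the Gompertz curve.
import math

def solve3(A, v):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def gompertz(t, a, b, c):
    return a * math.exp(-b * math.exp(-c * t))

def fit_gompertz(ts, ys, a, b, c, iters=50):
    """Gauss-Newton: repeatedly solve (J^T J) delta = J^T r and update."""
    for _ in range(iters):
        J, r = [], []
        for t, y in zip(ts, ys):
            e = math.exp(-c * t)
            g = math.exp(-b * e)
            J.append([g, -a * e * g, a * b * t * e * g])  # partials wrt a, b, c
            r.append(y - a * g)                            # residual
        JtJ = [[sum(J[k][i] * J[k][j] for k in range(len(J))) for j in range(3)]
               for i in range(3)]
        Jtr = [sum(J[k][i] * r[k] for k in range(len(J))) for i in range(3)]
        da, db, dc = solve3(JtJ, Jtr)
        a, b, c = a + da, b + db, c + dc
    return a, b, c

ts = list(range(10))
ys = [gompertz(t, 10.0, 3.0, 0.5) for t in ts]   # noise-free synthetic data
a, b, c = fit_gompertz(ts, ys, 9.0, 2.8, 0.45)   # recovers ≈ (10, 3, 0.5)
```

On noise-free data with a reasonable starting point Gauss-Newton converges rapidly; real data would call for the safeguards (tolerance limits, iteration counts) the abstract lists among the program's outputs.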
Mandel, Lauren H.
2017-01-01
Research methods education in LIS master's degree programs is facing several difficult questions: should a methods course be required, what content should be taught in that course, and what is the most effective mechanism for teaching that content. There is little consensus about what should be taught or how, but the American Library Association,…
International Nuclear Information System (INIS)
Nakagawa, Yasuaki
1996-01-01
The methods for testing permanent magnets stipulated in the usual industrial standards are so-called closed magnetic circuit methods, which employ a loop tracer using an iron-core electromagnet. If the coercivity exceeds the highest magnetic field generated by the electromagnet, full hysteresis curves cannot be obtained. In the present work, magnetic fields up to 15 T were generated by a high-power water-cooled magnet, and the magnetization was measured by an induction method with an open magnetic circuit, in which the effect of the demagnetizing field should be taken into account. Various rare earth magnet materials, such as sintered or bonded Sm-Co and Nd-Fe-B, were provided by a number of manufacturers. Hysteresis curves were measured for cylindrical samples 10 mm in diameter and 2 mm, 3.5 mm, 5 mm, 14 mm or 28 mm in length. Correction for the demagnetizing field is rather difficult because of its non-uniformity. Roughly speaking, a mean demagnetizing factor for soft magnetic materials can be used for the correction, although the application of this factor to hard magnetic materials is hardly justified. Thus the dimensions of the sample should be specified when data obtained by the open magnetic circuit method are used as industrial standards. (author)
An information preserving method for producing full coverage CoRoT light curves
Directory of Open Access Journals (Sweden)
Pascual-Granado J.
2015-01-01
Full Text Available Invalid flux measurements, caused mainly by the South Atlantic Anomaly crossings of the CoRoT satellite, introduce aliases in the periodogram and wrong amplitudes. It has been demonstrated that replacing such invalid data with a linear interpolation is not harmless. On the other hand, using power spectrum estimators for unevenly sampled time series is not only less computationally efficient but also leads to difficulties in the interpretation of the results. Therefore, even when the gaps are rather small and the duty cycle is high enough, the use of gap-filling methods is a gain in frequency analysis. However, the method must preserve the information contained in the time series. In this work we give a short description of an information-preserving method (MIARMA) and show some results of applying it to CoRoT seismo light curves. The method is implemented as the second step of a pipeline for CoRoT data analysis.
A simple method for determining the critical point of the soil water retention curve
DEFF Research Database (Denmark)
Chen, Chong; Hu, Kelin; Ren, Tusheng
2017-01-01
The transition point between capillary water and adsorbed water, which is the critical point Pc [defined by the critical matric potential (ψc) and the critical water content (θc)] of the soil water retention curve (SWRC), demarcates the energy and water content region where flow is dominated by capillarity rather than by adsorption. In this study, a fixed tangent line method was developed to estimate Pc as an alternative to the commonly used flexible tangent line method. The relationships between Pc, particle-size distribution, and specific surface area (SSA) were analyzed. For 27 soils with various textures, the mean RMSE of water content from the fixed tangent line method was 0.007 g g–1, slightly better than that of the flexible tangent line method. With increasing clay content or SSA, ψc was initially more negative but became less negative at clay contents above ∼30%. Increasing silt content resulted in more negative ψc values
A simple method for one-loop renormalization in curved space-time
Energy Technology Data Exchange (ETDEWEB)
Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics, P.O. Box 64, FI-00014, University of Helsinki (Finland); Tranberg, Anders, E-mail: tommi.markkanen@helsinki.fi, E-mail: anders.tranberg@uis.no [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)
2013-08-01
We present a simple method for deriving the renormalization counterterms from the components of the energy-momentum tensor in curved space-time. This method allows control over the finite parts of the counterterms and provides explicit expressions for each term separately. As an example, the method is used for the self-interacting scalar field in a Friedmann-Robertson-Walker metric in the adiabatic approximation, where we calculate the renormalized equation of motion for the field and the renormalized components of the energy-momentum tensor to fourth adiabatic order while including interactions to one-loop order. Within this formalism the trace anomaly, including contributions from interactions, is shown to have a simple derivation. We compare our results to those obtained by two standard methods, finding agreement with the Schwinger-DeWitt expansion but disagreement with adiabatic subtractions for interacting theories.
Nauleau, Pierre; Minonzio, Jean-Gabriel; Chekroun, Mathieu; Cassereau, Didier; Laugier, Pascal; Prada, Claire; Grimal, Quentin
2016-07-01
Our long-term goal is to develop an ultrasonic method to characterize the thickness, stiffness and porosity of the cortical shell of the femoral neck, which could enhance hip fracture risk prediction. To this purpose, we proposed to adapt a technique based on the measurement of guided waves. We previously evidenced the feasibility of measuring circumferential guided waves in a bone-mimicking phantom of a circular cross-section of even thickness. The goal of this study is to investigate the impact of the complex geometry of the femoral neck on the measurement of guided waves. Two phantoms of an elliptical cross-section and one phantom of a realistic cross-section were investigated. A 128-element array was used to record the inter-element response matrix of these waveguides. This experiment was simulated using a custom-made hybrid code. The response matrices were analyzed using a technique based on the physics of wave propagation. This method yields portions of dispersion curves of the waveguides which were compared to reference dispersion curves. For the elliptical phantoms, three portions of dispersion curves were determined with a good agreement between experiment, simulation and theory. The method was thus validated. The characteristic dimensions of the shell were found to influence the identification of the circumferential wave signals. The method was then applied to the signals backscattered by the superior half of constant thickness of the realistic phantom. A cut-off frequency and some portions of modes were measured, with a good agreement with the theoretical curves of a plate waveguide. We also observed that the method cannot be applied directly to the signals backscattered by the lower half of varying thicknesses of the phantom. The proposed approach could then be considered to evaluate the properties of the superior part of the femoral neck, which is known to be a clinically relevant site.
Michel, Claude; Andréassian, Vazken; Perrin, Charles
2005-02-01
This paper unveils major inconsistencies in the age-old and yet efficient Soil Conservation Service Curve Number (SCS-CN) procedure. Our findings are based on an analysis of the continuous soil moisture accounting procedure implied by the SCS-CN equation. It is shown that several flaws plague the original SCS-CN procedure, the most important one being a confusion between intrinsic parameter and initial condition. A change of parameterization and a more complete assessment of the initial condition lead to a renewed SCS-CN procedure, while keeping the acknowledged efficiency of the original method.
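For reference, the classical SCS-CN relation that the paper critiques can be sketched as follows, in its standard textbook form in millimetres with the conventional Ia = 0.2S assumption; this is the original procedure, not the authors' renewed parameterization.

```python
# Hedged sketch of the classical SCS-CN runoff equation:
#   S  = 25400/CN - 254            potential maximum retention (mm)
#   Ia = 0.2 * S                   initial abstraction (conventional ratio)
#   Q  = (P - Ia)^2 / (P - Ia + S) direct runoff for P > Ia, else 0
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

q = scs_cn_runoff(100.0, 80)   # ≈ 50.5 mm of runoff from 100 mm of rain
```

The curve number CN acts here as the single "intrinsic" parameter; the paper's point is that the standard procedure conflates this parameter with the initial soil moisture condition.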
Directory of Open Access Journals (Sweden)
Junyi Li
2017-01-01
Full Text Available A BP (backpropagation) neural network method is employed to address a problem in the current processing of hydroturbine synthetic characteristic curves: most studies consider only data in the high-efficiency, large guide-vane-opening area, which can hardly meet the requirements of transition-process research, especially in large-fluctuation situations. The principle of the proposed method is to convert the nonlinear characteristics of the turbine into torque and flow characteristics, which can be used directly for real-time simulation based on the neural network. Results show that the sample data can be successfully extended to cover wider working areas under different operation conditions. Another major contribution of this paper is a resampling technique proposed to overcome the limitation of the sampling period on simulation. In addition, a detailed analysis of improvements to the iteration convergence of the pressure loop is presented, leading to better iterative convergence during the head pressure calculation. Actual applications verify that the methods proposed in this paper give simulation results that are closer to the field data and provide a new perspective for hydroturbine synthetic characteristic curve fitting and modeling.
Li, Daniel
2014-01-01
This easy-to-understand tutorial provides you with several engaging projects that show you how to utilize Grunt with various web technologies, teaching you how to master build automation and testing with Grunt in your applications. If you are a JavaScript developer looking to streamline your workflow with build automation, then this book will give you a kick start in fully understanding the importance of the described web technologies and in automating their processes using Grunt.
Pappa, Richard S. (Technical Monitor); Black, Jonathan T.
2003-01-01
This report discusses the development and application of metrology methods called photogrammetry and videogrammetry that make accurate measurements from photographs. These methods have been adapted for the static and dynamic characterization of gossamer structures, as four specific solar sail applications demonstrate. The applications prove that high-resolution, full-field, non-contact static measurements of solar sails using dot projection photogrammetry are possible as well as full-field, non-contact, dynamic characterization using dot projection videogrammetry. The accuracy of the measurement of the resonant frequencies and operating deflection shapes that were extracted surpassed expectations. While other non-contact measurement methods exist, they are not full-field and require significantly more time to take data.
International Nuclear Information System (INIS)
Visbal, Jorge H. Wilches; Costa, Alessandro M.
2016-01-01
Percentage depth dose (PDD) of electron beams is an important item of data in radiation therapy, since it describes their dosimetric properties. Accurate transport theory, and the Monte Carlo method, have shown clear differences between the dose distribution of the electron beams of a clinical accelerator in a water phantom and the dose distribution, in water, of monoenergetic electrons at the accelerator's nominal energy. In radiotherapy, the electron spectra should therefore be considered to improve the accuracy of dose calculation, since the shape of the PDD curve depends on the way radiation particles deposit their energy in the patient/phantom, that is, on the spectrum. Three principal approaches exist for obtaining electron energy spectra from the central-axis PDD: the Monte Carlo method, direct measurement, and inverse reconstruction. In this work, the simulated annealing method is presented as a practical, reliable and simple approach to inverse reconstruction, and as an optimal alternative to the other options. (author)
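The inverse-reconstruction idea can be illustrated with a generic simulated annealing loop that recovers spectrum weights from a toy depth-dose curve; the kernels, weights, and cooling schedule below are invented for illustration and are not clinical data or the author's implementation:

```python
import math, random

random.seed(1)

depths = [0.5 * i for i in range(20)]            # depth grid in cm (toy)

def kernel(d, r):
    """Crude mono-energetic depth-dose shape with practical range r (toy)."""
    return max(0.0, 1.0 - (d / r) ** 2)

ranges = [3.0, 5.0, 7.0]                         # nominal ranges (toy)
true_w = [0.2, 0.5, 0.3]                         # "unknown" spectrum weights
measured = [sum(w * kernel(d, r) for w, r in zip(true_w, ranges))
            for d in depths]

def cost(w):
    """Squared mismatch between reconstructed and measured depth dose."""
    return sum((sum(wi * kernel(d, r) for wi, r in zip(w, ranges)) - m) ** 2
               for d, m in zip(depths, measured))

# Simulated annealing over the weight vector
w = [1.0 / 3.0] * 3
best = list(w)
t = 1.0
while t > 1e-4:
    cand = [max(0.0, wi + random.gauss(0.0, 0.05)) for wi in w]
    dc = cost(cand) - cost(w)
    if dc < 0 or random.random() < math.exp(-dc / t):
        w = cand                                 # accept move (maybe uphill)
        if cost(w) < cost(best):
            best = list(w)
    t *= 0.995                                   # geometric cooling
```

The occasional acceptance of uphill moves at high temperature is what lets the method escape local minima, which is the property that makes it attractive for ill-posed spectrum reconstruction.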
Assessment of p-y Curves from Numerical Methods for a non-Slender Monopile in Cohesionless Soil
DEFF Research Database (Denmark)
Ibsen, Lars Bo; Roesen, Hanne Ravn; Wolf, Torben K.
2013-01-01
In current design, the stiff, large-diameter monopile is a widely used foundation for offshore wind turbines. Winds and waves subject the monopile to considerable lateral loads. Current design guidelines apply the p-y curve method with curve formulations based on slender piles.... However, the behaviour of stiff monopiles during lateral loading is not fully understood. In this paper, a case study from Barrow Offshore Wind Farm is used in a 3D finite element model. The analysis forms a basis for extraction of p-y curves, which are used in an evaluation of the traditional curves...
Assessment of p-y Curves from Numerical Methods for a non-Slender Monopile in Cohesionless Soil
DEFF Research Database (Denmark)
Wolf, Torben K.; Rasmussen, Kristian L.; Hansen, Mette
In current design, the stiff, large-diameter monopile is a widely used solution as foundation of offshore wind turbines. Winds and waves subject the monopile to considerable lateral loads. Current design guidelines apply the p-y curve method with curve formulations based on slender piles.... However, the behaviour of stiff monopiles during lateral loading is not fully understood. In this paper, a case study from Barrow Offshore Wind Farm is used in a 3D finite element model. The analysis forms a basis for extraction of p-y curves, which are used in an evaluation of the traditional curves...
Thermoluminescence glow curve analysis and CGCD method for erbium doped CaZrO{sub 3} phosphor
Energy Technology Data Exchange (ETDEWEB)
Tiwari, Ratnesh, E-mail: 31rati@gmail.com [Department of Physics, Bhilai Institute of Technology, Raipur, 493661 (India); Chopra, Seema [Department Physics, G.D Goenka Public School (India)
2016-05-06
The manuscript reports the synthesis and thermoluminescence study of CaZrO{sub 3} phosphor doped at a fixed concentration of Er{sup 3+} (1 mol%). The phosphors were prepared by a modified solid state reaction method. The powder sample was characterized by thermoluminescence (TL) glow curve analysis; the TL glow curve confirmed the optimized concentration of 1 mol% for the UV-irradiated sample. The kinetic parameters were calculated by the computerized glow curve deconvolution (CGCD) technique. The trapping parameters give information on dosimetric loss in the prepared phosphor and on its usability in environmental and personal monitoring. CGCD is an advanced tool for the analysis of complicated TL glow curves.
Kochanowski, Maciej; Dabrowska, Joanna; Karamon, Jacek; Cencek, Tomasz; Osiński, Zbigniew
2013-07-01
The aim of this study was to determine the accuracy and precision of the McMaster method with Raynaud's modification in the detection of the eggs of the nematodes Toxocara canis (Werner, 1782) and Trichuris ovis (Abildgaard, 1795) in faeces of dogs. Four variants of the McMaster method were used for counting: in one grid, two grids, the whole McMaster chamber and flotation in the tube. One hundred sixty samples were prepared from dog faeces (20 repetitions for each egg quantity) containing 15, 25, 50, 100, 150, 200, 250 and 300 eggs of T. canis and T. ovis in 1 g of faeces. To compare the influence of the kind of faeces on the results, samples of dog faeces were enriched at the same levels with the eggs of another nematode, Ascaris suum Goeze, 1782. In addition, 160 samples of pig faeces were prepared and enriched only with A. suum eggs in the same way. The highest limit of detection (the lowest level of eggs that were detected in at least 50% of repetitions) in all McMaster chamber variants was obtained for T. canis eggs (25-250 eggs/g faeces). In the variant with flotation in the tube, the highest limit of detection was obtained for T. ovis eggs (100 eggs/g). The best results for the limit of detection and sensitivity, and the lowest coefficients of variation, were obtained with the use of the whole McMaster chamber variant. There was no significant impact of the properties of faeces on the obtained results. Multiplication factors for the whole chamber were calculated on the basis of the transformed equation of the regression line, illustrating the relationship between the number of detected eggs and that of the eggs added to the sample. Multiplication factors calculated for T. canis and T. ovis eggs were higher than those expected using the McMaster method with Raynaud's modification.
Beare, R. A.
2008-01-01
Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
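A brute-force period search in the same spirit, using phase folding with a string-length/phase-dispersion statistic rather than the paper's Excel workflow, can be sketched as follows (the period, sampling, and noiseless sinusoidal light curve are synthetic assumptions):

```python
import math, random

random.seed(2)
true_period = 0.3134                 # days (made-up value)
times = sorted(random.uniform(0.0, 20.0) for _ in range(150))  # irregular epochs
mags = [math.sin(2.0 * math.pi * t / true_period) for t in times]

def dispersion(period):
    """Sum of squared magnitude jumps between phase-adjacent points:
    small when folding at (near) the true period orders the curve smoothly."""
    pairs = sorted(((t % period) / period, m) for t, m in zip(times, mags))
    return sum((pairs[i + 1][1] - pairs[i][1]) ** 2
               for i in range(len(pairs) - 1))

trial_periods = [0.2 + 0.0001 * k for k in range(2001)]   # 0.2 to 0.4 days
best_p = min(trial_periods, key=dispersion)
```

Folding the discontinuous time series at each trial period and scoring the smoothness of the folded curve is exactly the operation a spreadsheet implementation automates with sorting and a summed-difference column.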
Directory of Open Access Journals (Sweden)
William Senkondo
2017-12-01
Full Text Available Information on aquifer processes and characteristics across scales has long been a cornerstone for understanding water resources. However, point measurements are often limited in extent and representativeness. Techniques that increase the support scale (footprint of measurements or leverage existing observations in novel ways can thus be useful. In this study, we used a recession-curve-displacement method to estimate regional-scale aquifer transmissivity (T from streamflow records across the Kilombero Valley of Tanzania. We compare these estimates to local-scale estimates made from pumping tests across the Kilombero Valley. The median T from the pumping tests was 0.18 m2/min. This was quite similar to the median T estimated from the recession-curve-displacement method applied during the wet season for the entire basin (0.14 m2/min and for one of the two sub-basins tested (0.16 m2/min. On the basis of our findings, there appears to be reasonable potential to inform water resource management and hydrologic model development through streamflow-derived transmissivity estimates, which is promising for data-limited environments facing rapid development, such as the Kilombero Valley.
High cycle fatigue test and regression methods of S-N curve
International Nuclear Information System (INIS)
Kim, D. W.; Park, J. Y.; Kim, W. G.; Yoon, J. H.
2011-11-01
The fatigue design curve in the ASME Boiler and Pressure Vessel Code, Section III, is based on the assumption that fatigue life is infinite after 10{sup 6} cycles. This is because, until the past few decades, standard fatigue testing equipment was limited in speed to less than 200 cycles per second. Traditional servo-hydraulic machines work at frequencies of 50 Hz. Servo-hydraulic machines working at 1000 Hz have been developed since 1997; these machines allow high frequency, with displacements of up to ±0.1 mm and dynamic loads of ±20 kN guaranteed. The frequency of resonant fatigue test machines is 50-250 Hz. Various forced-vibration-based systems work at 500 Hz or 1.8 kHz. Rotating bending machines allow testing frequencies of 0.1-200 Hz. The main advantage of ultrasonic fatigue testing at 20 kHz is that very high cycle counts can be reached in a short testing time. Although the S-N curve is determined by experiment, the fatigue strength corresponding to a given fatigue life should be determined by a statistical method that accounts for the scatter of fatigue properties. In this report, statistical methods for the evaluation of fatigue test data are investigated.
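One common statistical treatment of scattered S-N data is a Basquin-type least-squares fit in log-log space; a minimal sketch with invented data points (not the report's test data):

```python
import math

# (cycles N, stress amplitude S in MPa); invented scattered test points
data = [(1e4, 520.0), (5e4, 455.0), (1e5, 430.0),
        (5e5, 370.0), (1e6, 350.0), (5e6, 305.0)]

xs = [math.log10(n) for n, _ in data]
ys = [math.log10(s) for _, s in data]
m = len(data)
mx = sum(xs) / m
my = sum(ys) / m
# Basquin relation S = A * N**b becomes a straight line in log-log space
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))       # fatigue strength exponent
log_a = my - b * mx

def strength_at(cycles):
    """Least-squares estimate of fatigue strength (MPa) at a given life."""
    return 10.0 ** (log_a + b * math.log10(cycles))
```

A design curve is then usually obtained by shifting this mean-fit line downward by a margin derived from the residual scatter (e.g. a tolerance bound), which is the statistical step the report examines.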
International Nuclear Information System (INIS)
Milosevic, M.
1979-01-01
One-dimensional variational method for cylindrical configuration was applied for calculating group constants, together with the effects of elastic slowing down, anisotropic elastic scattering, inelastic scattering, and heterogeneous resonance absorption, with the aim of including the presence of a number of different isotopes and the effects of neutron leakage from the reactor core. The neutron flux shape in the P3 approximation and the adjoint function are proposed in order to enable calculation of smaller-size reactors and the inclusion of heterogeneity effects by cell calculations. Microscopic multigroup constants were prepared based on the UKNDL data library. An analytical-numerical approach was applied for solving the equations of the P3 approximation to obtain neutron flux moments and adjoint functions.
SiFTO: An Empirical Method for Fitting SN Ia Light Curves
Conley, A.; Sullivan, M.; Hsiao, E. Y.; Guy, J.; Astier, P.; Balam, D.; Balland, C.; Basa, S.; Carlberg, R. G.; Fouchez, D.; Hardin, D.; Howell, D. A.; Hook, I. M.; Pain, R.; Perrett, K.; Pritchet, C. J.; Regnault, N.
2008-07-01
We present SiFTO, a new empirical method for modeling Type Ia supernova (SN Ia) light curves by manipulating a spectral template. We make use of high-redshift SN data when training the model, allowing us to extend it bluer than rest-frame U. This increases the utility of our high-redshift SN observations by allowing us to use more of the available data. We find that when the shape of the light curve is described using a stretch prescription, applying the same stretch at all wavelengths is not an adequate description. SiFTO therefore uses a generalization of stretch which applies different stretch factors as a function of both the wavelength of the observed filter and the stretch in the rest-frame B band. We compare SiFTO to other published light-curve models by applying them to the same set of SN photometry, and demonstrate that SiFTO and SALT2 perform better than the alternatives when judged by the scatter around the best-fit luminosity distance relationship. We further demonstrate that when SiFTO and SALT2 are trained on the same data set the cosmological results agree. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.
Directory of Open Access Journals (Sweden)
Balgaisha Mukanova
2017-01-01
Full Text Available The problem of electrical sounding of a medium with ground surface relief is modelled using the integral equations method. This numerical method is based on the triangulation of the computational domain, which is adapted to the shape of the relief and the measuring line. The numerical algorithm is tested by comparing the results with the known solution for horizontally layered media with two layers. Calculations are also performed to verify the fulfilment of the “reciprocity principle” for the 4-electrode installations in our numerical model. Simulations are then performed for a two-layered medium with a surface relief. The quantitative influences of the relief, the resistivity ratios of the contacting media, and the depth of the second layer on the apparent resistivity curves are established.
Efficient method for finding square roots for elliptic curves over OEF
CSIR Research Space (South Africa)
Abu-Mahfouz, Adnan M
2009-01-01
Full Text Available Elliptic curve cryptosystems, like other public-key encryption schemes, require computing square roots modulo a prime number. The arithmetic operations in elliptic curve schemes over Optimal Extension Fields (OEF) can be efficiently computed...
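For primes p ≡ 3 (mod 4), the modular square root the abstract mentions reduces to a single exponentiation; this is a standard identity used in elliptic-curve point decompression, not the paper's OEF-specific algorithm:

```python
def sqrt_mod_p(a, p):
    """Square root of a modulo a prime p with p % 4 == 3, or None if a is a
    quadratic non-residue. Used e.g. to recover y from x on y^2 = x^3 + ax + b."""
    assert p % 4 == 3
    r = pow(a, (p + 1) // 4, p)       # single-exponentiation candidate root
    return r if (r * r) % p == a % p else None
```

For the general case p ≡ 1 (mod 4) the Tonelli-Shanks algorithm is used instead; the paper's contribution concerns the analogous computation in extension fields.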
2018-04-01
Crashes occur every day on Utah's highways. Curves can be particularly dangerous, as they require driver focus due to potentially unseen hazards. Often, crashes occur on curves due to poor curve geometry, a lack of warning signs, or poor surface con...
International Nuclear Information System (INIS)
Yoon, Deok Yong
1981-01-01
This book describes the system and functions of the 8051: what a microcontroller is, the 8051 instruction set and addressing modes, interrupt handling, and the I/O ports and timers of the 8051; external interfaces of the 8051 such as semiconductor memory, the 82C54 PIT timer, serial communication, and parallel transmission with the 82C55A PPI, as well as A/D and D/A converters; tools for 8051 software development; the 8051 master kit OK-8051; and assembly-language and C-language programming, including the instruction manual of the OK-8051 kit and addition and subtraction programs.
Directory of Open Access Journals (Sweden)
Robotin MC
2016-02-01
Full Text Available Monica C Robotin,1,2 Muthau Shaheem,3 Aishath S Ismail3 1Faculty of Medicine, School of Public Health, University of Sydney, 2Cancer Programs Division, Cancer Council New South Wales, Sydney, Australia; 3Faculty of Health Sciences, Maldives National University, Male, Maldives Background: Over the last four decades, the health status of Maldivian people improved considerably, as reflected in child and maternal mortality indicators and the eradication or control of many communicable diseases. However, changing disease patterns are now undermining these successes, so the local public health practitioners need new skills to perform effectively in this changing environment. To address these needs, in 2013 the Faculty of Health Sciences of the Maldives National University developed the country's first Master of Public Health (MPH) program. Methods: The process commenced with a wide scoping exercise and an analysis of the curricular structure of MPH programs of high-ranking universities. Thereafter, a stakeholder consultation using consensus methods reached agreement on overall course structure and the competencies required for local MPH graduates. Subsequently, a working group developed course descriptors and identified local public health research priorities, which could be addressed by MPH students. Results: Ten semistructured interviews explored specific training needs of prospective MPH students, key public health competencies required by local employers and preferred MPH training models. The recommendations informed a nominal group meeting, where participants agreed on MPH core competencies, overall curricular structure and core subjects. The 17 public health electives put forward by the group were prioritized using an online Delphi process. Participants ranked them by their propensity to address local public health needs and the locally available teaching expertise. The first student cohort commenced their MPH studies in January 2014. Conclusion
Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers
International Nuclear Information System (INIS)
Cardoso, Vanderlei
2002-01-01
The present work describes several methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting was performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak areas, different methodologies were developed for estimating the background area under the peak; this information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points; as a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, an essential procedure for giving a complete description of the partial uncertainties involved. (author)
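A common parametrization of HPGe full-energy-peak efficiency curves is a polynomial in log-log space; the following sketch fits a quadratic by normal equations (the efficiency points are invented for illustration and the simple fit omits the covariance treatment the paper emphasizes):

```python
import math

# (energy keV, full-energy-peak efficiency); values invented for illustration
points = [(122.0, 0.012), (245.0, 0.0075), (344.0, 0.0056),
          (662.0, 0.0031), (1112.0, 0.0020), (1408.0, 0.0016)]

xs = [math.log(e) for e, _ in points]
ys = [math.log(eff) for _, eff in points]

# Normal equations for ln(eff) = c0 + c1*lnE + c2*(lnE)^2
A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
rhs = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]

# Gauss-Jordan elimination on the 3x3 system
for i in range(3):
    piv = A[i][i]
    A[i] = [v / piv for v in A[i]]
    rhs[i] /= piv
    for k in range(3):
        if k != i:
            f = A[k][i]
            A[k] = [vk - f * vi for vk, vi in zip(A[k], A[i])]
            rhs[k] -= f * rhs[i]
c0, c1, c2 = rhs

def efficiency(energy_kev):
    """Interpolated full-energy-peak efficiency from the log-log quadratic."""
    x = math.log(energy_kev)
    return math.exp(c0 + c1 * x + c2 * x * x)
```

In practice a weighted fit with the full covariance matrix of the calibration points is used, so that the uncertainty of each interpolated efficiency can be propagated, as the paper stresses.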
A Method for Formulizing Disaster Evacuation Demand Curves Based on SI Model
Directory of Open Access Journals (Sweden)
Yulei Song
2016-10-01
Full Text Available The prediction of evacuation demand curves is a crucial step in disaster evacuation planning, and it directly affects the performance of the evacuation. In this paper, we discuss the factors influencing individual evacuation decision making (whether and when to leave) and summarize them into four kinds: individual characteristics, social influence, geographic location, and warning degree. In view of the social contagion of decision making, a method based on the Susceptible-Infective (SI) model is proposed to formulate the disaster evacuation demand curves, addressing both social influence and the other factors' effects. The disaster event of the "Tianjin Explosions" is used as a case study to illustrate the modeling results influenced by the four factors and to perform sensitivity analyses of the key parameters of the model. Some interesting phenomena are found and discussed, which is meaningful for authorities making specific evacuation plans. For example, due to the lower social influence in isolated communities, extra actions might be taken to accelerate the evacuation process in those communities.
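The SI contagion dynamic underlying such demand curves can be sketched with a simple Euler integration; the contact rate beta, initial fraction, and time grid are illustrative assumptions, not the paper's calibrated values:

```python
def si_demand_curve(beta=0.9, i0=0.01, hours=24.0, dt=0.1):
    """Cumulative evacuation-demand fraction from dI/dt = beta * I * (1 - I),
    i.e. the SI model with evacuated fraction I and susceptible fraction 1 - I."""
    i = i0
    curve = [i]
    for _ in range(int(hours / dt)):
        i = min(1.0, i + dt * beta * i * (1.0 - i))   # forward-Euler step
        curve.append(i)
    return curve

curve = si_demand_curve()
```

The result is the familiar S-shaped demand curve: slow initial uptake, rapid contagion-driven growth, then saturation as most of the population has left. Lowering beta (weaker social influence, as in isolated communities) stretches the curve out in time.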
Solving eigenvalue problems on curved surfaces using the Closest Point Method
Macdonald, Colin B.
2011-06-01
Eigenvalue problems are fundamental to mathematics and science. We present a simple algorithm for determining eigenvalues and eigenfunctions of the Laplace-Beltrami operator on rather general curved surfaces. Our algorithm, which is based on the Closest Point Method, relies on an embedding of the surface in a higher-dimensional space, where standard Cartesian finite difference and interpolation schemes can be easily applied. We show that there is a one-to-one correspondence between a problem defined in the embedding space and the original surface problem. For open surfaces, we present a simple way to impose Dirichlet and Neumann boundary conditions while maintaining second-order accuracy. Convergence studies and a series of examples demonstrate the effectiveness and generality of our approach. © 2011 Elsevier Inc.
International Nuclear Information System (INIS)
Tran Dai Nghiep; Vu Hoang Lam; Vo Tuong Hanh; Do Nguyet Minh; Nguyen Ngoc Son
1995-01-01
The present work is aimed at formulating an experimental approach for searching for the proposed non-exponential deviations from the decay curve, and at describing an attempt to test them in the case of {sup 52}V. Some theoretical descriptions of decay processes are formulated in clarified form. A continuous kinetic function (CKF) method is described for the analysis of experimental data, and the CKF for the purely exponential case is taken as a standard for comparison between theoretical and experimental data. The degree of agreement is defined by a factor of goodness. Typical oscillatory deviations of the {sup 52}V decay were observed over a wide range of time. The proposed deviation, related to interaction between decay products and the environment, is investigated, and a complex type of decay is discussed. (authors). 10 refs., 4 figs., 2 tabs
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
International Nuclear Information System (INIS)
Winkler, A W; Zagar, B G
2013-01-01
An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives. (paper)
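As a simplified stand-in for fitting parametrized curves to a detected outline, the following sketch shows an algebraic least-squares circle fit (the Kåsa method) on synthetic noisy points; the paper's adaptive least-squares fit of the full coil geometry is considerably more elaborate:

```python
import math, random

random.seed(3)
true_cx, true_cy, true_r = 4.0, -2.0, 10.0
pts = [(true_cx + true_r * math.cos(t) + random.gauss(0.0, 0.05),
        true_cy + true_r * math.sin(t) + random.gauss(0.0, 0.05))
       for t in [0.01 * k for k in range(300)]]       # a partial arc

# Kasa fit: work in coordinates centred on the data mean
n = len(pts)
mx = sum(x for x, _ in pts) / n
my = sum(y for _, y in pts) / n
u = [x - mx for x, _ in pts]
v = [y - my for _, y in pts]
suu = sum(a * a for a in u)
svv = sum(a * a for a in v)
suv = sum(a * b for a, b in zip(u, v))
suuu = sum(a ** 3 for a in u)
svvv = sum(a ** 3 for a in v)
suvv = sum(a * b * b for a, b in zip(u, v))
svuu = sum(b * a * a for a, b in zip(u, v))

# Solve the 2x2 linear system for the centre offset (uc, vc)
det = suu * svv - suv * suv
uc = (svv * (suuu + suvv) - suv * (svvv + svuu)) / (2.0 * det)
vc = (suu * (svvv + svuu) - suv * (suuu + suvv)) / (2.0 * det)
fit_cx, fit_cy = mx + uc, my + vc
fit_r = math.sqrt(uc * uc + vc * vc + (suu + svv) / n)
```

Because the residual is linear in the unknowns, this algebraic fit needs no iteration; it is often used to initialize an iterative geometric fit of the kind the paper applies to the projected coil outline.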
Methods for extracting dose response curves from radiation therapy data. I. A unified approach
International Nuclear Information System (INIS)
Herring, D.F.
1980-01-01
This paper discusses an approach to fitting models to radiation therapy data in order to extract dose response curves for tumor local control and normal tissue damage. The approach is based on the method of maximum likelihood and is illustrated by several examples. A general linear logistic equation which leads to the Ellis nominal standard dose (NSD) equation is discussed; the fit of this equation to experimental data for mouse foot skin reactions produced by fractionated irradiation is described. A logistic equation based on the concept that normal tissue reactions are associated with the surviving fraction of cells is also discussed, and the fit of this equation to the same set of mouse foot skin reaction data is also described. These two examples illustrate the importance of choosing a model based on underlying mechanisms when one seeks to attach biological significance to a model's parameters
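The maximum-likelihood fitting of a two-parameter logistic dose-response model can be sketched with Newton-Raphson (equivalently, iteratively reweighted least squares) iterations; the dose-response counts below are invented and the model is the generic logistic form, not the paper's specific NSD-based equation:

```python
import math

# (dose, n treated, n responding); counts invented for illustration
data = [(40.0, 20, 2), (50.0, 20, 6), (60.0, 20, 11),
        (70.0, 20, 16), (80.0, 20, 19)]

def p_resp(dose, a, b):
    """Logistic response probability with log-dose as the covariate."""
    return 1.0 / (1.0 + math.exp(-(a + b * math.log(dose))))

# Newton-Raphson maximization of the binomial log-likelihood in (a, b)
a, b = 0.0, 0.0
for _ in range(50):
    ga = gb = iaa = iab = ibb = 0.0
    for d, n, r in data:
        x = math.log(d)
        p = p_resp(d, a, b)
        w = n * p * (1.0 - p)         # binomial information weight
        ga += r - n * p               # score with respect to a
        gb += (r - n * p) * x         # score with respect to b
        iaa += w
        iab += w * x
        ibb += w * x * x
    det = iaa * ibb - iab * iab
    a += (ibb * ga - iab * gb) / det  # solve I * delta = score
    b += (iaa * gb - iab * ga) / det

d50 = math.exp(-a / b)                # dose giving a 50% response rate
```

Maximizing the binomial likelihood in this way, rather than least-squares fitting the observed fractions, is what allows confidence limits on parameters such as the 50% response dose, which is the point of the unified approach the paper describes.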
A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object
Winkler, A. W.; Zagar, B. G.
2013-08-01
An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.
International Nuclear Information System (INIS)
Svec, A.; Schrader, H.
2002-01-01
An ionization chamber without and with an iron liner (absorber) was calibrated with a set of radionuclide activity standards of the Physikalisch-Technische Bundesanstalt (PTB). The ionization chamber is used as a secondary standard measuring system for activity at the Slovak Institute of Metrology (SMU). Energy-dependent photon-efficiency curves were established for the ionization chamber in a defined measurement geometry, without and with the liner, and radionuclide efficiencies were calculated. For fitting, programmed calculation with an analytical efficiency function and the nonlinear regression algorithm of Microsoft (MS) Excel was used. Efficiencies from the bremsstrahlung of pure beta-particle emitters were calibrated to a 10% accuracy level. Such efficiency components are added to obtain the total radionuclide efficiency of photon emitters after beta decay. The method yields differences between experimental and calculated radionuclide efficiencies, for most photon-emitting radionuclides, on the order of a few percent.
International Nuclear Information System (INIS)
Park, Ho Jin; Shim, Hyung Jin; Joo, Han Gyu; Kim, Chang Hyo
2011-01-01
The purpose of this paper is to examine the qualification of few-group constants estimated by the Seoul National University Monte Carlo particle transport analysis code McCARD in terms of core neutronics analyses, and thus to validate the McCARD method as a few-group constant generator. The two-step core neutronics analyses are conducted for a mini and a realistic PWR by the McCARD/MASTER code system, in which McCARD is used as an MC group constant generation code and MASTER as a diffusion core analysis code. The two-step calculations of the effective multiplication factors and assembly power distributions of the two PWR cores by McCARD/MASTER are compared with reference McCARD calculations. By showing excellent agreement between McCARD/MASTER and the reference MC core neutronics analyses for the two PWRs, it is concluded that the MC method implemented in McCARD can generate few-group constants that are well qualified for high-accuracy two-step core neutronics calculations. (author)
Shaw, Stephen B.; Walter, M. Todd
2009-03-01
The Soil Conservation Service curve number (SCS-CN) method is widely used to predict storm runoff for hydraulic design purposes, such as sizing culverts and detention basins. As traditionally used, the probability of calculated runoff is equated to the probability of the causative rainfall event, an assumption that fails to account for the influence of variations in soil moisture on runoff generation. We propose a modification to the SCS-CN method that explicitly incorporates rainfall return periods and the frequency of different soil moisture states to quantify storm runoff risks. Soil moisture status is assumed to be correlated to stream base flow. Fundamentally, this approach treats runoff as the outcome of a bivariate process instead of dictating a 1:1 relationship between causative rainfall and resulting runoff volumes. Using data from the Fall Creek watershed in western New York and the headwaters of the French Broad River in the mountains of North Carolina, we show that our modified SCS-CN method improves frequency discharge predictions in medium-sized watersheds in the eastern United States in comparison to the traditional application of the method.
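The bivariate idea, treating runoff as the joint outcome of rainfall and soil-moisture (retention) variability rather than a 1:1 function of rainfall, can be sketched with a small Monte Carlo experiment; all distributions and parameters are illustrative assumptions, not the paper's calibration to base flow:

```python
import math, random

random.seed(7)

def runoff(p, s, ia_ratio=0.2):
    """SCS-CN direct runoff (mm) for storm depth p and retention s."""
    ia = ia_ratio * s
    return 0.0 if p <= ia else (p - ia) ** 2 / (p - ia + s)

# Joint sampling: storm depth P (exponential) and retention S (lognormal,
# standing in for soil-moisture variability); 20000 synthetic events
sims = sorted(
    runoff(random.expovariate(1.0 / 40.0),                 # P, mean 40 mm
           math.exp(random.gauss(math.log(60.0), 0.4)))    # S, median 60 mm
    for _ in range(20000)
)
q90 = sims[int(0.9 * len(sims))]   # 90th-percentile runoff of the joint process
```

Reading quantiles off the simulated runoff distribution, instead of feeding the design-storm rainfall through a single fixed curve number, is the essence of the frequency-based modification the paper proposes.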
Scare, J A; Slusarewicz, P; Noel, M L; Wielgus, K M; Nielsen, M K
2017-11-30
Fecal egg counts are emphasized for guiding equine helminth parasite control regimens due to the rise of anthelmintic resistance. This, however, poses further challenges, since egg counting results are prone to issues such as operator dependency, method variability, equipment requirements, and time commitment. The use of image analysis software for performing fecal egg counts is promoted in recent studies to reduce the operator dependency associated with manual counts. In an attempt to remove the operator dependency associated with current methods, we developed a diagnostic system that utilizes a smartphone and employs image analysis to generate automated egg counts. The aims of this study were (1) to determine the precision of the first smartphone prototype, the modified McMaster, and ImageJ; and (2) to determine the precision, accuracy, sensitivity, and specificity of the second smartphone prototype, the modified McMaster, and Mini-FLOTAC techniques. Repeated counts on fecal samples naturally infected with equine strongyle eggs were performed using each technique to evaluate precision. Triplicate counts on 36 egg-count-negative samples and 36 samples spiked with strongyle eggs at 5, 50, 500, and 1000 eggs per gram were performed using the second smartphone prototype, Mini-FLOTAC, and McMaster to determine technique accuracy. Precision across the techniques was evaluated using the coefficient of variation. Regarding the first aim of the study, the McMaster technique performed with significantly less variance than the first smartphone prototype and ImageJ, while the smartphone and ImageJ performed with equal variance. Regarding the second aim of the study, the second smartphone prototype had significantly better precision than the McMaster; values of 64.51%, 21.67%, and 32.53% were reported. The Mini-FLOTAC was significantly more accurate than the McMaster and the smartphone system; the smartphone and McMaster counts did not have statistically different accuracies.
Wang, Fei; Gong, Haoran; Chen, Xi; Chen, C. Q.
2016-09-01
Origami structures enrich the field of mechanical metamaterials with the ability to convert morphologically and systematically between two-dimensional (2D) thin sheets and three-dimensional (3D) spatial structures. In this study, an in-plane design method is proposed to approximate curved surfaces of interest with generalized Miura-ori units. Using this method, two combination types of crease lines are unified in one reprogrammable procedure, generating multiple types of cylindrical structures. Structural completeness conditions of the finite-thickness counterparts to the two types are also proposed. As an example of the design method, the kinematics and elastic properties of an origami-based circular cylindrical shell are analysed. The concept of Poisson’s ratio is extended to the cylindrical structures, demonstrating their auxetic property. An analytical model of rigid plates linked by elastic hinges, consistent with numerical simulations, is employed to describe the mechanical response of the structures. Under particular load patterns, the circular shells display novel mechanical behaviour such as snap-through and limiting folding positions. By analysing the geometry and mechanics of the origami structures, we extend the design space of mechanical metamaterials and provide a basis for their practical applications in science and engineering.
Weathering Patterns of Ignitable Liquids with the Advanced Distillation Curve Method.
Bruno, Thomas J; Allen, Samuel
2013-01-01
One can take advantage of the striking similarity of ignitable liquid vaporization (or weathering) patterns and the separation observed during distillation to predict the composition of residual compounds in fire debris. This is done with the advanced distillation curve (ADC) metrology, which separates a complex fluid by distillation into fractions that are sampled, and for which thermodynamically consistent temperatures are measured at atmospheric pressure. The collected sample fractions can be analyzed by any method that is appropriate. Analytical methods we have applied include gas chromatography (with flame ionization, mass spectrometric and sulfur chemiluminescence detection), thin layer chromatography, FTIR, Karl Fischer coulombic titrimetry, refractometry, corrosivity analysis, neutron activation analysis and cold neutron prompt gamma activation analysis. We have applied this method to product streams such as finished fuels (gasoline, diesel fuels, aviation fuels, rocket propellants), crude oils (including a crude oil made from swine manure) and waste oil streams (used automotive and transformer oils). In this paper, we present results on a variety of ignitable liquids that are not commodity fuels, chosen from the Ignitable Liquids Reference Collection (ILRC). These measurements are assembled into a preliminary database. From this selection, we discuss the significance and forensic application of the temperature data grid and the composition explicit data channel of the ADC.
Effects of different premature chromosome condensation method on dose-curve of 60Co γ-ray
International Nuclear Information System (INIS)
Guo Yicao; Yang Haoxian; Yang Yuhua; Li Xi'na; Huang Weixu; Zheng Qiaoling
2012-01-01
Objective: To study the effect of the traditional and improved premature chromosome condensation (PCC) methods on the dose-effect curve of 60Co γ rays, in order to choose a rapid and accurate biological dose estimation method for accident emergencies. Methods: Cubital venous blood was collected from three healthy males (23 to 28 years old) and irradiated with 0, 1.0, 5.0, 10.0, 15.0 and 20.0 Gy of 60Co γ rays (absorbed dose rate: 0.635 Gy/min). The dose-effect relationship was examined for the two incubation times (50 hours and 60 hours) of the traditional and improved methods, and the resulting dose-effect curves were used to verify an exposure of 10.0 Gy (absorbed dose rate: 0.670 Gy/min). Results: (1) With the traditional method and 50-hour culture, the difference in PCC cell counts between 15.0 Gy and 20.0 Gy was not statistically significant, but it was significant for the traditional method with 60-hour culture and for the improved method (50-hour and 60-hour culture). The latter three culture methods were used to construct dose curves. (2) For these three culture methods, the correlation coefficients between PCC rings and exposure dose were very close (all above 0.996, P < 0.05), and the regression lines almost overlap. (3) When the three dose-effect curves were used to estimate the verification irradiation (10.0 Gy), the error was less than or equal to 8%, within the allowable range for biological experiments (15%). Conclusion: The dose-effect curves of the three culture methods can be applied to biological dose estimation for large doses of ionizing radiation. In particular, the improved method with 50-hour culture is faster and should be the first choice in accident emergencies. (authors)
International Nuclear Information System (INIS)
Awad, M.M.
2014-01-01
The S-shaped curve was observed by Yilbas and Bin Mansoor (2013). In this study, an alternative method is presented to predict the S-shaped curve for the logistic characteristics of phonon transport in silicon thin films, using the analytical prediction method introduced by Bejan and Lorente in 2011 and 2012. Their method is based on a two-mechanism flow of fast “invasion” by convection and slow “consolidation” by diffusion.
Hooshyar, M.; Wang, D.
2016-12-01
The empirical proportionality relationship, which states that the ratios of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in a shallow water table environment under the following boundary conditions: (1) the soil is saturated at the land surface; and (2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and a hydrostatic soil moisture profile between the no-flux boundary and the water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table for coarse-textured soils.
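The runoff partitioning and the proportionality identity discussed above can be illustrated with a short sketch. This is a generic implementation of the standard SCS-CN equations (with the conventional assumption Ia = 0.2S), not code from the paper; the example numbers are arbitrary.

```python
def scs_runoff(P, CN, lam=0.2):
    """Direct runoff Q (inches) from storm rainfall P (inches).

    S = 1000/CN - 10 is the potential maximum retention and
    Ia = lam * S the initial abstraction (lam = 0.2 is the
    conventional assumption of the SCS-CN method)."""
    S = 1000.0 / CN - 10.0
    Ia = lam * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# Proportionality hypothesis: cumulative infiltration F = (P - Ia) - Q
# satisfies F / S = Q / (P - Ia) exactly under this formula.
P, CN = 4.0, 80.0
S = 1000.0 / CN - 10.0
Ia = 0.2 * S
Q = scs_runoff(P, CN)
F = (P - Ia) - Q
assert abs(F / S - Q / (P - Ia)) < 1e-12
```

The identity follows algebraically from the runoff equation: Q(P - Ia + S) = (P - Ia)^2 rearranges to QS = (P - Ia)(P - Ia - Q) = (P - Ia)F.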
A Method of Timbre-Shape Synthesis Based On Summation of Spherical Curves
DEFF Research Database (Denmark)
Putnam, Lance Jonathan
2014-01-01
It is well-known that there is a rich correspondence between sound and visual curves, perhaps most widely explored through direct input of sound into an oscilloscope. However, there have been relatively few proposals on how to translate sound into three-dimensional curves. We present a novel meth...
Applicability of the θ projection method to creep curves of Ni-22Cr-18Fe-9Mo alloy
International Nuclear Information System (INIS)
Kurata, Yuji; Utsumi, Hirokazu
1998-01-01
Applicability of the θ projection method has been examined for constant-load creep test results at 800 and 1000 °C on Ni-22Cr-18Fe-9Mo alloy in the solution-treated and aged conditions. The results obtained are as follows: (1) Normal type creep curves obtained at 1000 °C for aged Ni-22Cr-18Fe-9Mo alloy are fitted using the θ projection method with four θ parameters. Stress dependence of the θ parameters can be expressed in terms of simple equations. (2) The θ projection method with four θ parameters cannot be applied to the remaining creep curves, where most of the life is occupied by the tertiary creep stage. Therefore, the θ projection method consisting of only the tertiary creep component with two θ parameters was applied; the creep curves can be fitted using this method. (3) If the θ projection method with four or two θ parameters is applied to creep curves in accordance with the creep curve shapes, creep rupture time can be predicted through a formulation of the stress and/or temperature dependence of the θ parameters. (author)
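For reference, the θ projection expresses creep strain as a saturating primary term plus an accelerating tertiary term, and the two-parameter variant keeps only the latter. The sketch below uses arbitrary illustrative parameter values, not fitted data from the study.

```python
import math

def theta_creep(t, th1, th2, th3, th4):
    """Four-parameter theta-projection creep strain:
    primary (saturating) + tertiary (accelerating) components."""
    return th1 * (1.0 - math.exp(-th2 * t)) + th3 * (math.exp(th4 * t) - 1.0)

def tertiary_creep(t, th3, th4):
    """Two-parameter variant used when tertiary creep dominates."""
    return th3 * (math.exp(th4 * t) - 1.0)

def creep_rate(t, th1, th2, th3, th4):
    """Time derivative of the four-parameter curve."""
    return th1 * th2 * math.exp(-th2 * t) + th3 * th4 * math.exp(th4 * t)

# A "normal type" curve: the rate decreases (primary), passes through a
# minimum (secondary), then increases again (tertiary).
rates = [creep_rate(t, 0.02, 5.0, 0.005, 0.8) for t in (0.0, 2.0, 8.0)]
assert rates[1] < rates[0] and rates[1] < rates[2]
```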
Directory of Open Access Journals (Sweden)
E. Sauquet
2011-08-01
The study aims at estimating flow duration curves (FDC) at ungauged sites in France and quantifying the associated uncertainties using a large dataset of 1080 FDCs. The interpolation procedure focuses here on 15 percentiles standardised by the mean annual flow, which is assumed to be known at each site. In particular, this paper discusses the impact of different catchment grouping procedures on the estimation of percentiles by regional regression models.
In a first step, five parsimonious FDC parametric models are tested to approximate FDCs at gauged sites. The results show that the model based on the expansion of Empirical Orthogonal Functions (EOF) outperforms the other tested models. In the EOF model, each FDC is interpreted as a linear combination of regional amplitude functions with spatially variable weighting factors corresponding to the parameters of the model. In this approach, only one amplitude function is required to obtain a satisfactory fit with most of the observed curves. Thus, the considered model requires only two parameters to be applicable at ungauged locations.
Secondly, homogeneous regions are derived according to hydrological response, on the one hand, and geological, climatic and topographic characteristics on the other hand. Hydrological similarity is assessed through two simple indicators: the concavity index (IC), representing the shape of the dimensionless FDC, and the seasonality ratio (SR), which is the ratio of summer and winter median flows. These variables are used as homogeneity criteria in three different methods for grouping catchments: (i) according to an a priori classification of French Hydro-EcoRegions (HERs), (ii) by applying regression tree clustering and (iii) by using neighbourhoods obtained by canonical correlation analysis.
Finally, considering all the data, and subsequently for each group obtained through the tested grouping techniques, we derive regression models between
Learning profiles of Master students
DEFF Research Database (Denmark)
Sprogøe, Jonas; Hemmingsen, Lis
2005-01-01
Master education as a part of lifelong learning/education has over the last years increased in Denmark. Danish universities now offer more than 110 different programmes. One of the characteristics of master education is that the students get credit for their prior learning and practical work experiences, and during the study theory and practice are combined. At the Master of Adult Learning and Human Resource Development, one of DPU's master programmes, the students have very diverse backgrounds and many different experiences and practices. Since the first programme was introduced at DPU in 2001, several evaluations and research projects have been carried out on topics relating to form, content, and didactics, but one important focus is missing: research on the psychological profile and learning style of the master student. Knowledge is lacking on how teaching methods …
Recrystallization curve study of Zircaloy-4 with the XRD line width method
International Nuclear Information System (INIS)
Juarez, G; Buioli, C; Samper, R; Vizcaino, P
2012-01-01
X-ray diffraction peak broadening analysis is a method that makes it possible to characterize plastic deformation in metals. The technique complements transmission electron microscopy (TEM) in determining dislocation densities, so together the two techniques cover a wide range in the analysis of metal deformation. The study of zirconium alloys is of continuing interest in the nuclear industry, since these materials present the best combination of good mechanical properties, corrosion behaviour and low neutron cross section. Two factors must be taken into account in applying the method developed for this purpose: the characteristic anisotropy of hexagonal metals and the strong texture that these alloys acquire during the manufacturing process. In order to assess the recrystallization curve of Zircaloy-4, a powder of this alloy was produced by filing, and fractions of the powder were subjected to thermal treatments at different temperatures for the same time. Since the powder has a random crystallographic orientation, the texture effect practically disappears; this is why the Williamson-Hall method can be applied directly, producing good fittings and reliable values of the diffraction domain size and the accumulated deformation. The temperatures selected for the thermal treatments were 1000, 700, 600, 500, 420, 300 and 200 °C, for 2 h each. As a result of these annealings, powders in different recovery stages were obtained (completely recrystallized, partially recrystallized and non-recrystallized structures with different levels of stress relief). The obtained values were also compared with those of the non-annealed powder. The microstructural evolution through the annealings was also followed by optical microscopy. (author)
Burger, Jessica L.
2015-07-16
© This article not subject to U.S. Copyright. Published 2015 by the American Chemical Society. Incremental but fundamental changes are currently being made to fuel composition and combustion strategies to diversify energy feedstocks, decrease pollution, and increase engine efficiency. The increase in parameter space (by having many variables in play simultaneously) makes it difficult at best to propose strategic changes to engine and fuel design by use of conventional build-and-test methodology. To make changes in the most time- and cost-effective manner, it is imperative that new computational tools and surrogate fuels are developed. Currently, sets of fuels are being characterized by industry groups, such as the Coordinating Research Council (CRC) and other entities, so that researchers in different laboratories have access to fuels with consistent properties. In this work, six gasolines (FACE A, C, F, G, I, and J) are characterized by the advanced distillation curve (ADC) method to determine the composition and enthalpy of combustion in various distillate volume fractions. Tracking the composition and enthalpy of distillate fractions provides valuable information for determining structure property relationships, and moreover, it provides the basis for the development of equations of state that can describe the thermodynamic properties of these complex mixtures and lead to development of surrogate fuels composed of major hydrocarbon classes found in target fuels.
Energy Technology Data Exchange (ETDEWEB)
Keilacker, H; Becker, G; Ziegler, M; Gottschling, H D [Zentralinstitut fuer Diabetes, Karlsburg (German Democratic Republic)
1980-10-01
In order to handle all types of radioimmunoassay (RIA) calibration curves obtained in the authors' laboratory in the same way, they tried to find a non-linear expression for their regression which allows calibration curves with different degrees of curvature to be fitted. Considering the two boundary cases of the incubation protocol they derived a hyperbolic inverse regression function: x = a₁y + a₀ + a₋₁y⁻¹, where x is the total concentration of antigen, the aᵢ are constants, and y is the specifically bound radioactivity. An RIA evaluation procedure based on this function is described, providing a fitted inverse RIA calibration curve and some statistical quality parameters. The latter are of an order which is normal for RIA systems. There is excellent agreement between fitted and experimentally obtained calibration curves having different degrees of curvature.
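Because the hyperbolic inverse regression x = a1*y + a0 + a-1/y is linear in its coefficients, it can be fitted by ordinary least squares on the basis {y, 1, 1/y}. The following is our own minimal sketch of such a fit (solving the 3x3 normal equations directly), not the authors' evaluation procedure; the standards are synthetic.

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def fit_hyperbolic_inverse(ys, xs):
    """Least-squares coefficients (a1, a0, am1) of x = a1*y + a0 + am1/y."""
    rows = [[y, 1.0, 1.0 / y] for y in ys]
    ATA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    ATx = [sum(r[i] * x for r, x in zip(rows, xs)) for i in range(3)]
    return solve3(ATA, ATx)

ys = [0.5, 1.0, 2.0, 4.0, 8.0]                # bound radioactivity (synthetic)
xs = [2.0 * y + 1.0 + 3.0 / y for y in ys]    # exact synthetic standards
a1, a0, am1 = fit_hyperbolic_inverse(ys, xs)
assert max(abs(a1 - 2.0), abs(a0 - 1.0), abs(am1 - 3.0)) < 1e-8
```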
Assessment of p-y curves from numerical methods for a non-slender monopile in cohesionless soil
Energy Technology Data Exchange (ETDEWEB)
Ibsen, L. B.; Ravn Roesen, H. [Aalborg Univ. Dept. of Civil Engineering, Aalborg (Denmark); Hansen, Mette; Kirk Wolf, T. [COWI, Kgs. Lyngby (Denmark); Lange Rasmussen, K. [Niras, Aalborg (Denmark)
2013-06-15
In current design practice the monopile is a widely used foundation for offshore wind turbines. Winds and waves subject the monopile to considerable lateral loads. The behaviour of monopiles under lateral loading is not fully understood, and current design guidance applies the p-y curve method in a Winkler model approach. The p-y curve method was originally developed for the jacket piles used in the oil and gas industry, which are much more slender than monopile foundations. In recent years 3D finite element analysis (FEA) has become a tool for investigating complex geotechnical situations such as the laterally loaded monopile. In this paper a 3D FEA is conducted as the basis for extracting p-y curves and evaluating the traditional curves. Two different methods are applied to create the data points for the p-y curves. First, a force producing a response similar to that seen in the ULS situation is applied stepwise, creating the most realistic soil response. This method, however, does not generate sufficient data points around the rotation point of the pile. Therefore, a forced horizontal displacement of the entire pile is also applied, whereby displacements are created over the entire length of the pile. The response is extracted from the interface and from the nearby soil elements respectively, to investigate the influence this has on the computed curves. p-y curves are obtained near the rotation point by evaluating the soil response during a prescribed displacement, but this response is not in clear agreement with the response during an applied load. Two different material models are applied, and it is found that the material model has a significant influence on the stiffness of the evaluated p-y curves. The p-y curves evaluated by means of FEA are compared to the conventional p-y curve formulation, which provides a much stiffer response. It is found that the best response is computed by implementing the Hardening Soil model and
S-curve networks and an approximate method for estimating degree distributions of complex networks
Guo, Jin-Li
2010-01-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Using statistics of China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model based on an S-curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China. The results have reference value for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based o...
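A minimal version of such an S-curve forecast: if the saturation level K is assumed known, the logistic x(t) = K/(1 + a*exp(-b*t)) linearises as ln(K/x - 1) = ln(a) - b*t and can be fitted by simple linear regression. This is a generic sketch of the technique with synthetic data, not the paper's calibration to IPv4 statistics.

```python
import math

def fit_logistic(ts, xs, K):
    """Fit x(t) = K / (1 + a*exp(-b*t)) with known capacity K by
    linear regression on z = ln(K/x - 1) = ln(a) - b*t."""
    zs = [math.log(K / x - 1.0) for x in xs]
    n = float(len(ts))
    tbar, zbar = sum(ts) / n, sum(zs) / n
    slope = (sum((t - tbar) * (z - zbar) for t, z in zip(ts, zs))
             / sum((t - tbar) ** 2 for t in ts))
    return math.exp(zbar - slope * tbar), -slope   # (a, b)

# Recover known parameters from exact synthetic data.
K, a_true, b_true = 100.0, 50.0, 0.4
ts = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
xs = [K / (1.0 + a_true * math.exp(-b_true * t)) for t in ts]
a, b = fit_logistic(ts, xs, K)
assert abs(a - a_true) < 1e-6 and abs(b - b_true) < 1e-9
```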
International Nuclear Information System (INIS)
Lott, B.; Escande, L.; Larsson, S.; Ballet, J.
2012-01-01
Here, we present a method enabling the creation of constant-uncertainty/constant-significance light curves with the data of the Fermi Large Area Telescope (LAT). This adaptive-binning method enables more information to be encapsulated within the light curve than with the fixed-binning method. Although primarily developed for blazar studies, it can be applied to any source. The method allows the starting and ending times of each interval to be calculated in a simple and quick way during a first step; the reported mean flux and spectral index (assuming the spectrum is a power-law distribution) in each interval are calculated via the standard LAT analysis during a second step. The absence of major caveats associated with this method has been established with Monte Carlo simulations. We present the performance of this method in determining duty cycles as well as power-density spectra relative to the traditional fixed-binning method.
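The flavour of adaptive binning can be conveyed with a toy constant-counts scheme: close each time bin as soon as its Poisson relative uncertainty 1/sqrt(N) reaches a target, so bright intervals get fine bins and faint ones coarse bins. This is a simplified illustration of the idea, not the actual likelihood-based LAT procedure.

```python
def adaptive_bins(event_times, rel_unc=0.5):
    """Split sorted photon arrival times into bins whose Poisson
    relative flux uncertainty 1/sqrt(N) is <= rel_unc."""
    n_min = 1
    while n_min ** -0.5 > rel_unc:      # smallest N with 1/sqrt(N) <= rel_unc
        n_min += 1
    bins, cur = [], []
    for t in event_times:
        cur.append(t)
        if len(cur) >= n_min:
            bins.append((cur[0], cur[-1], len(cur)))
            cur = []
    return bins  # any trailing partial bin is discarded

times = [0.1 * i for i in range(10)]
b = adaptive_bins(times, rel_unc=0.5)   # requires N >= 4 per bin
assert [n for (_, _, n) in b] == [4, 4]
```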
Inferring Lévy walks from curved trajectories: A rescaling method
Tromer, R. M.; Barbosa, M. B.; Bartumeus, F.; Catalan, J.; da Luz, M. G. E.; Raposo, E. P.; Viswanathan, G. M.
2015-08-01
An important problem in the study of anomalous diffusion and transport concerns the proper analysis of trajectory data. The analysis and inference of Lévy walk patterns from empirical or simulated trajectories of particles in two- and three-dimensional spaces (2D and 3D) is much more difficult than in 1D because path curvature is nonexistent in 1D but quite common in higher dimensions. Recently, a new method for detecting Lévy walks, which considers 1D projections of 2D or 3D trajectory data, has been proposed by Humphries et al. The key new idea is to exploit the fact that the 1D projection of a high-dimensional Lévy walk is itself a Lévy walk. Here, we ask whether or not this projection method is powerful enough to cleanly distinguish a 2D Lévy walk with added curvature from a simple Markovian correlated random walk. We study the especially challenging case in which both 2D walks have exactly identical probability density functions (pdf) of step sizes as well as of turning angles between successive steps. Our approach extends the original projection method by introducing a rescaling of the projected data. Upon projection and coarse-graining, the renormalized pdf of the travel distances between successive turnings is seen to possess a fat tail when there is an underlying Lévy process. We exploit this effect to infer a Lévy walk process in the original high-dimensional curved trajectory. In contrast, no fat tail appears when a (Markovian) correlated random walk is analyzed in this way. We show that this procedure works extremely well in clearly identifying a Lévy walk even when there is noise from curvature. The present protocol may be useful in realistic contexts involving ongoing debates on the presence (or not) of Lévy walks related to animal movement on land (2D) and in air and oceans (3D).
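The key property exploited by the projection method, that heavy tails survive a 1D projection of isotropic 2D Lévy steps, can be illustrated as follows. The Pareto sampler, Hill estimator, and parameter choices are our assumptions for this sketch; the authors' protocol operates on full trajectories and their turning points.

```python
import math, random

def projected_step_lengths(n, mu, seed=7):
    """|x-projections| of n isotropic 2D steps with power-law length
    pdf p(l) ~ l**(-mu), l >= 1 (inverse-CDF sampling)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        l = (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))
        out.append(l * abs(math.cos(rng.uniform(0.0, 2.0 * math.pi))))
    return out

def hill_tail_index(samples, k=200):
    """Hill estimator of the survival-function tail index alpha
    (the pdf exponent is mu = alpha + 1) from the k largest samples."""
    s = sorted(samples, reverse=True)
    return k / sum(math.log(s[i]) - math.log(s[k]) for i in range(k))

# The projection preserves the heavy tail: alpha stays near mu - 1 = 1.5.
alpha = hill_tail_index(projected_step_lengths(20000, mu=2.5))
assert 1.0 < alpha < 2.0
```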
International Nuclear Information System (INIS)
Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.
2014-01-01
A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
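The core idea, fitting a magnetization curve as a non-negative mixture of Langevin responses, can be sketched with a tiny projected-gradient non-negative least-squares solver. The kernel, field grid, and two-component example are illustrative assumptions; the authors' program MINORIM is a full implementation of this kind of inversion.

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x (paramagnetic kernel)."""
    if abs(x) < 1e-6:
        return x / 3.0          # series limit avoids division blow-up
    return 1.0 / math.tanh(x) - 1.0 / x

def nnls_pg(A, b, iters=5000):
    """Non-negative least squares min ||A w - b||, w >= 0, by projected
    gradient descent (adequate for tiny, well-posed problems)."""
    m, n = len(A), len(A[0])
    ATA = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in range(n)]
           for i in range(n)]
    ATb = [sum(A[r][i] * b[r] for r in range(m)) for i in range(n)]
    lr = 1.0 / sum(ATA[i][i] for i in range(n))   # step < 1/lambda_max
    w = [0.0] * n
    for _ in range(iters):
        g = [sum(ATA[i][j] * w[j] for j in range(n)) - ATb[i] for i in range(n)]
        w = [max(0.0, wi - lr * gi) for wi, gi in zip(w, g)]
    return w

# Bimodal toy ferrofluid: two dipole strengths with known weights.
fields = [0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0]   # reduced field units
moments = [1.0, 5.0]
A = [[langevin(m * H) for m in moments] for H in fields]
true_w = [2.0, 0.5]
b = [sum(a * w for a, w in zip(row, true_w)) for row in A]
w = nnls_pg(A, b)
assert abs(w[0] - 2.0) < 0.01 and abs(w[1] - 0.5) < 0.01
```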
International Nuclear Information System (INIS)
Harvey, John A.; Rodrigues, Miesher L.; Kearfott, Kimberlee J.
2011-01-01
A computerized glow curve analysis (GCA) program for handling thermoluminescence data originating from WinREMS is presented. The MATLAB program fits the glow peaks using the first-order kinetics model. Tested materials are LiF:Mg,Ti, CaF2:Dy, CaF2:Tm, CaF2:Mn, LiF:Mg,Cu,P, and CaSO4:Dy, with most having an average figure of merit (FOM) of 1.3% or less, and CaSO4:Dy 2.2% or less. Output is a list of fit parameters, peak areas, and graphs for each fit, evaluating each glow curve in 1.5 s or less. - Highlights: → Robust algorithm for performing thermoluminescent dosimeter glow curve analysis. → Written in MATLAB so readily implemented on a variety of computers. → Usage of figure of merit demonstrated for six different materials.
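For reference, a first-order glow peak and the figure of merit can be sketched as follows. We use the Kitis analytical approximation of the Randall-Wilkins first-order peak, a common choice in GCA codes; the paper does not state that this exact parameterization is used, and the peak parameters below are illustrative.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def first_order_peak(T, Im, E, Tm):
    """Kitis analytical form of the first-order (Randall-Wilkins) glow
    peak: intensity at temperature T (K), peak height Im, activation
    energy E (eV), peak temperature Tm (K)."""
    d = 2.0 * K_B * T / E
    dm = 2.0 * K_B * Tm / E
    arg = (E / (K_B * T)) * (T - Tm) / Tm
    return Im * math.exp(1.0 + arg
                         - (T * T) / (Tm * Tm) * math.exp(arg) * (1.0 - d)
                         - dm)

def figure_of_merit(measured, fitted):
    """FOM (%) = 100 * sum|y - f| / sum f, the usual GCA quality metric."""
    return 100.0 * sum(abs(y - f) for y, f in zip(measured, fitted)) / sum(fitted)

Tm, Im, E = 485.0, 1.0, 2.0          # illustrative peak parameters
assert abs(first_order_peak(Tm, Im, E, Tm) - Im) < 1e-12  # height Im at Tm
```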
The development of a curved beam element model applied to finite elements method
International Nuclear Information System (INIS)
Bento Filho, A.
1980-01-01
A procedure for evaluating the stiffness matrix of a thick curved beam element is developed by means of the minimum potential energy principle applied to finite elements. The displacement field is prescribed through polynomial expansions, and the interpolation model is determined by comparing results obtained with a sample of different expansions. As a limiting case of the curved beam, three cases of straight beams with different dimensional ratios are analysed employing the proposed approach. Finally, an interpolation model is proposed and applied to a curved beam with large curvature. Displacements and internal stresses are determined and the results are compared with those found in the literature. (Author)
International Nuclear Information System (INIS)
Perez-Lopez, Esteban
2014-01-01
Quantitative chemical analysis is important in research, as well as in quality control, the sale of services, and other areas of interest. Some instrumental analysis methods for quantification with a linear calibration curve have limitations, because of the short linear dynamic range of the analyte or, sometimes, of the technique itself. This motivated an investigation into the suitability of quadratic calibration curves for analytical quantification, seeking to demonstrate that they are a valid calculation model for chemical analysis instruments. The analysis method is based on atomic absorption spectroscopy, specifically the determination of magnesium in a drinking-water sample from the Tacares sector, north of Grecia. A nonlinear calibration curve, specifically one with quadratic behaviour, was used and compared with the test results obtained for the same analysis with a linear calibration curve. The results showed that the methodology is valid for this determination, since the concentrations were very similar and, according to the hypothesis tests used, can be considered equal. (author)
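The approach can be sketched generically: fit a quadratic response curve to calibration standards by least squares, then invert it with the quadratic formula, keeping the root inside the calibrated range. The numbers below are synthetic, not the paper's magnesium data.

```python
import math

def fit_quadratic(xs, ys):
    """Least-squares y = c2*x^2 + c1*x + c0 via 3x3 normal equations."""
    rows = [[x * x, x, 1.0] for x in xs]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    M = [row[:] + [v] for row, v in zip(A, b)]     # Gaussian elimination
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    out = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        out[r] = (M[r][3] - sum(M[r][k] * out[k] for k in (1, 2) if k > r)) / M[r][r]
    return out  # (c2, c1, c0)

def concentration(y, c2, c1, c0, lo, hi):
    """Invert the quadratic for the root inside the calibrated range."""
    disc = math.sqrt(c1 * c1 - 4.0 * c2 * (c0 - y))
    for x in ((-c1 + disc) / (2.0 * c2), (-c1 - disc) / (2.0 * c2)):
        if lo <= x <= hi:
            return x
    raise ValueError("response outside calibrated range")

xs = [0.0, 1.0, 2.0, 3.0, 4.0]                     # standard concentrations
ys = [-0.02 * x * x + 0.5 * x + 0.01 for x in xs]  # synthetic responses
c2, c1, c0 = fit_quadratic(xs, ys)
assert abs(concentration(1.135, c2, c1, c0, 0.0, 4.0) - 2.5) < 1e-6
```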
A bottom-up method to develop pollution abatement cost curves for coal-fired utility boilers
This paper illustrates a new method to create supply curves for pollution abatement using boiler-level data that explicitly accounts for technology costs and performance. The Coal Utility Environmental Cost (CUECost) model is used to estimate retrofit costs for five different NO...
Schipper, H.R.; Grünewald, S.; Eigenraam, P.; Raghunath, P.; Kok, M.A.D.
2014-01-01
Free-form buildings tend to be expensive. By optimizing the production process, economical and well-performing precast concrete structures can be manufactured. In this paper, a method is presented that allows producing highly accurate double-curved elements without the need for milling two expensive
New methods for deriving cometary secular light curves: C/1995 O1 (Hale-Bopp) revisited
Womack, Maria; Lastra, Nathan; Harrington, Olga; Curtis, Anthony; Wierzchos, Kacper; Ruffini, Nicholas; Charles, Mentzer; Rabson, David; Cox, Timothy; Rivera, Isabel; Micciche, Anthony
2017-10-01
We present an algorithm for reducing scatter and increasing precision in a comet light curve. As a demonstration, we processed apparent magnitudes of comet Hale-Bopp from 16 highly experienced observers (archived with the International Comet Quarterly), correcting for distance from Earth and phase angle. Different observers tend to agree on the difference in magnitudes of an object at different distances, but the magnitude reported by one observer is shifted relative to that of another for an object at a fixed distance. We estimated the shifts using a self-consistent statistical approach, leading to a sharper light curve and improving the precision of the measured slopes. The final secular light curve for comet Hale-Bopp ranges from -7 au (pre-perihelion) to +8 au (post-perihelion) and is the best secular light curve produced to date for this “great” comet. We discuss Hale-Bopp’s light curve evolution and possibly related physical implications, and the potential usefulness of this light curve for comparisons with other future bright comets. We also assess the appropriateness of using secular light curves to characterize dust production rates in Hale-Bopp and other dust-rich comets. M.W. acknowledges support from NSF grant AST-1615917.
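One simple self-consistent scheme in the spirit described above models each report as mag = m(epoch) + offset(observer) and alternates between averaging out epoch magnitudes and observer offsets, anchoring the offsets to mean zero. This is our minimal sketch; the paper's statistical approach may differ in detail.

```python
def estimate_offsets(reports, iters=50):
    """reports: dict {(observer, epoch): magnitude}.
    Returns (epoch_mags, observer_offsets) for the additive model
    mag = m[epoch] + off[observer], offsets anchored to mean zero."""
    observers = sorted({o for o, _ in reports})
    epochs = sorted({e for _, e in reports})
    off = {o: 0.0 for o in observers}
    m = {}
    for _ in range(iters):
        for e in epochs:
            vals = [mag - off[o] for (o, ee), mag in reports.items() if ee == e]
            m[e] = sum(vals) / len(vals)
        for o in observers:
            vals = [mag - m[e] for (oo, e), mag in reports.items() if oo == o]
            off[o] = sum(vals) / len(vals)
        shift = sum(off.values()) / len(off)     # anchor: mean offset zero
        off = {o: v - shift for o, v in off.items()}
        m = {e: v + shift for e, v in m.items()}
    return m, off

# Synthetic check: three observers with known shifts, three epochs.
true_m = {0: 5.0, 1: 6.0, 2: 7.0}
true_off = {"A": 0.3, "B": -0.3, "C": 0.0}
reports = {(o, e): true_m[e] + true_off[o] for o in true_off for e in true_m}
m, off = estimate_offsets(reports)
assert all(abs(off[o] - true_off[o]) < 1e-9 for o in true_off)
```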
Schipper, H.R.
2015-01-01
The production of precast concrete elements with complex, double-curved geometry is expensive due to the high cost of the necessary moulds and the limited possibilities for mould reuse. Currently, CNC-milled foam moulds are the solution applied mostly in projects, offering good aesthetic
TWO METHODS OF ESTIMATING SEMIPARAMETRIC COMPONENT IN THE ENVIRONMENTAL KUZNET'S CURVE (EKC)
Paudel, Krishna P.; Zapata, Hector O.
2004-01-01
This study compares parametric and semiparametric smoothing techniques to estimate the environmental Kuznets curve. The ad hoc functional form, in which income is related either as a square or a cubic function to environmental quality, is relaxed in search of a better nonlinear fit to the pollution-income relationship for panel data.
Fan, Fenglei; Deng, Yingbin; Hu, Xuefei; Weng, Qihao
2013-01-01
The rainfall and runoff relationship becomes an intriguing issue as urbanization continues to evolve worldwide. In this paper, we developed a simulation model based on the Soil Conservation Service curve number (SCS-CN) method to analyze the rainfall-runoff relationship in Guangzhou, a rapidly growing metropolitan area in southern China. The SCS-CN method was initially developed by the Natural Resources Conservation Service (NRCS) of the United States Department of Agriculture (USDA), and is on...
Bauer, James M.; Grav, Tommy; Buratti, Bonnie J.; Hicks, Michael D.
2006-09-01
During its 2005 January opposition, the Saturnian system could be viewed at an unusually low phase angle. We surveyed a subset of Saturn's irregular satellites to obtain their true opposition magnitudes, or nearly so, down to phase angle values of 0.01°. Combining our data taken at the Palomar 200-inch and Cerro Tololo Inter-American Observatory's 4-m Blanco telescope with those in the literature, we present the first phase curves for nearly half the irregular satellites originally reported by Gladman et al. [2001. Nature 412, 163-166], including Paaliaq (SXX), Siarnaq (SXXIX), Tarvos (SXXI), Ijiraq (SXXII), Albiorix (SXVI), and additionally Phoebe's narrowest angle brightness measured to date. We find centaur-like steepness in the phase curves or opposition surges in most cases, with the notable exception of three: Albiorix and Tarvos, which are suspected to be of similar origin based on dynamical arguments, and Siarnaq.
Absolute Distances to Nearby Type Ia Supernovae via Light Curve Fitting Methods
Vinkó, J.; Ordasi, A.; Szalai, T.; Sárneczky, K.; Bányai, E.; Bíró, I. B.; Borkovits, T.; Hegedüs, T.; Hodosán, G.; Kelemen, J.; Klagyivik, P.; Kriskovics, L.; Kun, E.; Marion, G. H.; Marschalkó, G.; Molnár, L.; Nagy, A. P.; Pál, A.; Silverman, J. M.; Szakáts, R.; Szegedi-Elek, E.; Székely, P.; Szing, A.; Vida, K.; Wheeler, J. C.
2018-06-01
We present a comparative study of absolute distances to a sample of very nearby, bright Type Ia supernovae (SNe) derived from high cadence, high signal-to-noise, multi-band photometric data. Our sample consists of four SNe: 2012cg, 2012ht, 2013dy and 2014J. We present new homogeneous, high-cadence photometric data in Johnson–Cousins BVRI and Sloan g‧r‧i‧z‧ bands taken from two sites (Piszkesteto and Baja, Hungary), and the light curves are analyzed with publicly available light curve fitters (MLCS2k2, SNooPy2 and SALT2.4). When comparing the best-fit parameters provided by the different codes, it is found that the distance moduli of moderately reddened SNe Ia agree within ≲0.2 mag, and the agreement is even better (≲0.1 mag) for the highest signal-to-noise BVRI data. For the highly reddened SN 2014J the dispersion of the inferred distance moduli is slightly higher. These SN-based distances are in good agreement with the Cepheid distances to their host galaxies. We conclude that the current state-of-the-art light curve fitters for Type Ia SNe can provide consistent absolute distance moduli having less than ∼0.1–0.2 mag uncertainty for nearby SNe. Still, there is room for future improvements to reach the desired ∼0.05 mag accuracy in the absolute distance modulus.
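For orientation, the quoted 0.1 to 0.2 mag uncertainties translate directly into distance errors through the distance modulus mu = 5 log10(d / 10 pc); 0.1 mag corresponds to about 5% in distance. A quick sketch (our own, with an arbitrary example distance):

```python
import math

def distance_modulus(d_pc):
    """mu = 5 * log10(d / 10 pc), for a distance d in parsecs."""
    return 5.0 * math.log10(d_pc / 10.0)

def distance_from_modulus(mu):
    """Invert: d (pc) = 10**(mu/5 + 1)."""
    return 10.0 ** (mu / 5.0 + 1.0)

mu = distance_modulus(20e6)                 # a host galaxy at 20 Mpc
assert abs(distance_from_modulus(mu) - 20e6) < 1.0   # round trip

# A 0.1 mag error in mu scales the distance by 10**0.02, i.e. ~4.7%.
ratio = distance_from_modulus(mu + 0.1) / distance_from_modulus(mu)
assert abs(ratio - 10.0 ** 0.02) < 1e-9
```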
Marginal abatement cost curves for policy recommendation – A method for energy system analysis
International Nuclear Information System (INIS)
Tomaschek, Jan
2015-01-01
The transport sector is seen as one of the key factors driving future energy consumption and greenhouse gas (GHG) emissions. In order to rank possible measures, marginal abatement cost curves have become a tool to graphically represent the relationship between abatement costs and emission reduction. This paper demonstrates how to derive marginal abatement cost curves for well-to-wheel GHG emissions of the transport sector, considering the full energy provision chain and the interlinkages and interdependencies within the energy system. The presented marginal abatement cost curves visualize substitution effects between measures at different marginal mitigation costs. The analysis makes use of an application of the energy system model generator TIMES for South Africa (TIMES-GEECO). For the example of Gauteng province, this study shows that the transport sector is not the first sector to address for cost-efficient reduction of GHG emissions. However, the analysis also demonstrates that several options are available to mitigate transport-related GHG emissions at comparably low marginal abatement costs. This methodology can be transferred to other economic sectors as well as to other regions of the world to derive cost-efficient GHG reduction strategies.
Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys
Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.
2016-08-01
Highly reliable, fast and cost-effective spectrophotometric methods have been developed for the determination of Mn, Fe and Cu in aluminum master alloys, based on calibration curves prepared from laboratory standards. The calibration curves are designed to give maximum sensitivity and minimum instrumental error (Mn 1 mg/100 ml-2 mg/100 ml, Fe 0.01 mg/100 ml-0.2 mg/100 ml and Cu 2 mg/100 ml-10 mg/100 ml). The developed spectrophotometric methods produce accurate results when analyzing Mn, Fe and Cu in certified reference materials. In particular, these methods are suitable for all types of Al-Mn, Al-Fe and Al-Cu master alloys (5%, 10%, 50% etc.). Moreover, the sampling practices suggested herein include a reasonable amount of analytical sample, which truly represents the whole lot of a particular master alloy. A successive dilution technique was utilized to meet the calibration curve range. Furthermore, the worked-out methods were also found suitable for the analysis of the said elements in ordinary aluminum alloys. However, it was observed that Cu showed considerable interference with Fe; the latter may not be accurately measured in the presence of Cu greater than 0.01%.
Energy Technology Data Exchange (ETDEWEB)
Lu Yiyun, E-mail: luyiyun6666@vip.sohu.co [Luoyang Institute of Science and Technology, Luoyang, Henan 471023 (China); Qin Yujie; Dang Qiaohong [Luoyang Institute of Science and Technology, Luoyang, Henan 471023 (China); Wang Jiasu [Applied Superconductivity Laboratory, Southwest Jiaotong University, P.O. Box 152, Chengdu, Sichuan 610031 (China)
2010-12-01
The crossing in the magnetic levitation force-gap hysteresis curve of a melt-textured high-temperature superconductor (HTS) versus a NdFeB permanent magnet (PM) was experimentally studied. One HTS bulk and one PM were used in the experiments. Four experimental methods were employed, combining high or low PM movement speed with or without heat insulation material (HIM) enclosure. Experimental results show that the crossing of the levitation force-gap curve is related to the experimental method. A crossing occurs in the magnetic force-gap curve when the PM approaches and departs from the sample, at either high or low movement speed, without HIM enclosed. When the PM is enclosed with HIM during the measurement procedures, there is no crossing in the force-gap curve regardless of the movement speed of the PM. It was found experimentally that the maximum magnitude of the levitation force of the HTS increases with the moving speed of the PM. The results are interpreted based on Maxwell theory and flux flow-creep models of HTS.
International Nuclear Information System (INIS)
Lu Yiyun; Qin Yujie; Dang Qiaohong; Wang Jiasu
2010-01-01
The crossing in the magnetic levitation force-gap hysteresis curve of a melt-textured high-temperature superconductor (HTS) versus a NdFeB permanent magnet (PM) was experimentally studied. One HTS bulk and one PM were used in the experiments. Four experimental methods were employed, combining high or low PM movement speed with or without heat insulation material (HIM) enclosure. Experimental results show that the crossing of the levitation force-gap curve is related to the experimental method. A crossing occurs in the magnetic force-gap curve when the PM approaches and departs from the sample, at either high or low movement speed, without HIM enclosed. When the PM is enclosed with HIM during the measurement procedures, there is no crossing in the force-gap curve regardless of the movement speed of the PM. It was found experimentally that the maximum magnitude of the levitation force of the HTS increases with the moving speed of the PM. The results are interpreted based on Maxwell theory and flux flow-creep models of HTS.
Wang, Nianfeng; Guo, Hao; Chen, Bicheng; Cui, Chaoyu; Zhang, Xianmin
2018-05-01
Dielectric elastomers (DE), known as electromechanical transducers, have been widely used in the fields of sensors, generators, actuators and energy harvesting for decades. A large number of DE actuators, including bending actuators, linear actuators and rotational actuators, have been designed using experience-based design methods. This paper proposes a new method for the design of DE actuators by using a topology optimization method based on pairs of curves. First, theoretical modeling and optimization design are discussed, after which a rotary dielectric elastomer actuator is designed using this optimization method. Finally, experiments and comparisons between several DE actuators are made to verify the optimized result.
Pereckiene, A; Kaziūnaite, V; Vysniauskas, A; Petkevicius, S; Malakauskas, A; Sarkūnas, M; Taylor, M A
2007-10-21
The comparative efficacies of seven published McMaster method modifications for faecal egg counting were evaluated on pig faecal samples containing Ascaris suum eggs. Comparisons were made as to the number of samples found to be positive by each of the methods, the total egg counts per gram (EPG) of faeces, the variations in EPG obtained in the samples examined, and the ease of use of each of the methods. Each method was evaluated after the examination of 30 samples of faeces. The positive samples were identified by counting A. suum eggs in one, two and three sections of a newly designed McMaster chamber. The methods compared in the present study were those reported by: I-Henriksen and Aagaard [Henriksen, S.A., Aagaard, K.A., 1976. A simple flotation and McMaster method. Nord. Vet. Med. 28, 392-397]; II-Kassai [Kassai, T., 1999. Veterinary Helminthology. Butterworth-Heinemann, Oxford, 260 pp.]; III and IV-Urquhart et al. [Urquhart, G.M., Armour, J., Duncan, J.L., Dunn, A.M., Jennings, F.W., 1996. Veterinary Parasitology, 2nd ed. Blackwell Science Ltd., Oxford, UK, 307 pp.] (centrifugation and non-centrifugation methods); V and VI-Grønvold [Grønvold, J., 1991. Laboratory diagnoses of helminths common routine methods used in Denmark. In: Nansen, P., Grønvold, J., Bjørn, H. (Eds.), Seminars on Parasitic Problems in Farm Animals Related to Fodder Production and Management. The Estonian Academy of Sciences, Tartu, Estonia, pp. 47-48] (salt solution, and salt and glucose solution); VII-Thienpont et al. [Thienpont, D., Rochette, F., Vanparijs, O.F.J., 1986. Diagnosing Helminthiasis by Coprological Examination. Coprological Examination, 2nd ed. Janssen Research Foundation, Beerse, Belgium, 205 pp.]. The number of positive samples found by examining a single section ranged from 98.9% (method I) to 51.1% (method VII). Only with methods I and II was there 100% positivity in two out of three of the chambers examined, and the FECs obtained using these methods were significantly (pcoefficient
Use of Monte Carlo Methods for determination of isodose curves in brachytherapy
International Nuclear Information System (INIS)
Vieira, Jose Wilson
2001-08-01
Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor with the objective of causing necrosis of the cancerous tissue. The intensity of the cell response to radiation varies according to the tissue type and degree of differentiation. Since malign cells are less differentiated than normal ones, they are more sensitive to radiation. This is the basis for radiotherapy techniques. Institutes that work with the application of high dose rates use sophisticated computer programs to calculate the dose necessary to achieve necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves with the necessary information for planning brachytherapy in patients. The objective of this work is, using Monte Carlo techniques, to develop a computer program - ISODOSE - which allows isodose curves around the linear radioactive sources used in brachytherapy to be determined. The development of ISODOSE is important because the available commercial programs are, in general, very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent to analytic solutions, such as the integration of functions with singularities in their domain. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed quite detailed drawing of the isodose curves around linear sources. (author)
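The dose field around a linear source can be sketched with a minimal Monte Carlo estimate. The following is an illustrative toy kernel (bare inverse-square, no attenuation or scatter), not the actual ISODOSE code: emission points are sampled uniformly along the source and the kernel is averaged.

```python
import math
import random

def mc_dose(x, y, half_length=1.0, n_samples=20000, seed=42):
    """Monte Carlo estimate of the relative dose at point (x, y) from a
    line source on the x-axis spanning [-half_length, +half_length],
    using a bare inverse-square point kernel (no attenuation/scatter)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        xs = rng.uniform(-half_length, half_length)  # sampled emission point
        total += 1.0 / ((x - xs) ** 2 + y ** 2)
    return total / n_samples

# An isodose curve is the locus of points where mc_dose equals a chosen
# level; tracing several levels yields the isodose map around the source.
```

A handy analytic check for this toy kernel: on the perpendicular bisector at distance y, the exact average is atan(half_length/y) / (half_length * y), which the Monte Carlo estimate should approach.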
Energy Technology Data Exchange (ETDEWEB)
Spinler, E.A.; Baldwin, B.A. [Phillips Petroleum Co., Bartlesville, OK (United States)
1997-08-01
A method is being developed for direct experimental determination of capillary pressure curves from saturation distributions produced during centrifuging fluids in a rock plug. A free water level is positioned along the length of the plugs to enable simultaneous determination of both positive and negative capillary pressures. Octadecane as the oil phase is solidified by temperature reduction while centrifuging to prevent fluid redistribution upon removal from the centrifuge. The water saturation is then measured via magnetic resonance imaging. The saturation profile within the plug and the calculation of pressures for each point of the saturation profile allows for a complete capillary pressure curve to be determined from one experiment. Centrifuging under oil with a free water level into a 100 percent water saturated plug results in the development of a primary drainage capillary pressure curve. Centrifuging similarly at an initial water saturation in the plug results in the development of an imbibition capillary pressure curve. Examples of these measurements are presented for Berea sandstone and chalk rocks.
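The pressure assigned to each point of the imaged saturation profile follows the standard centrifuge relation, with the free water level as the zero-pressure datum. A minimal sketch (the fluid densities and rotation speed below are illustrative values, not the authors' experimental conditions):

```python
import math

def capillary_pressure(r, r_fwl, delta_rho, omega):
    """Capillary pressure (Pa) at radius r (m) from the rotation axis,
    for a plug spun at omega (rad/s) with the free water level at r_fwl.
    Pc = 0.5 * delta_rho * omega**2 * (r_fwl**2 - r**2): positive above
    the free water level, zero at it, negative below."""
    return 0.5 * delta_rho * omega ** 2 * (r_fwl ** 2 - r ** 2)

# Pair each Pc(r) with the water saturation imaged at radius r to build
# the full positive-and-negative capillary pressure curve in one run.
omega = 2 * math.pi * 3000 / 60   # 3000 rpm in rad/s (illustrative)
drho = 220.0                      # water minus octadecane, kg/m^3 (approximate)
profile = [(r, capillary_pressure(r, 0.10, drho, omega))
           for r in (0.08, 0.10, 0.12)]
```

The sign change across the free water level is what lets a single centrifuge run yield both the drainage (positive) and imbibition (negative) branches described in the abstract.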
Vickers Andrew; Hozo Iztok; Tsalatsanis Athanasios; Djulbegovic Benjamin
2010-01-01
Abstract Background Decision curve analysis (DCA) has been proposed as an alternative method for the evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1) and by an analytical, deliberative process (system 2); thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. ...
Department of Veterans Affairs — As of June 28, 2010, the Master Veteran Index (MVI) database based on the enhanced Master Patient Index (MPI) is the authoritative identity service within the VA,...
Directory of Open Access Journals (Sweden)
M Hoseini
2012-05-01
Full Text Available
Background and Objectives: LMS is a general method for fitting smooth reference centile curves in the medical sciences. Such curves describe the distribution of a measurement as it changes according to some covariate, such as age or time. The method describes the distribution of changes by three parameters: the mean, the coefficient of variation and the Box-Cox power (skewness). Applying maximum penalized likelihood and spline functions, the three curves are estimated and fitted with optimum smoothness. This study was conducted to provide the percentiles of the lipid profile of Iranian children and adolescents by the LMS method.
Methods: Smoothed reference centile curves of four groups of lipids (triglycerides, total-, LDL- and HDL-cholesterol) were developed from the data of 4824 Iranian school students, aged 6-18 years, living in six cities (Tabriz, Rasht, Gorgan, Mashad, Yazd and Tehran-Firouzkouh) in Iran. Demographic and laboratory data were taken from the national study of the surveillance and prevention of non-communicable diseases from childhood (CASPIAN Study). After data management, data of 4824 students were included in the statistical analysis, which was conducted by the modified LMS method proposed by Cole. The curves were developed with degrees of freedom from four to ten, and tools such as the deviance, Q tests and detrended Q-Q plots were used for monitoring the goodness of fit of the models.
Results: All tools confirmed the model, and the LMS method was used as an appropriate method in smoothing reference centile. This method revealed the distributing features of variables serving as an objective tool to determine their relative importance.
Conclusion: This study showed that the triglycerides level is higher and
International Nuclear Information System (INIS)
Okano, Yasushi; Yamano, Hidemasa
2016-01-01
A method to obtain a hazard curve of a forest fire was developed. The method has four steps: logic tree formulation, response surface evaluation, Monte Carlo simulation, and annual exceedance frequency calculation. The logic tree consists of domains of 'forest fire breakout and spread conditions', 'weather conditions', 'vegetation conditions', and 'forest fire simulation conditions.' Condition parameters of the logic boxes are static if they are stable during a forest fire or not sensitive to the forest fire intensity; non-static parameters are variables whose frequency/probability is given based on existing databases or evaluations. Response surfaces of a reaction intensity and a fireline intensity were prepared by interpolating outputs from a number of forest fire propagation simulations by the fire area simulator (FARSITE). In the Monte Carlo simulation, one sample represented a set of variable parameters of the logic boxes, and a corresponding intensity was evaluated from the response surface. The hazard curve, i.e. the annual exceedance frequency of the intensity, was then calculated from the histogram of the Monte Carlo simulation outputs. The new method was applied to evaluate hazard curves of a reaction intensity and a fireline intensity for a typical location around a sodium-cooled fast reactor in Japan. (author)
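The Monte Carlo and exceedance-frequency steps can be sketched as follows. The sampled parameters, their distributions, and the polynomial response surface below are invented stand-ins for illustration; the actual method interpolates FARSITE outputs:

```python
import bisect
import random

def hazard_curve(n=50000, fire_freq=0.1, seed=1):
    """Sketch of the Monte Carlo step: sample variable parameters (here
    wind speed and fuel moisture, with illustrative distributions),
    evaluate a fireline-intensity response surface (a made-up polynomial
    stand-in for the FARSITE-based surface), then turn the histogram
    into an annual exceedance frequency curve by scaling the exceedance
    probability with the annual fire-breakout frequency."""
    rng = random.Random(seed)
    intensities = []
    for _ in range(n):
        wind = rng.weibullvariate(5.0, 2.0)        # wind speed, m/s
        moisture = rng.uniform(0.05, 0.3)          # fuel moisture fraction
        fireline = 120.0 * wind ** 1.5 * (0.35 - moisture)  # kW/m stand-in
        intensities.append(max(fireline, 0.0))
    intensities.sort()

    def annual_exceedance(level):
        # fraction of samples above `level`, times breakout frequency
        idx = bisect.bisect_right(intensities, level)
        return fire_freq * (len(intensities) - idx) / len(intensities)

    return annual_exceedance

exc = hazard_curve()
```

By construction the resulting curve is monotonically non-increasing in the intensity level, which is the defining property of a hazard curve.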
Kumar, Gautam; Maji, Kuntal
2018-04-01
This article deals with the prediction of strain- and stress-based forming limit curves for advanced high strength steel DP590 sheet using the Marciniak-Kuczynski (M-K) method. Three yield criteria, namely von Mises, Hill's 48 and Yld2000-2d, and two hardening laws, i.e., the Hollomon power law and the Swift hardening law, were considered to predict the forming limit curves (FLCs) for DP590 steel sheet. The effects of the imperfection factor and the initial groove angle on the prediction of the FLC were also investigated. It was observed that the FLCs shifted upward with increasing imperfection factor value. The initial groove angle was found to have a significant effect on limit strains on the left side of the FLC, and an insignificant effect on the right side of the FLC for a certain range of strain paths. The limit strains were calculated at zero groove angle for the right side of the FLC, and a critical groove angle was used for the left side of the FLC. The numerically predicted FLCs considering the different combinations of yield criteria and hardening laws were compared with the published experimental FLCs for DP590 steel sheet. The FLC predicted using the combination of the Yld2000-2d yield criterion and the Swift hardening law was in better correlation with the experimental data. Stress-based forming limit curves (SFLCs) were also calculated from the limiting strain values obtained by the M-K model. Theoretically predicted SFLCs were compared with those obtained from the experimental forming limit strains. Stress-based forming limit curves were seen to represent the forming limits of DP590 steel sheet better than strain-based forming limit curves.
Dual arm master controller for a bilateral servo-manipulator
Kuban, Daniel P.; Perkins, Gerald S.
1989-01-01
A master controller for a mechanically dissimilar bilateral slave servo-manipulator is disclosed. The master controller includes a plurality of drive trains comprising a plurality of sheave arrangements and cables for controlling upper and lower degrees of master movement. The cables and sheaves of the master controller are arranged to effect kinematic duplication of the slave servo-manipulator, despite mechanical differences therebetween. A method for kinematically matching a master controller to a slave servo-manipulator is also disclosed.
International Nuclear Information System (INIS)
Lu Jia; Zhou Huaichun
2016-01-01
To deal with the staircase approximation problem in the standard finite-difference time-domain (FDTD) simulation, the two-dimensional boundary condition equations (BCE) method is proposed in this paper. In the BCE method, the standard FDTD algorithm can be used as usual, and the curved surface is treated by adding the boundary condition equations. Thus, while maintaining the simplicity and computational efficiency of the standard FDTD algorithm, the BCE method can solve the staircase approximation problem. The BCE method is validated by analyzing near field and far field scattering properties of the PEC and dielectric cylinders. The results show that the BCE method can maintain a second-order accuracy by eliminating the staircase approximation errors. Moreover, the results of the BCE method show good accuracy for cylinder scattering cases with different permittivities. (paper)
An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe
Energy Technology Data Exchange (ETDEWEB)
Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States)
2017-03-20
We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
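A smoothly broken power law of this flavor can be written compactly. The form below is a generic variant assumed for illustration, not necessarily the paper's exact function; its key property is that it interpolates between two power-law slopes across a break time:

```python
def broken_power_law(t, amplitude, t_break, alpha1, alpha2, smoothness=4.0):
    """Smoothly broken power law: behaves as t**alpha1 well before
    t_break and as t**alpha2 well after it; `smoothness` sets how sharp
    the transition is. Generic illustrative form."""
    x = t / t_break
    return amplitude * x ** alpha1 * (1.0 + x ** smoothness) ** ((alpha2 - alpha1) / smoothness)

# The expanding-fireball picture in the abstract suggests an early rise
# with alpha1 roughly 2 (flux tracks the photosphere's growing area),
# while the post-peak decline corresponds to alpha2 < 0.
```

A quick sanity check on the asymptotic slopes: doubling t well before the break multiplies the flux by about 2**alpha1, and well after the break by about 2**alpha2.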
Energy Technology Data Exchange (ETDEWEB)
Dias, Mafalda; Seery, David [Astronomy Centre, University of Sussex, Brighton BN1 9QH (United Kingdom); Frazer, Jonathan, E-mail: m.dias@sussex.ac.uk, E-mail: j.frazer@sussex.ac.uk, E-mail: a.liddle@sussex.ac.uk [Department of Theoretical Physics, University of the Basque Country, UPV/EHU, 48040 Bilbao (Spain)
2015-12-01
We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.
International Nuclear Information System (INIS)
Dias, Mafalda; Seery, David; Frazer, Jonathan
2015-01-01
We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.
International Nuclear Information System (INIS)
Jesenik, M.; Gorican, V.; Trlep, M.; Hamler, A.; Stumberger, B.
2006-01-01
Many magnetic materials are anisotropic. In the 3D finite element method calculation, the anisotropy of the material is taken into account. Anisotropic magnetic material is described with magnetization curves for different magnetization directions. The 3D transient calculation of the rotational magnetic field in the sample of the round rotational single sheet tester with a circular sample, considering eddy currents, is made and compared with the measurement to verify the correctness of the method and to analyze the magnetic field in the sample.
International Nuclear Information System (INIS)
Yu, Shiwei; Zhang, Junjie; Zheng, Shuhong; Sun, Han
2015-01-01
This study aims to estimate carbon intensity abatement potential in China at the regional level by proposing a particle swarm optimization–genetic algorithm (PSO–GA) multivariate environmental learning curve estimation method. The model uses two independent variables, namely, per capita gross domestic product (GDP) and the proportion of the tertiary industry in GDP, to construct carbon intensity learning curves (CILCs), i.e., CO 2 emissions per unit of GDP, of 30 provinces in China. Instead of the traditional ordinary least squares (OLS) method, a PSO–GA intelligent optimization algorithm is used to optimize the coefficients of a learning curve. The carbon intensity abatement potentials of the 30 Chinese provinces are estimated via PSO–GA under the business-as-usual scenario. The estimation reveals the following results. (1) For most provinces, the abatement potentials from improving a unit of the proportion of the tertiary industry in GDP are higher than the potentials from raising a unit of per capita GDP. (2) The average potential of the 30 provinces in 2020 will be 37.6% based on the emission's level of 2005. The potentials of Jiangsu, Tianjin, Shandong, Beijing, and Heilongjiang are over 60%. Ningxia is the only province without intensity abatement potential. (3) The total carbon intensity in China weighted by the GDP shares of the 30 provinces will decline by 39.4% in 2020 compared with that in 2005. This intensity cannot achieve the 40%–45% carbon intensity reduction target set by the Chinese government. Additional mitigation policies should be developed to uncover the potentials of Ningxia and Inner Mongolia. In addition, the simulation accuracy of the CILCs optimized by PSO–GA is higher than that of the CILCs optimized by the traditional OLS method. - Highlights: • A PSO–GA-optimized multi-factor environmental learning curve method is proposed. • The carbon intensity abatement potentials of the 30 Chinese provinces are estimated by
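The PSO half of the estimation can be sketched with a minimal particle swarm (no GA hybridization) fitting an illustrative two-factor learning curve CI = a * gdp**b * tertiary**c to synthetic data; the functional form, coefficients, bounds and swarm constants below are all assumptions for illustration:

```python
import random

def pso_fit_learning_curve(data, bounds, n_particles=30, n_iter=200, seed=7):
    """Minimal particle swarm optimization for the coefficients (a, b, c)
    of an illustrative carbon-intensity learning curve
    CI = a * gdp**b * tertiary**c, minimizing the sum of squared errors."""
    rng = random.Random(seed)
    dim = len(bounds)

    def sse(p):
        a, b, c = p
        return sum((ci - a * g ** b * s ** c) ** 2 for g, s, ci in data)

    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [sse(p) for p in pos]
    g_idx = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g_idx][:], pbest_val[g_idx]
    history = [gbest_val]
    w, c1, c2 = 0.7, 1.5, 1.5          # standard inertia/attraction constants
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = sse(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
        history.append(gbest_val)
    return gbest, history

# Synthetic data generated from known coefficients a=2.0, b=-0.3, c=-0.5.
truth = (2.0, -0.3, -0.5)
data = [(g, s, truth[0] * g ** truth[1] * s ** truth[2])
        for g in (1.0, 2.0, 5.0, 10.0) for s in (0.2, 0.4, 0.6)]
bounds = [(0.1, 5.0), (-1.0, 0.0), (-1.0, 0.0)]
params, history = pso_fit_learning_curve(data, bounds)
```

The global-best error is non-increasing by construction, which is the swarm-intelligence advantage over OLS that the abstract alludes to for non-linear curve forms.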
Regional Master on Medical Physics
International Nuclear Information System (INIS)
Gutt, F.
2001-01-01
It outlines the master project; the master's objective; the medical physicist's profile and tasks; the requirements to become a master's student; and the master's programmatic contents and investigation priorities. [es
Approximation by planar elastic curves
DEFF Research Database (Denmark)
Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge
2016-01-01
We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.
Romano, N.; Petroselli, A.; Grimaldi, S.
2012-04-01
With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and Green-Ampt (GA) infiltration model, we have developed a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events showing encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions so that the GA soil hydraulic parameters are expected to be insensitive toward the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetograph and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the parameter variability makes the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.
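The SCS-CN half of the coupling computes the storm's total net rainfall from the curve number; a standard sketch of that calculation (in CN4GA this total would then constrain the Green-Ampt conductivity so that infiltration is distributed in time):

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Total net rainfall (runoff, mm) for storm depth p_mm by the
    SCS-CN method: S = 25400/CN - 254 (mm), Ia = ia_ratio * S, and
    Q = (P - Ia)**2 / (P - Ia + S) when P > Ia, else 0."""
    s = 25400.0 / cn - 254.0          # potential maximum retention, mm
    ia = ia_ratio * s                  # initial abstraction, mm
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

For example, a 100 mm storm on a CN = 75 catchment yields roughly 41 mm of net rainfall, while CN = 100 (impervious) returns the full storm depth.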
Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus
2015-09-03
Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements rise special computational challenges for multivariate curve resolution techniques since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations rapidly grow with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define for the given high-dimensional spectroscopic data a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and is tested for experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
Hanafiah, Hazlenah; Jemain, Abdul Aziz
2013-11-01
In recent years, the study of fertility has been getting a lot of attention among researchers, following fears of fertility deterioration brought on by rapid economic development. Hence, this study examines the feasibility of developing fertility forecasts based on age structure. The Lee-Carter model (1992) is applied in this study, as it is an established and widely used model for analysing demographic aspects. A singular value decomposition approach is incorporated with an ARIMA model to estimate age-specific fertility rates in Peninsular Malaysia over the period 1958-2007. Residual plots are used to measure the goodness of fit of the model. A fertility index forecast using a random walk with drift is then utilised to predict future age-specific fertility. Results indicate that the proposed model provides a relatively good and reasonable data fit. In addition, there is an apparent and continuous decline in the age-specific fertility curves over the next 10 years, particularly among mothers in their early 20s and 40s. The study of fertility is vital in order to maintain a balance between population growth and the provision of related facilities and resources.
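The decomposition step can be sketched in a few lines. The Lee-Carter form log f(x,t) = a_x + b_x * k_t is fitted below by a rank-1 alternating least-squares pass over the age-centered matrix (equivalent to taking the leading SVD term; the synthetic rates are invented for illustration, and the initial guess assumes the first age row carries signal):

```python
def lee_carter_fit(log_rates, n_iter=20):
    """Fit log f(x,t) = a_x + b_x * k_t: a_x is the age-specific mean,
    and (b_x, k_t) come from a rank-1 alternating least-squares fit of
    the centered matrix, normalized so that sum(b) = 1 and sum(k) = 0."""
    n_age, n_year = len(log_rates), len(log_rates[0])
    a = [sum(row) / n_year for row in log_rates]
    m = [[log_rates[x][t] - a[x] for t in range(n_year)] for x in range(n_age)]
    k = m[0][:]                                   # initial guess for k_t
    for _ in range(n_iter):
        kk = sum(v * v for v in k)
        b = [sum(m[x][t] * k[t] for t in range(n_year)) / kk for x in range(n_age)]
        bb = sum(v * v for v in b)
        k = [sum(m[x][t] * b[x] for x in range(n_age)) / bb for t in range(n_year)]
    scale = sum(b)                                 # enforce sum(b) = 1
    b = [v / scale for v in b]
    k = [v * scale for v in k]
    k_mean = sum(k) / n_year                       # enforce sum(k) = 0
    k = [v - k_mean for v in k]
    a = [a[x] + b[x] * k_mean for x in range(n_age)]
    return a, b, k

# Synthetic, exactly rank-1 "fertility" surface; the fit should recover it.
a_true, b_true = [-3.0, -2.5, -4.0], [0.5, 0.3, 0.2]
k_true = [2.0, 1.0, -1.0, -2.0]
rates = [[a_true[x] + b_true[x] * k_true[t] for t in range(4)] for x in range(3)]
a_hat, b_hat, k_hat = lee_carter_fit(rates)
```

The random-walk-with-drift forecast then extrapolates the index as k[T+h] = k[T] + h * (k[T] - k[0]) / (T - 1), with ARIMA as the more general alternative used in the paper.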
Modified Spectral Fatigue Methods for S-N Curves With MIL-HDBK-5J Coefficients
Irvine, Tom; Larsen, Curtis
2016-01-01
The rainflow method is used for counting fatigue cycles from a stress response time history, where the fatigue cycles are stress-reversals. The rainflow method allows the application of Palmgren-Miner's rule in order to assess the fatigue life of a structure subject to complex loading. The fatigue damage may also be calculated from a stress response power spectral density (PSD) using the semi-empirical Dirlik, Single Moment, Zhao-Baker and other spectral methods. These methods effectively assume that the PSD has a corresponding time history which is stationary with a normal distribution. This paper shows how the probability density function for rainflow stress cycles can be extracted from each of the spectral methods. This extraction allows for the application of the MIL-HDBK-5J fatigue coefficients in the cumulative damage summation. A numerical example is given in this paper for the stress response of a beam undergoing random base excitation, where the excitation is applied separately by a time history and by its corresponding PSD. The fatigue calculation is performed in the time domain, as well as in the frequency domain via the modified spectral methods. The result comparison shows that the modified spectral methods give comparable results to the time domain rainflow counting method.
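Once cycle counts are in hand, whether from time-domain rainflow counting or from a spectral rainflow PDF, the cumulative damage summation itself is simple. A sketch with a generic Basquin-form S-N curve (the coefficients are illustrative placeholders, not MIL-HDBK-5J values, which use material-specific tabulated equations):

```python
def basquin_cycles_to_failure(stress, a_coeff=1.0e12, exponent=3.0):
    """Generic Basquin-form S-N curve: N(S) = a_coeff * S**(-exponent).
    Illustrative coefficients only; MIL-HDBK-5J tabulates
    material-specific fatigue equations instead."""
    return a_coeff * stress ** (-exponent)

def miner_damage(cycle_counts):
    """Palmgren-Miner cumulative damage: D = sum(n_i / N(S_i)), where
    cycle_counts maps stress amplitude -> counted cycles (from rainflow
    counting or a spectral rainflow PDF). Failure is predicted at D = 1."""
    return sum(n / basquin_cycles_to_failure(s) for s, n in cycle_counts.items())
```

With the placeholder curve, 1e5 cycles at amplitude 100 and 12500 cycles at amplitude 200 each contribute a damage fraction of 0.1, illustrating how higher-amplitude cycles consume life much faster.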
Directory of Open Access Journals (Sweden)
Jiang Lin
2016-01-01
Full Text Available The overall efficiency of PV arrays is affected by hot spots, which should be detected and diagnosed by applying suitable monitoring techniques. The method of using IR thermal images to detect hot spots has been studied as a direct, noncontact, nondestructive technique. However, IR thermal images suffer from relatively high stochastic noise and non-uniformity clutter, so conventional image processing methods are not effective. This paper proposes a method to detect hot spots based on curve fitting of the gray histogram. The results of a MATLAB simulation prove that the method proposed in the paper is effective in detecting hot spots while suppressing the noise generated during the process of image acquisition.
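The idea of thresholding against a fitted background distribution can be sketched as follows. This moment-based Gaussian threshold is a simplified stand-in for the paper's histogram curve fitting, and the synthetic frame is invented for illustration:

```python
def detect_hotspots(img, k=3.0):
    """Sketch of histogram-based hot-spot detection: treat the gray-level
    distribution as roughly Gaussian, estimate its mean and spread from
    the whole image, and flag pixels hotter than mean + k*sigma. (The
    paper fits a curve to the gray histogram; this moment-based
    threshold is a simplified stand-in.)"""
    pixels = [p for row in img for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    sigma = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    threshold = mean + k * sigma
    return [(i, j) for i, row in enumerate(img)
            for j, p in enumerate(row) if p > threshold]

# A 10x10 synthetic IR frame: uniform background with one hot pixel.
frame = [[100] * 10 for _ in range(10)]
frame[5][5] = 250
```

Because the threshold is derived from the image's own statistics, moderate background noise raises sigma and the threshold together, which is what keeps false alarms down.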
Finite element method for one-dimensional rill erosion simulation on a curved slope
Directory of Open Access Journals (Sweden)
Lijuan Yan
2015-03-01
Full Text Available Rill erosion models are important for hillslope soil erosion prediction and land use planning, and their development and use have attracted increasing attention. The purpose of this research was to develop mathematical models with computer simulation procedures to simulate and predict rill erosion. The finite element method is known as an efficient tool in many applications, but it has rarely been applied to rill soil erosion. In this study, the hydrodynamic and sediment continuity equations for a rill erosion system were solved by the Galerkin finite element method implemented in Visual C++. The simulated results are compared with spatially and temporally measured data for rill erosion processes under different conditions. The results indicate that the one-dimensional linear finite element method produced excellent predictions of rill erosion processes. This study therefore supplies a tool for further development of a dynamic soil erosion prediction model.
International Nuclear Information System (INIS)
Watanabe, Yoshirou; Sakai, Akira; Inada, Mitsuo; Shiraishi, Tomokuni; Kobayashi, Akitoshi
1982-01-01
An S2-gated (second heart sound gated) method was designed by the authors. In 6 normal subjects and 16 patients (old myocardial infarction, 12 cases; hypertension, 2 cases; aortic regurgitation, 2 cases), radioisotope (RI) angiography using the S2-gated equilibrium method was performed. In the RI angiography, 99mTc-human serum albumin (HSA), 555 MBq (15 mCi), was used as the tracer, with a PDP11/34 minicomputer and a PCG/ECG synchronizer (Metro Inst.). Left ventricular (LV) volume curves were then obtained by the S2-gated and the electrocardiogram (ECG) R-wave-gated methods. From the LV volume curve, the left ventricular ejection fraction (EF), mean ejection rate (mER, s⁻¹), mean filling rate (mFR, s⁻¹) and rapid filling fraction (RFF) were calculated. mFR indicates the mean filling rate during the rapid filling phase; RFF was defined as the fraction of the stroke volume filled during the rapid filling phase. The S2-gated method was more reliable than the ECG-gated method in evaluating the early diastolic phase. RFF differed significantly between the normal group and the myocardial infarction (MI) group (p < 0.005), and RFF in the two groups was correlated with EF (r = 0.82, p < 0.01). RFF was useful in evaluating MI cases with normal EF values. The comparison of mER (ECG-gated) with mFR (S2-gated) was useful in evaluating MI cases with normal mER values: mFR was markedly lower than mER in the MI group, but approximately equal to mER in the normal group. In conclusion, evaluation using RFF and mFR by the S2-gated method was useful in MI cases with normal systolic-phase indices. (author)
International Nuclear Information System (INIS)
Faripour, H.; Faripour, N.
2003-01-01
Mixed single crystals of pure KBr-LiBr and of KBr-LiBr with a Ti dopant were grown by the Czochralski method. Because of the difference between the lattice parameters of KBr and LiBr, the growth speed of the crystals was relatively low, and they were annealed under a special temperature regime, which produced some cleavages. The crystals were exposed to β radiation and the glow curve was analysed for each crystal. Analysis of the glow curves showed that the Ti impurity lowers the appearance temperature of the main glow peak.
Reflector construction by sound path curves - A method of manual reflector evaluation in the field
International Nuclear Information System (INIS)
Siciliano, F.; Heumuller, R.
1985-01-01
In order to describe the time-of-flight behavior of various reflectors we have set up models and derived from them analytical and graphic approaches to reflector reconstruction. In the course of this work, the maximum achievable accuracy and possible simplifications were investigated. The aim of the time-of-flight reconstruction method is to determine the points of a reflector on the basis of a sound path function (sound path as a function of the probe index position). This method can only be used on materials which are isotropic in terms of sound velocity, since the method relies on time of flight being converted into sound path. This paper deals only with two-dimensional reconstruction; in other words, all statements relate to the plane of incidence. The method is based on the fact that the geometrical locus of the points equidistant from a certain probe index position is a circle. If circles with radii equal to the associated sound paths are drawn for various search-unit positions, the points of intersection of the circles are the desired reflector points.
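The circle-intersection construction above reduces to elementary geometry. The sketch below intersects two circles centred on the scanning surface (taken as y = 0, with depth as negative y); the probe positions and sound paths are illustrative values, not data from the paper.

```python
import math

# Sketch of reflector-point reconstruction from two sound-path measurements:
# each probe index position and its sound path define a circle in the plane of
# incidence, and the reflector point is the circle intersection inside the part.

def reflector_point(x1, r1, x2, r2):
    """Intersect circles centred at (x1, 0), (x2, 0) with radii r1, r2;
    return the intersection below the surface (negative y)."""
    d = x2 - x1
    if d == 0:
        raise ValueError("probe positions must differ")
    a = (d * d + r1 * r1 - r2 * r2) / (2.0 * d)   # offset from x1 along surface
    h_sq = r1 * r1 - a * a
    if h_sq < 0:
        raise ValueError("circles do not intersect")
    return (x1 + a, -math.sqrt(h_sq))             # solution inside the material

x, y = reflector_point(0.0, 5.0, 6.0, 5.0)
print(x, y)   # 3.0 -4.0
```

With more than two probe positions, the pairwise intersections can be averaged or fitted to reduce the effect of time-of-flight measurement error.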
Borghi, E.; Onis, M. de; Garza, C.; Broeck, J. van den; Frongillo, E.A.; Grummer-Strawn, L.; Buuren, S. van; Pan, H.; Molinari, L.; Martorell, R.; Onyango, A.W.; Martines, J.C.; Pinol, A.; Siyam, A.; Victoria, C.G.; Bhan, M.K.; Araújo, C.L.; Lartey, A.; Owusu, W.B.; Bhandari, N.; Norum, K.R.; Bjoerneboe, G.-E.Aa.; Mohamed, A.J.; Dewey, K.G.; Belbase, K.; Chumlea, C.; Cole, T.; Shrimpton, R.; Albernaz, E.; Tomasi, E.; Cássia Fossati da Silveira, R. de; Nader, G.; Sagoe-Moses, I.; Gomez, V.; Sagoe-Moses, C.; Taneja, S.; Rongsen, T.; Chetia, J.; Sharma, P.; Bahl, R.; Baerug, A.; Tufte, E.; Alasfoor, D.; Prakash, N.S.; Mabry, R.M.; Al Rajab, H.J.; Helmi, S.A.; Nommsen-Rivers, L.A.; Cohen, R.J.; Heinig, M.J.
2006-01-01
The World Health Organization (WHO), in collaboration with a number of research institutions worldwide, is developing new child growth standards. As part of a broad consultative process for selecting the best statistical methods, WHO convened a group of statisticians and child growth experts to
Kishi, Ryohei; Nakano, Masayoshi
2011-04-21
A novel method for the calculation of the dynamic polarizability (α) of open-shell molecular systems is developed based on the quantum master equation combined with the broken-symmetry (BS) time-dependent density functional theory within the Tamm-Dancoff approximation, referred to as the BS-DFTQME method. We investigate the dynamic α density distribution obtained from BS-DFTQME calculations in order to analyze the spatial contributions of electrons to the field-induced polarization and clarify the contributions of the frontier orbital pair to α and its density. To demonstrate the performance of this method, we examine the real part of the dynamic α of singlet 1,3-dipole systems having a variety of diradical characters (y). The frequency dispersion of α, in particular in the resonant region, is shown to strongly depend on the exchange-correlation functional as well as on the diradical character. Under sufficiently off-resonant conditions, the dynamic α is found to decrease with increasing y and/or the fraction of Hartree-Fock exchange in the exchange-correlation functional, which enhances the spin polarization, due to the decrease in the delocalization effects of π-diradical electrons in the frontier orbital pair. The BS-DFTQME method with the BHandHLYP exchange-correlation functional also turns out to semiquantitatively reproduce the α spectra calculated by a strongly correlated ab initio molecular orbital method, i.e., the spin-unrestricted coupled-cluster singles and doubles.
PIV Measurement of Pulsatile Flows in 3D Curved Tubes Using Refractive Index Matching Method
Energy Technology Data Exchange (ETDEWEB)
Hong, Hyeon Ji; Ji, Ho Seong; Kim, Kyung Chun [Pusan Nat’l Univ., Busan (Korea, Republic of)
2016-08-15
Three-dimensional models of stenosed blood vessels were prepared using a 3D printer. The models included a straight pipe with an axisymmetric stenosis and a pipe bent 10° from the center of the stenosis. A refractive index matching method was utilized to measure accurate velocity fields inside the 3D tubes. Three different pulsatile flows were generated and controlled by changing the rotational speed of the peristaltic pump. Unsteady velocity fields were measured by a time-resolved particle image velocimetry method. Periodic shedding of vortices occurred, and their motion depended on the region of maximum velocity. The sizes, positions and symmetry of the vortices are influenced by the mean Reynolds number and the tube geometry. In the case of the bent pipe, a recirculation zone observed downstream of the stenosis suggests the possibility of blood clot formation and adhesion from a hemodynamic point of view.
Obuchowski, Nancy A.; Bullen, Jennifer A.
2018-04-01
Receiver operating characteristic (ROC) analysis is a tool used to describe the discrimination accuracy of a diagnostic test or prediction model. While sensitivity and specificity are the basic metrics of accuracy, they have many limitations when characterizing test accuracy, particularly when comparing the accuracies of competing tests. In this article we review the basic study design features of ROC studies, illustrate sample size calculations, present statistical methods for measuring and comparing accuracy, and highlight commonly used ROC software. We include descriptions of multi-reader ROC study design and analysis, address frequently seen problems of verification and location bias, discuss clustered data, and provide strategies for testing endpoints in ROC studies. The methods are illustrated with a study of transmission ultrasound for diagnosing breast lesions.
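The basic ROC quantities reviewed above can be computed directly from scored cases. The sketch below evaluates sensitivity and specificity at a cutoff and the AUC via the Mann-Whitney pairwise comparison; the score lists are toy data, not results from the transmission-ultrasound study.

```python
# Minimal sketch of ROC accuracy metrics: sensitivity/specificity at a cutoff,
# and AUC as the probability that a diseased case outscores a healthy one.

def sens_spec(pos, neg, cutoff):
    """Sensitivity and specificity when scores above the cutoff are called positive."""
    sens = sum(s > cutoff for s in pos) / len(pos)
    spec = sum(s <= cutoff for s in neg) / len(neg)
    return sens, spec

def auc(pos, neg):
    """AUC = P(score_pos > score_neg); ties count one half (Mann-Whitney form)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

diseased = [0.9, 0.8, 0.6, 0.55]   # toy prediction scores
healthy  = [0.5, 0.4, 0.6, 0.1]
print(sens_spec(diseased, healthy, 0.5))  # (1.0, 0.75)
print(auc(diseased, healthy))             # 0.90625
```

Comparing two competing tests then amounts to comparing their AUCs (with an appropriate paired or clustered variance estimate, as discussed in the article).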
Analysis and Extension of the PCA Method, Estimating a Noise Curve from a Single Image
Directory of Open Access Journals (Sweden)
Miguel Colom
2016-12-01
Full Text Available In the article 'Image Noise Level Estimation by Principal Component Analysis', S. Pyatykh, J. Hesser, and L. Zheng propose a new method to estimate the variance of the noise in an image from the eigenvalues of the covariance matrix of the overlapping blocks of the noisy image. Instead of using all the patches of the noisy image, the authors propose an iterative strategy to adaptively choose the optimal set containing the patches with lowest variance. Although the method measures uniform Gaussian noise, it can be easily adapted to deal with signal-dependent noise, which is realistic with the Poisson noise model obtained by a CMOS or CCD device in a digital camera.
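The core of the eigenvalue-based estimate can be sketched briefly: vectorize the overlapping blocks, and read the noise variance from the small eigenvalues of their covariance matrix, which the (approximately low-rank) image content does not reach. The gradient test image, block size, and use of the median eigenvalue are assumptions for this sketch; the published method adds an iterative patch-selection step not reproduced here.

```python
import numpy as np

# Sketch of patch-based PCA noise estimation: the smooth image content spans
# only a few principal components, so the remaining eigenvalues of the patch
# covariance matrix estimate the noise variance.

rng = np.random.default_rng(1)
sigma = 10.0
n, b = 64, 8
base = np.add.outer(np.arange(n), np.arange(n)).astype(float)  # smooth ramp
noisy = base + rng.normal(0.0, sigma, size=(n, n))

# all overlapping b-by-b patches, one vectorized patch per row
patches = np.array([noisy[i:i + b, j:j + b].ravel()
                    for i in range(n - b + 1) for j in range(n - b + 1)])
cov = np.cov(patches, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))

# the ramp occupies only the top few components; a low-to-middle eigenvalue
# reflects the noise variance
sigma_est = float(np.sqrt(np.median(eigvals)))
print(round(sigma_est, 1))   # close to the true sigma of 10
```

For signal-dependent (Poissonian) noise, the same computation would be repeated per intensity bin to build a noise curve rather than a single variance.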
Petroselli, A.; Grimaldi, S.; Romano, N.
2012-12-01
The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but its use is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure referred to as CN4GA (Curve Number for Green-Ampt) was recently developed, which includes the Green-Ampt (GA) infiltration model and aims to distribute in time the information provided by the SCS-CN method. The main concept of the proposed mixed procedure is to use the initial abstraction and the total runoff volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is here applied to a real case study, and a sensitivity analysis concerning the remaining parameters is presented; results show that the CN4GA approach is an ideal candidate for rainfall-excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.
Liu, Boshi; Huang, Renliang; Yu, Yanjun; Su, Rongxin; Qi, Wei; He, Zhimin
2018-01-01
Ochratoxin A (OTA) is a mycotoxin generated by the metabolism of Aspergillus and Penicillium, and is extremely toxic to humans, livestock, and poultry. However, traditional assays for the detection of OTA are expensive and complicated. Besides its aptamer, OTA itself at high concentration can also adsorb onto the surface of gold nanoparticles (AuNPs) and inhibit their salt-induced aggregation. We herein report a new OTA assay based on the localized surface plasmon resonance effect of AuNPs and their aggregates. Because a result obtained from a single linear calibration curve is not reliable, we developed a "double calibration curve" method to address this issue and widen the OTA detection range. A number of other analytes were also examined, and the structural properties of the analytes that bind to the AuNPs were further discussed. We found that, owing to their different binding strengths, various considerations must be taken into account when detecting these analytes with AuNP aggregation-based methods.
A new method of testing pile using dynamic P-S-curve made by amplitude of wave train
Hu, Yi-Li; Xu, Jun; Duan, Yong-Kong; Xu, Zhao-Yong; Yang, Run-Hai; Zhao, Jin-Ming
2004-11-01
A new method of determining the vertical bearing capacity of a single pile under high strain is discussed in this paper. A heavy hammer or a small rocket is used to strike the pile top, and detectors record the vibration. An expression of higher degree for the strain (deformation force) is introduced. It is shown theoretically that displacement, velocity and acceleration cannot be obtained by simply integrating the acceleration and differentiating the velocity when large displacement and high strain exist, namely when the pile slips as a whole relative to the soil body; the relations between them are nonlinear. Accordingly, the force P and the displacement S are calculated from the amplitude of the wave train, and the (dynamic) P-S curve is drawn so as to determine the yield points. A method of determining the vertical bearing capacity of a single pile is then discussed. A static load test is used to check the result of the dynamic test and to determine the correlation constants of the dynamic-static P(Q)-S curve.
A bottom-up method to develop pollution abatement cost curves for coal-fired utility boilers
International Nuclear Information System (INIS)
Vijay, Samudra; DeCarolis, Joseph F.; Srivastava, Ravi K.
2010-01-01
This paper illustrates a new method to create supply curves for pollution abatement using boiler-level data that explicitly accounts for technology cost and performance. The Coal Utility Environmental Cost (CUECost) model is used to estimate retrofit costs for five different NOx control configurations on a large subset of the existing coal-fired, utility-owned boilers in the US. The resultant data are used to create technology-specific marginal abatement cost curves (MACCs) and also serve as input to an integer linear program, which minimizes system-wide control costs by finding the optimal distribution of NOx controls across the modeled boilers under an emission constraint. The result is a single optimized MACC that accounts for detailed, boiler-specific information related to NOx retrofits. Because the resultant MACCs do not take into account regional differences in air-quality standards or pre-existing NOx controls, the results should not be interpreted as a policy prescription. The general method as well as the NOx-specific results presented here should be of significant value to modelers and policy analysts who must estimate the costs of pollution reduction.
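The basic construction of a MACC from boiler-level retrofit options can be sketched simply: rank options by cost per ton removed and accumulate the reductions. The option list below is invented toy data, not CUECost output, and the integer-programming optimization step of the paper is not reproduced here.

```python
# Sketch of assembling a marginal abatement cost curve (MACC) from
# boiler-level retrofit options: sort by $/ton removed, accumulate tons.

def macc(options):
    """options: iterable of (name, annual_cost_usd, tons_removed).
    Returns [(name, cost_per_ton, cumulative_tons)] sorted by cost per ton."""
    ranked = sorted(options, key=lambda o: o[1] / o[2])
    curve, cum = [], 0.0
    for name, cost, tons in ranked:
        cum += tons
        curve.append((name, cost / tons, cum))
    return curve

boilers = [                              # hypothetical retrofit options
    ("unit_A_SCR",  9_000_000, 6000),    # ($/yr, tons NOx removed/yr)
    ("unit_B_SNCR", 1_200_000, 1500),
    ("unit_C_LNB",    500_000, 1000),
]
for name, cost_per_ton, cum_tons in macc(boilers):
    print(name, round(cost_per_ton), cum_tons)
# unit_C_LNB 500 1000.0
# unit_B_SNCR 800 2500.0
# unit_A_SCR 1500 8500.0
```

Reading the curve at an emission-reduction target gives the marginal control cost; the paper's integer program generalizes this by enforcing one configuration per boiler under a system-wide constraint.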
Melchior, A.-L.; Ansari, R.; Aubourg, E.; Baillon, P.; Bareyre, P.; Bauer, F.; Beaulieu, J.-Ph.; Bouquet, A.; Brehin, S.; Cavalier, F.; Char, S.; Couchot, F.; Coutures, C.; Ferlet, R.; Fernandez, J.; Gaucherel, C.; Giraud-Heraud, Y.; Glicenstein, J.-F.; Goldman, B.; Gondolo, P.; Gros, M.; Guibert, J.; Gry, C.; Hardin, D.; Kaplan, J.; de Kat, J.; Lachieze-Rey, M.; Laurent, B.; Lesquoy, E.; Magneville, Ch.; Mansoux, B.; Marquette, J.-B.; Maurice, E.; Milsztajn, A.; Moniez, M.; Moreau, O.; Moscoso, L.; Palanque-Delabrouille, N.; Perdereau, O.; Prevot, L.; Renault, C.; Queinnec, F.; Rich, J.; Spiro, M.; Vigroux, L.; Zylberajch, S.; Vidal-Madjar, A.; Magneville, Ch.
1999-01-01
The presence and abundance of MAssive Compact Halo Objects (MACHOs) towards the Large Magellanic Cloud (LMC) can be studied with microlensing searches. The 10 events detected by the EROS and MACHO groups suggest that objects of about 0.5 solar masses could fill 50% of the dark halo. This preferred mass is quite surprising, and increasing the presently small statistics is a crucial issue. Additional microlensing of stars too dim to be resolved in crowded fields should be detectable using the Pixel Method. We present here an application of this method to the EROS 91-92 data (one tenth of the whole existing data set). We emphasize the data treatment required for monitoring pixel fluxes. Geometric and photometric alignments are performed on each image. Seeing correction and error estimates are discussed. The 3.6" x 3.6" super-pixel light curves thus produced are very stable over the 120-day time-span. Fluctuations at a level of 1.8% of the flux in blue and 1.3% in red are measured on the pixel light curves. This level of stabil...
Slicing Method for curved façade and window extraction from point clouds
Iman Zolanvari, S. M.; Laefer, Debra F.
2016-09-01
Laser scanning technology is a fast and reliable method to survey structures. However, the automatic conversion of such data into solid models for computation remains a major challenge, especially where non-rectilinear features are present. Since openings and the overall dimensions of a building are the most critical elements in computational models for structural analysis, this article introduces the Slicing Method as a new, computationally-efficient method for extracting overall façade and window boundary points and reconstructing a façade into a geometry compatible with computational modelling. After finding a principal plane, the technique slices a façade into limited portions, with each slice representing a unique, imaginary section passing through the building. This is done along the façade's principal axes to segregate window and door openings from structural portions of the load-bearing masonry walls. The method detects each opening area's boundaries, as well as the overall boundary of the façade, in part by using a one-dimensional projection to accelerate processing. Slice density was optimised at 14.3 slices per vertical metre of building and 25 slices per horizontal metre, irrespective of building configuration or complexity. The proposed procedure was validated by its application to three highly decorative, historic brick buildings. Accuracy in excess of 93% was achieved with no manual intervention on highly complex buildings, and nearly 100% on simple ones. Furthermore, computational times were less than 3 s for data sets of up to 2.6 million points, while similar existing approaches required more than 16 h for such datasets.
SRF cavity alignment detection method using beam-induced HOM with curved beam orbit
Hattori, Ayaka; Hayano, Hitoshi
2017-09-01
We have developed a method to obtain the mechanical centers of nine-cell superconducting radio frequency (SRF) cavities from localized dipole modes, which are among the higher order modes (HOM) induced by low-energy beams. It should be noted that the low-energy beams used as alignment probes are easily bent in the fringe fields of the accelerator cavities. Estimation of the beam orbit is important because only the beam positions measured by beam position monitors outside the cavities are available. In this case, the alignment information for the cavities can be obtained by optimizing the parameters of the acceleration components in a beam orbit simulation so as to consistently reproduce the beam position monitor readings at every beam sweep. We discuss details of the orbit estimation method, and estimate the mechanical center of the localized modes through experiments performed at the STF accelerator. The mechanical center is determined as (x, y) = (0.44 ± 0.56 mm, −1.95 ± 0.40 mm). We also discuss the error and the applicable range of this method.
48 CFR 217.7103 - Master agreements and job orders.
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Master agreements and job... SYSTEM, DEPARTMENT OF DEFENSE CONTRACTING METHODS AND CONTRACT TYPES SPECIAL CONTRACTING METHODS Master Agreement for Repair and Alteration of Vessels 217.7103 Master agreements and job orders. ...
Directory of Open Access Journals (Sweden)
Qihao Weng
2013-03-01
Full Text Available The rainfall and runoff relationship becomes an intriguing issue as urbanization continues to evolve worldwide. In this paper, we developed a simulation model based on the Soil Conservation Service curve number (SCS-CN) method to analyze the rainfall-runoff relationship in Guangzhou, a rapidly growing metropolitan area in southern China. The SCS-CN method was initially developed by the Natural Resources Conservation Service (NRCS) of the United States Department of Agriculture (USDA), and is one of the most enduring methods for estimating direct runoff volume in ungauged catchments. In this model, the curve number (CN) is a key variable which is usually obtained from the look-up table of TR-55. Due to the limitations of TR-55 in characterizing complex urban environments and in classifying land use/cover types, the SCS-CN model cannot provide more detailed runoff information. Thus, this paper develops a method to calculate CN by using remote sensing variables, including vegetation, impervious surface, and soil (V-I-S). The specific objectives of this paper are: (1) to extract the V-I-S fraction images using Linear Spectral Mixture Analysis; (2) to obtain composite CN by incorporating vegetation types, soil types, and V-I-S fraction images; and (3) to simulate direct runoff under scenarios with precipitation of 57 mm (occurring once every five years on average) and 81 mm (occurring once every ten years). Our experiment shows that the proposed method is easy to use and can derive composite CN effectively.
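The direct-runoff computation that the SCS-CN models above build on is compact: S = 25400/CN − 254 (mm), Ia = 0.2·S, and Q = (P − Ia)²/(P − Ia + S) for P > Ia. The sketch below uses an illustrative CN of 80, not a composite CN derived from V-I-S fractions.

```python
# Sketch of the standard SCS-CN direct-runoff equations (metric units).

def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth (mm) for rainfall p_mm with curve number cn."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = ia_ratio * s                 # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(round(scs_cn_runoff(57.0, 80.0), 1))   # 18.2 mm for the 5-year storm
print(round(scs_cn_runoff(81.0, 80.0), 1))   # 35.4 mm for the 10-year storm
```

A higher composite CN (more impervious surface) raises Q toward P; at CN = 100 the retention S vanishes and all rainfall becomes direct runoff.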
Energy Technology Data Exchange (ETDEWEB)
Milligan, M R
1996-04-01
As an intermittent resource, capturing the temporal variation in wind power is an important issue in the context of utility production cost modeling. Many of the production cost models use a method that creates a cumulative probability distribution that is outside the time domain. The purpose of this report is to examine two production cost models that represent the two major model types: chronological and load duration curve models. This report is part of the ongoing research undertaken by the Wind Technology Division of the National Renewable Energy Laboratory in utility modeling and wind system integration.
Analysis and Extension of the Percentile Method, Estimating a Noise Curve from a Single Image
Directory of Open Access Journals (Sweden)
Miguel Colom
2013-12-01
Full Text Available Given a white Gaussian noise signal on a sampling grid, its variance can be estimated from a small block sample. However, in natural images we observe the combination of the geometry of the scene being photographed and the added noise. In this case, estimating the standard deviation of the noise directly from block samples is not reliable, since the measured standard deviation is explained not just by the noise but also by the geometry of the image. The Percentile method tries to estimate the standard deviation of the noise from blocks of a high-passed version of the image and a small p-percentile of these standard deviations. The idea behind it is that edges and textures in a block of the image increase the observed standard deviation but never make it decrease. Therefore, a small percentile (0.5%, for example) in the list of standard deviations of the blocks is less likely to be affected by edges and textures than a higher percentile (50%, for example). The 0.5%-percentile is empirically proven to be adequate for most natural, medical and microscopy images. The Percentile method is adapted to signal-dependent noise, which is realistic with the Poisson noise model obtained by a CCD device in a digital camera.
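The pipeline described above can be sketched in a few lines: high-pass the image, compute the standard deviation of each block, and take a small percentile of those values so that textured blocks do not inflate the estimate. The horizontal-difference high-pass, block size, percentile, and test image are assumptions for this sketch; the published method also applies a calibration correction, omitted here, that compensates the percentile's downward bias.

```python
import numpy as np

# Sketch of the Percentile method for noise estimation: block standard
# deviations of a high-passed image, then a small percentile of that list.

rng = np.random.default_rng(2)
sigma = 10.0
base = np.add.outer(np.linspace(0, 200, 128), np.linspace(0, 200, 128))
noisy = base + rng.normal(0.0, sigma, size=(128, 128))   # smooth scene + noise

# high-pass: horizontal differences, scaled so white noise keeps its sigma;
# the smooth gradient contributes only a constant, which block stds ignore
hp = np.diff(noisy, axis=1) / np.sqrt(2.0)

b = 8
stds = [hp[i:i + b, j:j + b].std()
        for i in range(0, hp.shape[0] - b + 1, b)
        for j in range(0, hp.shape[1] - b + 1, b)]
sigma_est = float(np.percentile(stds, 1.0))   # low percentile: robust to texture
print(round(sigma_est, 1))   # close to, and slightly below, the true sigma of 10
```

For signal-dependent noise, the same statistic would be computed per intensity bin, yielding a noise curve sigma(intensity) instead of a single value.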
Leem, Dohyun; Kim, Jin-Hwan; Barlat, Frédéric; Song, Jung Han; Lee, Myoung-Gyu
2018-03-01
An inverse approach based on the virtual fields method (VFM) is presented to identify the material hardening parameters under dynamic deformation. This dynamic-VFM (D-VFM) method does not require load information for the parameter identification. Instead, it utilizes acceleration fields in a specimen's gage region. To investigate the feasibility of the proposed inverse approach for dynamic deformation, the virtual experiments using dynamic finite element simulations were conducted. The simulation could provide all the necessary data for the identification such as displacement, strain, and acceleration fields. The accuracy of the identification results was evaluated by changing several parameters such as specimen geometry, velocity, and traction boundary conditions. The analysis clearly shows that the D-VFM which utilizes acceleration fields can be a good alternative to the conventional identification procedure that uses load information. Also, it was found that proper deformation conditions are required for generating sufficient acceleration fields during dynamic deformation to enhance the identification accuracy with the D-VFM.
Calibration curves for on-line leakage detection using radiotracer injection method
Directory of Open Access Journals (Sweden)
Ayoub Khatooni
2017-11-01
Full Text Available One of the most important requirements for industrial pipelines is leakage detection. In this paper, the detection of a leak and the determination of its amount using the radioactive tracer injection method have been simulated with the Monte Carlo MCNP code. The detector array consists of two NaI(Tl) detectors located before and after the position of interest, which measure the gamma rays emitted by the radioactive tracer. After calibration of the radiation detectors, the amount of leakage can be calculated from the difference between the detector counts. The effects of the pipe material, thickness and diameter, the crystal dimensions, the type of fluid, and the activity and type of tracer (24Na, 82Br, 131I, 99mTc, 113mIn) on the detectable amount of leakage have also been investigated. According to the results, for example, a leakage of more than 0.007% by volume of the inlet fluid can be detected by the presented method for an iron pipe with a 4 inch outer diameter and 0.5 cm wall thickness, petrol as the fluid inside the pipe, a 3 × 3 inch detector, and 24Na with an activity of 100 mCi.
Test of nonexponential deviations from decay curve of 52V using continuous kinetic function method
International Nuclear Information System (INIS)
Tran Dai Nghiep; Vu Hoang Lam; Vo Tuong Hanh; Do Nguyet Minh; Nguyen Ngoc Son
1993-01-01
The present work is aimed at formulating an experimental approach for testing proposed descriptions of nonexponential decay, and at applying it to the case of 52V. Several theoretical descriptions of decay processes are formulated in clarified form. The continuous kinetic function (CKF) method is used for the analysis of experimental data, with the CKF for the purely exponential case taken as a standard for comparison between theoretical and experimental data; the degree of agreement is quantified by a goodness factor. Typical oscillatory deviations of the 52V decay were observed over a wide range of time. The proposed deviation, related to interaction between the decay products and the environment, is investigated, and a complex type of decay is discussed. (author). 10 refs, 2 tabs, 5 figs
A MACHINE-LEARNING METHOD TO INFER FUNDAMENTAL STELLAR PARAMETERS FROM PHOTOMETRIC LIGHT CURVES
International Nuclear Information System (INIS)
Miller, A. A.; Bloom, J. S.; Richards, J. W.; Starr, D. L.; Lee, Y. S.; Butler, N. R.; Tokarz, S.; Smith, N.; Eisner, J. A.
2015-01-01
A fundamental challenge for wide-field imaging surveys is obtaining follow-up spectroscopic observations: there are >10⁹ photometrically cataloged sources, yet modern spectroscopic surveys are limited to ∼a few ×10⁶ targets. As we approach the Large Synoptic Survey Telescope era, new algorithmic solutions are required to cope with the data deluge. Here we report the development of a machine-learning framework capable of inferring fundamental stellar parameters (T_eff, log g, and [Fe/H]) using photometric-brightness variations and color alone. A training set is constructed from a systematic spectroscopic survey of variables with Hectospec/Multi-Mirror Telescope. In sum, the training set includes ∼9000 spectra, for which stellar parameters are measured using the SEGUE Stellar Parameters Pipeline (SSPP). We employed the random forest algorithm to perform a non-parametric regression that predicts T_eff, log g, and [Fe/H] from photometric time-domain observations. Our final optimized model produces a cross-validated rms error (RMSE) of 165 K, 0.39 dex, and 0.33 dex for T_eff, log g, and [Fe/H], respectively. Examining the subset of sources for which the SSPP measurements are most reliable, the RMSE reduces to 125 K, 0.37 dex, and 0.27 dex, respectively, comparable to what is achievable via low-resolution spectroscopy. For variable stars this represents a ≈12%-20% improvement in RMSE relative to models trained with single-epoch photometric colors. As an application of our method, we estimate stellar parameters for ∼54,000 known variables. We argue that this method may convert photometric time-domain surveys into pseudo-spectrographic engines, enabling the construction of extremely detailed maps of the Milky Way, its structure, and history.
Directory of Open Access Journals (Sweden)
Tatsuhiro Gotanda
2016-01-01
Full Text Available Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process of creating a density-absorbed dose calibration curve is time-consuming. The purpose of this study was to develop a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. The simplified method used Gafchromic EBT3 film, which has a low energy dependence, together with a step-shaped Al filter, and was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately the same straight lines, with gradients of −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time than the standard method, and is considered to offer a more time-efficient means of determining the density-absorbed dose calibration curve for EBT3 film in a low absorbed-dose range such as the diagnostic range.
International Nuclear Information System (INIS)
Haaker, L.W.; Jelatis, D.G.
1981-01-01
A remote control master-slave manipulator for performing work on the opposite side of a barrier wall, is described. The manipulator consists of a rotatable horizontal support adapted to extend through the wall and two longitudinally extensible arms, a master and a slave, pivotally connected one to each end of the support. (U.K.)
Directory of Open Access Journals (Sweden)
О В Максименкова
2015-12-01
Special attention is paid to the role of informative feedback in the course of introducing the method into a training course at the second (master's) level of higher education. Options for correcting tasks based on the results of formative assessment, and directions for further research, are proposed.
A way to the Photo Master Expert
Inagaki, Toshihiko
After presiding over a photographers' group for more than 15 years, the author encountered the Photo Master certificate examination, took it, and was certified as a Photo Master Expert in 2005. This report outlines how photographic technology was mastered in order to adapt the photographers' group to the great change of photography from film to digital, and how the contents of the group's activities changed. It also describes how taking the Photo Master certificate examination served as a good opportunity to prove the achievement level of those activities. Finally, as a photographic activity after the Photo Master Expert certification, the method of shooting the mural paintings in the royal tomb of Amenophis III is described.
Probing the A1 to L10 transformation in FeCuPt using the first order reversal curve method
Directory of Open Access Journals (Sweden)
Dustin A. Gilbert
2014-08-01
The A1-L10 phase transformation has been investigated in (001) FeCuPt thin films prepared by atomic-scale multilayer sputtering and rapid thermal annealing (RTA). Traditional x-ray diffraction is not always applicable in generating a true order parameter, due to non-ideal crystallinity of the A1 phase. Using the first-order reversal curve (FORC) method, the A1 and L10 phases are deconvoluted into two distinct features in the FORC distribution, whose relative intensities change with the RTA temperature. The L10 ordering takes place via a nucleation-and-growth mode. A magnetization-based phase fraction is extracted, providing a quantitative measure of the L10 phase homogeneity.
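The FORC distribution that deconvolutes the two phases is conventionally the mixed second derivative ρ(H, Hr) = −½ ∂²M/∂H∂Hr of magnetization measured along first-order reversal curves. A minimal numerical sketch on a toy M(H, Hr) surface (not FeCuPt data) is:

```python
import numpy as np

# Grid of reversal fields Hr and applied fields H
Hr = np.linspace(-1.0, 1.0, 101)
H = np.linspace(-1.0, 1.0, 101)
HH, RR = np.meshgrid(H, Hr)

# Toy family of first-order reversal curves (illustrative surface only)
M = np.tanh(4.0 * (HH - 0.3 * RR - 0.1))

# FORC distribution: rho = -1/2 * d^2 M / (dH dHr), via nested finite differences
dM_dH = np.gradient(M, H, axis=1)
rho = -0.5 * np.gradient(dM_dH, Hr, axis=0)
print(rho.shape, float(np.abs(rho).max()))
```

In practice the two phases would appear as separate ridges in ρ, and a magnetization-based phase fraction can be read off from the weight under each feature.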
Zhang, S. Y.; Wang, G. F.; Wu, Y. T.; Baldwin, K. M. (Principal Investigator)
1993-01-01
On a partition chromatographic column in which the support is Kieselguhr and the stationary phase is sulfuric acid solution (2 mol/L), three components of compound theophylline tablet were simultaneously eluted by chloroform and three other components were simultaneously eluted by ammonia-saturated chloroform. The two mixtures were determined separately by the computer-aided convolution curve method. The corresponding average recoveries and relative standard deviations of the six components were as follows: 101.6%, 1.46% for caffeine; 99.7%, 0.10% for phenacetin; 100.9%, 1.31% for phenobarbitone; 100.2%, 0.81% for theophylline; 99.9%, 0.81% for theobromine; and 100.8%, 0.48% for aminopyrine.
Ceylan, Selim
2015-04-01
In this study, pyrolysis of plum stone was investigated by thermogravimetric analysis in a nitrogen atmosphere at heating rates of 5, 10, 20 and 40 °C min⁻¹. Pyrolysis characteristics and the thermal-decomposition rate were significantly affected by variation in the heating rate; however, the heating rate only slightly affected the total yield of volatile matter. The activation energy of the pyrolysis reaction was evaluated by the model-free Friedman and Kissinger-Akahira-Sunose methods. Results of the master-plots method indicated that the most probable reaction model was the nth-order function f(x) = (1−x)^3.11, with A = 8.02×10¹² and a mean activation energy of 150.61 kJ mol⁻¹. Proximate and ultimate analysis showed that plum stone can be considered a favourable source for energy production owing to its low moisture and ash content, high volatile matter ratio and moderate heating value. © The Author(s) 2015.
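A model-free (isoconversional) estimate in the spirit of the Friedman method can be sketched as follows: synthetic first-order TGA curves are generated at the four heating rates, and the activation energy is recovered from the slope of ln(dx/dt) versus 1/T at fixed conversion. The kinetic constants reuse the abstract's values; the first-order model and the unit of A (s⁻¹) are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

R = 8.314          # J/(mol K)
E_true = 150.61e3  # J/mol, from the abstract
A = 8.02e12        # 1/s (assumed unit)
betas = np.array([5.0, 10.0, 20.0, 40.0]) / 60.0  # heating rates, K/s

T = np.linspace(400.0, 900.0, 200001)
k = A * np.exp(-E_true / (R * T))
# First-order model: x(T) = 1 - exp(-(1/beta) * integral of k dT)
I = cumulative_trapezoid(k, T, initial=0.0)

x_star = 0.5                     # fixed conversion for the isoconversional fit
inv_T, ln_rate = [], []
for beta in betas:
    x = 1.0 - np.exp(-I / beta)
    T_star = np.interp(x_star, x, T)                          # T at 50% conversion
    rate = A * np.exp(-E_true / (R * T_star)) * (1 - x_star)  # dx/dt there
    inv_T.append(1.0 / T_star)
    ln_rate.append(np.log(rate))

slope = np.polyfit(inv_T, ln_rate, 1)[0]   # slope = -E/R
E_est = -slope * R
print(f"recovered E = {E_est/1e3:.1f} kJ/mol")
```

Because the synthetic data obey the assumed model exactly, the regression recovers the input activation energy; on real TGA data the scatter of such fits across conversions is what the model-free methods quantify.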
International Nuclear Information System (INIS)
Baug Tapas; Chandrasekhar Thyagarajan
2013-01-01
A lunar occultation (LO) technique in the near-infrared (NIR) provides angular resolution down to milliarcseconds for an occulted source, even with ground-based 1 m class telescopes. LO observations are limited to brighter objects because they require a high signal-to-noise ratio (S/N ∼ 40) for proper extraction of angular diameter values. Hence, methods to improve the S/N ratio by reducing noise using Fourier and wavelet transforms have been explored in this study. A sample of 54 NIR LO light curves observed with the IR camera at Mt Abu Observatory has been used. Both Fourier and wavelet methods show an improvement in S/N compared to the original data. However, the application of wavelet transforms causes a slight smoothing of the fringes and results in a higher value for the angular diameter, whereas Fourier transforms, which reduce discrete noise frequencies, do not distort the fringe. The Fourier transform method seems to be effective in improving the S/N, as well as the model fit, particularly in the fainter regime of our sample. These methods also provide a better model fit for brighter sources in some cases, though there may not be a significant improvement in S/N.
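The Fourier-domain noise reduction described above can be sketched as a low-pass filter: transform the light curve, zero the high-frequency bins, and transform back. The toy fringe signal, noise level, and cutoff bin below are all invented for illustration, not real LO data.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
# Toy occultation trace: flat before the event, damped fringes after (illustrative)
signal = np.where(t > 0.5,
                  np.exp(-8 * (t - 0.5)) * np.cos(80 * np.pi * (t - 0.5)),
                  1.0)
noisy = signal + rng.normal(0, 0.2, t.size)

# Low-pass filter in the Fourier domain: zero out high-frequency bins
F = np.fft.rfft(noisy)
cutoff = 100                # keep the first 100 frequency bins (assumed)
F[cutoff:] = 0.0
denoised = np.fft.irfft(F, n=t.size)

snr_before = signal.std() / (noisy - signal).std()
snr_after = signal.std() / (denoised - signal).std()
print(f"S/N improved from {snr_before:.1f} to {snr_after:.1f}")
```

Because the fringe frequency lies below the cutoff while the noise power is spread over all bins, the retained signal is nearly undistorted while most of the noise is removed, which is the behaviour the study reports for the Fourier method.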
Yang, Chuan-Xiao; Sun, Xiang-Ying; Liu, Bin
2009-06-01
From digital images of the red complex that results from the interaction of nitrite with N-(1-naphthyl)ethylenediamine dihydrochloride and p-aminobenzene sulfonic acid, it could be seen that the solution color deepened markedly with increasing nitrite ion concentration. The JPEG digital images were transformed into gray-scale format with Origin 7.0 software, and the gray values were measured with Scion Image software; the gray values of the digital image likewise changed systematically with increasing nitrite ion concentration. Thus a novel digital imaging colorimetric (DIC) method to determine nitrogen oxide (NOx) contents in air was developed. Based on the red, green and blue (RGB) tricolor theory, the principle of the digital imaging colorimetric method and the factors influencing digital imaging are discussed. The method was successfully applied to the determination of the daily variation curve of nitrogen oxides in the atmosphere and of NO₂⁻ in synthetic samples, with recoveries of 97.3%-104.0% and a relative standard deviation (RSD) of less than 5.0%. The results were consistent with those obtained by the spectrophotometric method.
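The core of such a DIC method is a linear calibration between gray value and analyte concentration, inverted to read unknown samples. The gray-conversion weights below are the common luminance coefficients, and all calibration numbers are hypothetical placeholders, not the paper's data.

```python
import numpy as np

def to_gray(rgb):
    """Luminance-style gray value from an RGB image array (0-255 channels)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# Hypothetical calibration: mean gray value of standards vs nitrite concentration
conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0])            # assumed units
gray = np.array([240.0, 215.0, 189.0, 166.0, 140.0])  # assumed readings

slope, intercept = np.polyfit(conc, gray, 1)  # darker color -> lower gray value

def concentration(gray_value):
    """Invert the calibration line to read an unknown sample."""
    return (gray_value - intercept) / slope

print(f"sample with gray 200 -> {concentration(200.0):.2f}")
```

Recovery experiments like those in the abstract then amount to comparing `concentration(...)` readings of spiked samples against their known values.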
Directory of Open Access Journals (Sweden)
Konings Maurits K
2012-08-01
Background: In this paper a new non-invasive, operator-free, continuous ventricular stroke volume monitoring device (Hemodynamic Cardiac Profiler, HCP) is presented, which measures the average stroke volume (SV) for each period of 20 seconds, as well as ventricular volume-time curves for each cardiac cycle, using a new electric method (Ventricular Field Recognition) with six independent electrode pairs distributed over the frontal thoracic skin. In contrast to existing non-invasive electric methods, our method does not use the algorithms of impedance or bioreactance cardiography. Instead, it is based on specific 2D spatial patterns on the thoracic skin, representing the distribution over the thorax of changes in the applied current field caused by cardiac volume changes during the cardiac cycle. Since total heart volume variation during the cardiac cycle is a poor indicator of ventricular stroke volume, the HCP separates atrial filling effects from ventricular filling effects, and retrieves the volume changes of only the ventricles. Methods: Ex-vivo experiments on a post-mortem human heart were performed to measure the effects of increasing the blood volume inside the ventricles in isolation, leaving the atrial volume invariant (which cannot be done in-vivo). These effects were measured as a specific 2D pattern of voltage changes on the thoracic skin. Furthermore, a working prototype of the HCP was developed that uses these ex-vivo results in an algorithm to decompose voltage changes, measured in-vivo by the HCP on the thoracic skin of a human volunteer, into an atrial component and a ventricular component, in almost real-time (with a delay of at most 39 seconds). The HCP prototype was tested in-vivo on 7 human volunteers, using G-suit inflation and deflation to provoke stroke volume changes, and LVOT Doppler as a reference technique. Results: The ex-vivo measurements showed that ventricular filling
International Nuclear Information System (INIS)
Alhossen, I; Bugarin, F; Segonds, S; Villeneuve-Faure, C; Baudoin, F
2017-01-01
Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization, and interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is performed using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential is investigated. To reach this goal, the EFDC is fitted to a law described by four parameters, called the logistic law, and the influence of the electrode parameters on the law parameters has been investigated. Then, two methods are applied, Sobol's method and the factorial design of experiments, to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not the result of the superposition of the contribution of each electrode parameter, but that it exhibits a strong contribution from electrode parameter interaction. Furthermore, thanks to these results, a matricial model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, and this is promising for charge investigation using an EFDC. (paper)
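The four-parameter "logistic law" fit that underpins the sensitivity analysis can be sketched with `scipy.optimize.curve_fit`. The abstract does not give the authors' exact parameterization, so the standard 4PL form below, and all numbers, are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(z, a, d, c, b):
    """Four-parameter logistic law (assumed form: lower/upper asymptotes a, d,
    inflection scale c, slope b)."""
    return d + (a - d) / (1.0 + (z / c) ** b)

rng = np.random.default_rng(2)
z = np.linspace(0.05, 5.0, 80)      # tip-sample distance, arbitrary units
true = (1.0, 0.0, 1.2, 2.5)         # a, d, c, b (synthetic ground truth)
force = logistic4(z, *true) + rng.normal(0, 0.01, z.size)

popt, _ = curve_fit(logistic4, z, force, p0=(1.0, 0.0, 1.0, 2.0),
                    bounds=([0.0, -1.0, 0.1, 0.5], [2.0, 1.0, 5.0, 5.0]))
print("fitted parameters:", np.round(popt, 2))
```

Once every measured EFDC is reduced to four fitted numbers like these, Sobol indices or a factorial design can be computed on the parameters instead of on the full curves.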
DEFF Research Database (Denmark)
Ding, Tao; Li, Cheng; Huang, Can
2018-01-01
In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master-slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost … optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
International Nuclear Information System (INIS)
Gopakumar, R.
1996-01-01
We review recent work on the master field in large N theories. In particular, the mathematical framework appropriate for its construction is sketched. The calculational utility of this framework is demonstrated in the case of QCD₂. (orig.)
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-06-01
This document is a master list of acronyms and other abbreviations that are used by, or could be useful to, the personnel at Los Alamos National Laboratory. Many specialized and well-known abbreviations are not included in this list.
DEFF Research Database (Denmark)
2006-01-01
Development and content of an international Master in Urban Quality development and management. The work has been done in cooperation between the Berlage Institute, Holland; Chulalongkorn University, Thailand; Mahidol University, Thailand; Universiti Kebangsaan Malaysia, Malaysia; and Aalborg
Cardoso, Ciro
2014-01-01
This book is designed for all levels of Lumion users; from beginner to advanced, you will find useful insights and professional techniques to improve and develop your skills in order to fully control and master Lumion.
Directory of Open Access Journals (Sweden)
Jessica L. Kevill
2017-10-01
Deformed wing virus (DWV) is one of the most prevalent honey bee viral pathogens in the world. Typical of many RNA viruses, DWV is a quasi-species comprised of a large number of different variants, currently consisting of three master variants: types A, B, and C. Little is known about the impact of each variant or combinations of variants upon the biology of individual hosts. Therefore, we have developed a new set of master-variant-specific DWV primers and a set of standards that allow for the quantification of each of the master variants. A competitive reverse transcriptase polymerase chain reaction (RT-PCR) experimental design confirms that each new DWV primer set is specific to the respective master variant. The sensitivity of the ABC assay depends on whether DNA or RNA is used as the template and whether other master variants are present in the sample. Comparison of the overall proportions of each master variant within a sample of known diversity, as confirmed by next-generation sequencing (NGS) data, validates the efficiency of the ABC assay. The ABC assay was used on archived material from a Devon overwintering colony loss (OCL) 2006-2007 study, further implicating DWV type A and, for the first time, possibly C in the untimely collapse of honey bee colonies; moreover, in this study DWV type B was not associated with OCL. The use of the ABC assay will allow researchers to quickly and cost-effectively pre-screen for the presence of DWV master variants in honey bees.
International Nuclear Information System (INIS)
Liu, L.H.
2004-01-01
A discrete curved ray-tracing method is developed to analyze radiative transfer in a one-dimensional absorbing-emitting semitransparent slab with a variable spatial refractive index. The curved ray trajectory is locally treated as a straight line, so the complicated and time-consuming computation of the ray trajectory is greatly reduced. A problem of radiative equilibrium with a linearly varying spatial refractive index is taken as an example to examine the accuracy of the proposed method. The temperature distributions determined by the proposed method are compared with data in the references, obtained by other methods. The results show that the discrete curved ray-tracing method has good accuracy in solving radiative transfer in a one-dimensional semitransparent slab with a variable spatial refractive index.
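The idea of treating a curved trajectory as locally straight can be sketched for a stratified medium: in each thin layer the invariant n(y)·sin θ (the stratified-medium form of Snell's law) fixes the local direction, and the lateral displacement is accumulated layer by layer. The linear index profile and angle below are assumed example values, not the paper's case.

```python
import numpy as np

def trace_ray(n_of_y, y_max, theta0, steps=10000):
    """March a ray through a stratified slab, treating the trajectory as
    locally straight in each thin layer (n(y) * sin(theta) is invariant)."""
    ys = np.linspace(0.0, y_max, steps + 1)
    dy = ys[1] - ys[0]
    invariant = n_of_y(0.0) * np.sin(theta0)
    x = 0.0
    for y in ys[:-1]:
        sin_t = invariant / n_of_y(y + 0.5 * dy)  # local direction from Snell
        if abs(sin_t) >= 1.0:                     # total internal reflection
            break
        x += np.tan(np.arcsin(sin_t)) * dy        # straight segment in this layer
    return x

# Linearly varying refractive index (values assumed for illustration)
n_lin = lambda y: 1.2 + 0.6 * y
x_exit = trace_ray(n_lin, 1.0, np.deg2rad(60.0))
print(f"lateral displacement at exit: {x_exit:.4f}")
```

Refining `steps` makes the piecewise-straight path converge to the true curved trajectory, which is exactly the trade-off the discrete method exploits to avoid integrating the ray equation.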
Ben Abdessalem, A.; Jenson, F.; Calmon, P.
2016-02-01
This contribution provides an example of the possible advantages of adopting a Bayesian inversion approach to uncertainty quantification in nondestructive inspection methods. In such problems, the uncertainty associated with the random parameters is not always known and needs to be characterised from scattering signal measurements. The uncertainties may then be correctly propagated in order to determine a reliable probability-of-detection curve. To this end, we establish a general Bayesian framework based on a non-parametric maximum likelihood formulation and priors from expert knowledge. However, the resulting inverse problem is time-consuming and computationally intensive. To cope with this difficulty, we replace the real model by a surrogate in order to speed up model evaluation and make the problem computationally feasible. Least-squares support vector regression is adopted as the metamodelling technique due to its robustness in dealing with non-linear problems. We illustrate the usefulness of this methodology through the inspection of a tube with an enclosed defect using the ultrasonic inspection method.
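Least-squares SVR replaces the quadratic program of standard SVR with a single linear system, which is what makes it attractive as a cheap surrogate. The sketch below implements the basic LS-SVM regression system with an RBF kernel on a toy stand-in for the expensive ultrasonic model; kernel width and regularization are assumed values.

```python
import numpy as np

def rbf(X1, X2, sigma=0.5):
    """Gaussian (RBF) kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvr_fit(X, y, gamma=100.0, sigma=0.5):
    """Least-squares SVR: solve the LS-SVM linear system (no QP needed)."""
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma   # kernel matrix plus ridge term
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf(Xq, X, sigma) @ alpha + b

# Surrogate of an "expensive" model (toy stand-in for the UT simulation)
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (60, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])
surrogate = lssvr_fit(X, y)
err = np.max(np.abs(surrogate(X) - y))
print(f"max training error: {err:.3f}")
```

Once fitted, `surrogate` can be evaluated millions of times inside the Bayesian sampler at negligible cost compared with the full scattering simulation.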
Hooshyar, Milad; Wang, Dingbao
2016-08-01
The empirical proportionality relationship, which states that the ratios of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration-excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in a shallow water table environment under the following boundary conditions: (1) the soil is saturated at the land surface; and (2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and a hydrostatic soil moisture profile between the no-flux boundary and the water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table for coarse-textured soils.
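The proportionality hypothesis can be checked directly from the standard SCS-CN equations: with effective rainfall Pe = P − Ia, runoff Q = Pe²/(Pe + S) and infiltration F = Pe − Q, the ratios F/S and Q/Pe are algebraically identical. A short numerical confirmation:

```python
import numpy as np

def scs_cn_runoff(P, S, lam=0.2):
    """SCS-CN runoff (depths in mm); lam*S is the initial abstraction Ia."""
    Ia = lam * S
    Pe = np.maximum(P - Ia, 0.0)
    return Pe ** 2 / (Pe + S)

P = np.linspace(5.0, 150.0, 100)   # rainfall depths, mm
S = 80.0                           # potential maximum retention, mm (example)
Q = scs_cn_runoff(P, S)
Pe = np.maximum(P - 0.2 * S, 0.0)
F = Pe - Q                         # cumulative infiltration after abstraction

# Proportionality hypothesis: F / S == Q / Pe wherever Pe > 0
mask = Pe > 0
print(np.allclose(F[mask] / S, Q[mask] / Pe[mask]))
```

Both ratios reduce to Pe/(Pe + S), so the check holds to machine precision; the paper's contribution is showing when this identity is physically justified by infiltration theory.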
International Nuclear Information System (INIS)
Christensen, S.M.
1976-01-01
A method known as covariant geodesic point separation is developed to calculate the vacuum expectation value of the stress tensor for a massive scalar field in an arbitrary gravitational field. The vacuum expectation value will diverge because the stress-tensor operator is constructed from products of field operators evaluated at the same space-time point. To remedy this problem, one of the field operators is taken to a nearby point. The resultant vacuum expectation value is finite and may be expressed in terms of the Hadamard elementary function. This function is calculated using a curved-space generalization of Schwinger's proper-time method for calculating the Feynman Green's function. The expression for the Hadamard function is written in terms of the biscalar of geodetic interval which gives a measure of the square of the geodesic distance between the separated points. Next, using a covariant expansion in terms of the tangent to the geodesic, the stress tensor may be expanded in powers of the length of the geodesic. Covariant expressions for each divergent term and for certain terms in the finite portion of the vacuum expectation value of the stress tensor are found. The properties, uses, and limitations of the results are discussed
Park, Young-Seok; Chang, Mi-Sook; Lee, Seung-Pyo
2011-01-01
This study attempted to establish three-dimensional average curves of the gingival line of maxillary teeth using reconstructed virtual models to utilize as guides for dental implant restorations. Virtual models from 100 full-mouth dental stone cast sets were prepared with a three-dimensional scanner and special reconstruction software. Marginal gingival lines were defined by transforming the boundary points to the NURBS (nonuniform rational B-spline) curve. Using an iterative closest point algorithm, the sample models were aligned and the gingival curves were isolated. Each curve was tessellated by 200 points using a uniform interval. The 200 tessellated points of each sample model were averaged according to the index of each model. In a pilot experiment, regression and fitting analysis of one obtained average curve was performed to depict it as mathematical formulae. The three-dimensional average curves of six maxillary anterior teeth, two maxillary right premolars, and a maxillary right first molar were obtained, and their dimensions were measured. Average curves of the gingival lines of young people were investigated. It is proposed that dentists apply these data to implant platforms or abutment designs to achieve ideal esthetics. The curves obtained in the present study may be incorporated as a basis for implant component design to improve the biologic nature and related esthetics of restorations.
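The averaging pipeline described above (tessellate each curve to 200 points at a uniform interval, then average index-by-index) can be sketched as follows; the toy "gingival lines" are synthetic noisy arcs, not scan data, and the alignment step (ICP) is assumed to have already been applied.

```python
import numpy as np

def resample(points, n=200):
    """Resample a polyline to n points uniformly spaced in arc length."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))       # cumulative arc length
    u = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(u, s, points[:, k])
                            for k in range(points.shape[1])])

# Toy "gingival lines": noisy 3D arcs with varying point counts (illustrative)
rng = np.random.default_rng(4)
curves = []
for _ in range(100):
    t = np.sort(rng.uniform(0.0, np.pi, rng.integers(50, 120)))
    arc = np.column_stack([np.cos(t), np.sin(t), 0.1 * np.sin(2 * t)])
    curves.append(arc + rng.normal(0, 0.01, arc.shape))

# Tessellate each curve to 200 points and average index-by-index
average_curve = np.mean([resample(c, 200) for c in curves], axis=0)
print(average_curve.shape)
```

The resulting 200-point mean polyline is the kind of object the authors then fit with regression to express the average gingival line as mathematical formulae.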
Simulating Supernova Light Curves
International Nuclear Information System (INIS)
Even, Wesley Paul; Dolence, Joshua C.
2016-01-01
This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth's atmosphere.
Image scaling curve generation
2012-01-01
The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then
International Nuclear Information System (INIS)
Faigler, S.; Mazeh, T.; Tal-Or, L.; Quinn, S. N.; Latham, D. W.
2012-01-01
We present seven newly discovered non-eclipsing short-period binary systems with low-mass companions, identified by the recently introduced BEER algorithm, applied to the publicly available 138-day photometric light curves obtained by the Kepler mission. The detection is based on the beaming effect (sometimes called Doppler boosting), which increases (decreases) the brightness of any light source approaching (receding from) the observer, enabling a prediction of the stellar Doppler radial-velocity (RV) modulation from its precise photometry. The BEER algorithm identifies the BEaming periodic modulation, with a combination of the well-known Ellipsoidal and Reflection/heating periodic effects, induced by short-period companions. The seven detections were confirmed by spectroscopic RV follow-up observations, indicating minimum secondary masses in the range 0.07-0.4 M☉. The binaries discovered establish for the first time the feasibility of the BEER algorithm as a new detection method for short-period non-eclipsing binaries, with the potential to detect in the near future non-transiting brown-dwarf secondaries, or even massive planets.
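The order of magnitude of the beaming signal follows from the non-relativistic relation ΔF/F ≈ α·v_r/c, with α ≈ 4 for bolometric flux (band-specific values differ with the spectral slope). A rough sketch, with an illustrative RV semi-amplitude rather than a value from the paper:

```python
C = 299_792_458.0  # speed of light, m/s

def beaming_amplitude(K_rv, alpha_beam=4.0):
    """Relative flux semi-amplitude from Doppler beaming for RV
    semi-amplitude K_rv (m/s). alpha_beam ~ 4 is the bolometric value;
    the band-dependent factor is an assumption here."""
    return alpha_beam * K_rv / C

# A low-mass companion in a short-period orbit can induce an RV
# semi-amplitude of tens of km/s (order-of-magnitude illustration only)
K = 30e3  # m/s
print(f"beaming amplitude ~ {beaming_amplitude(K) * 1e6:.0f} ppm")
```

Signals of a few hundred parts per million are well within Kepler's photometric precision, which is why the beaming modulation can be used to predict the stellar RV curve before spectroscopic confirmation.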
Directory of Open Access Journals (Sweden)
Elin Yusibani
2013-12-01
Application of the curved vibrating-wire method (CVM) to measure gas viscosity has been widely used. A fine tungsten wire of 50 mm diameter is bent into a semi-circular shape and arranged symmetrically in a magnetic field of about 0.2 T. The frequency domain is used for calculating the viscosity as a response to forced oscillation of the wire. Internal friction is one of the parameters in the CVM which has to be measured beforehand: the internal friction coefficient of the wire material, which is the inverse of the quality factor, has to be measured under vacuum. The term involving internal friction actually represents the effective resistance to motion due to all non-viscous damping phenomena, including internal friction and magnetic damping. Testing of the internal friction measurement shows that, at different induced voltages and elevated temperatures under vacuum, the internal friction of tungsten is around 1 to 4 × 10⁻⁴.
Energy Technology Data Exchange (ETDEWEB)
Santos, Calink Indiara do Livramento; Carvalho, Melissa Souza; Raphael, Ellen; Ferrari, Jefferson Luis; Schiavon, Marco Antonio, E-mail: schiavon@ufsj.edu.br [Universidade Federal de Sao Joao del-Rei (UFSJ), MG (Brazil). Grupo de Pesquisa em Quimica de Materiais; Dantas, Clecio [Universidade Estadual do Maranhao (LQCINMETRIA/UEMA), Caxias, MA (Brazil). Lab. de Quimica Computacional Inorganica e Quimiometria
2016-11-15
In this work a colloidal approach was applied to synthesize water-soluble CdSe quantum dots (QDs) bearing a surface ligand such as thioglycolic acid (TGA), 3-mercaptopropionic acid (MPA), glutathione (GSH), or thioglycerol (TGH). The synthesized material was characterized by X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), UV-visible spectroscopy (UV-Vis), and fluorescence spectroscopy (PL). Additionally, a comparative study of the optical properties of the different CdSe QDs was performed, demonstrating how the surface ligand affected crystal growth. The particle sizes were calculated from a polynomial function that correlates particle size with the position of the fluorescence maximum. Curve resolution methods (EFA and MCR-ALS) were employed to decompose a series of fluorescence spectra in order to investigate the CdSe QD size distribution and determine the number of fractions with different particle sizes. The results for the MPA-capped CdSe sample showed only two main fractions with different particle sizes, with maximum emission at 642 and 686 nm; the diameters calculated from these emission maxima were, respectively, 2.74 and 3.05 nm. (author)
Directory of Open Access Journals (Sweden)
Yin eLiu
2014-09-01
Background: Molecular genetic alterations with prognostic significance have been described in childhood acute myeloid leukemia (AML). The aim of this study was to establish cost-effective techniques to detect mutations of FMS-like tyrosine kinase 3 (FLT3), Nucleophosmin 1 (NPM1), and a partial tandem duplication within the mixed lineage leukemia (MLL-PTD) gene in childhood AML. Procedure: Ninety-nine children with newly diagnosed AML were included in this study. We developed a fluorescent dye SYTO-82-based high-resolution melting (HRM) curve analysis to detect FLT3 internal tandem duplication (FLT3-ITD), FLT3 tyrosine kinase domain (FLT3-TKD) and NPM1 mutations. MLL-PTD was screened by real-time quantitative PCR. Results: The HRM methodology correlated well with gold-standard Sanger sequencing at lower cost. Among the 99 patients studied, the FLT3-ITD mutation was associated with significantly worse event-free survival (EFS), whereas patients with the NPM1 mutation had significantly better EFS and overall survival. However, HRM was not sensitive enough for minimal residual disease monitoring. Conclusions: HRM is a rapid and efficient method for screening FLT3 and NPM1 gene mutations, both affordable and accurate, especially in resource-underprivileged regions. Our results indicate that HRM could be a useful clinical tool for rapid and cost-effective screening of FLT3 and NPM1 mutations in AML patients.
Study of adsorption states in ZnO—Ag gas-sensitive ceramics using the ECTV curves method
Directory of Open Access Journals (Sweden)
Lyashkov A. Yu.
2013-12-01
The ZnO-Ag ceramic system was proposed quite a long time ago as a material for semiconductor sensors of ethanol vapors. The main goal of this work was to study the surface electron states of this system and their relation to the electrical properties of the material. The Ag2O doping level was varied in the range 0.1-2.0% by mass; increasing the Ag doping shifts the Fermi level down (closer to the valence band). The paper presents research results on the electrical properties of ZnO-Ag ceramics using the method of thermal vacuum curves of electrical conductivity. Changes in the electrical properties during heating in vacuum in the temperature range 300-800 K were obtained and discussed. Increasing Tvac removes oxygen from the surface of the samples; the oxygen is adsorbed in the form of O2- and O- ions and acts as an acceptor for ZnO. This results in the lowering of the inter-crystallite potential barriers in the ceramic. The surface electron states (SES) above the Fermi level are virtually uncharged, and the increase in conductivity is caused by desorption of oxygen from the SES settled below the Fermi level of the semiconductor. The model allows evaluating the depth of the Fermi level in inhomogeneous semiconductor materials.
Dual arm master controller for a bilateral servo-manipulator
International Nuclear Information System (INIS)
Kuban, D.P.; Perkins, G.S.
1989-01-01
A master controller for a mechanically dissimilar bilateral slave servo-manipulator is disclosed. The master controller includes a plurality of drive trains comprising a plurality of sheave arrangements and cables for controlling upper and lower degrees of master movement. The cables and sheaves of the master controller are arranged to effect kinematic duplication of the slave servo-manipulator, despite mechanical differences therebetween. A method for kinematically matching a master controller to a slave servo-manipulator is also disclosed. 13 figs
Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri
2018-01-01
This paper proposes an advanced state-of-health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used due to their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled under different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed method. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be largely reduced. The method shows great potential for practical application, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
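The Gaussian smoothing step and the extraction of an FOI position can be sketched as follows; the two-peak toy IC curve, noise level, and filter width are assumed values, not the paper's cell data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(5)
V = np.linspace(3.0, 4.2, 600)          # cell voltage, V
# Toy incremental-capacity curve: two Gaussian peaks as "features of interest"
dQdV = (5.0 * np.exp(-((V - 3.5) / 0.05) ** 2)
        + 3.0 * np.exp(-((V - 3.8) / 0.07) ** 2))
noisy = dQdV + rng.normal(0, 0.3, V.size)

# Gaussian-filter smoothing of the noisy IC curve (sigma in samples, assumed)
smooth = gaussian_filter1d(noisy, sigma=5)

# The FOI position (peak voltage) is read from the smoothed curve
peak_v = V[np.argmax(smooth)]
print(f"main IC peak at {peak_v:.2f} V")
```

In the paper's scheme, the voltage positions of such peaks feed a linear regression against measured capacity, giving the SoH estimate from a partial charging curve alone.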
Directory of Open Access Journals (Sweden)
E. O. Adam
2017-11-01
Full Text Available Arid and semi-arid catchments in dry lands generally require especially effective management, as scarcity of the resources and information needed to leverage studies and investigations is their common characteristic. Hydrology is one of the most important elements in the management of resources, and a deep understanding of hydrological responses is the key to better planning and land management. Surface runoff quantification for such ungauged semi-arid catchments is considered among the important challenges. The 7586 km² catchment under investigation is located in a semi-arid region in central Sudan, where the mean annual rainfall of around 250 mm represents the ultimate source for water supply. The objective is to parameterize the hydrological characteristics of the catchment and estimate surface runoff using suitable methods and hydrological models that fit the nature of such ungauged catchments with scarce geospatial information. In order to produce spatial runoff estimations, satellite rainfall was used. Remote sensing and GIS were incorporated in the investigations and in the generation of landcover and soil information. A five-day rainfall event (50.2 mm) was used for the SCS-CN model, which is considered suitable for this catchment, as the SCS curve number (CN) method is widely used for estimating infiltration characteristics depending on landcover and soil properties. Runoff depths of 3.6, 15.7 and 29.7 mm were estimated for the three different Antecedent Moisture Conditions (AMC-I, AMC-II and AMC-III). The estimated runoff depths for AMC-II and AMC-III indicate the possibility of having small artificial surface reservoirs that could provide water for domestic and small household agricultural use.
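The SCS-CN calculation behind those runoff depths can be sketched with the standard formulas: retention S = 25400/CN − 254 (mm), initial abstraction Ia = 0.2·S, and direct runoff Q = (P − Ia)²/(P − Ia + S). The curve number below is illustrative, not the catchment's calibrated value, so this sketch does not reproduce the paper's 3.6/15.7/29.7 mm figures.

```python
def scs_runoff(p_mm, cn):
    """SCS curve-number direct runoff depth (mm) for rainfall depth p_mm,
    using the standard Ia = 0.2*S initial abstraction."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def cn_for_amc(cn2, amc):
    """Convert an AMC-II curve number to AMC-I or AMC-III
    (common curve fits to the standard conversion tables)."""
    if amc == 1:
        return 4.2 * cn2 / (10.0 - 0.058 * cn2)
    if amc == 3:
        return 23.0 * cn2 / (10.0 + 0.13 * cn2)
    return cn2

# The paper's 50.2 mm event with a hypothetical AMC-II curve number of 75:
q2 = scs_runoff(50.2, 75.0)
```

Runoff rises sharply with AMC because the adjusted CN (and thus the effective retention) changes between conditions.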
Kollmann-Camaiora, A; Brogly, N; Alsina, E; Gilsanz, F
2017-10-01
Although ultrasound is a basic competence for anaesthesia residents (AR), there are few data available on the learning process. This prospective observational study aims to assess the learning process of ultrasound-guided continuous femoral nerve block and to determine the number of procedures a resident needs to perform in order to reach proficiency, using the cumulative sum (CUSUM) method. We recruited 19 AR without previous experience. Learning curves were constructed using the CUSUM method for ultrasound-guided continuous femoral nerve block, considering 2 success criteria: a decrease in pain score >2 on a [0-10] scale after 15 minutes, and the time required to perform the block. We analysed data from 17 AR for a total of 237 ultrasound-guided continuous femoral nerve blocks. 8/17 AR became proficient for pain relief; however, all the AR who did more than 12 blocks (8/8) became proficient. As for time of performance, 5/17 AR achieved the objective of 12 minutes; however, all the AR who did more than 20 blocks (4/4) achieved it. The number of procedures needed to achieve proficiency seems to be 12; however, it takes more procedures to reduce performance time. The CUSUM methodology could be useful in training programs to allow early interventions in case of repeated failures, and to develop a competence-based curriculum. Copyright © 2017 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.
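The CUSUM construction used for such learning curves can be sketched in a simplified form: each procedure adds (failure − p0) to a running sum, so the curve trends downward once the trainee's failure rate drops below the acceptable rate p0. The value of p0 and the outcome sequence below are illustrative assumptions, not the study's parameters.

```python
def cusum_curve(outcomes, p0=0.2):
    """Simple CUSUM learning curve: S_n = S_(n-1) + (failure_n - p0).

    outcomes: 1 for failure, 0 for success, in procedure order.
    p0: acceptable failure rate (0.2 is an illustrative choice).
    A sustained downward trend indicates performance better than p0.
    """
    s, curve = 0.0, []
    for failed in outcomes:
        s += failed - p0
        curve.append(s)
    return curve

# A trainee who fails the first 3 blocks, then succeeds on the next 9:
curve = cusum_curve([1, 1, 1] + [0] * 9)
```

Formal LC-CUSUM variants add decision boundaries derived from chosen type-I/type-II error rates; crossing the lower boundary declares proficiency.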
Kobayashi, R.; Koketsu, K.
2008-12-01
Great earthquakes have repeatedly occurred along the Sagami trough, where the Philippine Sea slab is subducting. The 1703 Genroku and 1923 (Taisho) Kanto earthquakes (M 8.2 and M 7.9, respectively) are typical examples, and such events cause severe damage in the metropolitan area. The recurrence intervals inferred from studies of wave-cut terraces are about 200-400 years for Taisho-type and about 2000 years for Genroku-type earthquakes (e.g., Earthquake Research Committee, 2004). We have inferred the source process of the 1923 Kanto earthquake from geodetic, teleseismic, and strong-motion data (Kobayashi and Koketsu, 2005). Two asperities of the 1923 Kanto earthquake are located around the western part of Kanagawa prefecture (the base of the Izu peninsula) and around the Miura peninsula. After we adopted an updated fault plane model, based on a recent model of the Philippine Sea slab, the asperity around the Miura peninsula moved to the north (Sato et al., 2005). We have also investigated the slip distribution of the 1703 Genroku earthquake. We used the crustal uplift and subsidence data investigated by Shishikura (2003), and inferred the slip distribution using the same fault geometry as for the 1923 Kanto earthquake. The peak slip of 16 m is located in the southern part of the Boso peninsula. The shape of the upper surface of the Philippine Sea slab is important for constraining the extent of the asperities well. Sato et al. (2005) presented the shape in the inland part, but there is less information for the oceanic part except for Tokyo bay. Kimura (2006) and Takeda et al. (2007) presented the shape in the oceanic part. In this study, we compiled these slab models and planned to reanalyze the slip distributions of the 1703 and 1923 earthquakes. We developed a new curved fault plane on the plate boundary between the Philippine Sea slab and the inland plate. The curved fault plane was divided into 56 triangular subfaults. Point sources for the Green's function calculations are located at centroids
International Nuclear Information System (INIS)
Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng
2017-01-01
The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by knot addition method (KAM)-based B-spline curve fitting. In KAM, the selection pattern of the initial knot vector determines the ultimate number of knots required. This paper provides a novel initial-knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, namely the chord length (arc length) and bending degree (curvature), contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with a dense uniform knot vector, substituting for a direct description of the features of the sampling points. The feature integral of Gs is built as a monotone increasing function in analytic form. Then, the initial knots are selected according to constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of knots required. (paper)
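The idea of placing knots at equal increments of a feature integral can be sketched as follows. This is a simplified stand-in for the paper's method: the feature here is discrete chord length plus an |second derivative| bending proxy (rather than the analytic integral of an approximating spline Gs), and the weight and knot count are assumptions.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def feature_based_knots(x, y, n_knots, bend_weight=1.0):
    """Place interior knots at equal increments of a feature integral
    combining chord length and bending degree (|y''| as a curvature proxy).
    A sketch of the idea, not the paper's exact formulation."""
    ds = np.hypot(np.diff(x), np.diff(y))            # chord lengths
    d2 = np.abs(np.gradient(np.gradient(y, x), x))   # bending proxy
    # Cumulative feature: arc length plus weighted bending per step.
    feat = np.concatenate(
        ([0.0], np.cumsum(ds + bend_weight * d2[1:] * np.diff(x))))
    targets = np.linspace(0.0, feat[-1], n_knots + 2)[1:-1]
    return np.interp(targets, feat, x)               # interior knot locations

# Densely sampled test curve; fit a cubic least-squares spline on the knots.
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(3.0 * x)
knots = feature_based_knots(x, y, n_knots=16)
spline = LSQUnivariateSpline(x, y, knots, k=3)
```

Knots automatically concentrate where the curve bends strongly, which is what lets a condensed knot vector meet a given error bound.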
Directory of Open Access Journals (Sweden)
Vladimir Lipunov
2010-01-01
Full Text Available The main goal of the MASTER-Net project is to produce a unique fast sky survey, with the whole sky observed over a single night down to a limiting magnitude of 19-20. Such a survey will make it possible to address a number of fundamental problems: search for dark energy via the discovery and photometry of supernovae (including SNIa), search for exoplanets, microlensing effects, discovery of minor bodies in the Solar System, and space-junk monitoring. All MASTER telescopes can be guided by alerts, and we plan to observe prompt optical emission from gamma-ray bursts synchronously in several filters and in several polarization planes.
Directory of Open Access Journals (Sweden)
Vickers Andrew
2010-09-01
Full Text Available Abstract Background Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1) and an analytical, deliberative process (system 2); thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA. Methods First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or do not treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measure the difference in net expected regret. Finally, we employ the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision-maker. Results We developed a novel dual visual analog scale to describe the relationship between regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. treating unnecessarily) and the decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefit as described in the original DCA. Based on the concept of acceptable regret we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed them in terms of probability of disease. Conclusions We present a novel method for eliciting the decision maker's preferences and an alternative derivation of DCA based on regret theory. Our approach may
Tsalatsanis, Athanasios; Hozo, Iztok; Vickers, Andrew; Djulbegovic, Benjamin
2010-09-16
Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1) and an analytical, deliberative process (system 2); thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA. First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or do not treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measure the difference in net expected regret. Finally, we employ the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision-maker. We developed a novel dual visual analog scale to describe the relationship between regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. treating unnecessarily) and the decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefit as described in the original DCA. Based on the concept of acceptable regret we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed them in terms of probability of disease. We present a novel method for eliciting the decision maker's preferences and an alternative derivation of DCA based on regret theory. Our approach may be intuitively more appealing to a decision-maker, particularly
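The net benefit quantity that the regret reformulation is proved equivalent to comes from the original DCA: at threshold probability pt, NB = TP/N − (FP/N)·pt/(1 − pt). A minimal sketch, with a toy cohort invented for illustration:

```python
import numpy as np

def net_benefit(risk, disease, pt):
    """Net benefit of the strategy 'treat if predicted risk >= pt',
    as defined in the original DCA:
    NB = TP/N - (FP/N) * pt / (1 - pt)."""
    treat = risk >= pt
    n = len(disease)
    tp = np.sum(treat & (disease == 1))   # treated and diseased
    fp = np.sum(treat & (disease == 0))   # treated but healthy
    return tp / n - (fp / n) * pt / (1.0 - pt)

# Toy cohort (hypothetical data): 3 diseased, 7 healthy patients.
disease = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
risk    = np.array([0.9, 0.8, 0.7, 0.2, 0.1, 0.2, 0.1, 0.3, 0.1, 0.2])
nb_model = net_benefit(risk, disease, pt=0.5)
nb_all   = net_benefit(np.ones_like(risk), disease, pt=0.5)  # treat-all
```

Plotting net benefit of the model against the treat-all and treat-none (NB = 0) strategies across thresholds gives the decision curve; the regret formulation ranks the same strategies identically.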
Kanter, Rosabeth Moss
1984-01-01
The change masters are identified as corporate managers who have the resources and the vision to effect an economic renaissance in the United States. Strategies for change should emphasize horizontal as well as vertical communication, and should reward enterprise and innovation at all levels. (JB)
Thorn, Alan
2015-01-01
Mastering Unity Scripting is an advanced book intended for students, educators, and professionals familiar with the Unity basics as well as the basics of scripting. Whether you've been using Unity for a short time or are an experienced user, this book has something important and valuable to offer to help you improve your game development workflow.
Groner, Loiane
2013-01-01
Designed to be a structured guide, Mastering Ext JS is full of engaging examples to help you learn in a practical context. This book is for developers who are familiar with Ext JS and want to augment their skills to create even better web applications.
African Journals Online (AJOL)
will be based on the ten clinical domains of family medicine, ... tutors), before finding the model answers online: http://www. ... The series, “Mastering your Fellowship”, provides examples of the question format ... 3.1 What is the argument for the social value of the study? ..... Primary health care re-engineering policy and the.
Why a master's in citizenship?
DEFF Research Database (Denmark)
Korsgaard, Ove
2002-01-01
Danmarks Pædagogiske Universitet, in collaboration with Syddansk Universitet, is planning to offer a master's programme in citizenship: ethical and democratic formation. The author sets out some of the thinking behind the programme and examines why citizenship has become a key concept in recent...
Thomas, Aliki; Han, Lu; Osler, Brittony P; Turnbull, Emily A; Douglas, Erin
2017-03-27
Most health professions, including occupational therapy, have made the application of evidence-based practice a desired competency and professional responsibility. Despite the increasing emphasis on evidence-based practice for improving patient outcomes, there are numerous research-practice gaps in the health professions. In addition to efforts aimed at promoting evidence-based practice with clinicians, there is a strong impetus for university programs to design curricula that will support the development of the knowledge, attitudes, skills and behaviours associated with evidence-based practice. Though occupational therapy curricula in North America are becoming increasingly focused on evidence-based practice, research on students' attitudes towards evidence-based practice, their perceptions regarding the integration and impact of this content within the curricula, and the impact of the curriculum on their readiness for evidence-based practice is scarce. The present study examined occupational therapy students' perceptions towards the teaching and assessment of evidence-based practice within a professional master's curriculum and their self-efficacy for evidence-based practice. The study used a mixed methods explanatory sequential design. The quantitative phase included a cross-sectional questionnaire exploring attitudes towards evidence-based practice, perceptions of the teaching and assessment of evidence-based practice and evidence-based practice self-efficacy for four cohorts of students enrolled in the program and a cohort of new graduates. The questionnaire was followed by a focus group of senior students aimed at further exploring the quantitative findings. All student cohorts held favourable attitudes towards evidence-based practice; there was no difference across cohorts. There were significant differences with regards to perceptions of the teaching and assessment of evidence-based practice within the curriculum; junior cohorts and students with previous
Sun, Yan; Strobel, Johannes; Newby, Timothy J.
2017-01-01
Adopting a two-phase explanatory sequential mixed methods research design, the current study examined the impact of student teaching experiences on pre-service teachers' readiness for technology integration. In the phase-1 quantitative investigation, 2-level growth curve models were fitted using online repeated-measures survey data collected from…
A.R. Ansari; B. Hossain; B. Koren (Barry); G.I. Shishkin (Gregori)
2007-01-01
We investigate the model problem of flow of a viscous incompressible fluid past a symmetric curved surface when the flow is parallel to its axis. This problem is known to exhibit boundary layers. As the problem does not have solutions in closed form, it is modelled by boundary-layer
International Nuclear Information System (INIS)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program treats the mass values of the gravimetric standards as parameters to be fitted along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s. 5 figures
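The key idea of treating the standards' masses as fitted parameters alongside the curve parameters (errors-in-variables fitting) can be sketched as below. This is not the original VA02A-based program: the linear calibration form, the data values, and the error magnitudes are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical gravimetric standards: nominal masses (mg, 0.2% accurate)
# and corresponding detector responses (arbitrary units).
m0 = np.array([0.1, 0.25, 0.5, 0.75, 1.0])
sigma_m = 0.002 * m0                       # 0.2% mass uncertainty
resp = np.array([10.3, 25.1, 49.8, 75.4, 99.9])
sigma_r = 0.5 * np.ones_like(resp)         # assumed response errors

def residuals(params):
    """Joint residuals: calibration-curve misfit AND mass misfit, each
    weighted by its own error, so the standards' masses are themselves
    fitted parameters (errors-in-variables, in the spirit of the paper)."""
    a, b = params[:2]                      # calibration curve r = a + b*m
    m = params[2:]                         # fitted masses of the standards
    return np.concatenate([(resp - (a + b * m)) / sigma_r,
                           (m - m0) / sigma_m])

fit = least_squares(residuals, x0=np.concatenate([[0.0, 100.0], m0]))
a_hat, b_hat = fit.x[:2]
```

Because the mass residuals are weighted by their own (small) uncertainty, the fitted masses stay close to the nominal values while the curve parameters absorb the response scatter, exactly the consistent weighting the abstract describes.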
Climbing the health learning curve together | IDRC - International ...
International Development Research Centre (IDRC) Digital Library (Canada)
2011-01-25
Jan 25, 2011 ... Climbing the health learning curve together ... Many of the projects are creating master's programs at their host universities ... Formerly based in the high Arctic, Atlantis is described by Dr Martin Forde of St George's University ...
Kholeif, S A
2001-06-01
A new method that belongs to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves, obtained with methods of the equivalence point category such as Gran or Fortuin, are also compared with the new method.
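The inverse parabolic interpolation step has a closed-form vertex: fit a parabola through the three derivative points bracketing the extremum and solve for its apex analytically. A minimal sketch on synthetic data (the derivative-peak shape below is an assumption; the paper's four-point non-linear preprocess is not reproduced):

```python
import numpy as np

def endpoint_by_parabola(v, dpH):
    """Locate a titration end point as the extremum of the first-derivative
    curve via three-point inverse parabolic interpolation (analytic vertex).
    v: titrant volumes; dpH: first-derivative values at v."""
    i = int(np.argmax(dpH))
    i = min(max(i, 1), len(v) - 2)        # keep a 3-point stencil in range
    x0, x1, x2 = v[i - 1], v[i], v[i + 1]
    y0, y1, y2 = dpH[i - 1], dpH[i], dpH[i + 1]
    # Standard parabolic-vertex formula through three points:
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den

# Synthetic derivative peak centred at v = 10.30 mL:
v = np.arange(9.0, 11.6, 0.2)
dpH = np.exp(-((v - 10.30) / 0.3) ** 2)
ve = endpoint_by_parabola(v, dpH)
```

Even though the sampled maximum lies at 10.2 or 10.4 mL, the analytic vertex recovers the true peak near 10.30 mL.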
Directory of Open Access Journals (Sweden)
Zhang Guowei
2014-01-01
Full Text Available Based on a full-scale bookcase fire experiment, a fire development model is proposed for the whole process of localized fires in large-space buildings. We found that for localized fires in large-space buildings full of wooden combustible materials, the fire growth phase can be simplified into a t² fire with a 0.0346 kW/s² fire growth coefficient. FDS technology is applied to study the smoke temperature curve for a 2 MW to 25 MW fire occurring within a large space with a height of 6 m to 12 m and a building area of 1500 m² to 10 000 m², based on the proposed fire development model. Through the analysis of smoke temperature in various fire scenarios, a new approach is proposed to predict the smoke temperature curve. Meanwhile, a modified model of steel temperature development in localized fire is built. In the modified model, the localized fire source is treated as a point fire source to evaluate the net heat flux from the flame to the steel. The steel temperature curve in the whole process of a localized fire can thus be accurately predicted by the above findings. The conclusions obtained in this paper could provide a valuable reference for fire simulation, hazard assessment, and fire protection design.
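The t² growth model is a one-line formula: heat release rate Q = α·t², with α = 0.0346 kW/s² as reported above. A minimal sketch (the 2 MW cap is one of the paper's scenario sizes, used here only as an illustrative peak):

```python
def t_squared_hrr(t, alpha=0.0346, q_max_kw=2000.0):
    """Heat release rate (kW) of a t-squared design fire, Q = alpha*t**2,
    capped at the steady peak q_max_kw.  alpha = 0.0346 kW/s^2 is the
    growth coefficient reported in the paper; the cap is illustrative."""
    return min(alpha * t * t, q_max_kw)

def time_to_peak(alpha=0.0346, q_max_kw=2000.0):
    """Time (s) for the growing fire to reach its peak HRR."""
    return (q_max_kw / alpha) ** 0.5

t_peak = time_to_peak()   # time for the growth phase to reach 2 MW
```

With this α, reaching 2 MW takes roughly four minutes, which places the reported coefficient between the standard "medium" and "fast" design-fire growth rates.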
Kuc, Rafal
2013-01-01
A practical tutorial that covers the difficult design, implementation, and management of search solutions. Mastering ElasticSearch is aimed at intermediate users who want to extend their knowledge of ElasticSearch. The topics described in the book are detailed, but we assume that you already know the basics, like the query DSL or data indexing. Advanced users will also find this book useful, as the examples go deep into the internals where needed.
Neeraj, Nishant
2013-01-01
Mastering Apache Cassandra is a practical, hands-on guide with step-by-step instructions. The smooth and easy tutorial approach focuses on showing people how to utilize Cassandra to its full potential. This book is aimed at intermediate Cassandra users. It is best suited for startups where developers have to wear multiple hats: programming, DevOps, release management, convincing clients, and handling failures. No prior knowledge of Cassandra is required.
48 CFR 217.7103-6 - Modification of master agreements.
2010-10-01
... REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE CONTRACTING METHODS AND CONTRACT TYPES SPECIAL CONTRACTING METHODS... only by modifying the master agreement itself. It shall not be changed through a job order. (c) A modification to a master agreement shall not affect job orders issued before the effective date of the...
Transparency masters for mathematics revealed
Berman, Elizabeth
1980-01-01
Transparency Masters for Mathematics Revealed focuses on master diagrams that can be used for transparencies for an overhead projector or duplicator masters for worksheets. The book offers information on a compilation of master diagrams prepared by John R. Stafford, Jr., audiovisual supervisor at the University of Missouri at Kansas City. Some of the transparencies are designed to be shown horizontally. The initial three masters are number lines and grids that can be used in a mathematics course, while the others are adaptations of text figures which are slightly altered in some instances. The
Wan, Xiaomin; Peng, Liubao; Li, Yuanjian
2015-01-01
In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers conducting economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods, 1) the least squares method and 2) the graphical method; and two recently proposed methods, by 3) Hoyle and Henley and 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their ability to estimate mean survival through a simulation study. A number of different scenarios were developed, comprising combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to the actual IPD. The uncertainty in the estimate of mean survival time was also captured. All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more bias was identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty estimate than the Hoyle and Henley method. The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for the fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased than the Hoyle and Henley method.
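The "least squares" reconstruction idea can be sketched for the Weibull case: points digitized from a published curve S(t) = exp(−(t/λ)^k) linearize as ln(−ln S) = k·ln t − k·ln λ, so an ordinary regression recovers the parameters and hence mean survival λ·Γ(1 + 1/k). The data points below are synthetic, generated from an assumed Weibull, not taken from any trial.

```python
import math
import numpy as np

def fit_weibull_from_curve(t, s):
    """Least-squares reconstruction of a Weibull survival model from
    digitized curve points, via the linearization
    ln(-ln S) = k*ln t - k*ln(lambda).
    Returns (shape k, scale lam, mean survival)."""
    y = np.log(-np.log(np.asarray(s)))
    x = np.log(np.asarray(t))
    k, intercept = np.polyfit(x, y, 1)        # slope = k, intercept = -k*ln(lam)
    lam = math.exp(-intercept / k)
    mean = lam * math.gamma(1.0 + 1.0 / k)    # Weibull mean survival
    return k, lam, mean

# Points "read off" a curve generated by Weibull(k=1.5, lam=10):
t = np.array([2.0, 5.0, 8.0, 12.0, 18.0])
s = np.exp(-(t / 10.0) ** 1.5)
k, lam, mean = fit_weibull_from_curve(t, s)
```

On exact curve points the regression recovers the generating parameters; the biases discussed in the abstract arise from digitization error, censoring, and ignoring numbers-at-risk, which the Guyot et al. approach exploits.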
International Nuclear Information System (INIS)
Brandao, Jose Odinilson de C.; Souza, Priscilla L.G.; Santos, Joelan A.L.; Vilela, Eudice C.; Lima, Fabiana F.; Calixto, Merilane S.; Santos, Neide
2009-01-01
There is increasing concern that airline crew members (about one million worldwide) are exposed to measurable neutron doses. Historically, cytogenetic biodosimetry assays have been based on quantifying asymmetrical chromosome alterations (dicentrics, centric rings and acentric fragments) in mitogen-stimulated T-lymphocytes in their first mitosis after radiation exposure. Increased levels of chromosome damage in peripheral blood lymphocytes are a sensitive indicator of radiation exposure, and they are routinely exploited for assessing radiation absorbed dose after accidental or occupational exposure. Since radiological accidents are not common, not all nations feel that it is economically justified to maintain biodosimetry competence. However, dependable access to biological dosimetry capabilities is critical in the event of an accident. In this paper the dose-response curve was measured for the induction of chromosomal alterations in peripheral blood lymphocytes after chronic exposure in vitro to a mixed neutron-gamma field. Blood was obtained from one healthy donor and exposed to two neutron-gamma mixed fields from 241 AmBe sources (20 Ci) at the Neutron Calibration Laboratory (NCL-CRCN/NE-PE-Brazil). The evaluated absorbed doses were 0.2 Gy, 1.0 Gy and 2.5 Gy. Dicentric chromosomes were observed at metaphase following colcemid accumulation, and 1000 well-spread metaphase figures, stained with 5% Giemsa, were analyzed for the presence of dicentrics by two experienced scorers. Our preliminary results showed a linear dependence between radiation absorbed dose and dicentric chromosome frequency. The dose-response curve described in this paper will contribute to the construction of the calibration curve that will be used in our laboratory for biological dosimetry. (author)
Energy Technology Data Exchange (ETDEWEB)
Brandao, Jose Odinilson de C.; Souza, Priscilla L.G.; Santos, Joelan A.L.; Vilela, Eudice C.; Lima, Fabiana F., E-mail: jodinilson@cnen.gov.b, E-mail: fflima@cnen.gov.b, E-mail: jasantos@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Calixto, Merilane S.; Santos, Neide, E-mail: santos_neide@yahoo.com.b [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Dept. de Genetica
2009-07-01
There is increasing concern that airline crew members (about one million worldwide) are exposed to measurable neutron doses. Historically, cytogenetic biodosimetry assays have been based on quantifying asymmetrical chromosome alterations (dicentrics, centric rings and acentric fragments) in mitogen-stimulated T-lymphocytes in their first mitosis after radiation exposure. Increased levels of chromosome damage in peripheral blood lymphocytes are a sensitive indicator of radiation exposure, and they are routinely exploited for assessing radiation absorbed dose after accidental or occupational exposure. Since radiological accidents are not common, not all nations feel that it is economically justified to maintain biodosimetry competence. However, dependable access to biological dosimetry capabilities is critical in the event of an accident. In this paper the dose-response curve was measured for the induction of chromosomal alterations in peripheral blood lymphocytes after chronic exposure in vitro to a mixed neutron-gamma field. Blood was obtained from one healthy donor and exposed to two neutron-gamma mixed fields from {sup 241}AmBe sources (20 Ci) at the Neutron Calibration Laboratory (NCL-CRCN/NE-PE-Brazil). The evaluated absorbed doses were 0.2 Gy, 1.0 Gy and 2.5 Gy. Dicentric chromosomes were observed at metaphase following colcemid accumulation, and 1000 well-spread metaphase figures, stained with 5% Giemsa, were analyzed for the presence of dicentrics by two experienced scorers. Our preliminary results showed a linear dependence between radiation absorbed dose and dicentric chromosome frequency. The dose-response curve described in this paper will contribute to the construction of the calibration curve that will be used in our laboratory for biological dosimetry. (author)
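Fitting the linear dose-response Y = c + α·D reported above, and inverting it for dose estimation, is a short calculation. The dicentric frequencies below are hypothetical placeholders (the paper's counts are not given in the abstract); only the three dose points match the study.

```python
import numpy as np

# Doses irradiated in the study (Gy) and hypothetical dicentric
# frequencies per cell - illustrative values, NOT the paper's data.
dose = np.array([0.2, 1.0, 2.5])
dic_per_cell = np.array([0.01, 0.05, 0.12])

# Linear dose-response Y = c + alpha*D, the form expected to dominate
# for a high-LET (neutron-dominated) mixed field:
alpha, c = np.polyfit(dose, dic_per_cell, 1)

def dose_from_frequency(y):
    """Invert the fitted calibration curve to estimate absorbed dose (Gy)."""
    return (y - c) / alpha
```

In a real calibration the counts would be fitted by Poisson maximum likelihood with confidence limits, but the inversion step for dose assessment is the same.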
Tanioka, Y.; Miranda, G. J. A.; Gusman, A. R.
2017-12-01
Recently, tsunami early warning techniques have been improved using tsunami waveforms observed at ocean-bottom pressure gauges such as the NOAA DART system or the DONET and S-NET systems in Japan. However, for early warning of near-field tsunamis, it is essential to determine appropriate source models by seismological analysis before large tsunamis hit the coast, especially for tsunami earthquakes, which generate significantly large tsunamis. In this paper, we develop a technique to determine appropriate source models from which appropriate tsunami inundation along the coast can be numerically computed. The technique is tested on four large earthquakes which occurred off Central America: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). In this study, fault parameters were estimated from the W-phase inversion, and the fault length and width were then determined from scaling relationships. At first, the slip amount was calculated from the seismic moment with a constant rigidity of 3.5 × 10¹⁰ N/m². The tsunami numerical simulation was carried out and compared with the observed tsunami. For the 1992 Nicaragua tsunami earthquake, the computed tsunami was much smaller than the observed one. For the 2004 El Astillero earthquake, the computed tsunami was overestimated. In order to solve this problem, we constructed a depth-dependent rigidity curve, similar to that suggested by Bilek and Lay (1999). The curve, with a central depth estimated by the W-phase inversion, was used to calculate the slip amount of the fault model. Using those new slip amounts, the tsunami numerical simulation was carried out again. The observed tsunami heights, run-up heights, and inundation areas for the 1992 Nicaragua tsunami earthquake were then well explained by the computed ones. The tsunamis from the other three earthquakes were also reasonably well explained
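The slip calculation at the heart of this procedure is D = M0/(μ·L·W), with M0 from the moment magnitude via M0 = 10^(1.5·Mw + 9.1) N·m. A minimal sketch; the fault dimensions and the lower "shallow" rigidity value below are illustrative assumptions, not the paper's scaling-law or Bilek-and-Lay values.

```python
def slip_from_moment(mw, length_km, width_km, rigidity):
    """Average fault slip (m) from moment magnitude:
    M0 = 10**(1.5*Mw + 9.1) N*m,  D = M0 / (mu * L * W)."""
    m0 = 10.0 ** (1.5 * mw + 9.1)          # seismic moment (N*m)
    area = length_km * 1e3 * width_km * 1e3  # fault area (m^2)
    return m0 / (rigidity * area)

# 1992 Nicaragua-like event (Mw 7.7) on a hypothetical 100 km x 40 km fault:
slip_const   = slip_from_moment(7.7, 100.0, 40.0, 3.5e10)  # paper's constant mu
slip_shallow = slip_from_moment(7.7, 100.0, 40.0, 1.0e10)  # assumed low shallow mu
```

This is why the depth-dependent rigidity fixes the 1992 underestimate: for the same seismic moment, a lower rigidity at shallow depth implies proportionally larger slip, and hence a larger computed tsunami.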
Energy Technology Data Exchange (ETDEWEB)
Fertitta, E.; Paulus, B. [Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Barcza, G.; Legeza, Ö. [Strongly Correlated Systems “Lendület” Research Group, Wigner Research Centre for Physics, P.O. Box 49, Budapest (Hungary)
2015-09-21
The method of increments (MoI) has been employed using the complete active space formalism in order to calculate the dissociation curve of beryllium ring-shaped clusters Be{sub n} of different sizes. Benchmarks obtained through different quantum chemical methods including the ab initio density matrix renormalization group were used to verify the validity of the MoI truncation which showed a reliable behavior for the whole dissociation curve. Moreover we investigated the size dependence of the correlation energy at different interatomic distances in order to extrapolate the values for the periodic chain and to discuss the transition from a metal-like to an insulator-like behavior of the wave function through quantum chemical considerations.
International Nuclear Information System (INIS)
Civalek, Oemer
2005-01-01
The nonlinear dynamic response of doubly curved shallow shells resting on a Winkler-Pasternak elastic foundation has been studied for step and sinusoidal loadings. Dynamic analogues of Von Karman-Donnell type shell equations are used. Clamped immovable and simply supported immovable boundary conditions are considered. The governing nonlinear partial differential equations of the shell are discretized in the space and time domains using the harmonic differential quadrature (HDQ) and finite difference (FD) methods, respectively. The accuracy of the proposed HDQ-FD coupled methodology is demonstrated by numerical examples. The shear parameter G of the Pasternak foundation and the stiffness parameter K of the Winkler foundation have been found to have a significant influence on the dynamic response of the shell. It is concluded from the present study that the HDQ-FD methodology is a simple, efficient, and accurate method for the nonlinear analysis of doubly curved shallow shells resting on a two-parameter elastic foundation.
International Nuclear Information System (INIS)
Shishkin, Yu.L.
2007-01-01
A portable, lightweight, low-cost apparatus 'Phasafot' and a method for determining pour and cloud points of petroleum products, as well as precipitation and melting temperatures of paraffins, in transparent (diesel fuels), semi-transparent (lube oils) and opaque (crude oils) samples are described. The method consists in illuminating the surface of a sample with an oblique light beam and registering the intensity of specularly reflected light while heating/cooling the sample in the temperature range of its structural transitions. The mirror reflection of a light beam from an ideally smooth liquid surface falls in intensity when the surface becomes rough (dim) due to crystal formation. Simultaneous recording of the temperature ramp curve and the mirror reflection curve enables the determination of the beginning and end of crystallization of paraffins in both transparent and opaque petroleum products. In addition, the rheological properties can be accurately determined by rocking or tilting the instrument while monitoring the sample movement via its mirror reflection.
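The onset determination implied by the simultaneous recording can be sketched as a simple threshold test on the reflection-intensity curve; the three-reading baseline window and the 5% drop threshold below are our own illustrative choices, not values from the apparatus description:

```python
def crystallization_onset(temps, intensities, drop_fraction=0.05):
    """Return the first temperature during cooling at which specular-reflection
    intensity falls more than drop_fraction below the baseline established
    over the first three (still-liquid) readings; None if no drop occurs."""
    baseline = sum(intensities[:3]) / 3.0
    threshold = baseline * (1.0 - drop_fraction)
    for temp, inten in zip(temps, intensities):
        if inten < threshold:
            return temp
    return None

# cooling ramp: the surface dims as paraffin crystals form near 24 degrees C
temps = [30, 28, 26, 24, 22, 20]
intensities = [100, 100, 99, 93, 80, 60]
print(crystallization_onset(temps, intensities))  # 24
```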
Topographic characterization of nanostructures on curved polymer surfaces
DEFF Research Database (Denmark)
Feidenhans'l, Nikolaj Agentoft; Petersen, Jan C.; Taboryski, Rafael J.
2014-01-01
The availability of portable instrumentation for characterizing surface topography on the micro- and nanometer scale is very limited. In particular, the handling of curved surfaces, both concave and convex, is complicated or not possible on current instrumentation. However, the currently growing use of injection moulding of polymer parts featuring nanostructured surfaces requires an instrument that can characterize these structures to ensure replication-confidence between master structure and replicated polymer parts. This project concerns the development of a metrologically traceable quality control method with a portable instrument that can be used in a production environment, and topographically characterize nanometer-scale surface structures on both flat and curved surfaces. To facilitate the commercialization of injection moulded polymer parts featuring nanostructures, it is pivotal...
Simorgh, L; Torkaman, G; Firoozabadi, S M
2008-01-01
This study examined the effect of tripolar TENS of the vertebral column on the activity of slow and fast motoneurons in 10 healthy non-athlete women aged 22.7 ± 2.21 years. H-reflex recovery curves of the soleus (slow) and gastrocnemius (fast) muscles were recorded before and after applying tripolar TENS. For recording of these curves, rectangular paired stimuli were applied to the tibial nerve (interstimulus intervals of 40-520 ms, frequency of 0.2 Hz and pulse width of 600 μs). Our findings showed that maximum H-reflex recovery in the gastrocnemius muscle appeared at the shorter ISIs, while in the soleus muscle it appeared at the longer ISIs, and its amplitude slightly decreased after applying tripolar TENS. It is suggested that tripolar TENS excites not only the skin but also Ia and Ib afferents in the dorsal column. Synaptic interaction of these afferents in the spinal cord causes the inhibition of type I motoneurons and facilitation of type II motoneurons. This effect can be used in muscle tone modulation.
International Nuclear Information System (INIS)
Medeiros, Marcos P.C.; Rebello, Wilson F.; Andrade, Edson R.; Silva, Ademir X.
2015-01-01
Nuclear explosions are usually described in terms of their total yield and the associated shock wave, thermal radiation and nuclear radiation effects. The nuclear radiation produced in such events has several components, consisting mainly of alpha and beta particles, neutrinos, X-rays, neutrons and gamma rays. For practical purposes, the radiation from a nuclear explosion is divided into 'initial nuclear radiation', referring to what is emitted within one minute after the detonation, and 'residual nuclear radiation', covering everything else. The initial nuclear radiation can itself be split between 'instantaneous' or 'prompt' radiation, which involves neutrons and gamma rays from fission and from interactions between neutrons and nuclei of surrounding materials, and 'delayed' radiation, comprising emissions from the decay of fission products and from interactions of neutrons with nuclei of the air. This work presents isodose curve calculations at ground level by Monte Carlo simulation, allowing risk assessment and consequence modelling in a radiation protection context. The isodose curves relate to neutrons produced by the prompt nuclear radiation from a hypothetical nuclear explosion with a total yield of 20 kt. Neutron fluence and emission spectrum were based on data available in the literature. Doses were calculated as the neutron ambient dose equivalent H*(10). (author)
Energy Technology Data Exchange (ETDEWEB)
Medeiros, Marcos P.C.; Rebello, Wilson F.; Andrade, Edson R., E-mail: rebello@ime.eb.br, E-mail: daltongirao@yahoo.com.br [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Secao de Engenharia Nuclear; Silva, Ademir X., E-mail: ademir@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear
2015-07-01
Nuclear explosions are usually described in terms of their total yield and the associated shock wave, thermal radiation and nuclear radiation effects. The nuclear radiation produced in such events has several components, consisting mainly of alpha and beta particles, neutrinos, X-rays, neutrons and gamma rays. For practical purposes, the radiation from a nuclear explosion is divided into 'initial nuclear radiation', referring to what is emitted within one minute after the detonation, and 'residual nuclear radiation', covering everything else. The initial nuclear radiation can itself be split between 'instantaneous' or 'prompt' radiation, which involves neutrons and gamma rays from fission and from interactions between neutrons and nuclei of surrounding materials, and 'delayed' radiation, comprising emissions from the decay of fission products and from interactions of neutrons with nuclei of the air. This work presents isodose curve calculations at ground level by Monte Carlo simulation, allowing risk assessment and consequence modelling in a radiation protection context. The isodose curves relate to neutrons produced by the prompt nuclear radiation from a hypothetical nuclear explosion with a total yield of 20 kt. Neutron fluence and emission spectrum were based on data available in the literature. Doses were calculated as the neutron ambient dose equivalent H*(10). (author)
Signature Curves Statistics of DNA Supercoils
Shakiban, Cheri; Lloyd, Peter
2004-01-01
In this paper we describe the Euclidean signature curves for two dimensional closed curves in the plane and their generalization to closed space curves. The focus will be on discrete numerical methods for approximating such curves. Further we will apply these numerical methods to plot the signature curves related to three-dimensional simulated DNA supercoils. Our primary focus will be on statistical analysis of the data generated for the signature curves of the supercoils. We will try to esta...
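For the discrete approximation stage, the curvature entering the Euclidean signature can be estimated at each vertex from the circumscribed circle of three consecutive points; the sketch below shows one common discretization (our choice of scheme, not necessarily the authors'):

```python
import math

def discrete_curvature(points):
    """Approximate curvature at each vertex of a closed plane curve via the
    circumradius of three consecutive points: kappa = 4*Area / (|a||b||c|)."""
    n = len(points)
    ks = []
    for i in range(n):
        p0, p1, p2 = points[i - 1], points[i], points[(i + 1) % n]
        a = math.dist(p0, p1)
        b = math.dist(p1, p2)
        c = math.dist(p0, p2)
        # twice the triangle area from the cross product
        area2 = abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                    - (p2[0] - p0[0]) * (p1[1] - p0[1]))
        ks.append(2.0 * area2 / (a * b * c) if a * b * c else 0.0)
    return ks

def signature(points):
    """(kappa, dkappa/ds) pairs: the Euclidean signature, invariant under
    rotations and translations of the curve."""
    ks = discrete_curvature(points)
    n = len(points)
    sig = []
    for i in range(n):
        ds = math.dist(points[i - 1], points[(i + 1) % n])
        dk = (ks[(i + 1) % n] - ks[i - 1]) / ds if ds else 0.0
        sig.append((ks[i], dk))
    return sig

# sanity check: a circle of radius 2 has constant curvature 1/2, dkappa/ds = 0
circle = [(2 * math.cos(2 * math.pi * k / 100),
           2 * math.sin(2 * math.pi * k / 100)) for k in range(100)]
print(signature(circle)[0])  # approximately (0.5, 0.0)
```

For a circle the signature collapses to a single point in the (kappa, dkappa/ds) plane; for a DNA supercoil it traces a characteristic closed curve whose statistics can then be analyzed.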
Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.
2018-03-01
Methods to estimate the strain-life curve, divided here into three categories (simple approximations, artificial neural network-based approaches, and continuum damage mechanics models), were examined, and their accuracy was assessed in the strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. The simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed inconsistency in the estimation of fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting the early stages of crack initiation, but requires more experimental data for calibration than the simple approximations. As a result of the different theories underlying the analyzed methods, the approaches have different strengths and weaknesses. However, the group of parametric equations categorized as simple approximations was found to be the easiest for practical use, their applicability having already been verified for a broad range of materials.
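As an example of the "simple approximations" category, Manson's universal slopes formula estimates the strain-life curve from the ultimate tensile strength, Young's modulus, and reduction of area alone; the material values below are invented for illustration and are not the tested steel's properties:

```python
import math

def universal_slopes(n_cycles, sigma_u_mpa, e_mpa, reduction_of_area):
    """Manson's universal slopes estimate of the total strain range:
    delta_eps = 3.5*(sigma_u/E)*N**-0.12 + eps_f**0.6 * N**-0.6,
    with true fracture ductility eps_f = ln(1 / (1 - RA))."""
    eps_f = math.log(1.0 / (1.0 - reduction_of_area))
    elastic = 3.5 * (sigma_u_mpa / e_mpa) * n_cycles ** -0.12
    plastic = eps_f ** 0.6 * n_cycles ** -0.6
    return elastic + plastic

# hypothetical high-strength steel: sigma_u = 900 MPa, E = 210 GPa, RA = 60%
for n in (1e3, 1e4, 1e5):
    print(n, universal_slopes(n, 900.0, 210e3, 0.60))
```

Only monotonic tensile properties appear in the formula, which is exactly why this family of estimates needs no fatigue testing, at the cost of the fixed "universal" exponents.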
Palamar, Todd
2011-01-01
The exclusive, official guide to the very latest version of Maya Get extensive, hands-on, intermediate to advanced coverage of Autodesk Maya 2012, the top-selling 3D software on the market. If you already know Maya basics, this authoritative book takes you to the next level. From modeling, texturing, animation, and visual effects to high-level techniques for film, television, games, and more, this book provides professional-level Maya instruction. With pages of scenarios and examples from some of the leading professionals in the industry, author Todd Palamar will help you master the entire CG
Keller, Eric
2010-01-01
A beautifully-packaged, advanced reference on the very latest version of Maya. If you already know the basics of Maya, the latest version of this authoritative book takes you to the next level. From modeling, texturing, animation, and visual effects to high-level techniques for film, television, games, and more, this book provides professional-level Maya instruction. With pages of scenarios and examples from some of the leading professionals in the industry, this book will help you master the entire CG production pipeline.: Provides professional-level instruction on Maya, the industry-leading
Feng, Dai; Cortese, Giuliana; Baumgartner, Richard
2017-12-01
The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on the CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
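The Mann-Whitney estimator of the AUC, together with one classic large-sample variance formula (Hanley-McNeil), can be sketched as follows; the marker values are fabricated for illustration, and this particular CI is only one of the many variants such comparisons cover:

```python
import math
from itertools import product

def auc_mann_whitney(pos, neg):
    """Nonparametric AUC estimate: P(X_pos > X_neg) + 0.5 * P(tie)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

def auc_ci_hanley_mcneil(pos, neg, z=1.96):
    """Hanley-McNeil (1982) large-sample confidence interval for the AUC,
    clipped to [0, 1]; known to be rough for very small samples."""
    a = auc_mann_whitney(pos, neg)
    n1, n0 = len(pos), len(neg)
    q1 = a / (2.0 - a)
    q2 = 2.0 * a * a / (1.0 + a)
    var = (a * (1 - a) + (n1 - 1) * (q1 - a * a)
           + (n0 - 1) * (q2 - a * a)) / (n1 * n0)
    se = math.sqrt(max(var, 0.0))
    return max(0.0, a - z * se), min(1.0, a + z * se)

diseased = [2.1, 2.7, 3.0, 3.4, 4.2]   # hypothetical marker values
healthy = [1.2, 1.8, 2.2, 2.5, 2.9]
print(auc_mann_whitney(diseased, healthy))     # 0.84
print(auc_ci_hanley_mcneil(diseased, healthy))
```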
DEFF Research Database (Denmark)
Gardner, Ian A.; Greiner, Matthias
2006-01-01
Receiver-operating characteristic (ROC) curves provide a cutoff-independent method for the evaluation of continuous or ordinal tests used in clinical pathology laboratories. The area under the curve is a useful overall measure of test accuracy and can be used to compare different tests (or different equipment) used by the same tester, as well as the accuracy of different diagnosticians that use the same test material. To date, ROC analysis has not been widely used in veterinary clinical pathology studies, although it should be considered a useful complement to estimates of sensitivity and specificity in test evaluation studies. In addition, calculation of likelihood ratios can potentially improve the clinical utility of such studies because likelihood ratios provide an indication of how the post-test probability changes as a function of the magnitude of the test results. For ordinal test...
Directory of Open Access Journals (Sweden)
Yu Xiu-Juan
2007-10-01
Full Text Available Abstract Background The nucleotide compositional asymmetry between the leading and lagging strands in bacterial genomes has been the subject of intensive study in the past few years. It is interesting to mention that almost all bacterial genomes exhibit the same kind of base asymmetry. This work aims to investigate the strand biases in the Chlamydia muridarum genome and show the potential of the Z curve method for quantitatively differentiating genes on the leading and lagging strands. Results The occurrence frequencies of bases of protein-coding genes in the C. muridarum genome were analyzed by the Z curve method. It was found that genes located on the two strands of replication have distinct base usages in the C. muridarum genome. According to their positions in the 9-D space spanned by the variables u1 – u9 of the Z curve method, the K-means clustering algorithm can assign about 94% of genes to the correct strands, which is a few percent higher than those correctly classified by K-means based on the RSCU. The base usage and codon usage analyses show that genes on the leading strand have more G than C and more T than A, particularly at the third codon position; for genes on the lagging strand the bias is reversed. The y component of the Z curves for the complete chromosome sequences shows that the excesses of G over C and T over A are more remarkable in the C. muridarum genome than in other bacterial genomes without separating base and/or codon usages. Furthermore, for the genomes of Borrelia burgdorferi, Treponema pallidum, Chlamydia muridarum and Chlamydia trachomatis, in which distinct base and/or codon usages have been observed, a closer phylogenetic distance is found compared with other bacterial genomes. Conclusion The nature of the strand biases of base composition in C. muridarum is similar to that in most other bacterial genomes. However, the base composition asymmetry between the leading and lagging strands in C. muridarum is more significant than that in
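The Z curve itself is straightforward to compute from cumulative base counts; a minimal sketch of the three standard whole-sequence components is below (the gene-level variables u1 – u9 used for clustering in the study are a related but richer construction):

```python
def z_curve(seq):
    """Cumulative Z-curve components of a DNA sequence:
    x_n = (A+G) - (C+T)   purine vs pyrimidine
    y_n = (A+C) - (G+T)   amino vs keto
    z_n = (A+T) - (G+C)   weak vs strong hydrogen bonding"""
    a = c = g = t = 0
    xs, ys, zs = [], [], []
    for base in seq.upper():
        if base == 'A':
            a += 1
        elif base == 'C':
            c += 1
        elif base == 'G':
            g += 1
        elif base == 'T':
            t += 1
        xs.append((a + g) - (c + t))
        ys.append((a + c) - (g + t))
        zs.append((a + t) - (g + c))
    return xs, ys, zs

xs, ys, zs = z_curve("ATGGCGT")  # toy 7-base sequence
print(xs[-1], ys[-1], zs[-1])    # 1 -3 -1
```

Plotting y_n along a chromosome reveals the strand switch at the replication origin and terminus, which is the basis of the strand-bias comparison described in the abstract.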
Lowe, David; Machin, Graham
2012-06-01
The future mise en pratique for the realization of the kelvin will be founded on the melting temperatures of particular metal-carbon eutectic alloys as thermodynamic temperature references. However, at the moment there is no consensus on what should be taken as the melting temperature. An ideal melting or freezing curve should be a completely flat plateau at a specific temperature. Any departure from the ideal is due to shortcomings in the realization and should be accommodated within the uncertainty budget. However, for the proposed alloy-based fixed points, melting takes place over typically some hundreds of millikelvins. Including the entire melting range within the uncertainties would lead to an unnecessarily pessimistic view of the utility of these as reference standards. Therefore, detailed analysis of the shape of the melting curve is needed to give a value associated with some identifiable aspect of the phase transition. A range of approaches are or could be used; some purely practical, determining the point of inflection (POI) of the melting curve, some attempting to extrapolate to the liquidus temperature just at the end of melting, and a method that claims to give the liquidus temperature and an impurity correction based on the analytical Scheil model of solidification that has not previously been applied to eutectic melting. The different methods have been applied to cobalt-carbon melting curves that were obtained under conditions for which the Scheil model might be valid. In the light of the findings of this study it is recommended that the POI continue to be used as a pragmatic measure of temperature but where required a specified limits approach should be used to define and determine the melting temperature.
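The point-of-inflection determination can be illustrated on a synthetic sigmoidal melting curve; practical implementations fit a smooth function before differentiating, whereas this sketch (our simplification) simply takes central differences and locates the slope extremum:

```python
import math

def point_of_inflection(times, temps):
    """Return (time, temperature) at the POI of a melting curve, located as
    the maximum of the first derivative dT/dt via central differences."""
    best_i, best_slope = 1, float("-inf")
    for i in range(1, len(times) - 1):
        slope = (temps[i + 1] - temps[i - 1]) / (times[i + 1] - times[i - 1])
        if slope > best_slope:
            best_slope, best_i = slope, i
    return times[best_i], temps[best_i]

# synthetic melting curve: an S-shaped rise of a few hundred millikelvin
# centred at t = 5, around a nominal 1597 K plateau (illustrative values)
times = [0.1 * k for k in range(101)]
temps = [1597.0 + 0.15 * math.tanh((t - 5.0) / 1.5) for t in times]
poi_time, poi_temp = point_of_inflection(times, temps)
print(poi_time, poi_temp)  # POI near t = 5, T = 1597.0
```

The POI is attractive precisely because it is insensitive to where one judges melting to begin or end, whereas liquidus-extrapolation and Scheil-based methods must model the shape of the whole curve.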
International Nuclear Information System (INIS)
Rickwood, Peter
2013-01-01
Continuing global efforts to improve the security of nuclear and other radioactive material against the threat of malicious acts are being assisted by a new initiative, the development of a corps of professional experts to strengthen nuclear security. The IAEA, the European Commission, universities, research institutions and other bodies working in collaboration have established an International Nuclear Security Education Network (INSEN). In 2011, six European academic institutions, the Vienna University of Technology, the Brandenburg University of Applied Sciences, the Demokritos National Centre for Scientific Research in Greece, the Reactor Institute Delft of the Delft University of Technology in the Netherlands, the University of Oslo, and the University of Manchester Dalton Nuclear Institute, started developing a European Master of Science Programme in Nuclear Security Management. In March 2013, the masters project was inaugurated when ten students commenced studies at the Brandenburg University of Applied Sciences in Germany for two weeks. In April, they moved to the Delft University of Technology in the Netherlands for a further two weeks of studies. The pilot programme consists of six teaching sessions in different academic institutions. At the inauguration in Delft, IAEA Director General Yukiya Amano commended this effort to train a new generation of experts who can help to improve global nuclear security. "It is clear that we will need a new generation of policy-makers and nuclear professionals - people like you - who will have a proper understanding of the importance of nuclear security," Mr. Amano told students and faculty members. "The IAEA's goal is to support the development of such programmes on a global basis," said David Lambert, Senior Training Officer in the IAEA's Office of Nuclear Security. "An existing postgraduate degree programme focused on nuclear security at Naif Arab University for Security Sciences (NAUSS) is currently supported by
Method for determining scan timing based on analysis of formation process of the time-density curve
International Nuclear Information System (INIS)
Yamaguchi, Isao; Ishida, Tomokazu; Kidoya, Eiji; Higashimura, Kyoji; Suzuki, Masayuki
2005-01-01
A strict determination of scan timing is needed for dynamic multi-phase scanning and 3D-CT angiography (3D-CTA) by multi-detector row CT (MDCT). In the present study, the contrast media arrival time (T_AR) was measured in the abdominal aorta at the bifurcation of the celiac artery to confirm circulatory differences among patients. In addition, we analyzed the formation process of the time-density curve (TDC) and examined factors that affect the time to peak aortic enhancement (T_PA). Mean T_AR was 15.57 ± 3.75 s. TDCs were plotted for each duration of injection. The rising portions of the TDCs were superimposed on one another, and TDCs with longer injection durations piled up upon one another. The rise angle was approximately constant for each flow rate. The rise time (T_R) showed a good correlation with the injection duration (T_ID): T_R = 1.01 T_ID (R² = 0.994) in the phantom study and T_R = 0.94 T_ID - 0.60 (R² = 0.988) in the clinical study. In conclusion, for the selection of optimal scan timing it is useful to determine T_R at a given point and to measure the time from T_AR. (author)
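The reported regressions translate into a simple scan-delay estimate, sketched below; the 25 s injection duration is an invented example, and treating the peak time as T_AR + T_R is our reading of the conclusion:

```python
def time_to_peak(t_ar, t_id, clinical=True):
    """Estimated time of peak aortic enhancement: contrast arrival time plus
    rise time, using the regressions from the abstract:
    T_R = 1.01 * T_ID          (phantom study)
    T_R = 0.94 * T_ID - 0.60   (clinical study)"""
    t_r = 0.94 * t_id - 0.60 if clinical else 1.01 * t_id
    return t_ar + t_r

# e.g. the mean arrival time of 15.57 s with a 25 s injection
print(time_to_peak(15.57, 25.0))  # about 38.5 s
```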
International Nuclear Information System (INIS)
Grevel, J.; Napoli, K.L.; Gibbons, S.; Kahan, B.D.
1990-01-01
The measurement of areas under the concentration-time curve (AUC) was recently introduced as an alternative to trough level monitoring of cyclosporine therapy. The AUC is divided by the oral dosing interval to calculate an average concentration. All measurements are performed at clinical steady state. The initial evaluation of AUC monitoring showed advantages over trough level monitoring with concentrations of cyclosporine measured in serum by the polyclonal radioimmunoassay of Sandoz. This assay technique is no longer available and the following assays were performed in parallel during up to 173 AUC determinations in 51 consecutive renal transplant patients: polyclonal fluorescence polarization immunoassay of Abbott in serum, specific and nonspecific monoclonal radioimmunoassays using ³H and ¹²⁵I tracers in serum and whole blood, and high performance liquid chromatography in whole blood. Both trough levels and average concentrations at steady state measured by those different techniques were significantly correlated with the oral dose. The best correlation (r² = 0.54) was shown by average concentrations measured in whole blood by the specific monoclonal radioimmunoassay of Sandoz (³H tracer). This monitoring technique was also associated with the smallest absolute error between repeated observations in the same patient while the oral dose rate remained the same or was changed. Both allegedly specific monoclonal radioimmunoassays (with ³H and ¹²⁵I tracer) measured significantly higher concentrations than the liquid chromatography
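The AUC-to-average-concentration step described here is a direct application of the trapezoidal rule over one dosing interval; the concentration profile below is fabricated for illustration:

```python
def auc_trapezoid(times, concs):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for (t0, c0), (t1, c1) in zip(zip(times, concs),
                                             zip(times[1:], concs[1:])))

def average_concentration(times, concs, dosing_interval):
    """C_avg = AUC over one dosing interval divided by that interval."""
    return auc_trapezoid(times, concs) / dosing_interval

times = [0, 1, 2, 4, 8, 12]             # h after the oral dose
concs = [250, 900, 700, 450, 300, 250]  # ng/mL, hypothetical profile
print(average_concentration(times, concs, 12.0))  # AUC/tau
```

The average concentration, unlike a single trough level, reflects the whole exposure over the dosing interval, which is the rationale for AUC monitoring.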
Bernstein, D.J.; Birkner, P.; Lange, T.; Peters, C.P.
2013-01-01
This paper introduces EECM-MPFQ, a fast implementation of the elliptic-curve method of factoring integers. EECM-MPFQ uses fewer modular multiplications than the well-known GMP-ECM software, takes less time than GMP-ECM, and finds more primes than GMP-ECM. The main improvements above the
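The core idea of the elliptic-curve method can be sketched in a few lines. EECM-MPFQ and GMP-ECM use twisted Edwards and Montgomery curve arithmetic with stage 2 and many further optimizations; the toy stage 1 below (our simplification) uses plain Weierstrass curves and a tiny smoothness bound:

```python
import math
from random import randrange

def ecm_factor(n, b1=50, tries=200):
    """Stage 1 of Lenstra's elliptic-curve method (toy sketch).
    Pick a random curve y^2 = x^3 + a*x + b (mod n) through a random point P
    and compute (b1-1)! * P; a failed modular inversion exposes a factor."""
    def ec_add(p, q, a):
        if p is None: return q
        if q is None: return p
        if p[0] == q[0] and (p[1] + q[1]) % n == 0:
            return None                        # point at infinity
        if p == q:
            num, den = (3 * p[0] * p[0] + a) % n, (2 * p[1]) % n
        else:
            num, den = (q[1] - p[1]) % n, (q[0] - p[0]) % n
        g = math.gcd(den, n)
        if g > 1:
            raise ZeroDivisionError(g)         # gcd may be a proper factor
        lam = num * pow(den, -1, n) % n
        x = (lam * lam - p[0] - q[0]) % n
        return (x, (lam * (p[0] - x) - p[1]) % n)

    def ec_mul(k, p, a):                       # double-and-add
        r = None
        while k:
            if k & 1:
                r = ec_add(r, p, a)
            p = ec_add(p, p, a)
            k >>= 1
        return r

    for _ in range(tries):
        x, y, a = randrange(1, n), randrange(1, n), randrange(1, n)
        p = (x, y)           # b is implied by forcing P onto the curve
        try:
            for k in range(2, b1):
                p = ec_mul(k, p, a)
                if p is None:
                    break                      # smooth mod all factors: retry
        except ZeroDivisionError as e:
            g = e.args[0]
            if 1 < g < n:
                return g
    return None

print(ecm_factor(1009 * 1013))  # finds 1009 or 1013
```

Each curve gives a fresh group order, so unlike Pollard's p-1 the method can simply retry with another curve, which is why counting "primes found per unit time" is the natural benchmark used to compare such implementations.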
DEFF Research Database (Denmark)
Dyre, Jeppe
1995-01-01
energies chosen randomly according to a Gaussian. The random-walk model is here derived from Newton's laws by making a number of simplifying assumptions. In the second part of the paper an approximate low-temperature description of energy fluctuations in the random-walk model, the energy master equation (EME), is arrived at. The EME is one dimensional and involves only energy; it is derived by arguing that percolation dominates the relaxational properties of the random-walk model at low temperatures. The approximate EME description of the random-walk model is expected to be valid at low temperatures... of the random-walk model. The EME allows a calculation of the energy probability distribution at realistic laboratory time scales for an arbitrarily varying temperature as a function of time. The EME is probably the only realistic equation available today with this property that is also explicitly consistent...
Curran, James R.
2013-01-01
As early as the 1930s the term Master Hearing Aid (MHA) described a device used in the fitting of hearing aids. In their original form, the MHA was a desktop system that allowed for simulated or actual adjustment of hearing aid components that resulted in a changed hearing aid response. Over the years the MHA saw many embodiments and contributed to a number of rationales for the fitting of hearing aids. During these same years, the MHA was viewed by many as an inappropriate means of demonstrating hearing aids; the audio quality of the desktop systems was often superior to the hearing aids themselves. These opinions and the evolution of the MHA have molded the modern perception of hearing aids and the techniques used in the fitting of hearing aids. This article reports on a history of the MHA and its influence on the fitting of hearing aids. PMID:23686682
Mentorship, learning curves, and balance.
Cohen, Meryl S; Jacobs, Jeffrey P; Quintessenza, James A; Chai, Paul J; Lindberg, Harald L; Dickey, Jamie; Ungerleider, Ross M
2007-09-01
Professionals working in the arena of health care face a variety of challenges as their careers evolve and develop. In this review, we analyze the role of mentorship, learning curves, and balance in overcoming challenges that all such professionals are likely to encounter. These challenges can exist both in professional and personal life. As any professional involved in health care matures, complex professional skills must be mastered, and new professional skills must be acquired. These skills are both technical and judgmental. In most circumstances, these skills must be learned. In 2007, despite the continued need for obtaining new knowledge and learning new skills, the professional and public tolerance for a "learning curve" is much less than in previous decades. Mentorship is the key to success in these endeavours. The success of mentorship is two-sided, with responsibilities for both the mentor and the mentee. The benefits of this relationship must be bidirectional. It is the responsibility of both the student and the mentor to assure this bidirectional exchange of benefit. This relationship requires time, patience, dedication, and to some degree selflessness. This mentorship will ultimately be the best tool for mastering complex professional skills and maturing through various learning curves. Professional mentorship also requires that mentors identify and explicitly teach their mentees the relational skills and abilities inherent in learning the management of the triad of self, relationships with others, and professional responsibilities. Up to two decades ago, a learning curve was tolerated, and even expected, while professionals involved in healthcare developed the techniques that allowed for the treatment of previously untreatable diseases. Outcomes have now improved to the point that this type of learning curve is no longer acceptable to the public. Still, professionals must learn to perform and develop independence and confidence. The responsibility to
Zhou, Chuan; Chan, Heang-Ping; Guo, Yanhui; Wei, Jun; Chughtai, Aamer; Hadjiiski, Lubomir M.; Sundaram, Baskaran; Patel, Smita; Kuriakose, Jean W.; Kazerooni, Ella A.
2013-03-01
The curved planar reformation (CPR) method re-samples the vascular structures along the vessel centerline to generate longitudinal cross-section views. The CPR technique has been commonly used in coronary CTA workstations to facilitate radiologists' visual assessment of coronary diseases, but has not yet been used for pulmonary vessel analysis in CTPA due to the complicated tree structures and the vast network of pulmonary vasculature. In this study, a new curved planar reformation and optimal path tracing (CROP) method was developed to facilitate feature extraction and false positive (FP) reduction and improve our PE detection system. PE candidates are first identified in the segmented pulmonary vessels at prescreening. Based on Dijkstra's algorithm, the optimal path (OP) is traced from the pulmonary trunk bifurcation point to each PE candidate. The traced vessel is then straightened and a reformatted volume is generated using CPR. Eleven new features that characterize the intensity, gradient, and topology are extracted from the PE candidate in the CPR volume and combined with the previously developed 9 features to form a new feature space for FP classification. With IRB approval, CTPA of 59 PE cases were retrospectively collected from our patient files (UM set) and 69 PE cases from the PIOPED II data set with access permission. 595 and 800 PEs were manually marked by experienced radiologists as reference standard for the UM and PIOPED set, respectively. At a test sensitivity of 80%, the average FP rate was improved from 18.9 to 11.9 FPs/case with the new method for the PIOPED set when the UM set was used for training. The FP rate was improved from 22.6 to 14.2 FPs/case for the UM set when the PIOPED set was used for training. The improvement in the free response receiver operating characteristic (FROC) curves was statistically significant (p<0.05) by JAFROC analysis, indicating that the new features extracted from the CROP method are useful for FP reduction.
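The path-tracing step rests on Dijkstra's shortest-path algorithm; a minimal sketch over a toy graph is below (in the CROP method the graph would be the vessel-centerline voxel graph with image-derived edge weights, whereas the nodes and weights here are invented):

```python
import heapq

def optimal_path(adjacency, start, target):
    """Dijkstra's algorithm: return (path, cost) from start to target over a
    dict mapping node -> list of (neighbor, edge_weight)."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adjacency.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [target]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[target]

graph = {
    "trunk": [("left_main", 2.0), ("right_main", 2.5)],
    "left_main": [("candidate", 4.0)],
    "right_main": [("candidate", 2.0)],
}
print(optimal_path(graph, "trunk", "candidate"))
# (['trunk', 'right_main', 'candidate'], 4.5)
```

Once the optimal path is known, the voxels along it define the centerline that is straightened by the CPR step.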
Moore, K. M.; Jaeger, W. K.; Jones, J. A.
2013-12-01
A central characteristic of large river basins in the western US is the spatial and temporal disjunction between the supply of and demand for water. Water sources are typically concentrated in forested mountain regions distant from municipal and agricultural water users, while precipitation is super-abundant in winter and deficient in summer. To cope with these disparities, systems of reservoirs have been constructed throughout the West. These reservoir systems are managed to serve two main competing purposes: to control flooding during winter and spring, and to store spring runoff and deliver it to populated, agricultural valleys during the summer. The reservoirs also provide additional benefits, including recreation, hydropower and instream flows for stream ecology. Since the storage capacity of the reservoirs cannot be used for both flood control and storage at the same time, these uses are traded-off during spring, as the most important, or dominant use of the reservoir, shifts from buffering floods to storing water for summer use. This tradeoff is expressed in the operations rule curve, which specifies the maximum level to which a reservoir can be filled throughout the year, apart from real-time flood operations. These rule curves were often established at the time a reservoir was built. However, climate change and human impacts may be altering the timing and amplitude of flood events and water scarcity is expected to intensify with anticipated changes in climate, land cover and population. These changes imply that reservoir management using current rule curves may not match future societal values for the diverse uses of water from reservoirs. Despite a broad literature on mathematical optimization for reservoir operation, these methods are not often used because they 1) simplify the hydrologic system, raising doubts about the real-world applicability of the solutions, 2) exhibit perfect foresight and assume stationarity, whereas reservoir operators face
Energy Technology Data Exchange (ETDEWEB)
Chowdhury, Tamshuk, E-mail: tamshuk@gmail.com [Deep Sea Technologies, National Institute of Ocean Technology, Chennai, 600100 (India); Sivaprasad, S.; Bar, H.N.; Tarafder, S. [Fatigue & Fracture Group, Materials Science and Technology Division, CSIR-National Metallurgical Laboratory, Jamshedpur, 831007 (India); Bandyopadhyay, N.R. [School of Materials Science and Engineering, Indian Institute of Engineering, Science and Technology, Shibpur, Howrah, 711103 (India)
2016-04-15
The cyclic J-R behaviour of a reactor pressure vessel steel was examined using different methods available in the literature, to identify the method best suited to cyclic fracture problems. The crack opening point was determined by a moving average method. The η factor was experimentally determined for cyclic loading conditions and found to be similar to the ASTM value. Analyses showed that adopting a procedure analogous to the ASTM standard for monotonic fracture is reasonable for cyclic fracture problems, and makes the comparison to monotonic fracture results straightforward. - Highlights: • Different methods of cyclic J-R evaluation compared. • A moving average method for closure point proposed. • η factor for cyclic J experimentally validated. • Method 1 is easier, provides a lower bound and direct comparison to monotonic fracture.
Bekana, Teshome; Mekonnen, Zeleke; Zeynudin, Ahmed; Ayana, Mio; Getachew, Mestawet; Vercruysse, Jozef; Levecke, Bruno
2015-10-01
There is a paucity of studies that compare drug efficacy estimates obtained by different diagnostic methods. We compared the efficacy of a single oral dose of albendazole (400 mg), measured as egg reduction rate, against soil-transmitted helminth infections in 210 school children (Jimma Town, Ethiopia) using both the Kato-Katz thick smear and the McMaster egg counting method. Our results indicate that differences in sensitivity and faecal egg counts did not imply a significant difference in egg reduction rate estimates. The choice of a diagnostic method to assess drug efficacy should not be based on sensitivity and faecal egg counts only.
Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy
2016-11-01
The entropy-variation of a battery is responsible for heat generation or consumption during operation and its prior measurement is mandatory for developing a thermal model. It is generally done through the potentiometric method which is considered as a reference. However, it requires several days or weeks to get a look-up table with a 5 or 10% SoC (State of Charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery. The entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing the results obtained with several current rates to measurements made with the potentiometric method.
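The heat-separation idea above can be illustrated with a minimal numeric sketch (the function name, the simple heat model q = I²R + I·T·dU/dT, and the sign convention are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def entropy_variation(q_charge, q_discharge, current, temperature):
    """Estimate dU/dT (V/K) from heats measured at the same |current|.

    Assumes the simple heat model q = I^2*R + I*T*dU/dT: the Joule term
    keeps its sign between charge and discharge while the entropic term
    flips, so half the difference of the two heat curves isolates it.
    """
    q_rev = 0.5 * (np.asarray(q_charge) - np.asarray(q_discharge))
    return q_rev / (current * temperature)
```

Half the sum of the two heat curves would likewise isolate the irreversible Joule term under the same assumed model.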
Lambert, Chip
2015-01-01
You've started down the path of jQuery Mobile, now begin mastering some of jQuery Mobile's higher level topics. Go beyond jQuery Mobile's documentation and master one of the hottest mobile technologies out there. Previous JavaScript and PHP experience can help you get the most out of this book.
Hornikx, Maarten; Dragna, Didier
2015-07-01
The Fourier pseudospectral time-domain method is an efficient wave-based method to model sound propagation in inhomogeneous media. One of the limitations of the method for atmospheric sound propagation purposes is its restriction to a Cartesian grid, confining it to staircase-like geometries. A transform from the physical coordinate system to the curvilinear coordinate system has been applied to solve more arbitrary geometries. For applicability of this method near the boundaries, the acoustic velocity variables are solved for their curvilinear components. The performance of the curvilinear Fourier pseudospectral method is investigated in free field and for outdoor sound propagation over an impedance strip for various types of shapes. Accuracy is shown to be related to the maximum grid stretching ratio and deformation of the boundary shape and computational efficiency is reduced relative to the smallest grid cell in the physical domain. The applicability of the curvilinear Fourier pseudospectral time-domain method is demonstrated by investigating the effect of sound propagation over a hill in a nocturnal boundary layer. With the proposed method, accurate and efficient results for sound propagation over smoothly varying ground surfaces with high impedances can be obtained.
Wang, Dingbao
2018-01-01
Following the Budyko framework, soil wetting ratio (the ratio between soil wetting and precipitation) as a function of soil storage index (the ratio between soil wetting capacity and precipitation) is derived from the SCS-CN method and the VIC type of model. For the SCS-CN method, soil wetting ratio approaches one when soil storage index approaches infinity, due to the limitation of the SCS-CN method in which the initial soil moisture condition is not explicitly represented. However, for the ...
Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng
2017-04-01
A novel high-speed B-spline curve interpolation algorithm for high-grade CNC machine tools is introduced. In existing CNC systems for high-grade machine tools, handling the data points of the tool path is cumbersome and the control precision is limited. To solve this problem, the proposed method was tested on specific examples simulated in MATLAB 7.0; the results showed that the interpolation error is significantly reduced, the control precision is markedly improved, and the real-time requirements of high-speed, high-accuracy interpolation are satisfied.
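As a point of reference for B-spline curve evaluation, the standard de Boor recursion can be sketched as follows (a generic textbook routine, not the paper's high-speed interpolator; names are illustrative):

```python
import numpy as np

def de_boor(k, t, c, x):
    """Evaluate a degree-k B-spline curve with knots t and control
    points c at parameter x, via de Boor's recursion (textbook form).
    """
    t = np.asarray(t, dtype=float)
    # knot span index i with t[i] <= x < t[i+1], clamped to valid range
    i = int(np.searchsorted(t, x, side='right')) - 1
    i = min(max(i, k), len(t) - k - 2)
    d = [np.array(c[j], dtype=float) for j in range(i - k, i + 1)]
    for r in range(1, k + 1):
        for j in range(k, r - 1, -1):
            denom = t[j + 1 + i - r] - t[j + i - k]
            alpha = (x - t[j + i - k]) / denom
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[k]
```

For degree 1 this reduces to linear interpolation between control points; for a clamped quadratic knot vector it reproduces the Bézier curve of the same control polygon.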
Comparison of embrittlement trend curves to high fluence surveillance results
International Nuclear Information System (INIS)
Bogaert, A.S.; Gerard, R.; Chaouadi, R.
2011-01-01
In the regulatory justification of the integrity of reactor pressure vessels (RPV) for long term operation, use is made of predictive formulas (also called trend curves) to evaluate the RPV embrittlement (expressed in terms of RTNDT shifts) as a function of fluence, chemical composition and, in some cases, temperature, neutron flux or product form. It has been shown recently that some of the existing or proposed trend curves tend to underpredict high dose embrittlement. Due to the scarcity of representative surveillance data at high dose, some test reactor results were used in these evaluations, raising the issue of representativeness of the accelerated test reactor irradiations (dose rate effects). In Belgium the surveillance capsule withdrawal schedule was modified in the nineties in order to obtain results corresponding to 60 years of operation or more with the initial surveillance program. Some of these results are already available and offer a good opportunity to test the validity of the predictive formulas at high dose. In addition, advanced surveillance methods are used in Belgium, such as the Master Curve, increased tensile tests, and microstructural investigations. These techniques made it possible to show the conservatism of the regulatory approach and to demonstrate increased margins, especially for the first generation units. In this paper the surveillance results are compared to different predictive formulas, as well as to an engineering hardening model developed at SCK.CEN. Generally accepted property-to-property correlations are critically revisited. Conclusions are drawn on the reliability and applicability of the embrittlement trend curves. (authors)
Grimaldi, S.; Petroselli, A.; Romano, N.
2012-04-01
The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model that is widely used to estimate direct runoff from small and ungauged basins. The SCS-CN is a simple and valuable approach to estimate the total stream-flow volume generated by a storm rainfall, but it was developed to be used with daily rainfall data. To overcome this drawback, we propose to include the Green-Ampt (GA) infiltration model into a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt), aiming to distribute in time the information provided by the SCS-CN method so as to provide estimation of sub-daily incremental rainfall excess. For a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model. The proposed procedure was evaluated by analyzing 100 rainfall-runoff events observed in four small catchments of varying size. CN4GA appears an encouraging tool for predicting the net rainfall peak and duration values and has shown, at least for the test cases considered in this study, a better agreement with observed hydrographs than that of the classic SCS-CN method.
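The SCS-CN relation that CN4GA starts from can be written as a short function (textbook form with S in mm and Ia = λS, λ = 0.2 by convention; a sketch of the standard method, not the CN4GA code):

```python
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Direct runoff depth (mm) from storm rainfall p_mm via SCS-CN.

    cn: curve number (0-100]; lam: initial-abstraction ratio, Ia = lam*S.
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0.
    """
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = lam * s                      # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

For CN = 100 (impervious surface) the function returns the full rainfall depth, and rainfall below the initial abstraction produces no runoff.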
Reid, J. C.; Seibert, Warren F.
The analysis of previously obtained data concerning short-term visual memory and cognition by a method suggested by Tucker is proposed. Although interesting individual differences undoubtedly exist in people's ability and capacity to process short-term visual information, studies have not generally examined these differences. In fact, conventional…
An automatic method to analyze the Capacity-Voltage and Current-Voltage curves of a sensor
AUTHOR|(CDS)2261553
2017-01-01
An automatic method to perform capacitance versus voltage analysis for all kinds of silicon sensors is provided. It successfully calculates the depletion voltage for unirradiated and irradiated sensors, and for measurements with outliers or reaching breakdown. It is built in C++ using ROOT trees, with a skeleton analogous to TRICS, where the data as well as the results of the fits are saved for further analysis.
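A common way to extract the depletion voltage from a C-V scan is to fit straight lines to 1/C² versus V on the rising branch and on the plateau and intersect them. A minimal sketch of that two-line idea (the outlier handling and automatic branch selection that the tool above provides are omitted; names and the fixed branch indices are assumptions):

```python
import numpy as np

def depletion_voltage(v, c, i_rise, i_flat):
    """Depletion voltage from a C-V scan via the two-line method.

    Fits 1/C^2 vs V with a straight line on the rising branch
    (indices [:i_rise]) and on the plateau (indices [i_flat:]) and
    returns the voltage where the two lines intersect.
    """
    v = np.asarray(v, dtype=float)
    y = 1.0 / np.asarray(c, dtype=float) ** 2
    a1, b1 = np.polyfit(v[:i_rise], y[:i_rise], 1)  # rising branch
    a2, b2 = np.polyfit(v[i_flat:], y[i_flat:], 1)  # plateau
    return (b2 - b1) / (a1 - a2)
```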
Fischer, Leonard S; Lumsden, Antoinette; Leung, Felix W
2012-07-01
Water exchange colonoscopy has been reported to reduce examination discomfort and to provide salvage cleansing in unsedated or minimally sedated patients. The prolonged insertion time and perceived difficulty of insertion associated with water exchange have been cited as a barrier to its widespread use. To assess the feasibility of learning and using the water exchange method of colonoscopy in a U.S. community practice setting. Quality improvement program in nonacademic community endoscopy centers. Patients undergoing sedated diagnostic, surveillance, or screening colonoscopy. After direct coaching by a knowledgeable trainer, an experienced colonoscopist initiated colonoscopy using the water method. Whenever >5 min elapsed without advancing the colonoscope, conversion to air insufflation was made to ensure timely completion of the examination. The main outcome measure was the water method intention-to-treat (ITT) cecal intubation rate (CIR). Female patients had a significantly higher rate of past abdominal surgery and a significantly lower ITTCIR. The ITTCIR showed a progressive increase over time in both males and females to 85-90%. Mean insertion time was maintained at 9 to 10 min. The overall CIR was 99%. Use of water exchange did not preclude cecal intubation upon conversion to usual air insufflation in sedated patients examined by an experienced colonoscopist. With practice ITTCIR increased over time in both male and female patients. Larger volumes of water exchanged were associated with higher ITTCIR and better quality scores of bowel preparation. The data suggest that learning water exchange by a busy colonoscopist in a community practice setting is feasible and outcomes conform to accepted quality standards.
Low Impact Development Master Plan
Energy Technology Data Exchange (ETDEWEB)
Loftin, Samuel R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-02
This project creates a Low Impact Development (LID) Master Plan to guide and prioritize future development of LID projects at Los Alamos National Laboratory (LANL or the Laboratory). The LID Master Plan applies to developed areas across the Laboratory and focuses on identifying opportunities for storm water quality and hydrological improvements in the heavily urbanized areas of Technical Areas 03, 35 and 53. The LID Master Plan is organized to allow the addition of LID projects for other technical areas as time and funds allow in the future.
[MODERN EDUCATIONAL TECHNOLOGY MASTERING PRACTICAL SKILLS OF GENERAL PRACTITIONERS].
Kovalchuk, L I; Prokopchuk, Y V; Naydyonova, O V
2015-01-01
The article presents the experience of postgraduate training of general practitioners in family medicine. It identifies current trends, forms and methods of pedagogical innovation that enhance the quality of learning and the mastery of practical skills by primary care professionals.
Yin, K.; Belonoshko, A. B.; Zhou, H.; Lu, X.
2016-12-01
The melting temperatures of materials in the interior of the Earth have significant implications in many areas of geophysics. Direct calculation of the melting point by atomic simulation faces a substantial hysteresis problem. To overcome this hysteresis, several independently founded melting-point determination methods are available, such as the free energy method, the two-phase or coexistence method, and the Z method. In this study, we provide a theoretical understanding of the relations among these methods from a geometrical perspective, based on a quantitative construction of the volume-entropy-energy thermodynamic surface, a model first proposed by J. Willard Gibbs in 1873. Then, combining this model with experimental data and/or a previous melting-point determination method, we apply it to derive the high-pressure melting curves of several lower mantle minerals with less computational effort than using previous methods alone. In this way, some polyatomic minerals at extreme pressures that were previously almost intractable can now be calculated fully from first principles.
International Nuclear Information System (INIS)
Gao Jinsheng; Zheng Siying; Cai Feng
1993-08-01
The cytokinesis-block micronucleus technique has been proposed as a new method to measure chromosome damage in cytogenetics. Cytokinesis is blocked using cytochalasin B (Cyt-B), and micronuclei are scored in cytokinesis-blocked (CB) cells. This can easily be done owing to the appearance of binucleate cells, accumulated in large numbers by adding 3.0 μg/ml cytochalasin B at 44 hours and scoring at 72 hours. The results show that the optimum concentration of Cyt-B is 3.0 μg/ml and that Cyt-B itself does not induce an increase of micronuclei. Beyond the micronucleus frequency of normal individuals in vivo, there is an approximately linear relationship between the frequency of induced micronuclei and irradiation dose, given by Y = 0.36D + 2.74 (γ² = 0.995, P < 0.01). Because the cytokinesis-block method is simple and reliable, it is effective for assaying chromosome damage caused by genotoxic materials
Lagrangian Curves on Spectral Curves of Monopoles
International Nuclear Information System (INIS)
Guilfoyle, Brendan; Khalid, Madeeha; Ramon Mari, Jose J.
2010-01-01
We study Lagrangian points on smooth holomorphic curves in TP¹ equipped with a natural neutral Kaehler structure, and prove that they must form real curves. By virtue of the identification of TP¹ with the space L(E³) of oriented affine lines in Euclidean 3-space, these Lagrangian curves give rise to ruled surfaces in E³, which we prove have zero Gauss curvature. Each ruled surface is shown to be the tangent lines to a curve in E³, called the edge of regression of the ruled surface. We give an alternative characterization of these curves as the points in E³ where the number of oriented lines in the complex curve Σ that pass through the point is less than the degree of Σ. We then apply these results to the spectral curves of certain monopoles and construct the ruled surfaces and edges of regression generated by the Lagrangian curves.
Energy Technology Data Exchange (ETDEWEB)
Sang, Nguyen Duy, E-mail: ndsang@ctu.edu.vn [College of Rural Development, Can Tho University, Can Tho 270000 (Viet Nam); Faculty of Physics and Engineering Physics, University of Science, Ho Chi Minh 700000 (Viet Nam); Van Hung, Nguyen [Nuclear Research Institute, VAEI, Dalat 670000 (Viet Nam); Van Hung, Tran; Hien, Nguyen Quoc [Research and Development Center for Radiation Technology, VAEI, Ho Chi Minh 700000 (Viet Nam)
2017-03-01
Highlights: • TL analysis is used to calculate the kinetic parameters of chilli powder. • The kinetic parameters differ with radiation dose. • The kinetic parameters differ between the GOK and OTOR models. • The software R is applied for the first time to TL glow curve analysis of chilli powder. - Abstract: The kinetic parameters of thermoluminescence (TL) glow peaks of chilli powder irradiated by gamma rays at doses of 0, 4 and 8 kGy have been calculated and estimated by the computerized glow curve deconvolution (CGCD) method and the R package tgcd, using the TL glow curve data. The kinetic parameters of the TL glow peaks (activation energy (E), order of kinetics (b), trapping-to-recombination probability coefficient (R) and frequency factor (s)) are fitted with the general-order kinetics (GOK) and one trap-one recombination (OTOR) models. The kinetic parameters of the chilli powder differ with storage time, radiation dose, and the model applied (GOK or OTOR). Samples stored for a shorter period have smaller kinetic parameter values than samples stored longer. Comparing the three samples shows that the values for the non-irradiated samples are lowest, whereas the values for the 4 kGy irradiated samples are greater than those for the 8 kGy irradiated samples.
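For orientation, a first-order (Randall-Wilkins) glow peak, the simplest relative of the GOK and OTOR models fitted above, can be computed numerically (a generic sketch with the same symbols E, s and heating rate β; not the CGCD or tgcd implementation):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def glow_curve_first_order(T, E, s, n0=1.0, beta=1.0):
    """First-order (Randall-Wilkins) TL glow peak intensity.

    T: temperature grid in K (ascending), E: activation energy (eV),
    s: frequency factor (1/s), n0: initial trapped charge, beta:
    heating rate (K/s).  The escape integral is accumulated with the
    trapezoidal rule.
    """
    T = np.asarray(T, dtype=float)
    boltz = np.exp(-E / (K_B * T))
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (boltz[1:] + boltz[:-1]) * np.diff(T))))
    return n0 * s * boltz * np.exp(-(s / beta) * integral)
```

The area under the curve divided by the heating rate recovers the initial trapped charge n0 once the trap is fully emptied, which is a useful sanity check on any fitted peak.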
Chun, Sehun
2017-07-01
Applying the method of moving frames to Maxwell's equations yields two important advancements for scientific computing. The first is the use of upwind flux for anisotropic materials in Maxwell's equations, especially in the context of discontinuous Galerkin (DG) methods. Upwind flux has been available only to isotropic material, because of the difficulty of satisfying the Rankine-Hugoniot conditions in anisotropic media. The second is to solve numerically Maxwell's equations on curved surfaces without the metric tensor and composite meshes. For numerical validation, spectral convergences are displayed for both two-dimensional anisotropic media and isotropic spheres. In the first application, invisible two-dimensional metamaterial cloaks are simulated with a relatively coarse mesh by both the lossless Drude model and the piecewisely-parametered layered model. In the second application, extremely low frequency propagation on various surfaces such as spheres, irregular surfaces, and non-convex surfaces is demonstrated.
Energy Technology Data Exchange (ETDEWEB)
Mueller, Martin; /SLAC
2010-12-16
The study of the power density spectrum (PDS) of fluctuations in the X-ray flux from active galactic nuclei (AGN) complements spectral studies in giving us a view into the processes operating in accreting compact objects. An important line of investigation is the comparison of the PDS from AGN with those from galactic black hole binaries; a related area of focus is the scaling relation between time scales for the variability and the black hole mass. The PDS of AGN is traditionally modeled using segments of power laws joined together at so-called break frequencies; associations of the break time scales, i.e., the inverses of the break frequencies, with time scales of physical processes thought to operate in these sources are then sought. I analyze the Method of Light Curve Simulations that is commonly used to characterize the PDS in AGN with a view to making the method as sensitive as possible to the shape of the PDS. I identify several weaknesses in the current implementation of the method and propose alternatives that can substitute for some of the key steps in the method. I focus on the complications introduced by uneven sampling in the light curve, the development of a fit statistic that is better matched to the distributions of power in the PDS, and the statistical evaluation of the fit between the observed data and the model for the PDS. Using archival data on one AGN, NGC 3516, I validate my changes against previously reported results. I also report new results on the PDS in NGC 4945, a Seyfert 2 galaxy with a well-determined black hole mass. This source provides an opportunity to investigate whether the PDS of Seyfert 1 and Seyfert 2 galaxies differ. It is also an attractive object for placement on the black hole mass-break time scale relation. Unfortunately, with the available data on NGC 4945, significant uncertainties on the break frequency in its PDS remain.
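The core of the method of light curve simulations is usually the Timmer & Koenig (1995) algorithm: draw Gaussian Fourier amplitudes with variance set by the model PSD and inverse-transform. A minimal sketch for a single power-law PSD (even n assumed; the uneven sampling, red-noise leak and fit-statistic issues discussed above are deliberately not handled here):

```python
import numpy as np

def simulate_light_curve(n, dt, psd_index=2.0, rng=None):
    """Simulate an evenly sampled light curve with P(f) ~ f**-psd_index
    following Timmer & Koenig (1995).  n is assumed even.
    """
    rng = np.random.default_rng(rng)
    freqs = np.fft.rfftfreq(n, dt)[1:]          # positive frequencies
    psd = freqs ** (-psd_index)
    # independent Gaussian real/imaginary parts, variance ~ PSD/2
    re = rng.standard_normal(freqs.size) * np.sqrt(0.5 * psd)
    im = rng.standard_normal(freqs.size) * np.sqrt(0.5 * psd)
    im[-1] = 0.0                                # Nyquist bin must be real
    spectrum = np.concatenate(([0.0], re + 1j * im))  # zero-mean light curve
    return np.fft.irfft(spectrum, n)
```

A broken power law, as used for break-frequency fitting, only changes the `psd` line; the rest of the machinery is identical.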
Enhanced Master Station History Report
National Oceanic and Atmospheric Administration, Department of Commerce — The Enhanced Master Station History Report (EMSHR) is a compiled list of basic, historical information for every station in the station history database, beginning...
Energy Technology Data Exchange (ETDEWEB)
Cardoso, Vanderlei
2002-07-01
The present work describes a few methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting has been performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak areas, different methodologies were developed for estimating the background area under the peak; this information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure in order to give a complete description of the partial uncertainties involved. (author)
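A common baseline for such efficiency interpolation is a weighted polynomial fit of ln(efficiency) versus ln(E) with the covariance matrix retained, in the spirit of the covariance methodology described above (a generic sketch, not the author's segmented or ratio fits; names are illustrative):

```python
import numpy as np

def fit_efficiency_curve(energies, effs, sigmas, deg=3):
    """Weighted polynomial fit of ln(efficiency) vs ln(energy).

    Returns the coefficients (highest power first) and their covariance
    matrix.  The uncertainty of ln(eff) is approximated by sigma/eff,
    so the polyfit weights are eff/sigma.
    """
    x = np.log(np.asarray(energies, dtype=float))
    y = np.log(np.asarray(effs, dtype=float))
    w = np.asarray(effs, dtype=float) / np.asarray(sigmas, dtype=float)
    coef, cov = np.polyfit(x, y, deg, w=w, cov=True)
    return coef, cov

def interp_efficiency(coef, energy):
    """Interpolated efficiency at an arbitrary energy."""
    return np.exp(np.polyval(coef, np.log(energy)))
```

The returned covariance matrix can be propagated through `polyval` to attach an uncertainty to each interpolated efficiency.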
Directory of Open Access Journals (Sweden)
Jens Jirschitzka
In four studies we tested a new methodological approach to the investigation of evaluation bias. The usage of piecewise growth curve modeling allowed for investigation into the impact of people's attitudes on their persuasiveness ratings of pro- and con-arguments, measured over the whole range of the arguments' polarity from an extreme con to an extreme pro position. Moreover, this method provided the opportunity to test specific hypotheses about the course of the evaluation bias within certain polarity ranges. We conducted two field studies with users of an existing online information portal (Studies 1a and 2a) as participants, and two Internet laboratory studies with mostly student participants (Studies 1b and 2b). In each of these studies we presented pro- and con-arguments, either for the topic of MOOCs (massive open online courses, Studies 1a and 1b) or for the topic of M-learning (mobile learning, Studies 2a and 2b). Our results indicate that using piecewise growth curve models is more appropriate than simpler approaches. An important finding of our studies was an asymmetry of the evaluation bias toward pro- or con-arguments: the evaluation bias appeared over the whole polarity range of pro-arguments and increased with more and more extreme polarity. This clear-cut result pattern appeared only on the pro-argument side. For the con-arguments, in contrast, the evaluation bias did not feature such a systematic picture.
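The piecewise idea can be illustrated with a fixed-knot segmented regression (a deliberately minimal stand-in: the studies used full multilevel growth curve models, and the knot, basis and function names here are assumptions for illustration):

```python
import numpy as np

def piecewise_linear_fit(x, y, knot):
    """Least-squares fit of y = b0 + b1*x + b2*max(x - knot, 0).

    b1 is the slope below the knot; b2 is the change in slope at the
    knot, so the slope above the knot is b1 + b2.
    """
    x = np.asarray(x, dtype=float)
    design = np.column_stack([np.ones_like(x), x,
                              np.maximum(x - knot, 0.0)])
    beta, *_ = np.linalg.lstsq(design, np.asarray(y, dtype=float),
                               rcond=None)
    return beta
```

Testing whether b2 differs from zero is the single-level analogue of asking whether the bias follows a different course in one polarity range than in another.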
Structural master plan of flood mitigation measures
A. Heidari
2009-01-01
Flood protection is one of the practical methods of damage reduction. Although it is not possible to be completely protected from flood disasters, a major part of the damage can be reduced by mitigation plans. In this paper, the optimum flood mitigation master plan is determined by economic evaluation, trading off the construction costs against the expected value of damage reduction as the benefit. The size of a given mitigation alternative is also obtained by risk analysis by accepting possi...
DEFF Research Database (Denmark)
Danielsen, Oluf
2004-01-01
The Master in ICT and Learning (MIL) was started in 2000, and it is owned in collaboration by five Danish universities. It is an accredited virtual part-time 2-year education. MIL is unique in that it builds on the pedagogical framework of project pedagogy and is based in virtual collaboration. It is organized around ICT and Learning. This is illustrated through a presentation of the study program, the four modules, the projects and the master thesis.
Dual arm master controller concept
International Nuclear Information System (INIS)
Kuban, D.P.; Perkins, G.S.
1984-01-01
The Advanced Servomanipulator (ASM) slave was designed with an anthropomorphic stance, gear/torque tube power drives, and modular construction. These features resulted in increased inertia, friction, and backlash relative to tape-driven manipulators. Studies were performed which addressed the human factors design and performance trade-offs associated with the corresponding master controller best suited for the ASM. The results of these studies, as well as the conceptual design of the dual arm master controller, are presented. 6 references, 3 figures
Dual arm master controller development
Kuban, D. P.; Perkins, G. S.
1985-01-01
The advanced servomanipulator (ASM) slave was designed with an anthropomorphic stance, gear/torque tube power drives, and modular construction. These features resulted in increased inertia, friction, and backlash relative to tape-driven manipulators. Studies were performed which addressed the human factors design and performance tradeoffs associated with the corresponding master controller best suited for the ASM. The results of these studies, as well as the conceptual design of the dual arm master controller, are presented.
International Nuclear Information System (INIS)
Yun, Deok Yong
1999-06-01
The contents of this book are: an explanation of the basic concepts of DSP, complete mastery of the TMS320C31, I/O interface and memory design, practice with the PC printer port, basic programming skills, assembly and C programming techniques, timer and interrupt application skills, serial communication programming techniques, application of digital conditioning, and application of digital servo control. The book is divided into two parts, covering the theory and the application of the TMS320C31.
Anatomical curve identification
Bowman, Adrian W.; Katina, Stanislav; Smith, Joanna; Brown, Denise
2015-01-01
Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three-dimensions, the ability to recognise the relevant boundary and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components, the second by suitable assessment of curvature and the third by change-point detection. P-spline smoothing is used as an integral part of the methods but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest. PMID:26041943
International Nuclear Information System (INIS)
Dietrich, R.
1984-01-01
The basic concepts of the finite element method are explained. The results are compared to existing calibration curves for such test piece geometries derived using experimental procedures. (orig./HP)
Cardoso, F C; Sears, W; LeBlanc, S J; Drackley, J K
2011-12-01
The objective of the study was to compare 3 methods for calculating the area under the curve (AUC) for plasma glucose and nonesterified fatty acids (NEFA) after an intravenous epinephrine (EPI) challenge in dairy cows. Cows were assigned to 1 of 6 dietary niacin treatments in a completely randomized 6 × 6 Latin square with an extra period to measure carryover effects. Periods consisted of a 7-d (d 1 to 7) adaptation period followed by a 7-d (d 8 to 14) measurement period. On d 12, cows received an i.v. infusion of EPI (1.4 μg/kg of BW). Blood was sampled at -45, -30, -20, -10, and -5 min before EPI infusion and 2.5, 5, 10, 15, 20, 30, 45, 60, 90, and 120 min after. The AUC was calculated by incremental area, positive incremental area, and total area using the trapezoidal rule. The 3 methods resulted in different statistical inferences. When comparing the 3 methods for NEFA and glucose response, no significant differences among treatments and no interactions between treatment and AUC method were observed. For glucose and NEFA response, the method was statistically significant. Our results suggest that the positive incremental method and the total area method gave similar results and interpretation but differed from the incremental area method. Furthermore, the 3 methods evaluated can lead to different results and statistical inferences for glucose and NEFA AUC after an EPI challenge.
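The three AUC variants can be sketched with the trapezoidal rule as follows (the baseline definition and the handling of pre-challenge samples are assumptions for illustration; the paper's exact definitions may differ in detail):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal-rule integral of samples y over abscissae x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def auc_methods(t, y):
    """Return (total, incremental, positive-incremental) AUC.

    Baseline is taken as the mean of pre-challenge samples (t < 0).
    The incremental area integrates deviations from baseline, so
    negative deflections subtract; the positive incremental area
    ignores negative deflections entirely.
    """
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    baseline = y[t < 0].mean()
    dev = y - baseline
    total = _trapz(y, t)
    incremental = _trapz(dev, t)
    positive = _trapz(np.clip(dev, 0.0, None), t)
    return total, incremental, positive
```

On a response that dips below baseline after its peak, the incremental and positive-incremental values diverge, which is exactly the situation in which the three methods can support different statistical inferences.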
Quantum trajectories for time-dependent adiabatic master equations
Yip, Ka Wa; Albash, Tameem; Lidar, Daniel A.
2018-02-01
We describe a quantum trajectories technique for the unraveling of the quantum adiabatic master equation in Lindblad form. By evolving a complex state vector of dimension N instead of a complex density matrix of dimension N2, simulations of larger system sizes become feasible. The cost of running many trajectories, which is required to recover the master equation evolution, can be minimized by running the trajectories in parallel, making this method suitable for high performance computing clusters. In general, the trajectories method can provide up to a factor N advantage over directly solving the master equation. In special cases where only the expectation values of certain observables are desired, an advantage of up to a factor N2 is possible. We test the method by demonstrating agreement with direct solution of the quantum adiabatic master equation for 8-qubit quantum annealing examples. We also apply the quantum trajectories method to a 16-qubit example originally introduced to demonstrate the role of tunneling in quantum annealing, which is significantly more time consuming to solve directly using the master equation. The quantum trajectories method provides insight into individual quantum jump trajectories and their statistics, thus shedding light on open system quantum adiabatic evolution beyond the master equation.
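The trajectory idea can be illustrated on the simplest case, a single decaying qubit with H = 0 and one Lindblad operator √γ σ⁻, for which the master equation predicts an excited population exp(−γt). The sketch below is a toy unraveling in the spirit of the method, far simpler than the adiabatic master equation treated in the paper:

```python
import numpy as np

def trajectory_excited_population(gamma, t_final, dt, n_traj, seed=0):
    """Average excited-state population of a decaying qubit at t_final,
    estimated from quantum jump trajectories.

    Starting in |1>, each trajectory either jumps to |0> (probability
    gamma*dt per step) or survives the whole evolution; averaging over
    trajectories recovers the master equation result exp(-gamma*t).
    """
    rng = np.random.default_rng(seed)
    steps = int(round(t_final / dt))
    survived = 0
    for _ in range(n_traj):
        for _ in range(steps):
            # stochastic jump: spontaneous emission collapses |1> -> |0>
            if rng.random() < gamma * dt:
                break
        else:
            survived += 1  # no jump occurred: trajectory still in |1>
    return survived / n_traj
```

Each trajectory stores one bit here instead of a 2×2 density matrix; for N-dimensional systems the same bookkeeping replaces an N² density matrix with an N-component state vector, which is the scaling advantage the paper exploits.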
Directory of Open Access Journals (Sweden)
Giovanni Carlo Di Renzo
2009-12-01
Orange quality is strictly dependent on variety and on pre-harvest and post-harvest practices. Post-harvest management in particular is responsible for fruit damage, causing quality deterioration and commercial losses, as underlined by many authors who have studied the influence of individual post-harvest operations on fruit quality. In this article the authors, using an instrumented sphere (IS 100) similar in shape and size to a true orange, present a method for monitoring orange damage along the processing line. The results provide fundamental knowledge of the critical damage curve, which defines the incidence of damage during orange processing and packaging. The data show that fruit discharge (bin or box discharge) and the packaging step are the most critical operations in which to reduce or eliminate fruit collisions and the consequent damage.
Han, Yang; Hou, Shao-Yang; Ji, Shang-Zhi; Cheng, Juan; Zhang, Meng-Yue; He, Li-Juan; Ye, Xiang-Zhong; Li, Yi-Min; Zhang, Yi-Xuan
2017-11-15
A novel method, real-time reverse transcription PCR (real-time RT-PCR) coupled with probe-melting curve analysis, has been established to detect two kinds of samples within one fluorescence channel. Besides a conventional TaqMan probe, this method employs a specially designed melting-probe with a 5' terminus modification that carries the same fluorescent label. By using an asymmetric PCR method, the melting-probe is able to detect an extra sample in the melting stage while having little influence on the amplification detection. Thus, this method allows both the amplification stage and the melting stage to be employed for detecting samples in one reaction. A demonstration of simultaneous detection of human immunodeficiency virus (HIV) and hepatitis C virus (HCV) in one channel as a model system is presented in this article. The sensitivity of detection by real-time RT-PCR coupled with probe-melting analysis was proved to be equal to that of conventional real-time RT-PCR. Because real-time RT-PCR coupled with probe-melting analysis can double the detection throughput within one fluorescence channel, it is expected to be a good solution to the problem of low throughput in current real-time PCR. Copyright © 2017 Elsevier Inc. All rights reserved.
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
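The scaling step can be sketched as follows, assuming a path-weighted Beer-Lambert factor applied to a zero-absorption reflectance curve; the tissue speed of light and the layer parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

C_TISSUE = 0.0214  # assumed speed of light in tissue, cm/ps (c0 / n with n ~ 1.4)

def scale_reflectance(t_ps, r0, mua_layers, time_fractions):
    """Scale a zero-absorption time-resolved reflectance curve r0(t) by a
    path-weighted Beer-Lambert factor exp(-sum_i f_i * mua_i * c * t), where
    f_i is the fraction of its path a photon spends in layer i."""
    mua_eff = float(np.dot(np.asarray(time_fractions), np.asarray(mua_layers)))
    return np.asarray(r0) * np.exp(-mua_eff * C_TISSUE * np.asarray(t_ps))

# Toy usage: flat zero-absorption curve, two layers with mua = 0.1 and 0.3 1/cm,
# photon assumed to spend half its path in each layer
t_ps = np.array([0.0, 50.0, 100.0])
r0 = np.ones(3)                    # stand-in for the zero-absorption MC output
r_scaled = scale_reflectance(t_ps, r0, [0.1, 0.3], [0.5, 0.5])
```

The point of the technique is that `r0` is simulated once; absorption is then applied analytically for any combination of layer absorption coefficients, without re-running the Monte Carlo simulation or storing per-photon path lengths.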
Linhart, S. Mike; Nania, Jon F.; Sanders, Curtis L.; Archfield, Stacey A.
2012-01-01
linear regression method and the daily mean streamflow for the 15th day of every other month. The Flow Duration Curve Transfer method was used to estimate unregulated daily mean streamflow from the physical and climatic characteristics of gaged basins. For the Flow Duration Curve Transfer method, daily mean streamflow quantiles at the ungaged site were estimated with the parameter-based regression model, which results in a continuous daily flow-duration curve (the relation between exceedance probability and streamflow for each day of observed streamflow) at the ungaged site. By use of a reference streamgage, the transferred flow-duration curve is converted to a time series. Data used in the Flow Duration Curve Transfer method were retrieved for 113 continuous-record streamgages in Iowa and within a 50-mile buffer of Iowa. The final statewide regression equations for Iowa were computed by using a weighted-least-squares multiple linear regression method and were computed for the 0.01-, 0.05-, 0.10-, 0.15-, 0.20-, 0.30-, 0.40-, 0.50-, 0.60-, 0.70-, 0.80-, 0.85-, 0.90-, and 0.95-exceedance probability statistics determined from the daily mean streamflow with a reporting limit set at 0.1 ft³/s. The final statewide regression equation for Iowa computed by using left-censored regression techniques was computed for the 0.99-exceedance probability statistic determined from the daily mean streamflow with a low limit threshold and a reporting limit set at 0.1 ft³/s. For the Flow Anywhere method, results of the validation study conducted by using six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 1,016 to 138 ft³/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 1,690 to 237 ft³/s. Values of the percent root-mean-square error ranged from 115 percent to 26.2 percent. The logarithm (base 10) streamflow percent root
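The core of the Flow Duration Curve Transfer idea can be sketched as follows: build exceedance-probability quantiles of daily flow, then assign each day at the ungaged site the flow having the same exceedance probability as the reference gage that day. The plotting-position and interpolation choices here are simplified assumptions, not the report's regression machinery.

```python
import numpy as np

def flow_duration_curve(q_daily, exceed_probs):
    """Streamflow quantiles exceeded with the given probabilities: the flow
    exceeded p of the time is the (1 - p) quantile of the daily flows."""
    q = np.asarray(q_daily, dtype=float)
    return np.quantile(q, 1.0 - np.asarray(exceed_probs))

def transfer_daily_flows(q_ref, exceed_probs, q_ungaged_quantiles):
    """Flow Duration Curve Transfer: for each day, look up the ungaged-site
    flow with the same exceedance probability as the reference gage."""
    q_ref = np.asarray(q_ref, dtype=float)
    # daily exceedance probability of the reference flow (simple plotting position)
    ranks = (np.argsort(np.argsort(-q_ref)) + 0.5) / len(q_ref)
    return np.interp(ranks, np.asarray(exceed_probs, dtype=float),
                     np.asarray(q_ungaged_quantiles, dtype=float))

# Toy usage: 100 days of reference flows 1..100; pretend the ungaged site
# carries twice the flow at every exceedance probability
probs = np.array([0.05, 0.50, 0.95])
q_ref = np.arange(1.0, 101.0)
fdc_ungaged = flow_duration_curve(2.0 * q_ref, probs)
daily_est = transfer_daily_flows(q_ref, probs, fdc_ungaged)
```

`np.interp` requires the exceedance probabilities in ascending order; days beyond the calibrated probability range are clamped to the end quantiles, which is one of the simplifications here.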
Vo, Martin
2017-08-01
Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying the features which describe the objects to be searched, the software trains on a given training sample and can then be used for unsupervised clustering to visualize the natural separation of the sample. The package can also be used for automatic tuning of method parameters (for example, the number of hidden neurons or the binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information about queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and the command-line UI, the program can be used through a web interface. Users can create jobs for training methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifiers and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.
Double degree master program: Optical Design
Bakholdin, Alexey; Kujawinska, Malgorzata; Livshits, Irina; Styk, Adam; Voznesenskaya, Anna; Ezhova, Kseniia; Ermolayeva, Elena; Ivanova, Tatiana; Romanova, Galina; Tolstoba, Nadezhda
2015-10-01
Modern tendencies of higher education require the development of master programs providing achievement of learning outcomes corresponding to quickly changing job market needs. ITMO University, represented by the Applied and Computer Optics Department and the Optical Design and Testing Laboratory, jointly with Warsaw University of Technology, represented by the Institute of Micromechanics and Photonics at The Faculty of Mechatronics, have developed a novel international master double-degree program "Optical Design" accumulating the expertise of both universities, including experienced teaching staff, educational technologies, and experimental resources. The program offers studies targeting research and professional activities in high-tech fields connected with optical and optoelectronic devices, optical engineering, numerical methods and computer technologies. This master program deals with the design of optical systems of various types, assemblies and layouts using computer modeling means; investigation of light distribution phenomena; image modeling and formation; and development of optical methods for image analysis and optical metrology, including optical testing, materials characterization, NDT and industrial control and monitoring. The goal of this program is to train graduates capable of solving a wide range of research and engineering tasks in optical design and metrology leading to modern manufacturing and innovation. The variability of the program structure provides flexibility and adaptation to current job market demands and personal learning paths for each student. In addition, a considerable proportion of internships and research expands practical skills. Some special features of the "Optical Design" program, which implements the best practices of both universities, and the challenges and lessons learnt during its realization are presented in the paper.
The Heating Curve Adjustment Method
Kornaat, W.; Peitsman, H.C.
1995-01-01
In apartment buildings with a collective heating system usually a weather compensator is used for controlling the heat delivery to the various apartments. With this weather compensator the supply water temperature to the apartments is regulated depending on the outside air temperature. With
Extended analysis of cooling curves
International Nuclear Information System (INIS)
Djurdjevic, M.B.; Kierkus, W.T.; Liliac, R.E.; Sokolowski, J.H.
2002-01-01
Thermal Analysis (TA) is the measurement of changes in a physical property of a material that is heated through a phase transformation temperature range. The temperature changes in the material are recorded as a function of the heating or cooling time in such a manner that allows for the detection of phase transformations. In order to increase accuracy, characteristic points on the cooling curve have been identified using the first derivative curve plotted versus time. In this paper, an alternative approach to the analysis of the cooling curve has been proposed. The first derivative curve has been plotted versus temperature and all characteristic points have been identified with the same accuracy achieved using the traditional method. The new cooling curve analysis also enables the Dendrite Coherency Point (DCP) to be detected using only one thermocouple. (author)
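The derivative-versus-temperature idea can be sketched on a synthetic cooling curve with an idealized thermal arrest (all numbers below are illustrative, not data from the paper): the arrest shows up where dT/dt falls to zero.

```python
import numpy as np

def derivative_vs_temperature(time_s, temp_c):
    """First derivative dT/dt along a cooling curve. Plotted against
    temperature rather than time, characteristic points (e.g. thermal
    arrests from latent heat release) appear at the transformation
    temperature regardless of when they occur in the record."""
    temp = np.asarray(temp_c, dtype=float)
    return temp, np.gradient(temp, np.asarray(time_s, dtype=float))

# Synthetic cooling curve: linear cooling, an arrest at 600 degC, then cooling
t = np.arange(0.0, 300.0)
T = np.where(t < 100, 700.0 - 1.0 * t,
             np.where(t < 150, 600.0, 600.0 - 0.8 * (t - 150.0)))

temps, dTdt = derivative_vs_temperature(t, T)
arrest_temp = temps[np.argmin(np.abs(dTdt))]   # temperature where dT/dt ~ 0
```

With a single thermocouple this kind of derivative analysis is what allows features such as the dendrite coherency point to be located on the dT/dt-versus-T plot.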
Transition curves for highway geometric design
Kobryń, Andrzej
2017-01-01
This book provides concise descriptions of the various solutions of transition curves, which can be used in the geometric design of roads and highways. It presents mathematical methods and curvature functions for defining transition curves.
Zakirov, T.; Galeev, A.; Khramchenkov, M.
2018-05-01
The study deals with the features of a technique for simulating the capillary pressure curves of porous media on their X-ray microtomographic images. The results of a computational experiment on the immiscible displacement of one incompressible fluid by another in a pore space represented by a digital image of Berea sandstone are presented. For the mathematical description of two-phase fluid flow we use the lattice Boltzmann method (LBM), and phenomena at the fluid interface are described by the color-gradient model. Compared with laboratory studies, the evaluation of capillary pressure from a computational filtration experiment is a non-destructive method and has a number of advantages: no labor for the preparation of fluids and core; the possibility of modeling on the scale of very small core fragments (several mm), which is difficult to realize under experimental conditions; three-dimensional visualization of the dynamics of filling the pore space with a displacing fluid during drainage and imbibition; and the possibility of carrying out multivariate calculations for specified parameters of multiphase flow (density and viscosity of fluids, surface tension, wetting contact angle). A satisfactory agreement of the capillary pressure curves during drainage with experimental results was obtained. It is revealed that with an increase in the volume of the digital image, the relative deviation of the calculated and laboratory data decreases, and for cubic digital cores larger than 1 mm it does not exceed 5%. The behavior of the non-wetting fluid flow during drainage is illustrated. It is shown that, under the flow regimes at which the computational and laboratory experiments are performed, the injected phase characteristically spreads in directions that differ from the hydrodynamic pressure gradient, including opposite to it. Experimentally confirmed regularities are obtained when carrying out calculations for drainage and imbibition at
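As a much-reduced sketch of how a drainage capillary pressure curve arises from pore-scale entry pressures, the following assumes a bundle-of-capillary-tubes idealization of the pore space rather than the paper's lattice Boltzmann model; the interfacial tension and contact angle are illustrative assumptions.

```python
import numpy as np

SIGMA = 0.025   # assumed interfacial tension, N/m
THETA = 0.0     # assumed contact angle, rad (strongly wetting)

def drainage_curve(radii_m, volumes_m3, pc_values_pa):
    """Quasi-static drainage in a bundle-of-tubes model: a tube of radius r
    is invaded by the non-wetting phase once the capillary pressure reaches
    the Young-Laplace entry value 2*sigma*cos(theta)/r. Returns the
    wetting-phase saturation at each imposed capillary pressure."""
    radii = np.asarray(radii_m, dtype=float)
    vols = np.asarray(volumes_m3, dtype=float)
    pc_entry = 2.0 * SIGMA * np.cos(THETA) / radii
    vtot = vols.sum()
    return np.array([1.0 - vols[pc_entry <= pc].sum() / vtot
                     for pc in pc_values_pa])

# Toy pore space: two tubes of radius 1 um and 2 um with equal volumes
pc_vals = np.array([1.0e4, 3.0e4, 6.0e4])
sw = drainage_curve(np.array([1.0e-6, 2.0e-6]), np.array([1.0, 1.0]), pc_vals)
```

Larger pores have lower entry pressures and drain first, so saturation steps down as the imposed capillary pressure sweeps across the pore-size distribution; the LBM computation in the paper resolves the same physics on the real pore geometry.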
International Nuclear Information System (INIS)
Jorda, Michel.
1976-01-01
The dissolution of a solid in an aqueous phase is studied, the solid consisting of dispersed particles. A continuous colorimetric analysis method is developed to study the dissolution process, and a two-parameter optimization method is established to investigate the kinetic curves obtained. This method is based on the differential equation dx/dt = K(1 - x)^n, where n characterizes the decrease of the dissolution velocity as the dissolved fraction increases and K is a velocity parameter. The dissolution of CuSO4 and KMnO4 in water and of UO3 in H2SO4 is discussed. It is shown that the dissolution velocity of UO3 is proportional to the concentration of H+ ions in the solution as long as it does not exceed 0.25 N. The study of the temperature dependence of the UO3 dissolution reaction shows that a transition takes place from 25 to 65 °C between a regime in which dissolution is controlled by both the diffusion of H+ ions and the chemical reaction at the interface, and a regime in which the kinetics is controlled only by diffusion. [fr]
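The two-parameter optimization can be sketched by fitting the closed-form solution of dx/dt = K(1 − x)^n to a kinetic curve. The data here are synthetic, generated from assumed K and n and then re-fitted with SciPy; this is not the author's original least-squares procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def dissolved_fraction(t, K, n):
    """Closed-form solution of dx/dt = K (1 - x)^n with x(0) = 0, for n != 1:
    x(t) = 1 - (1 + (n - 1) K t)^(-1/(n - 1))."""
    return 1.0 - (1.0 + (n - 1.0) * K * t) ** (-1.0 / (n - 1.0))

# Hypothetical kinetic curve generated from known parameters, then re-fitted
t = np.linspace(0.0, 60.0, 30)
x_obs = dissolved_fraction(t, K=0.08, n=2.0)

(K_fit, n_fit), _ = curve_fit(dissolved_fraction, t, x_obs, p0=(0.05, 1.5))
```

On real colorimetric data the fit would include noise and the recovered (K, n) pair would carry the physical interpretation described in the abstract: n measuring how fast the dissolution velocity drops as the dissolved fraction grows.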
Very Bright CV discovered by MASTER-ICATE (Argentina)
Saffe, C.; Levato, H.; Mallamaci, C.; Lopez, C.; Lipunov, F. Podest V.; Denisenko, D.; Gorbovskoy, E.; Tiurina, N.; Balanutsa, P.; Kornilov, V.; Belinski, A.; Shatskiy, N.; Chazov, V.; Kuznetsov, A.; Yecheistov, V.; Yurkov, V.; Sergienko, Y.; Varda, D.; Sinyakov, E.; Gabovich, A.; Ivanov, K.; Yazev, S.; Budnev, N.; Konstantinov, E.; Chuvalaev, O.; Poleshchuk, V.; Gress, O.; Frolova, A.; Krushinsky, V.; Zalozhnih, I.; Popov, A.; Bourdanov, A.; Parkhomenko, A.; Tlatov, A.; Dormidontov, D.; Senik, V.; Podvorotny, P.; Shumkov, V.; Shurpakov, S.
2013-06-01
MASTER-ICATE very wide-field camera (d=72mm f/1.2 lens + 11 Mpix CCD) located near San Juan, Argentina has discovered OT source at (RA, Dec) = 14h 20m 23.5s -48d 55m 40s on the combined image (exposure 275 sec) taken on 2013-06-08.048 UT. The OT unfiltered magnitude is 12.1m (limit 13.1m). There is no minor planet at this place. The OT is seen in more than 10 images starting from 2013-06-02.967 UT (275 sec exposure) when it was first detected at 12.4m.
DEFF Research Database (Denmark)
Bernstein, Daniel J.; Birkner, Peter; Lange, Tanja
2013-01-01
-arithmetic level are as follows: (1) use Edwards curves instead of Montgomery curves; (2) use extended Edwards coordinates; (3) use signed-sliding-window addition-subtraction chains; (4) batch primes to increase the window size; (5) choose curves with small parameters and base points; (6) choose curves with large...
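The unified Edwards addition law that these curve-arithmetic improvements build on can be sketched over a toy prime field; p and d below are small illustrative values, not the ECM-tuned curves of the paper.

```python
# Unified addition on an Edwards curve x^2 + y^2 = 1 + d*x^2*y^2 over F_p.
p, d = 1019, 5   # toy parameters for illustration only

def edwards_add(P, Q):
    """Edwards addition: the same formula adds distinct points and doubles."""
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, p) % p
    y3 = (y1 * y2 - x1 * x2) * pow(1 - t, -1, p) % p
    return x3, y3

def on_curve(P):
    x, y = P
    return (x * x + y * y - 1 - d * x * x * y * y) % p == 0

# Find some affine point by brute force (p = 3 mod 4, so sqrt is a power)
for x in range(1, p):
    den = (1 - d * x * x) % p
    if den == 0:
        continue
    y_sq = (1 - x * x) * pow(den, -1, p) % p
    y = pow(y_sq, (p + 1) // 4, p)
    if y * y % p == y_sq:
        P0 = (x, y)
        break

Q = edwards_add(P0, P0)   # doubling uses the same unified formula
```

The neutral element is (0, 1), and the fact that one formula serves for both addition and doubling is part of what makes Edwards coordinates attractive compared to Montgomery curves in the abstract's list of improvements.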
Wan, Wenshuai; Itri, Jason
2016-01-01
Prices charged for imaging services can be found in the charge master, a catalog of retail list prices for medical goods and services. This article reviews the evolution of reimbursement in the United States and provides a balanced discussion of the factors that influence charge master prices. Reduced payments to hospitals have pressured hospitals to generate additional revenue by increasing charge master prices. An unfortunate consequence is that those least able to pay for health care, the uninsured, are subjected to the highest charges. Yet differences in pricing also represent an opportunity for radiology practices that provide imaging services larger in scope or superior in quality to pursue product differentiation. Physicians, hospital executives, and policy makers need to work together to improve the existing reimbursement system to promote high-quality, low-cost imaging. Copyright © 2016 Mosby, Inc. All rights reserved.
International Nuclear Information System (INIS)
Haaker, L.W.; Jelatis, D.G.
1979-01-01
A master-slave remote-control manipulator for carrying out work on the other side of a shield wall. The device provides a relative Y-motion displacement, whose function is to extend the reach of the manipulator toward the front and to facilitate its installation; lateral rotation or inclination of the slave arm relative to the master arm; and a Z-motion extension through which the length of the slave arm is increased relative to that of the master arm. Devices have been developed that transform linear movements into rotational movements, so that these movements can be transmitted through rotational seal fittings capable of ensuring the separation between the operator's environment and the work area. Particular improvements have been made to the handles, handle seals, pincer mechanisms, etc. [fr]
African Journals Online (AJOL)
Each of these question types is presented based on the College of Family ... include evidence-based medicine and primary care research methods. This month's ... appraising qualitative research), unit standard 2 (Evaluate and manage a ...
MASTER- an indigenous nuclear design code of KAERI
International Nuclear Information System (INIS)
Cho, Byung Oh; Lee, Chang Ho; Park, Chan Oh; Lee, Chong Chul
1996-01-01
KAERI has recently developed the nuclear design code MASTER for application to reactor physics analyses of pressurized water reactors. Its neutronics model solves the space-time-dependent neutron diffusion equations with advanced nodal methods. The major calculation categories of MASTER consist of microscopic depletion, steady-state and transient solution, xenon dynamics, adjoint solution, and pin power and burnup reconstruction. The MASTER validation analyses, which are in progress with the aim of submitting the Uncertainty Topical Report to KINS in the first half of 1996, include global reactivity calculations and detailed pin-by-pin power distributions as well as in-core detector reaction rate calculations. The objective of this paper is to give an overall description of the CASMO/MASTER code system, whose verification results are presented in detail in separate papers.
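As a toy stand-in for the space-dependent diffusion eigenvalue problem such a code solves (in vastly more general multigroup, nodal form), the following computes k-effective for a one-group, one-dimensional slab by finite differences and power iteration. All cross-section values are assumed for illustration.

```python
import numpy as np

# One-group, 1-D finite-difference diffusion eigenvalue sketch:
#   -D phi'' + siga * phi = (1/k) * nusigf * phi,  phi = 0 at both boundaries.
D, siga, nusigf, L, N = 1.0, 0.07, 0.08, 100.0, 200   # assumed slab data (cm, 1/cm)
h = L / N
A = (np.diag(np.full(N - 1, 2 * D / h**2 + siga))
     + np.diag(np.full(N - 2, -D / h**2), 1)
     + np.diag(np.full(N - 2, -D / h**2), -1))

phi = np.ones(N - 1)
phi /= np.linalg.norm(phi)
for _ in range(200):                       # power iteration on A^-1 * F
    psi = np.linalg.solve(A, nusigf * phi)
    k_eff = np.linalg.norm(psi)            # dominant eigenvalue estimate
    phi = psi / k_eff
```

For this homogeneous slab the analytic answer is k = νΣf / (Σa + D(π/L)²), which the iteration approaches; nodal codes like MASTER obtain the same kind of eigenvalue and flux shape on coarse nodes with much higher-order spatial accuracy.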
Mastering Ninject for dependency injection
Baharestani, Daniel
2013-01-01
Mastering Ninject for Dependency Injection teaches you the most powerful concepts of Ninject in a simple and easy-to-understand format using lots of practical examples, diagrams, and illustrations.Mastering Ninject for Dependency Injection is aimed at software developers and architects who wish to create maintainable, extensible, testable, and loosely coupled applications. Since Ninject targets the .NET platform, this book is not suitable for software developers of other platforms. Being familiar with design patterns such as singleton or factory would be beneficial, but no knowledge of depende
Dual arm master controller development
International Nuclear Information System (INIS)
Kuban, D.P.; Perkins, G.S.
1985-01-01
The advanced servomanipulator (ASM) slave was designed with an anthropomorphic stance, gear/torque tube power drives, and modular construction. These features resulted in increased inertia, friction, and backlash relative to tape-driven manipulators. Studies were performed which addressed the human factors design and performance trade-offs associated with the corresponding master controller best suited for the ASM. The results of these studies, as well as the conceptual design of the dual arm master controller, are presented. This work was performed as part of the Consolidated Fuel Reprocessing Program at the Oak Ridge National Laboratory. 5 refs., 7 figs., 1 tab
Enhanced Master Controller Unit Tester
Benson, Patricia; Johnson, Yvette; Johnson, Brian; Williams, Philip; Burton, Geoffrey; McCoy, Anthony
2007-01-01
The Enhanced Master Controller Unit Tester (EMUT) software is a tool for development and testing of software for a master controller (MC) flight computer. The primary function of the EMUT software is to simulate interfaces between the MC computer and external analog and digital circuitry (including other computers) in a rack of equipment to be used in scientific experiments. The simulations span the range of nominal, off-nominal, and erroneous operational conditions, enabling the testing of MC software before all the equipment becomes available.
Mastering IDEAScript the definitive guide
Mueller, John Paul
2011-01-01
With approximately 44,000 users in the U.S. and Canada, as well as 42,000 in Europe, IDEA software has become a leading provider of data analysis software for use by auditors and accountants. Written to provide users with a quick access guide for optimal use of IDEAScript, Mastering IDEAScript: The Definitive Guide is IDEA's official guide to mastering IDEAScript, covering essential topics such as Introducing IDEAScript, Understanding the Basics of IDEAScript Editor, Designing Structured Applications, Understanding IDEA Databases and much more. For auditors, accountants and controllers.
Mengtong Jin; Haiquan Liu; Wenshuo Sun; Qin Li; Zhaohuan Zhang; Jibing Li; Yingjie Pan; Yong Zhao
2015-01-01
Vibrio parahaemolyticus is an important pathogen that causes foodborne illness associated with seafood. Therefore, rapid and reliable methods to detect and quantify total viable V. parahaemolyticus in seafood are needed. In this study, an RNA-based real-time reverse-transcriptase PCR (RT-qPCR) without an enrichment step has been developed for detection and quantification of total viable V. parahaemolyticus in shrimp. RNA standards with the target segments were synthesized in vitro with T7 RNA p...
Energy Technology Data Exchange (ETDEWEB)
Milosevic, M [Institute of Nuclear Sciences Vinca, Beograd (Serbia and Montenegro)
1979-07-01
A one-dimensional variational method for cylindrical configurations was applied to calculate group constants, together with the effects of elastic slowing down, anisotropic elastic scattering, inelastic scattering, and heterogeneous resonance absorption, with the aim of including the presence of a number of different isotopes and the effects of neutron leakage from the reactor core. A P3 neutron flux shape and adjoint function are proposed in order to enable the calculation of smaller reactors and the inclusion of heterogeneity effects through cell calculations. Microscopic multigroup constants were prepared from the UKNDL data library. An analytical-numerical approach was applied to solve the equations of the P3 approximation and obtain the neutron flux moments and adjoint functions.
Differential geometry and topology of curves
Animov, Yu
2001-01-01
Differential geometry is an actively developing area of modern mathematics. This volume presents a classical approach to the general topics of the geometry of curves, including the theory of curves in n-dimensional Euclidean space. The author investigates problems for special classes of curves and gives the working method used to obtain the conditions for closed polygonal curves. The proof of the Bakel-Werner theorem under conditions of boundedness for curves with periodic curvature and torsion is also presented. This volume also highlights the contributions made by great geometers, past and present, to differential geometry and the topology of curves.
International Nuclear Information System (INIS)
Zhou, J; Lasio, G; Chen, S; Zhang, B; Langen, K; Prado, K; D’Souza, W; Yi, B; Huang, J
2015-01-01
Purpose: To develop a CBCT HU correction method using a patient-specific HU to mass density conversion curve based on a novel image registration and organ mapping method for head-and-neck radiation therapy. Methods: There are three steps to generate a patient-specific CBCT HU to mass density conversion curve. First, we developed a novel robust image registration method based on sparseness analysis to register the planning CT (PCT) and the CBCT. Second, a novel organ mapping method was developed to transfer the organs-at-risk (OAR) contours from the PCT to the CBCT, and the corresponding mean HU values of each OAR were measured in both the PCT and CBCT volumes. Third, a set of PCT and CBCT HU to mass density conversion curves were created based on the mean HU values of the OARs and the corresponding mass density of each OAR in the PCT. Then, we compared our proposed conversion curve with the traditional Catphan-phantom-based CBCT HU to mass density calibration curve. Both curves were input into the treatment planning system (TPS) for dose calculation. Finally, the PTV and OAR doses, DVH and dose distributions of the CBCT plans were compared to the original treatment plan. Results: One head-and-neck case, which contained a pair of PCT and CBCT, was used. The dose differences between the PCT and CBCT plans using the proposed method are −1.33% for the mean PTV, 0.06% for PTV D95%, and −0.56% for the left neck. The dose differences between plans of PCT and CBCT corrected using the Catphan-based method are −4.39% for mean PTV, 4.07% for PTV D95%, and −2.01% for the left neck. Conclusion: The proposed CBCT HU correction method achieves better agreement with the original treatment plan compared to the traditional Catphan-based calibration method.
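The patient-specific conversion step can be sketched as piecewise-linear interpolation through (mean OAR HU, mass density) pairs; the numbers below are illustrative stand-ins, not values from the study.

```python
import numpy as np

# Hypothetical (mean CBCT HU, mass density) pairs measured over mapped OARs
hu_oar  = np.array([-780.0, -60.0, 0.0, 45.0, 900.0])   # lung, fat, water, muscle, bone
rho_oar = np.array([0.26, 0.95, 1.00, 1.05, 1.61])      # g/cm^3

def hu_to_density(hu):
    """Patient-specific piecewise-linear HU -> mass density conversion,
    held constant outside the calibrated range (np.interp's default)."""
    return np.interp(hu, hu_oar, rho_oar)
```

Because the anchor points come from the patient's own mapped organs rather than a phantom, the curve absorbs the patient-specific HU distortions of CBCT, which is the advantage the abstract reports over the Catphan-based calibration.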
Directory of Open Access Journals (Sweden)
Lihong Zhang
2013-01-01
To bridge traditional Chinese medicine (TCM) and modern Western medicine, a new method based on the area under the absorbance-wavelength curve (AUAWC), obtained by spectrophotometric scanning, was investigated and compared with an HPLC method to explore metabolomic pharmacokinetics in rats. The AUAWC and total drug concentration were obtained after Yangxue was injected into rats. Meanwhile, individual plasma concentrations of sodium ferulate, tetramethylpyrazine hydrochloride, tanshinol sodium, and sodium tanshinone IIA sulfonate were determined by HPLC. The metabolomic concentration-time profile of the multicomponent mixture from AUAWC and the profiles of the individual components from HPLC were compared. The AUAWC data fit a one-compartment model with a mean area under the concentration-time curve (AUC) of 9370.58 min·μg/mL and a mean elimination half-life (t1/2) of 12.92 min. The HPLC results demonstrated that sodium ferulate and tetramethylpyrazine hydrochloride followed a one-compartment model with AUCs of 6075.50 and 876.94 min·μg/mL and t1/2 of 10.85 and 20.57 min, respectively. Tanshinol sodium and sodium tanshinone IIA sulfonate showed a two-compartment model, with AUCs of 29.58 and 201.46 and t1/2β of 1.76 and 16.90, respectively. The profiles indicate that the AUAWC method can be used to study the pharmacokinetics of multicomponent TCM and, combined with in vivo metabolomic profiling by HPLC, to support the development of its active-component theory and clinical application.
Fan, Zhichao; Hwang, Keh-Chih; Rogers, John A.; Huang, Yonggang; Zhang, Yihui
2018-02-01
Mechanically-guided 3D assembly based on controlled, compressive buckling represents a promising, emerging approach for forming complex 3D mesostructures in advanced materials. Due to the versatile applicability to a broad set of material types (including device-grade single-crystal silicon) over length scales from nanometers to centimeters, a wide range of novel applications have been demonstrated in soft electronic systems, interactive bio-interfaces as well as tunable electromagnetic devices. Previously reported 3D designs relied mainly on finite element analyses (FEA) as a guide, but the massive numerical simulations and computational efforts necessary to obtain the assembly parameters for a targeted 3D geometry prevent rapid exploration of engineering options. A systematic understanding of the relationship between a 3D shape and the associated parameters for assembly requires the development of a general theory for the postbuckling process. In this paper, a double perturbation method is established for the postbuckling analyses of planar curved beams, of direct relevance to the assembly of ribbon-shaped 3D mesostructures. By introducing two perturbation parameters related to the initial configuration and the deformation, the highly nonlinear governing equations can be transformed into a series of solvable, linear equations that give analytic solutions to the displacements and curvatures during postbuckling. Systematic analyses of postbuckling in three representative ribbon shapes (sinusoidal, polynomial and arc configurations) illustrate the validity of the theoretical method, through comparisons to the results of experiment and FEA. These results shed light on the relationship between the important deformation quantities (e.g., mode ratio and maximum strain) and the assembly parameters (e.g., initial configuration and the applied strain). This double perturbation method provides an attractive route to the inverse design of ribbon-shaped 3D geometries, as
20 years of power station master training
International Nuclear Information System (INIS)
Schwarz, O.
1977-01-01
In the early fifties, the VGB working group 'Power station master training' elaborated plans for the systematic and uniform training of power station operating personnel. In 1957, the first power station master course was held. In the meantime, 1,720 power station masters are in possession of a master's certificate of a chamber of commerce and trade. Furthermore, 53 power station masters have recently obtained in courses of the 'Kraftwerksschule e.V.' the know-how which enables them also to carry out their duty as a master in nuclear power stations. (orig.)
Garnavich, Peter; McClelland, Colin
2013-02-01
We observed the optical transient MASTER OT J065608.28+744455.2 (ATEL #4783) with the Vatican Advanced Technology Telescope (VATT) and VATT4K CCD camera. V-band imaging began at 2013 Feb. 5.15 (UT) and continued for 3.3 hours with a time resolution of 22 seconds.
International Nuclear Information System (INIS)
Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin; Van de Kamp, Thomas; Rolo, Tomy dos Santos; Xiao, Xianghui; Moosmann, Julian; Kashef, Jubin; Stotzka, Rainer
2015-01-01
High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain X-ray radiation dose to optimal levels to either increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or the conservation of structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrangian multiplier fashion with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation.
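As an illustration of the reconstruction principle, the sketch below solves a small synthetic compressive-sampling problem: recovering a piecewise-constant "slice" from underdetermined linear measurements by minimising a data-fidelity term plus a total-variation penalty. It uses plain gradient descent on a smoothed TV term with a fixed multiplier, not the paper's algebraic technique with the L-curve/conjugate-gradient parameter choice; all names and values are illustrative.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of the smoothed total-variation penalty sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    d = np.diff(x)
    w = d / np.sqrt(d**2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= w          # each difference pulls on both of its endpoints
    g[1:] += w
    return g

def reconstruct_tv(A, b, lam=0.5, step=1e-3, iters=5000):
    """Minimise ||Ax - b||^2 + lam * TV(x) by gradient descent (fixed multiplier)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * (2.0 * A.T @ (A @ x - b) + lam * tv_grad(x))
    return x

rng = np.random.default_rng(0)
truth = np.zeros(40)
truth[10:25] = 1.0                     # piecewise-constant object
A = rng.standard_normal((20, 40))      # few "projections": 20 measurements, 40 unknowns
b = A @ truth
x_rec = reconstruct_tv(A, b)
```

Because the object has a sparse gradient, the TV penalty lets far fewer measurements than unknowns still yield a faithful reconstruction, which is the point made in the abstract.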
Directory of Open Access Journals (Sweden)
Weiping Liu
2017-10-01
It is important to determine the soil–water characteristic curve (SWCC) for analyzing slope seepage and stability under rainfall conditions. However, SWCCs exhibit high uncertainty because of complex influencing factors, which has not previously been considered in slope seepage and stability analysis under rainfall. This study aimed to evaluate the uncertainty of the SWCC and its effects on the seepage and stability analysis of an unsaturated soil slope under rainfall conditions. The SWCC model parameters were treated as random variables. An uncertainty evaluation of the parameters was conducted based on the Bayesian approach and the Markov chain Monte Carlo (MCMC) method. Observed data from granite residual soil were used to test the uncertainty of the SWCC. Then, different confidence intervals for the model parameters of the SWCC were constructed. The slope seepage and stability under rainfall with SWCCs at different confidence intervals were investigated using finite element software (SEEP/W and SLOPE/W). The results demonstrated that SWCC uncertainty had significant effects on slope seepage and stability. In general, the larger the percentile value, the greater the reduction of negative pore-water pressure in the soil layer and the lower the safety factor of the slope. Uncertainties in the model parameters of the SWCC can lead to obvious errors in predicted pore-water pressure profiles and in the estimated safety factor of the slope under rainfall conditions.
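The Bayesian/MCMC treatment of SWCC parameters can be sketched compactly. The snippet below assumes a van Genuchten-type SWCC (the abstract does not fix the model form) and runs a random-walk Metropolis sampler on synthetic data; the parameter values, noise level and flat priors are all illustrative, not the paper's.

```python
import numpy as np

def van_genuchten(psi, a, n):
    """van Genuchten effective saturation Se(psi), with m = 1 - 1/n."""
    return (1.0 + (a * psi) ** n) ** (-(1.0 - 1.0 / n))

def log_post(theta, psi, obs, sigma=0.02):
    """Gaussian log-likelihood with a flat prior on a > 0, n > 1."""
    a, n = theta
    if a <= 0.0 or n <= 1.0:
        return -np.inf
    r = obs - van_genuchten(psi, a, n)
    return -0.5 * np.sum(r**2) / sigma**2

rng = np.random.default_rng(1)
psi = np.logspace(-1, 3, 30)   # matric suction values (kPa), illustrative range
obs = van_genuchten(psi, a=0.05, n=1.8) + 0.02 * rng.standard_normal(30)

theta = np.array([0.1, 1.5])   # starting guess for (a, n)
lp = log_post(theta, psi, obs)
chain = []
for _ in range(20000):         # random-walk Metropolis sampler
    prop = theta + rng.normal(0.0, [0.005, 0.05])
    lp_prop = log_post(prop, psi, obs)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)[5000:]                   # discard burn-in
a_ci = np.percentile(chain[:, 0], [2.5, 97.5])   # 95% credible interval for a
```

Posterior percentiles of the sampled parameters are what would feed the different-confidence-interval SWCCs passed to the seepage/stability analysis.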
Detection of player learning curve in a car driving game
Bontchev, Boyan; Vassileva, Dessislava
2018-01-01
Detection of learning curves of player metrics is very important for serious (or so-called applied) games, because it provides an indicator of how players master the game tasks by acquiring the cognitive abilities, knowledge, and skills necessary for solving the game challenges. Real
Ballestrero, Sergio; The ATLAS collaboration; Fazio, Daniel; Gament, Costin-Eugen; Lee, Christopher; Scannicchio, Diana; Twomey, Matthew Shaun
2016-01-01
In the ATLAS experiment, the Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of data streamed from the detector during collisions at the Large Hadron Collider at CERN. The online farm comprises ~4000 servers processing the data read out from ~100 million detector channels through multiple trigger levels. Configuring these servers is not an easy task, especially since the detector itself is made up of multiple different sub-detectors, each with its own particular requirements. The previous method of configuring these servers, using Quattor and a hierarchical script system, was cumbersome and restrictive. A better, unified system was therefore required to simplify the tasks of the TDAQ systems administrators, for both the local and net-booted systems, and to fulfil the requirements of TDAQ, the Detector Control Systems and the sub-detector groups. Various configuration management systems were evaluated; in the end, Puppet was chosen as the application of ...
Zheng, WeiKang; Kelly, Patrick L.; Filippenko, Alexei V.
2018-05-01
We examine the relationship between three parameters of Type Ia supernovae (SNe Ia): peak magnitude, rise time, and photospheric velocity at the time of peak brightness. The peak magnitude is corrected for extinction using an estimate determined from MLCS2k2 fitting. The rise time is measured from the well-observed B-band light curve with the first detection at least 1 mag fainter than the peak magnitude, and the photospheric velocity is measured from the strong absorption feature of Si II λ6355 at the time of peak brightness. We model the relationship among these three parameters using an expanding fireball with two assumptions: (a) the optical emission is approximately that of a blackbody, and (b) the photospheric temperatures of all SNe Ia are the same at the time of peak brightness. We compare the precision of the distance residuals inferred using this physically motivated model against those from the empirical Phillips relation and the MLCS2k2 method for 47 low-redshift SNe Ia (0.005 Ia in our sample with higher velocities are inferred to be intrinsically fainter. Eliminating the high-velocity SNe and applying a more stringent extinction cut to obtain a “low-v golden sample” of 22 SNe, we obtain significantly reduced scatter of 0.108 ± 0.018 mag in the new relation, better than those of the Phillips relation and the MLCS2k2 method. For 250 km s^-1 of residual peculiar motions, we find 68% and 95% upper limits on the intrinsic scatter of 0.07 and 0.10 mag, respectively.
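The expanding-fireball argument can be written out explicitly. With the photospheric radius at peak approximated as R ≈ v·t_rise and a common blackbody temperature T at peak (assumptions (a) and (b) above), a rederivation sketch gives:

```latex
L_{\mathrm{peak}} \propto R^{2} T^{4} \propto \left(v\, t_{\mathrm{rise}}\right)^{2} T^{4}
\quad\Longrightarrow\quad
M_{\mathrm{peak}} = M_{0} - 5 \log_{10}\!\left(v\, t_{\mathrm{rise}}\right),
```

so, at fixed T, brighter peaks require a larger product v·t_rise, with M_0 absorbing the common temperature and normalisation. This is a sketch of the scaling only, not the paper's exact calibration.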
Chen, Xingyuan; Miller, Gretchen R; Rubin, Yoram; Baldocchi, Dennis D
2012-12-01
The heat pulse method is widely used to measure water flux through plants; it works by using the speed at which a heat pulse is propagated through the system to infer the velocity of water through a porous medium. No systematic, non-destructive calibration procedure exists to determine the site-specific parameters necessary for calculating sap velocity, e.g., wood thermal diffusivity and probe spacing. Such parameter calibration is crucial to obtain the correct transpiration flux density from the sap flow measurements at the plant scale and subsequently to upscale tree-level water fluxes to canopy and landscape scales. The purpose of this study is to present a statistical framework for sampling and simultaneously estimating the tree's thermal diffusivity and probe spacing from in situ heat response curves collected by the implanted probes of a heat ratio measurement device. Conditioned on the time traces of wood temperature following a heat pulse, the parameters are inferred using a Bayesian inversion technique, based on the Markov chain Monte Carlo sampling method. The primary advantage of the proposed methodology is that it does not require knowledge of probe spacing or any further intrusive sampling of sapwood. The Bayesian framework also enables direct quantification of uncertainty in estimated sap flow velocity. Experiments using synthetic data show that repeated tests using the same apparatus are essential for obtaining reliable and accurate solutions. When applied to field conditions, these tests can be obtained in different seasons and can be automated using the existing data logging system. Empirical factors are introduced to account for the influence of non-ideal probe geometry on the estimation of heat pulse velocity, and are estimated in this study as well. The proposed methodology may be tested for its applicability to realistic field conditions, with an ultimate goal of calibrating heat ratio sap flow systems in practical applications.
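As background to why these two parameters matter, the heat ratio method converts the ratio of downstream to upstream temperature rises into a heat pulse velocity; both the diffusivity k and the probe spacing x enter the formula directly, so an error in either biases the velocity. A minimal sketch (formula as in the heat-ratio literature, e.g. Burgess et al.; the numerical values are hypothetical, and the paper's Bayesian estimation of k and x is not reproduced here):

```python
import math

def heat_pulse_velocity(k, x, dT_down, dT_up):
    """Heat ratio method: Vh = (k / x) * ln(dT_down / dT_up), converted to cm/h.
    k: wood thermal diffusivity (cm^2/s); x: heater-probe spacing (cm)."""
    return (k / x) * math.log(dT_down / dT_up) * 3600.0

# Hypothetical values: diffusivity 0.0025 cm^2/s, probes 0.6 cm from the heater
vh = heat_pulse_velocity(0.0025, 0.6, dT_down=1.30, dT_up=1.00)

# A 10% error in the assumed probe spacing biases the velocity by ~10%,
# which is why calibrating x (and k) matters:
vh_biased = heat_pulse_velocity(0.0025, 0.66, dT_down=1.30, dT_up=1.00)
```

Since Vh scales as k/x, the fractional bias from a mis-specified spacing propagates directly into the sap flow estimate, motivating the in situ calibration the paper proposes.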
Directory of Open Access Journals (Sweden)
Janusz Charatonik
1991-11-01
Results concerning the contractibility of curves (equivalently: of dendroids) are collected and discussed in the paper. Interrelations between various conditions which are either sufficient or necessary for a curve to be contractible are studied.
Superspace formulation for the master equation
International Nuclear Information System (INIS)
Abreu, E.M.; Braga, N.R.
1996-01-01
It is shown that the quantum master equation of the field-antifield quantization method at one-loop order can be translated into the requirement of a superfield structure for the action. The Pauli-Villars regularization is implemented in this BRST superspace and the case of anomalous gauge theories is investigated. The quantum action, including Wess-Zumino terms, shows up as one of the components of a superfield that includes the BRST anomalies in the other component. The example of W2 quantum gravity is also discussed. copyright 1996 The American Physical Society
From convolutionless generalized master to Pauli master equations
International Nuclear Information System (INIS)
Capek, V.
1995-01-01
The paper is a continuation of previous work within which it has been proved that time integrals of the memory functions (i.e. the Markovian transfer rates from Pauli Master Equations, PME) in Time-Convolution Generalized Master Equations (TC-GME) for the probabilities of finding a state of an asymmetric system interacting with a bath with a continuous spectrum are exactly zero, provided that no approximation is involved, irrespective of the usual finite-perturbation-order correspondence with the Golden Rule transition rates. In this paper, attention is paid to an alternative way of deriving the rigorous PME from the TCL-GME. Arguments are given in favour of the proposition that the long-time limit of the coefficients in the TCL-GME for the above probabilities, under the same assumption and presuming that this limit exists, is equal to zero. 11 refs
51Cr - erythrocyte survival curves
International Nuclear Information System (INIS)
Paiva Costa, J. de.
1982-07-01
Sixteen subjects were studied: fifteen patients in a hemolytic state and a normal individual as a control. The aim was to obtain better techniques for the analysis of erythrocyte survival curves, according to the recommendations of the International Committee of Hematology. Radioactive chromium (51Cr) was used as the tracer. A review of the international literature on the aspects relevant to this work was first carried out, making it possible to establish comparisons and to clarify phenomena observed in our investigation. Several parameters affecting both the exponential and the linear curves were considered in this study. The analysis of the erythrocyte survival curves in the studied group revealed that the elution factor did not present a quantitatively homogeneous response in all cases; the results of the analysis of these curves were established through programs run on an electronic calculator. (Author)
Analysis of characteristic performance curves in radiodiagnosis by an observer
International Nuclear Information System (INIS)
Kossovoj, A.L.
1988-01-01
Methods and approaches for the construction of performance characteristic curves (PX-curves) in roentgenology, and their qualitative and quantitative evaluation, are described. The application of PX-curves to the analysis of scintigraphic and sonographic images is presented.
A new approach to analysing GRB light curves
International Nuclear Information System (INIS)
Varga, B.; Horvath, I.
2005-01-01
We estimated the T_xx quantiles of the cumulative GRB light curves using our recalculated background. The basic information in the light curves was extracted by multivariate statistical methods. The possible classes of the light curves are also briefly discussed.
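The T_xx quantiles of a cumulative burst light curve can be illustrated with a toy calculation. The sketch below uses synthetic counts on a zero background (the paper's background-recalculation step is not reproduced) and finds the times enclosing the central 90% of the fluence, i.e. T90:

```python
import numpy as np

def t_quantile_interval(times, counts, lo=0.05, hi=0.95):
    """Times at which the cumulative background-subtracted counts cross the
    lo and hi fractions of the total fluence (hi - lo = 0.90 gives T90)."""
    cum = np.cumsum(counts) / np.sum(counts)
    return times[np.searchsorted(cum, lo)], times[np.searchsorted(cum, hi)]

# Synthetic burst: 0.1 s bins, FRED-like pulse starting at t = 2 s, zero background
t = np.arange(0.0, 20.0, 0.1)
counts = np.where(t > 2.0, (t - 2.0) * np.exp(-(t - 2.0) / 1.5), 0.0)
t05, t95 = t_quantile_interval(t, counts)
t90 = t95 - t05      # duration enclosing the central 90% of the fluence
```

Other quantile pairs (e.g. 0.25/0.75 for T50) follow from the same cumulative curve, which is why an accurate background is critical: it shifts the cumulative normalisation and hence every T_xx.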
Considerations for reference pump curves
International Nuclear Information System (INIS)
Stockton, N.B.
1992-01-01
This paper examines problems associated with inservice testing (IST) of pumps to assess their hydraulic performance using reference pump curves to establish acceptance criteria. Safety-related pumps at nuclear power plants are tested under the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (the Code), Section XI. The Code requires testing pumps at specific reference points of differential pressure or flow rate that can be readily duplicated during subsequent tests. There are many cases where test conditions cannot be duplicated. For some pumps, such as service water or component cooling pumps, the flow rate at any time depends on plant conditions and the arrangement of multiple independent and constantly changing loads. System conditions cannot be controlled to duplicate a specific reference value. In these cases, utilities frequently request to use pump curves for comparison of test data for acceptance. There is no prescribed method for developing a pump reference curve. The methods vary and may yield substantially different results. Some results are conservative when compared to the Code requirements; some are not. The errors associated with different curve testing techniques should be understood and controlled within reasonable bounds. Manufacturers' pump curves, in general, are not sufficiently accurate to use as reference pump curves for IST. Testing using reference curves generated with polynomial least squares fits over limited ranges of pump operation, cubic spline interpolation, or cubic spline least squares fits can provide a measure of pump hydraulic performance that is at least as accurate as the Code-required method. Regardless of the test method, error can be reduced by using more accurate instruments, by correcting for systematic errors, by increasing the number of data points, and by taking repetitive measurements at each data point.
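As an illustration of one of the curve-fitting options mentioned, the sketch below builds a reference pump curve by a quadratic least-squares fit to test points and compares a later IST measurement against it. The flow/pressure values and the ±6% acceptance band are invented for illustration; they are not Code values or plant data.

```python
import numpy as np

# Illustrative pump test points: flow rate (gpm) vs differential pressure (psid)
flow = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 350.0])
dp = np.array([95.0, 92.5, 88.0, 81.5, 73.0, 62.5])

coef = np.polyfit(flow, dp, 2)   # quadratic least-squares reference curve
ref = np.poly1d(coef)

# A later inservice test taken at a non-reference flow rate:
test_flow, measured_dp = 230.0, 84.0
deviation = measured_dp / ref(test_flow) - 1.0   # fractional deviation from the curve
acceptable = abs(deviation) <= 0.06              # example +/-6% band, not a Code limit
```

The fitted curve lets the measured point be judged at whatever flow the system happened to allow, which is exactly the situation described for service water and component cooling pumps.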
Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.
Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M
2014-12-01
In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
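The data-depth idea behind the curve boxplot can be sketched with a pairwise band depth. The function below is a simplified variant of modified band depth (pairs containing the curve itself contribute a constant offset, so the ranking is unaffected); the ensemble is synthetic, not one of the paper's applications.

```python
import numpy as np

def modified_band_depth(curves):
    """Pairwise band depth: for each curve, the average fraction of the domain
    on which it lies inside the pointwise envelope of a pair of ensemble curves."""
    n = curves.shape[0]
    depth = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            lo = np.minimum(curves[i], curves[j])
            hi = np.maximum(curves[i], curves[j])
            depth += ((curves >= lo) & (curves <= hi)).mean(axis=1)
    return depth / (n * (n - 1) / 2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
ensemble = (np.sin(2 * np.pi * t)
            + 0.3 * rng.standard_normal((20, 1))     # per-curve offset
            + 0.1 * rng.standard_normal((20, 50)))   # pointwise noise
depth = modified_band_depth(ensemble)
median_curve = ensemble[np.argmax(depth)]  # deepest curve plays the role of the median
```

Sorting the ensemble by depth yields the 50% central band and whisker-like extremes, which is the curve analogue of the boxplot's quartiles and whiskers.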
Hybrid quantum-classical master equations
International Nuclear Information System (INIS)
Diósi, Lajos
2014-01-01
We discuss hybrid master equations of composite systems, which are hybrids of classical and quantum subsystems. A fairly general form of hybrid master equations is suggested. Its consistency is derived from the consistency of Lindblad quantum master equations. We emphasize that quantum measurement is a natural example of exact hybrid systems. We derive a heuristic hybrid master equation of time-continuous position measurement (monitoring). (paper)
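For reference, the Lindblad quantum master equation from which the hybrid equations inherit their consistency has the standard form (H the Hamiltonian, L_k the jump operators, γ_k ≥ 0 the rates):

```latex
\dot{\rho} \;=\; -\frac{i}{\hbar}\,[H,\rho]
\;+\; \sum_{k} \gamma_{k} \left( L_{k}\,\rho\, L_{k}^{\dagger}
\;-\; \tfrac{1}{2}\left\{ L_{k}^{\dagger} L_{k},\, \rho \right\} \right).
```

This is the textbook form quoted for orientation; the paper's hybrid equations generalise it to a density that also carries classical variables.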
International Nuclear Information System (INIS)
O’Brien, Donal; Shalloo, Laurence; Crosson, Paul; Donnellan, Trevor; Farrelly, Niall; Finnan, John; Hanrahan, Kevin; Lalor, Stan; Lanigan, Gary; Thorne, Fiona; Schulte, Rogier
2014-01-01
Highlights: • Improving productivity was the most effective strategy to reduce emissions and costs. • The accounting methods disagreed on the total abatement potential of mitigation measures. • Thus, it may be difficult to convince farmers to adopt certain abatement measures. • Domestic offsetting and consumption-based accounting are options to overcome current methodological issues. - Abstract: Marginal abatement cost curve (MACC) analysis allows the evaluation of strategies to reduce agricultural greenhouse gas (GHG) emissions relative to some reference scenario and encompasses their costs or benefits. A popular approach to quantifying the potential to abate national agricultural emissions is the Intergovernmental Panel on Climate Change guidelines for national GHG inventories (IPCC-NI method). This methodology is the standard for assessing compliance with binding national GHG reduction targets and uses a sector-based framework to attribute emissions. There is, however, an alternative to the IPCC-NI method, known as life cycle assessment (LCA), which is the preferred method to assess the GHG intensity of food production (kg of GHG/unit of food). The purpose of this study was to compare the effect of using the IPCC-NI and LCA methodologies when completing a MACC analysis of national agricultural GHG emissions. The MACC was applied to the Irish agricultural sector and mitigation measures were constrained only by the biophysical environment. The reference scenario chosen assumed that the 2020 growth targets set by the Irish agricultural industry would be achieved. The comparison of methodologies showed that only 1.1 Mt of the annual GHG abatement potential that can be achieved at zero or negative cost could be attributed to the agricultural sector using the IPCC-NI method, which was only 44% of the zero- or negative-cost abatement potential attributed to the sector using the LCA method. The difference between methodologies was because the IPCC-NI method attributes the
International Nuclear Information System (INIS)
Kim, Young Suk; Jeong, Hyeon Cheol; Ahn, Sang Bok
2005-01-01
The Direct Current Potential Drop (DCPD) method and the Unloading Compliance (UC) method with a crack opening displacement gauge were applied simultaneously to Zr-2.5Nb Curved Compact Tension (CCT) specimens to determine which of the two methods can more precisely determine the crack initiation point, and hence the crack length, for evaluation of fracture toughness. The DCPD method detected crack initiation at a smaller load-line displacement than the UC method. As verification, direct observation of the fracture surfaces was made on CCT specimens subjected either to 0.8-1.0 mm of load-line displacement or to various loads from 50% to 80% of the maximum peak load, Pmax. The DCPD method is concluded to be more precise than the UC method in determining the crack initiation and the fracture toughness, J, in Zr-2.5Nb CCT specimens.
2002-01-01
The Atlas of Stress-Strain Curves, Second Edition is substantially bigger in page dimensions, number of pages, and total number of curves than the previous edition. It contains over 1,400 curves, almost three times as many as in the 1987 edition. The curves are normalized in appearance to aid making comparisons among materials. All diagrams include metric (SI) units, and many also include U.S. customary units. All curves are captioned in a consistent format with valuable information including (as available) standard designation, the primary source of the curve, mechanical properties (including hardening exponent and strength coefficient), condition of sample, strain rate, test temperature, and alloy composition. Curve types include monotonic and cyclic stress-strain, isochronous stress-strain, and tangent modulus. Curves are logically arranged and indexed for fast retrieval of information. The book also includes an introduction that provides background information on methods of stress-strain determination, on...
International Nuclear Information System (INIS)
Escande, L.
2012-01-01
The Fermi Gamma-ray Space Telescope was launched on 2008 June 11, carrying the Large Area Telescope (LAT), sensitive to gamma rays in the 20 MeV - 300 GeV energy range. The data collected since then have multiplied by a factor of 10 the number of Active Galactic Nuclei (AGNs) detected in the GeV range. The gamma rays observed in AGNs come from energetic processes bringing into play very high energy charged particles. These particles are confined in a magnetized plasma jet arising in a region close to the supermassive black hole at the center of the host galaxy. This jet moves away at velocities as high as 0.9999c, forming in many cases radio lobes on kiloparsec or even megaparsec scales. Among AGNs, those whose jet inclination angle to the line of sight is small are called blazars. The combination of this small inclination angle with relativistic ejection speeds leads to relativistic effects: apparent superluminal motion, amplification of the luminosity and modification of the time scales. Blazars are characterized by extreme variability at all wavelengths, on time scales from a few minutes to several months. A temporal and spectral study of the most luminous of those detected by the LAT, 3C 454.3, was carried out so as to constrain emission models. A new method for generating adaptive-binning light curves is also presented in this thesis. It allows the maximum of information to be extracted from the LAT data whatever the flux state of the source. (author)
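The idea behind adaptive-binning light curves can be illustrated with a toy version: instead of fixed time bins, bin boundaries are placed so that each bin accumulates the same number of photons, so sparse (faint) periods get wide bins and flares get narrow ones. The real method targets roughly constant relative flux uncertainty; the sketch below is a simplification with synthetic event times.

```python
import numpy as np

def adaptive_bins(event_times, counts_per_bin=50):
    """Bin edges such that each bin contains `counts_per_bin` events (last bin
    may hold slightly fewer): faint periods get wide bins, flares narrow ones."""
    t = np.sort(event_times)
    edges = t[::counts_per_bin]
    return np.append(edges, t[-1])

rng = np.random.default_rng(2)
# Quiescent emission plus a short bright flare: photon arrival times
quiet = rng.uniform(0.0, 100.0, 500)
flare = rng.uniform(40.0, 42.0, 500)
events = np.concatenate([quiet, flare])

edges = adaptive_bins(events, counts_per_bin=50)
widths = np.diff(edges)
rates = 50.0 / widths   # approximate count rate per bin
```

The bins contract sharply across the flare and stretch over the quiescent stretches, which is how the method keeps extracting information "whatever the flux state of the source".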
[Master course in biomedical engineering].
Jobbágy, Akos; Benyó, Zoltán; Monos, Emil
2009-11-22
The Bologna Declaration aims at harmonizing the European higher education structure. In accordance with the Declaration, biomedical engineering will be offered as a master (MSc) course also in Hungary, from the year 2009. Since 1995, a biomedical engineering course has been held in cooperation between three universities: Semmelweis University, Budapest Veterinary University, and Budapest University of Technology and Economics. One of the latter's faculties, the Faculty of Electrical Engineering and Informatics, has been responsible for the course. Students could start their biomedical engineering studies - usually in parallel with their first degree course - after they had collected at least 180 ECTS credits. Consequently, the biomedical engineering course could have been considered a master course even before the Bologna Declaration. Students had to collect 130 ECTS credits during the six-semester course. This is equivalent to four semesters of full-time study, because during the first three semesters the curriculum required gaining only one third of the usual ECTS credits. The paper gives a survey of the new biomedical engineering master course, briefly summing up the subjects in the curriculum.
Optimization on Spaces of Curves
DEFF Research Database (Denmark)
Møller-Andersen, Jakob
in Rd, and methods to solve the initial and boundary value problem for geodesics allowing us to compute the Karcher mean and principal components analysis of data of curves. We apply the methods to study shape variation in synthetic data in the Kimia shape database, in HeLa cell nuclei and cycles...... of cardiac deformations. Finally we investigate a new application of Riemannian shape analysis in shape optimization. We setup a simple elliptic model problem, and describe how to apply shape calculus to obtain directional derivatives in the manifold of planar curves. We present an implementation based...
Collection of master-slave synchronized chaotic systems
Lerescu, AI; Constandache, N; Oancea, S; Grosu, [No Value
2004-01-01
In this work the open-plus-closed-loop (OPCL) method of synchronization is used in order to synchronize the systems from Sprott's collection of the simplest chaotic systems. The method is general, and we looked for the simplest coupling between master and slave. The main result is that for the
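For intuition about master-slave synchronization, the sketch below couples two copies of a chaotic system with simple diffusive full-state feedback. This is a standard textbook coupling applied to the Lorenz system, not the OPCL scheme of the paper, and the gain k is assumed large enough for synchronization.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system (a stand-in chaotic system here)."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, k = 0.001, 10.0                  # Euler step; coupling gain (assumed sufficient)
master = np.array([1.0, 1.0, 1.0])
slave = np.array([-5.0, 3.0, 10.0])  # deliberately different initial condition
for _ in range(100_000):             # 100 time units of Euler integration
    dm = lorenz(master)
    ds = lorenz(slave) + k * (master - slave)   # diffusive full-state coupling
    master = master + dt * dm
    slave = slave + dt * ds
sync_error = np.linalg.norm(master - slave)
```

Despite chaotic sensitivity to initial conditions, the coupled error contracts and the slave locks onto the master trajectory; OPCL achieves the same goal with a coupling constructed from the master dynamics.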
International Nuclear Information System (INIS)
Dobrowolski, Tomasz
2012-01-01
The constant-curvature one- and quasi-one-dimensional Josephson junction is considered. On the basis of the Maxwell equations, the sine-Gordon equation that describes the influence of curvature on kink motion was obtained. It is shown that the method of geometrical reduction of the sine-Gordon model from three- to lower-dimensional manifolds leads to an identical form of the sine-Gordon equation. - Highlights: ► The research on dynamics of the phase in a curved Josephson junction is performed. ► The geometrical reduction is applied to the sine-Gordon model. ► The results of geometrical reduction and the fundamental research are compared.
Evaluation of Metallurgical Quality of Master Heat IN-713C Nickel Alloy Ingots
Directory of Open Access Journals (Sweden)
F. Binczyk
2012-12-01
The paper presents the results of an evaluation of the metallurgical quality of master heat ingots and of the identification of non-metallic inclusions (oxides of Al, Zr, Hf, Cr, etc.) which have been found in the shrinkage cavities formed in these ingots. The inclusions penetrate into the liquid alloy and, on pouring of the mould, are transferred to the casting, especially when the filtering system is not sufficiently effective. The specific nature of the melting process of nickel and cobalt alloys, carried out in vacuum induction furnaces, excludes the possibility of alloy refining and slag removal from the melt surface. Therefore, to improve the quality of castings (parts of aircraft engines), it is important to evaluate the quality of ingots before charging them into the crucible of an induction furnace. It has been proved that one of the methods for rapid quality evaluation is an ATD analysis of the sample solidification process, where samples are taken from different areas of the master heat ingot. The evaluation is based on a set of parameters plotted on the graph of the dT/dt derivative curve during the last stage of the solidification process, in the range from T_Eut to T_Sol.
International Nuclear Information System (INIS)
Gibson, G.P.
1989-01-01
An evaluation has been carried out of the a.c. potential drop technique for determining J-crack growth resistance curves for a pressure vessel steel. The technique involves passing an alternating current through the specimen and relating changes in the potential drop across the crack mouth to changes in crack length occurring during the test. The factors investigated were the current and voltage probe positions, the a.c. frequency and the test temperature. In addition, by altering the heat treatment of the material, J-crack resistance curves were obtained under both contained and non-contained yielding conditions. In all situations, accurate J-R curves could be determined. (author)
A master plan for the radwaste management
International Nuclear Information System (INIS)
Kim, Y.E.; Lee, S.H.; Lee, C.K.; Moon, S.H.; Sung, R.J.; Sung, K.W.
1983-01-01
The accumulated total amount of low-level radioactive wastes to be produced from operating power reactors and nuclear installations up until the year 2007 is estimated at 900,000 drums (approximately 200,000 m3). An effective master plan for the safe disposal of these wastes is necessary. Among the many disposal methods available for low- and medium-level radwastes, the engineered trench approach was chosen by an extensive feasibility study as the optimum method for Korea. Site selection, construction and commissioning of such a disposal facility are presumed to take two and a half years, beginning in July 1983. The total cost of opening the site and the unit disposal cost per drum were estimated at 11 billion won and 40,000 won, respectively. An agency (KORDA) managing the operation of the disposal site is recommended to be established by 1987, assuming that the agency's economic feasibility can be justified by that time. When the disposal site is commissioned, a regulatory guide for ground disposal will be available, and supporting R and D work on the disposal site will be complete. Studies on the technology of radwaste treatment will continue through this period. For the longer term, staff training and future planning have been undertaken to ensure that a master plan, which can be expected to be used as a guideline for disposal of all radioactive waste arisings, is fully adequate. (Author)
Setting the stage for master's level success
Roberts, Donna
Comprehensive reading, writing, research, and study skills play a critical role in a graduate student's success and ability to contribute to a field of study effectively. The literature indicated a need to support graduate student success in the areas of mentoring and navigation, as well as research and writing. The purpose of this two-phased, mixed-methods explanatory study was to examine factors that characterize student success at the Master's level in the fields of education, sociology and social work. The study was grounded in a transformational learning framework which focused on three levels of learning: technical knowledge, practical or communicative knowledge, and emancipatory knowledge. The study included two data collection points. Phase one consisted of a Master's Level Success questionnaire that was sent via Qualtrics to graduate-level students at three colleges and universities in the Central Valley of California: a California State University campus, a University of California campus, and a private college campus. The results of the chi-square analysis indicated that seven questionnaire items were significant, with p values less than .05. Phase two of the data collection included semi-structured interview questions; analysis using Dedoose software yielded three themes: (1) the need for more language and writing support at the Master's level, (2) the need for mentoring, especially for second-language learners, and (3) utilizing the strong influence of faculty in student success. It is recommended that institutions continually assess and strengthen their programs to meet the full range of learners and to support students to degree completion.
Master classes - What do they offer?
Hanken, Ingrid Maria; Long, Marion
2012-01-01
Master classes are a common way to teach music performance, but how useful are they in helping young musicians in their musical development? Based on his experiences of master classes, Lali (2003:24) states that “For better or for worse, master classes can be life-changing events.” Anecdotal evidence confirms that master classes can provide vital learning opportunities, but also that they can be of little use to the student, or worse, detrimental. Since master classes are a common component in ...
Comparison of Space Debris Environment Models: ORDEM2000, MASTER-2001, MASTER-2005 and MASTER-2009
Kanemitsu, Yuki; 赤星, 保浩; Akahoshi, Yasuhiro; 鳴海, 智博; Narumi, Tomohiro; Faure, Pauline; 松本, 晴久; Matsumoto, Haruhisa; 北澤, 幸人; Kitazawa, Yukihito
2012-01-01
Hypervelocity impact by space debris on spacecraft is one of the most important issues for space development and operation, especially considering the growing amount of space debris in recent years. It is therefore important for spacecraft design to evaluate the impact risk by using environment models. In this paper, the authors compared the results of the debris impact flux in low Earth orbit, as calculated by four debris environment engineering models - NASA's ORDEM2000 and ESA's MASTER-2001...
Burger, Jessica L; Lovestead, Tara M; LaFollette, Mark; Bruno, Thomas J
2017-08-17
Although they are amongst the most efficient engine types, compression-ignition engines have difficulty achieving acceptable particulate emissions and NOx formation. Indeed, catalytic after-treatment of diesel exhaust has become common, and current efforts to reformulate diesel fuels have concentrated on the incorporation of oxygenates into the fuel. One of the best ways to characterize changes to a fuel upon the addition of oxygenates is to examine the volatility of the fuel mixture. In this paper, we present the volatility, as measured by the advanced distillation curve method, of a prototype diesel fuel with novel diesel fuel oxygenates: 2,5,7,10-tetraoxaundecane (TOU), 2,4,7,9-tetraoxadecane (TOD), and ethanol/fatty acid methyl ester (FAME) mixtures. We present the results for the initial boiling behavior and the distillation curve temperatures, and track the oxygenates throughout the distillations. These diesel fuel blends have several interesting thermodynamic properties that have not been seen in our previous oxygenate studies. Ethanol reduces the temperatures observed early in the distillation (near ethanol's boiling temperature). After these early distillation points (once the ethanol has distilled out), B100 has the greatest impact on the remaining distillation curve and shifts the curve to higher temperatures than what is seen for diesel fuel/ethanol blends. In fact, for the 15% B100 mixture most of the distillation curve reaches temperatures higher than those seen for diesel fuel alone. In addition, blends with TOU and TOD also exhibited uncommon characteristics. These additives are unusual because they distill over most of the distillation curve (up to 70%). The effects of this can be seen both in histograms of oxygenate concentration in the distillate cuts and in the distillation curves. Our purpose for studying these oxygenate blends is consistent with our vision for replacing fit-for-purpose properties with fundamental properties to enable the development of
Rapid analysis of molybdenum contents in molybdenum master alloys by X-ray fluorescence technique
International Nuclear Information System (INIS)
Tongkong, P.
1985-01-01
Determination of molybdenum contents in molybdenum master alloys has been performed using the energy dispersive x-ray fluorescence (EDX) technique, where analyses were made via standard additions and calibration curves. Comparison of the EDX technique with other analytical techniques, i.e., wavelength dispersive x-ray fluorescence, neutron activation analysis and inductively coupled plasma spectrometry, showed consistency in the results. This technique was found to yield reliable results when molybdenum contents in master alloys were in the range of 13 to 50 percent, using an HPGe detector or a proportional counter. When the required error was set at 1%, the minimum analyzing time was found to be 30 and 60 seconds for Fe-Mo master alloys with molybdenum contents of 13.54 and 49.09 percent, respectively. For Al-Mo master alloys, the minimum times required were 120 and 300 seconds, with molybdenum contents of 15.22 and 47.26 percent, respectively.
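The calibration-curve evaluation mentioned above amounts, in its simplest form, to a straight-line least-squares fit of fluorescence intensity against known concentration, inverted for the unknown sample. A minimal sketch (the concentrations and count values below are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Hypothetical calibration standards: Mo concentration (wt%) versus
# measured fluorescence intensity (counts); values are made up.
conc = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
counts = np.array([1050.0, 2010.0, 3020.0, 3980.0, 5010.0])

# Least-squares straight line: counts = slope * conc + intercept
slope, intercept = np.polyfit(conc, counts, 1)

def concentration(measured_counts):
    """Invert the calibration line to estimate an unknown concentration."""
    return (measured_counts - intercept) / slope
```

Inverting the fitted line for an unknown specimen's count rate then gives its estimated molybdenum content directly.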
Directory of Open Access Journals (Sweden)
René Pellissier
2012-01-01
Full Text Available This paper explores the notion of jumping the curve, following from Handy's S-curve onto a new curve with new rules, policies and procedures. It claims that the curve does not generally lie in wait but has to be invented by leadership. The focus of this paper is the identification (mathematically and inferentially) of that point in time, known as the cusp in catastrophe theory, when it is time to change - pro-actively, pre-actively or reactively. These three scenarios are addressed separately and discussed in terms of the relevance of each.
Mansfield, Richard
2010-01-01
A comprehensive guide to the language used to customize Microsoft Office. Visual Basic for Applications (VBA) is the language used for writing macros, automating Office applications, and creating custom applications in Word, Excel, PowerPoint, Outlook, and Access. This complete guide shows both IT professionals and novice developers how to master VBA in order to customize the entire Office suite for specific business needs. Office 2010 is the leading productivity suite, and the VBA language enables customizations of all the Office programs; this complete guide gives both novice and experience
Mastering Microsoft Azure infrastructure services
Savill, John
2015-01-01
Understand, create, deploy, and maintain a public cloud using Microsoft Azure Mastering Microsoft Azure Infrastructure Services guides you through the process of creating and managing a public cloud and virtual network using Microsoft Azure. With step-by-step instruction and clear explanation, this book equips you with the skills required to provide services both on-premises and off-premises through full virtualization, providing a deeper understanding of Azure's capabilities as an infrastructure service. Each chapter includes online videos that visualize and enhance the concepts presented i
Johnson, L. E.; Kim, J.; Cifelli, R.; Chandra, C. V.
2016-12-01
Potential water retention, S, is one of the parameters commonly used in hydrologic modeling for soil moisture accounting. Physically, S indicates the total amount of water that can be stored in the soil and is expressed in units of depth. S can be represented as a change of soil moisture content and in this context is commonly used to estimate direct runoff, especially in the Soil Conservation Service (SCS) curve number (CN) method. Generally, both lumped and distributed hydrologic models can easily use the SCS-CN method to estimate direct runoff. Changes in potential water retention have been used in previous SCS-CN studies; however, these studies have focused on long-term hydrologic simulations where S is allowed to vary at the daily time scale. While useful for hydrologic events that span multiple days, the resolution is too coarse for short-term applications such as flash flood events where S may not recover its full potential. In this study, a new method for estimating a time-variable potential water retention at hourly time scales is presented. The methodology is applied to the Napa River basin, California. The streamflow gage at St Helena, located in the upper reaches of the basin, is used as the control gage site to evaluate the model performance, as it has minimal influences from reservoirs and diversions. Rainfall events from 2011 to 2012 are used for estimating the event-based SCS CN to transfer to S. As a result, we have derived the potential water retention curve, and it is classified into three sections depending on the relative change in S. The first is a negative slope section arising from the difference in the rate of moving water through the soil column, the second is a zero change section representing the initial recovery of the potential water retention, and the third is a positive change section representing the full recovery of the potential water retention. Also, we found that the soil water movement has a traffic jam within 24 hours after finished first
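The event-based SCS-CN computation that underlies the study can be sketched in its standard US-customary form (S = 1000/CN - 10 inches, with the conventional initial-abstraction ratio of 0.2):

```python
def scs_runoff(precip_in, cn, ia_ratio=0.2):
    """Direct runoff Q (inches) from the SCS curve number method.

    S = 1000/CN - 10 is the potential water retention (inches),
    Ia = ia_ratio * S is the initial abstraction, and
    Q = (P - Ia)^2 / (P - Ia + S) when P exceeds Ia, else 0.
    """
    s = 1000.0 / cn - 10.0
    ia = ia_ratio * s
    if precip_in <= ia:
        return 0.0
    return (precip_in - ia) ** 2 / (precip_in - ia + s)
```

For example, CN = 80 gives S = 2.5 in and Ia = 0.5 in, so a 4-inch storm yields about 2.04 in of direct runoff; the study's contribution is letting S vary at hourly rather than daily resolution.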
Measuring Model Rocket Engine Thrust Curves
Penn, Kim; Slaton, William V.
2010-01-01
This paper describes a method and setup to quickly and easily measure a model rocket engine's thrust curve using a computer data logger and force probe. Horst describes using Vernier's LabPro and force probe to measure the rocket engine's thrust curve; however, the method of attaching the rocket to the force probe is not discussed. We show how a…
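The basic reduction of a measured thrust curve - total impulse as the area under the thrust-time curve, plus peak and average thrust - can be sketched as follows (the sample points are made up for illustration, not measurements from the paper):

```python
import numpy as np

# Illustrative thrust samples (time in s, thrust in N) such as a force
# probe and data logger might record; the values are invented.
t = np.array([0.00, 0.05, 0.10, 0.20, 0.40, 0.60, 0.80, 1.00])
thrust = np.array([0.0, 6.0, 9.0, 8.0, 5.0, 4.0, 3.0, 0.0])

# Trapezoidal rule: area under the thrust curve is the total impulse (N*s)
total_impulse = float(np.sum(0.5 * (thrust[1:] + thrust[:-1]) * np.diff(t)))
peak_thrust = float(thrust.max())
avg_thrust = total_impulse / float(t[-1] - t[0])
```

Total impulse and average thrust are the quantities used to classify model rocket engines, so this reduction is typically the first step after logging the curve.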
Rational points on elliptic curves
Silverman, Joseph H
2015-01-01
The theory of elliptic curves involves a pleasing blend of algebra, geometry, analysis, and number theory. This book stresses this interplay as it develops the basic theory, thereby providing an opportunity for advanced undergraduates to appreciate the unity of modern mathematics. At the same time, every effort has been made to use only methods and results commonly included in the undergraduate curriculum. This accessibility, the informal writing style, and a wealth of exercises make Rational Points on Elliptic Curves an ideal introduction for students at all levels who are interested in learning about Diophantine equations and arithmetic geometry. Most concretely, an elliptic curve is the set of zeroes of a cubic polynomial in two variables. If the polynomial has rational coefficients, then one can ask for a description of those zeroes whose coordinates are either integers or rational numbers. It is this number theoretic question that is the main subject of this book. Topics covered include the geometry and ...
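The chord-and-tangent construction that generates new rational points from old ones can be sketched with exact rational arithmetic; the curve y^2 = x^3 + 17 below is a standard illustrative example, not one taken from the book:

```python
from fractions import Fraction as F

A, B = 0, 17   # the curve y^2 = x^3 + 17

def on_curve(p):
    x, y = p
    return y * y == x ** 3 + A * x + B

def add(p, q):
    """Chord construction for two distinct rational points P + Q:
    intersect the line through P and Q with the curve, then reflect
    the third intersection point across the x-axis."""
    (x1, y1), (x2, y2) = p, q
    lam = (F(y2) - F(y1)) / (F(x2) - F(x1))   # slope of the chord
    x3 = lam ** 2 - x1 - x2
    y3 = lam * (x1 - x3) - y1
    return (x3, y3)

P, Q = (-2, 3), (2, 5)
R = add(P, Q)   # a new rational point built from two integer ones
```

Here two integer points combine to give the rational point (1/4, -33/8), illustrating the number-theoretic question the book studies: which rational solutions a cubic admits and how they are generated.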
Control of master-slave manipulator using virtual force
International Nuclear Information System (INIS)
Kosuge, Kazuhiro; Fukuda, Toshio; Itoh, Tomotaka; Sakamoto, Keizoh; Noma, Yasuo.
1994-01-01
We propose a control system for a master-slave manipulator system having a rate-controlled slave manipulator. In this system, the master manipulator is stiffness-controlled in the Cartesian coordinate system, and the slave manipulator is damping-controlled in the Cartesian coordinate system. The desired velocity of the slave arm is given by a displacement of the master arm from a nominal position. The operator feels virtual contact force from the environment because the contact force is proportional to the displacement when the slave arm motion is constrained by the environment. The proposed method is experimentally applied to manipulators with three degrees of freedom. The experimental results illustrate the validity of the proposed system. (author)
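A toy one-dimensional sketch of the control law described (rate-controlled slave whose commanded velocity is proportional to the master's displacement, with a virtual force proportional to the displacement held while the slave is constrained) is given below; the gains, displacement, and wall position are illustrative assumptions, not the paper's parameters:

```python
def simulate(steps=200, dt=0.01, k_v=5.0, k_f=100.0, wall=0.5):
    """1-D teleoperation sketch: the slave is rate-controlled, its
    commanded velocity proportional to the master displacement from a
    nominal pose; a rigid wall constrains the slave, and the sustained
    master displacement is reflected to the operator as a virtual
    contact force proportional to that displacement."""
    master_disp = 0.1            # operator holds the master 0.1 m off nominal
    x_slave = 0.0
    for _ in range(steps):
        v_cmd = k_v * master_disp                    # rate control law
        x_slave = min(x_slave + v_cmd * dt, wall)    # wall stops the slave
    constrained = x_slave >= wall
    virtual_force = k_f * master_disp if constrained else 0.0
    return x_slave, virtual_force
```

Once the wall stops the slave, the operator must keep the master displaced to keep commanding motion, and it is exactly that held displacement which is rendered as the virtual contact force.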
Martínez, Sol Sáez; de la Rosa, Félix Martínez; Rojas, Sergio
2017-01-01
In Advanced Calculus, our students wonder if it is possible to graphically represent a tornado by means of a three-dimensional curve. In this paper, we show it is possible by providing the parametric equations of such tornado-shaped curves.
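One plausible way to parametrize such a tornado-shaped curve (an assumption for illustration; the paper's own equations may differ) is a helix whose radius grows with height, giving the characteristic funnel:

```python
import numpy as np

def tornado_curve(n=1000, turns=12, r0=0.05, growth=2.0, height=1.0):
    """Sample points of a funnel-shaped 3-D parametric curve:
    a helix of `turns` revolutions whose radius widens with height,
    narrow at the ground and wide at the top."""
    t = np.linspace(0.0, 1.0, n)
    z = height * t
    r = r0 + growth * t ** 2                 # radius grows with height
    theta = 2 * np.pi * turns * t
    return r * np.cos(theta), r * np.sin(theta), z
```

Plotting (x, y, z) with any 3-D plotting tool shows the spiral funnel the students asked about.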
Tempo curves considered harmful
Desain, P.; Honing, H.
1993-01-01
In the literature of musicology, computer music research and the psychology of music, timing or tempo measurements are mostly presented in the form of continuous curves. The notion of these tempo curves is dangerous, despite its widespread use, because it lulls its users into the false impression
Energy Technology Data Exchange (ETDEWEB)
Yamagami, Y; Tani, T [Science University of Tokyo, Tokyo (Japan)
1996-10-27
Based on the basic quality equation of a photovoltaic (PV) cell, a quality equation of a PV module has been constructed by considering the spectral distribution of solar radiation and its intensity. A calculation method has also been proposed for determining the output from current-voltage (I-V) curves. The effectiveness of this method was examined by comparing calculated and observed results. Amorphous Si (a-Si) and polycrystalline Si PV modules were examined. By considering the environmental factors, differences of the annual output between the calculated and observed values were reduced from 2.50% to 0.95% for the a-Si PV module, and from 2.52% to 1.24% for the polycrystalline Si PV module, a reduction of more than 50% in each case. For the a-Si PV module, the environmental factor most greatly affecting the annual output was the spectral distribution of solar radiation, whose effect was 3.86 times as large as that of the cell temperature and 1.04 times as large as that of the intensity of solar radiation. For the polycrystalline PV module, the environmental factor most greatly affecting the annual output was the cell temperature, whose effect was 7.05 times as large as that of the spectral distribution of solar radiation and 1.74 times as large as that of the intensity of solar radiation. 6 refs., 4 figs., 1 tab.
Chou, Kai-Seng
2001-01-01
Although research in curve shortening flow has been very active for nearly 20 years, the results of those efforts have remained scattered throughout the literature. For the first time, The Curve Shortening Problem collects and illuminates those results in a comprehensive, rigorous, and self-contained account of the fundamental results. The authors present a complete treatment of the Gage-Hamilton theorem, a clear, detailed exposition of Grayson's convexity theorem, a systematic discussion of invariant solutions, applications to the existence of simple closed geodesics on a surface, and a new, almost convexity theorem for the generalized curve shortening problem. Many questions regarding curve shortening remain outstanding. With its careful exposition and complete guide to the literature, The Curve Shortening Problem provides not only an outstanding starting point for graduate students and new investigators, but a superb reference that presents intriguing new results for those already active in the field.
Ziegeweid, Jeffrey R.; Lorenz, David L.; Sanocki, Chris A.; Czuba, Christiana R.
2015-12-24
Knowledge of the magnitude and frequency of low flows in streams, which are flows in a stream during prolonged dry weather, is fundamental for water-supply planning and design; waste-load allocation; reservoir storage design; and maintenance of water quality and quantity for irrigation, recreation, and wildlife conservation. This report presents the results of a statewide study for which regional regression equations were developed for estimating 13 flow-duration curve statistics and 10 low-flow frequency statistics at ungaged stream locations in Minnesota. The 13 flow-duration curve statistics estimated by regression equations include the 0.0001, 0.001, 0.02, 0.05, 0.1, 0.25, 0.50, 0.75, 0.9, 0.95, 0.99, 0.999, and 0.9999 exceedance-probability quantiles. The low-flow frequency statistics include annual and seasonal (spring, summer, fall, winter) 7-day mean low flows, seasonal 30-day mean low flows, and summer 122-day mean low flows for a recurrence interval of 10 years. Estimates of the 13 flow-duration curve statistics and the 10 low-flow frequency statistics are provided for 196 U.S. Geological Survey continuous-record streamgages using streamflow data collected through September 30, 2012.
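A flow-duration curve statistic of the kind estimated here - the flow exceeded with a given probability - can be computed from a gaged daily-flow record as the complementary quantile. A minimal sketch:

```python
import numpy as np

def flow_duration_quantile(flows, exceed_prob):
    """Flow exceeded with probability `exceed_prob`, i.e. the
    (1 - exceed_prob) quantile of the daily-flow record."""
    flows = np.asarray(flows, dtype=float)
    return float(np.quantile(flows, 1.0 - exceed_prob))
```

The regression equations in the report make such statistics available at ungaged sites, where no record exists from which to compute the quantile directly.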
International Nuclear Information System (INIS)
Mazaheri, G.
1991-11-01
In conjunction with the development of a Beam Size Monitor (BSM) for the Final Focus Test Beam (FFTB) at SLAC, we have built a general purpose timing device with capabilities useful for many different applications. The Time Master consists of a fast clock, a large memory loaded via a PC, and a time vernier (analog) with 8-bit resolution. The Time Master generates an arbitrary pattern of pulses on 16 different channels (up to 256), with a resolution of 1/2^8 of the clock period. The clock content is stored in another memory to measure the time of up to 16 channels, with a resolution of 1/2^8 of the clock period (the frequency is set at 50 MHz), using a time-to-amplitude vernier. The data stored in the memory is accessed via a PC. The depth of the memory for pattern generation is 15 bits (32767), equal to the depth of the time measuring part. The device is self-calibrating, simply by prescribing a pattern on the output channels and reading it into the time measuring section. The total clock length is 24 bits, equivalent to 334 ms of time at the 50 MHz frequency. Therefore, the resolution is of the order of 32 bits (i.e., 24 bits of clock plus 8 bits of vernier). 2 refs., 2 figs
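The quoted figures follow from simple arithmetic on the clock and vernier widths; a small sketch (2^24 x 20 ns evaluates to about 335.5 ms, close to the ~334 ms quoted in the abstract):

```python
CLOCK_HZ = 50e6          # 50 MHz master clock
VERNIER_BITS = 8         # 8-bit analog time vernier
CLOCK_BITS = 24          # total clock counter length

clock_period_s = 1.0 / CLOCK_HZ                       # 20 ns per tick
vernier_lsb_s = clock_period_s / 2 ** VERNIER_BITS    # ~78 ps resolution
full_range_s = 2 ** CLOCK_BITS * clock_period_s       # ~335 ms full range
```

The vernier thus interpolates each 20 ns clock tick into 256 steps of about 78 ps, which is the "order of 32 bits" of combined dynamic range the abstract refers to.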
Quantum adiabatic Markovian master equations
International Nuclear Information System (INIS)
Albash, Tameem; Zanardi, Paolo; Boixo, Sergio; Lidar, Daniel A
2012-01-01
We develop from first principles Markovian master equations suited for studying the time evolution of a system evolving adiabatically while coupled weakly to a thermal bath. We derive two sets of equations in the adiabatic limit, one using the rotating wave (secular) approximation that results in a master equation in Lindblad form, the other without the rotating wave approximation but not in Lindblad form. The two equations make markedly different predictions depending on whether or not the Lamb shift is included. Our analysis keeps track of the various time and energy scales associated with the various approximations we make, and thus allows for a systematic inclusion of higher order corrections, in particular beyond the adiabatic limit. We use our formalism to study the evolution of an Ising spin chain in a transverse field and coupled to a thermal bosonic bath, for which we identify four distinct evolution phases. While we do not expect this to be a generic feature, in one of these phases dissipation acts to increase the fidelity of the system state relative to the adiabatic ground state. (paper)
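The Lindblad form referenced above can be illustrated with a generic, time-independent example rather than the adiabatic equations derived in the paper: a single qubit with one amplitude-damping channel, integrated by forward Euler (an illustrative sketch, not a production integrator):

```python
import numpy as np

# Single-qubit operators, basis ordering (|g>, |e>)
sm = np.array([[0, 1], [0, 0]], dtype=complex)       # sigma_minus = |g><e|
sp = sm.conj().T                                     # sigma_plus
H = np.array([[-0.5, 0], [0, 0.5]], dtype=complex)   # sigma_z / 2 splitting

def lindblad_rhs(rho, gamma=1.0):
    """Right-hand side of a Lindblad-form master equation with one
    decay channel: d(rho)/dt = -i[H, rho]
                              + gamma*(L rho L+ - (1/2){L+ L, rho})."""
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm))
    return comm + diss

# Forward-Euler evolution from the excited state |e><e| to t = 1
rho = np.array([[0, 0], [0, 1]], dtype=complex)
dt, steps = 1e-3, 1000
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)
```

The Lindblad structure guarantees trace preservation and complete positivity; here the excited-state population decays toward exp(-gamma*t), the thermal-bath-free limit of the relaxation the paper studies.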
REVIEW: DOG, MASTER, AND RELATIVES
Directory of Open Access Journals (Sweden)
Reviewed by Caihua Dorji (Tshe dpal rdo rje ཚེ་དཔལ་རྡོ་རྗེ། Caihuan Duojie 才还多杰
2018-05-01
Full Text Available Stag 'bum rgyal (b. 1966 is from a herding family in Mang ra (Guinan County, Mtsho lho (Hainan Tibetan Autonomous Prefecture, Mtsho sngon (Qinghai Province. A member of the China Writers' Association and the Standing Committee of Mtsho lho Writers' Association, Stag 'bum rgyal teaches the Tibetan language at Mang ra Nationalities Middle School. He graduated from Mtsho lho Nationalities Normal School in 1986 and began his teaching career in the same year. Later, in 1988, he attended a training program at Northwest Nationalities University and earned a graduation certificate. Stag 'bum rgyal has published more than sixty short stories, novellas, and novels since the 1980s. Among his novellas, Sgo khyi 'The Watch Dog', Khyi rgan 'The Old Dog', h+'a pa gsos pa'i zin bris 'The Story of Dog Adoption', Mi tshe'i glu dbyangs 'The Song of Life', and khyi dang bdag po/ da dung gnyen tshan dag 'Dog, Master, and Relatives' have been translated into Chinese and published in such magazines as Xizang Wenxue 'Tibet Literature', Minzu Wenxue 'Nationalities Literature', and Qinghai Hu 'Qinghai Lake'. Rnam shes 'The Soul', Rgud 'Degeneration', and khyi dang bdag po/ da dung gnyen tshan dag 'Dog, Master, and Relatives', won the Sbrang char Literature Prize in 1999, 2003, and 2006, respectively. ..........
Product Variant Master as a Means to Handle Variant Design
DEFF Research Database (Denmark)
Hildre, Hans Petter; Mortensen, Niels Henrik; Andreasen, Mogens Myrup
1996-01-01
be implemented in the CAD system I-DEAS. A precondition for a high degree of computer support is the identification of a product variant master from which new variants can be derived. This class platform defines how a product build-up fits certain production methods and the rules governing the determination of modules...
Auditing the process of ethics approval for Master's degrees at a ...
African Journals Online (AJOL)
Objective. This study audited the process of ethics approval for Master's research at the Nelson R Mandela School of Medicine, Durban, KwaZulu-Natal, South Africa. Methods. After obtaining the appropriate ethical approval, all the correspondence surrounding each Master's proposal for the year 2010 was reviewed.
Inverse Diffusion Curves Using Shape Optimization.
Zhao, Shuang; Durand, Fredo; Zheng, Changxi
2018-07-01
The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.
Energy Technology Data Exchange (ETDEWEB)
Berretta, Ana Lucia Olmedo
1999-07-01
The hydraulic conductivity is one of the most important parameters for understanding the movement of water in the unsaturated zone. Reliable estimates are difficult to obtain, since the hydraulic conductivity is highly variable. This study was carried out at the 'Escola Superior de Agricultura Luiz de Queiroz', Universidade de Sao Paulo, in a Kandiudalfic Eutrudox soil. The hydraulic conductivity was determined by a direct and an indirect method. The instantaneous profile method was described, and the hydraulic conductivity as a function of soil water content was determined by solving the Richards equation. In the direct method, tensiometers were used to estimate the total soil water potential, and the neutron probe and the soil retention curve were used to estimate soil water content. The neutron probe proved not to be sufficiently sensitive to the changes of soil water content in this soil. Although the soil retention curve provided the best correlation values for soil water content as a function of water redistribution time, the soil water content in this soil did not vary much down to the depth of 50 cm, reflecting the influence of the presence of a Bt horizon. The soil retention curve was well fitted by the van Genuchten model, used as the indirect method. The van Genuchten values and the experimental relative hydraulic conductivity obtained by the instantaneous profile method showed a good correlation. However, the values estimated by the model were always lower than those obtained experimentally. (author)
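The van Genuchten retention model used as the indirect method has the standard closed form theta(h) = theta_r + (theta_s - theta_r) * [1 + (alpha*h)^n]^(-m) with the usual Mualem constraint m = 1 - 1/n; a sketch with illustrative parameter values (not the fitted values from this study):

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Soil water retention curve theta(h) after van Genuchten,
    with m = 1 - 1/n; h is the suction head (>= 0). At saturation
    (h = 0) the curve returns theta_s; it decreases toward theta_r
    as suction grows."""
    if h <= 0:
        return theta_s
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m
```

Fitting (theta_r, theta_s, alpha, n) to measured retention data is what "the soil retention curve was well fitted by the van Genuchten model" refers to; the fitted curve then converts tensiometer readings into water contents.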
EVALUATION OF THE MASTER MARKETER NEWSLETTER
McCorkle, Dean A.; Waller, Mark L.; Amosson, Stephen H.; Smith, Jackie; Bevers, Stanley J.; Borchardt, Robert
2001-01-01
Several support programs have been developed to reinforce and enhance the effectiveness of the educational experience of Master Marketer graduates and other marketing club participants. One of those products, the Master Marketer Newsletter, is currently mailed to over 700 Master Marketer graduates and Extension faculty on a quarterly basis. In the June 2000 newsletter, a questionnaire was sent to newsletter recipients asking them to evaluate the various sections of the...
From master slave interferometry to complex master slave interferometry: theoretical work
Rivet, Sylvain; Bradu, Adrian; Maria, Michael; Feuchter, Thomas; Leick, Lasse; Podoleanu, Adrian
2018-03-01
A general theoretical framework is described to establish the advantages and drawbacks of two novel Fourier Domain Optical Coherence Tomography (OCT) methods, denoted Master/Slave Interferometry (MSI) and its extension, Complex Master/Slave Interferometry (CMSI). Instead of linearizing the digital data representing the channeled spectrum before a Fourier transform is applied to it (as in standard OCT methods), the channeled spectrum is decomposed on a basis of local oscillations. This removes the need for linearization, which is generally time consuming, before any calculation of the depth profile in the range of interest. In this model two functions, g and h, are introduced. The function g describes the modulation chirp of the channeled spectrum signal due to nonlinearities in the decoding process from wavenumber to time. The function h describes the dispersion in the interferometer. The use of these two functions brings two major improvements to previous implementations of the MSI method. The paper details the steps to obtain the functions g and h, and expresses the CMSI in a matrix formulation that makes the method easy to implement in LabVIEW using parallel programming on multiple cores.
Directory of Open Access Journals (Sweden)
Paulo Prochno
2004-07-01
Full Text Available Learning curves have been studied for a long time. These studies provided strong support for the hypothesis that, as organizations produce more of a product, unit costs of production decrease at a decreasing rate (see Argote, 1999, for a comprehensive review of learning curve studies). But the organizational mechanisms that lead to these results are still underexplored. We know some drivers of learning curves (ADLER; CLARK, 1991; LAPRE et al., 2000), but we still lack a more detailed view of the organizational processes behind those curves. Through an ethnographic study, I present a comprehensive account of the first year of operations of a new automotive plant, describing what was taking place in the assembly area during the most relevant shifts of the learning curve. The emphasis is on how learning occurs in that setting. My analysis suggests that the overall learning curve is in fact the result of an integration process that puts together several individual ongoing learning curves in different areas throughout the organization. In the end, I propose a model to understand the evolution of these learning processes and their supporting organizational mechanisms.
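The classic power-law learning curve behind these studies - unit cost falling by a fixed fraction with each doubling of cumulative output - can be sketched as:

```python
import math

def unit_cost(x, first_unit_cost=100.0, learning_rate=0.8):
    """Power-law learning curve: each doubling of cumulative output x
    multiplies unit cost by `learning_rate` (here an illustrative 80%
    curve), i.e. cost(x) = c1 * x^(-b) with b = -log2(learning_rate)."""
    b = -math.log2(learning_rate)
    return first_unit_cost * x ** (-b)
```

On an 80% curve the 2nd unit costs 80 and the 4th costs 64; the ethnography asks what organizational processes produce this aggregate regularity.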
United States Shipbuilding Standards Master Plan
National Research Council Canada - National Science Library
Horsmon, Jr, Albert W
1992-01-01
This Shipbuilding Standards Master Plan was developed using extensive surveys, interviews, and an iterative editing process to include the views and opinions of key persons and organizations involved...
International Nuclear Information System (INIS)
Choi, Chang Woong; Lee, Tae Joon; Kim, Joon Yun; Cho, Yun Ho; Hah, Jong Hyun
1993-07-01
This report describes the development of a computerized schedule and progress control system for the master schedule of the KMRR project with ARTEMIS 7000/386 CM (Ver. 7.4.2), based on project management theory (PERT/CPM, PDM, and S-curves). This system has been used efficiently for the KMRR master schedule and will be utilized for the detailed scheduling of the KMRR project. (Author) 23 refs., 26 figs., 52 tabs
Buonanno, Paolo; Fergusson, Leopoldo; Vargas, Juan Fernando
2014-01-01
We document the existence of a Crime Kuznets Curve in US states since the 1970s. As income levels have risen, crime has followed an inverted U-shaped pattern, first increasing and then dropping. The Crime Kuznets Curve is not explained by income inequality. In fact, we show that during the sample period inequality has risen monotonically with income, ruling out the traditional Kuznets Curve. Our finding is robust to adding a large set of controls that are used in the literature to explain the...
Energy Technology Data Exchange (ETDEWEB)
Vieira, Jose Wilson
2001-08-01
Brachytherapy is a special form of cancer treatment in which the radioactive source is very close to or inside the tumor, with the objective of causing the necrosis of the cancerous tissue. The intensity of cell response to the radiation varies according to the tissue type and degree of differentiation. Since malign cells are less differentiated than normal ones, they are more sensitive to the radiation. This is the basis for radiotherapy techniques. Institutes that work with the application of high dose rates use sophisticated computer programs to calculate the dose necessary to achieve the necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves with the necessary information for planning the brachytherapy of patients. The objective of this work is, using Monte Carlo techniques, to develop a computer program - ISODOSE - which allows the determination of isodose curves around linear radioactive sources used in brachytherapy. The development of ISODOSE is important because the available commercial programs, in general, are very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent to analytic solutions such as, for instance, the integration of functions with singularities in their domain. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed quite detailed drawing of the isodose curves around linear sources. (author)
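A stripped-down version of the Monte Carlo idea - sample emission points uniformly along the linear source and tally a bare inverse-square kernel at the field point - can be sketched as follows. This ignores attenuation, scatter, and anisotropy, so it is a geometric illustration only, not the ISODOSE physics:

```python
import random

def relative_dose(px, py, source_len=1.0, samples=20000, seed=1):
    """Monte Carlo estimate of the relative dose at point (px, py)
    from a linear source lying on the x-axis from 0 to source_len.
    Emission points are sampled uniformly along the source and an
    inverse-square kernel is averaged; units are arbitrary."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        sx = rng.uniform(0.0, source_len)        # random emission point
        r2 = (px - sx) ** 2 + py ** 2            # squared distance to field point
        total += 1.0 / r2
    return total / samples
```

Evaluating this on a grid of (px, py) points and contouring equal values yields isodose curves around the linear source; sampling also sidesteps the singular integrand at points on the source axis that an analytic treatment must handle explicitly.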
Nuclear safety research master plan
Energy Technology Data Exchange (ETDEWEB)
Ha, Jae Joo; Yang, J. U.; Jun, Y. S. and others
2001-06-01
The SRMP (Safety Research Master Plan) is established to cope with changes in the nuclear industry environment. The technology tree is developed according to the accident progression of the nuclear reactor. Eleven research fields are derived to cover the technologies necessary to ensure the safety of nuclear reactors. Based on the developed technology tree, the following four main research fields are derived as the main safety research areas: 1. integrated nuclear safety enhancement, 2. thermal-hydraulic experiment and assessment, 3. severe accident management and experiment, and 4. integrity of equipment and structures. A research frame and strategies are also recommended to enhance the efficiency of research activity and to extend the applicability of research output.
Directory of Open Access Journals (Sweden)
Kožul Nataša
2014-01-01
Full Text Available In the broadest sense, the yield curve indicates the market's view of the evolution of interest rates over time. However, given that the cost of borrowing is closely linked to creditworthiness (ability to repay), different yield curves will apply to different currencies, market sectors, or even individual issuers. As government borrowing is indicative of the interest rate levels available to other market players in a particular country, and considering that bond issuance still remains the dominant form of sovereign debt, this paper describes yield curve construction using bonds. The relationship between zero-coupon yield, par yield and yield to maturity is given, and their use in determining curve discount factors is described. Their use in deriving forward rates and pricing related derivative instruments is also discussed.
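The paper's own construction is not reproduced in the abstract; as an illustrative sketch of the two relationships it mentions, continuously compounded zero-coupon yields map directly to discount factors, and two points on the zero curve imply a forward rate between their maturities (function names are assumptions for this example):

```python
import math

def discount_factor(zero_rate, t):
    # Discount factor implied by a continuously compounded zero-coupon yield.
    return math.exp(-zero_rate * t)

def forward_rate(z1, t1, z2, t2):
    # Continuously compounded forward rate between t1 and t2 implied by
    # two points (t1, z1) and (t2, z2) on the zero-coupon curve.
    return (z2 * t2 - z1 * t1) / (t2 - t1)

# A flat 5% zero curve prices a 1-year zero-coupon bond at exp(-0.05)
# and implies a 5% forward rate between years 1 and 2.
df_1y = discount_factor(0.05, 1.0)
fwd_1y2y = forward_rate(0.05, 1.0, 0.05, 2.0)
```

An upward-sloping zero curve, e.g. 4% at one year and 5% at two years, implies a one-year forward rate above both spot yields, which is the usual intuition behind curve steepness.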
U.S. Environmental Protection Agency — a UV calibration curve for SRHA quantitation. This dataset is associated with the following publication: Chang, X., and D. Bouchard. Surfactant-Wrapped Multiwalled...
International Nuclear Information System (INIS)
Gruhn, C.R.
1981-05-01
An alternative use of the gaseous ionization chamber in the detection of energetic heavy ions is presented, called Bragg Curve Spectroscopy (BCS). Conceptually, BCS uses the maximum information available from the Bragg curve of the stopping heavy ion (HI) to identify the particle and measure its energy. A detector has been designed that measures the Bragg curve with high precision. From the Bragg curve one determines the range from the length of the track, the total energy from the integral of the specific ionization over the track, dE/dx from the specific ionization at the beginning of the track, and the Bragg peak from the maximum of the specific ionization of the HI. This last signal measures the atomic number, Z, of the HI unambiguously.
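The four observables listed above follow mechanically from a sampled Bragg curve. A minimal sketch (the function name and toy curve are illustrative, not from the cited work):

```python
def bragg_observables(specific_ionization, dx=1.0):
    """Given a Bragg curve sampled at depth steps dx (specific ionization
    vs. depth, ending where the ion stops), recover the four quantities
    used in Bragg Curve Spectroscopy."""
    track = [s for s in specific_ionization if s > 0.0]
    range_ = len(track) * dx        # range from the length of the track
    total_energy = sum(track) * dx  # integral of specific ionization
    dedx_entry = track[0]           # dE/dx at the beginning of the track
    bragg_peak = max(track)         # Bragg peak -> atomic number Z
    return range_, total_energy, dedx_entry, bragg_peak

# A toy curve that rises to its peak just before the ion stops:
curve = [1.0, 1.2, 1.5, 2.0, 3.0, 4.5, 2.0, 0.0, 0.0]
r, e, dedx, peak = bragg_observables(curve)
```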
Directory of Open Access Journals (Sweden)
Sutawanir Darwis
2012-05-01
Full Text Available Empirical decline curve analysis of oil production data gives reasonable answers in hyperbolic type curve situations; however, the methodology has limitations in fitting real historical production data in the presence of unusual observations caused by treatments applied to the well to increase production capacity. The development of robust least squares offers new possibilities for better fitting production data using decline curve analysis by down-weighting the unusual observations. This paper proposes a robust least squares fitting (lmRobMM) approach to estimate the decline rate of daily production data and compares the results with reservoir simulation results. As a case study, we use the oil production data at the TBA Field, West Java. The results demonstrate that the approach is suitable for decline curve fitting and offers new insight into decline curve analysis in the presence of unusual observations.
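lmRobMM is an R routine; as a simple stand-in illustrating the down-weighting idea, an exponential decline q(t) = q0·exp(−D·t) can be fitted by iteratively reweighted least squares on log(q) with Huber weights, so a single treated-well spike barely moves the estimated decline rate (this sketch is not the paper's estimator):

```python
import math

def robust_decline_fit(t, q, n_iter=20, c=1.345):
    """Fit q(t) = q0 * exp(-D*t) by iteratively reweighted least squares
    on log(q), using Huber weights to down-weight unusual observations."""
    y = [math.log(v) for v in q]
    w = [1.0] * len(t)
    a = b = 0.0  # model: log q ~ a + b*t, so q0 = exp(a) and D = -b
    for _ in range(n_iter):
        sw = sum(w)
        st = sum(wi * ti for wi, ti in zip(w, t))
        sy = sum(wi * yi for wi, yi in zip(w, y))
        stt = sum(wi * ti * ti for wi, ti in zip(w, t))
        sty = sum(wi * ti * yi for wi, ti, yi in zip(w, t, y))
        b = (sw * sty - st * sy) / (sw * stt - st * st)
        a = (sy - b * st) / sw
        resid = [yi - (a + b * ti) for ti, yi in zip(t, y)]
        mad = sorted(abs(r) for r in resid)[len(resid) // 2]
        scale = 1.4826 * mad if mad > 0 else 1.0
        w = [1.0 if abs(r) <= c * scale else c * scale / abs(r) for r in resid]
    return math.exp(a), -b  # (initial rate q0, decline rate D)

# Synthetic daily production with true q0 = 100, D = 0.1, and one
# unusual observation (e.g. after a well treatment) at t = 5.
ts = [float(i) for i in range(10)]
qs = [100.0 * math.exp(-0.1 * ti) for ti in ts]
qs[5] *= 3.0
q0_hat, decline_hat = robust_decline_fit(ts, qs)
```

Ordinary least squares on the same data would be pulled noticeably toward the spike; the Huber reweighting drives the outlier's weight toward zero over the iterations.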
DEFF Research Database (Denmark)
Georgieva Yankova, Ginka; Federici, Paolo
This report describes power curve measurements carried out on a given turbine in a chosen period. The measurements are carried out in accordance with IEC 61400-12-1 Ed. 1 and FGW Teil 2.
Alexeev, Valery; Clemens, C Herbert; Beauville, Arnaud
2008-01-01
This book is devoted to recent progress in the study of curves and abelian varieties. It discusses both classical aspects of this deep and beautiful subject as well as two important new developments, tropical geometry and the theory of log schemes. In addition to original research articles, this book contains three surveys devoted to singularities of theta divisors, of compactified Jacobians of singular curves, and of "strange duality" among moduli spaces of vector bundles on algebraic varieties.
Consistent Valuation across Curves Using Pricing Kernels
Directory of Open Access Journals (Sweden)
Andrea Macrina
2018-03-01
Full Text Available The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets. As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.
Minimal families of curves on surfaces
Lubbes, Niels
2014-01-01
A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal
Symmetry Properties of Potentiometric Titration Curves.
Macca, Carlo; Bombi, G. Giorgio
1983-01-01
Demonstrates how the symmetry properties of titration curves can be efficiently and rigorously treated by means of a simple method, assisted by the use of logarithmic diagrams. Discusses the symmetry properties of several typical titration curves, comparing the graphical approach and an explicit mathematical treatment. (Author/JM)
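The symmetry the abstract refers to can be checked numerically. For a strong acid titrated with a strong base, exact charge balance gives [H+] − Kw/[H+] = (excess acid)/(total volume); points equidistant from the equivalence volume then give pH values nearly mirror-symmetric about pH 7, with the small deviation due to dilution (this sketch is an illustration, not the paper's graphical method):

```python
import math

KW = 1.0e-14  # ionic product of water at 25 °C

def ph_strong_titration(c_acid, v_acid, c_base, v_base):
    """pH after adding v_base mL of strong base (conc. c_base, mol/L) to
    v_acid mL of strong acid (conc. c_acid), from exact charge balance."""
    d = (c_acid * v_acid - c_base * v_base) / (v_acid + v_base)
    # Solve h - KW/h = d for h = [H+]: h^2 - d*h - KW = 0, positive root.
    h = (d + math.sqrt(d * d + 4.0 * KW)) / 2.0
    return -math.log10(h)

# Titrating 50 mL of 0.1 M strong acid with 0.1 M strong base:
# 1 mL before and 1 mL after the 50 mL equivalence point.
ph_before = ph_strong_titration(0.1, 50.0, 0.1, 49.0)
ph_after = ph_strong_titration(0.1, 50.0, 0.1, 51.0)
```

At the equivalence point itself, d = 0 and the formula returns exactly pH 7, the centre of symmetry.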
Deep-learnt classification of light curves
DEFF Research Database (Denmark)
Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru
2017-01-01
Astronomy light curves are sparse, gappy, and heteroscedastic. As a result standard time series methods regularly used for financial and similar datasets are of little help and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to d...
Valuing Initial Teacher Education at Master's Level
Brooks, Clare; Brant, Jacek; Abrahams, Ian; Yandell, John
2012-01-01
The future of Master's-level work in initial teacher education (ITE) in England seems uncertain. Whilst the coalition government has expressed support for Master's-level work, its recent White Paper focuses on teaching skills as the dominant form of professional development. This training discourse is in tension with the view of professional…
Presentation master thesis at EAPRIL 2015 Conference
Iris Sutherland; Richard Kragten; Zac Woolfitt
2015-01-01
Three graduates of the Inholland Master Leren en Innoveren (Zac Woolfitt, Iris Sutherland and Richard Kragten) each presented their master thesis in an interactive 'flipped' session which involved providing content in advance via a video for those attending the session. The session was well attended
Principal Curves on Riemannian Manifolds.
Hauberg, Soren
2016-09-01
Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.
RMS fatigue curves for random vibrations
International Nuclear Information System (INIS)
Brenneman, B.; Talley, J.G.
1984-01-01
Fatigue usage factors for deterministic or constant-amplitude vibration stresses may be calculated with well known procedures and fatigue curves given in the ASME Boiler and Pressure Vessel Code. However, some phenomena produce nondeterministic cyclic stresses which can only be described and analyzed with statistical concepts and methods. Such stresses may be caused by turbulent fluid flow over a structure. Previous methods for solving this statistical fatigue problem are often difficult to use and may yield inaccurate results. Two such methods examined herein are Crandall's method and the ''3sigma'' method. The objective of this paper is to provide a method for creating ''RMS fatigue curves'' which accurately incorporate the requisite statistical information. These curves are given and may be used by analysts with the same ease and in the same manner as the ASME fatigue curves.
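The paper's RMS fatigue curves are not reproduced here; as an illustrative sketch of the underlying statistics, for a narrow-band Gaussian stress the peaks are Rayleigh-distributed with the RMS stress as scale, and the expected Miner damage rate has a closed form that a Monte Carlo draw of peaks reproduces (assumed S-N curve N(S) = A·S^−b; function names are hypothetical):

```python
import math
import random

def narrowband_damage_rate(sigma_rms, nu0, A, b):
    """Expected Miner damage per unit time for narrow-band Gaussian stress:
    peaks are Rayleigh-distributed with scale sigma_rms, nu0 is the peak
    rate, and the S-N curve is N(S) = A * S**-b."""
    return nu0 * (sigma_rms * math.sqrt(2.0)) ** b * math.gamma(1.0 + b / 2.0) / A

def simulated_damage_rate(sigma_rms, nu0, A, b, n=200000, seed=7):
    """Monte Carlo check: draw Rayleigh peaks, accumulate Miner damage."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = 1.0 - rng.random()  # uniform in (0, 1], safe for log()
        peak = sigma_rms * math.sqrt(-2.0 * math.log(u))  # Rayleigh sample
        total += peak ** b / A
    return nu0 * total / n

closed = narrowband_damage_rate(sigma_rms=10.0, nu0=5.0, A=1.0e12, b=3.0)
simulated = simulated_damage_rate(sigma_rms=10.0, nu0=5.0, A=1.0e12, b=3.0)
```

A family of such closed-form damage rates plotted against RMS stress is one way an "RMS fatigue curve" can be read: allowable RMS stress versus duration at a given peak rate.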
PROFESSIONAL MASTER AND ITS CHALLENGES.
Ferreira, Lydia Masako
2015-01-01
To describe the history, origin, objectives, characteristics, implications, the items of the evaluation form and some examples of the Professional Master's (MP), to differentiate it from the Academic Master's, and to identify the challenges for the next quadrennial assessment. The CAPES site on the Professional Master's, and the documents and area-meeting reports of Medicine III from 2004 to 2013, were read, as well as the reports and the area's sub-page on the CAPES site. The data relating to the evaluation process and the scoreboard of the other areas were computed and analyzed. From these data the challenges of Medicine III for the next four years (2013-2016) were identified. The creation of the Professional Master's is very recent in Medicine III, and no Professional Master's course in Medicine III has been evaluated yet. The objectives, assumptions, characteristics, motivations, possibilities, feasibility, student profile, faculty, curriculum, funding, intellectual production, social inclusion, the general requirements of CAPES Ordinance No. 193/2011, and some examples of proposals, technological lines of scientific activity, partnerships and counterparts were described. The evaluation form of the MP was discussed, along with the need for social, economic and political intellectual production and the differences from the Academic Master's. The global importance of the MP and its evolution in Brazil were also reported. From this understanding of the MP, Medicine III outlined some challenges and goals to be pursued in the 2013-2016 quadrennium. Medicine III understood the MP as a new technological-scientific horizon within stricto sensu post-graduate education, fully consistent with the area.
Simpson, Ewan; Andronikou, Savvas; Vedajallam, Schadie; Chacko, Anith; Thai, Ngoc Jade
2016-09-01
Hypoxic-ischaemic encephalopathy is optimally imaged with brain MRI in the neonatal period. However neuroimaging is often also performed later in childhood (e.g., when parents seek compensation in cases of alleged birth asphyxia). We describe a standardised technique for creating two curved reconstructions of the cortical surface to show the characteristic surface changes of hypoxic-ischaemic encephalopathy in children imaged after the neonatal period. The technique was applied for 10 cases of hypoxic-ischaemic encephalopathy and also for age-matched healthy children to assess the visibility of characteristic features of hypoxic-ischaemic encephalopathy. In the abnormal brains, fissural or sulcal widening was seen in all cases and ulegyria was identifiable in 7/10. These images could be used as a visual aid for communicating MRI findings to clinicians and other interested parties.
Nguyen, Vu-Hieu; Tran, Tho N H T; Sacchi, Mauricio D; Naili, Salah; Le, Lawrence H
2017-08-01
We present a semi-analytical finite element (SAFE) scheme for accurately computing the velocity dispersion and attenuation in a trilayered system consisting of a transversely-isotropic (TI) cortical bone plate sandwiched between the soft tissue and marrow layers. The soft tissue and marrow are mimicked by two fluid layers of finite thickness. A Kelvin-Voigt model accounts for the absorption of all three biological domains. The simulated dispersion curves are validated by the results from the commercial software DISPERSE and published literature. Finally, the algorithm is applied to a viscoelastic trilayered TI bone model to interpret the guided modes of an ex-vivo experimental data set from a bone phantom.
Directory of Open Access Journals (Sweden)
Soji Morishita
Full Text Available Detection of the JAK2V617F mutation is essential for diagnosing patients with classical myeloproliferative neoplasms (MPNs). However, detection of the low-frequency JAK2V617F mutation is a challenging task due to the necessity of discriminating between true-positive and false-positive results. Here, we have developed a highly sensitive and accurate assay for the detection of JAK2V617F and named it melting curve analysis after T allele enrichment (MelcaTle). MelcaTle comprises three steps: (1) two cycles of JAK2V617F allele enrichment by PCR amplification followed by BsaXI digestion, (2) selective amplification of the JAK2V617F allele in the presence of a bridged nucleic acid (BNA) probe, and (3) a melting curve assay using a BODIPY-FL-labeled oligonucleotide. Using this assay, we successfully detected nearly a single copy of the JAK2V617F allele, without false-positive signals, using 10 ng of genomic DNA standard. Furthermore, MelcaTle showed no positive signals in 90 assays screening healthy individuals for JAK2V617F. When applying MelcaTle to 27 patients who were initially classified as JAK2V617F-positive on the basis of allele-specific PCR analysis and were thus suspected of having MPNs, we found that two of the patients were actually JAK2V617F-negative. A more careful clinical data analysis revealed that these two patients had developed transient erythrocytosis of unknown etiology but not polycythemia vera, a subtype of MPNs. These findings indicate that the newly developed MelcaTle assay should markedly improve the diagnosis of JAK2V617F-positive MPNs.
Freezing the Master Production Schedule Under Rolling Planning Horizons
V. Sridharan; William L. Berry; V. Udayabhanu
1987-01-01
The stability of the Master Production Schedule (MPS) is a critical issue in managing production operations with a Material Requirements Planning System. One method of achieving stability is to freeze some portion or all of the MPS. While freezing the MPS can limit the number of schedule changes, it can also produce an increase in production and inventory costs. This paper examines three decision variables in freezing the MPS: the freezing method, the freeze interval length, and the planning ...
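The freezing mechanism the paper studies can be sketched in a few lines. In each rolling-horizon step, the first `freeze_interval` periods of the existing MPS are kept fixed and only the remaining periods are replanned; the lot-for-lot replanning policy below is a deliberately simple assumption, not the paper's experimental design:

```python
def roll_forward(frozen_plan, new_requirements, freeze_interval):
    """One rolling-horizon step: keep the first `freeze_interval` periods
    of the current master production schedule fixed, and replan the
    remaining periods lot-for-lot from the updated requirements."""
    frozen = frozen_plan[:freeze_interval]
    replanned = list(new_requirements[freeze_interval:])
    return frozen + replanned

# Current MPS and a demand update arriving at the next planning cycle:
mps = [50, 60, 40, 70]
updated_demand = [55, 30, 45, 80]
new_mps = roll_forward(mps, updated_demand, freeze_interval=2)
```

With `freeze_interval=2`, the first two periods stay at 50 and 60 despite the demand change, which limits schedule nervousness at the cost of deviating from the new requirements, exactly the trade-off the paper quantifies.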
DEFF Research Database (Denmark)
Gómez Arranz, Paula; Vesth, Allan
This report describes the power curve measurements carried out on a given wind turbine in a chosen period. The measurements were carried out following the measurement procedure in the draft of IEC 61400-12-1 Ed. 2 [1], with some deviations, mostly regarding uncertainty calculation. Here, the reference wind speed used in the power curve is the equivalent wind speed obtained from lidar measurements at several heights between lower and upper blade tip, in combination with a hub-height meteorological mast. The measurements have been performed using DTU's measurement equipment, the analysis...
Hypertension in master endurance athletes.
Hernelahti, M; Kujala, U M; Kaprio, J; Karjalainen, J; Sarna, S
1998-11-01
To determine whether long-term very vigorous endurance training prevents hypertension. Cohort study of master orienteering runners and controls. Finland. In 1995, a health questionnaire was completed by 264 male orienteering runners (response rate 90.4%) who had been top-ranked in competitions among men aged 35-59 years in 1984, and by 388 similarly aged male controls (response rate 87.1%) who were healthy at the age of 20 years and free of overt ischemic heart disease in 1985. Self-report of medication for hypertension. In the endurance athlete group, the crude prevalence (8.7%) of subjects who had used medication for hypertension was less than a third of that in the control group (27.8%). Even after adjusting for age and body mass index, the difference between the groups was still significant (odds ratio for athletes 0.43, 95% confidence interval 0.25-0.76). Long-term vigorous endurance training is associated with a low prevalence of hypertension. Some of the effect can be explained by a lower body mass, but exercise seems to induce a lower rate of hypertension through mechanisms other than decreasing body weight.
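The crude comparison in this abstract can be reconstructed from the reported prevalences. The sketch below computes a crude odds ratio with a Woolf (log-scale) confidence interval from a 2x2 table; the counts are back-calculated from 8.7% of 264 athletes and 27.8% of 388 controls, so the result is the crude ratio, not the paper's age- and BMI-adjusted 0.43:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf (log-scale) 95% confidence interval for
    a 2x2 table: a/b = exposed with/without outcome, c/d = unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Counts reconstructed from the reported prevalences:
# 23/264 athletes (8.7%) and 108/388 controls (27.8%) on medication.
or_crude, ci_lo, ci_hi = odds_ratio_ci(23, 241, 108, 280)
```

The crude odds ratio comes out lower than the adjusted 0.43, consistent with the adjustment for age and body mass index moving the estimate toward the null.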
Curved electromagnetic missiles
International Nuclear Information System (INIS)
Myers, J.M.; Shen, H.M.; Wu, T.T.
1989-01-01
Transient electromagnetic fields can exhibit interesting behavior in the limit of great distances from their sources. In situations of finite total radiated energy, the energy reaching a distant receiver can decrease with distance much more slowly than the usual r^-2. Cases of such slow decrease have been referred to as electromagnetic missiles. All of the wide variety of known missiles propagate in essentially straight lines. A sketch is presented here of a missile that can follow a path that is strongly curved. An example of a curved electromagnetic missile is explicitly constructed and some of its properties are discussed. References to details available elsewhere are given.