WorldWideScience

Sample records for qm-based end-point method

  1. Gran method for end point anticipation in monosegmented flow titration

    Directory of Open Access Journals (Sweden)

    Aquino Emerson V

    2004-01-01

An automatic potentiometric monosegmented flow titration procedure based on the Gran linearisation approach has been developed. The controlling program can estimate the end point of the titration after the addition of three or four aliquots of titrant. Alternatively, the end point can be determined by the second-derivative procedure; in this case, additional volumes of titrant are added until the vicinity of the end point, and three points before and after the stoichiometric point are used for the end point calculation. The performance of the system was assessed by the determination of chloride in isotonic beverages and parenteral solutions. The system employs a tubular Ag2S/AgCl indicator electrode. A typical titration, performed according to the IUPAC definition, requires only 60 mL of sample and about the same volume of titrant (AgNO3 solution). A complete titration can be carried out in 1-5 min. The accuracy and precision (relative standard deviation of ten replicates) are 2% and 1% for the Gran and 1% and 0.5% for the Gran/derivative end point determination procedures, respectively. The proposed system reduces the time needed to perform a titration, ensuring low sample and reagent consumption and fully automatic sampling and titrant addition in a calibration-free titration protocol.
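
The Gran approach linearises the pre-equivalence branch of the potentiometric curve so the end point can be extrapolated from only a few titrant additions. A minimal sketch in Python, assuming an ideal Nernstian electrode and illustrative concentrations (not the paper's instrument or data):

```python
import numpy as np

def gran_end_point(v_titrant, emf, v0, slope=59.16):
    """Gran linearisation: F = (V0 + V) * 10**(E/slope) is linear in V on
    the pre-equivalence branch and extrapolates to zero at the end point."""
    F = (v0 + v_titrant) * 10.0 ** (emf / slope)
    a, b = np.polyfit(v_titrant, F, 1)   # straight-line fit F = a*V + b
    return -b / a                        # volume where F crosses zero

# Synthetic Ag+/Cl- titration: 25 mL of 0.01 M chloride vs 0.01 M AgNO3,
# sampled at four pre-equivalence titrant volumes (hypothetical values).
v0, c_cl, c_ag = 25.0, 0.01, 0.01
v = np.array([10.0, 12.0, 14.0, 16.0])        # mL titrant added
mol_cl = v0 * c_cl - v * c_ag                 # chloride remaining
emf = 59.16 * np.log10(mol_cl / (v0 + v))     # ideal electrode response
print(round(gran_end_point(v, emf, v0), 2))   # -> 25.0 (equivalence volume)
```

Because the Gran function is linear before the equivalence point, three or four early aliquots suffice, which is exactly what makes the end-point anticipation possible.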

  2. Comparison of methods for accurate end-point detection of potentiometric titrations

    International Nuclear Information System (INIS)

    Villela, R L A; Borges, P P; Vyskočil, L

    2015-01-01

Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper.

  3. Comparison of methods for accurate end-point detection of potentiometric titrations

    Science.gov (United States)

    Villela, R. L. A.; Borges, P. P.; Vyskočil, L.

    2015-01-01

Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper.
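
The traditional second-derivative technique used as the baseline here locates the end point at the zero crossing of d²E/dV². A minimal numpy sketch on a synthetic sigmoid curve (illustrative data, not the paper's simulations):

```python
import numpy as np

def second_derivative_end_point(v, e):
    """End point at the zero crossing of the second derivative of E vs V,
    linearly interpolated between grid points. Assumes low-noise data."""
    d1 = np.gradient(e, v)
    d2 = np.gradient(d1, v)
    i = int(np.argmax(np.abs(d1)))          # steepest part of the curve
    for j in range(max(i - 2, 0), len(v) - 1):
        if d2[j] == 0.0:
            return v[j]
        if d2[j] * d2[j + 1] < 0.0:         # sign change brackets the root
            return v[j] - d2[j] * (v[j + 1] - v[j]) / (d2[j + 1] - d2[j])
    return v[i]

# Symmetric sigmoid centred at 25.00 mL (hypothetical titration curve)
v = np.linspace(20.0, 30.0, 101)
e = 300.0 * np.tanh((v - 25.0) / 0.5)       # mV vs mL
print(round(second_derivative_end_point(v, e), 2))
```

On noisy data this grid-based estimate degrades quickly, which is the weakness that least-squares fitting via Levenberg-Marquardt is meant to address.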

  4. A time domain inverse dynamic method for the end point tracking control of a flexible manipulator

    Science.gov (United States)

    Kwon, Dong-Soo; Book, Wayne J.

    1991-01-01

    The inverse dynamic equation of a flexible manipulator was solved in the time domain. By dividing the inverse system equation into the causal part and the anticausal part, we calculated the torque and the trajectories of all state variables for a given end point trajectory. The interpretation of this method in the frequency domain was explained in detail using the two-sided Laplace transform and the convolution integral. The open loop control of the inverse dynamic method shows an excellent result in simulation. For real applications, a practical control strategy is proposed by adding a feedback tracking control loop to the inverse dynamic feedforward control, and its good experimental performance is presented.
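
The causal/anticausal split can be illustrated on a toy discrete system. The sketch below is not the paper's flexible-manipulator model; it only shows why a non-minimum-phase inverse must be run backwards in time: the hypothetical FIR plant y[n] = u[n] - 2u[n-1] has a zero at z = 2, so its exact inverse diverges as a forward recursion but is stable when swept backwards.

```python
import numpy as np

def inverse_forward(y):
    """Causal inverse of y[n] = u[n] - 2*u[n-1]: pole at z = 2, unstable."""
    u = np.zeros_like(y)
    u[0] = y[0]
    for n in range(1, len(y)):
        u[n] = y[n] + 2.0 * u[n - 1]        # grows geometrically
    return u

def inverse_backward(y):
    """Anticausal inverse: the same equations solved from the final time
    backwards (boundary u[N-1] = 0), which is the stable direction."""
    u = np.zeros(len(y))
    for n in range(len(y) - 1, 0, -1):
        u[n - 1] = (u[n] - y[n]) / 2.0
    return u

y = np.sin(np.linspace(0.0, 4.0 * np.pi, 60))   # desired output trajectory
uf, ub = inverse_forward(y), inverse_backward(y)
yhat = ub - 2.0 * np.concatenate(([0.0], ub[:-1]))
print(np.max(np.abs(uf)) > 1e10)     # forward inverse blows up: True
print(np.allclose(yhat[1:], y[1:]))  # anticausal inverse reproduces y: True
```

The paper's contribution is the continuous, flexible-body version of this idea, with the two-sided Laplace transform playing the role of the backward sweep.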

  5. A step towards standardization: A method for end-point titer determination by fluorescence index of an automated microscope. End-point titer determination by fluorescence index.

    Science.gov (United States)

    Carbone, Teresa; Gilio, Michele; Padula, Maria Carmela; Tramontano, Giuseppina; D'Angelo, Salvatore; Pafundi, Vito

    2018-05-01

Indirect Immunofluorescence (IIF) is widely considered the Gold Standard for Antinuclear Antibody (ANA) screening. However, high inter-reader variability remains the major disadvantage associated with ANA testing and the main reason for the increasing demand for computer-aided immunofluorescence microscopes. Previous studies proposed quantification of the fluorescence intensity as an alternative to the classical end-point titer evaluation. However, the different distribution of bright/dark light, linked to the nature of the self-antigen and its location in the cells, results in different mean fluorescence intensities. The aim of the present study was to correlate the Fluorescence Index (F.I.) with end-point titers for each well-defined ANA pattern. Routine serum samples were screened for ANA on HEp-2000 cells using the Immuno Concepts Image Navigator System, and positive samples were serially diluted to assign the end-point titer. A comparison between F.I. and end-point titers related to 10 different staining patterns was made. According to our analysis, good technical performance of F.I. (97% sensitivity and 94% specificity) was found. A significant correlation between the quantitative reading of F.I. and end-point titer groups was observed using Spearman's test and regression analysis. A conversion scale from F.I. to end-point titers for each recognized ANA pattern was obtained. The Image Navigator offers the opportunity to improve worldwide harmonization of ANA test results. In particular, digital F.I. allows ANA titers to be quantified using just one sample dilution. It could represent valuable support for the routine laboratory and an effective tool to reduce inter- and intra-laboratory variability. Copyright © 2018. Published by Elsevier B.V.

  6. CaFE: a tool for binding affinity prediction using end-point free energy methods.

    Science.gov (United States)

    Liu, Hui; Hou, Tingjun

    2016-07-15

Accurate prediction of binding free energy is of particular importance to computational biology and structure-based drug design. Among the methods for binding affinity prediction, the end-point approaches, such as MM/PBSA and LIE, have been widely used because they achieve a good balance between prediction accuracy and computational cost. Here we present an easy-to-use pipeline tool named Calculation of Free Energy (CaFE) to conduct MM/PBSA and LIE calculations. Powered by the VMD and NAMD programs, CaFE is able to handle numerous static coordinate and molecular dynamics trajectory file formats generated by different molecular simulation packages and supports various force field parameters. CaFE source code and documentation are freely available under the GNU General Public License via GitHub at https://github.com/huiliucode/cafe_plugin. It is a VMD plugin written in Tcl, and its usage is platform-independent. Contact: tingjunhou@zju.edu.cn. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. An application of the 'end-point' method to the minimum critical mass problem in two group transport theory

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2003-01-01

A two group integral equation derived using transport theory, which describes the fuel distribution necessary for a flat thermal flux and minimum critical mass, is solved by the classical end-point method. This method has a number of advantages and in particular highlights the changing behaviour of the fissile mass distribution function in the neighbourhood of the core-reflector interface. We also show how the reflector thermal flux behaves and explain the origin of the maximum which arises when the critical size is less than that corresponding to minimum critical mass. A comparison is made with diffusion theory, and the necessary, somewhat artificial, presence of surface delta functions in the fuel distribution is shown to be analogous to the edge transients that arise naturally in transport theory.

  8. Comparison between amperometric and true potentiometric end-point detection in the determination of water by the Karl Fischer method.

    Science.gov (United States)

    Cedergren, A

    1974-06-01

    A rapid and sensitive method using true potentiometric end-point detection has been developed and compared with the conventional amperometric method for Karl Fischer determination of water. The effect of the sulphur dioxide concentration on the shape of the titration curve is shown. By using kinetic data it was possible to calculate the course of titrations and make comparisons with those found experimentally. The results prove that the main reaction is the slow step, both in the amperometric and the potentiometric method. Results obtained in the standardization of the Karl Fischer reagent showed that the potentiometric method, including titration to a preselected potential, gave a standard deviation of 0.001(1) mg of water per ml, the amperometric method using extrapolation 0.002(4) mg of water per ml and the amperometric titration to a pre-selected diffusion current 0.004(7) mg of water per ml. Theories and results dealing with dilution effects are presented. The time of analysis was 1-1.5 min for the potentiometric and 4-5 min for the amperometric method using extrapolation.

  9. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    Science.gov (United States)

    Kholeif, S A

    2001-06-01

A new method, belonging to the differential category, for determining end points from potentiometric titration curves is presented. It uses a preprocessing step that finds first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares validation and multifactor data analysis is covered. The new method applies generally to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared with those from the equivalence-point category of methods, such as Gran or Fortuin.
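
The inverse parabolic interpolation step referred to above has a closed form: the vertex of the parabola through three points around the derivative extremum. A small sketch with illustrative numbers (the paper's four-point non-linear preprocessing is not reproduced here):

```python
import numpy as np

def parabola_peak(x, y):
    """Analytic vertex of the parabola through three (x, y) points;
    refines an extremum located on a coarse grid."""
    x0, x1, x2 = x
    y0, y1, y2 = y
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    return x1 - 0.5 * num / den

# Exact on a true parabola: the vertex of y = -(x - 25)^2 is recovered.
xs = (24.0, 25.5, 26.0)
ys = tuple(-(x - 25.0) ** 2 for x in xs)
print(parabola_peak(xs, ys))                     # -> 25.0

# Applied to three first-derivative values near a simulated equivalence peak
v = np.array([24.8, 25.1, 25.4])
dEdV = 500.0 / (1.0 + ((v - 25.03) / 0.2) ** 2)  # peaked at 25.03 mL
print(round(parabola_peak(v, dEdV), 2))
```

On the non-parabolic derivative peak the vertex is only an approximation, which is why the paper fits a non-linear function first and interpolates afterwards.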

  10. Calorimetry end-point predictions

    International Nuclear Information System (INIS)

    Fox, M.A.

    1981-01-01

This paper describes a portion of the work presently in progress at Rocky Flats in the field of calorimetry. In particular, calorimetry end-point predictions are outlined. The problems associated with end-point predictions and the progress made in overcoming these obstacles are discussed. The two major problems, noise and an accurate description of the heat function, are dealt with to obtain the most accurate results. Data are taken from an actual calorimeter and are processed by means of three different noise reduction techniques. The processed data are then utilized by one to four algorithms, depending on the accuracy desired, to determine the end-point.
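
One classical way to predict the end point of an exponentially settling calorimeter signal is Aitken's delta-squared extrapolation on three equally spaced samples. The sketch below is a generic illustration of this class of algorithm; the report's own algorithms and noise-reduction techniques are not described in the abstract:

```python
from math import exp

def aitken_end_point(y0, y1, y2):
    """Aitken delta-squared extrapolation: exact equilibrium value for a
    signal of the form y(t) = L + A*r**n sampled at equally spaced times."""
    d1, d0 = y2 - y1, y1 - y0
    return y2 - d1 ** 2 / (d1 - d0)

# Hypothetical calorimeter trace settling toward 5.0 W with a 30 s time
# constant, sampled at t = 0, 10, 20 s: the plateau is predicted early.
samples = [5.0 - 3.0 * exp(-t / 30.0) for t in (0.0, 10.0, 20.0)]
print(round(aitken_end_point(*samples), 3))   # -> 5.0 (predicted plateau)
```

In practice the extrapolation is applied only after noise reduction, since the second difference in the denominator amplifies measurement noise.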

  11. UO3 deactivation end point criteria

    International Nuclear Information System (INIS)

    Stefanski, L.D.

    1994-01-01

The UO3 Deactivation End Point Criteria are necessary to facilitate the transfer of the UO3 Facility from the Office of Facility Transition and Management (EM-60) to the Office of Environmental Restoration (EM-40). The criteria were derived from a logical process for determining end points for the systems and spaces at the UO3 Facility based on the objectives, tasks, and expected future uses pertinent to each system or space. Furthermore, the established criteria meet the intent of, and support, the draft guidance for acceptance criteria prepared by EM-40, "U.S. Department of Energy Office of Environmental Restoration (EM-40) Decontamination and Decommissioning Guidance Document (Draft)." For the UO3 Facility, the overall objective of deactivation is to achieve a safe, stable and environmentally sound condition, suitable for an extended period, as quickly and economically as possible. Once deactivated, the facility is kept in its stable condition by means of a methodical surveillance and maintenance (S&M) program, pending ultimate decontamination and decommissioning (D&D). Deactivation work involves a range of tasks, such as removal of hazardous material, elimination or shielding of radiation fields, partial decontamination to permit access for inspection, and installation of monitors and alarms. It is important that the end point of each of these tasks be established clearly and in advance, for the following reasons: (1) End points must be such that the central element of the deactivation objective, to achieve stability, is unquestionably achieved. (2) Much of the deactivation work involves worker exposure to radiation or dangerous materials; this can be minimized by avoiding unnecessary work. (3) Each task is, in effect, competing for resources with other deactivation tasks and other facilities. By ensuring that each task is appropriately bounded, DOE's overall resources can be used most fully and effectively.

  12. End points and assessments in esthetic dental treatment.

    Science.gov (United States)

    Ishida, Yuichi; Fujimoto, Keiko; Higaki, Nobuaki; Goto, Takaharu; Ichikawa, Tetsuo

    2015-10-01

    There are two key considerations for successful esthetic dental treatments. This article systematically describes the two key considerations: the end points of esthetic dental treatments and assessments of esthetic outcomes, which are also important for acquiring clinical skill in esthetic dental treatments. The end point and assessment of esthetic dental treatment were discussed through literature reviews and clinical practices. Before designing a treatment plan, the end point of dental treatment should be established. The section entitled "End point of esthetic dental treatment" discusses treatments for maxillary anterior teeth and the restoration of facial profile with prostheses. The process of assessing treatment outcomes entitled "Assessments of esthetic dental treatment" discusses objective and subjective evaluation methods. Practitioners should reach an agreement regarding desired end points with patients through medical interviews, and continuing improvements and developments of esthetic assessments are required to raise the therapeutic level of esthetic dental treatments. Copyright © 2015 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  13. Evaluation of silicon-chemiluminescence monitoring as a novel method for atomic fluorine determination and end point detection in plasma etch systems

    NARCIS (Netherlands)

    Zijlstra, P.A.; Beenakker, C.I.M.

    1981-01-01

    Optical methods for the detection of atomic fluorine in plasma etch systems are discussed and an experimental comparison is made between detection by optical emission and by a novel method based on the chemiluminescence from solid silicon in the presence of atomic fluorine. Although both methods

  14. Studies on the interference of hydrofluoric acid and phosphoric acid in the determination of uranium using Ti(III) reduction method-biamperometry end point

    International Nuclear Information System (INIS)

    Shiny, T.S.; Rajalakshmi, A.; Phal, D.G.; Charyulu, M.M.; Ramakumar, K.L.

    2007-01-01

Accurate and precise determination of uranium in nuclear materials is necessary for chemical quality control as well as for nuclear material accounting purposes. Different types of uranium samples are received for measurement, and the dissolution procedure is selected depending on the nature of the sample. Mixed oxide samples of uranium and plutonium, for example, are dissolved in nitric acid containing hydrofluoric acid under an IR lamp. The fluoride ions are removed by repeated evaporation of the solution; however, some fluoride remains in solution depending on the conditions of evaporation. Uranium samples and alloy samples are dissolved in dilute hydrochloric acid, and the rate of dissolution depends on the concentration of the acid. Sometimes a mixture of hydrochloric acid and hydrofluoric acid is used for the dissolution of metal alloy samples, which may contain silica. Another method of dissolving these samples uses a mixture of phosphoric acid and 1% hydrofluoric acid. It is therefore necessary to study the interference of hydrofluoric acid and phosphoric acid on the determination of uranium.

  15. End-point sharpness in thermometric titrimetry.

    Science.gov (United States)

    Tyrrell, H J

    1967-07-01

It is shown that the sharpness of an end-point in a thermometric titration where the simple reaction A + B ⇌ AB takes place depends on Kc(A'), where K is the equilibrium constant for the reaction and c(A') is the total concentration of the titrand (A) in the reaction mixture. The end-point is sharp if (i) the enthalpy change in the reaction is not negligible, and (ii) Kc(A') > 10³. This shows that it should, for example, be possible to titrate 0.1 M acid, pK(A) = 10, using a thermometric end-point. Some aspects of thermometric titrimetry when Kc(A') < 10³ are also considered.
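
The 0.1 M, pK(A) = 10 example can be checked against the Kc(A') > 10³ criterion directly: for titration of a weak acid with strong base, the effective equilibrium constant is K = Ka/Kw. A quick numeric check (standard equilibrium algebra, not taken from the paper):

```python
# HA + OH- <=> A- + H2O has K = Ka/Kw; with pKa = 10 and c(A') = 0.1 M,
# K*c = (1e-10 / 1e-14) * 0.1 = 1e3, right at the sharpness threshold.
Ka, Kw, c = 1.0e-10, 1.0e-14, 0.1
Kc = (Ka / Kw) * c
print(f"{Kc:.0f}")   # -> 1000
```

So the quoted acid is exactly the borderline case the criterion admits, which is the point of the example: such an acid is far too weak for a sharp potentiometric end-point but still titratable thermometrically.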

  16. End points for validating early warning scores in the context of rapid response systems

    DEFF Research Database (Denmark)

    Pedersen, N. E.; Oestergaard, D.; Lippert, A.

    2016-01-01

INTRODUCTION: When investigating early warning scores and similar physiology-based risk stratification tools, death, cardiac arrest and intensive care unit admission are traditionally used as end points. A large proportion of the patients identified by these end points cannot be saved, even with optimal treatment. This could pose a limitation to studies using these end points. We studied current expert opinion on end points for validating tools for the identification of patients in hospital wards at risk of imminent critical illness. METHODS: The Delphi consensus methodology was used. We…

  17. Methods of a large prospective, randomised, open-label, blinded end-point study comparing morning versus evening dosing in hypertensive patients: the Treatment In Morning versus Evening (TIME) study.

    Science.gov (United States)

    Rorie, David A; Rogers, Amy; Mackenzie, Isla S; Ford, Ian; Webb, David J; Willams, Bryan; Brown, Morris; Poulter, Neil; Findlay, Evelyn; Saywood, Wendy; MacDonald, Thomas M

    2016-02-09

    Nocturnal blood pressure (BP) appears to be a better predictor of cardiovascular outcome than daytime BP. The BP lowering effects of most antihypertensive therapies are often greater in the first 12 h compared to the next 12 h. The Treatment In Morning versus Evening (TIME) study aims to establish whether evening dosing is more cardioprotective than morning dosing. The TIME study uses the prospective, randomised, open-label, blinded end-point (PROBE) design. TIME recruits participants by advertising in the community, from primary and secondary care, and from databases of consented patients in the UK. Participants must be aged over 18 years, prescribed at least one antihypertensive drug taken once a day, and have a valid email address. After the participants have self-enrolled and consented on the secure TIME website (http://www.timestudy.co.uk) they are randomised to take their antihypertensive medication in the morning or the evening. Participant follow-ups are conducted after 1 month and then every 3 months by automated email. The trial is expected to run for 5 years, randomising 10,269 participants, with average participant follow-up being 4 years. The primary end point is hospitalisation for the composite end point of non-fatal myocardial infarction (MI), non-fatal stroke (cerebrovascular accident; CVA) or any vascular death determined by record-linkage. Secondary end points are: each component of the primary end point, hospitalisation for non-fatal stroke, hospitalisation for non-fatal MI, cardiovascular death, all-cause mortality, hospitalisation or death from congestive heart failure. The primary outcome will be a comparison of time to first event comparing morning versus evening dosing using an intention-to-treat analysis. The sample size is calculated for a two-sided test to detect 20% superiority at 80% power. TIME has ethical approval in the UK, and results will be published in a peer-reviewed journal. UKCRN17071; Pre-results. Published by the BMJ
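
The quoted power statement (two-sided test, 20% superiority, 80% power) can be illustrated with Schoenfeld's approximation for event-driven survival comparisons. This is a generic sketch of that kind of calculation, not the TIME investigators' actual sample-size derivation:

```python
from math import log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.80):
    """Schoenfeld's approximation for the number of primary-outcome events
    needed to detect hazard ratio `hr` in a 1:1 two-arm survival trial
    with a two-sided test at significance level `alpha`."""
    za = NormalDist().inv_cdf(1.0 - alpha / 2.0)   # 1.96 for alpha = 0.05
    zb = NormalDist().inv_cdf(power)               # 0.84 for 80% power
    return 4.0 * (za + zb) ** 2 / log(hr) ** 2

# 20% relative risk reduction corresponds to a hazard ratio of 0.80
print(round(schoenfeld_events(0.80)))   # required number of events
```

The total enrolment (10,269 here) is then set so that, given event rates and follow-up, the trial accrues at least that many primary-end-point events.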

  18. Consensus Statement on Diagnostic End Points for Infant Tuberculosis Vaccine Trials

    NARCIS (Netherlands)

    Hatherill, Mark; Verver, Suzanne; Mahomed, Hassan; Barker, Lew; Behr, Marcel; Cardenas, Vicky; Eisele, Bernd; Douoguih, Macaya; Evans, Thomas G.; Eskola, Juhani; Fourie, Bernard; Grewal, Harleen; Grode, Leander; Hawkridge, Tony; Hesseling, Anneke; Hussey, Gregory; Kiringa, Grace; Landry, Bernard; Lockhart, Stephen; Marais, Ben; Måseide, Kårstein; Mayanja, Harriet; McClain, Bruce; McShane, Helen; Moyo, Sizulu; Ofori, Opokua; Parida, Shreemanta K.; Ryall, Robert P.; Sacarlal, Jahit; Sadoff, Jerry; Shea, Jacqui; Tameris, Michele; van Rie, Annelies; von Reyn, C. Fordham; Wajja, Anne; Walker, Bob; Walzl, Gerhard; Wilkinson, Robert J.

    2012-01-01

    Background. Definition of clinical trial end points for childhood tuberculosis is hindered by lack of a standard case definition. We aimed to identify areas of consensus or debate on potential end points for tuberculosis vaccine trials among human immunodeficiency virus-uninfected children. Methods.

  19. Validation of intermediate end points in cancer research.

    Science.gov (United States)

    Schatzkin, A; Freedman, L S; Schiffman, M H; Dawsey, S M

    1990-11-21

    Investigations using intermediate end points as cancer surrogates are quicker, smaller, and less expensive than studies that use malignancy as the end point. We present a strategy for determining whether a given biomarker is a valid intermediate end point between an exposure and incidence of cancer. Candidate intermediate end points may be selected from case series, ecologic studies, and animal experiments. Prospective cohort and sometimes case-control studies may be used to quantify the intermediate end point-cancer association. The most appropriate measure of this association is the attributable proportion. The intermediate end point is a valid cancer surrogate if the attributable proportion is close to 1.0, but not if it is close to 0. Usually, the attributable proportion is close to neither 1.0 nor 0; in this case, valid surrogacy requires that the intermediate end point mediate an established exposure-cancer relation. This would in turn imply that the exposure effect would vanish if adjusted for the intermediate end point. We discuss the relative advantages of intervention and observational studies for the validation of intermediate end points. This validation strategy also may be applied to intermediate end points for adverse reproductive outcomes and chronic diseases other than cancer.
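
The attributable proportion the authors favour has the standard epidemiological form AP = p(RR - 1) / [1 + p(RR - 1)], where p is the prevalence of the intermediate marker and RR its relative risk for cancer. A small illustration with hypothetical numbers (the formula is the standard one, not quoted from the paper):

```python
def attributable_proportion(p, rr):
    """Population attributable proportion from marker prevalence p and
    relative risk rr: the fraction of cases associated with the marker."""
    x = p * (rr - 1.0)
    return x / (1.0 + x)

# Hypothetical markers: a strong surrogate candidate vs a weak one.
print(round(attributable_proportion(0.20, 15.0), 2))  # -> 0.74 (nearer 1)
print(round(attributable_proportion(0.20, 2.0), 2))   # -> 0.17 (nearer 0)
```

By the validation strategy described above, only a marker whose attributable proportion approaches 1.0 is a plausible surrogate; most real markers fall in between and must additionally be shown to mediate the exposure-cancer relation.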

  20. Use of Surrogate end points in HTA

    Directory of Open Access Journals (Sweden)

    Mangiapane, Sandra

    2009-08-01

The different actors involved in health system decision-making and regulation have to deal with the question of which parameters are valid for assessing the health value of health technologies. So-called surrogate endpoints represent, in the best case, preliminary steps in the causal chain leading to the relevant outcome (e.g. mortality, morbidity) and are usually not directly perceptible by patients. Surrogate endpoints are used not only in trials of pharmaceuticals but also in studies of other technologies. Their use in the assessment of the benefit of a health technology is, however, problematic. In this report we intend to answer the following research questions: Which criteria need to be fulfilled for a surrogate parameter to be considered a valid endpoint? Which methods have been described in the literature for the assessment of the validity of surrogate endpoints? Which methodological recommendations concerning the use of surrogate endpoints have been made by international HTA agencies? Which place has been given to surrogate endpoints in international and German HTA reports? For this purpose, we chose three different approaches. Firstly, we conducted a review of the methodological literature dealing with the issue of surrogate endpoints and their validation. Secondly, we analysed current methodological guidelines of HTA agencies that are members of the International Network of Agencies for Health Technology Assessment (INAHTA), as well as of agencies concerned with assessments for reimbursement purposes. Finally, we analysed the outcome parameters used in a sample of publicly available HTA reports. The analysis of methodological guidelines shows a very cautious position of HTA institutions regarding the use of surrogate endpoints in technology assessment. Surrogate endpoints have not been prominently used in HTA reports. None of the analysed reports based its conclusions solely on the results of surrogate endpoints.
The analysis of German HTA reports shows a

  1. Radiometric titration of officinal radiopharmaceuticals using radioactive kryptonates as end-point indicators. II

    International Nuclear Information System (INIS)

    Harangozo, M.; Jombik, J.; Schiller, P.; Toelgyessy, J.

    1981-01-01

A method for the determination of citric, tartaric and undecylenic acids based on radiometric titration with 0.1 or 0.05 mol l⁻¹ NaOH was developed. As the end-point indicator, the radioactive kryptonate of glass was used. The experimental technique, results of the determinations, and other possible applications of the radioactive kryptonate of glass for end-point determination in alkalimetric analyses of officinal pharmaceuticals are discussed. (author)

  2. Surrogate end points in clinical research: hazardous to your health.

    Science.gov (United States)

    Grimes, David A; Schulz, Kenneth F

    2005-05-01

    Surrogate end points in clinical research pose real danger. A surrogate end point is an outcome measure, commonly a laboratory test, that substitutes for a clinical event of true importance. Resistance to activated protein C, for example, has been used as a surrogate for venous thrombosis in women using oral contraceptives. Other examples of inappropriate surrogate end points in contraception include the postcoital test instead of pregnancy to evaluate new spermicides, breakage and slippage instead of pregnancy to evaluate condoms, and bone mineral density instead of fracture to assess the safety of depo-medroxyprogesterone acetate. None of these markers captures the effect of the treatment on the true outcome. A valid surrogate end point must both correlate with and accurately predict the outcome of interest. Although many surrogate markers correlate with an outcome, few have been shown to capture the effect of a treatment (for example, oral contraceptives) on the outcome (venous thrombosis). As a result, thousands of useless and misleading reports on surrogate end points litter the medical literature. New drugs have been shown to benefit a surrogate marker, but, paradoxically, triple the risk of death. Thousands of patients have died needlessly because of reliance on invalid surrogate markers. Researchers should avoid surrogate end points unless they have been validated; that requires at least one well done trial using both the surrogate and true outcome. The clinical maxim that "a difference to be a difference must make a difference" applies to research as well. Clinical research should focus on outcomes that matter.

  3. Radiometric titration of officinal radiopharmaceuticals using radioactive kryptonates as end-point indicators. I

    International Nuclear Information System (INIS)

    Toelgyessy, J.; Dillinger, P.; Harangozo, M.; Jombik, J.

    1980-01-01

A method for the determination of salicylic, acetylsalicylic and benzoic acids in officinal pharmaceuticals, based on radiometric titration with 0.1 mol l⁻¹ NaOH, was developed. The end-point was detected with the aid of radioactive glass kryptonate: after the end-point, the excess titrant attacks the glass surface layers, releasing ⁸⁵Kr and consequently decreasing the radioactivity of the kryptonate employed. The radioactive kryptonate used as an indicator was prepared by bombarding glass with accelerated ⁸⁵Kr ions. The developed method is simple and accurate. (author)

  4. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are
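
The simplest of the approaches catalogued above, a constant annual event rate in a cohort Markov model, can be sketched in a few lines. The rate and horizon below are hypothetical illustration values, not taken from any of the reviewed models:

```python
from math import exp

def markov_survival(annual_rate, years, cycle=1.0):
    """Two-state (event-free -> event) Markov cohort trace: a constant
    annual event rate is converted to a per-cycle transition probability
    via p = 1 - exp(-rate * cycle), then applied each cycle."""
    p_event = 1.0 - exp(-annual_rate * cycle)
    alive, trace = 1.0, []
    for _ in range(int(years / cycle)):
        alive *= (1.0 - p_event)
        trace.append(alive)
    return trace

# Hypothetical 2%/year CV event rate over a 10-year horizon
trace = markov_survival(annual_rate=0.02, years=10)
print(round(trace[-1], 3))   # fraction still event-free after 10 years
```

The review's point is that this constant-rate simplification is adequate for rare events or state-heavy models, whereas frequent events in high-risk populations justify time-dependent rates or individual-level simulation.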

  5. Electrical breakthrough effect for end pointing in 90 and 45 nm node circuit edit

    International Nuclear Information System (INIS)

    Liu, Kun; Soskov, Alex; Scipioni, Larry; Bassom, Neil; Sijbrandij, Sybren; Smith, Gerald

    2006-01-01

    The interaction between high-energy Ga+ ions and condensed matter is studied for circuit edit applications. A new 'electrical breakthrough effect', due to charging of, and Ga+ penetration/doping into, dielectrics, is discovered. This new effect is proposed for end pointing in 90 and 45 nm node circuit edits, where integrated circuit device dimensions are a few hundred nanometers. This new end point approach is very sensitive, reliable, and precise. Most importantly, it is not sensitive to device dimensions. A series of circuit edits involving milling holes of high aspect ratio (5-30) and small cross-sectional area (0.01-0.25 μm²) on real chips has been successfully performed using the electrical breakthrough effect as the end point method.

  6. Impact of confinement housing on study end-points in the calf model of cryptosporidiosis.

    Science.gov (United States)

    Graef, Geneva; Hurst, Natalie J; Kidder, Lance; Sy, Tracy L; Goodman, Laura B; Preston, Whitney D; Arnold, Samuel L M; Zambriski, Jennifer A

    2018-04-01

    Diarrhea is the second leading cause of death in children. The calf model of cryptosporidiosis uses two sample-collection approaches: CFC, which requires confinement housing, and Interval Collection (IC), which permits use of box stalls. CFC mimics human challenge model methodology, but it is unknown whether confinement housing impacts study end-points and whether data gathered via this method are suitable for generalization to human populations. Using a modified crossover study design, we compared CFC and IC and evaluated the impact of housing on study end-points. At birth, calves were randomly assigned to confinement (n = 14) or box stall housing (n = 9), were challenged with 5 × 10⁷ C. parvum oocysts, and followed for 10 days. Study end-points included fecal oocyst shedding, severity of diarrhea, degree of dehydration, and plasma cortisol. Calves in confinement had no significant differences in mean log oocysts enumerated per gram of fecal dry matter between CFC and IC samples (P = 0.6), nor were there diurnal variations in oocyst shedding (P = 0.1). Confinement-housed calves shed significantly more oocysts (P = 0.05), had higher plasma cortisol (P = 0.001), and required more supportive care (P = 0.0009) than calves in box stalls. Housing method confounds study end-points in the calf model of cryptosporidiosis. Due to increased stress, data collected from calves in confinement housing may not accurately estimate the efficacy of chemotherapeutics targeting C. parvum.

  7. End point control of an actinide precipitation reactor

    International Nuclear Information System (INIS)

    Muske, K.R.

    1997-01-01

    The actinide precipitation reactors in the nuclear materials processing facility at Los Alamos National Laboratory are used to remove actinides and other heavy metals from the effluent streams generated during the purification of plutonium. These effluent streams consist of hydrochloric acid solutions, ranging from one to five molar in concentration, in which actinides and other metals are dissolved. The actinides present are plutonium and americium. Typical actinide loadings range from one to five grams per liter. The most prevalent heavy metals are iron, chromium, and nickel, which originate from stainless steel. Removal of these metals from solution is accomplished by hydroxide precipitation during the neutralization of the effluent. An end point control algorithm for the semi-batch actinide precipitation reactors at Los Alamos National Laboratory is described. The algorithm is based on an equilibrium solubility model of the chemical species in solution. This model is used to predict the amount of base hydroxide necessary to reach the end point of the actinide precipitation reaction. The model parameters are updated by on-line pH measurements.
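The prediction step described above can be caricatured as a charge-balance calculation: estimate the hydroxide needed to neutralize the free acid and precipitate the dissolved metals. The stoichiometry and all numbers below are illustrative assumptions, not the Los Alamos model, which tracks full solubility equilibria and refines its parameters from on-line pH:

```python
# Hedged sketch of end-point prediction for a semi-batch precipitation.
# Assumes trivalent cations, M3+ + 3 OH- -> M(OH)3, and complete
# precipitation of the metal loading.

def base_required(volume_l, acid_mol_l, metals_mol_l, oh_per_metal=3):
    """Moles of OH- to reach the precipitation end point (illustrative)."""
    acid_demand = volume_l * acid_mol_l                    # neutralize free HCl
    precip_demand = volume_l * metals_mol_l * oh_per_metal # precipitate metals
    return acid_demand + precip_demand

# 100 L of 3 M HCl effluent carrying 0.02 mol/L dissolved metals:
needed = base_required(100.0, 3.0, 0.02)   # 300 + 6 = 306 mol OH-
```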

  8. Low dose response analysis through a cytogenetic end-point

    International Nuclear Information System (INIS)

    Bojtor, I.; Koeteles, G.J.

    1998-01-01

    The effects of low doses were studied on human lymphocytes of various individuals. The frequency of micronuclei in cytokinesis-blocked cultured lymphocytes was taken as the end-point. The probability distribution of the radiation-induced increment was statistically shown to be asymmetric when the blood samples had been irradiated with doses of 0.01-0.05 Gy of X-rays, similar to that in the unirradiated control population. On the contrary, at or above 1 Gy the corresponding normal curve could be accepted, reflecting an approximately symmetrical scatter of the increments about their mean value. It was found that the slope, as well as the closeness of correlation of the variables, changed considerably as lower and lower dose ranges were selected. Below approximately 0.2 Gy, no relationship was found between the absorbed dose and the increment.

  9. Radiometric titration of officinal radiopharmaceuticals using radioactive kryptonates as end-point indicators. II. Citric, tartaric, undecylenic acids

    Energy Technology Data Exchange (ETDEWEB)

    Harangozo, M.; Jombik, J.; Schiller, P. (Komenskeho Univ., Bratislava (Czechoslovakia). Farmaceuticka Fakulta); Toelgyessy, J. (Slovenska Vysoka Skola Technicka, Bratislava (Czechoslovakia). Chemickotechnologicka Fakulta)

    1981-01-01

    A method for the determination of citric, tartaric and undecylenic acids based on radiometric titration with 0.1 or 0.05 mol l⁻¹ NaOH was developed. As an indicator of the end point, a radioactive kryptonate of glass was used. The experimental technique, results of determinations, as well as other possible applications of the radioactive kryptonate of glass for end point determination in alkalimetric analyses of officinal pharmaceuticals are discussed.

  10. Detection of Bordetella pertussis from Clinical Samples by Culture and End-Point PCR in Malaysian Patients.

    Science.gov (United States)

    Ting, Tan Xue; Hashim, Rohaidah; Ahmad, Norazah; Abdullah, Khairul Hafizi

    2013-01-01

    Pertussis, or whooping cough, is a highly infectious respiratory disease caused by Bordetella pertussis. In vaccinating countries, infants, adolescents, and adults are the relevant patient groups. A total of 707 clinical specimens were received from major hospitals in Malaysia in 2011. These specimens were cultured on Regan-Lowe charcoal agar and subjected to end-point PCR, which amplified the repetitive insertion sequence IS481 and the pertussis toxin promoter gene. Of these specimens, 275 were positive: 4 by culture only, 6 by both end-point PCR and culture, and 265 by end-point PCR only. The majority of the positive cases were from patients ≤3 months old (77.1%) (P < 0.05). Our study showed that the end-point PCR technique was able to pick up more positive cases than the culture method.

  11. Kinetic titration with differential thermometric determination of the end-point.

    Science.gov (United States)

    Sajó, I

    1968-06-01

    A method has been described for the determination of concentrations below 10⁻⁴ M by applying catalytic reactions and using thermometric end-point determination. A reference solution, identical with the sample solution except for catalyst, is titrated with catalyst solution until the rates of reaction become the same, as shown by a null deflection on a galvanometer connected via bridge circuits to two opposed thermistors placed in the solutions.

  12. A New Test Unit for Disintegration End-Point Determination of Orodispersible Films.

    Science.gov (United States)

    Low, Ariana; Kok, Si Ling; Khong, Yuet Mei; Chan, Sui Yung; Gokhale, Rajeev

    2015-11-01

    No standard time or pharmacopoeia disintegration test method for orodispersible films (ODFs) exists. The USP disintegration test for tablets and capsules poses significant challenges for end-point determination when used for ODFs. We tested a newly developed disintegration test unit (DTU) against the USP disintegration test. The DTU is an accessory to the USP disintegration apparatus. It holds the ODF in a horizontal position, allowing top-view of the ODF during testing. A Gauge R&R study was conducted to assign relative contributions of the total variability from the operator, sample or the experimental set-up. Precision was compared using commercial ODF products in different media. Agreement between the two measurement methods was analysed. The DTU showed improved repeatability and reproducibility compared to the USP disintegration system with tighter standard deviations regardless of operator or medium. There is good agreement between the two methods, with the USP disintegration test giving generally longer disintegration times possibly due to difficulty in end-point determination. The DTU provided clear end-point determination and is suitable for quality control of ODFs during product developmental stage or manufacturing. This may facilitate the development of a standardized methodology for disintegration time determination of ODFs. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
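A Gauge R&R study of the kind mentioned above partitions total measurement variance into repeatability (replicate scatter within one operator and sample), reproducibility (between operators), and sample-to-sample variation. The crossed-design sketch below uses invented disintegration times; the layout, seed, and all numbers are illustrative assumptions, and the simple moment estimates omit the usual ANOVA cross-term corrections:

```python
import numpy as np

# times[operator, sample, replicate]: hypothetical disintegration times (s)
rng = np.random.default_rng(0)
true_sample = np.array([30.0, 45.0, 60.0])       # sample (part) effects
operator_bias = np.array([-1.0, 0.0, 1.0])       # operator effects
times = (true_sample[None, :, None]
         + operator_bias[:, None, None]
         + rng.normal(0.0, 0.5, size=(3, 3, 4))) # repeatability noise

# Simple moment estimates of the variance components:
repeatability = times.var(axis=2, ddof=1).mean()        # within operator+sample
cell_means = times.mean(axis=2)
reproducibility = cell_means.mean(axis=1).var(ddof=1)   # between operators
part_variation = cell_means.mean(axis=0).var(ddof=1)    # between samples

grr = repeatability + reproducibility   # total gauge (measurement) variance
```

A capable test method shows `grr` small relative to `part_variation`, which is essentially the conclusion the DTU comparison is driving at.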

  13. The effective QCD phase diagram and the critical end point

    Directory of Open Access Journals (Sweden)

    Alejandro Ayala

    2015-08-01

    We study the QCD phase diagram on the temperature T and quark chemical potential μ plane, modeling the strong interactions with the linear sigma model coupled to quarks. The phase transition line is found from the effective potential at finite T and μ, taking into account the plasma screening effects. We find the location of the critical end point (CEP) to be (μ^CEP/T_c, T^CEP/T_c) ∼ (1.2, 0.8), where T_c is the (pseudo)critical temperature for the crossover phase transition at vanishing μ. This location lies within the region found by lattice-inspired calculations. The results show that in the linear sigma model, the CEP's location in the phase diagram is, as expected, determined solely through chiral symmetry breaking. The same is likely to be true for all other models which do not exhibit confinement, provided the proper treatment of the plasma infrared properties for the description of chiral symmetry restoration is implemented. Similarly, we also expect these corrections to be substantially relevant in the QCD phase diagram.

  14. Defining the end-point of mastication: A conceptual model.

    Science.gov (United States)

    Gray-Stuart, Eli M; Jones, Jim R; Bronlund, John E

    2017-10-01

    properties define the end-point texture and enduring sensory perception of the food. © 2017 Wiley Periodicals, Inc.

  15. Radiometric titration of officinal radiopharmaceuticals using radioactive kryptonates as end-point indicators. I. Salicylic, acetylosalicylic, benzoic acids

    Energy Technology Data Exchange (ETDEWEB)

    Toelgyessy, J; Dillinger, P [Slovenska Vysoka Skola Technicka, Bratislava (Czechoslovakia). Chemickotechnologicka Fakulta; Harangozo, M; Jombik, J [Komenskeho Univ., Bratislava (Czechoslovakia). Farmaceuticka Fakulta

    1980-01-01

    A method for the determination of salicylic, acetylsalicylic and benzoic acids in officinal pharmaceuticals, based on radiometric titration with 0.1 mol l⁻¹ NaOH, was developed. The end-point was detected with the aid of a radioactive glass kryptonate. After the end-point, the excess titrant attacks the glass surface layers, releasing ⁸⁵Kr and consequently decreasing the radioactivity of the kryptonate employed. The radioactive kryptonate used as an indicator was prepared by the bombardment of glass with accelerated ⁸⁵Kr ions. The developed method is simple and accurate.

  16. End points for adjuvant therapy trials: has the time come to accept disease-free survival as a surrogate end point for overall survival?

    Science.gov (United States)

    Gill, Sharlene; Sargent, Daniel

    2006-06-01

    The intent of adjuvant therapy is to eradicate micro-metastatic residual disease following curative resection with the goal of preventing or delaying recurrence. The time-honored standard for demonstrating efficacy of new adjuvant therapies is an improvement in overall survival (OS). This typically requires phase III trials of large sample size with lengthy follow-up. With the intent of reducing the cost and time of completing such trials, there is considerable interest in developing alternative or surrogate end points. A surrogate end point may be employed as a substitute to directly assess the effects of an intervention on an already accepted clinical end point such as mortality. When used judiciously, surrogate end points can accelerate the evaluation of new therapies, resulting in the more timely dissemination of effective therapies to patients. The current review provides a perspective on the suitability and validity of disease-free survival (DFS) as an alternative end point for OS. Criteria for establishing surrogacy and the advantages and limitations associated with the use of DFS as a primary end point in adjuvant clinical trials and as the basis for approval of new adjuvant therapies are discussed.

  17. Is automated kinetic measurement superior to end-point for advanced oxidation protein product?

    Science.gov (United States)

    Oguz, Osman; Inal, Berrin Bercik; Emre, Turker; Ozcan, Oguzhan; Altunoglu, Esma; Oguz, Gokce; Topkaya, Cigdem; Guvenen, Guvenc

    2014-01-01

    Advanced oxidation protein product (AOPP) was first described as an oxidative protein marker in chronic uremic patients and measured with a semi-automatic end-point method. Subsequently, a kinetic method was introduced for the AOPP assay. We aimed to compare these two methods by adapting them to a chemistry analyzer and to investigate the correlation between AOPP and fibrinogen (the key molecule responsible for human plasma AOPP reactivity), microalbumin, and HbA1c in patients with type II diabetes mellitus (DM II). The effects of EDTA- and citrate-anticoagulated tubes on these two methods were incorporated into the study. This study included 93 DM II patients (36 women, 57 men) with HbA1c levels ≥ 7%, who were admitted to the diabetes and nephrology clinics. The samples were collected in EDTA- and in citrate-anticoagulated tubes. Both methods were adapted to a chemistry analyzer and the samples were studied in parallel. In both types of samples, we found a moderate correlation between the kinetic and the end-point methods (r = 0.611 for citrate-anticoagulated, r = 0.636 for EDTA-anticoagulated, p = 0.0001 for both). We found a moderate correlation between fibrinogen-AOPP and microalbumin-AOPP levels only with the kinetic method (r = 0.644 and 0.520 for citrate-anticoagulated; r = 0.581 and 0.490 for EDTA-anticoagulated, p = 0.0001). We conclude that adaptation of the end-point method to automation is more difficult and has a higher between-run CV%, while application of the kinetic method is easier and may be used in oxidative stress studies.
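The two quantities this comparison turns on, the Pearson correlation between paired method results and the between-run CV%, are easy to reproduce. The data below are hypothetical, constructed only so that the two methods agree moderately, as in the reported r ≈ 0.6:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired AOPP results (μmol/L) for 93 patients.
endpoint_aopp = rng.normal(60.0, 15.0, 93)
kinetic_aopp = 0.8 * endpoint_aopp + rng.normal(0.0, 12.0, 93)

r = np.corrcoef(endpoint_aopp, kinetic_aopp)[0, 1]  # Pearson correlation

def cv_percent(x):
    """Between-run imprecision, the criterion used to prefer the kinetic assay."""
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

runs = np.array([58.1, 60.4, 59.2, 61.0])  # hypothetical replicate runs
kinetic_cv = cv_percent(runs)
```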

  18. Determination of the Acidity of Oils Using Paraformaldehyde as a Thermometric End-Point Indicator

    Directory of Open Access Journals (Sweden)

    Carneiro Mário J. D.

    2002-01-01

    The determination of the acidity of oils by catalytic thermometric titrimetry using paraformaldehyde as the thermometric end-point indicator was investigated. The sample solvent was a 1:1 (v/v) mixture of toluene and 2-propanol and the titrant was 0.1 mol L⁻¹ aqueous sodium hydroxide. Paraformaldehyde, being insoluble in the sample solvent, does not present the inconvenience of other indicators that change the properties of the solvent due to composition changes. The titration can therefore be done effectively in the same medium as the standard potentiometric and visual titration methods. The results of the application of the method to both non-refined and refined oils are presented herein. The proposed method has advantages over the potentiometric method in terms of speed and simplicity.

  19. Quality control for electron beam processing of polymeric materials by end-point analysis

    International Nuclear Information System (INIS)

    DeGraff, E.; McLaughlin, W.L.

    1981-01-01

    Properties of certain plastics, e.g. polytetrafluoroethylene, polyethylene, and ethylene vinyl acetate copolymer, can be modified selectively by ionizing radiation. One of the advantages of this treatment over chemical methods is better control of the process and the end-product properties. The most convenient method of dosimetry for monitoring quality control is post-irradiation evaluation of the plastic itself, e.g., melt index and melt point determination. It is shown that, with proper calibration in terms of total dose and sufficiently reproducible radiation effects, such product test methods provide convenient and meaningful analyses. Other appropriate standardized analytical methods include stress-crack resistance, stress-strain-to-fracture testing and solubility determination. Standard routine dosimetry over the dose and dose rate ranges of interest confirms that measured product end points can be correlated with calibrated values of absorbed dose in the product within the uncertainty limits of the measurements. (author)

  20. End-Point Contact Force Control with Quantitative Feedback Theory for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Shuhuan Wen

    2012-12-01

    Robot force control is an important issue for intelligent mobile robotics. The end-point stiffness of a robot is a key and open problem in the research community. Control strategies are mostly dependent on both the specifications of the task and the environment of the robot. Due to the limited stiffness of the end-effector, the inherent torque may be used as feedback for the oscillations of the controlled force. This paper proposes an effective control strategy built around a controller designed with quantitative feedback theory. The nested loop controllers take into account the physical limitations of the system's inner variables and harmful interference. The biggest advantage of the method is its simplicity in both the design process and the implementation of the control algorithm in engineering practice. Taking a one-link manipulator as an example, numerical experiments are carried out to verify the proposed control method. The results show satisfactory performance.
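The QFT design itself is not reproduced here, but the end-point force problem it addresses can be illustrated with a far simpler discrete PI force loop on a one-link end-effector pressing against a stiff environment; every gain and parameter below is a hypothetical choice, not taken from the paper:

```python
# Contact force f = k_e * x for environment stiffness k_e; a PI loop on
# the force error commands the end-point velocity.
k_e = 1000.0           # N/m, environment stiffness (assumed)
dt = 0.001             # s, control period
kp, ki = 0.002, 0.05   # PI gains: force error -> commanded velocity

f_ref = 5.0            # desired contact force, N
x = 0.0                # end-point penetration, m
integ = 0.0            # integral of force error
forces = []
for _ in range(5000):  # 5 s of simulated contact
    f = k_e * x
    err = f_ref - f
    integ += err * dt
    x += (kp * err + ki * integ) * dt
    forces.append(f)

final_force = forces[-1]   # settles near f_ref
```

With these gains the closed loop is underdamped but stable; pushing the integral gain higher reproduces exactly the force oscillations that robust, nested-loop designs such as the paper's are meant to suppress.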

  1. Aconitase and Developmental End Points as Early Indicators of Toxicity

    Directory of Open Access Journals (Sweden)

    Oleksandr Vasyliovuch Lozinsky

    2014-03-01

    Background: In this study, the toxicity of different xenobiotics was tested in the fruit fly Drosophila melanogaster model system. Methods: Fly larvae were raised on food supplemented with xenobiotics at different concentrations: sodium nitroprusside (0.1-1.5 mM), S-nitrosoglutathione (0.5-4 mM), and potassium ferrocyanide (1 mM). Emergence of flies, food intake by larvae, and pupation height preference, as well as aconitase activity (in 2-day-old flies), were measured. Results: Food supplementation with xenobiotics caused a developmental delay in the flies and decreased pupation height. Biochemical analyses of oxidative stress markers and of the activities of antioxidants and their associated enzymes were carried out on 2-day-old flies emerged from control larvae and from larvae fed on food supplemented with the chemicals. Larval exposure to the chemicals resulted in lower aconitase activities in flies of both sexes and perturbations in the activities of antioxidant enzymes. Conclusions: The results of this study showed that among the variety of parameters tested, aconitase activity, developmental end points, and pupation height may be used as reliable early indicators of toxicity caused by different chemicals.

  2. Estimated GFR Decline as a Surrogate End Point for Kidney Failure

    DEFF Research Database (Denmark)

    Lambers Heerspink, Hiddo J; Weldegiorgis, Misghina; Inker, Lesley A

    2014-01-01

    A doubling of serum creatinine value, corresponding to a 57% decline in estimated glomerular filtration rate (eGFR), is used frequently as a component of a composite kidney end point in clinical trials in type 2 diabetes. The aim of this study was to determine whether alternative end points defined by smaller declines in eGFR are suitable for use in such trials.

  3. European network on the determination of site end points for radiologically contaminated land

    International Nuclear Information System (INIS)

    Booth, Peter; Lennon, Chris

    2007-01-01

    Nexia Solutions are currently running a small European network entitled 'European Network on the Determination of Site End Points for Radiologically Contaminated Land (ENDSEP)'. Other network members include NRG (Netherlands), UKAEA (UK), CEA (France), SOGIN (Italy), Wismut (Germany), and the Saxon State Agency of Environment and Geology (Germany). The network is focused on the technical and socio-economic issues associated with the determination of end points for sites potentially, or actually, impacted by radiological contamination. Such issues cover: those associated with the run-up to establishing a site end point; those associated with verifying that the end points have been met; and those associated with post-closure. The network's current high-level objectives can be summarized as follows: share experience and best practice on the key issues in the run-up to determining site end points; gain a better understanding of the potential effects of recent and forthcoming EU legislation; assess consistency between approaches; highlight potential gaps within the remit of site end point determination and management; and consider the formulation of research projects with a view to sharing time and expense. The programme of work revolves around the following key tasks: share information, experience and existing good practice; look to determine sustainable approaches to contaminated land site end point management; through site visits, gain first-hand experience of determining an appropriate end point strategy and of identifying and resolving end point issues; highlight the key data gaps and consider the development of programmes either to close out these gaps or to build confidence in the approaches taken; and produce position papers on each technical area, highlighting how different countries approach and resolve a specific problem. (authors)

  4. Measurement of β-decay end point energy with planar HPGe detector

    Science.gov (United States)

    Bhattacharjee, T.; Pandit, Deepak; Das, S. K.; Chowdhury, A.; Das, P.; Banerjee, D.; Saha, A.; Mukhopadhyay, S.; Pal, S.; Banerjee, S. R.

    2014-12-01

    The β-γ coincidence measurement has been performed with a segmented planar Hyper-Pure Germanium (HPGe) detector and a single coaxial HPGe detector to determine the end point energies of nuclear β-decays. The experimental end point energies have been determined for some of the known β-decays in 106Rh → 106Pd. The end point energies corresponding to three weak branches in the 106Rh → 106Pd decay have been measured for the first time. The γ-ray and β-particle responses of the planar HPGe detector were simulated using the Monte Carlo based code GEANT3. The experimentally obtained β spectra were successfully reproduced by the simulation.
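End-point energies of the kind reported above are conventionally extracted with a Kurie (Fermi-Kurie) plot: for an allowed decay, sqrt(N(E)/(p·E·F(Z,E))) is linear in kinetic energy and extrapolates to zero at the end point. The sketch below uses a synthetic allowed-shape spectrum with the Fermi function set to 1 and an assumed end point of 3.54 MeV (chosen to echo the dominant 106Rh branch); it is an illustration, not the paper's analysis:

```python
import numpy as np

m_e = 0.511            # electron rest energy, MeV
E0 = 3.54              # assumed end-point kinetic energy, MeV (illustrative)

# Allowed-shape beta spectrum, Fermi function F(Z,E) taken as 1:
T = np.linspace(0.2, 3.3, 200)      # kinetic energies, MeV
E = T + m_e                         # total energy
p = np.sqrt(E**2 - m_e**2)          # momentum (c = 1 units)
N = p * E * (E0 - T)**2             # ideal (noise-free) counts

# The Kurie variable is linear in T and vanishes at T = E0:
kurie = np.sqrt(N / (p * E))

slope, intercept = np.polyfit(T, kurie, 1)
endpoint_est = -intercept / slope   # x-intercept recovers the end point
```

With real coincidence-gated spectra the same linear extrapolation is applied after response unfolding, which is where the GEANT3 simulation mentioned above comes in.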

  5. Modified titrimetric determination of plutonium using photometric end-point detection

    International Nuclear Information System (INIS)

    Baughman, W.J.; Dahlby, J.W.

    1980-04-01

    A method used at LASL for the accurate and precise assay of plutonium metal was modified for the measurement of plutonium in plutonium oxides, nitrate solutions, and other samples containing large quantities of plutonium in oxidation states higher than +3. In this modified method, the plutonium oxide or other sample is dissolved using the sealed-reflux dissolution method or other appropriate methods. Weighed aliquots of the dissolved sample or plutonium nitrate solution, containing approximately 100 mg of plutonium, are fumed to dryness with an HClO₄-H₂SO₄ mixture. The dried residue is dissolved in dilute H₂SO₄, and the plutonium is reduced to plutonium(III) with zinc metal. The excess zinc metal is dissolved with HCl, and the solution is passed through a lead reductor column to ensure complete reduction of the plutonium to plutonium(III). The solution, with added ferroin indicator, is then titrated immediately with standardized ceric solution to a photometric end point. For the analysis of plutonium metal solutions, plutonium oxides, and nitrate solutions, the relative standard deviations are 0.06, 0.08, and 0.14%, respectively. Of the elements most likely to be found with the plutonium, only iron, neptunium, and uranium interfere. Small amounts of uranium and iron, which titrate quantitatively in the method, are determined by separate analytical methods, and suitable corrections are applied to the plutonium value. 4 tables, 4 figures

  6. Information resources and the correlation of response patterns between biological end points

    Energy Technology Data Exchange (ETDEWEB)

    Malling, H.V. [National Institute of Environmental Health Sciences, Research Triangle Park, NC (United States); Wassom, J.S. [Oak Ridge National Laboratory, TN (United States)

    1990-12-31

    This paper focuses on the analysis of information for mutagenesis, a biological end point that is important in the overall process of assessing possible adverse health effects from chemical exposure. 17 refs.

  8. The effect of adherence to statin therapy on cardiovascular mortality: quantification of unmeasured bias using falsification end-points

    Directory of Open Access Journals (Sweden)

    Maarten J. Bijlsma

    2016-04-01

    Background: To determine the clinical effectiveness of statins on cardiovascular mortality in practice, observational studies are needed. Control for confounding is essential in any observational study. Falsification end-points may be useful to determine whether bias is present after adjustment has taken place. Methods: We followed starters on statin therapy in the Netherlands aged 46 to 100 years over the period 1996 to 2012, from initiation of statin therapy until cardiovascular mortality or censoring. Within this group (n = 49,688, up to 16 years of follow-up), we estimated the effect of adherence to statin therapy (0 = completely non-adherent, 1 = fully adherent) on mortality from ischemic heart diseases and cerebrovascular disease (ICD10 codes I20-I25 and I60-I69), with respiratory and endocrine disease mortality (ICD10 codes J00-J99 and E00-E90) as falsification end-points, controlling for demographic factors, socio-economic factors, birth cohort, adherence to other cardiovascular medications, and diabetes using time-varying Cox regression models. Results: Falsification end-points indicated that a simpler model was less biased than a model with more controls. Adherence to statins appeared to be protective against cardiovascular mortality (HR: 0.70, 95% CI 0.61 to 0.81). Conclusions: Falsification end-points helped detect overadjustment bias or bias due to competing risks, and thereby proved to be a useful technique in such a complex setting.

  9. Changes in the Skin Conductance Monitor as an End Point for Sympathetic Nerve Blocks.

    Science.gov (United States)

    Gungor, Semih; Rana, Bhumika; Fields, Kara; Bae, James J; Mount, Lauren; Buschiazzo, Valeria; Storm, Hanne

    2017-11-01

    There is a lack of objective methods for determining the achievement of sympathetic block. This study validates the skin conductance monitor (SCM) as an end point indicator of successful sympathetic blockade as compared with traditional monitors. This interventional study included 13 patients undergoing 25 lumbar sympathetic blocks, comparing time to indication of successful blockade between the SCM indices and traditional measures (clinically visible hyperemia, clinically visible engorgement of veins, subjective skin temperature difference, unilateral thermometry monitoring, bilateral comparative thermometry monitoring, and change in waveform amplitude in pulse oximetry plethysmography) within a 30-minute observation period. Differences in the SCM indices were studied pre- and postblock to validate the SCM. SCM showed substantially greater odds of indicating achievement of sympathetic block in the next moment (i.e., hazard rate) compared with all traditional measures (P ≤ 0.011). SCM indicated successful block for all (100%) procedures, while the traditional measures failed to indicate successful blocks in 16-84% of procedures. The SCM indices were significantly higher in preblock compared with postblock measurements. SCM is a more reliable and rapid-response indicator of a successful sympathetic blockade when compared with traditional monitors. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  10. End-point detection in potentiometric titration by continuous wavelet transform.

    Science.gov (United States)

    Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W

    2009-10-15

    The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for precise detection of the end-point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or the type of analyte and/or the shape of the titration curve. Signal imperfection, as well as random noise or spikes, has no influence on the operation of the procedure. The optimization of the new algorithm was done using simulated curves, and then experimental data were considered. In the case of well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms. But in the case of noisy or badly shaped curves, the presented approach works well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. Therefore, the proposed algorithm may be useful in the interpretation of experimental data and also in the automation of typical titration analysis, especially when random noise interferes with the analytical signal.
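The underlying idea, convolving the titration curve with a derivative-shaped mother wavelet and taking the extremum of the coefficients as the end point, can be sketched with plain NumPy. A generic Gaussian-derivative wavelet stands in for the paper's dedicated mother wavelet, and the curve and noise level are simulated:

```python
import numpy as np

# Simulated potentiometric titration curve: sigmoidal pH jump at v_eq.
v = np.linspace(0.0, 20.0, 401)                    # titrant volume, mL
v_eq = 12.5                                        # true equivalence volume
ph = 4.0 + 6.0 / (1.0 + np.exp(-(v - v_eq) / 0.15))
ph = ph + np.random.default_rng(2).normal(0.0, 0.02, v.size)  # noise

def dog_wavelet(scale, width=201):
    """First derivative of a Gaussian, used here as the mother wavelet."""
    t = np.arange(width) - width // 2
    return -t * np.exp(-(t / scale) ** 2 / 2.0) / scale**2

# One row of a continuous wavelet transform at a fixed scale; the signal
# is demeaned first to suppress spurious responses at the record edges.
coeffs = np.convolve(ph - ph.mean(), dog_wavelet(8.0), mode="same")
endpoint = v[np.argmax(np.abs(coeffs))]   # extremum marks the inflection
```

Because the wavelet acts as a smoothed differentiator, the coefficient extremum stays on the inflection point even when the raw curve is noisy, which is the robustness the paper reports for badly shaped data.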

  11. The use of a radioactive tracer for the determination of distillation end point in a coke oven

    International Nuclear Information System (INIS)

    Burgio, N.; Capannesi, G.; Ciavola, C.; Sedda, F.

    1995-01-01

    A novel high-precision detection method for the determination of the distillation end point of the coking process (usually in the 950 deg C range) has been developed. The system is based on the use of a metallic capsule that melts at a fixed temperature and releases a radioactive gas tracer (133Xe) into the stream of the distillation gas. A series of tests on a pilot oven confirmed the feasibility of the method on an industrial scale. Application of the radioactive tracer method to the staging and monitoring of the coking process appears to be possible. (author). 6 refs., 5 figs., 3 tabs

  12. Measurement of β-decay end point energy with planar HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharjee, T., E-mail: btumpa@vecc.gov.in [Physics Group, Variable Energy Cyclotron Centre, Kolkata 700 064 (India); Pandit, Deepak [Physics Group, Variable Energy Cyclotron Centre, Kolkata 700 064 (India); Das, S.K. [RCD-BARC, Variable Energy Cyclotron Centre, Kolkata 700 064 (India); Chowdhury, A.; Das, P. [Physics Group, Variable Energy Cyclotron Centre, Kolkata 700 064 (India); Banerjee, D. [RCD-BARC, Variable Energy Cyclotron Centre, Kolkata 700 064 (India); Saha, A.; Mukhopadhyay, S.; Pal, S.; Banerjee, S.R. [Physics Group, Variable Energy Cyclotron Centre, Kolkata 700 064 (India)

    2014-12-11

    The β-γ coincidence measurement has been performed with a segmented planar Hyper-Pure Germanium (HPGe) detector and a single coaxial HPGe detector to determine the end-point energies of nuclear β decays. The experimental end-point energies have been determined for some of the known β decays in 106Rh → 106Pd. The end-point energies corresponding to three weak branches in the 106Rh → 106Pd decay have been measured for the first time. The γ-ray and β-particle responses of the planar HPGe detector were simulated using the Monte Carlo based code GEANT3. The experimentally obtained β spectra were successfully reproduced by the simulation.
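    The GEANT3-based analysis itself cannot be reproduced here, but the classical way to extract a β-decay end-point energy, a Fermi-Kurie plot extrapolated linearly to zero, can be sketched on a synthetic allowed spectrum. The Fermi function is neglected and the end-point value is only set roughly at the scale of the 106Rh ground-state decay; everything is illustrative.

```python
import numpy as np

me = 511.0    # electron rest mass, keV
E0 = 3541.0   # assumed true end-point kinetic energy, keV (illustrative)

E = np.linspace(50, E0 - 50, 300)            # sampled kinetic energies
p = np.sqrt((E + me) ** 2 - me ** 2)         # electron momentum, keV/c
N = p * (E + me) * (E0 - E) ** 2             # allowed-shape intensity
rng = np.random.default_rng(1)
N_obs = N * rng.normal(1.0, 0.01, E.size)    # 1% multiplicative noise

# Kurie variable K = sqrt(N / (p * W)) is linear in E and vanishes at E0,
# so a straight-line fit extrapolated to K = 0 recovers the end point.
K = np.sqrt(N_obs / (p * (E + me)))
slope, intercept = np.polyfit(E, K, 1)
E0_fit = -intercept / slope
print(round(E0_fit, 1))
```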

  13. Soft modes at the critical end point in the chiral effective models

    International Nuclear Information System (INIS)

    Fujii, Hirotsugu; Ohtani, Munehisa

    2004-01-01

    At the critical end point in QCD phase diagram, the scalar, vector and entropy susceptibilities are known to diverge. The dynamic origin of this divergence is identified within the chiral effective models as softening of a hydrodynamic mode of the particle-hole-type motion, which is a consequence of the conservation law of the baryon number and the energy. (author)

  14. Is Chronic Dialysis the Right Hard Renal End Point To Evaluate Renoprotective Drug Effects?

    NARCIS (Netherlands)

    Weldegiorgis, Misghina; de Zeeuw, Dick; Dwyer, Jamie P.; Mol, Peter; Heerspink, Hiddo J. L.

    2017-01-01

    Background and objectives: RRT and doubling of serum creatinine are considered the objective hard end points in nephrology intervention trials. Because both are assumed to reflect changes in the filtration capacity of the kidney, drug effects, if present, are attributed to kidney protection.

  15. End-point construction and systematic titration error in linear titration curves-complexation reactions

    NARCIS (Netherlands)

    Coenegracht, P.M.J.; Duisenberg, A.J.M.

    The systematic titration error which is introduced by the intersection of tangents to hyperbolic titration curves is discussed. The effects of the apparent (conditional) formation constant, of the concentration of the unknown component and of the ranges used for the end-point construction are

  16. New supervised learning theory applied to cerebellar modeling for suppression of variability of saccade end points.

    Science.gov (United States)

    Fujita, Masahiko

    2013-06-01

    A new supervised learning theory is proposed for a hierarchical neural network with a single hidden layer of threshold units, which can approximate any continuous transformation, and applied to a cerebellar function to suppress the end-point variability of saccades. In motor systems, feedback control can reduce noise effects if the noise is added in a pathway from a motor center to a peripheral effector; however, it cannot reduce noise effects if the noise is generated in the motor center itself: a new control scheme is necessary for such noise. The cerebellar cortex is well known as a supervised learning system, and a novel theory of cerebellar cortical function developed in this study can explain the capability of the cerebellum to feedforwardly reduce noise effects, such as end-point variability of saccades. This theory assumes that a Golgi-granule cell system can encode the strength of a mossy fiber input as the state of neuronal activity of parallel fibers. By combining these parallel fiber signals with appropriate connection weights to produce a Purkinje cell output, an arbitrary continuous input-output relationship can be obtained. By incorporating such flexible computation and learning ability in a process of saccadic gain adaptation, a new control scheme in which the cerebellar cortex feedforwardly suppresses the end-point variability when it detects a variation in saccadic commands can be devised. Computer simulation confirmed the efficiency of such learning and showed a reduction in the variability of saccadic end points, similar to results obtained from experimental data.
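    The computational claim, that a single hidden layer of threshold units with a supervised linear readout can approximate an arbitrary continuous input-output relationship, can be illustrated with a toy random-features sketch. This is not Fujita's cerebellar learning rule; the unit count, threshold distribution, and target function are all arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 400)[:, None]      # inputs (e.g. mossy fiber drive)
target = np.sin(3 * x[:, 0])              # continuous map to approximate

# Hidden layer: fixed threshold units step(sign_j * (x - theta_j)),
# loosely analogous to the granule-cell activity states in the theory.
n_hidden = 500
theta = rng.uniform(-1, 1, n_hidden)
sign = rng.choice([-1.0, 1.0], n_hidden)
H = (sign * (x - theta) > 0).astype(float)   # (400, n_hidden) activations

# Supervised readout: least-squares weights from hidden units to output,
# standing in for learned parallel-fiber-to-Purkinje-cell weights.
beta, *_ = np.linalg.lstsq(H, target, rcond=None)
err = np.max(np.abs(H @ beta - target))
print(round(err, 3))
```

    With a few hundred threshold units the fit is piecewise constant with many breakpoints, so the worst-case error is already small, which is the flexibility the theory relies on.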

  17. Free Energy, Enthalpy and Entropy from Implicit Solvent End-Point Simulations.

    Science.gov (United States)

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2018-01-01

    Free energy is the key quantity to describe the thermodynamics of biological systems. In this perspective we consider the calculation of free energy, enthalpy and entropy from end-point molecular dynamics simulations. Since the enthalpy may be calculated as the ensemble average over equilibrated simulation snapshots the difficulties related to free energy calculation are ultimately related to the calculation of the entropy of the system and in particular of the solvent entropy. In the last two decades implicit solvent models have been used to circumvent the problem and to take into account solvent entropy implicitly in the solvation terms. More recently outstanding advancement in both implicit solvent models and in entropy calculations are making the goal of free energy estimation from end-point simulations more feasible than ever before. We review briefly the basic theory and discuss the advancements in light of practical applications.
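    The bookkeeping the abstract describes, G ≈ ⟨E_MM + G_solv⟩ − T·S, can be sketched with made-up per-snapshot energies. No real force field or trajectory is involved, and the entropy value is an assumed placeholder standing in for what the abstract identifies as the hard part of the calculation.

```python
import numpy as np

T = 300.0  # temperature, K

rng = np.random.default_rng(3)
# Hypothetical per-snapshot energies (kcal/mol) from an equilibrated
# implicit-solvent trajectory: molecular mechanics + solvation terms.
E_mm = rng.normal(-120.0, 4.0, 500)
G_solv = rng.normal(-35.0, 2.0, 500)

# Enthalpy-like term: a simple ensemble average over snapshots.
H = np.mean(E_mm + G_solv)

# Entropy term: a placeholder solute configurational entropy; solvent
# entropy is already folded into G_solv by the implicit-solvent model.
S_conf = 0.15  # kcal/(mol K), assumed

G = H - T * S_conf
print(round(H, 1), round(G, 1))
```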

  18. Free Energy, Enthalpy and Entropy from Implicit Solvent End-Point Simulations

    Directory of Open Access Journals (Sweden)

    Federico Fogolari

    2018-02-01

    Free energy is the key quantity to describe the thermodynamics of biological systems. In this perspective we consider the calculation of free energy, enthalpy and entropy from end-point molecular dynamics simulations. Since the enthalpy may be calculated as the ensemble average over equilibrated simulation snapshots the difficulties related to free energy calculation are ultimately related to the calculation of the entropy of the system and in particular of the solvent entropy. In the last two decades implicit solvent models have been used to circumvent the problem and to take into account solvent entropy implicitly in the solvation terms. More recently outstanding advancement in both implicit solvent models and in entropy calculations are making the goal of free energy estimation from end-point simulations more feasible than ever before. We review briefly the basic theory and discuss the advancements in light of practical applications.

  19. End point detection in ion milling processes by sputter-induced optical emission spectroscopy

    International Nuclear Information System (INIS)

    Lu, C.; Dorian, M.; Tabei, M.; Elsea, A.

    1984-01-01

    The characteristic optical emission from the sputtered material during ion milling processes can provide an unambiguous indication of the presence of the specific etched species. By monitoring the intensity of a representative emission line, the etching process can be precisely terminated at an interface. Enhancement of the etching end-point signal is possible by using a dual-channel photodetection system operating in ratio or difference mode. Installation of the optical detection system on an existing etching chamber is greatly facilitated by the use of optical fibers. Using a commercial ion milling system, experimental data for a number of etching processes have been obtained. The results demonstrate that sputter-induced optical emission spectroscopy offers many advantages over other techniques in detecting the etching end point of ion milling processes.
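    The ratio-mode logic can be sketched as follows: track the ratio of a film emission line to a substrate line and stop the etch when the ratio crosses a threshold, which happens as the film clears and the substrate is exposed. Signal shapes, the threshold, and the time scale are all invented for the illustration.

```python
import numpy as np

def end_point_index(film_line, substrate_line, threshold=0.5):
    """Return the first sample where the film/substrate emission ratio
    drops below threshold, signalling the etch reached the interface."""
    ratio = film_line / substrate_line
    below = np.flatnonzero(ratio < threshold)
    return int(below[0]) if below.size else None

t = np.arange(300)                        # arbitrary time samples
rng = np.random.default_rng(4)
# Film emission decays as the layer clears near t = 200; the substrate
# line rises correspondingly (noise on the film channel only).
film = 100.0 / (1 + np.exp((t - 200) / 5)) + rng.normal(0, 1.0, t.size) + 5
substrate = 100.0 - 100.0 / (1 + np.exp((t - 200) / 5)) + 5

print(end_point_index(film, substrate))
```

    Taking a ratio (or difference) of the two channels suppresses common-mode fluctuations such as plasma intensity drift, which is why the dual-channel mode sharpens the end point.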

  20. End Point of the Ultraspinning Instability and Violation of Cosmic Censorship

    Science.gov (United States)

    Figueras, Pau; Kunesch, Markus; Lehner, Luis; Tunyasuvunakool, Saran

    2017-04-01

    We determine the end point of the axisymmetric ultraspinning instability of asymptotically flat Myers-Perry black holes in D =6 spacetime dimensions. In the nonlinear regime, this instability gives rise to a sequence of concentric rings connected by segments of black membrane on the rotation plane. The latter become thinner over time, resulting in the formation of a naked singularity in finite asymptotic time and hence a violation of the weak cosmic censorship conjecture in asymptotically flat higher-dimensional spaces.

  1. End-point impedance measurements across dominant and nondominant hands and robotic assistance with directional damping.

    Science.gov (United States)

    Erden, Mustafa Suphi; Billard, Aude

    2015-06-01

    The goal of this paper is to perform end-point impedance measurements across dominant and nondominant hands while doing airbrush painting and to use the results for developing a robotic assistance scheme. We study airbrush painting because it resembles in many ways manual welding, a standard industrial task. The experiments are performed with the 7-degree-of-freedom KUKA lightweight robot arm. The robot is controlled in admittance using a force sensor attached at the end-point, so as to act as a free mass and be passively guided by the human. For impedance measurements, a set of nine subjects perform 12 repetitions of airbrush painting, drawing a straight line on a cartoon horizontally placed on a table, while passively moving the airbrush mounted on the robot's end-point. We measure hand impedance during the painting task by generating sudden and brief external forces with the robot. The results show that on average the dominant hand displays larger impedance than the nondominant one in the directions perpendicular to the painting line. We find the most significant difference in the damping values in these directions. Based on this observation, we develop a "directional damping" scheme for robotic assistance and conduct a pilot study with 12 subjects to contrast airbrush painting with and without robotic assistance. Results show significant improvement in precision with both dominant and nondominant hands when using robotic assistance.
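    The "directional damping" idea can be sketched with a minimal planar admittance model: the damping matrix is anisotropic, light along the painting line and heavy across it, so an equal hand force produces much more motion along the line than off it. The masses, damping values, and force are illustrative, not the paper's identified impedances.

```python
import numpy as np

def simulate(D, F, m=5.0, dt=0.01, steps=200):
    """Integrate a planar admittance model m*a = F - D @ v (explicit Euler)."""
    x = np.zeros(2)
    v = np.zeros(2)
    for _ in range(steps):
        a = (F - D @ v) / m
        v = v + a * dt
        x = x + v * dt
    return x

# Painting line along x: light damping along the line, heavy damping
# across it (values assumed for the sketch).
u = np.array([1.0, 0.0])                    # line direction
P = np.outer(u, u)                          # projector onto the line
D = 5.0 * P + 60.0 * (np.eye(2) - P)        # anisotropic damping, N s/m

push = np.array([3.0, 3.0])                 # equal force along and across
x_final = simulate(D, push)
print(np.round(x_final, 3))
```

    The assistance thus filters out hand tremor and drift perpendicular to the intended stroke while leaving motion along the stroke nearly free.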

  2. Standardized End Point Definitions for Coronary Intervention Trials: The Academic Research Consortium-2 Consensus Document.

    Science.gov (United States)

    Garcia-Garcia, Hector M; McFadden, Eugène P; Farb, Andrew; Mehran, Roxana; Stone, Gregg W; Spertus, John; Onuma, Yoshinobu; Morel, Marie-Angèle; van Es, Gerrit-Anne; Zuckerman, Bram; Fearon, William F; Taggart, David; Kappetein, Arie-Pieter; Krucoff, Mitchell W; Vranckx, Pascal; Windecker, Stephan; Cutlip, Donald; Serruys, Patrick W

    2018-06-14

    The Academic Research Consortium (ARC)-2 initiative revisited the clinical and angiographic end point definitions in coronary device trials, proposed in 2007, to make them more suitable for use in clinical trials that include increasingly complex lesion and patient populations and incorporate novel devices such as bioresorbable vascular scaffolds. In addition, recommendations for the incorporation of patient-related outcomes in clinical trials are proposed. Academic Research Consortium-2 is a collaborative effort between academic research organizations in the United States and Europe, device manufacturers, and European, US, and Asian regulatory bodies. Several in-person meetings were held to discuss the changes that have occurred in the device landscape and in clinical trials and regulatory pathways in the last decade. The consensus-based end point definitions in this document are endorsed by the stakeholders of this document and strongly advocated for clinical trial purposes. This Academic Research Consortium-2 document provides further standardization of end point definitions for coronary device trials, incorporating advances in technology and knowledge. Their use will aid interpretation of trial outcomes and comparison among studies, thus facilitating the evaluation of the safety and effectiveness of these devices.

  3. Medication overuse headache: a critical review of end points in recent follow-up studies

    DEFF Research Database (Denmark)

    Hagen, Knut; Jensen, Rigmor; Bøe, Magne Geir

    2010-01-01

    No guidelines for performing and presenting the results of studies on patients with medication overuse headache (MOH) exist. The aim of this study was to review long-term outcome measures in follow-up studies published in 2006 or later. We included MOH studies with >6 months duration presenting a minimum of one predefined end point. In total, nine studies were identified. The 1,589 MOH patients (22% men) had an overall mean frequency of 25.3 headache days/month at baseline. Headache days/month at the end of follow-up was reported in six studies (mean 13.8 days/month). The decrease was more ... in headache index at the end of follow-up were reported in only one and two of nine studies, respectively. The present review demonstrated a lack of uniform end points used in recently published follow-up studies. Guidelines for presenting follow-up data on MOH are needed and we propose end points ...

  4. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    Science.gov (United States)

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level, are proactive of body movements, and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (or in amputees), including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy, among others. Despite this benefit, eye tracking is not widely used as a control interface for robots in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. The users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system enables the end user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
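    The payoff of collecting thousands of calibration points rather than a dozen can be illustrated with a generic least-squares calibration: fit an affine map from gaze features to 3D positions over a dense sample set, then predict a held-out gaze direction. The feature dimensionality, the affine model, and all numbers are assumptions for the sketch, not the GT3D system's actual calibration model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical ground-truth affine map from a 4-D binocular gaze feature
# vector (e.g. left/right eye angles) to a 3-D end-point position.
A_true = rng.normal(0, 1, (3, 4))
b_true = np.array([0.1, -0.2, 0.5])

# Dense calibration set: thousands of samples collected while the eyes
# track the robot (random poses stand in for the space-filling curve).
g = rng.uniform(-1, 1, (5000, 4))                           # gaze features
p = g @ A_true.T + b_true + rng.normal(0, 0.01, (5000, 3))  # noisy targets

# Fit the affine map by least squares on augmented features [g, 1].
G = np.hstack([g, np.ones((5000, 1))])
W, *_ = np.linalg.lstsq(G, p, rcond=None)

# Predict a held-out gaze sample and compare with the true map.
g_new = np.array([0.2, -0.4, 0.1, 0.3])
pred = np.append(g_new, 1.0) @ W
err = np.linalg.norm(pred - (A_true @ g_new + b_true))
print(round(err, 4))
```

    With 5,000 noisy samples the fitted map's prediction error is far below the per-sample noise, which is the averaging benefit of the dense automated procedure.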

  5. Guidelines for time-to-event end-point definitions in trials for pancreatic cancer. Results of the DATECAN initiative (Definition for the Assessment of Time-to-event End-points in CANcer trials)

    NARCIS (Netherlands)

    Bonnetain, Franck; Bonsing, Bert; Conroy, Thierry; Dousseau, Adelaide; Glimelius, Bengt; Haustermans, Karin; Lacaine, François; van Laethem, Jean Luc; Aparicio, Thomas; Aust, Daniela; Bassi, Claudio; Berger, Virginie; Chamorey, Emmanuel; Chibaudel, Benoist; Dahan, Laeticia; de Gramont, Aimery; Delpero, Jean Robert; Dervenis, Christos; Ducreux, Michel; Gal, Jocelyn; Gerber, Erich; Ghaneh, Paula; Hammel, Pascal; Hendlisz, Alain; Jooste, Valérie; Labianca, Roberto; Latouche, Aurelien; Lutz, Manfred; Macarulla, Teresa; Malka, David; Mauer, Muriel; Mitry, Emmanuel; Neoptolemos, John; Pessaux, Patrick; Sauvanet, Alain; Tabernero, Josep; Taieb, Julien; van Tienhoven, Geertjan; Gourgou-Bourgade, Sophie; Bellera, Carine; Mathoulin-Pélissier, Simone; Collette, Laurence

    2014-01-01

    Using potential surrogate end-points for overall survival (OS) such as Disease-Free- (DFS) or Progression-Free Survival (PFS) is increasingly common in randomised controlled trials (RCTs). However, end-points are too often imprecisely defined which largely contributes to a lack of homogeneity across

  6. Criteria for use of composite end points for competing risks-a systematic survey of the literature with recommendations.

    Science.gov (United States)

    Manja, Veena; AlBashir, Siwar; Guyatt, Gordon

    2017-02-01

    Composite end points are frequently used in reports of clinical trials. One rationale for the use of composite end points is to account for competing risks. In the presence of competing risks, the event rate of a specific event depends on the rates of other competing events. One proposed solution is to include all important competing events in one composite end point. Clinical trialists require guidance regarding when this approach is appropriate. To identify publications describing criteria for use of composite end points for competing risk and to offer guidance regarding when a composite end point is appropriate on the basis of competing risks. We searched MEDLINE, CINAHL, EMBASE, The Cochrane's Central & Systematic Review databases including the Health Technology Assessment database, and the Cochrane's Methodology register from inception to April 2015, and candidate textbooks, to identify all articles providing guidance on this issue. Eligible publications explicitly addressed the issue of a composite outcome to address competing risks. Two reviewers independently screened the titles and abstracts for full-text review; independently reviewed full-text publications; and abstracted specific criteria authors offered for use of composite end points to address competing risks. Of 63,645 titles and abstracts, 166 proved potentially relevant of which 43 publications were included in the final review. Most publications note competing risks as a reason for using composite end points without further elaboration. None of the articles or textbook chapters provide specific criteria for use of composite end points for competing risk. Some advocate using composite end points to avoid bias due to competing risks and others suggest that composite end points seldom or never be used for this purpose. 
We recommend using composite end points for competing risks only if the competing risk is plausible and if it occurs with sufficiently high frequency to influence the interpretation
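    The statistical point at issue, that in the presence of competing risks the naive 1 − Kaplan-Meier estimate overstates the cumulative incidence of a single event type, can be shown numerically. This sketch (simplified to fully observed, uncensored data) contrasts an Aalen-Johansen-style cumulative incidence estimator with the naive approach that treats competing events as censoring.

```python
import numpy as np

def cumulative_incidence(times, causes, cause):
    """Aalen-Johansen estimate of the cumulative incidence of one cause
    in the presence of competing events (no censoring, for simplicity)."""
    causes = causes[np.argsort(times)]
    surv, cif, at_risk = 1.0, 0.0, len(causes)
    for c in causes:
        if c == cause:
            cif += surv / at_risk       # S(t-) * hazard increment
        surv *= 1.0 - 1.0 / at_risk     # overall event-free survival
        at_risk -= 1
    return cif

def naive_one_minus_km(times, causes, cause):
    """1 - Kaplan-Meier treating competing events as censoring."""
    causes = causes[np.argsort(times)]
    surv, at_risk = 1.0, len(causes)
    for c in causes:
        if c == cause:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return 1.0 - surv

rng = np.random.default_rng(6)
n = 2000
t1 = rng.exponential(1.0, n)            # latent time to event of interest
t2 = rng.exponential(1.0, n)            # latent time to competing event
times = np.minimum(t1, t2)
causes = np.where(t1 <= t2, 1, 2)       # observed: whichever came first

cif = cumulative_incidence(times, causes, 1)
km = naive_one_minus_km(times, causes, 1)
print(round(cif, 2), round(km, 2))
```

    With symmetric competing hazards the true lifetime incidence of cause 1 is 0.5, which the competing-risks estimator recovers, while the naive 1 − KM inflates it substantially; this inflation is one motivation offered for composite end points.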

  7. Guidelines for the definition of time-to-event end points in renal cell cancer clinical trials: results of the DATECAN project†.

    Science.gov (United States)

    Kramar, A; Negrier, S; Sylvester, R; Joniau, S; Mulders, P; Powles, T; Bex, A; Bonnetain, F; Bossi, A; Bracarda, S; Bukowski, R; Catto, J; Choueiri, T K; Crabb, S; Eisen, T; El Demery, M; Fitzpatrick, J; Flamand, V; Goebell, P J; Gravis, G; Houédé, N; Jacqmin, D; Kaplan, R; Malavaud, B; Massard, C; Melichar, B; Mourey, L; Nathan, P; Pasquier, D; Porta, C; Pouessel, D; Quinn, D; Ravaud, A; Rolland, F; Schmidinger, M; Tombal, B; Tosi, D; Vauleon, E; Volpe, A; Wolter, P; Escudier, B; Filleron, T

    2015-12-01

    In clinical trials, the use of intermediate time-to-event end points (TEEs) is increasingly common, yet their choice and definitions are not standardized. This limits the usefulness for comparing treatment effects between studies. The aim of the DATECAN Kidney project is to clarify and recommend definitions of TEE in renal cell cancer (RCC) through a formal consensus method for end point definitions. A formal modified Delphi method was used for establishing consensus. From a 2006-2009 literature review, the Steering Committee (SC) selected 9 TEE and 15 events in the nonmetastatic (NM) and metastatic/advanced (MA) RCC disease settings. Events were scored on the range of 1 (totally disagree to include) to 9 (totally agree to include) in the definition of each end point. Rating Committee (RC) experts were contacted for the scoring rounds. From these results, final recommendations were established for selecting pertinent end points and the associated events. Thirty-four experts scored 121 events for 9 end points. Consensus was reached for 31%, 43% and 85% of events during the first, second and third rounds, respectively. The experts recommend the use of three and two end points in the NM and MA settings, respectively. In the NM setting: disease-free survival (contralateral RCC, appearance of metastases, local or regional recurrence, death from RCC or protocol treatment), metastasis-free survival (appearance of metastases, regional recurrence, death from RCC); and local-regional-free survival (local or regional recurrence, death from RCC). In the MA setting: kidney cancer-specific survival (death from RCC or protocol treatment) and progression-free survival (death from RCC, local, regional, or metastatic progression). The consensus method revealed that intermediate end points have not been well defined, because all of the selected end points had at least one event definition for which no consensus was obtained.
These clarified definitions of TEE should become standard practice in

  8. Quantum Triple Point and Quantum Critical End Points in Metallic Magnets.

    Science.gov (United States)

    Belitz, D; Kirkpatrick, T R

    2017-12-29

    In low-temperature metallic magnets, ferromagnetic (FM) and antiferromagnetic (AFM) orders can exist, adjacent to one another or concurrently, in the phase diagram of a single system. We show that universal quantum effects qualitatively alter the known phase diagrams for classical magnets. They shrink the region of concurrent FM and AFM order, change various transitions from second to first order, and, in the presence of a magnetic field, lead to either a quantum triple point where the FM, AFM, and paramagnetic phases all coexist or a quantum critical end point.

  9. Potentiometric end point detection in the EDTA titrimetric determination of gallium

    International Nuclear Information System (INIS)

    Gopinath, N.; Renuka, M.; Aggarwal, S.K.

    2001-01-01

    Gallium is titrated in the presence of a known amount of Fe(III) with EDTA in HNO3 solution at pH 2 to 3. The end point is detected potentiometrically employing a bright platinum wire versus a saturated calomel (SCE) reference electrode, the redox couple being Fe(III)/Fe(II). Since Fe(III) is also titrated by EDTA, it is subtracted from the titre value to get the EDTA equivalent to gallium only. A precision and accuracy of 0.2 to 0.4% were obtained in the results for gallium in the range of 2 to 8 mg. (author)
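    The subtraction step is simple stoichiometric bookkeeping: EDTA binds both metals 1:1, so the moles of gallium are the total EDTA delivered at the end point minus the moles of the known Fe(III) spike. The concentrations and volumes below are invented for the illustration, not taken from the paper.

```python
# Minimal sketch of the back-calculation (all numbers illustrative).
c_edta = 0.0100          # mol/L EDTA titrant
v_total = 8.50e-3        # L of EDTA at the potentiometric end point
n_fe = 2.00e-5           # mol Fe(III) added (known)

n_ga = c_edta * v_total - n_fe     # EDTA binds both metals 1:1
m_ga = n_ga * 69.723 * 1e3         # mg of gallium (M = 69.723 g/mol)
print(round(m_ga, 2))
```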

  10. Guidelines for time-to-event end point definitions in breast cancer trials: results of the DATECAN initiative (Definition for the Assessment of Time-to-event Endpoints in CANcer trials)†.

    Science.gov (United States)

    Gourgou-Bourgade, S; Cameron, D; Poortmans, P; Asselain, B; Azria, D; Cardoso, F; A'Hern, R; Bliss, J; Bogaerts, J; Bonnefoi, H; Brain, E; Cardoso, M J; Chibaudel, B; Coleman, R; Cufer, T; Dal Lago, L; Dalenc, F; De Azambuja, E; Debled, M; Delaloge, S; Filleron, T; Gligorov, J; Gutowski, M; Jacot, W; Kirkove, C; MacGrogan, G; Michiels, S; Negreiros, I; Offersen, B V; Penault Llorca, F; Pruneri, G; Roche, H; Russell, N S; Schmitt, F; Servent, V; Thürlimann, B; Untch, M; van der Hage, J A; van Tienhoven, G; Wildiers, H; Yarnold, J; Bonnetain, F; Mathoulin-Pélissier, S; Bellera, C; Dabakuyo-Yonli, T S

    2015-05-01

    Using surrogate end points for overall survival, such as disease-free survival, is increasingly common in randomized controlled trials. However, the definitions of several of these time-to-event (TTE) end points are imprecise, which limits interpretation and cross-trial comparisons. The estimation of treatment effects may be directly affected by the definitions of end points. The DATECAN initiative (Definition for the Assessment of Time-to-event Endpoints in CANcer trials) aims to provide recommendations for definitions of TTE end points. We report guidelines for randomized cancer clinical trials (RCTs) in breast cancer. A literature review was carried out to identify TTE end points (primary or secondary) reported in publications of randomized trials or guidelines. An international multidisciplinary panel of experts proposed recommendations for the definitions of these end points based on a validated consensus method that formalizes the degree of agreement among experts. Recommended guidelines for the definitions of TTE end points commonly used in RCTs for breast cancer are provided for non-metastatic and metastatic settings. The use of standardized definitions should facilitate comparisons of trial results and improve the quality of trial design and reporting. These guidelines could be of particular interest to those involved in the design, conduct, reporting, or assessment of RCTs. © The Author 2015. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  11. Biomarkers of Host Response Predict Primary End-Point Radiological Pneumonia in Tanzanian Children with Clinical Pneumonia: A Prospective Cohort Study

    Science.gov (United States)

    Erdman, Laura K.; D’Acremont, Valérie; Hayford, Kyla; Kilowoko, Mary; Kyungu, Esther; Hongoa, Philipina; Alamo, Leonor; Streiner, David L.; Genton, Blaise; Kain, Kevin C.

    2015-01-01

    Background Diagnosing pediatric pneumonia is challenging in low-resource settings. The World Health Organization (WHO) has defined primary end-point radiological pneumonia for use in epidemiological and vaccine studies. However, radiography requires expertise and is often inaccessible. We hypothesized that plasma biomarkers of inflammation and endothelial activation may be useful surrogates for end-point pneumonia, and may provide insight into its biological significance. Methods We studied children with WHO-defined clinical pneumonia (n = 155) within a prospective cohort of 1,005 consecutive febrile children presenting to Tanzanian outpatient clinics. Based on x-ray findings, participants were categorized as primary end-point pneumonia (n = 30), other infiltrates (n = 31), or normal chest x-ray (n = 94). Plasma levels of 7 host response biomarkers at presentation were measured by ELISA. Associations between biomarker levels and radiological findings were assessed by Kruskal-Wallis test and multivariable logistic regression. Biomarker ability to predict radiological findings was evaluated using receiver operating characteristic curve analysis and Classification and Regression Tree analysis. Results Compared to children with normal x-ray, children with end-point pneumonia had significantly higher C-reactive protein, procalcitonin and Chitinase 3-like-1, while those with other infiltrates had elevated procalcitonin and von Willebrand Factor and decreased soluble Tie-2 and endoglin. Clinical variables were not predictive of radiological findings. Classification and Regression Tree analysis generated multi-marker models with improved performance over single markers for discriminating between groups. A model based on C-reactive protein and Chitinase 3-like-1 discriminated between end-point pneumonia and non-end-point pneumonia with 93.3% sensitivity (95% confidence interval 76.5–98.8), 80.8% specificity (72.6–87.1), positive likelihood ratio 4.9 (3.4–7
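    The discrimination statistic used throughout this abstract, the area under the ROC curve, equals the probability that a randomly chosen case scores higher on the biomarker than a randomly chosen control, and can be computed directly from the Mann-Whitney U statistic. The simulated biomarker distributions and group sizes below are invented (group sizes merely echo the abstract's 94 and 30), not the study's data.

```python
import numpy as np

def roc_auc(neg, pos):
    """AUC via the Mann-Whitney U statistic: P(random case score >
    random control score), with ties counted as half."""
    neg, pos = np.asarray(neg), np.asarray(pos)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(7)
# Hypothetical log-biomarker levels, higher in end-point pneumonia.
controls = rng.normal(1.0, 0.8, 94)    # normal chest x-ray
cases = rng.normal(2.5, 0.8, 30)       # end-point pneumonia
print(round(roc_auc(controls, cases), 2))
```

    A multi-marker model like the CART combination described in the abstract would, in this framing, simply feed a combined score into the same AUC computation.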

  12. Dysglycemia and Index60 as Prediagnostic End Points for Type 1 Diabetes Prevention Trials.

    Science.gov (United States)

    Nathan, Brandon M; Boulware, David; Geyer, Susan; Atkinson, Mark A; Colman, Peter; Goland, Robin; Russell, William; Wentworth, John M; Wilson, Darrell M; Evans-Molina, Carmella; Wherrett, Diane; Skyler, Jay S; Moran, Antoinette; Sosenko, Jay M

    2017-11-01

    We assessed dysglycemia and a T1D Diagnostic Index60 (Index60) ≥1.00 (on the basis of fasting C-peptide, 60-min glucose, and 60-min C-peptide levels) as prediagnostic end points for type 1 diabetes among Type 1 Diabetes TrialNet Pathway to Prevention Study participants. Two cohorts were analyzed: 1) baseline normoglycemic oral glucose tolerance tests (OGTTs) with an incident dysglycemic OGTT and 2) baseline Index60 <1.00 OGTTs with an incident Index60 ≥1.00 OGTT. Incident dysglycemic OGTTs were divided into those with (DYS/IND+) and without (DYS/IND-) concomitant Index60 ≥1.00. Incident Index60 ≥1.00 OGTTs were divided into those with (IND/DYS+) and without (IND/DYS-) concomitant dysglycemia. The cumulative incidence of type 1 diabetes was greater after IND/DYS- than after DYS/IND- (P < 0.01). Within the normoglycemic cohort, the cumulative incidence of type 1 diabetes was higher after DYS/IND+ than after DYS/IND- (P < 0.001), whereas within the Index60 <1.00 cohort, the cumulative incidence after IND/DYS+ and after IND/DYS- did not differ significantly. Among nonprogressors, type 1 diabetes risk at the last OGTT was greater for IND/DYS- than for DYS/IND- (P < 0.001). Hazard ratios (HRs) of DYS/IND- with age and 30- to 0-min C-peptide were positive (P < 0.001 for both), whereas HRs of type 1 diabetes with these variables were inverse (P < 0.001 for both). In contrast, HRs of IND/DYS- and type 1 diabetes with age and 30- to 0-min C-peptide were consistent (all inverse; P < 0.01 for all). The findings suggest that incident dysglycemia without Index60 ≥1.00 is a suboptimal prediagnostic end point for type 1 diabetes. Measures that include both glucose and C-peptide levels, such as Index60 ≥1.00, appear better suited as prediagnostic end points. © 2017 by the American Diabetes Association.

  13. Predicting the outcome of oral food challenges with hen's egg through skin test end-point titration.

    Science.gov (United States)

    Tripodi, S; Businco, A Di Rienzo; Alessandri, C; Panetta, V; Restani, P; Matricardi, P M

    2009-08-01

    Oral food challenge (OFC) is the diagnostic 'gold standard' for food allergies, but it is laborious and time consuming. Attempts to predict a positive OFC through specific IgE assays or conventional skin tests have so far given suboptimal results. To test whether skin tests with titration curves predict with enough confidence the outcome of an oral food challenge. Children (n=47; mean age 6.2 +/- 4.2 years) with suspected and diagnosed allergic reactions to hen's egg (HE) were examined through clinical history, physical examination, oral food challenge, conventional and end-point titrated skin tests with HE white extract, and determination of serum specific IgE against HE white. Predictive decision points for a positive outcome of food challenges were calculated through receiver operating characteristic (ROC) analysis for HE white using IgE concentration, weal size, and end-point titration (EPT). OFC was positive (Sampson's score ≥3) in 20/47 children (42.5%). The area under the ROC curve obtained with the EPT method was significantly bigger than the one obtained by measuring IgE-specific antibodies (0.99 vs. 0.83, P<0.05) or weal size (0.99 vs. 0.88, P<0.05). The extract dilution that successfully discriminated a positive from a negative OFC (sensitivity 95%, specificity 100%) was 1:256, corresponding to a concentration of 5.9 μg/mL of ovotransferrin, 22.2 μg/mL of ovalbumin, and 1.4 μg/mL of lysozyme. EPT is a promising approach to optimize the use of skin prick tests and to predict the outcome of OFC with HE in children. Further studies are needed to test whether this encouraging finding can be extended to other populations and food allergens.

  14. No significant effect of angiotensin II receptor blockade on intermediate cardiovascular end points in hemodialysis patients

    DEFF Research Database (Denmark)

    Peters, Christian Daugaard; Kjaergaard, Krista D; Jensen, Jens D

    2014-01-01

    Agents blocking the renin-angiotensin-aldosterone system are frequently used in patients with end-stage renal disease, but whether they exert beneficial cardiovascular effects is unclear. Here the long-term effects of the angiotensin II receptor blocker irbesartan were studied in hemodialysis patients …, and residual renal function. Brachial blood pressure decreased significantly in both groups, but there was no significant difference between placebo and irbesartan. Use of additional antihypertensive medication, ultrafiltration volume, and dialysis dosage were not different. Intermediate cardiovascular end points such as central aortic blood pressure, carotid-femoral pulse wave velocity, left ventricular mass index, N-terminal brain natriuretic prohormone, heart rate variability, and plasma catecholamines were not significantly affected by irbesartan treatment. Changes in systolic blood pressure during …

  15. Lack of Bystander Effects From High-LET Radiation For Early Cytogenetic End Points

    International Nuclear Information System (INIS)

    Groesser, Torsten; Cooper, Brian; Rydberg, Bjorn

    2008-01-01

    The aim of this work was to study radiation-induced bystander effects for early cytogenetic end points in various cell lines using the medium transfer technique after exposure to high- and low-LET radiation. Cells were exposed to 20 MeV/nucleon nitrogen ions, 968 MeV/nucleon iron ions, or 575 MeV/nucleon iron ions, followed by transfer of the conditioned medium from the irradiated cells to unirradiated test cells. The effects studied included DNA double-strand break induction, γ-H2AX focus formation, induction of chromatid breaks in prematurely condensed chromosomes, and micronucleus formation using DNA repair-proficient and -deficient hamster and human cell lines (xrs6, V79, SW48, MO59K and MO59J). Cell survival was also measured in SW48 bystander cells using X rays. Although it was occasionally possible to detect an increase in chromatid break levels using nitrogen ions and to see a higher number of γ-H2AX foci using nitrogen and iron ions in xrs6 bystander cells in single experiments, the results were not reproducible. After we pooled all the data, we could not verify a significant bystander effect for any of these end points. Also, we did not detect a significant bystander effect for DSB induction or micronucleus formation in these cell lines or for clonogenic survival in SW48 cells. The data suggest that DNA damage and cytogenetic changes are not induced in bystander cells. In contrast, data in the literature show pronounced bystander effects in a variety of cell lines, including clonogenic survival in SW48 cells and induction of chromatid breaks and micronuclei in hamster cells. To reconcile these conflicting data, it is possible that the epigenetic status of the specific cell line or the precise culture conditions and medium supplements, such as serum, may be critical for inducing bystander effects.

  16. Guidelines for time-to-event end-point definitions in trials for pancreatic cancer. Results of the DATECAN initiative (Definition for the Assessment of Time-to-event End-points in CANcer trials).

    Science.gov (United States)

    Bonnetain, Franck; Bonsing, Bert; Conroy, Thierry; Dousseau, Adelaide; Glimelius, Bengt; Haustermans, Karin; Lacaine, François; Van Laethem, Jean Luc; Aparicio, Thomas; Aust, Daniela; Bassi, Claudio; Berger, Virginie; Chamorey, Emmanuel; Chibaudel, Benoist; Dahan, Laeticia; De Gramont, Aimery; Delpero, Jean Robert; Dervenis, Christos; Ducreux, Michel; Gal, Jocelyn; Gerber, Erich; Ghaneh, Paula; Hammel, Pascal; Hendlisz, Alain; Jooste, Valérie; Labianca, Roberto; Latouche, Aurelien; Lutz, Manfred; Macarulla, Teresa; Malka, David; Mauer, Muriel; Mitry, Emmanuel; Neoptolemos, John; Pessaux, Patrick; Sauvanet, Alain; Tabernero, Josep; Taieb, Julien; van Tienhoven, Geertjan; Gourgou-Bourgade, Sophie; Bellera, Carine; Mathoulin-Pélissier, Simone; Collette, Laurence

    2014-11-01

    Using potential surrogate end-points for overall survival (OS), such as disease-free survival (DFS) or progression-free survival (PFS), is increasingly common in randomised controlled trials (RCTs). However, end-points are too often imprecisely defined, which largely contributes to a lack of homogeneity across trials, hampering comparison between them. The aim of the DATECAN (Definition for the Assessment of Time-to-event End-points in CANcer trials)-Pancreas project is to provide guidelines for standardised definition of time-to-event end-points in RCTs for pancreatic cancer. Time-to-event end-points currently used were identified from a literature review of pancreatic cancer RCTs (2006-2009). Academic research groups were contacted for participation in order to select clinicians and methodologists to participate in the pilot and scoring groups (>30 experts). A consensus was built after 2 rounds of the modified Delphi formal consensus approach with the Rand scoring methodology (range: 1-9). For pancreatic cancer, 14 time-to-event end-points and 25 distinct event types, applied to two settings (detectable disease and/or no detectable disease), were considered relevant and included in the questionnaire sent to 52 selected experts. Thirty experts answered both scoring rounds. A total of 204 events distributed over the 14 end-points were scored. After the first round, consensus was reached for 25 items; after the second round, for 156 items; and after the face-to-face meeting, for 203 items. The formal consensus approach yielded guidelines for standardised definitions of time-to-event end-points, allowing cross-comparison of RCTs in pancreatic cancer. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. A quantitative analysis of statistical power identifies obesity end points for improved in vivo preclinical study design.

    Science.gov (United States)

    Selimkhanov, J; Thompson, W C; Guo, J; Hall, K D; Musante, C J

    2017-08-01

    The design of well-powered in vivo preclinical studies is a key element in building the knowledge of disease physiology for the purpose of identifying and effectively testing potential antiobesity drug targets. However, as a result of the complexity of the obese phenotype, there is limited understanding of the variability within and between study animals of macroscopic end points such as food intake and body composition. This, combined with limitations inherent in the measurement of certain end points, presents challenges to study design that can have significant consequences for an antiobesity program. Here, we analyze a large, longitudinal study of mouse food intake and body composition during diet perturbation to quantify the variability and interaction of the key metabolic end points. To demonstrate how conclusions can change as a function of study size, we show that a simulated preclinical study properly powered for one end point may lead to false conclusions based on secondary end points. We then propose guidelines for end point selection and study size estimation under different conditions to facilitate proper power calculation for a more successful in vivo study design.
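The study-size reasoning above can be sketched with the textbook normal-approximation power formula for comparing two group means. This is an illustrative sketch under assumed effect sizes and SDs (the numeric values are hypothetical), not the authors' simulation:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(delta, sigma, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means
    (equal group size n, common SD sigma, normal approximation)."""
    z_crit = 1.959963984540054  # z for alpha = 0.05, two-sided
    ncp = abs(delta) / (sigma * math.sqrt(2.0 / n))
    return normal_cdf(ncp - z_crit) + normal_cdf(-ncp - z_crit)

def required_n(delta, sigma, target=0.8, alpha=0.05):
    """Smallest per-group n reaching the target power."""
    n = 2
    while power_two_sample(delta, sigma, n, alpha) < target:
        n += 1
    return n

# hypothetical end points: same effect size, but one is far noisier than the other
print(required_n(delta=0.5, sigma=1.2))  # high-variance end point: many animals
print(required_n(delta=0.5, sigma=0.4))  # low-variance end point: far fewer
```

The same two-group comparison powered for the low-variance end point is badly underpowered for the high-variance one, which is the false-conclusion risk the abstract describes.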

  18. Survival End Points for Huntington Disease Trials Prior to a Motor Diagnosis.

    Science.gov (United States)

    Long, Jeffrey D; Mills, James A; Leavitt, Blair R; Durr, Alexandra; Roos, Raymund A; Stout, Julie C; Reilmann, Ralf; Landwehrmeyer, Bernhard; Gregory, Sarah; Scahill, Rachael I; Langbehn, Douglas R; Tabrizi, Sarah J

    2017-11-01

    Predictive genetic testing in Huntington disease (HD) enables therapeutic trials in HTT gene expansion mutation carriers prior to a motor diagnosis. Progression-free survival (PFS) is the composite of a motor diagnosis or a progression event, whichever comes first. To determine if PFS provides feasible sample sizes for trials with mutation carriers who have not yet received a motor diagnosis. This study uses data from the 2-phase, longitudinal cohort studies called Track and from a longitudinal cohort study called the Cooperative Huntington Observational Research Trial (COHORT). Track had 167 prediagnosis mutation carriers and 156 noncarriers, whereas COHORT had 366 prediagnosis mutation carriers and noncarriers. Track studies were conducted at 4 sites in 4 countries (Canada, France, England, and the Netherlands) from which data were collected from January 17, 2008, through November 17, 2014. The COHORT was conducted at 38 sites in 3 countries (Australia, Canada, and the United States) from which data were collected from February 14, 2006, through December 31, 2009. Results from the Track data were externally validated with data from the COHORT. The required sample size was estimated for a 2-arm prediagnosis clinical trial. Data analysis took place from May 1, 2016, to June 10, 2017. The primary end point is PFS. Huntington disease progression events are defined for the Unified Huntington's Disease Rating Scale total motor score, total functional capacity, symbol digit modalities test, and Stroop word test. Of Track's 167 prediagnosis mutation carriers, 93 (55.6%) were women, and the mean (SD) age was 40.06 (8.92) years; of the 156 noncarriers, 87 (55.7%) were women, and the mean (SD) age was 45.58 (10.30) years. Of the 366 COHORT participants, 229 (62.5%) were women and the mean (SD) age was 42.21 (12.48) years. The PFS curves of the Track mutation carriers showed good external validity with the COHORT mutation carriers after adjusting for initial progression. 

  19. Guidelines for time-to-event end point definitions in sarcomas and gastrointestinal stromal tumors (GIST) trials: results of the DATECAN initiative (Definition for the Assessment of Time-to-event Endpoints in CANcer trials)†.

    Science.gov (United States)

    Bellera, C A; Penel, N; Ouali, M; Bonvalot, S; Casali, P G; Nielsen, O S; Delannes, M; Litière, S; Bonnetain, F; Dabakuyo, T S; Benjamin, R S; Blay, J-Y; Bui, B N; Collin, F; Delaney, T F; Duffaud, F; Filleron, T; Fiore, M; Gelderblom, H; George, S; Grimer, R; Grosclaude, P; Gronchi, A; Haas, R; Hohenberger, P; Issels, R; Italiano, A; Jooste, V; Krarup-Hansen, A; Le Péchoux, C; Mussi, C; Oberlin, O; Patel, S; Piperno-Neumann, S; Raut, C; Ray-Coquard, I; Rutkowski, P; Schuetze, S; Sleijfer, S; Stoeckle, E; Van Glabbeke, M; Woll, P; Gourgou-Bourgade, S; Mathoulin-Pélissier, S

    2015-05-01

    The use of potential surrogate end points for overall survival, such as disease-free survival (DFS) or time-to-treatment failure (TTF) is increasingly common in randomized controlled trials (RCTs) in cancer. However, the definition of time-to-event (TTE) end points is rarely precise and lacks uniformity across trials. End point definition can impact trial results by affecting estimation of treatment effect and statistical power. The DATECAN initiative (Definition for the Assessment of Time-to-event End points in CANcer trials) aims to provide recommendations for definitions of TTE end points. We report guidelines for RCT in sarcomas and gastrointestinal stromal tumors (GIST). We first carried out a literature review to identify TTE end points (primary or secondary) reported in publications of RCT. An international multidisciplinary panel of experts proposed recommendations for the definitions of these end points. Recommendations were developed through a validated consensus method formalizing the degree of agreement among experts. Recommended guidelines for the definition of TTE end points commonly used in RCT for sarcomas and GIST are provided for adjuvant and metastatic settings, including DFS, TTF, time to progression and others. Use of standardized definitions should facilitate comparison of trials' results, and improve the quality of trial design and reporting. These guidelines could be of particular interest to research scientists involved in the design, conduct, reporting or assessment of RCT such as investigators, statisticians, reviewers, editors or regulatory authorities. © The Author 2014. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  20. Infrared interference patterns for new capabilities in laser end point detection

    International Nuclear Information System (INIS)

    Heason, D J; Spencer, A G

    2003-01-01

    Standard laser interferometry is used in dry etch fabrication of semiconductor and MEMS devices to measure etch depth and rate and to detect the process end point. However, many wafer materials, such as silicon, are absorbing at probing wavelengths in the visible, severely limiting the amount of information that can be obtained using this technique. At infrared (IR) wavelengths around 1500 nm and above, silicon is highly transparent. In this paper we describe an instrument that can be used to monitor etch depth throughout a through-wafer etch. The provision of this information could eliminate the requirement for an 'etch stop' layer and improve the performance of fabricated devices. We have added a further new capability by using tuneable lasers to scan through wavelengths in the near IR to generate an interference pattern. Fitting a theoretical curve to this interference pattern gives in situ measurement of film thickness. Whereas conventional interferometry would only allow etch depth to be monitored in real time, we can use a pre-etch thickness measurement to terminate the etch on a remaining thickness of film material. This paper discusses the capabilities of, and the opportunities offered by, this new technique and gives examples of applications in MEMS and waveguides.
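As a sketch of the underlying relation (not the instrument's fitting routine): for a film of refractive index n at normal incidence, adjacent reflectance maxima in a wavelength scan satisfy 2nd = m*lam1 = (m+1)*lam2, so the thickness follows directly from the fringe spacing. The index n = 3.5 below is a hypothetical silicon-like value with dispersion ignored:

```python
def film_thickness_from_maxima(lam_long, lam_short, n_film):
    """Thickness d from two adjacent interference maxima of a wavelength scan:
    2*n*d = m*lam_long = (m+1)*lam_short  =>  d = lam1*lam2 / (2*n*(lam1 - lam2))."""
    return lam_long * lam_short / (2.0 * n_film * (lam_long - lam_short))

# synthetic round trip: a 50 um silicon-like film (n ~ 3.5), fringe order m = 233
n_si, d_true, m = 3.5, 50e-6, 233
lam1 = 2 * n_si * d_true / m          # longer-wavelength maximum (~1.50 um, near-IR)
lam2 = 2 * n_si * d_true / (m + 1)    # adjacent shorter-wavelength maximum
d_est = film_thickness_from_maxima(lam1, lam2, n_si)  # recovers ~50 um
```

A real instrument fits the full interference curve rather than two maxima, which averages over noise and handles dispersion, but the fringe-spacing relation is the information it exploits.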

  1. End-Point Immobilization of Recombinant Thrombomodulin via Sortase-Mediated Ligation

    Science.gov (United States)

    Jiang, Rui; Weingart, Jacob; Zhang, Hailong; Ma, Yong; Sun, Xue-Long

    2012-01-01

    We report an enzymatic end-point modification and immobilization of recombinant human thrombomodulin (TM), a cofactor for activation of the anticoagulant protein C pathway via thrombin. First, a truncated TM mutant consisting of epidermal growth factor-like domains 4–6 (TM456) with a conserved pentapeptide LPETG motif at its C-terminus was expressed and purified in E. coli. Next, the truncated TM456 derivative was site-specifically modified with N-terminal diglycine-containing molecules, such as biotin and the fluorescent probe dansyl, via sortase A (SrtA)-mediated ligation (SML). The successful ligations were confirmed by SDS-PAGE and fluorescence imaging. Finally, the truncated TM456 was immobilized onto an N-terminal diglycine-functionalized glass slide surface via SML directly. Alternatively, the truncated TM456 was biotinylated via SML and then immobilized onto a streptavidin-functionalized glass slide surface indirectly. The successful immobilizations were confirmed by fluorescence imaging. The bioactivity of the immobilized truncated TM456 was further confirmed by a protein C activation assay, in which enhanced activation of protein C by immobilized recombinant TM was observed. The sortase A-catalyzed surface ligation takes place under mild conditions and is rapid, occurring in a single step without prior chemical modification of the target protein. This site-specific covalent modification arranges molecules in a definitively ordered fashion and facilitates preservation of the protein's biological activity. PMID:22372933

  2. Chronic obstructive pulmonary disease may be one of the terminal end points of metabolic syndrome

    International Nuclear Information System (INIS)

    Helvaci, M.R.; Aydin, L.Y.; Aydin, Y.

    2012-01-01

    Objective: We tried to understand the presence of any effect of excess weight on the respiratory system by means of excessive adipose tissue functioning as an endocrine organ and causing … Methodology: Mild (stage 1), moderate (stage 2), and severe (stages 3 and 4) chronic obstructive pulmonary disease (COPD) patients were detected and compared according to the metabolic parameters in between. Results: There were 145, 56, and 34 patients in the mild, moderate, and severe COPD groups, respectively. The mean age increased gradually (52.4, 56.4, and 60.0 years) from the mild towards the severe COPD groups, respectively (p<0.05 in nearly all steps). Similarly, the mean … (p<0.05 in nearly all steps). Parallel to them, the mean body mass index increased … Conclusion: The metabolic syndrome includes some reversible indicators, such as overweight, hyperbetalipoproteinemia, hypertriglyceridemia, dyslipidemia, impaired fasting glucose, impaired glucose tolerance, and white coat hypertension, for the development of terminal diseases including obesity, hypertension, diabetes mellitus, peripheral artery disease, coronary heart disease, and stroke. In our opinion, COPD may be one of the terminal end points of the syndrome. (author)

  4. Association between time to disease progression end points and overall survival in patients with neuroendocrine tumors

    Directory of Open Access Journals (Sweden)

    Singh S

    2014-08-01

    Simron Singh, Xufang Wang, Calvin H L Law (Sunnybrook Odette Cancer Center, University of Toronto, Toronto, ON, Canada; Novartis Oncology, Florham Park, NJ, USA) Abstract: Overall survival can be difficult to determine for slowly progressing malignancies, such as neuroendocrine tumors. We investigated whether time to disease progression is positively associated with overall survival in patients with such tumors. A literature review identified 22 clinical trials in patients with neuroendocrine tumors that reported survival probabilities for both time to disease progression (progression-free survival and time to progression) and overall survival. Associations between median time to disease progression and median overall survival, and between treatment effects on time to disease progression and treatment effects on overall survival, were analyzed using weighted least-squares regression. Median time to disease progression was significantly associated with median overall survival (coefficient 0.595; P=0.022). In the seven randomized studies identified, the risk reduction for time to disease progression was positively associated with the risk reduction for overall survival (coefficient on −ln[HR] 0.151; 95% confidence interval −0.843, 1.145; P=0.713). The significant association between median time to disease progression and median overall survival supports the assertion that time to disease progression is an alternative end point to overall survival in patients with neuroendocrine tumors. An apparent, albeit not significant, trend correlates treatment effects on time to disease progression with treatment effects on overall survival. Informal surveys of physicians' perceptions are consistent with these concepts, although additional randomized trials are needed. Keywords: neuroendocrine tumors, progression-free survival, disease progression, mortality
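The weighted least-squares regression used in such analyses can be sketched as follows; the trial numbers below are hypothetical illustrations, not data from the study:

```python
def weighted_linear_fit(x, y, w):
    """Weighted least-squares fit of y ~ a + b*x, with per-point weights w
    (e.g. trial sample sizes). Returns (intercept a, slope b)."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    b = sxy / sxx
    a = ybar - b * xbar
    return a, b

# hypothetical trials: median time to progression vs. median OS (months), weighted by trial size
ttp = [5.0, 8.0, 11.0, 14.5, 16.4]
overall = [20.0, 24.0, 28.0, 33.0, 35.0]
sizes = [85, 120, 60, 200, 150]
intercept, slope = weighted_linear_fit(ttp, overall, sizes)
```

Weighting by trial size downweights small trials whose medians are estimated less precisely, which is the usual rationale in such meta-regressions.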

  5. Centralized adjudication of cardiovascular end points in cardiovascular and noncardiovascular pharmacologic trials: a report from the Cardiac Safety Research Consortium.

    Science.gov (United States)

    Seltzer, Jonathan H; Turner, J Rick; Geiger, Mary Jane; Rosano, Giuseppe; Mahaffey, Kenneth W; White, William B; Sabol, Mary Beth; Stockbridge, Norman; Sager, Philip T

    2015-02-01

    This white paper provides a summary of presentations and discussions at a cardiovascular (CV) end point adjudication think tank cosponsored by the Cardiac Safety Research Consortium and the US Food and Drug Administration (FDA) that was convened at the FDA's White Oak headquarters on November 6, 2013. Attention was focused on the lack of clarity concerning the need for end point adjudication in both CV and non-CV trials: there is currently an absence of widely accepted academic or industry standards and a definitive regulatory policy on how best to structure and use clinical end point committees (CECs). This meeting therefore provided a forum for leaders in the fields of CV clinical trials and CV safety to develop a foundation of initial best practice recommendations for use in future CEC charters. Attendees included representatives from pharmaceutical companies, regulatory agencies, end point adjudication specialist groups, clinical research organizations, and active, academically based adjudicators. The manuscript presents recommendations from the think tank regarding when CV end point adjudication should be considered in trials conducted by cardiologists and by noncardiologists as well as detailing key issues in the composition of a CEC and its charter. In addition, it presents several recommended best practices for the establishment and operation of CECs. The science underlying CV event adjudication is evolving, and suggestions for additional areas of research will be needed to continue to advance this science. This manuscript does not constitute regulatory guidance. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Evolution of Randomized Trials in Advanced/Metastatic Soft Tissue Sarcoma: End Point Selection, Surrogacy, and Quality of Reporting.

    Science.gov (United States)

    Zer, Alona; Prince, Rebecca M; Amir, Eitan; Abdul Razak, Albiruni

    2016-05-01

    Randomized controlled trials (RCTs) in soft tissue sarcoma (STS) have used varying end points. The surrogacy of intermediate end points, such as progression-free survival (PFS), response rate (RR), and 3-month and 6-month PFS (3moPFS and 6moPFS) with overall survival (OS), remains unknown. The quality of efficacy and toxicity reporting in these studies is also uncertain. A systematic review of systemic therapy RCTs in STS was performed. Surrogacy between intermediate end points and OS was explored using weighted linear regression for the hazard ratio for OS with the hazard ratio for PFS or the odds ratio for RR, 3moPFS, and 6moPFS. The quality of reporting for efficacy and toxicity was also evaluated. Fifty-two RCTs published between 1974 and 2014, comprising 9,762 patients, met the inclusion criteria. There were significant correlations between PFS and OS (R = 0.61) and between RR and OS (R = 0.51). Conversely, there were nonsignificant correlations between 3moPFS and 6moPFS with OS. A reduction in the use of RR as the primary end point was observed over time, favoring time-based events (P for trend = .02). In 14% of RCTs, the primary end point was not met, but the study was reported as being positive. Toxicity was comprehensively reported in 47% of RCTs, whereas 14% inadequately reported toxicity. In advanced STS, PFS and RR seem to be appropriate surrogates for OS. There is poor correlation between OS and both 3moPFS and 6moPFS. As such, caution is urged with the use of these as primary end points in randomized STS trials. The quality of toxicity reporting and interpretation of results is suboptimal. © 2016 by American Society of Clinical Oncology.

  7. Modulation of cigarette smoke-related end-points in mutagenesis and carcinogenesis

    International Nuclear Information System (INIS)

    De Flora, Silvio; D'Agostini, Francesco; Balansky, Roumen; Camoirano, Anna; Bennicelli, Carlo; Bagnasco, Maria; Cartiglia, Cristina; Tampa, Elena; Longobardi, Maria Grazia; Lubet, Ronald A.; Izzotti, Alberto

    2003-01-01

    The epidemic of lung cancer and the increase of other tumours and chronic degenerative diseases associated with tobacco smoking have represented one of the most dramatic catastrophes of the 20th century. The control of this plague is one of the major challenges of preventive medicine for the next decades. The imperative goal is to refrain from smoking. However, chemoprevention by dietary and/or pharmacological agents provides a complementary strategy, which can be targeted not only to current smokers but also to former smokers and passive smokers. This article summarises the results of studies performed in our laboratories during the last 10 years, and provides new data generated in vitro, in experimental animals and in humans. We compared the ability of 63 putative chemopreventive agents to inhibit the bacterial mutagenicity of mainstream cigarette smoke. Modulation by ethanol and the mechanisms involved were also investigated both in vitro and in vivo. Several studies evaluated the effects of dietary chemopreventive agents towards smoke-related intermediate biomarkers in various cells, tissues and organs of rodents. The investigated end-points included metabolic parameters, adducts to haemoglobin, bulky adducts to nuclear DNA, oxidative DNA damage, adducts to mitochondrial DNA, apoptosis, cytogenetic damage in alveolar macrophages, bone marrow and peripheral blood erythrocytes, proliferation markers, and histopathological alterations. The agents tested in vivo included N-acetyl-L-cysteine, 1,2-dithiole-3-thione, oltipraz, phenethyl isothiocyanate, 5,6-benzoflavone, and sulindac. We started applying multigene expression analysis to chemoprevention research, and postulated that an optimal agent should not excessively alter per se the physiological background of gene expression but should be able to attenuate the alterations produced by cigarette smoke or other carcinogens. We are working to develop an animal model for the induction of lung tumours following exposure …

  8. Free-time and fixed end-point multi-target optimal control theory: Application to quantum computing

    International Nuclear Information System (INIS)

    Mishima, K.; Yamashita, K.

    2011-01-01

    Graphical abstract: The two-state Deutsch-Jozsa algorithm used to demonstrate the utility of free-time and fixed end-point multi-target optimal control theory. Research highlights: → Free-time and fixed end-point multi-target optimal control theory (FRFP-MTOCT) was constructed. → The features of our theory include optimization of the external time-dependent perturbations with high transition probabilities, that of the temporal duration, the monotonic convergence, and the ability to optimize multiple laser pulses simultaneously. → The advantage of the theory and a comparison with conventional fixed-time and fixed end-point multi-target optimal control theory (FIFP-MTOCT) are presented by comparing data calculated using the present theory with those published previously [K. Mishima, K. Yamashita, Chem. Phys. 361 (2009) 106]. → The qubit system of our interest consists of two polar NaCl molecules coupled by dipole-dipole interaction. → The calculation examples show that our theory is useful for minor adjustment of the external fields. - Abstract: An extension of free-time and fixed end-point optimal control theory (FRFP-OCT) to monotonically convergent free-time and fixed end-point multi-target optimal control theory (FRFP-MTOCT) is presented. The features of our theory include optimization of the external time-dependent perturbations with high transition probabilities, that of the temporal duration, the monotonic convergence, and the ability to optimize multiple laser pulses simultaneously. The advantage of the theory and a comparison with conventional fixed-time and fixed end-point multi-target optimal control theory (FIFP-MTOCT) are presented by comparing data calculated using the present theory with those published previously [K. Mishima, K. Yamashita, Chem. Phys. 361 (2009) 106]. The qubit system of our interest consists of two polar NaCl molecules coupled by dipole-dipole interaction. The calculation examples show that our theory is useful for minor adjustment of the external fields.

  9. New drugs and patient-centred end-points in old age: setting the wheels in motion.

    Science.gov (United States)

    Mangoni, Arduino A; Pilotto, Alberto

    2016-01-01

    Older patients with various degrees of frailty and disability, a key population target of pharmacological interventions in acute and chronic disease states, are virtually neglected in pre-marketing studies assessing the efficacy and safety of investigational drugs. Moreover, aggressively pursuing established therapeutic targets in old age, e.g. blood pressure, serum glucose or cholesterol concentrations, is not necessarily associated with the beneficial effects, and the acceptable safety, reported in younger patient cohorts. Measures of self-reported health and functional status might represent additional, more meaningful, therapeutic end-points in the older population, particularly in patients with significant frailty and relatively short life expectancy, e.g. in the presence of cancer and/or neurodegenerative disease conditions. Strategies enhancing early knowledge about key pharmacological characteristics of investigational drugs targeting older adults are discussed, together with the rationale for incorporating non-traditional, patient-centred, end-points in this ever-increasing group.

  10. A Case Study Application of the Aggregate Exposure Pathway (AEP) and Adverse Outcome Pathway (AOP) Frameworks to Facilitate the Integration of Human Health and Ecological End Points for Cumulative Risk Assessment (CRA)

    Science.gov (United States)

    Cumulative risk assessment (CRA) methods promote the use of a conceptual site model (CSM) to apportion exposures and integrate risk from multiple stressors. While CSMs may encompass multiple species, evaluating end points across taxa can be challenging due to data availability an...

  11. Summary statistics for end-point conditioned continuous-time Markov chains

    DEFF Research Database (Denmark)

    Hobolth, Asger; Jensen, Jens Ledet

    Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behavior and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between two states and the distribution of the total number of jumps) for discretely observed continuous-time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. In particular we develop a framework that allows for analyses of rather general summary statistics using the uniformization method.
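Method (ii), uniformization, writes the transition matrix as P(t) = exp(Qt) = sum_k e^{-Lt}(Lt)^k/k! * R^k, with R = I + Q/L and L >= max_i |q_ii|. A minimal sketch (not the authors' implementation), checked against the closed-form solution for a two-state chain:

```python
import math

def uniformize(Q, t, n_terms=60):
    """P(t) = exp(Q*t) for a CTMC rate matrix Q via uniformization:
    P(t) = sum_k Poisson(k; Lambda*t) * R^k, where R = I + Q/Lambda."""
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) or 1.0  # uniformization rate Lambda
    R = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)] for i in range(n)]
    P = [[0.0] * n for _ in range(n)]
    Rk = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # R^0 = I
    for k in range(n_terms):
        w = math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)  # Poisson weight
        for i in range(n):
            for j in range(n):
                P[i][j] += w * Rk[i][j]
        # next power of R
        Rk = [[sum(Rk[i][m] * R[m][j] for m in range(n)) for j in range(n)] for i in range(n)]
    return P

# two-state chain (rate a: state 1 -> 2, rate b: state 2 -> 1) has a known closed form
a, b, t = 0.7, 0.3, 2.0
P = uniformize([[-a, a], [b, -b]], t)
p11_exact = (b + a * math.exp(-(a + b) * t)) / (a + b)  # analytic P[0][0]
```

Because R is a stochastic matrix, the same Poisson-weighted sum also underlies the summary-statistic recursions the abstract mentions; truncating the series at n_terms is accurate once n_terms comfortably exceeds L*t.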

  12. Experiments for quick and accurate thorium assay by means of potentiometric end point determination

    International Nuclear Information System (INIS)

    Mainka, E.; Coerdt, W.

    1978-10-01

    Two methods are described that allow a quick thorium assay and are easily automated. In the potentiometric titration with NaF, the ion-sensitive fluoride electrode is used as the indicator. The analysis can be performed at pH values of 3 to 4. Th(OH)4 precipitation must be avoided, although the acidity of the solution must not be too high, since otherwise erroneous measurements will be obtained. In the complexometric thorium assay, EDTA is used for titration and the indicator is the copper-sensitive electrode. This method offers the advantage that the analysis can be performed in the presence of large amounts of uranium, which is excluded under the NaF method. (orig.) [de
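For potentiometric titrations like these, the end point is routinely located at the inflection of the potential-volume curve via the second-derivative method. A minimal sketch on a synthetic sigmoid curve (the electrode response and the equivalence volume of 10.12 mL are invented for illustration, not taken from this record):

```python
import numpy as np

# Hypothetical synthetic titration curve: potential E (mV) vs titrant
# volume V (mL), with the equivalence point at an assumed Veq = 10.12 mL.
Veq = 10.12
V = np.linspace(8.0, 12.0, 81)               # 0.05 mL steps
E = 250.0 + 180.0 * np.tanh(4.0 * (V - Veq))

# Second-derivative method: the end point is where d2E/dV2 crosses zero
# (the inflection of the titration curve).
d1 = np.gradient(E, V)
d2 = np.gradient(d1, V)
j = np.argmax(d1)                            # steepest region of the curve
j0 = j - 1 if d2[j - 1] * d2[j] <= 0 else j  # bracket the sign change
# Linear interpolation of the zero crossing between grid points.
Vend = V[j0] - d2[j0] * (V[j0 + 1] - V[j0]) / (d2[j0 + 1] - d2[j0])
```

Interpolating between the two grid points that bracket the sign change recovers the end point to well within one titrant increment.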

  13. At-line determination of pharmaceuticals small molecule's blending end point using chemometric modeling combined with Fourier transform near infrared spectroscopy

    Science.gov (United States)

    Tewari, Jagdish; Strong, Richard; Boulas, Pierre

    2017-02-01

    This article summarizes the development and validation of a Fourier transform near infrared spectroscopy (FT-NIR) method for the rapid at-line prediction of active pharmaceutical ingredient (API) in a powder blend to optimize small molecule formulations. The method was used to determine the blend uniformity end-point for a pharmaceutical solid dosage formulation containing a range of API concentrations. A set of calibration spectra from samples with API concentrations ranging from 1% to 15% (w/w) was collected at-line from 4000 to 12,500 cm-1. The ability of the FT-NIR method to predict API concentration in the blend samples was validated against a reference high performance liquid chromatography (HPLC) method. The prediction efficiency of four different multivariate data modeling methods, partial least-squares 1 (PLS1), partial least-squares 2 (PLS2), principal component regression (PCR) and artificial neural network (ANN), was compared using relevant multivariate figures of merit. The prediction ability of the regression models was cross-validated against results generated with the reference HPLC method. PLS1 and ANN showed prediction abilities superior to those of PLS2 and PCR. Based upon these results, and because of its decreased complexity compared to ANN, PLS1 was selected as the best chemometric method to predict blend uniformity at-line. The FT-NIR measurement and the associated chemometric analysis were implemented in the production environment for rapid at-line determination of the end-point of the small molecule blending operation.
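PLS1 calibration of the kind selected in this record can be sketched with the classic NIPALS algorithm on synthetic spectra. The band shapes, concentrations and noise level below are invented for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for FT-NIR calibration data: 40 blends spanning
# 1-15 % API, 200 spectral channels, one API band plus one excipient band.
n, p = 40, 200
conc = rng.uniform(1.0, 15.0, n)
api_band = np.exp(-0.5 * ((np.arange(p) - 80) / 12.0) ** 2)   # API peak
excip = np.exp(-0.5 * ((np.arange(p) - 140) / 25.0) ** 2)     # excipient peak
X = (np.outer(conc, api_band) + np.outer(rng.uniform(60, 90, n), excip)
     + 0.05 * rng.standard_normal((n, p)))

def pls1(X, y, ncomp):
    """PLS1 via NIPALS; returns (intercept, coefficient vector)."""
    Xc, yc = X - X.mean(0), y - y.mean()
    X0m, y0m = X.mean(0), y.mean()
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)       # weight vector
        t = Xc @ w                   # score vector
        tt = t @ t
        p_ = Xc.T @ t / tt           # loading
        q_ = yc @ t / tt             # y-loading
        Xc, yc = Xc - np.outer(t, p_), yc - q_ * t   # deflate
        W.append(w); P.append(p_); q.append(q_)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)              # regression vector
    return y0m - X0m @ B, B

b0, B = pls1(X, conc, ncomp=2)
pred = b0 + X @ B
r2 = 1 - np.sum((conc - pred) ** 2) / np.sum((conc - conc.mean()) ** 2)
```

Two latent variables suffice here because the synthetic signal lives in a two-dimensional subspace (API plus excipient), which mirrors why PLS1 with few components can beat more complex models on well-behaved blends.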

  14. A novel mechatronic system for measuring end-point stiffness: mechanical design and preliminary tests.

    Science.gov (United States)

    Masia, L; Sandini, G; Morasso, P G

    2011-01-01

    Measuring arm stiffness is of great interest for many disciplines, from biomechanics to medicine, especially because modulation of impedance represents one of the main mechanisms underlying control of movement and interaction with the external environment. Previous works have proposed different methods to identify multijoint hand stiffness by using planar or even tridimensional haptic devices, but the associated computational burden makes them difficult to implement. We present a novel mechanism conceived for measuring multijoint planar stiffness with a single measurement and in a reduced execution time. A novel mechanical rotary device applies a cyclic radial perturbation of known displacement to the human arm, and the force is acquired by means of a 6-axis commercial load cell. The outcomes suggest that the system is not only reliable but also yields a bi-dimensional estimation of arm stiffness in a reduced amount of time, with results comparable to those reported in previous research. © 2011 IEEE

  15. Removal of oxides from alkali metal melts by reductive titration to electrical resistance-change end points

    Science.gov (United States)

    Tsang, Floris Y.

    1980-01-01

    Alkali metal oxides dissolved in alkali metal melts are reduced with soluble metals which are converted to insoluble oxides. The end point of the reduction is detected as an increase in electrical resistance across an alkali metal ion-conductive membrane interposed between the oxide-containing melt and a material capable of accepting the alkali metal ions from the membrane when a difference in electrical potential, of the appropriate polarity, is established across it. The resistance increase results from blocking of the membrane face by ions of the excess reductant metal, to which the membrane is essentially non-conductive.

  16. Worldwide associations between air quality and health end-points: Are they meaningful?

    Directory of Open Access Journals (Sweden)

    Peter Wallner

    2014-10-01

    Objectives: The World Health Organization (WHO) provides data on national indices of health, environment and economy. When we were asked why air pollution is negatively correlated with cancer mortality, our first response (presumably the mortality data are not age-adjusted) was not sufficient to explain the paradox. Material and Methods: A table including all-cause, cancer and childhood mortality, life expectancy, gross national product per person, smoking prevalence, physician density and particulate matter (PM10) per country (N = 193) was developed. For explorative purposes, weighted cross-sectional multiple linear regression models were built. Results: Air pollution is positively correlated with infant and overall mortality and negatively with life expectancy. This might not only depict a true causal effect of PM10, because air quality is also an indicator of a country's prosperity and general state of the environment. Cancer mortality is negatively correlated with PM10. However, this association turns positive when economic or health system indicators are controlled for. Conclusions: The World Health Organization's world-wide data sets demonstrate the large disparity of our world. A careful and professional approach is needed, as interpretation is difficult, especially for lay persons. Therefore, with publicly available data, WHO should also provide interpretation and guidance.

  17. GFR Decline as an Alternative End Point to Kidney Failure in Clinical Trials : A Meta-analysis of Treatment Effects From 37 Randomized Trials

    NARCIS (Netherlands)

    Inker, Lesley A.; Lambers Heerspink, Hiddo J.; Mondal, Hasi; Schmid, Christopher H.; Tighiouart, Hocine; Noubary, Farzad; Coresh, Josef; Greene, Tom; Levey, Andrew S.

    2014-01-01

    Background: There is increased interest in using alternative end points for trials of kidney disease progression. The currently established end points of end-stage renal disease and doubling of serum creatinine level, equivalent to a 57% decline in estimated glomerular filtration rate (eGFR), are

  18. Testing of an End-Point Control Unit Designed to Enable Precision Control of Manipulator-Coupled Spacecraft

    Science.gov (United States)

    Montgomery, Raymond C.; Ghosh, Dave; Tobbe, Patrick A.; Weathers, John M.; Manouchehri, Davoud; Lindsay, Thomas S.

    1994-01-01

    This paper presents an end-point control concept designed to enable precision telerobotic control of manipulator-coupled spacecraft. The concept employs a hardware unit (end-point control unit EPCU) that is positioned between the end-effector of the Space Shuttle Remote Manipulator System and the payload. Features of the unit are active compliance (control of the displacement between the end-effector and the payload), to allow precision control of payload motions, and inertial load relief, to prevent the transmission of loads between the end-effector and the payload. This paper presents the concept and studies the active compliance feature using a simulation and hardware. Results of the simulation show the effectiveness of the EPCU in smoothing the motion of the payload. Results are presented from initial, limited tests of a laboratory hardware unit on a robotic arm testbed at the l Space Flight Center. Tracking performance of the arm in a constant speed automated retraction and extension maneuver of a heavy payload with and without the unit active is compared for the design speed and higher speeds. Simultaneous load reduction and tracking performance are demonstrated using the EPCU.

  19. End points of planar reaching movements are disrupted by small force pulses: an evaluation of the hypothesis of equifinality.

    Science.gov (United States)

    Popescu, F C; Rymer, W Z

    2000-11-01

    A single force pulse was applied unexpectedly to the arms of five normal human subjects during nonvisually guided planar reaching movements of 10-cm amplitude. The pulse was applied by a powered manipulandum in a direction perpendicular to the motion of the hand, which gripped the manipulandum via a handle, at the beginning, the middle, or toward the end of the movement. It was small and brief (10 N, 10 ms), so that it was barely perceptible. We found that the end points of the perturbed motions were systematically different from those of the unperturbed movements. This difference, dubbed "terminal error," averaged 14.4 +/- 9.8% (mean +/- SD) of the movement distance. The terminal error was not necessarily in the direction of the perturbation, although it was affected by it, and it did not decrease significantly with practice. For example, while perturbations involving elbow extension resulted in a statistically significant shift in mean end point and target-acquisition frequency, the flexion perturbations had no clear effect. We argue that this error distribution is inconsistent with the "equilibrium point hypothesis" (EPH), which predicts that terminal error should be minimal and determined primarily by the variance in the command signal itself, a property referred to as "equifinality." This property reputedly derives from the "spring-like" properties of muscle and is enhanced by reflexes. To ensure that terminal errors were not due to mid-course voluntary corrections, we only accepted trials in which the final position was already established before such a voluntary response to the perturbation could have begun, that is, in a time interval shorter than the minimum reaction time (RT) for that subject. This RT was estimated for each subject in supplementary experiments in which the subject was instructed to move to a new target if perturbed and to the old target if no perturbation was detected. These RT movements were found to either stop or slow greatly at the original

  20. The upgrade of the multiwire drift chamber readout of the HADES experiment at GSI: the optical end point board

    Energy Technology Data Exchange (ETDEWEB)

    Tarantola, Attilio; Michel, Jan; Muentz, Christian; Stroth, Joachim [Institut fuer Kernphysik, Goethe-Universitaet, Frankfurt (Germany); GSI, Helmholtzzentrum fuer Schwerionenforschung, Darmstadt (Germany); Froehlich, Ingo; Stroebele, Herbert [Institut fuer Kernphysik, Goethe-Universitaet, Frankfurt (Germany); Kolb, Burkhard; Traxler, Michael [GSI, Helmholtzzentrum fuer Schwerionenforschung, Darmstadt (Germany); Palka, Marek [Smoluchowski Institute of Physics, Jagiellonian University, Krakow (Poland); GSI, Helmholtzzentrum fuer Schwerionenforschung, Darmstadt (Germany); Wuestenfeld, Joern [Institut fuer Strahlenphysik, Forschungszentrum, Dresden-Rossendorf (Germany)

    2009-07-01

    One of the goals of the HADES upgrade project is the realization of a new data acquisition scheme for the 24 Multiwire Drift Chambers (MDCs), which increases the readout speed of the 40,000 TDC channels. On the existing MDC Front End Electronics (FEE) side, an Optical End Point Board (OEPB) has been designed to control configuration and readout of the chamber's TDCs. The OEPB uses Plastic Optical Fibres (POF) for data transmission, which results in total electromagnetic immunity, great simplicity in handling and low power consumption. A Lattice ECP2/M FPGA with SERDES manages serial data transmission, and its large resources allow for the storage of several events close to the front end. As 400 OEPBs will be located in the detector acceptance, dedicated FPGA hardware is used to detect Single Event Upsets (SEUs).

  1. Some thoughts on the nature of chromosomal aberrations and their use as a quantitative end-point for radiobiological studies

    International Nuclear Information System (INIS)

    Savage, J.R.K.

    1978-01-01

    A vital condition when chromosomal aberrations are to be used as a quantitative end-point (e.g. for constructing a dose response curve) is that a specific dose must produce a specific yield of aberrations under a given set of experimental conditions. In practice, there are very few cell systems where this condition is met. The majority show significant variations in observed yield with time between irradiation and sampling, indicative of variable radiosensitivity within the cell population. The profile of this yield time curve is determined by the cell-cycle kinetics and therefore is itself subject to modification by radiation through mitotic delay and perturbation. Thus in such heterogeneous populations, each increment of dose not only induces more aberrations, but at the same time modifies the recovered yield per cell. This has an obvious bearing upon the interpretation of the shape of any dose-response curve obtained

  2. Evaluation of Short-Term Changes in Serum Creatinine Level as a Meaningful End Point in Randomized Clinical Trials.

    Science.gov (United States)

    Coca, Steven G; Zabetian, Azadeh; Ferket, Bart S; Zhou, Jing; Testani, Jeffrey M; Garg, Amit X; Parikh, Chirag R

    2016-08-01

    Observational studies have shown that acute change in kidney function (specifically, AKI) is a strong risk factor for poor outcomes. Thus, the outcome of acute change in serum creatinine level, regardless of underlying biology or etiology, is frequently used in clinical trials as both efficacy and safety end points. We performed a meta-analysis of clinical trials to quantify the relationship between positive or negative short-term effects of interventions on change in serum creatinine level and more meaningful clinical outcomes. After a thorough literature search, we included 14 randomized trials of interventions that altered risk for an acute increase in serum creatinine level and had reported between-group differences in CKD and/or mortality rate ≥3 months after randomization. Seven trials assessed interventions that, compared with placebo, increased risk of acute elevation in serum creatinine level (pooled relative risk, 1.52; 95% confidence interval, 1.22 to 1.89), and seven trials assessed interventions that, compared with placebo, reduced risk of acute elevation in serum creatinine level (pooled relative risk, 0.57; 95% confidence interval, 0.44 to 0.74). However, pooled risks for CKD and mortality associated with interventions did not differ from those with placebo in either group. In conclusion, several interventions that affect risk of acute, mild to moderate, often temporary elevation in serum creatinine level in placebo-controlled randomized trials showed no appreciable effect on CKD or mortality months later, raising questions about the value of using small to moderate changes in serum creatinine level as end points in clinical trials. Copyright © 2016 by the American Society of Nephrology.

  3. The end point of the first-order phase transition of the SU(2) gauge-Higgs model on a four-dimensional isotropic lattice

    International Nuclear Information System (INIS)

    Aoki, Y.; Csikor, F.; Fodor, Z.; Ukawa, A.

    1999-01-01

    We report results of a study of the end point of the electroweak phase transition of the SU(2) gauge-Higgs model defined on a four-dimensional isotropic lattice with Nt = 2. A finite-size scaling study of Lee-Yang zeros yields λc = 0.00116(16) for the end point. Combined with a zero-temperature measurement of the Higgs and W boson masses, this leads to MH,c = 68.2 ± 6.6 GeV for the critical Higgs boson mass. An independent analysis of the Binder cumulant gives a consistent value λc = 0.00102(3) for the end point
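The Binder cumulant used in this analysis is a simple moment ratio of the order parameter, U4 = 1 - <m^4>/(3<m^2>^2). A toy sketch with synthetic samples (not lattice data) shows its two limiting values, which is what makes it useful for locating a transition end point.

```python
import numpy as np

rng = np.random.default_rng(2)

def binder(m):
    """Fourth-order Binder cumulant U4 = 1 - <m^4> / (3 <m^2>^2)."""
    return 1 - np.mean(m**4) / (3 * np.mean(m**2) ** 2)

# Illustrative order-parameter samples: in a disordered phase m is
# roughly Gaussian around 0, so U4 -> 0; deep in an ordered phase m
# concentrates at +/- m0, so U4 -> 2/3.
disordered = rng.normal(0.0, 0.1, 100_000)
ordered = rng.choice([-1.0, 1.0], 100_000) + rng.normal(0.0, 0.01, 100_000)
u_dis, u_ord = binder(disordered), binder(ordered)
```

Curves of U4 versus coupling for different lattice sizes cross near the transition, giving an estimate of the end point independent of the Lee-Yang analysis.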

  4. End-point effector stress mediators in neuroimmune interactions: their role in immune system homeostasis and autoimmune pathology.

    Science.gov (United States)

    Dimitrijevic, Mirjana; Stanojevic, Stanislava; Kustrimovic, Natasa; Leposavic, Gordana

    2012-04-01

    Much evidence has identified a direct anatomical and functional link between the brain and the immune system, with glucocorticoids (GCs), catecholamines (CAs), and neuropeptide Y (NPY) as its end-point mediators. This suggests the important role of these mediators in immune system homeostasis and the pathogenesis of inflammatory autoimmune diseases. However, although it is clear that these mediators can modulate lymphocyte maturation and the activity of distinct immune cell types, their putative role in the pathogenesis of autoimmune disease is not yet completely understood. We have contributed to this field by discovering the influence of CAs and GCs on fine-tuning thymocyte negative selection and, in particular, by pointing to the putative CA-mediated mechanisms underlying this influence. Furthermore, we have shown that CAs are implicated in the regulation of regulatory T-cell development in the thymus. Moreover, our investigations related to macrophage biology emphasize the complex interaction between GCs, CAs and NPY in the modulation of macrophage functions and their putative significance for the pathogenesis of autoimmune inflammatory diseases.

  5. Angiographic core laboratory reproducibility analyses: implications for planning clinical trials using coronary angiography and left ventriculography end-points.

    Science.gov (United States)

    Steigen, Terje K; Claudio, Cheryl; Abbott, David; Schulzer, Michael; Burton, Jeff; Tymchak, Wayne; Buller, Christopher E; John Mancini, G B

    2008-06-01

    To assess reproducibility of core laboratory performance and its impact on sample size calculations. Little information exists about the overall reproducibility of core laboratories, in contradistinction to the performance of individual technicians. Also, qualitative parameters are increasingly being adjudicated as either primary or secondary end-points. The comparative impact of using diverse indexes on sample sizes has not been previously reported. We compared initial and repeat assessments of five quantitative parameters [e.g., minimum lumen diameter (MLD), ejection fraction (EF), etc.] and six qualitative parameters [e.g., TIMI myocardial perfusion grade (TMPG) or thrombus grade (TTG), etc.], as performed by differing technicians and separated by a year or more. Sample sizes were calculated from these results. TMPG and TTG were also adjudicated by a second core laboratory. MLD and EF were the most reproducible, yielding the smallest sample size calculations, whereas percent diameter stenosis and centerline wall motion require substantially larger trials. Of the qualitative parameters, all except TIMI flow grade gave reproducibility characteristics yielding sample sizes of many hundreds of patients. Reproducibility of TMPG and TTG was only moderately good both within and between core laboratories, underscoring an intrinsic difficulty in assessing these. Core laboratories can be shown to provide reproducibility performance comparable to that commonly ascribed to individual technicians. The differences in reproducibility yield huge differences in sample size when comparing quantitative and qualitative parameters. TMPG and TTG are intrinsically difficult to assess, and conclusions based on these parameters should arise only from very large trials.
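The link between reproducibility and trial size follows from the standard two-arm formula n = 2(z_{1-a/2} + z_{1-b})^2 * sigma^2 / delta^2, where sigma includes measurement variability. The SDs and effect sizes below are hypothetical, chosen only to contrast a reproducible index with a noisier one.

```python
import math
from scipy.stats import norm

def n_per_arm(sd, delta, alpha=0.05, power=0.8):
    """Per-arm sample size to detect a mean difference `delta` with a
    two-sided test at level `alpha`, for a continuous end point whose
    per-patient SD (biological + measurement) is `sd`."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

# Hypothetical numbers: a highly reproducible index (SD 0.2 mm, MLD-like)
# vs a noisier one (SD 8 percentage points, % stenosis-like), each powered
# for its own clinically relevant difference.
n_small = n_per_arm(sd=0.2, delta=0.15)
n_large = n_per_arm(sd=8.0, delta=3.0)
```

Because n scales with sigma squared, even modest differences in core-laboratory reproducibility translate into several-fold differences in required enrollment.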

  6. UST-ID robotics: Wireless communication and minimum conductor technology, and end-point tracking technology surveys

    International Nuclear Information System (INIS)

    Holliday, M.A.

    1993-10-01

    This report is a technology review of the current state-of-the-art in two technologies applicable to the Underground Storage Tank (UST) program at the Hanford Nuclear Reservation. The first review covers wireless and minimal-conductor technologies for in-tank communications. The second review covers advanced concepts for independent end-point tracking. This study addresses the need to provide wireless transmission media or minimum conductor technology for in-tank communications and robot control. At present, signals are conducted via contacting transmission media, i.e., cables. Replacing wires with radio frequencies or invisible light is commonplace in the communication industry. This technology will be evaluated for its applicability to the needs of robotics. Some of these options are radio signals, leaky coax, infrared, microwave, and optical fiber systems. Although optical fiber systems are contacting transmission media, they will be considered because of their ability to reduce the number of conductors. In this report we will identify, evaluate, and recommend the requirements for wireless and minimum conductor technology to replace the present cable system. The second section is a technology survey of concepts for independent end-point tracking (tracking the position of robot end effectors). The position of the end effector in current industrial robots is determined by computing that position from joint information, which is basically a problem of locating a point in three-dimensional space. Several approaches are presently being used in industrial robotics, including: stereo-triangulation with a theodolite network and electrocamera system, photogrammetry, and multiple-length measurement with laser interferometry and wires. The techniques that will be evaluated in this survey are advanced applications of the aforementioned approaches. These include laser tracking (3-D and 5-D), ultrasonic tracking, vision-guided servoing, and adaptive robotic visual tracking

  7. Constraints on grip selection in hemiparetic cerebral palsy: effects of lesional side, end-point accuracy, and context.

    Science.gov (United States)

    Steenbergen, Bert; Meulenbroek, Ruud G J; Rosenbaum, David A

    2004-04-01

    This study was concerned with selection criteria used for grip planning in adolescents with left or right hemiparetic cerebral palsy. In the first experiment, we asked participants to pick up a pencil and place the tip in a pre-defined target region. We varied the size of the target to test the hypothesis that increased end-point precision demands would favour the use of a grip that affords end-state comfort. In the second experiment, we studied grip planning in three task contexts that were chosen to let us test the hypothesis that a more functional task context would likewise promote the end-state comfort effect. When movements were performed with the impaired hand, we found that participants with right hemiparesis (i.e., left brain damage) aimed for postural comfort at the start rather than at the end of the object-manipulation phase in both experiments. By contrast, participants with left hemiparesis (i.e., right brain damage) did not favour a particular selection criterion with the impaired hand in the first experiment, but aimed for postural comfort at the start in the second experiment. When movements were performed with the unimpaired hand, grip selection criteria again differed for right and left hemiparetic participants. Participants with right hemiparesis did not favour a particular selection criterion with the unimpaired hand in the first experiment and only showed the end-state comfort effect in the most functional tasks of the second experiment. By contrast, participants with left hemiparesis showed the end-state comfort effect in all conditions of both experiments. These data suggest that the left hemisphere plays a special role in action planning, as has been recognized before, and that one of the deficits accompanying left brain damage is a deficit in forward movement planning, which has not been recognized before. Our findings have both theoretical and clinical implications.

  8. Application of a computer model to predict optimum slaughter end points for different biological types of feeder cattle.

    Science.gov (United States)

    Williams, C B; Bennett, G L

    1995-10-01

    A bioeconomic model was developed to predict slaughter end points of different genotypes of feeder cattle, where profit/rotation and profit/day were maximized. Growth, feed intake, and carcass weight and composition were simulated for 17 biological types of steers. Distribution of carcass weight and proportion in four USDA quality and five USDA yield grades were obtained from predicted carcass weights and composition. Average carcass value for each genotype was calculated from these distributions under four carcass pricing systems that varied from value determined on quality grade alone to value determined on yield grade alone. Under profitable market conditions, rotation length was shorter and carcass weights lighter when the producer's goal was maximum profit/day, compared with maximum profit/rotation. A carcass value system based on yield grade alone resulted in greater profit/rotation and in lighter and leaner carcasses than a system based on quality grade alone. High correlations ( > .97) were obtained between breed profits obtained with different sets of input/output prices and carcass price discount weight ranges. This suggests that breed rankings on the basis of breed profits may not be sensitive to changes in input/output market prices. Steers that were on a grower-stocker system had leaner carcasses, heavier optimum carcass weight, greater profits, and less variation in optimum carcass weights between genotypes than steers that were started on a high-energy finishing diet at weaning. Overall results suggest that breed choices may change with different carcass grading and value systems and postweaning production systems. This model has potential to provide decision support in marketing fed cattle.
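The contrast between maximizing profit per rotation and profit per day can be reproduced with a toy model: carcass value grows with diminishing returns while feed costs accrue linearly, so the per-day optimum comes earlier. All numbers below are invented, not taken from the cited model.

```python
import numpy as np

# Toy stand-in for a slaughter end-point decision (hypothetical values):
# carcass value saturates with days on feed; costs are linear.
days = np.arange(60, 301)
value = 900 * (1 - np.exp(-days / 120.0))   # $ carcass value, diminishing returns
cost = 2.2 * days + 150                     # $ feed cost plus fixed cost
profit = value - cost

d_total = days[np.argmax(profit)]           # day maximizing profit/rotation
d_daily = days[np.argmax(profit / days)]    # day maximizing profit/day
```

As in the abstract, maximizing profit per day shortens the rotation and implies lighter carcasses than maximizing total profit per rotation.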

  9. Overview of the "epigenetic end points in toxicologic pathology and relevance to human health" session of the 2014 Society Of Toxicologic Pathology Annual Symposium.

    Science.gov (United States)

    Hoenerhoff, Mark J; Hartke, James

    2015-01-01

    The theme of the Society of Toxicologic Pathology 2014 Annual Symposium was "Translational Pathology: Relevance of Toxicologic Pathology to Human Health." The 5th session focused on epigenetic end points in biology, toxicity, and carcinogenicity, and how those end points are relevant to human exposures. This overview highlights the various presentations in this session, discussing integration of epigenetics end points in toxicologic pathology studies, investigating the role of epigenetics in product safety assessment, epigenetic changes in cancers, methodologies to detect them, and potential therapies, chromatin remodeling in development and disease, and epigenomics and the microbiome. The purpose of this overview is to discuss the application of epigenetics to toxicologic pathology and its utility in preclinical or mechanistic based safety, efficacy, and carcinogenicity studies. © 2014 by The Author(s).

  10. Biomarkers of Host Response Predict Primary End-Point Radiological Pneumonia in Tanzanian Children with Clinical Pneumonia: A Prospective Cohort Study.

    Directory of Open Access Journals (Sweden)

    Laura K Erdman

    Diagnosing pediatric pneumonia is challenging in low-resource settings. The World Health Organization (WHO) has defined primary end-point radiological pneumonia for use in epidemiological and vaccine studies. However, radiography requires expertise and is often inaccessible. We hypothesized that plasma biomarkers of inflammation and endothelial activation may be useful surrogates for end-point pneumonia, and may provide insight into its biological significance. We studied children with WHO-defined clinical pneumonia (n = 155) within a prospective cohort of 1,005 consecutive febrile children presenting to Tanzanian outpatient clinics. Based on x-ray findings, participants were categorized as primary end-point pneumonia (n = 30), other infiltrates (n = 31), or normal chest x-ray (n = 94). Plasma levels of 7 host response biomarkers at presentation were measured by ELISA. Associations between biomarker levels and radiological findings were assessed by Kruskal-Wallis test and multivariable logistic regression. Biomarker ability to predict radiological findings was evaluated using receiver operating characteristic curve analysis and Classification and Regression Tree analysis. Compared to children with a normal x-ray, children with end-point pneumonia had significantly higher C-reactive protein, procalcitonin and Chitinase 3-like-1, while those with other infiltrates had elevated procalcitonin and von Willebrand Factor and decreased soluble Tie-2 and endoglin. Clinical variables were not predictive of radiological findings. Classification and Regression Tree analysis generated multi-marker models with improved performance over single markers for discriminating between groups. A model based on C-reactive protein and Chitinase 3-like-1 discriminated between end-point pneumonia and non-end-point pneumonia with 93.3% sensitivity (95% confidence interval 76.5-98.8), 80.8% specificity (72.6-87.1), positive likelihood ratio 4.9 (3.4-7.1), negative likelihood ratio 0
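A two-marker AND-rule of the kind produced by Classification and Regression Tree analysis can be sketched as an exhaustive threshold search maximizing Youden's J. The marker distributions below are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for two markers (values are illustrative only):
# cases are drawn from higher distributions than controls.
n_pos, n_neg = 30, 120
crp = np.r_[rng.lognormal(4.0, 0.5, n_pos), rng.lognormal(2.5, 0.6, n_neg)]
chi = np.r_[rng.lognormal(5.0, 0.4, n_pos), rng.lognormal(4.0, 0.5, n_neg)]
y = np.r_[np.ones(n_pos), np.zeros(n_neg)].astype(bool)

# Depth-2 CART-like stump: call "end-point pneumonia" if CRP > t1 AND
# CHI3L1 > t2, choosing thresholds to maximize J = sensitivity + specificity - 1.
best = (-1.0, None, None)
for t1 in np.quantile(crp, np.linspace(0.05, 0.95, 40)):
    for t2 in np.quantile(chi, np.linspace(0.05, 0.95, 40)):
        pred = (crp > t1) & (chi > t2)
        sens = (pred & y).sum() / y.sum()
        spec = (~pred & ~y).sum() / (~y).sum()
        if sens + spec - 1 > best[0]:
            best = (sens + spec - 1, t1, t2)
```

The combined rule can outperform either marker alone whenever the two markers carry partly independent information, which is the rationale for the multi-marker models in the study.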

  11. Development of a versatile tool for the simultaneous differential detection of Pseudomonas savastanoi pathovars by End Point and Real-Time PCR.

    Science.gov (United States)

    Tegli, Stefania; Cerboneschi, Matteo; Libelli, Ilaria Marsili; Santilli, Elena

    2010-05-28

    Pseudomonas savastanoi pv. savastanoi is the causal agent of olive knot disease. The strains isolated from oleander and ash belong to the pathovars nerii and fraxini, respectively. When artificially inoculated, pv. savastanoi also causes disease on ash, and pv. nerii also attacks olive and ash. Surprisingly, nothing is yet known about their distribution in nature on these hosts or whether spontaneous cross-infections occur. Meanwhile, sanitary certification programs for olive plants, also covering P. savastanoi, have been launched in many countries. The aim of this work was to develop several PCR-based tools for the rapid, simultaneous, differential and quantitative detection of these P. savastanoi pathovars, in multiplex and in planta. Specific PCR primers and probes for the pathovars savastanoi, nerii and fraxini of P. savastanoi were designed to be used in End Point and Real-Time PCR, with both SYBR Green and TaqMan chemistries. The specificity of all these assays was 100%, as assessed by testing forty-four P. savastanoi strains belonging to the three pathovars and having different geographical origins. For comparison, strains from the pathovars phaseolicola and glycinea of P. savastanoi and bacterial epiphytes from P. savastanoi host plants were also assayed, and all of them always tested negative. The analytical detection limits were about 5 - 0.5 pg of pure genomic DNA and about 10² genome equivalents per reaction. Similar analytical thresholds were achieved in Multiplex Real-Time PCR experiments, even on artificially inoculated olive plants. Here, for the first time, a set of PCR-based assays was developed for the simultaneous discrimination and detection of P. savastanoi pv. savastanoi, pv. nerii and pv. fraxini. These tests were shown to be highly reliable, pathovar-specific, sensitive, rapid and able to quantify these pathogens, both in multiplex reactions and in vivo. Compared with the other methods already available for P. savastanoi, the identification
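The quoted detection limit of about 10^2 genome equivalents per reaction can be sanity-checked by converting the DNA mass limit to a copy number. A minimal sketch, assuming a genome size of roughly 6 Mbp for P. savastanoi and the usual average of 650 g/mol per base pair of double-stranded DNA (both assumptions, not values stated in the record):

```python
# Convert a DNA mass to genome-equivalent copy number.
AVOGADRO = 6.022e23      # molecules per mole
BP_MOLAR_MASS = 650.0    # g/mol per base pair of double-stranded DNA (typical value)

def genome_equivalents(mass_g: float, genome_size_bp: float) -> float:
    """Number of genome copies contained in mass_g grams of genomic DNA."""
    genome_molar_mass = genome_size_bp * BP_MOLAR_MASS  # g/mol per whole genome
    return mass_g / genome_molar_mass * AVOGADRO

# 0.5 pg of DNA for an assumed ~6 Mbp Pseudomonas genome
copies = genome_equivalents(0.5e-12, 6.0e6)
print(round(copies))  # on the order of 10^2 copies
```

With these assumptions, the 0.5 pg lower mass limit corresponds to on the order of a hundred genomes, consistent with the threshold quoted in the abstract.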

  12. Global phase equilibrium calculations: Critical lines, critical end points and liquid-liquid-vapour equilibrium in binary mixtures

    DEFF Research Database (Denmark)

    Cismondi, Martin; Michelsen, Michael Locht

    2007-01-01

    A general strategy for global phase equilibrium calculations (GPEC) in binary mixtures is presented in this work, along with specific methods for calculation of the different parts involved. A Newton procedure using composition, temperature and volume as independent variables is used for calculation...
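The Newton procedure mentioned above can be illustrated generically. The sketch below is a plain multivariate Newton iteration with a finite-difference Jacobian applied to a toy residual; it is not the authors' GPEC implementation, which solves equilibrium equations in composition, temperature and volume.

```python
import numpy as np

def newton(residual, x0, tol=1e-9, max_iter=50):
    """Solve residual(x) = 0 by Newton's method with a numerical Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = residual(x)
        if np.linalg.norm(f) < tol:
            break
        # Forward-difference approximation of the Jacobian
        n = len(x)
        J = np.empty((n, n))
        h = 1e-7
        for j in range(n):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residual(xp) - f) / h
        x = x - np.linalg.solve(J, f)
    return x

# Toy residual: intersect a circle with a line (a stand-in for the
# equal-fugacity plus specification equations of a phase calculation).
def toy(x):
    return np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])

print(newton(toy, [2.0, 0.5]))  # converges to approximately [1., 1.]
```

The same skeleton applies once `residual` encodes the thermodynamic equations; robustness in practice depends on good initial estimates, which is what the continuation strategies of GPEC provide.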

  13. Estimated GFR Decline as a Surrogate End Point for Kidney Failure : A Post Hoc Analysis From the Reduction of End Points in Non-Insulin-Dependent Diabetes With the Angiotensin II Antagonist Losartan (RENAAL) Study and Irbesartan Diabetic Nephropathy Trial (IDNT)

    NARCIS (Netherlands)

    Lambers Heerspink, Hiddo; Weldegiorgis, Misghina; Inker, Lesley A.; Gansevoort, Ron; Parving, Hans-Henrik; Dwyer, Jamie P.; Mondal, Hasi; Coresh, Josef; Greene, Tom; Levey, Andrew S.; de Zeeuw, Dick

    Background: A doubling of serum creatinine value, corresponding to a 57% decline in estimated glomerular filtration rate (eGFR), is used frequently as a component of a composite kidney end point in clinical trials in type 2 diabetes. The aim of this study was to determine whether alternative end

  14. Titration of thorium and rare earths with ethylenediaminetetraacetic acid using semimethylthymol blue by visual end-point indication

    International Nuclear Information System (INIS)

    Hafez, M.A.H.; Kenawy, I.M.M.; Ramadan, M.A.M.

    1994-01-01

    The precision and accuracy attainable in direct complexometric titrations of Th 4+ consecutively with either lighter (La 3+ , Nd 3+ , Sm 3+ , Eu 3+ or Gd 3+ ) or heavier lanthanides (Dy 3+ ) in different proportions using Semimethylthymol Blue (SMTB) as a metallochromic indicator and disodium dihydrogen ethylenediaminetetraacetate were studied. Thorium(IV) was titrated at pH 2; the pH was then adjusted to 5.5-6.0 by adding hexamethylenetetramine (hexamine) buffer and acetylacetone-acetone solution, and La 3+ (or Nd 3+ , Sm 3+ , Eu 3+ , Gd 3+ or Dy 3+ ) was titrated. A comparison of the indicators SMTB and Methylthymol Blue (MTB) for successive titrations of Th 4+ and any of the rare earth ions was carried out. The proposed titration method was applied successfully to some naturally occurring ores and minerals containing thorium and some lanthanides, and the results were satisfactory. (Author)

  15. Effect of stereotactic dosimetric end points on overall survival for Stage I non–small cell lung cancer: A critical review

    Energy Technology Data Exchange (ETDEWEB)

    Mulryan, Kathryn; Leech, Michelle; Forde, Elizabeth, E-mail: eforde@tcd.ie

    2015-01-01

    Stereotactic body radiation therapy (SBRT) delivers a high biologically effective dose while minimizing toxicities to surrounding tissues. Within the scope of clinical trials and local practice, there are inconsistencies in the dosimetrics used to evaluate plan quality. The purpose of this critical review was to determine whether the dosimetric parameters used in SBRT plans have an effect on local control (LC), overall survival (OS), and toxicities. A database of relevant trials investigating SBRT for patients with early-stage non–small cell lung cancer was compiled, and a table of the dosimetric variables used was created. These parameters were compared and contrasted for LC, OS, and toxicities. Dosimetric end points appear to have no effect on OS or LC. Incidences of rib fractures correlate with a lack of reported dose-volume constraints (DVCs). This review highlights the great disparity present in clinical trials reporting dosimetrics, DVCs, and toxicities for lung SBRT. Further evidence is required before standard DVC guidelines can be introduced. Dosimetric end points specific to stereotactic treatment planning have been proposed but require further investigation before clinical implementation.

  16. Swimming speed alteration of Artemia sp. and Brachionus plicatilis as a sub-lethal behavioural end-point for ecotoxicological surveys.

    Science.gov (United States)

    Garaventa, Francesca; Gambardella, Chiara; Di Fino, Alessio; Pittore, Massimiliano; Faimali, Marco

    2010-03-01

    In this study, we investigated the possibility of extending a new behavioural bioassay (the Swimming Speed Alteration test, SSA test) to larvae of marine cyst-forming organisms: the brine shrimp Artemia sp. and the rotifer Brachionus plicatilis. Swimming speed was investigated as a behavioural end-point for application in ecotoxicology studies. A first experiment analysing the linear swimming speed of the two organisms was performed to verify the applicability of the video-camera tracking system, here referred to as the Swimming Behavioural Recorder (SBR). A second experiment was performed, exposing organisms to different toxic compounds (zinc pyrithione, Macrotrol MT-200, and Eserine). Swimming speed alteration was analysed together with mortality. The results of the first experiment indicate that the SBR is a suitable tool to measure the linear swimming speed of the two organisms, since the values obtained (3.05 mm s⁻¹ for Artemia sp. and 0.62 mm s⁻¹ for B. plicatilis) accord with other studies using the same organisms. Toxicity test results clearly indicate that the swimming speed of Artemia sp. and B. plicatilis is a valid behavioural end-point to detect stress at sub-lethal toxic substance concentrations. Indeed, alterations in swimming speed were detected at toxic compound concentrations as low as 0.1-5% of their LC50 values. In conclusion, the SSA test with B. plicatilis and Artemia sp. can be a good integrated behavioural output for application in marine ecotoxicology and environmental monitoring programs.
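The mean linear swimming speed reported by a tracking system such as the SBR reduces to path length over elapsed time. A hypothetical sketch of that reduction (the record does not describe the SBR's actual processing; the track below is illustrative):

```python
import math

def mean_speed(track):
    """Mean linear speed (mm/s) from a list of (t_s, x_mm, y_mm) samples."""
    if len(track) < 2:
        return 0.0
    path = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(track, track[1:])
    )
    elapsed = track[-1][0] - track[0][0]
    return path / elapsed

# Hypothetical track: 1 mm travelled along x every 0.5 s
track = [(0.0, 0.0, 0.0), (0.5, 1.0, 0.0), (1.0, 2.0, 0.0)]
print(mean_speed(track))  # 2.0 mm/s
```

Comparing this statistic between exposed and control groups is what turns the raw video tracks into the SSA end-point.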

  17. Effects of Cooking End-point Temperature and Muscle Part on Sensory 'Hardness' and 'Chewiness' Assessed Using Scales Presented in ISO11036:1994.

    Science.gov (United States)

    Sasaki, Keisuke; Motoyama, Michiyo; Narita, Takumi; Chikuni, Koichi

    2013-10-01

    Texture, and 'tenderness' in particular, is an important sensory characteristic for consumers' satisfaction with beef. Objective and detailed sensory measurements of beef texture are needed for the evaluation and management of beef quality. This study aimed to apply the sensory scales defined in ISO11036:1994 to evaluate the texture of beef. Longissimus and Semitendinosus muscles of three Holstein steers cooked to end-point temperatures of 60°C and 72°C were subjected to sensory analyses by a sensory panel with expertise regarding the ISO11036 scales. For the sensory analysis, standard scales of 'chewiness' (9-point) and 'hardness' (7-point) were presented to the sensory panel with reference materials defined in ISO11036. Both 'chewiness' and 'hardness' assessed according to the ISO11036 scales increased with increasing cooking end-point temperature and differed between Longissimus and Semitendinosus muscles. The sensory results were in good agreement with instrumental texture measurements. However, both texture ratings in this study fell in a narrower range than the full ISO scales. For beef texture, the ISO11036 scales for 'chewiness' and 'hardness' are useful for basic studies, but some alterations are needed for practical evaluation of muscle foods.

  18. Effects of Cooking End-point Temperature and Muscle Part on Sensory ‘Hardness’ and ‘Chewiness’ Assessed Using Scales Presented in ISO11036:1994

    Directory of Open Access Journals (Sweden)

    Keisuke Sasaki

    2013-10-01

    Texture, and ‘tenderness’ in particular, is an important sensory characteristic for consumers’ satisfaction with beef. Objective and detailed sensory measurements of beef texture are needed for the evaluation and management of beef quality. This study aimed to apply the sensory scales defined in ISO11036:1994 to evaluate the texture of beef. Longissimus and Semitendinosus muscles of three Holstein steers cooked to end-point temperatures of 60°C and 72°C were subjected to sensory analyses by a sensory panel with expertise regarding the ISO11036 scales. For the sensory analysis, standard scales of ‘chewiness’ (9-point) and ‘hardness’ (7-point) were presented to the sensory panel with reference materials defined in ISO11036. Both ‘chewiness’ and ‘hardness’ assessed according to the ISO11036 scales increased with increasing cooking end-point temperature and differed between Longissimus and Semitendinosus muscles. The sensory results were in good agreement with instrumental texture measurements. However, both texture ratings in this study fell in a narrower range than the full ISO scales. For beef texture, the ISO11036 scales for ‘chewiness’ and ‘hardness’ are useful for basic studies, but some alterations are needed for practical evaluation of muscle foods.

  19. Inter-laboratory assessment by trained panelists from France and the United Kingdom of beef cooked at two different end-point temperatures.

    Science.gov (United States)

    Gagaoua, Mohammed; Micol, Didier; Picard, Brigitte; Terlouw, Claudia E M; Moloney, Aidan P; Juin, Hervé; Meteau, Karine; Scollan, Nigel; Richardson, Ian; Hocquette, Jean-François

    2016-12-01

    Eating quality of the same meat samples from different animal types cooked at two end-point cooking temperatures (55°C and 74°C) was evaluated by trained panels in France and the United Kingdom. Tenderness and juiciness scores were greater at 55°C than at 74°C, irrespective of the animal type and the location of the panel. The UK panel, independently of animal type, gave significantly greater scores for beef flavour at 74°C (+7 to +24%), while the abnormal flavour score given by the French panel was higher at 74°C than at 55°C (+26%). Overall, cooking beef at a lower temperature increased tenderness and juiciness, irrespective of the location of the panel. In contrast, cooking beef at higher temperatures increased beef flavour and decreased abnormal flavour for the UK panelists but increased abnormal flavour for the French panel. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Effects of Different End-Point Cooking Temperatures on the Efficiency of Encapsulated Phosphates on Lipid Oxidation Inhibition in Ground Meat.

    Science.gov (United States)

    Kılıç, B; Şimşek, A; Claus, J R; Atılgan, E; Aktaş, N

    2015-10-01

    Effects of 0.5% encapsulated (e) phosphates (sodium tripolyphosphate, STP; sodium hexametaphosphate, HMP; sodium pyrophosphate, SPP) on lipid oxidation during storage (0, 1, and 7 d) of ground meat (chicken, beef) after being cooked to 3 end-point cooking temperatures (EPCT; 71, 74, and 77 °C) were evaluated. The use of STP or eSTP resulted in significantly lower cooking loss (CL) compared to encapsulated or unencapsulated forms of HMP and SPP. Increasing EPCT led to a significant increase in CL, and lipid oxidation was greater in chicken cooked to 77 °C than at 74 and 71 °C; differences among chicken samples were significant (P < 0.05). Findings suggest that encapsulated phosphates can be a strategy to inhibit lipid oxidation for the meat industry, and that the efficiency of encapsulated phosphates in inhibiting lipid oxidation can be enhanced by lowering EPCT. © 2015 Institute of Food Technologists®

  1. Use of cluster analysis and preference mapping to evaluate consumer acceptability of choice and select bovine M. longissimus lumborum steaks cooked to various end-point temperatures.

    Science.gov (United States)

    Schmidt, T B; Schilling, M W; Behrends, J M; Battula, V; Jackson, V; Sekhon, R K; Lawrence, T E

    2010-01-01

    Consumer research was conducted to evaluate the acceptability of choice and select steaks from the Longissimus lumborum that were cooked to varying degrees of doneness, using demographic information, cluster analysis and descriptive analysis. On average, using data from approximately 155 panelists, no differences (P>0.05) existed in consumer acceptability among select and choice steaks, and all treatment means ranged between like slightly and like moderately (6-7) on the hedonic scale. Individual consumers were highly variable in their perception of acceptability, and consumers were grouped into clusters (eight for select and seven for choice) based on their preference and liking of steaks. The largest consumer groups liked steaks from all treatments, but other groups showed significant preferences for particular treatments. Consumers could thus be grouped according to preference, liking and descriptive sensory attributes (juiciness, tenderness, bloody, metallic, and roasted) to further understand consumer perception of steaks that were cooked to different end-point temperatures.

  2. Molecular recognition in a diverse set of protein-ligand interactions studied with molecular dynamics simulations and end-point free energy calculations.

    Science.gov (United States)

    Wang, Bo; Li, Liwei; Hurley, Thomas D; Meroueh, Samy O

    2013-10-28

    End-point free energy calculations using MM-GBSA and MM-PBSA provide a detailed understanding of molecular recognition in protein-ligand interactions. The binding free energy can be used to rank-order protein-ligand structures in virtual screening for compound or target identification. Here, we carry out free energy calculations for a diverse set of 11 proteins bound to 14 small molecules using extensive explicit-solvent MD simulations. The structure of these complexes was previously solved by crystallography and their binding studied with isothermal titration calorimetry (ITC) data enabling direct comparison to the MM-GBSA and MM-PBSA calculations. Four MM-GBSA and three MM-PBSA calculations reproduced the ITC free energy within 1 kcal·mol⁻¹, highlighting the challenges in reproducing the absolute free energy from end-point free energy calculations. MM-GBSA exhibited better rank-ordering with a Spearman ρ of 0.68 compared to 0.40 for MM-PBSA with dielectric constant (ε = 1). An increase in ε resulted in significantly better rank-ordering for MM-PBSA (ρ = 0.91 for ε = 10), but larger ε significantly reduced the contributions of electrostatics, suggesting that the improvement is due to the nonpolar and entropy components, rather than a better representation of the electrostatics. The SVRKB scoring function applied to MD snapshots resulted in excellent rank-ordering (ρ = 0.81). Calculations of the configurational entropy using normal-mode analysis led to free energies that correlated significantly better to the ITC free energy than the MD-based quasi-harmonic approach, but the computed entropies showed no correlation with the ITC entropy. When the adaptation energy is taken into consideration by running separate simulations for complex, apo, and ligand (MM-PBSA-ADAPT), there is less agreement with the ITC data for the individual free energies, but remarkably good rank-ordering is observed (ρ = 0.89). Interestingly, filtering MD snapshots by prescoring
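The Spearman ρ used above to score rank-ordering is simply the Pearson correlation of the two rank vectors. A self-contained sketch with tie handling by average ranks (the input values below are illustrative, not the study's free energies):

```python
def ranks(values):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = mean_rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho: Pearson correlation computed on the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Perfectly monotonic toy data (computed vs. ITC free energies) -> rho = 1.0
print(spearman([-9.1, -7.4, -6.2], [-8.8, -7.0, -5.9]))  # 1.0
```

A high ρ indicates the method orders ligands correctly even when, as the abstract notes, the absolute free energies are off.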

  3. Radiographic Progression-Free Survival as a Clinically Meaningful End Point in Metastatic Castration-Resistant Prostate Cancer: The PREVAIL Randomized Clinical Trial.

    Science.gov (United States)

    Rathkopf, Dana E; Beer, Tomasz M; Loriot, Yohann; Higano, Celestia S; Armstrong, Andrew J; Sternberg, Cora N; de Bono, Johann S; Tombal, Bertrand; Parli, Teresa; Bhattacharya, Suman; Phung, De; Krivoshik, Andrew; Scher, Howard I; Morris, Michael J

    2018-05-01

    Drug development for metastatic castration-resistant prostate cancer has been limited by a lack of clinically relevant trial end points short of overall survival (OS). Radiographic progression-free survival (rPFS) as defined by the Prostate Cancer Clinical Trials Working Group 2 (PCWG2) is a candidate end point that represents a clinically meaningful benefit to patients. To demonstrate the robustness of the PCWG2 definition and to examine the relationship between rPFS and OS. PREVAIL was a phase 3, randomized, double-blind, placebo-controlled multinational study that enrolled 1717 chemotherapy-naive men with metastatic castration-resistant prostate cancer from September 2010 through September 2012. The data were analyzed in November 2016. Patients were randomized 1:1 to enzalutamide 160 mg or placebo until confirmed radiographic disease progression or a skeletal-related event and initiation of either cytotoxic chemotherapy or an investigational agent for prostate cancer treatment. Sensitivity analyses (SAs) of investigator-assessed rPFS were performed using the final rPFS data cutoff (May 6, 2012; 439 events; SA1) and the interim OS data cutoff (September 16, 2013; 540 events; SA2). Additional SAs using investigator-assessed rPFS from the final rPFS data cutoff assessed the impact of skeletal-related events (SA3), clinical progression (SA4), a confirmatory scan for soft-tissue disease progression (SA5), and all deaths regardless of time after study drug discontinuation (SA6). Correlations between investigator-assessed rPFS (SA2) and OS were calculated using Spearman ρ and Kendall τ via Clayton copula. In the 1717 men (mean age, 72.0 [range, 43.0-93.0] years in enzalutamide arm and 71.0 [range, 42.0-93.0] years in placebo arm), enzalutamide significantly reduced risk of radiographic progression or death in all SAs, with hazard ratios of 0.22 (SA1; 95% CI, 0.18-0.27), 0.31 (SA2; 95% CI, 0.27-0.35), 0.21 (SA3; 95% CI, 0.18-0.26), 0.21 (SA4; 95% CI, 0.17-0.26), 0

  4. Comet assay with gill cells of Mytilus galloprovincialis end point tools for biomonitoring of water antibiotic contamination: Biological treatment is a reliable process for detoxification.

    Science.gov (United States)

    Mustapha, Nadia; Zouiten, Amina; Dridi, Dorra; Tahrani, Leyla; Zouiten, Dorra; Mosrati, Ridha; Cherif, Ameur; Chekir-Ghedira, Leila; Mansour, Hedi Ben

    2016-04-01

    This article investigates the ability of Pseudomonas peli to treat industrial pharmaceutical wastewater (PW). Liquid chromatography-tandem mass spectrometry (MS/MS) analysis revealed the presence, in this PW, of a variety of antibiotics such as sulfathiazole, sulfamoxole, norfloxacine, cloxacilline, doxycycline, and cefquinome. P. peli grew very effectively in PW and induced a remarkable increase in chemical oxygen demand and biochemical oxygen demand (140.31% and 148.51%, respectively). On the other hand, the genotoxicity of the studied effluent, before and after 24 h of shaking incubation with P. peli, was evaluated in vivo in the Mediterranean wild mussel Mytilus galloprovincialis using the comet assay for quantification of DNA fragmentation. Results show that PW exhibited statistically significant genotoxicity at the tested doses (up to 0.33 ml/kg body weight (b.w.) of PW). However, genotoxicity decreased strongly when tested with the PW obtained after incubation with P. peli. We can conclude that comet-assay genotoxicity end points are useful tools to biomonitor the physicochemical and biological quality of water, and that P. peli can treat and detoxify the studied PW. © The Author(s) 2013.

  5. Pressure Injury Progression and Factors Associated With Different End-Points in a Home Palliative Care Setting: A Retrospective Chart Review Study.

    Science.gov (United States)

    Artico, Marco; D'Angelo, Daniela; Piredda, Michela; Petitti, Tommasangelo; Lamarca, Luciano; De Marinis, Maria Grazia; Dante, Angelo; Lusignani, Maura; Matarese, Maria

    2018-07-01

    Patients with advanced illnesses show the highest prevalence of pressure injuries. In the palliative care setting, the ultimate goal is injury healing, but equally important are wound maintenance, wound palliation (wound-related pain and symptom management), and primary and secondary wound prevention. To describe the course of healing of pressure injuries in a home palliative care setting according to different end-points, and to explore patient and caregiver characteristics and specific care activities associated with their achievement. Four-year retrospective chart review of 669 patients cared for by a home palliative care service; of those, 124 patients (18.5%) had at least one pressure injury and a survival of six months or less. The proportion of healed pressure injuries was 24.4%. Of the injuries not healed, 34.0% were in a maintenance phase, whereas 63.6% were in a process of deterioration. Body mass index (P = 0.0014), artificial nutrition (P = 0.002), and age were associated with the end-points reached; particular attention should be paid to artificial nutrition, continuous deep sedation, and the caregiver's role and gender. Copyright © 2018 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  6. The free fractions of circulating docosahexaenoic acid and eicosapentenoic acid as optimal end-point of measure in bioavailability studies on n-3 fatty acids.

    Science.gov (United States)

    Scarsi, Claudia; Levesque, Ann; Lisi, Lucia; Navarra, Pierluigi

    2015-05-01

    The high complexity of the n-3 fatty acid absorption process, along with the large endogenous fraction, makes bioavailability studies with these agents very challenging and deserving of special consideration. In this paper we report the results of a bioequivalence study between a new formulation of EPA+DHA ethyl esters developed by IBSA Institut Biochimique and a reference medicinal product present on the Italian market. Bioequivalence was demonstrated according to the criteria established by the EMA Guideline on the Investigation of Bioequivalence. We found that the free fractions represent a better and more sensitive end-point for bioequivalence investigations of n-3 fatty acids, since: (i) the overall and intra-subject variability of PK parameters was markedly lower compared to the same variability calculated on the total DHA and EPA fractions; and (ii) the absorption process was completed within 4 h, and the whole PK profile could be drawn within 12-15 h of drug administration. Copyright © 2014 Elsevier Ltd. All rights reserved.
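PK parameters such as AUC over the 12-15 h profile mentioned above are conventionally obtained by trapezoidal integration of baseline-corrected concentrations. A hypothetical sketch (the sampling times and concentrations below are illustrative, not the study's data):

```python
def auc_trapezoid(times, concs):
    """Area under the concentration-time curve, linear trapezoidal rule."""
    return sum(
        (t2 - t1) * (c1 + c2) / 2.0
        for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:])
    )

# Illustrative baseline-corrected free-DHA profile: times in h, concs in ug/mL
times = [0, 1, 2, 4, 8, 12]
concs = [0.0, 0.8, 1.4, 1.0, 0.4, 0.1]
print(round(auc_trapezoid(times, concs), 2))  # 7.7 (ug*h/mL)
```

Bioequivalence testing then compares the ratio of such AUC (and Cmax) values between test and reference products against the EMA acceptance interval.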

  7. An analysis of clinical and treatment related prognostic factors on outcome using biochemical control as an end-point in patients with prostate cancer treated with external beam irradiation

    International Nuclear Information System (INIS)

    Horwitz, Eric M.; Vicini, Frank A.; Ziaja, Ellen L.; Dmuchowski, Carl F.; Stromberg, Jannifer S.; Gustafson, Gary S.; Martinez, Alvaro A.

    1997-01-01

    Purpose: We reviewed our institution's experience in treating patients with clinically localized prostate cancer with external beam irradiation (RT) to determine if previously analyzed clinical and treatment related prognostic factors affected outcome when biochemical control was used as an end-point to evaluate results. Materials and methods: Between 1 January 1987 and 31 December 1991, 470 patients with clinically localized prostate cancer were treated with external beam RT using localized prostate fields at William Beaumont Hospital. Biochemical control was defined as a PSA nadir ≤1.5 ng/ml within 1 year of treatment. After achieving nadir, if two consecutive increases of PSA were noted, the patient was scored a failure at the time of the first increase. Prognostic factors, including the total number of days in treatment, the method of diagnosis, a history of any pretreatment transurethral resection of the prostate (TURP) and the type of boost, were analyzed. Results: Median follow-up was 48 months. No statistically significant difference in rates of biochemical control was noted for treatment time, overall time (date of biopsy to completion of RT), history of any pretreatment TURP, history of diagnosis by TURP, or boost technique. Patients diagnosed by TURP had a significant improvement in the overall rate of biochemical control (P < 0.03) compared to transrectal/transperineal biopsy. The 5-year actuarial rates were 58 versus 39%, respectively. This improvement was not evident when pretreatment PSA, T stage, or Gleason score were controlled for. On multivariate analysis, no variable was associated with outcome. When analysis was limited to a more favorable group of patients (T1/T2 tumors, pretreatment PSA ≤20 ng/ml and Gleason score <7), none of these variables was significantly predictive of biochemical control when controlling for pretreatment PSA, T stage and Gleason score. Conclusions: No significant effect of treatment time, overall time, pretreatment
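The failure definition used in this study (a PSA nadir ≤ 1.5 ng/ml within 1 year, with failure scored at the first of two consecutive post-nadir rises) is mechanical enough to express directly. A sketch of that rule, assuming follow-up is given as (months since RT, PSA) pairs; the series below is hypothetical:

```python
def biochemical_outcome(series, nadir_window=12.0, nadir_threshold=1.5):
    """Apply the nadir / two-consecutive-rises rule from the study.

    series: list of (months_since_RT, psa) sorted by time.
    Returns ('no_nadir', None), ('failure', months), or ('control', None).
    """
    within = [(t, p) for t, p in series if t <= nadir_window]
    if not within or min(p for _, p in within) > nadir_threshold:
        return ("no_nadir", None)            # nadir <= 1.5 ng/ml never reached
    nadir_t = min(within, key=lambda tp: tp[1])[0]
    after = [tp for tp in series if tp[0] >= nadir_t]
    for i in range(len(after) - 2):
        if after[i + 1][1] > after[i][1] and after[i + 2][1] > after[i + 1][1]:
            return ("failure", after[i + 1][0])  # scored at the first increase
    return ("control", None)

# Hypothetical follow-up: nadir 0.8 at 12 months, then two consecutive rises
follow_up = [(3, 2.0), (6, 1.1), (12, 0.8), (18, 1.0), (24, 1.4), (30, 2.1)]
print(biochemical_outcome(follow_up))  # ('failure', 18)
```

Encoding the rule this way makes the actuarial analysis reproducible, since every chart is scored by the same deterministic criterion.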

  8. Re-analysis of clinical and treatment related prognostic factors on outcome using biochemical control as an end-point in patients with prostate cancer treated with external beam irradiation

    International Nuclear Information System (INIS)

    Ziaja, Ellen L.; Horwitz, Eric M.; Vicini, Frank A.; Dmuchowski, Carl F.; Brabbins, Donald S.; Gustafson, Gary S.; Hollander, Jay; Matter, Richard C.; Stromberg, Jannifer S.; Martinez, Alvaro A.

    1996-01-01

    Purpose: Prostate specific antigen (PSA) has been established as the most important prognostic factor for prostate cancer. We reviewed our experience treating patients with clinically localized prostate cancer with external beam irradiation (RT) to evaluate if previously defined clinical and treatment related prognostic factors remain valid when biochemical control is used as an end-point to evaluate results. Methods and Materials: Between 1/87 and 12/91, 480 patients with clinically localized prostate cancer received external beam irradiation (RT) using localized prostate fields at William Beaumont Hospital. The median dose to the prostate was 66.6 Gy (range 58 - 70.4 Gy) using a 4 field or arc technique. Pre- and post-treatment serum PSA levels were recorded. Biochemical control was defined as a PSA nadir ≤ 1.5 ng/ml within 1 year of treatment completion. After achieving nadir, if 2 consecutive increases of PSA were noted, the patient was scored a failure at the time of the first increase. Patients (pts) were divided into 3 groups according to the total number of days on treatment: ≤ 49 days (≤ 7 weeks), 21 pts; 50-63 days (8-9 weeks), 429 pts; and ≥ 64 days (≥ 9 weeks), 15 pts. Patients were also divided into groups with respect to the method of diagnosis: TURP (81 pts) or other means (399 pts). Patients were further divided into 2 groups according to whether they had ever had a pre-treatment TURP (170 pts) or not (310 pts). Patients were divided into 2 groups according to boost technique: bilateral arcs (459 pts) or 4 field box (21 pts). Patients were also divided into groups according to total RT dose: ≤ 70 Gy (421 pts) or > 70 Gy (59 pts), and were further subdivided into 3 dose groups, the highest receiving > 70 Gy (59 pts). Results: Median follow-up is 48 months (range 3 - 112 months). No statistically significant difference in rates of biochemical control was noted for treatment time, overall time (date of biopsy to completion of RT), or treatment time divided into

  9. Sensitivity of predictions in an effective model: Application to the chiral critical end point position in the Nambu-Jona-Lasinio model

    International Nuclear Information System (INIS)

    Biguet, Alexandre; Hansen, Hubert; Brugiere, Timothee; Costa, Pedro; Borgnat, Pierre

    2015-01-01

    The measurement of the position of the chiral critical end point (CEP) in the QCD phase diagram is under debate. While it is possible to predict its position by using effective models specifically built to reproduce some of the features of the underlying theory (QCD), the quality of the predictions (e.g., the CEP position) obtained by such effective models depends on whether solving the model equations constitutes a well- or ill-posed inverse problem. Considering these predictions as inverse problems provides tools to evaluate whether the problem is ill-conditioned, meaning that infinitesimal variations of the inputs of the model can cause comparatively large variations of the predictions. If it is ill-conditioned, this has major consequences, because finite variations could come from experimental and/or theoretical errors. In the following, we apply such reasoning to the predictions of a particular Nambu-Jona-Lasinio model within the mean field + ring approximations, with special attention to the prediction of the chiral CEP position in the (T-μ) plane. We find that the problem is ill-conditioned (i.e. very sensitive to input variations) for the T-coordinate of the CEP, whereas it is well-posed for the μ-coordinate of the CEP. As a consequence, when the chiral condensate varies in a 10 MeV range, μ_CEP varies far less. As an illustration of how problematic this could be, we show that the main consequence of taking into account finite variation of the inputs is that the existence of the CEP itself cannot be predicted anymore: for a deviation as low as 0.6% with respect to vacuum phenomenology (well within the estimation of the first correction to the ring approximation) the CEP may or may not exist. (orig.)
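The conditioning argument can be made concrete with a relative condition number estimated by finite differences: how much a prediction moves, in relative terms, per unit relative change of an input. A toy sketch (the actual NJL gap equations are far more involved; the two model functions below are illustrative stand-ins):

```python
def relative_condition(f, x, eps=1e-6):
    """Estimate |x * f'(x) / f(x)| with a central finite difference."""
    df = (f(x * (1 + eps)) - f(x * (1 - eps))) / (2 * x * eps)
    return abs(x * df / f(x))

# Toy "predictions": one depends steeply on its input (standing in for
# T_CEP as a function of a model parameter), one is nearly insensitive.
steep = lambda x: (x - 0.99) ** 0.5   # near-singular close to x = 0.99
flat = lambda x: 2.0 + 0.01 * x       # well-conditioned prediction

print(relative_condition(steep, 1.0))  # large  -> ill-conditioned
print(relative_condition(flat, 1.0))   # small  -> well-conditioned
```

A large relative condition number is exactly the situation described above: input uncertainties at the percent level translate into prediction changes large enough to move, or remove, the CEP.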

  10. Photo-neutron reaction cross-section for 93Nb in the end-point bremsstrahlung energies of 12–16 and 45–70 MeV

    International Nuclear Information System (INIS)

    Naik, H.; Kim, G.N.; Schwengner, R.; Kim, K.; Zaman, M.; Tatari, M.; Sahid, M.; Yang, S.C.; John, R.; Massarczyk, R.; Junghans, A.; Shin, S.G.; Key, Y.; Wagner, A.; Lee, M.W.; Goswami, A.; Cho, M.-H.

    2013-01-01

    The photo-neutron cross-sections of 93 Nb at the end-point bremsstrahlung energies of 12, 14 and 16 MeV as well as 45, 50, 55, 60 and 70 MeV have been determined by the activation and off-line γ-ray spectrometric techniques using the 20 MeV electron linac (ELBE) at Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany, and the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Pohang, Korea. The 93 Nb(γ, xn, x=1–4) reaction cross-sections as a function of photon energy were also calculated using the computer code TALYS 1.4. The flux-weighted average values were obtained from the experimental and the theoretical (TALYS) values based on mono-energetic photons. The experimental values of the present work are in good agreement with the flux-weighted theoretical values of TALYS 1.4 but are slightly higher than the flux-weighted experimental data for mono-energetic photons. It was also found that the theoretical and experimental values of the present work and the literature data for the 93 Nb(γ, xn) reaction cross-sections increase from the threshold values up to a certain energy, where other reaction channels open. However, the increase of the 93 Nb(γ, n) and 93 Nb(γ, 2n) reaction cross-sections is sharper compared to the 93 Nb(γ, 3n) and 93 Nb(γ, 4n) reaction cross-sections. The sharp increase of the 93 Nb(γ, n) and 93 Nb(γ, 2n) reaction cross-sections from the threshold value up to 17–22 MeV is due to the Giant Dipole Resonance (GDR) effect, besides the role of excitation energy. Above a certain value, the individual 93 Nb(γ, xn) reaction cross-sections decrease with increasing bremsstrahlung energy due to the opening of other reaction channels.
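The flux-weighted average cross-section used to compare bremsstrahlung-induced yields with mono-energetic data is a weighted mean of σ(E) over the photon flux from threshold to the end-point energy, ⟨σ⟩ = ∫σ(E)φ(E)dE / ∫φ(E)dE. A sketch with illustrative numbers (a GDR-like σ shape and a 1/E-like flux; not the 93 Nb data):

```python
def flux_weighted_sigma(energies, sigma, flux):
    """<sigma> = integral(sigma*phi dE) / integral(phi dE), trapezoidal rule."""
    def trapz(y):
        return sum((e2 - e1) * (y1 + y2) / 2.0
                   for e1, e2, y1, y2 in zip(energies, energies[1:], y, y[1:]))
    weighted = [s * f for s, f in zip(sigma, flux)]
    return trapz(weighted) / trapz(flux)

# Illustrative tabulated cross-section (mb) from threshold to the end point
E = [9, 11, 13, 15, 16]                 # MeV grid
sigma = [5.0, 60.0, 160.0, 90.0, 40.0]  # GDR-like shape
flux = [1 / e for e in E]               # crude 1/E bremsstrahlung spectrum
print(round(flux_weighted_sigma(E, sigma, flux), 1))  # ≈ 80 mb for these toy numbers
```

Because the bremsstrahlung flux falls with energy, the weighted average is pulled toward the cross-section near the GDR peak rather than the end-point energy.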

  11. Comparison of burrowing and stimuli-evoked pain behaviors as end-points in rat models of inflammatory pain and peripheral neuropathic pain

    Directory of Open Access Journals (Sweden)

    Arjun eMuralidharan

    2016-05-01

    Establishment and validation of ethologically-relevant, non-evoked behavioural end-points as surrogate measures of spontaneous pain in rodent pain models has been proposed as a means to improve preclinical-to-clinical research translation in the pain field. Here, we compared the utility of burrowing behaviour with hypersensitivity to applied mechanical stimuli for pain assessment in rat models of chronic inflammatory and peripheral neuropathic pain. Briefly, groups of male Sprague-Dawley rats were habituated to the burrowing environment and trained over a 5-day period. Rats that burrowed ≤ 450 g of gravel on any two days of the individual training phase were excluded from the study. The remaining rats received either a unilateral intraplantar injection of Freund’s complete adjuvant (FCA) or saline, or underwent unilateral chronic constriction injury (CCI) of the sciatic nerve or sham surgery. Baseline burrowing behaviour and evoked pain behaviours were assessed prior to model induction, and twice weekly until study completion on day 14. For FCA- and CCI-rats, but not the corresponding groups of sham-rats, evoked mechanical hypersensitivity developed in a temporal manner in the ipsilateral hindpaws. Although burrowing behaviour also decreased in a temporal manner for both FCA- and CCI-rats, there was considerable inter-animal variability. By contrast, mechanical hyperalgesia and mechanical allodynia in the ipsilateral hindpaws of FCA- and CCI-rats, respectively, exhibited minimal inter-animal variability. Our data collectively show that burrowing behaviour is altered in rodent models of chronic inflammatory pain and peripheral neuropathic pain. However, large group sizes are needed to ensure studies are adequately powered, due to considerable inter-animal variability.
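The closing point, that higher inter-animal variability forces larger group sizes, can be quantified with the standard two-sample normal-approximation formula n ≈ 2(z₁₋α/₂ + z₁₋β)²σ²/δ² per group. A sketch with an illustrative effect size and two standard deviations (hypothetical numbers, not the study's data):

```python
import math
from statistics import NormalDist

def n_per_group(sd, delta, alpha=0.05, power=0.8):
    """Per-group sample size for a two-sample comparison, normal approximation."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)   # two-sided alpha
    zb = z.inv_cdf(power)           # target power
    return math.ceil(2 * (za + zb) ** 2 * sd ** 2 / delta ** 2)

# Same hypothetical treatment effect (100 g less gravel burrowed), two SDs:
print(n_per_group(sd=60, delta=100))   # low-variability end-point -> 6 per group
print(n_per_group(sd=180, delta=100))  # high-variability end-point -> 51 per group
```

Tripling the standard deviation roughly nine-folds the required group size, which is why a variable end-point like burrowing demands far larger cohorts than a low-variability evoked measure.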

  12. Modest blood pressure reduction with valsartan in acute ischemic stroke: a prospective, randomized, open-label, blinded-end-point trial.

    Science.gov (United States)

    Oh, Mi Sun; Yu, Kyung-Ho; Hong, Keun-Sik; Kang, Dong-Wha; Park, Jong-Moo; Bae, Hee-Joon; Koo, Jaseong; Lee, Juneyoung; Lee, Byung-Chul

    2015-07-01

    To assess the efficacy and safety of modest blood pressure (BP) reduction with valsartan within 48 h after symptom onset in patients with acute ischemic stroke and high BP. This was a multicenter, prospective, randomized, open-label, blinded-end-point trial. A total of 393 subjects were recruited at 28 centers and then randomly assigned in a 1:1 ratio to receive valsartan (n = 195) or no treatment (n = 198) for seven days after presentation. The primary outcome was death or dependency, defined as a score of 3-6 on the modified Rankin Scale (mRS) at 90 days after symptom onset. Early neurological deterioration (END) within seven days and 90-day major vascular events were also assessed. There were 372 patients who completed the 90-day follow-up. The valsartan group had 46 of 187 patients (24.6%) with a 90-day mRS 3-6, compared with 42 of 185 patients (22.6%) in the control group (odds ratio [OR], 1.11; 95% confidence interval [CI], 0.69-1.79; P = 0.667). The rate of major vascular events did not differ between groups (OR, 1.41; 95% CI, 0.44-4.49; P = 0.771). There was a significant increase of END in the valsartan group (OR, 2.43; 95% CI, 1.25-4.73; P = 0.008). Early reduction of BP with valsartan did not reduce death or dependency and major vascular events at 90 days, but increased the risk of END. © 2015 World Stroke Organization.
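
    The primary-outcome statistics above can be reproduced from the reported counts (46/187 valsartan vs. 42/185 control) with the standard Woolf log-odds-ratio interval; a minimal sketch in Python (the function name is illustrative, not from the trial's analysis code):

    ```python
    import math

    def odds_ratio_ci(a, n1, b, n2, z=1.96):
        """Odds ratio and Woolf 95% CI for a/n1 events vs. b/n2 events."""
        c, d = n1 - a, n2 - b                      # non-events in each arm
        or_ = (a / c) / (b / d)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of the log odds ratio
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    # 90-day mRS 3-6: valsartan 46/187, control 42/185
    or_, lo, hi = odds_ratio_ci(46, 187, 42, 185)
    print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR 1.11 (95% CI 0.69-1.79)
    ```

    The computed interval matches the reported OR 1.11 (0.69-1.79), confirming the abstract's arithmetic.
    
    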

  13. Sensitivity of predictions in an effective model: Application to the chiral critical end point position in the Nambu-Jona-Lasinio model

    Energy Technology Data Exchange (ETDEWEB)

    Biguet, Alexandre; Hansen, Hubert; Brugiere, Timothee [Universite Claude Bernard de Lyon, Institut de Physique Nucleaire de Lyon, CNRS/IN2P3, Villeurbanne Cedex (France); Costa, Pedro [Universidade de Coimbra, Centro de Fisica Computacional, Departamento de Fisica, Coimbra (Portugal); Borgnat, Pierre [CNRS, l' Ecole normale superieure de Lyon, Laboratoire de Physique, Lyon Cedex 07 (France)

    2015-09-15

    The measurement of the position of the chiral critical end point (CEP) in the QCD phase diagram is under debate. While it is possible to predict its position by using effective models specifically built to reproduce some of the features of the underlying theory (QCD), the quality of the predictions (e.g., the CEP position) obtained by such effective models depends on whether solving the model equations constitutes a well- or ill-posed inverse problem. Considering these predictions as inverse problems provides tools to evaluate whether the problem is ill-conditioned, meaning that infinitesimal variations of the inputs of the model can cause comparatively large variations of the predictions. If it is ill-conditioned, finite variations of the inputs, such as those arising from experimental and/or theoretical errors, have major consequences. In the following, we apply such reasoning to the predictions of a particular Nambu-Jona-Lasinio model within the mean field + ring approximations, with special attention to the prediction of the chiral CEP position in the (T-μ) plane. We find that the problem is ill-conditioned (i.e., very sensitive to input variations) for the T-coordinate of the CEP, whereas it is well-posed for the μ-coordinate of the CEP. As a consequence, when the chiral condensate varies in a 10 MeV range, μ_CEP varies far less. As an illustration of how problematic this could be, we show that the main consequence of taking into account finite variations of the inputs is that the existence of the CEP itself can no longer be predicted: for a deviation as low as 0.6% with respect to vacuum phenomenology (well within the estimation of the first correction to the ring approximation) the CEP may or may not exist. (orig.)
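
    The ill-conditioning discussed above can be probed numerically by estimating the Jacobian of model outputs with respect to inputs: a large entry means an infinitesimal input change produces a large prediction change. A toy sketch (the two-output function below is purely illustrative, not the NJL model):

    ```python
    import numpy as np

    def sensitivity(f, x0, eps=1e-6):
        """Forward finite-difference Jacobian of model outputs w.r.t. inputs."""
        x0 = np.asarray(x0, float)
        f0 = np.asarray(f(x0), float)
        J = np.empty((f0.size, x0.size))
        for j in range(x0.size):
            dx = np.zeros_like(x0)
            dx[j] = eps
            J[:, j] = (np.asarray(f(x0 + dx), float) - f0) / eps
        return J

    # Toy stand-in for an effective model: output 0 (think "T_CEP") reacts
    # violently to its input, output 1 (think "mu_CEP") barely at all.
    model = lambda x: [np.exp(50 * x[0]), 1 + 0.01 * x[1]]
    J = sensitivity(model, [0.1, 0.1])
    print(np.abs(J))  # row 0 is huge, row 1 is tiny: output 0 is ill-conditioned
    ```

    In the paper's terms, a row of the Jacobian with entries of this magnitude is what makes the T-coordinate prediction unreliable while the μ-coordinate remains stable.
    
    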

  14. Estimating the Cost of Preeclampsia in the Healthcare System: Cross-Sectional Study Using Data From SCOPE Study (Screening for Pregnancy End Points).

    Science.gov (United States)

    Fox, Aimée; McHugh, Sheena; Browne, John; Kenny, Louise C; Fitzgerald, Anthony; Khashan, Ali S; Dempsey, Eugene; Fahy, Ciara; O'Neill, Ciaran; Kearney, Patricia M

    2017-12-01

    To estimate the cost of preeclampsia from the national health payer's perspective using secondary data from the SCOPE study (Screening for Pregnancy End Points). SCOPE is an international observational prospective study of healthy nulliparous women with singleton pregnancies. Using data from the Irish cohort recruited between November 2008 and February 2011, all women with preeclampsia and a 10% random sample of women without preeclampsia were selected. Additional health service use data were extracted from the consenting participants' medical records for maternity services which were not included in SCOPE. Unit costs were based on estimates from 3 existing Irish studies. Costs were extrapolated to a national level using a prevalence rate of 5% to 7% among nulliparous pregnancies. Within the cohort of 1774 women, 68 developed preeclampsia (3.8%) and 171 women were randomly selected as controls. Women with preeclampsia used higher levels of maternity services. The average cost of a pregnancy complicated by preeclampsia was €5243 per case compared with €2452 per case for an uncomplicated pregnancy. The national cost of preeclampsia is between €6.5 and €9.1 million per annum based on the 5% to 7% prevalence rate. Postpartum care was the largest contributor to these costs (€4.9-€6.9 million), followed by antepartum care (€0.9-€1.3 million) and peripartum care (€0.6-€0.7 million). Women with preeclampsia generate significantly higher maternity costs than women without preeclampsia. These cost estimates will allow policy-makers to efficiently allocate resources for this pregnancy-specific condition. Moreover, these estimates are useful for future research assessing the cost-effectiveness of preeclampsia screening and treatment. © 2017 American Heart Association, Inc.
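
    The national extrapolation above is simply prevalence × annual nulliparous pregnancies × cost per case. The abstract does not give the national denominator; the figure below (~25,000 nulliparous pregnancies per year) is a hypothetical value back-calculated to be consistent with the reported €6.5-9.1 million range:

    ```python
    COST_PREECLAMPSIA = 5243   # EUR per complicated pregnancy (from the study)
    N_NULLIPAROUS = 25_000     # hypothetical national annual figure (assumption)

    for prevalence in (0.05, 0.07):
        cases = N_NULLIPAROUS * prevalence
        national_cost = cases * COST_PREECLAMPSIA
        print(f"prevalence {prevalence:.0%}: ~EUR {national_cost / 1e6:.2f} M")
    ```

    With that assumed denominator the 5% and 7% prevalence bounds land near €6.6M and €9.2M, in line with the €6.5-9.1M quoted in the abstract.
    
    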

  15. Capecitabine and oxaliplatin in the preoperative multimodality treatment of rectal cancer: surgical end points from National Surgical Adjuvant Breast and Bowel Project trial R-04.

    Science.gov (United States)

    O'Connell, Michael J; Colangelo, Linda H; Beart, Robert W; Petrelli, Nicholas J; Allegra, Carmen J; Sharif, Saima; Pitot, Henry C; Shields, Anthony F; Landry, Jerome C; Ryan, David P; Parda, David S; Mohiuddin, Mohammed; Arora, Amit; Evans, Lisa S; Bahary, Nathan; Soori, Gamini S; Eakle, Janice; Robertson, John M; Moore, Dennis F; Mullane, Michael R; Marchello, Benjamin T; Ward, Patrick J; Wozniak, Timothy F; Roh, Mark S; Yothers, Greg; Wolmark, Norman

    2014-06-20

    The optimal chemotherapy regimen administered concurrently with preoperative radiation therapy (RT) for patients with rectal cancer is unknown. National Surgical Adjuvant Breast and Bowel Project trial R-04 compared four chemotherapy regimens administered concomitantly with RT. Patients with clinical stage II or III rectal cancer who were undergoing preoperative RT (45 Gy in 25 fractions over 5 weeks plus a boost of 5.4 Gy to 10.8 Gy in three to six daily fractions) were randomly assigned to one of the following chemotherapy regimens: continuous intravenous infusional fluorouracil (CVI FU; 225 mg/m², 5 days per week), with or without intravenous oxaliplatin (50 mg/m² once per week for 5 weeks), or oral capecitabine (825 mg/m² twice per day, 5 days per week), with or without oxaliplatin (50 mg/m² once per week for 5 weeks). Before random assignment, the surgeon indicated whether the patient was eligible for sphincter-sparing surgery based on clinical staging. The surgical end points were complete pathologic response (pCR), sphincter-sparing surgery, and surgical downstaging (conversion to sphincter-sparing surgery). From September 2004 to August 2010, 1,608 patients were randomly assigned. No significant differences in the rates of pCR, sphincter-sparing surgery, or surgical downstaging were identified between the CVI FU and capecitabine regimens or between the two regimens with or without oxaliplatin. Patients treated with oxaliplatin experienced significantly more grade 3 or 4 diarrhea (P < .001). Administering capecitabine with preoperative RT achieved similar rates of pCR, sphincter-sparing surgery, and surgical downstaging compared with CVI FU. Adding oxaliplatin did not improve surgical outcomes but added significant toxicity. The definitive analysis of local tumor control, disease-free survival, and overall survival will be performed when the protocol-specified number of events has occurred. © 2014 by American Society of Clinical Oncology.

  16. The usefulness of cytogenetic parameters, level of p53 protein and endogenous glutathione as intermediate end-points in raw betel-nut genotoxicity.

    Science.gov (United States)

    Kumpawat, K; Chatterjee, A

    2003-07-01

    Betel-nut (BN) chewing related oral mucosal lesions are potential hazards to a large population worldwide. Genotoxicity of betel alkaloids, polyphenol and tannin fractions has been reported. It has been shown earlier that BN ingredients altered the level of endogenous glutathione (GSH), which could modulate the host susceptibility to the action of other chemical carcinogens. The north-east Indian variety of BN, locally known as 'kwai', is raw, wet and consumed unprocessed with betel-leaf and slaked lime, and contains higher alkaloids, polyphenol and tannins compared with the dried variety. Therefore, the purpose of this study was to investigate the extent of DNA damage, the pattern of cell kinetics, and the levels of p53 protein and endogenous GSH in kwai chewers in the tribal population of Meghalaya state in the northeastern region of India, to determine whether these end-points could serve as biomarkers of genetic damage of relevance for the genotoxic/carcinogenic process. The present data show higher DNA damage, delay in cell kinetics, p53 expression and lower GSH-level in heavy chewers (HC) than nonchewers (NC). The influence of bleomycin (BLM) on chromatid break induction in the G2-phase of peripheral blood lymphocytes in NC and HC was analysed to determine individual susceptibility to carcinogenic assaults. HC showed higher induction of chromatid breaks than NC. Risk assessment in this study suggests an interaction between carcinogen exposure and mutagen sensitivity measures, risk estimates being higher in those individuals who both consume kwai and express sensitivity to free radical oxygen damage in vitro. From this study it seems that, besides cytogenetic parameters, the levels of endogenous GSH and p53 protein could act as effective biomarkers for kwai chewers.

  17. Calculation of absorbed dose and biological effectiveness from photonuclear reactions in a bremsstrahlung beam of end point 50 MeV.

    Science.gov (United States)

    Gudowska, I; Brahme, A; Andreo, P; Gudowski, W; Kierkegaard, J

    1999-09-01

    The absorbed dose due to photonuclear reactions in soft tissue, lung, breast, adipose tissue and cortical bone has been evaluated for a scanned bremsstrahlung beam of end point 50 MeV from a racetrack accelerator. The Monte Carlo code MCNP4B was used to determine the photon source spectrum from the bremsstrahlung target and to simulate the transport of photons through the treatment head and the patient. Photonuclear particle production in tissue was calculated numerically using the energy distributions of photons derived from the Monte Carlo simulations. The transport of photoneutrons in the patient and the photoneutron absorbed dose to tissue were determined using MCNP4B; the absorbed dose due to charged photonuclear particles was calculated numerically assuming total energy absorption in tissue voxels of 1 cm³. The photonuclear absorbed dose to soft tissue, lung, breast and adipose tissue is about (0.11-0.12) ± 0.05% of the maximum photon dose at a depth of 5.5 cm. The absorbed dose to cortical bone is about 45% larger than that to soft tissue. If the contributions from all photoparticles (n, p, ³He and ⁴He particles and recoils of the residual nuclei) produced in the soft tissue and the accelerator, and from positron radiation and gammas due to induced radioactivity and excited states of the nuclei, are taken into account, the total photonuclear absorbed dose delivered to soft tissue is about 0.15 ± 0.08% of the maximum photon dose. It has been estimated that the RBE of the photon beam of 50 MV acceleration potential is approximately 2% higher than that of conventional ⁶⁰Co radiation.

  18. Calculation of absorbed dose and biological effectiveness from photonuclear reactions in a bremsstrahlung beam of end point 50 MeV

    International Nuclear Information System (INIS)

    Gudowska, I.; Brahme, A.; Andreo, P.; Gudowski, W.; Kierkegaard, J.

    1999-01-01

    The absorbed dose due to photonuclear reactions in soft tissue, lung, breast, adipose tissue and cortical bone has been evaluated for a scanned bremsstrahlung beam of end point 50 MeV from a racetrack accelerator. The Monte Carlo code MCNP4B was used to determine the photon source spectrum from the bremsstrahlung target and to simulate the transport of photons through the treatment head and the patient. Photonuclear particle production in tissue was calculated numerically using the energy distributions of photons derived from the Monte Carlo simulations. The transport of photoneutrons in the patient and the photoneutron absorbed dose to tissue were determined using MCNP4B; the absorbed dose due to charged photonuclear particles was calculated numerically assuming total energy absorption in tissue voxels of 1 cm³. The photonuclear absorbed dose to soft tissue, lung, breast and adipose tissue is about (0.11-0.12) ± 0.05% of the maximum photon dose at a depth of 5.5 cm. The absorbed dose to cortical bone is about 45% larger than that to soft tissue. If the contributions from all photoparticles (n, p, ³He and ⁴He particles and recoils of the residual nuclei) produced in the soft tissue and the accelerator, and from positron radiation and gammas due to induced radioactivity and excited states of the nuclei, are taken into account, the total photonuclear absorbed dose delivered to soft tissue is about 0.15 ± 0.08% of the maximum photon dose. It has been estimated that the RBE of the photon beam of 50 MV acceleration potential is approximately 2% higher than that of conventional ⁶⁰Co radiation. (author)

  19. Test-retest reliability of the KINARM end-point robot for assessment of sensory, motor and neurocognitive function in young adult athletes.

    Directory of Open Access Journals (Sweden)

    Cameron S Mang

    Full Text Available Current assessment tools for sport-related concussion are limited by a reliance on subjective interpretation and patient symptom reporting. Robotic assessments may provide more objective and precise measures of neurological function than traditional clinical tests. To determine the reliability of assessments of sensory, motor and cognitive function conducted with the KINARM end-point robotic device in young adult elite athletes. Sixty-four randomly selected healthy, young adult elite athletes participated. Twenty-five individuals (25 M; mean age ± SD, 20.2 ± 2.1 years) participated in a within-season study, where three assessments were conducted within a single season (assessments labeled by session: S1, S2, S3). An additional 39 individuals (28 M; 22.8 ± 6.0 years) participated in a year-to-year study, where annual pre-season assessments were conducted for three consecutive seasons (assessments labeled by year: Y1, Y2, Y3). Forty-four parameters from five robotic tasks (Visually Guided Reaching, Position Matching, Object Hit, Object Hit and Avoid, and Trail Making B) and overall Task Scores describing performance on each task were quantified. Test-retest reliability was determined by intra-class correlation coefficients (ICCs) between the first and second, and second and third assessments. In the within-season study, ICCs were ≥0.50 for 68% of parameters between S1 and S2, 80% of parameters between S2 and S3, and for three of the five Task Scores both between S1 and S2, and S2 and S3. In the year-to-year study, ICCs were ≥0.50 for 64% of parameters between Y1 and Y2, 82% of parameters between Y2 and Y3, and for four of the five Task Scores both between Y1 and Y2, and Y2 and Y3. Overall, the results suggest moderate-to-good test-retest reliability for the majority of parameters measured by the KINARM robot in healthy young adult elite athletes. Future work will consider the potential use of this information for clinical assessment of concussion.
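
    Test-retest reliability of the kind reported above is commonly computed as a two-way random-effects, absolute-agreement intra-class correlation; a minimal sketch of the ICC(2,1) form (the exact variant used by the study is not stated in the abstract):

    ```python
    import numpy as np

    def icc_2_1(X):
        """Two-way random-effects, absolute-agreement ICC(2,1).

        X: (n subjects x k sessions) array of scores.
        """
        X = np.asarray(X, float)
        n, k = X.shape
        grand = X.mean()
        # Mean squares from the two-way ANOVA decomposition
        MSR = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
        MSC = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # sessions
        SSE = ((X - grand) ** 2).sum() - MSR * (n - 1) - MSC * (k - 1)
        MSE = SSE / ((n - 1) * (k - 1))
        return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

    # Identical scores across two sessions -> perfect reliability
    scores = np.array([[420., 420.], [510., 510.], [380., 380.]])
    print(round(icc_2_1(scores), 6))  # 1.0
    ```

    Against the study's ≥0.50 criterion, a parameter with ICC ≈ 1.0 would count as reliable; session-to-session noise pulls the value toward 0.
    
    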

  20. Plasma triglycerides and cardiovascular events in the Treating to New Targets and Incremental Decrease in End-Points through Aggressive Lipid Lowering trials of statins in patients with coronary artery disease

    DEFF Research Database (Denmark)

    Faergeman, Ole; Holme, Ingar; Fayyad, Rana

    2009-01-01

    We determined the ability of in-trial measurements of triglycerides (TGs) to predict new cardiovascular events (CVEs) using data from the Incremental Decrease in End Points through Aggressive Lipid Lowering (IDEAL) and Treating to New Targets (TNT) trials. The trials compared atorvastatin 80 mg...

  1. Utilization of the ex vivo LLNA: BrdU-ELISA to distinguish the sensitizers from irritants in respect of 3 end points-lymphocyte proliferation, ear swelling, and cytokine profiles.

    Science.gov (United States)

    Arancioglu, Seren; Ulker, Ozge Cemiloglu; Karakaya, Asuman

    2015-01-01

    Dermal exposure to chemicals may result in allergic or irritant contact dermatitis. In this study, we performed the ex vivo local lymph node assay: bromodeoxyuridine-enzyme-linked immunosorbent assay (LLNA: BrdU-ELISA) to compare the irritation and sensitization potency of some chemicals in terms of 3 end points: lymphocyte proliferation, cytokine profiles (interleukin 2 [IL-2], interferon-γ [IFN-γ], IL-4, IL-5, IL-1, and tumor necrosis factor α [TNF-α]), and ear swelling. Different concentrations of the following well-known sensitizer and irritant chemicals were applied to mice: dinitrochlorobenzene, eugenol, isoeugenol, sodium lauryl sulfate (SLS), and croton oil. According to the lymph node results, the auricular lymph node weights and lymph node cell counts increased after application of both sensitizers and irritants in high concentrations. On the other hand, according to the lymph node cell proliferation results, there was a 3-fold increase in proliferation of lymph node cells (stimulation index) for the sensitizer chemicals and SLS at the applied concentrations, but not for croton oil and the negative control; SLS thus gave a false-positive response. Cytokine analysis demonstrated that 4 cytokines, IL-2, IFN-γ, IL-4, and IL-5, were released in lymph node cell cultures with a clear dose trend for sensitizers, whereas only TNF-α was released in response to irritants. Taken together, our results suggest that the ex vivo LLNA: BrdU-ELISA method can be useful for discriminating irritants and allergens. © The Author(s) 2015.
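
    The 3-fold proliferation criterion described above is the usual LLNA stimulation-index rule; a minimal sketch with made-up BrdU incorporation values (the SLS false positive in the study shows why a second discriminator such as the cytokine profile is needed):

    ```python
    def stimulation_index(treated, control):
        """LLNA stimulation index: proliferation relative to vehicle control."""
        return treated / control

    def classify(si, threshold=3.0):
        """Standard LLNA call: SI >= 3 flags a putative sensitizer."""
        return "sensitizer" if si >= threshold else "non-sensitizer"

    # Illustrative (made-up) BrdU incorporation readings
    control = 100.0
    for name, reading in [("DNCB", 850.0), ("croton oil", 220.0)]:
        si = stimulation_index(reading, control)
        print(name, round(si, 1), classify(si))
    ```

    With these toy numbers DNCB (SI 8.5) is called a sensitizer and croton oil (SI 2.2) is not, mirroring the decision rule used in the abstract.
    
    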

  2. Real-time and end-point Polymerase Chain Reaction for quantitation of Cytomegalovirus DNA: comparison between methods and with the pp65 antigen test

    Directory of Open Access Journals (Sweden)

    Tiziano Allice

    2006-03-01

    Full Text Available Quantitative Polymerase Chain Reaction (PCR) for Cytomegalovirus (CMV) DNA provides highly sensitive and specific data for detecting CMV as well as monitoring the infection and determining the appropriate antiviral strategy. To determine the clinical application of a recently introduced real-time (RT) PCR assay for CMV DNA quantitation in peripheral blood leukocytes (PBLs) and define its correlation with the commercial quantitative end-point (EP) PCR method COBAS AMPLICOR CMV Monitor and the pp65 antigen test. Sequential PBL samples (n=158) from 32 liver-transplanted patients with asymptomatic CMV infection and positive for CMV DNA by EP-PCR were retrospectively analysed with RT-PCR and studied according to pp65 antigen levels. A good correlation was found between RT-PCR and the pp65 antigen test (r=0.691) and between the two PCR assays (r=0.761). RT-PCR data were significantly higher in pre-emptively treated patients (those with >20 pp65-positive cells; median value: 3.8 log10 copies/500,000 PBLs) than in untreated ones (2.9 logs). According to pp65 levels of 0, 1-10, 11-20, 21-50, 51-100 and >100 positive cells/200,000 PBLs, median CMV DNA load by RT-PCR was 2.6, 3.0, 3.6, 4.0, 4.2 and 4.8 log10 copies/500,000 PBLs, respectively (EP-PCR CMV DNA levels: 2.8, 2.9, 3.8, 3.7, 3.9 and 4.0 logs). For samples with >20 pp65+ cells, that is, above the level at which pre-emptive therapy was started, RT-PCR values were significantly higher than in groups with fewer than 20 pp65+ cells, whereas EP-PCR values did not differ significantly and showed a slower progression rate. Dilutions of DNA from the CMV AD169 strain were used to probe RT-PCR reproducibility (between- and intra-assay variability < 2%) and sensitivity (100% detection rate at 10 copies/reaction, versus 28.5% with EP-PCR). The introduction of RT-PCR brings a significant improvement to the study of CMV DNA dynamics in differently CMV-infected patients, owing to a more reliable quantitation of CMV DNA for moderate and high

  3. Definitions, End Points, and Clinical Trial Designs for Non-Muscle-Invasive Bladder Cancer: Recommendations From the International Bladder Cancer Group

    NARCIS (Netherlands)

    Kamat, A.M.; Sylvester, R.J.; Bohle, A.; Palou, J.; Lamm, D.L.; Brausi, M.; Soloway, M.; Persad, R.; Buckley, R.; Colombel, M.; Witjes, J.A.

    2016-01-01

    PURPOSE: To provide recommendations on appropriate clinical trial designs in non-muscle-invasive bladder cancer (NMIBC) based on current literature and expert consensus of the International Bladder Cancer Group. METHODS: We reviewed published trials, guidelines, meta-analyses, and reviews and

  4. Total cardiovascular disease burden: comparing intensive with moderate statin therapy insights from the IDEAL (Incremental Decrease in End Points Through Aggressive Lipid Lowering) trial

    DEFF Research Database (Denmark)

    Tikkanen, Matti J; Szarek, Michael; Fayyad, Rana

    2009-01-01

    BACKGROUND: Time-to-first-event analysis of data is frequently utilized to provide efficacy outcome information in coronary heart disease prevention trials. However, during the course of such long-term trials, a large number of events occur subsequent… …, using the Wei, Lin, and Weissfeld method.

  5. Computing conformational free energy differences in explicit solvent: An efficient thermodynamic cycle using an auxiliary potential and a free energy functional constructed from the end points.

    Science.gov (United States)

    Harris, Robert C; Deng, Nanjie; Levy, Ronald M; Ishizuka, Ryosuke; Matubayasi, Nobuyuki

    2017-06-05

    Many biomolecules undergo conformational changes associated with allostery or ligand binding. Observing these changes in computer simulations is difficult if their timescales are long. These calculations can be accelerated by observing the transition on an auxiliary free energy surface with a simpler Hamiltonian and connecting this free energy surface to the target free energy surface with free energy calculations. Here, we show that the free energy legs of the cycle can be replaced with energy representation (ER) density functional approximations. We compute: (1) the conformational free energy changes for alanine dipeptide transitioning from the right-handed free energy basin to the left-handed basin and (2) the free energy difference between the open and closed conformations of β-cyclodextrin, a "host" molecule that serves as a model for molecular recognition in host-guest binding. β-cyclodextrin contains 147 atoms compared to 22 atoms for alanine dipeptide, making β-cyclodextrin a large molecule for which to compute solvation free energies by free energy perturbation or integration methods and the largest system for which the ER method has been compared to exact free energy methods. The ER method replaced the 28 simulations previously needed to compute each coupling free energy with two end-point simulations, reducing the computational time for the alanine dipeptide calculation by about 70% and for the β-cyclodextrin by > 95%. The method works even when the distribution of conformations on the auxiliary free energy surface differs substantially from that on the target free energy surface, although some degree of overlap between the two surfaces is required. © 2016 Wiley Periodicals, Inc.
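
    The thermodynamic cycle described above can be written as a free energy identity; sketched in the following notation (symbols are ours, not the authors'):

    ```latex
    % Target conformational free energy difference via an auxiliary Hamiltonian:
    % two "horizontal" legs on the auxiliary surface plus two "vertical"
    % legs connecting the auxiliary and target Hamiltonians at each end point.
    \Delta A_{\mathrm{tgt}}(A \to B)
      = \Delta A_{\mathrm{aux}}(A \to B)
      + \Delta A_{B}(\mathrm{aux} \to \mathrm{tgt})
      - \Delta A_{A}(\mathrm{aux} \to \mathrm{tgt})
    ```

    The ER density functional estimates each of the two connecting legs from a single end-point simulation, which is what replaces the 28 stratified perturbation windows per coupling free energy and yields the reported ~70% and >95% savings.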

  6. Prognostic importance of hemoglobin in hypertensive patients with electrocardiographic left ventricular hypertrophy: the Losartan Intervention For End point reduction in hypertension (LIFE) study

    DEFF Research Database (Denmark)

    Olsen, Michael Hecht; Wachtell, Kristian; Beevers, Gareth

    2008-01-01

    BACKGROUND: The prognostic importance of hemoglobin is controversial. We investigated the prognostic importance of baseline and in-treatment hemoglobin in the LIFE study. METHODS: Eight thousand one hundred ninety-four LIFE patients with hypertension and left ventricular hypertrophy with available… … with the same gender-specific definitions for high and low hemoglobin as time-varying covariates, with adjustment for treatment allocation and established risk factors and diseases, hemoglobin in the lowest decile was associated with higher rates of all-cause mortality (HR 3.03, 95% CI 1.89-4.85, P

  7. A simple and reliable in vitro test system for the analysis of induced aneuploidy as well as other cytogenetic end-points using Chinese hamster cells

    International Nuclear Information System (INIS)

    Dulout, F.N.; Natarajan, A.T.

    1987-01-01

    Although aneuploidy is a serious human health problem, the experimental methodology devised so far to study the mechanisms involved in the induction of aneuploidy, and to screen aneuploidy-inducing agents, has not been employed widely enough to achieve the necessary validation. A procedure using primary cell cultures of Chinese hamster embryo cells grown on cover glasses is described. To avoid excessive scattering and subsequent loss of chromosomes, a hypotonic treatment with a 0.17% sodium chloride solution at room temperature, followed by in situ fixation, has been standardized. This procedure improves the method by reducing the spontaneous frequency of aneuploid cells. Experiments carried out with cells treated with X-rays, X-rays plus caffeine, and the synthetic estrogen diethylstilbestrol (DES) demonstrated the accuracy of the system, since the average chromosome number remained constant despite the induction of high frequencies of aneuploid cells. Moreover, the method allows for the analysis of other cytogenetic end-points such as anaphase-telophase alterations, structural chromosome aberrations or sister chromatid exchanges. (author)

  8. Difference in Composite End Point of Readmission and Death Between Malnourished and Nonmalnourished Veterans Assessed Using Academy of Nutrition and Dietetics/American Society for Parenteral and Enteral Nutrition Clinical Characteristics.

    Science.gov (United States)

    Hiller, Lynn D; Shaw, Robert F; Fabri, Peter J

    2017-11-01

    Previous studies have demonstrated an association between malnutrition and poor outcomes. The primary objective of this study was to explore the difference in the composite end point of readmission rate or mortality rate between hospitalized veterans with and without malnutrition. This was a retrospective chart review comparing veterans with malnutrition, identified using a modified version of the Academy of Nutrition and Dietetics/American Society for Parenteral and Enteral Nutrition consensus characteristics (5 of the 6 clinical characteristics), with a control group of nonmalnourished veterans matched on age, admitting service, and date of admission who were admitted between August 1, 2012, and December 1, 2014. Data were extracted from the medical record. Multivariate analysis was used to identify predictors of outcomes. In total, 404 patients were included in the final analysis. Differences in all end points were statistically significant. The malnourished group was more likely to meet the composite end point (odds ratio [OR], 5.3), more likely to be readmitted within 30 days (OR, 3.4), more likely to die within 90 days of discharge (OR, 5.5), and more likely to have a length of stay >7 days (OR, 4.3) compared with the nonmalnourished group. Length of stay was significantly longer in the malnourished group, 9.80 (11.5) vs 4.38 (4.5) days. Malnutrition was an independent risk factor for readmission within 30 days or death within 90 days of discharge. Malnourished patients had higher rates of readmission, higher mortality rates, and longer lengths of stay and were more likely to be discharged to nursing homes.

  9. Sensitive endpoint detection for coulometric titration of microgram amounts of plutonium. Part II: Use of amperometric indication for the end point detection

    International Nuclear Information System (INIS)

    Chitnis, R.T.; Talnikar, S.G.; Thakur, V.A.

    1981-01-01

    Subsequent to the work on polarized indicator electrodes in the coulometric titration of PuO₂²⁺ with electrolytically generated Fe²⁺, the possibility of applying amperometry for the end-point detection in the same titration was explored. Earlier, Moiseen et al. used amperometric indication in the coulometric titration of plutonium and reported a coefficient of variation of 0.4% for the titration of 1 mg of plutonium; the lowest amount of plutonium determined was in the range of 100 micrograms. In the present method, using a similar analytical technique, titrations of 25 micrograms and lower amounts of plutonium are reported. While titrating microgram amounts using amperometric indication, residual currents due to the supporting electrolyte affect the titrations to a considerable extent. However, it is shown that by proper choice of the potential applied to the indicating electrode, the interference due to the supporting electrolyte can be minimised. Using this technique, it is possible to titrate even a fraction of a microgram of plutonium. The precision at the 0.5 microgram level is found to be about 6% and that for 5 micrograms, about 1%. (author)
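
    In constant-current coulometry the titrant charge follows Faraday's law, which is what makes microgram-level titrations feasible: the required charge is tiny but precisely measurable. A back-of-envelope sketch, assuming the two-electron reduction PuO₂²⁺ + 2Fe²⁺ → Pu⁴⁺ + 2Fe³⁺ and ²³⁹Pu (these stoichiometric assumptions are ours, not stated in the abstract):

    ```python
    F = 96485.0      # C/mol, Faraday constant
    M_PU = 239.0     # g/mol, molar mass of (239)Pu
    Z = 2            # electrons per Pu (assumed: PuO2(2+) -> Pu(4+))

    def generation_time(mass_ug, current_mA):
        """Seconds of constant-current Fe2+ generation to reach equivalence."""
        moles_pu = mass_ug * 1e-6 / M_PU
        charge = moles_pu * Z * F           # coulombs required (Q = n z F)
        return charge / (current_mA * 1e-3)

    print(f"{generation_time(5.0, 1.0):.2f} s")  # ~4 s at 1 mA for 5 ug Pu
    ```

    A 5 µg sample thus needs only about 4 mC of charge, so the limiting factor is not charge measurement but the residual currents from the supporting electrolyte discussed in the abstract.
    
    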

  10. Comparative Analysis of End Point Enzymatic Digests of Arabino-Xylan Isolated from Switchgrass (Panicum virgatum L.) of Varying Maturities using LC-MSn

    Directory of Open Access Journals (Sweden)

    Michael J. Bowman

    2012-11-01

    Full Text Available Switchgrass (Panicum virgatum L., SG) is a perennial grass presently used for forage and being developed as a bioenergy crop for conversion of cell wall carbohydrates to biofuels. Up to 50% of the cell wall associated carbohydrates are xylan. SG was analyzed for xylan structural features at variable harvest maturities. Xylan from each of three maturities was isolated using classical alkaline extraction to yield fractions (Xyl A and B) with varying compositional ratios. The Xyl B fraction was observed to decrease with plant age. Xylan samples were subsequently prepared for structure analysis by digesting with pure endo-xylanase, which preserved side-groups, or a commercial carbohydrase preparation favored for biomass conversion work. Enzymatic digestion products were successfully permethylated and analyzed by reverse-phase liquid chromatography with mass spectrometric detection (RP-HPLC-MSn). This method is advantageous compared to prior work on plant biomass because it avoids isolation of individual arabinoxylan oligomers. The use of RP-HPLC-MSn differentiated 14 structural oligosaccharides (d.p. 3–9) from the monocomponent enzyme digestion and nine oligosaccharide structures (d.p. 3–9) from hydrolysis with a cellulase enzyme cocktail. The distribution of arabinoxylan oligomers varied depending upon the enzyme(s) applied but did not vary with harvest maturity.

  11. Rationale, design, and baseline characteristics in Evaluation of LIXisenatide in Acute Coronary Syndrome, a long-term cardiovascular end point trial of lixisenatide versus placebo

    DEFF Research Database (Denmark)

    Bentley-Lewis, Rhonda; Aguilar, David; Riddle, Matthew C

    2015-01-01

    BACKGROUND: Cardiovascular (CV) disease is the leading cause of morbidity and mortality in patients with type 2 diabetes mellitus (T2DM). Furthermore, patients with T2DM and acute coronary syndrome (ACS) have a particularly high risk of CV events. The glucagon-like peptide 1 receptor agonist lixisenatide improves glycemia, but its effects on CV events have not been thoroughly evaluated. METHODS: ELIXA (www.clinicaltrials.gov no. NCT01147250) is a randomized, double-blind, placebo-controlled, parallel-group, multicenter study of lixisenatide in patients with T2DM and a recent ACS event... index was 30.2 ± 5.7 kg/m², and duration of T2DM was 9.3 ± 8.2 years. The qualifying ACS was a myocardial infarction in 83% and unstable angina in 17%. The study will continue until the positive adjudication of the protocol-specified number of primary CV events. CONCLUSION: ELIXA will be the first...

  12. Selection of appropriate end-points (pCR vs 2yDFS) for tailoring treatments with prediction models in locally advanced rectal cancer

    International Nuclear Information System (INIS)

    Valentini, Vincenzo; Stiphout, Ruud G.P.M. van; Lammering, Guido; Gambacorta, Maria A.; Barba, Maria C.; Bebenek, Marek; Bonnetain, Franck; Bosset, Jean F.; Bujko, Krzysztof; Cionini, Luca; Gerard, Jean P.; Rödel, Claus; Sainato, Aldo; Sauer, Rolf; Minsky, Bruce D.; Collette, Laurence; Lambin, Philippe

    2015-01-01

    Purpose: Personalized treatments based on predictions of patient outcome require early characterization of a rectal cancer patient’s sensitivity to treatment. This study has two aims: (1) identify the main patterns of recurrence and response to the treatments, and (2) evaluate pathologic complete response (pCR) and two-year disease-free survival (2yDFS) against overall survival (OS) and their potential as relevant intermediate end points for prediction. Methods: Pooled and treatment-subgroup analyses were performed on five large European rectal cancer trials (2795 patients), in which all patients received long-course radiotherapy with or without concomitant and/or adjuvant chemotherapy. The ratio of distant metastasis (DM) and local recurrence (LR) rates was used to identify patient characteristics that increase the risk of recurrence. Findings: The DM/LR ratio decreased to a plateau in the first 2 years, revealing this to be a critical follow-up period. According to the patterns of recurrence, three patient groups were identified: 5–15% had pCR and were disease free after 2 years (excellent prognosis), 65–75% had no pCR but were disease free (good prognosis), and 15–30% had neither pCR nor 2yDFS (poor prognosis). Interpretation: Compared with pCR, 2yDFS is a stronger predictor of OS. To adapt treatment most efficiently, accurate prediction models should be developed for pCR, to select patients for organ preservation, and for 2yDFS, to select patients for more intensified treatment strategies.

  13. Individual patient data analysis of progression-free survival versus overall survival as a first-line end point for metastatic colorectal cancer in modern randomized trials: findings from the analysis and research in cancers of the digestive system database.

    Science.gov (United States)

    Shi, Qian; de Gramont, Aimery; Grothey, Axel; Zalcberg, John; Chibaudel, Benoist; Schmoll, Hans-Joachim; Seymour, Matthew T; Adams, Richard; Saltz, Leonard; Goldberg, Richard M; Punt, Cornelis J A; Douillard, Jean-Yves; Hoff, Paulo M; Hecht, Joel Randolph; Hurwitz, Herbert; Díaz-Rubio, Eduardo; Porschen, Rainer; Tebbutt, Niall C; Fuchs, Charles; Souglakos, John; Falcone, Alfredo; Tournigand, Christophe; Kabbinavar, Fairooz F; Heinemann, Volker; Van Cutsem, Eric; Bokemeyer, Carsten; Buyse, Marc; Sargent, Daniel J

    2015-01-01

    Progression-free survival (PFS) has previously been established as a surrogate for overall survival (OS) for first-line metastatic colorectal cancer (mCRC). Because mCRC treatment has advanced in the last decade with extended OS, this surrogacy requires re-examination. Individual patient data from 16,762 patients were available from 22 first-line mCRC studies conducted from 1997 to 2006; 12 of those studies tested antiangiogenic and/or anti-epidermal growth factor receptor agents. The relationship between PFS (first event of progression or death) and OS was evaluated by using R² statistics (the closer the value is to 1, the stronger the correlation) from weighted least squares regression of trial-specific hazard ratios estimated by using Cox and Copula models. Forty-four percent of patients received a regimen that included biologic agents. Median first-line PFS was 8.3 months, and median OS was 18.2 months. The correlation between PFS and OS was modest (R², 0.45 to 0.69). Analyses limited to trials that tested treatments with biologic agents, nonstrategy trials, or superiority trials did not improve surrogacy. In modern mCRC trials, in which survival after the first progression exceeds time to first progression, a positive but modest correlation was observed between OS and PFS at both the patient and trial levels. This finding demonstrates the substantial variability in OS introduced by the number of lines of therapy and types of effective subsequent treatments and the associated challenge to the use of OS as an end point to assess the benefit attributable to a single line of therapy. PFS remains an appropriate primary end point for first-line mCRC trials to detect the direct treatment effect of new agents. © 2014 by American Society of Clinical Oncology.
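The trial-level surrogacy measure used above (an R² statistic from weighted least squares regression of trial-specific hazard ratios) can be sketched as follows. The log hazard ratios and trial weights below are fabricated for illustration; they are not the study's data.

```python
import numpy as np

# Hypothetical trial-level data: log hazard ratios for PFS and OS from a
# handful of trials, weighted by trial size (all values are made up).
log_hr_pfs = np.array([-0.35, -0.10, -0.22, 0.05, -0.30, -0.15])
log_hr_os  = np.array([-0.25, -0.05, -0.18, 0.02, -0.20, -0.12])
n_patients = np.array([800, 450, 600, 300, 900, 500])

def weighted_r2(x, y, w):
    """R^2 of a weighted least squares fit of y on x with weights w."""
    W = np.diag(w.astype(float))
    X = np.column_stack([np.ones_like(x), x])       # intercept + slope
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    resid = y - X @ beta
    ybar = np.average(y, weights=w)                 # weighted mean of y
    return 1.0 - (w * resid**2).sum() / (w * (y - ybar)**2).sum()

print(f"trial-level R^2: {weighted_r2(log_hr_pfs, log_hr_os, n_patients):.2f}")
```

The closer this value is to 1, the better the treatment effect on PFS predicts the treatment effect on OS across trials; the study's reported range of 0.45 to 0.69 corresponds to only modest surrogacy.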

  14. Multicentre, prospective, randomised, open-label, blinded end point trial of the efficacy of allopurinol therapy in improving cardiovascular outcomes in patients with ischaemic heart disease: protocol of the ALL-HEART study.

    Science.gov (United States)

    Mackenzie, Isla S; Ford, Ian; Walker, Andrew; Hawkey, Chris; Begg, Alan; Avery, Anthony; Taggar, Jaspal; Wei, Li; Struthers, Allan D; MacDonald, Thomas M

    2016-09-08

    Ischaemic heart disease (IHD) is one of the most common causes of death in the UK and treatment of patients with IHD costs the National Health Service (NHS) billions of pounds each year. Allopurinol is a xanthine oxidase inhibitor used to prevent gout that also has several positive effects on the cardiovascular system. The ALL-HEART study aims to determine whether allopurinol improves cardiovascular outcomes in patients with IHD. The ALL-HEART study is a multicentre, controlled, prospective, randomised, open-label blinded end point (PROBE) trial of allopurinol (up to 600 mg daily) versus no treatment in a 1:1 ratio, added to usual care, in 5215 patients aged 60 years and over with IHD. Patients are followed up by electronic record linkage and annual questionnaires for an average of 4 years. The primary outcome is the composite of non-fatal myocardial infarction, non-fatal stroke or cardiovascular death. Secondary outcomes include all-cause mortality, quality of life and cost-effectiveness of allopurinol. The study will end when 631 adjudicated primary outcomes have occurred. The study is powered at 80% to detect a 20% reduction in the primary end point for the intervention. Patient recruitment to the ALL-HEART study started in February 2014. The study received ethical approval from the East of Scotland Research Ethics Service (EoSRES) REC 2 (13/ES/0104). The study is event-driven and results are expected after 2019. Results will be reported in peer-reviewed journals and at scientific meetings. Results will also be disseminated to guideline committees, NHS organisations and patient groups. 32017426, pre-results. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  15. Efficacy and tolerability balance of oxycodone/naloxone and tapentadol in chronic low back pain with a neuropathic component: a blinded end point analysis of randomly selected routine data from 12-week prospective open-label observations.

    Science.gov (United States)

    Ueberall, Michael A; Mueller-Schwefe, Gerhard H H

    2016-01-01

    To evaluate the benefit-risk profile (BRP) of oxycodone/naloxone (OXN) and tapentadol (TAP) in patients with chronic low back pain (cLBP) with a neuropathic component (NC) in routine clinical practice. This was a blinded end point analysis of randomly selected 12-week routine/open-label data of the German Pain Registry on adult patients with cLBP-NC who initiated an index treatment in compliance with the current German prescribing information between 1st January and 31st October 2015 (OXN/TAP, n=128/133). The primary end point was defined as a composite of three efficacy components (≥30% improvement of pain, pain-related disability, and quality of life, each at the end of observation vs baseline) and three tolerability components (normal bowel function, absence of central nervous system side effects, and absence of treatment-emergent adverse event [TEAE]-related treatment discontinuation during the observation period), adopted to reflect BRP assessments under real-life conditions. Demographic as well as baseline and pretreatment characteristics were comparable for the randomly selected data sets of both index groups, without any indicators of critical selection bias. Treatment with OXN resulted formally in a BRP noninferior to that of TAP and showed a significantly higher primary end point response vs TAP (39.8% vs 25.6%, odds ratio: 1.93; P = 0.014), due to superior analgesic effects. Between-group differences increased with stricter response definitions for all three efficacy components in favor of OXN: ≥30%/≥50%/≥70% response rates for OXN vs TAP were seen for pain intensity in 85.2%/67.2%/39.1% vs 83.5%/54.1%/15.8% (P = ns/0.031/<0.001), for pain-related disability in 78.1%/64.8%/43.8% vs 66.9%/50.4%/24.8% (P = 0.043/0.018/0.001), and for quality of life in 76.6%/68.0%/50.0% vs 63.9%/54.1%/34.6% (P = 0.026/0.022/0.017). Overall, OXN vs TAP treatments were well tolerated, and proportions of patients who either maintained a normal bowel function (68.0% vs 72

  16. Congestive heart failure is associated with lipoprotein components in statin-treated patients with coronary heart disease Insights from the Incremental Decrease in End points Through Aggressive Lipid Lowering Trial (IDEAL)

    DEFF Research Database (Denmark)

    Holme, Ingar; Strandberg, Timo E; Faergeman, Ole

    2009-01-01

    BACKGROUND: Very few, if any, studies have assessed the ability of apolipoproteins to predict new-onset of congestive heart failure (HF) in statin-treated patients with coronary heart disease (CHD). AIMS: To employ the Incremental Decrease in End points Through Aggressive Lipid Lowering Trial... with the occurrence of new-onset HF. Variables related to low-density lipoprotein cholesterol (LDL-C) carried less predictive information than those related to high-density lipoprotein cholesterol (HDL-C), and apoA-1 was the single variable most strongly associated with HF. LDL-C was less predictive than both non-HDL-C (total cholesterol minus HDL-C) and apoB. The ratio of apoB to apoA-1 was most strongly related to HF after adjustment for potential confounders, among which diabetes had a stronger correlation with HF than did hypertension. ApoB/apoA-1 carried approximately 2.2 times more of the statistical information...

  17. TRAIL Activates a Caspase 9/7-Dependent Pathway in Caspase 8/10-Defective SK-N-SH Neuroblastoma Cells with Two Functional End Points: Induction of Apoptosis and PGE2 Release

    Directory of Open Access Journals (Sweden)

    Giorgio Zauli

    2003-09-01

    Full Text Available Most neuroblastoma cell lines do not express apical caspases 8 and 10, which play a key role in mediating tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) cytotoxicity in a variety of malignant cell types. In this study, we demonstrated that TRAIL induced a moderate but significant increase of apoptosis in the caspase 8/10-deficient SK-N-SH neuroblastoma cell line, through activation of a novel caspase 9/7 pathway. Concomitant to the induction of apoptosis, TRAIL also promoted a significant increase of prostaglandin E2 (PGE2) release by SK-N-SH cells. Moreover, coadministration of TRAIL plus indomethacin, a pharmacological inhibitor of cyclooxygenase (COX), showed an additive effect on SK-N-SH cell death. In spite of the ability of TRAIL to promote the phosphorylation of both ERK1/2 and p38/MAPK, which have been involved in the control of COX expression/activity, neither PD98059 nor SB203580, pharmacological inhibitors of the ERK1/2 and p38/MAPK pathways, respectively, affected either PGE2 production or apoptosis induced by TRAIL. Finally, both induction of apoptosis and PGE2 release were completely abrogated by the broad caspase inhibitor z-VAD-fmk, suggesting that both biologic end points were regulated in SK-N-SH cells through a caspase 9/7-dependent pathway.

  18. Measurement of photo-neutron cross sections and isomeric yield ratios in the {sup 89}Y(γ,xn){sup 89-x}Y reactions at the bremsstrahlung end-point energies of 65, 70 and 75 MeV

    Energy Technology Data Exchange (ETDEWEB)

    Tatari, Mansoureh [Yazd Univ. (Iran, Islamic Republic of). Physics Dept.; Naik, Haladhara [Bhabha Atomic Research Centre, Mumbai (India). Radiochemistry Div.; Kim, Guinyun; Kim, Kwangsoo [Kyungpook National Univ., Daegu (Korea, Republic of). Dept. of Physics; Shin, Sung-Gyun; Cho, Moo-Hyun [Pohang Univ. of Science and Technology (Korea, Republic of). Div. of Advanced Nuclear Engineering

    2017-07-01

    The flux-weighted average cross sections of the {sup 89}Y(γ,xn; x=1-4){sup 89-x}Y reactions and the isomeric yield ratios of the {sup 87m,g}Y, {sup 86m,g}Y, and {sup 85m,g}Y radionuclides produced in these reactions at the bremsstrahlung end-point energies of 65, 70 and 75 MeV have been determined by an activation and off-line γ-ray spectrometric technique using the 100 MeV electron linac at the Pohang Accelerator Laboratory, Korea. The theoretical {sup 89}Y(γ,xn; x=1-4){sup 89-x}Y reaction cross sections for mono-energetic photons have been calculated using the computer code TALYS 1.6. The flux-weighted theoretical values were then obtained for comparison with the present data. The flux-weighted experimental and theoretical {sup 89}Y(γ,xn; x=1-4){sup 89-x}Y reaction cross sections increase rapidly from their threshold values up to a certain bremsstrahlung energy, where the other reaction channels open up. Thereafter, they remain constant for a while and then slowly decrease as the cross sections of the other reactions increase. Similarly, the isomeric yield ratios of {sup 87m,g}Y, {sup 86m,g}Y and {sup 85m,g}Y in the {sup 89}Y(γ,xn; x=2-4){sup 89-x}Y reactions from the present work and the literature data show an increasing trend from their respective threshold values up to a certain bremsstrahlung energy; beyond that energy, the isomeric yield ratios increase only slowly with the bremsstrahlung energy. These observations indicate the role of excitation energy and its partitioning among the different reaction channels.

  19. Adhesion kinetics of human primary monocytes, dendritic cells, and macrophages: Dynamic cell adhesion measurements with a label-free optical biosensor and their comparison with end-point assays.

    Science.gov (United States)

    Orgovan, Norbert; Ungai-Salánki, Rita; Lukácsi, Szilvia; Sándor, Noémi; Bajtay, Zsuzsa; Erdei, Anna; Szabó, Bálint; Horvath, Robert

    2016-09-01

    obtained with the high-temporal-resolution Epic BT, but could only provide end-point data. In contrast, complex, nonmonotonic cell adhesion kinetics measured by the high-throughput optical biosensor is expected to open a window on the hidden background of the immune cell-extracellular matrix interactions.

  20. Measurement of flux-weighted average cross-sections and isomeric yield ratios for {sup 103}Rh(γ, xn) reactions in the bremsstrahlung end-point energies of 55 and 60 MeV

    Energy Technology Data Exchange (ETDEWEB)

    Shakilur Rahman, Md.; Kim, Kwangsoo; Kim, Guinyun; Nadeem, Muhammad; Thi Hien, Nguyen; Shahid, Muhammad [Kyungpook National University, Department of Physics, Daegu (Korea, Republic of); Naik, Haladhara [Bhabha Atomic Research Centre, Radiochemistry Division, Mumbai (India); Yang, Sung-Chul; Cho, Young-Sik; Lee, Young-Ouk [Korea Atomic Energy Research Institute, Nuclear Data Center, Daejeon (Korea, Republic of); Shin, Sung-Gyun; Cho, Moo-Hyun [Pohang University of Science and Technology, Division of Advanced Nuclear Engineering, Pohang (Korea, Republic of); Woo Lee, Man; Kang, Yeong-Rok; Yang, Gwang-Mo [Dongnam Institute of Radiological and Medical Science, Research Center, Busan (Korea, Republic of); Ro, Tae-Ik [Dong-A University, Department of Materials Physics, Busan (Korea, Republic of)

    2016-07-15

    We measured the flux-weighted average cross-sections and the isomeric yield ratios of {sup 99m,g,100m,g,101m,g,102m,g}Rh in the {sup 103}Rh(γ, xn) reactions at the bremsstrahlung end-point energies of 55 and 60 MeV by the activation and off-line γ-ray spectrometric technique, using the 100 MeV electron linac at the Pohang Accelerator Laboratory (PAL), Korea. The flux-weighted average cross-sections were calculated using the computer code TALYS 1.6 based on mono-energetic photons and compared with the present experimental data. The flux-weighted average cross-sections of the {sup 103}Rh(γ, xn) reactions at intermediate bremsstrahlung energies are measured here for the first time; they are found to increase from their threshold values up to a particular energy, where the other reaction channels open up, and thereafter to decrease with bremsstrahlung energy owing to the partitioning of the flux among the different reaction channels. The isomeric yield ratios (IR) of {sup 99m,g,100m,g,101m,g,102m,g}Rh in the {sup 103}Rh(γ, xn) reactions from the present work were compared with the literature data for the {sup 103}Rh(d, x), {sup 102-99}Ru(p, x), {sup 103}Rh(α, αn), {sup 103}Rh(α, 2p3n), {sup 102}Ru({sup 3}He, x), and {sup 103}Rh(γ, xn) reactions. It was found that the IR values of {sup 102,101,100,99}Rh in all these reactions increase with the projectile energy, which indicates the role of excitation energy. At the same excitation energy, the IR values of {sup 102,101,100,99}Rh are higher in the charged-particle-induced reactions than in the photon-induced reaction, which indicates the role of input angular momentum. (orig.)

  1. Individual Patient Data Analysis of Progression-Free Survival Versus Overall Survival As a First-Line End Point for Metastatic Colorectal Cancer in Modern Randomized Trials: Findings From the Analysis and Research in Cancers of the Digestive System Database

    NARCIS (Netherlands)

    Shi, Qian; de Gramont, Aimery; Grothey, Axel; Zalcberg, John; Chibaudel, Benoist; Schmoll, Hans-Joachim; Seymour, Matthew T.; Adams, Richard; Saltz, Leonard; Goldberg, Richard M.; Punt, Cornelis J. A.; Douillard, Jean-Yves; Hoff, Paulo M.; Hecht, Joel Randolph; Hurwitz, Herbert; Díaz-Rubio, Eduardo; Porschen, Rainer; Tebbutt, Niall C.; Fuchs, Charles; Souglakos, John; Falcone, Alfredo; Tournigand, Christophe; Kabbinavar, Fairooz F.; Heinemann, Volker; van Cutsem, Eric; Bokemeyer, Carsten; Buyse, Marc; Sargent, Daniel J.

    2015-01-01

    Purpose Progression-free survival (PFS) has previously been established as a surrogate for overall survival (OS) for first-line metastatic colorectal cancer (mCRC). Because mCRC treatment has advanced in the last decade with extended OS, this surrogacy requires re-examination. Methods Individual

  2. Evaluation of in-line Raman data for end-point determination of a coating process: Comparison of Science-Based Calibration, PLS-regression and univariate data analysis.

    Science.gov (United States)

    Barimani, Shirin; Kleinebudde, Peter

    2017-10-01

    A multivariate analysis method, Science-Based Calibration (SBC), was used for the first time for end-point determination of a tablet coating process using Raman data. Two types of tablet cores, placebo and caffeine cores, received a coating suspension comprising a polyvinyl alcohol-polyethylene glycol graft-copolymer and titanium dioxide up to a maximum coating thickness of 80 µm. Raman spectroscopy was used as the in-line PAT tool. The spectra were acquired every minute and correlated to the amount of applied aqueous coating suspension. SBC was compared to another well-known multivariate analysis method, Partial Least Squares regression (PLS), and a simpler approach, Univariate Data Analysis (UVDA). All developed calibration models had coefficient of determination values (R²) higher than 0.99. The coating end points could be predicted with root mean square errors of prediction (RMSEP) of less than 3.1% of the applied coating suspension. Compared to PLS and UVDA, SBC proved to be an alternative multivariate calibration method with high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.
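As a rough illustration of the simplest of the three approaches compared above — univariate calibration of a single band intensity against the amount of applied coating suspension — the following sketch uses synthetic data. The band behaviour, noise level, and end-point criterion are assumptions for illustration, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic in-line data: one Raman band whose intensity grows linearly with
# the applied coating mass (values are illustrative, not the study's data).
applied_mass = np.linspace(0.0, 200.0, 41)             # g of suspension sprayed
band = 0.05 * applied_mass + rng.normal(0.0, 0.2, 41)  # band intensity + noise

# Univariate calibration: straight-line fit of intensity vs applied mass.
slope, intercept = np.polyfit(applied_mass, band, 1)
fit = slope * applied_mass + intercept
pred = (band - intercept) / slope                      # predicted applied mass

r2 = 1.0 - np.sum((band - fit) ** 2) / np.sum((band - band.mean()) ** 2)
rmsep = np.sqrt(np.mean((pred - applied_mass) ** 2))

target = 180.0                                         # assumed end-point criterion
end_idx = int(np.argmax(pred >= target))               # first spectrum at/after target
print(f"R2 = {r2:.3f}, RMSEP = {rmsep:.1f} g, end point at spectrum {end_idx}")
```

PLS and SBC play the same role but use the full spectrum rather than one band, which is what makes them more robust when the analyte band overlaps with excipient signals.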

  3. A versatile method for confirmatory evaluation of the effects of a covariate in multiple models

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Ritz, Christian; Bisgaard, Hans

    2012-01-01

    Modern epidemiology often requires testing of the effect of a covariate on multiple end points from the same study. However, popular state-of-the-art methods for multiple testing require the tests to be evaluated within the framework of a single model unifying all end points. This severely limits... to provide a fine-tuned control of the overall type I error in a wide range of epidemiological experiments where in reality no other useful alternative exists. The methodology proposed is applied to a multiple-end-point study of the effect of neonatal bacterial colonization on development of childhood asthma.

  4. A strategy for end point criteria for Superfund remediation

    International Nuclear Information System (INIS)

    Hwang, S.T.

    1992-06-01

    Since the inception of cleanup for hazardous waste sites, estimating target cleanup levels has been the subject of considerable investigation and debate in the Superfund remediation process. Establishing formal procedures for assessing human health risks associated with hazardous waste sites has provided a conceptual framework for determining remediation goals and target cleanup levels (TCLs) based on human health and ecological risk considerations. This approach was once considered at variance with the concepts of the pre-risk-assessment period; that is, cleaning up to the background level, or using containment design or best available control technologies. The concept has been gradually adopted by the regulatory agencies and the parties responsible for cleanup. Evaluation of cleanup strategies at the outset of the planning stage will eventually benefit the parties responsible for cleanup and the oversight organizations, including regulatory agencies. Development of the strategies will provide an opportunity to improve the pace and quality of many activities to be carried out. The strategies should help address issues related to (1) improving remediation management activities to arrive at remediation as expeditiously as possible, (2) developing alternate remediation management activities, (3) identifying obstructing issues for management resolution, (4) adapting the existing framework to changes in remediation statutes and guidelines, and (5) providing the basis for evaluating options for the record-of-decision process. This paper discusses some of the issues and research efforts addressed as part of these strategies that require further discussion and comment

  5. Carcinogenesis as an end point in health impact assessment

    International Nuclear Information System (INIS)

    Schneiderman, M.A.

    1976-01-01

    All sources of energy production, with perhaps the exception of falling water, wind, and geothermal sources, appear to have associated carcinogenic hazards. Epidemiological information does not allow us to pinpoint single "causes," but urban dwellers, coal miners, and petroleum refinery workers all appear to be at increased risk of cancer. Radiation–cancer relationships have been shown in exposed populations, and work is currently under way assessing the risks of former Atomic Energy Commission workers. With the exception of radiation exposure, there are few human data for estimating dose–response relationships to environmental carcinogens. At very low doses for add-on type exposures, safety-oriented behavior leads us to assume no thresholds and to operate as if the dose–response curve in man is likely to be linear

  6. The Hot and Energetic Universe: End points of stellar evolution

    NARCIS (Netherlands)

    Motch, Christian; Wilms, Jörn; Barret, Didier; Becker, Werner; Bogdanov, Slavko; Boirin, Laurence; Corbel, Stéphane; Cackett, Ed; Campana, Sergio; de Martino, Domitilla; Haberl, Frank; in't Zand, Jean; Méndez, Mariano; Mignani, Roberto; Miller, Jon; Orio, Marina; Psaltis, Dimitrios; Rea, Nanda; Rodriguez, Jérôme; Rozanska, Agata; Schwope, Axel; Steiner, Andrew; Webb, Natalie; Zampieri, Luca; Zane, Silvia

    2013-01-01

    White dwarfs, neutron stars and stellar mass black holes are key laboratories to study matter in most extreme conditions of gravity and magnetic field. The unprecedented effective area of Athena+ will allow us to advance our understanding of emission mechanisms and accretion physics over a wide

  7. Mutagenesis and teratogenesis as end points in health impact assessment

    International Nuclear Information System (INIS)

    Bender, M.A.

    1976-01-01

    The genetic and teratogenic effects of agents released to the environment as a consequence of energy production are exceedingly difficult to evaluate. Nevertheless, these effects on human health may be very costly in the context of cost-benefit analysis. In fact, the procedures required to limit mutagenic or teratogenic agents to the levels considered acceptable by regulatory bodies may constitute a major fraction of the cost of energy, especially where prudence dictates that a lack of empirical data requires extremely conservative regulations. Experience with ionizing radiation and with regulation of nuclear power installations illustrates the difficulty of genetic and teratogenic health impact assessment and the great uncertainties involved, as well as the influence of these impacts on the regulatory process and the consequent increased cost of power from this source. Data on genetic and teratogenic impacts on human health from chemical agents released to the environment by other energy technologies are much less complete, and, because of the large number of potentially active agents involved, it is evident that generic solutions to health impact assessment will be required to evaluate these energy alternatives

  8. The End Point Tagger physics program at A2@MAMI

    Science.gov (United States)

    Steffen, Oliver

    2017-04-01

    The A2-Collaboration uses a beam of real photons from the tagged photon facility at the electron accelerator MAMI in Mainz, Germany, to study photo-produced mesons. A new tagging device allows access to the higher photon beam energy range of 1.4 to 1.6 GeV. A large dataset containing more than 6 million η' and roughly 29 million ω decays has been obtained. Analyses are ongoing, including a study of the cusp effect and Dalitz plot in η' → ηπ0π0, giving insight into the ππ scattering length and the structure of the ηππ system, as well as the measurement of the electromagnetic transition form factor in η' → e+e-γ, a cross section measurement of γp → 3π0, and branching ratio analyses of η' → ωγ and ω → ηγ.

  9. Robotics virtual rail system and method

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID

    2011-07-05

    A virtual track or rail system and method is described for execution by a robot. A user, through a user interface, generates a desired path comprised of at least one segment representative of the virtual track for the robot. Start and end points are assigned to the desired path and velocities are also associated with each of the at least one segment of the desired path. A waypoint file is generated including positions along the virtual track representing the desired path with the positions beginning from the start point to the end point including the velocities of each of the at least one segment. The waypoint file is sent to the robot for traversing along the virtual track.
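The patent abstract above describes generating a waypoint file from a user-drawn path of segments with per-segment velocities. A minimal sketch of that data flow follows; the sampling spacing, function names, and CSV layout are assumptions for illustration, not the patented format.

```python
import csv
import io
import math

def waypoints(segments, spacing=0.5):
    """Sample positions along straight path segments; each segment is
    ((x0, y0), (x1, y1), velocity). Returns (x, y, v) tuples running from
    the path's start point to its end point."""
    pts = []
    for (x0, y0), (x1, y1), v in segments:
        length = math.hypot(x1 - x0, y1 - y0)
        n = max(1, int(length / spacing))        # samples per segment
        for k in range(n + 1):
            t = k / n                            # 0 at segment start, 1 at end
            pts.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), v))
    return pts

# Two-segment virtual track: fast straightaway, then a slow turn-in.
path = [((0, 0), (4, 0), 1.0), ((4, 0), (4, 2), 0.3)]
buf = io.StringIO()
csv.writer(buf).writerows(waypoints(path))       # the "waypoint file"
print(buf.getvalue().splitlines()[0])            # first waypoint → 0.0,0.0,1.0
```

The robot-side consumer would then traverse the rows in order, holding each row's velocity until the next waypoint is reached.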

  10. Method

    Directory of Open Access Journals (Sweden)

    Ling Fiona W.M.

    2017-01-01

    Full Text Available Rapid prototyping of microchannel gain lots of attention from researchers along with the rapid development of microfluidic technology. The conventional methods carried few disadvantages such as high cost, time consuming, required high operating pressure and temperature and involve expertise in operating the equipment. In this work, new method adapting xurography method is introduced to replace the conventional method of fabrication of microchannels. The novelty in this study is replacing the adhesion film with clear plastic film which was used to cut the design of the microchannel as the material is more suitable for fabricating more complex microchannel design. The microchannel was then mold using polymethyldisiloxane (PDMS and bonded with a clean glass to produce a close microchannel. The microchannel produced had a clean edge indicating good master mold was produced using the cutting plotter and the bonding between the PDMS and glass was good where no leakage was observed. The materials used in this method is cheap and the total time consumed is less than 5 hours where this method is suitable for rapid prototyping of microchannel.

  11. method

    Directory of Open Access Journals (Sweden)

    L. M. Kimball

    2002-01-01

    Full Text Available This paper presents an interior point algorithm to solve the multiperiod hydrothermal economic dispatch (HTED. The multiperiod HTED is a large scale nonlinear programming problem. Various optimization methods have been applied to the multiperiod HTED, but most neglect important network characteristics or require decomposition into thermal and hydro subproblems. The algorithm described here exploits the special bordered block diagonal structure and sparsity of the Newton system for the first order necessary conditions to result in a fast efficient algorithm that can account for all network aspects. Applying this new algorithm challenges a conventional method for the use of available hydro resources known as the peak shaving heuristic.
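The interior point idea in the abstract can be illustrated on a toy one-period dispatch: two generators sharing a fixed demand, with generation limits handled by a logarithmic barrier and damped Newton steps on the remaining free variable. This is a pedagogical sketch of the barrier mechanism only, not the paper's bordered-block-diagonal multiperiod algorithm; all costs and limits are made up.

```python
# Toy dispatch: min c1*p1^2 + c2*p2^2  s.t.  p1 + p2 = demand, 0 <= p_i <= cap.
c1, c2, demand, cap = 1.0, 2.0, 8.0, 6.0

def solve(mu_steps=(1.0, 0.1, 0.01, 1e-4, 1e-6)):
    p1 = demand / 2.0                     # strictly feasible starting point
    for mu in mu_steps:                   # shrink the barrier weight
        for _ in range(50):               # damped Newton iterations on p1
            p2 = demand - p1              # equality constraint eliminated
            # gradient and Hessian of cost plus log barriers on all four bounds
            g = 2*c1*p1 - 2*c2*p2 - mu*(1/p1 - 1/(cap - p1) - 1/p2 + 1/(cap - p2))
            h = 2*c1 + 2*c2 + mu*(1/p1**2 + 1/(cap - p1)**2 + 1/p2**2 + 1/(cap - p2)**2)
            step = g / h
            while not (0 < p1 - step < cap and 0 < demand - (p1 - step) < cap):
                step *= 0.5               # damp to stay strictly inside the bounds
            p1 -= step
            if abs(g) < 1e-10:
                break
    return p1, demand - p1

p1, p2 = solve()
print(round(p1, 3), round(p2, 3))         # analytic optimum: p1 = 16/3, p2 = 8/3
```

The multiperiod HTED replaces this scalar Newton step with one large Newton system over all periods; the bordered block diagonal structure the paper exploits arises because each period couples to its neighbours only through the hydro reservoir constraints.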

  12. A fluorescence high throughput screening method for the detection of reactive electrophiles as potential skin sensitizers

    Science.gov (United States)

    Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization integrated approaches combining different chemical, biological and in silico methods are recommended to r...

  13. An adaptive segment method for smoothing lidar signal based on noise estimation

    Science.gov (United States)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in the paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead derives the end points from each signal, so the smoothing windows are set adaptively. The window width is set to half the segment length, and average smoothing is then applied within each segment. Iteration is required to reduce the end-point aberration of the average smoothing, and two or three passes are enough. In the ASSM, the signal is smoothed in the spatial domain rather than the frequency domain, so frequency-domain disturbances are avoided. In the experimental work, a lidar echo was simulated, assumed to come from a space-borne lidar (e.g., CALIOP), and white Gaussian noise was added to the echo to represent the random noise from the environment and the detector. The ASSM was applied to the noisy echo to filter the noise; in the test, N was set to 3 and the iteration count to two. The results show that the signal can be smoothed adaptively by the ASSM, although N and the iteration count may need to be re-optimized when the ASSM is applied to a different lidar.
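The segmentation rule described above (split where adjacent samples differ by more than 3Nσ, then average each segment with a window of half its length, iterating a couple of times) can be sketched as follows. The edge-normalized moving average and the synthetic two-level echo are implementation assumptions, not taken from the paper.

```python
import numpy as np

def moving_avg(x, win):
    """Edge-normalized moving average (a simple stand-in for the paper's
    average smoothing; counts actual neighbors near segment ends)."""
    k = np.ones(win)
    return np.convolve(x, k, "same") / np.convolve(np.ones_like(x), k, "same")

def assm(signal, sigma, n=3, iterations=2):
    """Sketch of the adaptive segment smoothing method: cut the signal where
    adjacent samples differ by more than 3*n*sigma, then smooth each segment
    with a window of half its length, iterating to reduce end-point effects."""
    jumps = np.flatnonzero(np.abs(np.diff(signal)) > 3 * n * sigma) + 1
    edges = np.concatenate(([0], jumps, [len(signal)]))
    out = signal.astype(float)
    for _ in range(iterations):
        for a, b in zip(edges[:-1], edges[1:]):
            out[a:b] = moving_avg(out[a:b], max(1, (b - a) // 2))
    return out

# Synthetic echo: two flat levels with a sharp edge, plus Gaussian noise.
rng = np.random.default_rng(1)
clean = np.r_[np.full(100, 1.0), np.full(100, 5.0)]
noisy = clean + rng.normal(0.0, 0.05, 200)
smooth = assm(noisy, sigma=0.05)
```

Because each segment is smoothed separately, the sharp edge between the two levels is preserved instead of being blurred across a fixed window, which is the point of making the windows adaptive.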

  14. [A study for testing the antifungal susceptibility of yeast by the Japanese Society for Medical Mycology (JSMM) method. The proposal of the modified JSMM method 2009].

    Science.gov (United States)

    Nishiyama, Yayoi; Abe, Michiko; Ikeda, Reiko; Uno, Jun; Oguri, Toyoko; Shibuya, Kazutoshi; Maesaki, Shigefumi; Mohri, Shinobu; Yamada, Tsuyoshi; Ishibashi, Hiroko; Hasumi, Yayoi; Abe, Shigeru

    2010-01-01

    The Japanese Society for Medical Mycology (JSMM) method for testing the antifungal susceptibility of yeast currently sets the MIC end point for azole antifungal agents at IC(80). It was recently shown, however, that there is an inconsistency in MIC values between the JSMM method and the CLSI M27-A2 (CLSI) method, in which the end point is read as IC(50). To resolve this discrepancy and reassess the JSMM method, the MICs of three azoles (fluconazole, itraconazole, and voriconazole) were compared for 5 strains of each of the following Candida species: C. albicans, C. glabrata, C. tropicalis, C. parapsilosis, and C. krusei (25 strains in total), using the JSMM method, a modified JSMM method, and the CLSI method. The results showed that when the MIC end-point criterion of the JSMM method was changed from IC(80) to IC(50) (the modified JSMM method), the MIC values were consistent and compatible with the CLSI method. Finally, it should be emphasized that the JSMM method, which uses a spectrophotometer for MIC measurement, was superior in both stability and reproducibility to the CLSI method, in which growth is assessed by visual observation.

  15. The state-of-the-art and prospects of the oxidation titration method for the determination of uranium in geological samples

    International Nuclear Information System (INIS)

    Sun Jiayan

    1986-01-01

    The state-of-the-art of the oxidation titration method for the determination of uranium in geological samples is reviewed in some respects such as the prereduction of U(VI), oxidation of U(IV) and the detection of the end-point. Comments are also made on the prospects of further improvements of this method

  16. Improved methylene blue two-phase titration method for determining cationic surfactant concentration in high-salinity brine.

    Science.gov (United States)

    Cui, Leyu; Puerto, Maura; López-Salinas, José L; Biswal, Sibani L; Hirasaki, George J

    2014-11-18

    The methylene blue (MB) two-phase titration method is a rapid and efficient method for determining the concentrations of anionic surfactants. The point at which the aqueous and chloroform phases appear equally blue is called Epton's end point. However, many inorganic anions, e.g., Cl(-), NO3(-), Br(-), and I(-), can form ion pairs with MB(+) and interfere with Epton's end point, resulting in the failure of the MB two-phase titration in high-salinity brine. Here we present a method to extend the MB two-phase titration method for determining the concentration of various cationic surfactants in both deionized water and high-salinity brine (22% total dissolved solids). A colorless end point, at which the blue color is completely transferred from the aqueous phase to the chloroform phase, is proposed as the titration end point. Light absorbance at the characteristic wavelength of MB is measured using a spectrophotometer. When the absorbance falls below a threshold value of 0.04, the aqueous phase is considered colorless, indicating that the end point has been reached. By using this improved method, the overall errors for the titration of a permanent cationic surfactant, e.g., dodecyltrimethylammonium bromide, in deionized (DI) water and high-salinity brine are 1.274% and 1.322%, with limits of detection (LOD) of 0.149 and 0.215 mM, respectively. Compared to the traditional acid-base titration method, the errors of this improved method for a switchable cationic surfactant, e.g., a tertiary amine surfactant (Ethomeen C12), are 2.22% in DI water and 0.106% in brine, with LODs of 0.369 and 0.439 mM, respectively.
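
    The spectrophotometric end-point criterion above reduces to a simple threshold test; in this sketch the volume/absorbance readings are invented for illustration:

```python
import numpy as np

# Hypothetical titration readings: titrant volume (mL) vs. absorbance of the
# aqueous phase at the characteristic methylene blue wavelength.
volume = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
absorbance = np.array([1.20, 0.95, 0.61, 0.30, 0.11, 0.03, 0.01])

THRESHOLD = 0.04  # aqueous phase considered colorless below this value

def colorless_end_point(vol, absb, threshold=THRESHOLD):
    """Return the first titrant volume at which absorbance drops below threshold."""
    below = np.flatnonzero(absb < threshold)
    if below.size == 0:
        raise ValueError("end point not reached")
    return vol[below[0]]

print(colorless_end_point(volume, absorbance))  # → 2.5
```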

  17. A direct method for estimating the alpha/beta ratio from quantitative dose-response data

    International Nuclear Information System (INIS)

    Stuschke, M.

    1989-01-01

    A one-step optimization method based on a least squares fit of the linear quadratic model to quantitative tissue response data after fractionated irradiation is proposed. Suitable end points that can be analysed by this method are growth delay, host survival and quantitative biochemical or clinical laboratory data. The functional dependence between the transformed dose and the measured response is approximated by a polynomial. The method allows for the estimation of the alpha/beta ratio and its confidence limits from all observed responses of the different fractionation schedules. Censored data can be included in the analysis. A method to test the appropriateness of the fit is presented. A computer simulation illustrates the method and its accuracy, as exemplified by the growth delay end point. A comparison with a fit of the linear quadratic model to interpolated isoeffect doses shows the advantages of the direct method. (orig./HP) [de
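
    A one-step least-squares fit of the linear-quadratic model can be sketched as follows; the dose-response data here are invented, noise-free values for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical transformed response data: dose D (Gy) per schedule and an
# effect proxy E (e.g. a transformed growth-delay measurement).
D = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
alpha_true, beta_true = 0.3, 0.03
E = alpha_true * D + beta_true * D**2            # noise-free for illustration

# Direct one-step fit of E = alpha*D + beta*D^2 (no intercept):
X = np.column_stack([D, D**2])
(alpha, beta), *_ = np.linalg.lstsq(X, E, rcond=None)
print(round(alpha / beta, 6))   # alpha/beta ratio in Gy → 10.0
```

    Fitting all responses in one step, rather than fitting interpolated isoeffect doses, propagates the raw measurement scatter directly into the alpha/beta estimate.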

  18. Using serum urate as a validated surrogate end point for flares in patients with gout

    DEFF Research Database (Denmark)

    Birger Morillon, Melanie; Stamp, L.; Taylor, E W

    2016-01-01

    Introduction: Gout is the most common inflammatory arthritis in men over 40 years of age. Long-term urate-lowering therapy is considered a key strategy for effective gout management. The primary outcome measure for efficacy in clinical trials of urate-lowering therapy is serum urate levels, effec...

  19. Cardiorenal end points in a trial of aliskiren for type 2 diabetes

    DEFF Research Database (Denmark)

    Parving, Hans-Henrik; Brenner, Barry M; McMurray, John J V

    2012-01-01

    This study was undertaken to determine whether use of the direct renin inhibitor aliskiren would reduce cardiovascular and renal events in patients with type 2 diabetes and chronic kidney disease, cardiovascular disease, or both....

  20. Reaction paths and equilibrium end-points in solid-solution aqueous-solution systems

    Science.gov (United States)

    Glynn, P.D.; Reardon, E.J.; Plummer, Niel; Busenberg, E.

    1990-01-01

    Equations are presented describing equilibrium in binary solid-solution aqueous-solution (SSAS) systems after a dissolution, precipitation, or recrystallization process, as a function of the composition and relative proportion of the initial phases. Equilibrium phase diagrams incorporating the concept of stoichiometric saturation are used to interpret possible reaction paths and to demonstrate relations between stoichiometric saturation, primary saturation, and thermodynamic equilibrium states. The concept of stoichiometric saturation is found useful in interpreting and putting limits on dissolution pathways, but there currently is no basis for possible application of this concept to the prediction and/or understanding of precipitation processes. Previously published dissolution experiments for (Ba,Sr)SO4 and orthorhombic (Sr,Ca)CO3 solids are interpreted using equilibrium phase diagrams. These studies show that stoichiometric saturation can control, or at least influence, initial congruent dissolution pathways. The results for orthorhombic (Sr,Ca)CO3 solids reveal that stoichiometric saturation can also control the initial stages of incongruent dissolution, despite the intrinsic instability of some of the initial solids. In contrast, recrystallization experiments in the highly soluble KCl-KBr-H2O system demonstrate equilibrium. The excess free energy of mixing calculated for K(Cl,Br) solids is closely modeled by the relation G^E = χ_KBr χ_KCl RT[a0 + a1(2χ_KBr − 1)], where a0 is 1.40 ± 0.02, a1 is −0.08 ± 0.03 at 25 °C, and χ_KBr and χ_KCl are the mole fractions of KBr and KCl in the solids. The phase diagram constructed using this fit reveals an alyotropic maximum located at χ_KBr = 0.676 and at a total solubility product ΣΠ = [K+]([Cl−] + [Br−]) = 15.35.
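
    Reading the garbled symbols in the record as mole fractions χ, the two-parameter excess-free-energy fit can be evaluated as in this sketch; the function name and composition grid are illustrative:

```python
import numpy as np

R = 8.314       # gas constant, J/(mol K)
T = 298.15      # 25 °C in kelvin
a0, a1 = 1.40, -0.08   # fitted dimensionless parameters from the abstract

def g_excess(x_kbr):
    """Excess free energy of mixing (J/mol) for K(Cl,Br) solids:
    G^E = x_KBr * x_KCl * R * T * [a0 + a1*(2*x_KBr - 1)]."""
    x_kcl = 1.0 - x_kbr
    return x_kbr * x_kcl * R * T * (a0 + a1 * (2.0 * x_kbr - 1.0))

# G^E vanishes at the pure end members and peaks near mid-composition.
for x in np.linspace(0.0, 1.0, 5):
    print(round(x, 2), round(g_excess(x), 1))
```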

  1. Estimation of relative biological effectiveness for low energy protons using cytogenetic end points in mammalian cells

    International Nuclear Information System (INIS)

    Bhat, N.N.; Nairy, Rajesh; Chaurasia, Rajesh; Desai, Utkarsha; Shirsath, K.B.; Anjaria, K.B.; Sreedevi, B.

    2013-01-01

    A facility has been designed and developed to facilitate irradiation of biological samples with a proton beam using the folded tandem ion accelerator (FOTIA) at BARC. The primary proton beam from the accelerator was diffused using a gold foil and channelled through a drift tube. The scattered beam was monitored and calibrated. Uniformity and dosimetry studies were conducted to calibrate the setup for precise irradiation of mammalian cells. Irradiation conditions and geometry were optimized for mammalian cells and other biological samples in thin layers. The irradiation facility is housed in a clean-air laminar flow hood to allow exposure of samples under aseptic conditions. The setup has been used for studying various radiobiological end points in many biological model systems. CHO, MCF-7, A-549 and INT-407 cell lines were studied in the present investigation using micronucleus (MN) induction as an indicator of radiation damage. The mammalian cells, grown on petri plates to about 40% confluence (log phase), were exposed to known proton doses in the range of 0.1 to 2 Gy. The dose estimation was based on specific ionization in the cell medium. Studies were also conducted using 60Co gamma radiation to compare the results. A linear-quadratic response was observed for all the cell lines exposed to 60Co gamma radiation; in contrast, a linear response was observed for the proton beam. In addition, a very significant increase in MN yield was observed for the proton beam compared to 60Co gamma radiation. The estimated α and β values for CHO cells are 0.02±0.003 Gy^-1 and 0.042±0.006 Gy^-2, respectively, for 60Co gamma radiation. For the proton beam, the estimated α of the linear fit is 0.37±0.011 Gy^-1. The estimated RBE was found to be in the range of 4-8 for all the cell lines and dose ranges studied. In conclusion, the proton irradiation facility developed for mammalian cells has enabled the study of various radiobiological end points.
In this presentation, the facility description, MN induction as a radiobiological end point in the various cell-line model systems, the estimation of RBE, and the estimated fit coefficients are described and presented. (author)
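
    Using the fitted CHO coefficients quoted in the abstract, a dose-dependent iso-effect RBE can be sketched as follows; this is a hypothetical calculation for one cell line, while the abstract's range of 4-8 covers all cell lines and doses studied:

```python
import math

# Fit coefficients reported in the abstract (CHO cells):
ALPHA_G, BETA_G = 0.02, 0.042   # 60Co gamma, linear-quadratic (Gy^-1, Gy^-2)
ALPHA_P = 0.37                  # proton beam, linear (Gy^-1)

def rbe(dose_proton):
    """RBE = gamma dose / proton dose giving the same MN yield.
    The iso-effect gamma dose solves beta*D^2 + alpha*D = alpha_p * D_p."""
    effect = ALPHA_P * dose_proton
    d_gamma = (-ALPHA_G + math.sqrt(ALPHA_G**2 + 4 * BETA_G * effect)) / (2 * BETA_G)
    return d_gamma / dose_proton

for d in (0.1, 0.5, 1.0):       # RBE rises as the proton dose falls
    print(d, round(rbe(d), 2))
```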

  2. BIOCHEMICAL HOMEOSTASIS AND BODY GROWTH ARE RELIABLE END POINTS IN CLINICAL NUTRITION TRIALS

    Science.gov (United States)

    Studies of biochemical homeostasis and/or body growth have been included as outcome variables in most nutrition trials in paediatric patients. Moreover, these outcome variables have provided important insights into the nutrient requirements of infants and children, and continue to do so. Examples ...

  3. Developing business opportunities from concept to end point for craniofacial surgeons.

    Science.gov (United States)

    Brown, Spencer A

    2012-01-01

    Craniofacial surgeons repair a wide variety of soft and hard tissue defects, developing the clinical expertise to recognize the need for an improved device, a novel regenerative stem cell therapy, or a molecule that may dramatically change the way clinical care is delivered and improve patient outcomes. The business pathway that brings a concept to clinical care requires knowledge, mentoring, and a team of experts in business and patent law.

  4. Integrated Systems-Based Approach for Reaching Acceptable End Points for Groundwater - 13629

    International Nuclear Information System (INIS)

    Lee, M. Hope; Wellman, Dawn; Truex, Mike; Freshley, Mark D.; Sorenson, Kent S. Jr.; Wymore, Ryan

    2013-01-01

    The sheer mass and nature of contaminated materials at DOE and DoD sites make it impractical to completely restore these sites to pre-disposal conditions. DOE faces long-term challenges, particularly with developing monitoring and end state approaches for clean-up that are protective of the environment, technically based and documented, sustainable, and most importantly cost effective. Integrated systems-based monitoring approaches (e.g., tools for characterization and monitoring, multi-component strategies, geophysical modeling) could provide novel approaches and a framework to (a) define risk-informed endpoints and/or conditions that constitute completion of cleanup and (b) provide the understanding needed for implementation of advanced scientific approaches to meet cleanup goals. Multi-component strategies which combine site conceptual models, biological, chemical, and physical remediation strategies, as well as iterative review and optimization, have proven successful at several DOE sites. Novel tools such as enzyme probes and quantitative PCR for DNA and RNA, and innovative modeling approaches for complex subsurface environments, have been successful at facilitating the reduced operation or shutdown of pump-and-treat facilities and the transition of clean-up activities into monitored natural attenuation remedies. Integrating novel tools with site conceptual models and other lines of evidence to characterize, optimize, and monitor long-term remedial approaches for complex contaminant plumes is critical for transitioning active remediation into cost effective, yet technically defensible endpoint strategies. (authors)

  5. Evaluation of COPD Longitudinally to Identify Predictive Surrogate End-points (ECLIPSE)

    DEFF Research Database (Denmark)

    Vestbo, J; Anderson, W; Coxson, H O

    2008-01-01

    Chronic obstructive pulmonary disease (COPD) is a heterogeneous disease and not well understood. The forced expiratory volume in one second is used for the diagnosis and staging of COPD, but there is wide acceptance that it is a crude measure and insensitive to change over shorter periods of time...

  6. Evidence for induced radioresistance from survival and other end points: An introduction

    International Nuclear Information System (INIS)

    Joiner, M.C.

    1994-01-01

    A substantial body of data published during the past 30 years makes a strong case for the existence of cellular radioprotective mechanisms that can be up-regulated in response to exposure to small doses of ionizing radiation. Either these "induced" mechanisms can protect against a subsequent exposure to radiation that may be substantially larger than the initial "priming" or "conditioning" dose, or they may influence the shape of the survival response to single doses so that small radiation exposures are more effective per unit dose than larger exposures above a threshold where the induced radioprotection is triggered. Evidence for these effects comes from studies in vitro with protozoa, algae, higher plant cells, insect cells, mammalian and human cells, and studies on animal models in vivo. Work at the molecular level is now confirming that changes in levels of some cytoplasmic and nuclear proteins, and the increased expression of some genes, may occur within a few hours or even minutes of irradiation. This would be sufficiently quick to explain the phenomenon of induced radioresistance although the precise mechanism, whether by repair, cell cycle control or some other process, remains yet undefined. 35 refs

  7. End Point of Black Ring Instabilities and the Weak Cosmic Censorship Conjecture.

    Science.gov (United States)

    Figueras, Pau; Kunesch, Markus; Tunyasuvunakool, Saran

    2016-02-19

    We produce the first concrete evidence that violation of the weak cosmic censorship conjecture can occur in asymptotically flat spaces of five dimensions by numerically evolving perturbed black rings. For certain thin rings, we identify a new, elastic-type instability dominating the evolution, causing the system to settle to a spherical black hole. However, for sufficiently thin rings the Gregory-Laflamme mode is dominant, and the instability unfolds similarly to that of black strings, where the horizon develops a structure of bulges connected by necks which become ever thinner over time.

  8. End points in discharge cleaning on TFTR (Tokamak Fusion Test Reactor)

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, D.; Dylla, H.F.; Bell, M.G.; Blanchard, W.R.; Bush, C.E.; Gettelfinger, G.; Hawryluk, R.J.; Hill, K.W.; Janos, A.C.; Jobes, F.C.

    1989-07-01

    It has been found necessary to perform a series of first-wall conditioning steps prior to successful high power plasma operation in the Tokamak Fusion Test Reactor (TFTR). This series begins with glow discharge cleaning (GDC) and is followed by pulse discharge cleaning (PDC). During machine conditioning, the production of impurities is monitored by a Residual Gas Analyzer (RGA). PDC is made in two distinct modes: Taylor discharge cleaning (TDC), where the plasma current is kept low (15--50 kA) and of short duration (50 ms) by means of a relatively high prefill pressure, and aggressive PDC, where lower prefill pressure and higher toroidal field result in higher current (200--400 kA) limited by disruptions at q(a) ≈ 3 at ≈250 ms. At a constant repetition rate of 12 discharges/minute, the production rate of H2O, CO, or other impurities has been found to be an unreliable measure of progress in cleaning. However, the ability to produce aggressive PDC with substantial limiter heating, but without the production of x-rays from runaway electrons, is an indication that TDC is no longer necessary after ≈10^5 pulses. During aggressive PDC, the uncooled limiters are heated by the plasma from the bakeout temperature of 150 °C to about 250 °C over a period of three to eight hours. This limiter heating is important to enhance the rate at which H2O is removed from the graphite limiter. 14 refs., 3 figs., 1 tab.

  9. End points in discharge cleaning on TFTR [Tokamak Fusion Test Reactor

    International Nuclear Information System (INIS)

    Mueller, D.; Dylla, H.F.; Bell, M.G.

    1989-07-01

    It has been found necessary to perform a series of first-wall conditioning steps prior to successful high power plasma operation in the Tokamak Fusion Test Reactor (TFTR). This series begins with glow discharge cleaning (GDC) and is followed by pulse discharge cleaning (PDC). During machine conditioning, the production of impurities is monitored by a Residual Gas Analyzer (RGA). PDC is made in two distinct modes: Taylor discharge cleaning (TDC), where the plasma current is kept low (15--50 kA) and of short duration (50 ms) by means of a relatively high prefill pressure, and aggressive PDC, where lower prefill pressure and higher toroidal field result in higher current (200--400 kA) limited by disruptions at q(a) ≈ 3 at ≈250 ms. At a constant repetition rate of 12 discharges/minute, the production rate of H2O, CO, or other impurities has been found to be an unreliable measure of progress in cleaning. However, the ability to produce aggressive PDC with substantial limiter heating, but without the production of x-rays from runaway electrons, is an indication that TDC is no longer necessary after ≈10^5 pulses. During aggressive PDC, the uncooled limiters are heated by the plasma from the bakeout temperature of 150 °C to about 250 °C over a period of three to eight hours. This limiter heating is important to enhance the rate at which H2O is removed from the graphite limiter. 14 refs., 3 figs., 1 tab.

  10. Challenges in translating end points from trials to observational cohort studies in oncology

    Directory of Open Access Journals (Sweden)

    Ording AG

    2016-06-01

    Anne Gulbech Ording,1 Deirdre Cronin-Fenton,1 Vera Ehrenstein,1 Timothy L Lash,1,2 John Acquavella,1 Mikael Rørth,1 Henrik Toft Sørensen1 1Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark; 2Department of Epidemiology, Rollins School of Public Health, Emory University, Atlanta, GA, USA Abstract: Clinical trials are considered the gold standard for examining drug efficacy and for approval of new drugs. Medical databases and population surveillance registries are valuable resources for post-approval observational research and are increasingly used in studies of the benefits and risks of new cancer drugs. Here, we address the challenges in translating endpoints from oncology trials to observational studies. Registry-based cohort studies can investigate real-world safety issues – including previously unrecognized concerns – by examining rare endpoints or multiple endpoints at once. In contrast to clinical trials, observational cohort studies typically do not exclude real-world patients from clinical practice, such as old and frail patients with comorbidity. The observational cohort study complements the clinical trial by examining the effectiveness of interventions applied in clinical practice and by providing evidence on long-term clinical outcomes, which are often not feasible to study in a clinical trial. Various endpoints can be included in clinical trials, such as hard endpoints, soft endpoints, surrogate endpoints, and patient-reported endpoints. Each endpoint has its strengths and limitations for use in research studies. Endpoints used in oncology trials are often not applicable in observational cohort studies, which are limited by the setting of standard clinical practice and by non-standardized endpoint determination. Observational studies can be more helpful in moving research forward if they restrict focus to appropriate and valid endpoints. 
Keywords: endpoint determination, medical oncology, treatment outcome, neoplasms, research design

  11. Comparison between some determination methods of residual styrene in plastic scintillators

    International Nuclear Information System (INIS)

    Bezuglyi, V.D.; Ponomarev, Yu.P.; Gunder, O.A.; Biteman, V.B.; Senchishin, V.G.

    1988-01-01

    Scintillators made of plastic materials based on polystyrene with addition of p-terphenyl and 1,4-di-[(2,5-phenyl)oxazolyl]benzene have found wide application, principally in the detection of radioactivity. The stability of the scintillating characteristics of these materials depends to a great degree on the concentration of the residual monomer, and for this reason it is important to have a sufficiently convenient method for its determination. We investigated the bromometric and acid-base titration methods with visual and potentiometric titration end-point detection. We also examined direct and indirect polarographic methods using the electroreduction of the mercury acetate complex of the monomer. We checked the methods on a scintillator sample and on synthetic mixtures, i.e., mixtures of monomer, polymer, and p-terphenyl. We compared the determination results for styrene and showed that the most accurate procedure is the bromometric determination with potentiometric indication of the end point.

  12. Automated back titration method to measure phosphate

    International Nuclear Information System (INIS)

    Comer, J.; Tehrani, M.; Avdeef, A.; Ross, J. Jr.

    1987-01-01

    Phosphate was measured in soda drinks and as an additive in flour by a back-titration method in which phosphate was precipitated with lanthanum and the excess lanthanum was titrated with fluoride. All measurements were performed using the Orion fluoride electrode and the Orion 960 Autochemistry System. In most commercial automatic titrators, the inflection point of the titration curve, calculated from the first derivative of the curve, is used to find the equivalence point of the titration. The inflection technique is compared with a technique based on Gran functions, which uses data collected after the end point and predicts the equivalence point accordingly.
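
    The Gran-function approach linearizes readings taken past the end point, so the equivalence volume follows from the x-intercept of a straight-line fit. A minimal sketch with synthetic Nernstian data; V0, the electrode slope, and the readings are invented for illustration:

```python
import numpy as np

# Hypothetical data: titrant volume V (mL) and electrode potential E (mV)
# recorded past the equivalence point; s is the Nernstian slope.
V0 = 50.0          # initial sample volume, mL
s = 59.16          # mV per decade at 25 °C
V = np.array([10.5, 11.0, 11.5, 12.0])
Veq_true = 10.0    # assumed end point used to simulate the potentials
E = s * np.log10((V - Veq_true) / (V0 + V))   # ideal Nernstian response

# Gran function: G = (V0 + V) * 10^(E/s) is proportional to (V - Veq),
# so a straight-line fit gives the end point as the x-intercept.
G = (V0 + V) * 10 ** (E / s)
slope, intercept = np.polyfit(V, G, 1)
v_end = -intercept / slope
print(round(v_end, 3))   # → 10.0
```

    Because only post-end-point data enter the fit, the end point can be predicted from a few aliquots without titrating through the steep inflection region.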

  13. Intermolecular interactions in the condensed phase: Evaluation of semi-empirical quantum mechanical methods.

    Science.gov (United States)

    Christensen, Anders S; Kromann, Jimmy C; Jensen, Jan H; Cui, Qiang

    2017-10-28

    To facilitate further development of approximate quantum mechanical methods for condensed phase applications, we present a new benchmark dataset of intermolecular interaction energies in the solution phase for a set of 15 dimers, each containing one charged monomer. The reference interaction energy in solution is computed via a thermodynamic cycle that integrates dimer binding energy in the gas phase at the coupled cluster level and solute-solvent interaction with density functional theory; the estimated uncertainty of such calculated interaction energy is ±1.5 kcal/mol. The dataset is used to benchmark the performance of a set of semi-empirical quantum mechanical (SQM) methods that include DFTB3-D3, DFTB3/CPE-D3, OM2-D3, PM6-D3, PM6-D3H+, and PM7 as well as the HF-3c method. We find that while all tested SQM methods tend to underestimate binding energies in the gas phase with a root-mean-squared error (RMSE) of 2-5 kcal/mol, they overestimate binding energies in the solution phase with an RMSE of 3-4 kcal/mol, with the exception of DFTB3/CPE-D3 and OM2-D3, for which the systematic deviation is less pronounced. In addition, we find that HF-3c systematically overestimates binding energies in both gas and solution phases. As most approximate QM methods are parametrized and evaluated using data measured or calculated in the gas phase, the dataset represents an important first step toward calibrating QM based methods for application in the condensed phase where polarization and exchange repulsion need to be treated in a balanced fashion.

  14. Intermolecular interactions in the condensed phase: Evaluation of semi-empirical quantum mechanical methods

    Science.gov (United States)

    Christensen, Anders S.; Kromann, Jimmy C.; Jensen, Jan H.; Cui, Qiang

    2017-10-01

    To facilitate further development of approximate quantum mechanical methods for condensed phase applications, we present a new benchmark dataset of intermolecular interaction energies in the solution phase for a set of 15 dimers, each containing one charged monomer. The reference interaction energy in solution is computed via a thermodynamic cycle that integrates dimer binding energy in the gas phase at the coupled cluster level and solute-solvent interaction with density functional theory; the estimated uncertainty of such calculated interaction energy is ±1.5 kcal/mol. The dataset is used to benchmark the performance of a set of semi-empirical quantum mechanical (SQM) methods that include DFTB3-D3, DFTB3/CPE-D3, OM2-D3, PM6-D3, PM6-D3H+, and PM7 as well as the HF-3c method. We find that while all tested SQM methods tend to underestimate binding energies in the gas phase with a root-mean-squared error (RMSE) of 2-5 kcal/mol, they overestimate binding energies in the solution phase with an RMSE of 3-4 kcal/mol, with the exception of DFTB3/CPE-D3 and OM2-D3, for which the systematic deviation is less pronounced. In addition, we find that HF-3c systematically overestimates binding energies in both gas and solution phases. As most approximate QM methods are parametrized and evaluated using data measured or calculated in the gas phase, the dataset represents an important first step toward calibrating QM based methods for application in the condensed phase where polarization and exchange repulsion need to be treated in a balanced fashion.
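
    The gas-phase underbinding versus solution-phase overbinding reported above is the distinction between a signed mean error and an RMSE; a toy sketch with invented energies (not values from the dataset):

```python
import numpy as np

# Hypothetical example: reference interaction energies vs. one method's
# predictions, in kcal/mol.
reference = np.array([-10.2, -7.5, -12.1, -5.3])
predicted = np.array([ -8.9, -6.8, -10.5, -4.6])

errors = predicted - reference
rmse = np.sqrt(np.mean(errors**2))   # overall accuracy
bias = np.mean(errors)               # signed mean error: systematic deviation
print(round(rmse, 3), round(bias, 3))
```

    A large bias close in magnitude to the RMSE, as here, indicates a systematic shift rather than random scatter.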

  15. Cation solvation with quantum chemical effects modeled by a size-consistent multi-partitioning quantum mechanics/molecular mechanics method.

    Science.gov (United States)

    Watanabe, Hiroshi C; Kubillus, Maximilian; Kubař, Tomáš; Stach, Robert; Mizaikoff, Boris; Ishikita, Hiroshi

    2017-07-21

    In the condensed phase, quantum chemical properties such as many-body effects and intermolecular charge fluctuations are critical determinants of the solvation structure and dynamics. Thus, a quantum mechanical (QM) molecular description is required for both solute and solvent to incorporate these properties. However, it is challenging to conduct molecular dynamics (MD) simulations for condensed systems of sufficient scale when adopting QM potentials. To overcome this problem, we recently developed the size-consistent multi-partitioning (SCMP) quantum mechanics/molecular mechanics (QM/MM) method and realized stable and accurate MD simulations applying the QM potential to a benchmark system. In the present study, as the first application of the SCMP method, we have investigated the structures and dynamics of Na+, K+, and Ca2+ solutions based on nanosecond-scale sampling, 100 times longer than conventional QM-based sampling. Furthermore, we have evaluated two dynamic properties, the diffusion coefficient and difference spectra, with high statistical certainty; the calculation of these properties has not previously been possible within the conventional QM/MM framework. Based on our analysis, we have quantitatively evaluated the quantum chemical solvation effects, which show distinct differences between the cations.
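
    A diffusion coefficient of the kind evaluated here is typically obtained from the mean-square displacement via the Einstein relation, MSD(t) = 6Dt in three dimensions. A sketch on a synthetic random-walk trajectory standing in for QM/MM output; the time step and step size are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.001                       # assumed time step, ps
steps = rng.normal(0.0, 0.05, size=(20000, 3))
traj = np.cumsum(steps, axis=0)  # positions of one ion, nm

# Time-lag-averaged mean-square displacement.
lags = np.arange(1, 200)
msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                for lag in lags])

# Einstein relation: the slope of MSD vs. time equals 6D.
slope = np.polyfit(lags * dt, msd, 1)[0]
D = slope / 6.0                  # nm^2/ps
print(D)
```

    Longer sampling narrows the statistical uncertainty of the fitted slope, which is why nanosecond-scale trajectories matter for such estimates.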

  16. The interpolation method based on endpoint coordinate for CT three-dimensional image

    International Nuclear Information System (INIS)

    Suto, Yasuzo; Ueno, Shigeru.

    1997-01-01

    Image interpolation is frequently used to improve slice resolution so that it approaches the in-plane spatial resolution; improved quality of reconstructed three-dimensional images can be attained as a result. Linear interpolation is a well-known and widely used method. The distance-image method, a non-linear interpolation technique, is also used, converting CT-value images to distance images. This paper describes a newly developed method that makes use of end-point coordinates: CT-value images are first converted to binary images by thresholding, and sequences of 1-valued pixels are then traced in the vertical or horizontal direction. Each sequence of 1-valued pixels is regarded as a line segment with a start point and an end point. For each pair of adjacent line segments, another line segment is composed by spatial interpolation of the start and end points. Binary slice images are constructed from the composed line segments. Three-dimensional images were reconstructed from clinical X-ray CT images using three different interpolation methods, and their quality and processing speed were evaluated and compared. (author)
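
    The end-point coordinate idea can be sketched on one pair of binary rows from adjacent slices; this pure-Python sketch assumes, for simplicity, that segments correspond one-to-one in order, which the paper's full method need not:

```python
def runs(row):
    """Return (start, end) index pairs of consecutive 1-runs in a binary list."""
    segs, start = [], None
    for i, v in enumerate(row + [0]):   # sentinel 0 closes a trailing run
        if v and start is None:
            start = i
        elif not v and start is not None:
            segs.append((start, i - 1))
            start = None
    return segs

def interpolate_rows(row_a, row_b):
    """Compose an intermediate binary row by interpolating segment end points."""
    out = [0] * len(row_a)
    for (s1, e1), (s2, e2) in zip(runs(row_a), runs(row_b)):
        s, e = round((s1 + s2) / 2), round((e1 + e2) / 2)
        out[s:e + 1] = [1] * (e - s + 1)
    return out

a = [0, 1, 1, 1, 0, 0, 0, 0]
b = [0, 0, 0, 1, 1, 1, 1, 0]
print(interpolate_rows(a, b))  # → [0, 0, 1, 1, 1, 0, 0, 0]
```

    Unlike plain linear interpolation of pixel values, this preserves a crisp binary contour in the composed slice.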

  17. A rapid method for the determination of some antihypertensive and antipyretic drugs by thermometric titrimetry.

    Science.gov (United States)

    Abbasi, U M; Chand, F; Bhanger, M I; Memon, S A

    1986-02-01

    A simple and rapid method is described for the direct thermometric determination of milligram amounts of methyl dopa, propranolol hydrochloride, 1-phenyl-3-methylpyrazolone (MPP) and 2,3-dimethyl-1-phenylpyrazol-5-one (phenazone) in the presence of excipients. The compounds are reacted with N'-bromosuccinimide and the heat of reaction is used to determine the end-point of the titration. The time required is approximately 2 min, and the accuracy is analytically acceptable.

  18. Studies on the removal of interference of iron in the determination of uranium by direct titration with ammonium meta vanadate method

    International Nuclear Information System (INIS)

    Chavan, A.A.; Charyulu, M.M.

    2009-01-01

    To determine the uranium content in metal powder and alloys, the method routinely used in the NUMAC control laboratory is dissolution of the sample in 10 M phosphoric acid under heating, followed by determination of uranium by the ammonium metavanadate method with visual end-point indication. If iron is present, it interferes quantitatively. The method has been modified to remove the interference of iron by dissolving the samples in concentrated phosphoric acid; Fe2+ is quantitatively oxidized to Fe3+ by nitric acid prior to analysis. (author)

  19. An investigation to compare the performance of methods for the determination of free acid in highly concentrated solutions of plutonium and uranium nitrate

    International Nuclear Information System (INIS)

    Crossley, D.

    1980-08-01

    An investigation has been carried out to compare the performance of the direct titration method and the indirect mass balance method, for the determination of free acid in highly concentrated solutions of uranium nitrate and plutonium nitrate. The direct titration of free acid with alkali is carried out in a fluoride medium to avoid interference from the hydrolysis of uranium or plutonium, while free acid concentration by the mass balance method is obtained by calculation from the metal concentration, metal valency state, and total nitrate concentration in a sample. The Gran plot end-point prediction technique has been used extensively in the investigation to gain information concerning the hydrolysis of uranium and plutonium in fluoride media and in other complexing media. The use of the Gran plot technique has improved the detection of the end-point of the free acid titration which gives an improvement in the precision of the determination. The experimental results obtained show that there is good agreement between the two methods for the determination of free acidity, and that the precision of the direct titration method in a fluoride medium using the Gran plot technique to detect the end-point is 0.75% (coefficient of variation), for a typical separation plant plutonium nitrate solution. The performance of alternative complexing agents in the direct titration method has been studied and is discussed. (author)
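The Gran-plot extrapolation that the investigation relies on can be sketched for a simple strong acid-strong base titration. Before the equivalence point, G = (V0 + V)·10^(−pH) is linear in titrant volume V and extrapolates to zero at the equivalence volume. The concentrations below are illustrative, not the plant solutions analysed in the report:

```python
import numpy as np

# Simulated pre-equivalence titration of 50 mL of 0.100 M strong acid
# with 0.100 M strong base (hypothetical numbers for illustration).
V0, Ca, Cb = 50.0, 0.100, 0.100
V = np.linspace(30.0, 45.0, 8)              # titrant volumes, mL
H = (Ca * V0 - Cb * V) / (V0 + V)           # [H+] before equivalence
pH = -np.log10(H)                           # what the electrode reports

# Gran function: linear in V, crosses zero at the equivalence volume Ve.
G = (V0 + V) * 10.0 ** (-pH)
slope, intercept = np.polyfit(V, G, 1)
Ve = -intercept / slope
```

Because only a few pre-equivalence points are needed for the linear fit, the end point can be predicted without titrating through the steep (and noisy) region of the curve, which is where the precision gain comes from.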

  20. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2015-10-01

    Full Text Available In order to guarantee the stable operation of shearers and promote the construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the disadvantages of traditional detectors: large size, contact measurement and low identification rates. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is applied to the sound. End-point continuation based on stored operating data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. The energy and standard deviation of the remaining IMFs are then extracted as features, and a PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application demonstrate the efficiency and correctness of the proposed method.
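The IMF-selection step can be sketched as follows. The exact criterion is only loosely described in the abstract, so the rule below (keep IMFs whose correlation with the first IMF is at least the average correlation) is one plausible reading, and the signals are synthetic stand-ins for real decomposition output:

```python
import numpy as np

def select_imfs(imfs):
    """Keep IMFs whose absolute correlation with the first IMF is at
    least the average correlation (an assumed reading of the rule)."""
    ref = imfs[0]
    corrs = np.array([abs(np.corrcoef(ref, imf)[0, 1]) for imf in imfs])
    threshold = corrs.mean()
    return [i for i, c in enumerate(corrs) if c >= threshold]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
imf1 = np.sin(2 * np.pi * 40 * t)                # dominant oscillation
imf2 = 0.8 * np.sin(2 * np.pi * 40 * t + 0.1)    # strongly related mode
imf3 = rng.normal(size=t.size)                   # noise-like residual mode
kept = select_imfs([imf1, imf2, imf3])
```

The noise-like component falls well below the average correlation and is discarded, which mirrors the paper's goal of getting rid of undesirable IMF components before feature extraction.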

  1. Comparing methods to combine functional loss and mortality in clinical trials for amyotrophic lateral sclerosis

    Directory of Open Access Journals (Sweden)

    van Eijk RPA

    2018-03-01

    Full Text Available Ruben PA van Eijk,1 Marinus JC Eijkemans,2 Dimitris Rizopoulos,3 Leonard H van den Berg,4,* Stavros Nikolakopoulos5,* 1Department of Neurology, University Medical Center Utrecht, Utrecht, the Netherlands; 2Department of Biostatistics, University Medical Center Utrecht, Utrecht, the Netherlands; 3Department of Biostatistics, Erasmus University Medical Center, Rotterdam, the Netherlands; 4Department of Neurology, University Medical Center Utrecht, Utrecht, the Netherlands; 5Department of Biostatistics, University Medical Center Utrecht, Utrecht, the Netherlands *These authors contributed equally to this work Objective: Amyotrophic lateral sclerosis (ALS) clinical trials based on single end points only partially capture the full treatment effect when both function and mortality are affected, and may falsely dismiss efficacious drugs as futile. We aimed to investigate the statistical properties of several strategies for the simultaneous analysis of function and mortality in ALS clinical trials. Methods: Based on the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database, we simulated longitudinal patterns of functional decline, defined by the revised amyotrophic lateral sclerosis functional rating scale (ALSFRS-R), and conditional survival time. Different treatment scenarios with varying effect sizes were simulated, with follow-up ranging from 12 to 18 months. We considered the following analytical strategies: 1) Cox model; 2) linear mixed-effects (LME) model; 3) omnibus test based on the Cox and LME models; 4) composite time-to-6-point-decrease or death; 5) combined assessment of function and survival (CAFS); and 6) a test based on the joint modeling framework. For each analytical strategy, we calculated the empirical power and sample size. Results: Both the Cox and LME models have increased false-negative rates when treatment exclusively affects either function or survival. The joint model has superior power compared to other strategies.
The composite end point
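The notion of "empirical power" used in the comparison above is simply the fraction of simulated trials in which the test rejects the null. A minimal Monte Carlo sketch, using a normal approximation to a two-arm comparison of 12-month functional change (all effect sizes and variances below are illustrative, not the PRO-ACT-derived simulation models of the paper):

```python
import numpy as np

def empirical_power(effect, n_per_arm=100, n_sim=2000, crit_z=1.96, seed=1):
    """Monte Carlo estimate of power: simulate two-arm trials and count
    how often the standardized difference in means exceeds the critical
    value of a two-sided test."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        placebo = rng.normal(-12.0, 8.0, n_per_arm)          # mean decline, SD
        active = rng.normal(-12.0 + effect, 8.0, n_per_arm)  # treated arm
        se = np.sqrt(placebo.var(ddof=1) / n_per_arm +
                     active.var(ddof=1) / n_per_arm)
        if abs((active.mean() - placebo.mean()) / se) > crit_z:
            hits += 1
    return hits / n_sim

power_null = empirical_power(effect=0.0)   # should stay near the alpha level
power_alt = empirical_power(effect=4.0)    # a 4-point slowing of decline
```

Running each analytical strategy through such a loop, and increasing the per-arm sample size until the power reaches a target (say 80%), is how empirical power maps onto required sample size.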

  2. Determination of uranium by an amperometric method

    International Nuclear Information System (INIS)

    John, Mary; Venkataramana, P.; Vaidyanathan, S.; Natarajan, P.R.

    1981-01-01

    An amperometric method has been standardised for the determination of uranium. Uranium is reduced to its quadrivalent state in concentrated phosphoric acid medium with ferrous iron. The excess iron is destroyed with nitric acid in the presence of Mo(VI). The medium is diluted and U(IV) is titrated with standard potassium dichromate to an amperometric end point, using a pair of identical platinum wires as electrodes. The reagent volumes and uranium quantities have been scaled down to 30 mL and 2-5 mg of uranium in the present work, with a view to minimising the problems associated with recovery of plutonium. The results are quantitative, with an R.S.D. of 0.2% in the present version of weight-based titrations. (author)

  3. Method for linearizing the potentiometric curves of precipitation titration in nonaqueous and aqueous-organic solutions

    International Nuclear Information System (INIS)

    Bykova, L.N.; Chesnokova, O.Ya.; Orlova, M.V.

    1995-01-01

    The method for linearizing the potentiometric curves of precipitation titration is studied for its application to the determination of halide ions (Cl⁻, Br⁻, I⁻) in dimethylacetamide and dimethylformamide, in which titration is complicated by additional equilibrium processes. It is found that the linearization method permits the titrant volume at the end point of the titration to be determined with high accuracy, even for titration curves without a potential jump in the proximity of the equivalence point (5 × 10⁻⁵ M). 3 refs., 2 figs., 3 tabs

  4. Mapping the end points of large deletions affecting the hprt locus in human peripheral blood cells and cell lines

    International Nuclear Information System (INIS)

    Nelson, S.L.; Grosovsky, A.J.; Jones, I.M.; Burkhart-Schultz, K.; Fuscoe, J.C.

    1995-01-01

    We have examined the extent of HPRT⁻ total gene deletions in three mutant collections: spontaneous and X-ray-induced deletions in TK6 human B lymphoblasts, and HPRT⁻ deletions arising in vivo in T cells. A set of 13 Xq26 STS markers surrounding hprt and spanning approximately 3.3 Mb was used. Each marker was observed to be missing in at least one of the hprt deletion mutants analyzed. The largest deletion observed encompassed at least 3 Mb. Nine deletions extended outside of the mapped region in the centromeric direction (>1.7 Mb). In contrast, only two telomeric deletions extended to marker 342R (1.26 Mb), and both exhibited slowed or limited cell growth. These data suggest the existence of a gene, within the vicinity of 342R, which establishes the telomeric limit of recoverable deletions. Most (25/41) X-ray-induced total gene deletion mutants exhibited marker loss, but only 1/8 of the spontaneous deletions encompassed any Xq26 markers (P = 0.0187). Furthermore, nearly half (3/8) of the spontaneous 3' total deletion breakpoints were within 14 kb of the hprt coding sequence. In contrast, 40/41 X-ray-induced HPRT⁻ total deletions extended beyond this point (P = 0.011). Although the overall representation of total gene deletions in the in vivo spectrum is low, 4/5 encompass Xq26 markers flanking hprt. This pattern differs significantly from spontaneous HPRT⁻ large deletions occurring in vitro (P = 0.032) but resembles the spectrum of X-ray-induced deletions. 24 refs., 6 figs., 1 tab

  5. Constraints on grip selection in hemiparetic cerebral palsy: Effects of lesional side, end-point accuracy and context.

    NARCIS (Netherlands)

    Steenbergen, B.; Meulenbroek, R.G.J.; Rosenbaum, D.A.

    2004-01-01

    This study was concerned with the selection criteria used for grip planning in adolescents with left or right hemiparetic cerebral palsy. In the first experiment participants picked up a pencil and placed the tip in a pre-defined target region. We varied the size of the target to test the hypothesis

  6. Quantitative Studies of Sublingual PCO2 as a Resuscitation End-Point in the Diagnosis and Treatment of Hemorrhagic Shock

    National Research Council Canada - National Science Library

    Ivatury, Pao

    2005-01-01

    This clinical study is examining the relationship between sublingual PCO2 (PslCO2) to real-time changes in microcirculatory blood flow of the sublingual mucosa in victims of traumatic and hemorrhagic shock...

  7. Spawning and multiple end points of the embryo-larval bioassay of the blue mussel Mytilus galloprovincialis (Lmk).

    Science.gov (United States)

    Resgalla, Charrid

    2016-10-01

    Since the 1960s, little has been done to improve and stimulate the use of short-duration chronic bioassays of bivalve embryos, particularly in mussels. However, these test organisms offer great advantages over other groups, owing to the ease of obtaining breeders from cultivation systems or the environment at any time, and to their high sensitivity to chemicals and contaminants. To contribute some methodological aspects, this study uses techniques to stimulate spawning or improve the collection of gametes for use in bioassays with the mussel Mytilus galloprovincialis. It also evaluates different criteria for determining the effect on the larvae, for estimation of EC50 and NOEC values, based on morphological analysis of developmental delay and the biometrics of the larvae. KCl proved to be a reliable inducer of spawning, with positive responses in 10 of the 12 months of the year tested. Moreover, this chemical, in association with NH4Cl, demonstrated the capacity to activate immature oocytes obtained from extirpated gonads, enabling an improvement in fertilization rates. The different criteria adopted to determine the effects on the larvae in the assays with reference toxicants (SDS and K2Cr2O7) resulted in EC50 and NOEC values without significant differences, indicating reliability of the results and freedom in the choice of the criterion of effect to be adopted in the trials.

  8. Validity of early MRI structural damage end points and potential impact on clinical trial design in rheumatoid arthritis

    DEFF Research Database (Denmark)

    Baker, Joshua F; Conaghan, Philip G; Emery, Paul

    2016-01-01

    Wilcoxon rank sum tests and tests of proportion estimated the sample size required to detect differences between combination therapy (methotrexate+golimumab) and methotrexate-monotherapy arms in (A) change in damage score and (B) proportion of patients progressing. RESULTS: Patients with early MRI...

  9. A comparison of cationic polymerization and esterification for end-point detection in the catalytic thermometric titration of organic bases.

    Science.gov (United States)

    J Greenhow, E; Viñas, P

    1984-08-01

    A systematic comparison has been made of two indicator systems for the non-aqueous catalytic thermometric titration of strong and weak organic bases. The indicator reagents, alpha-methylstyrene and mixtures of acetic anhydride and hydroxy compounds, are shown to give results (for 14 representative bases) which do not differ significantly in coefficient of variation or titration error. Calibration graphs for all the samples, in the range 0.01-0.1 meq, are linear, with correlation coefficients of 0.995 or better. Aniline, benzylamine, n-butylamine, morpholine, pyrrole, l-dopa, alpha-methyl-l-dopa, dl-alpha-alanine, dl-leucine and l-cysteine cannot be determined when acetic anhydride is present in the sample solution, but some primary and secondary amines can. This is explained in terms of the rates of acetylation of the amino groups.

  10. Determination of aluminum by four analytical methods

    International Nuclear Information System (INIS)

    Hanson, T.J.; Smetana, K.M.

    1975-11-01

    Four procedures have been developed for determining the aluminum concentration in basic matrices. Atomic Absorption Spectroscopy (AAS) was the routine method of analysis. Citrate was required to complex the aluminum and eliminate matrix effects. AAS was the least accurate of the four methods studied and was adversely affected by high aluminum concentrations. The Fluoride Electrode Method was the most accurate and precise of the four methods. A Gran's Plot determination was used to locate the end point, and the average standard recovery was 100% ± 2%. The Thermometric Titration Method was the fastest method for determining aluminum and could also determine hydroxide concentration at the same time. Standard recoveries were 100% ± 5%. The pH Electrode Method also measures aluminum and hydroxide content simultaneously, but is less accurate and more time consuming than the thermometric titration. Samples were analyzed using all four methods and the results were compared to determine the strengths and weaknesses of each. On the basis of these comparisons, conclusions were drawn concerning the application of each method to our laboratory needs

  11. Comparison of wet-chemical methods for determination of lipid hydroperoxides

    DEFF Research Database (Denmark)

    Nielsen, Nina Skall; Timm Heinrich, Maike; Jacobsen, Charlotte

    2003-01-01

    Five methods for determination of lipid hydroperoxides were evaluated, including two iodometric procedures involving a titration and a spectrophotometric micro method, and three other spectrophotometric methods, namely the ferro, International Dairy Federation (IDF) and FOX2 (ferrous oxidation in xylenol orange) methods. Peroxide values determined in a range of food products by these five methods gave different results. The ferro method required large amounts of solvent (50 mL/sample); the FOX2 method had a low range (0.005-0.04 μmol hydroperoxide); the end point detection of the titration method was subjective and required a large amount of sample (1 g); and the micro method was sensitive to interruptions during execution. Therefore, only the modified IDF method was chosen for further testing and validation. Stability tests of the standard curve showed a variation coefficient of 4% and within runs...

  12. Determination of water in nuclear materials by means of the Karl Fischer method

    International Nuclear Information System (INIS)

    Pereira, W.; Rocha, S.M.R.; Atalla, L.T.; Abrao, A.

    1987-06-01

    The Karl Fischer method was adapted for water determination in uranium compounds and substances of nuclear interest, using a commercial apparatus. The experimental conditions for the analysis of U₃O₈, UO₃, UO₂, UF₄·KF·nHF and (NH₄)₄UO₂(CO₃)₃ were established. The influence of the agitation and contact time between sample and solvent, of the sample weight, of the determination of the reaction end point and of the sample granulometry on the precision and accuracy of the results was also studied. (Author) [pt

  13. A single-beam titration method for the quantification of open-path Fourier transform infrared spectroscopy

    International Nuclear Information System (INIS)

    Sung, Lung-Yu; Lu, Chia-Jung

    2014-01-01

    This study introduced a quantitative method that can be used to measure the concentration of analytes directly from a single-beam spectrum of open-path Fourier transform infrared spectroscopy (OP-FTIR). The peak shapes of the analytes in a single-beam spectrum were gradually canceled (i.e., "titrated") by dividing out aliquots of a standard transmittance spectrum with a known concentration, and the sum of the squared differential of the synthetic spectrum was calculated as an indicator for the end point of this titration. The quantity of the standard transmittance spectrum needed to reach the end point can then be used to calculate the concentrations of the analytes. A NIST-traceable gas standard containing six known compounds was used to compare the quantitative accuracy of this titration method with that of classic least squares (CLS) applied to a closed-cell FTIR spectrum. Continuous FTIR analysis of an industrial exhaust stack showed that concentration trends were consistent between the CLS and titration methods. The titration method allowed quantification to be performed without a clean single-beam background spectrum, which is beneficial for field measurements with OP-FTIR. Persistent constituents of the atmosphere, such as NH₃, CH₄ and CO, were successfully quantified using the single-beam titration method with OP-FTIR data; these are normally inaccurate with the CLS method owing to the lack of a suitable background spectrum. Also, the synthetic spectrum at the titration end point contained virtually no analyte peaks, but it did retain the information needed to provide an alternative means of obtaining an ideal single-beam background for OP-FTIR. - Highlights: • Establish single beam titration quantification method for OP-FTIR. • Define the indicator for the end-point of spectrum titration. • An ideal background spectrum can be obtained using single beam titration. • Compare the quantification between titration
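The spectral "titration" loop can be imitated with synthetic data: a smooth single-beam background multiplied by a standard transmittance raised to the unknown fractional concentration, with the sum of squared first differences of the synthetic spectrum as the smoothness indicator. When the analyte peaks cancel, only the smooth background remains and the indicator is minimal. All shapes and numbers below are invented for illustration:

```python
import numpy as np

nu = np.linspace(0.0, 1.0, 400)                  # wavenumber axis (arbitrary)
background = 1.0 + 0.2 * nu                      # smooth single-beam background
absorb = 2.0 * np.exp(-((nu - 0.5) / 0.01) ** 2) # one narrow analyte line
T_std = np.exp(-absorb)                          # standard transmittance, conc = 1
true_frac = 0.6                                  # sample has 0.6x the standard conc
sample = background * T_std ** true_frac         # simulated single-beam spectrum

# "Titrate" the spectrum: divide out increasing amounts of the standard and
# track the roughness of the synthetic spectrum; the score is minimal when
# the analyte peak has been exactly canceled.
fracs = np.linspace(0.0, 1.0, 101)
scores = [np.sum(np.diff(sample / T_std ** a) ** 2) for a in fracs]
end_point = fracs[int(np.argmin(scores))]
```

At the end point the synthetic spectrum reduces to the smooth background, which is exactly the property the authors exploit to recover an ideal single-beam background for OP-FTIR.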

  14. Comparison of Deep Learning With Multiple Machine Learning Methods and Metrics Using Diverse Drug Discovery Data Sets.

    Science.gov (United States)

    Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean

    2017-12-04

    Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint-type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties, as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient and others. Based on ranked normalized scores for the metrics and data sets, Deep Neural Networks (DNN) ranked higher than SVM, which in turn ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained. These results also suggest the need for assessing deep learning further
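The metrics named above all derive from the binary confusion matrix. A self-contained sketch using the standard textbook definitions (not the authors' code) for F1, Cohen's kappa and the Matthews correlation coefficient:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """F1, Cohen's kappa and Matthews correlation coefficient computed
    directly from confusion-matrix counts (standard definitions)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    n = tp + tn + fp + fn
    f1 = 2 * tp / (2 * tp + fp + fn)
    po = (tp + tn) / n                       # observed agreement
    pe = ((tp + fp) * (tp + fn)              # chance agreement
          + (tn + fn) * (tn + fp)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return f1, kappa, mcc

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
f1, kappa, mcc = binary_metrics(y_true, y_pred)
```

Kappa and MCC both correct for chance agreement, which is why studies of imbalanced screening data report them alongside AUC and F1.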

  15. A highly accurate method for determination of dissolved oxygen: Gravimetric Winkler method

    International Nuclear Information System (INIS)

    Helm, Irja; Jalukse, Lauri; Leito, Ivo

    2012-01-01

    Highlights: ► Probably the most accurate method available for dissolved oxygen concentration measurement was developed. ► Careful analysis of uncertainty sources was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. ► This development enables more accurate calibration of dissolved oxygen sensors for routine analysis than has been possible before. - Abstract: A high-accuracy Winkler titration method has been developed for determination of dissolved oxygen concentration. Careful analysis of uncertainty sources relevant to the Winkler method was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. The most important improvements were: gravimetric measurement of all solutions, pre-titration to minimize the effect of iodine volatilization, accurate amperometric end point detection and careful accounting for dissolved oxygen in the reagents. As a result, the developed method is possibly the most accurate method of determination of dissolved oxygen available. Depending on measurement conditions and on the dissolved oxygen concentration, the combined standard uncertainties of the method are in the range of 0.012–0.018 mg dm⁻³, corresponding to the k = 2 expanded uncertainty in the range of 0.023–0.035 mg dm⁻³ (0.27–0.38%, relative). This development enables more accurate calibration of electrochemical and optical dissolved oxygen sensors for routine analysis than has been possible before.

  16. A novel method for the determination of the degree of deacetylation of chitosan by coulometric titration.

    Science.gov (United States)

    Wang, Chao; Yuan, Fang; Pan, Jiabao; Jiao, Shining; Jin, Ling; Cai, Hongwei

    2014-09-01

    A novel method to determine the degree of deacetylation of chitosan is described. In this method, the coulometric titrant OH⁻ is generated by the electrolysis of water. The OH⁻ reacts with the residual hydrochloric acid in the chitosan solution, and the degree of deacetylation is obtained via Faraday's law. The optimized experimental parameters in this study were: 1.0 mol/L KCl as supporting electrolyte, 15.00 mA as the constant-current intensity, a composite glass electrode as the indicating electrode, a double platinum generating electrode-platinum wire auxiliary electrode as the working electrode pair, and pH 3.80 as the titration end point. The degree of deacetylation of four samples, which varied from 70 to 95%, was measured. The results were similar to those from ¹H NMR, and the standard deviations were lower than 0.5%. With the merits of simplicity, convenience, speed, high accuracy and precision, automatic detection of the titration end point and low cost, the proposed method should be very useful in industrial production. Copyright © 2014 Elsevier B.V. All rights reserved.
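The Faraday's-law step can be sketched as a back-titration calculation: electro-generated OH⁻ (n = I·t/F) measures the residual HCl, and the HCl consumed by the sample equals the free amine groups. The monomer masses (161 and 203 g/mol for deacetylated and acetylated units) are standard textbook values, and every sample number below is hypothetical rather than taken from the paper:

```python
F = 96485.0            # C/mol, Faraday constant
M_GLUCOSAMINE = 161.0  # g/mol, deacetylated (free amine) unit
M_ACETYL_UNIT = 203.0  # g/mol, acetylated unit

def degree_of_deacetylation(mass_g, n_hcl_total, current_A, time_s):
    """Back-titration: residual HCl is neutralized by electro-generated
    OH- (Faraday's law, n = I*t/F); HCl consumed by the sample equals
    the moles of free amine groups."""
    n_residual = current_A * time_s / F
    n_amine = n_hcl_total - n_residual
    # Solve mass = n_amine*161 + n_acetyl*203 for the acetylated units.
    n_acetyl = (mass_g - n_amine * M_GLUCOSAMINE) / M_ACETYL_UNIT
    return n_amine / (n_amine + n_acetyl)

dd = degree_of_deacetylation(mass_g=0.0500, n_hcl_total=3.0e-4,
                             current_A=0.015, time_s=250.0)
```

Because the charge I·t can be measured far more precisely than a burette volume, the coulometric route avoids volumetric standardisation altogether, which is where the low standard deviations come from.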

  17. A calorimetric method to determine water activity.

    Science.gov (United States)

    Björklund, Sebastian; Wadsö, Lars

    2011-11-01

    A calorimetric method to determine water activity covering the full range of the water activity scale is presented. A dry stream of nitrogen gas is either passed over the solution whose activity is to be determined or left dry before it is saturated by bubbling through water in an isothermal calorimeter. The unknown activity is in principle determined by comparing the thermal power of vaporization related to the gas stream of unknown activity to that with zero activity. Except for three minor corrections (for pressure drop, non-perfect humidification, and evaporative cooling), the unknown water activity is calculated solely from the water activity end points zero and unity. Thus, there is no need for calibration against references with known water activities. The method has been evaluated at 30 °C by measuring the water activity of seven aqueous sodium chloride solutions ranging from 0.1 mol kg⁻¹ to 3 mol kg⁻¹ and seven saturated aqueous salt solutions (LiCl, MgCl₂, NaBr, NaCl, KCl, KNO₃, and K₂SO₄) with known water activities. The performance of the method was adequate over the complete water activity scale. At high water activities the performance was excellent, which is encouraging as many other methods used for water activity determination have limited performance at high water activities. © 2011 American Institute of Physics
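One plausible reading of the end-point calculation described above is a linear scale between the two calibration-free limits: a dry stream (activity 0) produces the maximum vaporization power, while a stream pre-equilibrated at the sample's activity produces proportionally less. The relation a_w = 1 − P_sample/P_dry below is that reading with the three minor corrections neglected, and the thermal powers are hypothetical numbers:

```python
def water_activity(p_sample_uW, p_dry_uW):
    """Assumed linear interpolation between the activity end points 0 and 1:
    a_w = 1 - P_sample / P_dry (pressure-drop, humidification and
    evaporative-cooling corrections neglected)."""
    return 1.0 - p_sample_uW / p_dry_uW

# Hypothetical vaporization thermal powers in microwatts:
aw = water_activity(p_sample_uW=25.0, p_dry_uW=250.0)
```

The appeal of this scheme is that both end points are physically defined (a perfectly dry stream and saturation in the calorimeter), so no reference solutions of known activity are needed.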

  18. Protein structure refinement using a quantum mechanics-based chemical shielding predictor

    DEFF Research Database (Denmark)

    Bratholm, Lars Andersen; Jensen, Jan Halborg

    2017-01-01

    The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and CB chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic

  19. Linear Strength Vortex Panel Method for NACA 4412 Airfoil

    Science.gov (United States)

    Liu, Han

    2018-03-01

    The objective of this article is to formulate numerical models for two-dimensional potential flow over the NACA 4412 airfoil using the linear-strength vortex panel method. By satisfying the no-penetration boundary condition and the Kutta condition, the circulation density at each boundary point (the end points of the panels) is obtained, from which the surface pressure distribution and lift coefficient of the airfoil are predicted and validated against Xfoil, an interactive program for the design and analysis of airfoils. The sensitivity of the results to the number of panels is also investigated: the results are sensitive to the panel number when it ranges from 10 to 160, and with increasing panel number (N>160) they become relatively insensitive to it.
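The panel end points themselves come from the standard NACA four-digit thickness and camber equations (for 4412: 4% camber at 40% chord, 12% thickness). A sketch of the geometry step only, not the vortex-strength solver, using the closed-trailing-edge thickness coefficient:

```python
import numpy as np

def naca4412_points(n_panels=160):
    """Panel end points for a NACA 4412 airfoil from the standard
    four-digit thickness and camber equations (closed trailing edge)."""
    m, p, t = 0.04, 0.4, 0.12
    # Cosine spacing clusters points at the leading and trailing edges.
    beta = np.linspace(0.0, np.pi, n_panels // 2 + 1)
    x = 0.5 * (1.0 - np.cos(beta))
    yt = 5 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x ** 2
                  + 0.2843 * x ** 3 - 0.1036 * x ** 4)
    yc = np.where(x < p,
                  m / p ** 2 * (2 * p * x - x ** 2),
                  m / (1 - p) ** 2 * (1 - 2 * p + 2 * p * x - x ** 2))
    # Upper surface from trailing edge to leading edge, then lower back.
    xs = np.concatenate([x[::-1], x[1:]])
    ys = np.concatenate([(yc + yt)[::-1], (yc - yt)[1:]])
    return xs, ys

xs, ys = naca4412_points()
```

Cosine spacing is the conventional choice here because it concentrates panels where the curvature (and hence the circulation-density gradient) is largest, which is part of why the solution converges by N ≈ 160 panels.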

  20. Method and apparatus for surface characterization and process control utilizing radiation from desorbed particles

    International Nuclear Information System (INIS)

    Feldman, L.C.; Kraus, J.S.; Tolk, N.H.; Traum, M.M.; Tully, J.C.

    1983-01-01

    Emission of characteristic electromagnetic radiation in the infrared, visible, or UV from excited particles, typically ions, molecules, or neutral atoms, desorbed from solid surfaces by an incident beam of low-momentum probe radiation has been observed. Disclosed is a method for characterizing solid surfaces based on the observed effect, with low-momentum probe radiation consisting of electrons or photons. Further disclosed is a method for controlling manufacturing processes that is also based on the observed effect. The latter method can, for instance, be advantageously applied in integrated circuit-, integrated optics-, and magnetic bubble device manufacture. Specific examples of applications of the method are registering of masks, control of a direct-writing processing beam, end-point detection in etching, and control of a processing beam for laser- or electron-beam annealing or ion implantation

  1. Shining a light on LAMP assays--a comparison of LAMP visualization methods including the novel use of berberine.

    Science.gov (United States)

    Fischbach, Jens; Xander, Nina Carolin; Frohme, Marcus; Glökler, Jörn Felix

    2015-04-01

    The need for simple and effective assays for detecting nucleic acids by isothermal amplification reactions has led to a great variety of end point and real-time monitoring methods. Here we tested direct and indirect methods to visualize the amplification of potato spindle tuber viroid (PSTVd) by loop-mediated isothermal amplification (LAMP) and compared features important for one-pot in-field applications. We compared the performance of magnesium pyrophosphate, hydroxynaphthol blue (HNB), calcein, SYBR Green I, EvaGreen, and berberine. All assays could be used to distinguish between positive and negative samples in visible or UV light. Precipitation of magnesium-pyrophosphate resulted in a turbid reaction solution. The use of HNB resulted in a color change from violet to blue, whereas calcein induced a change from orange to yellow-green. We also investigated berberine as a nucleic acid-specific dye that emits a fluorescence signal under UV light after a positive LAMP reaction. It has a comparable sensitivity to SYBR Green I and EvaGreen. Based on our results, an optimal detection method can be chosen easily for isothermal real-time or end point screening applications.

  2. A digital image-based method for determining total acidity in red wines using acid-base titration without indicator.

    Science.gov (United States)

    Tôrres, Adamastor Rodrigues; Lyra, Wellington da Silva; de Andrade, Stéfani Iury Evangelista; Andrade, Renato Allan Navarro; da Silva, Edvan Cirino; Araújo, Mário César Ugulino; Gaião, Edvaldo da Nóbrega

    2011-05-15

    This work proposes a digital image-based method for the determination of total acidity in red wines by acid-base titration, without using an external indicator or any pre-treatment of the sample. Digital images capture the colour of the emergent radiation, which is complementary to the radiation absorbed by the anthocyanins present in wines. Anthocyanins change colour depending on the pH of the medium, and from the variation of colour in the images obtained during titration, the end point can be localized with accuracy and precision. RGB-based values were employed to build titration curves, and end points were localized from second-derivative curves. The official method recommends potentiometric titration with a NaOH standard solution, with sample dilution until the pH reaches 8.2-8.4. To illustrate the feasibility of the proposed method, titrations of ten red wines were carried out. The results were compared with the reference method, and no statistically significant difference was observed between the results by applying the paired t-test at the 95% confidence level. The proposed method yielded more precise results than the official method, owing to the trivariate nature (RGB) of the measurements associated with digital images. Copyright © 2011 Elsevier B.V. All rights reserved.
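Second-derivative end-point localization works the same way regardless of whether the titration curve comes from an electrode or from an RGB channel: the end point is the inflection of the sigmoidal response, i.e. the zero crossing of the second derivative. A sketch on a synthetic curve (the sigmoid and its parameters are invented for illustration):

```python
import numpy as np

# Synthetic titration curve: a sigmoidal detector response (for example
# one RGB channel) versus titrant volume, equivalence at 10.0 mL.
V = np.linspace(5.0, 15.0, 201)
signal = 1.0 / (1.0 + np.exp(-(V - 10.0) / 0.3))

# First and second derivatives with respect to volume.
d1 = np.gradient(signal, V)
d2 = np.gradient(d1, V)

i = int(np.argmax(d1))          # steepest point, next to the inflection
k = i if d2[i] > 0 else i - 1   # bracket the sign change of d2
# Linear interpolation of the zero crossing of d2 gives the end point:
end_point = V[k] + (V[k + 1] - V[k]) * d2[k] / (d2[k] - d2[k + 1])
```

Interpolating the zero crossing rather than taking the nearest grid point recovers the end point to better than the titrant-increment spacing.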

  3. A quality control method for detecting energy changes of medical accelerators

    International Nuclear Information System (INIS)

    McGinley, P.H.

    2000-01-01

    A description is presented of a simple and sensitive method for detecting a change in the energy of the electrons bombarding the target of medical accelerators. This technique is useful for x-ray beams with end-point energy in the range of 15.7 to 25 MeV. The method is based on the photoactivation of ¹⁶O and ¹⁴N in a small sample of ammonium nitrate. It was found that the ratio of the activity induced in the oxygen to that produced in the nitrogen can be used as a quality control technique to detect a change in the energy of the electrons that bombard the target of the accelerator. An electron energy change of the order of 0.2 MeV can be determined using this method. (author)

  4. Realism and Pragmatism in a mixed methods study.

    Science.gov (United States)

    Allmark, Peter; Machaczek, Katarzyna

    2018-06-01

    A discussion of how adopting a Realist rather than a Pragmatist methodology affects the conduct of mixed methods research. Mixed methods approaches are now extensively employed in nursing and other healthcare research. At the same time, Realist methodology is increasingly used as the philosophical underpinning of research in these areas. However, the standard philosophical underpinning of mixed methods research is Pragmatism, which is generally considered incompatible, or at least at odds, with Realism. This paper argues that Realism can be used as the basis of mixed methods research and that doing so carries advantages over using Pragmatism. A mixed methods study into patient handover reports is used to illustrate how Realism affected its design and how it would have differed had a Pragmatist approach been taken. Design: discussion paper. Data sources: Philosophers Index; Google Scholar. Those undertaking mixed methods research should consider the use of Realist methodology, with the addition of some insights from Pragmatism concerning the start and end points of enquiry. Realism is a plausible alternative methodology for those undertaking mixed methods studies. © 2018 John Wiley & Sons Ltd.

  5. Journal of Biosciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    2006-09-15

    Sep 15, 2006 ... In the present study, a systematic attempt has been made to develop an accurate method for predicting MHC class I restricted T cell epitopes for a large number of MHC class I alleles. Initially, a quantitative matrix (QM)-based method was developed for 47 MHC class I alleles having at least 15 binders.
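    The quantitative matrix (QM) approach mentioned in this record typically scores a 9-mer peptide as the sum of per-position residue weights from an allele-specific matrix trained on known binders. The matrix and peptide below are invented purely for illustration; real QMs are fitted to experimental MHC binding data:

```python
# Toy quantitative matrix: one weight per (position, amino acid).
# All values invented; anchor positions 2 and 9 favour L and V here.
TOY_QM = {pos: {aa: 0.0 for aa in "ACDEFGHIKLMNPQRSTVWY"} for pos in range(9)}
TOY_QM[1]["L"] = 2.0   # position 2 (0-based index 1)
TOY_QM[8]["V"] = 1.5   # position 9 (0-based index 8)

def qm_score(peptide, qm=TOY_QM):
    """Additive QM score of a 9-mer peptide: sum of per-position weights."""
    if len(peptide) != 9:
        raise ValueError("this QM is defined for 9-mers only")
    return sum(qm[i][aa] for i, aa in enumerate(peptide))

print(qm_score("SLYNTVATV"))  # → 3.5 (2.0 for L at pos 2, 1.5 for V at pos 9)
```

A peptide is then predicted to be a binder when its score exceeds an allele-specific threshold chosen from the training data.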

  6. Improvements in biamperometric method for remote analysis of uranium

    International Nuclear Information System (INIS)

    Palamalai, A.; Thankachan, T.S.; Balasubramanian, G.R.

    1979-01-01

    One of the titrimetric methods most suitable for remote operations with master-slave manipulators inside hot cells is the biamperometric method. The biamperometric method for the analysis of uranium reported in the literature is found to give rise to a significant bias, especially with low aliquots of uranium, and the waste volume is also considerable, which is undesirable from the point of view of radioactive waste disposal. In the present method, both the bias and the waste volume are reduced. Addition of vanadyl sulphate is also found necessary to provide a sharp end point in the titration curve. The role of vanadyl sulphate in improving the titration method has been investigated by spectrophotometry and electrometry. A new mechanism for the role of vanadyl sulphate, which is in conformity with observations made in the coulometric titration of uranium, is proposed. Interference from deliberate additions of high concentrations of stable species of fission product elements is found to be negligible. Hence this method is considered highly suitable for remote analysis of uranium in intensely radioactive reprocessing solutions for control purposes, provided radioactivity does not pose new problems. (auth.)

  7. A Vector Printing Method for High-Speed Electrohydrodynamic (EHD) Jet Printing Based on Encoder Position Sensors

    Directory of Open Access Journals (Sweden)

    Thanh Huy Phung

    2018-02-01

    Full Text Available Electrohydrodynamic (EHD) jet printing has been widely used in the field of direct micro-nano patterning applications, due to its high-resolution printing capability. So far, vector line printing using a single nozzle has been used for most EHD printing applications. However, the application has been limited to low-speed printing, to avoid non-uniform line width near the end points where line printing starts and ends. At the end points of vector line printing, the deposited drop amount is likely to be significantly larger than along the rest of the printed line, due to unavoidable acceleration and deceleration. In this study, we propose a method that solves these print quality problems by producing droplets at an equally spaced distance, irrespective of the printing speed. For this purpose, an encoder processing unit (EPU) was developed, so that the jetting trigger could be generated at user-defined spacing from the encoder position signals used for the positioning control of the two linear stages.
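    The core idea of position-triggered jetting can be illustrated in a few lines: convert the desired drop pitch into encoder counts and fire at fixed count intervals, so droplet spacing no longer depends on stage velocity. The encoder resolution and line geometry below are illustrative assumptions; the paper's EPU does this on quadrature encoder signals in hardware:

```python
def trigger_positions(line_length_um, drop_pitch_um, counts_per_um):
    """Encoder-count positions at which to fire the jetting trigger so
    droplets land at an equal pitch independent of stage speed."""
    counts_per_drop = round(drop_pitch_um * counts_per_um)
    total_counts = int(line_length_um * counts_per_um)
    return list(range(0, total_counts + 1, counts_per_drop))

# Hypothetical setup: 1 mm line, 50 um drop pitch, 0.1 um encoder resolution
triggers = trigger_positions(1000, 50, 10)
print(len(triggers))  # → 21 triggers, at counts 0, 500, ..., 10000
```

Because the trigger condition is a position comparison rather than a timer, acceleration and deceleration near the line end points leave the drop spacing unchanged.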

  8. A method of analyzing rectal surface area irradiated and rectal complications in prostate conformal radiotherapy

    International Nuclear Information System (INIS)

    Lu Yong; Song, Paul Y.; Li Shidong; Spelbring, Danny R.; Vijayakumar, Srinivasan; Haraf, Daniel J.; Chen, George T.Y.

    1995-01-01

    Purpose: To develop a method of analyzing rectal surface area irradiated and rectal complications in prostate conformal radiotherapy. Methods and Materials: Dose-surface histograms of the rectum, which state the rectal surface area irradiated to any given dose, were calculated for a group of 27 patients treated with a four-field box technique to a total (tumor minimum) dose ranging from 68 to 70 Gy. Occurrences of rectal toxicities as defined by the Radiation Therapy Oncology Group (RTOG) were recorded and examined in terms of dose and rectal surface area irradiated. For a specified end point of rectal complication, the complication probability was analyzed as a function of dose irradiated to a fixed rectal area, and as a function of area receiving a fixed dose. Lyman's model of normal tissue complication probability (NTCP) was used to fit the data. Results: The observed occurrences of rectal complications appear to depend on the rectal surface area irradiated to a given dose level. The patient distribution of each toxicity grade exhibits a maximum as a function of percentage surface area irradiated, and the maximum moves to higher values of percentage surface area as the toxicity grade increases. The dependence of the NTCP for the specified end point on dose and percentage surface area irradiated was fitted to Lyman's NTCP model with a set of parameters. The curvature of the NTCP as a function of the surface area suggests that the rectum is a parallel structured organ. Conclusions: The described method of analyzing rectal surface area irradiated yields interesting insight into understanding rectal complications in prostate conformal radiotherapy. Application of the method to a larger patient data set has the potential to facilitate the construction of a full dose-surface-complication relationship, which would be most useful in guiding clinical practice
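    For reference, the Lyman NTCP model cited in this record is commonly written in the following standard form (the general formulation, not the fitted parameter values of this study):

```latex
% Lyman normal-tissue complication probability (NTCP) model.
% TD50, m and n are parameters fitted per complication end point;
% D is dose and v the fractional volume (here, fractional surface area).
\[
  \mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\,dx,
  \qquad
  t = \frac{D - \mathrm{TD}_{50}(v)}{m\,\mathrm{TD}_{50}(v)},
  \qquad
  \mathrm{TD}_{50}(v) = \mathrm{TD}_{50}(1)\,v^{-n}.
\]
```

    Large values of the volume-effect parameter n correspond to a strong dependence on irradiated volume, consistent with the authors' observation that the rectum behaves as a parallel-structured organ.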

  9. Comparison of tissue equalization, and premium view post-processing methods in full field digital mammography

    Energy Technology Data Exchange (ETDEWEB)

    Chen Baoying, E-mail: chenby128@yahoo.co [Department of Radiology, Tangdu Hospital, Fourth Military Medical University, Xinsi Road 1, 710038 Xi' an, Shaanxi (China); Wang Wei; Huang Jin; Zhao Ming; Cui Guangbin [Department of Radiology, Tangdu Hospital, Fourth Military Medical University, Xinsi Road 1, 710038 Xi' an, Shaanxi (China); Xu Jing [Cell Engineering Research Centre and Department of Cell Biology, State Key Laboratory of Cancer Biology, Fourth Military Medical University, Changle West Road 169, 710032 Xi' an, Shaanxi (China); Guo Wei; Du Pang; Li Pei [Department of Radiology, Tangdu Hospital, Fourth Military Medical University, Xinsi Road 1, 710038 Xi' an, Shaanxi (China); Yu Jun, E-mail: pclamper@163.co [Department of Preclinical Experiment Center, Fourth Military Medical University, Changle West Road 169, 710032 Xi' an, Shaanxi (China)

    2010-10-15

    Objective: To retrospectively evaluate the diagnostic abilities of 2 post-processing methods provided by GE Senographe DS system, tissue equalization (TE) and premium view (PV) in full field digital mammography (FFDM). Materials and methods: In accordance with the ethical standards of the World Medical Association, this study was approved by regional ethics committee and signed informed patient consents were obtained. We retrospectively reviewed digital mammograms from 101 women (mean age, 47 years; range, 23-81 years) in the modes of TE and PV, respectively. Three radiologists, fully blinded to the post-processing methods, all patient clinical information and histologic results, read images by using objective image interpretation criteria for diagnostic information end points such as lesion border delineation, definition of disease extent, visualization of internal and surrounding morphologic features of the lesions. Also, overall diagnostic impression in terms of lesion conspicuity, detectability and diagnostic confidence was assessed. Between-group comparisons were performed with Wilcoxon signed rank test. Results: Readers 1, 2, and 3 demonstrated significant overall better impression of PV in 29, 27, and 24 patients, compared with that for TE in 12, 13, and 11 patients, respectively (p < 0.05). Significant (p < 0.05) better impression of PV was also demonstrated for diagnostic information end points. Importantly, PV proved to be more sensitive than TE while detecting malignant lesions in dense breast rather than benign lesions and malignancy in non-dense breast (p < 0.01). Conclusion: PV compared with TE provides marked better diagnostic information in FFDM, particularly for patients with malignancy in dense breast.

  10. Comparison of tissue equalization, and premium view post-processing methods in full field digital mammography

    International Nuclear Information System (INIS)

    Chen Baoying; Wang Wei; Huang Jin; Zhao Ming; Cui Guangbin; Xu Jing; Guo Wei; Du Pang; Li Pei; Yu Jun

    2010-01-01

    Objective: To retrospectively evaluate the diagnostic abilities of 2 post-processing methods provided by GE Senographe DS system, tissue equalization (TE) and premium view (PV) in full field digital mammography (FFDM). Materials and methods: In accordance with the ethical standards of the World Medical Association, this study was approved by regional ethics committee and signed informed patient consents were obtained. We retrospectively reviewed digital mammograms from 101 women (mean age, 47 years; range, 23-81 years) in the modes of TE and PV, respectively. Three radiologists, fully blinded to the post-processing methods, all patient clinical information and histologic results, read images by using objective image interpretation criteria for diagnostic information end points such as lesion border delineation, definition of disease extent, visualization of internal and surrounding morphologic features of the lesions. Also, overall diagnostic impression in terms of lesion conspicuity, detectability and diagnostic confidence was assessed. Between-group comparisons were performed with Wilcoxon signed rank test. Results: Readers 1, 2, and 3 demonstrated significant overall better impression of PV in 29, 27, and 24 patients, compared with that for TE in 12, 13, and 11 patients, respectively (p < 0.05). Significant (p < 0.05) better impression of PV was also demonstrated for diagnostic information end points. Importantly, PV proved to be more sensitive than TE while detecting malignant lesions in dense breast rather than benign lesions and malignancy in non-dense breast (p < 0.01). Conclusion: PV compared with TE provides marked better diagnostic information in FFDM, particularly for patients with malignancy in dense breast.

  11. Comparison of HPLC, UV spectrophotometry and potentiometric titration methods for the determination of lumefantrine in pharmaceutical products.

    Science.gov (United States)

    da Costa César, Isabela; Nogueira, Fernando Henrique Andrade; Pianetti, Gérson Antônio

    2008-09-10

    This paper describes the development and evaluation of HPLC, UV spectrophotometry and potentiometric titration methods to quantify lumefantrine in raw materials and tablets. HPLC analyses were carried out using a Symmetry C(18) column and a mobile phase composed of methanol and 0.05% trifluoroacetic acid (80:20), with a flow rate of 1.0 ml/min and UV detection at 335 nm. For the spectrophotometric analyses, methanol was used as solvent and the wavelength of 335 nm was selected for detection. Non-aqueous titration of lumefantrine was carried out using perchloric acid as titrant and glacial acetic acid/acetic anhydride as solvent; the end point was determined potentiometrically. The three methods evaluated were shown to be adequate to quantify lumefantrine in raw materials, while the HPLC and UV methods gave the most reliable results for the analysis of tablets.

  12. A rapid method for titration of ascovirus infectivity.

    Science.gov (United States)

    Han, Ningning; Chen, Zishu; Wan, Hu; Huang, Guohua; Li, Jianhong; Jin, Byung Rae

    2018-05-01

    Ascoviruses are a recently described family, and the traditional plaque assay and end-point PCR assay have been used for their titration. However, these two methods are time-consuming and inaccurate for titrating ascoviruses. In the present study, a quick method for determining the titer of ascovirus stocks was developed, based on ascovirus-induced apoptosis in infected insect cells. Briefly, cells infected with serial dilutions of virus (10^-2 to 10^-10) for 24 h were stained with trypan blue. The stained cells were counted, and the percentage of nonviable cells was calculated. The stained cell rate was compared between virus-infected and control cells. The minimum-dilution group that showed a significant difference compared with the control and the maximum-dilution group that showed no significant difference were selected, and each well of these two groups was then compared with the average stained cell rate of the control. A well was marked as positive if its stained cell rate was higher than the average stained cell rate of the control wells; otherwise, it was marked as negative. The percentage of positive wells was calculated from the number of positive wells, and the virus titer was then calculated by the method of Reed and Muench. This novel method is rapid, simple, reproducible, accurate, and less material-consuming, and it eliminates the subjectivity of other procedures for titrating ascoviruses. Copyright © 2018 Elsevier B.V. All rights reserved.
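    The Reed and Muench 50% end-point calculation referenced above interpolates between the two dilutions that bracket 50% positivity after cumulative pooling. A minimal sketch follows, using an invented plate layout (8 wells per dilution) rather than data from the paper:

```python
def reed_muench(log10_dilutions, positive, total):
    """Reed & Muench 50% end-point estimate.

    log10_dilutions is ordered from most concentrated to most dilute
    (e.g. [-2, -3, -4, -5]); positive/total are scored wells per dilution.
    Returns the log10 of the 50% end-point dilution.
    """
    negative = [t - p for p, t in zip(positive, total)]
    n = len(positive)
    # Reed-Muench pooling: positives accumulate toward the concentrated end,
    # negatives toward the dilute end.
    cum_pos = [sum(positive[i:]) for i in range(n)]
    cum_neg = [sum(negative[: i + 1]) for i in range(n)]
    pct = [cp / (cp + cn) for cp, cn in zip(cum_pos, cum_neg)]
    for i in range(n - 1):
        if pct[i] >= 0.5 > pct[i + 1]:
            pd = (pct[i] - 0.5) / (pct[i] - pct[i + 1])  # proportionate distance
            return log10_dilutions[i] + pd * (log10_dilutions[i + 1] - log10_dilutions[i])
    raise ValueError("no 50% crossing in the data")

# Hypothetical plate: positives scored by trypan blue staining
print(round(reed_muench([-2, -3, -4, -5], [8, 6, 2, 0], [8, 8, 8, 8]), 2))  # → -3.5
```

Here the 50% end point falls at the 10^-3.5 dilution, i.e. a titer of 10^3.5 50% end-point units per inoculum volume.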

  13. Method of construction spatial transition curve

    Directory of Open Access Journals (Sweden)

    S.V. Didanov

    2013-04-01

    Full Text Available Purpose. The movement of rail transport (speed of rolling stock, traffic safety, etc.) largely depends on the quality of the track. A special role here is played by the transition curve, which ensures a smooth transition from a linear to a circular section of road. The article deals with modeling a spatial transition curve based on a parabolic distribution of the curvature and torsion. This is a continuation of research conducted by the authors on the spatial modeling of curved contours. Methodology. The spatial transition curve is constructed by numerical methods for solving nonlinear integral equations, where the initial data are the coordinates of the starting and ending points of the future curve, together with the inclination of the tangent and the deviation of the curve from the tangent plane at these points. The system of equations solved by the numerical method consists of the partial derivatives of the equations with respect to the unknown parameters of the law of change of torsion and the length of the transition curve. Findings. The parametric equations of the spatial transition curve are calculated by finding the unknown coefficients of the parabolic distribution of the curvature and torsion, as well as the spatial length of the transition curve. Originality. A method for constructing the spatial transition curve is devised and, based on it, software for the geometric modeling of spatial transition curves of railway track with specified deviations of the curve from the tangent plane. Practical value. The resulting curve can be applied in any sector of the economy where it is necessary to ensure a smooth transition from a linear to a circular section of a curved spatial bypass. Examples include the transition curve in the construction of a railway line, road, pipe or profile, the flat section of the working blades of a turbine or compressor, a ship, a plane, a car, etc.

  14. Dynamic phase transitions of the Blume–Emery–Griffiths model under an oscillating external magnetic field by the path probability method

    International Nuclear Information System (INIS)

    Ertaş, Mehmet; Keskin, Mustafa

    2015-01-01

    By using the path probability method (PPM) with point distribution, we study the dynamic phase transitions (DPTs) in the Blume–Emery–Griffiths (BEG) model under an oscillating external magnetic field. The phases in the model are obtained by solving the dynamic equations for the average order parameters, and a disordered phase, an ordered phase and four mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature of the dynamic transitions as well as to obtain the DPT temperatures. The dynamic phase diagrams are presented in three different planes, which exhibit the dynamic tricritical point, double critical end point, critical end point, quadrupole point and triple point, as well as reentrant behavior, strongly depending on the values of the system parameters. We compare and discuss the dynamic phase diagrams with those obtained within Glauber-type stochastic dynamics based on mean-field theory. - Highlights: • Dynamic magnetic behavior of the Blume–Emery–Griffiths system is investigated by using the path probability method. • The time variations of the average magnetizations are studied to find the phases. • The temperature dependence of the dynamic magnetizations is investigated to obtain the dynamic phase transition points. • The dynamic phase diagrams are compared with those obtained within Glauber-type stochastic dynamics based on mean-field theory

  15. Dynamic phase transitions of the Blume–Emery–Griffiths model under an oscillating external magnetic field by the path probability method

    Energy Technology Data Exchange (ETDEWEB)

    Ertaş, Mehmet, E-mail: mehmetertas@erciyes.edu.tr; Keskin, Mustafa

    2015-03-01

    By using the path probability method (PPM) with point distribution, we study the dynamic phase transitions (DPTs) in the Blume–Emery–Griffiths (BEG) model under an oscillating external magnetic field. The phases in the model are obtained by solving the dynamic equations for the average order parameters, and a disordered phase, an ordered phase and four mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature of the dynamic transitions as well as to obtain the DPT temperatures. The dynamic phase diagrams are presented in three different planes, which exhibit the dynamic tricritical point, double critical end point, critical end point, quadrupole point and triple point, as well as reentrant behavior, strongly depending on the values of the system parameters. We compare and discuss the dynamic phase diagrams with those obtained within Glauber-type stochastic dynamics based on mean-field theory. - Highlights: • Dynamic magnetic behavior of the Blume–Emery–Griffiths system is investigated by using the path probability method. • The time variations of the average magnetizations are studied to find the phases. • The temperature dependence of the dynamic magnetizations is investigated to obtain the dynamic phase transition points. • The dynamic phase diagrams are compared with those obtained within Glauber-type stochastic dynamics based on mean-field theory.

  16. New titrimetric method for oxygen to metal ratio in uranium oxide powders

    International Nuclear Information System (INIS)

    Ray, Vinod Kumar; Brahmananda Reddy, G.; Balaji Rao, Y.; Subba Rao, Y.

    2015-01-01

    The O/U ratio is of high importance for both U3O8 and UO2 powders, for different reasons. In UO2 powder it is a guiding parameter for the sintering process, whereas for U3O8 it indicates the efficiency of the ammonium di-uranate (ADU) to U3O8 conversion process. In the present method for O/U determination, UO2 and U3O8 powders are dissolved in 4.5 M sulphuric acid with a small amount of HF by heating on a hot plate. Subsequently, on cooling, an optimized quantity of phosphoric acid is added to obtain a sharp end point. The resultant solution is titrated with standard potassium dichromate using barium diphenylamine sulphonate (BDS) as an indicator. The expanded uncertainties calculated for UO2 and U3O8 powders are ±0.004 and ±0.006 O/U ratio units, respectively, at the 95% confidence level. (author)

  17. The method of treatment cessation and recurrence rate of amblyopia.

    Science.gov (United States)

    Walsh, Leah A; Hahn, Erik K; LaRoche, G Robert

    2009-09-01

    To date, much of the research regarding amblyopia has been focused on which therapeutic modality is the most efficacious in amblyopia management. Unfortunately, there is a lack of research into which method of treatment cessation is the most appropriate once therapy has been completed. The purpose of this study is to investigate if the cessation method affects the recurrence rate of amblyopia. This study was a prospective randomized clinical trial of 20 subjects who were wearing full-time occlusion and were at the end point of their therapy. The subjects were randomized into one of two groups: abrupt cessation or therapy tapering. All subjects were followed for 3 consecutive 4-week intervals, for a total of 12 weeks, to assess the short-term recurrence rate of amblyopia. Subjects who were in the tapered group had their occlusion reduced from full-time occlusion (all waking hours minus one) to 50% of waking hours at study enrollment (i.e., from 12 hours/day to 6 hours per day); occlusion was reduced by an additional 50% at the first 4-week study visit (i.e., from 6 hours/day to 3 hours), with occlusion being discontinued completely at the week 8 visit. All subjects who were in the abrupt cessation group had their full-time occlusion discontinued completely at the start of the study (i.e., from 12 hours/day to none). Additional assessments were also conducted at week 26 and week 52 post-therapy cessation to determine the longer term amblyopia regression rate. For the purposes of this study, recurrence was defined as a 0.2 (10 letters) or more logarithm of the minimum angle of resolution (logMAR) loss of visual acuity. A recurrence of amblyopia occurred in 4 of 17 (24%; CI 9%-47%) participants completing the study by the week 52 study end point. There were 2 subjects from each treatment group who demonstrated a study protocol-defined recurrence. There was a 24% risk of amblyopia recurrence if therapy was discontinued abruptly or tapered in 8 weeks. In this small

  18. Impact of initial platelet count on baseline angiographic finding and end-points in ST-elevation myocardial infarction referred for primary percutaneous coronary intervention.

    Science.gov (United States)

    Kaplan, Sahin; Kaplan, Safiye Tuba; Kiris, Abdulkadir; Gedikli, Omer

    2014-01-01

    The baseline platelet count (BPC) in patients with acute ST elevation myocardial infarction (STEMI) may reflect the baseline angiographic finding and may also predict long-term outcomes after primary percutaneous coronary intervention (PPCI). Available data on the value of BPC in patients with STEMI treated with PPCI are still questionable. Therefore, we sought to determine the prognostic value of BPC for the baseline angiographic finding and the impact of BPC on the clinical outcomes of patients treated with PPCI. A blood sample for BPC was obtained on admission in 140 consecutive patients undergoing PPCI. Patients were divided into 2 groups: group-1 (104 patients) with TIMI flow grade 0 and group-2 (36 patients) with TIMI flow grade 1-3. Follow-up was performed at 1-9 months. Baseline demographics were comparable, but BPC was significantly higher in group-1 than in group-2 (293.7±59.8x10(9)/L vs. 237.7±50.9x10(9)/L). Measuring BPC on admission may also provide further practical and therapeutic benefits.

  19. Trained sensory perception of pork eating quality as affected by fresh and cooked pork quality attributes and end-point cooked temperature.

    Science.gov (United States)

    Moeller, S J; Miller, R K; Aldredge, T L; Logan, K E; Edwards, K K; Zerby, H N; Boggess, M; Box-Steffensmeier, J M; Stahl, C A

    2010-05-01

    The present study evaluated the individual and interactive influences of pork loin (n=679) ultimate pH (pH), intramuscular fat (IMF), Minolta L* color (L*), Warner-Bratzler shear force (WBSF), and internal cooked temperature (62.8 degrees C, 68.3 degrees C, 73.9 degrees C, and 79.4 degrees C) on trained sensory perception of palatability. Logistic regression analyses were used, fitting sensory responses as dependent variables and quality and cooked temperature as independent variables, testing quadratic and interactive effects. Incremental increases in cooked temperature reduced sensory juiciness and tenderness scores by 3.8% and 0.9%, respectively, but did not influence sensory flavor or saltiness scores. An increase of 4.9 N in WBSF, from a base of 14.7 N (lowest) to 58.8 N (greatest), was associated with a 3.7% and 1.8% reduction in sensory tenderness and juiciness scores, respectively, with predicted sensory tenderness scores reduced by 3.55 units when comparing the ends of the WBSF range. Modeled sensory responses for loins with pH of 5.40 and 5.60 had reduced tenderness, chewiness, and fat flavor ratings when compared with responses for loins with pH of 5.80 to 6.40, the range indicative of optimal sensory response. Loin IMF and L* were significant model effects; however, their influence on sensory attributes was small, with predicted mean sensory responses measurably improved only when comparing 6% and 1% IMF, and L* values of 46.9 (dark) compared with 65.0 (pale). Tenderness and juiciness scores were related to a greater extent to loin WBSF and pH, and to a lesser extent to cooked temperature, IMF and L*. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  20. Determination of trace amounts of thorium and lanthanides by successive titrations using semi-xylenol orange with spectrophotometric end-point indication

    International Nuclear Information System (INIS)

    Hafez, M.A.E.; Abdallah, A.M.A.; El-Gany, N.E.A.

    1990-01-01

    The precision and accuracy attainable in successive titrations of Th4+ and either La3+, Nd3+ or Gd3+ with a 0.001 M solution of disodium ethylenediaminetetraacetate, using Semi-xylenol Orange (SXO) as a metallochromic indicator, were studied. Thorium(IV) was titrated at pH 2; the pH was then adjusted to 5.5-5.9 by adding hexamethylenetetramine buffer, and La3+ (or Nd3+ or Gd3+) was titrated. A comparison of the indicators SXO and Xylenol Orange for successive titrations of Th4+ and either La3+, Nd3+ or Gd3+ was carried out. (author)

  1. Pharmacometric Analysis of the Relationship Between Absolute Lymphocyte Count and Expanded Disability Status Scale and Relapse Rate, Efficacy End Points, in Multiple Sclerosis Trials.

    Science.gov (United States)

    Novakovic, A M; Thorsted, A; Schindler, E; Jönsson, S; Munafo, A; Karlsson, M O

    2018-05-10

    The aim of this work was to assess the relationship between the absolute lymphocyte count (ALC) and two efficacy end points, disability (as measured by the Expanded Disability Status Scale [EDSS]) and the occurrence of relapses, in patients with relapsing-remitting multiple sclerosis. Data for ALC, EDSS, and relapse rate were available from 1319 patients receiving placebo and/or cladribine tablets. Pharmacodynamic models were developed to characterize the time course of the end points. ALC-related measures were then evaluated as predictors of the efficacy end points. EDSS data were best fitted by a model in which the logit-linear disease progression is affected by the dynamics of the ALC change from baseline. Relapse rate data were best described by a Weibull hazard function, and the ALC change from baseline was also found to be a significant predictor of time to relapse. The presented models show that once cladribine-exposure-driven, ALC-derived measures are included in the model, the need for drug-effect components becomes less important (EDSS) or disappears (relapse rate). This simplifies the models and, in principle, makes them mechanism specific rather than drug specific. Having a reliable mechanism-specific model would allow leveraging historical data across compounds, to support decision making in drug development and possibly shorten the time to market. © 2018, The American College of Clinical Pharmacology.
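    The Weibull hazard used for the time-to-relapse model has the standard form h(t) = (k/λ)(t/λ)^(k−1); scaling it by an exponential covariate term is the usual proportional-hazards way to let an ALC-derived measure modulate relapse risk. The parameter values below are invented for illustration and are not the published model estimates:

```python
import math

def weibull_hazard(t, lam, k):
    """Standard Weibull hazard: h(t) = (k/lam) * (t/lam)**(k - 1)."""
    return (k / lam) * (t / lam) ** (k - 1)

def relapse_hazard(t, lam, k, beta, alc_change):
    """Hypothetical proportional-hazards sketch: baseline Weibull hazard
    scaled by an ALC change-from-baseline covariate (all values invented)."""
    return weibull_hazard(t, lam, k) * math.exp(beta * alc_change)

# Illustrative numbers only: scale lam = 2, shape k = 1.5, at t = 1
h0 = weibull_hazard(1.0, 2.0, 1.5)
# With beta = 1 and a 40% ALC drop (alc_change = -0.4), the hazard shrinks
h1 = relapse_hazard(1.0, 2.0, 1.5, 1.0, -0.4)
print(h1 < h0)  # → True
```

With k > 1 the hazard rises over time; with k = 1 it reduces to a constant-hazard (exponential) model, which is one reason the Weibull form is a common first choice for time-to-event end points.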

  2. Two-phase strategy of neural control for planar reaching movements: I. XY coordination variability and its relation to end-point variability.

    Science.gov (United States)

    Rand, Miya K; Shimansky, Yury P

    2013-03-01

    A quantitative model of optimal transport-aperture coordination (TAC) during reach-to-grasp movements was developed in our previous studies. Using that model for data analysis made it possible, for the first time, to examine the phase dependence of the precision demand specified by the CNS for neurocomputational information processing during an ongoing movement. It was shown that the CNS utilizes a two-phase strategy for movement control. That strategy consists of reducing the precision demand for neural computations during the initial phase, which decreases the cost of information processing at the expense of a lower extent of control optimality. To successfully grasp the target object, the CNS increases the precision demand during the final phase, resulting in a higher extent of control optimality. In the present study, we generalized the model of optimal TAC to a model of optimal coordination between the X and Y components of point-to-point planar movements (XYC). We investigated whether the CNS uses the two-phase control strategy for controlling those movements, and how the strategy parameters depend on the prescribed movement speed, movement amplitude and the size of the target area. The results indeed revealed a substantial similarity between the CNS's regulation of TAC and XYC. First, the variability of XYC within individual trials was minimal, meaning that execution noise during the movement was insignificant. Second, the inter-trial variability of XYC was considerable during the majority of the movement time, meaning that the precision demand for information processing was lowered, which is characteristic of the initial phase. That variability significantly decreased, indicating a higher extent of control optimality, during the shorter final movement phase. The final phase was longest (shortest) under the most (least) challenging combination of speed and accuracy requirements, fully consistent with the concept of the two-phase control strategy. The paper further discusses the relationship between motor variability and XYC variability.

  3. Measurement of activation yields for platinum group elements using Bremsstrahlung radiation with end-point energies in the range 11-14 MeV

    Energy Technology Data Exchange (ETDEWEB)

    Tickner, James, E-mail: james.tickner@csiro.a [CSIRO Process Science and Engineering, PMB 5, Menai, NSW 2234 (Australia); Bencardino, Raffaele; Roach, Greg [CSIRO Process Science and Engineering, PMB 5, Menai, NSW 2234 (Australia)

    2010-01-15

    Activation yields have been measured for (gamma,n) reactions of the elements Ru, Rh, Pd, Ir and Pt. Metallic foils of natural isotopic composition were irradiated using Bremsstrahlung radiation produced from an electron linear accelerator operated with electron beam energies in the range 11-14 MeV. Activation products, including both unstable ground states and metastates were measured using a high-purity germanium detector. Cross-sections were estimated from the yield data by assuming a simple two-parameter model for the shape of the cross-section with energy.

  4. Measurement of activation yields for platinum group elements using Bremsstrahlung radiation with end-point energies in the range 11-14 MeV

    International Nuclear Information System (INIS)

    Tickner, James; Bencardino, Raffaele; Roach, Greg

    2010-01-01

    Activation yields have been measured for (γ,n) reactions of the elements Ru, Rh, Pd, Ir and Pt. Metallic foils of natural isotopic composition were irradiated using Bremsstrahlung radiation produced from an electron linear accelerator operated with electron beam energies in the range 11-14 MeV. Activation products, including both unstable ground states and metastable states, were measured using a high-purity germanium detector. Cross-sections were estimated from the yield data by assuming a simple two-parameter model for the energy dependence of the cross-section.

  5. Late outcome of a controlled trial of enalapril treatment in progressive chronic renal failure. Hard end-points and influence of proteinuria

    DEFF Research Database (Denmark)

    Kamper, A L; Strandgaard, S; Leyssac, P P

    1995-01-01

    An earlier controlled trial showed that over an average of 26 months, enalapril slowed the progression of chronic renal failure. Following completion of the trial, the patients continued to receive antihypertensive treatment according to ordinary clinical criteria. All but four patients...... end-stage renal failure (ESRF) (P renal outcome groups. In all patients, baseline Calb and CIgG were negatively correlated with the rate of change in GFR during the controlled trial (r = -0.37, P .... In the original enalapril group, 12 of the 35 patients (34%) were alive without renal replacement therapy versus five of the 35 patients (14%) in the control group. This difference of 20% in favour of having been in the enalapril group in the original trial was significant (P = 0.05; 95% confidence limits 0...

  6. Real-time viability and apoptosis kinetic detection method of 3D multicellular tumor spheroids using the Celigo Image Cytometer.

    Science.gov (United States)

    Kessel, Sarah; Cribbes, Scott; Bonasu, Surekha; Rice, William; Qiu, Jean; Chan, Leo Li-Ying

    2017-09-01

    The development of three-dimensional (3D) multicellular tumor spheroid models for cancer drug discovery research has increased in recent years. The use of 3D tumor spheroid models may be more representative of the complex in vivo tumor microenvironment than two-dimensional (2D) assays. Currently, the viability of 3D multicellular tumor spheroids is commonly measured on standard plate-readers using metabolic reagents such as CellTiter-Glo® for end point analysis. Alternatively, high content image cytometers have been used to measure drug effects on spheroid size and viability. Previously, we demonstrated a novel end point drug screening method for 3D multicellular tumor spheroids using the Celigo Image Cytometer. To better characterize cancer drug effects, it is important to also measure the kinetic cytotoxic and apoptotic effects on 3D multicellular tumor spheroids. In this work, we demonstrate the use of PI and caspase 3/7 stains to measure viability and apoptosis of 3D multicellular tumor spheroids in real time. The method was first validated by staining different types of tumor spheroids with PI and caspase 3/7 and monitoring the fluorescence intensities for 16 and 21 days. Next, PI-stained and nonstained control tumor spheroids were digested into single-cell suspension to directly measure viability in a 2D assay and determine the potential toxicity of PI. Finally, extensive data analysis was performed to correlate the time-dependent PI and caspase 3/7 fluorescence intensities with spheroid size and necrotic core formation, in order to determine an optimal starting time point for cancer drug testing. The ability to measure viability and apoptosis in real time is highly important for developing a proper 3D model for screening tumor spheroids, as it allows researchers to determine time-dependent drug effects that are usually not captured by end point assays. This would improve the current tumor spheroid analysis method to potentially better

  7. Error assessment in recombinant baculovirus titration: evaluation of different methods.

    Science.gov (United States)

    Roldão, António; Oliveira, Rui; Carrondo, Manuel J T; Alves, Paula M

    2009-07-01

    The success of the baculovirus/insect cell system in heterologous protein expression depends on the robustness and efficiency of the production workflow. It is essential that process parameters are controlled and include as little variability as possible. The multiplicity of infection (MOI) is the most critical factor, since irreproducible MOIs caused by inaccurate estimation of viral titers hinder batch consistency and process optimization. This lack of accuracy is related to intrinsic characteristics of the method, such as the inability to distinguish between infectious and non-infectious baculovirus. In this study, several methods for baculovirus titration were compared. The most critical issues identified were the incubation time and the cell concentration at the time of infection. These variables strongly influence the accuracy of titers and must be defined for optimal performance of the titration method. Although the standard errors of the methods varied significantly (7-36%), titers were within the same order of magnitude; thus, viral titers can be considered independent of the method of titration. A cost analysis of the baculovirus titration methods used in this study showed that the alamarBlue, real-time Q-PCR and plaque assays were the most expensive techniques; the remaining methods cost on average 75% less. Based on the cost, time and error analyses undertaken in this study, the end-point dilution assay, the microculture tetrazolium assay and the flow cytometric assay were found to combine these three factors best. Nevertheless, it is always recommended to confirm the accuracy of the titration, either by comparison with a well-characterized baculovirus reference stock or by titrating with two different methods and verifying the variability of the results.
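The end-point dilution assay favored above conventionally reports a TCID50 titer; one standard way to compute it is the Reed-Muench 50% end-point. A minimal sketch on hypothetical 8-well plate data (the counts below are made up for illustration):

```python
def tcid50_reed_muench(log10_dil, infected, total):
    """Reed-Muench 50% end-point from end-point dilution data.
    Rows are ordered from most to least concentrated dilution."""
    uninfected = [t - i for i, t in zip(infected, total)]
    n = len(infected)
    # cumulative infected sums toward dilute rows, uninfected toward concentrated
    cum_inf = [sum(infected[i:]) for i in range(n)]
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(n)]
    pct = [100.0 * a / (a + b) for a, b in zip(cum_inf, cum_uninf)]
    for i in range(n - 1):
        if pct[i] >= 50.0 > pct[i + 1]:   # rows bracketing the 50% point
            prop = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            return log10_dil[i] + prop * (log10_dil[i + 1] - log10_dil[i])
    raise ValueError("50% end point not bracketed by the data")

# hypothetical data: dilutions 10^-3 .. 10^-6, 8 wells per dilution
log_tcid50 = tcid50_reed_muench([-3, -4, -5, -6], [8, 6, 2, 0], [8, 8, 8, 8])
# log_tcid50 = -4.5, i.e. a titer of 10^4.5 TCID50 per inoculum volume
```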

  8. Reporting of embryo transfer methods in IVF research: a cross-sectional study.

    Science.gov (United States)

    Gambadauro, Pietro; Navaratnarajah, Ramesan

    2015-02-01

    The reporting of embryo transfer methods in IVF research was assessed through a cross-sectional analysis of randomized controlled trials (RCTs) published between 2010 and 2011. A systematic search identified 325 abstracts; 122 RCTs were included in the study. Embryo transfer methods were described in 42 out of 122 articles (34%). Catheters (32/42 [76%]) or ultrasound guidance (31/42 [74%]) were most frequently mentioned. Performer 'blinding' (12%) or technique standardization (7%) were seldom reported. The description of embryo transfer methods was significantly more common in trials published by journals with lower impact factor (less than 3, 39.6%; 3 or greater, 21.5%; P = 0.037). Embryo transfer methods were reported more often in trials with pregnancy as the main end-point (33% versus 16%) or with positive outcomes (37.8% versus 25.0%), albeit not significantly. Multivariate logistic regression confirmed that RCTs published in higher impact factor journals are less likely to describe embryo transfer methods (OR 0.371; 95% CI 0.143 to 0.964). Registered trials, trials conducted in an academic setting, multi-centric studies or full-length articles were not positively associated with embryo transfer methods reporting rate. Recent reports of randomized IVF trials rarely describe embryo transfer methods. The under-reporting of research methods might compromise reproducibility and suitability for meta-analysis. Copyright © 2014 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
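For readers unfamiliar with how the quoted odds ratio and its confidence interval relate, the Wald interval is exp(β ± 1.96·SE). The sketch below back-calculates the standard error from the reported interval (≈0.487, an inferred value, not stated in the abstract) and reproduces the quoted figures:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression coefficient."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# coefficient implied by the reported OR of 0.371; the SE of ~0.487 is
# back-calculated from the reported CI and is an assumption
beta = math.log(0.371)
or_, lo, hi = odds_ratio_ci(beta, 0.487)
# or_ = 0.371, lo and hi close to the reported 0.143 and 0.964
```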

  9. A New Method of Constructing a Drug-Polymer Temperature-Composition Phase Diagram Using Hot-Melt Extrusion.

    Science.gov (United States)

    Tian, Yiwei; Jones, David S; Donnelly, Conor; Brannigan, Timothy; Li, Shu; Andrews, Gavin P

    2018-04-02

    Current experimental methodologies used to determine the thermodynamic solubility of an API within a polymer typically involve establishing the dissolution/melting end point of the crystalline API within a physical mixture, or measuring the glass transition temperature of a demixed amorphous solid dispersion. The measurable "equilibrium" points for solubility normally lie well above the glass transition temperature of the system, meaning extrapolation is required to predict drug solubility at pharmaceutically relevant temperatures. In this manuscript, we argue that the presence of highly viscous polymers in these systems yields experimental data that under- or overestimates the true thermodynamic solubility. In previous work, we demonstrated the effects of experimental conditions on measured and predicted thermodynamic solubility points. In light of current understanding, we have developed a new method to limit the error associated with viscosity effects, for application in small-scale hot-melt extrusion (HME). In this study, HME was used to generate an intermediate (multiphase) system containing crystalline drug, amorphous drug/polymer-rich regions, and drug molecularly dispersed in the polymer. An extended annealing method was used together with high-speed differential scanning calorimetry to accurately determine the upper and lower boundaries of the thermodynamic solubility of a model drug-polymer system (felodipine and Soluplus). Compared to our previously published data, the current results confirmed our hypothesis that predicting the liquid-solid curve from dynamic determination of the dissolution/melting end point of the crystalline API in a physical mixture underestimates the thermodynamic solubility point. With the proposed method, we were able to experimentally measure the upper and lower boundaries of the liquid-solid curve for the model system.

  10. Efficacy and Safety of Two Methadone Titration Methods for the Treatment of Cancer-Related Pain: The EQUIMETH2 Trial (Methadone for Cancer-Related Pain).

    Science.gov (United States)

    Poulain, Philippe; Berleur, Marie-Pierre; Lefki, Shimsi; Lefebvre, Danièle; Chvetzoff, Gisèle; Serra, Eric; Tremellat, Fibra; Derniaux, Alain; Filbet, Marilène

    2016-11-01

    In the European Association for Palliative Care recommendations for cancer pain management, there was no consensus regarding the indications, titration, or monitoring of methadone. This national, randomized, multicenter trial aimed to compare two methadone titration methods (stop-and-go vs. progressive) in patients with cancer-related pain who were inadequately relieved by or intolerant to Level 3 opioids. The primary end point was the rate of success/failure at Day 4, defined as pain relief (reduction of at least two points on the visual scale and a pain score methods were considered equally easy to perform by nearly 60% of the clinicians. Methadone is an effective and sustainable second-line alternative opioid for the treatment of cancer-related pain. The methods of titration are comparable in terms of efficacy, safety, and ease of use. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  11. Ammonium vanadate titrimetric method for determination of micro amount uranium in rock and soil by using vanadate-gold indicator

    International Nuclear Information System (INIS)

    Li Yucheng.

    1990-01-01

    A new vanadate-gold indicator was successfully applied to the ammonium vanadate titrimetric method for the determination of micro amounts of uranium in rock and soil. Uranium in 0.1 g of sample is reduced by titanium trichloride in phosphoric acid. Excess Ti(III) and other low-valent ions are oxidized by sodium nitrite, while the uranium(IV)-phosphate complex is not oxidized. Excess nitrite is destroyed by urea. When the concentration of phosphoric acid is 22-24%, two drops of the vanadate-gold indicator are added, uranium(IV) is titrated with a standardized ammonium vanadate solution (T = 0.02-20gU/ml), and the end-point is judged by the appearance of a violet-red color

  12. Standardization of radionuclides 45Ca, 137Cs, 204Tl by tracing method using 4πβ-γ coincidence system

    International Nuclear Information System (INIS)

    Ponge-Ferreira, Claudia Regina Ponte

    2005-01-01

    The procedure followed for the standardization of 45 Ca, 137 Cs and 204 Tl is described. The activity measurements were carried out in a 4πβ-γ coincidence system using the tracing method. The β-γ emitting tracer nuclides chosen were 60 Co for 45 Ca, and 134 Cs for 137 Cs and 204 Tl, because their end-point beta-ray energies are close to those of the respective beta emitters. The radioactive sources were prepared using two different techniques: the drop technique and the solution technique. In the drop technique, the sources were prepared by dropping both solutions (tracer and pure beta emitter) directly on the substrate. In the solution technique, the tracer and pure beta emitter solutions were mixed before making the radioactive sources. The activities obtained with the two techniques were compared and the values agree within the experimental uncertainties. (author)

  13. A standardized method for beam design in neutron capture therapy

    International Nuclear Information System (INIS)

    Storr, G.J.; Harrington, B.V.

    1993-01-01

    A desirable end point for a given beam design for Neutron Capture Therapy (NCT) is a quantitative description of tumour control probability and normal tissue damage. Achieving this goal will ultimately rely on data from NCT human clinical trials. Traditional descriptions of beam designs have used a variety of assessment methods to quantify proposed or installed beam designs. These methods include measurement and calculation of "free field" parameters, such as neutron and gamma flux intensities and energy spectra, and figures-of-merit in tissue-equivalent phantoms. The authors propose here a standardized method for beam design in NCT, which would allow all proposed and existing NCT beam facilities to be compared on an equal footing. The traditional approach to obtaining a quantitative description of tumour control probability and normal tissue damage in NCT research may be described by the following path: beam design → dosimetry → macroscopic effects → microscopic effects. Methods exist that allow neutron and gamma fluxes and their energy dependence to be calculated and measured to good accuracy. Using this information and intermediate dosimetric quantities, such as kerma factors for neutrons and gammas, the macroscopic effect (absorbed dose) in geometries of tissue or tissue-equivalent materials can be calculated. Beyond this stage, the NCT data become sparser and in some areas ambiguous. Uncertainties in the Relative Biological Effectiveness (RBE) of some NCT dose components mean that beam designs based on assumptions considered valid a few years ago may have to be reassessed. A standard method is therefore useful for comparing different NCT facilities

  14. IMPLEMENTING NDN USING SDN: A REVIEW ON METHODS AND APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Shiva Rowshanrad

    2016-11-01

    Full Text Available In recent years, many claims have been made about the limitations of today's network architecture: its lack of flexibility and its inability to respond to ongoing changes and increasing user demands. In this regard, new network architectures have been proposed. Software Defined Networking (SDN) is one of these new architectures; it centralizes the control of the network by separating the control plane from the data plane. This separation leads to intelligence, flexibility and easier control in computer networks. One of the advantages of this framework is the ability to implement and test new protocols and architectures in actual networks without any concern about interruption. Named Data Networking (NDN) is another paradigm for future network architecture. With NDN, the network becomes aware of the content that it is providing, rather than just transferring it among end-points. NDN has attracted researchers' attention and is regarded as a potential future of networking and the internet. Providing NDN functionality over SDN is an important requirement to enable innovation and the optimization of network resources. In this paper, we first describe SDN and NDN, and then introduce methods for implementing NDN using SDN. We also point out the advantages and applications of implementing NDN over SDN.

  15. Perturbation methods

    CERN Document Server

    Nayfeh, Ali H

    2008-01-01

    1. Introduction 1 2. Straightforward Expansions and Sources of Nonuniformity 23 3. The Method of Strained Coordinates 56 4. The Methods of Matched and Composite Asymptotic Expansions 110 5. Variation of Parameters and Methods of Averaging 159 6. The Method of Multiple Scales 228 7. Asymptotic Solutions of Linear Equations 308 References and Author Index 387 Subject Index 417

  16. Distillation methods

    International Nuclear Information System (INIS)

    Konecny, C.

    1975-01-01

    Two main methods of separation by distillation are given and evaluated, namely evaporation and distillation in a carrier gas flow. Two basic types of apparatus are described to illustrate the methods used. The use of the distillation method in radiochemistry is documented by a number of examples of the separation of elements in the elemental state, and of volatile halogenides and oxides. Tables give a survey of the distillation methods used for the separation of individual elements and the conditions under which separation takes place. The suitability of distillation methods in radiochemistry is discussed with regard to other separation methods. (L.K.)

  17. Método potenciométrico para determinação de cobre em cachaça Potentiometric method for copper determination in sugarcane spirit

    Directory of Open Access Journals (Sweden)

    Ivo L. Küchler

    1999-06-01

    Full Text Available 'Cachaça' is the Brazilian name for the spirit obtained from sugarcane. According to Brazilian regulations, it may be sold raw or with added sugar and may contain up to 5 mg/L of copper. Copper in cachaça was determined by titration with EDTA, using a homemade copper membrane electrode for end-point detection. A pooled standard deviation of 0.057 mg/L was found, and there was no significant difference between the results obtained by the potentiometric method and by flame atomic absorption spectrometry with standard addition. Among the 21 cachaça samples from 16 different brands analyzed, three exceeded the legal copper limit. Owing to its accuracy, precision and speed, the potentiometric method may be employed advantageously in routine analysis, especially when low cost is a major concern.
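The end-point of a potentiometric titration like this one is often located by the second-derivative method (also used in the records above). A minimal sketch on an idealized, hypothetical titration curve; the sigmoid data below are synthetic, not from the paper:

```python
import math

def second_derivative_endpoint(v, e):
    """End-point of a potentiometric titration via the classic
    second-derivative method: interpolate d2E/dV2 to zero."""
    # first derivative at interval midpoints
    v1 = [(v[i] + v[i + 1]) / 2 for i in range(len(v) - 1)]
    d1 = [(e[i + 1] - e[i]) / (v[i + 1] - v[i]) for i in range(len(v) - 1)]
    # second derivative at midpoints of the first-derivative grid
    v2 = [(v1[i] + v1[i + 1]) / 2 for i in range(len(v1) - 1)]
    d2 = [(d1[i + 1] - d1[i]) / (v1[i + 1] - v1[i]) for i in range(len(v1) - 1)]
    for i in range(len(d2) - 1):
        if d2[i] > 0 >= d2[i + 1]:       # sign change brackets the end point
            frac = d2[i] / (d2[i] - d2[i + 1])
            return v2[i] + frac * (v2[i + 1] - v2[i])
    raise ValueError("no inflection found in the titration data")

# hypothetical curve: an ideal sigmoid with its inflection at 10.00 mL
vols = [9.0 + 0.2 * k for k in range(11)]                  # 9.0 .. 11.0 mL
emfs = [200 * math.tanh((x - 10.0) / 0.5) for x in vols]   # electrode EMF, mV
v_ep = second_derivative_endpoint(vols, emfs)              # ~ 10.00 mL
```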

    Galerkin's methods

    African Journals Online (AJOL)

    user

    The assumed deflection shapes used in the approximate methods such as in the Galerkin's method were normally ... to direct compressive forces Nx, was derived by Navier. [3]. ..... tend to give higher frequency and stiffness, as well as.

  19. An efficient, maintenance free and approved method for spectroscopic control and monitoring of blend uniformity: The moving F-test.

    Science.gov (United States)

    Besseling, Rut; Damen, Michiel; Tran, Thanh; Nguyen, Thanh; van den Dries, Kaspar; Oostra, Wim; Gerich, Ad

    2015-10-10

    Dry powder mixing is a widespread unit operation in the pharmaceutical industry. With the advent of in-line Near Infrared (NIR) spectroscopy and Quality by Design principles, application of Process Analytical Technology to monitor Blend Uniformity (BU) is taking a more prominent role. Yet routine use of NIR for monitoring, let alone control, of blending processes is not common in the industry, despite the improved process understanding and (cost) efficiency it may offer. Method maintenance, robustness and translation to regulatory requirements have been important barriers to implementing such methods. This paper presents a qualitative NIR-BU method offering a convenient and compliant approach to applying BU control in routine operation and process understanding, without extensive calibration and method maintenance requirements. The method employs a moving F-test to detect the steady state of measured spectral variances and hence the end point of mixing. The fundamentals and performance characteristics of the method are presented first, followed by a description of the link to regulatory BU criteria, the method's sensitivity and practical considerations. Applications in upscaling, technology transfer and commercial production are described, along with evaluation of the method's performance by comparison with results from quantitative calibration models. A full application, in which end-point detection via the F-test controls the blending process of a low-dose product, was successfully filed in Europe and Australia, implemented in commercial production and used routinely for about five years and more than 100 batches. Copyright © 2015 Elsevier B.V. All rights reserved.
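The paper's exact F-test formulation is not given in the abstract; the sketch below shows one plausible variant, in which the ratio of variances in two adjacent moving windows must stay below a critical value for several consecutive spectra before the blend is declared uniform. Window size, threshold and the synthetic trace are all illustrative assumptions.

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def moving_f_endpoint(signal, window=5, f_crit=3.0, runs=3):
    """Blend end point: the F-ratio of the variances in two adjacent
    moving windows must stay below f_crit for `runs` consecutive steps."""
    below = 0
    for t in range(2 * window, len(signal) + 1):
        v_prev = variance(signal[t - 2 * window:t - window])
        v_curr = variance(signal[t - window:t])
        f = max(v_prev, v_curr) / max(min(v_prev, v_curr), 1e-12)
        below = below + 1 if f < f_crit else 0
        if below >= runs:
            return t - 1     # index of the spectrum at which mixing is done
    return None              # steady state never reached

# synthetic 'spectral variability' trace: decaying transient, then steady noise
signal = [1e6 * (0.5 ** k) * (-1) ** k for k in range(15)] \
       + [0.1 * (-1) ** k for k in range(15, 45)]
t_end = moving_f_endpoint(signal)
```

In practice the critical value would come from the F distribution at the chosen window sizes and confidence level rather than a fixed constant.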

  20. Implementation of 'Davies and Gray/NBL Method' for potentiometric titration of uranium in the Safeguards Laboratory of CNEN by the use of a DL-67 mettler titrator

    International Nuclear Information System (INIS)

    Araujo, Radier Mario Silveira de; Barros, Pedro Dionisio de

    2005-01-01

    To meet the requirements of the Brazilian State System of Accounting for and Control of Nuclear Materials (SSAC), the Safeguards Laboratory of CNEN (LASAL) has, since 1984, been applying the 'Davies and Gray/NBL' method for the potentiometric determination of total uranium concentration in uranium samples taken during safeguards inspections at nuclear facilities, using a Radiometer ETS 822 titrator. In order to improve the analytical capability and the procedures related to the titration methodology, the same method was also implemented on a METTLER DL-67 titrator. This equipment is microprocessor-controlled and can be connected to additional devices such as printers and analytical balances. It also provides accurate and reproducible results for end-point titrations, giving analytical performance in accordance with current international safeguards requirements. The implementation of the method on this equipment included the addition of analytical data as well as the optimization of the equipment parameters for uranium determination. Parameters such as the predispensing volume, the titrant data and the end-point value were studied. Some uranium samples (solids and solutions) were used during the initial tests with the titrator. A solution of pure uranyl nitrate was used as the reference sample for this paper. From this, aliquots were analyzed on both the Radiometer ETS-822 and the METTLER DL-67. The results from each titrator were compared with the reference value of the sample. The comparison showed that the results from the METTLER DL-67 meet the precision and accuracy requirements for this kind of analysis, and led to the conclusion that the performance of this titrator is adequate for the determination of total uranium content in samples of nuclear materials for safeguards purposes. (author)

  1. Mining Method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Shik; Lee, Kyung Woon; Kim, Oak Hwan; Kim, Dae Kyung [Korea Institute of Geology Mining and Materials, Taejon (Korea, Republic of)

    1996-12-01

    The shrinking coal market has been forcing the coal industry to make exceptional rationalization and restructuring efforts since the end of the eighties. To the competition from crude oil and natural gas has been added the growing pressure of rising wages and rising production costs as the workings get deeper. To improve the competitive position of the coal mines against oil and gas through cost reduction, studies to improve the mining system have been carried out. To find the fields most in need of improvement, the technologies used in Tae Bak Colliery, selected as one of the long-running mines, were investigated and analyzed. The mining method appeared to be the field where improvement could do most to reduce production cost. The present method, the so-called inseam roadway caving method, is used to extract the steep and thick seam; however, it has several drawbacks. To solve these problems, two mining methods are suggested, for the long term and the short term respectively. The inseam roadway caving method with long-hole blasting is a variant of the present inseam roadway caving method, modified by replacing the timber sets with steel arch sets and the shovel loaders with chain conveyors; long-hole blasting is introduced to promote caving. The pillar caving method with chock supports uses chock supports set in the cross-cut from the hanging wall to the footwall. Two single chain conveyors are needed: one installed in front of the chock supports to clear coal from the cutting face, and the other installed behind the supports to transport caved coal from behind. This method is superior to the previous one in terms of safety from water inrushes, production rate and productivity; its only drawback is that it needs more investment. (author). 14 tabs., 34 figs.

  2. Projection Methods

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1999-01-01

    Trying to solve a DAE problem of high index with more traditional methods often causes instability in some of the variables, and finally leads to breakdown of convergence and integration of the solution. This is nicely shown in [ESF98, p. 152 ff.]. This chapter will introduce projection...... methods as a way of handling these special problems. It is assumed that we have methods for solving normal ODE systems and index-1 systems....

  3. Discipline methods

    OpenAIRE

    Maria Kikila; Ioannis Koutelekos

    2012-01-01

    Child discipline is one of the most important elements of successful parenting. Discipline is defined as the process that helps children learn appropriate behaviors and make good choices. Aim: The aim of the present study was to review the literature on discipline methods. The method of this study comprised a bibliographic search of both the review and the research literature on discipline methods, mainly in the PubMed database. Results: In the literature it is ci...

  4. Maintenance methods

    International Nuclear Information System (INIS)

    Sanchis, H.; Aucher, P.

    1990-01-01

    The maintenance method applied at La Hague is summarized. The method was developed in order to solve problems relating to: the different specialist fields, the need for homogeneity in the maintenance work, the diversity of equipment, and the increase in the materials used at La Hague's new facilities. The aim of the method is to create a know-how formalism, to facilitate maintenance, to ensure the smooth running of operations and to improve the estimation of maintenance costs. One of the method's difficulties is demonstrating the profitability of maintenance operations [fr

  5. Demographic Models for Projecting Population and Migration: Methods for African Historical Analysis

    Directory of Open Access Journals (Sweden)

    Patrick Manning

    2015-08-01

    Full Text Available This study presents methods for projecting population and migration over time in cases where empirical data are missing or undependable. The methods are useful when the researcher has details of population size and structure for a limited period of time (most obviously, the end point), with scattered evidence for other times. They enable estimation of population size, including its structure in age, sex, and status, either forward or backward in time; the program keeps track of all the details. The calculated data can be reported or sampled and compared, at various times and places, with empirical findings or with expected values based on other estimation procedures. The application of these general methods developed here is the projection of African populations backward in time from 1950, since 1950 is the first date for which consistently strong demographic estimates are available for national-level populations all over the African continent. The models give particular attention to migration through enslavement, which was highly important in Africa from 1650 to 1900. Details include a sensitivity analysis showing the relative significance of input variables, and techniques for calibrating the various dimensions of the projection with each other. These same methods may be applicable to quite different historical situations, as long as the data conform in structure to those considered here.
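At its simplest, back-projection inverts one year of natural growth and restores that year's removals. The sketch below reduces the study's much richer model (age, sex, and status structure) to a single scalar population, and every number in it is an illustrative assumption, not a historical estimate.

```python
def back_project(pop_final, n_years, r, removals):
    """Back-project a population: if the forward step is
    P(t) = P(t-1) * (1 + r) - removals, then the backward step is
    P(t-1) = (P(t) + removals) / (1 + r)."""
    trajectory = [pop_final]
    for _ in range(n_years):
        trajectory.append((trajectory[-1] + removals) / (1 + r))
    trajectory.reverse()            # earliest year first
    return trajectory

# illustrative only: back-project 300 years from a 1950 population of
# 10 million, with 0.3% natural growth and 5,000 captives removed per year
traj = back_project(10_000_000, 300, 0.003, 5_000)
```

Running the recovered earliest population forward through the same growth and removal rates reproduces the 1950 end point, which is the consistency check the study applies across times and places.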

  6. Development of fitting methods using geometric progression formulae of gamma-ray buildup factors

    International Nuclear Information System (INIS)

    Yoshida, Yoshitaka

    2006-01-01

    Gamma-ray buildup factors are represented by approximation formulae to speed up calculations using the point attenuation kernel method. The fitting parameters obtained with the GP formula and Taylor's formula are compiled in ANSI/ANS-6.4.3 and are available without any limitation. The GP formula offers high accuracy but requires a high-level fitting technique. The GP formula was therefore divided into a curved-line part and a part representing the base values, and this decomposition was used to develop the a-fitting method and the X k-fitting method. As a result, this methodology showed that (1) when the fitting ranges were identical, the standard deviation did not change when the unit penetration depth was varied; (2) even with fitting up to 300 mfp, the average standard deviation over 26 materials was 2.9% and acceptable GP parameters were extracted; (3) when the same end points of the fitting were selected and the starting points of the fitting coincided with the unit penetration depth, the deviation became smaller with increasing unit penetration depth; and (4) even with the deviation adjusted to the positive side from 0.5 mfp to 300 mfp, the average standard deviation over 26 materials was 5.6%, which is an acceptable value. However, the GP parameters obtained by this methodology cannot be used for direct interpolation in gamma-ray energy or material. (author)
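For reference, the GP form compiled in ANSI/ANS-6.4.3 is B(x) = 1 + (b-1)(K^x - 1)/(K - 1) for K ≠ 1 (and B = 1 + (b-1)x when K = 1), with K(x) = c·x^a + d·[tanh(x/X_k - 2) - tanh(-2)]/[1 - tanh(-2)]. A sketch with made-up coefficients; real values come from the standard's tables for a given material and photon energy:

```python
import math

def gp_buildup(x, b, c, a, Xk, d):
    """Geometric-progression (GP) buildup factor of ANSI/ANS-6.4.3.
    x: penetration depth in mean free paths; b, c, a, Xk, d: fitted
    GP coefficients for a given material and photon energy."""
    K = c * x ** a + d * (math.tanh(x / Xk - 2.0) - math.tanh(-2.0)) \
                       / (1.0 - math.tanh(-2.0))
    if abs(K - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x      # limiting form as K -> 1
    return 1.0 + (b - 1.0) * (K ** x - 1.0) / (K - 1.0)

# illustrative coefficients only, not values from the ANSI/ANS-6.4.3 tables
B = gp_buildup(10.0, b=2.0, c=1.2, a=-0.05, Xk=15.0, d=0.02)
```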

  7. Spectroscopic methods

    International Nuclear Information System (INIS)

    Ivanovich, M.; Murray, A.

    1992-01-01

    The principles involved in the interaction of nuclear radiation with matter are described, as are the principles behind methods of radiation detection. Different types of radiation detectors are described and methods of detection such as alpha, beta and gamma spectroscopy, neutron activation analysis are presented. Details are given of measurements of uranium-series disequilibria. (UK)

  8. Effect of oven cooking method on formation of heterocyclic amines and quality characteristics of chicken patties: steam-assisted hybrid oven versus convection ovens.

    Science.gov (United States)

    Isleroglu, Hilal; Kemerli, Tansel; Özdestan, Özgül; Uren, Ali; Kaymak-Ertekin, Figen

    2014-09-01

    The aim of this study was to evaluate the effect of a steam-assisted hybrid oven cooking method, in comparison with convection ovens (natural and forced), on quality characteristics (color, hardness, cooking loss, soluble protein content, fat retention, and formation of heterocyclic aromatic amines) of chicken patties. The cooking experiments on chicken patties (n = 648) were conducted at oven temperatures of 180, 210, and 240°C until 3 different end point temperatures (75, 90, and 100°C) were reached. Steam-assisted hybrid oven cooking enabled faster cooking than convection ovens and resulted in chicken patties having lower a* and higher L* values, lower hardness, and lower fat and soluble protein content (P cooking loss than convection ovens. The steam-assisted hybrid oven could reduce the formation of heterocyclic aromatic amines, which have mutagenic and carcinogenic effects in humans. © 2014 Poultry Science Association Inc.

  9. Method Mixins

    DEFF Research Database (Denmark)

    Ernst, Erik

    2002-01-01

    . Method mixins use shared name spaces to transfer information between caller and callee, as opposed to traditional invocation which uses parameters and returned results. This relieves a caller from dependencies on the callee, and it allows direct transfer of information further down the call stack, e......The procedure call mechanism has conquered the world of programming, with object-oriented method invocation being a procedure call in context of an object. This paper presents an alternative, method mixin invocations, that is optimized for flexible creation of composite behavior, where traditional...

  10. Nuclear energy - Uranium dioxide powder and sintered pellets - Determination of oxygen/uranium atomic ratio by the amperometric method. 2. ed.

    International Nuclear Information System (INIS)

    2007-01-01

    This International Standard specifies an analytical method for the determination of the oxygen/uranium atomic ratio in uranium dioxide powder and sintered pellets. The method is applicable to reactor grade samples of hyper-stoichiometric uranium dioxide powder and pellets. The presence of reducing agents or residual organic additives invalidates the procedure. The test sample is dissolved in orthophosphoric acid, which does not oxidize the uranium(IV) from UO2 molecules. Thus, the uranium(VI) that is present in the dissolved solution is from UO3 and/or U3O8 molecules only, and is proportional to the excess oxygen in these molecules. The uranium(VI) content of the solution is determined by titration with a previously standardized solution of ammonium iron(II) sulfate hexahydrate in orthophosphoric acid. The end-point of the titration is determined amperometrically using a pair of polarized platinum electrodes. The oxygen/uranium ratio is calculated from the uranium(VI) content. A portion, weighing about 1 g, of the test sample is dissolved in orthophosphoric acid. The dissolution is performed in an atmosphere of nitrogen or carbon dioxide when sintered material is being analysed. When highly sintered material is being analysed, the dissolution is performed at a higher temperature in purified phosphoric acid from which the water has been partly removed. The cooled solution is titrated with an orthophosphoric acid solution of ammonium iron(II) sulfate, which has previously been standardized against potassium dichromate. The end-point of the titration is detected by the sudden increase of current between a pair of polarized platinum electrodes on the addition of an excess of ammonium iron(II) sulfate solution. The paper provides information about scope, principle, reactions, reagents, apparatus, preparation of test sample, procedure (uranium dioxide powder, sintered pellets of uranium dioxide, highly sintered pellets of uranium dioxide and determination
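    The final step described above (O/U ratio from the titrated U(VI) content) can be sketched as follows. The redox stoichiometry U(VI) + 2 Fe(II) → U(IV) + 2 Fe(III) is standard, and each mole of U(VI) corresponds to one mole of oxygen in excess of stoichiometric UO2 (UO3 = UO2 + O); the default uranium mass fraction and the numbers in the test are illustrative assumptions, not values from the standard:

```python
def o_u_ratio(v_titrant_ml, c_fe_mol_per_l, sample_mass_g,
              u_mass_fraction=0.8815, molar_mass_u=238.03):
    """O/U atomic ratio of a hyperstoichiometric UO2+x sample.

    U(VI) + 2 Fe(II) -> U(IV) + 2 Fe(III): two moles of iron(II) titrant are
    consumed per mole of U(VI), and each mole of U(VI) carries one mole of
    oxygen in excess of stoichiometric UO2.
    """
    n_u6 = v_titrant_ml * 1e-3 * c_fe_mol_per_l / 2.0           # mol U(VI)
    n_u_total = sample_mass_g * u_mass_fraction / molar_mass_u  # mol U total
    return 2.0 + n_u6 / n_u_total
```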

  11. Method Mixins

    DEFF Research Database (Denmark)

    Ernst, Erik

    2002-01-01

    invocation is optimized for as-is reuse of existing behavior. Tight coupling reduces flexibility, and traditional invocation tightly couples transfer of information and transfer of control. Method mixins decouple these two kinds of transfer, thereby opening the doors for new kinds of abstraction and reuse......The procedure call mechanism has conquered the world of programming, with object-oriented method invocation being a procedure call in context of an object. This paper presents an alternative, method mixin invocations, that is optimized for flexible creation of composite behavior, where traditional....... Method mixins use shared name spaces to transfer information between caller and callee, as opposed to traditional invocation which uses parameters and returned results. This relieves a caller from dependencies on the callee, and it allows direct transfer of information further down the call stack, e...

  12. Dosimetry methods

    DEFF Research Database (Denmark)

    McLaughlin, W.L.; Miller, A.; Kovacs, A.

    2003-01-01

    Chemical and physical radiation dosimetry methods, used for the measurement of absorbed dose mainly during the practical use of ionizing radiation, are discussed with respect to their characteristics and fields of application....

  13. Method Mixins

    DEFF Research Database (Denmark)

    Ernst, Erik

    2005-01-01

    The world of programming has been conquered by the procedure call mechanism, including object-oriented method invocation which is a procedure call in context of an object. This paper presents an alternative, method mixin invocations, that is optimized for flexible creation of composite behavior, ...... the call stack, e.g., to a callee's callee. The mechanism has been implemented in the programming language gbeta. Variants of the mechanism could be added to almost any imperative programming language.......The world of programming has been conquered by the procedure call mechanism, including object-oriented method invocation which is a procedure call in context of an object. This paper presents an alternative, method mixin invocations, that is optimized for flexible creation of composite behavior...

  14. A spectral/B-spline method for the Navier-Stokes equations in unbounded domains

    International Nuclear Information System (INIS)

    Dufresne, L.; Dumas, G.

    2003-01-01

    The numerical method presented in this paper aims at solving the incompressible Navier-Stokes equations in unbounded domains. The problem is formulated in cylindrical coordinates and the method is based on a Galerkin approximation scheme that makes use of vector expansions that exactly satisfy the continuity constraint. More specifically, the divergence-free basis vector functions are constructed with Fourier expansions in the θ and z directions while mapped B-splines are used in the semi-infinite radial direction. Special care has been taken to account for the particular analytical behaviors at both end points r=0 and r→∞. A modal reduction algorithm has also been implemented in the azimuthal direction, allowing for a relaxation of the CFL constraint on the timestep size and a possibly significant reduction of the number of DOF. The time marching is carried out using a mixed quasi-third order scheme. Besides the advantages of a divergence-free formulation and a quasi-spectral convergence, the local character of the B-splines allows for a great flexibility in node positioning while keeping narrow bandwidth matrices. Numerical tests show that the present method compares advantageously with other similar methodologies using purely global expansions
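    The mapped-B-spline radial treatment described above can be illustrated with a small sketch: a pure-Python Cox-de Boor evaluation of B-spline basis functions, together with an algebraic map r ↦ r/(r+L) compressing the semi-infinite direction [0, ∞) onto [0, 1). This shows only the generic mapped-spline idea under assumed details (the particular map, knot layout, and scale L are illustrative); the paper's divergence-free vector construction is not reproduced here:

```python
def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: value at x of the i-th B-spline of order k
    (degree k - 1) on knot vector t."""
    if k == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    out = 0.0
    if t[i + k - 1] > t[i]:
        out += (x - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k] > t[i + 1]:
        out += (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return out

def radial_map(r, L=1.0):
    """Algebraic map sending r in [0, inf) to s in [0, 1), so that splines
    defined on [0, 1] act as mapped radial basis functions."""
    return r / (r + L)
```

    With a clamped knot vector, the basis functions form a partition of unity on the interior of the mapped interval, which is the property exploited by Galerkin expansions of this kind.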

  15. A Generalized Method for the Comparable and Rigorous Calculation of the Polytropic Efficiencies of Turbocompressors

    Science.gov (United States)

    Dimitrakopoulos, Panagiotis

    2018-03-01

    The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units such as compressor impellers, stages, and stage groups. Such calculations are also crucial for determining the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need emerged for a new, rigorous, robust, accurate, and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536%. This high-accuracy method is therefore proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
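    As a hedged illustration of the end-point definition referred to above, reduced to an ideal gas with constant heat capacity rather than the real-gas treatment the paper develops, the polytropic efficiency follows from the suction and discharge pressures and temperatures alone:

```python
import math

def polytropic_efficiency_ideal(p1, t1, p2, t2, gamma=1.4):
    """Polytropic (small-stage) compression efficiency between suction
    (p1, t1) and discharge (p2, t2) for an ideal gas with constant heat
    capacity ratio gamma. Temperatures in kelvin, pressures in any
    consistent unit."""
    return ((gamma - 1.0) / gamma) * math.log(p2 / p1) / math.log(t2 / t1)
```

    An isentropic compression recovers an efficiency of exactly 1; any real (hotter) discharge temperature gives a value below 1.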

  16. Ensemble Methods

    Science.gov (United States)

    Re, Matteo; Valentini, Giorgio

    2012-03-01

    Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall “ensemble” decision is rooted in our culture at least from the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is represented by the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of “votes” (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173]. Several theories have been
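    The majority-vote ensemble and the Condorcet jury result described above can be sketched in a few lines (a generic illustration, not code from any work cited in the record):

```python
from collections import Counter
from math import comb

def majority_vote(predictions):
    """Ensemble decision: the class predicted by the majority of the learning
    machines (ties broken by first-seen order, per Counter semantics)."""
    return Counter(predictions).most_common(1)[0][0]

def committee_accuracy(n, p):
    """Condorcet jury probability: the chance that a majority of n independent
    voters, each individually correct with probability p, is correct (n odd).
    Computed as the upper tail of a binomial distribution."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))
```

    For p > 0.5 the committee accuracy grows with n, which is exactly the theorem's statement that a competent committee beats its individual members.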

  17. Method Mixins

    DEFF Research Database (Denmark)

    Ernst, Erik

    2005-01-01

    The world of programming has been conquered by the procedure call mechanism, including object-oriented method invocation which is a procedure call in context of an object. This paper presents an alternative, method mixin invocations, that is optimized for flexible creation of composite behavior...... of abstraction and reuse. Method mixins use shared name spaces to transfer information between caller and callee, as opposed to traditional invocation which uses parameters and returned results. This relieves the caller from dependencies on the callee, and it allows direct transfer of information further down...... the call stack, e.g., to a callee's callee. The mechanism has been implemented in the programming language gbeta. Variants of the mechanism could be added to almost any imperative programming language....

  18. Statistical methods

    CERN Document Server

    Szulc, Stefan

    1965-01-01

    Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then

  19. Sieve methods

    CERN Document Server

    Halberstam, Heine

    2011-01-01

    Derived from the techniques of analytic number theory, sieve theory employs methods from mathematical analysis to solve number-theoretical problems. This text by a noted pair of experts is regarded as the definitive work on the subject. It formulates the general sieve problem, explores the theoretical background, and illustrates significant applications. "For years to come, Sieve Methods will be vital to those seeking to work in the subject, and also to those seeking to make applications," noted prominent mathematician Hugh Montgomery in his review of this volume for the Bulletin of the Ameri

  20. Characterization methods

    Energy Technology Data Exchange (ETDEWEB)

    Glass, J.T. [North Carolina State Univ., Raleigh (United States)

    1993-01-01

    Methods discussed in this compilation of notes and diagrams are Raman spectroscopy, scanning electron microscopy, transmission electron microscopy, and other surface analysis techniques (auger electron spectroscopy, x-ray photoelectron spectroscopy, electron energy loss spectroscopy, and scanning tunnelling microscopy). A comparative evaluation of different techniques is performed. In-vacuo and in-situ analyses are described.

  1. Digital Methods

    NARCIS (Netherlands)

    Rogers, R.

    2013-01-01

    In Digital Methods, Richard Rogers proposes a methodological outlook for social and cultural scholarly research on the Web that seeks to move Internet research beyond the study of online culture. It is not a toolkit for Internet research, or operating instructions for a software package; it deals

  2. A spectral/B-spline method for the Navier-Stokes equations in unbounded domains

    CERN Document Server

    Dufresne, L

    2003-01-01

    The numerical method presented in this paper aims at solving the incompressible Navier-Stokes equations in unbounded domains. The problem is formulated in cylindrical coordinates and the method is based on a Galerkin approximation scheme that makes use of vector expansions that exactly satisfy the continuity constraint. More specifically, the divergence-free basis vector functions are constructed with Fourier expansions in the theta and z directions while mapped B-splines are used in the semi-infinite radial direction. Special care has been taken to account for the particular analytical behaviors at both end points r=0 and r-> infinity. A modal reduction algorithm has also been implemented in the azimuthal direction, allowing for a relaxation of the CFL constraint on the timestep size and a possibly significant reduction of the number of DOF. The time marching is carried out using a mixed quasi-third order scheme. Besides the advantages of a divergence-free formulation and a quasi-spectral convergence, the lo...

  3. Chromatographic methods

    International Nuclear Information System (INIS)

    Marhol, M.; Stary, J.

    1975-01-01

    The characteristics of chromatographic separation are given and the methods are listed. Methods and data on materials used in partition, adsorption, precipitation, and ion exchange chromatography are listed, and the conditions under which ion partition takes place are described. Special attention is devoted to ion exchange chromatography, where tables show how the partition coefficients of different ions vary with reagent concentration and how equilibrium sorption on different materials varies with solution pH. A theoretical analysis is given and the properties of the most widely used ion exchangers are listed. Experimental conditions and the apparatus used for each type of chromatography are listed. (L.K.)

  4. Numerical methods

    CERN Document Server

    Dahlquist, Germund

    1974-01-01

    ""Substantial, detailed and rigorous . . . readers for whom the book is intended are admirably served."" - MathSciNet (Mathematical Reviews on the Web), American Mathematical Society.Practical text strikes fine balance between students' requirements for theoretical treatment and needs of practitioners, with best methods for large- and small-scale computing. Prerequisites are minimal (calculus, linear algebra, and preferably some acquaintance with computer programming). Text includes many worked examples, problems, and an extensive bibliography.

  5. A fluorescence high throughput screening method for the detection of reactive electrophiles as potential skin sensitizers

    Energy Technology Data Exchange (ETDEWEB)

    Avonto, Cristina; Chittiboyina, Amar G. [National Center for Natural Products Research, Research Institute of Pharmaceutical Sciences, School of Pharmacy, The University of Mississippi, University, MS 38677 (United States); Rua, Diego [The Center for Food Safety and Applied Nutrition, US Food and Drug Administration, College Park, MD 20740 (United States); Khan, Ikhlas A., E-mail: ikhan@olemiss.edu [National Center for Natural Products Research, Research Institute of Pharmaceutical Sciences, School of Pharmacy, The University of Mississippi, University, MS 38677 (United States); Division of Pharmacognosy, Department of BioMolecular Sciences, School of Pharmacy, The University of Mississippi, University, MS 38677 (United States)

    2015-12-01

    Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization, integrated approaches combining different chemical, biological and in silico methods are recommended to replace conventional animal tests. Chemical methods are intended to characterize the potential of a sensitizer to induce earlier molecular initiating events. The presence of an electrophilic mechanistic domain is considered one of the essential chemical features to covalently bind to the biological target and induce further haptenation processes. Current in chemico assays rely on the quantification of unreacted model nucleophiles after incubation with the candidate sensitizer. In the current study, a new fluorescence-based method, ‘HTS-DCYA assay’, is proposed. The assay aims at the identification of reactive electrophiles based on their chemical reactivity toward a model fluorescent thiol. The reaction workflow enabled the development of a High Throughput Screening (HTS) method to directly quantify the reaction adducts. The reaction conditions have been optimized to minimize solubility issues, oxidative side reactions and increase the throughput of the assay while minimizing the reaction time, which are common issues with existing methods. Thirty-six chemicals previously classified with LLNA, DPRA or KeratinoSens™ were tested as a proof of concept. Preliminary results gave an estimated 82% accuracy, 78% sensitivity, 90% specificity, comparable to other in chemico methods such as Cys-DPRA. In addition to validated chemicals, six natural products were analyzed and a prediction of their sensitization potential is presented for the first time. - Highlights: • A novel fluorescence-based method to detect electrophilic sensitizers is proposed. • A model fluorescent thiol was used to directly quantify the reaction products. • A discussion of the reaction workflow
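    The accuracy, sensitivity, and specificity figures quoted above come from a standard 2x2 confusion-matrix calculation; a generic sketch (the counts in the test are made-up illustrations, not the study's data):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity from a 2x2 confusion matrix,
    of the kind used to benchmark an assay against reference labels
    (e.g., LLNA/DPRA classifications)."""
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # fraction of true sensitizers detected
        "specificity": tn / (tn + fp),  # fraction of non-sensitizers cleared
    }
```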

  6. A fluorescence high throughput screening method for the detection of reactive electrophiles as potential skin sensitizers

    International Nuclear Information System (INIS)

    Avonto, Cristina; Chittiboyina, Amar G.; Rua, Diego; Khan, Ikhlas A.

    2015-01-01

    Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization, integrated approaches combining different chemical, biological and in silico methods are recommended to replace conventional animal tests. Chemical methods are intended to characterize the potential of a sensitizer to induce earlier molecular initiating events. The presence of an electrophilic mechanistic domain is considered one of the essential chemical features to covalently bind to the biological target and induce further haptenation processes. Current in chemico assays rely on the quantification of unreacted model nucleophiles after incubation with the candidate sensitizer. In the current study, a new fluorescence-based method, ‘HTS-DCYA assay’, is proposed. The assay aims at the identification of reactive electrophiles based on their chemical reactivity toward a model fluorescent thiol. The reaction workflow enabled the development of a High Throughput Screening (HTS) method to directly quantify the reaction adducts. The reaction conditions have been optimized to minimize solubility issues, oxidative side reactions and increase the throughput of the assay while minimizing the reaction time, which are common issues with existing methods. Thirty-six chemicals previously classified with LLNA, DPRA or KeratinoSens™ were tested as a proof of concept. Preliminary results gave an estimated 82% accuracy, 78% sensitivity, 90% specificity, comparable to other in chemico methods such as Cys-DPRA. In addition to validated chemicals, six natural products were analyzed and a prediction of their sensitization potential is presented for the first time. - Highlights: • A novel fluorescence-based method to detect electrophilic sensitizers is proposed. • A model fluorescent thiol was used to directly quantify the reaction products. • A discussion of the reaction workflow

  7. [The methods of Western medicine in On Ancient Medicine].

    Science.gov (United States)

    Ban, Deokjin

    2010-06-30

    has a systematic method of treatment. The reason for this is that he thought that discoveries are the end point of the method of investigation and the starting point of the procedures used in treatment.

  8. Method for estimating off-axis pulse tube losses

    Science.gov (United States)

    Fang, T.; Mulcahey, T. I.; Taylor, R. P.; Spoor, P. S.; Conrad, T. J.; Ghiaasiaan, S. M.

    2017-12-01

    Some Stirling-type pulse tube cryocoolers (PTCs) exhibit sensitivity to gravitational orientation and often exhibit significant cooling performance losses unless situated with the cold end pointing downward. Prior investigations have indicated that some coolers exhibit sensitivity while others do not; however, a reliable method of predicting the level of sensitivity during the design process has not been developed. In this study, we present a relationship that estimates an upper limit to gravitationally induced losses as a function of the dimensionless pulse tube convection number (NPTC) that can be used to ensure that a PTC would remain functional at adverse static tilt conditions. The empirical relationship is based on experimental data as well as experimentally validated 3-D computational fluid dynamics simulations that examine the effects of frequency, mass flow rate, pressure ratio, mass-pressure phase difference, hot and cold end temperatures, and static tilt angle. The validation of the computational model is based on experimental data collected from six commercial pulse tube cryocoolers. The simulation results are obtained from component-level models of the pulse tube and heat exchangers. Parameter ranges covered in component level simulations are 0-180° for tilt angle, 4-8 for length to diameter ratios, 4-80 K cold tip temperatures, -30° to +30° for mass flow to pressure phase angles, and 25-60 Hz operating frequencies. Simulation results and experimental data are aggregated to yield the relationship between inclined PTC performance and pulse tube convection numbers. The results indicate that the pulse tube convection number can be used as an order of magnitude indicator of the orientation sensitivity, but CFD simulations should be used to calculate the change in energy flow more accurately.

  9. Sampling methods

    International Nuclear Information System (INIS)

    Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.

    2002-01-01

    Methods for the collection of soil samples to determine levels of 137Cs and other fallout radionuclides, such as excess 210Pb and 7Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed, and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of 137Cs in erosion studies has been widely developed, while the application of fallout 210Pb and 7Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of 137Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using 137Cs. However, fallout 210Pb and 7Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth-incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth distribution of fallout nuclides on slopes and depositional sites as well as their total inventories
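    One of the simplest 137Cs conversion models alluded to above is the proportional model for cultivated soils, which converts the percentage inventory loss at a sampled point (relative to the reference site) into a soil loss rate. A hedged sketch; the default plough depth, bulk density, and elapsed time are illustrative placeholders, not recommended values:

```python
def proportional_model_erosion(a_site, a_ref, depth_m=0.2,
                               bulk_density_kg_m3=1300.0, t_years=45):
    """Proportional model: soil loss rate (kg m^-2 yr^-1) from the percentage
    reduction of the 137Cs inventory at a sampled point (a_site) relative to
    the local reference-site inventory (a_ref), both in Bq m^-2, assuming
    fallout mixed uniformly through the plough layer."""
    x = 100.0 * (a_ref - a_site) / a_ref  # percent inventory loss
    return depth_m * bulk_density_kg_m3 * x / (100.0 * t_years)
```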

  10. Thermodynamic free energy methods to investigate shape transitions in bilayer membranes.

    Science.gov (United States)

    Ramakrishnan, N; Tourdot, Richard W; Radhakrishnan, Ravi

    2016-06-01

    The conformational free energy landscape of a system is a fundamental thermodynamic quantity, of particular importance in the study of soft matter and biological systems, in which the entropic contributions play a dominant role. While computational methods to delineate the free energy landscape are routinely used to analyze the relative stability of conformational states, to determine phase boundaries, and to compute ligand-receptor binding energies, their use in problems involving the cell membrane is limited. Here, we present an overview of four different free energy methods to study morphological transitions in bilayer membranes, induced either by the action of curvature remodeling proteins or by the application of external forces. Using a triangulated surface as a model for the cell membrane and the framework of dynamical triangulation Monte Carlo, we have focused on the methods of Widom insertion, thermodynamic integration, the Bennett acceptance scheme, and umbrella sampling with weighted histogram analysis. We have demonstrated how these methods can be employed in a variety of problems involving the cell membrane. Specifically, we have shown that the chemical potential, computed using Widom insertion, and the relative free energies, computed using thermodynamic integration and the Bennett acceptance method, are excellent measures to study the transition from curvature-sensing to curvature-inducing behavior of membrane-associated proteins. Umbrella sampling with WHAM analysis has been used to study the thermodynamics of tether formation in cell membranes, and the quantitative predictions of the computational model are in excellent agreement with experimental measurements. Furthermore, we also present a method based on WHAM and thermodynamic integration to handle problems related to the end-point catastrophe that are common in most free energy methods.
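    Of the four methods surveyed, Widom insertion reduces to the simplest formula: the excess chemical potential is mu_ex = -kT ln<exp(-dU/kT)>, averaged over random ghost-particle insertions. A minimal sketch operating on precomputed insertion energies (how each dU is obtained is model-specific and not shown here):

```python
import math

def widom_excess_chemical_potential(du_samples, kT=1.0):
    """Widom test-particle insertion: mu_ex = -kT * ln(<exp(-dU/kT)>),
    where each dU is the potential-energy change of inserting a ghost
    particle at a random position in the sampled configurations."""
    boltzmann_factors = [math.exp(-du / kT) for du in du_samples]
    return -kT * math.log(sum(boltzmann_factors) / len(boltzmann_factors))
```

    For an ideal gas every insertion costs nothing (dU = 0), so the excess chemical potential is zero, which is a convenient sanity check.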

  11. Decontaminating method

    International Nuclear Information System (INIS)

    Furukawa, Toshiharu; Shibuya, Kiichiro.

    1985-01-01

    Purpose: To provide a method of eliminating radioactive contamination that allows easy treatment of the decontamination liquid wastes and grinding materials. Method: Organic grinding materials, such as fine walnut shell pieces, cause no secondary contamination: they are softer than inorganic grinding materials, are less readily pulverized on collision with the surface being treated, can be reused, and produce no fine scattered powder. In addition, they can be disposed of by burning. The organic grinding material and water are sprayed through a nozzle onto the surface to be treated, and the decontamination liquid wastes are separated by filtering into solid components, mainly the organic grinding materials, and liquid components, mainly water. The separated solid components are recovered in a storage tank for reuse as grinding material and, after repeated use, are burned. Water, on the other hand, is recovered into a storage tank and, after repeated use, is purified by passing through an ion-exchange resin column and decontaminated before discharge. (Horiuchi, T.)

  12. WELDING METHOD

    Science.gov (United States)

    Cornell, A.A.; Dunbar, J.V.; Ruffner, J.H.

    1959-09-29

    A semi-automatic method is described for the weld joining of pipes and fittings which utilizes the inert gas-shielded consumable electrode electric arc welding technique, comprising laying down the root pass at a first peripheral velocity and thereafter laying down the filler passes over the root pass necessary to complete the weld by revolving the pipes and fittings at a second peripheral velocity different from the first peripheral velocity, maintaining the welding head in a fixed position as to the specific direction of revolution, while the longitudinal axis of the welding head is disposed angularly in the direction of revolution at amounts between twenty minutes and about four degrees from the first position.

  13. Casting methods

    Science.gov (United States)

    Marsden, Kenneth C.; Meyer, Mitchell K.; Grover, Blair K.; Fielding, Randall S.; Wolfensberger, Billy W.

    2012-12-18

    A casting device includes a covered crucible having a top opening and a bottom orifice, a lid covering the top opening, a stopper rod sealing the bottom orifice, and a reusable mold having at least one chamber, a top end of the chamber being open to and positioned below the bottom orifice and a vacuum tap into the chamber being below the top end of the chamber. A casting method includes charging a crucible with a solid material and covering the crucible, heating the crucible, melting the material, evacuating a chamber of a mold to less than 1 atm absolute through a vacuum tap into the chamber, draining the melted material into the evacuated chamber, solidifying the material in the chamber, and removing the solidified material from the chamber without damaging the chamber.

  14. Radiochemical methods

    International Nuclear Information System (INIS)

    Geary, W.J.

    1986-01-01

    This little volume is one of an extended series of basic textbooks on analytical chemistry produced by the Analytical Chemistry by Open Learning project in the UK. Prefatory sections explain its mission and how to use the Open Learning format. Seventeen specific sections organized into five chapters begin with a general discussion of nuclear properties, types, and laws of nuclear decay, and proceed to specific discussions of three published papers (reproduced in their entirety) giving examples of radiochemical methods which were discussed in the previous chapter. Each section begins with an overview, contains one or more practical problems (called self-assessment questions or SAQ's), and concludes with a summary and a list of objectives for the student. Following the main body are answers to the SAQ's, and several tables of physical constants, SI prefixes, etc. A periodic table graces the inside back cover

  15. Moment methods and Lanczos methods

    International Nuclear Information System (INIS)

    Whitehead, R.R.

    1980-01-01

    In contrast to many of the speakers at this conference I am less interested in average properties of nuclei than in detailed spectroscopy. I will try to show, however, that the two are very closely connected and that shell-model calculations may be used to give a great deal of information not normally associated with the shell-model. It has been demonstrated clearly to us that the level spacing fluctuations in nuclear spectra convey very little physical information. This is true when the fluctuations are averaged over the entire spectrum but not if one's interest is in the lowest few states, whose spacings are relatively large. If one wishes to calculate a ground state (say) accurately, that is, with an error much smaller than the excitation energy of the first excited state, very high moments, μ_n with n ≈ 200, are needed. As I shall show, we use such moments as a matter of course, albeit without actually calculating them; in fact I will try to show that, if at all possible, the actual calculation of moments is to be avoided like the plague. At the heart of the new shell-model methods embodied in the Glasgow shell-model program and one or two similar ones is the so-called Lanczos method and this, it turns out, has many deep and subtle connections with the mathematical theory of moments. It is these connections that I will explore here
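
    The Lanczos iteration alluded to above can be sketched as follows: a few matrix-vector products reduce a large symmetric matrix to a small tridiagonal one whose extreme Ritz values approximate the extreme eigenvalues, such as a ground-state energy (generic Python illustration; the matrix and its spectrum are invented, not a shell-model Hamiltonian):

    ```python
    import numpy as np

    def lanczos_tridiag(A, v0, m):
        """m-step Lanczos iteration: reduce symmetric A to a small tridiagonal T
        whose extreme eigenvalues (Ritz values) approximate those of A."""
        q = v0 / np.linalg.norm(v0)
        q_prev = np.zeros_like(q)
        alphas, betas = [], []
        beta = 0.0
        for _ in range(m):
            w = A @ q - beta * q_prev
            alpha = float(q @ w)
            w -= alpha * q
            beta = float(np.linalg.norm(w))
            alphas.append(alpha)
            if beta < 1e-12:  # invariant subspace found, stop early
                break
            betas.append(beta)
            q_prev, q = q, w / beta
        k = len(alphas)
        return (np.diag(alphas)
                + np.diag(betas[:k - 1], 1)
                + np.diag(betas[:k - 1], -1))

    # Symmetric test matrix with a known spectrum on [0, 1] (min eigenvalue 0).
    rng = np.random.default_rng(0)
    spectrum = np.linspace(0.0, 1.0, 100)
    Q_orth, _ = np.linalg.qr(rng.standard_normal((100, 100)))
    H = Q_orth @ np.diag(spectrum) @ Q_orth.T
    H = (H + H.T) / 2

    T = lanczos_tridiag(H, rng.standard_normal(100), 40)
    ground_state = float(np.linalg.eigvalsh(T).min())  # Ritz estimate of min(spectrum)
    ```

    Forty matrix-vector products suffice here, even though no moment of the spectrum is ever formed explicitly — which is exactly the point made above.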

  16. Operator control systems and methods for swing-free gantry-style cranes

    Science.gov (United States)

    Feddema, John T.; Petterson, Ben J.; Robinett, III, Rush D.

    1998-01-01

    A system and method for eliminating swing motions in gantry-style cranes while subject to operator control is presented. The present invention comprises an infinite impulse response ("IIR") filter and a proportional-integral ("PI") feedback controller (50). The IIR filter receives input signals (46) (commanded velocity or acceleration) from an operator input device (45) and transforms them into output signals (47) in such a fashion that the resulting motion is swing free (i.e., end-point swinging prevented). The parameters of the IIR filter are updated in real time using measurements from a hoist cable length encoder (25). The PI feedback controller compensates for modeling errors and external disturbances, such as wind or perturbations caused by collision with objects. The PI feedback controller operates on cable swing angle measurements provided by a cable angle sensor (27). The present invention adjusts acceleration and deceleration to eliminate oscillations. An especially important feature of the present invention is that it compensates for variable-length cable motions from multiple cables attached to a suspended payload.
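
    The PI feedback loop described above can be sketched as a discrete-time loop that drives a residual swing-angle error to zero (a hypothetical discretization; the gains, time step, and first-order plant are illustrative, not the patent's IIR-filter/PI parameters):

    ```python
    # Minimal discrete PI loop damping a residual swing-angle error.
    kp, ki, dt = 2.0, 0.5, 0.01    # proportional gain, integral gain, step (s)
    angle, integral = 1.0, 0.0     # initial swing-angle error (rad), integrator
    for _ in range(2000):          # simulate 20 s
        integral += angle * dt
        u = -(kp * angle + ki * integral)  # PI correction command
        angle += u * dt                    # simplified first-order plant response
    ```

    The integral term is what lets the controller reject constant disturbances such as steady wind, at the cost of slightly slower settling.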

  17. HPLC determination of plasma dimethylarginines: method validation and preliminary clinical application.

    Science.gov (United States)

    Ivanova, Mariela; Artusi, Carlo; Boffa, Giovanni Maria; Zaninotto, Martina; Plebani, Mario

    2010-11-11

    Asymmetric dimethylarginine (ADMA) has been suggested as a possible marker of endothelial dysfunction, and interest in its use in clinical practice is increasing. However, the potential role of symmetric dimethylarginine (SDMA) as an endogenous marker of renal function has been less widely investigated. The aims of the present study were therefore to determine reference values for dimethylarginines in plasma after method validation, and to ascertain ADMA plasma concentrations in patients with disorders characterized by endothelial dysfunction; a further end-point was to investigate the relationship between SDMA plasma concentrations and estimated GFR (eGFR) as well as plasma creatinine in patients with chronic kidney disease (CKD). HPLC with fluorescence detection was used for the determination of plasma dimethylarginines. To verify the clinical usefulness of ADMA and SDMA, values from 4 groups of patients at high risk of cardiovascular complications as well as renal dysfunction (chronic heart failure n=126; type II diabetes n=43; pulmonary arterial hypertension n=17; chronic kidney disease n=42) were evaluated and compared with the reference values obtained from 225 blood donors. The intra- and inter-assay CVs were low, and SDMA correlated with renal function, suggesting its usefulness in paediatric populations, for which the use of eGFR is not recommended. 2010 Elsevier B.V. All rights reserved.

  18. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains.

    Science.gov (United States)

    Tataru, Paula; Hobolth, Asger

    2011-12-05

    Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.
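
    The end-point-conditioned expectation can be sketched directly from its defining integral, using an eigenvalue decomposition for the matrix exponential (a minimal Python illustration with a made-up 3-state rate matrix; the authors' published implementation is the R code at the URL above):

    ```python
    import numpy as np

    def expm_evd(Q, t):
        """exp(Q*t) via eigen-decomposition of the rate matrix (the EVD route)."""
        lam, V = np.linalg.eig(Q)
        return (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)).real

    # Hypothetical 3-state rate matrix (rows sum to zero), not from the paper.
    Q = np.array([[-2.0, 1.0, 1.0],
                  [0.5, -1.0, 0.5],
                  [1.0, 1.0, -2.0]])
    t, a, b = 1.5, 0, 2

    # E[time in state i | X_0 = a, X_t = b]
    #   = (1 / P_ab(t)) * integral_0^t P_ai(s) * P_ib(t - s) ds,
    # approximated here with the trapezoidal rule on a fine grid.
    grid = np.linspace(0.0, t, 2001)
    Ps = [expm_evd(Q, s) for s in grid]   # P(s) on the grid
    Pr = Ps[::-1]                         # P(t - s) on the same grid
    P_t = Ps[-1]
    ds = grid[1] - grid[0]
    dwell = np.empty(3)
    for i in range(3):
        vals = np.array([Ps[k][a, i] * Pr[k][i, b] for k in range(len(grid))])
        dwell[i] = (vals[:-1] + vals[1:]).sum() * ds / 2 / P_t[a, b]
    ```

    A useful sanity check is that the conditional dwell times over all states must sum to the total time t.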

  19. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains

    Directory of Open Access Journals (Sweden)

    Tataru Paula

    2011-12-01

    Full Text Available Abstract Background Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. Results We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. Conclusions We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.

  20. Development of automatic extraction method of left ventricular contours on long axis view MR cine images

    International Nuclear Information System (INIS)

    Utsunomiya, Shinichi; Iijima, Naoto; Yamasaki, Kazunari; Fujita, Akinori

    1995-01-01

    In the MRI cardiac function analysis, left ventricular volume curves and diagnostic parameters are obtained by extracting the left ventricular cavities as regions of interest (ROI) from long axis view MR cine images. The ROI extraction had to be done manually, because automating the extraction is difficult. A long axis view left ventricular contour consists of a cardiac wall part and an aortic valve part. The above-mentioned difficulty is due to the low contrast on the cardiac wall part and the disappearance of the edge on the aortic valve part. In this paper, we report a new automatic extraction method for long axis view MR cine images, which needs only 3 manually indicated points on the 1st image to extract all the contours from the entire sequence of images. First, candidate points of a contour are detected by edge detection. Then, by selecting the best matched combination of candidate points by Dynamic Programming, the cardiac wall part is automatically extracted. The aortic valve part is manually extracted for the 1st image by indicating both end points, and is automatically extracted for the rest of the images by utilizing the aortic valve motion characteristics throughout a cardiac cycle. (author)
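
    The Dynamic Programming step — selecting the best matched combination of candidate points, trading edge strength against contour smoothness — can be sketched as follows (a toy Python illustration; the candidate lists and cost weighting are invented, not the authors' MR data):

    ```python
    def best_contour(candidates, smooth_weight=1.0):
        """candidates[c] = list of (row, edge_cost) per image column.
        Returns one row per column minimizing edge cost + smoothness penalty."""
        n = len(candidates)
        dp = [[cost for _, cost in candidates[0]]]  # dp[c][j]: best cost ending at j
        back = []
        for c in range(1, n):
            col_dp, col_back = [], []
            for row, cost in candidates[c]:
                prev = min(range(len(candidates[c - 1])),
                           key=lambda j: dp[-1][j]
                           + smooth_weight * abs(candidates[c - 1][j][0] - row))
                col_dp.append(dp[-1][prev]
                              + smooth_weight * abs(candidates[c - 1][prev][0] - row)
                              + cost)
                col_back.append(prev)
            dp.append(col_dp)
            back.append(col_back)
        # Backtrack from the cheapest end state.
        j = min(range(len(dp[-1])), key=lambda k: dp[-1][k])
        path = [candidates[-1][j][0]]
        for c in range(n - 1, 0, -1):
            j = back[c - 1][j]
            path.append(candidates[c - 1][j][0])
        return path[::-1]

    # Three columns, two edge candidates each; the smooth path wins.
    cands = [[(10, 0.1), (40, 0.0)], [(12, 0.2), (41, 0.5)], [(11, 0.1), (60, 0.0)]]
    contour = best_contour(cands)
    ```

    The smoothness term is what suppresses isolated spurious edge candidates that a greedy per-column choice would pick.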

  1. Preferential Cyclooxygenase 2 Inhibitors as a Nonhormonal Method of Emergency Contraception: A Look at the Evidence.

    Science.gov (United States)

    Weiss, Erich A; Gandhi, Mona

    2016-04-01

    To review the literature surrounding the use of preferential cyclooxygenase 2 (COX-2) inhibitors as an alternative form of emergency contraception. MEDLINE (1950 to February 2014) was searched using the key words cyclooxygenase or COX-2 combined with contraception, emergency contraception, or ovulation. Results were limited to randomized control trials, controlled clinical trials, and clinical trials. Human trials that measured the effects of COX inhibition on female reproductive potential were included for review. The effects of the COX-2 inhibitors rofecoxib, celecoxib, and meloxicam were evaluated in 6 trials, each of which was small in scope, enrolled women of variable fertility status, used different dosing regimens, included multiple end points, and had variable results. Insufficient evidence exists to fully support the use of preferential COX-2 inhibitors as a form of emergency contraception. Although all trials resulted in a decrease in ovulatory cycles, outcomes varied between dosing strategies and agents used. A lack of homogeneity in these studies makes comparisons difficult. However, success of meloxicam in multiple trials warrants further study. Larger human trials are necessary before the clinical utility of this method of emergency contraception can be fully appreciated. © The Author(s) 2014.

  2. Determination of microamounts of carbon in various metals and alloys by the combustion-nonaqueous titrimetric method

    Energy Technology Data Exchange (ETDEWEB)

    Yoshimori, T; Koike, A [Science Univ. of Tokyo (Japan). Faculty of Engineering; Katoh, N

    1977-12-01

    Microamounts of carbon (7 -- 600 ppm) in ferrous and non-ferrous metals and alloys were determined by the combustion-nonaqueous titrimetric method. The carbon dioxide liberated by the combustion of a sample was absorbed with dimethylformamide (DMF) containing monoethanolamine, and the absorbent was then titrated with a standard benzene-methanol solution of tetra-n-butylammonium hydroxide (0.007-0.002 M). The end point of the titration was located either visually, using thymolphthalein as an indicator, or potentiometrically, using a pair of platinum and calomel (DMF-containing) electrodes. Pure benzoic acid was used as the standard substance for the standardization. Many improvements were made to both the combustion apparatus and the procedure. Microamounts of carbon in various samples were determined by the proposed method. They are: plain carbon and high purity ferritic stainless steels (0.05 -- 0.002% C), Inconel X-750 (0.027% C), copper alloys (20 -- 30 ppm C), tantalum powder (40 ppm C) and high purity metallic uranium (7 ppm C). All results were quite satisfactory and indicate that the proposed method is suitable for the determination of carbon at less than 100 ppm in various samples without use of any standard samples or calibration curves.
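
    Potentiometric end-point location of the kind used here can be sketched numerically: the end point sits where the titration curve is steepest, i.e. where dE/dV peaks and the second derivative changes sign (synthetic Python illustration, not the paper's titration data):

    ```python
    import numpy as np

    # Idealized sigmoidal electrode response around a true equivalence
    # point at 1.23 mL (all numbers are made up for illustration).
    v = np.linspace(0.0, 2.0, 201)           # titrant volume, mL
    E = 100.0 * np.tanh((v - 1.23) / 0.05)   # electrode potential, mV

    d1 = np.gradient(E, v)                   # first derivative dE/dV
    end_point = v[np.argmax(d1)]             # steepest point = end point
    ```

    On noisy real data the derivative is usually smoothed first, which is one reason Gran-type linearization (fitting points away from the steep region) is attractive.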

  3. On method

    Directory of Open Access Journals (Sweden)

    Frederik Kortlandt

    2018-01-01

    Full Text Available The basis of linguistic reconstruction is the comparative method, which starts from the assumption that there is “a stronger affinity, both in the roots of verbs and in the forms of grammar, than could possibly have been produced by accident”, implying the existence of a common source (thus Sir William Jones in 1786). It follows that there must be a possible sequence of developments from the reconstructed system to the attested data. These developments must have been either phonetically regular or analogical. The latter type of change requires a model and a motivation. A theory which does not account for the data in terms of sound laws and well-motivated analogical changes is not a linguistic reconstruction but philosophical speculation. The pre-laryngealist idea that any Proto-Indo-European long vowel became acute in Balto-Slavic is a typical example of philosophical speculation contradicted by the comparative evidence. Other examples are spontaneous glottalization (Jasanoff’s “acute assignment”, unattested anywhere in the world), Jasanoff’s trimoraic long vowels, Eichner’s law, Osthoff’s law, and Szemerényi’s law, which is an instance of circular reasoning. The Balto-Slavic acute continues the Proto-Indo-European laryngeals and the glottalic feature of the traditional Proto-Indo-European “unaspirated voiced” obstruents (Winter’s law). My reconstruction of Proto-Indo-European glottalic obstruents is based on direct evidence from Indo-Iranian, Armenian, Baltic and Germanic and indirect evidence from Indo-Iranian, Greek, Latin and Slavic.

  4. Highly sensitive detection of influenza virus in saliva by real-time PCR method using sugar chain-immobilized gold nanoparticles; application to clinical studies

    Directory of Open Access Journals (Sweden)

    Yasuo Suda

    2015-09-01

    Full Text Available A highly sensitive and convenient method for detecting influenza virus was developed using modified end-point melt curve analysis of an RT-qPCR SYBR Green method and influenza virus-binding sugar chain-immobilized gold nanoparticles (SGNPs). Because SGNPs capture influenza viruses, the virus-SGNP complex was separated easily by centrifugation. Viral RNA was detected at very low concentrations, suggesting that SGNPs increased sensitivity compared with standard methods. This method was applied to clinical studies. Influenza viruses were detected in saliva of patients or inpatients who had been considered influenza-free by a rapid diagnostic assay of nasal swabs. Furthermore, the method was applied to a human trial of the prophylactic anti-influenza properties of yogurt containing Lactobacillus acidophilus L-92. The incidence of influenza viruses in saliva of the L-92 group was found to be significantly lower compared to the control group. Thus, this method was useful for monitoring the course of anti-influenza treatment or preventive measures against nosocomial infection.

  5. EDTA excess Zn(II) back-titration in the presence of 4-(2-pyridylazo)-resorcinol indicator and naphthol green β as inert dye for determining Cr(III) as Cr(III)/EDTA complex: Application of the method to a leather industry wastewater

    Energy Technology Data Exchange (ETDEWEB)

    Venezia, M.; Alonzo, G. [Dipartimento di Ingegneria e Tecnologie Agro Forestali, Universita degli Studi di Palermo, Viale delle Scienze, 90128 Palermo (Italy); Palmisano, L. [Dipartimento di Ingegneria Chimica dei Processi e dei Materiali, Universita degli Studi di Palermo, Viale delle Scienze, 90128 Palermo (Italy)], E-mail: palmisano@dicpm.unipa.it

    2008-03-01

    The colour changes of 4-(2-pyridylazo)-resorcinol and naphthol green β as a new screening metallochromic indicator in back-titration of excess EDTA with Zn(II) to determine the Cr(III)/EDTA complex were investigated with the help of tristimulus colorimetry. Specific colour discrimination (SCD) and L*, a*, b* 1976 parameters were successfully applied to evaluate the quality of the colour transition at the end-point in non-alkaline media and in the presence of Zn(II) and Ca(II), which proved to be non-interfering species at 1 × 10⁻³ M and 2 × 10⁻³ M, respectively. The above concentrations are comparable with those used for Cr(III). Validation of the fast and accurate reported method was performed by atomic absorption spectroscopy. Moreover, the method was applied for determining Cr as Cr(III) in a wastewater effluent deriving from a leather industry.
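
    The tristimulus evaluation rests on distances in L*, a*, b* space; a minimal sketch of the CIE76 colour difference used for such comparisons (the two colour triples below are hypothetical, not the indicator's measured values):

    ```python
    import math

    def delta_e76(lab1, lab2):
        """CIE76 colour difference between two (L*, a*, b*) triples."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

    before = (52.0, 41.0, 30.0)   # hypothetical colour before the end point
    after = (48.0, -14.0, 22.0)   # hypothetical colour after the end point
    d = delta_e76(before, after)  # large delta-E = sharp, easy-to-see transition
    ```

    A large ΔE*ab across the end point is exactly what makes a screened indicator transition easy to judge by eye.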

  6. EDTA excess Zn(II) back-titration in the presence of 4-(2-pyridylazo)-resorcinol indicator and naphthol green β as inert dye for determining Cr(III) as Cr(III)/EDTA complex: Application of the method to a leather industry wastewater

    International Nuclear Information System (INIS)

    Venezia, M.; Alonzo, G.; Palmisano, L.

    2008-01-01

    The colour changes of 4-(2-pyridylazo)-resorcinol and naphthol green β as a new screening metallochromic indicator in back-titration of excess EDTA with Zn(II) to determine the Cr(III)/EDTA complex were investigated with the help of tristimulus colorimetry. Specific colour discrimination (SCD) and L*, a*, b* 1976 parameters were successfully applied to evaluate the quality of the colour transition at the end-point in non-alkaline media and in the presence of Zn(II) and Ca(II), which proved to be non-interfering species at 1 × 10⁻³ M and 2 × 10⁻³ M, respectively. The above concentrations are comparable with those used for Cr(III). Validation of the fast and accurate reported method was performed by atomic absorption spectroscopy. Moreover, the method was applied for determining Cr as Cr(III) in a wastewater effluent deriving from a leather industry

  7. Susceptibility screening of hyphae-forming fungi with a new, easy, and fast inoculum preparation method.

    Science.gov (United States)

    Schmalreck, Arno; Willinger, Birgit; Czaika, Viktor; Fegeler, Wolfgang; Becker, Karsten; Blum, Gerhard; Lass-Flörl, Cornelia

    2012-12-01

    In vitro susceptibility testing of clinically important fungi is becoming increasingly important due to the rising number of fungal infections in patients with impaired immune systems. Existing standardized microbroth dilution methods for in vitro testing of molds (CLSI, EUCAST) are not intended for routine testing; they are very time-consuming and depend on sporulation of the hyphomycetes. In this multicentre study, a new inoculum preparation method that is independent of sporulation (the inoculum contains a mixture of vegetative cells, hyphae, and conidia) was evaluated. Minimal inhibitory concentrations (MIC) of amphotericin B, posaconazole, and voriconazole for 180 molds were determined with two different culture media (YST and RPMI 1640) according to the DIN (Deutsches Institut für Normung) microdilution assay. The 24 and 48 h MICs of quality control strains, tested in each run and prepared with the new inoculum method, were within the DIN range. YST and RPMI 1640 media showed similar MIC distributions for all molds tested. MIC readings at 48 versus 24 h yielded 1 log2 higher MIC values, and more than 90 % of the MICs read at 24 and 48 h were within ± 2 log2 dilutions. Comparison of MIC end point readings between the two media (log2 MIC(RPMI 1640) − log2 MIC(YST)) demonstrated a tendency to slightly lower MICs with RPMI 1640 medium. This study reports the results of a new, time-saving, and easy-to-perform method for inoculum preparation for routine susceptibility testing that can be applied to all types of spore-/non-spore- and hyphae-forming fungi.
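
    The "within ± 2 log2 dilutions" agreement quoted above is a simple computation over paired MIC readings (illustrative Python; the MIC pairs below are invented):

    ```python
    import math

    # Hypothetical paired MIC readings (mg/L) at 24 h and 48 h.
    mic_24h = [0.25, 0.5, 1.0, 2.0, 4.0, 0.5]
    mic_48h = [0.5, 1.0, 2.0, 2.0, 32.0, 0.5]

    def within_2log2(m1, m2):
        """True if the two MICs differ by at most two two-fold dilutions."""
        return abs(math.log2(m1) - math.log2(m2)) <= 2

    agreement = sum(within_2log2(a, b)
                    for a, b in zip(mic_24h, mic_48h)) / len(mic_24h)
    ```

    Here 5 of the 6 pairs agree (the 4 vs. 32 mg/L pair is 3 dilutions apart), giving an agreement of about 83 %.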

  8. CarcinoPred-EL: Novel models for predicting the carcinogenicity of chemicals using molecular fingerprints and ensemble learning methods.

    Science.gov (United States)

    Zhang, Li; Ai, Haixin; Chen, Wen; Yin, Zimo; Hu, Huan; Zhu, Junfeng; Zhao, Jian; Zhao, Qi; Liu, Hongsheng

    2017-05-18

    Carcinogenicity refers to a highly toxic end point of certain chemicals, and has become an important issue in the drug development process. In this study, three novel ensemble classification models, namely Ensemble SVM, Ensemble RF, and Ensemble XGBoost, were developed to predict carcinogenicity of chemicals using seven types of molecular fingerprints and three machine learning methods based on a dataset containing 1003 diverse compounds with rat carcinogenicity. Among these three models, Ensemble XGBoost is found to be the best, giving an average accuracy of 70.1 ± 2.9%, sensitivity of 67.0 ± 5.0%, and specificity of 73.1 ± 4.4% in five-fold cross-validation and an accuracy of 70.0%, sensitivity of 65.2%, and specificity of 76.5% in external validation. In comparison with some recent methods, the ensemble models outperform some machine learning-based approaches and yield equal accuracy and higher specificity but lower sensitivity than rule-based expert systems. It is also found that the ensemble models could be further improved if more data were available. As an application, the ensemble models are employed to discover potential carcinogens in the DrugBank database. The results indicate that the proposed models are helpful in predicting the carcinogenicity of chemicals. A web server called CarcinoPred-EL has been built for these models ( http://ccsipb.lnu.edu.cn/toxicity/CarcinoPred-EL/ ).
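
    The ensemble idea — training several weak learners on resampled data and majority-voting their predictions — can be sketched without any ML library (illustrative Python on synthetic binary "fingerprints"; this is bagged decision stumps, not the CarcinoPred-EL pipeline):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic data: 200 compounds, 10 binary fingerprint-like bits;
    # the label here is (by construction) carried by bit 0.
    X = rng.integers(0, 2, size=(200, 10))
    y = X[:, 0]

    def train_stump(Xb, yb):
        """Pick the single bit whose value best matches the label."""
        errs = [(float(np.mean(Xb[:, f] != yb)), f) for f in range(Xb.shape[1])]
        return min(errs)[1]

    # Bagging: one stump per bootstrap resample, then a majority vote.
    stumps = []
    for _ in range(15):
        idx = rng.integers(0, len(X), size=len(X))
        stumps.append(train_stump(X[idx], y[idx]))
    votes = np.array([X[:, f] for f in stumps])
    pred = (votes.mean(axis=0) > 0.5).astype(int)
    accuracy = float(np.mean(pred == y))
    ```

    Real ensembles such as random forests or XGBoost add feature subsampling and gradient-based boosting on top of this resample-and-vote skeleton.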

  9. A method for the estimation of hydration state during hemodialysis using a calf bioimpedance technique

    International Nuclear Information System (INIS)

    Zhu, F; Kuhlmann, M K; Kotanko, P; Seibert, E; Levin, N W; Leonard, E F

    2008-01-01

    Although many methods have been utilized to measure degrees of body hydration, and in particular to estimate the normal hydration state (dry weight, DW) in hemodialysis (HD) patients, no accurate methods are currently available for clinical use. Biochemical measurements are not sufficiently precise and vena cava diameter estimation is impractical. Several bioimpedance methods have been suggested to provide information to estimate clinical hydration and nutritional status, such as phase angle measurement and the ratio of body fluid compartment volumes to body weight. In this study, we present a calf bioimpedance spectroscopy (cBIS) technique to monitor calf resistance and resistivity continuously during HD. Attainment of DW is defined by two criteria: (1) the primary criterion is flattening of the change in the resistance curve during dialysis, so that at DW little further change is observed, and (2) normalized resistivity is in the range observed in healthy subjects. Twenty maintenance HD patients (12 M/8 F) were studied on 220 occasions. After three baseline (BL) measurements, with patients at their DW prescribed on clinical grounds (DWClin), the target post-dialysis weight was gradually decreased in the course of several treatments until the two dry weight criteria outlined above were met (DWcBIS). Post-dialysis weight was reduced from 78.3 ± 28 to 77.1 ± 27 kg, with a corresponding change in normalized resistivity (10⁻² Ω m³ kg⁻¹); the residual change in resistance at DWcBIS was 0.3 ± 0.2%. The results indicate that cBIS utilizing a dynamic technique continuously during dialysis is an accurate and precise approach to specific end points for the estimation of body hydration status. Since no current techniques detect DW as precisely, it is suggested as a standard to be evaluated clinically
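
    The primary flattening criterion can be sketched as a threshold on the relative change between successive resistance readings (illustrative Python; the threshold and the readings are hypothetical, not the study's measurements):

    ```python
    def first_flat_index(resistance, rel_tol=0.001):
        """Index of the first reading whose relative change from the previous
        reading falls below rel_tol, i.e. where the curve has flattened."""
        for k in range(1, len(resistance)):
            if abs(resistance[k] - resistance[k - 1]) / resistance[k - 1] < rel_tol:
                return k
        return None  # curve never flattened during this treatment

    # Hypothetical calf resistance readings (ohms) over one HD treatment.
    calf_resistance = [50.0, 53.0, 55.5, 57.0, 57.8, 57.83, 57.84]
    flat_at = first_flat_index(calf_resistance)
    ```

    If the curve never flattens within a treatment, the target post-dialysis weight would be lowered again at the next session, per the protocol described above.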

  10. Split of personality of leader as reason of mass psychosis: nature and methods of influence on crowd

    Directory of Open Access Journals (Sweden)

    Elena V. Chuikova

    2016-02-01

    Full Text Available The article examines the techniques and methods by which a leader influences a mass of people, and the personality features of the leader that subordinate people to that influence. The mechanism is one of chain reaction, but the links in this chain are alien structures of the subconscious within the leader's personality. The Shadow archetype in the leader's consciousness activates analogous Shadow archetypes in the consciousness of people with destructive tendencies; this explains why the character and end point of such influence are destructive. Thus, only a disturbed leader, unable to fully control his own temper because the Shadow archetype possesses him, is able to influence a crowd by chain reaction and create a mass psychosis. A leader with a split personality thereby infects, through the Shadow archetype, a mass of weak, weak-willed and simultaneously destructive people prone to dissociation of personality with his psychosis. The social stratum that most easily submits to a leader with a divided consciousness consists of rural migrants to the city who are disoriented, weak-willed, uneducated, destructive, and lacking in critical thinking.

  11. A novel stent inflation protocol improves long-term outcomes compared with rapid inflation/deflation deployment method.

    Science.gov (United States)

    Vallurupalli, Srikanth; Kasula, Srikanth; Kumar Agarwal, Shiv; Pothineni, Naga Venkata K; Abualsuod, Amjad; Hakeem, Abdul; Ahmed, Zubair; Uretsky, Barry F

    2017-08-01

    High-pressure inflation for coronary stent deployment is universally performed. However, the duration of inflation is variable and does not take into account differences in lesion compliance. We developed a standardized "pressure optimization protocol" (POP) using inflation pressure stability rather than an arbitrary inflation time or angiographic balloon appearance for stent deployment. Whether this approach improves long-term outcomes is unknown. 792 patients who underwent PCI using either rapid inflation/deflation (n = 376) or POP (n = 416) between January 2009 and March 2014 were included. Exclusion criteria included PCI for acute myocardial infarction, in-stent restenosis, chronic total occlusion, left main, and saphenous vein graft lesions. The primary endpoint was target vessel failure [TVF = combined end point of target vessel revascularization (TVR), myocardial infarction, and cardiac death]. Outcomes were analyzed in the entire cohort and in a propensity analysis. Stent implantation using POP, with a median follow-up of 1317 days, was associated with lower TVF compared with rapid inflation/deflation (10.1 vs. 17.8%), a finding confirmed in the propensity analysis (10 vs. 18%, P < 0.0001). Stent deployment using POP led to reduced TVF compared to rapid inflation/deflation. These results recommend this method to improve long-term outcomes. © 2017 Wiley Periodicals, Inc.

  12. Recovery methods for detection and quantification of Campylobacter depend on meat matrices and bacteriological or PCR tools.

    Science.gov (United States)

    Fosse, J; Laroche, M; Rossero, A; Fédérighi, M; Seegers, H; Magras, C

    2006-09-01

    Campylobacter is one of the main causes of human foodborne bacterial disease associated with meat consumption in developed countries. Therefore, the most effective approach for recovery and detection of Campylobacter from meat should be determined. Two hundred ninety pork skin and chine samples were inoculated with Campylobacter jejuni NCTC 11168 and two strains of Campylobacter coli. Campylobacter cells were then recovered from suspensions and enumerated by direct plating. Campylobacter recovery was evaluated by comparing results for two methods of sample collection (swabbing and mechanical pummeling) and three recovery fluids (peptone water, 5% glucose serum, and demineralized water). End-point multiplex PCR was performed to evaluate the compatibility of the recovery fluids with direct PCR detection techniques. Mean recovery ratios differed significantly between pork skin and chine samples. Ratios were higher for mechanical pummeling (0.53 for pork skin and 0.49 for chine) than for swabbing (0.31 and 0.13, respectively). For pork skin, ratios obtained with peptone water (0.50) and with glucose serum (0.55) were higher than those obtained with demineralized water (0.16). Significant differences were not observed for chine samples. Direct multiplex PCR detection of Campylobacter was possible with pork skin samples. The tools for Campylobacter recovery must be appropriate for the meat matrix to be evaluated. In this study, less than 66% of inoculated Campylobacter was recovered from meat. This underestimation must be taken into account for quantitative risk analysis of Campylobacter infection.

  13. Development and Application of Computational/In Vitro Toxicological Methods for Chemical Hazard Risk Reduction of New Materials for Advanced Weapon Systems

    Science.gov (United States)

    Frazier, John M.; Mattie, D. R.; Hussain, Saber; Pachter, Ruth; Boatz, Jerry; Hawkins, T. W.

    2000-01-01

    The development of quantitative structure-activity relationships (QSAR) is essential for reducing the chemical hazards of new weapon systems. The current collaboration between HEST (toxicology research and testing), MLPJ (computational chemistry) and PRS (computational chemistry, new propellant synthesis) is focusing R&D efforts on basic research goals that will rapidly transition to useful products for propellant development. Computational methods are being investigated that will assist in forecasting cellular toxicological end-points. Models developed from these chemical structure-toxicity relationships are useful for predicting the toxicological endpoints of new related compounds. Research is focusing on the evaluation of tools to be used for the discovery of such relationships and the development of models of the mechanisms of action. Combinations of computational chemistry techniques, in vitro toxicity methods, and statistical correlations will be employed to develop and explore potential predictive relationships; results for series of molecular systems that demonstrate the viability of this approach are reported. A number of hydrazine salts have been synthesized for evaluation. Computational chemistry methods are being used to elucidate the mechanism of action of these salts. Toxicity endpoints such as viability (LDH) and changes in enzyme activity (glutathione peroxidase and catalase) are being experimentally measured as indicators of cellular damage. Extrapolation from computational/in vitro studies to human toxicity is the ultimate goal. The product of this program will be a predictive tool to assist in the development of new, less toxic propellants.
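
    A QSAR-style structure-toxicity fit can be sketched as ordinary least squares on molecular descriptors (illustrative Python; the descriptors, coefficients, and endpoint values are synthetic, not the AFRL data or models):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 40 hypothetical compounds, 3 descriptors each (e.g. logP, charge, volume).
    descriptors = rng.standard_normal((40, 3))
    true_coef = np.array([1.5, -0.7, 0.3])          # invented "ground truth"
    endpoint = descriptors @ true_coef               # noiseless toy toxicity value

    # Fit the linear QSAR model endpoint ≈ descriptors @ coef by least squares.
    coef, *_ = np.linalg.lstsq(descriptors, endpoint, rcond=None)
    predicted = descriptors @ coef
    ```

    With real assay data the endpoint is noisy, so model quality is judged by cross-validated correlation rather than exact coefficient recovery.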

  14. Efficacy of the Roux-en-Y gastric bypass compared to medically managed controls in meeting the American Diabetes Association composite end point goals for management of type 2 diabetes mellitus.

    Science.gov (United States)

    Leslie, Daniel B; Dorman, Robert B; Serrot, Federico J; Swan, Therese W; Kellogg, Todd A; Torres-Villalobos, Gonzalo; Buchwald, Henry; Slusarek, Bridget M; Sampson, Barbara K; Bantle, John P; Ikramuddin, Sayeed

    2012-03-01

    The treatment goals recommended by the American Diabetes Association (ADA) for patients with type 2 diabetes mellitus include targets for hemoglobin A1c (HbA1C), LDL cholesterol, and systolic blood pressure (SBP). We compared patients with type 2 diabetes undergoing RYGB to a database of patients with medically managed type 2 diabetes and at least 2 years of follow-up data. Ultimately, 152 RYGB patients were compared to 115 routine medical management (RMM) patients for whom data on the composite endpoint were available over 2 years. The results show a significant decrease in body mass index (kilograms per square meter) in the RYGB group compared to the RMM group (P < 0.001). HbA1C, LDL cholesterol, and SBP all significantly improved in the RYGB group (all P ≤ 0.01) and did not demonstrate any significant change in the RMM group. Over 2 years, when evaluating all three endpoints, the RYGB group (10.5% to 38.2%, P < 0.001) demonstrated increased achievement of the ADA goals compared to the RMM group (13.9% to 17.4%, P = 0.47). There was a significant decrease in medication use in the RYGB cohort; however, discontinuation of medications was sometimes inappropriate. RYGB achieves the ADA composite endpoint more frequently than conventional therapy and with less medication.
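
    Evaluating a composite end point reduces to requiring all component targets simultaneously. A sketch using the ADA goals cited in this literature (HbA1c < 7%, LDL < 100 mg/dL, SBP < 130 mmHg; the patient values below are invented):

    ```python
    # Hypothetical follow-up values for four patients.
    patients = [
        {"hba1c": 6.5, "ldl": 90.0, "sbp": 125.0},
        {"hba1c": 6.8, "ldl": 110.0, "sbp": 120.0},
        {"hba1c": 7.4, "ldl": 95.0, "sbp": 128.0},
        {"hba1c": 6.9, "ldl": 99.0, "sbp": 129.0},
    ]

    def meets_composite(p):
        """All three ADA targets must be met at once."""
        return p["hba1c"] < 7.0 and p["ldl"] < 100.0 and p["sbp"] < 130.0

    rate = sum(meets_composite(p) for p in patients) / len(patients)
    ```

    Because every target must be met simultaneously, composite achievement rates are necessarily lower than any single-target rate, which is why the percentages above look low in both arms.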

  15. Comparison of atorvastatin 80 mg/day versus simvastatin 20 to 40 mg/day on frequency of cardiovascular events late (five years) after acute myocardial infarction (from the Incremental Decrease in End Points through Aggressive Lipid Lowering [IDEAL] trial)

    DEFF Research Database (Denmark)

    Pedersen, TR; Cater, Nilo B; Faergeman, Ole

    2010-01-01

    Previous studies have demonstrated that benefits of intensive statin therapy compared to standard statin therapy begin shortly after an acute event and are continued up to 2 years of follow-up. However, whether efficacy and safety of intensive statin therapy in patients with a recent cardiac even...

  16. Enhanced neoplastic transformation by mammography X rays relative to 200 kVp X rays: indication for a strong dependence on photon energy of the RBE(M) for various end points.

    Science.gov (United States)

    Frankenberg, D; Kelnhofer, K; Bär, K; Frankenberg-Schwager, M

    2002-01-01

    The fundamental assumption implicit in the use of the atomic bomb survivor data to derive risk estimates is that the gamma rays of Hiroshima and Nagasaki are considered to have biological efficiencies equal to those of other low-LET radiations up to 10 keV/μm, including mammography X rays. Microdosimetric and radiobiological data contradict this assumption. It is therefore of scientific and public interest to evaluate the efficiency of mammography X rays (25-30 kVp) to induce cancer. In this study, the efficiency of mammography X rays relative to 200 kVp X rays to induce neoplastic cell transformation was evaluated using cells of a human hybrid cell line (CGL1). For both radiations, a linear-quadratic dose-effect relationship was observed for neoplastic transformation of CGL1 cells; there was a strong linear component for the 29 kVp X rays. The RBE(M) of mammography X rays relative to 200 kVp X rays for neoplastic transformation of CGL1 cells was determined to be about 4 at low doses. Both the data available in the literature and the results of the present study strongly suggest an increase of RBE(M) for carcinogenesis in animals, neoplastic cell transformation, and clastogenic effects with decreasing photon energy or increasing LET, up to an RBE(M) of approximately 8 for mammography X rays relative to 60Co gamma rays.
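RBE(M) is the low-dose limit of relative biological effectiveness: the ratio of the linear coefficients of the two linear-quadratic fits, E(D) = aD + bD². The sketch below fits that model by least squares and takes the ratio; the dose-effect values are synthetic numbers constructed to reproduce an RBE(M) of 4, not the study's data.

```python
# Sketch: fit E = a*D + b*D^2 (no intercept) for two radiation qualities and
# report RBE_M = a_test / a_reference. All data below are synthetic.
def fit_lq(doses, effects):
    """Least-squares linear-quadratic fit via the 2x2 normal equations."""
    s22 = sum(d ** 2 for d in doses)
    s23 = sum(d ** 3 for d in doses)
    s24 = sum(d ** 4 for d in doses)
    r1 = sum(d * e for d, e in zip(doses, effects))
    r2 = sum(d ** 2 * e for d, e in zip(doses, effects))
    det = s22 * s24 - s23 ** 2
    a = (r1 * s24 - r2 * s23) / det   # linear (low-dose) coefficient
    b = (s22 * r2 - s23 * r1) / det   # quadratic coefficient
    return a, b

doses = [0.25, 0.5, 1.0, 2.0, 4.0]                      # Gy
eff_29kvp = [0.08 * d + 0.01 * d ** 2 for d in doses]   # synthetic, alpha = 0.08
eff_200kvp = [0.02 * d + 0.01 * d ** 2 for d in doses]  # synthetic, alpha = 0.02
a29, _ = fit_lq(doses, eff_29kvp)
a200, _ = fit_lq(doses, eff_200kvp)
print(f"RBE_M = {a29 / a200:.1f}")
```

Because RBE(M) depends only on the linear terms, it isolates the low-dose behavior that matters for risk estimation, where the quadratic component is negligible.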

  17. Effect of proton and gamma irradiation on human lung carcinoma cells: Gene expression, cell cycle, cell death, epithelial–mesenchymal transition and cancer-stem cell trait as biological end points

    Energy Technology Data Exchange (ETDEWEB)

    Narang, Himanshi, E-mail: narangh@barc.gov.in [Radiation Biology and Health Sciences Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Kumar, Amit [Radiation Biology and Health Sciences Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Bhat, Nagesh [Radiological Physics and Advisory Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Pandey, Badri N.; Ghosh, Anu [Radiation Biology and Health Sciences Division, Bhabha Atomic Research Centre, Mumbai 400085 (India)

    2015-10-15

    Highlights: • Biological effectiveness of proton and gamma irradiation is compared in A549 cells. • Proton irradiation is two times more cytotoxic than gamma irradiation. • It alters ten times as many early genes, as observed by a microarray study. • It does not enhance cell migration, invasion and adhesion, unlike gamma irradiation. • It was more effective in reducing the percentage of cancer stem cell-like cells. - Abstract: Proton beam therapy is a cutting-edge modality over conventional gamma radiotherapy because of its physical dose deposition advantage. However, not much is known about its biological effects vis-à-vis gamma irradiation. Here we investigated the effect of proton and gamma irradiation on cell cycle, death, epithelial-mesenchymal transition (EMT) and "stemness" in human non-small cell lung carcinoma cells (A549). The proton beam (3 MeV) was two times more cytotoxic than gamma radiation and induced a higher and longer cell cycle arrest. At equivalent doses, the number of genes responsive to proton irradiation was ten times higher than the number responsive to gamma irradiation. At equitoxic doses, the proton-irradiated cells had reduced cell adhesion and migration ability as compared to the gamma-irradiated cells. Proton irradiation was also more effective in reducing the population of cancer stem cell (CSC)-like cells, as revealed by aldehyde dehydrogenase activity and surface phenotyping with CD44+, a CSC marker. These results can have significant implications for proton therapy in the context of suppression of molecular and cellular processes that are fundamental to tumor expansion.

  18. A simple phenotypic method for screening of MCR-1-mediated colistin resistance.

    Science.gov (United States)

    Coppi, M; Cannatelli, A; Antonelli, A; Baccani, I; Di Pilato, V; Sennati, S; Giani, T; Rossolini, G M

    2018-02-01

    To evaluate a novel method, the colistin-MAC test, for phenotypic screening of acquired colistin resistance mediated by transferable mcr-1 resistance determinants, based on colistin MIC reduction in the presence of dipicolinic acid (DPA). The colistin-MAC test consists of a broth microdilution method in which the colistin MIC is tested in the absence or presence of DPA (900 μg/mL). Overall, 74 colistin-resistant strains of Enterobacteriaceae (65 Escherichia coli and nine other species), including 61 strains carrying mcr-1-like genes and 13 strains negative for mcr genes, were evaluated with the colistin-MAC test. The presence of mcr-1-like and mcr-2-like genes was assessed by real-time PCR and end-point PCR. For 20 strains, whole-genome sequencing data were also available. A ≥8-fold reduction of colistin MIC in the presence of DPA was observed with 59 mcr-1-positive strains, including 53 E. coli of clinical origin, three E. coli transconjugants carrying MCR-1-encoding plasmids, one Enterobacter cloacae complex and two Citrobacter spp. Colistin MICs were unchanged, increased or at most reduced twofold with the 13 mcr-negative colistin-resistant strains (nine E. coli and four Klebsiella pneumoniae), but also with two mcr-1-like-positive K. pneumoniae strains. The colistin-MAC test could be a simple phenotypic test for presumptive identification of mcr-1-positive strains among isolates of colistin-resistant E. coli, based on a ≥8-fold reduction of colistin MIC in the presence of DPA. Evaluation of the test with a larger number of strains, species and mcr-type resistance determinants would be of interest. Copyright © 2017 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
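The decision rule of the colistin-MAC test reduces to a fold-change comparison, which can be sketched directly. The MIC values below are illustrative, not from the study.

```python
# Sketch of the colistin-MAC decision rule: a >=8-fold drop in colistin MIC
# with DPA present flags a presumptive mcr-1 carrier.
def mac_test(mic_colistin, mic_colistin_dpa, fold_cutoff=8):
    """True when the MIC falls by at least `fold_cutoff` in the presence of DPA."""
    return mic_colistin / mic_colistin_dpa >= fold_cutoff

# Illustrative MIC values (mg/L):
print(mac_test(8, 0.5))  # 16-fold reduction -> presumptive mcr-1 positive
print(mac_test(8, 4))    # 2-fold reduction -> negative
```

In practice MICs come from a twofold dilution series, so the fold reduction is itself a power of two and the ≥8-fold cutoff corresponds to a drop of at least three dilution steps.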

  19. Determination of Titratable Acidity in Wine Using Potentiometric, Conductometric, and Photometric Methods

    Science.gov (United States)

    Volmer, Dietrich A.; Curbani, Luana; Parker, Timothy A.; Garcia, Jennifer; Schultz, Linda D.; Borges, Endler Marcel

    2017-01-01

    This experiment describes a simple protocol for teaching acid-base titrations using potentiometry, conductivity, and/or photometry to determine end points without an added indicator. The chosen example examines the titratable acidity of a red wine with NaOH. Wines contain anthocyanins, the colors of which change with pH. Importantly, at the…
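Indicator-free end-point detection ultimately reduces to finding the steepest part of the titration curve. The sketch below locates that point from discrete (volume, pH) data as the midpoint of the steepest segment, a simple stand-in for derivative-based methods; the sigmoidal data are synthetic, with the true equivalence point placed at 10.0 mL.

```python
import math

# Sketch: estimate a potentiometric end point from discrete titration data as
# the midpoint of the interval where dpH/dV is largest. Data are synthetic.
def endpoint_steepest(volumes, ph):
    """Midpoint of the steepest segment of the titration curve."""
    slopes = [(ph[i + 1] - ph[i]) / (volumes[i + 1] - volumes[i])
              for i in range(len(ph) - 1)]
    i = max(range(len(slopes)), key=slopes.__getitem__)
    return 0.5 * (volumes[i] + volumes[i + 1])

volumes = [8.75 + 0.5 * k for k in range(6)]                    # 8.75 ... 11.25 mL
ph = [7.0 + 4.0 * math.tanh((v - 10.0) / 0.3) for v in volumes]  # sigmoid at 10 mL
print(f"end point = {endpoint_steepest(volumes, ph):.2f} mL")
```

A production implementation would refine this with the second-derivative sign change or a Gran linearisation, both of which are less sensitive to the spacing of titrant additions.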

  20. A variational model for propagation time, volumetric and synchronicity optimization in the spinal cord axon network, and a method for testing it

    Science.gov (United States)

    Mota, Bruno

    2014-03-01

    Most information in the central nervous system in general and the (simpler) spinal cord in particular, is transmitted along bundles of parallel axons. Each axon's transmission time increases linearly with length and decreases as a power law of caliber. Therefore, evolution must find a distribution of axonal numbers, lengths and calibers that balances the various tradeoffs between gains in propagation time, signal throughput and synchronicity, against volumetric and metabolic costs. Here I apply a variational method to calculate the distribution of axonal caliber in the spinal cord as a function of axonal length, that minimizes the average axonal signal propagation time, subject to the constraints of white matter total volume and the variance of propagation times, and allowing for arbitrary fiber priorities and end-points. The Lagrange multipliers obtained thereof can be naturally interpreted as 'exchange rates', e.g., how much evolution is willing to pay, in white matter added volume, per unit time decrease of propagation time. This is, to my knowledge, the first model that quantifies explicitly these evolutionary tradeoffs, and can obtain them empirically by measuring the distribution of axonal calibers. We are in the process of doing so using the isotropic fractionator method. I thank FAPERJ for financial support.
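The constrained minimization described above can be written schematically as follows. The notation is assumed for illustration (caliber profile r(ℓ), mean propagation time T̄) and is not taken from the abstract:

```latex
\min_{r(\ell)} \; \bar{T}[r]
\quad \text{subject to} \quad
V[r] = V_0, \qquad \operatorname{Var}(T)[r] = \sigma_0^2,
\qquad
\mathcal{L}[r] = \bar{T}[r]
  + \lambda_V \bigl(V[r] - V_0\bigr)
  + \lambda_\sigma \bigl(\operatorname{Var}(T)[r] - \sigma_0^2\bigr).
```

Stationarity, $\delta\mathcal{L}/\delta r(\ell) = 0$, determines the caliber distribution, and $\lambda_V = -\partial\bar{T}/\partial V_0$ is the 'exchange rate' the abstract mentions: the propagation time the system would give up per unit of added white-matter volume.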

  1. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, Saira; Bissell, Mina J

    2004-12-17

    Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences, were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a k-nearest neighbor (k-NN) classifier was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques, which remove either spatial-dependent dye bias (referred to later as the spatial effect) or intensity-dependent dye bias (referred to later as the intensity effect), moderately reduce LOOCV classification errors; whereas double-bias-removal techniques, which remove both spatial and intensity effects, reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed the intensity effect globally and the spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using the LOOCV error of k-NN classifiers as the evaluation criterion, these three double-bias-removal strategies emerged as the most effective.
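The LOOCV end-point itself is simple to state in code: each sample is classified by its nearest other sample, and the misclassification fraction is the score. The sketch below uses toy 2-D points; in the study's setting, each point would be a normalized expression profile, so a better normalization yields a lower error.

```python
# Sketch: leave-one-out cross-validated 1-NN classification error, the
# quantitative end-point used above to score normalization methods.
def loocv_1nn_error(points, labels):
    """Fraction of samples misclassified by their nearest other sample."""
    errors = 0
    for i, p in enumerate(points):
        nearest = min(
            (j for j in range(len(points)) if j != i),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(p, points[j])),
        )
        errors += labels[nearest] != labels[i]
    return errors / len(points)

# Toy data: two well-separated classes -> zero LOOCV error.
points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 5.0), (4.9, 5.2)]
labels = ["tumor", "tumor", "tumor", "normal", "normal", "normal"]
print(loocv_1nn_error(points, labels))
```

Dye bias would effectively shift or warp the point cloud per array; the evaluation in the abstract asks how much each normalization method restores separability as measured by this error.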

  2. A novel method for determining the solubility of small molecules in aqueous media and polymer solvent systems using solution calorimetry.

    Science.gov (United States)

    Fadda, Hala M; Chen, Xin; Aburub, Aktham; Mishra, Dinesh; Pinal, Rodolfo

    2014-07-01

    To explore the application of solution calorimetry for measuring drug solubility in experimentally challenging situations while providing additional information on the physical properties of the solute material. A semi-adiabatic solution calorimeter was used to measure the heat of dissolution of prednisolone and chlorpropamide in aqueous solvents and of griseofulvin and ritonavir in viscous solutions containing polyvinylpyrrolidone and N-ethylpyrrolidone. The dissolution end point was clearly ascertained when heat generation stopped. The heat of solution was a linear function of dissolved mass for all drugs (heats of solution of 9.8 ± 0.8, 28.8 ± 0.6, 45.7 ± 1.6 and 159.8 ± 20.1 J/g were obtained for griseofulvin, ritonavir, prednisolone and chlorpropamide, respectively). Saturation was identifiable by a plateau in the heat signal, and the crossing of the two linear segments corresponds to the solubility limit. The solubilities of prednisolone and chlorpropamide in water by the calorimetric method were 0.23 and 0.158 mg/mL, respectively, in agreement with the shake-flask/HPLC-UV determined values of 0.212 ± 0.013 and 0.169 ± 0.015 mg/mL, respectively. For the higher solubility and high viscosity systems of griseofulvin and ritonavir in NEP/PVP mixtures, respectively, solubility values of 65 and 594 mg/g, respectively, were obtained. Solution calorimetry offers a reliable method for measuring drug solubility in organic and aqueous solvents. The approach is complementary to the traditional shake-flask method, providing information on the solid properties of the solute. For highly viscous solutions, the calorimetric approach is advantageous.
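The "crossing of two linear segments" step translates directly into code: fit one line to the sub-saturation points, one to the plateau, and intersect them. The heat-versus-concentration values below are synthetic, constructed so the segments cross at 0.23 mg/mL; they are not the paper's data.

```python
# Sketch: recover a solubility limit as the intersection of two fitted line
# segments of a heat-vs-dissolved-mass curve. All data are synthetic.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Below saturation the heat grows steeply; above it, the signal plateaus.
below = [(c, 200.0 * c) for c in (0.05, 0.10, 0.15, 0.20)]
above = [(c, 10.0 * c + 43.7) for c in (0.30, 0.40, 0.50)]
s1, b1 = fit_line(*zip(*below))
s2, b2 = fit_line(*zip(*above))
solubility = (b2 - b1) / (s1 - s2)   # x-coordinate where the two lines cross
print(f"solubility = {solubility:.3f} mg/mL")
```

A practical implementation must also decide which points belong to which segment, e.g. by scanning the breakpoint position and minimizing the combined residual of the two fits.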

  3. Intermolecular interactions in the condensed phase

    DEFF Research Database (Denmark)

    Christensen, Anders S.; Kromann, Jimmy Charnley; Jensen, Jan Halborg

    2017-01-01

    To facilitate further development of approximate quantum mechanical methods for condensed phase applications, we present a new benchmark dataset of intermolecular interaction energies in the solution phase for a set of 15 dimers, each containing one charged monomer. The reference interaction energy...... and solution phases. As most approximate QM methods are parametrized and evaluated using data measured or calculated in the gas phase, the dataset represents an important first step toward calibrating QM based methods for application in the condensed phase where polarization and exchange repulsion need...

  4. Ensemble Data Mining Methods

    Data.gov (United States)

    National Aeronautics and Space Administration — Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve...

  5. BDF-methods

    DEFF Research Database (Denmark)

    Hostrup, Astrid Kuijers

    1999-01-01

    An introduction to BDF-methods is given. The use of these methods on differential algebraic equations (DAEs) with different indices is presented, with respect to the order, stability and convergence of the BDF-methods.

  6. Uranium price forecasting methods

    International Nuclear Information System (INIS)

    Fuller, D.M.

    1994-01-01

    This article reviews a number of forecasting methods that have been applied to uranium prices and compares their relative strengths and weaknesses. The methods reviewed are: (1) judgemental methods, (2) technical analysis, (3) time-series methods, (4) fundamental analysis, and (5) econometric methods. Historically, none of these methods has performed very well, but a well-thought-out model is still useful as a basis from which to adjust to new circumstances and try again.

  7. Methods in aquatic bacteriology

    National Research Council Canada - National Science Library

    Austin, B

    1988-01-01

    .... Within these sections detailed chapters consider sampling methods, determination of biomass, isolation methods, identification, the bacterial microflora of fish, invertebrates, plants and the deep...

  8. Transport equation solving methods

    International Nuclear Information System (INIS)

    Granjean, P.M.

    1984-06-01

    This work is mainly devoted to C_N and F_N methods. C_N method: starting from a lemma stated by Placzek, an equivalence is established between two problems: the first one is defined in a finite medium bounded by a surface S, the second one is defined in the whole space. In the first problem the angular flux on the surface S is shown to be the solution of an integral equation. This equation is solved by Galerkin's method. The C_N method is applied here to one-velocity problems: in plane geometry, slab albedo and transmission with Rayleigh scattering, calculation of the extrapolation length; in cylindrical geometry, albedo and extrapolation length calculation with linear scattering. F_N method: the basic integral transport equation of the C_N method is integrated on Case's elementary distributions; another integral transport equation is obtained: this equation is solved by a collocation method. The plane problems solved by the C_N method are also solved by the F_N method. The F_N method is extended to any polynomial scattering law. Some simple spherical problems are also studied. Chandrasekhar's method, the collision probability method and Case's method are presented for comparison with the C_N and F_N methods. This comparison shows the respective advantages of the two methods: a) fast convergence and possible extension to various geometries for the C_N method; b) easy calculations and easy extension to polynomial scattering for the F_N method [fr

  9. Novel liquid chromatography method based on linear weighted regression for the fast determination of isoprostane isomers in plasma samples using sensitive tandem mass spectrometry detection.

    Science.gov (United States)

    Aszyk, Justyna; Kot, Jacek; Tkachenko, Yurii; Woźniak, Michał; Bogucka-Kocka, Anna; Kot-Wasik, Agata

    2017-04-15

    A simple, fast, sensitive and accurate methodology based on LLE followed by liquid chromatography-tandem mass spectrometry for the simultaneous determination of four regioisomers (8-iso-prostaglandin F2α, 8-iso-15(R)-prostaglandin F2α, 11β-prostaglandin F2α, 15(R)-prostaglandin F2α) in routine analysis of human plasma samples was developed. Isoprostanes are stable products of arachidonic acid peroxidation and are regarded as the most reliable markers of oxidative stress in vivo. Validation of the method was performed by evaluation of key analytical parameters such as matrix effect, analytical curve, trueness, precision, limits of detection and limits of quantification. As homoscedasticity was not met for the analytical data, weighted linear regression was applied in order to improve the accuracy at the lower end of the calibration curve. The detection limits (LODs) ranged from 1.0 to 2.1 pg/mL. For plasma samples spiked with the isoprostanes at a level of 50 pg/mL, intra- and inter-day repeatability ranged from 2.1 to 3.5% and 0.1 to 5.1%, respectively. The applicability of the proposed approach has been verified by monitoring isoprostane isomer levels in plasma samples collected from young patients (n = 8) subjected to hyperbaric hyperoxia (100% oxygen at 280 kPa(a) for 30 min) in a multiplace hyperbaric chamber. Copyright © 2017 Elsevier B.V. All rights reserved.
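Weighted linear regression down-weights the high-concentration standards so that relative error at the low end of the curve shrinks. The sketch below uses 1/x² weights, a common choice in bioanalytical calibration; the abstract does not state the paper's exact weighting scheme, and the standards and responses are synthetic.

```python
# Sketch: weighted least-squares calibration line with assumed 1/x^2 weights.
def wls_line(xs, ys, ws):
    """Weighted least-squares slope and intercept."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    slope = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) / \
        sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return slope, my - slope * mx

conc = [5.0, 10.0, 50.0, 100.0, 500.0]        # pg/mL calibration standards
area = [0.9 * c + 1.0 for c in conc]          # synthetic detector response
weights = [1.0 / c ** 2 for c in conc]        # heavier weight at low conc
slope, intercept = wls_line(conc, area, weights)
print(f"area = {slope:.2f} * conc + {intercept:.2f}")
```

With heteroscedastic real data, the weighted fit and the ordinary fit differ mainly in the intercept region, which is exactly where LOD/LOQ accuracy is decided.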

  10. Islet transplantation as safe and efficacious method to restore glycemic control and to avoid severe hypoglycemia after donor organ failure in pancreas transplantation.

    Science.gov (United States)

    Gerber, Philipp A; Hochuli, Michel; Benediktsdottir, Bara D; Zuellig, Richard A; Tschopp, Oliver; Glenck, Michael; de Rougemont, Olivier; Oberkofler, Christian; Spinas, Giatgen A; Lehmann, Roger

    2018-01-01

    The aim of this study was to assess the safety and efficacy of islet transplantation after initial pancreas transplantation with subsequent organ failure. Patients undergoing islet transplantation at our institution after pancreas organ failure were compared to a control group of patients with pancreas graft failure, but without islet transplantation, and to a group receiving pancreas retransplantation. Ten patients underwent islet transplantation after initial pancreas transplantation failed and were followed for a median of 51 months. The primary end point (HbA1c target) was reached after islet transplantation and in all three patients in the pancreas retransplantation group, but by none of the patients in the group without retransplantation (n = 7). Insulin requirement was reduced by 50% after islet transplantation. Kidney function (eGFR) declined at a rate of -1.0 ± 1.2 mL/min/1.73 m² per year during follow-up after islet transplantation, which tended to be slower than in the group without retransplantation (P = .07). Islet transplantation after deceased donor pancreas transplant failure is a method that can safely improve glycemic control and reduce the incidence of severe hypoglycemia, and thus establish glycemic control similar to that after initial pancreas transplantation, despite the need for additional exogenous insulin. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Assessment of the potential irritation and photoirritation of novel amino acid-based surfactants by in vitro methods as alternative to the animal tests

    International Nuclear Information System (INIS)

    Benavides, Tomas; Martinez, Veronica; Mitjans, Montserrat; Infante, Maria Rosa; Moran, Carmen; Clapes, Pere; Clothier, Richard; Vinardell, Maria Pilar

    2004-01-01

    The damaging effects of ultraviolet-A radiation on skin and eyes are increased by phototoxic compounds that may be present in pharmaceutical or cosmetic formulations. Great efforts have been made in recent years to find surfactants to replace those in commercial use with phototoxic potential. A series of different in vitro models for phototoxicity, including the validated neutral red uptake (NRU) 3T3 phototoxicity assay, are useful screening tools. The phototoxic effects of a novel family of glycerol amino acid-based surfactant compounds were examined via these assays. Human red blood cells and two immortalised cell lines, the murine fibroblast cell line 3T3 and the human keratinocyte cell line HaCaT, were the in vitro models employed to predict potential photoirritation. The phototoxic end-points assessed were hemolysis (human red blood cell test) and, in the cell culture methods, resazurin transformation to resorufin and NRU. The results suggest that no phototoxic effects could be identified for any of the new amino acid-derived surfactants.

  12. Quantitative Characterization of Major Hepatic UDP-Glucuronosyltransferase Enzymes in Human Liver Microsomes: Comparison of Two Proteomic Methods and Correlation with Catalytic Activity.

    Science.gov (United States)

    Achour, Brahim; Dantonio, Alyssa; Niosi, Mark; Novak, Jonathan J; Fallon, John K; Barber, Jill; Smith, Philip C; Rostami-Hodjegan, Amin; Goosen, Theunis C

    2017-10-01

    Quantitative characterization of UDP-glucuronosyltransferase (UGT) enzymes is valuable in glucuronidation reaction phenotyping, predicting metabolic clearance and drug-drug interactions using extrapolation exercises based on pharmacokinetic modeling. Different quantitative proteomic workflows have been employed to quantify UGT enzymes in various systems, with reports indicating large variability in expression, which cannot be explained by interindividual variability alone. To evaluate the effect of methodological differences on end-point UGT abundance quantification, eight UGT enzymes were quantified in 24 matched liver microsomal samples by two laboratories using stable isotope-labeled (SIL) peptides or quantitative concatemer (QconCAT) standard, and measurements were assessed against catalytic activity in seven enzymes ( n = 59). There was little agreement between individual abundance levels reported by the two methods; only UGT1A1 showed strong correlation [Spearman rank order correlation (Rs) = 0.73, P quantitative proteomic data should be validated against catalytic activity whenever possible. In addition, metabolic reaction phenotyping exercises should consider spurious abundance-activity correlations to avoid misleading conclusions. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.

  13. Determination of iodide with 1,3-dibromo-5,5-dimethylhydantoin (DBH) in comparison with the ICl-method. Analytical methods of pharmacopeias with DBH in respect to environmental and economical concern. Part 3.

    Science.gov (United States)

    Hilp, M; Senjuk, S

    2001-06-01

    USP 1995 (The United States Pharmacopeia, 23rd ed. (1995), potassium iodide p. 1265, sodium iodide p. 1424), PH. EUR. 1997 (European Pharmacopoeia, third ed., Council of Europe, Strasbourg (1997), potassium iodide p. 1367, sodium iodide p. 1493) and JAP 1996 (The Japanese Pharmacopoeia, 13th ed. (1996), potassium iodide p. 578, sodium iodide p. 630) determine iodide with the ICl-method (J. Am. Chem. Soc. 25 (1903) 756-761; Z. Anorg. Chem. 36 (1903) 76-83; Fresenius Z. Anal. Chem. 106 (1936) 12-23; Arzneibuch-Kommentar, Wissenschaftliche Erläuterungen zum Europäischen Arzneibuch, Wissenschaftliche Verlagsgesellschaft mbH, Stuttgart, Govi-Verlag - Pharmazeutischer Verlag GmbH, Eschborn, 12th suppl. (1999), K10 p. 2), using chloroform, which is toxic and hazardous to the environment. Without the application of chlorinated hydrocarbons, USP 2000 (The United States Pharmacopeia, 24th ed. (2000), potassium iodide p. 1368, sodium iodide p. 1535) and Brit 1999 (British Pharmacopoeia, London (1999), Appendix VIII C, p. A162) titrate iodide with the redox indicator amaranth. A titration with potentiometric indication giving two end-points, at the steps of I2 and [ICl2]-, is described. Due to the high concentration of hydrochloric acid required for the ICl-method, the determination with DBH (1,3-dibromo-5,5-dimethylhydantoin; 1,3-dibromo-5,5-dimethyl-2,4-imidazolidinedione) can be recommended and is performed easily. Similarly, the iodide content of gallamine triethiodide may be analyzed with DBH by application of a visual two-phase titration in water and ethyl acetate, or with potentiometric indication in a mixture of 2-propanol and water. During the removal of the excess of DBH, 4-bromo-triethylgallamine (2,2',2"-[1-bromo-benzene-2,3,4-triyltris(oxy)]N,N,N-triethylethanium) is formed.

  14. An Approximate Method for Solving Optimal Control Problems for Discrete Systems Based on Local Approximation of an Attainability Set

    Directory of Open Access Journals (Sweden)

    V. A. Baturin

    2017-03-01

    An optimal control problem for discrete systems is considered. A method of successive improvements is suggested, together with a modernization based on expanding the main structures of the core algorithm in terms of a parameter. The idea of the method rests on a local approximation of the attainability set, which is described by the zeros of the Bellman function in a special optimal control problem. The essence of that special problem is as follows: from the end point of the trajectory, a path must be found that minimizes the norm of the deviation from the initial state. If the initial point belongs to the attainability set of the original controlled system, the value of the Bellman function is zero; otherwise it is greater than zero. For this special task the Bellman equation is considered, a support approximation is selected, and the Bellman function is approximated by quadratic terms. Along an admissible trajectory this approximation yields nothing, because the Bellman function and its expansion coefficients are zero. A special trick is therefore used: an additional variable is introduced that characterizes the degree of deviation of the system from the initial state, producing an expanded chain. A nonzero initial condition is selected for the new variable, so the resulting trajectory lies outside the attainability set and the corresponding Bellman function is greater than zero, which permits a non-trivial approximation. As a result of these procedures, an algorithm of successive improvements is designed. Relaxation conditions for the algorithm and the corresponding necessary conditions of optimality are also obtained.

  15. From Protocols to Publications: A Study in Selective Reporting of Outcomes in Randomized Trials in Oncology

    Science.gov (United States)

    Raghav, Kanwal Pratap Singh; Mahajan, Sminil; Yao, James C.; Hobbs, Brian P.; Berry, Donald A.; Pentz, Rebecca D.; Tam, Alda; Hong, Waun K.; Ellis, Lee M.; Abbruzzese, James; Overman, Michael J.

    2015-01-01

    Purpose The decision by journals to append protocols to published reports of randomized trials was a landmark event in clinical trial reporting. However, limited information is available on how this initiative affected transparency and selective reporting of clinical trial data. Methods We analyzed 74 oncology-based randomized trials published in Journal of Clinical Oncology, the New England Journal of Medicine, and The Lancet in 2012. To ascertain integrity of reporting, we compared published reports with their respective appended protocols with regard to primary end points, nonprimary end points, unplanned end points, and unplanned analyses. Results A total of 86 primary end points were reported in 74 randomized trials; nine trials had greater than one primary end point. Nine trials (12.2%) had some discrepancy between their planned and published primary end points. A total of 579 nonprimary end points (median, seven per trial) were planned, of which 373 (64.4%; median, five per trial) were reported. A significant positive correlation was found between the number of planned and nonreported nonprimary end points (Spearman r = 0.66; P < .001). Twenty-eight studies (37.8%) reported a total of 65 unplanned end points, 52 (80.0%) of which were not identified as unplanned. Thirty-one (41.9%) and 19 (25.7%) of 74 trials reported a total of 52 unplanned analyses involving primary end points and 33 unplanned analyses involving nonprimary end points, respectively. Studies reported positive unplanned end points and unplanned analyses more frequently than negative outcomes in abstracts (unplanned end points odds ratio, 6.8; P = .002; unplanned analyses odds ratio, 8.4; P = .007). Conclusion Despite public and reviewer access to protocols, selective outcome reporting persists and is a major concern in the reporting of randomized clinical trials. To foster credible evidence-based medicine, additional initiatives are needed to minimize selective reporting. PMID:26304898

  16. Advanced differential quadrature methods

    CERN Document Server

    Zong, Zhi

    2009-01-01

    Modern Tools to Perform Numerical DifferentiationThe original direct differential quadrature (DQ) method has been known to fail for problems with strong nonlinearity and material discontinuity as well as for problems involving singularity, irregularity, and multiple scales. But now researchers in applied mathematics, computational mechanics, and engineering have developed a range of innovative DQ-based methods to overcome these shortcomings. Advanced Differential Quadrature Methods explores new DQ methods and uses these methods to solve problems beyond the capabilities of the direct DQ method.After a basic introduction to the direct DQ method, the book presents a number of DQ methods, including complex DQ, triangular DQ, multi-scale DQ, variable order DQ, multi-domain DQ, and localized DQ. It also provides a mathematical compendium that summarizes Gauss elimination, the Runge-Kutta method, complex analysis, and more. The final chapter contains three codes written in the FORTRAN language, enabling readers to q...
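The direct DQ method introduced above approximates a derivative at every grid point as a weighted sum of all function values on the grid. A minimal sketch of the standard Lagrange-polynomial weighting coefficients (an illustration of the technique, not code from the book) is:

```python
# Sketch: direct differential quadrature weights for the first derivative on
# arbitrary distinct grid points. Exact for polynomials up to degree n-1.
def dq_weights(x):
    """First-derivative DQ weighting matrix a[i][j] from the Lagrange formula."""
    n = len(x)
    prod = [1.0] * n
    for i in range(n):
        for k in range(n):
            if k != i:
                prod[i] *= x[i] - x[k]      # M'(x_i) = prod_{k != i}(x_i - x_k)
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i][j] = prod[i] / ((x[i] - x[j]) * prod[j])
        a[i][i] = -sum(a[i][j] for j in range(n) if j != i)
    return a

x = [0.0, 0.5, 1.0, 1.5]
f = [xi ** 2 for xi in x]                    # f(x) = x^2, so f'(x) = 2x
a = dq_weights(x)
df = [sum(a[i][j] * f[j] for j in range(len(x))) for i in range(len(x))]
print(df)
```

On this 4-point grid the quadratic is differentiated exactly; the advanced DQ variants the book covers (complex, triangular, multi-domain, localized) modify how these weights are constructed and where they are applied.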

  17. Inflow Turbulence Generation Methods

    Science.gov (United States)

    Wu, Xiaohua

    2017-01-01

    Research activities on inflow turbulence generation methods have been vigorous over the past quarter century, accompanying advances in eddy-resolving computations of spatially developing turbulent flows with direct numerical simulation, large-eddy simulation (LES), and hybrid Reynolds-averaged Navier-Stokes-LES. The weak recycling method, rooted in scaling arguments on the canonical incompressible boundary layer, has been applied to supersonic boundary layer, rough surface boundary layer, and microscale urban canopy LES coupled with mesoscale numerical weather forecasting. Synthetic methods, originating from analytical approximation to homogeneous isotropic turbulence, have branched out into several robust methods, including the synthetic random Fourier method, synthetic digital filtering method, synthetic coherent eddy method, and synthetic volume forcing method. This article reviews major progress in inflow turbulence generation methods with an emphasis on fundamental ideas, key milestones, representative applications, and critical issues. Directions for future research in the field are also highlighted.

  18. Methods of nonlinear analysis

    CERN Document Server

    Bellman, Richard Ernest

    1970-01-01

    In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank mat

  19. Consumer Behavior Research Methods

    DEFF Research Database (Denmark)

    Chrysochou, Polymeros

    2017-01-01

    This chapter starts by distinguishing consumer behavior research methods based on the type of data used, being either secondary or primary. Most consumer behavior research studies phenomena that require researchers to enter the field and collect data on their own, and therefore the chapter...... emphasizes the discussion of primary research methods. Based on the nature of the data primary research methods are further distinguished into qualitative and quantitative. The chapter describes the most important and popular qualitative and quantitative methods. It concludes with an overall evaluation...... of the methods and how to improve quality in consumer behavior research methods....

  20. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...

  1. The three circle method

    International Nuclear Information System (INIS)

    Garncarek, Z.

    1989-01-01

    The three circle method in its general form is presented. The method is especially useful for investigation of shapes of agglomerations of objects. An example of its applications to investigation of galaxies distribution is given. 17 refs. (author)

  2. Design Methods in Practice

    DEFF Research Database (Denmark)

    Jensen, Torben Elgaard; Andreasen, Mogens Myrup

    2010-01-01

    The paper challenges the dominant and widespread view that a good design method will guarantee a systematic approach as well as certain results. First, it explores the substantial differences between on the one hand the conception of methods implied in Pahl & Beitz’s widely recognized text book...... on engineering design, and on the other hand the understanding of method use, which has emerged from micro-sociological studies of practice (ethnomethodology). Second, it reviews a number of case studies conducted by engineering students, who were instructed to investigate the actual use of design methods...... in Danish companies. The paper concludes that design methods in practice deviate substantially from Pahl & Beitz’s description of method use: The object and problems, which are the starting points for method use, are more contested and less given than generally assumed; The steps of methods are often...

  3. Advances in Numerical Methods

    CERN Document Server

    Mastorakis, Nikos E

    2009-01-01

    Features contributions that are focused on significant aspects of current numerical methods and computational mathematics. This book carries chapters on advanced methods and various variations of known techniques that can solve difficult scientific problems efficiently.

  4. Basic Finite Element Method

    International Nuclear Information System (INIS)

    Lee, Byeong Hae

    1992-02-01

    This book gives descriptions of the basic finite element method, covering basic concepts and data, the black box, writing of data, definition of a VECTOR, definition of a matrix, matrix multiplication, matrix addition, and the unit matrix; the concept of the stiffness matrix in terms of spring force and displacement; the governing equation of an elastic body; the finite element method; and Fortran programming, such as the composition of a computer, order of programming, data cards and Fortran cards, the finite element program, and application to a nonelastic problem.
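The spring analogy behind the stiffness matrix can be sketched in a few lines: assemble a global stiffness matrix from element contributions, apply a fixed-end condition, and solve for nodal displacements. This is a minimal illustration, not the book's Fortran program; the spring constants and load are arbitrary.

```python
def assemble_spring_chain(ks):
    """Global stiffness matrix for springs in series (nodes 0..n).
    Each spring k contributes [[k,-k],[-k,k]] at its two node dofs."""
    n = len(ks) + 1
    K = [[0.0] * n for _ in range(n)]
    for e, k in enumerate(ks):
        K[e][e] += k;     K[e][e + 1] -= k
        K[e + 1][e] -= k; K[e + 1][e + 1] += k
    return K

def solve(A, b):
    """Naive Gaussian elimination (no pivoting), fine for small SPD systems."""
    n = len(b)
    A = [row[:] for row in A]; b = b[:]
    for i in range(n):
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for c in range(i, n):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Two springs (k=2, k=3), node 0 fixed, unit load at node 2.
K = assemble_spring_chain([2.0, 3.0])
Kr = [row[1:] for row in K[1:]]   # delete row/column 0 (fixed end)
u = solve(Kr, [0.0, 1.0])         # displacements at nodes 1 and 2
```

For springs in series the hand calculation gives u1 = F/k1 and u2 = F/k1 + F/k2, which the assembled system reproduces.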

  5. Conformable variational iteration method

    Directory of Open Access Journals (Sweden)

    Omer Acan

    2017-02-01

    Full Text Available In this study, we introduce the conformable variational iteration method based on the newly defined fractional derivative called the conformable fractional derivative. The new method is applied to two fractional-order ordinary differential equations. To examine the solutions produced by this method, linear homogeneous and non-linear non-homogeneous fractional ordinary differential equations are selected. The obtained results are compared with the exact solutions, and their graphics are plotted to demonstrate the efficiency and accuracy of the method.
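For readers unfamiliar with the conformable fractional derivative, a minimal numeric sketch follows. It assumes the common limit definition T_a f(t) = lim_{e->0} [f(t + e*t^(1-a)) - f(t)]/e, which for differentiable f reduces to t^(1-a) f'(t); the test function and evaluation point are arbitrary.

```python
import math

def conformable_derivative(f, t, alpha, eps=1e-6):
    """Conformable fractional derivative via its limit definition:
    T_alpha f(t) = (f(t + eps * t**(1-alpha)) - f(t)) / eps, eps -> 0."""
    return (f(t + eps * t ** (1.0 - alpha)) - f(t)) / eps

f = lambda t: t ** 3
t, alpha = 2.0, 0.5
numeric = conformable_derivative(f, t, alpha)
exact = t ** (1.0 - alpha) * 3.0 * t ** 2   # t^(1-a) * f'(t) for smooth f
```

The finite-difference value agrees with the closed form t^(1-a) f'(t), which is the property that makes the conformable derivative convenient in variational iteration schemes.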

  6. VALUATION METHODS- LITERATURE REVIEW

    OpenAIRE

    Dorisz Talas

    2015-01-01

    This paper is a theoretical overview of the often used valuation methods with the help of which the value of a firm or its equity is calculated. Many experts (including Aswath Damodaran, Guochang Zhang and CA Hozefa Natalwala) classify the methods. The basic models are based on discounted cash flows. The main method uses the free cash flow for valuation, but there are some newer methods that reveal and correct the weaknesses of the traditional models. The valuation of flexibility of managemen...

  7. Mixed methods research.

    Science.gov (United States)

    Halcomb, Elizabeth; Hickman, Louise

    2015-04-08

    Mixed methods research involves the use of qualitative and quantitative data in a single research project. It represents an alternative methodological approach, combining qualitative and quantitative research approaches, which enables nurse researchers to explore complex phenomena in detail. This article provides a practical overview of mixed methods research and its application in nursing, to guide the novice researcher considering a mixed methods research project.

  8. Possibilities of roentgenological method

    International Nuclear Information System (INIS)

    Sivash, Eh.S.; Sal'man, M.M.

    1980-01-01

    Literature and experimental data on the possibilities of roentgenologic investigations using an electron-optical amplifier, X-ray television, and roentgen cinematography are generalized. Different methods of studying the gastro-intestinal tract are compared. The advantage of the roentgenologic method over the endoscopic method after stomach resection is shown [ru

  9. The Generalized Sturmian Method

    DEFF Research Database (Denmark)

    Avery, James Emil

    2011-01-01

    these ideas clearly so that they become more accessible. By bringing together these non-standard methods, the book intends to inspire graduate students, postdoctoral researchers and academics to think of novel approaches. Is there a method out there that we have not thought of yet? Can we design a new method...... generations of researchers were left to work out how to achieve this ambitious goal for molecular systems of ever-increasing size. This book focuses on non-mainstream methods to solve the molecular electronic Schrödinger equation. Each method is based on a set of core ideas and this volume aims to explain...

  10. Mimetic discretization methods

    CERN Document Server

    Castillo, Jose E

    2013-01-01

    To help solve physical and engineering problems, mimetic or compatible algebraic discretization methods employ discrete constructs to mimic the continuous identities and theorems found in vector calculus. Mimetic Discretization Methods focuses on the recent mimetic discretization method co-developed by the first author. Based on the Castillo-Grone operators, this simple mimetic discretization method is invariably valid for spatial dimensions no greater than three. The book also presents a numerical method for obtaining corresponding discrete operators that mimic the continuum differential and
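The guiding idea above, that discrete operators should mimic continuum identities, can be illustrated with a toy example. This is a plain staggered-grid divergence/gradient pair, not the Castillo-Grone operators the book develops; it shows the kind of property mimetic methods enforce, namely an exact discrete analogue of integration by parts.

```python
import random

def div(v, h):
    """Divergence at cell centers from face values (staggered 1D grid)."""
    return [(v[i + 1] - v[i]) / h for i in range(len(v) - 1)]

def grad(u, h):
    """Gradient at interior faces from cell-center values."""
    return [(u[j] - u[j - 1]) / h for j in range(1, len(u))]

rng = random.Random(1)
N, h = 16, 0.25
u = [rng.random() for _ in range(N)]        # cell-centered scalar
v = [rng.random() for _ in range(N + 1)]    # face-centered flux

# Discrete analogue of  ∫ u div v = [u v] - ∫ (grad u) v :
lhs = sum(ui * di for ui, di in zip(u, div(v, h))) * h
rhs = (u[-1] * v[-1] - u[0] * v[0]
       - sum(vj * gj for vj, gj in zip(v[1:-1], grad(u, h))) * h)
```

For this pair the identity holds to machine precision for arbitrary grid functions (it is Abel summation in disguise), which is exactly the "mimicry" of the continuum divergence theorem that mimetic discretizations are built around.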

  11. DOE methods compendium

    International Nuclear Information System (INIS)

    Leasure, C.S.

    1992-01-01

    The Department of Energy (DOE) has established an analytical methods compendium development program to integrate its environmental analytical methods. This program is administered through DOE's Laboratory Management Division (EM-563). The primary objective of this program is to assemble a compendium of analytical chemistry methods of known performance for use by all DOE Environmental Restoration and Waste Management programs. This compendium will include methods for sampling, field screening, fixed analytical laboratory and mobile analytical laboratory analyses. It will also include specific guidance on the proper selection of sampling and analytical methods appropriate to specific analytical requirements

  12. Methods for assessing geodiversity

    Science.gov (United States)

    Zwoliński, Zbigniew; Najwer, Alicja; Giardino, Marco

    2017-04-01

    The accepted systematics of geodiversity assessment methods will be presented in three categories: qualitative, quantitative and qualitative-quantitative. Qualitative methods are usually descriptive methods that are suited to nominal and ordinal data. Quantitative methods use a different set of parameters and indicators to determine the characteristics of geodiversity in the area being researched. Qualitative-quantitative methods are a good combination of the collection of quantitative data (i.e. digital) and cause-effect data (i.e. relational and explanatory). It seems that at the current stage of the development of geodiversity research methods, qualitative-quantitative methods are the most advanced and best assess the geodiversity of the study area. Their particular advantage is the integration of data from different sources and with different substantive content. Among the distinguishing features of the quantitative and qualitative-quantitative methods for assessing geodiversity are their wide use within geographic information systems, both at the stage of data collection and data integration, as well as numerical processing and their presentation. The unresolved problem for these methods, however, is the possibility of their validation. It seems that currently the best method of validation is direct field confrontation. Looking to the next few years, the development of qualitative-quantitative methods connected with cognitive issues should be expected, oriented towards ontology and the Semantic Web.

  13. Using LMS Method in Smoothing Reference Centile Curves for Lipid Profile of Iranian Children and Adolescents: A CASPIAN Study

    Directory of Open Access Journals (Sweden)

    M Hoseini

    2012-05-01

    HDL-C level is lower in Iranian children and adolescents than their counterparts in Western countries. Future studies with larger sample size and with higher density at the end points and equal distribution of measurements in changing limits of covariates would hopefully reach more precise findings.

     

  14. Design of the Endobronchial Valve for Emphysema Palliation Trial (VENT): a non-surgical method of lung volume reduction

    Directory of Open Access Journals (Sweden)

    Noppen Marc

    2007-07-01

    Full Text Available Abstract Background Lung volume reduction surgery is effective at improving lung function, quality of life, and mortality in carefully selected individuals with advanced emphysema. Recently, less invasive bronchoscopic approaches have been designed to utilize these principles while avoiding the associated perioperative risks. The Endobronchial Valve for Emphysema PalliatioN Trial (VENT) posits that occlusion of a single pulmonary lobe through bronchoscopically placed Zephyr® endobronchial valves will effect significant improvements in lung function and exercise tolerance with an acceptable risk profile in advanced emphysema. Methods The trial design, posted on ClinicalTrials.gov on August 10, 2005, proposed an enrollment of 270 subjects. Inclusion criteria included: diagnosis of emphysema with forced expiratory volume in one second (FEV1 100%; residual volume > 150% predicted, and heterogeneous emphysema defined using a quantitative chest computed tomography algorithm. Following standardized pulmonary rehabilitation, patients were randomized 2:1 to receive unilateral lobar placement of endobronchial valves plus optimal medical management or optimal medical management alone. The co-primary endpoint was the mean percent change in FEV1 and six minute walk distance at 180 days. Secondary end-points included mean percent change in St. George's Respiratory Questionnaire score and the mean absolute changes in the maximal work load measured by cycle ergometry, dyspnea (mMRC) score, and total oxygen use per day. Per patient response rates in clinically significant improvement/maintenance of FEV1 and six minute walk distance and technical success rates of valve placement were recorded. A priori response predictors based on quantitative CT and lung physiology were defined. Conclusion If endobronchial valves improve FEV1 and health status with an acceptable safety profile in advanced emphysema, they would offer a novel intervention for this progressive and

  15. Methods of Software Verification

    Directory of Open Access Journals (Sweden)

    R. E. Gurin

    2015-01-01

    Full Text Available This article is devoted to the problem of software verification. Methods of software verification are designed to check software for compliance with stated requirements such as correctness, system security, adaptability to small changes in the environment, portability, compatibility, etc. These methods differ both in how they operate and in how their results are achieved. The article describes the static and dynamic methods of software verification, with attention paid to the method of symbolic execution. In its review of static analysis, the deductive method and model-checking methods are discussed and described. The relevant issue of the pros and cons of each particular method is emphasized. The article considers a classification of test techniques for each method. We present and analyze the characteristics and mechanisms of static dependency analysis, as well as its variants, which can reduce the number of false positives in situations where the current state of the program combines two or more states obtained either on different execution paths or when working with multiple object values. Dependences connect various types of software objects: single variables, elements of composite variables (structure fields, array elements), sizes of heap areas, lengths of strings, and the number of initialized array elements in code verified by static methods. The article pays attention to the identification of dependencies within the framework of abstract interpretation, and gives an overview and analysis of inference tools. Methods of dynamic analysis such as testing, monitoring and profiling are presented and analyzed, and some kinds of tools are considered which can be applied to software when using the methods of dynamic analysis.
Based on this work a conclusion is drawn, which describes the most relevant problems of analysis techniques, methods of their solutions and

  16. Radiometric dating methods

    International Nuclear Information System (INIS)

    Bourdon, B.

    2003-01-01

    The general principle of isotope dating methods is based on the presence of radioactive isotopes in the geologic or archaeological object to be dated. The decay with time of these isotopes is used to determine the 'zero' time corresponding to the event to be dated. This paper recalls the general principle of isotope dating methods (bases, analytical methods, validation of results and uncertainties) and presents the methods based on natural radioactivity (Rb-Sr, Sm-Nd, U-Pb, Re-Os, K-Ar (Ar-Ar), U-Th-Ra-²¹⁰Pb, U-Pa, ¹⁴C, ³⁶Cl, ¹⁰Be) and the methods based on artificial radioactivity with their applications. Finally, the methods based on irradiation damages (thermoluminescence, fission tracks, electron spin resonance) are briefly evoked. (J.S.)
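The "zero time" determination rests on the radioactive decay law. A minimal sketch of the standard age equations follows; the ¹⁴C half-life is the commonly cited ~5730 yr, and the ratios are illustrative numbers, not data from any particular study.

```python
import math

def age_from_fraction(remaining_fraction, half_life):
    """Age from the fraction of parent isotope remaining: N/N0 = exp(-lam*t)."""
    lam = math.log(2.0) / half_life          # decay constant
    return -math.log(remaining_fraction) / lam

def age_from_daughter(daughter_parent_ratio, half_life):
    """Age from accumulated radiogenic daughter: D/P = exp(lam*t) - 1."""
    lam = math.log(2.0) / half_life
    return math.log(1.0 + daughter_parent_ratio) / lam

# 14C with ~5730 yr half-life: 25% of the parent remaining means two
# half-lives have elapsed.
t1 = age_from_fraction(0.25, 5730.0)
# The same age follows from the daughter/parent ratio D/P = N0/N - 1 = 3
# (purely illustrative; in practice 14C dating measures the parent only).
t2 = age_from_daughter(3.0, 5730.0)
```

Both routes recover the same age, which is the consistency at the heart of the parent-daughter dating schemes (Rb-Sr, Sm-Nd, U-Pb, ...) listed above.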

  17. Performative Schizoid Method

    DEFF Research Database (Denmark)

    Svabo, Connie

    2016-01-01

    is presented and an example is provided of a first exploratory engagement with it. The method is used in a specific project Becoming Iris, making inquiry into arts-based knowledge creation during a three month visiting scholarship at a small, independent visual art academy. Using the performative schizoid......A performative schizoid method is developed as a method contribution to performance as research. The method is inspired by contemporary research in the human and social sciences urging experimentation and researcher engagement with creative and artistic practice. In the article, the method...... method in Becoming Iris results in four audio-visual and performance-based productions, centered on an emergent theme of the scholartist as a bird in borrowed feathers. Interestingly, the moral lesson of the fable about the vain jackdaw, who dresses in borrowed peacock feathers and becomes a castout...

  18. Angular correlation methods

    International Nuclear Information System (INIS)

    Ferguson, A.J.

    1974-01-01

    An outline of the theory of angular correlations is presented, and the difference between the modern density matrix method and the traditional wave function method is stressed. Comments are offered on particular angular correlation theoretical techniques. A brief discussion is given of recent studies of gamma ray angular correlations of reaction products recoiling with high velocity into vacuum. Two methods for optimization to obtain the most accurate expansion coefficients of the correlation are discussed. (1 figure, 53 references) (U.S.)

  19. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  20. Rossi Alpha Method

    International Nuclear Information System (INIS)

    Hansen, G.E.

    1985-01-01

    The Rossi Alpha Method has proved to be valuable for the determination of prompt neutron lifetimes in fissile assemblies having known reproduction numbers at or near delayed critical. This workshop report emphasizes the pioneering applications of the method by Dr. John D. Orndoff to fast-neutron critical assemblies at Los Alamos. The value of the method appears to disappear for subcritical systems where the Rossi-α is no longer an α-eigenvalue

  1. Qualitative methods textbooks

    OpenAIRE

    Barndt, William

    2003-01-01

    Over the past few years, the number of political science departments offering qualitative methods courses has grown substantially. The number of qualitative methods textbooks has kept pace, providing instructors with an overwhelming array of choices. But how does one decide which text to choose from this exhortatory smorgasbord? The scholarship desperately needs to be evaluated. Yet the task is not entirely straightforward: qualitative methods textbooks reflect the diversity inherent in qualitative metho...

  2. The Box Method

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm

    The velocity level in a room ventilated by jet ventilation is strongly influenced by the supply conditions. The momentum flow in the supply jets controls the air movement in the room and, therefore, it is very important that the inlet conditions and the numerical method can generate a satisfactory description of this momentum flow. The Box Method is a practical method for the description of an Air Terminal Device which will save grid points and ensure the right level of the momentum flow....

  3. Applied Bayesian hierarchical methods

    National Research Council Canada - National Science Library

    Congdon, P

    2010-01-01

    Contents include: 1.2 Posterior Inference from Bayes Formula; 1.3 Markov Chain Monte Carlo Sampling in Relation to Monte Carlo Methods: Obtaining Posterior...

  4. [Methods of quantitative proteomics].

    Science.gov (United States)

    Kopylov, A T; Zgoda, V G

    2007-01-01

    In modern science proteomic analysis is inseparable from other fields of systemic biology. Possessing huge resources, quantitative proteomics operates colossal information on molecular mechanisms of life. Advances in proteomics help researchers to solve complex problems of cell signaling, posttranslational modification, structure and functional homology of proteins, molecular diagnostics, etc. More than 40 different methods have been developed in proteomics for quantitative analysis of proteins. Although each method is unique and has certain advantages and disadvantages, all these methods use various isotope labels (tags). In this review we will consider the most popular and effective methods employing both chemical modifications of proteins and also metabolic and enzymatic methods of isotope labeling.

  5. Methods in ALFA Alignment

    CERN Document Server

    Melendez, Jordan

    2014-01-01

    This note presents two model-independent methods for use in the alignment of the ALFA forward detectors. Using a Monte Carlo simulated LHC run at β = 90 m and √s = 7 TeV, the Kinematic Peak alignment method is utilized to reconstruct the Mandelstam momentum transfer variable t for single-diffractive protons. The Hot Spot method uses fluctuations in the hitmap density to pinpoint particular regions in the detector that could signal a misalignment. Another method uses an error function fit to find the detector edge. With this information, the vertical alignment can be determined.

  6. Method of chronokinemetrical invariants

    International Nuclear Information System (INIS)

    Vladimirov, Yu.S.; Shelkovenko, A.Eh.

    1976-01-01

    A particular case of a general dyadic method - the method of chronokinemetric invariants is formulated. The time-like dyad vector is calibrated in a chronometric way, and the space-like vector - in a kinemetric way. Expressions are written for the main physical-geometrical values of the dyadic method and for differential operators. The method developed may be useful for predetermining the reference system of a single observer, and also for studying problems connected with emission and absorption of gravitational and electromagnetic waves [ru

  7. Luminescence properties of Tb{sub 3}Al{sub 5}O{sub 12} garnet and related compounds synthesized by the metal organic decomposition method

    Energy Technology Data Exchange (ETDEWEB)

    Onishi, Yuya; Nakamura, Toshihiro, E-mail: tnakamura@gunma-u.ac.jp; Adachi, Sadao, E-mail: adachi@gunma-u.ac.jp

    2017-03-15

    The Tb–Al–O ternary compounds were prepared by the metal organic decomposition (MOD) method from mixed solutions of Al{sub 2}O{sub 3} and Tb{sub 4}O{sub 7} and subsequent calcination at T{sub c}=1200 °C in air. The structural and optical properties of the synthesized compounds were examined using X-ray diffraction analysis, photoluminescence (PL) analysis, PL excitation (PLE) spectroscopy, PL decay kinetics, and diffuse reflectance spectroscopy. The stoichiometric compounds of terbium aluminium garnet Tb{sub 3}Al{sub 5}O{sub 12} (TAG) and perovskite-type TbAlO{sub 3} were synthesized at molar ratios of x=0.375 and 0.5 [x ≡Tb{sub 4}O{sub 7}/(Tb{sub 4}O{sub 7}+2Al{sub 2}O{sub 3})], together with the end-point binary materials of rhombohedral Al{sub 2}O{sub 3} (α-Al{sub 2}O{sub 3}; x=0) and cubic Tb{sub 4}O{sub 7} (x=1.0). One can also expect synthesis of stoichiometric compounds Tb{sub 4}Al{sub 2}O{sub 9} and Tb{sub 3}AlO{sub 12} at x=0.667 and 0.75, respectively; however, these compounds were found to be very difficult to synthesize by the MOD method or, probably, by other methods. Temperature dependence of the PL spectra for TAG was measured from T=20–440 K in 10-K steps and analyzed using a newly developed theoretical model. Raman scattering measurements were also performed on the Tb–Al–O material system with compositions widely varying from x=0 (α-Al{sub 2}O{sub 3}) to 1.0 (Tb{sub 4}O{sub 7}).

  8. Nondestructive testing method

    International Nuclear Information System (INIS)

    Porter, J.F.

    1996-01-01

    Nondestructive testing (NDT) is the use of physical and chemical methods for evaluating material integrity without impairing its intended usefulness or continuing service. Nondestructive tests are used by manufacturers for the following reasons: 1) to ensure product reliability; 2) to prevent accidents and save human lives; 3) to aid in better product design; 4) to control manufacturing processes; and 5) to maintain a uniform quality level. Nondestructive testing is used extensively on power plants, oil and chemical refineries, offshore oil rigs and pipelines (NDT can even be conducted underwater), and on welds on tanks, boilers, pressure vessels and heat exchangers. NDT is now being used for testing concrete and composite materials. Because of the criticality of its application, NDT should be performed and the results evaluated by qualified personnel. There are five basic nondestructive examination methods: 1) liquid penetrant testing - a method used for detecting surface flaws in materials; it can be used for metallic and nonmetallic materials, and is portable and relatively inexpensive; 2) magnetic particle testing - a method used to detect surface and subsurface flaws in ferromagnetic materials; 3) radiographic testing - a method used to detect internal flaws and significant variations in material composition and thickness; 4) ultrasonic testing - a method used to detect internal and external flaws in materials; it uses ultrasonics to measure the thickness of a material or to examine the internal structure for discontinuities; 5) eddy current testing - a method used to detect surface and subsurface flaws in conductive materials. No single nondestructive examination method can find all discontinuities in all of the materials capable of being tested. The most important consideration is for the specifier of the test to be familiar with the test method and its applicability to the type and geometry of the material and the flaws to be detected

  9. Sensitivity Analysis of Dune Height Measurements Along Cross-shore Profiles Using a Novel Method for Dune Ridge Extraction

    Science.gov (United States)

    Hardin, E.; Mitasova, H.; Overton, M.

    2010-12-01

    meets landward-facing slope. In this study, a novel approach for dune ridge extraction is proposed. First, two alongshore end-points of the studied dune ridge are identified using a standard, profile-based method. Then, the dune ridge is traced as the least cost path connecting the two end-points on a cost surface that represents the cumulative penalty for tracing a low elevation path. The cost surface is derived from elevation (i.e., elevation is equal to the cologarithm of the cost). The extracted dune ridge is then sampled at the DEM resolution of 0.5m and analysis of dune ridge height is performed. Statistics on variation in dune height are computed to help understand the sensitivity of dune height measurements to profile spacing and placement. Preliminary results suggest that dune height becomes nearly uncorrelated within 50m and ranges on average nearly a half meter within a five meter window suggesting that dune height measurements are sensitive to profile placement.
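The least-cost-path step described above can be sketched with a plain Dijkstra search on a grid. The exact cost transform used in the study is only paraphrased in the abstract; this sketch takes cost = exp(−elevation), consistent with "elevation equal to the cologarithm of the cost" up to the choice of logarithm base, and the tiny elevation grid is an invented toy.

```python
import heapq, math

def least_cost_path(elev, start, goal):
    """Dijkstra least-cost path on a grid; the per-cell cost penalizes
    low elevation (cost = exp(-elevation), i.e. elevation = -ln(cost))."""
    rows, cols = len(elev), len(elev[0])
    cost = [[math.exp(-z) for z in row] for row in elev]
    dist, prev, seen = {start: 0.0}, {}, set()
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node in seen:
            continue
        seen.add(node)
        if node == goal:
            break
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols:
                nd = d + cost[nb[0]][nb[1]]   # pay to enter the neighbor cell
                if nd < dist.get(nb, float("inf")):
                    dist[nb] = nd
                    prev[nb] = node
                    heapq.heappush(pq, (nd, nb))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Toy ridge: the middle row has the highest elevation, so the least-cost
# path between its two end-points follows the crest rather than detouring.
elev = [[0.0, 0.0, 0.0],
        [5.0, 4.0, 5.0],
        [0.0, 0.0, 0.0]]
path = least_cost_path(elev, (1, 0), (1, 2))
```

Because high-elevation cells are exponentially cheap to traverse, the path traced between the two alongshore end-points hugs the dune crest, which is the behavior the extraction method relies on.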

  10. Comparison of methods for assessing photoprotection against ultraviolet A in vivo

    International Nuclear Information System (INIS)

    Kaidbey, K.; Gange, R.W.

    1987-01-01

    Photoprotection against ultraviolet A (UVA) by three sunscreens was evaluated in humans, with erythema and pigmentation used as end points in normal skin and in skin sensitized with 8-methoxypsoralen and anthracene. The test sunscreens were Parsol 1789 (2%), Eusolex 8020 (2%), and oxybenzone (3%). UVA was obtained from two filtered xenon-arc sources. UVA protection factors were found to be significantly higher in sensitized skin compared with normal skin. In sensitized skin, Parsol and Eusolex provided comparable photoprotection (approximately 3.0), better than that of oxybenzone (approximately 2.0), regardless of whether 8-methoxypsoralen or anthracene was used. In normal unsensitized skin, Parsol 1789 and Eusolex 8020 were also comparable and provided slightly better photoprotection (approximately 1.8) than oxybenzone (approximately 1.4) when pigmentation was used as an end point. The three sunscreens, however, were similar in providing photoprotection against UVA-induced erythema. Protection factors obtained in artificially sensitized skin are probably not relevant to normal skin. It is concluded that pigmentation, either immediate or delayed, is a reproducible and useful end point for the routine assessment of photoprotection of normal skin against UVA

  11. Methods for data classification

    Science.gov (United States)

    Garrity, George [Okemos, MI; Lilburn, Timothy G [Front Royal, VA

    2011-10-11

    The present invention provides methods for classifying data and uncovering and correcting annotation errors. In particular, the present invention provides a self-organizing, self-correcting algorithm for use in classifying data. Additionally, the present invention provides a method for classifying biological taxa.

  12. Computational methods working group

    International Nuclear Information System (INIS)

    Gabriel, T.A.

    1997-09-01

    During the Cold Moderator Workshop several working groups were established including one to discuss calculational methods. The charge for this working group was to identify problems in theory, data, program execution, etc., and to suggest solutions considering both deterministic and stochastic methods including acceleration procedures.

  13. Method for exchanging data

    NARCIS (Netherlands)

    2014-01-01

    The present invention relates to a method for exchanging data between at least two servers with use of a gateway. Preferably the method is applied to healthcare systems. Each server holds a unique federated identifier, which identifier identifies a single patient (P). Thus, it is possible for the

  14. WWW: The Scientific Method

    Science.gov (United States)

    Blystone, Robert V.; Blodgett, Kevin

    2006-01-01

    The scientific method is the principal methodology by which biological knowledge is gained and disseminated. As fundamental as the scientific method may be, its historical development is poorly understood, its definition is variable, and its deployment is uneven. Scientific progress may occur without the strictures imposed by the formal…

  15. Methods of numerical relativity

    International Nuclear Information System (INIS)

    Piran, T.

    1983-01-01

    Numerical Relativity is an alternative to analytical methods for obtaining solutions for Einstein equations. Numerical methods are particularly useful for studying generation of gravitational radiation by potential strong sources. The author reviews the analytical background, the numerical analysis aspects and techniques and some of the difficulties involved in numerical relativity. (Auth.)

  16. Differential equation method

    International Nuclear Information System (INIS)

    Kotikov, A.V.

    1993-01-01

    A new method for the calculation of massive Feynman diagrams is presented. It provides a fairly simple procedure for obtaining the result without calculating the D-space integral (for the dimensional regularization). Some diagrams are calculated as an illustration of the method's capabilities. (author). 7 refs

  17. DISCOURSE ON METHODS.

    Science.gov (United States)

    BOUCHER, JOHN G.

    The author states that before present foreign language teaching methods can be discussed intelligently, the research in psychology and linguistics which has influenced the development of these methods must be considered. Many foreign language teachers were beginning to feel comfortable with the audiolingual approach when Noam Chomsky, in his 1966…

  18. Research Methods in Education

    Science.gov (United States)

    Check, Joseph; Schutt, Russell K.

    2011-01-01

    "Research Methods in Education" introduces research methods as an integrated set of techniques for investigating questions about the educational world. This lively, innovative text helps students connect technique and substance, appreciate the value of both qualitative and quantitative methodologies, and make ethical research decisions.…

  19. Attribute-Based Methods

    Science.gov (United States)

    Thomas P. Holmes; Wiktor L. Adamowicz

    2003-01-01

    Stated preference methods of environmental valuation have been used by economists for decades where behavioral data have limitations. The contingent valuation method (Chapter 5) is the oldest stated preference approach, and hundreds of contingent valuation studies have been conducted. More recently, and especially over the last decade, a class of stated preference...

  20. Proven Weight Loss Methods

    Science.gov (United States)

    Fact Sheet Proven Weight Loss Methods What can weight loss do for you? Losing weight can improve your health in a number of ways. It can lower ... at www.hormone.org/Spanish .

  1. Radiation borehole logging method

    International Nuclear Information System (INIS)

    Wylie, A.; Mathew, P.J.

    1977-01-01

    A method of obtaining an indication of the diameter of a borehole is described. The method comprises subjecting the walls of the borehole to monoenergetic gamma radiation and making measurements of the intensity of gamma radiation backscattered from the walls. The energy of the radiation is sufficiently high for the shape to be substantially independent of the density and composition of the borehole walls

  2. Isotope methods in hydrology

    International Nuclear Information System (INIS)

    Moser, H.; Rauert, W.

    1980-01-01

    Of the investigation methods used in hydrology, tracer methods hold a special place, as they are the only ones which give direct insight into the movement and distribution processes taking place in surface and ground waters. Besides the labelling of water with salts and dyes, as in the past, in recent years the use of isotopes in hydrology, in water research and use, in ground-water protection and in hydraulic engineering has increased. This by no means replaces proven methods of hydrological investigation but tends rather to complement and expand them through inter-disciplinary cooperation. The book offers a general introduction to the application of various isotope methods to specific hydrogeological and hydrological problems. The idea is to put the hydrogeologist and the hydrologist in a position to recognize which isotope method will help solve a particular problem, or indeed make a solution possible at all, and to recognize what the prerequisites are and what work and expenditure the use of such methods involves. May the book contribute to promoting cooperation between hydrogeologists, hydrologists, hydraulic engineers and isotope specialists, and thus supplement proven methods of investigation in hydrological research and water utilization and protection wherever the use of isotope methods proves to be of advantage. (orig./HP) [de

  3. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last 2 decades most basic algorithms have not changed, but what has is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...

  4. Adaptive method of lines

    CERN Document Server

    Saucez, Ph

    2001-01-01

    The general Method of Lines (MOL) procedure provides a flexible format for the solution of all the major classes of partial differential equations (PDEs) and is particularly well suited to evolutionary, nonlinear wave PDEs. Despite its utility, however, there are relatively few texts that explore it at a more advanced level and reflect the method's current state of development. Written by distinguished researchers in the field, Adaptive Method of Lines reflects the diversity of techniques and applications related to the MOL. Most of its chapters focus on a particular application but also provide a discussion of underlying philosophy and technique. Particular attention is paid to the concept of both temporal and spatial adaptivity in solving time-dependent PDEs. Many important ideas and methods are introduced, including moving grids and grid refinement, static and dynamic gridding, the equidistribution principle and the concept of a monitor function, the minimization of a functional, and the moving finite elem...

  5. Bayesian Monte Carlo method

    International Nuclear Information System (INIS)

    Rajabalinejad, M.

    2010-01-01

    To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also provides the possibility of considering more priors; in other words, different priors can be integrated into one model by using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.

  6. Methods in Modern Biophysics

    CERN Document Server

    Nölting, Bengt

    2006-01-01

    Incorporating recent dramatic advances, this textbook presents a fresh and timely introduction to modern biophysical methods. An array of new, faster and higher-power biophysical methods now enables scientists to examine the mysteries of life at a molecular level. This innovative text surveys and explains the ten key biophysical methods, including those related to biophysical nanotechnology, scanning probe microscopy, X-ray crystallography, ion mobility spectrometry, mass spectrometry, proteomics, and protein folding and structure. Incorporating much information previously unavailable in tutorial form, Nölting employs worked examples and 267 illustrations to fully detail the techniques and their underlying mechanisms. Methods in Modern Biophysics is written for advanced undergraduate and graduate students, postdocs, researchers, lecturers and professors in biophysics, biochemistry and related fields. Special features in the 2nd edition: • Illustrates the high-resolution methods for ultrashort-living protei...

  7. The surface analysis methods

    International Nuclear Information System (INIS)

    Deville, J.P.

    1998-01-01

    Nowadays there are many surface analysis methods, each having its specificity, its qualities, its constraints (for instance, vacuum) and its limits. Expensive in time and in investment, these methods have to be used deliberately. This article is aimed at non-specialists. It gives some elements of choice according to the information sought, the sensitivity, the constraints of use, or the answer to a precise question. After recalling the fundamental principles which govern these analysis methods, based on the interaction between radiations (ultraviolet, X) or particles (ions, electrons) and matter, two methods are described in more detail: Auger electron spectroscopy (AES) and X-ray photoemission spectroscopy (ESCA or XPS). Indeed, they are the most widespread methods in laboratories, the easiest to use, and probably the most productive for the analysis of surfaces of industrial materials or samples submitted to treatments in aggressive media. (O.M.)

  8. Cooperative method development

    DEFF Research Database (Denmark)

    Dittrich, Yvonne; Rönkkö, Kari; Eriksson, Jeanette

    2008-01-01

    The development of methods, tools and process improvements is best based on an understanding of the development practice to be supported. Qualitative research has been proposed as a method for understanding the social and cooperative aspects of software development. However, qualitative research is not easily combined with the improvement orientation of an engineering discipline. During the last 6 years, we have applied an approach we call 'cooperative method development', which combines qualitative social science fieldwork with problem-oriented method, technique and process improvement. The action research based approach, focusing on shop floor software development practices, allows an understanding of how contextual contingencies influence the deployment and applicability of methods, processes and techniques. This article summarizes the experiences and discusses the further development...

  9. Engaging with mobile methods

    DEFF Research Database (Denmark)

    Jensen, Martin Trandberg

    2014-01-01

    This chapter showcases how mobile methods are more than calibrated techniques awaiting application by tourism researchers, but productive in the enactment of the mobile (Law and Urry, 2004). Drawing upon recent findings deriving from a PhD course on mobility and mobile methods, it reveals the conceptual ambiguity of the term 'mobile methods'. In order to explore this ambiguity, the chapter provides a number of examples deriving from tourism research, to explore how mobile methods are always entangled in ideologies, predispositions, conventions and practice-realities. Accordingly, the engagements with methods are acknowledged to be always political and contextual, reminding us to avoid essentialist discussions regarding research methods. Finally, the chapter draws on recent fieldwork to extend developments in mobilities-oriented tourism research, by employing auto-ethnography to call...

  10. Determination method of radiostrontium

    International Nuclear Information System (INIS)

    1984-01-01

    This manual provides determination methods for strontium-90 and strontium-89 released into the environment from nuclear facilities; it is a revised edition of the previous manual published in 1974. For the preparation of radiation counting samples, the ion exchange method, the oxalate separation method and the solvent extraction method were adopted in addition to the fuming nitric acid separation method adopted in the previous edition. Strontium-90 is determined by the separation and radioactivity determination of yttrium-90 in radioactive equilibrium with strontium-90. Strontium-89 is determined by subtracting the radioactivity of strontium-90 plus yttrium-90 from the gross radioactivity of the isolated strontium carbonate. Radioactivity determination should be carried out with a low-background 2 π gas-flow counting system on a sample mounted on a filter in the chemical form of ferric hydroxide, yttrium oxalate or strontium carbonate. This manual describes sample preparation procedures as well as radioactivity counting procedures for environmental samples of precipitation such as rain or snow, airborne dust, fresh water, sea water and soil, and also for ash samples made from biological or food samples such as grains, vegetables, tea leaves, pine needles, milk, marine organisms, and total diet, employing fuming nitric acid separation, ion exchange separation, oxalate precipitate separation or solvent extraction separation (the latter only for ash samples). A procedure for the preparation of reagent chemicals is also attached to this manual. (Takagi, S.)

  11. Basics of Bayesian methods.

    Science.gov (United States)

    Ghosh, Sujit K

    2010-01-01

    Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of Bayesian inferential method is its logical foundation that provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of the empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
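The prior-times-likelihood updating this abstract describes has a closed form in conjugate cases. A minimal sketch using the Beta-Binomial pair (the prior and data values are made-up for illustration):

```python
# Beta-Binomial conjugate updating: a Beta(a, b) prior combined with
# k successes in n trials yields the posterior Beta(a + k, b + n - k).
def posterior_beta(a, b, k, n):
    return a + k, b + n - k

def beta_mean(a, b):
    return a / (a + b)

# Prior belief: success rate around 0.5, weakly held (Beta(2, 2)).
# Current data: 7 successes in 10 trials.
a_post, b_post = posterior_beta(2, 2, 7, 10)
print(a_post, b_post)                       # → 9 5
print(round(beta_mean(a_post, b_post), 3))  # posterior mean 9/14 ≈ 0.643
```

The posterior mean 0.643 sits between the prior mean (0.5) and the sample proportion (0.7), illustrating how the prior and the likelihood are weighed against each other; non-conjugate models need the Monte Carlo machinery the abstract mentions.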

  12. Methods in mummy research

    DEFF Research Database (Denmark)

    Lynnerup, Niels

    2009-01-01

    Mummies are human remains with preservation of non-bony tissue. Many mummy studies focus on the development and application of non-destructive methods for examining mummies, including radiography, CT-scanning with advanced 3-dimensional visualisations, and endoscopic techniques, as well as minimally-destructive chemical, physical and biological methods for, e.g., stable isotopes, trace metals and DNA....

  13. Montessori Method and ICTs

    Directory of Open Access Journals (Sweden)

    Athanasios Drigas

    2016-03-01

    This article bridges the gap between the Montessori Method and Information and Communication Technologies (ICTs) in contemporary education. It reviews recent research works which recall the Montessori philosophy, principles and didactical tools, applying them to today's computers and supporting technologies in children's learning process. The article reviews how important the stimulation of the human senses is in the learning process, as well as the development of Montessori materials using the body and the hand in particular, all according to the Montessori Method and in line with recent research on ICTs. In the information society age, the Montessori Method acquires new perspectives, new functionality and new efficacy.

  14. Rubidium-strontium method

    International Nuclear Information System (INIS)

    Dubansky, A.

    1980-01-01

    The rubidium-strontium geological dating method is based on the determination of the Rb and Sr isotope ratio in rocks, mainly using mass spectrometry. The method is only practical for silicate minerals and rocks, potassium feldspars and slates. Also described is the rubidium-strontium isochrone method. This, however, requires a significant amount of experimental data and an analysis of large quantities of samples, often of the order of tons. The results are tabulated of rubidium-strontium dating of geological formations in the Czech Socialist Republic. (M.S.)
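The isochron method mentioned in this abstract yields an age from the slope of the ⁸⁷Sr/⁸⁶Sr versus ⁸⁷Rb/⁸⁶Sr line, since the slope equals e^(λt) − 1. A sketch using the commonly tabulated ⁸⁷Rb decay constant (the example slope is made-up):

```python
import math

LAMBDA_RB87 = 1.42e-11  # decay constant of 87Rb in 1/yr (commonly tabulated value)

def isochron_age(slope):
    """Age in years from the slope of an 87Sr/86Sr vs. 87Rb/86Sr isochron.

    The isochron slope equals exp(lambda * t) - 1, so t = ln(1 + slope) / lambda.
    """
    return math.log(1.0 + slope) / LAMBDA_RB87

# A slope of 0.0100 corresponds to roughly 700 million years.
print(f"{isochron_age(0.0100):.3e} years")  # → 7.007e+08 years
```

The need for many co-genetic samples noted in the abstract comes from fitting this line reliably: each sample contributes one point, and the age is only as good as the regression slope.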

  15. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

    The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature of the uncertainties and their interplay is then developed, step-by-step. The concepts presented are illustrated by numerous examples throughout the text....

  16. Catalytic reforming methods

    Science.gov (United States)

    Tadd, Andrew R; Schwank, Johannes

    2013-05-14

    A catalytic reforming method is disclosed herein. The method includes sequentially supplying a plurality of feedstocks of variable compositions to a reformer. The method further includes adding a respective predetermined co-reactant to each of the plurality of feedstocks to obtain a substantially constant output from the reformer for the plurality of feedstocks. The respective predetermined co-reactant is based on a C/H/O atomic composition for a respective one of the plurality of feedstocks and a predetermined C/H/O atomic composition for the substantially constant output.
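The co-reactant selection this abstract describes, based on C/H/O atomic compositions, can be illustrated with a deliberately simplified helper that computes a single steam co-feed needed to bring a feedstock to a target O/C ratio (a hypothetical sketch; the patent's actual co-reactant logic is more general and is not reproduced here):

```python
def steam_cofeed_moles(feed_formula, target_o_to_c):
    """Moles of water to add per mole of feedstock so the blend reaches a
    target O/C atomic ratio (hypothetical helper for illustration only).

    feed_formula: (n_C, n_H, n_O) atoms per molecule of feedstock.
    """
    n_c, n_h, n_o = feed_formula
    # Each mole of H2O adds one O atom and no C, so:
    #   (n_o + x) / n_c = target  =>  x = target * n_c - n_o
    x = target_o_to_c * n_c - n_o
    if x < 0:
        raise ValueError("feedstock already exceeds the target O/C ratio")
    return x

# Ethanol (C2H6O) brought to O/C = 1.0 needs one extra O atom per molecule.
print(steam_cofeed_moles((2, 6, 1), 1.0))  # → 1.0
```

Solving the same balance for each incoming feedstock is what lets a reformer hold a substantially constant output composition as the feed varies, which is the point of the claimed method.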

  17. Nuclear physics mathematical methods

    International Nuclear Information System (INIS)

    Balian, R.; Gervois, A.; Giannoni, M.J.; Levesque, D.; Maille, M.

    1984-01-01

    The nuclear physics mathematical methods, applied to the collective motion theory, to the reduction of the degrees of freedom and to the order and disorder phenomena, are investigated. In the scope of the study, the following aspects are discussed: the entropy of an ensemble of collective variables; the interpretation of dissipation, applying the information theory; chaos and universality; the Monte-Carlo method applied to classical statistical mechanics and quantum mechanics; the finite elements method; and classical ergodicity [fr

  18. Methods for RNA Analysis

    DEFF Research Database (Denmark)

    Olivarius, Signe

    Aiming to facilitate RNA analysis, this thesis introduces proteomics- as well as transcriptomics-based methods for the functional characterization of RNA. Since many RNAs rely on interactions with proteins, the establishment of protein-binding profiles is essential for the characterization of RNAs. For quantitative assessment of the transcriptome, 5' end capture of RNA is combined with next-generation sequencing for high-throughput assessment of transcription start sites by two different methods. The methods presented here allow for functional investigation of coding as well as noncoding RNA and contribute to future...

  19. Electromigration method in radiochemistry

    International Nuclear Information System (INIS)

    Makarova, T.P.; Stepanov, A.V.

    1977-01-01

    Investigations from the period 1969-1975 are reviewed, accomplished by such methods as zonal electrophoresis in countercurrent, focusing electrophoresis, isotachophoresis, electrophoresis with elution, and continuous two-dimensional electrophoresis. Since the methods considered are based on the use of porous fillers for stabilizing the medium, some attention is given to the effect of the solid-solution interface on the shape and rate of motion of the zones of the rare-earth elements investigated, Sr and others. The trend of developing electrophoresis as a method for obtaining high-purity elements is emphasized.

  20. Numerical methods using Matlab

    CERN Document Server

    Lindfield, George

    2012-01-01

    Numerical Methods using MATLAB, 3e, is an extensive reference offering hundreds of useful and important numerical algorithms that can be implemented into MATLAB for a graphical interpretation to help researchers analyze a particular outcome. Many worked examples are given together with exercises and solutions to illustrate how numerical methods can be used to study problems that have applications in the biosciences, chaos, optimization, engineering and science across the board.

  1. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial based response surface techniques in structural reliability, considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state function. An important feature of the model correction factor method is that, in a simpler form not using gradient information on the original limit state function, or using this information only once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...

  2. Imaging methods in otorhinolaryngology

    International Nuclear Information System (INIS)

    Frey, K.W.; Mees, K.; Vogl, T.

    1989-01-01

    This book is the work of an otorhinolaryngologist and two radiologists, who combined their experience and efforts in order to solve a great variety and number of problems encountered in practical work, taking into account the latest technical potentials and the practical feasibility, which is determined by the equipment available. Every chapter presents the full range of diagnostic methods applicable, starting with the suitable plain radiography methods and proceeding to the various tomographic scanning methods, including conventional tomography. Every technique is assessed in terms of diagnostic value and drawbacks. (orig./MG) With 778 figs [de

  3. Generalized subspace correction methods

    Energy Technology Data Exchange (ETDEWEB)

    Kolm, P. [Royal Institute of Technology, Stockholm (Sweden); Arbenz, P.; Gander, W. [Eidgenoessiche Technische Hochschule, Zuerich (Switzerland)

    1996-12-31

    A fundamental problem in scientific computing is the solution of large sparse systems of linear equations. Often these systems arise from the discretization of differential equations by finite difference, finite volume or finite element methods. Iterative methods exploiting these sparse structures have proven to be very effective on conventional computers for a wide area of applications. Due to the rapid development and increasing demand for the large computing powers of parallel computers, it has become important to design iterative methods specialized for these new architectures.

  4. Concrete compositions and methods

    Science.gov (United States)

    Chen, Irvin; Lee, Patricia Tung; Patterson, Joshua

    2015-06-23

    Provided herein are compositions, methods, and systems for cementitious compositions containing calcium carbonate compositions and aggregate. The compositions find use in a variety of applications, including use in a variety of building materials and building applications.

  5. Ensemble Data Mining Methods

    Science.gov (United States)

    Oza, Nikunj C.

    2004-01-01

    Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve better prediction accuracy than any of the individual models could on their own. The basic goal when designing an ensemble is the same as when establishing a committee of people: each member of the committee should be as competent as possible, but the members should be complementary to one another. If the members are not complementary, i.e., if they always agree, then the committee is unnecessary---any one member is sufficient. If the members are complementary, then when one or a few members make an error, the probability is high that the remaining members can correct this error. Research in ensemble methods has largely revolved around designing ensembles consisting of competent yet complementary models.
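The committee idea described in this abstract can be sketched with a simple majority-vote combiner (an illustrative sketch, not taken from the report; the model predictions are made-up):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class predictions from several models by majority vote.

    predictions: list of per-model prediction lists, all the same length.
    Returns the ensemble's prediction for each sample.
    """
    combined = []
    for sample_preds in zip(*predictions):
        # The most common label among the committee members wins.
        combined.append(Counter(sample_preds).most_common(1)[0][0])
    return combined

# Three imperfect but complementary models: each errs on a different sample,
# so every individual error is outvoted by the other two members.
model_a = [1, 0, 1, 1, 0]
model_b = [1, 1, 1, 0, 0]
model_c = [0, 0, 1, 1, 1]
truth   = [1, 0, 1, 1, 0]

print(majority_vote([model_a, model_b, model_c]))  # → [1, 0, 1, 1, 0]
```

If the three models always agreed, the vote would simply reproduce any single member, which is exactly the abstract's point about complementarity.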

  6. Diagnostic method and reagent

    International Nuclear Information System (INIS)

    Edgington, T.S.; Plow, E.F.

    1979-01-01

    The discovery of an isomeric species of carcinoembryonic antigen and methods of isolation, identification and utilization as a radiolabelled species of the same as an aid in the diagnosis of adenocarcinomas of the gastrointestinal tract are disclosed. 13 claims

  7. Methods of dating

    Energy Technology Data Exchange (ETDEWEB)

    Gatty, B

    1986-04-01

    Scientific methods of dating, born less than thirty years ago, have recently improved tremendously. First the dating principles are given; then it is explained how, through natural radioactivity, we can gain access to the age of an event or an object; the case of radiocarbon is especially emphasized. The principles of relative methods such as thermoluminescence or paleomagnetism are also briefly given. What is dating used for? Its fields of application are numerous; through these methods, relatively precise ages can be assigned to the major events which have been keys in the history of the universe, life and man; thus, dating is a useful scientific tool in astrophysics, geology, biology, anthropology and archeology. Even if certain ages are still subject to controversy, we can say that these methods have confirmed evolution's continuity, be it on a cosmic, biologic or human scale, where ages are measured in billions, millions or thousands of years respectively.
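For the radiocarbon case emphasized in this abstract, the conventional age is obtained from the measured ¹⁴C activity relative to the modern standard, using the Libby mean life of 8033 years (standard convention; a sketch, not code from the record):

```python
import math

LIBBY_MEAN_LIFE = 8033  # years; conventional radiocarbon ages use the Libby half-life

def radiocarbon_age(fraction_modern):
    """Conventional radiocarbon age (years BP) from the measured 14C
    activity expressed as a fraction of the modern standard."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# A sample retaining half its modern 14C activity dates to the Libby half-life.
print(round(radiocarbon_age(0.5)))  # → 5568
```

Calibrated calendar ages then require a further correction against tree-ring or other calibration curves, which this sketch deliberately omits.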

  8. Energy consumption assessment methods

    Energy Technology Data Exchange (ETDEWEB)

    Sutherland, K S

    1975-01-01

    The why, what, and how-to aspects of energy audits for industrial plants, and the application of energy accounting methods to a chemical plant in order to assess energy conservation possibilities are discussed. (LCL)

  9. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.

  10. Predictive Methods of Pople

    Indian Academy of Sciences (India)

    Chemistry for their pioneering contributions to the development of computational methods in quantum chemistry and density functional theory .... program of Pople for ab-initio electronic structure calculation of molecules. This ab-initio MO ...

  11. Methods for cellobiosan utilization

    Energy Technology Data Exchange (ETDEWEB)

    Linger, Jeffrey; Beckham, Gregg T.

    2017-07-11

    Disclosed herein are enzymes useful for the degradation of cellobiosan in materials such as pyrolysis oils. Methods of degrading cellobiosan using enzymes, or organisms expressing the same, are also disclosed.

  12. Methods of neutron spectrometry

    International Nuclear Information System (INIS)

    Doerschel, B.

    1981-01-01

    The different methods of neutron spectrometry are based on the direct measurement of neutron velocity or on the use of suitable energy-dependent interaction processes. In the latter case, the measured response of a detector is connected with the sought neutron spectrum by an integral equation, whose solution requires suitable unfolding procedures. The most important methods of neutron spectrometry are the time-of-flight method, crystal spectrometry, neutron spectrometry based on elastic collisions with hydrogen nuclei, and neutron spectrometry with the aid of nuclear reactions, especially neutron-induced activation. The advantages and disadvantages of these methods are contrasted, considering the resolution, the measurable energy range, the sensitivity, and the experimental and computational effort. (author)
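The unfolding problem mentioned in this abstract reduces, after discretization into energy bins, to solving a linear system R·φ = m, where R is the detector response matrix, m the measured counts, and φ the spectrum. A minimal sketch for a square, well-conditioned case (the two-channel response numbers are made-up; real unfolding uses many channels and regularization):

```python
def solve_linear(R, m):
    """Solve R * phi = m by Gaussian elimination (square response matrix).

    R[i][j]: response of detector channel i to neutrons in energy bin j.
    m[i]:    measured count in channel i.
    Returns phi, the unfolded spectrum per energy bin.
    """
    n = len(R)
    A = [row[:] + [m[i]] for i, row in enumerate(R)]  # augmented matrix
    for col in range(n):
        # Partial pivoting for numerical stability.
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    phi = [0.0] * n
    for r in range(n - 1, -1, -1):
        phi[r] = (A[r][n] - sum(A[r][c] * phi[c] for c in range(r + 1, n))) / A[r][r]
    return phi

# Two detectors with different (hypothetical) energy-dependent sensitivities.
R = [[0.9, 0.2],
     [0.1, 0.8]]
true_spectrum = [100.0, 50.0]
measured = [0.9 * 100 + 0.2 * 50, 0.1 * 100 + 0.8 * 50]
print([round(x, 6) for x in solve_linear(R, measured)])  # recovers [100.0, 50.0]
```

In practice the measured counts are noisy and the system is ill-conditioned, which is why dedicated unfolding codes apply smoothing or iterative regularization rather than a direct solve.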

  13. Methods in Modern Biophysics

    CERN Document Server

    Nölting, Bengt

    2010-01-01

    Incorporating recent dramatic advances, this textbook presents a fresh and timely introduction to modern biophysical methods. An array of new, faster and higher-power biophysical methods now enables scientists to examine the mysteries of life at a molecular level. This innovative text surveys and explains the ten key biophysical methods, including those related to biophysical nanotechnology, scanning probe microscopy, X-ray crystallography, ion mobility spectrometry, mass spectrometry, proteomics, and protein folding and structure. Incorporating much information previously unavailable in tutorial form, Nölting employs worked examples and about 270 illustrations to fully detail the techniques and their underlying mechanisms. Methods in Modern Biophysics is written for advanced undergraduate and graduate students, postdocs, researchers, lecturers, and professors in biophysics, biochemistry and related fields. Special features in the 3rd edition: Introduces rapid partial protein ladder sequencing - an important...

  14. Lean Government Methods Guide

    Science.gov (United States)

    This Guide focuses primarily on Lean production, which is an organizational improvement philosophy and set of methods that originated in manufacturing but has been expanded to government and service sectors.

  15. Number projection method

    International Nuclear Information System (INIS)

    Kaneko, K.

    1987-01-01

    A relationship between the number projection and the shell model methods is investigated in the case of a single-j shell. We find a one-to-one correspondence between the number-projected states and the shell model states

  16. Etching method employing radiation

    International Nuclear Information System (INIS)

    Chapman, B.N.; Winters, H.F.

    1982-01-01

    This invention provides a method for etching a silicon oxide, carbide, nitride, or oxynitride surface using an electron or ion beam in the presence of a xenon or krypton fluoride. No additional steps are required after exposure to radiation

  17. GEM simulation methods development

    International Nuclear Information System (INIS)

    Tikhonov, V.; Veenhof, R.

    2002-01-01

    A review of methods used in the simulation of processes in gas electron multipliers (GEMs) and in the accurate calculation of detector characteristics is presented. Such detector characteristics as effective gas gain, transparency, charge collection and losses have been calculated and optimized for a number of GEM geometries and compared with experiment. A method and a new special program for calculations of detector macro-characteristics such as signal response in a real detector readout structure, and spatial and time resolution of detectors have been developed and used for detector optimization. A detailed treatment of signal induction on readout electrodes and of electronics characteristics is included in the new program. A method for the simulation of charging-up effects in GEM detectors is described. All methods show good agreement with experiment

  18. Improved radioanalytical methods

    International Nuclear Information System (INIS)

    Erickson, M.D.; Aldstadt, J.H.; Alvarado, J.S.; Crain, J.S.; Orlandini, K.A.; Smith, L.L.

    1995-01-01

    Methods for the chemical characterization of the environment are being developed under a multitask project for the Analytical Services Division (EM-263) within the US Department of Energy (DOE) Office of Environmental Management. This project focuses on improvement of radioanalytical methods with an emphasis on faster and cheaper routine methods. We have developed improved methods for separation of environmental levels of technetium-99, strontium-89/90, radium, and actinides from soil and water, and for separation of actinides from soil and water matrix interferences. Among the novel separation techniques being used are element- and class-specific resins and membranes. (The 3M Corporation is commercializing Empore trademark membranes under a cooperative research and development agreement [CRADA] initiated under this project). We have also developed methods for simultaneous detection of multiple isotopes using inductively coupled plasma-mass spectrometry (ICP-MS). The ICP-MS method requires less rigorous chemical separations than traditional radiochemical analyses because of its mass-selective mode of detection. Actinides and their progeny have been isolated and concentrated from a variety of natural water matrices by using automated batch separation incorporating selective resins prior to ICP-MS analyses. In addition, improvements in detection limits, sample volume, and time of analysis were obtained by using other sample introduction techniques, such as ultrasonic nebulization and electrothermal vaporization. Integration and automation of the separation methods with the ICP-MS methodology by using flow injection analysis is underway, with the objective of automating methods to achieve more reproducible results, reduce labor costs, cut analysis time, and minimize secondary waste generation through miniaturization of the process

  19. Continuation Newton methods

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Sysala, Stanislav

    2015-01-01

    Roč. 70, č. 11 (2015), s. 2621-2637 ISSN 0898-1221 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:68145535 Keywords : system of nonlinear equations * Newton method * load increment method * elastoplasticity Subject RIV: IN - Informatics, Computer Science Impact factor: 1.398, year: 2015 http://www.sciencedirect.com/science/article/pii/S0898122115003818

  20. Nuclear methods monitor nutrition

    International Nuclear Information System (INIS)

    Allen, B.J.

    1988-01-01

    Neutron activation of nitrogen and hydrogen in the body, the isotope dilution technique and the measurement of naturally radioactive potassium in the body are among the new nuclear methods now under collaborative development by the Australian Nuclear Science and Technology Organisation and medical specialists from several Sydney hospitals. These methods allow medical specialists to monitor a patient's response to various diets and dietary treatments in cases of cystic fibrosis, anorexia nervosa, long-term surgical trauma, renal diseases and AIDS.

  1. The fission track method

    International Nuclear Information System (INIS)

    Hansen, K.

    1990-01-01

    During the last decade fission track (FT) analysis has evolved as an important tool in exploration for hydrocarbon resources. Most important is this method's ability to yield information about temperatures at different times (history), and thus relate oil generation and time independently of other maturity parameters. The purpose of this paper is to introduce the basics of the method and give an example from the author's studies. (AB) (14 refs.)

  2. Experimental physics method

    International Nuclear Information System (INIS)

    Jeong, Yang Su; Oh, Byeong Seong

    2010-05-01

    This book introduces measurement and error, statistics of experimental data, populations, sample variables, distribution functions, propagation of error, the mean and measurement error, fitting a straight-line equation, common sense about error, experimental method, and recording and reporting. It also explains the importance of the error of estimation, systematic error, random error, treatment of a single variable, significant figures, deviation, mean value, median, mode, sample mean, sample standard deviation, the binomial distribution, the Gaussian distribution, and the method of least squares.
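A few of the quantities the book covers (sample mean, sample standard deviation, and a straight-line fit by the method of least squares) can be sketched as follows; the data points are made up for the illustration:

```python
import math

def mean(xs):
    """Sample mean of a list of measurements."""
    return sum(xs) / len(xs)

def sample_std(xs):
    """Sample standard deviation (n - 1 in the denominator)."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def least_squares_line(xs, ys):
    """Fit y = a + b*x by the method of least squares."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Invented data, roughly following y = 1 + 2x.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.1, 7.0]
a, b = least_squares_line(xs, ys)
print(round(a, 2), round(b, 2))
```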

  3. Methods for measuring shrinkage

    OpenAIRE

    Chapman, Paul; Templar, Simon

    2006-01-01

    This paper presents findings from research amongst European grocery retailers into their methods for measuring shrinkage. The findings indicate that: there is no dominant method for valuing or stating shrinkage; shrinkage in the supply chain is frequently overlooked; data is essential in pinpointing where and when loss occurs and that many retailers collect data at the stock-keeping unit (SKU) level and do so every 6 months. These findings reveal that it is difficult to benc...

  4. Method of saccharifying cellulose

    Science.gov (United States)

    Johnson, E.A.; Demain, A.L.; Madia, A.

    1983-05-13

    A method is disclosed of saccharifying cellulose by incubation with the cellulase of Clostridium thermocellum in a broth containing an efficacious amount of thiol reducing agent. Other incubation parameters which may be advantageously controlled to stimulate saccharification include the concentration of alkaline earth salts, pH, temperature, and duration. By the method of the invention, even native crystalline cellulose such as that found in cotton may be completely saccharified.

  5. Method of treating depression

    Science.gov (United States)

    Henn, Fritz [East Patchogue, NY

    2012-01-24

    Methods for treatment of depression-related mood disorders in mammals, particularly humans are disclosed. The methods of the invention include administration of compounds capable of enhancing glutamate transporter activity in the brain of mammals suffering from depression. ATP-sensitive K.sup.+ channel openers and .beta.-lactam antibiotics are used to enhance glutamate transport and to treat depression-related mood disorders and depressive symptoms.

  6. Methods of experimental physics

    CERN Document Server

    Williams, Dudley

    1962-01-01

    Methods of Experimental Physics, Volume 3: Molecular Physics focuses on molecular theory, spectroscopy, resonance, molecular beams, and electric and thermodynamic properties. The manuscript first considers the origins of molecular theory, molecular physics, and molecular spectroscopy, as well as microwave spectroscopy, electronic spectra, and the Raman effect. The text then ponders diffraction methods of molecular structure determination and resonance studies. Topics include techniques of electron, neutron, and x-ray diffraction and nuclear magnetic, nuclear quadrupole, and electron spin resonance.

  7. The ICARE Method

    Science.gov (United States)

    Henke, Luke

    2010-01-01

    The ICARE method is a flexible, widely applicable method for systems engineers to solve problems and resolve issues in a complete and comprehensive manner. The method can be tailored by diverse users for direct application to their function (e.g. system integrators, design engineers, technical discipline leads, analysts, etc.). The clever acronym, ICARE, instills the attitude of accountability, safety, technical rigor and engagement in the problem resolution: Identify, Communicate, Assess, Report, Execute (ICARE). This method was developed through observation of the approach of Space Shuttle Propulsion Systems Engineering and Integration (PSE&I) office personnel, in an attempt to succinctly describe the actions of an effective systems engineer. Additionally, it evolved from an effort to make a broadly defined checklist for a PSE&I worker to perform their responsibilities in an iterative and recursive manner. The National Aeronautics and Space Administration (NASA) Systems Engineering Handbook states, "engineering of NASA systems requires a systematic and disciplined set of processes that are applied recursively and iteratively for the design, development, operation, maintenance, and closeout of systems throughout the life cycle of the programs and projects." ICARE is a method that can be applied within the boundaries and requirements of NASA's systems engineering set of processes to provide an elevated sense of duty and responsibility to crew and vehicle safety. The importance of a disciplined set of processes and a safety-conscious mindset increases with the complexity of the system. Moreover, the larger the system and the larger the workforce, the more important it is to encourage the usage of the ICARE method as widely as possible. According to the NASA Systems Engineering Handbook, elements of a system can include people, hardware, software, facilities, policies and documents; all things required to produce system-level results, qualities, properties, characteristics

  8. VALUATION METHODS- LITERATURE REVIEW

    Directory of Open Access Journals (Sweden)

    Dorisz Talas

    2015-07-01

    Full Text Available This paper is a theoretical overview of the commonly used valuation methods with the help of which the value of a firm or its equity is calculated. Many experts (including Aswath Damodaran, Guochang Zhang and CA Hozefa Natalwala) classify the methods. The basic models are based on discounted cash flows. The main method uses the free cash flow for valuation, but there are some newer methods that reveal and correct the weaknesses of the traditional models. The valuation of management flexibility can be conducted mainly with real options. This paper briefly describes the essence of the Dividend Discount Model, the Free Cash Flow Model, the benefit of using real options, and the Residual Income Model. There are a few words about the Adjusted Present Value approach as well. Different models use different premises, and an overall truth is that if the required premises are realistic and correct, the value will be appropriately accurate. Another important condition is that experts and analysts should choose between the models on the basis of the purpose of the valuation. Thus there are no good or bad methods, only methods that fit different goals and aims. The main task is to define the purpose exactly, then to find the most appropriate valuation technique. All the methods originate from the premise that the value of an asset is the present value of its future cash flows. According to the different points of view of the different techniques, the resulting values can also differ from each other. Valuation models and techniques should be adapted to the rapidly changing world, but the basic statements remain the same. On the other hand, there is a need for more accurate models in order to help investors obtain as much information as they can. Today information is one of the most important resources, and financial models should keep up with this trend.
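The shared premise, value as the present value of future cash flows, can be illustrated with a minimal sketch of a plain discounted-cash-flow sum and the constant-growth Dividend Discount Model; all cash flows and rates below are invented:

```python
def present_value(cash_flows, rate):
    """Discount a list of future cash flows (one per year) to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def gordon_growth_value(next_dividend, rate, growth):
    """Constant-growth Dividend Discount Model: V = D1 / (r - g)."""
    assert rate > growth, "discount rate must exceed growth rate"
    return next_dividend / (rate - growth)

# Illustrative numbers only: a three-year annuity of 100 at 10 percent,
# and a stock paying a 2.00 dividend next year, r = 8%, g = 3%.
pv = present_value([100.0, 100.0, 100.0], 0.10)
ddm = gordon_growth_value(2.0, 0.08, 0.03)
print(round(pv, 2), round(ddm, 2))
```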

  9. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    Science.gov (United States)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
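The core modelling idea, a dynamic signal written as a linear combination of temporal basis functions, can be sketched with ordinary least squares; note the paper itself uses an EM algorithm under a Poisson likelihood, and the frame times, decay rates, and coefficients below are arbitrary:

```python
import numpy as np

# Spectral-analysis-style temporal basis: decaying exponentials sampled
# at the frame mid-times (rates chosen arbitrarily for the illustration).
t = np.linspace(0.5, 60.0, 24)          # frame mid-times, minutes
rates = np.array([0.01, 0.1, 0.5])      # decay constants, 1/min
B = np.exp(-np.outer(t, rates))         # (frames, basis) temporal basis

# Synthetic noise-free time-activity curve: a known mix of the basis.
true_coeffs = np.array([2.0, 1.0, 0.5])
tac = B @ true_coeffs

# Recover the coefficients by linear least squares; in the paper this
# role is played by EM under a Poisson model, so this is only a sketch.
est, *_ = np.linalg.lstsq(B, tac, rcond=None)
print(np.allclose(est, true_coeffs))
```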

  10. Data processing of qualitative results from an interlaboratory comparison for the detection of "Flavescence dorée" phytoplasma: How the use of statistics can improve the reliability of the method validation process in plant pathology.

    Science.gov (United States)

    Chabirand, Aude; Loiseau, Marianne; Renaudin, Isabelle; Poliakoff, Françoise

    2017-01-01

    A working group established in the framework of the EUPHRESCO European collaborative project aimed to compare and validate diagnostic protocols for the detection of "Flavescence dorée" (FD) phytoplasma in grapevines. Seven molecular protocols were compared in an interlaboratory test performance study in which each laboratory had to analyze the same panel of samples, consisting of DNA extracts prepared by the organizing laboratory. The tested molecular methods consisted of universal and group-specific real-time and end-point nested PCR tests. Different statistical approaches were applied to this collaborative study. First, there was the standard statistical approach, which consists of analyzing samples known to be positive and samples known to be negative and reporting the proportions of false-positive and false-negative results to calculate diagnostic specificity and sensitivity, respectively. This approach was supplemented by the calculation of repeatability and reproducibility for qualitative methods based on the notions of accordance and concordance. Other new approaches were also implemented, based on the probability-of-detection model on the one hand and on Bayes' theorem on the other. These various statistical approaches are complementary and give consistent results. Their combination, and in particular the introduction of the new statistical approaches, gives overall information on the performance and limitations of the different methods, and is particularly useful for selecting the most appropriate detection scheme with regard to the prevalence of the pathogen. Three real-time PCR protocols (methods M4, M5 and M6, developed respectively by Hren (2007), Pelletier (2009) and under patent oligonucleotides) achieved the highest levels of performance for FD phytoplasma detection. This paper also addresses the issue of indeterminate results and the identification of outlier results. The statistical tools presented in this paper and their
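The standard approach described above (diagnostic sensitivity and specificity, combined via Bayes' theorem with an assumed prevalence) can be sketched as follows; the counts and prevalence are hypothetical, not taken from the study:

```python
def sensitivity(tp, fn):
    """Diagnostic sensitivity: fraction of known positives detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Diagnostic specificity: fraction of known negatives cleared."""
    return tn / (tn + fp)

def ppv(sens, spec, prevalence):
    """Bayes' theorem: P(infected | positive test result)."""
    p_pos = sens * prevalence + (1 - spec) * (1 - prevalence)
    return sens * prevalence / p_pos

# Hypothetical counts from a made-up interlaboratory panel.
sens = sensitivity(tp=48, fn=2)   # 48 of 50 known positives detected
spec = specificity(tn=45, fp=5)   # 45 of 50 known negatives cleared
# At low prevalence even a good test yields many false alarms,
# which is why prevalence matters when choosing a detection scheme.
print(round(ppv(sens, spec, prevalence=0.05), 3))
```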

  11. Data processing of qualitative results from an interlaboratory comparison for the detection of "Flavescence dorée" phytoplasma: How the use of statistics can improve the reliability of the method validation process in plant pathology.

    Directory of Open Access Journals (Sweden)

    Aude Chabirand

    Full Text Available A working group established in the framework of the EUPHRESCO European collaborative project aimed to compare and validate diagnostic protocols for the detection of "Flavescence dorée" (FD) phytoplasma in grapevines. Seven molecular protocols were compared in an interlaboratory test performance study in which each laboratory had to analyze the same panel of samples, consisting of DNA extracts prepared by the organizing laboratory. The tested molecular methods consisted of universal and group-specific real-time and end-point nested PCR tests. Different statistical approaches were applied to this collaborative study. First, there was the standard statistical approach, which consists of analyzing samples known to be positive and samples known to be negative and reporting the proportions of false-positive and false-negative results to calculate diagnostic specificity and sensitivity, respectively. This approach was supplemented by the calculation of repeatability and reproducibility for qualitative methods based on the notions of accordance and concordance. Other new approaches were also implemented, based on the probability-of-detection model on the one hand and on Bayes' theorem on the other. These various statistical approaches are complementary and give consistent results. Their combination, and in particular the introduction of the new statistical approaches, gives overall information on the performance and limitations of the different methods, and is particularly useful for selecting the most appropriate detection scheme with regard to the prevalence of the pathogen. Three real-time PCR protocols (methods M4, M5 and M6, developed respectively by Hren (2007), Pelletier (2009) and under patent oligonucleotides) achieved the highest levels of performance for FD phytoplasma detection. This paper also addresses the issue of indeterminate results and the identification of outlier results. The statistical tools presented in this paper

  12. The lod score method.

    Science.gov (United States)

    Rice, J P; Saccone, N L; Corbett, J

    2001-01-01

    The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
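A minimal sketch of a two-point lod score, assuming a fully informative, phase-known pedigree with made-up counts of recombinants (the conventional evidence threshold for reporting linkage is a lod of 3):

```python
import math

def lod_score(recombinants, nonrecombinants, theta):
    """Two-point lod score: log10 of the likelihood ratio of a
    recombination fraction theta against free recombination (0.5)."""
    n = recombinants + nonrecombinants
    like_theta = (theta ** recombinants) * ((1 - theta) ** nonrecombinants)
    like_null = 0.5 ** n
    return math.log10(like_theta / like_null)

# 2 recombinants out of 20 informative meioses (invented counts);
# scan a small grid of theta values and report the maximum lod score.
scores = {th: lod_score(2, 18, th) for th in (0.05, 0.1, 0.2, 0.3)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))
```

Because the method is sequential, lod scores from independent pedigrees at the same theta can simply be added.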

  13. Advances in iterative methods

    International Nuclear Information System (INIS)

    Beauwens, B.; Arkuszewski, J.; Boryszewicz, M.

    1981-01-01

    Results obtained in the field of linear iterative methods within the Coordinated Research Program on Transport Theory and Advanced Reactor Calculations are summarized. The general convergence theory of linear iterative methods is essentially based on the properties of nonnegative operators on ordered normed spaces. The following aspects of this theory have been improved: new comparison theorems for regular splittings, generalization of the notions of M- and H-matrices, new interpretations of classical convergence theorems for positive-definite operators. The estimation of asymptotic convergence rates was developed with two purposes: the analysis of model problems and the optimization of relaxation parameters. In the framework of factorization iterative methods, model problem analysis is needed to investigate whether the increased computational complexity of higher-order methods does not offset their increased asymptotic convergence rates, as well as to appreciate the effect of standard relaxation techniques (polynomial relaxation). On the other hand, the optimal use of factorization iterative methods requires the development of adequate relaxation techniques and their optimization. The relative performances of a few possibilities have been explored for model problems. Presently, the best results have been obtained with optimal diagonal-Chebyshev relaxation
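As a small illustration of a linear iterative method built from a regular splitting, the sketch below runs Jacobi iteration on a made-up diagonally dominant system, for which convergence is guaranteed:

```python
import numpy as np

def jacobi(A, b, iterations=50):
    """Jacobi iteration x_{k+1} = D^{-1}(b - (A - D) x_k), i.e. the
    regular splitting A = D - (D - A) with D = diag(A)."""
    D = np.diag(A)            # diagonal entries as a vector
    R = A - np.diag(D)        # off-diagonal part
    x = np.zeros_like(b)
    for _ in range(iterations):
        x = (b - R @ x) / D
    return x

# Diagonally dominant tridiagonal system: the Jacobi iteration matrix
# has spectral radius well below 1, so the splitting converges fast.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = jacobi(A, b)
print(np.allclose(A @ x, b, atol=1e-8))
```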

  14. Quantum mechanics implementation in drug-design workflows: does it really help?

    Science.gov (United States)

    Arodola, Olayide A; Soliman, Mahmoud Es

    2017-01-01

    The pharmaceutical industry is progressively operating in an era where development costs are constantly under pressure, higher percentages of drugs are demanded, and the drug-discovery process is a trial-and-error run. The profit that flows in with the discovery of new drugs has always been the motivation for the industry to keep up the pace and keep abreast of the endless demand for medicines. The process of finding a molecule that binds to the target protein using in silico tools has made computational chemistry a valuable tool in drug discovery in both academic research and the pharmaceutical industry. However, the complexity of many protein-ligand interactions challenges the accuracy and efficiency of the commonly used empirical methods. The usefulness of quantum mechanics (QM) in drug-protein interaction cannot be overemphasized; however, this approach has little significance in some empirical methods. In this review, we discuss recent developments in, and application of, QM to medically relevant biomolecules. We critically discuss the different types of QM-based methods and their proposed incorporation into drug-design and -discovery workflows, while trying to answer a critical question: are QM-based methods of real help in drug-design and -discovery research and industry?

  15. Independent random sampling methods

    CERN Document Server

    Martino, Luca; Míguez, Joaquín

    2018-01-01

    This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...

  16. Grid generation methods

    CERN Document Server

    Liseikin, Vladimir D

    2017-01-01

    This new edition provides a description of current developments relating to grid methods, grid codes, and their applications to actual problems. Grid generation methods are indispensable for the numerical solution of differential equations. Adaptive grid-mapping techniques, in particular, are the main focus and represent a promising tool to deal with systems with singularities. This 3rd edition includes three new chapters on numerical implementations (10), control of grid properties (11), and applications to mechanical, fluid, and plasma related problems (13). Also the other chapters have been updated including new topics, such as curvatures of discrete surfaces (3). Concise descriptions of hybrid mesh generation, drag and sweeping methods, parallel algorithms for mesh generation have been included too. This new edition addresses a broad range of readers: students, researchers, and practitioners in applied mathematics, mechanics, engineering, physics and other areas of applications.

  17. Bayesian methods in reliability

    Science.gov (United States)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.

  18. Energy methods in dynamics

    CERN Document Server

    Le, Khanh Chau

    2012-01-01

    The above examples should make clear the necessity of understanding the mechanism of vibrations and waves in order to control them in an optimal way. However vibrations and waves are governed by differential equations which require, as a rule, rather complicated mathematical methods for their analysis. The aim of this textbook is to help students acquire both a good grasp of the first principles from which the governing equations can be derived, and the adequate mathematical methods for their solving. Its distinctive features, as seen from the title, lie in the systematic and intensive use of Hamilton's variational principle and its generalizations for deriving the governing equations of conservative and dissipative mechanical systems, and also in providing the direct variational-asymptotic analysis, whenever available, of the energy and dissipation for the solution of these equations. It will be demonstrated that many well-known methods in dynamics like those of Lindstedt-Poincare, Bogoliubov-Mitropolsky, Ko...

  19. Nuclear methods for tribology

    International Nuclear Information System (INIS)

    Racolta, P.M.

    1994-01-01

    The tribological field of activity is mainly concerned with the relative movement of different machine components, friction and wear phenomena, and their dependence upon lubrication. Tribological studies on friction and wear processes are important because they lead to significant parameter improvements in engineering tools and machinery components. A review of fundamental aspects of both friction and wear phenomena is presented. A number of radioindicator-based methods have been known for almost four decades, differing mainly with respect to the mode of introducing the radioindicators into the machine part to be studied. All these methods, briefly presented in this paper, are based on the measurement of the activity of wear products and therefore require high activity levels in the part. For this reason, such determinations can be carried out only in special laboratories and under conditions which do not usually agree with the conditions of actual use. What is required is a sensitive, fast method allowing the determination of wear under any operating conditions, without the necessity of stopping and disassembling the machine. These requirements are the features that have made the Thin Layer Activation (TLA) technique the most widely used method in wear and corrosion studies over the last two decades. The TLA principle, taking into account that wear and corrosion processes are characterised by a loss of material, consists of ion-beam irradiation of a well-defined volume of a machine part subjected to wear. The radioactivity level changes can usually be measured by gamma-ray spectroscopy methods. A review of the main TLA fields of application in major laboratories abroad and of the work performed at the U-120 cyclotron of I.P.N.E.-Bucharest, together with existing trends to extend other nuclear analytical methods to tribological studies, is presented as well. (author). 25 refs., 6 figs., 2 tabs

  20. Methods for pretreating biomass

    Science.gov (United States)

    Balan, Venkatesh; Dale, Bruce E; Chundawat, Shishir; Sousa, Leonardo

    2017-05-09

    A method for pretreating biomass is provided, which includes, in a reactor, allowing gaseous ammonia to condense on the biomass and react with water present in the biomass to produce pretreated biomass, wherein reactivity of polysaccharides in the biomass is increased during subsequent biological conversion as compared to the reactivity of polysaccharides in biomass which has not been pretreated. A method for pretreating biomass with a liquid ammonia and recovering the liquid ammonia is also provided. Related systems which include a biochemical or biofuel production facility are also disclosed.

  1. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem"...
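A classic Monte Carlo example of the kind the book opens with is Buffon's needle; the sketch below estimates pi from simulated needle drops (sample size and seed are arbitrary):

```python
import math
import random

def buffon_pi(n, needle=1.0, spacing=1.0, seed=1):
    """Estimate pi with Buffon's needle: a needle of length l dropped on
    lines spaced d apart (l <= d) crosses a line with probability
    2*l / (pi*d), so pi is approximately 2*l*n / (d*crossings)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n):
        centre = rng.uniform(0.0, spacing / 2)   # distance to nearest line
        angle = rng.uniform(0.0, math.pi / 2)    # needle orientation
        if centre <= (needle / 2) * math.sin(angle):
            crossings += 1
    return 2 * needle * n / (spacing * crossings)

est = buffon_pi(200_000)
print(est)   # close to math.pi; the error shrinks like 1/sqrt(n)
```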

  2. Research on teaching methods.

    Science.gov (United States)

    Oermann, M H

    1990-01-01

    Research on teaching methods in nursing education was categorized into studies on media, CAI, and other nontraditional instructional strategies. While the research differed, some generalizations may be made from the findings. Multimedia, whether it is used for individual or group instruction, is at least as effective as traditional instruction (lecture and lecture-discussion) in promoting cognitive learning, retention of knowledge, and performance. Further study is needed to identify variables that may influence learning and retention. While learner attitudes toward mediated instruction tended to be positive, investigators failed to control for the effect of novelty. Control over intervening variables was lacking in the majority of studies as well. Research indicated that CAI is as effective as other teaching methods in terms of knowledge gain and retention. Attitudes toward CAI tended to be favorable, with similar problems in measurement as those evidenced in studies of media. Chang (1986) also recommends that future research examine the impact of computer-video interactive instruction on students, faculty, and settings. Research is needed on experimental teaching methods, strategies for teaching problem solving and clinical judgment, and ways of improving the traditional lecture and discussion. Limited research in these areas makes generalizations impossible. There is a particular need for research on how to teach students the diagnostic reasoning process and encourage critical thinking, both in terms of appropriate teaching methods and the way in which those strategies should be used. It is interesting that few researchers studied lecture and lecture-discussion except as comparable teaching methods for research on other strategies. Additional research questions may be generated on lecture and discussion in relation to promoting concept learning, an understanding of nursing and other theories, transfer of knowledge, and development of cognitive skills. Few

  3. Carbon 14 dating method

    International Nuclear Information System (INIS)

    Fortin, Ph.

    2000-01-01

    This document gives a first introduction to ¹⁴C dating as it is put into practice at the radiocarbon dating centre of Claude-Bernard university (Lyon-1 univ., Villeurbanne, France): general considerations and reminders of nuclear physics; the ¹⁴C dating method; the initial standard activity; the isotopic fractionation; the measurement of sample activity; the liquid-scintillation counters; the calibration and correction of ¹⁴C dates; the preparation of samples; the benzene synthesis; the current applications of the method. (J.S.)

  4. Methods of Multivariate Analysis

    CERN Document Server

    Rencher, Alvin C

    2012-01-01

    Praise for the Second Edition "This book is a systematic, well-written, well-organized text on multivariate analysis packed with intuition and insight . . . There is much practical wisdom in this book that is hard to find elsewhere."-IIE Transactions Filled with new and timely content, Methods of Multivariate Analysis, Third Edition provides examples and exercises based on more than sixty real data sets from a wide variety of scientific fields. It takes a "methods" approach to the subject, placing an emphasis on how students and practitioners can employ multivariate analysis in real-life sit

  5. Tautomerism methods and theories

    CERN Document Server

    Antonov, Liudmil

    2013-01-01

    Covering the gap between basic textbooks and over-specialized scientific publications, this is the first reference available to describe this interdisciplinary topic for PhD students and scientists starting in the field. The result is an introductory description providing suitable practical examples of the basic methods used to study tautomeric processes, as well as the theories describing the tautomerism and proton transfer phenomena. It also includes different spectroscopic methods for examining tautomerism, such as UV-Vis, time-resolved fluorescence spectroscopy, and NMR spectrosc

  6. Speeding Fermat's factoring method

    Science.gov (United States)

    McKee, James

    A factoring method is presented which, heuristically, splits composite n in O(n^{1/4+epsilon}) steps. There are two ideas: an integer approximation to sqrt(q/p) provides an O(n^{1/2+epsilon}) algorithm in which n is represented as the difference of two rational squares; observing that if a prime m divides a square, then m^2 divides that square, a heuristic speed-up to O(n^{1/4+epsilon}) steps is achieved. The method is well-suited for use with small computers: the storage required is negligible, and one never needs to work with numbers larger than n itself.
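    For context, the "difference of two rational squares" idea builds on classic Fermat factorisation. A minimal sketch of that baseline (not the paper's O(n^{1/4+epsilon}) refinement):

    ```python
    import math

    def fermat_factor(n: int) -> tuple[int, int]:
        """Classic Fermat factorisation of an odd composite n: search for a
        with a^2 - n a perfect square b^2, giving n = (a - b)(a + b).
        Fast when the two factors are close together; in the worst case far
        slower than the speed-ups described in the abstract."""
        assert n > 1 and n % 2 == 1
        a = math.isqrt(n)
        if a * a < n:
            a += 1
        while True:
            b2 = a * a - n
            b = math.isqrt(b2)
            if b * b == b2:
                return a - b, a + b
            a += 1

    print(fermat_factor(5959))  # (59, 101)
    ```

    Storage is negligible and no intermediate value exceeds n itself, matching the abstract's remark that the method suits small computers.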

  7. High frequency asymptotic methods

    International Nuclear Information System (INIS)

    Bouche, D.; Dessarce, R.; Gay, J.; Vermersch, S.

    1991-01-01

    The asymptotic methods allow us to compute the interaction of high-frequency electromagnetic waves with structures. After an outline of their foundations, with emphasis on the geometrical theory of diffraction, it is shown how to use these methods to evaluate the radar cross section (RCS) of complex three-dimensional objects that are large compared to the wavelength. The different stages in simulating the phenomena which contribute to the RCS are reviewed: physical theory of diffraction, multiple interactions computed by ray shooting, and the search for creeping rays. (author). 7 refs., 6 figs., 3 insets

  8. Practical methods of optimization

    CERN Document Server

    Fletcher, R

    2013-01-01

    Fully describes optimization methods that are currently most valuable in solving real-life problems. Since optimization has applications in almost every branch of science and technology, the text emphasizes their practical aspects in conjunction with the heuristics useful in making them perform more reliably and efficiently. To this end, it presents comparative numerical studies to give readers a feel for possible applications and to illustrate the problems in assessing evidence. Also supplies theoretical background which provides insights into how methods are derived. This edition offers rev

  9. Electrorheological fluids and methods

    Science.gov (United States)

    Green, Peter F.; McIntyre, Ernest C.

    2015-06-02

    Electrorheological fluids and methods include changes in liquid-like materials that can flow like milk and subsequently form solid-like structures under applied electric fields; e.g., about 1 kV/mm. Such fluids can be used in various ways as smart suspensions, including uses in automotive, defense, and civil engineering applications. Electrorheological fluids and methods include one or more polar molecule substituted polyhedral silsesquioxanes (e.g., sulfonated polyhedral silsesquioxanes) and one or more oils (e.g., silicone oil), where the fluid can be subjected to an electric field.

  10. Method of sterilization

    International Nuclear Information System (INIS)

    Peel, J.L.; Waites, W.M.

    1981-01-01

    A method of sterilisation of food packaging is described which comprises treating microorganisms with an ultraviolet irradiated solution of hydrogen peroxide to render the microorganisms non-viable. The wavelength of ultraviolet radiation used is wholly or predominantly below 325 nm and the concentration of the hydrogen peroxide is no greater than 10% by weight. The method is applicable to a wide variety of microorganisms including moulds, yeasts, bacteria, viruses and protozoa and finds particular application in the destruction of spore-forming bacteria, especially those which are dairy contaminants. (U.K.)

  11. Unorthodox theoretical methods

    Energy Technology Data Exchange (ETDEWEB)

    Nedd, Sean [Iowa State Univ., Ames, IA (United States)

    2012-01-01

    The use of the ReaxFF force field to correlate with NMR mobilities of amine catalytic substituents on a mesoporous silica nanosphere surface is considered. The interfacing of the ReaxFF force field within the Surface Integrated Molecular Orbital/Molecular Mechanics (SIMOMM) method, in order to replicate earlier SIMOMM published data and to compare with the ReaxFF data, is discussed. The development of a new correlation consistent Composite Approach (ccCA) is presented, which incorporates the completely renormalized coupled cluster method with singles, doubles and non-iterative triples corrections towards the determination of heats of formations and reaction pathways which contain biradical species.

  12. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among them rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
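    Of the algorithms named above, rejection sampling is the easiest to illustrate. A minimal sketch (the Beta(2, 2) target and the envelope constant are our illustrative choices, not the paper's):

    ```python
    import random

    def rejection_sample(target_pdf, proposal_draw, proposal_pdf, m, rng):
        """Draw one sample from target_pdf, given a proposal we can sample
        from and an envelope constant m with target(x) <= m * proposal(x)."""
        while True:
            x = proposal_draw(rng)
            if rng.random() * m * proposal_pdf(x) <= target_pdf(x):
                return x

    # Illustration: Beta(2, 2) density on [0, 1] with a uniform proposal.
    # The density 6x(1-x) peaks at 1.5, so m = 1.5 is a valid envelope.
    rng = random.Random(1)
    beta22 = lambda x: 6.0 * x * (1.0 - x)
    samples = [rejection_sample(beta22, lambda r: r.random(), lambda x: 1.0,
                                1.5, rng) for _ in range(20_000)]
    print(sum(samples) / len(samples))  # near 0.5, the Beta(2, 2) mean
    ```

    The acceptance rate is 1/m, so a tight envelope matters; when no usable envelope exists, the MCMC methods the abstract lists take over.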

  13. The SPH homogenization method

    International Nuclear Information System (INIS)

    Kavenoky, Alain

    1978-01-01

    The homogenization of a uniform lattice is a rather well understood topic, while difficult problems arise if the lattice becomes irregular. The SPH homogenization method is an attempt to generate homogenized cross sections for an irregular lattice. Section 1 summarizes the treatment of an isolated cylindrical cell with an entering surface current (in one-velocity theory); Section 2 is devoted to the extension of the SPH method to assembly problems. Finally, Section 3 presents the generalisation to general multigroup problems. Numerical results are obtained for a PXR rod bundle assembly in Section 4

  14. Splines and variational methods

    CERN Document Server

    Prenter, P M

    2008-01-01

    One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimension

  15. Probabilistic methods for physics

    International Nuclear Information System (INIS)

    Cirier, G

    2013-01-01

    We present an asymptotic method giving a probability of presence of the iterated spots of ℝ^d under a polynomial function f. We use the well-known Perron-Frobenius (PF) operator, which leaves certain sets and measures invariant under f. Probabilistic solutions can exist for the deterministic iteration. If the theoretical result is already known, here we quantify these probabilities. This approach seems interesting for computing situations where the deterministic methods fail. Among the examined applications are asymptotic solutions of the Lorenz, Navier-Stokes and Hamilton equations. In this approach, linearity induces many difficult problems, not all of which we have yet resolved.

  16. METHOD OF ROLLING URANIUM

    Science.gov (United States)

    Smith, C.S.

    1959-08-01

    A method is described for rolling uranium metal at relatively low temperatures and under non-oxidizing conditions. The method involves the steps of heating the uranium to 200 deg C in an oil bath, withdrawing the uranium and permitting the oil to drain so that only a thin protective coating remains and rolling the oil coated uranium at a temperature of 200 deg C to give about a 15% reduction in thickness at each pass. The operation may be repeated to accomplish about a 90% reduction without edge cracking, checking or any appreciable increase in brittleness.
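    The quoted figures are mutually consistent: at 15% reduction per pass, each pass keeps 85% of the thickness, so a roughly 90% overall reduction takes about fifteen passes. A quick check of the arithmetic (a sketch, not part of the patent):

    ```python
    import math

    per_pass_remaining = 0.85   # each pass keeps 85% of the thickness
    target_remaining = 0.10     # a ~90% overall reduction leaves <= 10%

    # Solve 0.85**n <= 0.10 for the smallest integer n.
    passes = math.ceil(math.log(target_remaining) / math.log(per_pass_remaining))
    print(passes)                         # 15
    print(per_pass_remaining ** passes)   # ~0.087, i.e. about a 91% total reduction
    ```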

  17. Supercritical fluid analytical methods

    International Nuclear Information System (INIS)

    Smith, R.D.; Kalinoski, H.T.; Wright, B.W.; Udseth, H.R.

    1988-01-01

    Supercritical fluids are providing the basis for new and improved methods across a range of analytical technologies. New methods are being developed to allow the detection and measurement of compounds that are incompatible with conventional analytical methodologies. Characterization of process and effluent streams for synfuel plants requires instruments capable of detecting and measuring high-molecular-weight compounds, polar compounds, or other materials that are generally difficult to analyze. The purpose of this program is to develop and apply new supercritical fluid techniques for extraction, separation, and analysis. These new technologies will be applied to previously intractable synfuel process materials and to complex mixtures resulting from their interaction with environmental and biological systems

  18. A flexible homework method

    Science.gov (United States)

    Bao, Lei; Stonebraker, Stephen R.; Sadaghiani, Homeyra

    2008-09-01

    The traditional methods of assigning and grading homework in large enrollment physics courses have raised concerns among many instructors and students. In this paper we discuss a cost-effective approach to managing homework that involves making half of the problem solutions available to students before the homework is due. In addition, students are allowed some control in choosing which problems to solve. This paper-based approach to homework provides more detailed and timely support to students and increases the amount of self-direction in the homework process. We describe the method and present preliminary results on how students have responded.

  19. APPLICATION OF THE SPERM CHROMATIN STRUCTURE ASSAY TO THE TEPLICE PROGRAM SEMEN STUDIES: A NEW METHOD FOR EVALUATING SPERM NUCLEAR CHROMATIN DAMAGE

    Science.gov (United States)

    ABSTRACTA measure of sperm chromatin integrity was added to the routine semen end points evaluated in the Teplice Program male reproductive health studies. To address the hypothesis that exposure to periods of elevated air pollution may be associated with abnormalities in sp...

  20. Molecular methods for biofilms

    KAUST Repository

    Ferrera, Isabel; Balagué , Vanessa; Voolstra, Christian R.; Aranda, Manuel; Bayer, Till; Abed, Raeid M.M.; Dobretsov, Sergey; Owens, Sarah M.; Wilkening, Jared; Fessler, Jennifer L.; Gilbert, Jack A.

    2014-01-01

    This chapter deals with both classical and modern molecular methods that can be useful for the identification of microorganisms, elucidation and comparison of microbial communities, and investigation of their diversity and functions. The most important and critical step necessary for all molecular methods is DNA isolation from microbial communities and environmental samples; this is discussed in the first part. The second part provides an overview of DNA polymerase chain reaction (PCR) amplification and DNA sequencing methods. Protocols and analysis software as well as potential pitfalls associated with application of these methods are discussed. Community fingerprinting analyses that can be used to compare multiple microbial communities are discussed in the third part. This part focuses on Denaturing Gradient Gel Electrophoresis (DGGE), Terminal Restriction Fragment Length Polymorphism (T-RFLP) and Automated rRNA Intergenic Spacer Analysis (ARISA) methods. In addition, classical and next-generation metagenomics methods are presented. These are limited to bacterial artificial chromosome and fosmid libraries and to Sanger and next-generation 454 sequencing, as these methods are currently the most frequently used in research. Isolation of nucleic acids: nucleic acid isolation methods generally include three steps: cell lysis, removal of unwanted substances, and a final step of DNA purification and recovery. The first critical step is cell lysis, which can be achieved by enzymatic or mechanical procedures. Removal of proteins, polysaccharides and other unwanted substances is likewise important to avoid their interference in subsequent analyses. Phenol-chloroform-isoamyl alcohol is commonly used to recover DNA, since it separates nucleic acids into an aqueous phase and precipitates proteins and

  2. Software specification methods

    CERN Document Server

    Habrias, Henri

    2010-01-01

    This title provides a clear overview of the main methods, and has a practical focus that allows the reader to apply their knowledge to real-life situations. The following are just some of the techniques covered: UML, Z, TLA+, SAZ, B, OMT, VHDL, Estelle, SDL and LOTOS.

  3. Leak detection method

    International Nuclear Information System (INIS)

    1978-01-01

    This invention provides a method for removing nuclear fuel elements from a fabrication building while at the same time testing the fuel elements for leaks without releasing contaminants from the fabrication building or from the fuel elements. The vacuum source used, leak detecting mechanism and fuel element fabrication building are specified to withstand environmental hazards. (UK)

  4. Photovoltaic device and method

    Science.gov (United States)

    Cleereman, Robert J; Lesniak, Michael J; Keenihan, James R; Langmaid, Joe A; Gaston, Ryan; Eurich, Gerald K; Boven, Michelle L

    2015-01-27

    The present invention is premised upon an improved photovoltaic device ("PVD") and method of use, more particularly to an improved photovoltaic device with an integral locator and electrical terminal mechanism for transferring current to or from the improved photovoltaic device and the use as a system.

  5. Methods for Risk Analysis

    International Nuclear Information System (INIS)

    Alverbro, Karin

    2010-01-01

    Many decision-making situations today affect humans and the environment. In practice, many such decisions are made without an overall view and prioritise one or other of the two areas. Now and then these two areas of regulation come into conflict, e.g. the best alternative as regards environmental considerations is not always the best from a human safety perspective and vice versa. This report was prepared within a major project with the aim of developing a framework in which both the environmental aspects and the human safety aspects are integrated, and decisions can be made taking both fields into consideration. The safety risks have to be analysed in order to be successfully avoided and one way of doing this is to use different kinds of risk analysis methods. There is an abundance of existing methods to choose from and new methods are constantly being developed. This report describes some of the risk analysis methods currently available for analysing safety and examines the relationships between them. The focus here is mainly on human safety aspects

  6. HEV and cirrhosis: methods

    Indian Academy of Sciences (India)

    HEV and cirrhosis: methods. Study group. Patients with cirrhosis and recent jaundice for <30 d. Controls. Patients with liver cirrhosis but no recent worsening. Exclusions. Significant alcohol consumption. Recent hepatotoxic drugs. Recent antiviral therapy. Recent ...

  7. Method of killing microorganisms

    International Nuclear Information System (INIS)

    Tensmeyer, L.G.

    1980-01-01

    A method of sterilizing the contents of containers involves exposure to a plasma induced therein by focusing a high-power laser beam in an electromagnetic field preferably for a period of from 1.0 millisec to 1.0 secs. (U.K.)

  8. Method of signal analysis

    International Nuclear Information System (INIS)

    Berthomier, Charles

    1975-01-01

    A method capable of handling the amplitude and frequency time laws of a certain kind of geophysical signal is described here. This method is based upon the analytic signal idea of Gabor and Ville, which is constructed either in the time domain, by adding an imaginary part to the real signal (the in-quadrature signal), or in the frequency domain, by suppressing negative frequency components. The instantaneous frequency of the initial signal is then defined as the time derivative of the phase of the analytic signal, and its amplitude, or envelope, as the modulus of this complex signal. The method is applied to three types of magnetospheric signals: chorus, whistlers and pearls. The results obtained by analog and numerical calculations are compared to results obtained by classical systems using filters, i.e. based upon a different definition of the concept of frequency. The precision with which the frequency-time laws are determined then leads to an examination of the principle of the method, to a definition of the instantaneous power density spectrum attached to the signal, and to the first consequences of this definition. In this way, a two-dimensional representation of the signal is introduced which is less deformed by the properties of the analysis system than the usual representation, and which moreover has the advantage of being obtainable practically in real time [fr]
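    The Gabor-Ville construction described above can be sketched numerically. The chirp parameters below are illustrative choices, not values from the paper:

    ```python
    import numpy as np

    def analytic_signal(x: np.ndarray) -> np.ndarray:
        """Build the analytic signal in the frequency domain by suppressing
        negative frequencies (and doubling the positive ones)."""
        n = len(x)
        weights = np.zeros(n)
        weights[0] = 1.0
        weights[1:(n + 1) // 2] = 2.0
        if n % 2 == 0:
            weights[n // 2] = 1.0
        return np.fft.ifft(np.fft.fft(x) * weights)

    # Test signal: a unit-amplitude chirp sweeping upward from 50 Hz at 40 Hz/s.
    fs = 1000.0
    t = np.arange(0.0, 1.0, 1.0 / fs)
    x = np.cos(2.0 * np.pi * (50.0 * t + 0.5 * 40.0 * t**2))

    z = analytic_signal(x)
    envelope = np.abs(z)                               # instantaneous amplitude
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)    # time derivative of the phase
    # Away from the edges, envelope ~ 1 and inst_freq ~ 50 + 40 t.
    ```

    Note that the recovered frequency is a single instantaneous value per sample, which is the conceptual difference from filter-bank analysis that the abstract emphasises.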

  9. TRAC methods and models

    International Nuclear Information System (INIS)

    Mahaffy, J.H.; Liles, D.R.; Bott, T.F.

    1981-01-01

    The numerical methods and physical models used in the Transient Reactor Analysis Code (TRAC) versions PD2 and PF1 are discussed. Particular emphasis is placed on TRAC-PF1, the version specifically designed to analyze small-break loss-of-coolant accidents

  10. The Prescribed Velocity Method

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm

    The velocity level in a room ventilated by jet ventilation is strongly influenced by the supply conditions. The momentum flow in the supply jets controls the air movement in the room and, therefore, it is very important that the inlet conditions and the numerical method can generate a satisfactory

  11. Immunocytochemical methods and protocols

    National Research Council Canada - National Science Library

    Javois, Lorette C

    1999-01-01

    ... monoclonal antibodies to study cell differentiation during embryonic development. For a select few disciplines volumes have been published focusing on the specific application of immunocytochemical techniques to that discipline. What distinguished Immunocytochemical Methods and Protocols from earlier books when it was first published four years ago was i...

  12. Adhesive compositions and methods

    Science.gov (United States)

    Allen, Scott D.; Sendijarevic, Vahid; O'Connor, James

    2017-12-05

    The present invention encompasses polyurethane adhesive compositions comprising aliphatic polycarbonate chains. In one aspect, the present invention encompasses polyurethane adhesives derived from aliphatic polycarbonate polyols and polyisocyanates wherein the polyol chains contain a primary repeating unit having a structure:. In another aspect, the invention provides articles comprising the inventive polyurethane compositions as well as methods of making such compositions.

  13. Ferrari's Method and Technology

    Science.gov (United States)

    Althoen, Steve

    2005-01-01

    Some tips that combine knowledge of mathematics history and technology for adapting Ferrari's method to factor quintics with a TI-83 graphing calculator are presented. A demonstration of the use of the root finder and regression capabilities of the graphing calculator is presented, so that the tips can be easily adapted for any graphing calculator…

  14. Truth and Methods.

    Science.gov (United States)

    Dasenbrock, Reed Way

    1995-01-01

    Examines literary theory's displacing of "method" in the New Historicist criticism. Argues that Stephen Greenblatt and Lee Paterson imply that no objective historical truth is possible and as a result do not give methodology its due weight in their criticism. Questions the theory of "truth" advanced in this vein of literary…

  15. Sparse Classification - Methods & Applications

    DEFF Research Database (Denmark)

    Einarsson, Gudmundur

    for analysing such data carry the potential to revolutionize tasks such as medical diagnostics where often decisions need to be based on only a few high-dimensional observations. This explosion in data dimensionality has sparked the development of novel statistical methods. In contrast, classical statistics...

  16. Method of complex scaling

    International Nuclear Information System (INIS)

    Braendas, E.

    1986-01-01

    The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented
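    As standard background for the technique (the notation below is ours, not the abstract's): coordinates are dilated by a complex phase, the continuous spectrum rotates about each threshold, bound-state eigenvalues stay fixed, and resonances emerge as isolated complex eigenvalues:

    ```latex
    r \;\longrightarrow\; r\,e^{i\theta}, \qquad \theta > 0,
    \qquad
    H(\theta) \;=\; -\tfrac{1}{2}\,e^{-2i\theta}\,\nabla^{2}
      \;+\; V\!\left(r\,e^{i\theta}\right),
    \qquad
    E_{\mathrm{res}} \;=\; E_{r} \;-\; \tfrac{i}{2}\,\Gamma .
    ```

    The abstract's warning about operator domains applies here: H(θ) is non-Hermitian, and the dilation is only well defined for potentials analytic in the rotated coordinate.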

  17. Alternative methods in criticality

    International Nuclear Information System (INIS)

    Pedicini, J.M.

    1982-01-01

    In this thesis two new methods of calculating the criticality of a nuclear system are introduced and verified. Most methods of determining the criticality of a nuclear system depend implicitly upon knowledge of the angular flux, net currents, or moments of the angular flux on the system surface in order to know the leakage. For small systems, leakage is the predominant element in criticality calculations. Unfortunately, in these methods the least accurate fluxes, currents, or moments are those occurring near system surfaces or interfaces. This is due to a mathematical inability to satisfy rigorously, with a finite-order angular polynomial expansion or angular difference technique, the physical boundary conditions which occur on these surfaces. Consequently, one must accept large computational effort or less precise criticality calculations. The methods introduced in this thesis, including a direct leakage operator and an indirect multiple scattering leakage operator, obviate the need to know angular fluxes accurately at system boundaries. Instead, the system-wide scalar flux, an integral quantity which is substantially easier to obtain with good precision, is sufficient to obtain production, absorption, scattering, and leakage rates

  18. Materials and Methods

    African Journals Online (AJOL)

    David Norris

    genetic variance and its distribution in the population structure can lead to the design of optimum ... Recent developments in statistical methods and computing algorithms ..... This may be an indication of the general effect of the population structure. .... Presentation at the 40th anniversary, Institute of Genetics and Animal.

  19. Biomass treatment method

    Science.gov (United States)

    Friend, Julie; Elander, Richard T.; Tucker III, Melvin P.; Lyons, Robert C.

    2010-10-26

    A method for treating biomass was developed that uses an apparatus which moves a biomass and dilute aqueous ammonia mixture through reaction chambers without compaction. The apparatus moves the biomass using a non-compressing piston. The resulting treated biomass is saccharified to produce fermentable sugars.

  20. Embodied Design Ideation methods

    DEFF Research Database (Denmark)

    Wilde, Danielle; Vallgårda, Anna; Tomico, Oscar

    2017-01-01

    Embodied design ideation practices work with relationships between body, material and context to enliven design and research potential. Methods are often idiosyncratic and – due to their physical nature – not easily transferred. This presents challenges for designers wishing to develop and share ...

  1. gel template method

    Indian Academy of Sciences (India)

    TiO2 nanotubes have been synthesized by sol–gel template method using alumina membrane. Scanning electron microscopy (SEM), transmission electron microscopy (TEM), Raman spectroscopy, UV absorption spectrum and X-ray diffraction techniques have been used to investigate the structure, morphology and optical ...

  2. Audience Methods and Gratifications.

    Science.gov (United States)

    Lull, James

    A model of need gratification inspired by the work of K.E. Rosengren suggests a theoretical framework making it possible to identify, measure, and assess the components of the need gratification process with respect to the mass media. Methods having cognitive and behavioral components are designed by individuals to achieve need gratification. Deep…

  3. Method for forming ammonia

    Science.gov (United States)

    Kong, Peter C.; Pink, Robert J.; Zuck, Larry D.

    2008-08-19

    A method for forming ammonia is disclosed and which includes the steps of forming a plasma; providing a source of metal particles, and supplying the metal particles to the plasma to form metal nitride particles; and providing a substance, and reacting the metal nitride particles with the substance to produce ammonia, and an oxide byproduct.

  4. Fashion, Mediations & Method Assemblages

    DEFF Research Database (Denmark)

    Sommerlund, Julie; Jespersen, Astrid Pernille

    of handling multiple, fluid realities with multiple, fluid methods. Empirically, the paper works with mediation in fashion - that is efforts the active shaping of relations between producer and consumer through communication, marketing and PR. Fashion mediation is by no means simple, but organise complex...

  5. Universal Image Steganalytic Method

    Directory of Open Access Journals (Sweden)

    V. Banoci

    2014-12-01

    In the paper we introduce a new universal steganalytic method for the JPEG file format that detects well-known as well as newly developed steganographic methods. The steganalytic model is trained on the MHF-DZ steganographic algorithm previously designed by the same authors. A calibration technique with Feature Based Steganalysis (FBS) was employed in order to identify statistical changes caused by embedding secret data into an original image. The steganalyzer uses Support Vector Machine (SVM) classification to train a model that is later used to distinguish between a clean (cover) image and a steganographic image. The aim of the paper was to analyze the variation in detection accuracy (ACR) when detecting steganographic algorithms such as F5, Outguess, Model Based Steganography without deblocking, and JP Hide and Seek, which represent generally used steganographic tools. A comparison of four feature vectors of different lengths, FBS(22), FBS(66), FBS(274) and FBS(285), shows promising results for the proposed universal steganalytic method compared to binary methods.
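The core training step the abstract describes (an SVM separating cover from stego feature vectors) can be sketched with a minimal numpy-only linear SVM trained by Pegasos-style subgradient descent. This is illustrative only: the function names and the toy 4-dimensional features are hypothetical and are not the authors' FBS feature vectors or trained steganalyzer.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM via Pegasos-style subgradient descent.
    Labels y must be in {-1, +1} (e.g. cover = -1, stego = +1)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            if y[i] * (w @ X[i]) < 1:      # hinge-loss margin violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w    # regularization shrink only
    return w

def predict(w, X):
    """Classify by the sign of the decision function."""
    return np.where(X @ w >= 0, 1, -1)

# Toy "cover vs. stego" feature vectors: two well-separated clusters.
rng = np.random.default_rng(1)
cover = rng.normal(loc=-1.0, scale=0.3, size=(50, 4))
stego = rng.normal(loc=+1.0, scale=0.3, size=(50, 4))
X = np.vstack([cover, stego])
y = np.r_[np.full(50, -1), np.full(50, +1)]

w = train_linear_svm(X, y)
accuracy = float(np.mean(predict(w, X) == y))
```

On this separable toy set the training accuracy should be near 1.0; the paper's setting differs in using much longer FBS feature vectors and a held-out test set.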

  6. Ergonomics research methods

    Science.gov (United States)

    Uspenskiy, S. I.; Yermakova, S. V.; Chaynova, L. D.; Mitkin, A. A.; Gushcheva, T. M.; Strelkov, Y. K.; Tsvetkova, N. F.

    1973-01-01

    Various factors used in ergonomic research are given. They are: (1) anthropometric measurement, (2) the polyeffector method of assessing the functional state of man, (3) galvanic skin reaction, (4) pneumography, (5) electromyography, (6) electrooculography, and (7) tachistoscopy. A brief summary of each factor is given, including instrumentation and results.

  7. Research Methods in Sociolinguistics

    Science.gov (United States)

    Hernández-Campoy, Juan Manuel

    2014-01-01

    The development of Sociolinguistics has been qualitatively and quantitatively outstanding within Linguistic Science since its beginning in the 1950s, with a steady growth in both theoretical and methodological developments as well as in its interdisciplinary directions within the spectrum of language and society. Field methods in sociolinguistic…

  8. Kriging : Methods and Applications

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2017-01-01

    In this chapter we present Kriging, also known as a Gaussian process (GP) model, which is a mathematical interpolation method. To select the input combinations to be simulated, we use Latin hypercube sampling (LHS); we allow uniform and non-uniform distributions of the simulation inputs. Besides
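The interpolation idea in the abstract can be sketched in a few lines of numpy: fit a zero-mean GP (simple Kriging) to sampled inputs and predict at new points. This is a minimal sketch under stated assumptions, not the chapter's implementation: it assumes a squared-exponential covariance with a fixed length scale, noise-free outputs, and a one-dimensional stratified sample standing in for full LHS; the names `rbf_kernel` and `kriging_predict` are illustrative.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential covariance between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def kriging_predict(x_train, y_train, x_new, length_scale=1.0, nugget=1e-10):
    """Simple Kriging (zero-mean GP) prediction at x_new.
    The tiny nugget keeps the covariance matrix numerically invertible."""
    K = rbf_kernel(x_train, x_train, length_scale) + nugget * np.eye(len(x_train))
    k_star = rbf_kernel(x_new, x_train, length_scale)
    weights = np.linalg.solve(K, y_train)   # K^{-1} y
    return k_star @ weights

# One-dimensional stratified sample of [0, 2*pi), a simple stand-in for LHS:
# one uniform draw per stratum.
rng = np.random.default_rng(0)
strata = np.arange(8)
x_train = (strata + rng.uniform(size=8)) / 8 * 2 * np.pi
y_train = np.sin(x_train)                   # toy simulation output

x_new = np.array([1.0, 2.5])
y_pred = kriging_predict(x_train, y_train, x_new)
```

A defining property of Kriging as an interpolator is that, with (near-)zero nugget, predictions at the sampled inputs reproduce the observed outputs exactly.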

  9. Sampling system and method

    Science.gov (United States)

    Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee

    2013-04-16

    The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.

  10. Six Sigma method

    NARCIS (Netherlands)

    Does, R.J.M.M.; de Mast, J.; Balakrishnan, N.; Brandimarte, P.; Everitt, B.; Molenberghs, G.; Piegorsch, W.; Ruggeri, F.

    2015-01-01

    Six Sigma is built on principles and methods that have proven themselves over the twentieth century. It has incorporated the most effective approaches and integrated them into a full program. It offers a management structure for organizing continuous improvement of routine tasks, such as

  11. Modern Reduction Methods

    CERN Document Server

    Andersson, Pher G

    2008-01-01

    With its comprehensive overview of modern reduction methods, this book features high quality contributions allowing readers to find reliable solutions quickly and easily. The monograph treats the reduction of carbonyls, alkenes, imines and alkynes, as well as reductive aminations and cross- and Heck couplings, before finishing off with sections on kinetic resolutions and hydrogenolysis. An indispensable lab companion for every chemist.

  12. Methods of information processing

    Energy Technology Data Exchange (ETDEWEB)

    Kosarev, Yu G; Gusev, V D

    1978-01-01

    Works are presented on automation systems for editing and publishing operations using methods of processing symbolic information and information contained in a training sample (ranking of objects by promise, a classification algorithm for tones and noise). The book will be of interest to specialists in the automated processing of textual information, programming, and pattern recognition.

  13. Dual completion method

    Energy Technology Data Exchange (ETDEWEB)

    Mamedov, N Ya; Kadymova, K S; Dzhafarov, Sh T

    1963-10-28

    One type of dual completion method utilizes a single tubing string. Through the use of the proper tubing equipment, the fluid from the low-productive upper formation is lifted by utilizing the surplus energy of a submerged pump, which handles the production from the lower stratum.

  14. Methods Evolved by Observation

    Science.gov (United States)

    Montessori, Maria

    2016-01-01

    Montessori's idea of the child's nature and the teacher's perceptiveness begins with amazing simplicity, and when she speaks of "methods evolved," she is unveiling a methodological system for observation. She begins with the early childhood explosion into writing, which is a familiar child phenomenon that Montessori has written about…

  15. Alternative methods in criticality

    International Nuclear Information System (INIS)

    Pedicini, J.M.

    1982-01-01

    Two new methods of calculating the criticality of a nuclear system are introduced and verified. Most methods of determining the criticality of a nuclear system depend implicitly upon knowledge of the angular flux, net currents, or moments of the angular flux on the system surface in order to know the leakage. For small systems, leakage is the predominant element in criticality calculations. Unfortunately, in these methods the least accurate fluxes, currents, or moments are those occurring near system surfaces or interfaces. This is due to a mathematical inability to rigorously satisfy, with a finite-order angular polynomial expansion or angular difference technique, the physical boundary conditions on these surfaces. Consequently, one must accept either large computational effort or less precise criticality calculations. The methods introduced in this thesis, including a direct leakage operator and an indirect multiple scattering leakage operator, obviate the need to know angular fluxes accurately at system boundaries. Instead, the system-wide scalar flux, an integral quantity which is substantially easier to obtain with good precision, is sufficient to obtain production, absorption, scattering, and leakage rates
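In schematic form (not the thesis's specific leakage operators), the criticality balance the abstract refers to can be written as

\[
k_{\mathrm{eff}} \;=\; \frac{\int_V \nu\Sigma_f(\mathbf{r})\,\phi(\mathbf{r})\,dV}{\int_V \Sigma_a(\mathbf{r})\,\phi(\mathbf{r})\,dV \;+\; L},
\]

where \(\phi\) is the system-wide scalar flux, \(\nu\Sigma_f\) and \(\Sigma_a\) are the production and absorption cross sections, and \(L\) is the leakage rate. The contribution described above is obtaining \(L\) from \(\phi\) alone, rather than from the less accurate angular fluxes at the system surface.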

  16. WATER CHEMISTRY ASSESSMENT METHODS

    Science.gov (United States)

    This section summarizes and evaluates the surface water column chemistry assessment methods for USEPA/EMAP-SW, USGS-NAQA, USEPA-RBP, Ohio EPA, and MDNR-MBSS. The basic objective of surface water column chemistry assessment is to characterize surface water quality by measuring a sui...

  17. Analysis of numerical methods

    CERN Document Server

    Isaacson, Eugene

    1994-01-01

    This excellent text for advanced undergraduates and graduate students covers norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, and other topics. It offers a careful analysis and stresses techniques for developing new methods, plus many examples and problems. 1966 edition.

  18. Die singulation method

    Science.gov (United States)

    Swiler, Thomas P.; Garcia, Ernest J.; Francis, Kathryn M.

    2013-06-11

    A method is disclosed for singulating die from a semiconductor substrate (e.g. a semiconductor-on-insulator substrate or a bulk silicon substrate) containing an oxide layer (e.g. silicon dioxide or a silicate glass) and one or more semiconductor layers (e.g. monocrystalline or polycrystalline silicon) located above the oxide layer. The method etches trenches through the substrate and through each semiconductor layer about the die being singulated, with the trenches being offset from each other around at least a part of the die so that the oxide layer between the trenches holds the substrate and die together. The trenches can be anisotropically etched using a Deep Reactive Ion Etching (DRIE) process. After the trenches are etched, the oxide layer between the trenches can be etched away with an HF etchant to singulate the die. A release fixture can be located near one side of the substrate to receive the singulated die.

  19. Epitope prediction methods

    DEFF Research Database (Denmark)

    Karosiene, Edita

    …Analysis. The chapter provides detailed explanations on how to use different methods for T cell epitope discovery research, explaining how input should be given as well as how to interpret the output. In the last chapter, I present the results of a bioinformatics analysis of epitopes from the yellow fever… peptide-MHC interactions. Furthermore, using yellow fever virus epitopes, we demonstrated the power of the %Rank score when compared with the binding affinity score of MHC prediction methods, suggesting that this score should be considered for selecting potential T cell epitopes. In summary… immune responses. Therefore, it is of great importance to be able to identify peptides that bind to MHC molecules, in order to understand the nature of immune responses and discover T cell epitopes useful for designing new vaccines and immunotherapies. MHC molecules in humans, referred to as human…

  20. Methods for forming particles

    Science.gov (United States)

    Fox, Robert V.; Zhang, Fengyan; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin

    2016-06-21

    Single source precursors or pre-copolymers of single source precursors are subjected to microwave radiation to form particles of a I-III-VI.sub.2 material. Such particles may be formed in a wurtzite phase and may be converted to a chalcopyrite phase by, for example, exposure to heat. The particles in the wurtzite phase may have a substantially hexagonal shape that enables stacking into ordered layers. The particles in the wurtzite phase may be mixed with particles in the chalcopyrite phase (i.e., chalcopyrite nanoparticles) that may fill voids within the ordered layers of the particles in the wurtzite phase, thus producing films with good coverage. In some embodiments, the methods are used to form layers of semiconductor materials comprising a I-III-VI.sub.2 material. Devices such as, for example, thin-film solar cells may be fabricated using such methods.