WorldWideScience

Sample records for experimental design statistical

  1. Fundamentals of statistical experimental design and analysis

    CERN Document Server

    Easterling, Robert G

    2015-01-01

    Professionals in all areas - business; government; the physical, life, and social sciences; engineering; medicine, etc. - benefit from using statistical experimental design to better understand their worlds and then use that understanding to improve the products, processes, and programs they are responsible for. This book aims to provide the practitioners of tomorrow with a memorable, easy-to-read, engaging guide to statistics and experimental design. It uses examples, drawn from a variety of established texts, and embeds them in a business or scientific context, seasoned with a dash of humor, to emphasize the issues and ideas that led to the experiment and the what-do-we-do-next? steps after the experiment. Graphical data displays are emphasized as means of discovery and communication, and formulas are minimized, with a focus on interpreting the results that software produces. The role of subject-matter knowledge, and passion, is also illustrated. The examples do not require specialized knowledge, and t...

  2. Statistical experimental design for refractory coatings

    International Nuclear Information System (INIS)

    McKinnon, J.A.; Standard, O.C.

    2000-01-01

    The production of refractory coatings on metal casting moulds is critically dependent on the development of suitable rheological characteristics, such as viscosity and thixotropy, in the initial coating slurry. In this paper, the basic concepts of mixture design and analysis are applied to the formulation of a refractory coating, with illustration by a worked example. Experimental data of coating viscosity versus composition are fitted to a statistical model to obtain a reliable method of predicting the optimal formulation of the coating. Copyright (2000) The Australian Ceramic Society

  3. Experimental toxicology: Issues of statistics, experimental design, and replication.

    Science.gov (United States)

    Briner, Wayne; Kirwan, Jeral

    2017-01-01

    The difficulty of replicating experiments has drawn considerable attention. Issues with replication occur for a variety of reasons ranging from experimental design to laboratory errors to inappropriate statistical analysis. Here we review a variety of guidelines for statistical analysis, design, and execution of experiments in toxicology. In general, replication can be improved by using hypothesis driven experiments with adequate sample sizes, randomization, and blind data collection techniques. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Statistical experimental design for saltstone mixtures

    International Nuclear Information System (INIS)

    Harris, S.P.; Postles, R.L.

    1992-01-01

    The authors used a mixture experimental design for determining a window of operability for a process at the U.S. Department of Energy, Savannah River Site, Defense Waste Processing Facility (DWPF). The high-level radioactive waste at the Savannah River Site is stored in large underground carbon steel tanks. The waste consists of a supernate layer and a sludge layer. Cesium-137 will be removed from the supernate by precipitation and filtration. After further processing, the supernate layer will be fixed as a grout for disposal in concrete vaults. The remaining precipitate will be processed at the DWPF with treated waste tank sludge and glass-making chemicals into borosilicate glass. The leach-rate properties of the supernate grout, formed from various mixes of solidified salt waste, needed to be determined; the effective diffusion coefficients for NO₃ and chromium were used as a measure of leach rate. Various mixes of cement, Ca(OH)₂, salt, slag, and fly ash were used. These constituents comprise the whole mix. Thus, a mixture experimental design was used. The regression procedure (PROC REG) in SAS was used to produce analysis of variance (ANOVA) statistics. In addition, detailed model diagnostics are readily available for identifying suspicious observations. For convenience, trilinear contour (TLC) plots of the fitted model, a standard graphics tool for examining mixture response surfaces, were produced using ECHIP.
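
    In a mixture design the components are proportions that sum to one, so the response is usually fitted with a Scheffé polynomial (no intercept; linear blending terms plus pairwise interactions) rather than an ordinary factorial model. Below is a minimal Python sketch of such a fit using statsmodels, restricted to three of the five constituents for brevity; the seven blend points and all leach responses are hypothetical illustrations, not the study's data.

    ```python
    # Sketch: fitting a Scheffe quadratic mixture model, analogous to the
    # SAS PROC REG analysis described above. All data values are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame(
        [
            # cement, slag, flyash proportions (sum to 1), hypothetical leach response
            (1.0, 0.0, 0.0, 2.0),
            (0.0, 1.0, 0.0, 1.4),
            (0.0, 0.0, 1.0, 1.8),
            (0.5, 0.5, 0.0, 1.5),
            (0.5, 0.0, 0.5, 1.7),
            (0.0, 0.5, 0.5, 1.3),
            (1 / 3, 1 / 3, 1 / 3, 1.5),
        ],
        columns=["cement", "slag", "flyash", "leach"],
    )

    # Scheffe quadratic model: suppress the intercept ("- 1") because the
    # proportions are linearly dependent (they sum to one).
    model = smf.ols(
        "leach ~ cement + slag + flyash"
        " + cement:slag + cement:flyash + slag:flyash - 1",
        data=df,
    ).fit()
    print(model.params)
    ```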

  5. Experimental design matters for statistical analysis

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Schaarschmidt, Frank; Onofri, Andrea

    2018-01-01

    … the experimental design is often more or less neglected when analyzing data. Two data examples were analyzed using different modelling strategies: Firstly, in a randomized complete block design, mean heights of maize treated with a herbicide and one of several adjuvants were compared. Secondly, translocation … of an insecticide applied to maize as a seed treatment was evaluated using incomplete data from an unbalanced design with several layers of hierarchical sampling. Extensive simulations were carried out to further substantiate the effects of different modelling strategies. RESULTS: It was shown that results from sub…

  6. Statistical experimental design for saltstone mixtures

    International Nuclear Information System (INIS)

    Harris, S.P.; Postles, R.L.

    1991-01-01

    We used a mixture experimental design for determining a window of operability for a process at the Savannah River Site Defense Waste Processing Facility (DWPF). The high-level radioactive waste at the Savannah River Site is stored in large underground carbon steel tanks. The waste consists of a supernate layer and a sludge layer. ¹³⁷Cs will be removed from the supernate by precipitation and filtration. After further processing, the supernate layer will be fixed as a grout for disposal in concrete vaults. The remaining precipitate will be processed at the DWPF with treated waste tank sludge and glass-making chemicals into borosilicate glass. The leach rate properties of the supernate grout, formed from various mixes of solidified salt waste, needed to be determined. The effective diffusion coefficients for NO₃ and Cr were used as a measure of leach rate. Various mixes of cement, Ca(OH)₂, salt, slag, and fly ash were used. These constituents comprise the whole mix. Thus, a mixture experimental design was used.

  7. Statistical experimental design approach in coal briquetting

    Energy Technology Data Exchange (ETDEWEB)

    B. Salopek; S. Pfaff; R. Rajic

    2003-07-01

    The influence of the pressure, temperature, humidity and granulation of the coal upon the resistance to pressure and the water absorption of the briquettes has been tested, with the aim of examining how each of the two dependent variables changes with the values assumed by any of the four independent variables, and which of these independent variables significantly influence the dependent ones. A full factorial design with 16 experiments and a central composite design with 27 experiments have been applied. The influence of the independent variables upon the dependent ones has been examined by applying analysis of variance. The effects of the individual factors and of their interactions upon the dependent variables have been stated, as well as the coefficients of the curvilinear equations. 2 refs., 2 figs., 5 tabs.
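
    The 16-experiment design mentioned above is a 2⁴ full factorial: every combination of four two-level factors. A minimal Python sketch of generating it and screening factor effects by ANOVA follows; the factor names track the abstract, but the strength responses are hypothetical.

    ```python
    # Sketch: a 2^4 full factorial (16 runs) with effects screened by ANOVA.
    # Response values are hypothetical, not the study's measurements.
    from itertools import product

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    factors = ["pressure", "temperature", "humidity", "granulation"]
    runs = pd.DataFrame(list(product([-1, 1], repeat=4)), columns=factors)

    # Hypothetical compressive-strength response for each of the 16 runs.
    runs["strength"] = [
        12, 18, 13, 20, 11, 17, 14, 21,
        15, 22, 16, 24, 14, 21, 17, 25,
    ]

    model = smf.ols(
        "strength ~ pressure * temperature + humidity + granulation",
        data=runs,
    ).fit()
    print(sm.stats.anova_lm(model, typ=2))
    ```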

  8. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins, which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, the focus is on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels, as well as methods for data preprocessing are covered.

  9. Experimental statistics

    CERN Document Server

    Natrella, Mary Gibbons

    1963-01-01

    Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations

  10. Statistical evaluation of SAGE libraries: consequences for experimental design

    NARCIS (Netherlands)

    Ruijter, Jan M.; van Kampen, Antoine H. C.; Baas, Frank

    2002-01-01

    Since the introduction of serial analysis of gene expression (SAGE) as a method to quantitatively analyze the differential expression of genes, several statistical tests have been published for the pairwise comparison of SAGE libraries. Testing the difference between the number of specific tags

  11. Experimental design techniques in statistical practice a practical software-based approach

    CERN Document Server

    Gardiner, W P

    1998-01-01

    Provides an introduction to the diverse subject area of experimental design, with many practical and applicable exercises to help the reader understand, present and analyse the data. The pragmatic approach offers technical training for the use of designs and teaches statistical and non-statistical skills in the design and analysis of project studies throughout science and industry. Discusses one-factor designs and blocking designs, factorial experimental designs, Taguchi methods and response surface methods, among other topics.

  12. Optimization of phototrophic hydrogen production by Rhodopseudomonas palustris PBUM001 via statistical experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Jamil, Zadariana [Department of Civil Engineering, Faculty of Engineering, University of Malaya (Malaysia); Faculty of Civil Engineering, Technology University of MARA (Malaysia); Mohamad Annuar, Mohamad Suffian; Vikineswary, S. [Institute of Biological Sciences, University of Malaya (Malaysia); Ibrahim, Shaliza [Department of Civil Engineering, Faculty of Engineering, University of Malaya (Malaysia)

    2009-09-15

    Phototrophic hydrogen production by an indigenous purple non-sulfur bacterium, Rhodopseudomonas palustris PBUM001, from palm oil mill effluent (POME) was optimized using response surface methodology (RSM). The process parameters studied include inoculum size (% v/v), POME concentration (% v/v), light intensity (klux), agitation (rpm) and pH. The experimental data on cumulative hydrogen production and COD reduction were fitted to a quadratic polynomial model using response surface regression analysis. The path to optimal process conditions was determined by analyzing three-dimensional response surface plots and contour plots. Statistical analysis of experimental data collected following a Box-Behnken design showed that 100% (v/v) POME concentration, 10% (v/v) inoculum size, light intensity of 4.0 klux, agitation rate of 250 rpm and pH of 6 were the best conditions. The maximum predicted cumulative hydrogen production and COD reduction obtained under these conditions were 1.05 ml H₂/ml POME and 31.71%, respectively. Subsequent verification experiments at optimal process values gave a maximum cumulative hydrogen yield of 0.66 ± 0.07 ml H₂/ml POME and COD reduction of 30.54 ± 9.85%. (author)
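
    A Box-Behnken layout like the one used above (five factors, three levels each) can be generated programmatically. A sketch follows, assuming the third-party pyDOE2 package; the level ranges are illustrative stand-ins, not the study's exact settings.

    ```python
    # Sketch: generating a five-factor Box-Behnken design and mapping the
    # coded -1/0/+1 levels to physical units. Assumes the pyDOE2 package;
    # the ranges below are illustrative, not the study's actual levels.
    import numpy as np
    from pyDOE2 import bbdesign

    coded = bbdesign(5, center=3)  # runs x 5 matrix of -1/0/+1 levels

    # (low, high) for inoculum %, POME %, light klux, agitation rpm, pH.
    ranges = np.array([[5, 15], [50, 100], [1.0, 4.0], [150, 250], [5, 7]])
    lo, hi = ranges[:, 0], ranges[:, 1]
    actual = lo + (coded + 1) / 2 * (hi - lo)  # linear rescale from [-1, 1]

    print(coded.shape)  # (number of runs, 5)
    print(actual[:5])   # first few run settings in physical units
    ```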

  13. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    Science.gov (United States)

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them.

  14. Tribological behaviour and statistical experimental design of sintered iron-copper based composites

    Science.gov (United States)

    Popescu, Ileana Nicoleta; Ghiţă, Constantin; Bratu, Vasile; Palacios Navarro, Guillermo

    2013-11-01

    The sintered iron-copper based composites for automotive brake pads have a complex composite composition and should have good physical, mechanical and tribological characteristics. In this paper, we obtained frictional composites by the Powder Metallurgy (P/M) technique and characterized them from a microstructural and tribological point of view. The morphology of the raw powders was determined by SEM, and the surfaces of the obtained sintered friction materials were analyzed by ESEM, EDS elemental and compo-image analyses. One lot of samples was tested on a "pin-on-disc" type wear machine under dry sliding conditions, at applied loads between 3.5 × 10⁻¹ and 11.5 × 10⁻¹ MPa and relative speeds in the braking point between 12.5 and 16.9 m/s, at constant temperature. The other lot of samples was tested on an inertial test stand according to a methodology simulating the real conditions of dry friction, at a contact pressure of 2.5-3 MPa, at 300-1200 rpm. The most important characteristics required of sintered friction materials are a high and stable friction coefficient during braking; for high durability in service, they must also have low wear, high corrosion resistance, high thermal conductivity, mechanical resistance and thermal stability at elevated temperature. Because of the importance of the tribological characteristics (wear rate and friction coefficient) of sintered iron-copper based composites, we predicted the tribological behaviour through statistical analysis. For the first lot of samples, the response variables Yi (the wear rate and friction coefficient) were correlated with x1 and x2 (the coded values of applied load and relative speed in the braking point, respectively) using a linear factorial design approach. We obtained brake friction materials with improved wear resistance characteristics and high and stable friction coefficients. It has been shown, through experimental data and the obtained linear regression equations, that the sintered composites wear rate increases

  15. Statistical analysis of sonochemical synthesis of SAPO-34 nanocrystals using Taguchi experimental design

    International Nuclear Information System (INIS)

    Askari, Sima; Halladj, Rouein; Nazari, Mahdi

    2013-01-01

    Highlights: ► Sonochemical synthesis of SAPO-34 nanocrystals. ► Using Taguchi experimental design (L9) for optimizing the experimental procedure. ► The significant effects of all the ultrasonic parameters on the response. - Abstract: SAPO-34 nanocrystals with high crystallinity were synthesized by means of a sonochemical method. An L9 orthogonal array of the Taguchi method was implemented to investigate the effects of sonication conditions on the preparation of SAPO-34 with respect to the crystallinity of the final product phase. The experimental data establish that phase crystallinity is improved by increasing the ultrasonic power and the sonication temperature. In the case of ultrasonic irradiation time, however, an initial increase in crystallinity from 5 min to 15 min is followed by a decrease in crystallinity for longer sonication times.
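
    The L9 orthogonal array mentioned above is small enough to write out directly: nine runs covering up to four 3-level factors so that every pair of levels appears equally often. The sketch below assigns three sonication factors to its first three columns; the power and temperature levels are illustrative assumptions, while the 5-30 min times follow the abstract.

    ```python
    # Sketch: the Taguchi L9(3^4) orthogonal array, levels coded 0/1/2,
    # applied to three 3-level sonication factors (fourth column unused).
    import numpy as np

    L9 = np.array([
        [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
        [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
        [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
    ])

    power = np.array([100, 200, 300])   # W, illustrative
    temp = np.array([25, 40, 55])       # deg C, illustrative
    time = np.array([5, 15, 30])        # min, as in the abstract

    for row in L9:
        p, t, m = power[row[0]], temp[row[1]], time[row[2]]
        print(f"power={p} W, temperature={t} C, time={m} min")
    ```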

  16. Organic biowastes blend selection for composting industrial eggshell by-product: experimental and statistical mixture design.

    Science.gov (United States)

    Soares, Micaela A R; Andrade, Sandra R; Martins, Rui C; Quina, Margarida J; Quinta-Ferreira, Rosa M

    2012-01-01

    Composting is one of the technologies recommended for pre-treating industrial eggshells (ES) before their application in soils, for calcium recycling. However, due to the high inorganic content of ES, a mixture of biodegradable materials is required to assure a successful procedure. In this study, an adequate organic blend composition containing potato peel (PP), grass clippings (GC) and wheat straw (WS) was determined by applying the simplex-centroid mixture design method to achieve a desired moisture content, carbon:nitrogen ratio and free air space for effective composting of ES. A blend of 56% PP, 37% GC and 7% WS was selected and tested in a self-heating reactor, where 10% (w/w) of ES was incorporated. After 29 days of reactor operation, a dry matter reduction of 46% was achieved and thermophilic temperatures were maintained for 15 days, indicating that the blend selected by the statistical approach was adequate for composting of ES.
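
    A simplex-centroid design for q components consists of the 2^q − 1 equal-proportion sub-blends: each vertex, each pairwise midpoint, and the overall centroid. A minimal Python sketch for the three-component blend above:

    ```python
    # Sketch: enumerating the simplex-centroid design points for a
    # three-component blend (potato peel, grass clippings, wheat straw).
    from itertools import combinations

    components = ["PP", "GC", "WS"]
    q = len(components)

    points = []
    # Every non-empty subset of components gives one equal-proportion blend.
    for k in range(1, q + 1):
        for subset in combinations(range(q), k):
            points.append([1.0 / k if i in subset else 0.0 for i in range(q)])

    for p in points:
        print({c: round(x, 3) for c, x in zip(components, p)})
    # 3 vertices + 3 binary midpoints + 1 centroid = 7 candidate blends
    ```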

  17. Experimental statistics for biological sciences.

    Science.gov (United States)

    Bang, Heejung; Davidian, Marie

    2010-01-01

    In this chapter, we cover basic and fundamental principles and methods in statistics - from "What are Data and Statistics?" to "ANOVA and linear regression" - which are the basis of any statistical thinking and undertaking. Readers can easily find the selected topics in most introductory statistics textbooks, but we have tried to assemble and structure them in a succinct and reader-friendly manner in a stand-alone chapter. This text has long been used in real classroom settings for both undergraduate and graduate students who do or do not major in the statistical sciences. We hope that from this chapter readers will understand the key statistical concepts and terminology, how to design a study (experimental or observational), how to analyze the data (e.g., describe the data and/or estimate the parameters and make inferences), and how to interpret the results. This text is most useful as supplemental material while readers take their own statistics courses, or as a reference text and self-teaching guide to accompany the manual of any statistical software.

  18. Experimental Mathematics and Computational Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2009-04-30

    The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.

  19. A statistical approach to the experimental design of the sulfuric acid leaching of gold-copper ore

    Directory of Open Access Journals (Sweden)

    Mendes F.D.

    2003-01-01

    Full Text Available The high grade of copper in the Igarapé Bahia (Brazil) gold-copper ore prevents the direct application of the classic cyanidation process. Copper oxides and sulfides react with cyanides in solution, causing a high consumption of leach reagent and thereby raising processing costs and decreasing the recovery of gold. Studies have shown that a feasible route for this ore would be a pretreatment for copper mineral removal prior to the cyanidation stage. The goal of this experimental work was to study the experimental conditions required for copper removal from Igarapé Bahia gold-copper ore by sulfuric acid leaching, applying a statistical approach to the experimental design. By using the Plackett-Burman method, it was possible to select the variables that had the largest influence on the percentage of copper extracted at the sulfuric acid leaching stage. These were the temperature of the leach solution, the stirring speed, the concentration of sulfuric acid in the leach solution and the particle size of the ore. The influence of the individual effects of these variables and their interactions on the experimental response were analyzed by applying the replicated full factorial design method. Finally, the selected variables were optimized by the ascending path statistical method, which determined the best experimental conditions for leaching to achieve the highest percentage of copper extracted. Under the optimized conditions, the best leaching results showed a copper extraction of 75.5%.
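
    The screening step above is easy to sketch: a Plackett-Burman design varies all factors simultaneously over few runs, and each main effect is the difference between the mean response at the factor's high and low settings. The sketch below assumes the third-party pyDOE2 package; the copper-extraction responses are hypothetical placeholders, not the paper's data.

    ```python
    # Sketch: Plackett-Burman screening of the four leaching variables
    # named above. Responses are hypothetical; assumes the pyDOE2 package.
    import numpy as np
    from pyDOE2 import pbdesign

    factors = ["temperature", "stirring", "acid_conc", "particle_size"]
    design = pbdesign(len(factors))   # runs x 4 matrix of -1/+1 settings
    n_runs = design.shape[0]

    # Hypothetical % copper extracted for each screening run.
    y = np.array([55, 70, 48, 73, 60, 75, 52, 68], dtype=float)[:n_runs]

    # Main effect of each factor: mean response at +1 minus mean at -1.
    for j, name in enumerate(factors):
        effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
        print(f"{name}: effect = {effect:+.1f}")
    ```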

  20. Statistical guidance for experimental design and data analysis of mutation detection in rare monogenic mendelian diseases by exome sequencing.

    Directory of Open Access Journals (Sweden)

    Degui Zhi

    Full Text Available Recently, whole-genome sequencing, especially exome sequencing, has successfully led to the identification of causal mutations for rare monogenic Mendelian diseases. However, it is unclear whether this approach can be generalized and effectively applied to other Mendelian diseases with high locus heterogeneity. Moreover, the current exome sequencing approach has limitations such as false positive and false negative rates of mutation detection due to sequencing errors and other artifacts, but the impact of these limitations on experimental design has not been systematically analyzed. To address these questions, we present a statistical modeling framework to calculate the power, the probability of identifying truly disease-causing genes, under various inheritance models and experimental conditions, providing guidance for both proper experimental design and data analysis. Based on our model, we found that the exome sequencing approach is well-powered for mutation detection in recessive, but not dominant, Mendelian diseases with high locus heterogeneity. A disease gene responsible for as low as 5% of the disease population can be readily identified by sequencing just 200 unrelated patients. Based on these results, for identifying rare Mendelian disease genes, we propose that a viable approach is to combine, sequence, and analyze patients with the same disease together, leveraging the statistical framework presented in this work.
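
    The kind of power computation the paper describes can be approximated with a simple binomial model, sketched below; the detection sensitivity and the minimum number of mutation-carrying patients needed to flag a gene are assumed values for illustration, not the paper's parameters.

    ```python
    # Sketch: power to identify a disease gene under locus heterogeneity.
    # A gene explaining a fraction f of cases is "found" if at least
    # min_hits of N patients carry a detected mutation in it.
    from scipy.stats import binom

    def gene_power(n_patients, f=0.05, sensitivity=0.9, min_hits=3):
        """P(>= min_hits patients carry a detected mutation in the gene)."""
        p = f * sensitivity  # per-patient probability of a detected hit
        return binom.sf(min_hits - 1, n_patients, p)

    for n in (50, 100, 200, 400):
        print(n, round(gene_power(n), 3))
    ```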

  1. Ethanol Production from Kitchen Garbage Using Zymomonas mobilis: Optimization of Parameters through Statistical Experimental Designs

    OpenAIRE

    Ma, H.; Wang, Q.; Gong, L.; Wang, X.; Yin, W.

    2008-01-01

    A Plackett-Burman design was employed to screen 8 parameters for ethanol production from kitchen garbage by Zymomonas mobilis in simultaneous saccharification and fermentation. The parameters were divided into two parts: four kinds of enzymes, and supplementation nutrients. The results indicated that the nutrients inside kitchen garbage could meet the requirements of ethanol production without supplementation; only protease and glucoamylase were needed to accelerate ethanol production. The opti...

  2. Exopolysaccharide production from Bacillus velezensis KY471306 using statistical experimental design.

    Science.gov (United States)

    Moghannem, Saad A M; Farag, Mohamed M S; Shehab, Amr M; Azab, Mohamed S

    2018-01-18

    Exopolysaccharide (EPS) biopolymers produced by microorganisms play a crucial role in the health and bio-nanotechnology sectors, as gelling agents in the food and cosmetic industries, and as bio-flocculants in the environmental sector, since they are degradable and nontoxic. This study focuses on the improvement of EPS production through manipulation of different culture and environmental conditions using response surface methodology (RSM). A Plackett-Burman design indicated that molasses, yeast extract and incubation temperature are the most effective parameters. Box-Behnken RSM indicated that the optimum conditions were 12% (w/v) molasses, 6 g/L yeast extract and an incubation temperature of 30°C. The most potent bacterial isolate was identified as Bacillus velezensis KY498625. After production, the EPS was extracted, purified using DEAE-cellulose, and characterized using Fourier transform infrared spectroscopy (FTIR), gel permeation chromatography (GPC) and gas chromatography-mass spectrometry (GC-MS). The results indicated a molecular weight of 1.14×10⁵ Da and a composition of glucose, mannose and galactose. Copyright © 2018 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.

  3. Design and performance characteristics of solar adsorption refrigeration system using parabolic trough collector: Experimental and statistical optimization technique

    International Nuclear Information System (INIS)

    Abu-Hamdeh, Nidal H.; Alnefaie, Khaled A.; Almitani, Khalid H.

    2013-01-01

    Highlights: • The success of using olive waste/methanol as an adsorbent/adsorbate pair. • The experimental gross cycle coefficient of performance obtained was COP_a = 0.75. • Optimization showed that expanding the adsorbent mass within a certain range increases the COP. • The statistical optimization led to an optimum tank volume between 0.2 and 0.3 m³. • Increasing the collector area within a certain range increased the COP. - Abstract: The current work demonstrates a developed model of a solar adsorption refrigeration system with specific requirements and specifications. The scheme can be employed as a refrigerator and cooler unit suitable for remote areas. The unit runs on a parabolic trough solar collector (PTC) and uses olive waste as adsorbent with methanol as adsorbate. Cooling production, COP (coefficient of performance) and COP_a (cycle gross coefficient of performance) were used to assess the system performance. The optimum design parameters of the system were arrived at through statistical and experimental methods. The lowest temperature attained in the refrigerated space was 4 °C at an ambient temperature of 27 °C. The temperature started to decrease steadily at 20:30 - when the actual cooling started - until it reached 4 °C at 01:30 the next day, when it rose again. The highest COP_a obtained was 0.75

  4. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: An SPSS method to analyze univariate data

    NARCIS (Netherlands)

    Maric, M.; de Haan, M.; Hogendoorn, S.M.; Wolters, L.H.; Huizenga, H.M.

    2015-01-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty of applying existing statistical procedures. In this article, we describe a

  5. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: an SPSS method to analyze univariate data

    NARCIS (Netherlands)

    Maric, Marija; de Haan, Else; Hogendoorn, Sanne M.; Wolters, Lidewij H.; Huizenga, Hilde M.

    2015-01-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty of applying existing statistical procedures. In this article, we describe a

  6. Application of statistical experimental design to study the formulation variables influencing the coating process of lidocaine liposomes.

    Science.gov (United States)

    González-Rodríguez, M L; Barros, L B; Palma, J; González-Rodríguez, P L; Rabasco, A M

    2007-06-07

    In this paper, we have used statistical experimental design to investigate the effect of several factors on the coating process of lidocaine hydrochloride (LID) liposomes with a biodegradable polymer (chitosan, CH). These variables were the concentration of the CH coating solution, the dripping rate of this solution onto the liposome colloidal dispersion, the stirring rate, the time from liposome production to liposome coating, and the amount of drug entrapped in the liposomes. The selected response variables were drug encapsulation efficiency (EE, %), coating efficiency (CE, %) and zeta potential. Liposomes were obtained by the thin-layer evaporation method. They were subsequently coated with CH according to the experimental plan provided by a fractional factorial 2^(5-1) screening matrix. We used spectroscopic methods to determine the zeta potential values. The EE (%) assay was carried out in dialysis bags, and the brilliant red probe was used to determine CE (%) owing to its property of forming molecular complexes with CH. Graphic analysis of the effects allowed identification of the main formulation and technological factors affecting the selected responses and determination of the proper levels of these factors for response improvement. Moreover, the fractional design allowed quantification of the interactions between the factors, which will be considered in subsequent experiments. The results indicated that the LID amount was the predominant factor increasing the drug entrapment capacity (EE). The CE (%) response was mainly affected by the concentration of the CH solution and the stirring rate, although all the interactions between the main factors had statistical significance.
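
    A 2^(5-1) fractional factorial studies five two-level factors in 16 runs instead of 32 by aliasing the fifth factor with the four-way interaction (E = ABCD). A sketch of its construction, assuming the third-party pyDOE2 package:

    ```python
    # Sketch: building a 2^(5-1) screening matrix like the one described
    # above from the generator E = ABCD. Assumes the pyDOE2 package.
    from pyDOE2 import fracfact

    # Five factors: CH concentration, dripping rate, stirring rate,
    # time to coating, entrapped drug amount. 16 runs instead of 32.
    design = fracfact("a b c d abcd")
    print(design.shape)  # (16, 5), entries are -1/+1
    print(design[:4])    # first four runs
    ```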

  7. Introduction to Statistically Designed Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Heaney, Mike

    2016-09-13

    Statistically designed experiments can save researchers time and money by reducing the number of necessary experimental trials, while resulting in more conclusive experimental results. Surprisingly, many researchers are still not aware of this efficient and effective experimental methodology. As reported in a 2013 article from Chemical & Engineering News, there has been a resurgence of this methodology in recent years (http://cen.acs.org/articles/91/i13/Design-Experiments-Makes-Comeback.html?h=2027056365). This presentation will provide a brief introduction to statistically designed experiments. The main advantages will be reviewed along with some basic concepts such as factorial and fractional factorial designs. The recommended sequential approach to experiments will be introduced, and finally a case study will be presented to demonstrate the methodology.

  8. Statistical core design

    International Nuclear Information System (INIS)

    Oelkers, E.; Heller, A.S.; Farnsworth, D.A.; Kearfott, K.J.

    1978-01-01

    The report describes the statistical analysis of the DNBR thermal-hydraulic margin of a 3800 MWt, 205-FA core under design overpower conditions. The analysis used LYNX-generated data at predetermined values of the input variables whose uncertainties were to be statistically combined. The LYNX data were used to construct an efficient response surface model in the region of interest; the statistical analysis was accomplished through the evaluation of core reliability, utilizing propagation of the uncertainty distributions of the inputs. The response surface model was implemented in both analytical error propagation and Monte Carlo techniques. The basic structural units relating to the acceptance criteria are fuel pins. Therefore, the statistical population of pins with minimum DNBR values smaller than specified values is determined. The specified values are designated relative to the most probable and maximum design DNBR values on the power-limiting pin used in present design analysis, so that gains over the present design criteria could be assessed for specified probabilistic acceptance criteria. The results are equivalent to gains ranging from 1.2 to 4.8 percent of rated power, depending on the acceptance criterion. The corresponding acceptance criteria range from 95 percent confidence that no pin will be in DNB to 99.9 percent of the pins being expected to avoid DNB.
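
    Of the two propagation techniques named above, the Monte Carlo one is straightforward to sketch in a few lines: sample the uncertain inputs from their distributions, evaluate the fitted response surface, and read off tail statistics. The surface coefficients and input uncertainties below are illustrative assumptions, not the report's values.

    ```python
    # Sketch: Monte Carlo propagation of input uncertainties through a
    # fitted response surface. All numbers here are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def dnbr_surface(power, flow, t_inlet):
        # Hypothetical quadratic response surface for minimum DNBR.
        return 2.0 - 0.8 * power + 0.5 * flow - 0.3 * t_inlet + 0.1 * power**2

    n = 100_000
    power = rng.normal(1.00, 0.02, n)   # normalized core power
    flow = rng.normal(1.00, 0.03, n)    # normalized coolant flow
    t_in = rng.normal(0.00, 0.05, n)    # normalized inlet temperature

    dnbr = dnbr_surface(power, flow, t_in)
    # Value exceeded in 99.9% of sampled conditions, in the spirit of the
    # probabilistic acceptance criteria discussed above.
    print("99.9% lower bound on min DNBR:", np.quantile(dnbr, 0.001))
    ```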

  9. Introductory statistics for engineering experimentation

    CERN Document Server

    Nelson, Peter R; Coffin, Marie

    2003-01-01

    The Accreditation Board for Engineering and Technology (ABET) introduced a criterion, starting with their 1992-1993 site visits, that "Students must demonstrate a knowledge of the application of statistics to engineering problems." Since most engineering curricula are filled with requirements in their own discipline, they generally do not have time for a traditional two semesters of probability and statistics. Attempts to condense that material into a single semester often result in so much time being spent on probability that the statistics useful for designing and analyzing engineering/scientific experiments is never covered. This book was created to satisfy the needs of a one-semester course whose purpose is to introduce engineering/scientific students to the most useful statistical methods. - Provides the statistical design and analysis of engineering experiments & problems - Presents a student-friendly approach through providing statistical models for advanced learning techniques - Cove...

  10. Application of statistical experimental design to optimize mine-impacted water (MIW) remediation using shrimp-shell.

    Science.gov (United States)

    Núñez-Gómez, Dámaris; Alves, Alcione Aparecida de Almeida; Lapolli, Flavio Rubens; Lobo-Recio, María A

    2017-01-01

    Mine-impacted water (MIW) is one of the most serious mining problems and has a highly negative impact on water resources and aquatic life. The main characteristics of MIW are a low pH (between 2 and 4) and high concentrations of SO₄²⁻ and metal ions (Cd, Cu, Ni, Pb, Zn, Fe, Al, Cr, Mn, Mg, etc.), many of which are toxic to ecosystems and human life. Shrimp shell was selected as an MIW treatment agent because it is a low-cost metal-sorbent biopolymer with a high chitin content and contains calcium carbonate, an acid-neutralizing agent. To determine the best metal-removal conditions, a study based on statistical planning was carried out. Thus, the objective of this work was to identify the degree of influence and dependence of the shrimp-shell content on the removal of Fe, Al, Mn, Co, and Ni from MIW. In this study, a central composite rotational experimental design (CCRD) with a quadruplicate at the midpoint (2²) was used to evaluate the joint influence of two variables: agitation and shrimp-shell content. The statistical results showed a significant influence (p < 0.05) of the agitation variable on Fe and Ni removal (linear and quadratic forms, respectively) and of the shrimp-shell content variable on Mn (linear form) and Al and Co (linear and quadratic forms) removal. Analysis of variance (ANOVA) for Al, Co, and Ni removal showed that the model is valid at the 95% confidence level and needs no adjustment within the evaluated ranges of agitation (0-251.5 rpm) and shrimp-shell content (1.2-12.8 g L⁻¹). The model required adjustment to the 90% and 75% confidence levels for Fe and Mn removal, respectively. In terms of pollutant-removal efficiency, the best experimental values of the variables were determined to be 188 rpm and 9.36 g L⁻¹ of shrimp shells. Copyright © 2016 Elsevier Ltd. All rights reserved.
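
    A two-factor CCRD places points at the corners (±1), on the axes (±√2 for rotatability), and at the centre (here replicated four times). The sketch below generates such a design and rescales it to the ranges reported above; it assumes the third-party pyDOE2 package.

    ```python
    # Sketch: a two-factor central composite rotatable design (CCRD) with
    # four centre points, rescaled so that the axial points (+/- sqrt(2))
    # span the reported ranges. Assumes the pyDOE2 package.
    import numpy as np
    from pyDOE2 import ccdesign

    coded = ccdesign(2, center=(0, 4), alpha="rotatable", face="ccc")
    # 4 factorial + 4 axial + 4 centre points = 12 runs

    centers = np.array([125.75, 7.0])    # midpoints of the two ranges
    halfwidths = np.array([88.9, 4.1])   # so +/- sqrt(2) spans the ranges
    actual = centers + coded * halfwidths

    for agitation, shell in actual:
        print(f"agitation = {agitation:6.1f} rpm, shrimp shell = {shell:5.2f} g/L")
    ```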

  11. Optimal experimental design with R

    CERN Document Server

    Rasch, Dieter; Verdooren, L R; Gebhardt, Albrecht

    2011-01-01

    Experimental design is often overlooked in the literature of applied and mathematical statistics: statistics is taught and understood as merely a collection of methods for analyzing data. Consequently, experimenters seldom think about optimal design, including prerequisites such as the necessary sample size needed for a precise answer for an experimental question. Providing a concise introduction to experimental design theory, Optimal Experimental Design with R: Introduces the philosophy of experimental design Provides an easy process for constructing experimental designs and calculating necessary sample size using R programs Teaches by example using a custom made R program package: OPDOE Consisting of detailed, data-rich examples, this book introduces experimenters to the philosophy of experimentation, experimental design, and data collection. It gives researchers and statisticians guidance in the construction of optimum experimental designs using R programs, including sample size calculations, hypothesis te...

  12. Effect of experimental factors on magnetic properties of nickel nanoparticles produced by chemical reduction method using a statistical design

    International Nuclear Information System (INIS)

    Vaezi, M.R.; Barzgar Vishlaghi, M.; Farzalipour Tabriz, M.; Mohammad Moradi, O.

    2015-01-01

    Highlights: • Superparamagnetic nickel nanoparticles are synthesized by wet chemical reduction. • Effects of synthesis parameters on magnetic properties are studied. • A central composite experimental design is used for building an empirical model. • The solvent ratio was more influential than the reactant mixing rate. - Abstract: Nickel nanoparticles were synthesized by a chemical reduction method in the absence of any surface capping agent. The effects of the reactant mixing rate and the volume ratio of methanol/ethanol as solvent on the morphology and magnetic properties of the nickel nanoparticles were studied by design of experiments using a central composite design. X-ray diffraction (XRD) and transmission electron microscopy (TEM) were utilized to characterize the synthesized nanoparticles. The size distribution of the particles was studied by the dynamic light scattering (DLS) technique, and the magnetic properties of the produced nanoparticles were investigated with a vibrating sample magnetometer (VSM). The results showed that the magnetic properties of the nickel nanoparticles were more influenced by the volume ratio of methanol/ethanol than by the reactant mixing rate. Superparamagnetic nickel nanoparticles with sizes between 20 and 50 nm were achieved when the solvent was pure methanol and the reactant mixing rate was kept at 70 ml/h, but the addition of more ethanol to the precursor solvent led to the formation of larger particles with a broader size distribution and weak ferromagnetic or superparamagnetic behavior

  13. A statistical experimental design approach to evaluate the influence of various penetration enhancers on transdermal drug delivery of buprenorphine

    Directory of Open Access Journals (Sweden)

    S.Mojtaba Taghizadeh

    2015-03-01

    Full Text Available A series of drug-in-adhesive transdermal drug delivery systems (patches) with different chemical penetration enhancers were designed to deliver drug through the skin as the site of application. The objective of our effort was to study the influence of various chemical penetration enhancers on the skin permeation rate and adhesion properties of a transdermal drug delivery system using a Box-Behnken experimental design. The response surface methodology, based on a three-level, three-variable Box-Behnken design, was used to evaluate the interactive effects on the dependent variables: the rate of skin permeation and the adhesion properties, namely peel strength and tack value. Levulinic acid, lauryl alcohol, and Tween 80 were used as penetration enhancers (patch formulations containing 0-8% of each chemical penetration enhancer). Buprenorphine was used as a model penetrant drug. The results showed that incorporation of 20% chemical penetration enhancer into the mixture led to the maximum skin permeation flux of buprenorphine from abdominal rat skin, while the adhesion properties decreased. The skin flux in the presence of levulinic acid (1.594 μg/cm² h) was higher than with Tween 80 (1.473 μg/cm² h) and lauryl alcohol (0.843 μg/cm² h), and on mixing these enhancers together, an additional effect was observed. Moreover, it was found that each enhancer increased the tack value, while levulinic acid and lauryl alcohol improved the peel strength but Tween 80 reduced it. These findings indicated that the best chemical skin penetration enhancer for the buprenorphine patch was levulinic acid. Among the designed formulations, the one which contained 12% (wt/wt) enhancers exhibited the highest efficiency.

  14. A statistical experimental design approach to evaluate the influence of various penetration enhancers on transdermal drug delivery of buprenorphine.

    Science.gov (United States)

    Taghizadeh, S Mojtaba; Moghimi-Ardakani, Ali; Mohamadnia, Fatemeh

    2015-03-01

    A series of drug-in-adhesive transdermal drug delivery systems (patches) with different chemical penetration enhancers were designed to deliver drug through the skin as the site of application. The objective of our effort was to study the influence of various chemical penetration enhancers on the skin permeation rate and adhesion properties of a transdermal drug delivery system using a Box-Behnken experimental design. The response surface methodology, based on a three-level, three-variable Box-Behnken design, was used to evaluate the interactive effects on the dependent variables: the rate of skin permeation and the adhesion properties, namely peel strength and tack value. Levulinic acid, lauryl alcohol, and Tween 80 were used as penetration enhancers (patch formulations containing 0-8% of each chemical penetration enhancer). Buprenorphine was used as a model penetrant drug. The results showed that incorporation of 20% chemical penetration enhancer into the mixture led to the maximum skin permeation flux of buprenorphine from abdominal rat skin, while the adhesion properties decreased. The skin flux in the presence of levulinic acid (1.594 μg/cm² h) was higher than with Tween 80 (1.473 μg/cm² h) and lauryl alcohol (0.843 μg/cm² h), and on mixing these enhancers together, an additional effect was observed. Moreover, it was found that each enhancer increased the tack value, while levulinic acid and lauryl alcohol improved the peel strength but Tween 80 reduced it. These findings indicated that the best chemical skin penetration enhancer for the buprenorphine patch was levulinic acid. Among the designed formulations, the one which contained 12% (wt/wt) enhancers exhibited the highest efficiency.

  15. Statistical processing of experimental data

    OpenAIRE

    NAVRÁTIL, Pavel

    2012-01-01

    This thesis covers probability theory and statistical data sets: probability, random variables and their distributions, random vectors, statistical sets, and regression and correlation analysis, presented through both solved and unsolved problems. Solutions are provided for the unsolved problems.

  16. Experimental signature for statistical multifragmentation

    International Nuclear Information System (INIS)

    Moretto, L.G.; Delis, D.N.; Wozniak, G.J.

    1993-01-01

    Multifragment production was measured for the 60 MeV/nucleon ¹⁹⁷Au + ²⁷Al, ⁵¹V, and natCu reactions. The branching ratios for binary, ternary, quaternary, and quinary decays were determined as a function of the excitation energy E and are independent of the target. The logarithms of these branching ratios, when plotted vs. E^(-1/2), show a linear dependence that strongly suggests a statistical competition between the various multifragmentation channels. This behavior seems to relegate the role of dynamics to the formation of the sources, which then proceed to decay in an apparently statistical manner

  17. Indium recovery from acidic aqueous solutions by solvent extraction with D2EHPA: a statistical approach to the experimental design

    Directory of Open Access Journals (Sweden)

    Fortes M.C.B.

    2003-01-01

    Full Text Available This experimental work presents the optimization of a solvent-extraction process, using D2EHPA solubilized in isoparaffin and Exxsol, for obtaining a high-indium-concentration solution with minimal iron contamination. The variables studied in the extraction step were D2EHPA concentration, acidity of the aqueous phase and time of contact between phases. Different hydrochloric and sulfuric acid concentrations were studied for the stripping step. The optimum experimental conditions resulted in a solution with 99% indium extraction and less than 4% iron extraction. The construction of a McCabe-Thiele diagram indicated two theoretical countercurrent stages for indium extraction and at least six stages for indium stripping. Finally, the influence of associated metals found in typical sulfate leach liquors from zinc plants was studied. Under the experimental conditions for maximum indium extraction, 96% indium extraction was obtained, iron extraction was about 4%, and no Ga, Cu or Zn was co-extracted.

  18. Research design and statistical analysis

    CERN Document Server

    Myers, Jerome L; Lorch Jr, Robert F

    2013-01-01

    Research Design and Statistical Analysis provides comprehensive coverage of the design principles and statistical concepts necessary to make sense of real data.  The book's goal is to provide a strong conceptual foundation to enable readers to generalize concepts to new research situations.  Emphasis is placed on the underlying logic and assumptions of the analysis and what it tells the researcher, the limitations of the analysis, and the consequences of violating assumptions.  Sampling, design efficiency, and statistical models are emphasized throughout. As per APA recommendations

  19. Final Report, DOE Early Career Award: Predictive modeling of complex physical systems: new tools for statistical inference, uncertainty quantification, and experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2016-08-31

    Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale *sequential* data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.

  20. Application of statistical experimental design for optimisation of bioinsecticides production by sporeless Bacillus thuringiensis strain on cheap medium.

    Science.gov (United States)

    Ben Khedher, Saoussen; Jaoua, Samir; Zouari, Nabil

    2013-01-01

    In order to increase bioinsecticide production by a sporeless Bacillus thuringiensis strain, an optimal composition of a cheap medium was defined using response surface methodology. In a first step, a Plackett-Burman design used to evaluate the effects of eight medium components on delta-endotoxin production showed that starch, soya bean and sodium chloride had significant effects on bioinsecticide production. In a second step, these parameters were selected for further optimisation by central composite design. The results revealed that the optimum culture medium for delta-endotoxin production consists of 30 g L⁻¹ starch, 30 g L⁻¹ soya bean and 9 g L⁻¹ sodium chloride. When compared to the basal production medium, an improvement in delta-endotoxin production of up to 50% was noted. Moreover, the relative toxin yield of sporeless Bacillus thuringiensis S22 was markedly improved by using the optimised cheap medium (148.5 mg delta-endotoxins per g starch) compared with the yield obtained in the basal medium (94.46 mg delta-endotoxins per g starch). Therefore, the use of the optimised cheap culture medium appears to be a good alternative for low-cost production of sporeless Bacillus thuringiensis bioinsecticides at industrial scale, which is of great practical importance.

  1. Application of statistical experimental design for optimisation of bioinsecticides production by sporeless Bacillus thuringiensis strain on cheap medium

    Directory of Open Access Journals (Sweden)

    Saoussen Ben Khedher

    2013-09-01

    Full Text Available In order to increase bioinsecticide production by a sporeless Bacillus thuringiensis strain, an optimal composition of a cheap medium was defined using response surface methodology. In a first step, a Plackett-Burman design used to evaluate the effects of eight medium components on delta-endotoxin production showed that starch, soya bean and sodium chloride had significant effects on bioinsecticide production. In a second step, these parameters were selected for further optimisation by central composite design. The results revealed that the optimum culture medium for delta-endotoxin production consists of 30 g L⁻¹ starch, 30 g L⁻¹ soya bean and 9 g L⁻¹ sodium chloride. When compared to the basal production medium, an improvement in delta-endotoxin production of up to 50% was noted. Moreover, the relative toxin yield of sporeless Bacillus thuringiensis S22 was markedly improved by using the optimised cheap medium (148.5 mg delta-endotoxins per g starch) compared with the yield obtained in the basal medium (94.46 mg delta-endotoxins per g starch). Therefore, the use of the optimised cheap culture medium appears to be a good alternative for low-cost production of sporeless Bacillus thuringiensis bioinsecticides at industrial scale, which is of great practical importance.

  2. Introducing an attractive method for total biomimetic creation of a synthetic biodegradable bioactive bone scaffold based on statistical experimental design.

    Science.gov (United States)

    Shahbazi, Sara; Zamanian, Ali; Pazouki, Mohammad; Jafari, Yaser

    2018-05-01

    A new total biomimetic technique based on both water uptake and degradation processes is introduced in this study, providing an interesting procedure to fabricate a bioactive and biodegradable synthetic scaffold with good mechanical and structural properties. The optimization of the parameters affecting scaffold fabrication was done by response surface methodology/central composite design (CCD). With this method, a synthetic scaffold was fabricated that has a uniform and open-interconnected porous structure with a largest pore size of 100-200 μm. The obtained compressive ultimate strength of ~35 MPa and compression modulus of 58 MPa are similar to those of some trabecular bone. The pore morphology, size, and distribution of the scaffold were characterized using a scanning electron microscope and mercury porosimeter. Fourier transform infrared spectroscopy, EDAX and X-ray diffraction analyses were used to determine the chemical composition, the Ca/P element ratio of mineralized microparticles, and the crystal structure of the scaffolds, respectively. The optimum biodegradable synthetic scaffold, based on its raw materials of polypropylene fumarate, hydroxyethyl methacrylate and nano bioactive glass (PPF/HEMA/nanoBG) at 70/30 wt/wt%, 20 wt%, and 1.5 wt/wt% (PHB.732/1.5), with the desired porosity, pore size, and geometry, was created by 4 weeks of immersion in SBF. This scaffold showed considerable biocompatibility, ranging from 86 to 101% in the indirect and direct contact tests, and good osteoblast cell attachment when studied with bone-like cells. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Optimization of Xylanase production from Penicillium sp.WX-Z1 by a two-step statistical strategy: Plackett-Burman and Box-Behnken experimental design.

    Science.gov (United States)

    Cui, Fengjie; Zhao, Liming

    2012-01-01

    The objective of the study was to optimize the nutrition sources in a culture medium for the production of xylanase from Penicillium sp. WX-Z1 using a Plackett-Burman design and a Box-Behnken design. The Plackett-Burman multifactorial design was first employed to screen the important nutrient sources in the medium for xylanase production by Penicillium sp. WX-Z1, and xylanase production was then further optimized by response surface methodology (RSM) using a Box-Behnken design. The important nutrient sources in the culture medium, identified by the initial Plackett-Burman screening, were wheat bran, yeast extract, NaNO₃, MgSO₄, and CaCl₂. The optimal amounts (in g/L) for maximum xylanase production were: wheat bran, 32.8; yeast extract, 1.02; NaNO₃, 12.71; MgSO₄, 0.96; and CaCl₂, 1.04. Using this statistical experimental design, xylanase production under optimal conditions reached 46.50 U/mL, a 1.34-fold increase in xylanase activity compared with the original medium, for fermentation carried out in a 30-L bioreactor.

  4. iCFD: Interpreted Computational Fluid Dynamics - Degeneration of CFD to one-dimensional advection-dispersion models using statistical experimental design - The secondary clarifier.

    Science.gov (United States)

    Guyonvarch, Estelle; Ramin, Elham; Kulahci, Murat; Plósz, Benedek Gy

    2015-10-15

    The present study aims at using statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models - computationally light tools, used e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient (D) as a function of design and flow boundary conditions. The method - presented in a straightforward and transparent way - is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor screening study and system understanding, 50 different sets of design and flow conditions are selected using Latin Hypercube Sampling (LHS). The boundary condition sets are imposed on a 2-D axi-symmetrical CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D. Correlation equations for the D parameter then are identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure is carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region. For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii
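
    The LHS step described above is easy to reproduce with SciPy's quasi-Monte Carlo module (scipy >= 1.7); the two boundary-condition ranges below are illustrative assumptions, not the study's values.

    ```python
    # Sketch: Latin Hypercube Sampling of 50 boundary-condition sets over
    # two parameters, then rescaling to physical ranges (illustrative).
    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=2, seed=1)
    unit_samples = sampler.random(n=50)       # 50 points in [0, 1)^2

    lower = np.array([100.0, 2.0])   # e.g. feed flow m3/h, solids kg/m3
    upper = np.array([1000.0, 6.0])
    samples = qmc.scale(unit_samples, lower, upper)

    print(samples[:5])   # first few boundary-condition sets
    ```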

  5. CFD simulation of CO2 sorption on K2CO3 solid sorbent in novel high flux circulating-turbulent fluidized bed riser: Parametric statistical experimental design study

    International Nuclear Information System (INIS)

    Thummakul, Theeranan; Gidaspow, Dimitri; Piumsomboon, Pornpote; Chalermsinsuwan, Benjapon

    2017-01-01

    Highlights: • Circulating-turbulent fluidization was shown to be advantageous for CO2 sorption. • The novel regime was proven to capture CO2 better than the conventional regimes. • Uniform solid particle distribution was observed in the novel fluidization regime. • The system continuity had more effect on the system than the process system mixing. • A parametric experimental design analysis was performed to identify the significant factors. - Abstract: In this study a high flux circulating-turbulent fluidized bed (CTFB) riser was confirmed to be advantageous for carbon dioxide (CO2) sorption on a potassium carbonate solid sorbent. The effect of various parameters on the CO2 removal level was evaluated using a statistical experimental design. The most appropriate fluidization regime was found to lie between the turbulent and fast fluidization regimes, and it captured CO2 more efficiently than conventional fluidization regimes. The highest CO2 sorption level was 93.4% under optimized CTFB operating conditions. The important parameters for CO2 capture were the inlet gas velocity and the interactions of the CO2 concentration with the inlet gas velocity and the water vapor concentration. The CTFB regime had a high and uniform solid particle distribution in both the axial and radial directions and could transport the solid sorbent to the regeneration reactor. In addition, the process system continuity had a stronger effect on the CO2 removal level than the process system mixing.

  6. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: an SPSS method to analyze univariate data.

    Science.gov (United States)

    Maric, Marija; de Haan, Else; Hogendoorn, Sanne M; Wolters, Lidewij H; Huizenga, Hilde M

    2015-03-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty of applying existing statistical procedures. In this article, we describe a data-analytic method to analyze univariate (i.e., one symptom) single-case data using the common package SPSS. This method can help the clinical researcher to investigate whether an intervention works as compared with a baseline period or another intervention type, and to determine whether symptom improvement is clinically significant. First, we describe the statistical method in a conceptual way and show how it can be implemented in SPSS. Simulation studies were performed to determine the number of observation points required per intervention phase. Second, to illustrate this method and its implications, we present a case study of an adolescent with anxiety disorders treated with cognitive-behavioral therapy techniques in an outpatient psychotherapy clinic, whose symptoms were regularly assessed before each session. We provide a description of the data analyses and results of this case study. Finally, we discuss the advantages and shortcomings of the proposed method. Copyright © 2014. Published by Elsevier Ltd.

  7. Experimental Design Research

    DEFF Research Database (Denmark)

    This book presents a new, multidisciplinary perspective on and paradigm for integrative experimental design research. It addresses various perspectives on methods, analysis and overall research approach, and how they can be synthesized to advance understanding of design. It explores the foundations of experimental approaches and their utility in this domain, and brings together analytical approaches to promote an integrated understanding. The book also investigates where these approaches lead to and how they link design research more fully with other disciplines (e.g. psychology, cognition, sociology, computer science, management). Above all, the book emphasizes the integrative nature of design research in terms of the methods, theories, and units of study—from the individual to the organizational level. Although this approach offers many advantages, it has inherently led to a situation in current...

  8. Application of machine/statistical learning, artificial intelligence and statistical experimental design for the modeling and optimization of methylene blue and Cd(ii) removal from a binary aqueous solution by natural walnut carbon.

    Science.gov (United States)

    Mazaheri, H; Ghaedi, M; Ahmadi Azqhandi, M H; Asfaram, A

    2017-05-10

    Analytical chemists apply statistical methods for both the validation and prediction of proposed models. Methods are required that are adequate for finding the typical features of a dataset, such as nonlinearities and interactions. Boosted regression trees (BRTs), as an ensemble technique, are fundamentally different from conventional techniques, which aim to fit a single parsimonious model. In this work, BRT, artificial neural network (ANN) and response surface methodology (RSM) models have been used for the optimization and/or modeling of the stirring time (min), pH, adsorbent mass (mg) and concentrations of MB and Cd2+ ions (mg L-1) in order to develop respective predictive equations for simulation of the efficiency of MB and Cd2+ adsorption based on the experimental data set. Activated carbon, as an adsorbent, was synthesized from walnut wood waste which is abundant, non-toxic, cheap and locally available. This adsorbent was characterized using different techniques such as FT-IR, BET, SEM, point of zero charge (pHpzc) and also the determination of oxygen-containing functional groups. The influence of various parameters (i.e. pH, stirring time, adsorbent mass and concentrations of MB and Cd2+ ions) on the percentage removal was assessed by investigation of sensitivity functions, variable importance rankings (BRT) and analysis of variance (RSM). Furthermore, a central composite design (CCD) combined with a desirability function approach (DFA) as a global optimization technique was used for the simultaneous optimization of the effective parameters. The applicability of the BRT, ANN and RSM models for the description of experimental data was examined using four statistical criteria (absolute average deviation (AAD), mean absolute error (MAE), root mean square error (RMSE) and coefficient of determination (R2)). All three models demonstrated good predictions in this study. The BRT model was more precise compared to the other models and this showed
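
    A hedged sketch of the BRT step, using scikit-learn's gradient boosting as one common BRT implementation; the data below are synthetic stand-ins for the adsorption measurements, and the response function is invented for illustration.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        # Columns: stirring time, pH, adsorbent mass, MB conc., Cd conc.
        rng = np.random.default_rng(0)
        X = rng.uniform([1, 2, 5, 5, 5], [45, 8, 30, 30, 30], size=(200, 5))
        # Toy removal-efficiency response with a nonlinearity in pH
        y = 60 + 0.5 * X[:, 0] - 2.0 * np.abs(X[:, 1] - 6) + rng.normal(0, 2, 200)

        brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                        max_depth=3).fit(X, y)
        print(brt.feature_importances_)  # variable importance rankings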

  9. Statistical reporting inconsistencies in experimental philosophy.

    Science.gov (United States)

    Colombo, Matteo; Duev, Georgi; Nuijten, Michèle B; Sprenger, Jan

    2018-01-01

    Experimental philosophy (x-phi) is a young field of research in the intersection of philosophy and psychology. It aims to make progress on philosophical questions by using experimental methods traditionally associated with the psychological and behavioral sciences, such as null hypothesis significance testing (NHST). Motivated by recent discussions about a methodological crisis in the behavioral sciences, questions have been raised about the methodological standards of x-phi. Here, we focus on one aspect of this question, namely the rate of inconsistencies in statistical reporting. Previous research has examined the extent to which published articles in psychology and other behavioral sciences present statistical inconsistencies in reporting the results of NHST. In this study, we used the R package statcheck to detect statistical inconsistencies in x-phi, and compared rates of inconsistencies in psychology and philosophy. We found that rates of inconsistencies in x-phi are lower than in the psychological and behavioral sciences. From the point of view of statistical reporting consistency, x-phi seems to do no worse, and perhaps even better, than psychological science.
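
    statcheck itself is an R package; a minimal Python analogue of its core consistency check, recomputing the p-value implied by a reported t statistic and comparing it with the reported p-value, might look like this (the numbers are invented examples, not data from the study).

        from scipy import stats

        # Flag a reported result as inconsistent if the recomputed p-value
        # differs from the reported one by more than a tolerance.
        def check_t_report(t, df, p_reported, tol=0.01, two_tailed=True):
            p = stats.t.sf(abs(t), df) * (2 if two_tailed else 1)
            return abs(p - p_reported) <= tol, p

        ok, p = check_t_report(t=2.10, df=28, p_reported=0.045)
        print(ok, round(p, 4))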

  10. Statistical reporting inconsistencies in experimental philosophy

    Science.gov (United States)

    Colombo, Matteo; Duev, Georgi; Nuijten, Michèle B.; Sprenger, Jan

    2018-01-01

    Experimental philosophy (x-phi) is a young field of research in the intersection of philosophy and psychology. It aims to make progress on philosophical questions by using experimental methods traditionally associated with the psychological and behavioral sciences, such as null hypothesis significance testing (NHST). Motivated by recent discussions about a methodological crisis in the behavioral sciences, questions have been raised about the methodological standards of x-phi. Here, we focus on one aspect of this question, namely the rate of inconsistencies in statistical reporting. Previous research has examined the extent to which published articles in psychology and other behavioral sciences present statistical inconsistencies in reporting the results of NHST. In this study, we used the R package statcheck to detect statistical inconsistencies in x-phi, and compared rates of inconsistencies in psychology and philosophy. We found that rates of inconsistencies in x-phi are lower than in the psychological and behavioral sciences. From the point of view of statistical reporting consistency, x-phi seems to do no worse, and perhaps even better, than psychological science. PMID:29649220

  11. Statistical Methodologies to Integrate Experimental and Computational Research

    Science.gov (United States)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.

  12. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTICAL ASSESSMENT

    Science.gov (United States)

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  13. Experimental site and design

    Energy Technology Data Exchange (ETDEWEB)

    Guenette, C. C. [SINTEF Applied Chemistry, Trondheim (Norway)

    1999-08-01

    Design and site selection criteria for the Svalbard oil spill experiments are described. All three experimental sites have coarse and mixed sediment beaches of sand and pebble; within each site wave exposure is very similar; along-shore and across-shore sediment characteristics are also relatively homogeneous. Tidal range is in the order of 0.6 m at neaps, and 1.8 m at springs. All three sites are open to wave action and are ice-free during the experimental period of mid-July to mid-October. Study plots at each site were selected for different treatments from within the continuous stretch of oiled shoreline, with oiled buffer zones between plots and at either end of the oiled zone. Treatments included mixing (tilling), sediment relocation (surf washing) and bioremediation (nutrient enrichment). Measurements and observations were carried out during the summers of 1997 and 1998. The characteristics measured were: wave and wind conditions; beach topography and elevation; sediment grain size distribution; mineral fines size distribution and mineral composition; background hydrocarbons; concentration of oil within experimental plots and the rate of oil loss over time; depth of oil penetration and thickness of the oiled sediment layer; oil concentration and toxicity of near-shore benthic sediments; mineral composition of suspended particulate material captured in sub-tidal sediment traps; and oil-fines interaction in near-shore water samples. 1 fig.

  14. Experimental site and design

    Energy Technology Data Exchange (ETDEWEB)

    Guenette, C. C. [SINTEF Applied Chemistry, Trondheim (Norway)

    1999-07-01

    Design and site selection criteria for the Svalbard oil spill experiments are described. All three experimental sites have coarse and mixed sediment beaches of sand and pebble; within each site wave exposure is very similar; along-shore and across-shore sediment characteristics are also relatively homogeneous. Tidal range is in the order of 0.6 m at neaps, and 1.8 m at springs. All three sites are open to wave action and are ice-free during the experimental period of mid-July to mid-October. Study plots at each site were selected for different treatments from within the continuous stretch of oiled shoreline, with oiled buffer zones between plots and at either end of the oiled zone. Treatments included mixing (tilling), sediment relocation (surf washing) and bioremediation (nutrient enrichment). Measurements and observations were carried out during the summers of 1997 and 1998. The characteristics measured were: wave and wind conditions; beach topography and elevation; sediment grain size distribution; mineral fines size distribution and mineral composition; background hydrocarbons; concentration of oil within experimental plots and the rate of oil loss over time; depth of oil penetration and thickness of the oiled sediment layer; oil concentration and toxicity of near-shore benthic sediments; mineral composition of suspended particulate material captured in sub-tidal sediment traps; and oil-fines interaction in near-shore water samples. 1 fig.

  15. Experimental Engineering: Articulating and Valuing Design Experimentation

    DEFF Research Database (Denmark)

    Vallgårda, Anna; Grönvall, Erik; Fritsch, Jonas

    2017-01-01

    In this paper we propose Experimental Engineering as a way to articulate open-ended technological experiments as a legitimate design research practice. Experimental Engineering introduces a move away from an outcome- or result-driven design process towards an interest in existing technologies and...

  16. Applied statistical designs for the researcher

    CERN Document Server

    Paulson, Daryl S

    2003-01-01

    Contents: Research and Statistics; Basic Review of Parametric Statistics; Exploratory Data Analysis; Two-Sample Tests; Completely Randomized One-Factor Analysis of Variance; One and Two Restrictions on Randomization; Completely Randomized Two-Factor Factorial Designs; Two-Factor Factorial Completely Randomized Blocked Designs; Useful Small-Scale Pilot Designs; Nested Statistical Designs; Linear Regression; Nonparametric Statistics; Introduction to Research Synthesis and "Meta-Analysis" and Conclusory Remarks; References; Index.

  17. iCFD: Interpreted Computational Fluid Dynamics – Degeneration of CFD to one-dimensional advection-dispersion models using statistical experimental design – The secondary clarifier

    DEFF Research Database (Denmark)

    Guyonvarch, Estelle; Ramin, Elham; Kulahci, Murat

    2015-01-01

    using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor...... of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii) assessment of modelling the onset of transient and compression settling. Furthermore, the optimal level of model discretization...

  18. Two polynomial representations of experimental design

    OpenAIRE

    Notari, Roberto; Riccomagno, Eva; Rogantin, Maria-Piera

    2007-01-01

    In the context of algebraic statistics an experimental design is described by a set of polynomials called the design ideal. This, in turn, is generated by finite sets of polynomials. Two types of generating sets are mostly used in the literature: Groebner bases and indicator functions. We briefly describe them both, how they are used in the analysis and planning of a design and how to switch between them. Examples include fractions of full factorial designs and designs for mixture experiments.
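
    For a concrete instance of a design ideal, consider the half-fraction {(1, 1), (-1, -1)} of the 2^2 factorial with coded levels ±1. The small sympy sketch below (an illustration, not the authors' code) computes a Groebner basis generating that ideal from the level equations and the defining relation.

        from sympy import symbols, groebner

        x, y = symbols('x y')
        # x**2 - 1 and y**2 - 1 fix the two levels of each factor;
        # x*y - 1 is the defining relation selecting the half-fraction.
        G = groebner([x**2 - 1, y**2 - 1, x*y - 1], x, y, order='lex')
        print(G.exprs)  # Groebner basis of the design ideal: [x - y, y**2 - 1]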

  19. Intermediate/Advanced Research Design and Statistics

    Science.gov (United States)

    Ploutz-Snyder, Robert

    2009-01-01

    The purpose of this module is To provide Institutional Researchers (IRs) with an understanding of the principles of advanced research design and the intermediate/advanced statistical procedures consistent with such designs

  20. Experimental design of a waste glass study

    International Nuclear Information System (INIS)

    Piepel, G.F.; Redgate, P.E.; Hrma, P.

    1995-04-01

    A Composition Variation Study (CVS) is being performed to support a future high-level waste glass plant at Hanford. A total of 147 glasses, covering a broad region of compositions melting at approximately 1150 degrees C, were tested in five statistically designed experimental phases. This paper focuses on the goals, strategies, and techniques used in designing the five phases. The overall strategy was to investigate glass compositions on the boundary and interior of an experimental region defined by single-component, multiple-component, and property constraints. Statistical optimal experimental design techniques were used to cover various subregions of the experimental region in each phase. Empirical mixture models for glass properties (as functions of glass composition) from previous phases were used in designing subsequent CVS phases
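
    Mixture regions like this are often seeded with simplex-lattice points. The sketch below generates a generic {q, m} simplex-lattice, a standard starting construction for mixture experiments; it is not the CVS layout itself, which used optimal design over a constrained region.

        from itertools import product

        # {q, m} simplex-lattice: every q-component blend whose proportions
        # are multiples of 1/m and sum to 1.
        def simplex_lattice(q, m):
            return [tuple(k / m for k in combo)
                    for combo in product(range(m + 1), repeat=q)
                    if sum(combo) == m]

        points = simplex_lattice(q=3, m=3)
        print(len(points), points[:4])  # 10 blends for 3 components, steps of 1/3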

  1. A Statistical Approach to Optimizing Concrete Mixture Design

    OpenAIRE

    Ahmad, Shamsad; Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3^3). A total of 27 concrete mixtures with three replicate...

  2. Optimal Experimental Design for Model Discrimination

    Science.gov (United States)

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  3. Experimental design a chemometric approach

    CERN Document Server

    Deming, SN

    1987-01-01

    Now available in a paperback edition is a book which has been described as "...an exceptionally lucid, easy-to-read presentation... would be an excellent addition to the collection of every analytical chemist. I recommend it with great enthusiasm." (Analytical Chemistry). Unlike most current textbooks, it approaches experimental design from the point of view of the experimenter, rather than that of the statistician. As the reviewer in 'Analytical Chemistry' went on to say: "Deming and Morgan should be given high praise for bringing the principles of experimental design to the level of the p

  4. Experimental design in chemistry: A tutorial.

    Science.gov (United States)

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, nowadays experimental design is not as well known and applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages, in terms of reduced experimental effort and increased quality of information, that can be obtained if this approach is followed. To do that, three real examples will be shown. Rather than the mathematical aspects, this paper will focus on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic part can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469].
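
    To make the contrast with one-variable-at-a-time optimization concrete, the toy sketch below enumerates a 2^3 full factorial in coded units: eight runs support estimation of all three main effects and their interactions, which OVAT probing of the same settings cannot provide.

        from itertools import product

        # All 8 runs of a 2^3 full factorial, coded -1/+1.
        runs = list(product((-1, 1), repeat=3))
        for i, (a, b, c) in enumerate(runs, start=1):
            print(f"run {i}: A={a:+d} B={b:+d} C={c:+d}")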

  5. Experimental Design: Review and Comment.

    Science.gov (United States)

    1984-02-01

    creativity. Innovative modifications and extensions of classical experimental designs were developed and many useful articles were published in a short... "Pyrazolone Industrielle," Bulletin de la Société Chimique de France, 11-12, 1171-1174. LI, K. C. (1983), "Minimaxity for Randomized Designs: Some

  6. Formulation optimization of transdermal meloxicam potassium-loaded mesomorphic phases containing ethanol, oleic acid and mixture surfactant using the statistical experimental design methodology.

    Science.gov (United States)

    Huang, Chi-Te; Tsai, Chia-Hsun; Tsou, Hsin-Yeh; Huang, Yaw-Bin; Tsai, Yi-Hung; Wu, Pao-Chu

    2011-01-01

    Response surface methodology (RSM) was used to develop and optimize the mesomorphic phase formulation for a meloxicam transdermal dosage form. A mixture design was applied to prepare formulations consisting of three independent variables: oleic acid (X1), distilled water (X2) and ethanol (X3). The flux and lag time (LT) were selected as dependent variables. The results showed that using mesomorphic phases as vehicles can significantly increase the flux and shorten the LT of the drug. The analysis of variance showed that the permeation parameters of meloxicam from the formulations were significantly influenced by the independent variables and their interactions. X3 (ethanol) had the greatest potential influence on the flux and LT, followed by X1 and X2. A new formulation was prepared according to the independent variable levels provided by RSM. The observed responses were in close agreement with the predicted values, demonstrating that RSM can be successfully used to optimize mesomorphic phase formulations.

  7. Application of Response Surface Methodology in the Preparation of Pectin-Caseinate Nanocomplexes for Potential Use as Nutraceutical Formulation: A Statistical Experimental Design Analysis

    Directory of Open Access Journals (Sweden)

    Sajedeh Bahrani

    2018-03-01

    Full Text Available Background: The formation of electrostatic complexes between two types of biopolymers, sodium caseinate (a derivative of the most abundant milk protein) and pectin (a natural hetero-polysaccharide), was studied as a function of the biopolymer concentrations and the pH of the solutions (3.9-4.3). Method: The size and morphology of the resulting complexes were investigated using laser light scattering and transmission electron microscopy, respectively. Response surface methodology (a three-factor, three-level Box-Behnken design) was used for the optimization procedure, with pH, pectin and sodium caseinate concentrations as independent variables. The particle size and polydispersity index of the nanocomplexes were considered as dependent variables. Results: Negatively charged nanocomplexes were produced below the isoelectric point of the protein (5.4), at pH 4.1, with suitable colloidal stability and an average particle size of about 100 nm. It was found that the particle size of the nanocomplexes could be controlled by changing the variables. Conclusion: In conclusion, response surface methodology is a simple, rapid and beneficial approach for the preparation, optimization and investigation of the effect of independent variables on the properties of the products.

  8. Scientific, statistical, practical, and regulatory considerations in design space development.

    Science.gov (United States)

    Debevec, Veronika; Srčič, Stanko; Horvat, Matej

    2018-03-01

    The quality by design (QbD) paradigm guides the pharmaceutical industry towards improved understanding of products and processes, and at the same time facilitates a high degree of manufacturing and regulatory flexibility throughout the establishment of the design space. This review article presents scientific, statistical and regulatory considerations in design space development. All key development milestones, starting with planning, selection of factors, experimental execution, data analysis, model development and assessment, verification, and validation, and ending with design space submission, are presented and discussed. The focus is especially on frequently ignored topics, like management of factors and CQAs that will not be included in experimental design, evaluation of risk of failure on design space edges, or modeling scale-up strategy. Moreover, development of a design space that is independent of manufacturing scale is proposed as the preferred approach.

  9. Optimal Design and Related Areas in Optimization and Statistics

    CERN Document Server

    Pronzato, Luc

    2009-01-01

    This edited volume, dedicated to Henry P. Wynn, reflects his broad range of research interests, focusing in particular on the applications of optimal design theory in optimization and statistics. It covers algorithms for constructing optimal experimental designs, general gradient-type algorithms for convex optimization, majorization and stochastic ordering, algebraic statistics, Bayesian networks and nonlinear regression. Written by leading specialists in the field, each chapter contains a survey of the existing literature along with substantial new material. This work will appeal to both the

  10. Scalable Algorithms for Adaptive Statistical Designs

    Directory of Open Access Journals (Sweden)

    Robert Oehmke

    2000-01-01

    Full Text Available We present a scalable, high-performance solution to multidimensional recurrences that arise in adaptive statistical designs. Adaptive designs are an important class of learning algorithms for a stochastic environment, and we focus on the problem of optimally assigning patients to treatments in clinical trials. While adaptive designs have significant ethical and cost advantages, they are rarely utilized because of the complexity of optimizing and analyzing them. Computational challenges include massive memory requirements, few calculations per memory access, and multiply-nested loops with dynamic indices. We analyze the effects of various parallelization options, and while standard approaches do not work well, with effort an efficient, highly scalable program can be developed. This allows us to solve problems thousands of times more complex than those solved previously, which helps make adaptive designs practical. Further, our work applies to many other problems involving neighbor recurrences, such as generalized string matching.
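
    The flavor of the underlying recurrence can be seen in a toy serial version for two Bernoulli arms with Beta(1, 1) priors (hypothetical parameters; the paper's contribution is the scalable parallelization of much larger versions of this kind of recursion).

        from functools import lru_cache

        # Expected number of successes when, at each step, the arm that
        # maximizes the Bayes-optimal continuation value is chosen.
        def expected_successes(horizon):
            @lru_cache(maxsize=None)
            def value(s1, f1, s2, f2, left):
                if left == 0:
                    return 0.0
                p1 = (s1 + 1) / (s1 + f1 + 2)   # posterior mean, arm 1
                p2 = (s2 + 1) / (s2 + f2 + 2)   # posterior mean, arm 2
                arm1 = (p1 * (1 + value(s1 + 1, f1, s2, f2, left - 1))
                        + (1 - p1) * value(s1, f1 + 1, s2, f2, left - 1))
                arm2 = (p2 * (1 + value(s1, f1, s2 + 1, f2, left - 1))
                        + (1 - p2) * value(s1, f1, s2, f2 + 1, left - 1))
                return max(arm1, arm2)
            return value(0, 0, 0, 0, horizon)

        print(expected_successes(20))  # expected successes over 20 patients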

  11. For the Love of Statistics: Appreciating and Learning to Apply Experimental Analysis and Statistics through Computer Programming Activities

    Science.gov (United States)

    Mascaró, Maite; Sacristán, Ana Isabel; Rufino, Marta M.

    2016-01-01

    For the past 4 years, we have been involved in a project that aims to enhance the teaching and learning of experimental analysis and statistics, of environmental and biological sciences students, through computational programming activities (using R code). In this project, through an iterative design, we have developed sequences of R-code-based…

  12. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    Science.gov (United States)

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…
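
    The two concepts can be demonstrated in a few lines (hypothetical roach names; any stand-in population works).

        import random

        # Random selection decides who gets studied; random assignment
        # decides who gets which condition.
        random.seed(1)
        population = [f"roach_{i}" for i in range(100)]
        sample = random.sample(population, 20)    # random selection from frame
        random.shuffle(sample)                    # randomize before assignment
        treatment, control = sample[:10], sample[10:]
        print(treatment[:3], control[:3])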

  13. Experimental investigation of statistical density function of decaying radioactive sources

    International Nuclear Information System (INIS)

    Salma, I.; Zemplen-Papp, E.

    1991-01-01

    The validity of the Poisson and the modified Poisson statistical density functions P(k) of observing k events in a short time interval is investigated experimentally in radioactive decay detection for various measuring times. The experiments to measure radioactive decay were performed with 89mY, using a multichannel analyzer. According to the results, Poisson statistics adequately describe the counting experiment for short measuring times. (author) 13 refs.; 4 figs
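
    The basic comparison behind such a test can be sketched as follows, with synthetic counts standing in for the 89mY data; in practice, bins with small expected counts would be pooled before a formal chi-square test.

        import numpy as np
        from scipy import stats

        # Do counts observed in fixed short intervals follow a Poisson law?
        rng = np.random.default_rng(0)
        counts = rng.poisson(lam=4.2, size=2000)   # stand-in for measured counts
        lam_hat = counts.mean()                    # estimated decay rate
        k = np.arange(counts.max() + 1)
        expected = stats.poisson.pmf(k, lam_hat) * counts.size
        observed = np.bincount(counts, minlength=k.size)
        chi2 = ((observed - expected) ** 2 / expected).sum()
        print(round(lam_hat, 3), round(chi2, 1))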

  14. Quasi-Experimental Designs for Causal Inference

    Science.gov (United States)

    Kim, Yongnam; Steiner, Peter

    2016-01-01

    When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…

  15. Insights in Experimental Data : Interactive Statistics with the ILLMO Program

    NARCIS (Netherlands)

    Martens, J.B.O.S.

    2017-01-01

    Empirical researchers turn to statistics to assist them in drawing conclusions, also called inferences, from their collected data. Often, this data is experimental data, i.e., it consists of (repeated) measurements collected in one or more distinct conditions. The observed data can hence be

  16. A statistical approach to optimizing concrete mixture design.

    Science.gov (United States)

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3^3). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m^3), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.
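
    A minimal sketch of the modeling step, assuming the paper's factor levels but with synthetic strength values standing in for the measured responses: fit a quadratic polynomial regression over the 3^3 grid.

        import numpy as np
        from itertools import product
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import PolynomialFeatures

        # The 27 factor-level combinations of the 3^3 full factorial.
        levels = list(product([0.38, 0.43, 0.48],   # w/cm ratio
                              [350, 375, 400],      # cementitious content
                              [0.35, 0.40, 0.45]))  # fine/total aggregate
        X = np.array(levels)
        rng = np.random.default_rng(0)
        y = 60 - 55 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 1, len(X))

        model = LinearRegression().fit(PolynomialFeatures(2).fit_transform(X), y)
        print(model.coef_.round(3))  # quadratic response-surface coefficients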

  17. A Statistical Approach to Optimizing Concrete Mixture Design

    Directory of Open Access Journals (Sweden)

    Shamsad Ahmad

    2014-01-01

    Full Text Available A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3^3). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m^3), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.

  18. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.

  19. Statistical considerations of graphite strength for assessing design allowable stresses

    International Nuclear Information System (INIS)

    Ishihara, M.; Mogi, H.; Ioka, I.; Arai, T.; Oku, T.

    1987-01-01

    Several aspects of statistics need to be considered to determine design allowable stresses for graphite structures. These include: 1) statistical variation of graphite material strength; 2) uncertainty of the calculated stress; 3) reliability (survival probability) required from the operational and safety performance of graphite structures. This paper deals with some statistical considerations of structural graphite for assessing design allowable stresses. Firstly, probability distribution functions of tensile and compressive strengths are investigated for candidate graphites for the experimental Very High Temperature Reactor. Normal, logarithmic normal and Weibull distribution functions are compared in terms of the coefficient of correlation to the measured strength data. This leads to the adoption of the normal distribution function. Then, the relation between the factor of safety and the fracture probability is discussed with respect to the following items: 1) As graphite strength is more variable than the strength of metallic materials, the effect of strength variation on the fracture probability is evaluated. 2) Fracture probability corresponding to a survival probability of 99 ∼ 99.9 (%) with a confidence level of 90 ∼ 95 (%) is discussed. 3) As the material properties used in the design analysis are usually the mean values of their variation, the additional effect of these variations on the fracture probability is discussed. Finally, the way to assure the minimum ultimate strength with the required survival probability and confidence level is discussed in view of the statistical treatment of strength data from varying sample numbers in a material acceptance test. (author)
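
    The distribution comparison can be sketched with SciPy's probability-plot correlation coefficient; the strength sample below is synthetic, standing in for the measured graphite data.

        import numpy as np
        from scipy import stats

        # Compare candidate strength distributions by the correlation
        # coefficient of their probability plots.
        rng = np.random.default_rng(1)
        strength = 30.0 * rng.weibull(8.0, size=200)   # hypothetical MPa values

        for name, dist in [("normal", stats.norm),
                           ("lognormal", stats.lognorm),
                           ("weibull", stats.weibull_min)]:
            params = dist.fit(strength)                # fitted shape/loc/scale
            _, (slope, intercept, r) = stats.probplot(strength, sparams=params,
                                                      dist=dist)
            print(f"{name}: r = {r:.4f}")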

  20. Experimental, statistical, and biological models of radon carcinogenesis

    International Nuclear Information System (INIS)

    Cross, F.T.

    1991-09-01

    Risk models developed for underground miners have not been consistently validated in studies of populations exposed to indoor radon. Imprecision in risk estimates results principally from differences between exposures in mines as compared to domestic environments and from uncertainties about the interaction between cigarette-smoking and exposure to radon decay products. Uncertainties in extrapolating miner data to domestic exposures can be reduced by means of a broad-based health effects research program that addresses the interrelated issues of exposure, respiratory tract dose, carcinogenesis (molecular/cellular and animal studies, plus developing biological and statistical models), and the relationship of radon to smoking and other copollutant exposures. This article reviews experimental animal data on radon carcinogenesis observed primarily in rats at Pacific Northwest Laboratory. Recent experimental and mechanistic carcinogenesis models of exposures to radon, uranium ore dust, and cigarette smoke are presented with statistical analyses of animal data. 20 refs., 1 fig

  1. Literature in Focus: Statistical Methods in Experimental Physics

    CERN Multimedia

    2007-01-01

    Frederick James was a high-energy physicist who became the CERN "expert" on statistics and is now well-known around the world, in part for this famous text. The first edition of Statistical Methods in Experimental Physics was originally co-written with four other authors and was published in 1971 by North Holland (now an imprint of Elsevier). It became such an important text that demand for it has continued for more than 30 years. Fred has updated it and it was released in a second edition by World Scientific in 2006. It is still a top seller and there is no exaggeration in calling it «the» reference on the subject. A full review of the title appeared in the October CERN Courier.Come and meet the author to hear more about how this book has flourished during its 35-year lifetime. Frederick James Statistical Methods in Experimental Physics Monday, 26th of November, 4 p.m. Council Chamber (Bldg. 503-1-001) The author will be introduced...

  2. Some challenges with statistical inference in adaptive designs.

    Science.gov (United States)

    Hung, H M James; Wang, Sue-Jane; Yang, Peiling

    2014-01-01

    Adaptive designs have generated a great deal of attention in clinical trial communities. The literature contains many statistical methods to deal with the added statistical uncertainties concerning the adaptations. Increasingly encountered in regulatory applications are adaptive statistical information designs that allow modification of sample size or related statistical information and adaptive selection designs that allow selection of doses or patient populations during the course of a clinical trial. For adaptive statistical information designs, a few statistical testing methods are mathematically equivalent, as a number of articles have stipulated, but arguably there are large differences in their practical ramifications. We pinpoint some undesirable features of these methods in this work. For adaptive selection designs, the selection based on biomarker data for testing the correlated clinical endpoints may increase statistical uncertainty in terms of type I error probability, and most importantly the increased statistical uncertainty may be impossible to assess.

  3. Sudoku Squares as Experimental Designs

    Indian Academy of Sciences (India)

    IAS Admin

    theoretical and applied research in many topics in statistics. Sudoku is a popular ... Figure 1 shows an example of a Sudoku puzzle that appeared in [1]. ... designs that study the effect of three explanatory variables, each at n levels, on a ...

  4. Experimental Designs Exercises and Solutions

    CERN Document Server

    Kabe, DG

    2007-01-01

    This volume provides a collection of exercises, together with their solutions, in the design and analysis of experiments. The theoretical results essential for understanding are given first. The exercises have been collected over a long period of the authors' teaching. They are particularly helpful to students studying the design of experiments, and to instructors and researchers engaged in teaching and research on experimental design.

  5. EBTS: DESIGN AND EXPERIMENTAL STUDY

    International Nuclear Information System (INIS)

    PIKIN, A.; ALESSI, J.; BEEBE, E.; KPONOU, A.; PRELEC, K.; KUZNETSOV, G.; TIUNOV, M.

    2000-01-01

    Experimental study of the BNL Electron Beam Test Stand (EBTS), which is a prototype of the Relativistic Heavy Ion Collider (RHIC) Electron Beam Ion Source (EBIS), is currently underway. The basic physics and engineering aspects of a high current EBIS implemented in EBTS are outlined and construction of its main systems is presented. Efficient transmission of a 10 A electron beam through the ion trap has been achieved. Experimental results on generation of multiply charged ions with both continuous gas and external ion injection confirm stable operation of the ion trap

  6. From Cookbook to Experimental Design

    Science.gov (United States)

    Flannagan, Jenny Sue; McMillan, Rachel

    2009-01-01

    Developing expertise, whether from cook to chef or from student to scientist, occurs over time and requires encouragement, guidance, and support. One key goal of an elementary science program should be to move students toward expertise in their ability to design investigative questions. The ability to design a testable question is difficult for…

  7. Bayesian optimal experimental design for the Shock-tube experiment

    International Nuclear Information System (INIS)

    Terejanu, G; Bryant, C M; Miki, K

    2013-01-01

    The sequential optimal experimental design formulated as an information-theoretic sensitivity analysis is applied to the ignition delay problem using real experimental data. The optimal design is obtained by maximizing the statistical dependence between the model parameters and observables, which is quantified in this study using mutual information. This is naturally posed in the Bayesian framework. The study shows that by monitoring the information gain after each measurement update, one can define a stopping criterion for the experimental process which gives a minimal set of experiments to efficiently learn the Arrhenius parameters.
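
    A nested Monte Carlo estimate of the mutual-information criterion can be sketched as follows; the linear-Gaussian model, noise level, and candidate designs are toy assumptions, not the paper's ignition-delay model.

        import numpy as np
        from scipy.special import logsumexp

        rng = np.random.default_rng(0)
        SIGMA = 0.1  # assumed observation noise

        # MI(d) ~ mean over prior draws of log p(y|theta,d) - log p(y|d),
        # with the evidence approximated by an inner Monte Carlo average.
        def expected_information_gain(d, n_outer=500, n_inner=500):
            theta = rng.normal(size=n_outer)                   # prior draws
            y = theta * d + rng.normal(0.0, SIGMA, n_outer)    # simulated data
            log_lik = -0.5 * ((y - theta * d) / SIGMA) ** 2
            theta_in = rng.normal(size=n_inner)                # evidence draws
            diff = y[:, None] - theta_in[None, :] * d
            log_evid = (logsumexp(-0.5 * (diff / SIGMA) ** 2, axis=1)
                        - np.log(n_inner))
            return float(np.mean(log_lik - log_evid))          # MI in nats

        designs = np.linspace(0.1, 2.0, 10)
        best = max(designs, key=expected_information_gain)
        print(best)  # the candidate design with the largest information gain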

  8. Experimental design and process optimization

    CERN Document Server

    Rodrigues, Maria Isabel; Dos Santos, Elian Luiz

    2014-01-01

    Initial Considerations; Topics of Elementary Statistics; Introductory Notions; General Ideas; Variables; Populations and Samples; Importance of the Form of the Population; First Ideas of Inference on a Normal Population; Parameters and Estimates; Notions on Testing Hypotheses; Inference of the Mean of a Normal Population; Inference of the Variance of a Normal Population; Inference of the Means of Two Normal Populations; Independent Samples; Paired Samples...

  9. A study on the advanced statistical core thermal design methodology

    International Nuclear Information System (INIS)

    Lee, Seung Hyuk

    1992-02-01

    A statistical core thermal design methodology for generating the limit DNBR and the nominal DNBR is proposed and used in assessing the best-estimate thermal margin in a reactor core. Firstly, the Latin Hypercube Sampling Method, instead of the conventional Experimental Design Technique, is utilized as an input sampling method for a regression analysis to evaluate its sampling efficiency. Secondly, and as the main topic, the Modified Latin Hypercube Sampling and Hypothesis Test Statistics method is proposed as a substitute for the current statistical core thermal design method. This new methodology adopts a 'Modified Latin Hypercube Sampling Method', which uses the mean values of each interval of the input variables instead of random values, to avoid the extreme cases that arise in the tail areas of some parameters. Next, the independence of the input variables is verified through a 'Correlation Coefficient Test' for the statistical treatment of their uncertainties, and the distribution type of the DNBR response is determined through a 'Goodness of Fit Test'. Finally, the limit DNBR with one-sided 95% probability and 95% confidence level, 'DNBR 95/95', is estimated. The advantage of this methodology over the conventional statistical method using Response Surface and Monte Carlo simulation techniques lies in the simplicity of the analysis procedure, while maintaining the same level of confidence in the limit DNBR result. This methodology is applied to two cases of DNBR margin calculation. The first case is the application to the determination of the limit DNBR, where the DNBR margin is determined by the difference between the nominal DNBR and the limit DNBR. The second case is the application to the determination of the nominal DNBR, where the DNBR margin is determined by the difference between the lower limit value of the nominal DNBR and the CHF correlation limit being used. From this study, it is deduced that the proposed methodology gives good agreement in the DNBR results.

  10. A statistical model for telecommunication link design

    Science.gov (United States)

    Yuen, J. H.

    1975-01-01

    An evaluation is conducted of the current telecommunication link design technique and a description is presented of an alternative method, called the probability distribution method (PDM), which is free of the disadvantages of the current technique while retaining its advantages. The PDM preserves the simplicity of the design control table (DCT) format. The use of the DCT as a management design control tool is continued. The telecommunication link margin probability density function used presents the probability of achieving any particular value of link performance. It is, therefore, possible to assess the performance risk and other tradeoffs.

  11. Quasi experimental designs in pharmacist intervention research.

    Science.gov (United States)

    Krass, Ines

    2016-06-01

    Background In the field of pharmacist intervention research it is often difficult to conform to the rigorous requirements of "true experimental" models, especially the requirement of randomization. When randomization is not feasible, a practice-based researcher can choose from a range of "quasi-experimental designs", i.e., non-randomised and at times non-controlled designs. Objective The aim of this article was to provide an overview of quasi-experimental designs, discuss their strengths and weaknesses, and investigate their application in pharmacist intervention research over the previous decade. Results In the literature quasi-experimental studies may be classified into five broad categories: quasi-experimental designs without control groups; quasi-experimental designs that use control groups with no pre-test; quasi-experimental designs that use control groups and pre-tests; interrupted time series; and stepped wedge designs. Quasi-experimental study designs have consistently featured in the evolution of pharmacist intervention research. The most commonly applied of all quasi-experimental designs in the practice-based research literature are the one-group pre-post-test design and the non-equivalent control group design (i.e., untreated control group with dependent pre-tests and post-tests), and they have been used to test the impact of pharmacist interventions in general medications management as well as in specific disease states. Conclusion Quasi-experimental studies have a role to play as proof of concept, in the pilot phases of interventions, when testing different intervention components, especially in complex interventions. They serve to develop an understanding of possible intervention effects: while in isolation they yield weak evidence of clinical efficacy, taken collectively they help build a body of evidence in support of the value of pharmacist interventions across different practice settings and countries. However, when a traditional RCT is not feasible for

  12. Experimental investigation of statistical models describing distribution of counts

    International Nuclear Information System (INIS)

    Salma, I.; Zemplen-Papp, E.

    1992-01-01

    The binomial, Poisson and modified Poisson models which are used for describing the statistical nature of the distribution of counts are compared theoretically, and conclusions for application are considered. The validity of the Poisson and the modified Poisson statistical distribution for observing k events in a short time interval is investigated experimentally for various measuring times. The experiments to measure the influence of the significant radioactive decay were performed with 89mY (T1/2 = 16.06 s), using a multichannel analyzer (4096 channels) in the multiscaling mode. According to the results, Poisson statistics describe the counting experiment for short measuring times (up to T = 0.5 T1/2) and its application is recommended. However, analysis of the data demonstrated, with confidence, that for long measurements (T ≥ T1/2) the Poisson distribution is not valid and the modified Poisson function is preferable. The practical implications in calculating uncertainties and in optimizing the measuring time are discussed. Differences between the standard deviations evaluated on the basis of the Poisson and binomial models are especially significant for experiments with long measuring times (T/T1/2 ≥ 2) and/or large detection efficiency (ε > 0.30). Optimization of the measuring time for paired observations yields the same solution for either the binomial or the Poisson distribution. (orig.)

  13. OSHA and Experimental Safety Design.

    Science.gov (United States)

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely the Occupational Safety and Health Administration (OSHA), be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and the roles/regulations of other agencies.…

  14. Statistical approach for uncertainty quantification of experimental modal model parameters

    DEFF Research Database (Denmark)

    Luczak, M.; Peeters, B.; Kahsin, M.

    2014-01-01

    Composite materials are widely used in the manufacture of aerospace and wind energy structural components. These load carrying structures are subjected to dynamic time-varying loading conditions. Robust structural dynamics identification procedures impose tight constraints on the quality of modal models. This paper aims at a systematic approach for uncertainty quantification of the parameters of the modal models estimated from experimentally obtained data. Statistical analysis of modal parameters is implemented to derive an assessment of the entire modal model uncertainty measure. Investigated structures represent different complexity levels ranging from coupon, through sub-component, up to fully assembled aerospace and wind energy structural components made of composite materials. The proposed method is demonstrated on two application cases of a small and a large wind turbine blade.

  15. Experimental statistical signature of many-body quantum interference

    Science.gov (United States)

    Giordani, Taira; Flamini, Fulvio; Pompili, Matteo; Viggianiello, Niko; Spagnolo, Nicolò; Crespi, Andrea; Osellame, Roberto; Wiebe, Nathan; Walschaers, Mattia; Buchleitner, Andreas; Sciarrino, Fabio

    2018-03-01

    Multi-particle interference is an essential ingredient for fundamental quantum mechanics phenomena and for quantum information processing to provide a computational advantage, as recently emphasized by boson sampling experiments. Hence, developing a reliable and efficient technique to witness its presence is pivotal in achieving the practical implementation of quantum technologies. Here, we experimentally identify genuine many-body quantum interference via a recent efficient protocol, which exploits statistical signatures at the output of a multimode quantum device. We successfully apply the test to validate three-photon experiments in an integrated photonic circuit, providing an extensive analysis on the resources required to perform it. Moreover, drawing upon established techniques of machine learning, we show how such tools help to identify the—a priori unknown—optimal features to witness these signatures. Our results provide evidence on the efficacy and feasibility of the method, paving the way for its adoption in large-scale implementations.

  16. Chemicals-Based Formulation Design: Virtual Experimentations

    DEFF Research Database (Denmark)

    Conte, Elisa; Gani, Rafiqul

    2011-01-01

    This paper presents a systematic procedure for virtual experimentations related to the design of liquid formulated products. All the experiments that need to be performed when designing a liquid formulated product (lotion), such as ingredients selection and testing, solubility tests, property measurements ... on the design of an insect repellent lotion will show that the software is an essential instrument in decision making, and that it reduces time and resources since experimental efforts can be focused on one or a few product alternatives....

  17. Analysis and Evaluation of Statistical Models for Integrated Circuits Design

    Directory of Open Access Journals (Sweden)

    Sáenz-Noval J.J.

    2011-10-01

    Full Text Available Statistical models for integrated circuits (ICs) allow us to estimate the percentage of acceptable devices in a batch before fabrication. At present, the Pelgrom model is the statistical model most widely accepted in the industry; however, it was derived from a micrometer-scale technology, which does not guarantee reliability in nanometric manufacturing processes. This work considers three of the most relevant statistical models in the industry and evaluates their limitations and advantages in analog design, so that the designer has a better criterion for making a choice. Moreover, it shows how several statistical models can be used for each one of the stages and design purposes.

  18. Statistical design of mass spectrometry calibration procedures

    International Nuclear Information System (INIS)

    Bayne, C.K.

    1996-11-01

    The main objective of this task was to agree on calibration procedures to estimate the system parameters (i.e., dead-time correction, ion-counting conversion efficiency, and detector efficiency factors) for SAL's new Finnigan MAT-262 mass spectrometer. SAL will use this mass spectrometer in a clean laboratory, which was opened in December 1995, to measure uranium and plutonium isotopes in environmental samples. The Finnigan MAT-262 mass spectrometer has a multi-detector system with seven Faraday cup detectors and one ion-counter for the measurement of very small signals (e.g., in the 10^-17 Ampere range). ORNL has made preliminary estimates of the system parameters based on SAL's experimental data measured in late 1994, when the Finnigan instrument was relatively new. SAL generated additional data in 1995 to verify the calibration procedures for estimating the dead-time correction factor, the ion-counting conversion factor and the Faraday cup detector efficiency factors. The system parameters estimated on the present data will have to be re-established when the Finnigan MAT-262 is moved to the new clean laboratory. Different methods will be used to analyze environmental samples than the measurement methods currently in use. For example, the environmental samples will be electroplated on a single filament rather than using the current two-filament system. An outline of the calibration standard operating procedure (SOP) is included.

  19. HAMMLAB 1999 experimental control room: design - design rationale - experiences

    International Nuclear Information System (INIS)

    Foerdestroemmen, N. T.; Meyer, B. D.; Saarni, R.

    1999-01-01

    A presentation of HAMMLAB 1999 experimental control room, and the accumulated experiences gathered in the areas of design and design rationale as well as user experiences. It is concluded that HAMMLAB 1999 experimental control room is a realistic, compact and efficient control room well suited as an Advanced NPP Control Room (ml)

  20. Chemical-Based Formulation Design: Virtual Experimentation

    DEFF Research Database (Denmark)

    Conte, Elisa; Gani, Rafiqul

    This paper presents a software, the virtual Product-Process Design laboratory (virtual PPD-lab) and the virtual experimental scenarios for design/verification of consumer oriented liquid formulated products where the software can be used. For example, the software can be employed for the design......, the additives and/or their mixtures (formulations). Therefore, the experimental resources can focus on a few candidate product formulations to find the best product. The virtual PPD-lab allows various options for experimentations related to design and/or verification of the product. For example, the selection...... design, model adaptation). All of the above helps to perform virtual experiments by blending chemicals together and observing their predicted behaviour. The paper will highlight the application of the virtual PPD-lab in the design and/or verification of different consumer products (paint formulation...

  1. Application of descriptive statistics in analysis of experimental data

    OpenAIRE

    Mirilović Milorad; Pejin Ivana

    2008-01-01

    Statistics today represent a group of scientific methods for the quantitative and qualitative investigation of variations in mass appearances. In fact, statistics present a group of methods that are used for the accumulation, analysis, presentation and interpretation of data necessary for reaching certain conclusions. Statistical analysis is divided into descriptive statistical analysis and inferential statistics. The values which represent the results of an experiment, and which are the subj...

  2. Considering RNAi experimental design in parasitic helminths.

    Science.gov (United States)

    Dalzell, Johnathan J; Warnock, Neil D; McVeigh, Paul; Marks, Nikki J; Mousley, Angela; Atkinson, Louise; Maule, Aaron G

    2012-04-01

    Almost a decade has passed since the first report of RNA interference (RNAi) in a parasitic helminth. Whilst much progress has been made with RNAi informing gene function studies in disparate nematode and flatworm parasites, substantial and seemingly prohibitive difficulties have been encountered in some species, hindering progress. An appraisal of current practices, trends and ideals of RNAi experimental design in parasitic helminths is both timely and necessary for a number of reasons: firstly, the increasing availability of parasitic helminth genome/transcriptome resources means there is a growing need for gene function tools such as RNAi; secondly, fundamental differences and unique challenges exist for parasite species which do not apply to model organisms; thirdly, the inherent variation in experimental design, and reported difficulties with reproducibility undermine confidence. Ideally, RNAi studies of gene function should adopt standardised experimental design to aid reproducibility, interpretation and comparative analyses. Although the huge variations in parasite biology and experimental endpoints make RNAi experimental design standardization difficult or impractical, we must strive to validate RNAi experimentation in helminth parasites. To aid this process we identify multiple approaches to RNAi experimental validation and highlight those which we deem to be critical for gene function studies in helminth parasites.

  3. Four Papers on Contemporary Software Design Strategies for Statistical Methodologists

    OpenAIRE

    Carey, Vincent; Cook, Dianne

    2014-01-01

    Software design impacts much of statistical analysis, dramatically so as technology has changed in recent years, and it is exciting to learn how statistical software is adapting and changing. This leads to the collection of papers published here, written by John Chambers, Duncan Temple Lang, Michael Lawrence, Martin Morgan, Yihui Xie, Heike Hofmann and Xiaoyue Cheng.

  4. Experimental design and quantitative analysis of microbial community multiomics.

    Science.gov (United States)

    Mallick, Himel; Ma, Siyuan; Franzosa, Eric A; Vatanen, Tommi; Morgan, Xochitl C; Huttenhower, Curtis

    2017-11-30

    Studies of the microbiome have become increasingly sophisticated, and multiple sequence-based, molecular methods as well as culture-based methods exist for population-scale microbiome profiles. To link the resulting host and microbial data types to human health, several experimental design considerations, data analysis challenges, and statistical epidemiological approaches must be addressed. Here, we survey current best practices for experimental design in microbiome molecular epidemiology, including technologies for generating, analyzing, and integrating microbiome multiomics data. We highlight studies that have identified molecular bioactives that influence human health, and we suggest steps for scaling translational microbiome research to high-throughput target discovery across large populations.

  5. Design activities of a fusion experimental breeder

    International Nuclear Information System (INIS)

    Huang, J.; Feng, K.; Sheng, G.

    1999-01-01

    Fusion reactor design studies in China are supported by a fusion-fission hybrid reactor research program. The purpose of this program is to explore the potential near-term application of fusion energy, supporting long-term fusion energy development on the one hand and fission energy development on the other. During 1992-1996 a detailed, consistent and integral conceptual design of a Fusion Experimental Breeder, the FEB, was completed. Beginning in 1996, a further design study towards an Engineering Outline Design of the FEB, the FEB-E, was started. The design activities are briefly described. (author)

  6. Design activities of a fusion experimental breeder

    International Nuclear Information System (INIS)

    Huang, J.; Feng, K.; Sheng, G.

    2001-01-01

    Fusion reactor design studies in China are supported by a fusion-fission hybrid reactor research program. The purpose of this program is to explore the potential near-term application of fusion energy, supporting long-term fusion energy development on the one hand and fission energy development on the other. During 1992-1996 a detailed, consistent and integral conceptual design of a Fusion Experimental Breeder, the FEB, was completed. Beginning in 1996, a further design study towards an Engineering Outline Design of the FEB, the FEB-E, was started. The design activities are briefly described. (author)

  7. Optimal Bayesian Experimental Design for Combustion Kinetics

    KAUST Repository

    Huan, Xun

    2011-01-04

    Experimental diagnostics play an essential role in the development and refinement of chemical kinetic models, whether for the combustion of common complex hydrocarbons or of emerging alternative fuels. Questions of experimental design—e.g., which variables or species to interrogate, at what resolution and under what conditions—are extremely important in this context, particularly when experimental resources are limited. This paper attempts to answer such questions in a rigorous and systematic way. We propose a Bayesian framework for optimal experimental design with nonlinear simulation-based models. While the framework is broadly applicable, we use it to infer rate parameters in a combustion system with detailed kinetics. The framework introduces a utility function that reflects the expected information gain from a particular experiment. Straightforward evaluation (and maximization) of this utility function requires Monte Carlo sampling, which is infeasible with computationally intensive models. Instead, we construct a polynomial surrogate for the dependence of experimental observables on model parameters and design conditions, with the help of dimension-adaptive sparse quadrature. Results demonstrate the efficiency and accuracy of the surrogate, as well as the considerable effectiveness of the experimental design framework in choosing informative experimental conditions.
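
    The expected-information-gain utility described here can be estimated with a nested Monte Carlo loop. A toy sketch, with a one-parameter exponential-decay model standing in for the kinetics simulator (the model, prior and noise level are invented for illustration):

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate(theta, design, noise_sd=0.1):
          """Toy forward model: one noisy observable per experiment."""
          return np.exp(-theta * design) + noise_sd * rng.standard_normal()

      def log_lik(y, theta, design, noise_sd=0.1):
          resid = y - np.exp(-theta * design)
          return -0.5 * (resid / noise_sd) ** 2 - np.log(noise_sd * np.sqrt(2 * np.pi))

      def expected_information_gain(design, n_outer=500, n_inner=500):
          """Nested Monte Carlo estimate of the expected KL divergence (EIG)."""
          outer = rng.uniform(0.5, 1.5, size=n_outer)  # prior draws for data
          inner = rng.uniform(0.5, 1.5, size=n_inner)  # prior draws for evidence
          total = 0.0
          for theta in outer:
              y = simulate(theta, design)
              # log p(y|theta) minus the MC log-evidence log (1/M) sum p(y|theta_m)
              total += log_lik(y, theta, design) - np.log(
                  np.mean(np.exp(log_lik(y, inner, design))))
          return total / n_outer

      # Rank candidate designs (e.g., measurement conditions) by estimated EIG.
      for d in [0.1, 1.0, 3.0]:
          print(f"design = {d}: EIG ~ {expected_information_gain(d):.3f} nats")

    Maximizing this estimate over candidate designs selects the most informative experiment; the paper's polynomial surrogate replaces the inner model evaluations precisely because this double loop is infeasible with expensive simulators.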

  8. Statistics of Stacked Strata on Experimental Shelf Margins

    Science.gov (United States)

    Fernandes, A. M.; Straub, K. M.

    2015-12-01

    Continental margin deposits provide the most complete record on Earth of paleo-landscapes, but these records are complex and difficult to interpret. To a seismic geomorphologist or stratigrapher, mapped surfaces often present a static, diachronous record of these landscapes through time. We present data that capture the dynamics of experimental shelf-margin landscapes at high temporal resolution and define internal hierarchies within stacked channelized and weakly channelized deposits from the shelf to the slope. Motivated by observations from acoustically-imaged continental margins offshore Brunei and in the Gulf of Mexico, we use physical experiments to quantify stratal patterns of sub-aqueous slope channels and lobes that are linked to delta-top channels. The data presented here are from an experiment that was run for 26 hours of experimental run time. Overhead photographs and topographic scans captured flow dynamics and surface aggradation/degradation every ten minutes. Currents rich in sediment built a delta that prograded to the shelf-edge. These currents were designed to plunge at the shoreline and travel as turbidity currents beyond the delta and onto the continental slope. Pseudo-subsidence was imposed by a slight base-level rise that generated accommodation space and promoted the construction of stratigraphy on the delta-top. Compensational stacking is a term frequently applied to deposits that tend to fill in topographic lows in channelized and weakly channelized systems. The compensation index, a metric used to quantify the strength of compensation, is used here to characterize deposits at different temporal scales on the experimental landscape. The compensation timescale is the characteristic time at which the accumulated deposits begin to match the shape of basin-wide subsidence (uniform for these experiments). We will use the compensation indices along strike transects across the delta, proximal slope and distal slope to evaluate the

  9. Application of a statistical thermal design procedure to evaluate the PWR DNBR safety analysis limits

    International Nuclear Information System (INIS)

    Robeyns, J.; Parmentier, F.; Peeters, G.

    2001-01-01

    In the framework of safety analysis for the Belgian nuclear power plants and for reload compatibility studies, Tractebel Energy Engineering (TEE) has developed, to define a 95/95 DNBR criterion, a statistical thermal design method based on the analytical full statistical approach: the Statistical Thermal Design Procedure (STDP). In that methodology, each DNBR value in the core assemblies is calculated with an adapted CHF (Critical Heat Flux) correlation implemented in the sub-channel code Cobra for core thermal hydraulic analysis. The uncertainties of the correlation are represented by statistical parameters calculated from an experimental database. The main objective of a sub-channel analysis is to prove that in all class 1 and class 2 situations, the minimum DNBR (Departure from Nucleate Boiling Ratio) remains higher than the Safety Analysis Limit (SAL). The SAL value is calculated from the Statistical Design Limit (SDL) value adjusted with some penalties and deterministic factors. The search for a realistic value for the SDL is the objective of statistical thermal design methods. In this report, we apply a full statistical approach to define the DNBR criterion or SDL (Statistical Design Limit) with strict observance of the design criteria defined in the Standard Review Plan. The same statistical approach is used to define the expected number of rods experiencing DNB. (author)
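
    A 95/95 criterion of the kind discussed in this record is often realized in practice with order-statistics (Wilks) tolerance limits on Monte Carlo output. A toy sketch, not TEE's STDP, with an invented DNBR response and invented uncertainty distributions:

      import numpy as np

      rng = np.random.default_rng(1)

      def dnbr_samples(n):
          """Toy Monte Carlo DNBR model: inputs drawn per assumed uncertainties."""
          power = rng.normal(1.00, 0.02, n)    # relative power
          flow = rng.normal(1.00, 0.03, n)     # relative coolant flow
          chf_err = rng.normal(1.00, 0.05, n)  # CHF correlation uncertainty
          return 1.8 * chf_err * flow / power  # invented DNBR response

      # Wilks (first order, one-sided): with N = 59 runs the sample minimum is
      # a 95% probability / 95% confidence lower bound, since 1 - 0.95**59 >= 0.95.
      n_runs = 59
      bound = dnbr_samples(n_runs).min()
      print(f"95/95 lower DNBR bound from {n_runs} runs: {bound:.3f}")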

  10. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
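
    For interval-valued data such as these, descriptive statistics become intervals themselves. For statistics that are monotone in each observation, such as the mean and median, the bounds follow from evaluating at the interval endpoints (variance bounds are harder, and in general NP-hard, which is part of the computability question the report addresses). A small illustration with made-up intervals:

      import numpy as np

      # Measurements reported as intervals [lo, hi] (epistemic uncertainty).
      intervals = np.array([[1.0, 1.4], [0.8, 1.1], [1.2, 1.9], [0.9, 1.3]])

      # The mean of interval data is itself an interval; the extremes occur
      # when every observation sits at its lower (resp. upper) endpoint.
      print(f"mean in [{intervals[:, 0].mean():.3f}, {intervals[:, 1].mean():.3f}]")

      # The median bounds follow the same endpoint argument.
      print(f"median in [{np.median(intervals[:, 0]):.3f}, "
            f"{np.median(intervals[:, 1]):.3f}]")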

  11. Statistically designed experiments to screen chemical mixtures for possible interactions

    NARCIS (Netherlands)

    Groten, J.P.; Tajima, O.; Feron, V.J.; Schoen, E.D.

    1998-01-01

    For the accurate analysis of possible interactive effects of chemicals in a defined mixture, statistical designs are necessary to develop clear and manageable experiments. For instance, factorial designs have been successfully used to detect two-factor interactions. Particularly useful for this
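
    A two-level full factorial design of the kind this record mentions is straightforward to generate. The sketch below builds the 2^4 design for four chemicals plus the model matrix whose two-factor interaction columns support interaction screening (the factor names are placeholders):

      from itertools import product

      import numpy as np

      # Full two-level factorial design for four chemicals: each factor dosed
      # at a low (-1) or high (+1) level, 2**4 = 16 mixture treatments in total.
      factors = ["A", "B", "C", "D"]
      design = np.array(list(product([-1, 1], repeat=len(factors))))

      # Model matrix with an intercept, main effects, and all two-factor
      # interactions; regressing responses on it screens pairwise interactions.
      cols, names = [np.ones(len(design))], ["1"]
      for i, f in enumerate(factors):
          cols.append(design[:, i])
          names.append(f)
      for i in range(len(factors)):
          for j in range(i + 1, len(factors)):
              cols.append(design[:, i] * design[:, j])
              names.append(factors[i] + factors[j])
      X = np.column_stack(cols)
      print(design.shape, X.shape, names)  # (16, 4) (16, 11) [...]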

  12. Experimental design research approaches, perspectives, applications

    CERN Document Server

    Stanković, Tino; Štorga, Mario

    2016-01-01

    This book presents a new, multidisciplinary perspective on and paradigm for integrative experimental design research. It addresses various perspectives on methods, analysis and overall research approach, and how they can be synthesized to advance understanding of design. It explores the foundations of experimental approaches and their utility in this domain, and brings together analytical approaches to promote an integrated understanding. The book also investigates where these approaches lead to and how they link design research more fully with other disciplines (e.g. psychology, cognition, sociology, computer science, management). Above all, the book emphasizes the integrative nature of design research in terms of the methods, theories, and units of study—from the individual to the organizational level. Although this approach offers many advantages, it has inherently led to a situation in current research practice where methods are diverging and integration between individual, team and organizational under...

  13. Statistical Multipath Model Based on Experimental GNSS Data in Static Urban Canyon Environment

    Directory of Open Access Journals (Sweden)

    Yuze Wang

    2018-04-01

    A deep understanding of multipath characteristics is essential to design signal simulators and receivers in global navigation satellite system applications. As new constellations are deployed and more applications occur in the urban environment, the statistical multipath models of navigation signals need further study. In this paper, we present statistical distribution models of multipath time delay, multipath power attenuation, and multipath fading frequency based on experimental data in the urban canyon environment. The raw data on multipath characteristics are obtained by processing real navigation signals to study the statistical distributions. Fitting the statistical data shows that the probability distribution of time delay follows a gamma distribution, which is related to the waiting time of Poisson-distributed events. The fading frequency follows an exponential distribution, and the mean of the multipath power attenuation decreases linearly with increasing time delay. In addition, the detailed statistical characteristics of satellites at different elevations and in different orbits are studied, and the parameters of each distribution are quite different. The research results give useful guidance to navigation simulator and receiver designers.
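
    Fitting the distribution families reported here is routine with scipy. The sketch below uses synthetic draws in place of the paper's measured GNSS data, purely to show the mechanics:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)

      # Stand-ins for measured multipath characteristics (synthetic data only).
      delays_ns = rng.gamma(shape=2.0, scale=40.0, size=1000)
      fading_hz = rng.exponential(scale=0.5, size=1000)

      # Fit a gamma distribution to the multipath time delays ...
      shape, loc, scale = stats.gamma.fit(delays_ns, floc=0)
      print(f"gamma fit: shape={shape:.2f}, scale={scale:.1f} ns")

      # ... and an exponential distribution to the fading frequencies.
      loc_e, scale_e = stats.expon.fit(fading_hz, floc=0)
      print(f"exponential fit: mean fading frequency = {scale_e:.2f} Hz")

      # Goodness of fit via a Kolmogorov-Smirnov test.
      print(stats.kstest(delays_ns, "gamma", args=(shape, loc, scale)))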

  14. An Introduction to Experimental Design Research

    DEFF Research Database (Denmark)

    Cash, Philip; Stanković, Tino; Štorga, Mario

    2016-01-01

    Design research brings together influences from the whole gamut of social, psychological, and more technical sciences to create a tradition of empirical study stretching back over 50 years (Horvath 2004; Cross 2007). A growing part of this empirical tradition is experimental, which has gained in ...

  15. Paradigms for adaptive statistical information designs: practical experiences and strategies.

    Science.gov (United States)

    Wang, Sue-Jane; Hung, H M James; O'Neill, Robert

    2012-11-10

    In the last decade or so, interest in adaptive design clinical trials has gradually been directed towards their use in regulatory submissions by pharmaceutical drug sponsors to evaluate investigational new drugs. Methodological advances of adaptive designs are abundant in the statistical literature since the 1970s. The adaptive design paradigm has been enthusiastically perceived to increase the efficiency and to be more cost-effective than the fixed design paradigm for drug development. Much interest in adaptive designs is in those studies with two-stages, where stage 1 is exploratory and stage 2 depends upon stage 1 results, but where the data of both stages will be combined to yield statistical evidence for use as that of a pivotal registration trial. It was not until the recent release of the US Food and Drug Administration Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics (2010) that the boundaries of flexibility for adaptive designs were specifically considered for regulatory purposes, including what are exploratory goals, and what are the goals of adequate and well-controlled (A&WC) trials (2002). The guidance carefully described these distinctions in an attempt to minimize the confusion between the goals of preliminary learning phases of drug development, which are inherently substantially uncertain, and the definitive inference-based phases of drug development. In this paper, in addition to discussing some aspects of adaptive designs in a confirmatory study setting, we underscore the value of adaptive designs when used in exploratory trials to improve planning of subsequent A&WC trials. One type of adaptation that is receiving attention is the re-estimation of the sample size during the course of the trial. We refer to this type of adaptation as an adaptive statistical information design. Specifically, a case example is used to illustrate how challenging it is to plan a confirmatory adaptive statistical information
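
    One concrete form of the adaptive statistical information design discussed here is re-estimating the sample size from an interim estimate of the nuisance standard deviation. A minimal two-sample normal-approximation sketch with invented numbers:

      import numpy as np
      from scipy.stats import norm

      def required_n_per_arm(sd, delta, alpha=0.05, power=0.80):
          """Two-sample sample size per arm under a normal approximation."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return int(np.ceil(2 * (z * sd / delta) ** 2))

      # Planning assumption vs. an interim re-estimate of the nuisance SD.
      delta = 5.0                                      # relevant difference
      print(required_n_per_arm(sd=10.0, delta=delta))  # planned: 63 per arm
      print(required_n_per_arm(sd=13.0, delta=delta))  # re-estimated: 107 per arm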

  16. Synchronised laser chaos communication: statistical investigation of an experimental system

    OpenAIRE

    Lawrance, Anthony J.; Papamarkou, Theodore; Uchida, Atsushi

    2017-01-01

    The paper is concerned with analyzing data from an experimental antipodal laser-based chaos shift-keying communication system. Binary messages are embedded in a chaotically behaving laser wave which is transmitted through a fiber-optic cable and are decoded at the receiver using a second laser synchronized with the emitter laser. Instrumentation in the experimental system makes it particularly interesting to be able to empirically analyze both optical noise and synchronization error as well a...

  17. Statistical Analysis of Designed Experiments Theory and Applications

    CERN Document Server

    Tamhane, Ajit C

    2012-01-01

    An indispensable guide to understanding and designing modern experiments. The tools and techniques of Design of Experiments (DOE) allow researchers to successfully collect, analyze, and interpret data across a wide array of disciplines. Statistical Analysis of Designed Experiments provides a modern and balanced treatment of DOE methodology with thorough coverage of the underlying theory and standard designs of experiments, guiding the reader through applications to research in various fields such as engineering, medicine, business, and the social sciences. The book supplies a foundation for the

  18. Simulation Experiments in Practice: Statistical Design and Regression Analysis

    OpenAIRE

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independen...
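
    The contrast Kleijnen draws can be made concrete: a small factorial design plus a regression metamodel recovers main effects and the interaction that one-factor-at-a-time experimentation misses. A sketch with a hypothetical simulation response:

      import numpy as np

      rng = np.random.default_rng(3)

      # A replicated 2**2 factorial design, instead of changing one factor at
      # a time: all four corners of the factor space are simulated.
      X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
      X = np.repeat(X, 5, axis=0)  # 5 replicates per design point

      # Hypothetical simulation response with an interaction effect.
      y = (10 + 2 * X[:, 0] - 3 * X[:, 1]
           + 1.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.5, len(X)))

      # First-order-plus-interaction regression metamodel.
      M = np.column_stack([np.ones(len(X)), X, X[:, 0] * X[:, 1]])
      beta, *_ = np.linalg.lstsq(M, y, rcond=None)
      print("intercept, main effects, interaction:", np.round(beta, 2))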

  19. Learning Axes and Bridging Tools in a Technology-Based Design for Statistics

    Science.gov (United States)

    Abrahamson, Dor; Wilensky, Uri

    2007-01-01

    We introduce a design-based research framework, "learning axes and bridging tools," and demonstrate its application in the preparation and study of an implementation of a middle-school experimental computer-based unit on probability and statistics, "ProbLab" (Probability Laboratory, Abrahamson and Wilensky 2002 [Abrahamson, D., & Wilensky, U.…

  20. Achieving optimal SERS through enhanced experimental design.

    Science.gov (United States)

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J; Goodacre, Royston

    2016-01-01

    One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is no single set of SERS conditions that is universal. This means that experimental optimisation for the optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered before going on to optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.

  1. Conceptual design of Fusion Experimental Reactor (FER)

    International Nuclear Information System (INIS)

    Tone, T.; Fujisawa, N.

    1983-01-01

    Conceptual design studies of the Fusion Experimental Reactor (FER) have been performed. The FER has the objective of achieving self-ignition and demonstrating engineering feasibility as a next-generation tokamak to JT-60. Various concepts of the FER have been considered. The reference design is based on a double-null divertor. Optional design studies with some attractive features based on advanced concepts, such as a pumped limiter and RF current drive, have been carried out. Key design parameters are: fusion power of 440 MW, average neutron wall loading of 1 MW/m2, major radius of 5.5 m, plasma minor radius of 1.1 m, plasma elongation of 1.5, plasma current of 5.3 MA, toroidal beta of 4%, toroidal field on plasma axis of 5.7 T, and tritium breeding ratio of above unity.

  2. Conceptual design of fusion experimental reactor (FER)

    International Nuclear Information System (INIS)

    1985-01-01

    The Fusion Experimental Reactor (FER) being developed at JAERI as a next generation tokamak to JT-60 has a major mission of realizing a self-ignited long-burning DT plasma and demonstrating engineering feasibility. During FY82 and FY83 a comprehensive and intensive conceptual design study has been conducted for a pulsed operation FER as a reference option which employs a conventional inductive current drive and a double-null divertor. In parallel with the reference design, studies have been carried out to evaluate advanced reactor concepts such as quasi-steady state operation and steady state operation based on RF current drive and pumped limiter, and comparative studies for single-null divertor/pumped limiter. This report presents major results obtained primarily from FY83 design studies, while the results of FY82 design studies are described in previous references (JAERI-M 83-213--216). (author)

  3. Design and experimentation of BSFQ logic devices

    International Nuclear Information System (INIS)

    Hosoki, T.; Kodaka, H.; Kitagawa, M.; Okabe, Y.

    1999-01-01

    Rapid single flux quantum (RSFQ) logic needs synchronous pulses for each gate, so the clock-wiring problem is more serious when designing larger scale circuits with this logic. So we have proposed a new SFQ logic which follows Boolean algebra perfectly by using set and reset pulses. With this logic, the level information of current input is transmitted with these pulses generated by level-to-pulse converters, and each gate calculates logic using its phase level made by these pulses. Therefore, our logic needs no clock in each gate. We called this logic 'Boolean SFQ (BSFQ) logic'. In this paper, we report design and experimentation for an AND gate with inverting input based on BSFQ logic. The experimental results for OR and XOR gates are also reported. (author)

  4. Bioinspiration: applying mechanical design to experimental biology.

    Science.gov (United States)

    Flammang, Brooke E; Porter, Marianne E

    2011-07-01

    The production of bioinspired and biomimetic constructs has fostered much collaboration between biologists and engineers, although the extent of biological accuracy employed in the designs produced has not always been a priority. Even the exact definitions of "bioinspired" and "biomimetic" differ among biologists, engineers, and industrial designers, leading to confusion regarding the level of integration and replication of biological principles and physiology. By any name, biologically-inspired mechanical constructs have become an increasingly important research tool in experimental biology, offering the opportunity to focus research by creating model organisms that can be easily manipulated to fill a desired parameter space of structural and functional repertoires. Innovative researchers with both biological and engineering backgrounds have found ways to use bioinspired models to explore the biomechanics of organisms from all kingdoms to answer a variety of different questions. Bringing together these biologists and engineers will hopefully result in an open discourse of techniques and fruitful collaborations for experimental and industrial endeavors.

  5. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. Statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic

  6. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is

  7. Conceptual design of fusion experimental reactor (FER)

    International Nuclear Information System (INIS)

    1984-02-01

    This report describes the engineering conceptual design of Fusion Experimental Reactor (FER) which is to be built as a next generation tokamak machine. This design covers overall reactor systems including MHD equilibrium analysis, mechanical configuration of reactor, divertor, pumped limiter, first wall/breeding blanket/shield, toroidal field magnet, poloidal field magnet, cryostat, electromagnetic analysis, vacuum system, power handling and conversion, NBI, RF heating device, tritium system, neutronics, maintenance, cooling system and layout of facilities. The engineering comparison of a divertor with pumped limiters and safety analysis of reactor systems are also conducted. (author)

  8. Design and Statistics in Quantitative Translation (Process) Research

    DEFF Research Database (Denmark)

    Balling, Laura Winther; Hvelplund, Kristian Tangsgaard

    2015-01-01

    Traditionally, translation research has been qualitative, but quantitative research is becoming increasingly important, especially in translation process research but also in other areas of translation studies. This poses problems to many translation scholars since this way of thinking...... is unfamiliar. In this article, we attempt to mitigate these problems by outlining our approach to good quantitative research, all the way from research questions and study design to data preparation and statistics. We concentrate especially on the nature of the variables involved, both in terms of their scale...... and their role in the design; this has implications for both design and choice of statistics. Although we focus on quantitative research, we also argue that such research should be supplemented with qualitative analyses and considerations of the translation product....

  9. Conceptual design of helium experimental loop

    International Nuclear Information System (INIS)

    Yu Xingfu; Feng Kaiming

    2007-01-01

    In a future demonstration fusion power station (DEMO), helium is envisaged as the coolant for plasma-facing components, such as the blanket and divertor. All these components have a very complex geometry, with many parallel cooling channels, involving a complex helium flow distribution. Test blanket modules (TBM) of this concept will undergo various tests in the experimental reactor ITER. For the qualification of TBMs, it is indispensable to test mock-ups in a helium loop under realistic pressure and temperature profiles, in order to validate design codes, especially regarding mass flow and heat transfer processes in narrow cooling channels. Similar testing must be performed for the DEMO blanket, currently under development. A Helium Experimental Loop (HELOOP) is planned to be built for TBM tests. The design parameters are a temperature of 550 degrees C, a pressure of 10 MPa, and a flow rate of 1 kg/s. In particular, HELOOP is able to: perform full-scale tests of TBMs under realistic conditions; test other components of the He-cooling system in ITER; qualify the purification circuit; and obtain information for the design of the ITER cooling system. The main requirements and characteristics of the HELOOP facility and a preliminary conceptual design are described in the paper. (authors)

  10. Conceptual design of fusion experimental reactor (FER)

    International Nuclear Information System (INIS)

    1984-01-01

    The conceptual design of the Fusion Experimental Reactor (FER), whose objective is to realize self-ignition with the D-T reaction, is reported. The mechanical configuration of the FER is characterized by a noncircular plasma and a double-null divertor. The primary aim of the design studies is to demonstrate the feasibility of reactor structures that are as compact and simple as possible, with removable torus sectors. The structures of each component, such as the first wall, blanket, shielding, divertor, and magnets, have been designed. Essential reactor plant system requirements are also discussed. In addition to the above, a brief concept of a steady-state reactor based on RF current drive is also discussed; the main aim here is to examine the physics of a possible RF steady-state reactor. (author)

  11. Conceptual design of fusion experimental reactor (FER)

    International Nuclear Information System (INIS)

    1984-03-01

    A conceptual design study (option C) has been carried out for the fusion experimental reactor (FER). In addition to the design of the tokamak reactor and associated systems based on the reference design specifications, the feasibility of a water-shield reactor concept was examined as a topical study. The design study for the reference tokamak reactor has produced a reactor concept for the FER, along with major R&D items for the concept, based on close examination of thermal design, electromagnetics, neutronics and remote maintenance. Particular efforts have been directed to the area of electromagnetics. Detailed analyses with close simulation models have been performed on PF coil arrangements and configurations, shell effects of the blanket on plasma position instability, feedback control, and eddy currents during disruptions. The major design specifications are as follows: peak fusion power 437 MW; major radius 5.5 m; minor radius 1.1 m; plasma elongation 1.5; plasma current 5.3 MA; toroidal beta 4%; field on axis 5.7 T. (author)

  12. Fuel rod design by statistical methods for MOX fuel

    International Nuclear Information System (INIS)

    Heins, L.; Landskron, H.

    2000-01-01

    Statistical methods in fuel rod design have received more and more attention in recent years. One of several possible ways to use statistical methods in fuel rod design can be described as follows: Monte Carlo calculations are performed using the fuel rod code CARO. For each run of CARO, the set of input data is modified: parameters describing the design of the fuel rod (geometrical data, density, etc.) and modeling parameters are randomly selected according to their individual distributions. Power histories are varied systematically in such a way that each power history of the relevant core management calculation is represented in the Monte Carlo calculations with equal frequency. The frequency distributions of results such as rod internal pressure and cladding strain generated by the Monte Carlo calculation are evaluated and compared with the design criteria. Up to now, this methodology has been applied to licensing calculations for PWRs and BWRs, UO2 and MOX fuel, in three countries. Especially for the insertion of MOX fuel, which results in power histories with relatively high linear heat generation rates at higher burnup, the statistical methodology is an appropriate approach to demonstrate compliance with licensing requirements. (author)
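
    The Monte Carlo scheme described here can be skeletonized as follows; the toy response below merely stands in for a CARO run, and all distributions and the pressure limit are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(4)
      n_runs = 10_000

      # Randomly drawn design and model parameters, one set per Monte Carlo run.
      gap = rng.normal(1.00, 0.03, n_runs)          # relative pellet-clad gap
      density = rng.normal(1.00, 0.01, n_runs)      # relative fuel density
      fgr_model = rng.lognormal(0.0, 0.10, n_runs)  # fission gas release factor

      # Toy stand-in for the code's response: rod internal pressure in MPa.
      pressure = 8.0 * fgr_model * density / gap

      # Compare the upper tail of the result distribution with the criterion.
      limit_mpa = 11.0
      print(f"P99.9 pressure: {np.quantile(pressure, 0.999):.2f} MPa")
      print(f"fraction of runs above limit: {(pressure > limit_mpa).mean():.2e}")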

  13. Using a Discussion about Scientific Controversy to Teach Central Concepts in Experimental Design

    Science.gov (United States)

    Bennett, Kimberley Ann

    2015-01-01

    Students may need explicit training in informal statistical reasoning in order to design experiments or use formal statistical tests effectively. By using scientific scandals and media misinterpretation, we can explore the need for good experimental design in an informal way. This article describes the use of a paper that reviews the measles mumps…

  14. Fast Bayesian optimal experimental design and its applications

    KAUST Repository

    Long, Quan

    2015-01-07

    We summarize our Laplace method and multilevel method for accelerating the computation of the expected information gain in Bayesian optimal experimental design (OED). The Laplace method is a widely used method for approximating integrals in statistics. We analyze this method in the context of optimal Bayesian experimental design and extend it from the classical scenario, where a single dominant mode of the parameters can be completely determined by the experiment, to scenarios where a non-informative parametric manifold exists. We show that by carrying out this approximation the estimation of the expected Kullback-Leibler divergence can be significantly accelerated. While the Laplace method requires a concentration of measure, the multilevel Monte Carlo method can be used to tackle the problem when there is a lack of measure concentration. We show some initial results on this approach. The developed methodologies have been applied to various sensor deployment problems, e.g., impedance tomography and seismic source inversion.

  15. Statistical evaluation of design-error related nuclear reactor accidents

    International Nuclear Information System (INIS)

    Ott, K.O.; Marchaterre, J.F.

    1981-01-01

    In this paper, a general methodology for the statistical evaluation of design-error related accidents is proposed that can be applied to a variety of systems that evolve during the development of large-scale technologies. The evaluation aims at an estimate of the combined "residual" frequency of yet unknown types of accidents "lurking" in a certain technological system. A special categorization into incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of U.S. nuclear power reactor technology, considering serious accidents (category 2 events) that involved a particular design inadequacy in the accident progression.

  16. Statistical methods in the mechanical design of fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Radsak, C.; Streit, D.; Muench, C.J. [AREVA NP GmbH, Erlangen (Germany)

    2013-07-01

    The mechanical design of a fuel assembly is still mainly performed in a deterministic way. This conservative approach is, however, not suitable for providing a realistic quantification of the design margins with respect to licensing criteria for more and more demanding operating conditions (power upgrades, burnup increase, etc.). This quantification can be provided by statistical methods utilizing all available information (e.g., from manufacturing, experience feedback, etc.) on the topic under consideration. During optimization, e.g., of the holddown system, certain objectives in the mechanical design of a fuel assembly (FA) can contradict each other, such as keeping holddown forces high enough to prevent fuel assembly lift-off while reducing them to minimize axial loads on the fuel assembly structure and so avoid any negative effect on control rod movement. By using a statistical method the fuel assembly design can be optimized much better with respect to these objectives than would be possible with a deterministic approach. This leads to a more realistic assessment and a safer way of operating fuel assemblies. Statistical models are defined on the one hand by the quantile that has to be maintained concerning the design limit requirements (e.g., a one-FA quantile) and on the other hand by the confidence level which has to be met. Using the above example of the holddown force, a feasible quantile can be defined based on the requirement that fewer than one fuel assembly (quantile > 192/193 [%] = 99.5%) in the core violates the holddown force limit with a confidence of 95%. (orig.)

  17. Set membership experimental design for biological systems

    Directory of Open Access Journals (Sweden)

    Marvel Skylar W

    2012-03-01

    Background: Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results: In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions: The practicability of our approach is illustrated with a case study. This

  18. Applying Statistical Design to Control the Risk of Over-Design with Stochastic Simulation

    Directory of Open Access Journals (Sweden)

    Yi Wu

    2010-02-01

    By comparing a hard real-time system and a soft real-time system, this article elicits the risk of over-design in soft real-time system design. To deal with this risk, a novel concept of statistical design is proposed. Statistical design is the process of accurately accounting for and mitigating the effects of variation in part geometry and other environmental conditions, while at the same time optimizing a target performance factor. However, statistical design can be a very difficult and complex task when using classical mathematical methods. Thus, a simulation methodology to optimize the design is proposed in order to bridge the gap between real-time analysis and optimization for robust and reliable system design.

  19. The Impact of Statistical Leakage Models on Design Yield Estimation

    Directory of Open Access Journals (Sweden)

    Rouwaida Kanj

    2011-01-01

    Device mismatch and process variation models play a key role in determining the functionality and yield of sub-100 nm designs. Average characteristics are often of interest, such as the average leakage current or the average read delay. However, detecting rare functional fails is critical for memory design, and designers often seek techniques that enable accurate modeling of such events. Extremely leaky devices can inflict functionality fails, and the plurality of leaky devices on a bitline increases the dimensionality of the yield estimation problem. Simplified models are possible by adopting approximations to the underlying sum of lognormals. The implications of such approximations for tail probabilities may in turn bias the yield estimate. We review different closed-form approximations and compare them against the CDF matching method, which is shown to be the most effective method for accurate statistical leakage modeling.
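
    One classical closed-form approximation of the kind reviewed here is Wilkinson's moment matching, which replaces the sum of independent lognormals by a single lognormal with the same first two moments. A sketch comparing its tail estimate with direct Monte Carlo (all parameters are illustrative):

      import numpy as np
      from scipy.stats import lognorm

      rng = np.random.default_rng(5)

      # Each device's leakage modelled as lognormal: I = exp(Y), Y ~ N(mu, sigma^2).
      mu, sigma, n_dev = np.log(1.0), 0.8, 64  # illustrative parameters

      # Wilkinson: match the first two moments of the sum with one lognormal.
      m1 = n_dev * np.exp(mu + sigma**2 / 2)
      var1 = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
      m2 = n_dev * var1 + m1**2
      s2 = np.log(m2 / m1**2)        # log-variance of the matched lognormal
      mu_s = np.log(m1) - s2 / 2

      # Tail probability: approximation vs. direct Monte Carlo.
      total = np.exp(mu + sigma * rng.standard_normal((200_000, n_dev))).sum(axis=1)
      thr = 120.0
      p_mc = (total > thr).mean()
      p_wk = lognorm.sf(thr, s=np.sqrt(s2), scale=np.exp(mu_s))
      print(f"P(sum > {thr}): MC = {p_mc:.4f}, Wilkinson = {p_wk:.4f}")

    The gap between the two numbers in the tail is exactly the kind of bias in yield estimates that the paper warns about.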

  20. Flow cytometry: design, development and experimental validation

    International Nuclear Information System (INIS)

    Seigneur, Alain

    1987-01-01

    Flow cytometry techniques allow the analysis and sorting of living biological cells at rates above five to ten thousand events per second. After a short review, we present in this report the design and development of a 'high-tech' apparatus intended for research laboratories, together with the experimental results. The first part deals with the physical principles allowing morphologic and functional analysis of cells or cellular components. The measured parameters are as follows: electrical resistance pulse sizing, light scattering and fluorescence. Hydrodynamic centering is used, as is the division of a water stream into droplets, leading to electrostatic sorting of particles. The second part deals with the apparatus designed by the 'Commissariat a l'Energie Atomique' (C.E.A.) and industrialised by 'ODAM' (ATC 3000). The last part of this thesis work is the performance evaluation of this cytometer. The differences between the two size measurement methods are analyzed: electrical resistance pulse sizing versus small-angle light scattering. By an original optics design, high sensitivity has been reached in the fluorescence measurement: the equivalent noise corresponds to six hundred fluorescein isothiocyanate (FITC) molecules. The sorting performance has also been analyzed and cell viability proven. (author) [fr]

  1. Optimizing an experimental design for an electromagnetic experiment

    Science.gov (United States)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental ...). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods like genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations have the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.

  2. Variability aware compact model characterization for statistical circuit design optimization

    Science.gov (United States)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
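
    Linear propagation of variance, the core of the method, approximates Var(y) by the squared parameter sensitivities times the parameter variances. A sketch with a toy square-law drain-current model standing in for a real EKV evaluation (all numbers are illustrative):

      import numpy as np

      # First-order propagation: Var(y) ~ sum_i (df/dp_i)^2 * Var(p_i).
      def drain_current(p):
          """Toy compact-model output, not a real EKV evaluation."""
          vth, beta = p
          vgs = 0.9
          return beta * (vgs - vth) ** 2  # saturation square law, illustrative

      p0 = np.array([0.35, 2.0e-4])        # nominal Vth (V) and beta (A/V^2)
      var_p = np.array([1.0e-4, 4.0e-11])  # parameter variances, illustrative

      # Numerical sensitivities by central differences.
      eps = 1e-6 * np.maximum(np.abs(p0), 1.0)
      grad = np.array([
          (drain_current(p0 + e) - drain_current(p0 - e)) / (2 * e[i])
          for i, e in enumerate(np.diag(eps))
      ])

      var_y = np.sum(grad**2 * var_p)
      print(f"Id = {drain_current(p0):.3e} A, sigma(Id) = {np.sqrt(var_y):.2e} A")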

  3. A statistical design for testing apomictic diversification through linkage analysis.

    Science.gov (United States)

    Zeng, Yanru; Hou, Wei; Song, Shuang; Feng, Sisi; Shen, Lin; Xia, Guohua; Wu, Rongling

    2014-03-01

    The capacity of apomixis to generate maternal clones through seed reproduction has made it a useful characteristic for the fixation of heterosis in plant breeding. It has been observed that apomixis displays pronounced intra- and interspecific diversification, but the genetic mechanisms underlying this diversification remains elusive, obstructing the exploitation of this phenomenon in practical breeding programs. By capitalizing on molecular information in mapping populations, we describe and assess a statistical design that deploys linkage analysis to estimate and test the pattern and extent of apomictic differences at various levels from genotypes to species. The design is based on two reciprocal crosses between two individuals each chosen from a hermaphrodite or monoecious species. A multinomial distribution likelihood is constructed by combining marker information from two crosses. The EM algorithm is implemented to estimate the rate of apomixis and test its difference between two plant populations or species as the parents. The design is validated by computer simulation. A real data analysis of two reciprocal crosses between hickory (Carya cathayensis) and pecan (C. illinoensis) demonstrates the utilization and usefulness of the design in practice. The design provides a tool to address fundamental and applied questions related to the evolution and breeding of apomixis.

  4. Statistical evaluation of design-error related accidents

    International Nuclear Information System (INIS)

    Ott, K.O.; Marchaterre, J.F.

    1980-01-01

    In a recently published paper (Campbell and Ott, 1979), a general methodology was proposed for the statistical evaluation of design-error related accidents. The evaluation aims at an estimate of the combined residual frequency of yet unknown types of accidents lurking in a certain technological system. Here, the original methodology is extended so as to apply to a variety of systems that evolve during the development of large-scale technologies. A special categorization of incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of nuclear power reactor technology, considering serious accidents that involve a particular design inadequacy in the accident progression.

  5. The role of experimental typography in designing logotypes

    OpenAIRE

    Pogačnik, Tadeja

    2014-01-01

    Designing logotypes is an important part of graphic design. Great logotypes are designed using custom-made typefaces. Therefore, it is very important, especially for the typographic designer, to have practical experience and be up to date with all trends in the field of experimental typeface design, also called experimental typography. In my thesis, I carefully examine the problems of experimental typography, which allows more creative and free typography design, for different ...

  6. Hierarchical adaptive experimental design for Gaussian process emulators

    International Nuclear Information System (INIS)

    Busby, Daniel

    2009-01-01

    Large computer simulators usually have complex and nonlinear input-output functions. This complicated input-output relation can be analyzed by global sensitivity analysis; however, this usually requires massive Monte Carlo simulations. To effectively reduce the number of simulations, statistical techniques such as Gaussian process emulators can be adopted. The accuracy and reliability of these emulators strongly depend on the experimental design, whereby suitable evaluation points are selected. In this paper a new sequential design strategy called hierarchical adaptive design is proposed to obtain an accurate emulator using the smallest possible number of simulations. The hierarchical design proposed in this paper is tested on various standard analytic functions and on a challenging reservoir forecasting application. Comparisons with standard one-stage designs such as maximin Latin hypercube designs show that the hierarchical adaptive design produces a more accurate emulator with the same number of computer experiments. Moreover, a stopping criterion is proposed that makes it possible to perform only the number of simulations necessary to obtain the required approximation accuracy.
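
    A minimal sequential design loop for a Gaussian process emulator, using maximum predictive uncertainty as the selection criterion, is sketched below with scikit-learn. This is a one-stage simplification for illustration, not the paper's hierarchical scheme, and the simulator is a cheap stand-in:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(6)

      def simulator(x):
          """Stand-in for an expensive computer code (one input for clarity)."""
          return np.sin(3 * x) + 0.5 * x

      # Initial design, then sequentially add the point where the emulator is
      # most uncertain -- the simplest adaptive (active-learning) criterion.
      X = rng.uniform(0, 2, size=(4, 1))
      y = simulator(X).ravel()
      candidates = np.linspace(0, 2, 201).reshape(-1, 1)

      for step in range(10):
          gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                        normalize_y=True).fit(X, y)
          _, std = gp.predict(candidates, return_std=True)
          x_new = candidates[np.argmax(std)]
          X = np.vstack([X, x_new])
          y = np.append(y, simulator(x_new)[0])

      print(f"emulator built with {len(X)} runs; max predictive sd = {std.max():.3f}")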

  7. Procedure for statistical analysis of one-parameter discrepant experimental data

    International Nuclear Information System (INIS)

    Badikov, Sergey A.; Chechev, Valery P.

    2012-01-01

    A new, Mandel–Paule-type procedure for the statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate the contribution of unrecognized experimental errors to the total experimental uncertainty, as well as to include it in the analysis. A definition of discrepant experimental data for an arbitrary number of measurements is introduced as an accompanying result. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data; mean half-lives for 20 actinides were calculated and the results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work substantially exceed those of the ENSDF and DDEP evaluations for discrepant experimental data. This effect can be explained by adequately taking into account unrecognized experimental errors. - Highlights: ► A new statistical procedure for processing one-parameter discrepant experimental data has been presented. ► The procedure estimates the contribution of unrecognized errors to the total experimental uncertainty. ► The procedure was applied to processing discrepant half-life experimental data. ► The results of the calculations are compared to the ENSDF and DDEP evaluations.
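
    The core of a Mandel–Paule-type procedure is to find the extra (unrecognized) variance that makes the weighted sum of squared residuals equal its expected value, n - 1. A compact sketch with invented discrepant data, not the paper's actinide dataset:

      import numpy as np

      def mandel_paule(x, u, tol=1e-10, max_iter=200):
          """Mandel-Paule estimate of the unrecognized (between-data) variance.

          x: reported values; u: their stated standard uncertainties.
          Solves sum w_i (x_i - xbar)^2 = n - 1 with w_i = 1/(u_i^2 + y)
          by bisection on the extra variance y >= 0.
          """
          x, u = np.asarray(x, float), np.asarray(u, float)
          n = len(x)

          def chi2(y):
              w = 1.0 / (u**2 + y)
              xbar = np.sum(w * x) / np.sum(w)
              return np.sum(w * (x - xbar) ** 2)

          if chi2(0.0) <= n - 1:   # data consistent: no extra variance needed
              y = 0.0
          else:                    # chi2 decreases in y, so bisect
              lo, hi = 0.0, (x.max() - x.min()) ** 2 + u.max() ** 2
              for _ in range(max_iter):
                  y = 0.5 * (lo + hi)
                  lo, hi = (y, hi) if chi2(y) > n - 1 else (lo, y)
                  if hi - lo < tol:
                      break
          w = 1.0 / (u**2 + y)
          mean = np.sum(w * x) / np.sum(w)
          return mean, np.sqrt(1.0 / np.sum(w)), y

      # Discrepant half-life style data: scatter exceeds stated uncertainties.
      vals = [162.8, 163.4, 161.9, 164.1]
      uncs = [0.2, 0.3, 0.2, 0.4]
      m, s, y = mandel_paule(vals, uncs)
      print(f"mean = {m:.2f} +/- {s:.2f}, unrecognized variance = {y:.3f}")

    When the data are consistent, y converges to zero and the result is the usual weighted average with its internal uncertainty, matching the limiting behaviour described in the abstract.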

  8. Design research in statistics education : on symbolizing and computer tools

    NARCIS (Netherlands)

    Bakker, A.

    2004-01-01

    The present knowledge society requires statistical literacy-the ability to interpret, critically evaluate, and communicate about statistical information and messages (Gal, 2002). However, research shows that students generally do not gain satisfactory statistical understanding. The research

  9. A Unified Statistical Rain-Attenuation Model for Communication Link Fade Predictions and Optimal Stochastic Fade Control Design Using a Location-Dependent Rain-Statistic Database

    Science.gov (United States)

    Manning, Robert M.

    1990-01-01

    A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics database is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.

  10. Manifold Regularized Experimental Design for Active Learning.

    Science.gov (United States)

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to reduce the labeling effort of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches select the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation of the samples selected to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.

  11. Statistical aspects of quantitative real-time PCR experiment design.

    Science.gov (United States)

    Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales

    2010-04-01

    Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, that gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
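
    The variance-decomposition idea can be illustrated in a few lines (this is a simplified sketch, not powerNest; the pilot Cq values, the two-level design, and the normal-approximation power formula are all assumptions):

    import numpy as np
    from scipy import stats

    pilot = np.array([[24.1, 24.3, 24.2],    # rows: biological replicates
                      [24.9, 25.0, 24.8],    # columns: technical replicates (Cq)
                      [24.4, 24.6, 24.5]])
    b, t = pilot.shape
    ms_within = pilot.var(axis=1, ddof=1).mean()        # technical variance
    ms_between = t * pilot.mean(axis=1).var(ddof=1)     # between-biological MS
    var_tech = ms_within
    var_bio = max((ms_between - ms_within) / t, 0.0)    # method-of-moments estimate

    def power(n_bio, n_tech, delta, alpha=0.05):
        # Variance of a group mean, then power of a two-sided two-group z-test.
        var_mean = var_bio / n_bio + var_tech / (n_bio * n_tech)
        se = np.sqrt(2.0 * var_mean)                    # two independent groups
        z = stats.norm.ppf(1.0 - alpha / 2.0)
        return stats.norm.sf(z - delta / se) + stats.norm.cdf(-z - delta / se)

    for n_bio in (3, 6, 12):
        print(n_bio, "biological replicates ->", round(power(n_bio, 2, delta=0.8), 3))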

  12. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data points is proposed using an area metric, after obtaining statistical models using the information on the underlying distribution, the sequential statistical modeling (SSM) approach, and the kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations using different statistical modeling methods, different true models, and different convergence criteria. An example data set of 29 observations describing the fatigue strength coefficient of SAE 950X is used to demonstrate the performance of the obtained statistical models, which use a pre-determined number of experimental data, in predicting the probability of failure for a target fatigue life.
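
    As an illustration of the convergence criterion, the area metric between the empirical CDF and a fitted model can be tracked as data accumulate; when adding data no longer changes the metric appreciably, enough experiments have been collected. A rough sketch, assuming a normal model and synthetic data:

    import numpy as np
    from scipy import stats

    def area_metric(data):
        # Area between the empirical CDF and a normal CDF fitted to the data,
        # approximated with the trapezoidal rule over the observed range.
        x = np.sort(np.asarray(data, float))
        n = len(x)
        gap = np.abs(np.arange(1, n + 1) / n - stats.norm.cdf(x, x.mean(), x.std(ddof=1)))
        return float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(x)))

    rng = np.random.default_rng(0)
    sample = rng.normal(1000.0, 50.0, 200)    # stand-in for fatigue-strength data
    prev = None
    for n in (5, 10, 15, 20, 25, 30):
        a = area_metric(sample[:n])
        change = "" if prev is None else f"  change = {abs(a - prev):.2f}"
        print(f"n = {n:2d}  area = {a:6.2f}{change}")  # stop once change is small
        prev = a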

  13. Autonomous entropy-based intelligent experimental design

    Science.gov (United States)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present-day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute-force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units working toward a common goal.
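
    The inference/inquiry coupling can be illustrated with a toy one-dimensional problem (this is not the thesis code; the threshold-location task and the noise level are invented): the inquiry engine always measures where the predictive distribution of the outcome has maximum entropy, and the inference engine updates the posterior by Bayes' rule.

    import numpy as np

    rng = np.random.default_rng(1)
    grid = np.linspace(0.0, 1.0, 501)          # hypotheses for the threshold t
    posterior = np.ones_like(grid) / len(grid)
    true_t, eps = 0.62, 0.05                   # eps = probability of a flipped reading

    for step in range(8):
        # Inquiry engine: predictive P(y = 1) and its entropy at each candidate x.
        candidates = np.linspace(0.0, 1.0, 101)
        p1 = np.array([np.sum(posterior * np.where(x > grid, 1 - eps, eps))
                       for x in candidates])
        entropy = -p1 * np.log2(p1 + 1e-12) - (1 - p1) * np.log2(1 - p1 + 1e-12)
        x = candidates[np.argmax(entropy)]     # most informative experiment
        # Simulated noisy measurement at x, then Bayes update (inference engine).
        y = int((x > true_t) != (rng.random() < eps))
        like = np.where(x > grid, 1 - eps, eps) if y == 1 else np.where(x > grid, eps, 1 - eps)
        posterior *= like
        posterior /= posterior.sum()
        print(f"step {step}: measured at x = {x:.2f} -> y = {y}, "
              f"MAP t = {grid[np.argmax(posterior)]:.3f}")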

  14. Developing Statistical Knowledge for Teaching during Design-Based Research

    Science.gov (United States)

    Groth, Randall E.

    2017-01-01

    Statistical knowledge for teaching is not precisely equivalent to statistics subject matter knowledge. Teachers must know how to make statistics understandable to others as well as understand the subject matter themselves. This dual demand on teachers calls for the development of viable teacher education models. This paper offers one such model,…

  15. Design of experimental equipment at CRNL

    International Nuclear Information System (INIS)

    Godden, B.

    1976-01-01

    The Plant Design Division provides a design service to the research and development effort at CRNL. Severe constraints, both environmentally and spatially, are placed on the design of special equipment. Several examples are given. Finally, the use of automated drafting systems is described. (author)

  16. Surface laser marking optimization using an experimental design approach

    Science.gov (United States)

    Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.

    2017-04-01

    Laser surface marking is performed on a titanium substrate using a pulsed, frequency-doubled Nd:YAG laser (λ = 532 nm, τ_pulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (output), using Design of Experiment (DOE) methods: the Taguchi methodology and a response surface methodology (RSM). A design is first created using the MINITAB program, and the laser marking process is then performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed and the RSM model is developed and verified for predicting the process output for a given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.

  17. Fast Bayesian Optimal Experimental Design for Seismic Source Inversion

    KAUST Repository

    Long, Quan; Motamed, Mohammad; Tempone, Raul

    2016-01-01

    We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.
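
    For a linearized forward model with Gaussian noise and prior, the Laplace-approximated expected information gain reduces to a log-determinant, which makes ranking candidate receiver subsets cheap. A toy stand-in (the sensitivity matrix below is random, not the elastic wave equations):

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(2)
    n_candidates, n_params, sigma = 8, 3, 0.1
    G = rng.normal(size=(n_candidates, n_params))   # row i: sensitivities at receiver i
    prior_cov = np.eye(n_params)

    def eig(receivers):
        # 0.5 * logdet(I + sigma^-2 * Gr^T Gr * prior_cov): the Gaussian
        # (Laplace-limit) expected information gain for receiver subset r.
        Gr = G[list(receivers)]
        M = np.eye(n_params) + (Gr.T @ Gr) @ prior_cov / sigma**2
        return 0.5 * np.linalg.slogdet(M)[1]

    best = max(combinations(range(n_candidates), 3), key=eig)
    print("best 3-receiver subset:", best, " EIG =", round(eig(best), 3))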

  18. Fast Bayesian optimal experimental design for seismic source inversion

    KAUST Repository

    Long, Quan

    2015-07-01

    We develop a fast method for optimally designing experiments in the context of statistical seismic source inversion. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by elastodynamic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the "true" parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem. © 2015 Elsevier B.V.

  19. Fast Bayesian Optimal Experimental Design for Seismic Source Inversion

    KAUST Repository

    Long, Quan

    2016-01-06

    We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.

  20. Return to Our Roots: Raising Radishes To Teach Experimental Design.

    Science.gov (United States)

    Stallings, William M.

    To provide practice in making design decisions, collecting and analyzing data, and writing and documenting results, a professor of statistics has his graduate students in statistics and research methodology classes design and perform an experiment on the effects of fertilizers on the growth of radishes. This project has been required of students…

  1. Statistical molecular design of balanced compound libraries for QSAR modeling.

    Science.gov (United States)

    Linusson, A; Elofsson, M; Andersson, I E; Dahlgren, M K

    2010-01-01

    A fundamental step in preclinical drug development is the computation of quantitative structure-activity relationship (QSAR) models, i.e. models that link chemical features of compounds with activities towards a target macromolecule associated with the initiation or progression of a disease. QSAR models are computed by combining information on the physicochemical and structural features of a library of congeneric compounds, typically assembled from two or more building blocks, and biological data from one or more in vitro assays. Since the models provide information on features affecting the compounds' biological activity they can be used as guides for further optimization. However, in order for a QSAR model to be relevant to the targeted disease, and drug development in general, the compound library used must contain molecules with balanced variation of the features spanning the chemical space believed to be important for interaction with the biological target. In addition, the assays used must be robust and deliver high quality data that are directly related to the function of the biological target and the associated disease state. In this review, we discuss and exemplify the concept of statistical molecular design (SMD) in the selection of building blocks and final synthetic targets (i.e. compounds to synthesize) to generate information-rich, balanced libraries for biological testing and computation of QSAR models.

  2. Optimization of aspergillus niger nutritional conditions using statistical experimental methods for bio-recovery of manganese from pyrolusite

    International Nuclear Information System (INIS)

    Mujeeb-ur-Rahman; Yasinzai, M.M.; Tareen, R.B.; Iqbal, A.; Gul, S.; Odhano, E.A.

    2011-01-01

    The nutritional requirements of Aspergillus niger PCSIR-06 for the bio-recovery of manganese from pyrolusite ore were optimized. A Box-Behnken design and response surface methodology were used for the design of the experiment and the statistical analysis of the results. This procedure limited the number of actual experiments to 54 for studying the possible interactions between six nutrients. The optimum concentrations of the nutrients were sucrose 148.5 g/L, KH₂PO₄ 0.50 g/L, NH₄NO₃ 0.33 g/L, MgSO₄ 0.41 g/L, Zn 23.76 mg/L and Fe 0.18 mg/L for Aspergillus niger to achieve maximum bio-recovery of manganese (82.47 ± 5.67%). The verification run confirmed the predicted optimized concentrations of all six ingredients for maximum bioleaching of manganese and successfully confirmed the use of the Box-Behnken experimental design for maximum bio-recovery. Results also revealed that small and less time-consuming experimental designs can be efficient for the optimization of bio-recovery processes. (author)
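
    Generating the coded design matrix is straightforward. The sketch below uses the classical pairwise Box-Behnken construction, which is exact for three factors; standard six-factor designs such as the 54-run design used in the study are built from a different incomplete-block structure, so treat this as an illustration only:

    import itertools
    import numpy as np

    def box_behnken(k, center_points=3):
        # Coded units: each pair of factors takes a 2x2 factorial at +/-1
        # while all other factors are held at 0, plus replicated center runs.
        runs = []
        for i, j in itertools.combinations(range(k), 2):
            for a, b in itertools.product((-1, 1), repeat=2):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
        runs += [[0] * k] * center_points
        return np.array(runs)

    design = box_behnken(3)
    print(design.shape)    # (15, 3): 12 edge runs plus 3 center runs for k = 3
    print(design)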

  3. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the…

  4. Revised design for the Tokamak experimental power reactor

    International Nuclear Information System (INIS)

    Stacey, W.M. Jr.; Abdou, M.A.; Brooks, J.N.

    1977-03-01

    A new, preliminary design has been identified for the tokamak experimental power reactor (EPR). The revised EPR design is simpler, more compact, less expensive and has somewhat better performance characteristics than the previous design, yet retains many of the previously developed design concepts. This report summarizes the principal features of the new EPR design, including performance and cost.

  5. Conceptual design report, CEBAF basic experimental equipment

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1990-04-13

    The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.

  6. A statistical approach to nuclear fuel design and performance

    Science.gov (United States)

    Cunning, Travis Andrew

    As CANDU fuel failures can have significant economic and operational consequences for the Canadian nuclear power industry, it is essential that factors impacting fuel performance are adequately understood. Current industrial practice relies on deterministic safety analysis and the highly conservative "limit of operating envelope" approach, where all parameters are assumed to be at their limits simultaneously. This results in a conservative prediction of event consequences, with little consideration given to the high quality and precision of current manufacturing processes. This study employs a novel approach to the prediction of CANDU fuel reliability. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to form input for two industry-standard fuel performance codes: ELESTRES for the steady-state case and ELOCA for the transient case, a hypothesized 80% reactor outlet header break loss-of-coolant accident. Using a Monte Carlo technique for input generation, 10⁵ independent trials are conducted and probability distributions are fitted to key model output quantities. Comparing model output against recognized industrial acceptance criteria, no fuel failures are predicted for either case. Output distributions are well removed from failure limit values, implying that margin exists in current fuel manufacturing and design. To validate the results and attempt to reduce the simulation burden of the methodology, two dimension-reduction methods are assessed. Using just 36 trials, both methods are able to produce output distributions that agree strongly with those obtained via the brute-force Monte Carlo method, often to a relative discrepancy of less than 0.3% when predicting the first statistical moment, and a relative discrepancy of less than 5% when predicting the second statistical moment. In terms of global sensitivity, pellet density proves to have the greatest impact on fuel performance.

  7. Some statistical design and analysis aspects for NAEG studies

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Eberhardt, L.L.

    1975-01-01

    Some of the design and analysis aspects of the NAEG studies at safety-shot sites are reviewed in conjunction with discussions of possible new approaches. The use of double sampling to estimate inventories is suggested as a means of obtaining data for estimating the geographical distribution of plutonium using computer contouring programs. The lack of estimates of error for plutonium contours is noted and a regression approach discussed for obtaining such estimates. The kinds of new data that are now available for analysis from A site of Area 11 and the four Tonopah Test Range (TTR) sites are outlined, and the need for a closer look at methods for analyzing ratio-type data is pointed out. The necessity for thorough planning of environmental sampling programs is emphasized in order to obtain the maximum amount of information for fixed cost. Some general planning aspects of new studies at nuclear sites and experimental clean-up plots are discussed, as is the planning of interlaboratory comparisons. (U.S.)

  8. Design preferences and cognitive styles: experimentation by automated website synthesis.

    Science.gov (United States)

    Leung, Siu-Wai; Lee, John; Johnson, Chris; Robertson, David

    2012-06-29

    This article aims to demonstrate the computational synthesis of Web-based experiments for investigating the relationships among participants' design preference, rationale, and cognitive test performance. The exemplified experiments were computationally synthesised, including the websites as materials, experiment protocols as methods, and cognitive tests as protocol modules. This work also exemplifies the use of a website synthesiser as an essential instrument enabling the participants to explore different possible designs, which were generated on the fly, before selecting preferred designs. The participants were given interactive tree and table generators so that they could explore different ways of presenting causality information in tables and trees as the visualisation formats. The participants gave their preference ratings for the available designs, as well as their rationale (criteria) for their design decisions. The participants were also asked to take four cognitive tests, which focus on the aspects of visualisation and analogy-making. The relationships among preference ratings, rationale, and the results of the cognitive tests were analysed by conservative non-parametric statistics, including the Wilcoxon test, the Kruskal-Wallis test, and Kendall correlation. In the test, 41 of the 64 participants preferred graphical (tree-form) to tabular presentation. Despite the popular preference for graphical presentation, the given tabular presentation was generally rated to be easier than graphical presentation to interpret, especially by those who scored lower in the visualisation and analogy-making tests. This piece of evidence helps generate the hypothesis that design preferences are related to specific cognitive abilities. Without the use of computational synthesis, the experiment setup and scientific results would be impractical to obtain.
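
    The named battery of conservative non-parametric tests is available directly in scipy.stats; a brief sketch on made-up ratings and scores (the data shapes, not the study's data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    tree_rating = rng.integers(1, 8, 41)     # preference ratings, tree design
    table_rating = rng.integers(1, 8, 41)    # same participants, table design
    cognitive = rng.normal(50, 10, 41)       # visualisation test scores

    # Paired comparison of the two designs' ratings (Wilcoxon signed-rank).
    print(stats.wilcoxon(tree_rating, table_rating))

    # Ratings across three independent rationale groups (Kruskal-Wallis).
    print(stats.kruskal(tree_rating[:14], tree_rating[14:28], tree_rating[28:]))

    # Association between preference and cognitive score (Kendall correlation).
    print(stats.kendalltau(tree_rating, cognitive))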

  9. Statistical core design methodology using the VIPRE thermal-hydraulics code

    International Nuclear Information System (INIS)

    Lloyd, M.W.; Feltus, M.A.

    1995-01-01

    An improved statistical core design methodology for developing a computational departure from nucleate boiling ratio (DNBR) correlation has been developed and applied in order to analyze the nominal 1.3 DNBR limit on Westinghouse Pressurized Water Reactor (PWR) cores. This analysis, although limited in scope, found that the DNBR limit can be reduced from 1.3 to some lower value and remain accurate within an adequate confidence level of 95%, for three particular FSAR operational transients: turbine trip, complete loss of flow, and inadvertent opening of a pressurizer relief valve. The VIPRE-01 thermal-hydraulics code, the SAS/STAT statistical package, and the EPRI/Columbia University DNBR experimental data base were used in this research to develop the Pennsylvania State Statistical Core Design Methodology (PSSCDM). The VIPRE code was used to perform the necessary sensitivity studies and generate the EPRI correlation-calculated DNBR predictions. The SAS package used these EPRI correlation-calculated DNBR predictions from VIPRE as a data set to determine the best fit for the empirical model and to perform the statistical analysis. (author)

  10. Experimental testing of designated driver cues

    Science.gov (United States)

    2000-07-27

    In theory, the designated-driver concept holds great promise for reducing the incidence of drunk driving. It is simple, inexpensive, almost universally recognized, and generally positively regarded by the U.S. population as a means for avoiding drun...

  11. Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.

    Science.gov (United States)

    Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V

    2018-04-01

    A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
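
    Two of the nonoverlap statistics compared here are short enough to compute by hand. A minimal sketch of plain Tau (phase A vs phase B) and Tau-U with baseline-trend correction, following the commonly cited definitions (verify against the original sources before research use):

    import numpy as np

    def tau_ab(A, B):
        # Nonoverlap between phases: proportion of (a, b) pairs with b > a
        # minus the proportion with b < a.
        s = sum(np.sign(b - a) for a in A for b in B)
        return s / (len(A) * len(B))

    def tau_u(A, B):
        # Tau-U: the A-vs-B nonoverlap sum minus the baseline trend sum.
        s_ab = sum(np.sign(b - a) for a in A for b in B)
        s_trend = sum(np.sign(A[j] - A[i])
                      for i in range(len(A)) for j in range(i + 1, len(A)))
        return (s_ab - s_trend) / (len(A) * len(B))

    baseline = [2, 3, 3, 4, 4]     # an improving baseline inflates plain Tau
    treatment = [5, 6, 6, 7, 8]
    print("Tau  :", round(tau_ab(baseline, treatment), 3))   # 1.0
    print("Tau-U:", round(tau_u(baseline, treatment), 3))    # 0.68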

  12. Guidelines for the design and statistical analysis of experiments in papers submitted to ATLA.

    Science.gov (United States)

    Festing, M F

    2001-01-01

    In vitro experiments need to be well designed and correctly analysed if they are to achieve their full potential to replace the use of animals in research. An "experiment" is a procedure for collecting scientific data in order to answer a hypothesis, or to provide material for generating new hypotheses, and differs from a survey because the scientist has control over the treatments that can be applied. Most experiments can be classified into one of a few formal designs, the most common being completely randomised, and randomised block designs. These are quite common with in vitro experiments, which are often replicated in time. Some experiments involve a single independent (treatment) variable, while other "factorial" designs simultaneously vary two or more independent variables, such as drug treatment and cell line. Factorial designs often provide additional information at little extra cost. Experiments need to be carefully planned to avoid bias, be powerful yet simple, provide for a valid statistical analysis and, in some cases, have a wide range of applicability. Virtually all experiments need some sort of statistical analysis in order to take account of biological variation among the experimental subjects. Parametric methods using the t test or analysis of variance are usually more powerful than non-parametric methods, provided the underlying assumptions of normality of the residuals and equal variances are approximately valid. The statistical analyses of data from a completely randomised design, and from a randomised-block design are demonstrated in Appendices 1 and 2, and methods of determining sample size are discussed in Appendix 3. Appendix 4 gives a checklist for authors submitting papers to ATLA.

  13. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.

    Science.gov (United States)

    Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping

    2015-06-07

    Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in the oncology trials to reduce the number of patients placed on ineffective experimental therapies. Recently Koyama and Chen (2008) discussed how to conduct proper inference for such studies because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies when the actual second stage sample sizes differ from planned ones. We consider an alternative inference method based on likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the resulting estimate based on our method appears to have certain advantage in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverages. We also illustrated the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan. Reported P-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
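
    The design-respecting p-value has a simple form when the trial continues to stage 2: sum, over the stage-1 outcomes that permit continuation, the null probability of reaching at least the observed total. A sketch in the spirit of this approach (the design parameters are illustrative):

    from scipy.stats import binom

    def simon_p_value(x_total, p0, n1, r1, n2):
        # P(X1 > r1 and X1 + X2 >= x_total) under H0: p = p0, i.e. the null
        # probability of continuing past stage 1 and then seeing a total
        # response count at least as extreme as the one observed.
        p = 0.0
        for x1 in range(r1 + 1, n1 + 1):
            p += binom.pmf(x1, n1, p0) * binom.sf(x_total - x1 - 1, n2, p0)
        return p

    # Illustrative design: n1 = 13, r1 = 3, n2 = 30 (43 patients in total),
    # null response rate p0 = 0.3; the trial continued and 19 responses were seen.
    print(round(simon_p_value(19, p0=0.3, n1=13, r1=3, n2=30), 4))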

  14. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  15. Simulation-based optimal Bayesian experimental design for nonlinear systems

    KAUST Repository

    Huan, Xun

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics. © 2012 Elsevier Inc.
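
    The basic estimator underneath such frameworks is the nested (double-loop) Monte Carlo estimate of expected information gain. A toy version for an invented one-parameter nonlinear model, without the polynomial chaos acceleration described in the paper:

    import numpy as np

    rng = np.random.default_rng(4)
    sigma = 0.05                    # observation noise standard deviation
    N, M = 400, 400                 # outer / inner Monte Carlo sample sizes

    def eig(d):
        # Expected information gain of measuring y = exp(-theta * d) + noise
        # at "time" d, with theta ~ Uniform(0.5, 2.0).
        theta_out = rng.uniform(0.5, 2.0, N)
        y = np.exp(-theta_out * d) + rng.normal(0.0, sigma, N)
        log_lik = -0.5 * ((y - np.exp(-theta_out * d)) / sigma) ** 2
        theta_in = rng.uniform(0.5, 2.0, M)
        diff = y[:, None] - np.exp(-theta_in[None, :] * d)
        log_evid = np.log(np.mean(np.exp(-0.5 * (diff / sigma) ** 2), axis=1))
        return np.mean(log_lik - log_evid)   # normalizing constants cancel

    for d in (0.1, 0.5, 1.0, 2.0, 4.0):
        print(f"d = {d:3.1f}  EIG estimate = {eig(d):.3f}")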

  16. Irradiation Design for an Experimental Murine Model

    International Nuclear Information System (INIS)

    Ballesteros-Zebadua, P.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Celis, M. A.; Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Rubio-Osornio, M. C.; Custodio-Ramirez, V.; Paz, C.

    2010-01-01

    In radiotherapy and stereotactic radiosurgery, small-animal experimental models are frequently used, since many questions about the biological and biochemical effects of ionizing radiation remain unsolved. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV linac. The rodent model is focused on research into the inflammatory effects produced by ionizing radiation in the brain. In this work, comparisons between pencil beam and Monte Carlo techniques were used in order to evaluate the accuracy of the dose calculated using a commercial planning system. Challenges of this murine model are discussed.

  17. An Improved Rank Correlation Effect Size Statistic for Single-Case Designs: Baseline Corrected Tau.

    Science.gov (United States)

    Tarlow, Kevin R

    2017-07-01

    Measuring treatment effects when an individual's pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: its values are inflated and not bound between -1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
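
    The two ingredients, robust (Theil-Sen) baseline detrending and rank correlation, are both in scipy, so the statistic can be sketched compactly (this follows the published description, not the author's calculator; check results against it before use):

    import numpy as np
    from scipy.stats import theilslopes, kendalltau

    baseline = np.array([3.0, 4.0, 4.0, 5.0, 6.0])     # improving baseline trend
    treatment = np.array([7.0, 8.0, 8.0, 9.0, 9.0])
    y = np.concatenate([baseline, treatment])
    t = np.arange(len(y))
    phase = np.array([0] * len(baseline) + [1] * len(treatment))

    # Robust baseline trend, extrapolated across both phases and subtracted.
    slope, intercept, _, _ = theilslopes(baseline, t[:len(baseline)])
    corrected = y - (intercept + slope * t)

    tau_raw, p_raw = kendalltau(phase, y)
    tau_bc, p_bc = kendalltau(phase, corrected)
    print(f"uncorrected tau = {tau_raw:.2f} (p = {p_raw:.3f})")
    print(f"baseline-corrected tau = {tau_bc:.2f} (p = {p_bc:.3f})")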

  18. Enhanced surrogate models for statistical design exploiting space mapping technology

    DEFF Research Database (Denmark)

    Koziel, Slawek; Bandler, John W.; Mohamed, Achmed S.

    2005-01-01

    We present advances in microwave and RF device modeling exploiting Space Mapping (SM) technology. We propose new SM modeling formulations utilizing input mappings, output mappings, frequency scaling and quadratic approximations. Our aim is to enhance circuit models for statistical analysis...

  19. On experimentation across science, design and society

    DEFF Research Database (Denmark)

    Boris, Stefan Darlan

    2016-01-01

    The article describes how the principal idea behind the landscape laboratories has been to develop a 1:1 platform where researchers, practitioners and lay people can meet and cooperate on the development and testing of new design concepts for establishing and managing urban landscapes...

  20. Using IMPRINT to Guide Experimental Design with Simulated Task Environments

    Science.gov (United States)

    2015-06-18

    Master's thesis, Air Force Institute of Technology (AFIT-ENG-MS-15-J-052), June 2015: "Using IMPRINT to Guide Experimental Design with Simulated Task Environments." Distribution Statement A: approved for public release; distribution unlimited.

  1. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    Science.gov (United States)

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gulnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the…

  2. Experimental Study on Various Solar Still Designs

    OpenAIRE

    T. Arunkumar; K. Vinothkumar; Amimul Ahsan; R. Jayaprakash; Sanjay Kumar

    2012-01-01

    Humankind has depended for ages on underground water reservoirs for its fresh water needs. But these sources do not always prove useful due to the presence of excessive salinity in the water. In this paper, the fabrication of seven solar still designs is described: a spherical solar still, a pyramid solar still, a hemispherical solar still, a double-basin glass solar still, a concentrator-coupled single-slope solar still, a tubular solar still, and a tubular solar still coupled with a pyramid solar still, and …

  3. Split-plot designs for multistage experimentation

    DEFF Research Database (Denmark)

    Kulahci, Murat; Tyssedal, John

    2016-01-01

    … at the same time will be more efficient. However, there have been only a few attempts in the literature to provide an adequate and easy-to-use approach for this problem. In this paper, we present a novel methodology for constructing two-level split-plot and multistage experiments. The methodology is based … be accommodated in each stage. Furthermore, split-plot designs for multistage experiments with good projective properties are also provided…

  4. Statistics

    International Nuclear Information System (INIS)

    2005-01-01

    For the years 2004 and 2005, the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time-series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO₂ emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees

  5. Statistics

    International Nuclear Information System (INIS)

    2001-01-01

    For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO₂ emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  6. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO₂ emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  7. Statistics

    International Nuclear Information System (INIS)

    1999-01-01

    For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO₂ emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  8. ANIMATED DOCUMENTARY: EXPERIMENTATION, TECHNOLOGY AND DESIGN

    OpenAIRE

    INDIA MARA MARTINS

    2009-01-01

    The aim of this thesis is to reflect on the animated documentary, an audiovisual product that mixes documentary and animation and is redefining the role of design in the production of new media. We also show that the animated documentary rekindles a series of debates and reflections concerning the theory of documentary and of animation with respect to conceptions of realism. Our main premise is that documentary has always appropriated technology in a way that favours experimentation …

  9. Effects of different building blocks designs on the statistical ...

    African Journals Online (AJOL)

    Tholang T. Mokhele

    Enumeration Areas (EAs), Small Area Layers (SALs) and SubPlaces from the 2001 census data were used as building blocks for the generation of census output areas with the AZTool program in both rural and urban areas of South Africa. One-way analysis of variance (ANOVA) was also performed to determine statistical …

  10. Optimal experimental design for placement of boreholes

    Science.gov (United States)

    Padalkina, Kateryna; Bücker, H. Martin; Seidler, Ralf; Rath, Volker; Marquart, Gabriele; Niederau, Jan; Herty, Michael

    2014-05-01

    Drilling for deep resources is an expensive endeavor, and finding the optimal drilling location for boreholes is one of the challenging questions. We contribute to this discussion by using a simulation-based assessment of possible future borehole locations. We study the problem of finding a new borehole location in a given geothermal reservoir in terms of a numerical optimization problem. In a geothermal reservoir, the temporal and spatial distribution of temperature and hydraulic pressure may be simulated using the coupled differential equations for heat transport and for mass and momentum conservation under Darcy flow. Within this model, the permeability and thermal conductivity depend on the geological layers present in the subsurface model of the reservoir. In general, those values involve some uncertainty, making it difficult to predict the actual heat sources in the ground. Within optimal experimental design, the question is where, and to what depth, to drill the borehole in order to estimate conductivity and permeability with minimal uncertainty. We introduce a measure of this uncertainty based on simulations of the coupled differential equations. The measure is based on the Fisher information matrix of temperature data obtained through the simulations. We assume that temperature data are available along the full borehole. A minimization of the measure representing the uncertainty in the unknown permeability and conductivity parameters is performed to determine the optimal borehole location. We present the theoretical framework as well as numerical results for several 2D subsurface models including up to six geological layers. The effect of unknown layers on the introduced measure is also studied. Finally, to obtain a more realistic estimate of optimal borehole locations, we couple the optimization to a cost model for deep drilling problems.
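
    The Fisher-information criterion can be illustrated with a deliberately simplified two-layer conduction model in place of the coupled Darcy-flow/heat-transport simulator (all physical values below are invented): candidate borehole locations are ranked by the log-determinant of the information matrix of the layer conductivities.

    import numpy as np

    q, sigma = 0.06, 0.05                    # heat flux (W/m^2), temperature noise std
    k = np.array([2.5, 1.5])                 # "true" layer conductivities (W/m/K)
    depths = np.linspace(10.0, 500.0, 50)    # measurement depths along the borehole

    def sensitivities(z_interface):
        # dT/dk1 and dT/dk2 at each depth for the piecewise-linear profile
        # T(z) = T0 + q*z/k1 for z <= zi, then T0 + q*zi/k1 + q*(z - zi)/k2.
        upper = np.minimum(depths, z_interface)
        lower = np.maximum(depths - z_interface, 0.0)
        return np.column_stack([-q * upper / k[0]**2, -q * lower / k[1]**2])

    def log_det_fim(z_interface):
        J = sensitivities(z_interface)
        fim = J.T @ J / sigma**2             # Fisher information of (k1, k2)
        return np.linalg.slogdet(fim)[1]

    # Candidate borehole locations differ in the (known) local interface depth.
    for zi in (50.0, 150.0, 250.0, 400.0):
        print(f"interface at {zi:5.1f} m -> log det FIM = {log_det_fim(zi):7.2f}")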

  11. Analyzing Data from a Pretest-Posttest Control Group Design: The Importance of Statistical Assumptions

    Science.gov (United States)

    Zientek, Linda; Nimon, Kim; Hammack-Brown, Bryn

    2016-01-01

    Purpose: Among the gold standards in human resource development (HRD) research are studies that test theoretically developed hypotheses and use experimental designs. A somewhat typical experimental design would involve collecting pretest and posttest data on individuals assigned to a control or experimental group. Data from such a design that…

  12. Applied statistics : an important phase in the development of experimental science (Inaugural lecture)

    NARCIS (Netherlands)

    Hamaker, H.C.

    1962-01-01

    In many fields of inquiry, especially those concerned with living beings, "exact" observations are not possible and it is necessary to investigate the effect of several factors at the same time. This has led to the design of experiments on a statistical basis, in which several factors may be varied

  13. Statistics

    International Nuclear Information System (INIS)

    2003-01-01

    For the year 2002, part of the figures shown in the tables of the Energy Review are preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO₂ emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees on energy products

  14. Statistics

    International Nuclear Information System (INIS)

    2004-01-01

    For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO₂ emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees

  15. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO₂ emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  16. Return to Our Roots: Raising Radishes to Teach Experimental Design. Methods and Techniques.

    Science.gov (United States)

    Stallings, William M.

    1993-01-01

    Reviews research in teaching applied statistics. Concludes that students should analyze data from studies they have designed and conducted. Describes an activity in which students study germination and growth of radish seeds. Includes a table providing student instructions for both the experimental procedure and data analysis. (CFR)

  17. Statistical method for the determination of the ignition energy of dust cloud - experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Bernard, S.; Lebecki, K.; Gillard, P.; Youinou, L.; Baudry, G. [University of Orleans, Bourges (France)

    2010-05-15

    Powdery materials such as metallic or polymer powders play a considerable role in many industrial processes. Their use requires the introduction of preventive safeguards to control plant safety. The mitigation of an explosion hazard, according to the ATEX 137 Directive (1999/92/EU), requires the assessment of the dust ignition sensitivity. The PRISME laboratory (University of Orleans) has developed an experimental set-up and methodology, using the Langlie test, for the quick determination of the explosion sensitivity of dusts. This method requires only 20 shots, and ignition sensitivity is evaluated through E₅₀, the energy with an ignition probability of 0.5. A Hartmann tube, with a volume of 1.3 L, was designed and built. Many results on the energy ignition thresholds of partially oxidised aluminium were obtained using this experimental device and compared to the literature. E₅₀ evolves in the same way as the minimum ignition energy (MIE), but their respective values are different, MIE being lower than E₅₀; the link between E₅₀ and MIE has not been elucidated. In this paper, the Langlie method is explained in detail for the determination of the parameters (mean value E₅₀ and standard deviation σ) of the associated statistical law. The ignition probability versus applied energy is first measured for lycopodium in order to validate the method. A comparison between the normal and the lognormal laws was carried out, and the best fit was obtained with the lognormal law. In a second part, the Langlie test was performed on different dusts such as aluminium, cornstarch, lycopodium, coal, and PA12 in order to determine E₅₀ and σ for each dust. The energies E₀₅ and E₁₀, corresponding respectively to ignition probabilities of 0.05 and 0.1, are determined with the lognormal law and compared to MIE values found in the literature. The E₀₅ and E₁₀ values of ignition energy were found to be very close and in good agreement with MIE values in the literature.
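
    The statistical core of the method can be sketched as follows: collect roughly 20 go/no-go shots with an up-down rule (a crude step rule stands in for the exact Langlie averaging rule here), then fit a lognormal ignition-probability curve by maximum likelihood to estimate E50 and sigma, from which quantiles such as E05 and E10 follow. All numbers are invented:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    true_mu, true_sigma = np.log(30.0), 0.4      # "true" lognormal parameters (mJ)

    E, Y, level = [], [], 100.0                  # start high, step up/down
    for _ in range(20):
        ignite = rng.random() < norm.cdf((np.log(level) - true_mu) / true_sigma)
        E.append(level); Y.append(int(ignite))
        level *= 0.75 if ignite else 1.25        # crude up-down step rule

    def neg_log_lik(params):
        # Lognormal (probit-in-log-energy) ignition probability model.
        mu, log_s = params
        p = norm.cdf((np.log(E) - mu) / np.exp(log_s))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(np.where(Y, np.log(p), np.log(1 - p)))

    res = minimize(neg_log_lik, x0=[np.log(np.mean(E)), 0.0], method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    print(f"E50 ~ {np.exp(mu):.1f} mJ, sigma(log) ~ {sigma:.2f}")
    print(f"E05 ~ {np.exp(mu + norm.ppf(0.05) * sigma):.1f} mJ, "
          f"E10 ~ {np.exp(mu + norm.ppf(0.10) * sigma):.1f} mJ")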

  18. Some Statistical Methods and their Application to the Design and ...

    African Journals Online (AJOL)

    He should devote attention to such matters as … formulate the questions that should be asked by the experimenter! … these tests may be done at the α level of significance and we may … problem we need to know what kind of variation is to…

  19. Fast Bayesian optimal experimental design and its applications

    KAUST Repository

    Long, Quan

    2015-01-01

    We summarize our Laplace method and multilevel method for accelerating the computation of the expected information gain in a Bayesian Optimal Experimental Design (OED). The Laplace method is a widely used method to approximate an integration

  1. Design Issues and Inference in Experimental L2 Research

    Science.gov (United States)

    Hudson, Thom; Llosa, Lorena

    2015-01-01

    Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…

  2. Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    Science.gov (United States)

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from the experimental study, with details on the implementation of Microsoft Mathematics in Calculus, students' achievement, and the effects of the use of Microsoft…

  3. Methods for the neutronic design of a Supersara experimental loop

    International Nuclear Information System (INIS)

    Casali, F.; Cepraga, D.

    1982-01-01

    This paper describes a method for the neutronic design of experimental loops irradiated in D2O experimental reactors, like Essor. The calculation approach concerns the definition of a Wigner-Seitz cell in which the loop under examination is subjected to the same neutronic conditions as in the actual reactor.

  4. SSSFD manipulator engineering using statistical experiment design techniques

    Science.gov (United States)

    Barnes, John

    1991-01-01

    The Satellite Servicer System Flight Demonstration (SSSFD) program is a series of Shuttle flights designed to verify major on-orbit satellite servicing capabilities, such as rendezvous and docking of free flyers, Orbital Replacement Unit (ORU) exchange, and fluid transfer. A major part of this system is the manipulator system that will perform the ORU exchange. The manipulator must possess adequate toolplate dexterity to maneuver a variety of EVA-type tools into position to interface with ORU fasteners, connectors, latches, and handles on the satellite, and to move workpieces and ORUs through 6-degree-of-freedom (dof) space from the Target Vehicle (TV) to the Support Module (SM) and back. Two cost-efficient tools were combined to perform a study of robot manipulator design parameters. These tools are graphical computer simulation and Taguchi Design of Experiment methods. Using a graphics platform, an off-the-shelf robot simulation software package, and an experiment designed with Taguchi's approach, the sensitivities of various manipulator kinematic design parameters to performance characteristics were determined at minimal cost.

  5. Statistical aspects of quantitative real-time PCR experiment design

    Czech Academy of Sciences Publication Activity Database

    Kitchen, R.R.; Kubista, Mikael; Tichopád, Aleš

    2010-01-01

    Vol. 50, No. 4 (2010), pp. 231-236. ISSN 1046-2023. R&D Projects: GA AV ČR IAA500520809. Institutional research plan: CEZ:AV0Z50520701. Keywords: Real-time PCR * Experiment design * Nested analysis of variance. Subject RIV: EB - Genetics; Molecular Biology. Impact factor: 4.527, year: 2010

  6. Improved field experimental designs and quantitative evaluation of aquatic ecosystems

    Energy Technology Data Exchange (ETDEWEB)

    McKenzie, D.H.; Thomas, J.M.

    1984-05-01

    The paired-station concept and a log transformed analysis of variance were used as methods to evaluate zooplankton density data collected during five years at an electrical generation station on Lake Michigan. To discuss the example and the field design necessary for a valid statistical analysis, considerable background is provided on the questions of selecting (1) sampling station pairs, (2) experimentwise error rates for multi-species analyses, (3) levels of Type I and II error rates, (4) procedures for conducting the field monitoring program, and (5) a discussion of the consequences of violating statistical assumptions. Details for estimating sample sizes necessary to detect changes of a specified magnitude are included. Both statistical and biological problems with monitoring programs (as now conducted) are addressed; serial correlation of successive observations in the time series obtained was identified as one principal statistical difficulty. The procedure reduces this problem to a level where statistical methods can be used confidently. 27 references, 4 figures, 2 tables.
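
    A minimal sketch of the paired-station idea, not the authors' full analysis-of-variance procedure: densities are log-transformed, per-date log ratios between a control station and an impact station remove lake-wide variation, and pre- and post-operation periods are then compared. The densities below are hypothetical.

        # Paired-station comparison on log-transformed zooplankton densities.
        import numpy as np
        from scipy import stats

        control_pre  = np.array([120.0, 95.0, 150.0, 110.0, 130.0])  # organisms/m^3
        impact_pre   = np.array([115.0, 90.0, 160.0, 105.0, 125.0])
        control_post = np.array([118.0, 99.0, 142.0, 108.0, 127.0])
        impact_post  = np.array([ 80.0, 70.0, 100.0,  75.0,  85.0])

        # Per-date log ratios; pairing removes variation common to both stations.
        d_pre  = np.log(impact_pre)  - np.log(control_pre)
        d_post = np.log(impact_post) - np.log(control_post)

        t, p = stats.ttest_ind(d_pre, d_post)
        print(f"t = {t:.2f}, p = {p:.4f}")  # small p suggests a station-specific change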

  7. PGT: A Statistical Approach to Prediction and Mechanism Design

    Science.gov (United States)

    Wolpert, David H.; Bono, James W.

    One of the biggest challenges facing behavioral economics is the lack of a single theoretical framework that is capable of directly utilizing all types of behavioral data. One of the biggest challenges of game theory is the lack of a framework for making predictions and designing markets in a manner that is consistent with the axioms of decision theory. An approach in which solution concepts are distribution-valued rather than set-valued (i.e. equilibrium theory) has both capabilities. We call this approach Predictive Game Theory (or PGT). This paper outlines a general Bayesian approach to PGT. It also presents one simple example to illustrate the way in which this approach differs from equilibrium approaches in both prediction and mechanism design settings.

  8. Use of Experimental Design for Peuhl Cheese Process Optimization ...

    African Journals Online (AJOL)

    Use of Experimental Design for Peuhl Cheese Process Optimization. ... Journal of Applied Sciences and Environmental Management ... This work, consisting in the use of a central composite design, enables the determination of optimal process conditions concerning: leaf extract volume added (7 mL), heating temperature ...

  9. Experimentally supported control design for a direct drive robot

    NARCIS (Netherlands)

    Kostic, D.; Jager, de A.G.; Steinbuch, M.

    2002-01-01

    We promote the idea of an experimentally supported control design as a successful way to achieve accurate tracking of reference robot motions, under disturbance conditions and given the uncertainties arising from modeling errors. The Hinf robust control theory is used for the design of motion controllers.

  10. Statistical examination of particles in a turbulent, non-dilute particle suspension flow: experimental measurements

    International Nuclear Information System (INIS)

    Souza, R.C.; Jones, B.G.

    1986-01-01

    An experimental study of particles suspended in a fully developed turbulent water flow in a vertical pipe was performed. Three series of experiments were conducted to investigate the statistical behaviour of particles in non-dilute turbulent suspension flow, for two particle densities and particle sizes, and for several particle volume loadings ranging from 0 to 1 percent. The mean free-fall velocity of the particles was determined at these various particle volume loadings, and the phenomenon of cluster formation was observed. The precise volume loading which gives the maximum relative settling velocity was observed to depend on particle density and size. (E.G.)

  11. Application of Statistical Design for the Production of Cellulase by Trichoderma reesei Using Mango Peel

    Directory of Open Access Journals (Sweden)

    P. Saravanan

    2012-01-01

    Optimization of the culture medium for cellulase production using Trichoderma reesei was carried out. The optimization of cellulase production using mango peel as substrate was performed with a statistical methodology based on experimental designs. The screening of nine nutrients for their influence on cellulase production was achieved using a Plackett-Burman design. Avicel, soybean cake flour, KH2PO4, and CoCl2·6H2O were selected based on their positive influence on cellulase production. The composition of the selected components was optimized using Response Surface Methodology (RSM). The optimum conditions are as follows: Avicel 25.30 g/L, soybean cake flour 23.53 g/L, KH2PO4 4.90 g/L, and CoCl2·6H2O 0.95 g/L. These conditions were validated experimentally and revealed an enhanced cellulase activity of 7.8 IU/mL.
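
    As a small illustration of the screening step mentioned above, the sketch below builds the classical 12-run Plackett-Burman design from its generator row and estimates main effects by column contrasts. The run responses are hypothetical, not data from the study.

        # 12-run Plackett-Burman design for up to 11 two-level factors.
        import numpy as np

        gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
        rows = [np.roll(gen, i) for i in range(11)]         # cyclic shifts
        X = np.vstack(rows + [-np.ones(11, dtype=int)])     # final all-minus run
        assert X.shape == (12, 11)

        # Hypothetical cellulase activities (IU/mL) for the 12 runs:
        y = np.array([5.1, 3.2, 4.8, 2.9, 6.0, 3.5, 4.1, 2.7, 5.5, 3.0, 4.4, 2.5])

        effects = X.T @ y / 6.0   # mean(y | +1) - mean(y | -1) for each factor
        for k, e in enumerate(effects, start=1):
            print(f"factor {k:2d}: main effect = {e:+.2f}")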

  12. Statistical optimization of the growth factors for Chaetoceros neogracile using fractional factorial design and central composite design.

    Science.gov (United States)

    Jeong, Sung-Eun; Park, Jae-Kweon; Kim, Jeong-Dong; Chang, In-Jeong; Hong, Seong-Joo; Kang, Sung-Ho; Lee, Choul-Gyun

    2008-12-01

    Statistical experimental designs involving (i) a fractional factorial design (FFD) and (ii) a central composite design (CCD) were applied to optimize the culture medium constituents for production of a unique antifreeze protein by the Antarctic microalga Chaetoceros neogracile. The results of the FFD suggested that NaCl, KCl, MgCl2, and Na2SiO3 were significant variables that highly influenced the growth rate and biomass production. The optimum culture medium for the production of an antifreeze protein from C. neogracile was found to be Kalle's artificial seawater, pH 7.0 ± 0.5, consisting of 28.566 g/l of NaCl, 3.887 g/l of MgCl2, 1.787 g/l of MgSO4, 1.308 g/l of CaSO4, 0.832 g/l of K2SO4, 0.124 g/l of CaCO3, 0.103 g/l of KBr, 0.0288 g/l of SrSO4, and 0.0282 g/l of H3BO3. The antifreeze activity significantly increased after cells were treated with cold shock (at -5 °C) for 14 h. To the best of our knowledge, this is the first report demonstrating an antifreeze-like protein of C. neogracile.
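
    A minimal sketch of the response-surface step that follows such screening, under toy assumptions: a second-order model is fitted to a two-factor central composite design in coded units and the stationary point is located. The runs and responses below are hypothetical.

        # Second-order fit to central composite design data (two coded factors).
        import numpy as np

        a = np.sqrt(2.0)  # axial distance for a rotatable two-factor CCD
        x1 = np.array([-1, 1, -1, 1, -a, a, 0, 0, 0, 0, 0])
        x2 = np.array([-1, -1, 1, 1, 0, 0, -a, a, 0, 0, 0])
        y  = np.array([55, 60, 58, 66, 54, 63, 56, 62, 70, 69, 71])  # responses

        X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)

        # Stationary point of b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2
        B = np.array([[2 * b[4], b[3]], [b[3], 2 * b[5]]])
        xs = np.linalg.solve(B, -np.array([b[1], b[2]]))
        print("stationary point (coded units):", np.round(xs, 2))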

  13. Statistical analysis on experimental calibration data for flowmeters in pressure pipes

    Science.gov (United States)

    Lazzarin, Alessandro; Orsi, Enrico; Sanfilippo, Umberto

    2017-08-01

    This paper shows a statistical analysis of experimental calibration data for flowmeters (electromagnetic, ultrasonic and turbine flowmeters) in pressure pipes. The experimental calibration data set consists of the whole archive of the calibration tests carried out on 246 flowmeters from January 2001 to October 2015 at Settore Portate of Laboratorio di Idraulica “G. Fantoli” of Politecnico di Milano, which is accredited as LAT 104 for a flow range between 3 l/s and 80 l/s, with a certified Calibration and Measurement Capability (CMC) - formerly known as Best Measurement Capability (BMC) - equal to 0.2%. The data set is split into three subsets, consisting of 94 electromagnetic, 83 ultrasonic and 69 turbine flowmeters; each subset is analysed separately from the others, and a final comparison is then carried out. In particular, the main focus of the statistical analysis is the correction C, that is, the difference between the flow rate Q measured by the calibration facility (through the accredited procedures and the certified reference specimen) and the flow rate QM contemporarily recorded by the flowmeter under calibration, expressed as a percentage of the same QM.
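
    The correction defined above is straightforward to compute; here is a sketch with hypothetical calibration points, summarised per flowmeter family.

        # Percentage correction C = 100 * (Q - QM) / QM per calibration point.
        import numpy as np

        data = {
            "electromagnetic": (np.array([10.00, 40.05, 79.80]),   # reference Q [l/s]
                                np.array([ 9.97, 40.20, 80.10])),  # meter QM [l/s]
            "ultrasonic":      (np.array([ 5.00, 30.10, 60.20]),
                                np.array([ 5.04, 29.95, 60.50])),
        }

        for family, (Q, QM) in data.items():
            C = 100.0 * (Q - QM) / QM
            print(f"{family:16s} mean C = {C.mean():+.3f}%, std = {C.std(ddof=1):.3f}%")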

  14. Status of experimental data for the VHTR core design

    Energy Technology Data Exchange (ETDEWEB)

    Park, Won Seok; Chang, Jong Hwa; Park, Chang Kue

    2004-05-01

    The VHTR (Very High Temperature Reactor) is emerging as a next-generation nuclear reactor to demonstrate emission-free nuclear-assisted electricity and hydrogen production. The VHTR could be either a prismatic or a pebble type helium-cooled, graphite-moderated reactor. The final decision will be made after the completion of the pre-conceptual design for each type. For the pre-conceptual design of both types, computational tools are being developed. Experimental data are required to validate the tools to be developed. Many experiments on HTGR (High Temperature Gas-cooled Reactor) cores have been performed to confirm the design data and to validate the design tools. The applicability and availability of the existing experimental data for the VHTR core design have been investigated in this report.

  15. Design and Experimental Study on Spinning Solid Rocket Motor

    Science.gov (United States)

    Xue, Heng; Jiang, Chunlan; Wang, Zaicheng

    A study of a spinning solid rocket motor (SRM) used as the power plant of the twice-throwing structure of an aerial submunition is introduced. This kind of SRM, which has a tangential multi-nozzle structure, consists of a combustion chamber, a propellant charge, four tangential nozzles, an ignition device, etc. Grain design, structure design and prediction of the interior ballistic performance are described, and the problems that mainly need to be considered in the design are analyzed comprehensively. Finally, in order to investigate the working performance of the SRM and to measure its pressure-time curve and spin speed, static and dynamic tests were conducted respectively. Calculated values and experimental data were then compared and analyzed. The results indicate that the designed motor operates normally and that the interior ballistic performance is stable and meets the demands. The experimental results also provide guidance for the preliminary design of such SRMs.

  16. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey of the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as a metamodel.

  17. Optimizing Nuclear Reaction Analysis (NRA) using Bayesian Experimental Design

    International Nuclear Information System (INIS)

    Toussaint, Udo von; Schwarz-Selinger, Thomas; Gori, Silvio

    2008-01-01

    Nuclear Reaction Analysis with 3He holds the promise of measuring deuterium depth profiles up to large depths. However, the extraction of the depth profile from the measured data is an ill-posed inversion problem. Here we demonstrate how Bayesian Experimental Design can be used to optimize the number of measurements as well as the measurement energies in order to maximize the information gain. Comparison of the inversion properties of the optimized design with standard settings reveals huge possible gains. Application of the posterior sampling method allows the experimental settings to be optimized interactively during the measurement process.

  18. Optimal Experimental Design of Furan Shock Tube Kinetic Experiments

    KAUST Repository

    Kim, Daesang

    2015-01-07

    A Bayesian optimal experimental design methodology has been developed and applied to refine the rate coefficients of elementary reactions in furan combustion. Furans are considered potential renewable fuels. We focus on the Arrhenius rates of Furan + OH ↔ Furyl-2 + H2O and Furan + OH ↔ Furyl-3 + H2O, and rely on the OH consumption rate as the experimental observable. A polynomial chaos (PC) surrogate is first constructed using an adaptive pseudo-spectral projection algorithm. The PC surrogate is then exploited in conjunction with a fast estimation of the expected information gain in order to determine the optimal design in the space of initial temperatures and OH concentrations.
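
    For orientation, the quantity that the surrogate accelerates can be written as a plain nested Monte Carlo estimate of the expected information gain. Everything below is a toy assumption (a one-parameter exponential-decay observable, a uniform prior, Gaussian noise), not the paper's kinetics.

        # Nested Monte Carlo estimate of expected information gain (EIG).
        import numpy as np

        rng = np.random.default_rng(0)
        sig = 0.05                     # observation noise standard deviation

        def g(theta, d):               # toy observable for design variable d
            return np.exp(-theta * d)

        def eig(d, n_outer=500, n_inner=500):
            thetas = rng.uniform(0.5, 2.0, n_outer)       # prior samples
            ys = g(thetas, d) + sig * rng.normal(size=n_outer)
            total = 0.0
            for th, yv in zip(thetas, ys):
                like = np.exp(-0.5 * ((yv - g(th, d)) / sig) ** 2)
                inner = rng.uniform(0.5, 2.0, n_inner)    # fresh prior samples
                evid = np.mean(np.exp(-0.5 * ((yv - g(inner, d)) / sig) ** 2))
                total += np.log(like / evid)
            return total / n_outer

        for d in (0.5, 1.0, 2.0):
            print(f"design d = {d:3.1f}: estimated EIG = {eig(d):.3f}")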

  19. Statistical analysis of correlated experimental data and neutron cross section evaluation

    International Nuclear Information System (INIS)

    Badikov, S.A.

    1998-01-01

    A technique for the evaluation of neutron cross sections on the basis of statistical analysis of correlated experimental data is presented. The most important stages of the evaluation, from the compilation of the correlation matrix of measurement uncertainties to the representation of the analysis results in the ENDF-6 format, are described in detail. Special attention is paid to the physically motivated restriction (positive definiteness) on the covariance matrix of the approximate parameters' uncertainties generated within the least-squares fit. The requirements on the source experimental data that ensure satisfaction of this restriction are formulated; in particular, the correlation matrices of measurement uncertainties should themselves be positive definite. Ways of modelling positive definite correlation matrices of measurement uncertainties, in situations where their direct calculation on the basis of experimental information is impossible, are discussed. The technique described has been used to create a new generation of evaluated dosimetric reaction cross sections for the first version of the Russian dosimetric file (including nontrivial covariance information)

  20. Robust Bayesian Experimental Design for Conceptual Model Discrimination

    Science.gov (United States)

    Pham, H. V.; Tsai, F. T. C.

    2015-12-01

    A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination, given the least number of pumping wells and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experimental design and on the Bayesian model averaging (BMA) framework. A max-min programming is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify the future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed due to uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of heteroscedasticity in future observation data, as well as of the uncertainty sources, on potential pumping and observation locations.

  1. Conceptual design study of Fusion Experimental Reactor (FY87FER)

    International Nuclear Information System (INIS)

    1988-05-01

    The design study of the Fusion Experimental Reactor (FER), which has been proposed as the next-step fusion device, has been conducted by the JAERI Reactor System Laboratory since 1982 and by the FER design team since 1984. This is the final report of the FER design team program and describes the results obtained in the FY1987 (and partially FY1986) activities. The contents of this report consist of the reference design, which is based on the FY1986 guideline by the Subcommittees set up in the Nuclear Fusion Council of the Atomic Energy Commission of Japan, the Low-Physics-Risk reactor design for achieving the physics mission more reliably, and the system study of FER design candidates including the above two designs. (author)

  2. Robust optimization of the output voltage of nanogenerators by statistical design of experiments

    KAUST Repository

    Song, Jinhui; Xie, Huizhi; Wu, Wenzhuo; Roshan Joseph, V.; Jeff Wu, C. F.; Wang, Zhong Lin

    2010-01-01

    Nanogenerators were first demonstrated by deflecting aligned ZnO nanowires using a conductive atomic force microscopy (AFM) tip. The output of a nanogenerator is affected by three parameters: tip normal force, tip scanning speed, and tip abrasion. In this work, systematic experimental studies have been carried out to examine the combined effects of these three parameters on the output, using statistical design of experiments. A statistical model has been built to analyze the data and predict the optimal parameter settings. For an AFM tip of cone angle 70° coated with Pt, and ZnO nanowires with a diameter of 50 nm and lengths of 600 nm to 1 μm, the optimized parameters for the nanogenerator were found to be a normal force of 137 nN and scanning speed of 40 μm/s, rather than the conventional settings of 120 nN for the normal force and 30 μm/s for the scanning speed. A nanogenerator with the optimized settings has three times the average output voltage of one with the conventional settings. © 2010 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.

  3. Design study of blanket structure for tokamak experimental fusion reactor

    International Nuclear Information System (INIS)

    1979-11-01

    A design study of the blanket structure for the JAERI Experimental Fusion Reactor (JXFR) has been carried out. Studied here were the fabrication and testing of the blanket structure (blanket cells, blanket rings, piping and blanket modules), assembly and disassembly of the blanket module, and monitoring and testing techniques. Problems in the design and fabrication of the blanket structure were revealed, and research and development problems for the future were also identified. (author)

  4. Experimental designs for autoregressive models applied to industrial maintenance

    International Nuclear Information System (INIS)

    Amo-Salas, M.; López-Fidalgo, J.; Pedregal, D.J.

    2015-01-01

    Some time series applications require data which are either expensive or technically difficult to obtain. In such cases, scheduling the points in time at which the information should be collected is of paramount importance in order to optimize the resources available. In this paper, time series models are studied from a new perspective, consisting in the use of the Optimal Experimental Design setup to obtain the best times to take measurements, with the principal aim of saving costs or discarding useless information. The model and the covariance function are expressed in an explicit form so as to apply the usual techniques of Optimal Experimental Design. Optimal designs for various approaches are computed and their efficiencies are compared. The methods are demonstrated in an application to the industrial maintenance of a critical piece of equipment at a petrochemical plant. This simple model allows explicit calculations in order to show openly the procedure for finding the correlation structure needed for computing the optimal experimental design. In this sense, the techniques used in this paper to compute optimal designs may be transferred to other situations following the ideas of the paper, but taking into account the increasing difficulty of the procedure for more complex models. - Highlights: • Optimal experimental design theory is applied to AR models to reduce costs. • The first observation has an important impact on any optimal design. • Either a lack of precision or small starting observations calls for large measurement times. • Reasonable optimal times were obtained by slightly relaxing the efficiency. • Optimal designs were computed in a predictive maintenance context.

  5. SCRAED - Simple and Complex Random Assignment in Experimental Designs

    OpenAIRE

    Alferes, Valentim R.

    2009-01-01

    SCRAED is a package of 37 self-contained SPSS syntax files that perform simple and complex random assignment in experimental designs. For between-subjects designs, SCRAED includes simple random assignment (no restrictions, forced equal sizes, forced unequal sizes, and unequal probabilities), block random assignment (simple and generalized blocks), and stratified random assignment (no restrictions, forced equal sizes, forced unequal sizes, and unequal probabilities). For within-subject...

  6. Empirical Statistical Power for Testing Multilocus Genotypic Effects under Unbalanced Designs Using a Gibbs Sampler

    Directory of Open Access Journals (Sweden)

    Chaeyoung Lee

    2012-11-01

    Epistasis, which may explain a large portion of the phenotypic variation for complex economic traits of animals, has been ignored in many genetic association studies. A Bayesian method was introduced to draw inferences about multilocus genotypic effects based on their marginal posterior distributions obtained by a Gibbs sampler. A simulation study was conducted to provide statistical powers under various unbalanced designs using this method. Data were simulated by combined designs of number of loci, within-genotype variance, and sample size, in unbalanced designs with or without null combined-genotype cells. Mean empirical statistical power was estimated for testing the posterior mean estimate of the combined genotype effect. A practical example of obtaining empirical statistical power estimates with a given sample size was provided under unbalanced designs. The empirical statistical powers would be useful for determining an optimal design when interactive associations of multiple loci with complex phenotypes are examined.
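
    The simulate-test-count loop behind such empirical power estimates can be sketched as follows. The paper infers combined-genotype effects with a Gibbs sampler; this minimal frequentist analogue, with a hypothetical effect size, variance and unbalanced group sizes, only illustrates how power is read off as the fraction of significant replicates.

        # Empirical power by simulation for an unbalanced two-group comparison.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n1, n2 = 40, 15            # unbalanced design
        effect, sd = 0.5, 1.0      # true effect and within-group sd (hypothetical)

        n_rep, hits = 2000, 0
        for _ in range(n_rep):
            g1 = rng.normal(0.0, sd, n1)
            g2 = rng.normal(effect, sd, n2)
            _, p = stats.ttest_ind(g1, g2)
            hits += p < 0.05

        print(f"empirical power ≈ {hits / n_rep:.2f}")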

  7. Neural Network Assisted Experimental Designs for Food Research

    Directory of Open Access Journals (Sweden)

    H.S. Ramaswamy

    2000-06-01

    The ability of artificial neural networks (ANNs) to predict full factorial data from fractional data corresponding to some of the commonly used experimental designs is explored in this paper. Factorial and fractional factorial designs such as L8, L9, L18, and Box-Behnken schemes were considered, both in their original form and with some variations (L8+6, L15 and L9+1). Full factorial (3 factors × 5 levels) and fractional data were generated employing sixteen different mathematical equations (four in each category: linear, with and without interactions, and non-linear, with and without interactions). Different ANN models were trained, and the best model was chosen for each equation based on its ability to predict the fractional data. The best experimental design was then chosen based on its ability to simulate the full-factorial data for each equation. In several cases, the mean relative errors with the L18 design (which had more input data than the other models) were even higher than with smaller fractional designs. In general, the ANN-assisted Lm, Box-Behnken, L15 and L18 designs were found to predict the full factorial data reasonably well, with errors less than 5%.

  8. Optimal statistical damage detection and classification in an experimental wind turbine blade using minimum instrumentation

    Science.gov (United States)

    Hoell, Simon; Omenzetter, Piotr

    2017-04-01

    The increasing demand for carbon-neutral energy in a challenging economic environment is a driving factor for erecting ever larger wind turbines in harsh environments, using novel wind turbine blade (WTB) designs characterized by high flexibilities and lower buckling capacities. To counteract the resulting increase in operation and maintenance costs, efficient structural health monitoring systems can be employed to prevent dramatic failures and to schedule maintenance actions according to the true structural state. This paper presents a novel methodology for classifying structural damages using vibrational responses from a single sensor. The method is based on statistical classification using Bayes' theorem and an advanced statistic, which allows the performance to be controlled by varying the number of samples which represent the current state. This is done for multivariate damage-sensitive features defined as partial autocorrelation coefficients (PACCs) estimated from vibrational responses, and for principal component analysis scores from PACCs. Additionally, optimal DSFs are composed not only for damage classification but also for damage detection based on binary statistical hypothesis testing, where feature selections are found with a fast forward procedure. The method is applied to laboratory experiments with a small-scale WTB with wind-like excitation and non-destructive damage scenarios. The obtained results demonstrate the advantages of the proposed procedure and are promising for future applications of vibration-based structural health monitoring in WTBs.
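
    A minimal sketch of the feature-extraction step, assuming a synthetic AR(2) series as a stand-in for a measured single-sensor blade response: partial autocorrelation coefficients are estimated and collected as the damage-sensitive feature vector.

        # PACC features from a single vibration response channel.
        import numpy as np
        from statsmodels.tsa.stattools import pacf

        rng = np.random.default_rng(2)
        n = 4096
        x = np.zeros(n)
        for t in range(2, n):                  # synthetic AR(2) "response"
            x[t] = 1.5 * x[t - 1] - 0.8 * x[t - 2] + rng.normal()

        features = pacf(x, nlags=10)[1:]       # drop lag 0 (always 1)
        print(np.round(features, 3))           # feature vector for classification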

  9. Experimental validation of a new heterogeneous mechanical test design

    Science.gov (United States)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    Standard material parameter identification strategies generally use an extensive number of classical tests for collecting the required experimental data. However, a great effort has been made recently by the scientific and industrial communities to base this experimental database on heterogeneous tests. Such tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, like digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure that maximizes an indicator capable of evaluating the heterogeneity and the richness of the strain information. However, no experimental validation had yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. For this aim, the DIC technique and a Finite Element Model Updating inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system providing the strain states on the specimen surface. The identification strategy is carried out with the data obtained from the experimental tests, and the results are compared to a reference numerical solution.

  10. Statistics is not enough: revisiting Ronald A. Fisher's critique (1936) of Mendel's experimental results (1866).

    Science.gov (United States)

    Pilpel, Avital

    2007-09-01

    This paper is concerned with the role of rational belief change theory in the philosophical understanding of experimental error. Today, philosophers seek insight about error in the investigation of specific experiments, rather than in general theories. Nevertheless, rational belief change theory adds to our understanding of just such cases, R. A. Fisher's criticism of Mendel's experiments being a case in point. After an historical introduction, the main part of this paper investigates Fisher's paper from the point of view of rational belief change theory: what changes of belief about Mendel's experiments Fisher goes through, and with what justification. This leads to surprising insights about what Fisher had done right and wrong and, more generally, about the limits of statistical methods in detecting error.

  11. Inference of missing data and chemical model parameters using experimental statistics

    Science.gov (United States)

    Casey, Tiernan; Najm, Habib

    2017-11-01

    A method for determining the joint parameter density of Arrhenius rate expressions through the inference of missing experimental data is presented. This approach proposes noisy hypothetical data sets from target experiments and accepts those which agree with the reported statistics, in the form of nominal parameter values and their associated uncertainties. The data exploration procedure is formalized using Bayesian inference, employing maximum entropy and approximate Bayesian computation methods to arrive at a joint density on data and parameters. The method is demonstrated in the context of reactions in the H2-O2 system for predictive modeling of combustion systems of interest. Work supported by the US DOE BES CSGB. Sandia National Labs is a multimission lab managed and operated by Nat. Technology and Eng'g Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell Intl, for the US DOE NNSA under contract DE-NA-0003525.
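
    A rough sketch of the accept/reject idea that approximate Bayesian computation (ABC) formalizes, under toy assumptions: hypothetical data sets are proposed from a prior over the parameter and kept only when their summary statistics agree with the reported nominal value and uncertainty.

        # Toy ABC: propose missing data sets, accept those matching the report.
        import numpy as np

        rng = np.random.default_rng(4)
        reported_mean, reported_sd = 1.80, 0.10   # hypothetical reported statistics
        n_obs = 10                                # size of the unreported data set

        accepted = []
        for _ in range(50000):
            k = rng.uniform(1.0, 3.0)             # prior draw for the parameter
            data = rng.normal(k, 0.1, n_obs)      # proposed hypothetical data set
            if (abs(data.mean() - reported_mean) < 0.02
                    and abs(data.std(ddof=1) - reported_sd) < 0.02):
                accepted.append(k)

        print(f"{len(accepted)} accepted; posterior mean ≈ {np.mean(accepted):.3f}")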

  12. A Statistical Approach for Selecting Buildings for Experimental Measurement of HVAC Needs

    Directory of Open Access Journals (Sweden)

    Malinowski Paweł

    2017-03-01

    This article presents a statistical methodology for selecting representative buildings for experimentally evaluating the performance of HVAC systems, especially in terms of energy consumption. The proposed approach is based on the k-means method. The algorithm for this method is conceptually simple, allowing it to be easily implemented. The method can be applied to large quantities of data with unknown distributions. The method was tested using numerical experiments to determine the hourly, daily, and yearly heat values and the domestic hot water demands of residential buildings in Poland. Due to its simplicity, the proposed approach is very promising for use in engineering applications and is applicable to testing the performance of many HVAC systems.
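
    A minimal sketch of the selection idea, assuming each building is summarised by a feature vector (say, yearly heat demand and hot water demand): k-means clusters the stock, and the building nearest each cluster centre is taken as the representative one to instrument. Uses scikit-learn; all feature values are hypothetical.

        # Cluster buildings and pick the one closest to each cluster centre.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)
        # Hypothetical [yearly heat demand (MWh), hot water demand (m^3)] rows:
        X = np.column_stack([rng.normal(150, 40, 200), rng.normal(60, 15, 200)])

        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
        for c, centre in enumerate(km.cluster_centers_):
            idx = int(np.argmin(np.linalg.norm(X - centre, axis=1)))
            print(f"cluster {c}: representative building #{idx}, features {np.round(X[idx], 1)}")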

  13. Design and Implementation of an Experimental Cataloging Advisor--Mapper.

    Science.gov (United States)

    Ercegovac, Zorana; Borko, Harold

    1992-01-01

    Describes the design of an experimental computer-aided cataloging advisor, Mapper, that was developed to help novice users with the descriptive cataloging of single-sheet maps from U.S. publishers. The human-computer interface is considered, the use of HyperCard is discussed, the knowledge base is explained, and assistance screens are described.…

  14. Experimental design of natural and accelerated bone and wood ageing

    DEFF Research Database (Denmark)

    Facorellis, Y.; Pournou, A.; Richter, Jane

    2015-01-01

    This paper presents the experimental design for the natural and accelerated ageing of bone and wood samples found in museum conditions, conceived as part of the INVENVORG project (Thales Research Funding Program – NRSF) investigating the effects of environmental factors on natural organic materials.

  15. Optimal Experimental Design for Large-Scale Bayesian Inverse Problems

    KAUST Repository

    Ghattas, Omar

    2014-01-06

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions. The control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial-based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP-based approach is used to estimate the expected information gain in the proposed experiments, and to select the best experimental set-ups yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and for extending the experimental design methodology to cases where the control parameters are noisy.

  16. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

    Science.gov (United States)

    Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P

    2013-01-01

    Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods, and particularly discrete-choice experiments (DCEs), have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by a lack of understanding of the available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health - A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs in the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters, and it summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. In addition, ISPOR members contributed to developing a consensus

  17. Study designs, use of statistical tests, and statistical analysis software choice in 2015: Results from two Pakistani monthly Medline indexed journals.

    Science.gov (United States)

    Shaikh, Masood Ali

    2017-09-01

    Assessment of research articles in terms of the study designs used, the statistical tests applied, and the use of statistical analysis programmes helps determine the research activity profile and trends in a country. In this descriptive study, all original articles published by the Journal of Pakistan Medical Association (JPMA) and the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) in the year 2015 were reviewed in terms of study designs used, application of statistical tests, and the use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in the year 2015. Results of this study indicate that the cross-sectional study design, bivariate inferential statistical analysis entailing comparison between two variables/groups, and the statistical software programme SPSS were the most common study design, inferential statistical analysis, and statistical analysis software programme, respectively. These results echo a previously published assessment of these two journals for the year 2014.

  18. Cellular internalisation kinetics and cytotoxic properties of statistically designed and optimised neo-geometric copper nanocrystals.

    Science.gov (United States)

    Murugan, Karmani; Choonara, Yahya E; Kumar, Pradeep; du Toit, Lisa C; Pillay, Viness

    2017-09-01

    This study aimed to highlight a statistical design to precisely engineer homogeneous geometric copper nanoparticles (CuNPs) for enhanced intracellular drug delivery as a function of geometrical structure. CuNPs with a dual functionality, comprising geometric attributes for enhanced cell uptake and cytotoxic activity against proliferating cells, were synthesized as a novel drug delivery system. This paper investigated defined concentrations of two key surfactants used in the reaction to mutually control and manipulate nano-shape and to optimise the geometric nanosystems. A statistical experimental design comprising a full factorial model served as a refining factor to achieve homogeneous geometric nanoparticles using a one-pot method for the systematic optimisation of the geometric CuNPs. The shapes of the nanoparticles were investigated to determine the effect of the surfactant variation, which was the aim of the study, and the zeta potential was studied to ensure the stability of the system and to establish a nanosystem of low aggregation potential. After optimisation of the nano-shapes, extensive cellular internalisation studies were conducted to elucidate the effect of geometric CuNPs on uptake rates, in addition to the vital toxicity assays, to further understand the cellular effect of geometric CuNPs as a drug delivery system. In addition to geometry, volume, surface area, orientation to the cell membrane and colloidal stability are also addressed. The outcomes of the study demonstrated the success of homogeneous geometric NP formation, in addition to a stable surface charge. The findings of the study can be utilized for the development of a drug delivery system for promoted cellular internalisation and effective drug delivery. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Second preliminary design of JAERI experimental fusion reactor (JXFR)

    International Nuclear Information System (INIS)

    Sako, Kiyoshi; Tone, Tatsuzo; Seki, Yasushi; Iida, Hiromasa; Yamato, Harumi

    1979-06-01

    A second preliminary design of a tokamak experimental fusion reactor to be built in the near future has been performed. This design covers the overall reactor system, including plasma characteristics, reactor structure, blanket neutronics, radiation shielding, superconducting magnets, the neutral beam injector, the electric power supply system, the fuel recirculating system, the reactor cooling and tritium recovery systems, and the maintenance scheme. Safety analyses of the reactor system have also been performed. This paper gives a brief description of the design as of January 1979. The feasibility of raising the power density has also been studied and is shown in an appendix. (author)

  1. COMPARISON OF EXPERIMENTAL-DESIGNS COMBINING PROCESS AND MIXTURE VARIABLES .1. DESIGN CONSTRUCTION AND THEORETICAL EVALUATION

    NARCIS (Netherlands)

    DUINEVELD, C. A. A.; Smilde, A. K.; Doornbos, D. A.

    1993-01-01

    The combination of process variables and mixture variables in experimental design is a problem which has not yet been solved. It is examined here whether a set of designs can be found which can be used for a series of models of reasonable complexity. The proposed designs are compared with known designs.

  2. A Case Study on Teaching the Topic "Experimental Unit" and How It Is Presented in Advanced Placement Statistics Textbooks

    Science.gov (United States)

    Perrett, Jamis J.

    2012-01-01

    This article demonstrates how textbooks differ in their description of the term "experimental unit". Advanced Placement Statistics teachers and students are often limited in their statistical knowledge by the information presented in their classroom textbook. Definitions and descriptions differ among textbooks as well as among different…

  3. Matching of experimental and statistical-model thermonuclear reaction rates at high temperatures

    International Nuclear Information System (INIS)

    Newton, J. R.; Longland, R.; Iliadis, C.

    2008-01-01

    We address the problem of extrapolating experimental thermonuclear reaction rates toward high stellar temperatures (T>1 GK) by using statistical model (Hauser-Feshbach) results. Reliable reaction rates at such temperatures are required for studies of advanced stellar burning stages, supernovae, and x-ray bursts. Generally accepted methods are based on the concept of a Gamow peak. We follow recent ideas that emphasized the fundamental shortcomings of the Gamow peak concept for narrow resonances at high stellar temperatures. Our new method defines the effective thermonuclear energy range (ETER) by using the 8th, 50th, and 92nd percentiles of the cumulative distribution of fractional resonant reaction rate contributions. This definition is unambiguous and has a straightforward probability interpretation. The ETER is used to define a temperature at which Hauser-Feshbach rates can be matched to experimental rates. This matching temperature is usually much higher than previous estimates that employed the Gamow peak concept. We suggest that an increased matching temperature provides more reliable extrapolated reaction rates, since Hauser-Feshbach results are more trustworthy the higher the temperature. Our ideas are applied to 21 (p,γ), (p,α), and (α,γ) reactions on A=20-40 target nuclei. For many of the cases studied here, our extrapolated reaction rates at high temperatures differ significantly from those obtained using the Gamow peak concept.
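
    The percentile definition can be sketched directly, using hypothetical resonance energies and fractional rate contributions: the ETER limits are the energies at which the cumulative fractional contribution reaches 0.08 and 0.92.

        # Effective thermonuclear energy range from cumulative contributions.
        import numpy as np

        E_res = np.array([0.2, 0.5, 0.9, 1.4, 2.0])          # resonance energies [MeV]
        contrib = np.array([0.05, 0.30, 0.40, 0.20, 0.05])   # fractional contributions

        order = np.argsort(E_res)
        cum = np.cumsum(contrib[order])

        def energy_at(q):   # first energy where the cumulative fraction reaches q
            return E_res[order][np.searchsorted(cum, q)]

        lo, mid, hi = (energy_at(q) for q in (0.08, 0.50, 0.92))
        print(f"ETER = [{lo}, {hi}] MeV, median contribution at {mid} MeV")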

  4. Computational design and experimental validation of new thermal barrier systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin [Louisiana State Univ., Baton Rouge, LA (United States)

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist thermal barrier coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the development of the computational simulation method. We have performed ab initio density functional theory (DFT) and molecular dynamics simulations for screening top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For the experimental validations, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  5. Statistical modeling of static strengths of nuclear graphites with relevance to structural design

    International Nuclear Information System (INIS)

    Arai, Taketoshi

    1992-02-01

    The use of graphite materials for structural members poses the problem of how to take into account the statistical properties of static strength, especially tensile fracture stresses, in component structural design. The present study comprises comprehensive examinations of the statistical data base and modelings of nuclear graphites. First, the report provides individual samples and their analyses for the strengths of IG-110 and PGX graphites for HTTR components. Statistical characteristics of other HTGR graphites are also exemplified from the literature. Most of the statistical distributions of individual samples are found to be approximately normal. The goodness of fit to normal distributions is more satisfactory with larger sample sizes. Molded and extruded graphites, however, possess a variety of statistical properties depending on samples from different within-log locations and/or different orientations. Second, the previous statistical models, including the Weibull theory, are assessed from the viewpoint of applicability to design procedures. This leads to the conclusion that the Weibull theory and its modified versions are satisfactory only for limited parts of the tensile fracture behavior; they are not consistent with the whole set of observations. Only normal statistics are justifiable as practical approaches for discussing specified minimum ultimate strengths as statistical confidence limits for individual samples. Third, the assessment of the various statistical models emphasizes the need to develop advanced analytical models which involve modeling of the microstructural features of actual graphite materials. Improvements of other structural design methodologies are also presented. (author)
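
    A small sketch of the comparison discussed above, assuming a hypothetical sample of tensile strengths: a normal and a two-parameter Weibull model are both fitted, and a low percentile of the kind used for specified minimum strengths is read off each.

        # Normal vs Weibull fits to a strength sample (values in MPa, hypothetical).
        import numpy as np
        from scipy import stats

        s = np.array([24.1, 25.3, 23.8, 26.0, 24.9, 25.5, 23.2, 24.6, 25.1, 24.4])

        mu, sd = s.mean(), s.std(ddof=1)
        shape, loc, scale = stats.weibull_min.fit(s, floc=0.0)  # two-parameter fit

        p01_normal  = stats.norm.ppf(0.01, mu, sd)
        p01_weibull = stats.weibull_min.ppf(0.01, shape, loc, scale)
        print(f"normal 1st percentile:  {p01_normal:.1f} MPa")
        print(f"Weibull 1st percentile: {p01_weibull:.1f} MPa")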

  6. Sensitivity analysis and optimization of system dynamics models: Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for

  7. Experimental evidence of the statistical intermittency in a cryogenic turbulent jet of normal and superfluid Helium

    International Nuclear Information System (INIS)

    Duri, D.

    2012-01-01

    This experimental work is focused on the statistical study of the high-Reynolds-number turbulent velocity field in an inertially driven liquid helium axisymmetric round jet, at temperatures above and below the lambda transition (between 2.3 K and 1.78 K), in a cryogenic wind tunnel. The possibility of finely tuning the fluid temperature allows us to perform a comparative study of quantum He II turbulence within the classical framework of the Kolmogorov turbulent cascade, in order to better understand the energy cascade process in a superfluid. In particular, we focused our attention on intermittency phenomena, in both the He I and He II phases, by measuring the high-order statistics of the longitudinal velocity increments by means of the flatness and skewness statistical estimators. A first phase consisted in developing the cryogenic facility, a closed-loop pressurized and temperature-regulated wind tunnel, and adapting the classic hot-wire anemometry technique in order to be able to work in such a challenging low-temperature environment. A detailed calibration of the fully developed turbulent flow was then carried out at 2.3 K, at Reynolds numbers based on the Taylor length scale of up to 2600, in order to qualify our testing set-up and to identify possible facility-related spurious phenomena. This procedure showed that the statistical properties of the longitudinal velocity increments are in good agreement with previous results. By further reducing the temperature of the working fluid (at constant pressure) below the lambda point, down to 1.78 K, local velocity measurements were performed at different superfluid density fractions. The results show a classical behaviour of the He II energy cascade at large scales while, at smaller scales, a deviation has been observed. The occurrence of this phenomenon, which requires further investigation and modelling, is highlighted by the observed change of sign of the third-order structure function.

  8. MYRRHA/XT-ADS primary system design and experimental devices

    International Nuclear Information System (INIS)

    Maes, D.

    2009-01-01

    The EUROTRANS project is an integrated project in the Sixth European Framework Programme in the context of Partitioning and Transmutation. The objective of this project is to work towards an ETD (European Transmutation Demonstration) in a step-wise manner. The first step is to carry out an advanced design of a small-scale XT-ADS (eXperimental Transmutation in an Accelerator Driven System) for realisation in the short term (about 10 years), as well as to accomplish a generic conceptual design of EFIT (European Facility for Industrial Transmutation) for realisation in the long term. The MYRRHA-2005 design served as the starting basis for the XT-ADS. Many options have been revisited and the framework is now set up. While the MYRRHA-2005 design was still a conceptual design, the intention is to obtain, at the end of the EUROTRANS project (March 2009), an advanced design of the XT-ADS, albeit a first advanced design. While the design work performed during the first years of the project (2005-2006) was mainly devoted to optimising and enhancing the primary and secondary system configuration according to the suggestions and contributions of our industrial partners (Ansaldo Nucleare, Areva, Suez-Tractebel) within DM1 (Domain 1, DESIGN), the last year's work objectives mainly consisted of (1) the release of the Remote Handling Design Catalogue for the XT-ADS, (2) the formulation of the specification of the experimental devices according to the XT-ADS objectives and adapted to the actual XT-ADS core and core support structure design, and (3) the detailed calculations of the main XT-ADS primary and secondary system components.

  9. A statistical characterization method for damping material properties and its application to structural-acoustic system design

    International Nuclear Information System (INIS)

    Jung, Byung C.; Lee, Doo Ho; Youn, Byeng D.; Lee, Soo Bum

    2011-01-01

    The performance of surface damping treatments may vary once the surface is exposed to a wide range of temperatures, because the performance of a viscoelastic damping material is highly dependent on the operational temperature. In addition, experimental data for the dynamic responses of viscoelastic materials are inherently random, which makes it difficult to design a robust damping layout. In this paper, a statistical modeling procedure with a statistical calibration method is suggested for the variability characterization of viscoelastic damping materials in constrained-layer damping structures. First, the viscoelastic material property is decomposed into two sources: (I) a random complex modulus due to operational temperature variability, and (II) experimental/model errors in the complex modulus. Next, the variability in the damping material property is obtained using the statistical calibration method by solving an unconstrained optimization problem with a likelihood function metric. Two case studies are considered to show the influence of the material variability on the acoustic performance of structural-acoustic systems. It is shown that the variability of the damping material propagates to the acoustic performance of the systems. Finally, robust and reliable damping layout designs for the two case studies are obtained through reliability-based design optimization (RBDO) amidst severe variability in the operational temperature and the damping material.

  10. The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research.

    Science.gov (United States)

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim

    2014-09-01

    The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but still a lot of questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, the multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data. © The Author(s) 2014.
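
    To make the design-matrix question concrete, here is a sketch for the simplest AB phase contrast, with predictors for the intercept, baseline trend, level change and slope change; the multiple-baseline matrices discussed in the article extend this structure across cases. The outcome data are simulated with known effects.

        # Design matrix for a single-case AB design: level and trend change.
        import numpy as np

        rng = np.random.default_rng(0)
        n_base, n_treat = 6, 8
        t = np.arange(n_base + n_treat, dtype=float)      # session number
        phase = (t >= n_base).astype(float)               # 0 = baseline, 1 = treatment
        t_treat = np.where(phase == 1, t - n_base, 0.0)   # time since treatment start

        X = np.column_stack([np.ones_like(t), t, phase, t_treat])

        # Simulated outcome with a level change of 1.5 and a slope change of 0.3:
        y = 2.0 + 0.1 * t + 1.5 * phase + 0.3 * t_treat + rng.normal(0, 0.2, t.size)

        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(np.round(beta, 2))  # intercept, baseline trend, level change, slope change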

  12. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
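
    As an illustration of an a priori sample size calculation, the sketch below uses the standard normal-approximation formula for comparing two means; the effect size, alpha, and power values are only example inputs.

```python
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size for comparing two means,
    with effect size d (Cohen's d), two-sided alpha, and target power."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Medium effect (d = 0.5) at 80% power: about 63 subjects per group.
print(n_per_group(0.5))
```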

  13. Multi-criteria optimization of the flesh melons skin separation process by experimental and statistical analysis methods

    Directory of Open Access Journals (Sweden)

    Y. B. Medvedkov

    2016-01-01

    Full Text Available Research and innovation aimed at creating energy-efficient processes for melon processing is a significant task. Separating the skin from the melon flesh, with subsequent use of both in new food products, is one of the most labour-intensive operations in this technology. The lack of a scientific and experimental basis for this operation is holding back the development of high-performance machines for its implementation. In this connection, a technique for experiments on the separation of melon skins in a pilot plant, together with a search for the optimal operating regimes by statistical modeling, is offered. The late-ripening melon varieties Kalaysan, Thorlami and Gulab-sary are the objects of study. The interaction of the factors influencing the skin-separation process is analysed. A central composite rotatable design and a fractional factorial experiment were used. Applying the method of experimental design with the treatment planning template in the Design Expert v.10 software yielded regression equations that adequately describe the actual process. Rational intervals for the input factor values are established: the ratio of the rotational speed of the drum to the rotational frequency of the abrasive supply roll; the gap between the supply drum and the shearing knife; the shearing blade sharpening angle; the number of feed drum spikes; and the abrading drum orifice diameter. The mean square error does not exceed 12.4%. The regression equations are interpreted graphically by scatter plots and engineering nomograms that allow prediction of rational values of the input factors for three optimization criteria: minimal specific energy consumption in the cutting process, maximal specific pulp throughput, and pulp extraction ratio. The obtained data can be used for the operational management of the process parameters, taking into account the geometrical dimensions of the melon and its inhomogeneous structure.
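
    For readers unfamiliar with the design used here, the following sketch constructs the coded runs of a central composite rotatable design for an arbitrary number of factors; the factor count and number of center points are illustrative, not taken from the study.

```python
import itertools
import numpy as np

def ccd_rotatable(k, n_center=4):
    """Coded central composite rotatable design for k factors:
    2^k factorial points, 2k axial points at alpha = (2^k)**0.25, center runs."""
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    alpha = (2 ** k) ** 0.25
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([factorial, axial, center])

design = ccd_rotatable(3)
print(design.shape)   # (8 factorial + 6 axial + 4 center, 3) = (18, 3) coded runs
```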

  14. Conceptual design study of fusion experimental reactor (FY86 FER)

    International Nuclear Information System (INIS)

    Saito, Ryusei; Kashihara, Shin-ichiro; Itoh, Shin-ichi

    1987-08-01

    This report describes the results of a conceptual design study on plant systems for the Fusion Experimental Reactor (FY86 FER). Design studies for FER plant systems have been continued from FY85, especially for design modifications made in accordance with revisions of plasma scaling parameters and system improvements. This report describes 1) system construction, 2) site and reactor building plan, 3) repair and maintenance system, 4) tritium circulation system, 5) heating, ventilation and air conditioning system, 6) tritium clean-up system, 7) cooling and baking system, 8) waste treatment and storage system, 9) control system, 10) electric power system, and 11) site factory plan, all of which are part of the FY86 design work. The plant systems described in this report have generally been based on the FY86 FER (ACS Reactor), which is one of the six candidate concepts for FER. (author)

  15. First preliminary design of an experimental fusion reactor

    International Nuclear Information System (INIS)

    1977-09-01

    A preliminary design of a tokamak experimental fusion reactor to be built in the near future is under way. The goals of the reactor are to achieve reactor-level plasma conditions for a sufficiently long operation period and to obtain design, construction and operational experience for the main components of full-scale power reactors. This design covers the overall reactor system, including plasma characteristics, reactor structure, blanket neutronics, shielding, superconducting magnets, neutral beam injector, electric power supply system, fuel circulating system, reactor cooling system, tritium recovery system and maintenance scheme. The main design parameters are as follows: reactor fusion power 100 MW, torus radius 6.75 m, plasma radius 1.5 m, first wall radius 1.75 m, toroidal magnetic field on axis 6 T, blanket fertile material Li2O, coolant He, structural material 316SS and tritium breeding ratio 0.9. (auth.)

  16. ITER [International Thermonuclear Experimental Reactor] reactor building design study

    International Nuclear Information System (INIS)

    Thomson, S.L.; Blevins, J.D.; Delisle, M.W.

    1989-01-01

    The International Thermonuclear Experimental Reactor (ITER) is at the midpoint of a two-year conceptual design. The ITER reactor building is a reinforced concrete structure that houses the tokamak and associated equipment and systems and forms a barrier between the tokamak and the external environment. It provides radiation shielding and controls the release of radioactive materials to the environment during both routine operations and accidents. The building protects the tokamak from external events, such as earthquakes or aircraft strikes. The reactor building requirements have been developed from the component designs and the preliminary safety analysis. The equipment requirements, tritium confinement, and biological shielding have been studied. The building design in progress requires continuous interaction with the component and system designs and with the safety analysis. 8 figs

  17. Overview of International Thermonuclear Experimental Reactor (ITER) engineering design activities*

    Science.gov (United States)

    Shimomura, Y.

    1994-05-01

    The International Thermonuclear Experimental Reactor (ITER) [International Thermonuclear Experimental Reactor (ITER) (International Atomic Energy Agency, Vienna, 1988), ITER Documentation Series, No. 1] project is a multiphased project, presently proceeding under the auspices of the International Atomic Energy Agency according to the terms of a four-party agreement among the European Atomic Energy Community (EC), the Government of Japan (JA), the Government of the Russian Federation (RF), and the Government of the United States (US), ``the Parties.'' The ITER project is based on the tokamak, a Russian invention that has since been brought to a high level of development in all major fusion programs in the world. The objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER design is being developed by the Joint Central Team, with support from the Parties' four Home Teams. An overview of ITER design activities is presented.

  18. Design and experimental characterization of an EM pump

    International Nuclear Information System (INIS)

    Kim, Hee Reyoung; Hong, Sang Hee

    1999-01-01

    Generally, an EM (electromagnetic) pump is employed to circulate electrically conducting liquids by using the Lorentz force. In particular, in a liquid metal reactor (LMR), which uses liquid sodium of high electrical conductivity as a coolant, an EM pump is preferred for its advantages over a mechanical pump, such as no rotating parts, no noise, and simplicity. In this research, a pilot annular linear induction EM pump with a flow rate of 200 l/min was designed by using the electrical equivalent-circuit method. The pump was designed and manufactured considering material and environmental (high temperature and liquid sodium) requirements. The pump performance was experimentally characterized in terms of input current, voltage, power, and frequency. The theoretical predictions were also compared with the experimental results

  19. TIBER: Tokamak Ignition/Burn Experimental Research. Final design report

    International Nuclear Information System (INIS)

    Henning, C.D.; Logan, B.G.; Barr, W.L.

    1985-01-01

    The Tokamak Ignition/Burn Experimental Research (TIBER) device is the smallest superconducting tokamak designed to date. In the design, plasma shaping is used to achieve a high plasma beta. Neutron shielding is minimized to achieve the desired small device size, but the superconducting magnets must be shielded sufficiently to reduce the neutron heat load and the gamma-ray dose to various components of the device. Specifications of the plasma-shaping coils, the shielding, the cooling requirements, and the heating modes are given. 61 refs., 92 figs., 30 tabs

  20. Experimental use of iteratively designed rotation invariant correlation filters

    International Nuclear Information System (INIS)

    Sweeney, D.W.; Ochoa, E.; Schils, G.F.

    1987-01-01

    Iteratively designed filters are incorporated into an optical correlator for position, rotation, and intensity invariant recognition of target images. The filters exhibit excellent discrimination because they are designed to contain full information about the target image. Numerical simulations and experiments demonstrate detection of targets that are corrupted with random noise (SNR≅0.5) and also partially obscured by other objects. The complex valued filters are encoded in a computer generated hologram and fabricated directly using an electron-beam system. Experimental results using a liquid crystal spatial light modulator for real-time input show excellent agreement with analytical and numerical computations

  1. Study of Formulation Variables Influencing Polymeric Microparticles by Experimental Design

    Directory of Open Access Journals (Sweden)

    Jitendra B. Naik

    2014-04-01

    Full Text Available The objective of this study was to prepare diclofenac sodium loaded microparticles by the single emulsion [oil-in-water (o/w)] solvent evaporation method. A 2² experimental design methodology was used to evaluate the effect of two formulation variables on microsphere properties using the Design-Expert® software; the microparticles were evaluated for particle size, morphology, encapsulation efficiency and in vitro drug release. The graphical and mathematical analysis of the design showed that the independent variables had a significant effect on the encapsulation efficiency and drug release of the microparticles. The low magnitudes of error and significant values of R² prove the high prognostic ability of the design. The microspheres showed high encapsulation efficiency with an increase in the amount of polymer and a decrease in the amount of PVA in the formulation. The particles were found to be spherical with smooth surfaces. Prolonged drug release and enhanced encapsulation efficiency of polymeric microparticles can be successfully obtained with the application of an experimental design technique.
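
    A minimal sketch of how main effects are read off a 2² factorial such as the one used here; the coded factors and response values are hypothetical placeholders, not the study's data (the sign pattern simply mirrors the abstract's finding that efficiency rises with polymer and falls with PVA).

```python
import numpy as np

# Coded 2x2 factorial: X1 = polymer amount, X2 = PVA amount (illustrative values).
X1 = np.array([-1, 1, -1, 1], dtype=float)
X2 = np.array([-1, -1, 1, 1], dtype=float)
ee = np.array([62.0, 81.0, 55.0, 74.0])   # hypothetical encapsulation efficiencies (%)

# Main effect: average response change when a factor moves from -1 to +1.
effect_polymer = ee[X1 == 1].mean() - ee[X1 == -1].mean()   # +19.0
effect_pva = ee[X2 == 1].mean() - ee[X2 == -1].mean()       # -7.0
print(f"polymer effect: {effect_polymer:+.1f}%, PVA effect: {effect_pva:+.1f}%")
```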

  2. Design of JT-60SA magnets and associated experimental validations

    International Nuclear Information System (INIS)

    Zani, L.; Barabaschi, P.; Peyrot, M.; Meunier, L.; Tomarchio, V.; Duglue, D.; Decool, P.; Torre, A.; Marechal, J.L.; Della Corte, A.; Di Zenobio, A.; Muzzi, L.; Cucchiaro, A.; Turtu, S.; Ishida, S.; Yoshida, K.; Tsuchiya, K.; Kizu, K.; Murakami, H.

    2011-01-01

    In the framework of the JT-60SA project, aiming at upgrading the present JT-60U tokamak toward a fully superconducting configuration, the detailed design phase led to the adoption of a brand-new design for the three main magnet systems. Europe (EU) is expected to provide Japan (JA) with the entire toroidal field (TF) magnet system, while JA will provide both the equilibrium field (EF) and central solenoid (CS) systems. All magnet designs were optimized through the past years and entered in parallel into extensive experimentally based phases of concept validation, which came to maturation in the years 2009 and 2010. For this, all magnet systems were investigated by means of dedicated samples, e.g. conductor and joint samples designed, manufactured and tested at full scale in ad hoc facilities either in the EU or in JA. The present paper, after an overall description of the magnet system layouts, presents in a general approach the different experimental campaigns dedicated to qualification of the design and manufacturing processes of the coils, conductors and electrical joints. The main results with the associated analyses are shown and the main conclusions presented, especially regarding their contribution to consolidating the launch of magnet mass production. The status of the respective manufacturing stages in the EU and in JA is also reviewed. (authors)

  3. Design, construction and testing of a radon experimental chamber

    International Nuclear Information System (INIS)

    Chavez B, A.; Balcazar G, M.

    1991-10-01

    To carry out studies of radon behavior under controlled and stable conditions, a system was designed and constructed that consists of two parts: a container of uranium-rich mineral and a radon experimental chamber, joined by a step valve. The container holds approximately 800 g of uranium mineral with a grade of 0.28%; the radon gas emanated by the mineral is tightly contained. When the valve is opened, the radon gas diffuses into the experimental chamber, which has three access ports that allow different types of detectors to be installed. The versatility of the system is exemplified with two experiments: 1. With the radon experimental chamber and an associated spectroscopic system, radon and two of its decay products are identified. 2. The design of the system allows the mineral container to be coupled to other experimental geometries; to demonstrate this, a new automatic exchanger system for passive radon detectors was coupled and tested. The results of the new automatic exchanger system, obtained when radon was allowed to flow freely between the container and the exchanger through a plastic membrane of 15 m, are shown. (Author)

  4. Application of Plackett-Burman experimental design in the development of muffin using adlay flour

    Science.gov (United States)

    Valmorida, J. S.; Castillo-Israel, K. A. T.

    2018-01-01

    The Plackett-Burman experimental design was applied to identify significant formulation and process variables in the development of a muffin using adlay flour. Of the seven screened variables, the level of sugar, the level of butter and the baking temperature had the most significant influence on the product model in terms of physicochemical properties and sensory acceptability. The results further demonstrate the effectiveness of the Plackett-Burman design in choosing the best adlay variety for muffin production. Hence, the statistical method used in the study permits an efficient selection of the important variables in the development of a muffin from adlay, which can then be optimized using response surface methodology.
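
    For illustration, an 8-run Plackett-Burman-type screening design for up to seven two-level factors can be built from a Hadamard matrix, as sketched below; the placeholder responses and effect contrasts are assumptions for demonstration only.

```python
import numpy as np
from scipy.linalg import hadamard

# 8-run screening design for up to 7 two-level factors:
# drop the constant column of an order-8 Hadamard matrix.
H = hadamard(8)
design = H[:, 1:]          # 8 runs x 7 factor columns, entries +1/-1
print(design)

# Main-effect estimate for factor j: contrast of its +1 runs vs its -1 runs.
y = np.random.default_rng(1).normal(size=8)   # placeholder responses
effects = design.T @ y / 4                     # each column has four +1s and four -1s
print(np.round(effects, 2))
```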

  5. Experimental design and estimation of growth rate distributions in size-structured shrimp populations

    International Nuclear Information System (INIS)

    Banks, H T; Davis, Jimena L; Ernstberger, Stacey L; Hu, Shuhua; Artimovich, Elena; Dhar, Arun K

    2009-01-01

    We discuss results for inverse problems involving the estimation of probability distributions from aggregate data on growth in populations. We begin with a mathematical model describing variability in the early growth process of size-structured shrimp populations and discuss a computational methodology for the design of experiments to validate the model and estimate the growth-rate distributions in shrimp populations. Parameter-estimation findings using experimental data from experiments so designed, for shrimp populations cultivated at Advanced BioNutrition Corporation, are presented, illustrating the usefulness of mathematical and statistical modeling in understanding the uncertainty in the growth dynamics of such populations

  6. Trends in study design and the statistical methods employed in a leading general medicine journal.

    Science.gov (United States)

    Gosho, M; Sato, Y; Nagashima, K; Takahashi, S

    2018-02-01

    Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. The study of the comprehensive details and current trends of study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information about evidence-based medicine. Our purpose was to illustrate the study designs and statistical methods employed in recent medical literature. This was an extension of the study of Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Owing to the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (e.g., the Kaplan-Meier estimator and the Cox regression model) were most frequently applied, the Gray test and the Fine-Gray proportional hazards model for handling competing risks were sometimes used for more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were more frequently used than single imputation methods. Single imputation methods are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential design with interim analyses was one of the standard designs, and novel designs, such as adaptive dose selection and sample size re-estimation, were sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analysis in the light of the information found in some publications. Use of adaptive designs with interim analyses is increasing
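
    As a reference point for the survival methods mentioned above, the following is a minimal Kaplan-Meier estimator; the event times and censoring flags are made up for demonstration.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate; event=1 for observed, 0 for censored."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    surv, s = [], 1.0
    for t in np.unique(time[event == 1]):
        at_risk = np.sum(time >= t)                    # subjects still under observation
        deaths = np.sum((time == t) & (event == 1))    # events at this time
        s *= 1.0 - deaths / at_risk
        surv.append((t, s))
    return surv

print(kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1]))
```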

  7. A unified approach to linking experimental, statistical and computational analysis of spike train data.

    Directory of Open Access Journals (Sweden)

    Liang Meng

    Full Text Available A fundamental issue in neuroscience is how to identify the multiple biophysical mechanisms through which neurons generate observed patterns of spiking activity. In previous work, we proposed a method for linking observed patterns of spiking activity to specific biophysical mechanisms based on a state space modeling framework and a sequential Monte Carlo, or particle filter, estimation algorithm. We have shown, in simulation, that this approach is able to identify a space of simple biophysical models that were consistent with observed spiking data (and included the model that generated the data), but have yet to demonstrate the application of the method to identify realistic currents from real spike train data. Here, we apply the particle filter to spiking data recorded from rat layer V cortical neurons, and correctly identify the dynamics of a slow, intrinsic current. The underlying intrinsic current is successfully identified in four distinct neurons, even though the cells exhibit two distinct classes of spiking activity: regular spiking and bursting. This approach, linking statistical, computational, and experimental neuroscience, provides an effective technique to constrain detailed biophysical models to specific mechanisms consistent with observed spike train data.
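
    The sketch below shows the bootstrap particle filter mechanics (propagate, weight, resample) on a toy linear-Gaussian state-space model; the paper's actual biophysical conductance models are far richer, so the model and parameters here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy state-space model (NOT the paper's conductance model): latent AR(1) state,
# noisy observations; a bootstrap particle filter estimates the hidden state.
T, N = 100, 500
a, q, r = 0.95, 0.1, 0.5
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()
    y[t] = x[t] + np.sqrt(r) * rng.standard_normal()

particles = rng.standard_normal(N)
estimates = []
for t in range(1, T):
    particles = a * particles + np.sqrt(q) * rng.standard_normal(N)  # propagate
    logw = -0.5 * (y[t] - particles) ** 2 / r                        # weight by likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    estimates.append(np.sum(w * particles))                          # weighted state estimate
    particles = rng.choice(particles, size=N, p=w)                   # resample

print("RMSE:", np.sqrt(np.mean((np.array(estimates) - x[1:]) ** 2)))
```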

  8. Derivation of stochastic differential equations for scrape-off layer plasma fluctuations from experimentally measured statistics

    Energy Technology Data Exchange (ETDEWEB)

    Mekkaoui, Abdessamad [IEK-4 Forschungszentrum Juelich 52428 (Germany)

    2013-07-01

    A method to derive stochastic differential equations for intermittent plasma density dynamics in magnetic fusion edge plasma is presented. It uses the measured first four moments (mean, variance, skewness and kurtosis) and the correlation time of turbulence to write a Pearson equation for the probability distribution function of the fluctuations. The Fokker-Planck equation is then used to derive a Langevin equation for the plasma density fluctuations. Theoretical expectations are used as constraints to fix the nonlinearity structure of the stochastic differential equation. In particular, when quadratically nonlinear dynamics is assumed, it is shown that the plasma density is driven by a multiplicative Wiener process and evolves on the turbulence correlation time scale, while the linear growth is quadratically damped by the fluctuation level. Strong criteria for statistical discrimination of experimental time series are proposed as an alternative to the kurtosis-skewness scaling. This scaling is broadly used in the contemporary literature to characterize edge turbulence, but it is inappropriate because a large family of distributions could share it. The strong criteria allow us to focus on the relevant candidate distributions and to approach the nonlinear structure of the edge turbulence model.
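
    To make the quadratically damped, multiplicatively driven dynamics concrete, here is an Euler-Maruyama integration of a Langevin equation of that general form, with the first four sample moments computed from the trajectory; the coefficients are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative Langevin model with quadratic damping and multiplicative noise,
# dn = (gamma*n - beta*n**2) dt + sigma*n dW, integrated by Euler-Maruyama.
gamma, beta, sigma = 1.0, 0.5, 0.6
dt, steps = 1e-3, 200_000
n = np.empty(steps); n[0] = 1.0
for t in range(steps - 1):
    dW = np.sqrt(dt) * rng.standard_normal()
    drift = (gamma * n[t] - beta * n[t] ** 2) * dt
    n[t + 1] = max(n[t] + drift + sigma * n[t] * dW, 1e-9)  # keep density positive

sample = n[steps // 2:]                      # discard the transient
m, v = sample.mean(), sample.var()
skew = ((sample - m) ** 3).mean() / v ** 1.5
kurt = ((sample - m) ** 4).mean() / v ** 2
print(f"mean={m:.2f} var={v:.2f} skewness={skew:.2f} kurtosis={kurt:.2f}")
```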

  9. COMPARISON OF EXPERIMENTAL-DESIGNS COMBINING PROCESS AND MIXTURE VARIABLES .2. DESIGN EVALUATION ON MEASURED DATA

    NARCIS (Netherlands)

    DUINEVELD, C. A. A.; Smilde, A. K.; Doornbos, D. A.

    1993-01-01

    The construction of a small experimental design for a combination of process and mixture variables is a problem which has not yet been completely solved. In a previous paper we evaluated some designs with theoretical measures. This second paper evaluates the capabilities of the best of these

  11. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki; Long, Quan; Scavino, Marco; Tempone, Raul

    2015-01-01

    Experimental design is very important since experiments are often resource-intensive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of this nested high-dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among the different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as concentration of measure, than the Laplace method requires. We test our Multilevel Monte Carlo technique on a numerical example on the design of sensor deployment for a Darcy flow problem governed by the one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.
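
    The direct double-loop Monte Carlo estimator that the authors use as a baseline can be sketched compactly for a toy linear-Gaussian model; the model, prior, and sample sizes below are assumptions for illustration, and the paper's Multilevel Monte Carlo accelerator is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def eig_dlmc(design, n_outer=2000, n_inner=2000, sigma=0.5):
    """Direct double-loop Monte Carlo estimate of expected information gain
    for the toy model y = design * theta + noise, with theta ~ N(0, 1)."""
    theta = rng.standard_normal(n_outer)
    y = design * theta + sigma * rng.standard_normal(n_outer)
    log_lik = -0.5 * ((y - design * theta) / sigma) ** 2   # log p(y|theta), up to a constant
    theta_in = rng.standard_normal(n_inner)
    # Evidence p(y): average the likelihood over fresh prior draws (inner loop).
    inner = -0.5 * ((y[:, None] - design * theta_in[None, :]) / sigma) ** 2
    log_ev = np.log(np.exp(inner).mean(axis=1))            # same constant cancels here
    return np.mean(log_lik - log_ev)

# A more sensitive design (larger coefficient) should yield a larger gain.
for d in (0.5, 1.0, 2.0):
    print(d, round(eig_dlmc(d), 3))
```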

  13. Mechanical design of the small-scale experimental ADS: MYRRHA

    Energy Technology Data Exchange (ETDEWEB)

    Maes, Dirk [SCKCEN, Reactor Physics and MYRRHA Department, Boeretang 200, B-2400 Mol (Belgium)

    2006-10-15

    Since 1998, SCK·CEN, in partnership with IBA s.a. and many European research laboratories, has been designing a multipurpose Accelerator Driven System (ADS) - MYRRHA - and conducting an associated R and D support programme. MYRRHA aims to serve as a basis for the European experimental ADS, providing protons and neutrons for various R and D applications. Besides an overall configuration of the MYRRHA reactor internals, the description in this paper is limited to the mechanical design of the main components of the Primary System and Associated Equipment (vessel and cover, diaphragm, spallation loop, sub-critical core, primary cooling system, emergency cooling system, in-vessel fuel storage and fuel transfer machine), and the conceptual design of the robotics for In-Service Inspection and Repair (ISIR), together with the remote handling for operation and maintenance (O and M). (author)

  14. A Modified Jonckheere Test Statistic for Ordered Alternatives in Repeated Measures Design

    Directory of Open Access Journals (Sweden)

    Hatice Tül Kübra AKDUR

    2016-09-01

    Full Text Available In this article, a new test based on the Jonckheere test [1] for randomized blocks with dependent observations within blocks is presented. A weighted sum of the block statistics, rather than the unweighted sum proposed by Jonckheere, is used. For Jonckheere-type statistics, the main assumption is independence of observations within a block; in a repeated measures design, this assumption is violated. The weighted Jonckheere-type statistic is therefore used for dependent observations with different variance-covariance structures, under ordered alternative hypotheses within each block of the design. The proposed statistic is compared with the existing Jonckheere-based test in terms of type I error rates by Monte Carlo simulation. For strong correlations, the circular bootstrap version of the proposed Jonckheere test provides lower type I error rates.
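
    For reference, the classical (independent-observations) Jonckheere-Terpstra statistic that the proposed weighted test builds on can be computed as below; the dose-group data are invented for illustration.

```python
import numpy as np
from itertools import combinations

def jonckheere_statistic(groups):
    """Jonckheere-Terpstra statistic for ordered alternatives: counts pairs
    concordant with the hypothesized group ordering (ties contribute 1/2).
    Assumes independent observations, i.e. the classical setting."""
    J = 0.0
    for (i, gi), (j, gj) in combinations(enumerate(groups), 2):
        gi, gj = np.asarray(gi), np.asarray(gj)
        J += np.sum(gi[:, None] < gj[None, :]) + 0.5 * np.sum(gi[:, None] == gj[None, :])
    return J

# Three dose groups with a hypothesized increasing response (fully concordant -> 27).
print(jonckheere_statistic([[10, 12, 11], [13, 15, 14], [16, 18, 17]]))
```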

  15. Theoretical and experimental investigations on fracture statistics carried out at the IDIEM (Chile

    Directory of Open Access Journals (Sweden)

    Kittl, P.

    1986-09-01

    Full Text Available The high demands placed on some structures, owing to their responsibility or their high cost, have given rise to a new discipline that can be called Reliability Engineering, whose main aim is to determine the probability with which a device meets a requirement. This work contains a brief description of the topics studied at IDIEM in recent years within this field. Among them is Fracture Statistics, which studies the probability that a structure undergoes plastic deformation, and the probability of occurrence of the causes, taking materials fatigue into account. It also includes a theoretical development of fracture statistics, describing the specific-risk-of-fracture functions by means of integral equations, and the determination of their parameters and uncertainties when the functions have a known analytical form. Experimental research ranges from the most brittle bodies, such as glass, through nearly brittle ones such as cement paste, to materials that admit plastic deformation, such as certain welds, extending the study to fibre composites and natural materials such as granite.
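
    A common concrete instance of the specific-risk formulation is the two-parameter Weibull model; the sketch below fits its modulus and characteristic strength by the usual median-rank linearization, with made-up strength data (the IDIEM work treats more general integral-equation forms).

```python
import numpy as np

# Two-parameter Weibull fit to fracture strengths (illustrative values, MPa),
# via the linearization ln(-ln(1-F)) = m*ln(s) - m*ln(s0).
strengths = np.sort(np.array([52., 61., 66., 70., 74., 79., 83., 90., 97., 110.]))
n = strengths.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)    # median-rank plotting positions

x = np.log(strengths)
y = np.log(-np.log(1.0 - F))
m, c = np.polyfit(x, y, 1)                      # slope = Weibull modulus
s0 = np.exp(-c / m)                             # characteristic strength
print(f"Weibull modulus m = {m:.2f}, characteristic strength = {s0:.1f} MPa")
```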


  16. Content-based VLE designs improve learning efficiency in constructivist statistics education.

    Science.gov (United States)

    Wessa, Patrick; De Rycker, Antoon; Holliday, Ian Edward

    2011-01-01

    We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation.

  17. Content-based VLE designs improve learning efficiency in constructivist statistics education.

    Directory of Open Access Journals (Sweden)

    Patrick Wessa

    Full Text Available BACKGROUND: We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. OBJECTIVES: The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. METHODS: Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. RESULTS: The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation.

  18. Content-Based VLE Designs Improve Learning Efficiency in Constructivist Statistics Education

    Science.gov (United States)

    Wessa, Patrick; De Rycker, Antoon; Holliday, Ian Edward

    2011-01-01

    Background We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. Objectives The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Methods Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. Results The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation.

  19. Optimum design of automobile seat using statistical design support system; Tokeiteki sekkei shien system no jidoshayo seat eno tekiyo

    Energy Technology Data Exchange (ETDEWEB)

    Kashiwamura, T [NHK Spring Co. Ltd., Yokohama (Japan); Shiratori, M; Yu, Q; Koda, I [Yokohama National University, Yokohama (Japan)

    1997-10-01

    The authors proposed a new practical optimum design method called the statistical design support system, which consists of five steps: effectivity analysis, reanalysis, evaluation of dispersion, optimization, and evaluation of structural reliability. In this study, the authors applied the system to the analysis and optimum design of an automobile seat frame subjected to crushing. The study showed that the method can be applied to complex nonlinear problems involving large deformation and material nonlinearity, as well as impact. It was shown that the optimum design of the seat frame could be obtained easily using the present system. 6 refs., 5 figs., 5 tabs.

  20. LOGICAL AND EXPERIMENTAL DESIGN FOR PHENOL DEGRADATION USING IMMOBILIZED ACINETOBACTER SP. CULTURE

    Directory of Open Access Journals (Sweden)

    Amro Abd Al Fattah Amara

    2010-05-01

    Full Text Available Phenol degradation proceeds through a series of enzymatic reactions and is affected by different components of the microbial metabolic flux. Optimization strategies such as mutagenesis can lead to successful optimization but can also lead to the loss of important microbial features or to the release of new virulence or other unexpected characters. The Plackett-Burman design closes much of the gap between optimization, safety, time, cost, man-hours, and the complexity of the metabolic flux. Using the Plackett-Burman experimental design makes it possible to map the factors affecting the optimization process through a good understanding of the organism's nutrient requirements and the best environmental conditions. In this study, nine variables, including pH (X1), temperature (X2), glucose (X3), yeast extract (X4), meat extract (X5), NH4NO3 (X6), K-salt (X7), Mg-salt (X8) and trace elements (X9), were optimized during phenol degradation by Acinetobacter sp. using the Plackett-Burman design method. The Plackett-Burman design included 16 experiments, with each variable used at two levels, low [-1] and high [+1]. According to the Plackett-Burman design experiments, the maximum degradation rate was 31.25 mg/l/h. Logical and statistical analysis of the data led to the selection of pH, temperature and meat extract as the three factors affecting the phenol degradation rate, and these three variables were then used in a Box-Behnken experimental design for further optimization. Meat extract, which was not statistically recommended for optimization, was retained because it can substitute for trace elements, which were statistically significant. Glucose, although statistically significant, was not included because it had a negative effect and gave the best result at 0 g/l; it was therefore completely omitted from the media. pH, temperature and meat extract were used in fifteen experiments, each at three levels, -1, 0 and +1, according to the Box-Behnken design. The Microsoft Excel 2002 solver tool was used to optimize the model created from the Box-Behnken design.
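
    For illustration, the coded runs of a Box-Behnken design like the three-factor one used here can be generated as follows; the number of center points is an arbitrary choice.

```python
import itertools
import numpy as np

def box_behnken(k, n_center=3):
    """Coded Box-Behnken design: for each factor pair, a 2^2 factorial with the
    remaining factors held at their mid level (0), plus center runs."""
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product([-1.0, 1.0], repeat=2):
            row = [0.0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0.0] * k] * n_center
    return np.array(runs)

design = box_behnken(3)       # 15 runs: 12 edge points + 3 center points
print(design)
```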

  1. Designing Solutions by a Student Centred Approach: Integration of Chemical Process Simulation with Statistical Tools to Improve Distillation Systems

    Directory of Open Access Journals (Sweden)

    Isabel M. Joao

    2017-09-01

    Full Text Available Projects thematically focused on simulation and statistical techniques for designing and optimizing chemical processes can be helpful in chemical engineering education to meet the needs of engineers. We argue for the relevance of such projects in supporting a student-centred approach and boosting higher-order thinking skills. This paper addresses the use of Aspen HYSYS by Portuguese chemical engineering master students to model distillation systems, together with statistical experimental design techniques to optimize the systems, highlighting the value of applying problem-specific knowledge, simulation tools and sound statistical techniques. The paper summarizes the work developed by the students to model steady-state and dynamic processes and to optimize the distillation systems, emphasizing the benefits of the simulation tools and statistical techniques in helping the students learn how to learn. Students strengthened their domain-specific knowledge and became motivated to rethink and improve chemical processes in their future chemical engineering profession. We discuss the main advantages of the methodology from the students' and teachers' perspectives

  2. Conceptual design of neutron diagnostic systems for fusion experimental reactor

    International Nuclear Information System (INIS)

    Iguchi, T.; Kaneko, J.; Nakazawa, M.

    1994-01-01

    Neutron measurement in fusion experimental reactors is very important for burning plasma diagnostics and control, monitoring of irradiation effects on device components, neutron source characterization for in-situ engineering tests, etc. A conceptual design of neutron diagnostic systems for an ITER-like fusion experimental reactor has been made, consisting of a neutron yield monitor, a neutron emission profile monitor and a 14-MeV spectrometer. Each is based on a unique idea to meet the performance required under the full-power conditions assumed for ITER operation. Micro-fission chambers of 235 U (and 238 U) placed at several poloidal angles near the first wall are adopted as a promising neutron yield monitor. A collimated long-counter system using a 235 U fission chamber and graphite neutron moderators is also proposed to improve the calibration accuracy of the absolute neutron yield determination

  3. Spent Fuel Transportation Package Performance Study - Experimental Design Challenges

    International Nuclear Information System (INIS)

    Snyder, A. M.; Murphy, A. J.; Sprung, J. L.; Ammerman, D. J.; Lopez, C.

    2003-01-01

    Numerous studies of spent nuclear fuel transportation accident risks have been performed since the late seventies that considered shipping container design and performance. Based in part on these studies, NRC has concluded that the level of protection provided by spent nuclear fuel transportation package designs under accident conditions is adequate. [1] Furthermore, actual spent nuclear fuel transport experience showcases a safety record that is exceptional and unparalleled when compared to other hazardous materials shipments. There has never been a known or suspected release of the radioactive contents from an NRC-certified spent nuclear fuel cask as a result of a transportation accident. In 1999 the United States Nuclear Regulatory Commission (NRC) initiated a study, the Package Performance Study, to demonstrate the performance of spent fuel and spent fuel packages during severe transportation accidents. NRC is not studying or testing its current regulations, as the rigorous regulatory accident conditions specified in 10 CFR Part 71 are adequate to ensure safe packaging and use. As part of this study, NRC currently plans on using detailed modeling followed by experimental testing to increase public confidence in the safety of spent nuclear fuel shipments. One of the aspects of this confirmatory research study is the commitment to solicit and consider public comment during the scoping phase and experimental design planning phase of this research

  4. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    Science.gov (United States)

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.

  5. Conceptual design study of fusion experimental reactor (FER)

    International Nuclear Information System (INIS)

    1986-11-01

    Since 1980 a design study has been conducted at JAERI for the Fusion Experimental Reactor (FER), which has been proposed as the machine to follow JT-60 in the Japanese long-term program of fusion reactor development. During the two years from 1984 to 1985, the FER concept was reviewed and redesigned. This report summarizes the results obtained in the review and redesign activities in 1984 and 1985. In the first year the FER concept was discussed again and its framework was re-established. According to the new framework, the major reactor components of FER were designed. In the second year the whole plant system design, including the plant layout plan, was conducted, as well as more detailed design analysis of the reactor components. The newly established framework for the FER design is as follows: 1) Plasma: self-ignition. 2) Operation scenario: quasi-steady-state operation with long burn pulse. 3) Neutron fluence on the first wall: 0.3 MWY/m². 4) Blanket: non-tritium-breeding blanket with test modules for breeding blanket development. 5) Magnets: superconducting magnets. (author)

  6. Acting like a physicist: Student approach study to experimental design

    Science.gov (United States)

    Karelina, Anna; Etkina, Eugenia

    2007-12-01

    National studies of science education have unanimously concluded that preparing our students for the demands of the 21st century workplace is one of the major goals. This paper describes a study of student activities in introductory college physics labs, which were designed to help students acquire abilities that are valuable in the workplace. In these labs [called Investigative Science Learning Environment (ISLE) labs], students design their own experiments. Our previous studies have shown that students in these labs acquire scientific abilities such as the ability to design an experiment to solve a problem, the ability to collect and analyze data, the ability to evaluate assumptions and uncertainties, and the ability to communicate. These studies mostly concentrated on analyzing students’ writing, evaluated by specially designed scientific ability rubrics. Recently, we started to study whether the ISLE labs make students not only write like scientists but also engage in discussions and act like scientists while doing the labs. For example, do students plan an experiment, validate assumptions, evaluate results, and revise the experiment if necessary? A brief report of some of our findings that came from monitoring students’ activity during ISLE and nondesign labs was presented in the Physics Education Research Conference Proceedings. We found differences in student behavior and discussions that indicated that ISLE labs do in fact encourage a scientistlike approach to experimental design and promote high-quality discussions. This paper presents a full description of the study.

  7. Conceptual design study of fusion experimental reactor (FY86 FER)

    International Nuclear Information System (INIS)

    Kobayashi, Takeshi; Yamada, Masao; Mizoguchi, Tadanori

    1987-09-01

    This report describes the results of the reactor configuration/structure design for the Fusion Experimental Reactor (FER) performed in FY 1986. The design was intended to meet the physical and engineering mission of the next-step device decided by the subcommittee on the next-step device of the Nuclear Fusion Council. The objective of the FY 1986 design study was to advance and optimize the previous year's design concept, since the recommendation of the subcommittee was basically the same as that design philosophy. Six candidate reactor configurations, corresponding to options C ∼ D presented by the subcommittee, were extensively examined. Consequently, the ACS reactor (Advanced Option-C with Single Null Divertor) was selected as the reference configuration from the viewpoints of technical risk and cost performance. Regarding the reactor structure, the following items were investigated intensively: minimization of the reactor size, protection of the first wall against plasma disruption, simplification of the shield structure, and a reactor configuration enabling optimum arrangement of the poloidal field coils. (author)

  9. Improving the analysis of designed studies by combining statistical modelling with study design information

    NARCIS (Netherlands)

    Thissen, U.; Wopereis, S.; Berg, S.A.A. van den; Bobeldijk, I.; Kleemann, R.; Kooistra, T.; Dijk, K.W. van; Ommen, B. van; Smilde, A.K.

    2009-01-01

    Background: In the fields of life sciences, so-called designed studies are used for studying complex biological systems. The data derived from these studies comply with a study design aimed at generating relevant information while diminishing unwanted variation (noise). Knowledge about the study

  10. Experimental application of design principles in corrosion research

    International Nuclear Information System (INIS)

    Smyrl, W.H.; Pohlman, S.L.

    1977-01-01

    Experimental design criteria for corrosion investigations are based on established principles for systems that have uniform, or nearly uniform, corrosive attack. Scale-up or scale-down may be accomplished by proper use of dimensionless groups that measure the relative importance of interfacial kinetics, solution conductivity, and mass transfer. These principles have been applied to different fields of corrosion, including materials selection, testing and protection, and to a specific corrosion problem involving attack of a substrate through holes in a protective overplate

  11. Development and optimization of fast dissolving oro-dispersible films of granisetron HCl using Box–Behnken statistical design

    Directory of Open Access Journals (Sweden)

    Hema Chaudhary

    2013-12-01

    Full Text Available The aim was to develop and optimize fast dissolving oro-dispersible films of granisetron hydrochloride (GH) by a two-factor, three-level Box–Behnken design; the two independent variables, X1 (polymer) and X2 (plasticizer), were selected on the basis of preliminary studies carried out before the experimental design was implemented. A second-order polynomial equation was used to construct contour plots for predicting the responses of the dependent variables: drug release (Y1), disintegration time (Y2) and tensile strength (Y3). Response surface plots were drawn, and the statistical validity of the polynomials was established to find the composition of the optimized formulation, which was evaluated using a Franz-type diffusion cell. The design establishes the role of the derived polynomial equations and contour plots in predicting the values of the dependent variables for preparation and optimization.
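
    A sketch of the second-order polynomial fit underlying such response surfaces, using ordinary least squares on coded three-level runs; the run layout and response values are placeholders, not the study's data.

```python
import numpy as np

# Fit Y = b0 + b1*X1 + b2*X2 + b12*X1*X2 + b11*X1^2 + b22*X2^2 on coded runs
# (4 corner, 4 axial, 3 center points; responses are illustrative placeholders).
X1 = np.array([-1, 1, -1, 1, -1, 1, 0, 0, 0, 0, 0], dtype=float)
X2 = np.array([-1, -1, 1, 1, 0, 0, -1, 1, 0, 0, 0], dtype=float)
Y = np.array([68, 75, 80, 86, 70, 78, 73, 84, 82, 81, 83], dtype=float)

A = np.column_stack([np.ones_like(X1), X1, X2, X1 * X2, X1 ** 2, X2 ** 2])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

def predict(x1, x2):
    """Evaluate the fitted quadratic surface at a coded point (x1, x2)."""
    return coef @ np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

print(np.round(coef, 2), round(float(predict(0.5, 0.5)), 1))
```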

  12. Design considerations for ITER [International Thermonuclear Experimental Reactor] magnet systems

    International Nuclear Information System (INIS)

    Henning, C.D.; Miller, J.R.

    1988-01-01

    The International Thermonuclear Experimental Reactor (ITER) is now completing a definition phase at the beginning of a three-year design effort. Preliminary parameters for the superconducting magnet system have been established to guide further and more detailed design work. Radiation tolerance of the superconductors and insulators has been of prime importance, since it sets requirements for the neutron-shield dimension and sensitively influences reactor size. The highest levels of mechanical stress in the structure appear in the cases of the inboard legs of the toroidal-field (TF) coils. The cases of the poloidal-field (PF) coils must be made thin or segmented to minimize eddy-current heating during inductive plasma operation. As a result, the winding packs of both the TF and PF coils include significant fractions of steel. The TF winding pack provides support against in-plane separating loads but offers little support against out-of-plane loads, unless shear bonding of the conductors can be maintained. The removal of heat due to nuclear and ac loads has not been a fundamental limit to the design, but certainly has non-negligible economic consequences. We present here preliminary ITER magnet system design parameters taken from trade studies, designs, and analyses performed by the Home Teams of the four ITER participants, by the ITER Magnet Design Unit in Garching, and by other participants at workshops organized by the Magnet Design Unit. The work presented here reflects the efforts of many, but the responsibility for the opinions expressed is the authors'. 4 refs., 3 figs., 4 tabs

  13. An experimental method for designing the municipal solid waste biodrying

    International Nuclear Information System (INIS)

    Rada, E.C.; Politecnico Univ., Bucarest; Franzinelli, A.; Taiss, M.; Ragazzi, M.; Panaitescu, V.; Apostol, T.

    2005-01-01

    In the management of Municipal Solid Waste (MSW), in agreement with the new European directives concerning the valorization of materials and energy recovery, a recent approach based on a one-stream Biological Mechanical Treatment (BMT) is spreading as an alternative to the traditional two-stream approach. The bio-mechanical treatment of MSW is an increasingly common option, either as a pre-treatment before landfilling or as a pre-treatment before combustion. In the present paper an experimental method for designing Municipal Solid Waste bio-drying is proposed; that is, the paper deals with the energy-recovery option. The aim is to provide design criteria for bio-drying plants independent of the patents available in the sector.

  14. Conceptual design study of fusion experimental reactor (FY86 FER)

    International Nuclear Information System (INIS)

    Seki, Yasushi; Iida, Hiromasa; Honda, Tsutomu.

    1987-08-01

    This report describes the safety study for FER (Fusion Experimental Reactor), which has been designed as a next-step machine to the JT-60. Although the final purpose of this study is to form an image of the design-basis accident and the maximum credible accident, and to assess their risk or probability for the FER plant system, the emphasis of this year's study is placed on the fuel-gas circulation system, where the tritium inventory is largest. This report consists of two chapters. The first chapter summarizes the FER system and describes an FMEA (Failure Mode and Effects Analysis) and the related accident progression sequences for the FER plant system as a whole. The second chapter focuses on the fuel-gas circulation system, including the purification, isotope separation, and storage systems. Here, risk probability is assessed by the probabilistic risk analysis (PRA) procedure based on FMEA, ETA, and FTA. (author)

  15. Superconducting coil design for a tokamak experimental power reactor

    International Nuclear Information System (INIS)

    Turner, L.R.; Wang, S.T.; Smelser, P.

    1977-01-01

    Superconducting toroidal-field (TF) and poloidal-field (PF) coils have been designed for the proposed Argonne National Laboratory experimental power reactor (EPR). Features of the design include: (1) Peak field of 8 T at 4.2 K or 10 T at 3.0 K. (2) Constant-tension shape for the TF coils, corrected for the finite number (16) of coils. (3) Analysis of errors in coil alignment. (4) Comparison of safety aspects of series-connected and parallel-connected coils. (5) A 60 kA sheet conductor of NbTi with copper stabilizer and stainless steel for support. (6) Superconducting PF coils outside the TF coils. (7) The TF coils shielded from pulsed fields by high-purity aluminum.

  16. Tritium system design studies of fusion experimental breeder

    International Nuclear Information System (INIS)

    Deng Baiquan; Huang Jinhua

    2003-01-01

    A summary of the tritium system design studies for the engineering outline design of a fusion experimental breeder (FEB-E) is presented. This paper is divided into three sections. In the first section, the geometry, loading features, and tritium concentrations in the liquid lithium of the tritium-breeding zones of the blanket are described. In the second section, the tritium flow chart corresponding to the tritium fuel cycle system is constructed, and the inventories in ten subsystems are calculated using the SWITRIM code. Results show that the initial tritium storage necessary to start up FEB-E at a fusion power of 143 MW is about 319 g. In the final section, the tritium leakage issues under different operating circumstances are analyzed. It was found that the main potential source of tritium leakage is the exhaust gas of the divertor system. It is important to raise the tritium burnup fraction and reduce the tritium throughput. (authors)

  17. Entropy-Based Experimental Design for Optimal Model Discrimination in the Geosciences

    Directory of Open Access Journals (Sweden)

    Wolfgang Nowak

    2016-11-01

    Full Text Available Choosing between competing models lies at the heart of scientific work, and is a frequent motivation for experimentation. Optimal experimental design (OD) methods maximize the benefit of experiments towards a specified goal. We advance and demonstrate an OD approach to maximize the information gained towards model selection. We make use of so-called model choice indicators, which are random variables with an expected value equal to Bayesian model weights. Their uncertainty can be measured with Shannon entropy. Since the experimental data are still random variables in the planning phase of an experiment, we use mutual information (the expected reduction in Shannon entropy) to quantify the information gained from a proposed experimental design. For implementation, we use the Preposterior Data Impact Assessor framework (PreDIA), because it is free of the lower-order approximations of mutual information often found in the geosciences. In comparison to other studies in statistics, our framework is not restricted to sequential design or to discrete-valued data, and it can handle measurement errors. As an application example, we optimize an experiment about the transport of contaminants in clay, featuring the problem of choosing between competing isotherms to describe sorption. We compare the results of optimizing towards maximum model discrimination with an alternative OD approach that minimizes the overall predictive uncertainty under model choice uncertainty.
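
    The central quantity here, the mutual information between a model-choice indicator and the still-to-be-collected data, can be estimated by brute-force Monte Carlo. The sketch below uses two hypothetical competing sorption isotherms, an assumed noise level, and assumed parameter priors; it illustrates the expected-entropy-reduction idea rather than the PreDIA implementation.

    ```python
    # A minimal sketch (an assumption-laden toy, not the PreDIA code) of
    # scoring a candidate design c by the expected reduction in the Shannon
    # entropy of the Bayesian model weights.
    import numpy as np

    rng = np.random.default_rng(1)
    sigma = 0.05                      # assumed measurement noise (std dev)

    def model_linear(c, k):           # hypothetical linear isotherm
        return k * c

    def model_freundlich(c, k, n):    # hypothetical Freundlich isotherm
        return k * c**n

    def expected_entropy(c, n_mc=2000, n_param=200):
        """Expected posterior Shannon entropy of the model weights at c."""
        k_lin = rng.uniform(0.5, 1.5, n_param)        # assumed priors
        k_fr = rng.uniform(0.5, 1.5, n_param)
        n_fr = rng.uniform(0.3, 0.9, n_param)
        pred = [model_linear(c, k_lin), model_freundlich(c, k_fr, n_fr)]
        H = 0.0
        for _ in range(n_mc):
            m = rng.integers(2)                       # draw the "true" model
            i = rng.integers(n_param)                 # and its parameters
            y = pred[m][i] + rng.normal(0.0, sigma)   # synthetic observation
            # Marginal likelihood of y under each model (MC over parameters).
            like = [np.mean(np.exp(-0.5 * ((y - p) / sigma) ** 2)) for p in pred]
            w = np.array(like) / sum(like)            # posterior model weights
            H -= np.sum(w * np.log(w + 1e-300))
        return H / n_mc

    # Mutual information = prior entropy (ln 2 for two equally likely models)
    # minus expected posterior entropy; prefer the design with the largest value.
    for c in [0.2, 1.0, 5.0]:
        print(f"design c={c}: expected entropy reduction "
              f"{np.log(2) - expected_entropy(c):.3f} nats")
    ```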

  18. Design study of toroidal magnets for tokamak experimental power reactors

    International Nuclear Information System (INIS)

    Stekly, Z.J.J.; Lucas, E.J.

    1976-12-01

    This report contains the results of a six-month study of superconducting toroidal field coils for a Tokamak Experimental Power Reactor to be built in the late 1980s. The designs are for 8 T and 12 T maximum magnetic field at the superconducting winding. At each field level two main concepts were generated: one in which each of the 16 coils comprising the system has an individual vacuum vessel, and the other in which all the coils are contained in a single vacuum vessel. The coils have a D shape, with openings of 11.25 m x 7.5 m for the 8 T coils and 10.2 m x 6.8 m for the 12 T coils. All the designs utilize a rectangular cabled conductor made from copper-stabilized niobium-titanium composite, which operates at 4.2 K for the 8 T design and at 2.5 K for the 12 T design. Manufacturing procedures, processes, and schedule estimates are also discussed.

  19. Design of Experimental Suspended Footbridge with Deck Made of UHPC

    Directory of Open Access Journals (Sweden)

    Blank Marek

    2016-01-01

    Full Text Available This paper deals with the static and dynamic design of an experimental footbridge for pedestrians and cyclists in the municipality of Lužec nad Vltavou in the Czech Republic. This work aims to familiarize the reader with the calculations carried out and the results obtained, describing the static and dynamic properties of the proposed footbridge. The footbridge is designed as a suspended structure with a prestressed bridge deck consisting of prefabricated UHPC panels and an inverted-V-shaped steel pylon approximately 40 meters high. The deck is anchored by 24 steel hangers in one row on the steel pylon: 17 in the main span and 7 on the other side. The main span is 99.18 meters and the secondary span is 31.9 m. The deck width is 4.5 meters, with a 3.0-meter passing space. The bridge is designed for the possible passage of vehicles weighing up to 3.5 tons. The deck panels are made of reinforced UHPC. At the edge of the bridge on the side of the shorter span, the bridge deck is firmly connected with the abutment; at the other end it rests on a pair of sliding bearings. The utilization of the excellent properties of UHPC allows the design of a very thin and lightweight deck, which could not be achieved with normal concrete.

  20. Sparse linear models: Variational approximate inference and Bayesian experimental design

    International Nuclear Information System (INIS)

    Seeger, Matthias W

    2009-01-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  1. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)

    2009-12-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  2. An Experimental Design of Bypass Magneto-Rheological (MR) damper

    Science.gov (United States)

    Rashid, MM; Aziz, Mohammad Abdul; Raisuddin Khan, Md.

    2017-11-01

    In a magnetorheological (MR) fluid bypass damper, the fluid flows through an external bypass channel, which allows the MR fluid in the channel to be controlled. The bypass MR damper (BMRD) contains a rectangular bypass flow channel, a current-controlled movable piston shaft arrangement, and MR fluid. The static coil case inside the piston head arrangement is wound with a coil. The current-controlled coil case provides a magnetic flux through the BMRD cylinder for controllability. A high-strength alloy steel is used for the piston shaft, which allows magnetic flux propagation throughout the BMRD cylinder. Using these design materials, a bypass MR damper was designed and tested. An excitation current was applied during the experiment to characterize the BMRD's controllability. It is shown that the BMRD with an external flow channel provides a highly controllable damping force under an excitation current. The experimental damping force-displacement characteristics with and without current excitation are compared in this research. The BMRD model is validated by the experimental results at various frequencies and applied excitation currents.

  3. Oak Ridge Tokamak experimental power reactor study reference design

    International Nuclear Information System (INIS)

    Roberts, M.; Bettis, E.S.

    1975-11-01

    A Tokamak EPR Reference Design is presented as a basis for further design study leading to a Conceptual Design. The set of basic plasma parameters selected--minor radius of 2.25 m, major radius of 6.75 m, magnetic field on axis of 4.8 T and plasma current of 7.2 MA--should produce a reactor-grade plasma with a significant neutron flux, even with the great uncertainty in plasma physics scaling from present experience to large sizes. Neutronics and heat transfer calculations coupled with mechanical design and materials considerations were used to develop a blanket and shield capable of operating at high temperature, protecting the surrounding coils, being maintained remotely and, in a few experimental modules, breeding tritium. Nb3Sn and NbTi superconductors are used in the toroidal field coil design. The coil system was developed for a maximum field of 11 T at the winding (to give a field on axis of 4.8 T), and combines multifilamentary superconducting cable with forced flow of supercritical helium enclosed in a steel conduit. The structural system uses a stainless steel center bucking ring and intercoil box beam bracing to provide rigid support for coils against the centering force, overturning moments from poloidal fields and faults, other external forces, and thermal stresses. The poloidal magnetics system is specially designed both to reduce the total volt-second energy requirements and to reduce the magnitude of the rate of field change at the toroidal field coils. The rate of field change imposed upon the toroidal field coils is reduced by at least a factor of 3.3 compared to that due to the plasma alone. Tritium processing, tritium containment and vacuum systems employ double containment and atmospheric cleanup to minimize releases. The document also contains discussions of systems integration and assembly, key research and development needs, and schedule considerations.

  4. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cooley, Scott K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vienna, John D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Crum, Jarrod V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-07-24

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer-layer glasses.

  5. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    International Nuclear Information System (INIS)

    Piepel, Gregory F.; Cooley, Scott K.; Vienna, John D.; Crum, Jarrod V.

    2015-01-01

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer-layer glasses.

  6. Bayesian optimal experimental design for priors of compact support

    KAUST Repository

    Long, Quan

    2016-01-08

    In this study, we optimize the experimental setup computationally by optimal experimental design (OED) in a Bayesian framework. We approximate the posterior probability density functions (pdf) using truncated Gaussian distributions in order to account for the bounded domain of the uniform prior pdf of the parameters. The underlying Gaussian distribution is obtained in the spirit of the Laplace method; more precisely, the mode is chosen as the maximum a posteriori (MAP) estimate, and the covariance is chosen as the negative inverse of the Hessian of the misfit function at the MAP estimate. The model-related entities are obtained from a polynomial surrogate. The optimality, quantified by the information gain measures, can be estimated efficiently by a rejection sampling algorithm against the underlying Gaussian probability distribution, rather than against the true posterior. This approach offers a significant error reduction when the magnitudes of the invariants of the posterior covariance are comparable to the size of the bounded domain of the prior. We demonstrate the accuracy and superior computational efficiency of our method for shock-tube experiments aiming to measure the model parameters of a key reaction in the complex kinetic network describing hydrocarbon oxidation. In the experiments, the initial temperature and fuel concentration are optimized with respect to the expected information gain in the estimation of the parameters of the target reaction rate. We show that the expected information gain surface can change its shape dramatically according to the level of noise introduced into the synthetic data. The information that can be extracted from the data saturates as a logarithmic function of the number of experiments, and few experiments are needed when they are conducted at the optimal experimental design conditions.
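
    The key device, a Laplace-based Gaussian truncated to the bounded support of the uniform prior by rejection sampling and then used to estimate information gain, can be illustrated in one dimension. The MAP estimate, Laplace standard deviation, and prior bounds below are hypothetical.

    ```python
    # A minimal sketch (hypothetical one-parameter problem): approximate the
    # posterior by a Gaussian from the Laplace method, truncate it to the
    # uniform prior's support by rejection sampling, and estimate the
    # information gain (KL divergence from prior to posterior).
    import numpy as np

    rng = np.random.default_rng(0)
    a, b = 0.0, 1.0                 # bounded domain of the uniform prior
    mu, s = 0.9, 0.2                # assumed MAP estimate and Laplace std

    # Rejection sampling against the underlying (untruncated) Gaussian:
    # keep only draws inside the prior's support.
    draws = rng.normal(mu, s, 200000)
    post = draws[(draws >= a) & (draws <= b)]
    Z = post.size / draws.size      # MC estimate of the truncation constant

    log_gauss = -0.5 * ((post - mu) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
    log_post = log_gauss - np.log(Z)           # truncated-Gaussian log-density
    log_prior = -np.log(b - a)                 # uniform prior log-density

    info_gain = np.mean(log_post - log_prior)  # KL(posterior || prior)
    print(f"acceptance rate {Z:.3f}, information gain {info_gain:.3f} nats")
    # Truncation matters exactly when mu sits near a prior bound relative
    # to s, which is the regime the abstract highlights.
    ```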

  7. Experimental study of elementary collection efficiency of aerosols by spray: Design of the experimental device

    Energy Technology Data Exchange (ETDEWEB)

    Ducret, D.; Vendel, J.; Garrec, S.L.

    1995-02-01

    The safety of a nuclear power plant containment building, in which pressure and temperature could increase because of an overheating-reactor accident, can be achieved by spraying water drops. The spray reduces the pressure and temperature levels by condensation of steam on cold water drops. The most stringent thermodynamic conditions are a pressure of 5x10^5 Pa (due to steam emission) and a temperature of 413 K. Moreover, beyond its energy-dissipation function, the spray leads to the washout of fission-product particles emitted into the reactor building atmosphere. The present study is part of a large program devoted to the evaluation of realistic washout rates. The aim of this work is to develop experiments to determine the collection efficiency of aerosols by a single drop. To do this, the experimental device has to be designed against fundamental criteria: thermodynamic conditions have to be representative of the post-accident atmosphere; thermodynamic equilibrium has to be attained between the water drops and the gaseous phase; thermophoretic, diffusiophoretic, and mechanical effects have to be studied independently; and operating conditions have to be homogeneous and constant during each experiment. This paper presents the design of the experimental device. In practice, the consequences of each of these criteria for the design, and the necessity of being representative of the real conditions, are described.

  8. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design

    Energy Technology Data Exchange (ETDEWEB)

    Wurtz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kaplan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-28

    Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully-realized statistical classifiers rely on a comprehensive set of tools for designing, building, and implementing. PSD advances rely on improvements to the implemented algorithm, and can draw on conventional statistical-classifier and machine-learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully-designed and operational classifier framework that can be used to discover opportunities for improving PSD classifier projects. This paper recommends reporting the PSD classifier's receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.
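
    The recommended report-out, a ROC curve plus the operating point at a fixed gamma rejection rate, is straightforward to compute once per-event PSD scores exist. The score distributions below are synthetic stand-ins, not detector data.

    ```python
    # A minimal sketch (synthetic scores, not the report's data) of the
    # recommended reporting: the ROC curve of a PSD classifier and its
    # neutron acceptance at a fixed gamma rejection rate (GRR).
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(7)
    # Hypothetical PSD scores: gammas (label 0) and neutrons (label 1).
    gamma_scores = rng.normal(0.20, 0.05, 10000)
    neutron_scores = rng.normal(0.35, 0.06, 2000)
    scores = np.concatenate([gamma_scores, neutron_scores])
    labels = np.concatenate([np.zeros(10000), np.ones(2000)])

    fpr, tpr, thr = roc_curve(labels, scores)

    # Operating point at a GRR of 99.9% (gamma false-positive rate of 1e-3).
    grr = 0.999
    idx = np.searchsorted(fpr, 1.0 - grr)
    print(f"at GRR={grr:.3%}: threshold={thr[idx]:.3f}, "
          f"neutron acceptance={tpr[idx]:.3f}")
    ```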

  9. Sb2Te3 and Its Superlattices: Optimization by Statistical Design.

    Science.gov (United States)

    Behera, Jitendra K; Zhou, Xilin; Ranjan, Alok; Simpson, Robert E

    2018-05-02

    The objective of this work is to demonstrate the usefulness of fractional factorial design for optimizing the crystal quality of chalcogenide van der Waals (vdW) crystals. We statistically analyze the growth parameters of highly c-axis-oriented Sb2Te3 crystals and Sb2Te3-GeTe phase-change vdW heterostructured superlattices. The statistical significance of the growth parameters of temperature, pressure, power, buffer material, and buffer layer thickness was found by fractional factorial design and response surface analysis. Temperature, pressure, power, and their second-order interactions are the major factors that significantly influence the quality of the crystals. Additionally, using tungsten rather than molybdenum as a buffer layer significantly enhances the crystal quality. Fractional factorial design minimizes the number of experiments that are necessary to find the optimal growth conditions, resulting in an order-of-magnitude improvement in the crystal quality. We highlight that statistical design-of-experiment methods, which are more commonly used in product design, should be considered more broadly by those designing and optimizing materials.
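
    A fractional factorial design of the kind described, five growth factors screened in half the runs of a full two-level factorial, can be constructed by hand. The sketch below builds a standard 2^(5-1) half-fraction with generator E = ABCD; the factor names mirror the abstract, but the particular fraction and coding are illustrative assumptions.

    ```python
    # A minimal sketch of a two-level fractional factorial design for five
    # factors. The generator E = ABCD gives a resolution-V half-fraction,
    # a standard choice, not necessarily the authors' exact design.
    from itertools import product

    factors = ["temperature", "pressure", "power", "buffer", "thickness"]

    # Full 2^4 factorial in the first four factors (levels coded -1/+1),
    # with the fifth column generated as the product E = A*B*C*D.
    runs = []
    for a, b, c, d in product([-1, 1], repeat=4):
        runs.append((a, b, c, d, a * b * c * d))

    print(" run  " + "  ".join(f"{f[:5]:>5}" for f in factors))
    for i, r in enumerate(runs, 1):
        print(f"{i:4d}  " + "  ".join(f"{v:+5d}" for v in r))
    # 16 runs instead of 32: main effects stay estimable, aliased only with
    # high-order interactions, which is how fractional designs cut the
    # number of experiments needed to locate good growth conditions.
    ```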

  10. White Noise Assumptions Revisited : Regression Models and Statistical Designs for Simulation Practice

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2006-01-01

    Classic linear regression models and their concomitant statistical designs assume a univariate response and white noise. By definition, white noise is normally, independently, and identically distributed with zero mean. This survey tries to answer the following questions: (i) How realistic are these classic assumptions in simulation practice?

  11. Experimental Design of Electrocoagulation and Magnetic Technology for Enhancing Suspended Solids Removal from Synthetic Wastewater

    Directory of Open Access Journals (Sweden)

    Moh Faiqun Ni'am

    2014-10-01

    Full Text Available Design of experiments (DOE) is a statistical method used as a tool to enhance and improve experimental quality. Changing the variables of a process or system is supposed to give an optimal and satisfactory result (response). Experimental design can be defined as a test or series of tests in which the input variables (factors) of a process are varied to identify the causes of changes in the output (response). This paper presents the results of an experimental design for wastewater treatment by the electrocoagulation (EC) technique. A combined magnet and electrocoagulation (EC) technology was designed to increase settling velocity and to enhance suspended-solids removal efficiency from wastewater samples. In this experiment, synthetic wastewater samples were prepared by mixing 700 mg of milk powder in one litre of water and treated using an acidic buffer solution. Monopolar iron (Fe) plate anodes and cathodes were employed as electrodes. Direct current was varied in a range between 0.5 and 1.1 A, and flowrate in a range between 1.00 and 3.50 mL/s. One permanent magnet, AlNiCo, with a magnetic strength of 0.16 T, was used in this experiment. The results show that the magnetic field and the flowrate have major influences on suspended-solids removal. The removal efficiencies of suspended solids, turbidity, and COD at optimum conditions were found to be more than 85%, 95%, and 75%, respectively.
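
    With two quantitative operating variables, current and flowrate, even the simplest two-level factorial analysis shows how a DOE attributes changes in removal efficiency to factors. The coded levels below follow the ranges quoted in the abstract; the four response values are hypothetical.

    ```python
    # A minimal sketch (hypothetical responses) of estimating main effects
    # from a 2x2 factorial on the two operating variables reported here.
    import numpy as np

    # Coded design: current 0.5 A (-1) / 1.1 A (+1),
    #               flowrate 1.0 mL/s (-1) / 3.5 mL/s (+1).
    design = np.array([[-1, -1], [+1, -1], [-1, +1], [+1, +1]])
    removal = np.array([78.0, 86.0, 71.0, 83.0])  # hypothetical SS removal, %

    effect_current = (removal[design[:, 0] == 1].mean()
                      - removal[design[:, 0] == -1].mean())
    effect_flow = (removal[design[:, 1] == 1].mean()
                   - removal[design[:, 1] == -1].mean())
    print(f"main effect of current:  {effect_current:+.1f} % removal")
    print(f"main effect of flowrate: {effect_flow:+.1f} % removal")
    ```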

  12. New synthetic thrombin inhibitors: molecular design and experimental verification.

    Science.gov (United States)

    Sinauridze, Elena I; Romanov, Alexey N; Gribkova, Irina V; Kondakova, Olga A; Surov, Stepan S; Gorbatenko, Aleksander S; Butylin, Andrey A; Monakov, Mikhail Yu; Bogolyubov, Alexey A; Kuznetsov, Yuryi V; Sulimov, Vladimir B; Ataullakhanov, Fazoyl I

    2011-01-01

    The development of new anticoagulants is an important goal for the improvement of thromboses treatments. The design, synthesis and experimental testing of new safe and effective small molecule direct thrombin inhibitors for intravenous administration. Computer-aided molecular design of new thrombin inhibitors was performed using our original docking program SOL, which is based on the genetic algorithm of global energy minimization in the framework of a Merck Molecular Force Field. This program takes into account the effects of solvent. The designed molecules with the best scoring functions (calculated binding energies) were synthesized and their thrombin inhibitory activity evaluated experimentally in vitro, using a chromogenic substrate in a buffer system and a thrombin generation test in isolated plasma, and in vivo, using the newly developed model of hemodilution-induced hypercoagulation in rats. The acute toxicities of the most promising new thrombin inhibitors were evaluated in mice, and their stabilities in aqueous solutions were measured. New compounds were discovered that are both effective direct thrombin inhibitors (the best K(I) was below 50 nM) and active in the thrombin generation assay (at approximately 100 nM). These compounds contain one of the following new residues as the basic fragment: isothiuronium, 4-aminopyridinium, or 2-aminothiazolinium. LD(50) values for the best new inhibitors ranged from 166.7 to >1111.1 mg/kg. A plasma-substituting solution supplemented with one of the new inhibitors prevented hypercoagulation in the rat model of hemodilution-induced hypercoagulation. Activities of the best new inhibitors in physiological saline (1 µM solutions) were stable after sterilization by autoclaving, and the inhibitors remained stable in long-term storage over more than 1.5 years at room temperature and at 4°C. The high efficacy, stability and low acute toxicity indicate that the inhibitors that were developed may be promising for potential medical applications.

  13. New synthetic thrombin inhibitors: molecular design and experimental verification.

    Directory of Open Access Journals (Sweden)

    Elena I Sinauridze

    Full Text Available BACKGROUND: The development of new anticoagulants is an important goal for the improvement of thromboses treatments. OBJECTIVES: The design, synthesis and experimental testing of new safe and effective small molecule direct thrombin inhibitors for intravenous administration. METHODS: Computer-aided molecular design of new thrombin inhibitors was performed using our original docking program SOL, which is based on the genetic algorithm of global energy minimization in the framework of a Merck Molecular Force Field. This program takes into account the effects of solvent. The designed molecules with the best scoring functions (calculated binding energies) were synthesized and their thrombin inhibitory activity evaluated experimentally in vitro, using a chromogenic substrate in a buffer system and a thrombin generation test in isolated plasma, and in vivo, using the newly developed model of hemodilution-induced hypercoagulation in rats. The acute toxicities of the most promising new thrombin inhibitors were evaluated in mice, and their stabilities in aqueous solutions were measured. RESULTS: New compounds were discovered that are both effective direct thrombin inhibitors (the best K(I) was below 50 nM) and active in the thrombin generation assay (at approximately 100 nM). LD(50) values for the best new inhibitors ranged from 166.7 to >1111.1 mg/kg. A plasma-substituting solution supplemented with one of the new inhibitors prevented hypercoagulation in the rat model of hemodilution-induced hypercoagulation. Activities of the best new inhibitors in physiological saline (1 µM solutions) were stable after sterilization by autoclaving, and the inhibitors remained stable in long-term storage over more than 1.5 years at room temperature and at 4°C. CONCLUSIONS: The high efficacy, stability and low acute toxicity indicate that the inhibitors that were developed may be promising for potential medical applications.

  14. Conceptual design studies of experimental and demonstration fusion reactors

    International Nuclear Information System (INIS)

    1978-01-01

    Since 1973 the FINTOR Group has been involved in conceptual design studies of TOKAMAK-type fusion reactors to precede the construction of a prototype power reactor plant. FINTOR-1 was the first conceptual design aimed at investigating the main physics and engineering constraints on a minimum-size (both dimensions and thermal power) tokamak experimental reactor. The required plasma energy confinement time as evaluated by various power balance models was compared with the values resulting from different transport models. For the reference design, an energy confinement time ten times smaller than neoclassical was assumed. This also implied a rather high (thermally stable) working temperature (above 20 keV) for the reactor. Other relevant points of the design were: circular plasma cross section, single-null axisymmetric divertor; lithium breeder, stainless steel structures, helium coolant; modular blanket and shield structure; copper-stabilized, superconducting Nb-Ti toroidal field and divertor coils; vertical field and transformer coils inside the toroidal coils; vacuum-tight containment vessel. Solutions involving air and iron transformer cores were compared. These assumptions led to a minimum size reactor with a thermal power of about 100 MW and rather large dimensions (major radius of about 9 m) similar to those of full-scale power reactors considered in other conceptual studies. The FINTOR-1 analysis was completed by the end of 1976. In 1977 a conceptual design of a Demonstration Power Reactor Plant (FINTOR-D) was started. In this study the main working assumptions differing from those of FINTOR-1 are: non-circular plasma cross section; plasma confinement compatible with trapped ion instabilities; cold (gas) blanket sufficient for wall protection (no divertor); wall loading between 1-3 MW/m2 and thermal power of a few GW. (author)

  15. Remote maintenance design for Fusion Experimental Reactor (FER)

    International Nuclear Information System (INIS)

    Tachikawa, K.; Iida, H.; Nishio, S.; Tone, T.; Aota, T.; Iwamoto, T.; Niikura, S.; Nishizawa, H.

    1984-01-01

    The design of the Fusion Experimental Reactor (FER) has been conducted by the Japan Atomic Energy Research Institute (JAERI) since 1981. From the viewpoint of remote maintenance, two typical reactor types can be identified among the four FER design concepts. In the type 1 FER, the torus module consists of the shield structure and blanket, and the connecting joints between torus modules are provided at the outer region of the reactor. In the type 2 FER, the shield structure is joined with the vacuum cryostat, only the blanket module is allowed to move, and the connections between torus modules are located in the inner region of the reactor. Comparing the type 1 with the type 2 FER, this paper describes the remote maintenance of FER, including reactor configurations, work procedures, remote systems and equipment, the repair facility, and future R and D problems. Reviewing design studies and investigations of existing robotics technologies, R and D for FER remote maintenance technology should be performed under a reasonable long-term program. The main items of remote technology requiring urgent work are a multi-purpose manipulator system with dexterous performance, a tele-viewing system that reduces operator fatigue, and remote tests for commercially available components.

  16. Design and experimental study of a novel giant magnetostrictive actuator

    Energy Technology Data Exchange (ETDEWEB)

    Xue, Guangming, E-mail: yy0youxia@163.com [Vehicle and Electrical Engineering Department, Ordnance Engineering College, Shijiazhuang 050003 (China); Zhang, Peilin; He, Zhongbo; Li, Dongwei; Huang, Yingjie [Vehicle and Electrical Engineering Department, Ordnance Engineering College, Shijiazhuang 050003 (China); Xie, Wenqiang [Cadre Rotational Training Brigade, Ordnance Engineering College, Shijiazhuang (China)

    2016-12-15

    Giant magnetostrictive actuators have been widely used in precision driving applications for their excellent performance. However, in driving a switching valve, especially the ball valve in an electronically controlled injector, the actuator cannot exhibit its full performance because of limits in output displacement and response speed. A novel giant magnetostrictive actuator, which can reach its maximum displacement with no bias magnetic field, is designed in this paper. At the same time, elongation of the giant magnetostrictive material is converted to shortening of the actuator's axial dimension with the help of a T-shaped output rod. Furthermore, to reduce response time, a driving voltage with a high opening level and a low holding level is designed. Response time and output displacement are studied experimentally with the help of a measuring system. The measured results show that the designed driving voltage improves the response speed of the actuator displacement quite effectively, and that the actuator can output various steady-state displacements to achieve a range of driving effects. - Highlights: • A GMA with zero bias magnetic field can reach maximum displacement in one direction. • A driving wave with a high opening voltage can improve the GMA's response speed. • The higher the opening voltage, the shorter the rise time. • Continuous displacements from 0 to the maximum value can be achieved by the GMA.

  17. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Science.gov (United States)

    Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical sampling designs increases the value of large-scale monitoring data.

  18. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Directory of Open Access Journals (Sweden)

    David C Pavlacky

    Full Text Available Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical sampling designs increases the value of large-scale monitoring data.

  19. Statistical controversies in clinical research: requiem for the 3 + 3 design for phase I trials.

    Science.gov (United States)

    Paoletti, X; Ezzalfani, M; Le Tourneau, C

    2015-09-01

    More than 95% of published phase I trials have used the 3 + 3 design to identify the dose to be recommended for phase II trials. However, the statistical community agrees on the limitations of the 3 + 3 design compared with model-based approaches. Moreover, the mechanisms of action of targeted agents strongly challenge the hypothesis that the maximum tolerated dose constitutes the optimal dose, and more outcomes, including clinical and biological activity, increasingly need to be taken into account to identify the optimal dose. We review key elements from clinical publications and from the statistical literature to show that the 3 + 3 design lacks the necessary flexibility to address the challenges of targeted agents. The design issues raised by expansion cohorts, new definitions of dose-limiting toxicity and trials of combinations are not easily addressed by the 3 + 3 design or its extensions. Alternative statistical proposals have been developed to make better use of the complex data generated by phase I trials. Their applications require a close collaboration between all actors of early phase clinical trials. © The Author 2015. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
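
    For readers unfamiliar with the rules being criticized, a simulation makes the 3 + 3 algorithm concrete: cohorts of three, escalation on 0/3 or at most 1/6 dose-limiting toxicities (DLTs), de-escalation on two or more. The sketch below implements a simplified version of the rules with assumed true toxicity rates.

    ```python
    # A minimal sketch simulating simplified 3 + 3 escalation rules against
    # assumed true per-dose toxicity probabilities, to see how often the
    # design recommends each dose. All probabilities are hypothetical.
    import numpy as np

    rng = np.random.default_rng(42)
    p_tox = [0.05, 0.10, 0.20, 0.35, 0.50]   # assumed true DLT rates per dose

    def three_plus_three(p_tox):
        """Return the recommended dose index (-1 if no dose is tolerable)."""
        d = 0
        while True:
            dlt = rng.binomial(3, p_tox[d])          # first cohort of 3
            if dlt == 1:
                dlt += rng.binomial(3, p_tox[d])     # expand to 6 patients
            if dlt <= 1:                             # 0/3 or <=1/6: escalate
                if d == len(p_tox) - 1:
                    return d                         # top dose reached
                d += 1
            else:                                    # >=2 DLTs: stop
                return d - 1                         # MTD = previous dose

    picks = [three_plus_three(p_tox) for _ in range(10000)]
    for dose in range(-1, len(p_tox)):
        label = "none tolerable" if dose < 0 else f"dose {dose}"
        print(f"{label}: recommended {picks.count(dose) / len(picks):.1%}")
    ```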

  20. A statistical method for evaluation of the experimental phase equilibrium data of simple clathrate hydrates

    DEFF Research Database (Denmark)

    Eslamimanesh, Ali; Gharagheizi, Farhad; Mohammadi, Amir H.

    2012-01-01

    We, herein, present a statistical method for diagnostics of the outliers in phase equilibrium data (dissociation data) of simple clathrate hydrates. The applied algorithm is performed on the basis of the Leverage mathematical approach, in which the statistical Hat matrix, the Williams plot, and the residuals of a selected correlation are used to identify doubtful data points. A correlation in exponential form is used to represent/predict the hydrate dissociation pressures for three-phase equilibrium conditions (liquid water/ice-vapor-hydrate). The investigated hydrate formers are methane, ethane, propane, carbon dioxide, nitrogen, and hydrogen sulfide. The obtained results indicate which of the experimental data points are probable outliers or lie outside the applicability domain of the correlation.
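
    The Leverage approach named here reduces to a few linear-algebra steps: form the Hat matrix of the fitted correlation, read leverages off its diagonal, standardize the residuals, and flag points beyond the cut-offs of a Williams plot. The dissociation data below are hypothetical, with one planted outlier.

    ```python
    # A minimal sketch (hypothetical dissociation data with one planted
    # outlier) of the Leverage approach: Hat matrix, leverages, and
    # standardized residuals (the quantities shown on a Williams plot)
    # for an exponential-form correlation ln(p) = a + b/T.
    import numpy as np

    T = np.array([274.0, 276.0, 278.0, 280.0, 282.0, 284.0, 286.0])  # K
    p = np.array([2.9, 3.6, 4.5, 5.5, 7.0, 8.5, 20.0])  # MPa; last is planted

    X = np.column_stack([np.ones_like(T), 1.0 / T])  # design matrix for ln p
    y = np.log(p)

    H = X @ np.linalg.inv(X.T @ X) @ X.T             # Hat matrix
    leverage = np.diag(H)
    resid = y - H @ y
    mse = resid @ resid / (len(y) - X.shape[1])
    std_resid = resid / np.sqrt(mse * (1.0 - leverage))

    h_star = 3.0 * X.shape[1] / len(y)  # common leverage warning limit, 3p/n
    cut = 2.0  # Williams plots usually cut at |R| = 3; looser here, n is tiny
    for Ti, h, r in zip(T, leverage, std_resid):
        flag = "suspect" if abs(r) > cut or h > h_star else ""
        print(f"T={Ti:.0f} K: leverage={h:.3f}, std. residual={r:+.2f} {flag}")
    ```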

  1. An experimental study of the surface elevation probability distribution and statistics of wind-generated waves

    Science.gov (United States)

    Huang, N. E.; Long, S. R.

    1980-01-01

    Laboratory experiments were performed to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data were compared with some limited field data. The statistical properties of the surface elevation were processed for comparison with the results derived from the Longuet-Higgins (1963) theory. It is found that, even for the highly non-Gaussian cases, the distribution function proposed by Longuet-Higgins still gives good approximations.

  2. An experimental design method leading to chemical Turing patterns.

    Science.gov (United States)

    Horváth, Judit; Szalai, István; De Kepper, Patrick

    2009-05-08

    Chemical reaction-diffusion patterns often serve as prototypes for pattern formation in living systems, but only two isothermal single-phase reaction systems have produced sustained stationary reaction-diffusion patterns so far. We designed an experimental method to search for additional systems on the basis of three steps: (i) generate spatial bistability by operating autoactivated reactions in open spatial reactors; (ii) use an independent negative-feedback species to produce spatiotemporal oscillations; and (iii) induce a space-scale separation of the activatory and inhibitory processes with a low-mobility complexing agent. We successfully applied this method to a hydrogen-ion autoactivated reaction, the thiourea-iodate-sulfite (TuIS) reaction, and notably produced stationary hexagonal arrays of spots and parallel stripes of pH patterns attributed to a Turing bifurcation. This method could be extended to biochemical reactions.

  3. Design of nuclear fuel cells by means of a statistical analysis and a sensitivity study

    International Nuclear Information System (INIS)

    Jauregui C, V.; Castillo M, J. A.; Ortiz S, J. J.; Montes T, J. L.; Perusquia del C, R.

    2013-10-01

    This work presents the results of a statistical analysis carried out to study the performance of nuclear fuel cells, considering the selection frequencies of the fuel rods used in their design. The rods used for the cell designs were selected in three ways: in the first, the plotted frequency resembles a normal distribution; in the second, the frequency graph is of inverted chi-squared type; and in the third, the rods are chosen at random. The heuristic techniques used for the cell designs were neural networks, ant colonies, and a hybrid of scatter search and path relinking. The statistical analysis of the cell designs considered the local power peaking factor and the infinite neutron multiplication factor (k∞) of the cell. The performance of the designed cells was also analyzed by verifying the positions of the rods containing gadolinium. The results show that it is possible to design nuclear fuel cells with good performance by taking into account the frequency of the rods used in their design. (Author)

  4. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-05-12

    Experimental design can be vital when experiments are resource-intensive and time-consuming. In this work, we carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data about the model parameters. One of the major difficulties in evaluating the expected information gain is that it naturally involves nested integration over a possibly high dimensional domain. We use the Multilevel Monte Carlo (MLMC) method to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, MLMC can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the MLMC method imposes fewer assumptions, such as the asymptotic concentration of posterior measures, required for instance by the Laplace approximation (LA). We test the MLMC method using two numerical examples. The first example is the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Poisson equation. We place the sensors in the locations where the pressure is measured, and we model the conductivity field as a piecewise constant random vector with two parameters. The second is a chemical Enhanced Oil Recovery (EOR) core-flooding experiment assuming homogeneous permeability. We measure the cumulative oil recovery, from a horizontal core flooded by water, surfactant, and polymer, for different injection rates. The model parameters consist of the endpoint relative permeabilities, the residual saturations, and the relative permeability exponents for the three phases: water, oil, and microemulsion.
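
    The nested integral at the heart of this method is easiest to see in the plain double-loop Monte Carlo estimator that MLMC accelerates: an outer average over prior draws and synthetic data, and an inner average estimating the evidence. The linear-Gaussian toy model below is a stand-in for illustration, not the Darcy or EOR problem.

    ```python
    # A minimal sketch (hypothetical linear-Gaussian model) of the nested
    # Monte Carlo estimator of expected information gain (EIG). The costly
    # inner evidence average is exactly what MLMC is designed to cheapen.
    import numpy as np

    rng = np.random.default_rng(3)
    sigma = 0.1                       # assumed measurement noise std

    def forward(theta, xi):
        """Toy forward model evaluated at design point xi."""
        return theta[..., 0] + theta[..., 1] * xi

    def expected_information_gain(xi, n_outer=500, n_inner=500):
        eig = 0.0
        for _ in range(n_outer):
            theta = rng.normal(0.0, 1.0, 2)              # draw from the prior
            y = forward(theta, xi) + rng.normal(0.0, sigma)
            log_like = -0.5 * ((y - forward(theta, xi)) / sigma) ** 2
            # Inner loop: Monte Carlo estimate of the evidence p(y | xi).
            thetas = rng.normal(0.0, 1.0, (n_inner, 2))
            like = np.exp(-0.5 * ((y - forward(thetas, xi)) / sigma) ** 2)
            eig += log_like - np.log(np.mean(like))
        return eig / n_outer

    # Compare candidate designs; larger xi pins down the slope parameter.
    for xi in [0.1, 1.0, 3.0]:
        print(f"design xi={xi}: EIG ~ {expected_information_gain(xi):.2f} nats")
    ```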

  5. EXPERIMENTAL RESEARCH REGARDING LEATHER APPLICATIONS IN PRODUCT DESIGN

    Directory of Open Access Journals (Sweden)

    PRALEA Jeni

    2015-05-01

    Full Text Available This paper presents the role and importance of experimental research in design activity. The designer, as researcher and project manager, seeks to establish a relationship between functional, aesthetic, constructive, technological, and economic aspects, based on the aesthetic possibilities of the materials used in the experiments. With the aim of identifying areas of application for the leather waste resulting from the production process, the paper presents experiments conducted with this material in combination with wood, using different techniques that lead to different aesthetic effects. Identifying areas of use and creating products from leather and/or wood waste is based on the properties of these materials. Leather, the subject of these experiments, has the advantage that it can be used on both sides. The tactile difference between the two sides of this material has both aesthetic and functional advantages, which makes it suitable for applications in products that meet the requirements of "design for all". With differentiated tactile characteristics, in combination with other materials (here, wood), products that are easily "read" by touch can be generated to help people with certain disabilities. Thus, the experiments presented in this paper allow the establishment of aesthetic schemes applicable to products that are friendly both to the environment (based on the reuse of wood and leather waste) and to users (they can be used in applications, accessories, and product concepts for people with certain disabilities). The designer's choices or decisions can be based on the results of this experiment. The experiment enables the designer to develop creative, innovative, and environmentally friendly products.

  6. Design and experimental results of coaxial circuits for gyroklystron amplifiers

    International Nuclear Information System (INIS)

    Flaherty, M.K.E.; Lawson, W.; Cheng, J.; Calame, J.P.; Hogan, B.; Latham, P.E.; Granatstein, V.L.

    1994-01-01

    At the University of Maryland, high-power microwave source development for use in linear accelerator applications continues with the design and testing of coaxial circuits for gyroklystron amplifiers. This presentation will include experimental results from a coaxial gyroklystron that was tested on the current microwave test bed, and designs for second harmonic coaxial circuits for use in the next generation of the gyroklystron program. The authors present test results for a second harmonic coaxial circuit. As in previous second harmonic experiments, the input cavity resonated at 9.886 GHz and the output frequency was 19.772 GHz. The coaxial insert was positioned in the input cavity and drift region. The inner conductor consisted of a tungsten rod with copper and ceramic cylinders covering its length. Two tungsten rods that bridged the space between the inner and outer conductors supported the whole assembly. The tube produced over 20 MW of output power with 17% efficiency. Beam interception by the tungsten rods resulted in minor damage. Comparisons with previous non-coaxial circuits showed that the coaxial configuration increased the parameter space over which stable operation was possible. Future experiments will feature an upgraded modulator and beam formation system capable of producing 300 MW of beam power. The fundamental frequency of operation is 8.568 GHz. A second harmonic coaxial gyroklystron circuit was designed for use in the new system. A scattering matrix code predicts a resonant frequency of 17.136 GHz and a Q of 260 for the cavity, with 95% of the outgoing microwaves in the desired TE032 mode. Efficiency studies of this second harmonic output cavity show 20% expected efficiency. Shorter second harmonic output cavity designs are also being investigated, with expected efficiencies near 34%.

  7. Statistical mixture design selective extraction of compounds with antioxidant activity and total polyphenol content from Trichilia catigua.

    Science.gov (United States)

    Lonni, Audrey Alesandra Stinghen Garcia; Longhini, Renata; Lopes, Gisely Cristiny; de Mello, João Carlos Palazzo; Scarminio, Ieda Spacino

    2012-03-16

    Statistical design mixtures of water, methanol, acetone, and ethanol were used to extract material from Trichilia catigua (Meliaceae) barks to study the effects of different solvents and their mixtures on its yield, total polyphenol content, and antioxidant activity. The experimental results and their response surface models showed that quaternary mixtures with approximately equal proportions of all four solvents provided the highest yields, total polyphenol contents, and antioxidant activities of the crude extracts, followed by ternary design mixtures. Principal component and hierarchical clustering analysis of the HPLC-DAD spectra of the chromatographic peaks of the 1:1:1:1 water-methanol-acetone-ethanol mixture extracts indicate the presence of cinchonains, gallic acid derivatives, natural polyphenols, flavonoids, catechins, and epicatechins. Copyright © 2011 Elsevier B.V. All rights reserved.
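
    The mixture designs used here can be generated mechanically. The sketch below produces a simplex-centroid design for the four named solvents: every 1-, 2-, 3-, and 4-solvent blend in equal proportions, which includes the 1:1:1:1 centroid singled out in the abstract. The design type is the standard one for four-component mixtures and is assumed, not quoted from the paper.

    ```python
    # A minimal sketch generating a simplex-centroid mixture design for the
    # four extraction solvents; the proportions in each run sum to 1.
    from itertools import combinations

    solvents = ["water", "methanol", "acetone", "ethanol"]

    runs = []
    for k in range(1, len(solvents) + 1):
        for subset in combinations(range(len(solvents)), k):
            # Equal proportions over the chosen solvents, zero elsewhere.
            x = [1.0 / k if i in subset else 0.0 for i in range(len(solvents))]
            runs.append(x)

    print("  ".join(f"{s:>8}" for s in solvents))
    for x in runs:
        print("  ".join(f"{xi:8.3f}" for xi in x))
    print(f"{len(runs)} runs; the last row is the 1:1:1:1 centroid, the blend "
          "reported here to give the highest yields and antioxidant activity.")
    ```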

  8. Experimental Verification of Current Shear Design Equations for HSRC Beams

    Directory of Open Access Journals (Sweden)

    Attaullah Shah

    2012-07-01

    Full Text Available Experimental research on the shear capacity of HSRC (High Strength Reinforced Concrete) beams is relatively limited compared with that on NSRC (Normal Strength Reinforced Concrete) beams. Most building codes determine the shear strength of HSRC with empirical equations based on experimental work on NSRC beams, and these equations are therefore generally regarded as unconservative for HSRC beams, particularly at low levels of longitudinal reinforcement. In this paper, 42 beams have been tested in two sets: in 21 beams no transverse reinforcement was used, whereas in the remaining 21 beams minimum transverse reinforcement was used as per the ACI-318 (American Concrete Institute) provisions. Two values of compressive strength (52 and 61 MPa), three values of longitudinal steel ratio, and seven values of shear span-to-depth ratio were used. The beams were tested under a concentrated load at mid-span. The results are compared with the shear-strength equations for HSRC beams proposed by different international building codes, such as the ACI, AASHTO LRFD, EC (Euro Code), Canadian, and Japanese codes. From the comparison, it has been observed that some codes are less conservative for the shear design of HSRC beams, and further research is required to rationalize these equations.

  9. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. The project directly supports the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop a novel molecular dynamics method to improve the efficiency of simulations of novel TBC materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  10. Design and experimental study on desulphurization process of ship exhaust

    Science.gov (United States)

    Han, Mingyang; Hao, Shan; Zhou, Junbo; Gao, Liping

    2018-02-01

    This desulfurization process involves removing sulfur oxides with seawater or alkaline aqueous solutions and then treating the effluent by aeration and pH adjustment before discharging it into the ocean. In the desulfurization system, the spray tower is the key equipment and the venturi tubes are the pretreatment device. The two stages of plates are designed to fully absorb sulfur oxides in exhaust gases. The spiral nozzles atomize and evenly spray the desulfurizers into the tower. This study experimentally investigated the effectiveness of this desulfurization process and the factors influencing it under laboratory conditions, with a diesel engine exhaust used to represent ship exhaust. The experimental results show that this process can effectively absorb the SO2 in the exhaust. When the exhaust flow rate was 25 m3/h and the desulfurizer flow rate was 4 L/min, the sulfur removal efficiency (SRE) reached 99.7%. The flow rate, alkalinity, and temperature of seawater were found to have significant effects on the SRE. Adjusting seawater flow rate (SWR) and alkalinity within certain ranges can substantially improve the SRE.

  11. Statistical metrology - measurement and modeling of variation for advanced process development and design rule generation

    International Nuclear Information System (INIS)

    Boning, Duane S.; Chung, James E.

    1998-01-01

    Advanced process technology will require more detailed understanding and tighter control of variation in devices and interconnects. The purpose of statistical metrology is to provide methods to measure and characterize variation, to model systematic and random components of that variation, and to understand the impact of variation on both yield and performance of advanced circuits. Of particular concern are spatial or pattern-dependencies within individual chips; such systematic variation within the chip can have a much larger impact on performance than wafer-level random variation. Statistical metrology methods will play an important role in the creation of design rules for advanced technologies. For example, a key issue in multilayer interconnect is the uniformity of interlevel dielectric (ILD) thickness within the chip. For the case of ILD thickness, we describe phases of statistical metrology development and application to understanding and modeling thickness variation arising from chemical-mechanical polishing (CMP). These phases include screening experiments, including the design of test structures and test masks to gather electrical or optical data, techniques for statistical decomposition and analysis of the data, and approaches to calibrating empirical and physical variation models. These models can be integrated with circuit CAD tools to evaluate different process integration or design rule strategies. One focus for the generation of interconnect design rules is guidelines for the use of 'dummy fill' or 'metal fill' to improve the uniformity of underlying metal density and thus improve the uniformity of oxide thickness within the die. Trade-offs that can be evaluated via statistical metrology include the possible improvements to uniformity versus the effect of increased capacitance due to additional metal.

  12. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ke; Tang, Jie [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53792 (United States)

    2014-04-15

    Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Based on the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to be different from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; lower dose led to a “redder” NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ (dose)^−β with the exponent β ≈ 0.25, which violates the classical σ ∝ (dose)^−0.5 power law of FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial uniformity.
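
    The reported power law σ ∝ (dose)^−β can be estimated by linear regression in log-log space. The sketch below is a minimal illustration using synthetic noise values at the study's mAs levels, not the measured data.

    ```python
    import numpy as np

    # mAs levels from the study; sigma values are synthetic placeholders
    dose = np.array([5, 12.5, 25, 50, 100, 200, 300], dtype=float)
    sigma = 40.0 * dose ** -0.25  # MBIR-like behaviour with beta = 0.25

    # sigma = c * dose**(-beta)  <=>  log sigma = log c - beta * log dose
    slope, _ = np.polyfit(np.log(dose), np.log(sigma), 1)
    print(f"estimated beta = {-slope:.3f}")  # ~0.25 here; ~0.5 expected for FBP
    ```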

  13. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    International Nuclear Information System (INIS)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves

    2017-01-01

    In this study, optimization of procedures and standardization of the Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations in a pneumatic system. 2^k experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. The chosen experimental designs were the 2^3 and the 2^4, depending on the radionuclide half-life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effects, interaction effects and the best irradiation conditions. (author)
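
    In a 2^k design, each main effect is the difference between the mean response at the high and low levels of a factor, read off a ±1 sign table. The sketch below illustrates this for a 2^3 design; the factor names echo the variables above, but the responses are invented placeholders, not the INAA data.

    ```python
    import itertools
    import numpy as np

    factors = ["decay_time", "count_time", "distance"]
    design = np.array(list(itertools.product([-1, 1], repeat=3)))  # 8 runs

    # Placeholder responses (e.g., mass fraction results), one per run
    y = np.array([10.2, 10.8, 9.9, 10.5, 11.4, 12.0, 11.1, 11.8])

    # Main effect = mean(y at +1) - mean(y at -1) for each factor column
    for j, name in enumerate(factors):
        effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
        print(f"{name:10s} main effect = {effect:+.3f}")

    # Two-factor interaction from the product of two sign columns
    ab = design[:, 0] * design[:, 1]
    print(f"decay x count interaction = {y[ab == 1].mean() - y[ab == -1].mean():+.3f}")
    ```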

  14. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves, E-mail: uandapaula@gmail.com, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    In this study, optimization of procedures and standardization of the Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations in a pneumatic system. 2^k experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. The chosen experimental designs were the 2^3 and the 2^4, depending on the radionuclide half-life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effects, interaction effects and the best irradiation conditions. (author)

  15. Research design and statistical methods in Indian medical journals: a retrospective survey.

    Science.gov (United States)

    Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S; Mayya, Shreemathi S

    2015-01-01

    Good quality medical research generally requires not only expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten leading Indian medical journals were selected based on impact factors, and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were study design types and their frequencies, the proportion of errors/defects in study design and statistical analyses, and implementation of the CONSORT checklist in RCTs (randomized clinical trials). From 2003 to 2013: The proportion of erroneous statistical analyses did not decrease (χ2=0.592, Φ=0.027, p=0.4418), 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ2=26.96, Φ=0.16, p<0.001), and the proportion of errors/defects in study design decreased significantly (χ2=16.783, Φ=0.12, p<0.001). The use of experimental designs has remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. Major decreases in error proportions were observed in both results presentation (χ2=24.477, Φ=0.17, p<0.001) and interpretation. Indian medical research seems to have made no major progress regarding the use of correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are quite rarely published and have a high proportion of errors.

  16. Thermal design of horizontal tube boilers: numerical and experimental investigation

    International Nuclear Information System (INIS)

    Roser, Robert

    1999-01-01

    This work concerns the thermal design of kettle reboilers. Current methods are highly inaccurate, both with regard to the correlations for the external heat transfer coefficient at the single-tube scale and to two-phase flow modelling at the boiler scale. The aim of this work is to improve these thermal design methods. It includes an experimental investigation under typical operating conditions of such equipment: a hydrocarbon (n-pentane) at low mass flux. This investigation led to the characterization of the local flow pattern through void fraction measurements and, from this, to the development of correlations for void fraction, pressure drop and heat transfer coefficient. The approach is original, since the developed correlations are based on the liquid velocity at the minimum cross-sectional area between tubes as the variable characterizing the hydrodynamic effects on pressure drop and heat transfer coefficient. These correlations are shown to give much better results than those suggested up to now in the literature, which are empirical transpositions of methods developed for in-tube flows. Furthermore, the numerical code MC3D has been applied using the correlations developed in this work, leading to a model of the two-phase flow in the boiler, which is a significant advance compared to current simplified methods. (author) [fr

  17. Tabletop Games: Platforms, Experimental Games and Design Recommendations

    Science.gov (United States)

    Haller, Michael; Forlines, Clifton; Koeffel, Christina; Leitner, Jakob; Shen, Chia

    While the last decade has seen massive improvements in not only the rendering quality, but also the overall performance of console and desktop video games, these improvements have not necessarily led to a greater population of video game players. In addition to continuing these improvements, the video game industry is also constantly searching for new ways to convert non-players into dedicated gamers. Despite the growing popularity of computer-based video games, people still love to play traditional board games, such as Risk, Monopoly, and Trivial Pursuit. Both video and board games have their strengths and weaknesses, and an intriguing prospect is to merge both worlds. We believe that a tabletop form-factor provides an ideal interface for digital board games. The design and implementation of tabletop games will be influenced by the hardware platforms, form factors, sensing technologies, as well as input techniques and devices that are available and chosen. This chapter is divided into three major sections. In the first section, we describe the most recent tabletop hardware technologies that have been used by tabletop researchers and practitioners. In the second section, we discuss a set of experimental tabletop games. The third section presents ten evaluation heuristics for tabletop game design.

  18. Conceptual design study of fusion experimental reactor (FY86 FER)

    International Nuclear Information System (INIS)

    Nakashima, Kunihiko; Okano, Kunihiko; Miyamoto, Kazuhiro.

    1987-09-01

    This report describes the results of a conceptual study on the RF system in the typical candidates for the Fusion Experimental Reactor (FER), which were picked out through the '86 FER scoping studies. According to the FER operation scenario, three RF systems, that is, ICRF (heating), LHRF (current drive and heating) and ECRF (auxiliary heating), were studied. The main concern in these RF systems is the launcher, which must be designed so that the required power can be delivered within the geometrical constraints of the reactor. The studies were therefore concentrated on the launcher configuration. A plug-in concept of the launcher was adopted in each system, and the empty spaces other than the transmission space were filled with water. The ICRF launcher had a 2 x 2 loop array antenna and a Faraday shield area of 1.5 m x 1 m to provide a power of 20 MW. The LHRF launcher had a grill antenna with 28 x 8 open waveguides, and included multi-junction-type power splitters which were connected to 56 transmission waveguides. The grill was designed to have the two functions of current drive and heating, and to provide a power of 20 MW for each. The ECRF launcher had a bundle of open waveguides, each with a reflection mirror, and three plane mirrors. Assuming an oscillator unit size of 200 kW, it had 40 oversized waveguides to provide a power of 3 MW. (author)

  19. Physics design and experimental study of tokamak divertor

    International Nuclear Information System (INIS)

    Yan Jiancheng; Gao Qingdi; Yan Longwen; Wang Mingxu; Deng Baiquan; Zhang Fu; Zhang Nianman; Ran Hong; Cheng Fayin; Tang Yiwu; Chen Xiaoping

    2007-06-01

    The divertor configuration of the HL-2A tokamak is optimized, and the plasma performance in the divertor is simulated with the B2 code. The effects of collisionality on the plasma-wall transition in the scrape-off layer of the divertor are investigated, high performances of the divertor plasma in HL-2A are simulated, and a quasi-stationary RS operation mode is established with the plasma controlled by LHCD and NBI. The HL-2A tokamak has been successfully operated in divertor configuration. The major parameters, plasma current I_p = 320 kA, toroidal field B_t = 2.2 T and plasma discharge duration T_d = 1580 ms, were achieved at the end of 2004. Preliminary experimental research on the advanced divertor has been carried out. Design studies of the divertor target plate for a high power density fusion reactor have been carried out, addressing especially the physical processes on the surface of a flowing liquid lithium target plate. Exploratory research on improving divertor ash removal efficiency and reducing tritium inventory by applying an RF ponderomotive force potential is also reported. Structural optimization design studies of the FEB-E reactor divertor are performed. High flux thermal shock experiments were carried out on tungsten and carbon based materials. The Hot Isostatic Press (HIP) method was employed to bond tungsten to copper alloys. Electron beam simulated thermal fatigue tests were also carried out on W/Cu bonds. Thermal desorption and surface modification of He+ implanted into tungsten have been studied. (authors)

  20. Quasi-experimental designs in practice-based research settings: design and implementation considerations.

    Science.gov (United States)

    Handley, Margaret A; Schillinger, Dean; Shiboski, Stephen

    2011-01-01

    Although randomized controlled trials are often a gold standard for determining intervention effects, in the area of practice-based research (PBR) there are many situations in which individual randomization is not possible. Alternative approaches to evaluating interventions have received increased attention, particularly those that can retain elements of randomization such that they can be considered "controlled" trials. Methodological design elements and practical implementation considerations for two quasi-experimental design approaches that have considerable promise in PBR settings--the stepped-wedge design, and a variant of this design, a wait-list cross-over design--are presented along with a case study from a recent PBR intervention for patients with diabetes. PBR-relevant design features include: creation of a cohort over time that collects control data but allows all participants (clusters or patients) to receive the intervention; staggered introduction of clusters; multiple data collection points; and one-way cross-over into the intervention arm. Practical considerations include: randomization versus stratification; training run-in phases; and an extended time period for overall study completion. Several design features of practice-based research studies can be adapted to local circumstances yet retain elements that improve methodological rigor. Studies that utilize these methods, such as the stepped-wedge design and the wait-list cross-over design, can increase the evidence base for controlled studies conducted within the complex environment of PBR.
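
    The one-way cross-over structure described here is easiest to see as the design matrix itself: every cluster starts in control and steps into the intervention at a staggered period, never stepping back. A minimal sketch, with cluster and period counts chosen for illustration only:

    ```python
    import numpy as np

    def stepped_wedge(n_clusters=4, n_periods=5):
        """0 = control, 1 = intervention; cluster c crosses over at period c + 1."""
        design = np.zeros((n_clusters, n_periods), dtype=int)
        for c in range(n_clusters):
            design[c, c + 1:] = 1
        return design

    print(stepped_wedge())
    # [[0 1 1 1 1]
    #  [0 0 1 1 1]
    #  [0 0 0 1 1]
    #  [0 0 0 0 1]]
    ```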

  1. Experimental verification of the statistical theories of scaling factor effect in fatigue fracture of steel

    International Nuclear Information System (INIS)

    Svistun, R.P.; Babej, Yu.I.; Tkachenko, N.N.

    1976-01-01

    Statistical theories of the scale effect in the fatigue failure of 40KH18N9T, 10 and 20 steels have been verified. The theories are shown not to be invariably suitable for a satisfactory explanation of the dependence of the samples' fatigue strength on their dimensions. One of the main reasons for the appearance of the scale effect in steel fatigue is sample self-heating, i.e. a temperature factor which in many cases outweighs the statistical one.

  2. Experimental verification of the statistical theories of scaling factor effect in fatigue fracture of steel

    Energy Technology Data Exchange (ETDEWEB)

    Svistun, R P; Babei, Yu I; Tkachenko, N N [AN Ukrainskoj SSR, Lvov. Fiziko-Mekhanicheskij Inst.; L' vovskij Lesotekhnicheskij Inst. (Ukrainian SSR))

    1976-01-01

    Statistical theories of the scale effect in the fatigue failure of 40KH18N9T, 10 and 20 steels have been verified. The theories are shown not to be invariably suitable for a satisfactory explanation of the dependence of the samples' fatigue strength on their dimensions. One of the main reasons for the appearance of the scale effect in steel fatigue is sample self-heating, i.e. a temperature factor which in many cases outweighs the statistical one.

  3. System design overview of JAXA small supersonic experimental airplane (NEXST-1)

    OpenAIRE

    Takami, Hikaru; 高見 光

    2007-01-01

    The system of the JAXA small supersonic experimental airplane (NEXST-1: National EXperimental Supersonic Transport-1) is briefly explained, along with some of the design problems that the designers encountered.

  4. Statistical properties of SASE FEL radiation: experimental results from the VUV FEL at the TESLA test facility at DESY

    International Nuclear Information System (INIS)

    Yurkov, M.V.

    2002-01-01

    This paper presents an experimental study of the statistical properties of the radiation from a SASE FEL. The experiments were performed at the TESLA Test Facility VUV SASE FEL at DESY, operating in the high-gain linear regime with a gain of about 10^6. It is shown that the fluctuations of the output radiation energy follow a gamma distribution. We also measured for the first time the probability distribution of the SASE radiation energy after a narrow-band monochromator. The experimental results are in good agreement with theoretical predictions; the energy fluctuations after the monochromator follow a negative exponential distribution.
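
    The two distributions mentioned here are linked: the gamma distribution for SASE pulse energy has the number of longitudinal modes M as its shape parameter, and a narrow-band monochromator reduces the effective M to 1, for which the gamma distribution degenerates into the negative exponential. A Monte Carlo sketch of this relationship, with M chosen arbitrarily:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    M, mean_energy = 8, 1.0  # M = illustrative number of longitudinal modes

    # Full SASE pulse energy: gamma with shape M, relative rms = 1/sqrt(M)
    full = rng.gamma(shape=M, scale=mean_energy / M, size=100_000)
    print(f"full beam:     sigma/mean = {full.std() / full.mean():.3f}"
          f" (theory {1 / np.sqrt(M):.3f})")

    # After a narrow-band monochromator, M -> 1: negative exponential
    mono = rng.gamma(shape=1, scale=mean_energy, size=100_000)
    print(f"monochromator: sigma/mean = {mono.std() / mono.mean():.3f} (theory 1.000)")
    ```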

  5. A statistical/computational/experimental approach to study the microstructural morphology of damage

    NARCIS (Netherlands)

    Hoefnagels, J. P. M.; Du, C.; de Geus, T. W. J.; Peerlings, R. H. J.; Geers, M. G. D.; Beese, A.M.; Zehnder, A.T.; Xia, Sh.

    2016-01-01

    The fracture behavior of multi-phase materials is not well understood. Therefore, a statistical study of micro-failures is conducted to deepen our insight into the failure mechanisms. We systematically studied the influence of the morphology of dual phase (DP) steel on the fracture behavior at the

  6. Inferential statistics, power estimates, and study design formalities continue to suppress biomedical innovation

    OpenAIRE

    Kern, Scott E.

    2014-01-01

    Innovation is the direct intended product of certain styles in research, but not of others. Fundamental conflicts between descriptive vs inferential statistics, deductive vs inductive hypothesis testing, and exploratory vs pre-planned confirmatory research designs have been played out over decades, with winners and losers and consequences. Longstanding warnings from both academics and research-funding interests have failed to influence effectively the course of these battles. The NIH publicly...

  7. Experimental and Statistical Analysis of MgO Nanofluids for Thermal Enhancement in a Novel Flat Plate Heat Pipes

    Science.gov (United States)

    Pandiaraj, P.; Gnanavelbabu, A.; Saravanan, P.

    Metal-oxide nanofluids such as CuO, Al2O3, ZnO, SiO2 and TiO2 have been widely used in the development of working fluids for flat plate heat pipes, with the notable exception of magnesium oxide (MgO). This motivated our use of MgO nanofluids as the working fluid material in a flat plate heat pipe. MgO nanopowders were synthesized by a wet chemical method. Solid state characterizations of the synthesized nanopowders were carried out by Ultraviolet Spectroscopy (UV), Fourier Transform Infrared Spectroscopy (FTIR), Scanning Electron Microscopy (SEM) and X-ray Diffraction (XRD) techniques. The synthesized nanopowders were prepared as nanofluids by adding water, as well as a water/ethylene glycol binary mixture. Thermal conductivity measurements of the prepared nanofluids were made using a transient hot-wire apparatus. Response surface methodology based on the Box-Behnken design was implemented to investigate the influence of temperature (30-60 °C), particle fraction (1.5-4.5 vol.%), and solution pH (4-12) of the nanofluids as the independent variables. A total of 17 experiments were performed for the construction of second-order polynomial equations for the target output. All the influential factors, their mutual effects and their quadratic terms were statistically validated by analysis of variance (ANOVA). The optimum stability and thermal conductivity of the MgO nanofluids at various temperatures, volume fractions and solution pH values were predicted and compared with experimental results. The results revealed that increasing the particle fraction and pH of the MgO nanofluids up to certain points increases the thermal conductivity, which becomes stable at the nominal temperature.
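
    A 17-run plan for three factors is the standard Box-Behnken layout: 12 edge-midpoint runs (each pair of factors at ±1 with the third at 0) plus 5 center replicates. The sketch below builds that matrix in coded units and maps it onto the stated ranges; it is a generic reconstruction of the design type, not the authors' run order.

    ```python
    import itertools
    import numpy as np

    def box_behnken_3(centers=5):
        """Three-factor Box-Behnken design in coded (-1, 0, +1) units."""
        runs = []
        for i, j in itertools.combinations(range(3), 2):  # each factor pair
            for a, b in itertools.product((-1, 1), repeat=2):
                row = [0.0, 0.0, 0.0]
                row[i], row[j] = a, b
                runs.append(row)
        runs += [[0.0, 0.0, 0.0]] * centers               # center replicates
        return np.array(runs)

    coded = box_behnken_3()                               # 12 + 5 = 17 runs
    lo = np.array([30.0, 1.5, 4.0])                       # temp (°C), vol.%, pH
    hi = np.array([60.0, 4.5, 12.0])
    print(lo + (coded + 1) / 2 * (hi - lo))               # design in real units
    ```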

  8. Removal of thorium(IV) from aqueous solution by biosorption onto modified powdered waste sludge. Experimental design approach

    International Nuclear Information System (INIS)

    Yunus Pamukoglu, M.; Mustafa Senyurt; Bulent Kirkan

    2017-01-01

    The biosorption of radioactive Th(IV) ions from aqueous solutions onto modified powdered waste sludge (MPWS) has been examined. In this context, the parameters affecting the biosorption of Th(IV) from aqueous solutions have been examined using the MPWS biosorbent in a Box-Behnken statistical experimental design. The structure of the MPWS biosorbent was characterized using SEM and BET techniques. According to the experimental design results, the MPWS and Th(IV) concentrations should be kept high to achieve the maximum efficiency in Th(IV) biosorption. Moreover, MPWS is an economical, effective and natural biosorbent. (author)

  9. Design review of the Brazilian Experimental Solar Telescope

    Science.gov (United States)

    Dal Lago, A.; Vieira, L. E. A.; Albuquerque, B.; Castilho, B.; Guarnieri, F. L.; Cardoso, F. R.; Guerrero, G.; Rodríguez, J. M.; Santos, J.; Costa, J. E. R.; Palacios, J.; da Silva, L.; Alves, L. R.; Costa, L. L.; Sampaio, M.; Dias Silveira, M. V.; Domingues, M. O.; Rockenbach, M.; Aquino, M. C. O.; Soares, M. C. R.; Barbosa, M. J.; Mendes, O., Jr.; Jauer, P. R.; Branco, R.; Dallaqua, R.; Stekel, T. R. C.; Pinto, T. S. N.; Menconi, V. E.; Souza, V. M. C. E. S.; Gonzalez, W.; Rigozo, N.

    2015-12-01

    Brazil's National Institute for Space Research (INPE), in collaboration with the Engineering School of Lorena/University of São Paulo (EEL/USP), the Federal University of Minas Gerais (UFMG), and Brazil's National Laboratory for Astrophysics (LNA), is developing a solar vector magnetograph and visible-light imager to study solar processes through observations of the solar surface magnetic field. The Brazilian Experimental Solar Telescope is designed to obtain full disk magnetic field and line-of-sight velocity observations in the photosphere. Here we discuss the system requirements and the first design review of the instrument. The instrument is composed of a Ritchey-Chrétien telescope with a 500 mm aperture and 4000 mm focal length. LCD polarization modulators will be employed for the polarization analysis and a tunable Fabry-Perot filter for the wavelength scanning near the Fe I 630.25 nm line. Two large field-of-view, high-resolution 5.5 megapixel sCMOS cameras will be employed as sensors. Additionally, we describe the project management and system engineering approaches employed in this project. As the magnetic field anchored at the solar surface produces most of the structures and energetic events in the upper solar atmosphere and significantly influences the heliosphere, the development of this instrument plays an important role in advancing scientific knowledge in this field. In particular, Brazil's Space Weather program will benefit most from the development of this technology. We expect that this project will be the starting point for establishing a strong research program on Solar Physics in Brazil. Our main aim is to progressively acquire the know-how to build state-of-the-art solar vector magnetographs and visible-light imagers for space-based platforms.

  10. Experimental investigation of local properties and statistics of optical vortices in random wave fields

    DEFF Research Database (Denmark)

    Wang, W.; Hanson, Steen Grüner; Miyamoto, Y.

    2005-01-01

    We present the first direct experimental evidence of the local properties of optical vortices in a random laser speckle field. We have observed the Berry anisotropy ellipse describing the anisotropic squeezing of phase lines close to vortex cores and quantitatively verified the Dennis angular momentum rule.

  11. Experimental Design Strategy As Part of an Innovative Construction Industry

    NARCIS (Netherlands)

    Rogier Laterveer

    2013-01-01

    This exploratory and conceptual article sets out to research what arguments and possibilities for experimentation in construction exist, and whether experimentation can contribute towards more innovative construction as a whole. Traditional Western construction is very conservative and regional, often

  12. Combined data preprocessing and multivariate statistical analysis characterizes fed-batch culture of mouse hybridoma cells for rational medium design.

    Science.gov (United States)

    Selvarasu, Suresh; Kim, Do Yun; Karimi, Iftekhar A; Lee, Dong-Yup

    2010-10-01

    We present an integrated framework for characterizing fed-batch cultures of mouse hybridoma cells producing monoclonal antibody (mAb). This framework systematically combines data preprocessing, elemental balancing and statistical analysis techniques. Initially, specific rates of cell growth, glucose/amino acid consumption and mAb/metabolite production were calculated via curve fitting using logistic equations, with subsequent elemental balancing of the preprocessed data indicating the presence of experimental measurement errors. Multivariate statistical analysis was then employed to understand the physiological characteristics of the cellular system. The results from principal component analysis (PCA) revealed three major clusters of amino acids with similar trends in their consumption profiles: (i) arginine, threonine and serine; (ii) glycine, tyrosine, phenylalanine, methionine, histidine and asparagine; and (iii) lysine, valine and isoleucine. Further analysis using partial least squares (PLS) regression identified key amino acids which were positively or negatively correlated with cell growth, mAb production and the generation of lactate and ammonia. Based on these results, the optimal concentrations of key amino acids in the feed medium can be inferred, potentially leading to an increase in cell viability and productivity, as well as a decrease in toxic waste production. The study demonstrated how the current methodological framework using multivariate statistical analysis techniques can serve as a potential tool for deriving rational medium design strategies. Copyright © 2010 Elsevier B.V. All rights reserved.
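
    The clustering step can be reproduced in outline with scikit-learn: standardize the specific-rate profiles, project the amino acids into principal-component space, and look for groups. The data below are random stand-ins for the measured consumption rates, so the code only sketches the workflow.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    amino_acids = ["Arg", "Thr", "Ser", "Gly", "Tyr", "Lys", "Val", "Ile"]
    rng = np.random.default_rng(1)

    # Rows = amino acids, columns = specific consumption rates over time points
    profiles = rng.normal(size=(len(amino_acids), 20))

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(profiles))
    for aa, (pc1, pc2) in zip(amino_acids, scores):
        print(f"{aa}: PC1 = {pc1:+.2f}, PC2 = {pc2:+.2f}")  # nearby points cluster
    ```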

  13. Directions for new developments on statistical design and analysis of small population group trials.

    Science.gov (United States)

    Hilgers, Ralf-Dieter; Roes, Kit; Stallard, Nigel

    2016-06-14

    Most statistical design and analysis methods for clinical trials have been developed and evaluated in settings where at least several hundred patients could be recruited. These methods may not be suitable to evaluate therapies if the sample size is unavoidably small, which is usually termed small populations. The specific sample size cut-off where the standard methods fail needs to be investigated. In this paper, the authors present their view on new developments for the design and analysis of clinical trials in small population groups, where conventional statistical methods may be inappropriate, e.g., because of lack of power or poor adherence to asymptotic approximations due to sample size restrictions. Following the EMA/CHMP guideline on clinical trials in small populations, we consider directions for new developments in the area of statistical methodology for the design and analysis of small population clinical trials. We relate the findings to the research activities of three projects, Asterix, IDeAl, and InSPiRe, which have received funding since 2013 within the FP7-HEALTH-2013-INNOVATION-1 framework of the EU. As not all aspects of the wide research area of small population clinical trials can be addressed, we focus on areas where we feel advances are needed and feasible. The general framework of the EMA/CHMP guideline on small population clinical trials stimulates a number of research areas. These serve as the basis for the three projects, Asterix, IDeAl, and InSPiRe, which use various approaches to develop new statistical methodology for design and analysis of small population clinical trials. Small population clinical trials refer to trials with a limited number of patients. Small populations may result from rare diseases or specific subtypes of more common diseases. New statistical methodology needs to be tailored to these specific situations. The main results from the three projects will constitute a useful toolbox for improved design and analysis of small population clinical trials.

  14. Conceptual design study of fusion experimental reactor (FY86FER)

    International Nuclear Information System (INIS)

    Nakashima, Kunihiko; Yamamoto, Shin; Ohara, Yoshihiro; Watanabe, Kazuhiro; Mizuno, Makoto; Araki, Masanori; Uede, Taisei; Okano, Kunihiko.

    1987-09-01

    This report describes the results of applicability studies of the negative-ion-based neutral beam injector for the Fusion Experimental Reactor (FER). The FER operation scenario proposes the neutral injection method as one of the candidates, with the three functions of heating, current drive and profile control. One of the fundamental requirements is tangential injection of the neutral beam. Three port sections are available for neutral beam injectors. Assuming the adoption of a beam line with the long straight neutralizer designed at JAERI, the geometrical arrangement was determined so as to avoid any interference with the reactor structure. A conceptual study of the major components of the beam line system was carried out, including an estimation of the neutron streaming. The power supply system was also studied, with the work concentrated on the acceleration power supply, which requires an output voltage of 500 kV and a fast cut-off time. A basic concept was proposed in which an inverter with an AC switch is used and the frequency of the supplied AC line is increased. Through this work, the configuration of the neutral beam injection system was detailed, and it was shown that the beam line can be well accommodated within the geometrical constraints of the reactor configuration. (author)

  15. Machine Learning and Experimental Design for Hydrogen Cosmology

    Science.gov (United States)

    Rapetti, David; Tauscher, Keith A.; Burns, Jack O.; Mirocha, Jordan; Switzer, Eric; Monsalve, Raul A.; Furlanetto, Steven R.; Bowman, Judd D.

    2018-06-01

    Based on two powerful innovations, we present a new pipeline to analyze the redshifted sky-averaged 21-cm spectrum (~10-200 MHz) of neutral hydrogen from the first stars, galaxies and black holes. First, we combine machine learning and model selection techniques to extract the global 21-cm signal from foreground and instrumental systematics. Second, we employ experimental designs to increase our ability to separate these two components in data sets. For measurements with foreground polarization induced by rotation about the anisotropic low-frequency radio sky on a large beam, we incorporate this information into the likelihood to distinguish the unpolarized 21-cm signal from the rest of the data. For experiments with a drift scan strategy, we take advantage of the varying foreground in time to identify the constant 21-cm signal. This pipeline can be applied to either lunar orbit/surface instruments shielded from terrestrial and solar radio contamination, or existing ground-based observations, such as those from the EDGES collaboration that recently observed an absorption trough potentially consistent with the global 21-cm signal of Cosmic Dawn. Finally, this pipeline allows us to constrain physical parameters for a given model of the first luminous objects plus exotic physics in the early universe, from e.g. dark matter, through an MCMC analysis that uses the extracted signal as a starting point, providing key efficiency for unexplored cosmologies.

  16. Validation of a buffet meal design in an experimental restaurant.

    Science.gov (United States)

    Allirot, Xavier; Saulais, Laure; Disse, Emmanuel; Roth, Hubert; Cazal, Camille; Laville, Martine

    2012-06-01

    We assessed the reproducibility of intakes and meal mechanics parameters (cumulative energy intake (CEI), number of bites, bite rate, mean energy content per bite) during a buffet meal designed in a natural setting, and their sensitivity to food deprivation. Fourteen men were invited to three lunch sessions in an experimental restaurant. Subjects ate their regular breakfast before sessions A and B. They skipped breakfast before session FAST. The same ad libitum buffet was offered each time. Energy intakes and meal mechanics were assessed by weighing the foods and by video recording. Intrasubject reproducibility was evaluated by determining intraclass correlation coefficients (ICC). Mixed models were used to assess the effects of the sessions on CEI. We found good reproducibility between A and B for total energy (ICC=0.82), carbohydrate (ICC=0.83), lipid (ICC=0.81) and protein intake (ICC=0.79), and for meal mechanics parameters. Total energy, lipid and carbohydrate intakes were higher in FAST than in A and B. CEI were found to be sensitive to differences in hunger level, while the other meal mechanics parameters were stable between sessions. In conclusion, a buffet meal in a normal eating environment is a valid tool for assessing the effects of interventions on intakes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Tokamak experimental power reactor conceptual design. Volume I

    International Nuclear Information System (INIS)

    1976-08-01

    A conceptual design has been developed for a tokamak Experimental Power Reactor to operate at net electrical power conditions with a plant capacity factor of 50 percent for 10 years. The EPR operates in a pulsed mode at a frequency of approximately 1/min, with an approximately 75 percent duty cycle, is capable of producing approximately 72 MWe and requires 42 MWe. The annual tritium consumption is 16 kg. The EPR vacuum chamber is 6.25 m in major radius and 2.4 m in minor radius, is constructed of 2-cm thick stainless steel, and has 2-cm thick detachable, beryllium-coated coolant panels mounted on the interior. A 0.28 m stainless steel blanket and a shield ranging from 0.6 to 1.0 m surround the vacuum vessel. The coolant is H2O. Sixteen niobium-titanium superconducting toroidal-field coils provide a field of 10 T at the coil and 4.47 T at the plasma. Superconducting ohmic-heating and equilibrium-field coils provide 135 V-s to drive the plasma current. Plasma heating is accomplished by 12 neutral-beam injectors, which provide 60 MW. The energy transfer and storage system consists of a central superconducting storage ring, a homopolar energy storage unit, and a variety of inductor-converters

  18. Study design and statistical analysis of data in human population studies with the micronucleus assay.

    Science.gov (United States)

    Ceppi, Marcello; Gallo, Fabio; Bonassi, Stefano

    2011-01-01

    The most common study design in population studies based on the micronucleus (MN) assay is the cross-sectional study, which is largely performed to evaluate the DNA damaging effects of exposure to genotoxic agents in the workplace, in the environment, as well as from diet or lifestyle factors. Sample size is still a critical issue in the design of MN studies, since most recent studies considering gene-environment interaction often require a sample size of several hundred subjects, which is in many cases difficult to achieve. The control of confounding is another major threat to the validity of causal inference. The most popular confounders considered in population studies using MN are age, gender and smoking habit. Extensive attention is given to the assessment of effect modification, given the increasing inclusion of biomarkers of genetic susceptibility in the study design. Selected issues concerning the statistical treatment of data are addressed in this mini-review, starting from data description, which is a critical step of statistical analysis, since it allows the detection of possible errors in the dataset to be analysed and checks the validity of the assumptions required for more complex analyses. Basic issues in the statistical analysis of biomarkers are extensively evaluated, including methods to explore the dose-response relationship between two continuous variables and inferential analysis. A critical approach to the use of parametric and non-parametric methods is presented before addressing the issue of the most suitable multivariate models to fit MN data. In the last decade, the quality of statistical analysis of MN data has certainly evolved, although even nowadays only a small number of studies apply the Poisson model, which is the most suitable method for the analysis of MN data.
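
    For the Poisson model advocated here, a minimal sketch with statsmodels is shown below; the exposure, age and smoking covariates and the MN counts are simulated, so the coefficients only illustrate how such a model is specified and read (exp(coef) is a frequency ratio).

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 200

    # Simulated cross-sectional data: MN counts per 1000 binucleated cells
    df = pd.DataFrame({
        "exposed": rng.integers(0, 2, n),
        "age": rng.uniform(20, 60, n),
        "smoker": rng.integers(0, 2, n),
    })
    rate = np.exp(0.8 + 0.4 * df.exposed + 0.01 * df.age + 0.2 * df.smoker)
    df["mn_count"] = rng.poisson(rate)

    fit = smf.poisson("mn_count ~ exposed + age + smoker", data=df).fit(disp=False)
    print(np.exp(fit.params))  # frequency ratios for each covariate
    ```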

  19. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    Science.gov (United States)

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
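
    For orientation, the between-groups d to which the authors calibrate their single-case statistic is the familiar standardized mean difference, shown below with Hedges' small-sample correction. This is the conventional benchmark, not the single-case estimator derived in the paper; the data are invented.

    ```python
    import numpy as np

    def hedges_g(treatment, control):
        """Between-groups standardized mean difference with small-sample correction."""
        t, c = np.asarray(treatment, float), np.asarray(control, float)
        nt, nc = len(t), len(c)
        pooled_sd = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1))
                            / (nt + nc - 2))
        j = 1 - 3 / (4 * (nt + nc) - 9)  # Hedges' correction factor
        return j * (t.mean() - c.mean()) / pooled_sd

    print(hedges_g([12, 14, 15, 13, 16], [10, 11, 9, 12, 10]))
    ```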

  20. Batch phenol biodegradation study and application of factorial experimental design

    Directory of Open Access Journals (Sweden)

    A. Hellal

    2010-01-01

    Full Text Available A bacterium, Pseudomonas aeruginosa (ATCC 27853), was investigated for its ability to grow and to degrade phenol as sole carbon source in aerobic batch culture. The parameters which affect substrate biodegradation, such as the adaptation of the bacteria to phenol, the temperature, and the nature of the bacteria, were investigated. The results show that, over a temperature range of 30 to 40 °C, the best degradation of phenol at a concentration of 100 mg/l was observed at 30 °C. The regeneration of the bacterium, which allows the reactivation of its enzymatic activity, shows that the degradation of 100 mg/l of substrate at 30 °C required approximately 50 hours with revivified bacteria, while it only starts after 72 hours for those not revivified. Adaptation to increasing concentrations allows the bacteria to degrade a substrate concentration of about 400 mg/l in less than 350 hours. A second part consisted of the determination of a substrate degradation model using factorial experimental design, as a function of temperature (30-40 °C) and of the size of the inoculum (260.88-521.76 mg/l). The results were analyzed statistically using Student's t-test, analysis of variance, and the F-test. The values of R² (0.99872) and adjusted R² (0.9962), close to 1.0, verify the good correlation between the observed and predicted values and confirm the excellent relationship between the independent variables (factors) and the response (the time of phenol degradation). The F-value, found to be above 200, indicates that the considered model is statistically significant.

  1. Design and implementation of a modular program system for the carrying-through of statistical analyses

    International Nuclear Information System (INIS)

    Beck, W.

    1984-01-01

    The complexity of computer programs for the solution of scientific and technical problems gives rise to many questions. Typical questions concern the strengths and weaknesses of computer programs, the propagation of uncertainties among the input data, the sensitivity of the output data to the input data, and the substitution of complex models by simpler ones which provide equivalent results in certain ranges. These questions have general practical significance; answers may in principle be found by statistical methods based on the Monte Carlo method. In this report, suitable statistical methods are chosen, described and evaluated. They are implemented in the modular program system STAR, which is a component of the program system RSYST. The design of STAR takes into account: users with different levels of knowledge of data processing and statistics; the variety of statistical methods and of generating and evaluating procedures; the processing of large data sets with complex structures; coupling to other components of RSYST and to programs outside RSYST; and easy modification and enlargement of the system. Four examples are given which demonstrate the application of STAR. (orig.) [de

  2. Demonstration of a software design and statistical analysis methodology with application to patient outcomes data sets.

    Science.gov (United States)

    Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard

    2013-11-01

    With the emergence of clinical outcomes databases as tools utilized routinely within institutions comes the need for software tools to support automated statistical analysis of these large data sets and inter-institutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code were evaluated using benchmark data sets. The approach provides the data needed to evaluate combinations of statistical measurements for their ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operating characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. The work demonstrates the viability of the design approach and the software tool for analysis of large data sets.
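
    The filtering step described here can be sketched with scipy and scikit-learn: pick a candidate threshold from the ROC curve, then test the resulting split with a Fisher exact test and compare the group distributions with Welch t and Kolmogorov-Smirnov tests. The dose/outcome data below are synthetic, and the pipeline is a simplified outline of the paper's algorithm, not its implementation.

    ```python
    import numpy as np
    from scipy import stats
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(3)
    dose = rng.uniform(0, 60, 120)                          # synthetic dose metric
    outcome = (rng.uniform(0, 60, 120) < dose).astype(int)  # toxicity, dose-driven

    # Candidate threshold from the ROC curve (Youden index)
    fpr, tpr, thr = roc_curve(outcome, dose)
    cut = thr[np.argmax(tpr - fpr)]

    # 2x2 contingency table at the threshold -> Fisher exact test
    table = [[np.sum((dose >= cut) & (outcome == 1)), np.sum((dose >= cut) & (outcome == 0))],
             [np.sum((dose < cut) & (outcome == 1)), np.sum((dose < cut) & (outcome == 0))]]
    _, p_fisher = stats.fisher_exact(table)

    # Distribution comparisons between outcome groups
    _, p_welch = stats.ttest_ind(dose[outcome == 1], dose[outcome == 0], equal_var=False)
    _, p_ks = stats.ks_2samp(dose[outcome == 1], dose[outcome == 0])
    print(f"threshold ~ {cut:.1f}; Fisher p={p_fisher:.3g}, "
          f"Welch p={p_welch:.3g}, KS p={p_ks:.3g}")
    ```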

  3. Designing Tasks to Examine Mathematical Knowledge for Teaching Statistics for Primary Teachers

    Science.gov (United States)

    Siswono, T. Y. E.; Kohar, A. W.; Hartono, S.

    2018-01-01

    Mathematical knowledge for teaching (MKT) can be viewed as the fuel for orchestrating a teaching and learning process. Understanding MKT, especially for primary teachers, makes it possible to predict the success of an instructional goal and to analyze its weaknesses and possible improvements. To explore what teachers think about subject matter, pedagogical terms, and appropriate curriculum, a task is needed through which teachers' MKT, including subject matter knowledge (SMK) and pedagogical content knowledge (PCK), can be identified. This study aims to design an appropriate task for exploring primary teachers' MKT for statistics in primary school. We designed six tasks to examine 40 primary teachers' MKT, each respectively representing the categories of SMK (common content knowledge (CCK) and specialised content knowledge (SCK)) and PCK (knowledge of content and students (KCS), knowledge of content and teaching (KCT), and knowledge of content and curriculum (KCC)). While MKT has received much attention from scholars, we hypothesize knowledge of content and culture (KCCl) as an additional domain of MKT. Thus, we added one more task examining how the primary teachers used their knowledge of content (KC) with regard to MKT in statistics. Some examples of the teachers' responses to the tasks are discussed, and some refinements of the MKT task in statistics for primary teachers are suggested.

  4. Effect of carboxymethylcellulose on the rheological and filtration properties of bentonite clay samples determined by experimental planning and statistical analysis

    Directory of Open Access Journals (Sweden)

    B. M. A. Brito

    Full Text Available Over the past few years, considerable research has been conducted using the techniques of mixture design and statistical modeling. Through this methodology, applications in various technological fields have been found and optimized, especially in clay technology, leading to greater efficiency and reliability. This work studied the influence of carboxymethylcellulose on the rheological and filtration properties of bentonite dispersions to be applied in water-based drilling fluids, using experimental planning and statistical analysis for clay mixtures. The dispersions were prepared according to Petrobras standard EP-1EP-00011-A, which deals with the testing of water-based drilling fluid viscosifiers for oil prospecting. The clay mixtures were transformed into sodic compounds, and carboxymethylcellulose additives of high and low molar mass were added in order to improve their rheology and filtrate volume. Experimental planning and statistical analysis were used to verify the effect. Regression models were calculated for the relation between the compositions and the following properties: apparent viscosity, plastic viscosity, and filtrate volume. The significance and validity of the models were confirmed. The results showed that the 3D response surfaces of the compositions with high molecular weight carboxymethylcellulose added were the ones that most contributed to the rise in apparent viscosity and plastic viscosity, and that those with low molecular weight were the ones that most helped in the reduction of the filtrate volume. Another important observation is that experimental planning and statistical analysis can be used as important auxiliary tools to optimize the rheological properties and filtrate volume of bentonite clay dispersions for use in drilling fluids when carboxymethylcellulose is added.

  5. ELECTRONIC SYSTEM FOR EXPERIMENTATION IN AC ELECTROGRAVIMETRY II: IMPLEMENTED DESIGN

    Directory of Open Access Journals (Sweden)

    Robinson Torres

    2007-06-01

    Full Text Available A detailed description of the electronic system designed to improve the measurements in an experimental AC electrogravimetry setup is presented. This system acquires the data needed to determine the Electrogravimetric Transfer Function (EGTF) and provides information on the mass transfer in an electrochemical cell in the AC electrogravimetry technique, while maintaining a good trade-off between the locking frequency bandwidth and the resolution of the frequency tracking; that is, the bandwidth of the system is enlarged to follow signals with frequencies as high as 1 kHz while maintaining accurate and continuous tracking of the signal. The enlarged bandwidth allows the study of fast kinetic processes in electrochemical applications, and the continuous tracking achieves precise measurements with good resolution, rather than the averaged frequency records obtained by conventional frequency meters. The system is based on an Analogue-Digital Phase Locked Loop (A-D PLL).

  6. Statistical Information and Uncertainty: A Critique of Applications in Experimental Psychology

    Directory of Open Access Journals (Sweden)

    Donald Laming

    2010-04-01

    Full Text Available This paper presents, first, a formal exploration of the relationships between information (statistically defined), statistical hypothesis testing, the use of hypothesis testing in reverse as an investigative tool, channel capacity in a communication system, uncertainty, the concept of entropy in thermodynamics, and Bayes' theorem. This exercise brings out the close mathematical interrelationships between different applications of these ideas in diverse areas of psychology. Subsequent illustrative examples are grouped under (a) the human operator as an ideal communications channel, (b) the human operator as a purely physical system, and (c) Bayes' theorem as an algorithm for combining information from different sources. Some tentative conclusions are drawn about the usefulness of information theory within these different categories. (a) The idea of the human operator as an ideal communications channel has long been abandoned, though it provides some lessons that still need to be absorbed today. (b) Treating the human operator as a purely physical system provides a platform for the quantitative exploration of many aspects of human performance by analogy with the analysis of other physical systems. (c) The use of Bayes' theorem to calculate the effects of prior probabilities and stimulus frequencies on human performance is probably misconceived, but it is difficult to obtain results precise enough to resolve this question.
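
    Purely as a formal illustration of point (c), the sketch below applies Bayes' theorem sequentially to combine a prior with two independent evidence sources; the hit and false-alarm rates are invented numbers, not data from the paper.

    ```python
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        """P(H | E) for one piece of evidence via Bayes' theorem."""
        num = prior * p_e_given_h
        return num / (num + (1 - prior) * p_e_given_not_h)

    # A rare signal (prior 0.1) and two independent observers, each
    # updating the previous posterior with its own hit/false-alarm rates.
    p = 0.1
    for hit_rate, false_alarm in [(0.8, 0.2), (0.7, 0.3)]:
        p = posterior(p, hit_rate, false_alarm)
        print(f"posterior after this observer: {p:.3f}")
    ```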

  7. Application of Statistical Design to the Optimization of Culture Medium for Prodigiosin Production by Serratia marcescens SWML08

    Directory of Open Access Journals (Sweden)

    Venil, C. K.

    2009-01-01

    Full Text Available A combination of Plackett–Burman design (PBD) and Box–Behnken design (BBD) was applied to optimize different factors for prodigiosin production by Serratia marcescens SWML08. Among 11 factors, incubation temperature and supplementation of the culture medium with (NH4)2PO4 and trace salts were selected due to their significant positive effect on prodigiosin yield. Box–Behnken design, a response surface methodology, was used for further optimization of these selected factors for better prodigiosin output. Data were analyzed stepwise and a second-order polynomial model was established to identify the relationship between the prodigiosin output and the selected factors. The media formulations were optimized with incubation temperature 30 °C, (NH4)2PO4 6 g/L and trace salts 0.6 g/L. The maximum experimental response for prodigiosin production was 1397.96 mg/L, whereas the predicted value was 1394.26 mg/L. The high correlation between the predicted and observed values indicated the validity of the statistical design.
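
    As an illustration of the record above, the sketch below generates a three-factor Box–Behnken design in coded units and fits a second-order polynomial by least squares, the same model class used to relate prodigiosin output to the selected factors. The response values are random placeholders, not the published data; this is a minimal sketch of the technique, not the authors' code.

```python
import itertools
import numpy as np

# Box-Behnken design for 3 factors (coded units): edge midpoints at +/-1 on
# two factors with the third at 0, plus replicated centre points.
def bbdesign3(n_center=3):
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0.0, 0.0, 0.0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0.0, 0.0, 0.0]] * n_center
    return np.array(runs)

X = bbdesign3()                      # 15 runs, 3 coded factors
# Hypothetical prodigiosin yields (mg/L), one per run -- placeholders only.
y = np.random.default_rng(1).normal(1200, 80, len(X))

# Second-order model matrix: intercept, linear, interaction and square terms.
M = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 2],
                     X[:, 0]*X[:, 1], X[:, 0]*X[:, 2], X[:, 1]*X[:, 2],
                     X[:, 0]**2, X[:, 1]**2, X[:, 2]**2])
beta, *_ = np.linalg.lstsq(M, y, rcond=None)
print(beta)                          # fitted polynomial coefficients
```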

  8. Approach toward enhancement of halophilic protease production by Halobacterium sp. strain LBU50301 using statistical design response surface methodology.

    Science.gov (United States)

    Chuprom, Julalak; Bovornreungroj, Preeyanuch; Ahmad, Mehraj; Kantachote, Duangporn; Dueramae, Sawitree

    2016-06-01

    A new potent halophilic protease producer, Halobacterium sp. strain LBU50301, was isolated from salt-fermented fish samples (budu) and identified by phenotypic analysis and 16S rDNA gene sequencing. Thereafter, a sequential statistical strategy was used to optimize halophilic protease production from Halobacterium sp. strain LBU50301 by shake-flask fermentation. The classical one-factor-at-a-time (OFAT) approach determined that gelatin was the best nitrogen source. Based on Plackett–Burman (PB) experimental design, gelatin, MgSO4·7H2O, NaCl and pH significantly influenced the halophilic protease production. Central composite design (CCD) determined the optimum level of medium components. Subsequently, an 8.78-fold increase in the corresponding halophilic protease yield (156.22 U/mL) was obtained, compared with that produced in the original medium (17.80 U/mL). Validation experiments proved the adequacy and accuracy of the model, and the results showed the predicted values agreed well with the experimental values. An overall 13-fold increase in halophilic protease yield was achieved using a 3 L laboratory fermenter and optimized medium (231.33 U/mL).

  9. Approach toward enhancement of halophilic protease production by Halobacterium sp. strain LBU50301 using statistical design response surface methodology

    Directory of Open Access Journals (Sweden)

    Julalak Chuprom

    2016-06-01

    Full Text Available A new potent halophilic protease producer, Halobacterium sp. strain LBU50301, was isolated from salt-fermented fish samples (budu) and identified by phenotypic analysis and 16S rDNA gene sequencing. Thereafter, a sequential statistical strategy was used to optimize halophilic protease production from Halobacterium sp. strain LBU50301 by shake-flask fermentation. The classical one-factor-at-a-time (OFAT) approach determined that gelatin was the best nitrogen source. Based on Plackett–Burman (PB) experimental design, gelatin, MgSO4·7H2O, NaCl and pH significantly influenced the halophilic protease production. Central composite design (CCD) determined the optimum level of medium components. Subsequently, an 8.78-fold increase in the corresponding halophilic protease yield (156.22 U/mL) was obtained, compared with that produced in the original medium (17.80 U/mL). Validation experiments proved the adequacy and accuracy of the model, and the results showed the predicted values agreed well with the experimental values. An overall 13-fold increase in halophilic protease yield was achieved using a 3 L laboratory fermenter and optimized medium (231.33 U/mL).
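
    A central composite design of the kind used here is easy to construct directly. The sketch below builds the standard CCD in coded units (factorial corners, axial points at ±α, centre replicates); the four-factor call mirrors the gelatin, MgSO4·7H2O, NaCl and pH factors of the record, but the mapping to real units is left out and the code is illustrative only.

```python
import itertools
import numpy as np

def ccdesign(k, alpha=None, n_center=4):
    """Central composite design in coded units: 2^k factorial corners,
    2k axial (star) points at +/-alpha, plus centre replicates."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25            # rotatable design
    corners = np.array(list(itertools.product((-1.0, 1.0), repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i], axial[2 * i + 1, i] = -alpha, alpha
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

design = ccdesign(4)   # e.g. gelatin, MgSO4.7H2O, NaCl, pH in coded units
print(design.shape)    # (16 corners + 8 axial + 4 centre, 4) = (28, 4)
```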

  10. Review of research designs and statistical methods employed in dental postgraduate dissertations.

    Science.gov (United States)

    Shirahatti, Ravi V; Hegde-Shetiya, Sahana

    2015-01-01

    There is a need to evaluate the quality of postgraduate dissertations of dentistry submitted to the university in the light of international standards of reporting. We conducted the review with the objective of documenting the use of sampling methods, measurement standardization, blinding, methods to eliminate bias, appropriate use of statistical tests, and appropriate use of data presentation in postgraduate dental research, and of suggesting and recommending modifications. The public access database of the dissertations from Rajiv Gandhi University of Health Sciences was reviewed. Three hundred and thirty-three eligible dissertations underwent preliminary evaluation, followed by detailed evaluation of 10% of randomly selected dissertations. The dissertations were assessed based on international reporting guidelines such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE), Consolidated Standards of Reporting Trials (CONSORT), and other scholarly resources. The data were compiled using MS Excel and SPSS 10.0. Numbers and percentages were used for describing the data. "In vitro" studies were the most common type of research (39%), followed by observational (32%) and experimental studies (29%). The disciplines of conservative dentistry (92%) and prosthodontics (75%) reported high numbers of in vitro research. The disciplines of oral surgery (80%) and periodontics (67%) had conducted experimental studies as a major share of their research. Lacunae in the studies included observational studies not following random sampling (70%), experimental studies not following random allocation (75%), not mentioning blinding, confounding variables and calibration of measurements, misrepresenting the data by inappropriate data presentation, errors in reporting probability values, and not reporting confidence intervals. A few studies showed grossly inappropriate choice of statistical tests and many studies needed additional tests. Overall, the observations indicated the need to

  11. Summary on experimental methods for statistical transient analysis of two-phase gas-liquid flow

    International Nuclear Information System (INIS)

    Delhaye, J.M.; Jones, O.C. Jr.

    1976-06-01

    Much work has been done in the study of two-phase gas-liquid flows. Although it has been recognized superficially that such flows are not homogeneous in general, little attention has been paid to the inherent discreteness of two-phase systems. Only relatively recently have the fluctuating characteristics of two-phase flows been studied in detail. As a result, new experimental devices and techniques have been developed for use in measuring quantities previously ignored. This report reviews and summarizes most of these methods in an effort to emphasize the importance of the fluctuating nature of these flows and to serve as a guide to further research in this field.

  12. Normalization and experimental design for ChIP-chip data

    Directory of Open Access Journals (Sweden)

    Alekseyenko Artyom A

    2007-06-01

    Full Text Available Abstract Background Chromatin immunoprecipitation on tiling arrays (ChIP-chip) has been widely used to investigate the DNA binding sites for a variety of proteins on a genome-wide scale. However, several issues in the processing and analysis of ChIP-chip data have not been resolved fully, including the effect of background (mock) control subtraction and normalization within and across arrays. Results The binding profiles of Drosophila male-specific lethal (MSL) complex on a tiling array provide a unique opportunity for investigating these topics, as it is known to bind on the X chromosome but not on the autosomes. These large bound and control regions on the same array allow clear evaluation of analytical methods. We introduce a novel normalization scheme specifically designed for ChIP-chip data from dual-channel arrays and demonstrate that this step is critical for correcting systematic dye-bias that may exist in the data. Subtraction of the mock (non-specific antibody or no antibody) control data is generally needed to eliminate the bias, but appropriate normalization obviates the need for mock experiments and increases the correlation among replicates. The idea underlying the normalization can be used subsequently to estimate the background noise level in each array for normalization across arrays. We demonstrate the effectiveness of the methods with the MSL complex binding data and other publicly available data. Conclusion Proper normalization is essential for ChIP-chip experiments. The proposed normalization technique can correct systematic errors and compensate for the lack of mock control data, thus reducing the experimental cost and producing more accurate results.
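
    To make the normalization idea concrete, the sketch below applies a crude intensity-dependent median centring of dual-channel log-ratios (an MA-plot style correction). It is a simplified stand-in for the paper's normalization scheme, run on simulated intensities; the function name and bin count are arbitrary choices.

```python
import numpy as np

def ma_normalize(ch1, ch2, n_bins=50):
    """Crude dye-bias correction for a dual-channel array: centre the
    log-ratio M within bins of average log-intensity A. A stand-in
    illustration, not the normalization proposed in the paper."""
    m = np.log2(ch1) - np.log2(ch2)            # log-ratio (dye bias lives here)
    a = 0.5 * (np.log2(ch1) + np.log2(ch2))    # average log-intensity
    bins = np.quantile(a, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(a, bins) - 1, 0, n_bins - 1)
    bin_median = np.array([np.median(m[idx == b]) if np.any(idx == b) else 0.0
                           for b in range(n_bins)])
    return m - bin_median[idx]                 # bias-corrected log-ratios

rng = np.random.default_rng(0)
ch1 = rng.lognormal(8.0, 1.0, 10000)           # simulated channel intensities
ch2 = rng.lognormal(8.2, 1.0, 10000)           # systematic offset mimics dye bias
m_corr = ma_normalize(ch1, ch2)
print(np.median(m_corr))                       # ~0 after centring
```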

  13. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis.

    Science.gov (United States)

    Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) a multistate model in the control condition (pre-test); and (2) a single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between the two models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity.

  14. Experimental and statistical investigation of the compressive strength anisotropy in structural concrete

    DEFF Research Database (Denmark)

    Hansen, Soren Gustenhoff; Lauridsen, Jorgen Trankjaer; Hoang, Linh Cao

    2018-01-01

    design parameters and conditions on the anisotropy. This includes the influence of reinforcement, w/c-ratio, curing time, load history and structural geometry. For this purpose, cores were drilled out at different angles from beam- and slab specimens for compressive testing. The main findings include: a...

  15. Experimental Salix shoot and root growth statistics on the alluvial sediment of a restored river corridor

    Science.gov (United States)

    Pasquale, N.; Perona, P.; Verones, F.; Francis, R.; Burlando, P.

    2009-12-01

    River restoration projects encompass not only the amelioration of flood protection but also the rehabilitation of the riverine ecosystem. However, the interactions and feedbacks between river hydrology, riparian vegetation and aquifer dynamics are still poorly understood. Vegetation interacts with river hydrology on multiple time scales. Hence, there is considerable interest in understanding the morphodynamics of restored river reaches in relation to the characteristics of vegetation that may colonize the bare sediment and locally stabilize it by root anchoring. In this paper we document results from a number of ongoing experiments within the project RECORD (Restored CORridor Dynamics, sponsored by CCES - www.cces.ch - and Cantons Zurich and Thurgau, CH). In particular, we discuss both the above- and below-ground biomass growth dynamics of 1188 Salix cuttings (individual and group survival rate, growth of the longest shoots, number of branches and morphological root analysis) in relation to local river hydrodynamics. Cuttings were organized in square plots of different sizes and planted in spring 2009 on a gravel island of the restored section of the River Thur in Switzerland. By periodically monitoring the plots we obtained a detailed and quite unique set of data, including root statistics of uprooted samples derived from image analysis with a high-resolution scanner. Beyond describing the survival rate dynamics in relation to river hydrology, we show the nature and strength of the correlations between island topography and cutting growth statistics. In particular, by root analysis and by comparing empirical histograms of the vertical root distribution vs the saturated water surface in the sediment, we show that the main tropic responses in such an environment are oxytropism, hydrotropism and thigmotropism. The main factor influencing the survival rate is naturally found in erosion by floods, of which we also give an interesting example that helps demonstrate the role of river

  16. A Short Guide to Experimental Design and Analysis for Engineers

    Science.gov (United States)

    2014-04-01

    [Abstract not indexed; only fragments of the source PDF were captured. The recoverable content: the guide cites Statistical Analysis: Microsoft Excel 2010 (Que Publishing, Indianapolis) and SPSS: Analysis without Anguish (Coakes and Ong, 2011), presents a box-and-whisker plot of the batch data produced with SPSS, and uses Kolmogorov–Smirnov and Shapiro–Wilk statistics to confirm a normality requirement, with SPSS reporting significance values of 0.2.]

  17. Six Sigma Quality Management System and Design of Risk-based Statistical Quality Control.

    Science.gov (United States)

    Westgard, James O; Westgard, Sten A

    2017-03-01

    Six sigma concepts provide a quality management system (QMS) with many useful tools for managing quality in medical laboratories. This Six Sigma QMS is driven by the quality required for the intended use of a test. The most useful form for this quality requirement is the allowable total error. Calculation of a sigma-metric provides the best predictor of risk for an analytical examination process, as well as a design parameter for selecting the statistical quality control (SQC) procedure necessary to detect medically important errors. Simple point estimates of sigma at medical decision concentrations are sufficient for laboratory applications. Copyright © 2016 Elsevier Inc. All rights reserved.
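
    The sigma-metric mentioned above has a simple closed form, sigma = (TEa − |bias|) / CV, evaluated at a medical decision concentration. A minimal sketch, with hypothetical HbA1c-style numbers rather than values from the paper:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric at a medical decision concentration:
    sigma = (allowable total error - |bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical example: TEa 6%, bias 1%, CV 1.2% -> about 4.2 sigma,
# which would guide the choice of SQC rules and number of controls.
print(sigma_metric(6.0, 1.0, 1.2))
```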

  18. Statistically designed optimisation of enzyme catalysed starch removal from potato pulp

    DEFF Research Database (Denmark)

    Thomassen, Lise Vestergaard; Meyer, Anne S.

    2010-01-01

    to obtain dietary fibers is usually accomplished via a three-step, sequential enzymatic treatment procedure using a heat-stable alpha-amylase, protease, and amyloglucosidase. Statistically designed experiments were performed to investigate the influence of enzyme dose, amount of dry matter, incubation time ... and temperature on the amount of starch released from the potato pulp. The data demonstrated that all the starch could be released from potato pulp in one step when 8% (w/w) dry potato pulp was treated with 0.2% (v/w) (enzyme/substrate (E/S)) of a thermostable Bacillus licheniformis alpha-amylase (Termamyl® SC...

  19. Research Methods in Healthcare Epidemiology and Antimicrobial Stewardship – Quasi-Experimental Designs

    Science.gov (United States)

    Schweizer, Marin L.; Braun, Barbara I.; Milstone, Aaron M.

    2016-01-01

    Quasi-experimental studies evaluate the association between an intervention and an outcome using experiments in which the intervention is not randomly assigned. Quasi-experimental studies are often used to evaluate rapid responses to outbreaks or other patient safety problems requiring prompt non-randomized interventions. Quasi-experimental studies can be categorized into three major types: interrupted time series designs, designs with control groups, and designs without control groups. This methods paper highlights key considerations for quasi-experimental studies in healthcare epidemiology and antimicrobial stewardship including study design and analytic approaches to avoid selection bias and other common pitfalls of quasi-experimental studies. PMID:27267457

  20. A case study on the design and development of minigames for research methods and statistics

    Directory of Open Access Journals (Sweden)

    P. Van Rosmalen

    2014-08-01

    Full Text Available Research methodology involves logical reasoning and critical thinking skills which are core competences in developing a more sophisticated understanding of the world. Acquiring expertise in research methods and statistics is not easy and poses a significant challenge for many students. The subject material is challenging because it is highly abstract and complex and requires the coordination of different but inter-related knowledge and skills that are all necessary to develop a coherent and usable skills base in this area. Additionally, while many students embrace research methods enthusiastically, others find the area dry, abstract and boring. In this paper we discuss the design and the first evaluation of a set of mini-games to practice research methods. Games are considered to be engaging and allow students to test out scenarios which provide concrete examples in a way that they typically only do once they are out in the field. The design of a game is a complex task. First, we describe how we used cognitive task analysis to identify the knowledge and competences required to develop a comprehensive and usable understanding of research methods. Next, we describe the games designed and how 4C-ID, an instructional design model, was used to underpin the games with a sound instructional design basis. Finally, the evaluation approach is discussed and how the findings of the first evaluation phase were used to improve the games.

  1. Experimental observations of Lagrangian sand grain kinematics under bedload transport: statistical description of the step and rest regimes

    Science.gov (United States)

    Guala, M.; Liu, M.

    2017-12-01

    The kinematics of sediment particles is investigated by non-intrusive imaging methods to provide a statistical description of bedload transport in conditions near the threshold of motion. In particular, we focus on the cyclic transition between motion and rest regimes to quantify the waiting time statistics inferred to be responsible for anomalous diffusion, and so far elusive. Despite obvious limitations in the spatio-temporal domain of the observations, we are able to identify the probability distributions of the particle step time and length, velocity, acceleration, waiting time, and thus distinguish which quantities exhibit well converged mean values, based on the thickness of their respective tails. The experimental results shown here for four different transport conditions highlight the importance of the waiting time distribution and represent a benchmark dataset for the stochastic modeling of bedload transport.

  2. Vital Statistics of Panstrongylus geniculatus (Latreille 1811 (Hemiptera: Reduviidae under Experimental Conditions

    Directory of Open Access Journals (Sweden)

    Cabello Daniel R

    1998-01-01

    Full Text Available A statistical evaluation of the population dynamics of Panstrongylus geniculatus is based on a cohort experiment conducted under controlled laboratory conditions. Animals were fed on hens every 15 days. Egg incubation took 21 days; the mean duration of 1st, 2nd, 3rd, 4th, and 5th instar nymphs was 25, 30, 58, 62, and 67 days, respectively; mean nymphal development time was 39 weeks and adult longevity was 72 weeks. Females reproduced during 30 weeks, producing an average of 61.6 eggs per female over its lifetime; the average number of eggs/female/week was 2.1. The total number of eggs produced by the cohort was 1379. Average hatch for the cohort was 88.9%; it was not affected by the age of the mother. Age-specific survival and reproduction tables were constructed. The following population parameters were evaluated: generation time was 36.1 weeks; net reproduction rate was 89.4; intrinsic rate of natural increase was 0.125; instantaneous birth and death rates were 0.163 and 0.039, respectively; finite rate of increase was 1.13; total reproductive value was 1196; and the stable age distribution was 31.2% eggs, 64.7% nymphs and 4.1% adults. Finally, the population characteristics of P. geniculatus lead to the conclusion that this species is a K-strategist.
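
    The population parameters reported here (net reproduction rate, generation time, intrinsic rate of increase, finite rate of increase) follow from standard life-table arithmetic. The sketch below computes them from age-specific survivorship and fecundity schedules; the schedules are invented placeholders, not the published P. geniculatus data.

```python
import numpy as np

# Hypothetical weekly survivorship l_x and fecundity m_x schedules --
# placeholders, not the published cohort data.
x = np.arange(30, 60)                       # reproductive age in weeks
lx = 0.5 * np.exp(-0.02 * (x - 30))         # proportion surviving to age x
mx = np.full_like(x, 2.1, dtype=float)      # eggs per female per week

R0 = np.sum(lx * mx)                        # net reproduction rate
T = np.sum(x * lx * mx) / R0                # mean generation time (weeks)
r = np.log(R0) / T                          # intrinsic rate of increase (approx.)
lam = np.exp(r)                             # finite rate of increase
print(R0, T, r, lam)
```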

  3. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation

    International Nuclear Information System (INIS)

    Elbakri, Idris A; Fessler, Jeffrey A

    2003-01-01

    This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications

  4. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation.

    Science.gov (United States)

    Elbakri, Idris A; Fessler, Jeffrey A

    2003-08-07

    This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.

  5. Electrodialytic desalination of brackish water: determination of optimal experimental parameters using full factorial design

    Science.gov (United States)

    Gmar, Soumaya; Helali, Nawel; Boubakri, Ali; Sayadi, Ilhem Ben Salah; Tlili, Mohamed; Amor, Mohamed Ben

    2017-12-01

    The aim of this work is to study the desalination of brackish water by electrodialysis (ED). A two-level, three-factor (2³) full factorial design methodology was used to investigate the influence of different physicochemical parameters on the demineralization rate (DR) and the specific power consumption (SPC). The statistical design determines the factors which have important effects on ED performance and studies all interactions between the considered parameters. Three significant factors were used, namely applied potential, salt concentration and flow rate. The experimental results and statistical analysis show that applied potential and salt concentration are the main effects for DR as well as for SPC. An interaction effect between applied potential and salt concentration was observed for SPC. A maximum value of 82.24% was obtained for DR under optimum conditions and the best value of SPC obtained was 5.64 Wh L-1. Empirical regression models were also obtained and used to predict the DR and SPC profiles with satisfactory results. The process was applied for the treatment of real brackish water using the optimal parameters.
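
    A 2³ full factorial of the kind used in this study can be generated and analysed in a few lines. The sketch below estimates main and two-factor interaction effects by regression on the coded design matrix; the response values are hypothetical, not the reported DR measurements.

```python
import itertools
import numpy as np

# 2^3 full factorial in coded units: applied potential, NaCl concentration,
# flow rate (order of runs is the standard Yates order).
X = np.array(list(itertools.product((-1, 1), repeat=3)), dtype=float)
# Hypothetical demineralization-rate responses (%), one per run.
y = np.array([55, 70, 62, 80, 58, 74, 65, 82], dtype=float)

# Model matrix: intercept, main effects and two-factor interactions.
M = np.column_stack([np.ones(8), X[:, 0], X[:, 1], X[:, 2],
                     X[:, 0]*X[:, 1], X[:, 0]*X[:, 2], X[:, 1]*X[:, 2]])
beta, *_ = np.linalg.lstsq(M, y, rcond=None)
effects = 2 * beta[1:]   # a factorial effect is twice the coded-unit slope
print(effects)           # three main effects, three interaction effects
```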

  6. Using factorial experimental design to evaluate the separation of plastics by froth flotation.

    Science.gov (United States)

    Salerno, Davide; Jordão, Helga; La Marca, Floriana; Carvalho, M Teresa

    2018-03-01

    This paper proposes the use of factorial experimental design as a standard experimental method in the application of froth flotation to plastic separation, instead of the commonly used OVAT method (manipulation of one variable at a time). Furthermore, as is common practice in minerals flotation, the parameters of the kinetic model were used as process responses rather than the recovery of plastics in the separation products. To explain and illustrate the proposed methodology, a set of 32 experimental tests was performed using mixtures of two polymers with approximately the same density, PVC and PS (with mineral charges), with particle size ranging from 2 to 4 mm. The manipulated variables were frother concentration, air flow rate and pH. A three-level full factorial design was conducted. Models establishing the relationships between the manipulated variables, and their interactions, and the responses (first-order kinetic model parameters) were built. The corrected Akaike Information Criterion was used to select the best-fit model and an analysis of variance (ANOVA) was conducted to identify the statistically significant terms of the model. It was shown that froth flotation can be used to efficiently separate PVC from PS with mineral charges by reducing the floatability of PVC, which largely depends on the action of pH. Within the tested interval, this is the factor that most affects the flotation rate constants. The results obtained show that the pure error may be of the same magnitude as the sum of squares of the errors, suggesting that there is significant variability within the same experimental conditions. Thus, special care is needed when evaluating and generalizing the process. Copyright © 2017 Elsevier Ltd. All rights reserved.
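
    The first-order kinetic model used as the process response above is commonly written R(t) = R_max(1 − e^(−kt)). A minimal fitting sketch with invented recovery-versus-time data (not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_recovery(t, r_max, k):
    """First-order flotation kinetics: R(t) = R_max * (1 - exp(-k * t))."""
    return r_max * (1.0 - np.exp(-k * t))

# Hypothetical cumulative recovery (%) of the floated polymer vs time (min).
t = np.array([0.5, 1, 2, 4, 8, 12], dtype=float)
r = np.array([18, 32, 52, 74, 88, 92], dtype=float)

popt, pcov = curve_fit(first_order_recovery, t, r, p0=(90.0, 0.5))
r_max, k = popt
print(r_max, k)   # fitted ultimate recovery and flotation rate constant
```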

  7. Challenges and Approaches to Statistical Design and Inference in High Dimensional Investigations

    Science.gov (United States)

    Garrett, Karen A.; Allison, David B.

    2015-01-01

    Summary Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other “omic” data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology, and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative. PMID:19588106

  8. Challenges and approaches to statistical design and inference in high-dimensional investigations.

    Science.gov (United States)

    Gadbury, Gary L; Garrett, Karen A; Allison, David B

    2009-01-01

    Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.

  9. Statistical analysis of experimental multifragmentation events in 64Zn+112Sn at 40 MeV/nucleon

    Science.gov (United States)

    Lin, W.; Zheng, H.; Ren, P.; Liu, X.; Huang, M.; Wada, R.; Chen, Z.; Wang, J.; Xiao, G. Q.; Qu, G.

    2018-04-01

    A statistical multifragmentation model (SMM) is applied to the experimentally observed multifragmentation events in an intermediate heavy-ion reaction. Using the temperature and symmetry energy extracted from the isobaric yield ratio (IYR) method based on the modified Fisher model (MFM), SMM is applied to the reaction 64Zn+112Sn at 40 MeV/nucleon. The experimental isotope distribution and mass distribution of the primary reconstructed fragments are compared without afterburner and they are well reproduced. The extracted temperature T and symmetry energy coefficient asym from SMM simulated events, using the IYR method, are also consistent with those from the experiment. These results strongly suggest that in the multifragmentation process there is a freezeout volume, in which the thermal and chemical equilibrium is established before or at the time of the intermediate-mass fragments emission.

  10. MUP, CEC-DES, STRADE. Codes for uncertainty propagation, experimental design and stratified random sampling techniques

    International Nuclear Information System (INIS)

    Amendola, A.; Astolfi, M.; Lisanti, B.

    1983-01-01

    The report describes how to use the codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design) for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems.
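
    The Latin hypercube sampling underlying STRADE can be sketched compactly: one draw per equal-probability stratum in each dimension, with strata shuffled independently across dimensions. The implementation below is a generic illustration, not the STRADE code itself.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample on [0, 1]^d: one point per equal-probability
    stratum in every dimension, strata shuffled independently per dimension."""
    rng = np.random.default_rng(rng)
    # Stratified uniform draws: stratum index plus a jitter, scaled to [0, 1).
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_dims))) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])   # decouple the strata across dimensions
    return u

samples = latin_hypercube(100, 3, rng=42)
print(samples.mean(axis=0))    # close to 0.5 per dimension by construction
```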

  11. Statistical Modeling, Simulation, and Experimental Verification of Wideband Indoor Mobile Radio Channels

    Directory of Open Access Journals (Sweden)

    Yuanyuan Ma

    2018-01-01

    Full Text Available This paper focuses on the modeling, simulation, and experimental verification of wideband single-input single-output (SISO) mobile fading channels for indoor propagation environments. The indoor reference channel model is derived from a geometrical rectangle scattering model, which consists of an infinite number of scatterers. It is assumed that the scatterers are exponentially distributed over the two-dimensional (2D) horizontal plane of a rectangular room. Analytical expressions are derived for the probability density function (PDF) of the angle of arrival (AOA), the PDF of the propagation path length, the power delay profile (PDP), and the frequency correlation function (FCF). An efficient sum-of-cisoids (SOC) channel simulator is derived from the nonrealizable reference model by employing the SOC principle. It is shown that the SOC channel simulator approximates closely the reference model with respect to the FCF. The SOC channel simulator enables the performance evaluation of wideband indoor wireless communication systems with reduced realization expenditure. Moreover, the rationality and usefulness of the derived indoor channel model is confirmed by various measurements at 2.4, 5, and 60 GHz.
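
    A sum-of-cisoids simulator of the kind described computes μ(t) = Σ_n c_n exp(j(2πf_n t + θ_n)) for a finite set of gains, Doppler frequencies and phases. The sketch below uses random parameters for illustration; the paper computes them from the indoor reference model, so the numbers here (including the assumed maximum Doppler frequency and the cosine mapping of angles) are placeholders.

```python
import numpy as np

def soc_channel(t, gains, freqs, phases):
    """Sum-of-cisoids process: mu(t) = sum_n c_n * exp(j*(2*pi*f_n*t + theta_n))."""
    t = np.asarray(t)[:, None]
    return np.sum(gains * np.exp(1j * (2 * np.pi * freqs * t + phases)), axis=1)

rng = np.random.default_rng(0)
N = 20                                  # number of cisoids
gains = np.full(N, np.sqrt(2.0 / N))    # equal-power cisoids, unit variance
f_max = 91.0                            # assumed maximum Doppler frequency (Hz)
freqs = f_max * np.cos(rng.uniform(0, 2 * np.pi, N))  # illustrative AOA mapping
phases = rng.uniform(0, 2 * np.pi, N)

t = np.linspace(0, 0.1, 1000)
envelope = np.abs(soc_channel(t, gains, freqs, phases))  # Rayleigh-like fading
print(envelope[:4])
```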

  12. Experimental Verification of Statistically Optimized Parameters for Low-Pressure Cold Spray Coating of Titanium

    Directory of Open Access Journals (Sweden)

    Damilola Isaac Adebiyi

    2016-06-01

    Full Text Available The cold spray coating process involves many process parameters which make the process very complex, and highly dependent on and sensitive to small changes in these parameters. This results in a small operational window of the parameters. Consequently, mathematical optimization of the process parameters is key, not only to achieving deposition but also to improving the coating quality. This study focuses on the mathematical identification and experimental justification of the optimum process parameters for cold spray coating of titanium alloy with silicon carbide (SiC). The continuity, momentum and energy equations governing the flow through the low-pressure cold spray nozzle were solved by introducing a constitutive equation to close the system. This was used to calculate the critical velocity for the deposition of SiC. In order to determine the input temperature that yields the calculated velocity, the distributions of velocity, temperature, and pressure in the cold spray nozzle were analyzed, and the exit values were predicted using the meshing tool of SolidWorks. Coatings fabricated using the optimized parameters and some non-optimized parameters are compared. The coating produced with the CFD-optimized parameters yielded lower porosity and higher hardness.

  13. Alteration of 'R7T7' type nuclear glasses: statistical approach, experimental validation, local evolution model

    International Nuclear Information System (INIS)

    Thierry, F.

    2003-02-01

    The aim of this work is to propose an evolution of the modeling of nuclear (R7T7-type) glass alteration. The first part of this thesis concerns the development and validation of the 'r(t)' model. This model, which predicts the decrease of alteration rates in confined conditions, is based upon a coupling between a first-order dissolution law and a diffusion-barrier effect of the alteration gel layer. The values and the uncertainties of the main adjustable parameters of the model (α, Dg and C*) have been determined from a systematic study of the available experimental data. A program called INVERSION has been written for this purpose. This work led to characterization of the validity domain of the 'r(t)' model and to its parametrization. Validation experiments have been undertaken, confirming the validity of the parametrization over 200 days. A new model is proposed in the second part of this thesis. It is based on an inhibition of the glass dissolution reaction by silicon, coupled with a local description of silicon retention in the alteration gel layer. This model predicts the evolution of boron and silicon concentrations in solution as well as the concentrations and retention profiles in the gel layer. These predictions have been compared to measurements of retention profiles by the secondary ion mass spectrometry (SIMS) method. The model has been validated on fractions of the gel layer whose reactivity presents low or moderate disparities. (author)

  14. Artificial Warming of Arctic Meadow under Pollution Stress: Experimental design

    Science.gov (United States)

    Moni, Christophe; Silvennoinen, Hanna; Fjelldal, Erling; Brenden, Marius; Kimball, Bruce; Rasse, Daniel

    2014-05-01

    Boreal and arctic terrestrial ecosystems are central to the climate change debate, notably because future warming is expected to be disproportionate as compared to world averages. Likewise, greenhouse gas (GHG) release from terrestrial ecosystems exposed to climate warming is expected to be largest in the Arctic. Arctic agriculture, in the form of cultivated grasslands, is a unique and economically relevant feature of Northern Norway (e.g. Finnmark Province). In Eastern Finnmark, these agro-ecosystems are under the additional stressor of heavy metal and sulfur pollution generated by the metal smelters of NW Russia. Warming and its interaction with heavy metal dynamics will influence meadow productivity, species composition and GHG emissions, as mediated by responses of soil microbial communities. Adaptation and mitigation measures will be needed. Biochar application, which immobilizes heavy metals, is a promising adaptation method to promote a positive growth response in arctic meadows exposed to a warming climate. In the MeadoWarm project we conduct an ecosystem warming experiment combined with biochar adaptation treatments in the heavy-metal-polluted meadows of Eastern Finnmark. In summary, the general objective of this study is twofold: 1) to determine the response of arctic agricultural ecosystems under environmental stress to increased temperatures, both in terms of plant growth, soil organisms and GHG emissions, and 2) to determine if biochar application can serve as a positive adaptation (plant growth) and mitigation (GHG emission) strategy for these ecosystems under warming conditions. Here, we present the experimental site and the designed open-field warming facility. The selected site is an arctic meadow located at the Svanhovd Research Station less than 10 km west of the Russian mining city of Nikel. A split-plot design with 5 replicates for each treatment is used to test the effect of biochar amendment and a 3 °C warming on the Arctic meadow. Ten circular

  15. Experimental design, modeling and optimization of polyplex formation between DNA oligonucleotides and branched polyethylenimine.

    Science.gov (United States)

    Clima, Lilia; Ursu, Elena L; Cojocaru, Corneliu; Rotaru, Alexandru; Barboiu, Mihail; Pinteala, Mariana

    2015-09-28

    The complexes formed by DNA and polycations have received great attention owing to their potential application in gene therapy. In this study, the binding efficiency between double-stranded oligonucleotides (dsDNA) and branched polyethylenimine (B-PEI) has been quantified by processing the images captured from gel electrophoresis assays. A central composite experimental design has been employed to investigate the effects of controllable factors on the binding efficiency. On the basis of experimental data and the response surface methodology, a multivariate regression model has been constructed and statistically validated. The model has enabled us to predict the binding efficiency depending on experimental factors, such as the concentrations of dsDNA and B-PEI as well as the initial pH of the solution. The optimization of the binding process has been performed using simplex and gradient methods. The optimal conditions determined for polyplex formation have yielded a maximal binding efficiency close to 100%. In order to reveal the mechanism of complex formation at the atomic scale, a molecular dynamics simulation has been carried out. According to the computational results, B-PEI amine hydrogen atoms interact with oxygen atoms from dsDNA phosphate groups. These interactions lead to the formation of hydrogen bonds between the macromolecules, stabilizing the polyplex structure.
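
    Once a quadratic response-surface model like the one above has been fitted, the simplex search mentioned in the abstract can be reproduced with a generic optimizer. The sketch below maximizes an invented quadratic surface with Nelder-Mead; all coefficients are placeholders, not the published model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted quadratic response surface for binding efficiency (%)
# in coded units x = (dsDNA conc., B-PEI conc., initial pH). The coefficients
# are illustrative placeholders only.
b0 = 85.0
b_lin = np.array([4.0, 6.5, -2.0])
B_quad = np.array([[-3.0, 1.2, 0.0],
                   [1.2, -5.0, 0.8],
                   [0.0, 0.8, -2.5]])   # symmetric quadratic coefficients

def efficiency(x):
    """Second-order model: b0 + b'x + x'Bx."""
    return b0 + b_lin @ x + x @ B_quad @ x

# Simplex (Nelder-Mead) maximization = minimizing the negative response.
res = minimize(lambda x: -efficiency(x), x0=np.zeros(3), method="Nelder-Mead")
print(res.x, -res.fun)   # stationary point and predicted maximum efficiency
```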

  16. West Valley high-level nuclear waste glass development: a statistically designed mixture study

    Energy Technology Data Exchange (ETDEWEB)

    Chick, L.A.; Bowen, W.M.; Lokken, R.O.; Wald, J.W.; Bunnell, L.R.; Strachan, D.M.

    1984-10-01

    The first full-scale conversion of high-level commercial nuclear wastes to glass in the United States will be conducted at West Valley, New York, by West Valley Nuclear Services Company, Inc. (WVNS), for the US Department of Energy. Pacific Northwest Laboratory (PNL) is supporting WVNS in the design of the glass-making process and the chemical formulation of the glass. This report describes the statistically designed study performed by PNL to develop the glass composition recommended for use at West Valley. The recommended glass contains 28 wt% waste, as limited by process requirements. The waste loading and the silica content (45 wt%) are similar to those in previously developed waste glasses; however, the new formulation contains more calcium and less boron. A series of tests verified that the increased calcium results in improved chemical durability and does not adversely affect the other modeled properties. The optimization study assessed the effects of seven oxide components on glass properties. Over 100 melts combining the seven components into a wide variety of statistically chosen compositions were tested. Viscosity, electrical conductivity, thermal expansion, crystallinity, and chemical durability were measured and empirically modeled as a function of the glass composition. The mathematical models were then used to predict the optimum formulation. This glass was tested and adjusted to arrive at the final composition recommended for use at West Valley. 56 references, 49 figures, 18 tables.

  17. Sources of Safety Data and Statistical Strategies for Design and Analysis: Clinical Trials.

    Science.gov (United States)

    Zink, Richard C; Marchenko, Olga; Sanchez-Kam, Matilde; Ma, Haijun; Jiang, Qi

    2018-03-01

    There has been an increased emphasis on the proactive and comprehensive evaluation of safety endpoints to ensure patient well-being throughout the medical product life cycle. In fact, depending on the severity of the underlying disease, it is important to plan for a comprehensive safety evaluation at the start of any development program. Statisticians should be intimately involved in this process and contribute their expertise to study design, safety data collection, analysis, reporting (including data visualization), and interpretation. In this manuscript, we review the challenges associated with the analysis of safety endpoints and describe the safety data that are available to influence the design and analysis of premarket clinical trials. We share our recommendations for the statistical and graphical methodologies necessary to appropriately analyze, report, and interpret safety outcomes, and we discuss the advantages and disadvantages of safety data obtained from clinical trials compared to other sources. Clinical trials are an important source of safety data that contribute to the totality of safety information available to generate evidence for regulators, sponsors, payers, physicians, and patients. This work is a result of the efforts of the American Statistical Association Biopharmaceutical Section Safety Working Group.

  18. High-throughput optimization by statistical designs: example with rat liver slices cryopreservation.

    Science.gov (United States)

    Martin, H; Bournique, B; Blanchi, B; Lerche-Langrand, C

    2003-08-01

    The purpose of this study was to optimize the cryopreservation conditions of rat liver slices in a high-throughput format, with a focus on reproducibility. A statistical design of 32 experiments was performed, and intracellular lactate dehydrogenase (LDHi) activity and antipyrine (AP) metabolism were evaluated as biomarkers. At freezing, modified University of Wisconsin solution was better than Williams' E medium, and pure dimethyl sulfoxide was better than a cryoprotectant mixture. The best cryoprotectant concentrations were 10% for LDHi and 20% for AP metabolism. Fetal calf serum could be used at 50 or 80%, and incubation of slices with the cryoprotectant could last 10 or 20 min. At thawing, 42 °C was better than 22 °C. After thawing, 1 h was better than 3 h of preculture. Cryopreservation increased the interslice variability of the biomarkers. After cryopreservation, LDHi and AP metabolism levels were up to 84 and 80% of fresh values. However, these high levels were not reproducibly achieved. Two factors involved in the day-to-day variability of LDHi were identified: the incubation time with the cryoprotectant and the preculture time. In conclusion, the statistical design was very efficient for quickly determining optimized conditions by simultaneously measuring the role of numerous factors. The cryopreservation procedure developed appears suitable for qualitative metabolic profiling studies.

  19. Effect of non-normality on test statistics for one-way independent groups designs.

    Science.gov (United States)

    Cribbie, Robert A; Fiksenbaum, Lisa; Keselman, H J; Wilcox, Rand R

    2012-02-01

    The data obtained from one-way independent groups designs is typically non-normal in form and rarely equally variable across treatment populations (i.e., population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e., the analysis of variance F test) typically provides invalid results (e.g., too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non-normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied either to the usual least squares estimators of central tendency and variability, or the Welch test with robust estimators (i.e., trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non-normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non-normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non-normal. © 2011 The British Psychological Society.
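
    Welch's heteroscedastic one-way test, one of the procedures compared above, is straightforward to implement from its defining formulas. The sketch below uses ordinary means and variances; the robust variant discussed in the paper would substitute trimmed means and Winsorized variances.

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's heteroscedastic one-way ANOVA for k independent groups.
    Returns (F, df1, df2, p)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                   # precision weights
    mw = np.sum(w * m) / np.sum(w)              # weighted grand mean
    a = np.sum(w * (m - mw) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    b = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    f = a / b
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * tmp)
    return f, df1, df2, stats.f.sf(f, df1, df2)

rng = np.random.default_rng(3)                  # heterogeneous example groups
g1, g2, g3 = rng.normal(0, 1, 20), rng.normal(0.5, 2, 15), rng.normal(0, 3, 25)
print(welch_anova(g1, g2, g3))
```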

  20. Simulation-based optimal Bayesian experimental design for nonlinear systems

    KAUST Repository

    Huan, Xun; Marzouk, Youssef M.

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical
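
    The record is truncated, but work in this area typically rests on nested Monte Carlo estimation of the expected information gain, EIG(d) ≈ (1/N) Σ_i [log p(y_i|θ_i,d) − log p(y_i|d)], with the evidence p(y_i|d) itself estimated by an inner Monte Carlo average. The sketch below evaluates this estimator for a toy linear-Gaussian model; the model and all values are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def expected_information_gain(d, n_outer=500, n_inner=500, sigma=0.1, rng=None):
    """Nested Monte Carlo EIG estimator for the toy model
    y = theta * d + noise, theta ~ N(0, 1), noise ~ N(0, sigma^2)."""
    rng = np.random.default_rng(rng)
    theta = rng.normal(0, 1, n_outer)               # outer prior draws
    y = theta * d + rng.normal(0, sigma, n_outer)   # simulated observations
    theta_in = rng.normal(0, 1, n_inner)            # inner draws for evidence

    def loglik(y_, th):
        return (-0.5 * ((y_ - th * d) / sigma) ** 2
                - np.log(sigma * np.sqrt(2 * np.pi)))

    ll_outer = loglik(y, theta)                     # log p(y_i | theta_i, d)
    ll_inner = loglik(y[:, None], theta_in[None, :])
    m = ll_inner.max(axis=1, keepdims=True)         # log-sum-exp for stability
    log_evidence = (m + np.log(np.mean(np.exp(ll_inner - m),
                                       axis=1, keepdims=True))).ravel()
    return np.mean(ll_outer - log_evidence)

for d in (0.1, 0.5, 1.0, 2.0):                      # EIG grows with |d| here
    print(d, expected_information_gain(d, rng=0))
```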

  1. Electrochemical production and use of free chlorine for pollutant removal: an experimental design approach.

    Science.gov (United States)

    Antonelli, Raissa; de Araújo, Karla Santos; Pires, Ricardo Francisco; Fornazari, Ana Luiza de Toledo; Granato, Ana Claudia; Malpass, Geoffroy Roger Pointer

    2017-10-28

    The present paper presents the study of (1) the optimization of electrochemical free-chlorine production using an experimental design approach, and (2) the application of the optimum conditions obtained to the photo-assisted electrochemical degradation of simulated textile effluent. In the experimental design the influence of inter-electrode gap, pH, NaCl concentration and current was considered. It was observed that the four variables studied are significant for the process, with NaCl concentration and current being the most significant variables for free chlorine production. The maximum free chlorine production was obtained at a current of 2.33 A and a NaCl concentration of 0.96 mol dm-3. The application of the optimized conditions with simultaneous UV irradiation resulted in up to 83.1% total organic carbon removal and 100% colour removal over 180 min of electrolysis. The results indicate that a systematic (statistical) approach to the electrochemical treatment of pollutants can save time and reagents.

  2. Development of a fast, lean and agile direct pelletization process using experimental design techniques.

    Science.gov (United States)

    Politis, Stavros N; Rekkas, Dimitrios M

    2017-04-01

    A novel hot-melt direct pelletization method was developed, characterized and optimized using statistical thinking and experimental design tools. Mixtures of carnauba wax (CW) and HPMC K100M were spheronized using melted Gelucire 50/13 as a binding material (BM). Experimentation was performed sequentially: a fractional factorial design was set up initially to screen the factors affecting the process, namely spray rate, quantity of BM, rotor speed, type of rotor disk, lubricant-glidant presence, additional spheronization time, powder feeding rate and quantity. Of the eight factors assessed, three were further studied during process optimization (spray rate, quantity of BM and powder feeding rate) at different ratios of the solid mixture of CW and HPMC K100M. The study demonstrated that the novel hot-melt process is fast, efficient, reproducible and predictable. Therefore, it can be adopted in a lean and agile manufacturing setting for the production of flexible pellet dosage forms with various release rates easily customized between immediate and modified delivery.

  3. Fast Synthesis of Gibbsite Nanoplates and Process Optimization using Box-Behnken Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xin; Zhang, Xianwen; Graham, Trenton R.; Pearce, Carolyn I.; Mehdi, Beata L.; N'Diaye, Alpha T.; Kerisit, Sebastien N.; Browning, Nigel D.; Clark, Sue B.; Rosso, Kevin M.

    2017-10-26

    Developing the ability to synthesize compositionally and morphologically well-defined gibbsite particles at the nanoscale with high yield is an ongoing need that has not yet achieved the level of rational design. Here we report optimization of a clean inorganic synthesis route based on statistical experimental design examining the influence of Al(OH)3 gel precursor concentration, pH, and aging time at temperature. At 80 °C, the optimum synthesis conditions of gel concentration at 0.5 M, pH at 9.2, and time at 72 h maximized the reaction yield up to ~87%. The resulting gibbsite product is composed of highly uniform euhedral hexagonal nanoplates with basal plane diameters in the range of 200-400 nm. The independent roles of key system variables in the growth mechanism are considered. On the basis of these optimized experimental conditions, the synthesis procedure, which is both cost-effective and environmentally friendly, has the potential for mass-production scale-up of high-quality gibbsite material for various fundamental research and industrial applications.

  4. Application of statistical experimental methodology to optimize bioremediation of n-alkanes in aquatic environment

    International Nuclear Information System (INIS)

    Zahed, Mohammad Ali; Aziz, Hamidi Abdul; Mohajeri, Leila; Mohajeri, Soraya; Kutty, Shamsul Rahman Mohamed; Isa, Mohamed Hasnain

    2010-01-01

    Response surface methodology (RSM) was employed to optimize nitrogen and phosphorus concentrations for removal of n-alkanes from crude oil contaminated seawater samples in batch reactors. Erlenmeyer flasks were used as bioreactors; each containing 250 mL dispersed crude oil contaminated seawater, indigenous acclimatized microorganism and different amounts of nitrogen and phosphorus based on central composite design (CCD). Samples were extracted and analyzed according to US-EPA protocols using a gas chromatograph. During 28 days of bioremediation, a maximum of 95% total aliphatic hydrocarbons removal was observed. The obtained Model F-value of 267.73 and probability F < 0.0001 implied the model was significant. Numerical condition optimization via a quadratic model, predicted 98% n-alkanes removal for a 20-day laboratory bioremediation trial using nitrogen and phosphorus concentrations of 13.62 and 1.39 mg/L, respectively. In actual experiments, 95% removal was observed under these conditions.

  5. Statistical mixture design and multivariate analysis of inkjet printed a-WO3/TiO2/WOX electrochromic films.

    Science.gov (United States)

    Wojcik, Pawel Jerzy; Pereira, Luís; Martins, Rodrigo; Fortunato, Elvira

    2014-01-13

    An efficient mathematical strategy in the field of solution processed electrochromic (EC) films is outlined as a combination of an experimental work, modeling, and information extraction from massive computational data via statistical software. Design of Experiment (DOE) was used for statistical multivariate analysis and prediction of mixtures through a multiple regression model, as well as the optimization of a five-component sol-gel precursor subjected to complex constraints. This approach significantly reduces the number of experiments to be realized, from 162 in the full factorial (L=3) and 72 in the extreme vertices (D=2) approach down to only 30 runs, while still maintaining a high accuracy of the analysis. By carrying out a finite number of experiments, the empirical modeling in this study shows reasonably good prediction ability in terms of the overall EC performance. An optimized ink formulation was employed in a prototype of a passive EC matrix fabricated in order to test and trial this optically active material system together with a solid-state electrolyte for the prospective application in EC displays. Coupling of DOE with chromogenic material formulation shows the potential to maximize the capabilities of these systems and ensures increased productivity in many potential solution-processed electrochemical applications.

  6. Designing experiments for maximum information from cyclic oxidation tests and their statistical analysis using half Normal plots

    International Nuclear Information System (INIS)

    Coleman, S.Y.; Nicholls, J.R.

    2006-01-01

    Cyclic oxidation testing at elevated temperatures requires careful experimental design and the adoption of standard procedures to ensure reliable data. This is a major aim of the 'COTEST' research programme. Further, as such tests are both time consuming and costly in terms of the human effort needed to take measurements over a large number of cycles, it is important to gain maximum information from a minimum number of tests (trials). This search for standardisation of cyclic oxidation conditions leads to a series of tests to determine the relative effects of cyclic parameters on the oxidation process. Following a review of the available literature, databases and the experience of partners to the COTEST project, the most influential parameters, upper dwell temperature (oxidation temperature) and time (hot time), lower dwell time (cold time) and environment, were investigated in partners' laboratories. It was decided to test upper dwell temperature at 3 levels, at and equidistant from a reference temperature; to test upper dwell time at a reference, a higher and a lower time; to test lower dwell time at a reference and a higher time; and to test wet and dry environments. Thus an experiment, consisting of nine trials, was designed according to statistical criteria. The results of the trials were analysed statistically to test the main linear and quadratic effects of upper dwell temperature and hot time and the main effects of lower dwell time (cold time) and environment. The nine trials are a quarter fraction of the 36 possible combinations of parameter levels that could have been studied. The results have been analysed by half Normal plots as there are only 2 degrees of freedom for the experimental error variance, which is rather low for a standard analysis of variance. Half Normal plots give a visual indication of which factors are statistically significant. In this experiment each trial has 3 replications, and the data are analysed in terms of mean mass change, oxidation kinetics
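
    The half-Normal plot used in this analysis orders the absolute factorial effects against half-Normal quantiles; effects that depart from the line through the small ones are judged significant. A minimal sketch with hypothetical effects:

```python
import numpy as np
from scipy import stats

def half_normal_points(effects):
    """Coordinates for a half-Normal plot of factorial effects: ordered
    absolute effects against half-Normal quantiles. Points well above the
    line through the small effects flag statistically significant factors."""
    abs_eff = np.sort(np.abs(np.asarray(effects, dtype=float)))
    m = len(abs_eff)
    p = (np.arange(1, m + 1) - 0.5) / m
    q = stats.norm.ppf(0.5 + 0.5 * p)      # half-Normal quantiles
    return q, abs_eff

effects = [0.2, -0.1, 4.8, 0.3, -5.6, 0.15, 0.25]   # hypothetical effects
for qi, ei in zip(*half_normal_points(effects)):
    print(f"{qi:6.3f}  {ei:6.3f}")         # the two large effects stand out
```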

  7. Survey design, statistical analysis, and basis for statistical inferences in coastal habitat injury assessment: Exxon Valdez oil spill

    International Nuclear Information System (INIS)

    McDonald, L.L.; Erickson, W.P.; Strickland, M.D.

    1995-01-01

    The objective of the Coastal Habitat Injury Assessment study was to document and quantify injury to biota of the shallow subtidal, intertidal, and supratidal zones throughout the shoreline affected by oil or cleanup activity associated with the Exxon Valdez oil spill. The results of these studies were to be used to support the Trustees' Type B Natural Resource Damage Assessment under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA). A probability-based stratified random sample of shoreline segments was selected with probability proportional to size from each of 15 strata (5 habitat types crossed with 3 levels of potential oil impact) based on the data available in July 1989. Three study regions were used: Prince William Sound, Cook Inlet/Kenai Peninsula, and Kodiak/Alaska Peninsula. A Geographic Information System was utilized to combine oiling and habitat data and to select the probability sample of study sites. Quasi-experiments were conducted in which randomly selected oiled sites were compared to matched reference sites. Two levels of statistical inference, their philosophical bases, and their limitations are discussed and illustrated with example data from the resulting studies. 25 refs., 4 figs., 1 tab
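
    The core of the design, selecting shoreline segments within each stratum with probability proportional to size, can be sketched as below. The strata, segment lengths and sample sizes are hypothetical, and the weighted without-replacement draw is only an approximation to a strict PPS scheme, not the study's actual selection procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical strata: (habitat, oiling) cells, each holding segment lengths (km).
strata = {
    ("rocky", "heavy"): rng.uniform(0.2, 3.0, size=40),
    ("rocky", "light"): rng.uniform(0.2, 3.0, size=55),
    ("gravel", "heavy"): rng.uniform(0.2, 3.0, size=30),
}

def pps_sample(sizes, n, rng):
    # Draw n segments without replacement, weighted by segment length.
    p = np.asarray(sizes, float)
    p = p / p.sum()
    return rng.choice(len(sizes), size=n, replace=False, p=p)

for key, sizes in strata.items():
    picks = pps_sample(sizes, n=5, rng=rng)
    print(key, "selected segment indices:", sorted(picks))
```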

  8. Conceptual design study of fusion experimental reactor (FY86 FER)

    International Nuclear Information System (INIS)

    Miki, Nobuharu; Iida, Fumio; Suzuki, Shohei; Wachi, Yoshihiro; Toyoda, Katsuyoshi; Hashizume, Takashi; Konno, Masayuki.

    1987-09-01

    This report summarizes the FER magnet design which was conducted last year (1986). The main objective of the new FER design is better cost performance of the machine. The physics assumptions are reviewed to reduce risks. Optimization of the physics design and improvements of the engineering design have been carried out without changing the missions of the device. After a preliminary investigation of the optimization and improvements, six FER concepts were developed to establish the improved design point, and these have been studied in more detail. In the magnet design, improvements of the superconducting magnet design were mainly investigated to reduce the reactor size. A normal conductor was studied as an alternative option for application to the special poloidal field coils located interior to the toroidal field coils. Some improvements were made on the superconducting magnet design. Based on the preliminary investigation, the magnet design specifications have been modified somewhat. The conceptual design of the magnet system components has been carried out for the candidate FER concepts. (author)

  9. Verification of aseismic design model by using experimental results

    International Nuclear Information System (INIS)

    Mizuno, N.; Sugiyama, N.; Suzuki, T.; Shibata, Y.; Miura, K.; Miyagawa, N.

    1985-01-01

    A lattice model is applied as an analysis model for the aseismic design of the Hamaoka nuclear reactor building. To verify the validity of this design model, two reinforced concrete blocks were constructed on the ground and forced vibration tests were carried out. The test results are well reproduced by simulation analysis using the lattice model. The damping value of the ground obtained from the test is more conservative than the design value. (orig.)

  10. Conceptual design study of Fusion Experimental Reactor (FY87FER)

    International Nuclear Information System (INIS)

    Miki, Nobuharu; Iida, Fumio; Wachi, Yoshihiro; Toyoda, Katsuyoshi; Hashizume, Takashi; Konno, Masayuki.

    1988-06-01

    This report describes the FER magnet design which was conducted last year (1987). Owing to the large uncertainty in the physics assumptions, two sets of FER concepts have been developed: one is based on the best existing physics databases, the other on rather conservative physics bases. In the magnet design, improvements of the superconducting magnet design were investigated to reduce the reactor size and to realize higher reactor-core performance. In addition, we studied several critical technical issues that affect the magnet design specification. (author)

  11. FFTF reload core nuclear design for increased experimental capability

    International Nuclear Information System (INIS)

    Rothrock, R.B.; Nelson, J.V.; Dobbin, K.D.; Bennett, R.A.

    1976-01-01

    In anticipation of continued growth in the FTR experimental irradiations program, the enrichments for the next batches of reload driver fuel to be manufactured have been increased to provide a substantially enlarged experimental reactivity allowance. The enrichments for these fuel assemblies, termed "Cores 3 and 4," were selected to meet the following objectives and constraints: (1) maintain a reactor power capability of 400 MW (based on an evaluation of driver fuel centerline melting probability at 15 percent overpower); (2) provide a peak neutron flux of nominally 7 x 10¹⁵ n/cm²·sec, with a minimum acceptable value of 95 percent of this (i.e., 6.65 x 10¹⁵ n/cm²·sec); and (3) provide the maximum experimental reactivity allowance that is consistent with the above constraints

  12. Critical Zone Experimental Design to Assess Soil Processes and Function

    Science.gov (United States)

    Banwart, Steve

    2010-05-01

    The experimental design studies soil processes across the temporal evolution of the soil profile, from its formation on bare bedrock, through managed use as productive land, to its degradation under longstanding pressures from intensive land use. To understand this conceptual life cycle of soil, we have selected 4 European field sites as Critical Zone Observatories. These are to provide data sets of soil parameters, processes and functions which will be incorporated into the mathematical models. The field sites are 1) the BigLink field station, which is located in the chronosequence of the Damma Glacier forefield in alpine Switzerland and is established to study the initial stages of soil development on bedrock; 2) the Lysina Catchment in the Czech Republic, which is representative of productive soils managed for intensive forestry; 3) the Fuchsenbigl Field Station in Austria, which is an agricultural research site representative of productive soils managed as arable land; and 4) the Koiliaris Catchment in Crete, Greece, which represents degraded Mediterranean-region soils, heavily impacted by centuries of intensive grazing and farming and under severe risk of desertification.

  13. The reactor safety study of experimental multi-purpose VHTR design

    International Nuclear Information System (INIS)

    Yasuno, T.; Mitake, S.; Ezaki, M.; Suzuki, K.

    1981-01-01

    Over the past years, design work on the Experimental Very High Temperature Reactor (VHTR) plant has been conducted at the Japan Atomic Energy Research Institute. The conceptual design has been completed, and more detailed design work and the safety analysis of the experimental VHTR plant are continuing. The purposes of the design studies are to show the feasibility of the experimental VHTR program, to specify the characteristics and functions of the plant components, to point out the R and D items necessary for the experimental VHTR plant construction, and to analyze the features of the plant safety. This paper summarizes the system design and safety features of the experimental reactor. The main issues are the safety philosophy for the design basis accident, the accidents assumed, and the engineered safety systems adopted in the design work

  14. The experimental design of the Missouri Ozark Forest Ecosystem Project

    Science.gov (United States)

    Steven L. Sheriff; Shuoqiong. He

    1997-01-01

    The Missouri Ozark Forest Ecosystem Project (MOFEP) is an experiment that examines the effects of three forest management practices on the forest community. MOFEP is designed as a randomized complete block design using nine sites divided into three blocks. Treatments of uneven-aged, even-aged, and no-harvest management were randomly assigned to sites within each block...

  15. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    , sample extraction, and analytical methods to be used in the INL-2 study. For each of the five test events, the specified floor of the INL building will be contaminated with BG using a point-release device located in the room specified in the experimental design. Then quality control (QC), reference material coupon (RMC), judgmental, and probabilistic samples will be collected according to the sampling plan for each test event. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples were selected with a random aspect and in sufficient numbers to provide desired confidence for detecting contamination or clearing uncontaminated (or decontaminated) areas. Following sample collection for a given test event, the INL building will be decontaminated. For possibly contaminated areas, the numbers of probabilistic samples were chosen to provide 95% confidence of detecting contaminated areas of specified sizes. For rooms that may be uncontaminated following a contamination event, or for whole floors after decontamination, the numbers of judgmental and probabilistic samples were chosen using the CJR approach. The numbers of samples were chosen to support making X%/Y% clearance statements with X = 95% or 99% and Y = 96% or 97%. The experimental and sampling design also provides for making X%/Y% clearance statements using only probabilistic samples. For each test event, the numbers of characterization and clearance samples were selected within limits based on operational considerations while still maintaining high confidence for detection and clearance aspects. The sampling design for all five test events contains 2085 samples, with 1142 after contamination and 943 after decontamination. These numbers include QC, RMC, judgmental, and probabilistic samples. The experimental and sampling design specified in this report provides a good statistical foundation for achieving the objectives of the INL-2 study.
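
    The "95% confidence of detecting contaminated areas of specified sizes" statements rest on a standard binomial argument: if a contaminated region covers a fraction p of the surface and n locations are sampled at random, the detection probability is 1 - (1 - p)^n. The sketch below shows that calculation only; it is not the CJR methodology used for the clearance statements, and the area fractions are illustrative.

```python
import math

def n_for_detection(p_contaminated, confidence=0.95):
    """Smallest number of randomly placed samples giving `confidence` probability
    of hitting a contaminated area covering fraction `p_contaminated` of the
    surface (simple binomial approximation)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_contaminated))

for frac in (0.01, 0.05, 0.10):
    print(f"contaminated area fraction {frac:.0%}: n = {n_for_detection(frac)}")
# -> 299, 59 and 29 samples for 1%, 5% and 10% area fractions at 95% confidence.
```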

  16. Diameter optimization of VLS-synthesized ZnO nanowires, using statistical design of experiment

    International Nuclear Information System (INIS)

    Shafiei, Sepideh; Nourbakhsh, Amirhasan; Ganjipour, Bahram; Zahedifar, Mostafa; Vakili-Nezhaad, Gholamreza

    2007-01-01

    The possibility of diameter optimization of ZnO nanowires by using statistical design of experiment (DoE) is investigated. In this study, nanowires were synthesized using a vapor-liquid-solid (VLS) growth method in a horizontal reactor. The effects of six synthesis parameters (synthesis time, synthesis temperature, thickness of gold layer, distance between ZnO holder and substrate, mass of ZnO and Ar flow rate) on the average diameter of a ZnO nanowire were examined using the fractional factorial design (FFD) coupled with response surface methodology (RSM). Using a resolution III 2^(6-3) FFD, the main effects of the thickness of the gold layer, synthesis temperature and synthesis time were concluded to be the key factors influencing the diameter. Then Box-Behnken design (BBD) was exploited to create a response surface from the main factors. The total number of required runs for the DoE process is 25: 8 runs for FFD parameter screening and 17 runs for the response surface obtained by BBD. Three extra runs are done to confirm the predicted results
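
    A 2^(6-3) fraction screens six two-level factors in eight runs by generating three columns from interactions of a base 2^3 factorial. The sketch below builds such a design with the commonly used generators D = AB, E = AC, F = BC; the abstract does not state which generators the study used, so these are assumptions.

```python
import itertools
import numpy as np

# Base full factorial in three factors A, B, C (coded -1/+1), then generate the
# remaining three columns from the assumed generators D = AB, E = AC, F = BC,
# giving the eight runs of a 2^(6-3) resolution III screening design.
base = np.array(list(itertools.product([-1, 1], repeat=3)))
A, B, C = base[:, 0], base[:, 1], base[:, 2]
design = np.column_stack([A, B, C, A * B, A * C, B * C])

print("run  A  B  C  D  E  F")
for i, row in enumerate(design, 1):
    print(f"{i:3d} " + " ".join(f"{v:+d}" for v in row))
```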

  17. Morphology optimization of CCVD-synthesized multiwall carbon nanotubes, using statistical design of experiments

    International Nuclear Information System (INIS)

    Nourbakhsh, Amirhasan; Ganjipour, Bahram; Zahedifar, Mostafa; Arzi, Ezatollah

    2007-01-01

    The possibility of optimization of morphological features of multiwall carbon nanotubes (MWCNTs) using the statistical design of experiments (DoE) is investigated. In this study, MWCNTs were synthesized using a catalytic chemical vapour deposition (CCVD) method in a horizontal reactor using acetylene as the carbon source. The effects of six synthesis parameters (synthesis time, synthesis temperature, catalyst mass, reduction time, acetylene flow rate and hydrogen flow rate) on the average diameter and mean rectilinear length (MRL) of carbon nanotubes were examined using fractional-factorial design (FFD) coupled with response surface methodology (RSM). Using a resolution III 2^(6-3) FFD, the main effects of reaction temperature, hydrogen flow rate and chemical reduction time were concluded to be the key factors influencing the diameter and MRL of MWCNTs; then Box-Behnken design (BBD) was exploited to create a response surface from the main factors. The total number of required runs is 26: 8 runs are for FFD parameter screening, 17 runs are for the response surface obtained by the BBD, and the final run is used to confirm the predicted results
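
    The Box-Behnken design used for the response-surface stage places pairs of factors at ±1 on edge midpoints while the remaining factors sit at the centre, plus replicated centre points. A minimal constructor is sketched below; with three factors and five centre points it reproduces the 17-run count quoted in this and the preceding record (the centre-point count is an assumption consistent with that total).

```python
import itertools
import numpy as np

def box_behnken(k, n_center):
    """Box-Behnken design for k factors in coded units, plus centre points."""
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product([-1, 1], repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend([[0] * k] * n_center)  # replicated centre points
    return np.array(runs)

# Three factors with 5 centre points: 12 edge-midpoint runs + 5 centres = 17 runs.
print(box_behnken(3, 5))
```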

  18. Quasi-experimental study designs series-paper 1: introduction: two historical lineages.

    Science.gov (United States)

    Bärnighausen, Till; Røttingen, John-Arne; Rockers, Peter; Shemilt, Ian; Tugwell, Peter

    2017-09-01

    The objective of this study was to contrast the historical development of experiments and quasi-experiments and provide the motivation for a journal series on quasi-experimental designs in health research. A short historical narrative, with concrete examples, and arguments based on an understanding of the practice of health research and evidence synthesis. Health research has played a key role in developing today's gold standard for causal inference: the randomized, controlled, multiply blinded trial. Historically, allocation approaches developed from convenience and purposive allocation to alternate and, finally, to random allocation. This development was motivated both by concerns about manipulation in allocation and by statistical and theoretical developments demonstrating the power of randomization in creating counterfactuals for causal inference. In contrast to the sequential development of experiments, quasi-experiments originated at very different points in time, from very different scientific perspectives, and with frequent and long interruptions in their methodological development. Health researchers have only recently started to recognize the value of quasi-experiments for generating novel insights on causal relationships. While quasi-experiments are unlikely to replace experiments in generating the efficacy and safety evidence required for clinical guidelines and regulatory approval of medical technologies, quasi-experiments can play an important role in establishing the effectiveness of health care practice, programs, and policies. The papers in this series describe and discuss a range of important issues in utilizing quasi-experimental designs for primary research and quasi-experimental results for evidence synthesis. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Los Alamos Experimental Engineering Waste Burial Facility: design considerations and preliminary experimental plan

    International Nuclear Information System (INIS)

    DePoorter, G.L.

    1981-01-01

    The Experimental Engineered Waste Burial Facility is a field test site where generic experiments can be performed on several scales to get the basic information necessary to understand the processes occurring in low-level waste disposal facilities. The experiments include hydrological, chemical, mechanical, and biological factors. In order to separate these various factors in the experiments and to extrapolate the experimental results to actual facilities, experiments will be performed on several different scales

  20. Optimization and evaluation of clarithromycin floating tablets using experimental mixture design.

    Science.gov (United States)

    Uğurlu, Timucin; Karaçiçek, Uğur; Rayaman, Erkan

    2014-01-01

    The purpose of the study was to prepare and evaluate clarithromycin (CLA) floating tablets, using an experimental mixture design, for the treatment of Helicobacter pylori, enabled by prolonged gastric residence time and a controlled plasma level. Ten different formulations were generated, based on different molecular weights of hypromellose (HPMC K100, K4M, K15M), by using a simplex lattice design (a sub-class of mixture design) with Minitab 16 software. Sodium bicarbonate and anhydrous citric acid were used as gas-generating agents. Tablets were prepared by the wet granulation technique. All of the process variables were fixed. Results of cumulative drug release at the 8th hour (CDR 8th) were statistically analyzed to obtain the optimized formulation (OF). The optimized formulation, which gave a floating lag time lower than 15 s and a total floating time of more than 10 h, was analyzed and compared with the target for CDR 8th (80%). Good agreement was shown between predicted and actual values of CDR 8th, with a variation lower than 1%. The activity against H. pylori of the clarithromycin contained in the optimized formula was quantified using a well diffusion agar assay. Diameters of inhibition zones vs. log10 clarithromycin concentrations were plotted in order to obtain a standard curve and the clarithromycin activity.
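
    A simplex-lattice design spreads mixture points whose proportions are multiples of 1/m over the composition simplex. The sketch below enumerates a {3, 3} lattice, which yields exactly ten blends, consistent with the ten formulations reported; mapping the three components to the three HPMC grades is an assumption for illustration, not the paper's stated assignment.

```python
from itertools import product
from fractions import Fraction

def simplex_lattice(q, m):
    """All q-component mixtures whose proportions are multiples of 1/m
    (a {q, m} simplex-lattice design)."""
    pts = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:
            pts.append(tuple(Fraction(c, m) for c in combo))
    return pts

# Three polymer fractions (e.g. HPMC K100, K4M, K15M) on a {3, 3} lattice: 10 runs.
for pt in simplex_lattice(3, 3):
    print([str(f) for f in pt])
```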

  1. Conceptual design of blanket structures for fusion experimental reactor (FER)

    International Nuclear Information System (INIS)

    1984-03-01

    A conceptual design study for the in-vessel components, including the tritium breeding blanket, of FER has been carried out. The objective of this study is to obtain the engineering and technological data for selecting the reactor concept and for its construction through full and broad investigation. The design work covers the in-vessel components (such as the tritium breeding blanket, first wall, shield, divertor and blanket test module), the remote handling system and the tritium system. The designs of those components and systems are accomplished in consideration of their accommodation to the whole reactor system, and problems for further study are clarified. (author)

  2. Experimental design and analysis for piezoelectric circular actuators in flow control applications

    International Nuclear Information System (INIS)

    Mane, Poorna; Mossi, Karla; Bryant, Robert

    2008-01-01

    Flow control can lead to saving millions of dollars in fuel costs each year by making an aircraft more efficient. Synthetic jets, a device for active flow control, operate by introducing small amounts of energy locally to achieve non-local changes in the flow field with large performance gains. These devices consist of a cavity with an oscillating diaphragm that divides it into active and passive sides. The active side has a small opening where a jet is formed, while the passive side does not directly participate in the fluidic jet. Over the years, research has shown that synthetic jet behavior is dependent on the active diaphragm and the cavity design; hence the focus of this work. The performance of the synthetic jet is studied under various factors related to the diaphragm and the cavity geometry. Three diaphragms, manufactured from piezoelectric composites, were selected for this study: Bimorph, Thunder® and Lipca. The overall factors considered are the driving signal, voltage, frequency, cavity height, orifice size, and passive cavity pressure. Using the average maximum jet velocity as the response variable, these factors are individually studied for each actuator, and statistical analysis tools are used to select the factors relevant to the response variable. The factors are divided into two experimental fractional factorial design matrices, with five and four factors, respectively. Both experiments are chosen to be of resolution V, in which two-factor interactions are confounded with three-factor interactions. In the first experimental design, the results show that frequency is not a significant factor, while waveform is significant for all the actuators. In addition, the magnitude of the regression coefficients suggests that a model that includes the diaphragm as a factor may be possible. These results are valid within the ranges tested, that is, low frequencies with sawtooth and sine waveforms as driving signals. In the second experimental design, cavity dimensions are

  3. International Thermonuclear Experimental Reactor (ITER) neutral beam design

    International Nuclear Information System (INIS)

    Myers, T.J.; Brook, J.W.; Spampinato, P.T.; Mueller, J.P.; Luzzi, T.E.; Sedgley, D.W.

    1990-10-01

    This report discusses the following topics on ITER neutral beam design: ion dump; neutralizer and module gas flow analysis; vacuum system; cryogenic system; maintainability; power distribution; and system cost

  4. Experimental design with applications in management, engineering and the sciences

    CERN Document Server

    Berger, Paul D; Celli, Giovana B

    2018-01-01

    This text introduces and provides instruction on the design and analysis of experiments for a broad audience. Formed by decades of teaching, consulting, and industrial experience in the Design of Experiments field, this new edition contains updated examples, exercises, and situations covering the science and engineering practice. This text minimizes the amount of mathematical detail, while still doing full justice to the mathematical rigor of the presentation and the precision of statements, making the text accessible for those who have little experience with design of experiments and who need some practical advice on using such designs to solve day-to-day problems. Additionally, an intuitive understanding of the principles is always emphasized, with helpful hints throughout.

  5. Statistical Shape Analysis of the Human Ear Canal with Application to In-the-Ear Hearing Aid Design

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold

    2004-01-01

    This thesis is about the statistical shape analysis of the human ear canal with application to the mechanical design of in-the-ear hearing aids. Initially, it is described how a statistical shape model of the human ear canal is built based on a training set of laser-scanned ear impressions. A thin...

  6. Formulation and optimization of chronomodulated press-coated tablet of carvedilol by Box–Behnken statistical design

    Directory of Open Access Journals (Sweden)

    Satwara RS

    2012-08-01

    Rohan S Satwara, Parul K Patel. Department of Pharmaceutics, Babaria Institute of Pharmacy, Vadodara, Gujarat, India. Objective: The primary objective of the present investigation was to formulate and optimize chronomodulated press-coated tablets to deliver the antihypertensive carvedilol at an effective quantity predawn, when a blood pressure spike is typically observed in most hypertensive patients. Experimental work: Preformulation studies and drug-excipient compatibility studies were carried out for carvedilol and excipients. Core tablets (6 mm) containing carvedilol and 10-mm press-coated tablets were prepared by direct compression. The Box–Behnken experimental design was applied to these press-coated tablets (F1–F15), formulated with differing concentrations of rate-controlling polymers. Hydroxypropyl methyl cellulose K4M, ethyl cellulose, and K-carrageenan were used as rate-controlling polymers in the outer layer. These tablets were subjected to various precompression and postcompression tests. The optimized batch was derived both statistically (using the desirability function) and graphically (using Design Expert® 8; Stat-Ease Inc). Tablets formulated using the optimized formulas were then evaluated for lag time and in vitro dissolution. Results and discussion: Results of preformulation studies were satisfactory. No interaction was observed between carvedilol and excipients by ultraviolet, Fourier transform infrared spectroscopy, and dynamic light scattering analysis. The results of precompression studies and postcompression studies were within limits. The varying lag time and percent cumulative carvedilol release after 8 h were optimized to obtain a formulation that offered a release profile with a 6 h lag time, followed by complete carvedilol release after 8 h. The results showed no significant bias between predicted response and actual response for the optimized formula. Conclusion: Bedtime dosing of chronomodulated press-coated tablets may offer a

  7. A Multifunctional Public Lighting Infrastructure, Design and Experimental Test

    OpenAIRE

    Marco Beccali; Valerio Lo Brano; Marina Bonomolo; Paolo Cicero; Giacomo Corvisieri; Marco Caruso; Francesco Gamberale

    2017-01-01

    Nowadays, the installation of efficient lighting sources and Information and Communications Technologies can provide economic benefits, energy efficiency, and visual comfort. More advantages can be derived if the public lighting infrastructure is integrated with a smart grid. This study presents an experimental multifunctional infrastructure for public lighting, installed in Palermo. The system is able to provide smart lighting functions (hotspot Wi-Fi, video surveillance, car and pedes...

  8. Statistical design of personalized medicine interventions: The Clarification of Optimal Anticoagulation through Genetics (COAG) trial

    Directory of Open Access Journals (Sweden)

    Gage Brian F

    2010-11-01

    Background: There is currently much interest in pharmacogenetics: determining variation in genes that regulate drug effects, with a particular emphasis on improving drug safety and efficacy. The ability to determine such variation motivates the application of personalized drug therapies that utilize a patient's genetic makeup to determine a safe and effective drug at the correct dose. To ascertain whether a genotype-guided drug therapy improves patient care, a personalized medicine intervention may be evaluated within the framework of a randomized controlled trial. The statistical design of this type of personalized medicine intervention requires special considerations: the distribution of relevant allelic variants in the study population, and whether the pharmacogenetic intervention is equally effective across subpopulations defined by allelic variants. Methods: The statistical design of the Clarification of Optimal Anticoagulation through Genetics (COAG) trial serves as an illustrative example of a personalized medicine intervention that uses each subject's genotype information. The COAG trial is a multicenter, double-blind, randomized clinical trial that will compare two approaches to the initiation of warfarin therapy: genotype-guided dosing, the initiation of warfarin therapy based on algorithms using clinical information and genotypes for polymorphisms in CYP2C9 and VKORC1; and clinical-guided dosing, the initiation of warfarin therapy based on algorithms using only clinical information. Results: We determine an absolute minimum detectable difference of 5.49% based on an assumed 60% population prevalence of zero or multiple genetic variants in either CYP2C9 or VKORC1 and an assumed 15% relative effectiveness of genotype-guided warfarin initiation for those with zero or multiple genetic variants. Thus we calculate a sample size of 1238 to achieve a power level of 80% for the primary outcome. We show that reasonable departures from these
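
    For a two-arm comparison of proportions, the usual normal-approximation sample-size formula is n per arm = (z_{1-α/2} + z_β)² [p₁(1-p₁) + p₂(1-p₂)] / (p₁ - p₂)². The sketch below applies it with illustrative proportions built around the quoted 5.49% minimum detectable difference; it is not the COAG trial's actual calculation, which depends on the trial's primary outcome definition and the assumed variant prevalence.

```python
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for comparing two independent proportions
    (normal approximation, two-sided test)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (za + zb) ** 2 * var / (p1 - p2) ** 2

# Illustrative baseline of 50% with an absolute difference of 5.49%; placeholder
# values only, chosen to show the mechanics of the formula.
print(round(n_per_arm(0.50, 0.50 + 0.0549)), "subjects per arm")
```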

  9. Efficient Bayesian experimental design for contaminant source identification

    Science.gov (United States)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal design of sampling well locations and the identification of source parameters of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.
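
    The design criterion, expected relative entropy between prior and posterior, can be estimated by plain Monte Carlo for a toy problem: simulate data at a candidate location, compute the posterior on a parameter grid, and average the Kullback-Leibler divergence over simulated data sets. The forward model, prior and noise level below are hypothetical stand-ins for the paper's transport surrogate, so this is a sketch of the criterion, not of the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(theta, d):
    # Toy surrogate: predicted concentration at location d for source strength theta.
    return theta * np.exp(-0.5 * d)

sigma = 0.1                            # measurement noise standard deviation
thetas = np.linspace(0.5, 2.0, 200)    # parameter grid, uniform prior
prior = np.full_like(thetas, 1 / len(thetas))

def expected_info_gain(d, n_y=400):
    """Monte Carlo estimate of the expected relative entropy (prior -> posterior)
    for a single measurement taken at design point d."""
    gain = 0.0
    for _ in range(n_y):
        theta_true = rng.choice(thetas, p=prior)
        y = forward(theta_true, d) + sigma * rng.normal()
        like = np.exp(-0.5 * ((y - forward(thetas, d)) / sigma) ** 2)
        post = like * prior
        post /= post.sum()
        nz = post > 0
        gain += np.sum(post[nz] * np.log(post[nz] / prior[nz]))
    return gain / n_y

candidates = [0.0, 1.0, 2.0, 4.0]
scores = {d: expected_info_gain(d) for d in candidates}
print("best location:", max(scores, key=scores.get), scores)
```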

  10. Experimental Evaluation of Three Designs of Electrodynamic Flexural Transducers

    Directory of Open Access Journals (Sweden)

    Tobias J. R. Eriksson

    2016-08-01

    Three designs for electrodynamic flexural transducers (EDFTs) for air-coupled ultrasonics are presented and compared. An all-metal housing was used for robustness, which makes the designs more suitable for industrial applications. The housing is designed such that there is a thin metal plate at the front, with a fundamental flexural vibration mode at ∼50 kHz. By using a flexural resonance mode, good coupling to the load medium was achieved without the use of matching layers. The front radiating plate is actuated electrodynamically by a spiral coil inside the transducer, which produces an induced magnetic field when an AC current is applied to it. The transducers operate without the use of piezoelectric materials, which can simplify manufacturing and prolong the lifetime of the transducers, as well as open up possibilities for high-temperature applications. The results show that different designs perform best for the generation and reception of ultrasound. All three designs produced large acoustic pressure outputs, with a recorded sound pressure level (SPL) above 120 dB at a 40 cm distance from the highest-output transducer. The sensitivity of the transducers was low, however, with a single-shot signal-to-noise ratio (SNR) of ≃15 dB in transmit-receive mode, with transmitter and receiver 40 cm apart.

  11. Polypropylene/Aspen/liquid polybutadiene composites: maximization of impact strength, tensile strength and modulus by statistical experimental design

    Czech Academy of Sciences Publication Activity Database

    Kokta, B. V.; Fortelný, Ivan; Kruliš, Zdeněk; Horák, Zdeněk; Michálková, Danuše

    2005-01-01

    Roč. 99, - (2005), s. 10-11 ISSN 0009-2770. [International Conference on Polymeric Materials in Automotive, Slovak Rubber Conference /17./, 10.5.2005-12.5.2005, Bratislava] Institutional research plan: CEZ:AV0Z40500505 Keywords: polypropylene * Aspen-PP composite Subject RIV: CD - Macromolecular Chemistry

  12. LPCVD silicon-rich silicon nitride films for applications in micromechanics, studied with statistical experimental design

    NARCIS (Netherlands)

    Gardeniers, Johannes G.E.; Tilmans, H.A.C.; Tilmans, H.A.C.; Visser, C.C.G.

    A systematic investigation of the influence of the process parameters temperature, pressure, total gas flow, and SiH2Cl2:NH3 gas flow ratio on the residual stress, the refractive index, and its nonuniformity across a wafer, the growth rate, the film thickness nonuniformity across a wafer, and the

  13. Factorial experimental design for recovering heavy metals from sludge with ion-exchange resin

    International Nuclear Information System (INIS)

    Lee, I.H.; Kuan, Y.-C.; Chern, J.-M.

    2006-01-01

    Wastewaters containing heavy metals are usually treated by the chemical precipitation method in Taiwan. This method can remove heavy metals from wastewaters efficiently, but the resultant heavy-metal sludge is classified as hazardous solid waste and becomes another environmental problem. If we can remove heavy metals from the sludge, it becomes non-hazardous waste and the treatment cost can be greatly reduced. This study aims at using ion-exchange resin to remove heavy metals such as copper, zinc, cadmium, and chromium from sludge generated by a PCB manufacturing plant. Factorial experimental design methodology was used to study the heavy metal removal efficiency. The total metal concentrations in the sludge, resin, and solution phases were measured after 30 min of reaction with varying leaching agents (citric acid and nitric acid), ion-exchange resins (Amberlite IRC-718 and IR-120), and temperatures (50 and 70°C). The experimental results and statistical analysis show that a stronger leaching acid and a higher temperature both favor lower heavy metal residues in the sludge. Two-factor and even three-factor interaction effects on the heavy metal sorption in the resin phase are not negligible. The ion-exchange resin plays an important role in the sludge extraction and metal recovery. Empirical regression models were also obtained and used to predict the heavy metal profiles with satisfactory results

  14. Investigation on gas medium parameters for an ArF excimer laser through orthogonal experimental design

    Science.gov (United States)

    Song, Xingliang; Sha, Pengfei; Fan, Yuanyuan; Jiang, R.; Zhao, Jiangshan; Zhou, Yi; Yang, Junhong; Xiong, Guangliang; Wang, Yu

    2018-02-01

    Due to the complex kinetics of formation and loss mechanisms, such as ion-ion recombination, neutral-species harpoon reactions, excited-state quenching and photon absorption, as well as their interactions, the performance of an excimer laser varies greatly with the gas medium parameters. Therefore, the effects of gas composition and total gas pressure on excimer laser performance attract continual research. In this work, orthogonal experimental design (OED) is used to investigate quantitative and qualitative correlations between output laser energy characteristics and gas medium parameters for an ArF excimer laser operated with a plano-plano optical resonator. Optimized output laser energy with good pulse-to-pulse stability can be obtained by proper selection of the gas medium parameters, which makes the most of the ArF excimer laser device. A simple and efficient method for gas medium optimization is proposed and demonstrated experimentally, providing a global and systematic solution. Detailed statistical analysis yields the significance ranking of the relevant parameter factors and the optimized gas medium composition. Compared with the conventional route of varying a single gas parameter at a time, this paper presents a more comprehensive way of considering multiple variables simultaneously, which seems promising for striking an appropriate balance among various complicated parameters in power-scaling studies of an excimer laser.
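
    Orthogonal experimental design typically assigns factors to the columns of a standard orthogonal array such as L9(3^4) and ranks their influence by range analysis of the level means. The sketch below shows that mechanic with hypothetical factor names and pulse-energy responses; the paper's actual factors, levels and data are not reproduced.

```python
import numpy as np

# Standard L9(3^4) orthogonal array (levels coded 0/1/2), as commonly tabulated.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Hypothetical pulse-energy responses (mJ) for the nine runs; placeholders only.
y = np.array([8.2, 9.1, 8.7, 9.5, 10.2, 9.0, 8.8, 9.9, 9.4])

# Range analysis: a larger spread of level means indicates a more influential factor.
factors = ["F2 fraction", "Ar fraction", "total pressure", "(unused column)"]
for col, name in enumerate(factors):
    level_means = [y[L9[:, col] == lev].mean() for lev in (0, 1, 2)]
    spread = max(level_means) - min(level_means)
    print(f"{name:16s} level means = {np.round(level_means, 2)}  range = {spread:.2f}")
```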

  15. Design of an experimental incinerator for alpha waste

    International Nuclear Information System (INIS)

    Warren, J.H.

    1979-08-01

    An electrically heated controlled-air two-stage incinerator has been designed for burning small volumes (5 kg/h) of solid wastes. Distinguishing features of the design are compactness, relatively light weight, and ease of assembly made possible by using prefabricated ceramic components to form two combustion chambers surrounded by packed fiber insulation within a steel case. Electric girdle heaters around the two combustion chambers provide 600 to 1000°C. These temperatures combined with controlled air give minimum ash entrainment and long combustion gas residence times to yield approx. 10⁹ off-gas decontamination factors with conventional off-gas cleaning equipment. After decommissioning, the design allows for ease of disassembly and convenient disposal of the ceramic components. 24 figures, 1 table

  16. Design of an experimental incinerator for alpha waste

    Energy Technology Data Exchange (ETDEWEB)

    Warren, J.H.

    1979-08-01

    An electrically heated controlled-air two-stage incinerator has been designed for burning small volumes (5 kg/h) of solid wastes. Distinguishing features of the design are compactness, relatively light weight, and ease of assembly made possible by using prefabricated ceramic components to form two combustion chambers surrounded by packed fiber insulation within a steel case. Electric girdle heaters around the two combustion chambers provide 600 to 1000°C. These temperatures combined with controlled air give minimum ash entrainment and long combustion gas residence times to yield approx. 10⁹ off-gas decontamination factors with conventional off-gas cleaning equipment. After decommissioning, the design allows for ease of disassembly and convenient disposal of the ceramic components. 24 figures, 1 table.

  17. Robust transceiver design for reciprocal M × N interference channel based on statistical linearization approximation

    Science.gov (United States)

    Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad

    2017-12-01

    This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC), under imperfect channel state information (CSI). In this paper, two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has, respectively, M and N antennas and IC operates in a time division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of signal-to-interference-plus-noise ratio (SINR). On the other hand, the second algorithm tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. Taylor expansion is exploited to approximate the effect of CSI imperfection on mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate sum rate performance of the proposed algorithms and the advantage of incorporating variation minimization into the transceiver design.

  18. On Designing Multicore-Aware Simulators for Systems Biology Endowed with OnLine Statistics

    Directory of Open Access Journals (Sweden)

    Marco Aldinucci

    2014-01-01

    The paper focuses on enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool that performs the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, turning into big data that should be analysed by statistical and data-mining tools. In the considered approach, the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and effectiveness of the online analysis in capturing biological systems behavior on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems, exhibiting multistable and oscillatory behavior, are used as a testbed.

  19. On designing multicore-aware simulators for systems biology endowed with OnLine statistics.

    Science.gov (United States)

    Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo

    2014-01-01

    The paper focuses on enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool that performs the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, turning into big data that should be analysed by statistical and data-mining tools. In the considered approach, the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and effectiveness of the online analysis in capturing biological systems behavior on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems, exhibiting multistable and oscillatory behavior, are used as a testbed.

  20. Sensitivity study of experimental measures for the nuclear liquid-gas phase transition in the statistical multifragmentation model

    Science.gov (United States)

    Lin, W.; Ren, P.; Zheng, H.; Liu, X.; Huang, M.; Wada, R.; Qu, G.

    2018-05-01

    The experimental measures of the multiplicity derivatives - the moment parameters, the bimodal parameter, the fluctuation of the maximum fragment charge number (normalized variance of Zmax, or NVZ), the Fisher exponent (τ), and the Zipf law parameter (ξ) - are examined to search for the liquid-gas phase transition in nuclear multifragmentation processes within the framework of the statistical multifragmentation model (SMM). The sensitivities of these measures are studied. All these measures predict a critical signature at or near the critical point, both for the primary and the secondary fragments. Among these measures, the total multiplicity derivative and the NVZ provide accurate measures of the critical point from the final cold fragments as well as from the primary fragments. The present study will provide a guide for future experiments and analyses in the study of the nuclear liquid-gas phase transition.

  1. Conceptual design study of fusion experimental reactor (FY 86 FER)

    International Nuclear Information System (INIS)

    Kobayashi, Takeshi; Yamada, Masao; Mizoguchi, Tadanori

    1987-09-01

    This report describes the results of the investigation of critical issues in the FY 86 FER reactor configuration/structure design. Accuracy evaluation of the shielding calculation and crack-growth prediction for the first wall and divertor based on elastic-plastic fracture mechanics were performed. Further, optimization of the shield configuration, the graphite first-wall armor and the flexibility of the reactor were investigated to support future design work. The feasibility of innovative ideas was also examined, such as the ripple insert effect and the application of shape memory alloys. (author)

  2. Box-Behnken statistical design to optimize thermal performance of energy storage systems

    Science.gov (United States)

    Jalalian, Iman Joz; Mohammadiun, Mohammad; Moqadam, Hamid Hashemi; Mohammadiun, Hamid

    2018-05-01

    Latent heat thermal storage (LHTS) is a technology that can help to reduce energy consumption for cooling applications, where the cold is stored in phase change materials (PCMs). In the present study, a comprehensive theoretical and experimental investigation is performed on an LHTS system containing RT25 as the phase change material (PCM). Process optimization of the experimental conditions (inlet air temperature and velocity, and number of slabs) was carried out by means of the Box-Behnken design (BBD) of response surface methodology (RSM). Two parameters (cooling time and COP value) were chosen as the responses. Both responses were significantly influenced by the combined effect of inlet air temperature with velocity and number of slabs. Simultaneous optimization was performed on the basis of the desirability function to determine the optimal conditions for the cooling time and COP value. The maximum cooling time (186 min) and COP value (6.04) were found at the optimum process conditions, i.e., an inlet temperature of 32.5, an air velocity of 1.98 and a slab number of 7.
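
    Simultaneous optimization via the desirability function maps each response onto [0, 1] and maximizes their geometric mean. The sketch below applies Derringer-Suich "larger is better" desirabilities to two toy response models for cooling time and COP; the fitted Box-Behnken surfaces, factor ranges and units from the study are not reproduced, so every number here is a placeholder.

```python
import numpy as np

def desir_max(y, lo, hi, s=1.0):
    """Derringer-Suich 'larger is better' desirability: 0 below lo, 1 above hi."""
    d = (y - lo) / (hi - lo)
    return np.clip(d, 0, 1) ** s

# Hypothetical response models over (inlet temperature T, air velocity v, slabs n);
# stand-ins for the paper's fitted response surfaces.
def cooling_time(x):  # minutes
    T, v, n = x
    return 150 + 20 * n / 7 - 15 * abs(v - 2) - 10 * abs(T - 32) / 5

def cop(x):
    T, v, n = x
    return 5 + 0.8 * (1 - abs(v - 2)) + 0.2 * n / 7

best, best_D = None, -1.0
for T in np.linspace(28, 36, 9):
    for v in np.linspace(1.0, 3.0, 9):
        for n in (5, 6, 7):
            x = (T, v, n)
            # Overall desirability: geometric mean of the two individual scores.
            D = np.sqrt(desir_max(cooling_time(x), 120, 190)
                        * desir_max(cop(x), 4.0, 6.5))
            if D > best_D:
                best, best_D = x, D
print("optimum (T, v, slabs):", best, " overall desirability:", round(best_D, 3))
```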

  3. Box-Behnken statistical design to optimize thermal performance of energy storage systems

    Science.gov (United States)

    Jalalian, Iman Joz; Mohammadiun, Mohammad; Moqadam, Hamid Hashemi; Mohammadiun, Hamid

    2017-11-01

    Latent heat thermal storage (LHTS) is a technology that can help to reduce energy consumption for cooling applications, where the cold is stored in phase change materials (PCMs). In the present study, a comprehensive theoretical and experimental investigation is performed on an LHTS system containing RT25 as the phase change material (PCM). Process optimization of the experimental conditions (inlet air temperature and velocity, and number of slabs) was carried out by means of the Box-Behnken design (BBD) of response surface methodology (RSM). Two parameters (cooling time and COP value) were chosen as the responses. Both responses were significantly influenced by the combined effect of inlet air temperature with velocity and number of slabs. Simultaneous optimization was performed on the basis of the desirability function to determine the optimal conditions for the cooling time and COP value. The maximum cooling time (186 min) and COP value (6.04) were found at the optimum process conditions, i.e., an inlet temperature of 32.5, an air velocity of 1.98 and a slab number of 7.

  4. Summary of the experimental multi-purpose very high temperature gas cooled reactor design

    International Nuclear Information System (INIS)

    1984-12-01

    The report presents the design of the Multi-purpose Very High Temperature Gas Cooled Reactor (the Experimental VHTR) based on the second-stage detailed design, which was completed in March 1984, in the form of "An application for a reactor construction permit, Appendix 8". The Experimental VHTR is designed to satisfy the design specifications of 50 MW reactor thermal output and 950°C reactor outlet temperature. The adequacy of the design is also checked by the safety analysis. The planning of the plant system and safety is summarized, covering the safety design requirements and conformance with them, seismic design and plant arrangement. For each system of the Experimental VHTR, the design basis, design data and components are described in order. (author)

  5. Creativity in Advertising Design Education: An Experimental Study

    Science.gov (United States)

    Cheung, Ming

    2011-01-01

    Have you ever thought about why qualities whose definitions are elusive, such as those of a sunset or a half-opened rose, affect us so powerfully? According to de Saussure (Course in general linguistics, 1983), the making of meanings is closely related to the production and interpretation of signs. All types of design, including advertising…

  6. Experimental design applied to the optimization and partial ...

    African Journals Online (AJOL)

    The objective of this work was to optimize the medium composition for maximum pectin-methylesterase (PME) production from a newly isolated strain of Penicillium brasilianum by submerged fermentation. A Plackett-Burman design was first used for the screening of the most important factors, followed by a 2³ full ...

  7. Tokamak experimental power reactor conceptual design. Volume II

    International Nuclear Information System (INIS)

    1976-08-01

    Volume II contains the following appendices: (1) summary of EPR design parameters, (2) impurity control, (3) plasma computational models, (4) structural support system, (5) materials considerations for the primary energy conversion system, (6) magnetics, (7) neutronics penetration analysis, (8) first wall stress analysis, (9) enrichment of isotopes of hydrogen by cryogenic distillation, and (10) noncircular plasma considerations

  8. An Experimental Verification of morphology of ibuprofen crystals from CAMD designed solvent

    DEFF Research Database (Denmark)

    Karunanithi, Arunprakash T.; Acquah, Charles; Achenie, Luke E.K.

    2007-01-01

    of crystals formed from solvents, necessitates additional experimental verification steps. In this work we report the experimental verification of crystal morphology for the case study, solvent design for ibuprofen crystallization, presented in Karunanithi et al. [2006. A computer-aided molecular design...

  9. Providing guidance in virtual lab experimentation : the case of an experiment design tool

    NARCIS (Netherlands)

    Efstathiou, Charalampos; Hovardas, Tasos; Xenofontos, Nikoletta A.; Zacharia, Zacharias C.; de Jong, Ton A.J.M.; Anjewierden, Anjo; van Riesen, Siswa A.N.

    2018-01-01

    The present study employed a quasi-experimental design to assess a computer-based tool, which was intended to scaffold the task of designing experiments when using a virtual lab for the process of experimentation. In particular, we assessed the impact of this tool on primary school students’

  10. The design status of CSNS experimental control system

    International Nuclear Information System (INIS)

    Jian Zhuang; Yuanping Chu; Dapeng Jin; Yuqian Liu; Yinhong Zhang; Zhuoyu Zhang; Kejun Zhu; Libin Ding; Lei Hu; Jiajie Li; Yali Liu

    2012-01-01

    To meet the increasing demand from the user community, China decided to build a world-class spallation neutron source, called CSNS (China Spallation Neutron Source). It can provide users with a neutron scattering platform with high flux, wide wavelength range and high efficiency. CSNS construction is expected to start in 2011 and will last 6.5 years. The control system of CSNS is divided into an accelerator control system and an experimental control system. The CSNS experimental control system is based on the EPICS architecture, offering device operation and device debugging interfaces, communication between devices, environment monitoring, machine and personnel protection, an interface to the accelerator control system, overall system monitoring and database services. The control system is divided into four parts: the front control layer; the local and global control layer based on EPICS; database and network services; and the others. The front control layer is based on YOKOGAWA PLCs and other controllers. The local and global control layer, based on EPICS, provides all system control and information exchange. Embedded PLCs, such as the YOKOGAWA RP61, are to be used as communication nodes between the front layer and EPICS. The database service provides system configuration and historical data; from the experience of BESIII, MySQL is an option. The system will be developed in Dongguan, Guangdong province, and in Beijing, so a VPN will be used to support development. At present, a total of nine people are working on this system. (authors)

  11. Repair/maintenance design for tokamak experimental fusion reactor

    International Nuclear Information System (INIS)

    1978-10-01

    The repair and maintenance design for JXFR has been studied. The reactor is divided into eight modules so that a damaged module alone can be separated from the other modules and transferred from the reactor room to a repair shop. The design work covers the overhaul procedure, dismounting equipment (overhead cranes, an automatic welder/cutter and remote handling equipment), the module transport system (module mounting carriages and a rotating carriage), repair equipment for the blanket, earthquake-proof analysis of the reactor, the reactor room structure, the repair shop layout, management of radioactive wastes, and the time and number of persons required for overhaul. Though the repair and maintenance system is almost complete, problems remain for further study in the joints of the blanket cooling piping, the automatic welder/cutter, and the earthquake-proof strength during reactor disassembly. More detailed studies and R and D are necessary for engineering completion. (author)

  12. Conceptual design of a continuous fluorinator experimental facility (CFEF)

    International Nuclear Information System (INIS)

    Lindauer, R.B.; Hightower, J.R. Jr.

    1976-07-01

    A conceptual design has been made of a circulating salt system, consisting principally of a fluorinator and reduction column, to demonstrate uranium removal from the salt by fluorination. The fluorinator vessel wall will be protected from fluorine corrosion by a frozen salt film. The circulating salt in the fluorinator will be kept molten by electrical heating that simulates fission product heating in an actual MSBR system

  13. On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs.

    Science.gov (United States)

    Sánchez, M S; Sarabia, L A; Ortiz, M C

    2012-11-19

    Experimental designs for a given task should be selected on the basis of the problem being solved and of criteria that measure their quality. There are several such criteria because there are several aspects to be taken into account when making a choice. The most used criteria are probably the so-called alphabetical optimality criteria (for example, the A-, E-, and D-criteria related to the joint estimation of the coefficients, or the I- and G-criteria related to the prediction variance). Selecting a proper design to solve a problem implies finding a balance among these several criteria that measure the performance of the design in different aspects. Technically this is a problem of multi-criteria optimization, which can be tackled from different viewpoints. The approach presented here addresses the problem in its real vector nature, so that ad hoc experimental designs are generated with an algorithm based on evolutionary algorithms to find the Pareto-optimal front. There is no theoretical limit to the number of criteria that can be studied and, contrary to other approaches, not just one experimental design is computed but a set of experimental designs, all of them with the property of being Pareto-optimal in the criteria needed by the user. Besides, the use of an evolutionary algorithm makes it possible to search in both continuous and discrete domains and avoids the need for a set of candidate points, usual in exchange algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.
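
    A design is Pareto-optimal when no other candidate is at least as good on every criterion and strictly better on at least one. A minimal non-dominated filter is sketched below for hypothetical two-criterion scores (both oriented so that smaller is better); the paper's evolutionary search for the full front is not reproduced here.

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated rows; every criterion is to be minimized."""
    s = np.asarray(scores, float)
    keep = []
    for i, row in enumerate(s):
        # Row i is dominated if some row is <= in all criteria and < in at least one.
        dominated = np.any(np.all(s <= row, axis=1) & np.any(s < row, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (D-criterion, I-criterion) scores for eight candidate designs.
scores = [(1.0, 3.2), (0.8, 3.5), (1.1, 2.9), (0.7, 4.0),
          (0.9, 3.0), (1.3, 2.8), (0.85, 3.1), (1.2, 3.6)]
print("Pareto-optimal design indices:", pareto_front(scores))
```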

  14. Conceptual design Fusion Experimental Reactor (FER/ITER)

    International Nuclear Information System (INIS)

    Uehara, Kazuya; Nagashima, Takashi; Ikeda, Yoshitaka

    1991-11-01

    This report describes a conceptual design of the Lower Hybrid Wave (LH) system for FER and ITER. At JAERI, the conceptual design of the LH system for FER has been performed over the past three years in parallel with that for ITER, and the two designs share common parts. The physics requirements on the LH system are the saving of volt·seconds in the current start-up phase and current drive in the boundary region. A frequency of 5 GHz is chosen mainly to avoid absorption by α particles and for the availability of electron tube development. Seventy-two klystrons (FER) and one hundred klystrons (ITER) are necessary to inject 30 MW (FER) and 45-50 MW (ITER) of rf power into the plasma, using 0.7-0.8 MW per klystron. The launching system is of the multi-junction type, and the rf spectrum must be as sharp as possible with high directivity to improve the current drive efficiency. One port (FER) and two ports (ITER) are used, and the injection direction is horizontal, a choice based on ray-tracing analysis and better coupling of the LH wave. The transmission line is an over-sized waveguide with low rf loss. (author)

  15. Gladstone-Dale constant for CF4. [experimental design

    Science.gov (United States)

    Burner, A. W., Jr.; Goad, W. K.

    1980-01-01

    The Gladstone-Dale constant, which relates the refractive index to density, was measured for CF4 by counting fringes of a two-beam interferometer, one beam of which passes through a cell containing the test gas. The experimental approach and sources of systematic and imprecision errors are discussed. The constant for CF4 was measured at several wavelengths in the visible region of the spectrum. A value of 0.122 cu cm/g with an uncertainty of plus or minus 0.001 cu cm/g was determined for use in the visible region. A procedure for noting the departure of the gas density from the ideal-gas law is discussed.

  16. Experimental evaluation of human-system interaction on alarm design

    International Nuclear Information System (INIS)

    Huang, F.-H.; Lee, Y.-L.; Hwang, S.-L.; Yenn, T.-C.; Yu, Y.-C.; Hsu, C.-C.; Huang, H.-W.

    2007-01-01

    This study evaluates the practicability of an automatic-reset alarm system in the Fourth Nuclear Power Plant (FNPP) of Taiwan. The features of the auto-reset alarm system include dynamic prioritization of all alarm signals and fast system reset. Two experiments were conducted to evaluate the effect of automatic versus manual reset on operation time, situational awareness (SA), task load index (TLX), and subjective ratings. All participants, both experts and novices, performed the Load Rejection procedure on an alarm system simulator. The experimental results imply that the auto-reset alarm system may be applied in an advanced control room under the Load Rejection procedure, because operation times were reduced for all participants and situational awareness improved for the novices. Nevertheless, to ensure operating safety in the FNPP, the effects of the auto-reset alarm system in other procedures and special situations still need to be tested in the near future.

  17. An experimental study of noise in mid-infrared quantum cascade lasers of different designs

    Science.gov (United States)

    Schilt, Stéphane; Tombez, Lionel; Tardy, Camille; Bismuto, Alfredo; Blaser, Stéphane; Maulini, Richard; Terazzi, Romain; Rochat, Michel; Südmeyer, Thomas

    2015-04-01

    We present an experimental study of noise in mid-infrared quantum cascade lasers (QCLs) of different designs. By quantifying the high degree of correlation between fluctuations of the optical frequency and of the voltage between the QCL terminals, we show that electrical noise is a powerful and simple means to study noise in QCLs. Based on this outcome, we investigated the electrical noise in a large set of 22 QCLs emitting in the range of 7.6-8 μm and consisting of both ridge-waveguide and buried-heterostructure (BH) lasers with different geometrical designs and operation parameters. From a statistical analysis of variance of the data, we assessed that ridge-waveguide lasers have lower noise than BH lasers. Our physical interpretation is that additional current leakages or spare injection channels occur at the interface between the active region and the lateral insulator in the BH geometry, which induces extra noise. In addition, Schottky-type contacts occurring at the interface between the n-doped regions and the lateral insulator, i.e., iron-doped InP, are also believed to be a potential source of additional noise in some BH lasers, as evidenced by the slight reduction in integrated voltage noise at the laser threshold in several BH-QCLs.

  18. A new dietary model to study colorectal carcinogenesis: experimental design, food preparation, and experimental findings.

    Science.gov (United States)

    Rozen, P; Liberman, V; Lubin, F; Angel, S; Owen, R; Trostler, N; Shkolnik, T; Kritchevsky, D

    1996-01-01

    Experimental dietary studies of human colorectal carcinogenesis are usually based on the AIN-76A diet, which is dissimilar to human food in source, preparation, and content. The aims of this study were to examine the feasibility of preparing and feeding rats the diet of a specific human population at risk for colorectal neoplasia and to determine whether changes in the colonic morphology and metabolic contents would differ from those resulting from a standard rat diet. The mean daily food intake composition of a previously evaluated adenoma patient case-control study was used for the "human adenoma" (HA) experimental diet. Foods were prepared as for usual human consumption and processed by dehydration to the physical characteristics of an animal diet. Sixty-four female Sprague-Dawley rats were randomized and fed ad libitum the HA or the AIN-76A diet. Every eight weeks, eight rats from each group were sacrificed, and the colons and contents were examined. Analysis of the prepared food showed no significant deleterious changes; food intake and weight gain were similar in both groups. Compared with the controls, the colonic contents of rats fed the HA diet contained significantly less calcium, concentrations of neutral sterols, total lipids, and cholic and deoxycholic acids were increased, and there were no colonic histological changes other than significant epithelial hyperproliferation. This initial study demonstrated that the HA diet can be successfully processed for feeding to experimental animals and is acceptable and adequate for growth but induces significant metabolic and hyperproliferative changes in the rat colon. This dietary model may be useful for studies of human food, narrowing the gap between animal experimentation and human nutritional research.

  19. Designing a Course in Statistics for a Learning Health Systems Training Program

    Science.gov (United States)

    Samsa, Gregory P.; LeBlanc, Thomas W.; Zaas, Aimee; Howie, Lynn; Abernethy, Amy P.

    2014-01-01

    The core pedagogic problem considered here is how to effectively teach statistics to physicians who are engaged in a "learning health system" (LHS). This is a special case of a broader issue--namely, how to effectively teach statistics to academic physicians for whom research--and thus statistics--is a requirement for professional…

  20. Development and Validation of a Rubric for Diagnosing Students’ Experimental Design Knowledge and Difficulties

    Science.gov (United States)

    Dasgupta, Annwesa P.; Anderson, Trevor R.

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students’ responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students’ experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  1. STATISTICAL MODELLING OF FDC AND RETURN PERIODS TO CHARACTERISE QDF AND DESIGN THRESHOLD OF HYDROLOGICAL EXTREMES

    Directory of Open Access Journals (Sweden)

    Charles Onyutha

    2012-01-01

    Full Text Available In this paper, firstly, flow duration curves (FDCs) for hydrological extremes were calibrated for a range of aggregation levels and seasons to provide compressed statistical information for water resources management at selected temporal scales and seasons. Secondly, instead of the common approach of using return periods, T (years), for deriving discharge duration frequency (QDF) relationships, the method of using exceedance frequencies, E (%), was introduced so as to answer important questions such as: what is the streamflow at a given aggregation level and selected E (%)? Thirdly, the concept of the estimated design threshold (EDT) was introduced and proposed for consideration in risk analysis for the design of water resources structures. This study was based on the long daily discharge record for the period 1950 - 2008 at station 1EF01 in Kenya, on the Nzoia river, with a watershed area of 12,676 km² located in the north-eastern quadrant of the Lake Victoria Nile Sub Basin. In the statistical modelling of FDCs and T (years), suitable extreme value distributions (EVD) were selected and calibrated to fit nearly independent high flows and low flows. The FDCs and T-curves were used to determine the EDT. The FDCs were used to model the QDF relationships. To derive QDF relationships of hydrological extremes for a given range of aggregation levels, extreme value analysis (EVA) was carried out and suitable EVD selected. Next came the calibration of the EVD parameters and analysis of the relationship between the model parameters and aggregation levels. Finally, smooth mathematical relationships were derived using small but acceptable modifications to the model parameters. Such constructed QDF relationships can be used in various applications to estimate cumulative volumes of water available during droughts or floods at various aggregation levels or E (%) of hydrological extremes. The EDT, when obtained for a range of aggregation levels, can also be used to understand

  3. Design Engineering Development of Experimental MVC Desalination Installation (Part I)

    International Nuclear Information System (INIS)

    Geni Rina Sunaryo; Puradwi Ismu Wahyono

    2003-01-01

    The design of the evaporator/condenser unit of the MVC desalination plant has been made in 4 modules. Each module consists of 29 stainless steel tubes, with a distance of 1.7 m between the plates. The 4 modules can be connected to each other in series or in parallel, depending on the purpose of the experiment. The design is based on an overall calculation of the MVC (Mechanical Vapor Compression) desalination process. Parameters such as the desired product water flow rate, distillate temperature, and boiling point have been used as inputs. The calculations show that the optimum total required surface area for the laboratory-scale evaporator/condenser is 23.8 m²; the corresponding length of pipe with an outer diameter of 16 mm and a wall thickness of 1.2 mm is 345 m. The heat-transfer coefficients for evaporating and boiling are obtained as 4.15 kW/m²·°C and 4.32 kW/m²·°C, respectively, and the evaporation and condensation coefficients as 4 kW/m²·°C and 53 kW/m²·°C, respectively. The required pipe lengths, of the same diameter and thickness, for distillate and brine are 24 m and 150 m, respectively. The required electricity consumption of the compressor per m³ of product is 322.85 kWh. From this optimum condition, the design of the evaporator/condenser has been made. (author)
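
    The sizing step described above follows the standard heat-exchanger relation A = Q/(U·ΔT), with the total tube length then fixed by the tube geometry. A minimal sketch with illustrative numbers (not the report's actual duty or temperature difference) is:

      import math

      Q = 300.0      # kW, heat duty of the evaporator/condenser (assumed)
      U = 4.0        # kW/m^2.degC, overall heat-transfer coefficient (assumed)
      dT = 5.0       # degC, driving temperature difference (assumed)

      A = Q / (U * dT)                     # required surface area, m^2
      d_out = 0.016                        # m, tube outer diameter (16 mm)
      L = A / (math.pi * d_out)            # total tube length, m
      print(f"A = {A:.1f} m^2, total tube length = {L:.0f} m")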

  4. Experimental design and quality assurance: in situ fluorescence instrumentation

    Science.gov (United States)

    Conmy, Robyn N.; Del Castillo, Carlos E.; Downing, Bryan D.; Chen, Robert F.

    2014-01-01

    Both instrument design and the capabilities of fluorescence spectroscopy have greatly advanced over the last several decades. Advancements include solid-state excitation sources, integration of fiber optic technology, highly sensitive multichannel detectors, rapid-scan monochromators, sensitive spectral correction techniques, and improved data-manipulation software (Christian et al., 1981; Lochmuller and Saavedra, 1986; Cabniss and Shuman, 1987; Lakowicz, 2006; Hudson et al., 2007). The cumulative effect of these improvements has pushed the limits and expanded the application of fluorescence techniques to numerous scientific research fields. One of the more powerful advancements is the ability to obtain in situ fluorescence measurements of natural waters (Moore, 1994). The development of submersible fluorescence instruments has been made possible by component miniaturization and power reduction, including advances in light source technologies (light-emitting diodes, xenon lamps, ultraviolet [UV] lasers) and the compatible integration of new optical instruments with various sampling platforms (Twardowski et al., 2005 and references therein). Robust field sensors skirt the need for cumbersome and/or time-consuming filtration techniques, avoid the potential artifacts associated with sample storage, and overcome coarse sampling designs by increasing spatiotemporal resolution (Chen, 1999; Robinson and Glenn, 1999). The ability to obtain rapid, high-quality, highly sensitive measurements over steep gradients has revolutionized investigations of dissolved organic matter (DOM) optical properties, thereby enabling researchers to address novel biogeochemical questions regarding colored or chromophoric DOM (CDOM). This chapter is dedicated to the origin, design, calibration, and use of in situ field fluorometers. It will serve as a review of considerations to be accounted for during the operation of fluorescence field sensors and call attention to areas of concern when making

  5. Neutronics design and supporting experimental activities in the EU

    Czech Academy of Sciences Publication Activity Database

    Batistoni, P.; Fischer, U.; Angelone, M.; Bém, Pavel; Kodeli, I.; Pereslavtsev, P.

    2006-01-01

    Vol. 81, Nos. 8-14 (2006), pp. 1169-1181 ISSN 0920-3796 R&D Projects: GA AV ČR(BE) KSK1048102 Grant - others: European Union(BE) EFDA TW3-TTMN-002/D5a(BE) KSK1048102 Program: KS Institutional research plan: CEZ:AV0Z10480505 Keywords: fusion reactor * nuclear design * nuclear data * validation experiments Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 0.598, year: 2006

  6. Experimental burn plot trial in the Kruger National Park: history, experimental design and suggestions for data analysis

    Directory of Open Access Journals (Sweden)

    R. Biggs

    2003-12-01

    Full Text Available The experimental burn plot (EBP) trial initiated in 1954 is one of few ongoing long-term fire ecology research projects in Africa. The trial aims to assess the impacts of different fire regimes in the Kruger National Park. Recent studies on the EBPs have raised questions as to the experimental design of the trial and the appropriate model specification when analysing data. Archival documentation reveals that the original design was modified on several occasions, related to changes in the park's fire policy. These modifications include the addition of extra plots, subdivision of plots and changes in treatments over time, and have resulted in a design which is only partially randomised. The representativity of the trial plots has been questioned on account of their relatively small size, the concentration of herbivores on especially the frequently burnt plots, and soil variation between plots. It is suggested that these factors be included as covariates in explanatory models or that certain plots be excluded from data analysis based on the results of independent studies of these factors. Suggestions are provided for the specification of the experimental design when analysing data using Analysis of Variance. It is concluded that there is no practical alternative to treating the trial as a fully randomised complete block design.

  7. Human in vitro 3D co-culture model to engineer vascularized bone-mimicking tissues combining computational tools and statistical experimental approach.

    Science.gov (United States)

    Bersini, Simone; Gilardi, Mara; Arrigoni, Chiara; Talò, Giuseppe; Zamai, Moreno; Zagra, Luigi; Caiolfa, Valeria; Moretti, Matteo

    2016-01-01

    The generation of functional, vascularized tissues is a key challenge for both tissue engineering applications and the development of advanced in vitro models analyzing interactions among circulating cells, endothelium and organ-specific microenvironments. Since vascularization is a complex process guided by multiple synergic factors, it is critical to analyze the specific role that different experimental parameters play in the generation of physiological tissues. Our goals were to design a novel meso-scale model bridging the gap between microfluidic and macro-scale studies, and to screen in high throughput the effects of multiple variables on the vascularization of bone-mimicking tissues. We investigated the influence of endothelial cell (EC) density (3-5 Mcells/ml), cell ratio among ECs, mesenchymal stem cells (MSCs) and osteo-differentiated MSCs (1:1:0, 10:1:0, 10:1:1), culture medium (endothelial; endothelial + angiopoietin-1; 1:1 endothelial/osteo), hydrogel type (100% fibrin; 60% fibrin + 40% collagen), and tissue geometry (2 × 2 × 2 and 2 × 2 × 5 mm³). We optimized the geometry and oxygen gradient inside the hydrogels through computational simulations, and we analyzed microvascular network features including total network length/area and vascular branch number/length. In particular, we employed the "Design of Experiment" statistical approach to identify key differences among experimental conditions. We combined the generation of 3D functional tissue units with fine control over the local microenvironment (e.g. oxygen gradients), and developed an effective strategy to enable the high-throughput screening of multiple experimental parameters. Our approach allowed us to identify synergic correlations among critical parameters driving microvascular network development within a bone-mimicking environment and could be translated to any vascularized tissue. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. A projection method for under determined optimal experimental designs

    KAUST Repository

    Long, Quan; Scavino, Marco; Tempone, Raul; Wang, Suojin

    2014-01-01

    A new implementation, based on the Laplace approximation, was developed in (Long, Scavino, Tempone, & Wang 2013) to accelerate the estimation of the post–experimental expected information gains in the model parameters and predictive quantities of interest. A closed–form approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general cases where the model parameters could not be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the corresponding Jacobian matrix, so that the information gain (Kullback–Leibler divergence) can be reduced to an integration against the marginal density of the transformed parameters which are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the projected posterior covariance matrix. To deal with the issue of dimensionality in a complex problem, we use Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear under determined numerical examples.
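
    In generic notation (a hedged restatement, not the paper's exact formulation), the quantity being approximated is the expected information gain, i.e., the prior-averaged Kullback-Leibler divergence between posterior and prior for a design ξ:

      \[
      I(\xi) \;=\; \int_{\mathcal{Y}} \left[ \int_{\Theta}
        \log\!\left( \frac{p(\theta \mid y, \xi)}{p(\theta)} \right)
        p(\theta \mid y, \xi)\, d\theta \right] p(y \mid \xi)\, dy .
      \]

    When the data determine only some directions in parameter space, the Laplace approximation is applied in the directions orthogonal to the null space of the Jacobian, and the remaining directions are integrated against the marginal prior, as described above.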

  10. A Multifunctional Public Lighting Infrastructure, Design and Experimental Test

    Directory of Open Access Journals (Sweden)

    Marco Beccali

    2017-12-01

    Full Text Available Nowadays, the installation of efficient lighting sources and Information and Communications Technologies can provide economic benefits, energy efficiency, and visual comfort. More advantages can be derived if the public lighting infrastructure is integrated with a smart grid. This study presents an experimental multifunctional infrastructure for public lighting, installed in Palermo. The system is able to provide smart lighting functions (hotspot Wi-Fi, video surveillance, car and pedestrian access control, car parking monitoring) and support for environmental monitoring. A remote control and monitoring platform called "Centro Servizi" processes the information coming from the different installations, as well as their status, in real time, and sends commands to the devices (e.g. to control the luminous flux), each one provided with a machine-to-machine interface. Data can be reported either on the web or on a customised app. The study has shown the efficient operation of the new infrastructure and its capability to provide new functions and benefits to citizens, tourists, and the public administration. Thus, this system represents a starting point for the implementation of many other lighting infrastructure features typical of a "smart city."

  11. Conceptual design study of fusion experimental reactor (FY86FER)

    International Nuclear Information System (INIS)

    Nakashima, Kunihiko; Ishigaki, Yukio; Ozaki, Akira; Yamane, Minoru.

    1987-09-01

    This report describes the results of the capacity estimation for the electrical power system for the two typical candidates for the FER (Fusion Experimental Reactor) which were picked out through the process of the '86 FER scoping studies. The main concern in the electrical systems is the coil power supplies, which have a capacity of about 1 GW, dominated by the poloidal coil power supplies. Studies to reduce the converter capacity are therefore concentrated on the poloidal coil power system, in relation to supplying poloidal flux at the initial phase of plasma ramp-up. A quench protection circuit was proposed for the toroidal coil power supply, and a circuit with suitable functions was proposed for the position control power supply. Based on these system studies, general specifications were determined and the capacity of each power supply unit was estimated. For the poloidal coil power supply system, the accumulated capacity of the converters amounted to 885 MW for one candidate and 782 MW for the other. (author)

  12. Synthetic tracked aperture ultrasound imaging: design, simulation, and experimental evaluation.

    Science.gov (United States)

    Zhang, Haichong K; Cheng, Alexis; Bottenus, Nick; Guo, Xiaoyu; Trahey, Gregg E; Boctor, Emad M

    2016-04-01

    Ultrasonography is a widely used imaging modality to visualize anatomical structures due to its low cost and ease of use; however, it is challenging to acquire acceptable image quality in deep tissue. Synthetic aperture (SA) is a technique used to increase image resolution by synthesizing information from multiple subapertures, but the resolution improvement is limited by the physical size of the array transducer. With a large F-number, it is difficult to achieve high resolution in deep regions without extending the effective aperture size. We propose a method, called synthetic tracked aperture ultrasound (STRATUS) imaging, to extend the available aperture size for SA by sweeping an ultrasound transducer while tracking its orientation and location. Tracking information of the ultrasound probe is used to synthesize the signals received at the different positions. With a view to practical implementation, we estimated through simulation the effect of tracking and ultrasound calibration errors on the quality of the final beamformed image. In addition, to experimentally validate this approach, a 6 degree-of-freedom robot arm was used as a mechanical tracker to hold an ultrasound transducer and to apply in-plane lateral translational motion. Results indicate that STRATUS imaging with robotic tracking has the potential to improve ultrasound image quality.

  13. Statistical media design for efficient polyhydroxyalkanoate production in Pseudomonas sp. MNNG-S.

    Science.gov (United States)

    Saranya, V; Rajeswari, V; Abirami, P; Poornimakkani, K; Suguna, P; Shenbagarathai, R

    2016-07-03

    Polyhydroxyalkanoate (PHA) is a promising polymer for various biomedical applications, and there is a pressing need to improve its production rate to achieve end use. When cost-effective production was carried out with cheaper agricultural residues such as molasses, traces of toxins were incorporated into the polymer, making it unfit for biomedical applications. On the other hand, chemically defined media are increasingly popular for the production of compounds with biomedical applications; however, such media do not exhibit favorable characteristics, such as efficient utilization at large scale, compared with complex media. This article aims to determine the specific nutritional requirements of Pseudomonas sp. MNNG-S for efficient production of polyhydroxyalkanoate. Response surface methodology (RSM) was used in this study to statistically design PHA production based on the interactive effects of five significant variables (sucrose; potassium dihydrogen phosphate; ammonium sulfate; magnesium sulfate; trace elements). The interactive effects of sucrose with ammonium sulfate, ammonium sulfate with combined potassium phosphate, and trace elements with magnesium sulfate were found to be significant (p < 0.05), and the optimized medium enhanced production more than fourfold (from 0.85 g L(-1) to 4.56 g L(-1)).

  14. Optimization of Ficus deltoidea Using Ultrasound-Assisted Extraction by Box-Behnken Statistical Design

    Directory of Open Access Journals (Sweden)

    L. J. Ong

    2016-09-01

    Full Text Available In this study, the effect of extraction parameters (ethanol concentration, sonication time, and solvent-to-sample ratio) on Ficus deltoidea leaves was investigated using ultrasound-assisted extraction by response surface methodology (RSM). The total phenolic content (TPC) of F. deltoidea extracts was determined using the Folin-Ciocalteu method and expressed in gallic acid equivalents (GAE) per g. A Box-Behnken statistical design (BBD) was the tool used to find the optimal conditions for maximum TPC. In addition, the extraction yield was measured and stated as a percentage. The optimized TPC attained was 455.78 mg GAE/g at 64% ethanol concentration, 10 minutes sonication time, and 20 mL/g solvent-to-sample ratio, whereas the greatest extraction yield was 33%, with an ethanol concentration of 70%, a sonication time of 40 minutes, and a solvent-to-material ratio of 40 mL/g. The coefficient of determination, R², for TPC indicates that 99.5% of the variability in the response could be explained by the ANOVA model, and the predicted R² of 0.9681 is in reasonable agreement with the adjusted R² of 0.9890. The present study shows that aqueous ethanol as solvent, a short time of 10 minutes, and an adequate solvent-to-sample ratio (20 mL/g) are the best conditions for extraction.
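
    For reference, a three-factor Box-Behnken design of the kind used above consists of the 12 midpoints of the edges of the coded cube (pairs of factors at ±1, the third at 0) plus replicated center points. A minimal sketch of its construction, with a placeholder factor mapping, is:

      import itertools
      import numpy as np

      def box_behnken(k, n_center=3):
          rows = []
          for i, j in itertools.combinations(range(k), 2):
              for a, b in itertools.product((-1, 1), repeat=2):
                  row = np.zeros(k)
                  row[i], row[j] = a, b
                  rows.append(row)
          rows += [np.zeros(k)] * n_center    # center-point replicates
          return np.array(rows)

      design = box_behnken(3)
      print(design.shape)   # (15, 3): 12 edge midpoints + 3 center points

      # Example mapping of coded levels to ethanol concentration (assumed):
      ethanol = {-1: 50, 0: 60, 1: 70}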

  15. Design and experimental evaluation of flexible manipulator control algorithms

    International Nuclear Information System (INIS)

    Kwon, D.S.; Hwang, D.H.; Babcock, S.M.; Kress, R.L.

    1995-01-01

    Within the Environmental Restoration and Waste Management Program of the US Department of Energy, the remediation of single-shell radioactive waste storage tanks is one of the areas that challenge state-of-the-art equipment and methods. The use of long-reach manipulators is being seriously considered for this task. Because of high payload capacity and high length-to-cross-section ratio requirements, these long-reach manipulator systems are expected to use hydraulic actuators and to exhibit significant structural flexibility. The controller has been designed to compensate for the hydraulic actuator dynamics by using a load-compensated velocity feedforward loop and to increase the bandwidth by using an inner pressure feedback loop. Shaping filter techniques have been applied as feedforward controllers to avoid structural vibrations during operation. Various types of shaping filter methods have been investigated. Among them, a new approach, referred to as a "feedforward simulation filter", which uses embedded simulation, has been presented.

  16. A passive exoskeleton with artificial tendons: design and experimental evaluation.

    Science.gov (United States)

    van Dijk, Wietse; van der Kooij, Herman; Hekman, Edsko

    2011-01-01

    We developed a passive exoskeleton that was designed to minimize joint work during walking. The exoskeleton makes use of passive structures, called artificial tendons, acting in parallel with the leg. Artificial tendons are elastic elements that are able to store and redistribute energy over the human leg joints. The elastic characteristics of the tendons have been optimized to minimize the mechanical work of the human leg joints. In simulation the maximal reduction was 40 percent. The performance of the exoskeleton was evaluated in an experiment in which nine subjects participated. Energy expenditure and muscle activation were measured during three conditions: Normal walking, walking with the exoskeleton without artificial tendons, and walking with the exoskeleton with the artificial tendons. Normal walking was the most energy efficient. While walking with the exoskeleton, the artificial tendons only resulted in a negligibly small decrease in energy expenditure. © 2011 IEEE

  17. Research Methods in Healthcare Epidemiology and Antimicrobial Stewardship-Quasi-Experimental Designs.

    Science.gov (United States)

    Schweizer, Marin L; Braun, Barbara I; Milstone, Aaron M

    2016-10-01

    Quasi-experimental studies evaluate the association between an intervention and an outcome using experiments in which the intervention is not randomly assigned. Quasi-experimental studies are often used to evaluate rapid responses to outbreaks or other patient safety problems requiring prompt, nonrandomized interventions. Quasi-experimental studies can be categorized into 3 major types: interrupted time-series designs, designs with control groups, and designs without control groups. This methods paper highlights key considerations for quasi-experimental studies in healthcare epidemiology and antimicrobial stewardship, including study design and analytic approaches to avoid selection bias and other common pitfalls of quasi-experimental studies. Infect Control Hosp Epidemiol 2016;1-6.
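
    As a concrete illustration of the interrupted time-series design named above, a minimal sketch of a segmented regression on simulated monthly data follows; the model y = b0 + b1·time + b2·post + b3·time_since_intervention is the standard parameterization, but the data and effect sizes here are invented for illustration.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n, t0 = 48, 24                              # 48 months, intervention at 24
      time = np.arange(n)
      post = (time >= t0).astype(float)           # level-change indicator
      since = np.where(time >= t0, time - t0, 0)  # slope-change term

      y = 10 + 0.05*time - 2.0*post - 0.10*since + rng.normal(0, 0.5, n)

      X = sm.add_constant(np.column_stack([time, post, since]))
      fit = sm.OLS(y, X).fit()
      print(fit.params)   # [b0, b1, level change b2, slope change b3]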

  18. Conceptual design of fusion experimental reactor (FER/ITER)

    International Nuclear Information System (INIS)

    Kimura, Haruyuki; Saigusa, Mikio; Saitoh, Yasushi

    1991-06-01

    Conceptual design of the Ion Cyclotron Wave (ICW) system for FER and Japanese contribution to the conceptual design of the ITER ICW system are presented. A frequency range of the FER ICW system is 50-85 MHz, which covers 2ω cT heating, current drive by transit time magnetic pumping (TTMP) and 2ω cD heating. Physics analyses show that the FER and the ITER ICW systems are suitable for the central ion heating and the burn control. The launching systems of the FER ICW system and the ITER high frequency ICW system are characterized by in-port plug and ridged-waveguide-fed 5x4 phased loop array. Merits of those systems are (1) a ceramic support is not necessary inside the cryostat and (2) remote maintenance of the front end part of the launcher is relatively easy. Overall structure of the launching system is consistent with radiation shielding, cooling, pumping, tritium safety and remote maintenance. The launcher has injection capability of 20 MW in the frequency range of 50-85 MHz with the separatrix-antenna distance of 15 cm and steep scrape-off density profile of H-mode. The shape of the ridged waveguide is optimized to provide desired frequency range and power handling capability with a finite element method. Matching between the current strap and the ridged waveguide is satisfactorily good. Thermal analysis of the Faraday shield shows that high electric conductivity low Z material such as beryllium should be chosen for a protection tile of the Faraday shield. Thick Faraday shield is necessary to tolerate electromagnetic force during disruptions. R and D needs for the ITER/FER ICW systems are identified and gain from JT-60/60U ICRF experiments and operations are indicated in connection with them. (author)

  19. Reference design (MK-I and MK-II) for experimental multi-purpose VHTR

    International Nuclear Information System (INIS)

    Miyamoto, Yoshiaki; Suzuki, Kunihiko; Sato, Sadao

    1975-10-01

    This report summarizes the results of a study on thermal and mechanical performances of the core, which are obtained in course of reference design (Mk-I and Mk-II) for the experimental multi-purpose VHTR: (1) Design criteria, design methods and design data. These bases are also discussed in order to refer in the case of proceeding a next design work. (2) The results of performance analysis such as the initial core and its prediction for the irradiated core. (auth.)

  20. Quasi-experimental study designs series-paper 7: assessing the assumptions.

    Science.gov (United States)

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-09-01

    Quasi-experimental designs are gaining popularity in epidemiology and health systems research, in particular for the evaluation of health care practice, programs, and policy, because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.
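
    As a concrete illustration of the Difference-in-Differences design listed above, a minimal sketch on simulated two-period data follows; the parallel-trends assumption discussed in the paper is what licenses the causal reading of the estimate.

      import numpy as np

      rng = np.random.default_rng(2)
      true_effect = 1.5
      treated_pre  = rng.normal(5.0, 1.0, 200)
      treated_post = rng.normal(5.8 + true_effect, 1.0, 200)  # trend + effect
      control_pre  = rng.normal(4.0, 1.0, 200)
      control_post = rng.normal(4.8, 1.0, 200)                # shared trend only

      did = (treated_post.mean() - treated_pre.mean()) \
          - (control_post.mean() - control_pre.mean())
      print(f"DiD estimate: {did:.2f} (true effect {true_effect})")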

  1. Design of an experimental model to study the behavior of unsaturated fats in the preparation of meat emulsions

    Directory of Open Access Journals (Sweden)

    Javier F. Rey

    2009-08-01

    Full Text Available The essence of good experimental design consists in planning an experiment so that it yields exactly the type of information being sought. The present work seeks to determine the quality of meat products elaborated with unsaturated vegetable fats, their yield, and hence their cost relative to traditional products. To this end, an experimental design is proposed that controls variables such as fat type, temperature of use, and cutting time, taking into account the physicochemical and biochemical phenomena that occur, beginning with control of the composition of the meat and fat used as raw materials during the trial. The experimental design used a statistical model of complete factorial planning with 3 variables and 2 levels, for 15 trials with one replicate. Once the variables to be controlled were identified (fat type, temperature of use of the fats, and cutting time), the proposed experimental design was applied, and an equation was obtained that solves the identified problem and makes it possible to use unsaturated fats in the elaboration of meat emulsions.

  2. Experimental design for study of cardiopulmonary resuscitation in dogs.

    Science.gov (United States)

    Barsan, W G; Levy, R C

    1981-03-01

    Many different designs for studies of various aspects of cardiopulmonary resuscitation (CPR) in dogs are described in the literature; no single technique is generally accepted. We present a systematized approach to the study of CPR in the canine model. Cardiac output, arterial blood pressure, and electrocardiogram were recorded for three different methods: closed chest compression, closed chest compression with an automatic gas-powered chest compressor, and open chest manual cardiac massage. Cardiac output for both types of external chest compression was less than 17% of control in all cases. With open chest cardiac massage, systemic arterial blood pressures were in the 50 mm Hg to 100 mm Hg range and cardiac output of up to 70% of control was achieved. Using a metronome to set the compression rate and the arterial blood pressure to guide the efficacy of compression, consistent levels of cardiac output could be achieved for up to 30 minutes using open chest cardiac massage. Closed chest massage in man results in a cardiac output of 25% to 30% of normal when performed under optimal conditions. A cardiac output of 25% to 30% of control cannot be achieved in large dogs with external chest compression, which is hence not a good model to simulate CPR in man.

  3. Experimental design for dynamics identification of cellular processes.

    Science.gov (United States)

    Dinh, Vu; Rundell, Ann E; Buzzard, Gregery T

    2014-03-01

    We address the problem of using nonlinear models to design experiments to characterize the dynamics of cellular processes by using the approach of the Maximally Informative Next Experiment (MINE), which was introduced in W. Dong et al. (PLoS ONE 3(8):e3105, 2008) and independently in M.M. Donahue et al. (IET Syst. Biol. 4:249-262, 2010). In this approach, existing data is used to define a probability distribution on the parameters; the next measurement point is the one that yields the largest model output variance with this distribution. Building upon this approach, we introduce the Expected Dynamics Estimator (EDE), which is the expected value using this distribution of the output as a function of time. We prove the consistency of this estimator (uniform convergence to true dynamics) even when the chosen experiments cluster in a finite set of points. We extend this proof of consistency to various practical assumptions on noisy data and moderate levels of model mismatch. Through the derivation and proof, we develop a relaxed version of MINE that is more computationally tractable and robust than the original formulation. The results are illustrated with numerical examples on two nonlinear ordinary differential equation models of biomolecular and cellular processes.
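
    A minimal sketch of the MINE selection rule as described above: draw parameters from the current distribution, simulate the model at candidate measurement times, and choose the time at which the output variance is largest. The decay model and parameter distribution below are toy placeholders, not those of the cited works.

      import numpy as np

      rng = np.random.default_rng(3)

      def model(t, k, x0=1.0):
          return x0 * np.exp(-k * t)        # toy first-order decay dynamics

      k_samples = rng.lognormal(-1.0, 0.4, 500)      # current belief about k
      candidates = np.linspace(0.1, 10.0, 100)       # candidate sampling times

      outputs = model(candidates[None, :], k_samples[:, None])  # (500, 100)
      next_t = candidates[np.argmax(outputs.var(axis=0))]
      print(f"maximally informative next measurement time: {next_t:.2f}")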

  4. Design, Manufacture, and Experimental Serviceability Validation of ITER Blanket Components

    Science.gov (United States)

    Leshukov, A. Yu.; Strebkov, Yu. S.; Sviridenko, M. N.; Safronov, V. M.; Putrik, A. B.

    2017-12-01

    In 2014, the Russian Federation and the ITER International Organization signed two Procurement Arrangements (PAs) for ITER blanket components: 1.6.P1ARF.01 "Blanket First Wall" of February 14, 2014, and 1.6.P3.RF.01 "Blanket Module Connections" of December 19, 2014. The first PA stipulates development, manufacture, testing, and delivery to the ITER site of 179 Enhanced Heat Flux (EHF) First Wall (FW) Panels intended to withstand heat fluxes from the plasma of up to 4.7 MW/m². Two Russian institutions, NIIEFA (Efremov Institute) and NIKIET, are responsible for the implementation of this PA. NIIEFA manufactures plasma-facing components (PFCs) of the EHF FW panels and performs the final assembly and testing of the panels, and NIKIET manufactures FW beam structures, load-bearing structures of PFCs, and all elements of the panel attachment system. As for the second PA, NIKIET is the sole official supplier of flexible blanket supports, electrical insulation key pads (EIKPs), and blanket module/vacuum vessel electrical connectors. Joint activities of NIKIET and NIIEFA for implementing PA 1.6.P1ARF.01 are briefly described, and information on the implementation of PA 1.6.P3.RF.01 is given. Results of the engineering design and research efforts in the scope of the above PAs in 2015-2016 are reported, and results of developing the technology for manufacturing ITER blanket components are presented.

  5. Hefei experimental hybrid fusion-fission reactor conceptual design

    International Nuclear Information System (INIS)

    Qiu Lijian; Luan Guishi; Xu Qiang

    1992-03-01

    A new concept of hybrid reactor is introduced. It uses a JET-like (Joint European Torus) device, operated at sub-breakeven conditions, as a source of high-energy neutrons to induce fission of depleted uranium in the blanket. Solid breeding material and helium cooling are also used. It can produce 100 kg of ²³⁹Pu per year in a partially fission-suppressed mode. Energy self-sufficiency of the fusion core is not necessary. The plasma temperature is maintained by 20 MW of external ICRF (ion cyclotron resonance frequency) and 10 MW of ECRF (electron cyclotron resonance frequency) heating. A steady-state plasma current of 1.5 MA is driven by 10 MW of LHCD (lower hybrid current drive). The plasma density is maintained by pellet injection. ICRF can produce a high-energy tail in the ion distribution function and lead to a significant enhancement of the D-T reaction rate, by a factor of 2-5, so that the neutron source strength reaches the level of 1 × 10¹⁹ n/s. This is a passive system: its power density is 10 W/cm³ and the wall loading is 0.6 W/cm², which is at the lower limit of fusion and fission technology. Neutronics calculations show that it would always remain sub-critical, giving intrinsic safety. The radiation damage and neutron flux distribution on the first wall are also analyzed. According to the conceptual design, early application of this type of hybrid reactor is feasible.

  6. A Statistical Thermodynamic Model for Ligands Interacting With Ion Channels: Theoretical Model and Experimental Validation of the KCNQ2 Channel

    Directory of Open Access Journals (Sweden)

    Fang Bai

    2018-03-01

    Full Text Available Ion channels are important therapeutic targets, and their pharmacology is becoming increasingly important. However, knowledge of the mechanisms of interaction between activators and ion channels is still limited due to the complexity of these mechanisms. A statistical thermodynamic model has been developed in this study to characterize the cooperative binding of activators to ion channels. By fitting experimental concentration-response data, the model gives eight parameters that reveal the mechanism by which an activator potentiates an ion channel, i.e., the binding affinity (KA), the binding cooperativity coefficients for two to four activator molecules interacting with one channel (γ, μ, and ν), and the channel conductance coefficients for the four activator-binding configurations of the channel (a, b, c, and d). Values for the model parameters, and the mechanism underlying the interaction of ztz240, a proven KCNQ2 activator, with the wild-type channel, have been obtained by fitting the concentration-response data of this activator potentiating the outward current amplitudes of KCNQ2. With these parameters, our model predicted an unexpected bi-sigmoid concentration-response curve for ztz240 activation of the WT-F137A mutant heteromeric channel that was in good agreement with the experimental data determined in parallel in this study, lending credence to the assumptions on which the model is based and to the model itself. Our model can provide a better fit to the measured data than the Hill equation and estimates the binding affinity as well as the cooperativity coefficients for activator binding and the conductance coefficients for the binding states, which validates its use in studying ligand-channel interaction mechanisms.
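
    The functional form used by the authors is not reproduced here, but one plausible Adair-type realization of an eight-parameter scheme of this kind (a binding polynomial with statistical factors for a four-subunit channel; all names and values are illustrative assumptions) is:

      import numpy as np

      def response(x, KA, gamma, mu, nu, a, b, c, d):
          s = KA * x
          w1 = 4 * s               # 4 singly bound configurations
          w2 = 6 * gamma * s**2    # 6 doubly bound configurations
          w3 = 4 * mu * s**3       # 4 triply bound configurations
          w4 = nu * s**4           # fully bound configuration
          Z = 1 + w1 + w2 + w3 + w4                # binding polynomial
          return (a*w1 + b*w2 + c*w3 + d*w4) / Z   # conductance-weighted sum

      x = np.logspace(-8, -4, 50)  # activator concentration, M (illustrative)
      y = response(x, KA=1e6, gamma=5, mu=20, nu=100, a=0.2, b=0.5, c=0.8, d=1.0)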

  7. Box-Behnken design based statistical modeling for ultrasound-assisted extraction of corn silk polysaccharide.

    Science.gov (United States)

    Prakash Maran, J; Manikandan, S; Thirugnanasambandham, K; Vigna Nivetha, C; Dinesh, R

    2013-01-30

    In this study, ultrasound-assisted extraction (UAE) conditions affecting the yield of polysaccharide from corn silk were studied using a three-factor, three-level Box-Behnken response surface design. Process parameters that affect the efficiency of UAE, such as extraction temperature (40-60 °C), time (10-30 min), and solid-liquid ratio (1:10-1:30 g/ml), were investigated. The results showed that the extraction conditions have significant effects on the extraction yield of polysaccharide. The experimental data obtained were fitted to a second-order polynomial equation using multiple regression analysis, with a high coefficient of determination (R(2)) of 0.994. An optimization study using Derringer's desired function methodology was performed, and the optimal conditions based on both individual and combined responses of all independent variables (extraction temperature of 56 °C, time of 17 min and solid-liquid ratio of 1:20 g/ml) were determined, with a maximum polysaccharide yield of 6.06%, which was confirmed through validation experiments. Copyright © 2012 Elsevier Ltd. All rights reserved.
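
    The second-order polynomial referred to above has the generic form y = b0 + Σ bi·xi + Σ bij·xi·xj + Σ bii·xi², and can be fitted by ordinary least squares. A minimal sketch with placeholder data (not the study's measurements) is:

      import numpy as np

      def quad_terms(X):
          n, k = X.shape
          cols = [np.ones(n)]
          cols += [X[:, i] for i in range(k)]              # linear terms
          cols += [X[:, i] * X[:, j]                       # interactions
                   for i in range(k) for j in range(i + 1, k)]
          cols += [X[:, i]**2 for i in range(k)]           # pure quadratics
          return np.column_stack(cols)

      rng = np.random.default_rng(4)
      X = rng.uniform(-1, 1, (15, 3))                      # coded design runs
      y = 5 + 2*X[:, 0] - X[:, 1]**2 + rng.normal(0, 0.1, 15)

      beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)
      print(np.round(beta, 2))   # intercept, linear, interaction, quadratic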

  8. Design and statistical optimization of glipizide loaded lipospheres using response surface methodology.

    Science.gov (United States)

    Shivakumar, Hagalavadi Nanjappa; Patel, Pragnesh Bharat; Desai, Bapusaheb Gangadhar; Ashok, Purnima; Arulmozhi, Sinnathambi

    2007-09-01

    A 3² factorial design was employed to produce glipizide lipospheres by the emulsification phase separation technique using paraffin wax and stearic acid as retardants. The effects of critical formulation variables, namely the level of paraffin wax (X1) and the proportion of stearic acid in the wax (X2), on the geometric mean diameter (dg), percent encapsulation efficiency (% EE), release at the end of 12 h (rel12) and time taken for 50% of drug release (t50) were evaluated using the F-test. Mathematical models containing only the significant terms were generated for each response parameter using multiple linear regression analysis (MLRA) and analysis of variance (ANOVA). Both formulation variables studied exerted a significant influence (p < 0.05) on the responses. Optimization using the desirability approach was employed to develop an optimized formulation by setting constraints on the dependent and independent variables. The experimental values of dg, % EE, rel12 and t50 for the optimized formulation were found to be 57.54 ± 1.38 μm, 86.28 ± 1.32%, 77.23 ± 2.78% and 5.60 ± 0.32 h, respectively, which were in close agreement with those predicted by the mathematical models. Drug release from the lipospheres followed first-order kinetics and was characterized by the Higuchi diffusion model. The optimized liposphere formulation was found to produce sustained anti-diabetic activity following oral administration in rats.
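
    Derringer's desirability approach, named above, maps each response onto [0, 1] and combines the individual desirabilities through a geometric mean, which is then maximized over the factor space. A minimal sketch (the response names, limits, and values are illustrative, not the paper's constraints) is:

      import numpy as np

      def d_maximize(y, low, high, s=1.0):
          # 0 below low, 1 above high, power-law ramp in between
          return np.clip((y - low) / (high - low), 0, 1) ** s

      def d_minimize(y, low, high, s=1.0):
          return np.clip((high - y) / (high - low), 0, 1) ** s

      ee, diameter = 86.3, 57.5        # % EE to maximize, dg (um) to minimize
      D = (d_maximize(ee, 70, 95) * d_minimize(diameter, 40, 80)) ** 0.5
      print(f"overall desirability D = {D:.2f}")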

  9. STRONG LENS TIME DELAY CHALLENGE. I. EXPERIMENTAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Dobler, Gregory [Kavli Institute for Theoretical Physics, University of California Santa Barbara, Santa Barbara, CA 93106 (United States); Fassnacht, Christopher D.; Rumbaugh, Nicholas [Department of Physics, University of California, 1 Shields Avenue, Davis, CA 95616 (United States); Treu, Tommaso; Liao, Kai [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Marshall, Phil [Kavli Institute for Particle Astrophysics and Cosmology, P.O. Box 20450, MS29, Stanford, CA 94309 (United States); Hojjati, Alireza [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, B.C. V6T 1Z1 (Canada); Linder, Eric, E-mail: tt@astro.ucla.edu [Lawrence Berkeley National Laboratory and University of California, Berkeley, CA 94720 (United States)

    2015-02-01

    The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters. The number of lenses with measured time delays is growing rapidly; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ∼10³ strongly lensed quasars. In an effort to assess the present capabilities of the community to accurately measure the time delays, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we have invited the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders", each containing a group of simulated data sets to be analyzed blindly by participating teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs' data sets increasing in complexity and realism. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of data sets, and is designed to be used as a practice set by the participating teams. The (non-mandatory) deadline for completion of TDC0 was the TDC1 launch date, 2013 December 1. The TDC1 deadline was 2014 July 1. Here we give an overview of the challenge, we introduce a set of metrics that will be used to quantify the goodness of fit, efficiency, precision, and accuracy of the algorithms, and we present the results of TDC0. Thirteen teams participated in TDC0 using 47 different methods. Seven of those teams qualified for TDC1, which is described in the companion paper.

  10. Experimental determination of new statistical correlations for the calculation of the heat transfer coefficient by convection for flat plates, cylinders and tube banks

    Directory of Open Access Journals (Sweden)

    Ismael Fernando Meza Castro

    2017-07-01

    Full Text Available Introduction: This project carried out experimental research involving the design, assembly, and commissioning of a convection heat-transfer test bench. Objective: To determine new statistical correlations that allow the convective heat-transfer coefficients for air to be known with greater accuracy in applications with different heating-geometry configurations. Methodology: Three geometric configurations, namely flat plate, cylinders, and tube banks, were studied according to their physical properties through the Reynolds and Prandtl numbers, using a data-transmission interface based on Arduino® controllers that measured the air temperature along the duct to obtain real-time data, relating the heat transferred from the heating element to the fluid, with mathematical modelling performed in specialized statistical software. The study was made for the three geometries mentioned, one power level per heating element, and two air velocities, with 10 repetitions. Results: Three mathematical correlations were obtained with regression coefficients greater than 0.972, one for each heating element, with prediction errors in the convective heat-transfer coefficients of 7.50% for the flat plate, 2.85% for the cylinder, and 1.57% for the tube bank. Conclusions: It was observed that in geometries made up of several individual elements, such as the tube bank, a much more accurate statistical fit for predicting the convective heat-transfer coefficients was obtained, since each unit reaches a stable surface-temperature profile more quickly, giving the geometry as a whole a more precise measurement of the parameters that govern heat transfer.
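
    Correlations of the kind sought above are classically written as Nu = C·Re^m·Pr^(1/3), and the constants C and m can be recovered from measurements by a log-linear least-squares fit. A minimal sketch with synthetic data (the constants and noise level are illustrative, not the study's results) is:

      import numpy as np

      rng = np.random.default_rng(6)
      Re = np.logspace(3, 5, 20)
      Pr = np.full_like(Re, 0.71)                    # air
      Nu = 0.193 * Re**0.618 * Pr**(1/3)             # synthetic "truth"
      Nu_meas = Nu * rng.normal(1.0, 0.03, Re.size)  # add measurement noise

      # log(Nu) - (1/3) log(Pr) = log(C) + m log(Re): linear in log(Re)
      yy = np.log(Nu_meas) - np.log(Pr) / 3
      A = np.column_stack([np.ones(Re.size), np.log(Re)])
      (logC, m), *_ = np.linalg.lstsq(A, yy, rcond=None)
      print(f"Nu = {np.exp(logC):.3f} * Re^{m:.3f} * Pr^(1/3)")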

  11. Survey of editors and reviewers of high-impact psychology journals: statistical and research design problems in submitted manuscripts.

    Science.gov (United States)

    Harris, Alex; Reeder, Rachelle; Hyun, Jenny

    2011-01-01

    The authors surveyed 21 editors and reviewers from major psychology journals to identify and describe the statistical and design errors they encounter most often and to get their advice regarding prevention of these problems. Content analysis of the text responses revealed themes in 3 major areas: (a) problems with research design and reporting (e.g., lack of an a priori power analysis, lack of congruence between research questions and study design/analysis, failure to adequately describe statistical procedures); (b) inappropriate data analysis (e.g., improper use of analysis of variance, too many statistical tests without adjustments, inadequate strategy for addressing missing data); and (c) misinterpretation of results. If researchers attended to these common methodological and analytic issues, the scientific quality of manuscripts submitted to high-impact psychology journals might be significantly improved.
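
    For the a priori power analysis the respondents report as frequently missing, a minimal sketch (using the statsmodels power module; the effect size, alpha, and power targets are conventional illustrative choices) is:

      from statsmodels.stats.power import TTestIndPower

      # Sample size per group for a two-sample t-test
      n = TTestIndPower().solve_power(effect_size=0.5,   # Cohen's d
                                      alpha=0.05,
                                      power=0.80)
      print(f"required n per group: {n:.0f}")   # about 64 for d = 0.5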

  12. A Review of Study Designs and Statistical Methods for Genomic Epidemiology Studies using Next Generation Sequencing

    Directory of Open Access Journals (Sweden)

    Qian eWang

    2015-04-01

    Full Text Available Results from numerous linkage and association studies have greatly deepened scientists’ understanding of the genetic basis of many human diseases, yet some important questions remain unanswered. For example, although a large number of disease-associated loci have been identified from genome-wide association studies (GWAS in the past 10 years, it is challenging to interpret these results as most disease-associated markers have no clear functional roles in disease etiology, and all the identified genomic factors only explain a small portion of disease heritability. With the help of next-generation sequencing (NGS, diverse types of genomic and epigenetic variations can be detected with high accuracy. More importantly, instead of using linkage disequilibrium to detect association signals based on a set of pre-set probes, NGS allows researchers to directly study all the variants in each individual, therefore promises opportunities for identifying functional variants and a more comprehensive dissection of disease heritability. Although the current scale of NGS studies is still limited due to the high cost, the success of several recent studies suggests the great potential for applying NGS in genomic epidemiology, especially as the cost of sequencing continues to drop. In this review, we discuss several pioneer applications of NGS, summarize scientific discoveries for rare and complex diseases, and compare various study designs including targeted sequencing and whole-genome sequencing using population-based and family-based cohorts. Finally, we highlight recent advancements in statistical methods proposed for sequencing analysis, including group-based association tests, meta-analysis techniques, and annotation tools for variant prioritization.

  13. Regularization design for high-quality cone-beam CT of intracranial hemorrhage using statistical reconstruction

    Science.gov (United States)

    Dang, H.; Stayman, J. W.; Xu, J.; Sisniega, A.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.

    2016-03-01

    Intracranial hemorrhage (ICH) is associated with pathologies such as hemorrhagic stroke and traumatic brain injury. Multi-detector CT is the current front-line imaging modality for detecting ICH (fresh blood contrast 40-80 HU, down to 1 mm). Flat-panel detector (FPD) cone-beam CT (CBCT) offers a potential alternative with a smaller scanner footprint, greater portability, and lower cost, potentially well suited to deployment at the point of care outside standard diagnostic radiology and emergency room settings. Previous studies have suggested reliable detection of ICH down to 3 mm in CBCT using high-fidelity artifact correction and penalized weighted least-squares (PWLS) image reconstruction with a post-artifact-correction noise model. However, ICH reconstructed with traditional image regularization exhibits nonuniform spatial resolution and noise due to the interaction between the statistical weights and the regularization, which potentially degrades the detectability of ICH. In this work, we propose three regularization methods designed to overcome these challenges. The first two compute spatially varying certainty for uniform spatial resolution and noise, respectively. The third computes spatially varying regularization strength to achieve uniform "detectability," combining both spatial resolution and noise in a manner analogous to a delta-function detection task. Experiments were conducted on a CBCT test bench, and image quality was evaluated for simulated ICH in different regions of an anthropomorphic head. The first two methods improved the uniformity of spatial resolution and noise compared to traditional regularization. The third exhibited the highest uniformity in detectability among all methods and the best overall image quality. The proposed regularization provides a valuable means of achieving uniform image quality in CBCT of ICH and is being incorporated in a CBCT prototype for ICH imaging.
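
    For reference, the generic PWLS objective underlying this kind of reconstruction has the form below; the exact weights, penalty, and the spatially varying regularization are the paper's own contribution and are not reproduced here:

    ```latex
    \hat{\mu} = \operatorname*{arg\,min}_{\mu}\;
      (y - A\mu)^{T} W (y - A\mu) \;+\; \beta\, R(\mu)
    ```

    where y is the (log-processed, artifact-corrected) projection data, A the forward projector, W a diagonal matrix of statistical weights, R a roughness penalty, and β the regularization strength. The interaction between W and a constant β is what produces the nonuniform resolution and noise described above; the third proposed method replaces the scalar β with a spatially varying strength chosen to equalize detectability.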

  14. Statistics of Scientific Procedures on Living Animals 2014: A new format, and hopefully a new era of diminishing animal experimentation?

    Science.gov (United States)

    Hudson-Shore, Michelle

    2016-03-01

    The Annual Statistics of Scientific Procedures on Living Animals Great Britain 2014 reports a welcome decline in animal experimentation in the UK. However, caution has to be exercised when interpreting these most recent figures, due to the significant changes made to satisfy the requirements of Directive 2010/63/EU as to what information is reported and how it is reported. Comparison with the figures and trends reported in previous years is difficult, so this paper focuses on the specifics of the current report, providing information on overall animal use and highlighting specific issues associated with genetically-altered animals, fish and primates. There is a detailed discussion of the extent of the changes, commenting on the benefits and disadvantages of the new format in areas such as severity of procedures, legislation, and techniques of special interest. It also considers the consequences of the changes for the effective monitoring of laboratory animal use, openness and transparency regarding the impacts of animal use, and the implementation of Three Rs initiatives. In addition, suggestions for further improvements to the new format are made to the Home Office. 2016 FRAME.

  15. Statistical methods for launch vehicle guidance, navigation, and control (GN&C) system design and analysis

    Science.gov (United States)

    Rose, Michael Benjamin

    A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed to ultimately reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop, launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF), Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques and their trajectory and attitude control dispersion, propellant dispersion, orbit insertion dispersion, and navigation error results are validated and compared. Statistical results from linear covariance analysis are generally within 10% of Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities that are embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed, where the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO), ascent vehicle, the tools, techniques, and mathematical
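
    A toy sketch of the two techniques being compared, on a two-state linear system rather than the 6-DOF ascent model; it shows why linear covariance analysis is so much faster (one matrix recursion per step versus thousands of propagated samples):

    ```python
    # Propagate uncertainty two ways through the same linear dynamics:
    # (1) linear covariance recursion P <- F P F^T + Q, (2) Monte Carlo sampling.
    import numpy as np

    rng = np.random.default_rng(1)
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # toy position/velocity dynamics
    Q = np.diag([1e-6, 1e-4])               # process-noise covariance
    P0 = np.diag([1e-2, 1e-2])              # initial state covariance
    steps, n_mc = 100, 20000

    # Linear covariance: one 2x2 matrix recursion per step
    P = P0.copy()
    for _ in range(steps):
        P = F @ P @ F.T + Q

    # Monte Carlo: propagate a cloud of samples through the same dynamics
    x = rng.multivariate_normal(np.zeros(2), P0, size=n_mc)
    for _ in range(steps):
        x = x @ F.T + rng.multivariate_normal(np.zeros(2), Q, size=n_mc)
    P_mc = np.cov(x.T)

    print("lin cov sigma:", np.sqrt(np.diag(P)))
    print("MC      sigma:", np.sqrt(np.diag(P_mc)))
    ```

    For a linear system the two agree up to sampling error, mirroring the small (<10%) differences reported above; the Monte Carlo route earns its cost only where nonlinearities matter.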

  16. Long-term strategy for the statistical design of a forest health monitoring system

    Science.gov (United States)

    Hans T. Schreuder; Raymond L. Czaplewski

    1993-01-01

    A conceptual framework is given for a broad-scale survey of forest health that accomplishes three objectives: generate descriptive statistics; detect changes in such statistics; and simplify analytical inferences that identify, and possibly establish cause-effect relationships. Our paper discusses the development of sampling schemes to satisfy these three objectives,...

  17. Delineamento experimental e tamanho de amostra para alface cultivada em hidroponia Experimental design and sample size for hydroponic lettuce crop

    Directory of Open Access Journals (Sweden)

    Valéria Schimitz Marodim

    2000-10-01

    Full Text Available This study was carried out to establish the experimental design and sample size for lettuce (Lactuca sativa) grown hydroponically under the nutrient film technique (NFT). The experiment was conducted in the Laboratory of Hydroponic Crops of the Horticulture Department of the Federal University of Santa Maria and was based on plant weight data. The results showed that, for lettuce grown hydroponically on fibre-cement benches with six channels, the appropriate experimental design is randomized blocks if the experimental unit is a strip running transversally across the bench channels, and completely randomized if the whole bench is the experimental unit. For plant weight, the required sample size is 40 plants for a confidence-interval half-width, expressed as a percentage of the mean (d), of 5%, and 7 plants for a d of 20%.
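
    A sketch of the sample-size logic behind the 40-plant and 7-plant figures: choose the smallest n whose confidence-interval half-width, expressed as a percentage of the mean (d), does not exceed the target. The coefficient of variation used below is hypothetical (the paper's value is not given in the abstract), so the numbers only approximate the paper's:

    ```python
    # Smallest n with t(n-1) * CV / sqrt(n) <= d, everything in percent of the
    # mean; solved iteratively because the t quantile itself depends on n.
    from scipy import stats

    def sample_size(cv_percent, d_percent, conf=0.95, n_max=1000):
        """Smallest n whose CI half-width (as % of the mean) is at most d."""
        for n in range(2, n_max):
            t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
            if t * cv_percent / n**0.5 <= d_percent:
                return n
        return n_max

    cv = 15.0  # hypothetical coefficient of variation (%) for plant weight
    for d in (5.0, 20.0):
        print(f"d = {d:>4.0f}% -> n = {sample_size(cv, d)}")
    ```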

  18. Sensitivity analysis by experimental design and metamodelling : case study on simulation in national animal disease control

    NARCIS (Netherlands)

    Vonk Noordegraaf, A.; Nielen, M.; Kleijnen, J.P.C.

    2003-01-01

    Simulation is a frequently applied tool in the discipline of animal health economics. Application of sensitivity analysis, however, is often limited to changing only one factor at a time (OAT designs). In this study, the statistical techniques of Design of Experiments (DOE) and regression metamodelling...
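
    A minimal sketch of the DOE-plus-metamodel approach this record contrasts with OAT designs: run a 2^3 factorial on the simulation (a made-up stand-in function here) and fit a regression metamodel whose coefficients rank factor sensitivities, including an interaction that no one-at-a-time design could detect:

    ```python
    # 2^3 full factorial design in coded units, followed by a first-order
    # regression metamodel with one interaction term.
    import itertools
    import numpy as np

    def simulation(x):
        """Stand-in for the disease-control simulation (hypothetical response)."""
        a, b, c = x
        return 10 + 4 * a - 2 * b + 0.5 * c + 1.5 * a * b

    # All 8 corner points of the coded (-1, +1) design space
    design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
    y = np.array([simulation(row) for row in design])

    # Metamodel: y ~ b0 + b1*a + b2*b + b3*c + b12*(a*b)
    X = np.column_stack([np.ones(len(design)), design,
                         design[:, 0] * design[:, 1]])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    for name, b in zip(["intercept", "a", "b", "c", "a*b"], coef):
        print(f"{name:>9}: {b:+.2f}")
    ```

    The fitted coefficients recover the factor effects exactly here, and the a*b term illustrates the interaction information that a factorial design buys over OAT experimentation.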

  19. Design, construction and testing of a radon experimental chamber; Diseno, construccion y pruebas de una camara experimental de radon

    Energy Technology Data Exchange (ETDEWEB)

    Chavez B, A; Balcazar G, M

    1991-10-15

    To study radon behavior under controlled, stable conditions, a system was designed and constructed consisting of two parts: a container of uranium-rich mineral and a radon experimentation chamber, joined by a step valve. The mineral container holds approximately 800 g of uranium ore with a grade of 0.28%; the radon gas emanating from the mineral is held tightly inside the container. When the valve is opened, the radon gas diffuses into the radon experimental chamber, which has 3 access ports that allow different types of detectors to be installed. The versatility of the system is illustrated with two experiments: 1. Using the radon experimental chamber and an associated spectroscopic system, radon and two of its decay products were identified. 2. The design of the system allows the mineral container to be coupled to other experimental geometries; to demonstrate this, a new automatic exchanger system for passive radon detectors was coupled and tested. Results are shown for the new automatic exchanger system with the radon allowed to flow freely between the container and the automatic exchanger through a plastic membrane of 15 m. (Author)

  20. Contribution of experimental fluid mechanics to the design of vertical slot fish passes

    Directory of Open Access Journals (Sweden)

    Wang R.W.

    2010-02-01

    Full Text Available This paper presents the main results of an experimental study of mean and turbulent characteristics of flow in a scale model of a vertical slot fish pass with varying width and slope (from 5% to 15%. Experimental hydraulic modelling was combined with the study of fish behaviour in the model. The discharge coefficient, which significantly affects the design of such facilities, varied from 0.67 to 0.89 and was strongly influenced by the slope. Two distinct flow patterns were observed, depending on the slope and the fish pass width. The point of transition between the two states was determined. Low velocity areas are likely resting zones for fish and particular attention was paid to evaluating these areas. Slope was found to affect both the volume of the low velocity zone, and the value of turbulent kinetic energy in these areas. The statistical characteristics of turbulent kinetic energy in the pools were linked primarily to the maximum velocity in the jet. An analysis of the behaviour of juvenile brown trout (Salmo trutta in the scale model clearly showed that the fish avoided the areas of high velocities in the jet, except at the slot itself where they took advantage of the jet’s non-stationary character. Low-velocity areas were not frequented uniformly by fish, which stayed most frequently in the zone located just downstream from the slot and behind the small side baffle. It is suggested that future studies might investigate lower pool-length to slot-width ratios, which might make it possible to increase the slope significantly and should also examine ways of improving hydraulic conditions for fish by carefully distributing obstacles in pools.