WorldWideScience

Sample records for repeated-measures experimental design

  1. On balanced minimal repeated measurements designs

    Directory of Open Access Journals (Sweden)

    Shakeel Ahmad Mir

    2014-10-01

    Full Text Available Repeated measurements designs are concerned with scientific experiments in which each experimental unit is assigned more than once to a treatment, either different or identical. This class of designs has the property that unbiased estimators for elementary contrasts among direct and residual effects are obtainable. Afsarinejad (1983) provided a method of constructing balanced minimal repeated measurements designs for p < t when t is an odd prime or prime power; one or more treatments may occur more than once in some sequences, and the designs so constructed no longer remain uniform in periods. In this paper an attempt has been made to provide a new method to overcome this drawback. Specifically, two cases have been considered: RM[t, n = t(t-1)/(p-1), p], λ2 = 1 for balanced minimal repeated measurements designs, and RM[t, n = 2t(t-1)/(p-1), p], λ2 = 2 for balanced repeated measurements designs. In addition, a method has been provided for constructing extra-balanced minimal designs for the special case RM[t, n = t²/(p-1), p], λ2 = 1.

  2. Power analysis for multivariate and repeated measures designs: a flexible approach using the SPSS MANOVA procedure.

    Science.gov (United States)

    D'Amico, E J; Neilands, T B; Zambarano, R

    2001-11-01

    Although power analysis is an important component in the planning and implementation of research designs, it is often ignored. Computer programs for performing power analysis are available, but most have limitations, particularly for complex multivariate designs. An SPSS procedure is presented that can be used for calculating power for univariate, multivariate, and repeated measures models with and without time-varying and time-constant covariates. Three examples provide a framework for calculating power via this method: an ANCOVA, a MANOVA, and a repeated measures ANOVA with two or more groups. The benefits and limitations of this procedure are discussed.
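
    The SPSS syntax itself is not reproduced in this record, but the underlying computation is power obtained from the noncentral F distribution. The following Python sketch shows the same idea for a one-way fixed-effects ANOVA; the function name and the use of Cohen's f as the effect size are illustrative assumptions, not the authors' procedure.

```python
from scipy.stats import f as fdist, ncf

def anova_power(f_effect, k_groups, n_per_group, alpha=0.05):
    """Power of the one-way ANOVA F test for a Cohen's f effect size."""
    N = k_groups * n_per_group
    df1, df2 = k_groups - 1, N - k_groups
    nc = f_effect ** 2 * N                 # noncentrality parameter under H1
    crit = fdist.ppf(1 - alpha, df1, df2)  # critical F value under H0
    return ncf.sf(crit, df1, df2, nc)      # P(F exceeds crit) under H1

print(round(anova_power(0.40, 3, 21), 2))  # power for a large effect, 21 per group
```

    Covariates and repeated measures change the degrees of freedom and the noncentrality parameter, but the calculation follows the same pattern.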

  3. On the repeated measures designs and sample sizes for randomized controlled trials.

    Science.gov (United States)

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials.
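
    The record does not give the proposed sample size formulas, but the benchmark they are compared against is the standard two-sample normal-approximation calculation, which the repeated measures design then reduces. A minimal sketch of that benchmark (a standard textbook formula, not the paper's method):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided, two-sample comparison of means
    (normal approximation)."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

print(n_per_group(delta=0.5, sigma=1.0))  # 63 per arm for a standardized effect of 0.5
```

    Designs that exploit repeated pre- and post-randomization measures shrink the effective variance term, and hence the required n, relative to this baseline.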

  4. Multiple-objective response-adaptive repeated measurement designs in clinical trials for binary responses.

    Science.gov (United States)

    Liang, Yuanyuan; Li, Yin; Wang, Jing; Carriere, Keumhee C

    2014-02-20

    A multiple-objective allocation strategy was recently proposed for constructing response-adaptive repeated measurement designs for continuous responses. We extend the allocation strategy to constructing response-adaptive repeated measurement designs for binary responses. The approach with binary responses is quite different from the continuous case, as the information matrix is a function of the responses and the modeling is nonlinear. To deal with these problems, we first build the design on the basis of success probabilities. Then we illustrate how various models can accommodate carryover effects on the basis of logits of response profiles as well as any correlation structure. Through computer simulations, we find that the allocation strategy developed for continuous responses also works well for binary responses. As expected, design efficiency in terms of mean squared error drops sharply as more emphasis is placed on increasing treatment benefit than on estimation precision. However, we find that the strategy can successfully allocate more patients to better treatment sequences without sacrificing much estimation precision.

  5. On the Analysis of a Repeated Measure Design in Genome-Wide Association Analysis

    Directory of Open Access Journals (Sweden)

    Young Lee

    2014-11-01

    Full Text Available Longitudinal data enable detection of the effect of aging/time and, as a repeated measures design, are statistically more efficient than cross-sectional data if the correlations between repeated measurements are not large. In particular, when genotyping cost is higher than phenotyping cost, collecting longitudinal data can be an efficient strategy for genetic association analysis. However, in spite of these advantages, genome-wide association studies (GWAS) with longitudinal data have rarely been analyzed taking this into account. In this report, we calculate the sample size required to achieve 80% power at the genome-wide significance level for both longitudinal and cross-sectional data, and compare their statistical efficiency. Furthermore, we analyzed the GWAS of eight phenotypes with three observations on each individual in the Korean Association Resource (KARE). A linear mixed model allowing for the correlations between observations for each individual was applied to analyze the longitudinal data, and linear regression was used to analyze the first observation on each individual as cross-sectional data. We found 12 novel genome-wide significant disease susceptibility loci that were then confirmed in the Health Examination cohort, as well as some significant interactions between age/sex and SNPs.

  6. Comparison of Repeated Measurement Design and Mixed Models in Evaluation of the Entonox Effect on Labor Pain

    Directory of Open Access Journals (Sweden)

    Nasim Karimi

    2017-01-01

    Full Text Available Background & objectives: In many medical studies, the response variable is measured repeatedly over time to evaluate the treatment effect; this is known as a longitudinal study. The classical analysis method for this type of data is repeated measures ANOVA, which uses only one correlation structure, and the results are not valid if that correlation structure is inappropriate. A convenient alternative that avoids this problem is the mixed model. The aim of this study was therefore to compare mixed and repeated measurement models in examining the effect of Entonox on labor pain. Methods: This experimental study was designed to compare the effect of Entonox and oxygen inhalation on pain relief between two groups. Data were analyzed using repeated measurement and mixed models with different correlation structures. Selection and comparison of proper correlation structures were performed using the Akaike information criterion, the Bayesian information criterion and the restricted log-likelihood. Data were analyzed using SPSS-22. Results: All variables, including analgesia method, labor duration of the first and second stages, and time, were significant in these tests. In the mixed model, heterogeneous first-order autoregressive, first-order autoregressive, heterogeneous Toeplitz and unstructured correlation structures were recognized as the best structures, and all variables were significant under them. The unstructured variance-covariance matrix was recognized as the worst structure, and labor duration of the first and second stages was not significant under it. Conclusions: This study showed that Entonox inhalation has a significant effect on pain relief in primiparous women, and this is confirmed by all of the models.
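
    The criteria used to rank the correlation structures penalize the (restricted) log-likelihood by the number of covariance parameters each structure estimates; smaller values indicate a better structure. A generic sketch of the comparison (the log-likelihood values below are placeholders, not results from this study):

```python
import math

def aic(loglik, k_params):
    """Akaike information criterion: -2*logLik + 2*k."""
    return -2 * loglik + 2 * k_params

def bic(loglik, k_params, n_obs):
    """Bayesian information criterion: -2*logLik + k*log(n)."""
    return -2 * loglik + k_params * math.log(n_obs)

# hypothetical fits of two correlation structures to the same data set
fits = {"AR(1)": (-512.3, 2), "Unstructured": (-508.9, 10)}
for name, (ll, k) in fits.items():
    print(name, round(aic(ll, k), 1), round(bic(ll, k, 120), 1))
```

    A richer structure must raise the likelihood enough to pay for its extra parameters; in this hypothetical comparison AR(1) wins on both criteria despite its lower log-likelihood.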

  7. Splashing Our Way to Playfulness! An Aquatic Playgroup for Young Children with Autism, A Repeated Measures Design

    Science.gov (United States)

    Fabrizi, Sarah E.

    2015-01-01

    This study investigated the effectiveness of an aquatic playgroup on the playfulness of children, ages 2 to 3 with autism spectrum disorder. Using a repeated measures design, we followed 10 children and their caregivers who participated in a 6-week aquatic playgroup in southwest Florida. Four dyads completed the entire 12-week study period. The…

  9. A Correction for the Epsilon Approximate Test in Repeated Measures Designs with Two or More Independent Groups.

    Science.gov (United States)

    Lecoutre, Bruno

    1991-01-01

    The routine epsilon approximate test in repeated measures designs when the condition of circularity is unfulfilled uses an erroneous formula in the case of two or more groups. Because this may lead to underestimation of the deviation from circularity when the subject number is small, a correction is proposed. (Author/SLD)

  10. Characterization of the peripheral blood transcriptome in a repeated measures design using a panel of healthy individuals

    DEFF Research Database (Denmark)

    De Boever, Patrick; Wens, Britt; Forcheh, Anyiawung Chiara

    2014-01-01

    A repeated measures microarray design with 22 healthy, non-smoking volunteers (aged 32 ± 5 years) was set up to study transcriptome profiles in whole blood samples. The results indicate that repeatable data can be obtained with high within-subject correlation. Probes that could discriminate... Our study suggests that the blood transcriptome of healthy individuals is reproducible over a time period of several months. © 2013 Elsevier Inc.

  11. A novel AX+/BX- paradigm to assess fear learning and safety-signal processing with repeated-measure designs.

    Science.gov (United States)

    Kazama, Andy M; Schauder, Kimberly B; McKinnon, Michael; Bachevalier, Jocelyne; Davis, Michael

    2013-04-15

    One of the core symptoms of anxiety disorders, such as post-traumatic stress disorder, is the failure to overcome feelings of danger despite being in a safe environment. This deficit likely stems from an inability to fully process safety signals, which are cues in the environment that enable healthy individuals to over-ride fear in aversive situations. Studies examining safety signal learning in rodents, humans, and non-human primates currently rely on between-groups designs. Because repeated-measure designs reduce the number of subjects required, and facilitate a broader range of safety signal studies, the current project sought to develop a repeated-measures safety-signal learning paradigm in non-human primates. Twelve healthy rhesus macaques of both sexes received three rounds of auditory fear-potentiated startle training and testing using an AX+/BX- design with all visual cues. Cue AX was paired with an aversive blast of air, whereas the same X cue in compound with another B cue (BX) signaled the absence of an air blast. Hence, cue B served as a safety signal. Once animals consistently discriminated between the aversive (AX+) and safe (BX-) cues, measured by greater startle amplitude in the presence of AX vs. BX, they were tested for conditioned inhibition by eliciting startle in the presence of a novel ambiguous combined cue (AB). Similar to previous AX+/BX- studies, healthy animals rapidly learned to discriminate between the AX+ and BX- cues as well as demonstrate conditioned inhibition in the presence of the combined AB cue (i.e. lower startle amplitude in the presence of AB vs. AX). Additionally, animals performed consistently across three rounds of testing using three new cues each time. The results validate this novel method that will serve as a useful tool for better understanding the mechanisms for the regulation of fear and anxiety. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Repeated measures dose-finding design with time-trend detection in the presence of correlated toxicity data.

    Science.gov (United States)

    Yin, Jun; Paoletti, Xavier; Sargent, Daniel J; Mandrekar, Sumithra J

    2017-08-01

    Phase I trials are designed to determine the safety, tolerability, and recommended phase 2 dose of therapeutic agents for subsequent testing. The dose-finding paradigm has thus traditionally focused on identifying the maximum tolerable dose of an agent or combination therapy under the assumption that there is a non-decreasing relationship between dose-toxicity and dose-efficacy. The dose is typically determined based on the probability of severe toxicity observed during the first treatment cycle. A novel endpoint, the total toxicity profile, was previously developed to account for the multiple toxicity types and grades experienced in the first cycle. More recently, this was extended to a repeated measures design based on the total toxicity profile to account for longitudinal toxicities over multiple treatment cycles in the absence of within-patient correlation. In this work, we propose to extend the design in the presence of within-patient correlation. Furthermore, we provide a framework to detect a toxicity time trend (toxicity increasing, decreasing, or stable) over multiple treatment cycles. We utilize a linear mixed model in the Bayesian framework, with the addition of Bayesian risk functions for decision-making in dose assignment. The performance of this design was evaluated using simulation studies and real data from a phase I trial. We demonstrated that using available toxicity data from all cycles of treatment improves the accuracy of maximum tolerated dose identification and allows for the detection of a time trend. The performance is consistent regardless of the strength of the within-patient correlation. In addition, the use of a quasi-continuous total toxicity profile score significantly increased the power to detect time trends compared to when binary data only were used. The increased interest in molecularly targeted agents and immunotherapies in oncology necessitates innovative phase I study designs. Our proposed framework provides a tool to tackle...

  13. Power and sample size for the S:T repeated measures design combined with a linear mixed-effects model allowing for missing data.

    Science.gov (United States)

    Tango, Toshiro

    2017-02-13

    Tango (Biostatistics 2016) proposed a new repeated measures design called the S:T repeated measures design, combined with generalized linear mixed-effects models and sample size calculations for a test of the average treatment effect that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. In this article, we present formulas for calculating power and sample sizes for a test of the average treatment effect allowing for missing data within the framework of the S:T repeated measures design with a continuous response variable combined with a linear mixed-effects model. Examples are provided to illustrate the use of these formulas.

  14. Developing the Pieta House Suicide Intervention Model: a quasi-experimental, repeated measures design.

    LENUS (Irish Health Repository)

    Surgenor, Paul Wg

    2015-01-01

    While most crisis intervention models adhere to a generalised theoretical framework, the lack of clarity around how these should be enacted has resulted in a proliferation of models, most of which have little to no empirical support. The primary aim of this research was to propose a suicide intervention model that would resolve the client's suicidal crisis by decreasing their suicidal ideation and improving their outlook through enhancing a range of protective factors. The secondary aim was to assess the impact of this model on negative and positive outlook.

  15. Power analysis for multivariate and repeated measurements designs via SPSS: correction and extension of D'Amico, Neilands, and Zambarano (2001).

    Science.gov (United States)

    Osborne, Jason W

    2006-05-01

    D'Amico, Neilands, and Zambarano (2001) published SPSS syntax to perform power analyses for three complex procedures: ANCOVA, MANOVA, and repeated measures ANOVA. Unfortunately, the published SPSS syntax for performing the repeated measures analysis needed some minor revision in order to perform the analysis correctly. This article presents the corrected syntax that will successfully perform the repeated measures analysis and provides some guidance on modifying the syntax to customize the analysis.

  16. Analysis of repeated measures data

    CERN Document Server

    Islam, M Ataharul

    2017-01-01

    This book presents a broad range of statistical techniques to address emerging needs in the field of repeated measures. It also provides a comprehensive overview of extensions of generalized linear models for the bivariate exponential family of distributions, which represent a new development in analysing repeated measures data. The demand for statistical models for correlated outcomes has grown rapidly recently, mainly due to presence of two types of underlying associations: associations between outcomes, and associations between explanatory variables and outcomes. The book systematically addresses key problems arising in the modelling of repeated measures data, bearing in mind those factors that play a major role in estimating the underlying relationships between covariates and outcome variables for correlated outcome data. In addition, it presents new approaches to addressing current challenges in the field of repeated measures and models based on conditional and joint probabilities. Markov models of first...

  17. Experimental design

    Science.gov (United States)

    Lim, H. S.

    1983-01-01

    The design of long life, low weight nickel cadmium cells is studied. The status of a program to optimize nickel electrodes for the best performance is discussed. The pore size of the plaque, the mechanical strength and active material loading are considered in depth.

  18. Using the American alligator and a repeated-measures design to place constraints on in vivo shoulder joint range of motion in dinosaurs and other fossil archosaurs.

    Science.gov (United States)

    Hutson, Joel D; Hutson, Kelda N

    2013-01-15

    Using the extant phylogenetic bracket of dinosaurs (crocodylians and birds), recent work has reported that elbow joint range of motion (ROM) studies of fossil dinosaur forearms may be providing conservative underestimates of fully fleshed in vivo ROM. As humeral ROM occupies a more central role in forelimb movements, the placement of quantitative constraints on shoulder joint ROM could improve fossil reconstructions. Here, we investigated whether soft tissues affect the more mobile shoulder joint in the same manner in which they affect elbow joint ROM in an extant archosaur. This test involved separately and repeatedly measuring humeral ROM in Alligator mississippiensis as soft tissues were dissected away in stages to bare bone. Our data show that the ROMs of humeral flexion and extension, as well as abduction and adduction, both show a statistically significant increase as flesh is removed, but then decrease when the bones must be physically articulated and moved until they separate from one another and/or visible joint surfaces. A similar ROM pattern is inferred for humeral pronation and supination. All final skeletonized ROMs were less than initial fully fleshed ROMs. These results are consistent with previously reported elbow joint ROM patterns from the extant phylogenetic bracket of dinosaurs. Thus, studies that avoid separation of complementary articular surfaces may be providing fossil shoulder joint ROMs that underestimate in vivo ROM in dinosaurs, as well as other fossil archosaurs.

  19. Applying the General Linear Model to Repeated Measures Problems.

    Science.gov (United States)

    Pohlmann, John T.; McShane, Michael G.

    The purpose of this paper is to demonstrate the use of the general linear model (GLM) in problems with repeated measures on a dependent variable. Such problems include pretest-posttest designs, multitrial designs, and groups by trials designs. For each of these designs, a GLM analysis is demonstrated wherein full models are formed and restrictions…

  20. Nonparametric additive regression for repeatedly measured data

    KAUST Repository

    Carroll, R. J.

    2009-05-20

    We develop an easily computed smooth backfitting algorithm for additive model fitting in repeated measures problems. Our methodology easily copes with various settings, such as when some covariates are the same over repeated response measurements. We allow for a working covariance matrix for the regression errors, showing that our method is most efficient when the correct covariance matrix is used. The component functions achieve the known asymptotic variance lower bound for the scalar argument case. Smooth backfitting also leads directly to design-independent biases in the local linear case. Simulations show our estimator has smaller variance than the usual kernel estimator. This is also illustrated by an example from nutritional epidemiology. © 2009 Biometrika Trust.
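
    The paper's smooth backfitting estimator is not reproduced here; the sketch below is the classic backfitting cycle it builds on: repeatedly smooth the partial residuals against each covariate and recenter. The box-kernel smoother, its bandwidth, and the synthetic demo data are illustrative choices, not those of the paper, and the errors are treated as independent rather than through a working covariance matrix.

```python
import math
import random

def box_smooth(x, r, bandwidth=0.3):
    """Box-kernel (moving window) mean smoother evaluated at the data points."""
    out = []
    for xi in x:
        vals = [ri for xj, ri in zip(x, r) if abs(xj - xi) <= bandwidth]
        out.append(sum(vals) / len(vals))
    return out

def backfit(X, y, iters=10):
    """Fit y ~ mean + f1(x1) + ... + fp(xp) by cycling over components."""
    n, p = len(y), len(X)
    mean_y = sum(y) / n
    f = [[0.0] * n for _ in range(p)]
    for _ in range(iters):
        for j in range(p):
            # partial residuals: remove the mean and all other components
            partial = [y[i] - mean_y - sum(f[k][i] for k in range(p) if k != j)
                       for i in range(n)]
            f[j] = box_smooth(X[j], partial)
            center = sum(f[j]) / n           # identifiability: center each f_j
            f[j] = [v - center for v in f[j]]
    fitted = [mean_y + sum(f[j][i] for j in range(p)) for i in range(n)]
    return f, fitted

# quick demo on synthetic data: y = x1^2 + sin(3*x2) + noise
random.seed(0)
n = 200
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
y = [a * a + math.sin(3 * b) + random.gauss(0, 0.1) for a, b in zip(x1, x2)]
components, fitted = backfit([x1, x2], y)
```

    Smooth backfitting replaces the raw smoother with projections that account for the covariate design and, in the repeated measures case, the working covariance of the errors.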

  1. Systems biology: experimental design.

    Science.gov (United States)

    Kreutz, Clemens; Timmer, Jens

    2009-02-01

    Experimental design has a long tradition in statistics, engineering and life sciences, dating back to the beginning of the last century when optimal designs for industrial and agricultural trials were considered. In cell biology, the use of mathematical modeling approaches raises new demands on experimental planning. A maximum informative investigation of the dynamic behavior of cellular systems is achieved by an optimal combination of stimulations and observations over time. In this minireview, the existing approaches concerning this optimization for parameter estimation and model discrimination are summarized. Furthermore, the relevant classical aspects of experimental design, such as randomization, replication and confounding, are reviewed.

  2. Discriminant analysis for repeated measures data: a review

    Directory of Open Access Journals (Sweden)

    Lisa Lix

    2010-09-01

    Full Text Available Discriminant analysis (DA) encompasses procedures for classifying observations into groups (i.e., predictive discriminative analysis) and describing the relative importance of variables for distinguishing amongst groups (i.e., descriptive discriminative analysis). In recent years, a number of developments have occurred in DA procedures for the analysis of data from repeated measures designs. Specifically, DA procedures have been developed for repeated measures data characterized by missing observations and/or unbalanced measurement occasions, as well as high-dimensional data in which measurements are collected repeatedly on two or more variables. This paper reviews the literature on DA procedures for univariate and multivariate repeated measures data, focusing on covariance pattern and linear mixed-effects models. A numeric example illustrates their implementation using SAS software.

  3. Modeling repeated measurement data for occupational exposure assessment and epidemiology

    NARCIS (Netherlands)

    Peretz, Chava

    2004-01-01

    Repeated measurements designs occur frequently in the assessment of exposure to toxic chemicals. This thesis deals with the possibilities of using mixed effects models for occupational exposure assessment and in the analysis of exposure-response relationships. The model enables simultaneous estimation...

  4. Experimental Design Research

    DEFF Research Database (Denmark)

    This book presents a new, multidisciplinary perspective on and paradigm for integrative experimental design research. It addresses various perspectives on methods, analysis and overall research approach, and how they can be synthesized to advance understanding of design. It explores the foundations of experimental approaches and their utility in this domain, and brings together analytical approaches to promote an integrated understanding. The book also investigates where these approaches lead to and how they link design research more fully with other disciplines (e.g. psychology, cognition, sociology, computer science, management). Above all, the book emphasizes the integrative nature of design research in terms of the methods, theories, and units of study—from the individual to the organizational level. Although this approach offers many advantages, it has inherently led to a situation in current...

  5. Multivariate linear models and repeated measurements revisited

    DEFF Research Database (Denmark)

    Dalgaard, Peter

    2009-01-01

    Methods for generalized analysis of variance based on multivariate normal theory have been known for many years. In a repeated measurements context, it is most often of interest to consider transformed responses, typically within-subject contrasts or averages. Efficiency considerations lead...

  6. Teaching experimental design.

    Science.gov (United States)

    Fry, Derek J

    2014-01-01

    Awareness of poor design and published concerns over study quality stimulated the development of courses on experimental design intended to improve matters. This article describes some of the thinking behind these courses and how the topics can be presented in a variety of formats. The premises are that education in experimental design should be undertaken with an awareness of educational principles, of how adults learn, and of the particular topics in the subject that need emphasis. For those using laboratory animals, it should include ethical considerations, particularly severity issues, and accommodate learners not confident with mathematics. Basic principles, explanation of fully randomized, randomized block, and factorial designs, and discussion of how to size an experiment form the minimum set of topics. A problem-solving approach can help develop the skills of deciding what are correct experimental units and suitable controls in different experimental scenarios, identifying when an experiment has not been properly randomized or blinded, and selecting the most efficient design for particular experimental situations. Content, pace, and presentation should suit the audience and time available, and variety both within a presentation and in ways of interacting with those being taught is likely to be effective. Details are given of a three-day course based on these ideas, which has been rated informative, educational, and enjoyable, and can form a postgraduate module. It has oral presentations reinforced by group exercises and discussions based on realistic problems, and computer exercises which include some analysis. Other case studies consider a half-day format and a module for animal technicians. © The Author 2014. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  7. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
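
    The quoted savings can be reproduced from the compound-symmetry variance factor for ANCOVA on the mean of k follow-up measures with the baseline as covariate, f(rho) = (1 + (k-1)rho)/k - rho^2, maximized over rho to obtain the conservative value. The sketch below is our reading of the abstract's numbers, not the authors' code; the worst-case rho = (k-1)/(2k) follows from setting the derivative of f to zero.

```python
def conservative_saving(k):
    """Worst-case fractional sample size saving vs. the two-sample t test
    for ANCOVA on the mean of k follow-ups under compound symmetry."""
    rho = (k - 1) / (2 * k)                      # maximizes the variance factor
    factor = (1 + (k - 1) * rho) / k - rho ** 2  # relative variance vs. t test
    return 1 - factor

for k in (2, 3, 4):
    print(k, f"{100 * conservative_saving(k):.0f}%")  # prints 44%, 56%, 61%
```

    Even at the least favorable correlation, averaging the follow-ups and adjusting for baseline cuts the required sample size by more than two fifths.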

  8. Repeated measurement sampling in genetic association analysis with genotyping errors.

    Science.gov (United States)

    Lai, Renzhen; Zhang, Hong; Yang, Yaning

    2007-02-01

    Genotype misclassification occurs frequently in human genetic association studies. When cases and controls are subject to the same misclassification model, Pearson's chi-square test has the correct type I error but may lose power. Most current methods adjusting for genotyping errors assume that the misclassification model is known a priori or can be assessed by a gold standard instrument. But in practical applications, the misclassification probabilities may not be completely known or the gold standard method can be too costly to be available. The repeated measurement design provides an alternative approach for identifying misclassification probabilities. With this design, a proportion of the subjects are measured repeatedly (five or more repeats) for the genotypes when the error model is completely unknown. We investigate the applications of the repeated measurement method in genetic association analysis. Cost-effectiveness study shows that if the phenotyping-to-genotyping cost ratio or the misclassification rates are relatively large, the repeat sampling can gain power over the regular case-control design. We also show that the power gain is not sensitive to the genetic model, genetic relative risk and the population high-risk allele frequency, all of which are typically important ingredients in association studies. An important implication of this result is that whatever the genetic factors are, the repeated measurement method can be applied if the genotyping errors must be accounted for or the phenotyping cost is high.

  9. True Experimental Design.

    Science.gov (United States)

    Huck, Schuyler W.

    1991-01-01

    This poem, with stanzas in limerick form, refers humorously to the many threats to validity posed by problems in research design, including problems of sample selection, data collection, and data analysis. (SLD)

  10. Capturing learning effects on eye movements in repeated measures experiments

    DEFF Research Database (Denmark)

    Bagger, Martin; Orquin, Jacob Lund; Fiedler, Susann

    We propose and illustrate that repeated exposure to stimulus sets increases the size of saccade amplitudes. Saccadic amplitudes are closely related to the perceptual span and are therefore used as a measure of information intake in an experiment. Studies on expertise have shown that experts … experiment in which 68 participants made choices between four alternatives, with three different between-subjects conditions varying in presentation format (a verbal matrix, a pictorial matrix, and a realistic product representation). The results consistently demonstrate an increase of the saccade amplitude over the course of the experiment, independent of condition. We conclude by discussing our results in the light of a possible increase of the perceptual span and its implications for the research procedure in eye-tracking experiments with a repeated-measures design.

  11. Experimental design a chemometric approach

    CERN Document Server

    Deming, SN

    1987-01-01

    Now available in a paperback edition is a book which has been described as ``...an exceptionally lucid, easy-to-read presentation... would be an excellent addition to the collection of every analytical chemist. I recommend it with great enthusiasm.'' (Analytical Chemistry). Unlike most current textbooks, it approaches experimental design from the point of view of the experimenter, rather than that of the statistician. As the reviewer in `Analytical Chemistry' went on to say: ``Deming and Morgan should be given high praise for bringing the principles of experimental design to the level of the p

  12. Human Factors Experimental Design and Analysis Reference

    Science.gov (United States)

    2007-07-01

    Box's correction for repeated measures (Box, 1954) provides an adjusted F-table value based on the estimate of ε, with the corrected test referred to F[(k-1)ε, (n-1)(k-1)ε]; alternatively, the Imhof (1962) table applies for small sample sizes. Recoverable references from this slide excerpt: estimation of the Box correction for degrees of freedom from sample data in randomized block and split-plot designs, Journal of Educational Statistics, 1, 69-82; Imhof, J.P. (1962), Testing the …
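The ε in the adjusted degrees of freedom F[(k-1)ε, (n-1)(k-1)ε] can be estimated from the sample covariance matrix of the k repeated measures. A minimal pure-Python sketch of the Greenhouse-Geisser estimator (the double-centering formula is a standard result; the covariance matrices below are hypothetical):

```python
# Sketch: Greenhouse-Geisser estimate of Box's epsilon from the k x k
# covariance matrix S of the repeated measures.  epsilon = 1 under
# sphericity; its lower bound is 1/(k-1).  The adjusted test uses
# F[(k-1)*eps, (n-1)*(k-1)*eps] degrees of freedom.

def box_epsilon(S):
    k = len(S)
    row = [sum(S[i]) / k for i in range(k)]        # row means (S symmetric)
    grand = sum(row) / k                           # grand mean
    # double-centered covariance: s*_ij = s_ij - row_i - row_j + grand
    Sc = [[S[i][j] - row[i] - row[j] + grand for j in range(k)]
          for i in range(k)]
    tr = sum(Sc[i][i] for i in range(k))           # trace of S*
    tr2 = sum(Sc[i][j] * Sc[j][i]                  # trace of S* @ S*
              for i in range(k) for j in range(k))
    return tr * tr / ((k - 1) * tr2)

# Compound symmetry satisfies sphericity, so epsilon should be 1:
S_cs = [[1.0, 0.5, 0.5],
        [0.5, 1.0, 0.5],
        [0.5, 0.5, 1.0]]
print(round(box_epsilon(S_cs), 6))   # -> 1.0
```

Multiplying both degrees of freedom by this estimate before looking up the F critical value gives the Box-corrected test described in the slide.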

  13. [Repeated measurement of memory with valenced test items: verbal memory, working memory and autobiographic memory].

    Science.gov (United States)

    Kuffel, A; Terfehr, K; Uhlmann, C; Schreiner, J; Löwe, B; Spitzer, C; Wingenfeld, K

    2013-07-01

    A large number of questions in clinical and/or experimental neuropsychology require the multiple repetition of memory tests at relatively short intervals. Studies on the impact of the associated practice and interference effects on the validity of the test results are rare. Moreover, hardly any neuropsychological instruments exist to date that record memory performance with several parallel versions while also taking the emotional valence of the test material into consideration. The aim of the present study was to test whether a working memory test (WST, a digit-span task with neutral or negative distraction stimuli) devised by our workgroup can be used with repeated measurements. This question was also examined for parallel versions of a wordlist learning paradigm and an autobiographical memory test (AMT). Both tests contained stimuli with neutral, positive and negative valence. Twenty-four participants completed the memory testing, including the working memory test and three versions of the wordlist and the AMT, at one-week intervals (measurement points 1-3). The results reveal consistent performance across the three measurement points in the working and autobiographical memory tests. The valence of the stimulus material did not influence memory performance. In the delayed recall of the wordlist, an improvement in memory performance over time was seen. The working memory test presented here and the parallel versions for declarative and autobiographical memory constitute practical, economical instruments for repeated-measures designs. While the WST and AMT are appropriate for study designs with repeated measurements at relatively short intervals, longer intervals seem more favourable for wordlist learning paradigms. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Hierarchical linear model: thinking outside the traditional repeated-measures analysis-of-variance box.

    Science.gov (United States)

    Lininger, Monica; Spybrook, Jessaca; Cheatham, Christopher C

    2015-04-01

    Longitudinal designs are common in the field of athletic training. For example, in the Journal of Athletic Training from 2005 through 2010, authors of 52 of the 218 original research articles used longitudinal designs. In 50 of the 52 studies, a repeated-measures analysis of variance was used to analyze the data. A possible alternative to this approach is the hierarchical linear model, which has been readily accepted in other medical fields. In this short report, we demonstrate the use of the hierarchical linear model for analyzing data from a longitudinal study in athletic training. We discuss the relevant hypotheses, model assumptions, analysis procedures, and output from the HLM 7.0 software. We also examine the advantages and disadvantages of using the hierarchical linear model with repeated measures and repeated-measures analysis of variance for longitudinal data.

  15. Hierarchical Linear Model: Thinking Outside the Traditional Repeated-Measures Analysis-of-Variance Box

    Science.gov (United States)

    Lininger, Monica; Spybrook, Jessaca; Cheatham, Christopher C.

    2015-01-01

    Longitudinal designs are common in the field of athletic training. For example, in the Journal of Athletic Training from 2005 through 2010, authors of 52 of the 218 original research articles used longitudinal designs. In 50 of the 52 studies, a repeated-measures analysis of variance was used to analyze the data. A possible alternative to this approach is the hierarchical linear model, which has been readily accepted in other medical fields. In this short report, we demonstrate the use of the hierarchical linear model for analyzing data from a longitudinal study in athletic training. We discuss the relevant hypotheses, model assumptions, analysis procedures, and output from the HLM 7.0 software. We also examine the advantages and disadvantages of using the hierarchical linear model with repeated measures and repeated-measures analysis of variance for longitudinal data. PMID:25875072

  16. Bayesian model selection of informative hypotheses for repeated measurements

    NARCIS (Netherlands)

    Mulder, Joris; Klugkist, I.G.; Schoot, Rens van de; Meeus, W.H.J.; Selfhout, Maarten; Hoijtink, Herbert

    2010-01-01

    When analyzing repeated measurements data, researchers often have expectations about the relations between the measurement means. The expectations can often be formalized using equality and inequality constraints between (i) the measurement means over time, (ii) the measurement means between

  17. Bayesian model selection of informative hypotheses for repeated measurements

    NARCIS (Netherlands)

    Mulder, J.|info:eu-repo/dai/nl/304823031; Klugkist, I.G.|info:eu-repo/dai/nl/27330089X; Van de Schoot, R.|info:eu-repo/dai/nl/304833207; Meeus, W.H.J.|info:eu-repo/dai/nl/070442215; van Zalk, M.H.W.|info:eu-repo/dai/nl/304836214; Hoijtink, H.J.A.|info:eu-repo/dai/nl/075184427

    2009-01-01

    When analyzing repeated measurements data, researchers often have expectations about the relations between the measurement means. The expectations can often be formalized using equality and inequality constraints between (i) the measurement means over time, (ii) the measurement means between groups,

  18. Correct use of repeated measures analysis of variance.

    Science.gov (United States)

    Park, Eunsik; Cho, Meehye; Ki, Chang-Seok

    2009-02-01

    In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
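The partitioning that distinguishes a repeated-measures ANOVA from a standard one-way ANOVA (removing between-subject variability from the error term) can be sketched from first principles; the data below are hypothetical, and this is only the balanced one-way case, not the full SPSS workflow the paper demonstrates:

```python
# Sketch: one-way repeated-measures ANOVA computed from sums of
# squares.  Rows = subjects, columns = conditions (hypothetical data).

data = [
    [45, 50, 55],
    [42, 42, 45],
    [36, 41, 43],
    [39, 35, 40],
    [51, 55, 59],
    [44, 49, 56],
]
n, k = len(data), len(data[0])
grand = sum(sum(row) for row in data) / (n * k)
subj_means = [sum(row) / k for row in data]
cond_means = [sum(row[j] for row in data) / n for j in range(k)]

ss_total = sum((x - grand) ** 2 for row in data for x in row)
ss_subj = k * sum((m - grand) ** 2 for m in subj_means)     # removed from error
ss_cond = n * sum((m - grand) ** 2 for m in cond_means)     # treatment effect
ss_error = ss_total - ss_subj - ss_cond                     # residual

df_cond, df_error = k - 1, (n - 1) * (k - 1)
F = (ss_cond / df_cond) / (ss_error / df_error)
print(round(F, 2))   # -> 12.53
```

The resulting F is referred to an F distribution on (k-1, (n-1)(k-1)) degrees of freedom, possibly ε-adjusted when sphericity is violated.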

  19. Rank-Based Analysis of Unbalanced Repeated Measures Data

    Directory of Open Access Journals (Sweden)

    M. Mushfiqur Rashid

    2012-07-01

    Full Text Available In this article, we have developed a rank-based (intra-subject) analysis of clinical trials with unbalanced repeated measures data. We assume that the errors within each patient are exchangeable and continuous random variables. This rank-based inference is valid when the unbalanced data are missing either completely at random or by design. A drop-in-dispersion test is developed for general linear hypotheses. A numerical example is given to illustrate the procedure.

  20. Elements of Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Sivia, D.S. [Rutherford Appleton Lab., Oxon (United Kingdom)

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.

  1. Improving power to detect changes in blood miRNA expression by accounting for sources of variability in experimental designs.

    Science.gov (United States)

    Daniels, Sarah I; Sillé, Fenna C M; Goldbaum, Audrey; Yee, Brenda; Key, Ellen F; Zhang, Luoping; Smith, Martyn T; Thomas, Reuben

    2014-12-01

    Blood miRNAs are a new and promising area of disease research, but variability in miRNA measurements may limit detection of true-positive findings. Here, we measured sources of miRNA variability and determined whether repeated measures can improve power to detect fold-change differences between comparison groups. Blood from healthy volunteers (N = 12) was collected at three time points. The miRNAs were extracted by a method predetermined to give the highest miRNA yield. Nine different miRNAs were quantified using different qPCR assays and analyzed using mixed models to identify sources of variability. A larger number of miRNAs from a publicly available blood miRNA microarray dataset with repeated measures were used for a bootstrapping procedure to investigate effects of repeated measures on power to detect fold changes in miRNA expression for a theoretical case-control study. Technical variability in qPCR replicates was identified as a significant source of variability (P …). Power to detect small fold changes in blood miRNAs can be improved by accounting for sources of variability using repeated measures and choosing appropriate methods to minimize variability in miRNA quantification. This study demonstrates the importance of including repeated measures in experimental designs for blood miRNA research. See all the articles in this CEBP Focus section, "Biomarkers, Biospecimens, and New Technologies in Molecular Epidemiology." ©2014 American Association for Cancer Research.
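The paper's central point, that averaging repeated measures recovers power lost to technical variability, can be illustrated with a small Monte Carlo sketch. All numbers (group size, effect size, variance components) are hypothetical, and a normal approximation (|z| > 1.96) stands in for the t-test to keep the sketch dependency-free:

```python
# Sketch: power of a case-control comparison when each subject's
# biomarker is the average of k technical replicates.  Technical
# noise (sd_tech) shrinks by 1/sqrt(k); biological noise does not.

import random
import statistics

def simulate_power(k, n=25, effect=0.5, sd_bio=1.0, sd_tech=1.5,
                   reps=1000, seed=7):
    rng = random.Random(seed)             # fixed seed: deterministic sketch
    hits = 0
    for _ in range(reps):
        def subject(shift):
            true = rng.gauss(shift, sd_bio)                  # biology
            meas = [true + rng.gauss(0, sd_tech) for _ in range(k)]
            return statistics.mean(meas)                     # average replicates
        cases = [subject(effect) for _ in range(n)]
        ctrls = [subject(0.0) for _ in range(n)]
        se = (statistics.variance(cases) / n
              + statistics.variance(ctrls) / n) ** 0.5
        if abs(statistics.mean(cases) - statistics.mean(ctrls)) / se > 1.96:
            hits += 1
    return hits / reps

# With sd_tech > sd_bio, power rises noticeably from k=1 to k=3:
print(simulate_power(k=1), simulate_power(k=3))
```

This mirrors the bootstrapping argument in the abstract only qualitatively; the paper works with real microarray data rather than simulated Gaussians.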

  2. Graphical Models for Quasi-Experimental Designs

    Science.gov (United States)

    Kim, Yongnam; Steiner, Peter M.; Hall, Courtney E.; Su, Dan

    2016-01-01

    Experimental and quasi-experimental designs play a central role in estimating cause-effect relationships in education, psychology, and many other fields of the social and behavioral sciences. This paper presents and discusses the causal graphs of experimental and quasi-experimental designs. For quasi-experimental designs the authors demonstrate…

  3. Teaching experimental design to biologists.

    Science.gov (United States)

    Zolman, J F

    1999-12-01

    The teaching of research design and data analysis to our graduate students has been a persistent problem. A course is described in which students, early in their graduate training, obtain extensive practice in designing experiments and interpreting data. Lecture-discussions on the essentials of biostatistics are given, and then these essentials are repeatedly reviewed by illustrating their applications and misapplications in numerous research design problems. Students critique these designs and prepare similar problems for peer evaluation. In most problems the treatments are confounded by extraneous variables, proper controls may be absent, or data analysis may be incorrect. For each problem, students must decide whether the researchers' conclusions are valid and, if not, must identify a fatal experimental flaw. Students learn that an experiment is a well-conceived plan for data collection, analysis, and interpretation. They enjoy the interactive evaluations of research designs and appreciate the repetitive review of common flaws in different experiments. They also benefit from their practice in scientific writing and in critically evaluating their peers' designs.

  4. Assessing agreement with repeated measures for random observers.

    Science.gov (United States)

    Chen, Chia-Cheng; Barnhart, Huiman X

    2011-12-30

    Agreement studies are often concerned with assessing whether different observers for measuring responses on the same subject or sample can produce similar results. The concordance correlation coefficient (CCC) is a popular index for assessing the closeness among observers for quantitative measurements. Usually, the CCC is used for data without and with replications based on subject and observer effects only. However, we cannot use this methodology if repeated measurements rather than replications are collected. Although there exist some CCC-type indices for assessing agreement with repeated measurements, there is no CCC for random observers and random time points. In this paper, we propose a new CCC for repeated measures where both observers and time points are treated as random effects. A simulation study demonstrates our proposed methodology, and we use vertebral body data and image data for illustrations.
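The basic index this work generalizes is Lin's (1989) concordance correlation coefficient for two fixed observers; the repeated-measures extension with random observers and time points is not reproduced here. A sketch with hypothetical data:

```python
# Sketch: Lin's concordance correlation coefficient (CCC) between two
# observers measuring the same subjects.  CCC = 1 only for perfect
# agreement (points on the 45-degree line); it penalizes both location
# and scale shifts, unlike Pearson's r.

def ccc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx2 = sum((v - mx) ** 2 for v in x) / n     # 1/n scaling, as in Lin (1989)
    sy2 = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

observer_1 = [12.1, 15.3, 9.8, 20.4, 17.6]   # hypothetical readings
observer_2 = [11.8, 15.9, 10.2, 19.7, 18.1]
print(round(ccc(observer_1, observer_1), 3))  # self-agreement -> 1.0
print(round(ccc(observer_1, observer_2), 3))
```

The paper's contribution is to embed this idea in a model where observers and time points are random effects, so the index applies to repeated (not merely replicated) measurements.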

  5. Animal husbandry and experimental design.

    Science.gov (United States)

    Nevalainen, Timo

    2014-01-01

    If the scientist needs to contact the animal facility after any study to inquire about husbandry details, this represents a lost opportunity, which can ultimately interfere with the study results and their interpretation. There is a clear tendency for authors to describe methodological procedures down to the smallest detail, but at the same time to provide minimal information on animals and their husbandry. Controlling all major variables as far as possible is the key issue when establishing an experimental design. The other common mechanism affecting study results is a change in the variation. Factors causing bias or variation changes are also detectable within husbandry. Our lives and the lives of animals are governed by cycles: the seasons, the reproductive cycle, the weekend-working days, the cage change/room sanitation cycle, and the diurnal rhythm. Some of these may be attributable to routine husbandry, and the rest are cycles, which may be affected by husbandry procedures. Other issues to be considered are consequences of in-house transport, restrictions caused by caging, randomization of cage location, the physical environment inside the cage, the acoustic environment audible to animals, olfactory environment, materials in the cage, cage complexity, feeding regimens, kinship, and humans. Laboratory animal husbandry issues are an integral but underappreciated part of investigators' experimental design, which if ignored can cause major interference with the results. All researchers should familiarize themselves with the current routine animal care of the facility serving them, including their capabilities for the monitoring of biological and physicochemical environment.

  6. Quasi-Experimental Designs for Causal Inference

    Science.gov (United States)

    Kim, Yongnam; Steiner, Peter

    2016-01-01

    When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…

  7. The application of analysis of variance (ANOVA) to different experimental designs in optometry.

    Science.gov (United States)

    Armstrong, R A; Eperjesi, F; Gilmartin, B

    2002-05-01

    Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.

  8. Experimental design methods for bioengineering applications.

    Science.gov (United States)

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.

  9. Experimental Design and Some Threats to Experimental Validity: A Primer

    Science.gov (United States)

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  10. The Experimental Design Ability Test (EDAT)

    Science.gov (United States)

    Sirum, Karen; Humburg, Jennifer

    2011-01-01

    Higher education goals include helping students develop evidence based reasoning skills; therefore, scientific thinking skills such as those required to understand the design of a basic experiment are important. The Experimental Design Ability Test (EDAT) measures students' understanding of the criteria for good experimental design through their…

  11. Analysis of oligonucleotide array experiments with repeated measures using mixed models

    Directory of Open Access Journals (Sweden)

    Getchell Thomas V

    2004-12-01

    Full Text Available Abstract Background Two or more factor mixed factorial experiments are becoming increasingly common in microarray data analysis. In this case study, the two factors are presence (Patients with Alzheimer's disease) or absence (Control) of the disease, and brain regions including olfactory bulb (OB) or cerebellum (CER). In the design considered in this manuscript, OB and CER are repeated measurements from the same subject and, hence, are correlated. It is critical to identify sources of variability in the analysis of oligonucleotide array experiments with repeated measures, and correlations among data points have to be considered. In addition, multiple testing problems are more complicated in experiments with multi-level treatments or treatment combinations. Results In this study we adopted a linear mixed model to analyze oligonucleotide array experiments with repeated measures. We first construct a generalized F test to select differentially expressed genes. The Benjamini and Hochberg (BH) procedure of controlling false discovery rate (FDR) at 5% was applied to the P values of the generalized F test. For those genes with a significant generalized F test, we then categorize them based on whether the interaction terms were significant or not at the α-level (αnew = 0.0033) determined by the FDR procedure. Since simple effects may be examined for the genes with a significant interaction effect, we adopt the protected Fisher's least significant difference (LSD) test procedure at the level of αnew to control the family-wise error rate (FWER) for each gene examined. Conclusions A linear mixed model is appropriate for analysis of oligonucleotide array experiments with repeated measures. We constructed a generalized F test to select differentially expressed genes, and then applied a specific sequence of tests to identify factorial effects. This sequence of tests was designed to control for gene-based FWER.

  12. Analysis of oligonucleotide array experiments with repeated measures using mixed models.

    Science.gov (United States)

    Li, Hao; Wood, Constance L; Getchell, Thomas V; Getchell, Marilyn L; Stromberg, Arnold J

    2004-12-30

    Two or more factor mixed factorial experiments are becoming increasingly common in microarray data analysis. In this case study, the two factors are presence (Patients with Alzheimer's disease) or absence (Control) of the disease, and brain regions including olfactory bulb (OB) or cerebellum (CER). In the design considered in this manuscript, OB and CER are repeated measurements from the same subject and, hence, are correlated. It is critical to identify sources of variability in the analysis of oligonucleotide array experiments with repeated measures and correlations among data points have to be considered. In addition, multiple testing problems are more complicated in experiments with multi-level treatments or treatment combinations. In this study we adopted a linear mixed model to analyze oligonucleotide array experiments with repeated measures. We first construct a generalized F test to select differentially expressed genes. The Benjamini and Hochberg (BH) procedure of controlling false discovery rate (FDR) at 5% was applied to the P values of the generalized F test. For those genes with significant generalized F test, we then categorize them based on whether the interaction terms were significant or not at the alpha-level (alphanew = 0.0033) determined by the FDR procedure. Since simple effects may be examined for the genes with significant interaction effect, we adopt the protected Fisher's least significant difference test (LSD) procedure at the level of alphanew to control the family-wise error rate (FWER) for each gene examined. A linear mixed model is appropriate for analysis of oligonucleotide array experiments with repeated measures. We constructed a generalized F test to select differentially expressed genes, and then applied a specific sequence of tests to identify factorial effects. This sequence of tests applied was designed to control for gene based FWER.
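The Benjamini-Hochberg step of the pipeline above can be sketched generically (this is the standard step-up procedure applied to per-gene P values; the generalized F test itself is not reproduced here, and the P values below are hypothetical):

```python
# Sketch: Benjamini-Hochberg step-up procedure controlling FDR at
# level q.  Sort the m P values, find the largest rank r with
# p_(r) <= (r/m)*q, and reject the r smallest; returns their indices.

def benjamini_hochberg(pvals, q=0.05):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    r = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            r = rank                      # step-up: keep the largest such rank
    return sorted(order[:r])              # indices of rejected hypotheses

gene_pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(gene_pvals, q=0.05))   # -> [0, 1]
```

In the paper's workflow, the P value of the largest rejected rank also fixes the working α-level (their αnew) used for the follow-up interaction and protected-LSD tests.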

  13. Modeling intraindividual variability with repeated measures data methods and applications

    CERN Document Server

    Hershberger, Scott L

    2013-01-01

    This book examines how individuals behave across time and to what degree that behavior changes, fluctuates, or remains stable.It features the most current methods on modeling repeated measures data as reported by a distinguished group of experts in the field. The goal is to make the latest techniques used to assess intraindividual variability accessible to a wide range of researchers. Each chapter is written in a ""user-friendly"" style such that even the ""novice"" data analyst can easily apply the techniques.Each chapter features:a minimum discussion of mathematical detail;an empirical examp

  14. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices

    Science.gov (United States)

    Harrar, Solomon W.; Kong, Xiaoli

    2015-01-01

    In this paper, test statistics for repeated measures designs are introduced for the case where the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases, in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have power comparable to a popular method known to work well in low-dimensional situations, but the new methods show an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results. PMID:26778861

  15. Quasi experimental designs in pharmacist intervention research.

    Science.gov (United States)

    Krass, Ines

    2016-06-01

    Background In the field of pharmacist intervention research it is often difficult to conform to the rigorous requirements of "true experimental" models, especially the requirement of randomization. When randomization is not feasible, a practice-based researcher can choose from a range of "quasi-experimental designs", i.e., non-randomised and at times non-controlled. Objective The aim of this article was to provide an overview of quasi-experimental designs, discuss their strengths and weaknesses, and investigate their application in pharmacist intervention research over the previous decade. Results In the literature quasi-experimental studies may be classified into five broad categories: quasi-experimental designs without control groups; quasi-experimental designs that use control groups with no pre-test; quasi-experimental designs that use control groups and pre-tests; interrupted time series; and stepped wedge designs. Quasi-experimental study design has consistently featured in the evolution of pharmacist intervention research. The most commonly applied of all quasi-experimental designs in the practice-based research literature are the one-group pre-post-test design and the non-equivalent control group design (i.e., untreated control group with dependent pre-tests and post-tests), and they have been used to test the impact of pharmacist interventions in general medications management as well as in specific disease states. Conclusion Quasi-experimental studies have a role to play as proof of concept, in the pilot phases of interventions, and when testing different intervention components, especially in complex interventions. They serve to develop an understanding of possible intervention effects: while in isolation they yield weak evidence of clinical efficacy, taken collectively, they help build a body of evidence in support of the value of pharmacist interventions across different practice settings and countries. However, when a traditional RCT is not feasible for …

  16. Design of Formulated Products: Experimental Component

    DEFF Research Database (Denmark)

    Conte, Elisa; Gani, Rafiqul; Cheng, Y. S.

    2012-01-01

    … computer-aided design (Stage 1), which generates a list of feasible candidates; experimental planning (Stage 2), which generates a list of experiments and checks the available experimental set-ups; and experimental testing (Stage 3), which measures the necessary data and verifies the desirable attributes …

  17. Experimental design and priority PLS regression

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    1996-01-01

    Rules, ideas and algorithms of the H-principle are used to analyse models that are derived from experimental design. Some of the basic ideas of experimental design are reviewed and related to the methodology of the H-principle. New methods of optimal response surfaces are developed....

  18. Light Experimental Supercruiser Conceptual Design

    Science.gov (United States)

    1976-07-01

    Excerpt from a scanned report; the OCR is largely unrecoverable. Recoverable content: RADIUS-200 TOGW; Figure 118, Arrow point design performance carpet; Section 4.3, Vehicle Parametric Trades (continued), sustained load factor; Figure 120, Arrow sustained load factor versus …

  19. Some Thoughts on Experimental Design.

    Science.gov (United States)

    Stern, Claudio D

    2017-01-01

    Perhaps even more important than the techniques themselves are the quality of the biological questions asked and the design of the experiments devised to answer them. This chapter summarizes some of the key issues and also touches on how the same principles affect scholarly use of the scientific literature and good peer-reviewing practices.

  20. A Simple and Transparent Alternative to Repeated Measures ANOVA

    Directory of Open Access Journals (Sweden)

    James W. Grice

    2015-09-01

    Full Text Available Observation Oriented Modeling is a novel approach toward conceptualizing and analyzing data. Compared with traditional parametric statistics, Observation Oriented Modeling is more intuitive, relatively free of assumptions, and encourages researchers to stay close to their data. Rather than estimating abstract population parameters, the overarching goal of the analysis is to identify and explain distinct patterns within the observations. Selected data from a recent study by Craig et al. were analyzed using Observation Oriented Modeling; this analysis was contrasted with a traditional repeated measures ANOVA assessment. Various pitfalls in traditional parametric analyses were avoided when using Observation Oriented Modeling, including the presence of outliers and missing data. Finally, the differences between Observation Oriented Modeling and various parametric and nonparametric statistical methods were discussed.

  1. Experimental Design: Review and Comment.

    Science.gov (United States)

    1984-02-01

    and early work in the subject was done by Wald (1943), Hotelling (1944), and Elfving (1952). The major contributions to the area, however, were made by...Kiefer (1958, 1959) and Kiefer and Wolfowitz (1959, 1960), who synthesized and greatly extended the previous work. Although the ideas of optimal...design theory is the general equivalence theorem (Kiefer and Wolfowitz 1960), which links D- and G-optimality. The theorem is phrased in terms of
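
    The general equivalence theorem mentioned here can be illustrated numerically: for the straight-line model on [-1, 1], the design placing equal weight on the two endpoints is D-optimal, and its maximum scaled prediction variance attains the bound p/n, whereas a non-optimal design exceeds it. A minimal pure-Python sketch (the design points and evaluation grid are illustrative assumptions, not taken from this record):

    ```python
    # Compares two 4-point designs for the straight-line model y = b0 + b1*x
    # on [-1, 1] under the D-criterion (det X'X) and G-criterion (max variance).

    def info_matrix(xs):
        n = len(xs)
        # X'X for the model matrix with rows (1, x)
        return [[n, sum(xs)], [sum(xs), sum(x * x for x in xs)]]

    def det2(m):
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]

    def inv2(m):
        d = det2(m)
        return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

    def max_pred_variance(xs):
        # G-criterion: max over [-1, 1] of f(x)' (X'X)^{-1} f(x), with f(x) = (1, x)
        grid = [i / 50 - 1 for i in range(101)]
        minv = inv2(info_matrix(xs))
        def var(x):
            f = (1.0, x)
            return sum(f[i] * minv[i][j] * f[j] for i in range(2) for j in range(2))
        return max(var(x) for x in grid)

    design_a = [-1.0, -1.0, 1.0, 1.0]   # D-optimal: mass at the endpoints
    design_b = [-0.5, 0.0, 0.5, 1.0]    # arbitrary comparison design

    print(det2(info_matrix(design_a)), det2(info_matrix(design_b)))    # 16.0 vs 5.0
    print(max_pred_variance(design_a), max_pred_variance(design_b))    # 0.5 vs 1.5
    ```

    Note that the endpoint design achieves maximum prediction variance p/n = 2/4 = 0.5 exactly, as the equivalence theorem predicts for a D-optimal design.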

  2. Chemicals-Based Formulation Design: Virtual Experimentations

    DEFF Research Database (Denmark)

    Conte, Elisa; Gani, Rafiqul

    2011-01-01

    This paper presents a systematic procedure for virtual experimentations related to the design of liquid formulated products. All the experiments that need to be performed when designing a liquid formulated product (lotion), such as ingredients selection and testing, solubility tests, property mea...... on the design of an insect repellent lotion will show that the software is an essential instrument in decision making, and that it reduces time and resources since experimental efforts can be focused on one or few product alternatives....

  3. Design of Uranium Solution Critical Experimental Device

    Institute of Scientific and Technical Information of China (English)

    YI; Da-yong; GUO; Zhi-jia; YAO; Cheng-zhi; SHI; Chen-lei

    2012-01-01

    In 2012, the Department of Reactor Engineering Design completed the design and mechanical analysis of a uranium solution critical experimental device. According to the user's requirements and nuclear safety regulations, the design and analysis mainly involve two sets of core structures, the uranium solution loop, the water loop, the experimental bench, etc. The core, which includes a core vessel, reactor core support, safety rods, control rods, and so on, is used for containing uranium solution and fuel elements and fulfilling the

  4. [Analysis of binary classification repeated measurement data with GEE and GLMMs using SPSS software].

    Science.gov (United States)

    An, Shengli; Zhang, Yanhong; Chen, Zheng

    2012-12-01

    To analyze binary classification repeated measurement data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0. GEE and GLMM models were tested on a sample of binary classification repeated measurement data using SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated measurement data using GEE and GLMMs.

  5. An Introduction to Experimental Design Research

    DEFF Research Database (Denmark)

    2016-01-01

    Design research brings together influences from the whole gamut of social, psychological, and more technical sciences to create a tradition of empirical study stretching back over 50 years (Horvath 2004; Cross 2007). A growing part of this empirical tradition is experimental, which has gained...... in importance as the field has matured. As in other evolving disciplines, e.g. behavioural psychology, this maturation brings with it ever-greater scientific and methodological demands (Reiser 1939; Dorst 2008). In particular, the experimental paradigm holds distinct and significant challenges for the modern...... design researcher. Thus, this book brings together leading researchers from across design research in order to provide the reader with a foundation in experimental design research; an appreciation of possible experimental perspectives; and insight into how experiments can be used to build robust...

  6. Censored Weibull Distributed Data in Experimental Design

    OpenAIRE

    Støtvig, Jeanett Gunneklev

    2014-01-01

    Gives an introduction to experimental design and investigates how four methods handle Weibull distributed censored data, where the four methods are the quick and dirty method, the maximum likelihood method, single imputation and multiple imputation.

  7. Experimental design of a waste glass study

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.F.; Redgate, P.E.; Hrma, P.

    1995-04-01

    A Composition Variation Study (CVS) is being performed to support a future high-level waste glass plant at Hanford. A total of 147 glasses, covering a broad region of compositions melting at approximately 1150 °C, were tested in five statistically designed experimental phases. This paper focuses on the goals, strategies, and techniques used in designing the five phases. The overall strategy was to investigate glass compositions on the boundary and interior of an experimental region defined by single-component, multiple-component, and property constraints. Statistical optimal experimental design techniques were used to cover various subregions of the experimental region in each phase. Empirical mixture models for glass properties (as functions of glass composition) from previous phases were used in designing subsequent CVS phases.

  8. Chemical-Based Formulation Design: Virtual Experimentation

    DEFF Research Database (Denmark)

    Conte, Elisa; Gani, Rafiqul

    This paper presents a software, the virtual Product-Process Design laboratory (virtual PPD-lab) and the virtual experimental scenarios for design/verification of consumer oriented liquid formulated products where the software can be used. For example, the software can be employed for the design...... system engineering community, it is possible now to replace, at least, some of the experimental steps with efficient and validated model-based approaches. For example, the search space can be significantly reduced through computer-aided screenings of the active ingredient (AI), the solvent mixture......, the additives and/or their mixtures (formulations). Therefore, the experimental resources can focus on a few candidate product formulations to find the best product. The virtual PPD-lab allows various options for experimentations related to design and/or verification of the product. For example, the selection...

  9. New product development using experimental design

    OpenAIRE

    Zhang, Zhihai

    1998-01-01

    New product development is one of the most powerful but difficult activities in business. It is also a very important factor affecting final product quality. There are many techniques available for new product development. Experimental design is now regarded as one of the most significant techniques. In this article, we will discuss how to use the technique of experimental design in developing a new product - an extrusion press. In order to provide a better understanding of this specific proc...

  10. Matrix-based concordance correlation coefficient for repeated measures.

    Science.gov (United States)

    Hiriote, Sasiprapa; Chinchilli, Vernon M

    2011-09-01

    In many clinical studies, Lin's concordance correlation coefficient (CCC) is a common tool to assess the agreement of a continuous response measured by two raters or methods. However, the need for measures of agreement may arise for more complex situations, such as when the responses are measured on more than one occasion by each rater or method. In this work, we propose a new CCC in the presence of repeated measurements, called the matrix-based concordance correlation coefficient (MCCC), based on a matrix norm that possesses the properties needed to characterize the level of agreement between two p × 1 vectors of random variables. It can be shown that the MCCC reduces to Lin's CCC when p = 1. For inference, we propose an estimator for the MCCC based on U-statistics. Furthermore, we derive the asymptotic distribution of the estimator of the MCCC, which is proven to be normal. The simulation studies confirm that overall, in terms of accuracy, precision, and coverage probability, the estimator of the MCCC works very well in general cases, especially when n is greater than 40. Finally, we use real data from an Asthma Clinical Research Network (ACRN) study and the Penn State Young Women's Health Study for demonstration.
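
    For reference, the p = 1 special case mentioned in this abstract, Lin's classical CCC, is short enough to sketch directly. The sample data below are hypothetical; this is Lin's 1989 estimator with biased (1/n) variance terms, not the MCCC itself:

    ```python
    # Lin's concordance correlation coefficient for two raters measuring
    # the same n subjects (the p = 1 case of the MCCC described above).

    def lins_ccc(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((a - mx) ** 2 for a in x) / n      # biased variances, as in Lin (1989)
        syy = sum((b - my) ** 2 for b in y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
        return 2 * sxy / (sxx + syy + (mx - my) ** 2)

    x = [10.0, 12.0, 14.0, 16.0, 18.0]
    print(lins_ccc(x, x))                       # perfect agreement -> 1.0
    print(lins_ccc(x, [v + 2.0 for v in x]))    # a constant shift lowers the CCC to 0.8
    ```

    Unlike the Pearson correlation, the CCC penalizes the location shift in the second call even though the two sequences are perfectly correlated.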

  11. Bayesian Concordance Correlation Coefficient with Application to Repeatedly Measured Data

    Directory of Open Access Journals (Sweden)

    Atanu BHATTACHARJEE

    2015-10-01

    Full Text Available Objective: In medical research, Lin's classical concordance correlation coefficient (CCC) is frequently applied to evaluate the similarity of the measurements produced by different raters or methods on the same subjects. It is particularly useful for continuous data. The objective of this paper is to propose the Bayesian counterpart to compute the CCC for continuous data. Material and Methods: A total of 33 patients with astrocytoma of the brain treated in the Department of Radiation Oncology at Malabar Cancer Centre were enrolled in this work. The data are continuous measurements of tumor volume and tumor size, repeatedly measured during the baseline pretreatment workup and post-surgery follow-ups for all patients. The tumor volume and tumor size were measured separately by MRI and CT scan. The agreement of measurement between MRI and CT scan was calculated through the CCC. The statistical inference was performed through the Markov Chain Monte Carlo (MCMC) technique. Results: The Bayesian CCC was found suitable to obtain prominent evidence for test statistics to explore the relation between concordance measurements. The posterior mean estimates and 95% credible intervals of the CCC on tumor size and tumor volume were 0.96 (0.87, 0.99) and 0.98 (0.95, 0.99), respectively. Conclusion: Bayesian inference was adopted for development of the computational algorithm. The approach illustrated in this work provides researchers an opportunity to find the most appropriate model for specific data and apply the CCC to fulfill the desired hypothesis.

  12. Considering RNAi experimental design in parasitic helminths.

    Science.gov (United States)

    Dalzell, Johnathan J; Warnock, Neil D; McVeigh, Paul; Marks, Nikki J; Mousley, Angela; Atkinson, Louise; Maule, Aaron G

    2012-04-01

    Almost a decade has passed since the first report of RNA interference (RNAi) in a parasitic helminth. Whilst much progress has been made with RNAi informing gene function studies in disparate nematode and flatworm parasites, substantial and seemingly prohibitive difficulties have been encountered in some species, hindering progress. An appraisal of current practices, trends and ideals of RNAi experimental design in parasitic helminths is both timely and necessary for a number of reasons: firstly, the increasing availability of parasitic helminth genome/transcriptome resources means there is a growing need for gene function tools such as RNAi; secondly, fundamental differences and unique challenges exist for parasite species which do not apply to model organisms; thirdly, the inherent variation in experimental design, and reported difficulties with reproducibility undermine confidence. Ideally, RNAi studies of gene function should adopt standardised experimental design to aid reproducibility, interpretation and comparative analyses. Although the huge variations in parasite biology and experimental endpoints make RNAi experimental design standardization difficult or impractical, we must strive to validate RNAi experimentation in helminth parasites. To aid this process we identify multiple approaches to RNAi experimental validation and highlight those which we deem to be critical for gene function studies in helminth parasites.

  13. Yakima Hatchery Experimental Design : Annual Progress Report.

    Energy Technology Data Exchange (ETDEWEB)

    Busack, Craig; Knudsen, Curtis; Marshall, Anne

    1991-08-01

    This progress report details the results and status of the Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by planning efforts of various research and quality control teams of the project that are annually captured as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.

  14. Optimal Bayesian Experimental Design for Combustion Kinetics

    KAUST Repository

    Huan, Xun

    2011-01-04

    Experimental diagnostics play an essential role in the development and refinement of chemical kinetic models, whether for the combustion of common complex hydrocarbons or of emerging alternative fuels. Questions of experimental design—e.g., which variables or species to interrogate, at what resolution and under what conditions—are extremely important in this context, particularly when experimental resources are limited. This paper attempts to answer such questions in a rigorous and systematic way. We propose a Bayesian framework for optimal experimental design with nonlinear simulation-based models. While the framework is broadly applicable, we use it to infer rate parameters in a combustion system with detailed kinetics. The framework introduces a utility function that reflects the expected information gain from a particular experiment. Straightforward evaluation (and maximization) of this utility function requires Monte Carlo sampling, which is infeasible with computationally intensive models. Instead, we construct a polynomial surrogate for the dependence of experimental observables on model parameters and design conditions, with the help of dimension-adaptive sparse quadrature. Results demonstrate the efficiency and accuracy of the surrogate, as well as the considerable effectiveness of the experimental design framework in choosing informative experimental conditions.
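
    When the forward model is cheap, the expected-information-gain utility described here can be estimated by plain nested Monte Carlo, which is the baseline the paper's surrogate approach accelerates. Below is a toy sketch for a linear-Gaussian model y = θ·d + ε with a standard normal prior on θ; the model, priors, and sample sizes are our own illustrative assumptions, not the paper's combustion kinetics or its sparse-quadrature surrogate:

    ```python
    import math, random

    def _log_mean_exp(vals):
        # numerically stable log of the mean of exp(vals)
        m = max(vals)
        return m + math.log(sum(math.exp(v - m) for v in vals) / len(vals))

    def eig(d, n_outer=200, n_inner=200, sigma=0.1, seed=0):
        """Nested Monte Carlo estimate of expected information gain at design d."""
        rng = random.Random(seed)
        def loglik(y, theta):
            z = (y - theta * d) / sigma
            return -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))
        inner_thetas = [rng.gauss(0.0, 1.0) for _ in range(n_inner)]  # prior draws
        total = 0.0
        for _ in range(n_outer):
            theta = rng.gauss(0.0, 1.0)               # draw theta from the prior
            y = theta * d + rng.gauss(0.0, sigma)     # simulate an observation
            log_marginal = _log_mean_exp([loglik(y, t) for t in inner_thetas])
            total += loglik(y, theta) - log_marginal  # information gained from y
        return total / n_outer

    print(eig(0.1), eig(2.0))  # the larger design value is more informative
    ```

    The cost is n_outer × n_inner likelihood evaluations per candidate design, which is exactly why the paper replaces the model with a polynomial surrogate for expensive kinetics simulations.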

  15. Optimal experimental design strategies for detecting hormesis.

    Science.gov (United States)

    Dette, Holger; Pepelyshev, Andrey; Wong, Weng Kee

    2011-12-01

    Hormesis is a widely observed phenomenon in many branches of life sciences, ranging from toxicology studies to agronomy, with obvious public health and risk assessment implications. We address optimal experimental design strategies for determining the presence of hormesis in a controlled environment using the recently proposed Hunt-Bowman model. We propose alternative models that have an implicit hormetic threshold, discuss their advantages over current models, and construct and study properties of optimal designs for (i) estimating model parameters, (ii) estimating the threshold dose, and (iii) testing for the presence of hormesis. We also determine maximin optimal designs that maximize the minimum of the design efficiencies when we have multiple design criteria or there is model uncertainty where we have a few plausible models of interest. We apply these optimal design strategies to a teratology study and show that the proposed designs outperform the implemented design by a wide margin for many situations.

  16. Using experimental design to define boundary manikins.

    Science.gov (United States)

    Bertilsson, Erik; Högberg, Dan; Hanson, Lars

    2012-01-01

    When evaluating human-machine interaction it is central to consider anthropometric diversity to ensure intended accommodation levels. A well-known method is the use of boundary cases, where manikins with extreme but likely measurement combinations are derived by mathematical treatment of anthropometric data. The supposition of that method is that the use of these manikins will facilitate accommodation of the expected part of the total, less extreme, population. Literature sources differ in how many manikins should be defined, and in what way. A field similar to the boundary case method is experimental design, in which relationships between the factors affecting a process are studied by a systematic approach. This paper examines the possibility of adopting methodology used in experimental design to define a group of manikins. Different experimental designs were adopted for use together with a confidence region and its axes. The results of the study show that it is possible to adapt the methodology of experimental design when creating groups of manikins. The size of these groups depends heavily on the number of key measurements, but also on the type of experimental design chosen.

  17. Experimental design in chromatography: a tutorial review.

    Science.gov (United States)

    Hibbert, D Brynn

    2012-12-01

    The ability of a chromatographic method to successfully separate, identify and quantitate species is determined by many factors, many of which are under the control of the experimenter. When attempting to discover the important factors and then optimise a response by tuning those factors, experimental design (design of experiments, DoE) provides a powerful suite of statistical methodology. Advantages include modelling by empirical functions, not requiring detailed knowledge of the underlying physico-chemical properties of the system, a defined number of experiments to be performed, and available software to accomplish the task. Two uses of DoE in chromatography are for showing lack of significant effects in robustness studies for method validation, and for identifying significant factors and then optimising a response with respect to them in method development. Plackett-Burman designs are widely used in validation studies, and fractional factorial designs and their extensions such as central composite designs are the most popular optimisers. Box-Behnken and Doehlert designs are becoming more used as efficient alternatives. If it is not possible to practically realise the values of the factors required by experimental designs, or if there is a constraint on the total number of experiments that can be done, then D-optimal designs can be very powerful. Examples of the use of DoE in chromatography are reviewed. Recommendations are given on how to report DoE studies in the literature.
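
    As a concrete illustration of the screening designs named in this review, the run tables of a two-level full factorial and a regular half fraction with generator C = AB can be produced in a few lines. The -1/+1 coding and the generator choice are standard DoE conventions, not details taken from this record:

    ```python
    import itertools, math

    def full_factorial(k):
        """All 2**k runs of a two-level factorial, coded -1/+1."""
        return [list(run) for run in itertools.product((-1, 1), repeat=k)]

    def half_fraction(k):
        """2**(k-1) runs: full factorial in k-1 factors, last factor = their product."""
        return [run + [math.prod(run)] for run in full_factorial(k - 1)]

    design = half_fraction(3)       # defining relation I = ABC, so C = AB
    print(len(design), design)      # 4 runs instead of 8
    ```

    In the half fraction the main effect of C is aliased with the AB interaction, which is the usual price paid for halving the number of runs in a screening study.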

  18. Experimental design research approaches, perspectives, applications

    CERN Document Server

    Stanković, Tino; Štorga, Mario

    2016-01-01

    This book presents a new, multidisciplinary perspective on and paradigm for integrative experimental design research. It addresses various perspectives on methods, analysis and overall research approach, and how they can be synthesized to advance understanding of design. It explores the foundations of experimental approaches and their utility in this domain, and brings together analytical approaches to promote an integrated understanding. The book also investigates where these approaches lead to and how they link design research more fully with other disciplines (e.g. psychology, cognition, sociology, computer science, management). Above all, the book emphasizes the integrative nature of design research in terms of the methods, theories, and units of study—from the individual to the organizational level. Although this approach offers many advantages, it has inherently led to a situation in current research practice where methods are diverging and integration between individual, team and organizational under...

  19. Optimal Experimental Design for Model Discrimination

    Science.gov (United States)

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  20. Design of 162 MHz RF Experimental Cavity

    Institute of Scientific and Technical Information of China (English)

    YIN; Zhi-guo; CAO; Xue-long; GUO; Juan-juan; JI; Bin; FU; Xiao-liang; WEI; Jun-yi

    2015-01-01

    In this paper, a 162 MHz RF experimental cavity is designed to study the multipacting multiplier effect of the medium and the metal electrode and its relationship with the plate surface characteristics, and to find out methods for inhibiting the multipacting multiplier effect. The

  1. New product development using experimental design

    NARCIS (Netherlands)

    Zhang, Zhihai

    1998-01-01

    New product development is one of the most powerful but difficult activities in business. It is also a very important factor affecting final product quality. There are many techniques available for new product development. Experimental design is now regarded as one of the most significant techniques

  2. Estimation of the concordance correlation coefficient for repeated measures using SAS and R.

    Science.gov (United States)

    Carrasco, Josep L; Phillips, Brenda R; Puig-Martinez, Josep; King, Tonya S; Chinchilli, Vernon M

    2013-03-01

    The concordance correlation coefficient is one of the most common approaches used to assess agreement among different observers or instruments when the outcome of interest is a continuous variable. A SAS macro and R package are provided here to estimate the concordance correlation coefficient (CCC) where the design of the data involves repeated measurements by subject and observer. The CCC is estimated using U-statistics (UST) and variance components (VC) approaches. Confidence intervals and standard errors are reported along with the point estimate of the CCC. In the case of the VC approach, the linear mixed model output and variance components estimates are also provided. The performance of each function is shown by means of some examples with real data sets.

  3. Involving students in experimental design: three approaches.

    Science.gov (United States)

    McNeal, A P; Silverthorn, D U; Stratton, D B

    1998-12-01

    Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims.

  4. Design and Experimental Implementation of Bipedal robot

    Directory of Open Access Journals (Sweden)

    Sreejith C

    2012-09-01

    Full Text Available Biped robots have better mobility than conventional wheeled robots, but they tend to tip over easily. To be able to walk stably in various environments, such as on rough terrain, up and down slopes, or in regions containing obstacles, it is necessary for the robot to adapt to the ground conditions with a foot motion, and maintain its stability with a torso motion. In this paper, we first formulate the design and walking pattern for a bipedal robot, and then a kicking robot is developed for experimental verification. Finally, the correlation between the design and the walking patterns is described through simulation studies, and the effectiveness of the proposed methods is confirmed by simulation examples and experimental results.

  5. Bioinspiration: applying mechanical design to experimental biology.

    Science.gov (United States)

    Flammang, Brooke E; Porter, Marianne E

    2011-07-01

    The production of bioinspired and biomimetic constructs has fostered much collaboration between biologists and engineers, although the extent of biological accuracy employed in the designs produced has not always been a priority. Even the exact definitions of "bioinspired" and "biomimetic" differ among biologists, engineers, and industrial designers, leading to confusion regarding the level of integration and replication of biological principles and physiology. By any name, biologically-inspired mechanical constructs have become an increasingly important research tool in experimental biology, offering the opportunity to focus research by creating model organisms that can be easily manipulated to fill a desired parameter space of structural and functional repertoires. Innovative researchers with both biological and engineering backgrounds have found ways to use bioinspired models to explore the biomechanics of organisms from all kingdoms to answer a variety of different questions. Bringing together these biologists and engineers will hopefully result in an open discourse of techniques and fruitful collaborations for experimental and industrial endeavors.

  6. Fundamentals of statistical experimental design and analysis

    CERN Document Server

    Easterling, Robert G

    2015-01-01

    Professionals in all areas - business; government; the physical, life, and social sciences; engineering; medicine, etc. - benefit from using statistical experimental design to better understand their worlds and then use that understanding to improve the products, processes, and programs they are responsible for. This book aims to provide the practitioners of tomorrow with a memorable, easy to read, engaging guide to statistics and experimental design. This book uses examples, drawn from a variety of established texts, and embeds them in a business or scientific context, seasoned with a dash of humor, to emphasize the issues and ideas that led to the experiment and the what-do-we-do-next? steps after the experiment. Graphical data displays are emphasized as means of discovery and communication and formulas are minimized, with a focus on interpreting the results that software produce. The role of subject-matter knowledge, and passion, is also illustrated. The examples do not require specialized knowledge, and t...

  7. Optimal experimental design strategies for detecting hormesis

    OpenAIRE

    2010-01-01

    Hormesis is a widely observed phenomenon in many branches of life sciences ranging from toxicology studies to agronomy with obvious public health and risk assessment implications. We address optimal experimental design strategies for determining the presence of hormesis in a controlled environment using the recently proposed Hunt-Bowman model. We propose alternative models that have an implicit hormetic threshold, discuss their advantages over current models, construct and study properties of...

  8. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.

  9. Sequential experimental design based generalised ANOVA

    Science.gov (United States)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.

  10. Teaching renewable energy using online PBL in investigating its effect on behaviour towards energy conservation among Malaysian students: ANOVA repeated measures approach

    Science.gov (United States)

    Nordin, Norfarah; Samsudin, Mohd Ali; Hadi Harun, Abdul

    2017-01-01

    This research aimed to investigate whether an online problem based learning (PBL) approach to teaching the topic of renewable energy improves students' behaviour towards energy conservation. A renewable energy online problem based learning (REePBaL) instruction package was developed based on the theory of constructivism and an adaptation of the online learning model. This study employed a single-group quasi-experimental design to ascertain the change in students' behaviour towards energy conservation after the intervention. The study involved 48 secondary school students in a Malaysian public school. A repeated measures ANOVA was employed to compare scores of students' behaviour towards energy conservation before and after the intervention. Based on the findings, students' behaviour towards energy conservation improved after the intervention.
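
    For a single group measured on repeated occasions, the repeated measures ANOVA used in this study partitions variance into subject, occasion, and residual components. A minimal sketch with hypothetical pre/post scores (not the study's data; for two occasions the F statistic equals the squared paired t statistic):

    ```python
    # F statistic for a one-way repeated measures ANOVA (subjects x conditions).

    def rm_anova_f(data):
        """data[s][c]: score of subject s under condition c. Returns (F, df1, df2)."""
        n, k = len(data), len(data[0])
        grand = sum(sum(row) for row in data) / (n * k)
        subj_means = [sum(row) / k for row in data]
        cond_means = [sum(data[s][c] for s in range(n)) / n for c in range(k)]
        ss_total = sum((data[s][c] - grand) ** 2 for s in range(n) for c in range(k))
        ss_subj = k * sum((m - grand) ** 2 for m in subj_means)        # between subjects
        ss_treat = n * sum((m - grand) ** 2 for m in cond_means)       # between occasions
        ss_error = ss_total - ss_subj - ss_treat                       # residual
        df1, df2 = k - 1, (n - 1) * (k - 1)
        return (ss_treat / df1) / (ss_error / df2), df1, df2

    # hypothetical pre/post behaviour scores for 4 students (illustrative only)
    scores = [[10, 14], [12, 15], [9, 13], [11, 16]]
    print(rm_anova_f(scores))  # F = 96.0 on (1, 3) degrees of freedom
    ```

    Removing the subject sum of squares from the error term is what gives the within-subjects design its power relative to an independent-groups ANOVA on the same scores.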

  11. Experimental design of laminar proportional amplifiers

    Science.gov (United States)

    Hellbaum, R. F.

    1976-01-01

    An experimental program was initiated at Langley Research Center to study the effects of various parameters on the design of laminar proportional beam deflection amplifiers. Matching and staging of amplifiers to obtain high-pressure gain was also studied. Variable parameters were aspect ratio, setback, control length, receiver distance, receiver width, width of center vent, and bias pressure levels. Usable pressure gains from 4 to 19 per stage can now be achieved, and five amplifiers were staged together to yield pressure gains up to 2,000,000.

  12. Set membership experimental design for biological systems

    Directory of Open Access Journals (Sweden)

    Marvel Skylar W

    2012-03-01

    Full Text Available Abstract Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. 
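The bounded-error idea can be illustrated with a toy example (not the authors' algorithm, which applies interval analysis to nonlinear ODE models): propagate interval-valued parameters through a simple model to bound the predicted measurement at each candidate time point, then favour the candidate whose predicted bounds are widest, i.e. where a measurement would cut away the most uncertainty. The model and parameter intervals below are hypothetical.

```python
import math

# Toy bounded-error propagation for y(t) = a * exp(-b * t), with a in [0.8, 1.2]
# and b in [0.1, 0.3]. For t > 0 the model is increasing in a and decreasing in b,
# so evaluating at the interval endpoints gives exact output bounds.
def predict_interval(t, a=(0.8, 1.2), b=(0.1, 0.3)):
    lo = a[0] * math.exp(-b[1] * t)
    hi = a[1] * math.exp(-b[0] * t)
    return lo, hi

# Candidate measurement times; pick the one with the widest predicted range.
candidates = [1.0, 3.0, 6.0, 10.0]
widths = {t: predict_interval(t)[1] - predict_interval(t)[0] for t in candidates}
best = max(widths, key=widths.get)
```

For non-monotone models, endpoint evaluation is no longer exact and genuine interval arithmetic (with subdivision) is required, which is where the interval-analysis machinery cited in the abstract comes in.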

  13. Set membership experimental design for biological systems

    Science.gov (United States)

    2012-01-01

    Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study.

  14. Experimental Design for the LATOR Mission

    Science.gov (United States)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.

    2004-01-01

    This paper discusses experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach unprecedented accuracy of 1 part in 10(exp 8) in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (proportional to G(exp 2)) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J(sub 2), and will improve determination of a variety of relativistic effects including Lense-Thirring precession. The mission will benefit from the recent progress in optical communication technologies, the immediate and natural step above the standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in the tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.

  15. Analysis of repeated measurements from medical research when observations are missing

    OpenAIRE

    Walker, K.

    2007-01-01

    Subject dropout is a common problem in repeated measurements health studies. Where dropout is related to the response, the results obtained can be substantially biased. The research in this thesis is motivated by a repeated measurements asthma clinical trial with substantial patient dropout. In practice the extent to which missing observations affect parameter estimates and their efficiency is not clear. Through extensive simulation studies under various scenarios and missing data mechanisms...

  16. The role of experimental typography in designing logotypes

    OpenAIRE

    Pogačnik, Tadeja

    2014-01-01

    Designing logotypes is an important part of graphic design. Great logotypes are designed using custom made typefaces. Therefore, it is very important, especially for the typographic designer, to have practical experience and be up to date with all trends in the field of experimental typefaces design, also called experimental typography. In my thesis statement, I carefully examined the problems of experimental typography - which allows more creative and free typography designing for different ...

  17. Manifold Regularized Experimental Design for Active Learning.

    Science.gov (United States)

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples for alleviating the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.

  18. Autonomous entropy-based intelligent experimental design

    Science.gov (United States)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. 
This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units working toward a shared goal.
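The selection rule this abstract describes — choose the experiment whose distribution of expected results has maximum entropy, since that is the experiment with the greatest expected information gain — can be sketched directly. The candidate names and outcome distributions below are hypothetical:

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in nats) of a discrete outcome distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

# Hypothetical predicted outcome distributions for three candidate experiments.
candidates = {
    "exp_A": [0.90, 0.05, 0.05],  # outcome nearly certain: little to learn
    "exp_B": [0.34, 0.33, 0.33],  # near-uniform prediction: most informative
    "exp_C": [0.60, 0.30, 0.10],
}

# Entropy-based inquiry: run the experiment with the most uncertain prediction.
best = max(candidates, key=lambda k: shannon_entropy(candidates[k]))
```

The thesis's nested entropy sampling addresses the harder problem of searching this entropy landscape when candidate experiments form a high-dimensional continuum rather than a short list, which a brute-force argmax like the above cannot handle.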

  19. Web Based Learning Support for Experimental Design in Molecular Biology.

    Science.gov (United States)

    Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob

    An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…

  20. Repeated measures of serum glucose and insulin in relation to postmenopausal breast cancer.

    Science.gov (United States)

    Kabat, Geoffrey C; Kim, Mimi; Caan, Bette J; Chlebowski, Rowan T; Gunter, Marc J; Ho, Gloria Y F; Rodriguez, Beatriz L; Shikany, James M; Strickler, Howard D; Vitolins, Mara Z; Rohan, Thomas E

    2009-12-01

    Experimental and epidemiological evidence suggests that circulating glucose and insulin may play a role in breast carcinogenesis. However, few cohort studies have examined breast cancer risk in association with glucose and insulin levels, and studies to date have had only baseline measurements of exposure. We conducted a longitudinal study of postmenopausal breast cancer risk using the 6% random sample of women in the Women's Health Initiative clinical trials whose fasting blood samples, provided at baseline and at years 1, 3 and 6, were analyzed for glucose and insulin. In addition, a 1% sample of women in the observational study, who had glucose and insulin measured in fasting blood samples drawn at baseline and in year 3, were included in the analysis. We used Cox proportional hazards models to estimate hazard ratios and 95% confidence intervals for the association of baseline and follow-up measurements of serum glucose and insulin with breast cancer risk. All statistical tests were 2-sided. Among 5,450 women with baseline serum glucose and insulin values, 190 incident cases of breast cancer were ascertained over a median of 8.0 years of follow-up. The highest tertile of baseline insulin, relative to the lowest, was associated with a 2-fold increase in risk in the total population (multivariable hazard ratio 2.22, 95% confidence interval 1.39-3.53) and with a 3-fold increase in risk in women who were not enrolled in the intervention arm of any clinical trial (multivariable hazard ratio 3.15, 95% confidence interval 1.61-6.17). Glucose levels showed no association with risk. Analysis of the repeated measurements supported the results of the baseline analysis. These data suggest that elevated serum insulin levels may be a risk factor for postmenopausal breast cancer.
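As a rough sanity check on what a "highest versus lowest tertile" contrast measures, one can compute a crude person-time rate ratio between tertiles. The counts below are invented for illustration only; the paper's Cox proportional hazards models additionally handle censoring and adjust for covariates, which a crude ratio does not.

```python
# Hypothetical case counts and person-years for the bottom (T1) and top (T3)
# baseline-insulin tertiles (illustrative numbers, not the paper's data).
cases = {"T1": 40, "T3": 85}
person_years = {"T1": 14500, "T3": 13900}

# Crude incidence rates (cases per person-year) and their ratio.
rate = {k: cases[k] / person_years[k] for k in cases}
rate_ratio = rate["T3"] / rate["T1"]
```

A rate ratio of this kind approximates the hazard ratio only when rates are low and roughly constant over follow-up; Cox regression is the appropriate analysis for the actual study.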

  1. Experimental design and model choice the planning and analysis of experiments with continuous or categorical response

    CERN Document Server

    Toutenburg, Helge

    1995-01-01

    This textbook gives a representation of the design and analysis of experiments, that comprises the aspects of classical theory for continuous response and of modern procedures for categorical response, and especially for correlated categorical response. Complex designs, as for example, cross-over and repeated measures, are included. Thus, it is an important book for statisticians in the pharmaceutical industry as well as for clinical research in medicine and dentistry.

  2. A repeated measures experiment of green exercise to improve self-esteem in UK school children.

    Directory of Open Access Journals (Sweden)

    Katharine Reed

    Full Text Available Exercising in natural, green environments creates greater improvements in adults' self-esteem than exercise undertaken in urban or indoor settings. No comparable data are available for children. The aim of this study was to determine whether so called 'green exercise' affected changes in self-esteem, enjoyment and perceived exertion in children differently to urban exercise. We assessed cardiorespiratory fitness (20 m shuttle-run) and self-reported physical activity (PAQ-A) in 11 and 12 year olds (n = 75). Each pupil completed two 1.5 mile timed runs, one in an urban and another in a rural environment. Trials were completed one week apart during scheduled physical education lessons allocated using a repeated measures design. Self-esteem was measured before and after each trial; ratings of perceived exertion (RPE) and enjoyment were assessed after completing each trial. We found a significant main effect (F(1,74) = 12.2, p<0.001) for the increase in self-esteem following exercise, but there was no condition by exercise interaction (F(1,74) = 0.13, p = 0.72). There were no significant differences in perceived exertion or enjoyment between conditions. There was a negative correlation (r = -0.26, p = 0.04) between habitual physical activity and RPE during the control condition, which was not evident in the green exercise condition (r = -0.07, p = 0.55). Contrary to previous studies in adults, green exercise did not produce significantly greater increases in self-esteem than the urban exercise condition. Green exercise was enjoyed equally by children with differing levels of habitual physical activity and has the potential to engage less active children in exercise.

  3. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    OpenAIRE

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non–expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual changes. We used BEDCI to diagnose non–expert-like student thinking in experimental design…

  4. Power Analysis Tutorial for Experimental Design Software

    Science.gov (United States)

    2014-11-01

    (Fragmented index excerpt) The report covers power analysis for experimental design software, including how to specify delta and sigma when constructing a design in Design-Expert, a JMP Monte Carlo simulation script (Appendix E), and the role of degrees of freedom for error.

  5. Graphic Methods for Interpreting Longitudinal Dyadic Patterns From Repeated-Measures Actor-Partner Interdependence Models

    DEFF Research Database (Denmark)

    Perry, Nicholas; Baucom, Katherine; Bourne, Stacia

    2017-01-01

    Researchers commonly use repeated-measures actor–partner interdependence models (RM-APIM) to understand how romantic partners change in relation to one another over time. However, traditional interpretations of the results of these models do not fully or correctly capture the dyadic temporal…

  6. The concordance correlation coefficient for repeated measures estimated by variance components.

    Science.gov (United States)

    Carrasco, Josep L; King, Tonya S; Chinchilli, Vernon M

    2009-01-01

    The concordance correlation coefficient (CCC) is an index that is commonly used to assess the degree of agreement between observers measuring a continuous characteristic. Here, a CCC for longitudinal repeated measurements is developed through the appropriate specification of the intraclass correlation coefficient from a variance components linear mixed model. A case example and the results of a simulation study are provided.
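For reference, the classical cross-sectional CCC that this variance-components version generalises has a simple closed form (Lin's estimator): CCC = 2·s_xy / (s_x² + s_y² + (x̄ − ȳ)²). The repeated-measures extension requires fitting a linear mixed model and is not shown here; a sketch of the cross-sectional index:

```python
import statistics as st

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient for paired measurements."""
    n = len(x)
    mx, my = st.fmean(x), st.fmean(y)
    # Population (1/n) moments, following Lin's original estimator.
    sx2 = sum((v - mx) ** 2 for v in x) / n
    sy2 = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

Unlike the Pearson correlation, the CCC penalises both location and scale shifts between observers: two perfectly correlated but systematically offset raters score below 1.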

  7. Cross-trimester repeated measures testing for Down's syndrome screening: an assessment.

    LENUS (Irish Health Repository)

    Wright, D

    2010-07-01

    To provide estimates and confidence intervals for the performance (detection and false-positive rates) of screening for Down\\'s syndrome using repeated measures of biochemical markers from first and second trimester maternal serum samples taken from the same woman.

  8. The Multilevel Approach to Repeated Measures for Complete and Incomplete Data

    NARCIS (Netherlands)

    Maas, CJM; Snijders, TAB

    2003-01-01

    Repeated measurements often are analyzed by multivariate analysis of variance (MANOVA). An alternative approach is provided by multilevel analysis, also called the hierarchical linear model (HLM), which makes use of random coefficient models. This paper is a tutorial which indicates that the HLM can…

  9. Concordance correlation coefficients estimated by generalized estimating equations and variance components for longitudinal repeated measurements.

    Science.gov (United States)

    Tsai, Miao-Yu

    2017-04-15

    The concordance correlation coefficient (CCC) is a commonly accepted measure of agreement between two observers for continuous responses. This paper proposes a generalized estimating equations (GEE) approach allowing dependency between repeated measurements over time to assess intra-agreement for each observer and inter- and total agreement among multiple observers simultaneously. Furthermore, the indices of intra-, inter-, and total agreement through variance components (VC) from an extended three-way linear mixed model (LMM) are also developed with consideration of the correlation structure of longitudinal repeated measurements. Simulation studies are conducted to compare the performance of the GEE and VC approaches for repeated measurements from longitudinal data. An application of optometric conformity study is used for illustration. In conclusion, the GEE approach allowing flexibility in model assumptions and correlation structures of repeated measurements gives satisfactory results with small mean square errors and nominal 95% coverage rates for large data sets, and when the assumption of the relationship between variances and covariances for the extended three-way LMM holds, the VC approach performs outstandingly well for all sample sizes. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Experimental design in analytical chemistry--part II: applications.

    Science.gov (United States)

    Ebrahimi-Najafabadi, Heshmatollah; Leardi, Riccardo; Jalali-Heravi, Mehdi

    2014-01-01

    This paper reviews the applications of experimental design to optimize some analytical chemistry techniques such as extraction, chromatography separation, capillary electrophoresis, spectroscopy, and electroanalytical methods.

  11. Conceptual design report, CEBAF basic experimental equipment

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1990-04-13

    The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.

  12. EBTS:DESIGN AND EXPERIMENTAL STUDY.

    Energy Technology Data Exchange (ETDEWEB)

    PIKIN,A.; ALESSI,J.; BEEBE,E.; KPONOU,A.; PRELEC,K.; KUZNETSOV,G.; TIUNOV,M.

    2000-11-06

    Experimental study of the BNL Electron Beam Test Stand (EBTS), which is a prototype of the Relativistic Heavy Ion Collider (RHIC) Electron Beam Ion Source (EBIS), is currently underway. The basic physics and engineering aspects of a high current EBIS implemented in EBTS are outlined and construction of its main systems is presented. Efficient transmission of a 10 A electron beam through the ion trap has been achieved. Experimental results on generation of multiply charged ions with both continuous gas and external ion injection confirm stable operation of the ion trap.

  13. Experimental Plans and Intensive Numerical Aided Design

    Directory of Open Access Journals (Sweden)

    Yvon Gardan

    2014-09-01

    Full Text Available This paper deals with new methods to optimize design and subsequent phases, notably in SMEs specialized in manufacturing. SMEs use numerical simulation to verify that the design meets the expectations of the specification, following the current traditional process: CAD model, simulation of its behavior, changes in the CAD model, and so on. This process suffers from a number of drawbacks: no overall multi-criteria vision, and the use of CAD software that is by nature "constructive", not considering the overall objectives (even with integration of the parameters or with consideration of the downstream phases). The research centre DINCCS led several projects, relying on industrial cases, to consider more efficient approaches. It is proposed to reverse this process by making intensive simulations, based on trade knowledge, before design itself. In order to plan the great number of simulations, automatically designed plans of experiments are used. We discuss the application of this approach to intensive simulations. The stakes are crucial for SMEs, particularly manufacturing ones. By using numerical simulation (optimization) intensively, before setting the CAD model, they can obtain unexpected gains (mass, better use of the means of manufacture, etc.). The results show that the proposed approach is a very promising new way of computer aided design.

  14. Teaching Experimental Design to Elementary School Pupils in Greece

    Science.gov (United States)

    Karampelas, Konstantinos

    2016-01-01

    This research is a study about the possibility to promote experimental design skills to elementary school pupils. Experimental design and the experiment process are foundational elements in current approaches to Science Teaching, as they provide learners with profound understanding about knowledge construction and science inquiry. The research was…

  15. The Implications of "Contamination" for Experimental Design in Education

    Science.gov (United States)

    Rhoads, Christopher H.

    2011-01-01

    Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…

  16. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    Science.gov (United States)

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gulnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the…

  17. A framework for efficient process development using optimal experimental designs

    NARCIS (Netherlands)

    Ven, P. van de; Bijlsma, S.; Gout, E.; Voort Maarschalk, K. van der; Thissen, U.

    2011-01-01

    Introduction: The aim of this study was to develop and demonstrate a framework assuring efficient process development using fewer experiments than standard experimental designs. Methods: A novel optimality criterion for experimental designs (Iw criterion) is defined that leads to more efficient process development.

  18. Research designs for experimental single-case studies

    Directory of Open Access Journals (Sweden)

    Ralf Spieß

    2012-09-01

    Full Text Available This overview describes the most important designs for single case experimental studies, which are ABAB-design, multiple baseline design, alternating treatments design and changing criterion design. The logic of experimental control in single case studies is explained, and it is described how these different designs are able to provide internal validity and enable causal interpretations of intervention outcome. An important precondition of valid interpretation is objective and reliable data assessment. Data evaluation by visual inspection is explained and several methods of statistical data analysis are discussed. To establish generalizability across persons, situations, and settings, the importance of replication studies is highlighted.

  19. Autism genetics: Methodological issues and experimental design.

    Science.gov (United States)

    Sacco, Roberto; Lintas, Carla; Persico, Antonio M

    2015-10-01

    Autism is a complex neuropsychiatric disorder of developmental origin, where multiple genetic and environmental factors likely interact resulting in a clinical continuum between "affected" and "unaffected" individuals in the general population. During the last two decades, relevant progress has been made in identifying chromosomal regions and genes in linkage or association with autism, but no single gene has emerged as a major cause of disease in a large number of patients. The purpose of this paper is to discuss specific methodological issues and experimental strategies in autism genetic research, based on fourteen years of experience in patient recruitment and association studies of autism spectrum disorder in Italy.

  20. Irradiation Design for an Experimental Murine Model

    Science.gov (United States)

    Ballesteros-Zebadúa, P.; Lárraga-Gutierrez, J. M.; García-Garduño, O. A.; Rubio-Osornio, M. C.; Custodio-Ramírez, V.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Paz, C.; Celis, M. A.

    2010-12-01

    In radiotherapy and stereotactic radiosurgery, small animal experimental models are frequently used, since there are still a lot of unsolved questions about the biological and biochemical effects of ionizing radiation. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6MV Linac. This rodent model is focused on the research of the inflammatory effects produced by ionizing radiation in the brain. In this work, comparisons between Pencil Beam and Monte Carlo techniques were used in order to evaluate the accuracy of the calculated dose using a commercial planning system. Challenges in this murine model are discussed.

  1. Split-plot designs for multistage experimentation

    DEFF Research Database (Denmark)

    Kulahci, Murat; Tyssedal, John

    2016-01-01

    at the same time will be more efficient. However, there have been only a few attempts in the literature to provide an adequate and easy-to-use approach for this problem. In this paper, we present a novel methodology for constructing two-level split-plot and multistage experiments. The methodology is based...... be accommodated in each stage. Furthermore, split-plot designs for multistage experiments with good projective properties are also provided....

  2. Information measures in nonlinear experimental design

    Science.gov (United States)

    Niple, E.; Shaw, J. H.

    1980-01-01

    Some different approaches to the problem of designing experiments which estimate the parameters of nonlinear models are discussed. The assumption in these approaches that the information in a set of data can be represented by a scalar is criticized, and the nonscalar discrimination information is proposed as the proper measure to use. The two-step decay example in Box and Lucas (1959) is used to illustrate the main points of the discussion.

  3. REPEATED MEASURES ANALYSIS OF CHANGES IN PHOTOSYNTHETIC EFFICIENCY IN SOUR CHERRY DURING WATER DEFICIT

    Directory of Open Access Journals (Sweden)

    Marija Viljevac

    2012-06-01

    Full Text Available The objective of this study was to investigate changes in photosynthetic efficiency applying repeated measures ANOVA, using the photosynthetic performance index (PIABS) of the JIP-test as a vitality parameter, in seven genotypes of sour cherry (Prunus cerasus L.) during 10 days of continuous water deficit. Both univariate and multivariate repeated measures ANOVA revealed a highly significant time effect (Days) and its subsequent interactions with genotype and water deficit. However, the multivariate Pillai's trace test detected the interaction Time × Genotype × Water deficit as not significant. According to the Tukey's Studentized Range (HSD) test, differences between the control and genotypes exposed to water stress became significant on the fourth day of the experiment, indicating that the plants, on average, began to lose their photosynthetic efficiency four days after being exposed to water shortage. It corroborates previous findings in other species that PIABS is a very sensitive tool for detecting drought stress.

  4. Design and Experimental Implementation of Bipedal robot

    OpenAIRE

    Sreejith C; Sreeshma K

    2012-01-01

    Biped robots have better mobility than conventional wheeled robots, but they tend to tip over easily. To be able to walk stably in various environments, such as on rough terrain, up and down slopes, or in regions containing obstacles, it is necessary for the robot to adapt to the ground conditions with a foot motion, and maintain its stability with a torso motion. In this paper, we first formulate the design and walking pattern for a bipedal robot and then a kicking robot has been developed f...

  5. Are There Linguistic Markers of Suicidal Writing That Can Predict the Course of Treatment? A Repeated Measures Longitudinal Analysis.

    Science.gov (United States)

    Brancu, Mira; Jobes, David; Wagner, Barry M; Greene, Jeffrey A; Fratto, Timothy A

    2016-07-02

    The purpose of this pilot study was to predict resolution of suicidal ideation and risk over the course of therapy among suicidal outpatients (N = 144) using a novel method for analyzing Self- versus Relationally oriented qualitative written responses to the Suicide Status Form (SSF). A content analysis software program was used to extract word counts, and a repeated measures longitudinal design was implemented to assess improvement over time. Patients with primarily Relationally focused word counts were more likely to have a quicker suicide risk resolution than those with more Self-focused word counts (6-7 sessions versus 17-18 sessions). Implications of these data are discussed, including the potential for enhancing treatment outcomes using this method with individuals entering treatment.

  6. On experimentation across science, design and society

    DEFF Research Database (Denmark)

    Boris, Stefan Darlan

    2016-01-01

    The article describes how the principal idea behind the landscape laboratories has been to develop a 1:1 platform where researchers, practitioners and lay people can meet and cooperate on the development and testing of new design concepts for establishing and managing urban landscapes. This is something which is becoming increasingly relevant, as landscape architects and urban planners today have to address the challenges confronting urbanism due to the continued entanglement of urbanisation and anthropogenic processes. These are challenges where the act of destabilizing dichotomies (inside/outside, natural/manmade, etc.) is one out of several reasons for not only continuing but also strengthening the landscape laboratories as testing grounds for future urban landscapes and green spaces in the Anthropocene.

  7. 10 simple rules for best experimental design in ecology

    OpenAIRE

    Sajadi, Farwa

    2016-01-01

    A quick set of rules on how best to execute an experimental design in ecology. From having a clear hypothesis to obtaining accurate statistics, this guide will help make sure authors are on the right track before publishing. The 10 simple rules are based on articles written to help readers and editors learn more about experimental design and how to avoid unforeseen pitfalls. These rules act as a checklist for authors to go through to make sure they have created the best experimental design...

  8. Numerical taxonomy of maize landraces: comparison between experimental designs

    OpenAIRE

    1989-01-01

    [EN] Seventy-three maize (Zea mays L.) landraces from Northwestern Spain were grown according to two different experimental designs. The first one (design A) was a randomized complete blocks design with two replications per trial at two locations for two years. The second design (design B) is simpler than the first one: the populations were grown at one location without replications for three years. Numerical taxonomy of these landraces was made according to results of the field trials u...

  9. Simultaneous optimal experimental design for in vitro binding parameter estimation.

    Science.gov (United States)

    Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C

    2013-10-01

    Simultaneous optimization of in vitro ligand binding studies was performed using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a general optimized design regardless of the binding site capacity and relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including commonly encountered factors during experimentation (residual error, between-experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared to the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as with the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples.
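
    The D-optimality criterion used in the abstract above maximizes the determinant of the Fisher information matrix over candidate designs. A minimal sketch for a hypothetical one-parameter-pair decay model (not the PopED workflow itself), assuming independent unit-variance Gaussian noise, shows why spreading measurement times across the decay is more informative than clustering them early:

```python
import numpy as np

def fim_det(times, A=1.0, k=0.5):
    """det(J^T J) for the model y = A*exp(-k*t).  Under iid Gaussian
    noise this is proportional to det of the Fisher information, so a
    larger value means a more D-optimal set of sampling times."""
    t = np.asarray(times, dtype=float)
    dA = np.exp(-k * t)            # sensitivity of y to A
    dk = -A * t * np.exp(-k * t)   # sensitivity of y to k
    J = np.column_stack([dA, dk])  # Jacobian of the model
    return np.linalg.det(J.T @ J)

clustered = [0.1, 0.2, 0.3, 0.4]   # all early: k is poorly resolved
spread    = [0.1, 1.0, 3.0, 6.0]   # spans the decay
```

    Here `fim_det(spread)` exceeds `fim_det(clustered)`, mirroring the abstract's finding that different measurement times are needed for precise parameter estimation.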

  10. Active Photonic Crystal Switches: Modeling, Design and Experimental Characterization

    DEFF Research Database (Denmark)

    Heuck, Mikkel; Yu, Yi; Kristensen, Philip Trøst;

    2013-01-01

    In this paper, we present recent progress in modeling, design, fabrication and experimental characterization of InP photonic crystal all-optical switches. Novel designs with increased flexibility and performance are presented, and their operation using high speed data signals is analyzed numerically...

  11. The Use of an Experimental Design Approach to Investigate the ...

    African Journals Online (AJOL)

    The Use of an Experimental Design Approach to Investigate the Interactions of Additives ... When a conventional starting, lighting and ignition (SLI) lead acid battery is ... Typical flooded nominal 8 Ah test cells were assembled in a reverse ratio ...

  12. The nonstandard algorithm for constructing efficient conjoint experimental designs

    Directory of Open Access Journals (Sweden)

    Kuzmanović Marija

    2008-01-01

    Full Text Available Conjoint analysis is a research technique for measuring consumer preferences, and a method for simulating consumers' possible reactions to changes in current products or to products newly introduced into an existing competitive market. One of the most critical steps in applying Conjoint analysis is the construction of experimental designs. The purpose of an experimental design is to give a rough overall idea of the shape of the experimental response surface while requiring only a relatively small number of runs. Ideally, such designs are orthogonal and balanced. In practice, though, optimal designs are hard to construct, so near-optimal, efficient designs are constructed instead. There are several ways to quantify the relative efficiency of experimental designs; the choice of measure determines which types of experimental designs are favored, as well as the algorithms for choosing efficient designs. In this paper, an algorithm is proposed that combines one standard and one non-standard optimality criterion. Computational experiments were performed, and the results of a comparison with the algorithm implemented in the commercial package SPSS confirm the efficiency of the proposed algorithm.
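
    A common way to quantify the relative efficiency mentioned above is D-efficiency, which is 100 for an orthogonal, balanced design. The sketch below computes it for a full 2^3 factorial versus an arbitrary non-orthogonal fraction; the designs are illustrative, not those of the paper.

```python
import numpy as np
from itertools import product

def d_efficiency(X):
    """Relative D-efficiency of design matrix X (runs x parameters):
    100 * det(X'X / N)^(1/p).  Equals 100 for an orthogonal,
    balanced design in effects coding."""
    N, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X / N) ** (1.0 / p)

# Full 2^3 factorial in effects coding: intercept + 3 main effects.
levels = np.array(list(product([-1, 1], repeat=3)), dtype=float)
X_full = np.column_stack([np.ones(len(levels)), levels])

# A non-orthogonal subset: keep only four of the eight runs.
X_frac = X_full[[0, 1, 2, 4], :]
```

    The full factorial scores exactly 100; the arbitrary half loses orthogonality and drops to roughly 71, which is the kind of gap an exchange algorithm tries to close.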

  13. Process Model Construction and Optimization Using Statistical Experimental Design,

    Science.gov (United States)

    1988-04-01

    Memo No. 88-442, March 1988. Process Model Construction and Optimization Using Statistical Experimental Design, by Emmanuel Sachs (Assistant Professor) and George Prueger. Abstract: A methodology is presented for the construction of process models by the combination of physically based mechanistic...

  14. Model-Based Optimal Experimental Design for Complex Physical Systems

    Science.gov (United States)

    2015-12-03

    an open-loop behavior, where no feedback is involved, and the observations from any experiment do not affect the design of other experiments... developing and refining models of physical systems... improve design and decision-making under uncertainty. Yet experimental observations can be difficult, time-consuming, and expensive to acquire. In this...

  15. Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    Science.gov (United States)

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from experimental study details on implementation of Microsoft Mathematics in Calculus, students' achievement and the effects of the use of Microsoft…

  16. Design Issues and Inference in Experimental L2 Research

    Science.gov (United States)

    Hudson, Thom; Llosa, Lorena

    2015-01-01

    Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…

  18. A Bayesian experimental design approach to structural health monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Flynn, Eric [UCSD; Todd, Michael [UCSD

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we will outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.

  19. Experimentally based, longitudinally designed, teacher-focused intervention to help physical education teachers be more autonomy supportive toward their students.

    Science.gov (United States)

    Cheon, Sung Hyeon; Reeve, Johnmarshall; Moon, Ik Soo

    2012-06-01

    Using the field's state-of-the-art knowledge, we designed, implemented, and assessed the effectiveness of an intervention to help physical education (PE) teachers be more autonomy supportive during instruction. Nineteen secondary-school PE teachers in Seoul were randomly assigned into either an experimental or a delayed-treatment control group, and their 1,158 students self-reported their course-related psychological need satisfaction, autonomous motivation, amotivation, classroom engagement, skill development, future intentions, and academic achievement at the beginning, middle, and end of the semester. Observers' ratings and students' self-reports confirmed that the intervention was successful. Repeated-measures ANCOVAs showed that the students of teachers in the experimental group showed midsemester and end-of-semester improvements in all dependent measures. A multilevel structural equation model mediation analysis showed why the teacher-training program produced improvements in all six student outcomes - namely, teachers in the experimental group vitalized their students' psychological need satisfaction during PE class in ways that teachers in the control group were unable to do, and it was this enhanced need satisfaction that explained the observed improvements in all six outcomes.

  20. Iterative Weighted Semiparametric Least Squares Estimation in Repeated Measurement Partially Linear Regression Models

    Institute of Scientific and Technical Information of China (English)

    Ge-mai Chen; Jin-hong You

    2005-01-01

    Consider a repeated measurement partially linear regression model with an unknown parameter vector β. Based on the semiparametric generalized least squares estimator (SGLSE) of β, we propose an iterative weighted semiparametric least squares estimator (IWSLSE) and show that it improves upon the SGLSE in terms of the asymptotic covariance matrix. An adaptive procedure is given to determine the number of iterations. We also show that when the number of replicates is less than or equal to two, the IWSLSE cannot improve upon the SGLSE. These results are generalizations of those in [2] to the case of semiparametric regressions.

  1. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    Science.gov (United States)

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  2. Fundamentals of experimental design: lessons from beyond the textbook world

    Science.gov (United States)

    We often think of experimental designs as analogous to recipes in a cookbook. We look for something that we like and frequently return to those that have become our long-standing favorites. We can easily become complacent, favoring the tried-and-true designs (or recipes) over those that contain unkn...

  3. Experimental Design and Multiplexed Modeling Using Titrimetry and Spreadsheets

    Science.gov (United States)

    Harrington, Peter De B.; Kolbrich, Erin; Cline, Jennifer

    2002-07-01

    The topics of experimental design and modeling are important for inclusion in the undergraduate curriculum. Many general chemistry and quantitative analysis courses introduce students to spreadsheet programs, such as MS Excel. Students in the laboratory sections of these courses use titrimetry as a quantitative measurement method. Unfortunately, the only model that students may be exposed to in introductory chemistry courses is the working curve based on the linear model. A novel experiment based on a multiplex model has been devised for titrating several vinegar samples at a time. The multiplex titration can be applied to many other routine determinations. An experimental design model is fit to the titrimetric measurements using the MS Excel LINEST function to estimate the concentration of each sample. This experiment provides valuable lessons in error analysis, Class A glassware tolerances, experimental simulation, statistics, modeling, and experimental design.
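
    The multiplex idea, several samples titrated together and all concentrations recovered from one linear model, can be sketched with an ordinary least-squares fit; numpy's `lstsq` plays the role the abstract assigns to Excel's LINEST. All volumes and concentrations below are invented for illustration.

```python
import numpy as np

# Each row: volumes (mL) of three vinegar samples combined in one
# titration; b: millimoles of NaOH consumed by each mixture.
V = np.array([
    [10.0,  0.0,  5.0],
    [ 0.0, 10.0,  5.0],
    [ 5.0,  5.0, 10.0],
    [10.0, 10.0,  0.0],
])
true_c = np.array([0.80, 0.85, 0.90])   # mol/L acetic acid (assumed)
b = V @ true_c                           # idealized, noise-free data

# Least-squares estimate of all three concentrations at once:
# with more mixtures than unknowns, the fit also averages out noise.
c_hat, *_ = np.linalg.lstsq(V, b, rcond=None)
```

    With real, noisy titration data the same call returns the best-fit concentrations, and the residuals feed directly into the error-analysis lesson the abstract describes.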

  4. Experimental Study and Design of Balloon expandable Endovascular Stent Expansion

    Institute of Scientific and Technical Information of China (English)

    WANG Yue-xuan; YI Hong; NI Zhong-hua

    2005-01-01

    The application background of medical endovascular stents and an overview of experimental research on them are presented. Based on an analytical comparison of current research achievements, the life cycle of a medical endovascular stent is outlined and the characteristics of stent expansion within that life cycle are emphasized. An experimental scheme for in vitro stent expansion, based on machine vision technology in LabVIEW, is presented. The selection and use of the component devices and the design of the measurement program are described. A drug-loaded stent was expanded on a platform assembled from the selected equipment, and the experimental results were analyzed. The experimental scheme presented in the paper provides strong experimental support for the optimization of stent design and for computer simulation of stent expansion by finite element analysis.

  5. Experimental Methodology in English Teaching and Learning: Method Features, Validity Issues, and Embedded Experimental Design

    Science.gov (United States)

    Lee, Jang Ho

    2012-01-01

    Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…

  6. A Network-Based Algorithm for Clustering Multivariate Repeated Measures Data

    Science.gov (United States)

    Koslovsky, Matthew; Arellano, John; Schaefer, Caroline; Feiveson, Alan; Young, Millennia; Lee, Stuart

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Astronaut Corps is a unique occupational cohort for which vast amounts of measures data have been collected repeatedly in research or operational studies pre-, in-, and post-flight, as well as during multiple clinical care visits. In exploratory analyses aimed at generating hypotheses regarding physiological changes associated with spaceflight exposure, such as impaired vision, it is of interest to identify anomalies and trends across these expansive datasets. Multivariate clustering algorithms for repeated measures data may help parse the data to identify homogeneous groups of astronauts that have higher risks for a particular physiological change. However, available clustering methods may not be able to accommodate the complex data structures found in NASA data, since the methods often rely on strict model assumptions, require equally-spaced and balanced assessment times, cannot accommodate missing data or differing time scales across variables, and cannot process continuous and discrete data simultaneously. To fill this gap, we propose a network-based, multivariate clustering algorithm for repeated measures data that can be tailored to fit various research settings. Using simulated data, we demonstrate how our method can be used to identify patterns in complex data structures found in practice.
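
    A toy version of the network idea, connect subjects whose repeated-measures trajectories are similar and read clusters off the graph's connected components, might look like the sketch below. The NASA method is considerably more elaborate; this sketch assumes complete, equally spaced trajectories and a single distance threshold, both simplifications.

```python
import numpy as np

def cluster_by_network(trajectories, threshold):
    """Connect subjects whose trajectories lie within `threshold`
    (Euclidean distance), then label connected components."""
    X = np.asarray(trajectories, dtype=float)
    n = len(X)
    # Adjacency matrix from pairwise trajectory distances.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    adj = D <= threshold
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for start in range(n):
        if labels[start] >= 0:
            continue
        stack = [start]              # depth-first search over the graph
        while stack:
            i = stack.pop()
            if labels[i] >= 0:
                continue
            labels[i] = cluster
            stack.extend(j for j in range(n)
                         if adj[i, j] and labels[j] < 0)
        cluster += 1
    return labels

# Two clear groups: flat trajectories vs. steeply increasing ones.
traj = [[0, 0, 0], [0.1, 0, 0.1], [5, 6, 7], [5.2, 6.1, 7.1]]
labels = cluster_by_network(traj, threshold=1.0)
```

    Handling missing data, mixed continuous/discrete variables, and differing time scales, the gaps the abstract highlights, would require replacing the plain Euclidean distance with a more tailored dissimilarity.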

  7. Design and Experimental Study on Spinning Solid Rocket Motor

    Science.gov (United States)

    Xue, Heng; Jiang, Chunlan; Wang, Zaicheng

    A study of a spinning solid rocket motor (SRM) used as the power plant of the twice-throwing structure of an aerial submunition is introduced. This SRM, with a tangential multi-nozzle structure, consists of a combustion chamber, propellant charge, four tangential nozzles, an ignition device, etc. Grain design, structure design and prediction of interior ballistic performance are described, and the problems that mainly need to be considered in the design are analyzed comprehensively. Finally, to investigate the working performance of the SRM, static and dynamic tests were conducted to measure the pressure-time curve and the spin speed. Calculated values and experimental data were then compared and analyzed. The results indicate that the designed motor operates normally and that its interior ballistic performance is stable and meets demands. The experimental results provide guidance for the preliminary design of such SRMs.

  8. Characterizing the Experimental Procedure in Science Laboratories: A Preliminary Step towards Students Experimental Design

    Science.gov (United States)

    Girault, Isabelle; d'Ham, Cedric; Ney, Muriel; Sanchez, Eric; Wajeman, Claire

    2012-01-01

    Many studies have stressed students' lack of understanding of experiments in laboratories. Some researchers suggest that if students design all or parts of entire experiment, as part of an inquiry-based approach, it would overcome certain difficulties. It requires that a procedure be written for experimental design. The aim of this paper is to…

  9. Useful experimental designs and rank order statistics in educational research

    Directory of Open Access Journals (Sweden)

    Zendler, Andreas

    2013-01-01

    Full Text Available Experimental educational research is of great impact because it illuminates cause-and-effect relationships by accumulating empirical evidence. The present article does not propose new methods but brings three useful experimental designs as well as appropriate statistical procedures (rank order statistics to the attention of the reader to conduct educational experiments, even with small samples. By means of their systematic use combined with the process-product paradigm of experimental educational research, the influence of essential variables (teacher, context, and process variables in schools, universities, and other educational institutions can be investigated. The statistical procedures described in this article guarantee that small samples (e.g. a school class can be successfully used, and that product variables (e.g. knowledge, comprehension, transfer are only required to meet the criteria of an ordinal scale. The experimental designs and statistical procedures are exemplified by hypothetical data and detailed calculations.
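
    One rank-order procedure suited to the small-sample block designs the abstract describes is the Friedman test, which needs only ordinal-scale product variables. A minimal numpy implementation, assuming no tied scores within a block; the class data are hypothetical.

```python
import numpy as np

def friedman_chi2(data):
    """Friedman chi-square for a blocks x treatments table of
    ordinal scores (assumes no ties within a block)."""
    X = np.asarray(data, dtype=float)
    n, k = X.shape
    # Rank treatments within each block: 1 = smallest score.
    ranks = X.argsort(axis=1).argsort(axis=1) + 1
    R = ranks.sum(axis=0)                  # column rank sums
    return (12.0 / (n * k * (k + 1)) * (R ** 2).sum()
            - 3.0 * n * (k + 1))

# 6 students (blocks) scored under 3 teaching methods (treatments);
# method 3 is consistently ranked highest.
scores = np.array([
    [4, 6, 9],
    [3, 5, 8],
    [5, 4, 9],
    [2, 6, 7],
    [4, 7, 8],
    [3, 5, 6],
])
chi2 = friedman_chi2(scores)
```

    The statistic is compared against a chi-square distribution with k-1 degrees of freedom; here chi2 ≈ 10.33 exceeds the 5% critical value of 5.99 for 2 degrees of freedom, so the method effect would be declared significant even with only six blocks.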

  10. Reliability of single sample experimental designs: comfortable effort level.

    Science.gov (United States)

    Brown, W S; Morris, R J; DeGroot, T; Murry, T

    1998-12-01

    This study was designed to ascertain the intrasubject variability across multiple recording sessions, variability that is most often disregarded in reporting group mean data or unavailable because of single sample experimental designs. Intrasubject variability was assessed within and across several experimental sessions from measures of speaking fundamental frequency, vocal intensity, and reading rate. Three age groups of men and women (young, middle-aged, and elderly) repeated the vowel /a/, read a standard passage, and spoke extemporaneously during each experimental session. Statistical analyses were performed to assess each speaker's variability from his or her own mean, and that which consistently varied for any one speaking sample type, both within and across days. Results indicated that intrasubject variability was minimal, with approximately 4% of the data exhibiting significant variation across experimental sessions.

  11. Computer-Generated Experimental Designs for Irregular-Shaped Regions

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Nam K.; Piepel, Gregory F.

    2005-09-01

    This paper focuses on the construction of computer-generated designs on irregularly-shaped, constrained regions. Overviews of the Fedorov exchange algorithm (FEA) and other exchange algorithms for the construction of D-optimal designs are given. A faster implementation of the FEA is presented, which is referred to as fast-FEA (denoted FFEA). The FFEA was applied to construct D-optimal designs for several published examples with constrained experimental regions. Designs resulting from the FFEA are more D-efficient than published designs, and provide benchmarks for future comparisons of design construction algorithms. The construction of G-optimal designs for constrained regions is also discussed and illustrated with a published example.
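
    The exchange idea behind the FEA can be sketched in a few lines: start from a random design and greedily swap in candidate points whenever the swap increases det(X'X). This toy version is not the FFEA, and it uses an unconstrained interval rather than an irregular region, but it shows the mechanism; for a straight-line model on [-1, 1] it recovers the known D-optimal answer of replicated runs at the endpoints.

```python
import numpy as np

def exchange_d_optimal(candidates, n_runs, n_iter=50, seed=0):
    """Minimal Fedorov-style exchange: start from a random n_runs-point
    design drawn from `candidates` (with replacement) and greedily swap
    single points to increase det(X'X).  A sketch, not the full FEA."""
    rng = np.random.default_rng(seed)
    C = np.asarray(candidates, dtype=float)
    idx = list(rng.choice(len(C), size=n_runs, replace=True))
    best = np.linalg.det(C[idx].T @ C[idx])
    for _ in range(n_iter):
        improved = False
        for pos in range(n_runs):
            for j in range(len(C)):
                trial = idx.copy()
                trial[pos] = j
                d = np.linalg.det(C[trial].T @ C[trial])
                if d > best + 1e-12:   # accept any strict improvement
                    idx, best = trial, d
                    improved = True
        if not improved:
            break                      # single-swap local optimum
    return C[idx], best

# Candidate grid on [-1, 1] for a straight-line model (1, x).
x = np.linspace(-1, 1, 21)
cand = np.column_stack([np.ones_like(x), x])
design, det_val = exchange_d_optimal(cand, n_runs=4)
```

    For this model the exchange ends with two runs at each endpoint and det(X'X) = 16; on an irregular constrained region the same loop applies, only the candidate set changes.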

  12. Experimental Design and Power Calculation for RNA-seq Experiments.

    Science.gov (United States)

    Wu, Zhijin; Wu, Hao

    2016-01-01

    Power calculation is a critical component of RNA-seq experimental design. The flexibility of RNA-seq experiment and the wide dynamic range of transcription it measures make it an attractive technology for whole transcriptome analysis. These features, in addition to the high dimensionality of RNA-seq data, bring complexity in experimental design, making an analytical power calculation no longer realistic. In this chapter we review the major factors that influence the statistical power of detecting differential expression, and give examples of power assessment using the R package PROPER.
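
    When an analytical power calculation is unrealistic, simulation is the usual fallback. Below is a crude sketch, not the PROPER package, of simulation-based power for a single gene-like comparison of two groups of negative-binomial counts; the means, dispersion, and decision threshold are all hypothetical.

```python
import numpy as np

def welch_t(a, b):
    """Welch t statistic for two independent samples."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

def sim_power(n_per_group, fold_change, mean=100.0, disp=0.2,
              n_sim=500, crit=2.0, seed=1):
    """Fraction of simulated experiments in which a fold change in
    negative-binomial counts is detected (|t| > crit on log counts)."""
    rng = np.random.default_rng(seed)
    size = 1.0 / disp                 # NB shape from dispersion
    hits = 0
    for _ in range(n_sim):
        m1, m2 = mean, mean * fold_change
        g1 = rng.negative_binomial(size, size / (size + m1), n_per_group)
        g2 = rng.negative_binomial(size, size / (size + m2), n_per_group)
        t = welch_t(np.log1p(g1.astype(float)), np.log1p(g2.astype(float)))
        hits += abs(t) > crit
    return hits / n_sim
```

    Calling `sim_power(10, 2.0)` versus `sim_power(10, 1.0)` shows the expected gap between power under a twofold change and the false-positive rate under the null; sweeping `n_per_group` turns this into a sample-size calculation.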

  13. Optimal Experimental Design of Furan Shock Tube Kinetic Experiments

    KAUST Repository

    Kim, Daesang

    2015-01-07

    A Bayesian optimal experimental design methodology has been developed and applied to refine the rate coefficients of elementary reactions in Furan combustion. Furans are considered as potential renewable fuels. We focus on the Arrhenius rates of Furan + OH ↔ Furyl-2 + H2O and Furan + OH ↔ Furyl-3 + H2O, and rely on the OH consumption rate as the experimental observable. A polynomial chaos (PC) surrogate is first constructed using an adaptive pseudo-spectral projection algorithm. The PC surrogate is then exploited in conjunction with a fast estimation of the expected information gain in order to determine the optimal design in the space of initial temperatures and OH concentrations.

  14. Principles of Experimental Design for Big Data Analysis.

    Science.gov (United States)

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2017-08-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.

  15. Optimal Bayesian experimental design for contaminant transport parameter estimation

    CERN Document Server

    Tsilifis, Panagiotis; Hajali, Paris

    2015-01-01

    Experimental design is crucial for inference where limitations in the data collection procedure are present due to cost or other restrictions. Optimal experimental designs determine parameters that in some appropriate sense make the data the most informative possible. In a Bayesian setting this is translated to updating to the best possible posterior. Information theoretic arguments have led to the formation of the expected information gain as a design criterion. This can be evaluated mainly by Monte Carlo sampling and maximized by using stochastic approximation methods, both known for being computationally expensive tasks. We propose an alternative framework where a lower bound of the expected information gain is used as the design criterion. In addition to alleviating the computational burden, this also addresses issues concerning estimation bias. The problem of permeability inference in a large contaminated area is used to demonstrate the validity of our approach where we employ the massively parallel vers...
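
    The expected information gain that the abstract proposes to bound is typically evaluated by nested Monte Carlo sampling, which is exactly the expensive step its lower bound avoids. A sketch for a toy linear-Gaussian model, chosen because the analytic value 0.5*log(1 + d^2/sigma^2) is available for checking; the model and sample sizes are illustrative only.

```python
import numpy as np

def eig_nested_mc(d, sigma=1.0, n_outer=2000, n_inner=2000, seed=0):
    """Nested Monte Carlo estimate of expected information gain for
    the toy model theta ~ N(0,1), y = d*theta + N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=n_outer)
    y = d * theta + sigma * rng.normal(size=n_outer)
    # Log-likelihood of each y under the theta that generated it.
    ll = (-0.5 * np.log(2 * np.pi * sigma**2)
          - (y - d * theta)**2 / (2 * sigma**2))
    # Marginal log p(y), estimated with a fresh inner prior sample.
    theta_in = rng.normal(size=n_inner)
    like = np.exp(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (y[:, None] - d * theta_in[None, :])**2
                  / (2 * sigma**2))
    log_marg = np.log(like.mean(axis=1))
    # EIG = E_y,theta[ log p(y|theta) - log p(y) ].
    return (ll - log_marg).mean()

# Analytic value for this model, for comparison.
est = eig_nested_mc(d=2.0)
exact = 0.5 * np.log(1 + 4.0)
```

    The cost is n_outer * n_inner likelihood evaluations per design, which is why cheaper bounds of the kind the abstract advocates are attractive for expensive forward models.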

  16. Graphic Methods for Interpreting Longitudinal Dyadic Patterns From Repeated-Measures Actor-Partner Interdependence Models

    DEFF Research Database (Denmark)

    Perry, Nicholas; Baucom, Katherine; Bourne, Stacia

    2017-01-01

    Researchers commonly use repeated-measures actor–partner interdependence models (RM-APIM) to understand how romantic partners change in relation to one another over time. However, traditional interpretations of the results of these models do not fully or correctly capture the dyadic temporal patterns estimated in RM-APIM. Interpretation of results from these models largely focuses on the meaning of single-parameter estimates in isolation from all the others. However, considering individual coefficients separately impedes the understanding of how these associations combine to produce... to improve the understanding and presentation of dyadic patterns of association described by standard RM-APIMs. The current article briefly reviews the conceptual foundations of RM-APIMs, demonstrates how change-as-outcome RM-APIMs and VFDs can aid interpretation of standard RM-APIMs, and provides a tutorial...

  17. Inmate responses to prison-based drug treatment: a repeated measures analysis.

    Science.gov (United States)

    Welsh, Wayne N

    2010-06-01

    Using a sample of 347 prison inmates and general linear modeling (GLM) repeated measures analyses, this paper examined during-treatment responses (e.g., changes in psychological and social functioning) to prison-based TC drug treatment. These effects have rarely been examined in previous studies, and never with a fully multivariate model accounting for within-subjects effects (changes over time), between-subjects effects (e.g., levels of risk and motivation), and within/between-subjects interactions (time × risk × motivation). The results provide evidence of positive inmate change in response to prison TC treatment, but the patterns of results varied depending upon: (a) specific indicators of psychological and social functioning, motivation, and treatment process; (b) the time periods examined (1, 6, and 12 months during treatment); and (c) baseline levels of risk and motivation. Significant interactions between time and type of inmate suggest important new directions for research, theory, and practice in offender-based substance abuse treatment.

  18. Bayesian latent variable models for hierarchical clustered count outcomes with repeated measures in microbiome studies.

    Science.gov (United States)

    Xu, Lizhen; Paterson, Andrew D; Xu, Wei

    2017-04-01

    Motivated by the multivariate nature of microbiome data with hierarchical taxonomic clusters, counts that are often skewed and zero inflated, and repeated measures, we propose a Bayesian latent variable methodology to jointly model multiple operational taxonomic units within a single taxonomic cluster. This novel method can incorporate both negative binomial and zero-inflated negative binomial responses, and can account for serial and familial correlations. We develop a Markov chain Monte Carlo algorithm that is built on a data augmentation scheme using Pólya-Gamma random variables. Hierarchical centering and parameter expansion techniques are also used to improve the convergence of the Markov chain. We evaluate the performance of our proposed method through extensive simulations. We also apply our method to a human microbiome study.

  19. Alcohol intake and colorectal cancer: a comparison of approaches for including repeated measures of alcohol consumption

    DEFF Research Database (Denmark)

    Thygesen, Lau Caspar; Wu, Kana; Grønbaek, Morten

    2008-01-01

    BACKGROUND: In numerous studies, alcohol intake has been found to be positively associated with colorectal cancer risk. However, the majority of studies included only one exposure measurement, which may bias the results if long-term intake is relevant. METHODS: We compared different approaches for including repeated measures of alcohol intake among 47,432 US men enrolled in the Health Professionals Follow-up Study. Questionnaires including questions on alcohol intake had been completed in 1986, 1990, 1994, and 1998. The outcome was incident colorectal cancer during follow-up from 1986 to 2002. RESULTS: During follow-up, 868 members of the cohort experienced colorectal cancer. Baseline, updated, and cumulative average alcohol intakes were positively associated with colorectal cancer, with only minor differences among the approaches. These results support moderately increased risk for intake >30 g...
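The three exposure summaries compared in the abstract (baseline, simply updated, and cumulative average intake) can be illustrated with a toy series of questionnaire values; the numbers are hypothetical:

```python
import numpy as np

# Repeated questionnaire measurements of alcohol intake (g/day) for one
# hypothetical participant at the 1986, 1990, 1994, and 1998 waves.
intake = np.array([25.0, 32.0, 28.0, 35.0])

baseline = intake[0]          # single 1986 value, used for the whole follow-up
updated = intake              # most recent value carried into each follow-up period
# running mean of all measurements available up to each wave
cumulative_avg = np.cumsum(intake) / np.arange(1, len(intake) + 1)
```

In a survival analysis, `baseline` would enter as a fixed covariate while `updated` and `cumulative_avg` would enter as time-varying covariates, one value per follow-up period.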

  20. Causal inference in longitudinal comparative effectiveness studies with repeated measures of a continuous intermediate variable.

    Science.gov (United States)

    Wang, Chen-Pin; Jo, Booil; Brown, C Hendricks

    2014-09-10

    We propose a principal stratification approach to assess causal effects in nonrandomized longitudinal comparative effectiveness studies with a binary endpoint outcome and repeated measures of a continuous intermediate variable. Our method is an extension of the principal stratification approach originally proposed for the longitudinal randomized study "Prevention of Suicide in Primary Care Elderly: Collaborative Trial" to assess the treatment effect on the continuous Hamilton depression score adjusting for the heterogeneity of repeatedly measured binary compliance status. Our motivation for this work comes from a comparison of the effect of two glucose-lowering medications on a clinical cohort of patients with type 2 diabetes. Here, we consider a causal inference problem assessing how well the two medications work relative to one another on two binary endpoint outcomes: cardiovascular disease-related hospitalization and all-cause mortality. Clinically, these glucose-lowering medications can have differential effects on the intermediate outcome, glucose level over time. Ultimately, we want to compare medication effects on the endpoint outcomes among individuals in the same glucose trajectory stratum while accounting for the heterogeneity in baseline covariates (i.e., to obtain 'principal effects' on the endpoint outcomes). The proposed method involves a three-step model estimation procedure. Step 1 identifies principal strata associated with the intermediate variable using hybrid growth mixture modeling analyses. Step 2 obtains the stratum membership using the pseudoclass technique and derives propensity scores for treatment assignment. Step 3 obtains the stratum-specific treatment effect on the endpoint outcome weighted by inverse propensity probabilities derived from Step 2.
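Step 3's inverse-propensity weighting can be sketched as follows, assuming propensity scores are already available from Step 2; the function name and toy data are illustrative:

```python
import numpy as np

def ipw_effect(y, treated, propensity):
    """Hajek-style inverse-propensity-weighted difference in means
    between treated and control units within one principal stratum."""
    y, t, e = (np.asarray(a, dtype=float) for a in (y, treated, propensity))
    w1, w0 = t / e, (1 - t) / (1 - e)      # weights for treated / control units
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

# Toy data: binary endpoint outcomes, treatment indicator,
# and propensity scores produced by a hypothetical Step 2
effect = ipw_effect([3, 3, 1, 1], [1, 1, 0, 0], [0.5, 0.5, 0.5, 0.5])
```

Applied separately within each glucose-trajectory stratum, this kind of weighted contrast is what the abstract calls a 'principal effect'.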

  1. ±25ppm repeatable measurement of trapezoidal pulses with 5MHz bandwidth

    CERN Document Server

    AUTHOR|(SzGeCERN)712364; Arpaia, Pasquale; Cerqueira Bastos, Miguel; Martino, Michele

    2015-01-01

    High-quality measurements of pulses are nowadays widely used in fields such as radars, pulsed lasers, electromagnetic pulse generators, and particle accelerators. Whilst the literature is mainly focused on fast systems for the nanosecond regime with relaxed metrological requirements, in this paper the high-performance measurement of slower pulses in the microsecond regime is addressed. In particular, an experimental proof-of-concept is given for a 15 MS/s, ±25 ppm repeatable acquisition system to characterize the flat-top of 3 ms rise-time trapezoidal pulses. The system exploits a 5 MHz bandwidth circuit for analogue signal processing based on the concept of flat-top removal. The requirements, as well as the conceptual and physical designs, are illustrated. Simulation results aimed at assessing the circuit performance are also presented. Finally, an experimental case study on the characterization of a pulsed power supply for the klystron modulators of the Compact Linear Collider (CLIC) under study at CERN is reported. In ...

  2. Applications of Chemiluminescence in the Teaching of Experimental Design

    Science.gov (United States)

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…

  3. Single-Subject Experimental Design for Evidence-Based Practice

    Science.gov (United States)

    Byiers, Breanne J.; Reichle, Joe; Symons, Frank J.

    2012-01-01

    Purpose: Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method: The authors…

  4. Experimental design of natural and accelerated bone and wood ageing

    DEFF Research Database (Denmark)

    Facorellis, Y.; Pournou, A.; Richter, Jane;

    2015-01-01

    This paper presents the experimental design for natural and accelerated ageing of bone and wood samples found in museum conditions that was conceived as part of the INVENVORG (Thales Research Funding Program – NRSF) investigating the effects of the environmental factors on natural organic materials....

  5. Using Propensity Score Methods to Approximate Factorial Experimental Designs

    Science.gov (United States)

    Dong, Nianbo

    2011-01-01

    The purpose of this study is through Monte Carlo simulation to compare several propensity score methods in approximating factorial experimental design and identify best approaches in reducing bias and mean square error of parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…

  6. Model selection in systems biology depends on experimental design.

    Science.gov (United States)

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.

  7. Applications of Chemiluminescence in the Teaching of Experimental Design

    Science.gov (United States)

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…

  8. Statistical Experimental Design for Quantitative Atomic Resolution Transmission Electron Microscopy

    NARCIS (Netherlands)

    Van Aert, S.

    2003-01-01

    Statistical experimental design is applied to set up quantitative atomic resolution transmission electron microscopy experiments. In such experiments, observations of the atomic structure of the object under study are always subject to spontaneous fluctuations. As a result of these fluctuations, the

  9. Using experimental designs to understand the development of peer relations

    NARCIS (Netherlands)

    Orobio De Castro, Bram; Thomaes, Sander; Reijntjes, Albert

    2015-01-01

    In the past decades, tremendous advances have been made in our understanding of peer relations. A description is emerging of how peer relations develop. Surprisingly, though, we know less about why peer relations develop as they do. Experimental designs provide opportunities to learn about causal pr

  10. Taguchi method of experimental design in materials education

    Science.gov (United States)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
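The orthogonal arrays underlying the Taguchi Method can be illustrated with the smallest one, an L4 array for three two-level factors; the response values below are hypothetical:

```python
import numpy as np

# L4 orthogonal array: 3 two-level factors in only 4 runs (levels coded -1/+1).
# Every pair of columns contains each level combination equally often.
L4 = np.array([[-1, -1, -1],
               [-1,  1,  1],
               [ 1, -1,  1],
               [ 1,  1, -1]])

# Hypothetical responses from the 4 trials
y = np.array([10.0, 14.0, 13.0, 17.0])

# Main effect of each factor: mean response at +1 minus mean response at -1
effects = np.array([y[L4[:, j] == 1].mean() - y[L4[:, j] == -1].mean()
                    for j in range(3)])
```

With all three columns assigned to factors there are zero degrees of freedom left, and any two-factor interaction is aliased with the third column, which is exactly the limitation noted in the abstract.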

  11. Optimal Experimental Design for Large-Scale Bayesian Inverse Problems

    KAUST Repository

    Ghattas, Omar

    2014-01-06

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions. The control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP based approach is used to estimate the expected information gain in the proposed experiments, and to select the best experimental set-ups yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and extending experimental design methodology to the cases where the control parameters are noisy.

  12. Design and Experimental Research on a New Pipe Rupture Valve

    Institute of Scientific and Technical Information of China (English)

    HU Guo-liang; XU Bing; YANG Hua-yong; ZHANG Yi-ding

    2006-01-01

    A new pipe rupture valve for hydraulic elevators is designed. Mathematical models established for the hydraulic elevator system are used in numerical simulations in the Simulink environment of Matlab. The effects of different viscous damping diameters, inlet pressures of the pipe rupture valve, and elevator loads on the hydraulic elevator system's dynamic performance are analyzed. Experimental research is also carried out using a hydraulic elevator experiment rig. The numerical simulations accord with the experimental results in general. Dynamic performance indexes are assessed against the EN81-2 standard. The results show that the newly designed pipe rupture valve meets the design requirements for hydraulic elevators.

  13. Computational design and experimental validation of new thermal barrier systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin [Louisiana State Univ., Baton Rouge, LA (United States)

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist the Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations on the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the development of the computational simulation method. We have performed ab initio density functional theory (DFT) and molecular dynamics simulations for screening the top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validation, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  14. Prediction Accuracy in Multivariate Repeated-Measures Bayesian Forecasting Models with Examples Drawn from Research on Sleep and Circadian Rhythms

    Directory of Open Access Journals (Sweden)

    Clark Kogan

    2016-01-01

    Full Text Available In study designs with repeated measures for multiple subjects, population models capturing within- and between-subjects variances enable efficient individualized prediction of outcome measures (response variables) by incorporating individuals' response data through Bayesian forecasting. When measurement constraints preclude reasonable levels of prediction accuracy, additional (secondary) response variables measured alongside the primary response may help to increase prediction accuracy. We investigate this for the case of substantial between-subjects correlation between primary and secondary response variables, assuming negligible within-subjects correlation. We show how to determine the accuracy of primary response predictions as a function of secondary response observations. Given measurement costs for primary and secondary variables, we determine the number of observations that produces, with minimal cost, a fixed average prediction accuracy for a model of subject means. We illustrate this with estimation of subject-specific sleep parameters using polysomnography and wrist actigraphy. We also consider prediction accuracy in an example time-dependent, linear model and derive equations for the optimal timing of measurements to achieve, on average, the best prediction accuracy. Finally, we examine an example involving a circadian rhythm model and show numerically that secondary variables can improve individualized predictions in this time-dependent nonlinear model as well.

  15. Prediction Accuracy in Multivariate Repeated-Measures Bayesian Forecasting Models with Examples Drawn from Research on Sleep and Circadian Rhythms.

    Science.gov (United States)

    Kogan, Clark; Kalachev, Leonid; Van Dongen, Hans P A

    2016-01-01

    In study designs with repeated measures for multiple subjects, population models capturing within- and between-subjects variances enable efficient individualized prediction of outcome measures (response variables) by incorporating individuals' response data through Bayesian forecasting. When measurement constraints preclude reasonable levels of prediction accuracy, additional (secondary) response variables measured alongside the primary response may help to increase prediction accuracy. We investigate this for the case of substantial between-subjects correlation between primary and secondary response variables, assuming negligible within-subjects correlation. We show how to determine the accuracy of primary response predictions as a function of secondary response observations. Given measurement costs for primary and secondary variables, we determine the number of observations that produces, with minimal cost, a fixed average prediction accuracy for a model of subject means. We illustrate this with estimation of subject-specific sleep parameters using polysomnography and wrist actigraphy. We also consider prediction accuracy in an example time-dependent, linear model and derive equations for the optimal timing of measurements to achieve, on average, the best prediction accuracy. Finally, we examine an example involving a circadian rhythm model and show numerically that secondary variables can improve individualized predictions in this time-dependent nonlinear model as well.
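The within/between-subjects variance idea behind Bayesian forecasting of an individual's mean can be sketched with a normal-normal model, where the posterior mean shrinks the subject's sample mean toward the population mean; this is a simplified illustration of the principle, not the paper's full model:

```python
import numpy as np

def posterior_subject_mean(y_subj, pop_mean, between_var, within_var):
    """Posterior mean and variance of one subject's true mean under a
    normal-normal model (population prior + that subject's observations)."""
    y_subj = np.asarray(y_subj, dtype=float)
    n = y_subj.size
    precision = 1.0 / between_var + n / within_var          # prior + data precision
    post_var = 1.0 / precision
    post_mean = post_var * (pop_mean / between_var + n * y_subj.mean() / within_var)
    return post_mean, post_var

# Two observations of 2.0 for a subject, population mean 0, unit variances
m, v = posterior_subject_mean([2.0, 2.0], pop_mean=0.0, between_var=1.0, within_var=1.0)
```

As more observations accrue, the data precision `n / within_var` dominates and the prediction moves from the population mean toward the subject's own sample mean, which is the mechanism that makes cheap secondary measurements useful when primary ones are costly.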

  16. Comparison of intraclass correlation coefficient estimates and standard errors between using cross-sectional and repeated measurement data: the Safety Check cluster randomized trial.

    Science.gov (United States)

    Ip, Edward H; Wasserman, Richard; Barkin, Shari

    2011-03-01

    Designing cluster randomized trials in clinical studies often requires accurate estimates of the intraclass correlation coefficient (ICC), which quantifies the strength of correlation between units, such as participants, within a cluster, such as a practice. Published ICC estimates, even when available, often suffer from the problem of wide confidence intervals. Using data from a national, randomized, controlled study concerning violence prevention for children--the Safety Check--we compare the ICC values derived from two approaches: using only baseline data and using both baseline and follow-up data. Using a variance component decomposition approach, the latter method allows flexibility in handling complex data sets. For example, it allows for shifts in the outcome variable over time and for an unbalanced cluster design. Furthermore, we evaluate the large-sample formula for ICC estimates and standard errors using the bootstrap method. Our findings suggest that ICC estimates range from 0.012 to 0.11 for providers within practice and from 0.018 to 0.11 for families within provider. The estimates derived from the baseline-only and repeated-measurements approaches agree quite well except in cases in which variation over repeated measurements is large. The reductions in the widths of ICC confidence limits from using repeated measurements over baseline only are, respectively, 62% and 42% at the practice and provider levels. The contribution of this paper therefore includes two elements: a methodology for improving the accuracy of ICC estimates, and the reporting of such quantities for pediatric and other researchers interested in designing cluster randomized trials similar to the current study.
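For a balanced one-way layout, the ICC can be computed from ANOVA mean squares as (MSB − MSW) / (MSB + (k − 1) MSW), where k is the cluster size. A minimal sketch, assuming balanced clusters (the paper's variance-decomposition approach is more general and handles unbalanced designs and repeated measures):

```python
import numpy as np

def icc_oneway(groups):
    """One-way random-effects ICC(1) from balanced grouped data:
    groups is a list of equal-length clusters of observations."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = len(groups)                 # number of clusters
    k = len(groups[0])              # observations per cluster (balanced)
    grand = np.mean(np.concatenate(groups))
    msb = k * sum((g.mean() - grand) ** 2 for g in groups) / (n - 1)   # between-cluster MS
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n * (k - 1))  # within-cluster MS
    return (msb - msw) / (msb + (k - 1) * msw)
```

When all variation is between clusters the ICC approaches 1; when clusters are internally heterogeneous it approaches 0 (and the moment estimator can even go slightly negative).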

  17. Convergence in parameters and predictions using computational experimental design.

    Science.gov (United States)

    Hagen, David R; White, Jacob K; Tidor, Bruce

    2013-08-06

    Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.

  18. Lab-Scale Fiber Spinning Experimental Design Cost Comparison

    Directory of Open Access Journals (Sweden)

    Jeffrey C. Moreland

    2010-03-01

    Full Text Available Many statistical experimental designs are too costly or require too much raw material to be feasible for lab-scale fiber spinning experiments. In this study a four-factor response surface design is presented to study the fiber spinning process in detail at the lab scale. The time, cost, and amount of raw material required to execute the proposed design are compared to the typical completely randomized 2^4 factorial design used in fiber spinning experiments and also to a standard four-factor response surface design. Sample fiber data as well as analysis from a typical statistical software package is provided to further demonstrate the differences between each design. By designating some treatment factors in the design as hard-to-change, split-plotting is used to reduce the time, cost, and amount of raw material required to complete the experiment. The proposed split-plot design is faster and less expensive than a typical factorial design and has the advantage of fitting a more complex second-order model to the system. When compared to a standard response surface design, the proposed split-plot design provides the same second-order modeling capabilities but reduces the cost of the experiment by 53%, the total time by 36%, and the amount of polymer required by 24%. Thus, a split-plot response surface design based on hard-to-change factors is recommended in lab-scale spinning.

  19. Experimental design for parameter estimation of gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Bernhard Steiert

    Full Text Available Systems biology aims to build quantitative models to address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded the best performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application on three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines.

  20. Optimizing an experimental design for an electromagnetic experiment

    Science.gov (United States)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of one given design via the objective function) and stochastic optimization methods like genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique to design an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well distributed observations have the potential to resolve the target. This simple test also points out the importance of a well chosen objective function. Finally, in the context of CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
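The role of an objective function in survey design can be illustrated with the common D-optimality criterion, which picks the observations maximizing log det(JᵀJ) for a linearized model. This brute-force sketch stands in for the paper's multi-objective genetic algorithm, and all names are illustrative:

```python
import numpy as np
from itertools import combinations

def best_design(J, n_obs):
    """Exhaustively pick the n_obs rows of the sensitivity matrix J that
    maximize the D-optimality criterion log det(J_s^T J_s)."""
    best, best_val = None, -np.inf
    for rows in combinations(range(J.shape[0]), n_obs):
        Js = J[list(rows)]
        sign, logdet = np.linalg.slogdet(Js.T @ Js)
        val = logdet if sign > 0 else -np.inf   # singular designs are disallowed
        if val > best_val:
            best, best_val = rows, val
    return best, best_val

# Candidate observation points and the sensitivities of a two-parameter
# linear model y = a + b*x (a stand-in for the linearized forward model)
x = np.linspace(0.0, 1.0, 6)
J = np.column_stack([np.ones_like(x), x])
design, val = best_design(J, 2)
```

For this toy model the criterion selects the two extreme observation points, mirroring the abstract's finding that a small number of well distributed observations can resolve the target; a genetic algorithm replaces the exhaustive loop when the design space is large, and a multi-objective version keeps a front of designs trading off several such criteria.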

  1. Design and experimental results for the S809 airfoil

    Energy Technology Data Exchange (ETDEWEB)

    Somers, D M [Airfoils, Inc., State College, PA (United States)

    1997-01-01

    A 21-percent-thick, laminar-flow airfoil, the S809, for horizontal-axis wind-turbine applications, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  2. Experimental design unified concepts, practical applications, and computer implementation

    CERN Document Server

    Bowerman, Bruce L

    2014-01-01

    This book is a concise and innovative book that gives a complete presentation of the design and analysis of experiments in approximately one half the space of competing books. With only the modest prerequisite of a basic (non-calculus) statistics course, this text is appropriate for the widest possible audience. Two procedures are generally used to analyze experimental design data: analysis of variance (ANOVA) and regression analysis. Because ANOVA is more intuitive, this book devotes most of its first three chapters to showing how to use ANOVA to analyze balanced (equal sample size) experiment

  3. Optimal active vibration absorber - Design and experimental results

    Science.gov (United States)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1993-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  4. Optimal active vibration absorber: Design and experimental results

    Science.gov (United States)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  5. TIBER: Tokamak Ignition/Burn Experimental Research. Final design report

    Energy Technology Data Exchange (ETDEWEB)

    Henning, C.D.; Logan, B.G.; Barr, W.L.; Bulmer, R.H.; Doggett, J.N.; Johnson, B.M.; Lee, J.D.; Hoard, R.W.; Miller, J.R.; Slack, D.S.

    1985-11-01

    The Tokamak Ignition/Burn Experimental Research (TIBER) device is the smallest superconducting tokamak designed to date. In the design, plasma shaping is used to achieve a high plasma beta. Neutron shielding is minimized to achieve the desired small device size, but the superconducting magnets must be shielded sufficiently to reduce the neutron heat load and the gamma-ray dose to various components of the device. Specifications of the plasma-shaping coil, the shielding, the cooling requirements, and the heating modes are given. 61 refs., 92 figs., 30 tabs. (WRF)

  6. Fast Bayesian optimal experimental design and its applications

    KAUST Repository

    Long, Quan

    2015-01-07

    We summarize our Laplace method and multilevel method for accelerating the computation of the expected information gain in Bayesian optimal experimental design (OED). The Laplace method is a widely used technique for approximating an integral in statistics. We analyze this method in the context of optimal Bayesian experimental design and extend it from the classical scenario, where a single dominant mode of the parameters can be completely determined by the experiment, to scenarios where a non-informative parametric manifold exists. We show that by carrying out this approximation the estimation of the expected Kullback-Leibler divergence can be significantly accelerated. While the Laplace method requires a concentration of measure, the multilevel Monte Carlo method can be used to tackle the problem when there is a lack of measure concentration. We show some initial results on this approach. The developed methodologies have been applied to various sensor deployment problems, e.g., impedance tomography and seismic source inversion.

  7. Contract design and insurance fraud: An experimental investigation

    OpenAIRE

    Lammers, Frauke; Schiller, Jörg

    2010-01-01

    This paper investigates the impact of insurance contract design on the behavior of filing fraudulent claims in an experimental setup. We test how fraud behavior varies for insurance contracts with full coverage, a straight deductible or variable premiums (bonus-malus contract). In our experiment, filing fraudulent claims is a dominant strategy for selfish participants, with no psychological costs of committing fraud. While some people always commit fraud, a substantial share of people only oc...

  8. Experimental framework for autonomous fast ships' control design

    OpenAIRE

    Recas Piorno, Joaquín; Esteban San Román, Segundo; Girón Sierra, José María; Cruz García, Jesús Manuel de la

    2005-01-01

    The research on seakeeping control of fast ships requires difficult experiments for modeling and control design. To alleviate the ship motion, certain active appendages are added, such as moving flaps, a T-foil, and fins. The motion of the appendages must be optimized to counteract each encountered wave. During our first research steps, a scaled-down ship, with scaled appendages, has been used in a towing tank facility. The scaled ship is towed at fixed speeds of experimental interest, for instance at t...

  9. Gradient-based stochastic optimization methods in Bayesian experimental design

    OpenAIRE

    2012-01-01

    Optimal experimental design (OED) seeks experiments expected to yield the most useful data for some purpose. In practical circumstances where experiments are time-consuming or resource-intensive, OED can yield enormous savings. We pursue OED for nonlinear systems from a Bayesian perspective, with the goal of choosing experiments that are optimal for parameter inference. Our objective in this context is the expected information gain in model parameters, which in general can only be estimated u...

  10. Parameter Estimation and Experimental Design in Groundwater Modeling

    Institute of Scientific and Technical Information of China (English)

    SUN Ne-zheng

    2004-01-01

    This paper reviews the latest developments on parameter estimation and experimental design in the field of groundwater modeling. Special considerations are given when the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, which is based on the quantitative relationships among the complexity of model structure, the identifiability of parameter, the sufficiency of data, and the reliability of model application.

  11. Microcomputer-based tests for repeated-measures: Metric properties and predictive validities

    Science.gov (United States)

    Kennedy, Robert S.; Baltzley, Dennis R.; Dunlap, William P.; Wilkes, Robert L.; Kuntz, Lois-Ann

    1989-01-01

    A menu of psychomotor and mental acuity tests was refined. Field applications of such a battery are, for example, a study of the effects of toxic agents or exotic environments on performance readiness, or the determination of fitness for duty. The key requirement of these tasks is that they be suitable for repeated-measures applications, and so questions of stability and reliability are a continuing, central focus of this work. After the initial (practice) session, seven replications of 14 microcomputer-based performance tests (32 measures) were completed by 37 subjects. Each test in the battery had previously been shown to stabilize in less than five 90-second administrations and to possess retest reliabilities greater than r = 0.707 for three minutes of testing. However, the tests had never been administered together as a battery, nor had they been self-administered. In order to provide predictive validity for intelligence measurement, the Wechsler Adult Intelligence Scale-Revised and the Wonderlic Personnel Test were obtained on the same subjects.

  12. Impact of repeated measures and sample selection on genome-wide association studies of fasting glucose

    Science.gov (United States)

    Rasmussen-Torvik, Laura J.; Alonso, Alvaro; Li, Man; Kao, Wen; Köttgen, Anna; Yan, Yuer; Couper, David; Boerwinkle, Eric; Bielinski, Suzette J.; Pankow, James S.

    2010-01-01

    Although GWAS have been performed in longitudinal studies, most used only a single trait measure. GWAS of fasting glucose have generally included only normoglycemic individuals. We examined the impact of both repeated measures and sample selection on GWAS in ARIC, a study which obtained four longitudinal measures of fasting glucose and included both individuals with and without prevalent diabetes. The sample included Caucasians and the Affymetrix 6.0 chip was used for genotyping. Sample sizes for GWAS analyses ranged from 8372 (first study visit) to 5782 (average fasting glucose). Candidate SNP analyses with SNPs identified through fasting glucose or diabetes GWAS were conducted in 9133 individuals, including 761 with prevalent diabetes. For a constant sample size, smaller p-values were obtained for the average measure of fasting glucose compared to values at any single visit, and two additional significant GWAS signals were detected. For four candidate SNPs (rs780094, rs10830963, rs7903146, and rs4607517), the strength of association between genotype and glucose was significantly (p-interaction fasting glucose candidate SNPs (rs780094, rs10830963, rs560887, rs4607517, rs13266634) the association with measured fasting glucose was more significant in the smaller sample without prevalent diabetes than in the larger combined sample of those with and without diabetes. This analysis demonstrates the potential utility of averaging trait values in GWAS studies and explores the advantage of using only individuals without prevalent diabetes in GWAS of fasting glucose. PMID:20839289
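    The benefit of averaging repeated measures is straightforward to reproduce in a toy simulation (entirely synthetic numbers, not the ARIC data): averaging visits shrinks the measurement noise, so the genotype-trait correlation strengthens at a fixed sample size.

```python
import numpy as np

rng = np.random.default_rng(0)
n, visits = 5000, 4
geno = rng.binomial(2, 0.3, n)                       # SNP dosage, MAF = 0.3
latent = 5.0 + 0.15 * geno + rng.normal(0, 0.3, n)   # person-level trait
obs = latent[:, None] + rng.normal(0, 0.8, (n, visits))  # noisy visit values

def abs_corr(y):
    """Absolute genotype-phenotype correlation."""
    return abs(np.corrcoef(geno, y)[0, 1])

r_single = abs_corr(obs[:, 0])      # phenotype from the first visit only
r_avg = abs_corr(obs.mean(axis=1))  # phenotype averaged over four visits
print(r_single, r_avg)  # the averaged phenotype correlates more strongly
```

    All effect sizes and noise levels here are hypothetical; the point is only that the averaged measure yields a stronger association signal, as the abstract reports.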

  13. Analyzing repeated measures semi-continuous data, with application to an alcohol dependence study.

    Science.gov (United States)

    Liu, Lei; Strawderman, Robert L; Johnson, Bankole A; O'Quigley, John M

    2016-02-01

    Two-part random effects models (Olsen and Schafer,(1) Tooze et al.(2)) have been applied to repeated measures of semi-continuous data, characterized by a mixture of a substantial proportion of zero values and a skewed distribution of positive values. In the original formulation of this model, the natural logarithm of the positive values is assumed to follow a normal distribution with a constant variance parameter. In this article, we review and consider three extensions of this model, allowing the positive values to follow (a) a generalized gamma distribution, (b) a log-skew-normal distribution, and (c) a normal distribution after the Box-Cox transformation. We allow for the possibility of heteroscedasticity. Maximum likelihood estimation is shown to be conveniently implemented in SAS Proc NLMIXED. The performance of the methods is compared through applications to daily drinking records in a secondary data analysis from a randomized controlled trial of topiramate for alcohol dependence treatment. We find that all three models provide a significantly better fit than the log-normal model, and there exists strong evidence for heteroscedasticity. We also compare the three models by the likelihood ratio tests for non-nested hypotheses (Vuong(3)). The results suggest that the generalized gamma distribution provides the best fit, though no statistically significant differences are found in pairwise model comparisons.
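    A minimal two-part sketch on synthetic data (without the random effects, heteroscedasticity, or SAS Proc NLMIXED machinery the paper uses) separates the zero process from a log-normal model for the positive values:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic daily drinking records: ~40% zero days, log-normal positives
n = 1000
drinking_day = rng.random(n) < 0.6
drinks = np.where(drinking_day, rng.lognormal(mean=1.0, sigma=0.8, size=n), 0.0)

# Part 1 (binary): probability of a nonzero outcome
p_pos = (drinks > 0).mean()

# Part 2 (continuous): log-normal fit to the positive values
logs = np.log(drinks[drinks > 0])
mu_hat, sigma_hat = logs.mean(), logs.std(ddof=1)

# Marginal mean of the semi-continuous outcome combines both parts
marginal_mean = p_pos * np.exp(mu_hat + sigma_hat ** 2 / 2)
print(p_pos, mu_hat, sigma_hat, marginal_mean)
```

    The extensions reviewed in the paper replace the log-normal in part 2 with a generalized gamma, a log-skew-normal, or a Box-Cox-transformed normal distribution.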

  14. Preliminary evaluation of a micro-based repeated measures testing system

    Science.gov (United States)

    Kennedy, Robert S.; Wilkes, Robert L.; Lane, Norman E.

    1985-01-01

    A need exists for an automated performance test system to study the effects of various treatments which are of interest to the aerospace medical community, i.e., the effects of drugs and environmental stress. The ethics and pragmatics of such assessment demand that repeated measures in small groups of subjects be the customary research paradigm. Test stability, reliability-efficiency, and factor structure take on extreme significance; in a program of study by the U.S. Navy, 80 percent of 150 tests failed to meet minimum metric requirements. The best tests are being programmed on a portable microprocessor and administered along with tests in their original formats in order to examine their metric properties in the computerized mode. Twenty subjects have been tested over four replications on a 6.0-minute computerized battery (six tests), which was compared with five paper-and-pencil marker tests. All tests achieved stability within the four test sessions, reliability-efficiencies were high (r greater than .707 for three minutes of testing), and the computerized tests were largely comparable to the paper-and-pencil versions from which they were derived. This computerized performance test system is portable, inexpensive, and rugged.

  15. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Issaid, Chaouki Ben

    2015-01-07

    Experimental design is very important since experiments are often resource-intensive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high-dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance by using an optimal sample distribution among the different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measure required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by the one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation, and direct double-loop Monte Carlo.
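    For a linear-Gaussian toy model the expected information gain has a closed form, which makes the direct double-loop Monte Carlo estimator mentioned above easy to check. This is a sketch under our own notation; the multilevel variance-reduction layer is not shown.

```python
import numpy as np

def eig_double_loop(d, tau=1.0, sigma=0.5, N=2000, M=2000, seed=0):
    """Double-loop MC estimate of the expected information gain for
    y = d*theta + eps, theta ~ N(0, tau^2), eps ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, tau, N)            # outer samples from the prior
    y = d * theta + rng.normal(0.0, sigma, N)  # one synthetic datum each

    def loglik(yv, th):
        return -0.5 * ((yv - d * th) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

    ll_outer = loglik(y, theta)
    theta_inner = rng.normal(0.0, tau, M)      # inner samples for the evidence
    # log p(y_i) ~ log mean_j p(y_i | theta_j): the nested inner integral
    log_evidence = np.log(np.exp(loglik(y[:, None], theta_inner[None, :])).mean(axis=1))
    return (ll_outer - log_evidence).mean()

d = 1.0
estimate = eig_double_loop(d)
exact = 0.5 * np.log(1.0 + d ** 2 * 1.0 ** 2 / 0.5 ** 2)  # closed-form EIG
print(estimate, exact)
```

    The N x M inner evaluation is exactly the cost the multilevel method attacks: the closed-form value makes the estimator's bias and variance visible in this toy case.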

  16. Theoretical and experimental performance analysis for cold trap design

    Energy Technology Data Exchange (ETDEWEB)

    Hemanath, M.G., E-mail: hemanath@igcar.gov.i [Fast Reactor Technology Group, Indira Gandhi Center for Atomic Research, Kalpakkam (India); Meikandamurthy, C.; Kumar, A. Ashok; Chandramouli, S.; Rajan, K.K.; Rajan, M.; Vaidyanathan, G.; Padmakumar, G.; Kalyanasundaram, P.; Raj, Baldev [Fast Reactor Technology Group, Indira Gandhi Center for Atomic Research, Kalpakkam (India)

    2010-10-15

    A cold trap is a purification unit used in the sodium systems of FBRs to maintain the oxygen/hydrogen level in sodium within acceptable limits. It works on the principle of crystallization and precipitation of oxides/hydrides of sodium in a wire mesh when the temperature of the sodium is reduced below the saturation temperature. The cold traps presently used have lower effectiveness and get plugged prematurely. Plugged cold traps are cleaned and then put back into service; this frequent cleaning results in long down times for the sodium system. A new cold trap design has been conceived to overcome these problems. Mathematical modeling of the new design was carried out and validated against experimental results for its effectiveness. This paper shares the experience gained with the new cold trap design for FBRs.

  17. Design and experimental study of a novel giant magnetostrictive actuator

    Science.gov (United States)

    Xue, Guangming; Zhang, Peilin; He, Zhongbo; Li, Dongwei; Huang, Yingjie; Xie, Wenqiang

    2016-12-01

    The giant magnetostrictive actuator has been widely used in precise driving applications for its excellent performance. However, in driving a switching valve, especially the ball valve in an electronically controlled injector, the actuator cannot exhibit its full performance because of limits in output displacement and response speed. A novel giant magnetostrictive actuator, which reaches its maximum displacement because no bias magnetic field is applied, is designed in this paper. Simultaneously, elongation of the giant magnetostrictive material is converted into shortening of the actuator's axial dimension with the help of a T-shaped output rod. Furthermore, to reduce response time, a driving voltage with a high opening level and a low holding level is designed. Response time and output displacement are studied experimentally with the help of a measuring system. The measured results show that the designed driving voltage improves the response speed of the actuator displacement quite effectively, and that the actuator can output various steady-state displacements to achieve a range of driving effects.

  18. Entropy-Based Search Algorithm for Experimental Design

    Science.gov (United States)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
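    The selection criterion itself is simple to state in code: for each candidate experiment, compute the Shannon entropy of the outcome distribution predicted by the current model set, and prefer the experiment with the highest entropy. Below is our own toy sketch of that criterion, without the nested-sampling search the paper describes.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution, ignoring zero bins."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Toy setup: competing models are lines y = a*x with different slopes,
# all equally probable; an experiment is a choice of measurement location x.
x_candidates = np.linspace(0, 1, 21)
slopes = np.array([0.2, 0.5, 0.8, 1.1])

def outcome_distribution(x, n_bins=3):
    """Distribution over binned predicted outcomes induced by the model set."""
    preds = slopes * x
    bins = np.digitize(preds, np.linspace(0, 1.2, n_bins + 1)[1:-1])
    return np.bincount(bins, minlength=n_bins) / len(slopes)

entropies = [shannon_entropy(outcome_distribution(x)) for x in x_candidates]
best = x_candidates[int(np.argmax(entropies))]
print(best)  # the measurement location where the models disagree most
```

    At x = 0 every model predicts the same outcome (zero entropy, an uninformative experiment); the criterion selects the location where the predicted outcomes spread over the most bins.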

  19. Technological issues and experimental design of gene association studies.

    Science.gov (United States)

    Distefano, Johanna K; Taverna, Darin M

    2011-01-01

    Genome-wide association studies (GWAS), in which thousands of single-nucleotide polymorphisms (SNPs) spanning the genome are genotyped in individuals who are phenotypically well characterized, currently represent the most popular strategy for identifying gene regions associated with common diseases and related quantitative traits. Improvements in technology and throughput capability, development of powerful statistical tools, and more widespread acceptance of pooling-based genotyping approaches have led to greater utilization of GWAS in human genetics research. However, important considerations for optimal experimental design, including selection of the most appropriate genotyping platform, can enhance the utility of the approach even further. This chapter reviews experimental and technological issues that may affect the success of GWAS findings and proposes strategies for developing the most comprehensive, logical, and cost-effective approaches for genotyping given the population of interest.

  20. Entropy-Based Search Algorithm for Experimental Design

    CERN Document Server

    Malakar, N K

    2010-01-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. ...

  1. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2014-04-01

    This project (10/01/2010-9/30/2014), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. In this project, the focus is to develop and implement a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments.

  2. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2012-10-01

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop and implement a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed Durability Test Rig.

  4. The clinical applicability of a daily summary of patients' self-reported postoperative pain-A repeated measure analysis.

    Science.gov (United States)

    Wikström, Lotta; Eriksson, Kerstin; Fridlund, Bengt; Nilsson, Mats; Årestedt, Kristofer; Broström, Anders

    2017-03-23

    (i) To determine whether a measure of central tendency, the median, based on patients' self-rated pain is a clinically applicable daily measure of patients' postoperative pain on the first day after major surgery, and (ii) to determine the number of self-ratings required for the calculation of this measure. Perioperative pain trajectories in medical records are difficult to overview. The clinical applicability of a daily documented summarising measure of patients' self-rated pain scores is little explored. A repeated measures design was carried out at three Swedish county hospitals. Associations between the measures were analysed with nonparametric statistical methods; systematic and individual group changes were analysed separately. Measure I: pain scores at rest and activity on postoperative day 1; measure II: retrospective average pain from postoperative day 1. The sample consisted of 190 general surgery patients and 289 orthopaedic surgery patients with a mean age of 65; 56% were men. Forty-four percent had a pre-operative daily intake of analgesia, and 77% used postoperative opioids. A range of 4-9 pain scores seems to be eligible for the calculation of the daily measures of pain. Rank correlations for individual median scores, based on four ratings, vs. retrospective self-rated average pain were moderate and strengthened with increased numbers of ratings. A systematic group change towards a higher level of reported retrospective pain was significant. The median values were clinically applicable daily measures. The risk of obtaining a higher value than was recalled by patients seemed to be low. Applicability increased with increased frequency of self-rated pain scores and with high-quality pain assessments. Documenting daily median pain scores at rest and during activity could constitute the basis for obtaining patients' experiences by showing their pain severity trajectories. The measures could also be an important key to predicting postoperative health
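    As a minimal illustration of the summarising measure (hypothetical scores; the 4-rating minimum is taken from the range reported above), a daily median can be computed as:

```python
import statistics

def daily_pain_summary(scores, min_ratings=4):
    """Median of a day's self-rated pain scores (e.g. 0-10 NRS).
    Returns None when fewer valid ratings than required are available."""
    valid = [s for s in scores if s is not None]
    if len(valid) < min_ratings:
        return None  # too few ratings to summarise the day
    return statistics.median(valid)

rest_scores = [3, 4, 2, 5, 3]        # five ratings at rest
activity_scores = [6, 7, None, 5]    # only three valid ratings

print(daily_pain_summary(rest_scores))      # 3
print(daily_pain_summary(activity_scores))  # None
```

    Keeping separate daily medians for rest and activity, as the study suggests, yields the pain severity trajectories mentioned in the conclusion.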

  5. Acting like a physicist: Student approach study to experimental design

    Science.gov (United States)

    Karelina, Anna; Etkina, Eugenia

    2007-12-01

    National studies of science education have unanimously concluded that preparing our students for the demands of the 21st century workplace is one of the major goals. This paper describes a study of student activities in introductory college physics labs, which were designed to help students acquire abilities that are valuable in the workplace. In these labs [called Investigative Science Learning Environment (ISLE) labs], students design their own experiments. Our previous studies have shown that students in these labs acquire scientific abilities such as the ability to design an experiment to solve a problem, the ability to collect and analyze data, the ability to evaluate assumptions and uncertainties, and the ability to communicate. These studies mostly concentrated on analyzing students’ writing, evaluated by specially designed scientific ability rubrics. Recently, we started to study whether the ISLE labs make students not only write like scientists but also engage in discussions and act like scientists while doing the labs. For example, do students plan an experiment, validate assumptions, evaluate results, and revise the experiment if necessary? A brief report of some of our findings that came from monitoring students’ activity during ISLE and nondesign labs was presented in the Physics Education Research Conference Proceedings. We found differences in student behavior and discussions that indicated that ISLE labs do in fact encourage a scientist-like approach to experimental design and promote high-quality discussions. This paper presents a full description of the study.

  6. Optimization of formulation variables of benzocaine liposomes using experimental design.

    Science.gov (United States)

    Mura, Paola; Capasso, Gaetano; Maestrelli, Francesca; Furlanetto, Sandra

    2008-01-01

    This study aimed to optimize, by means of an experimental design multivariate strategy, a liposomal formulation for topical delivery of the local anaesthetic agent benzocaine. The formulation variables for the vesicle lipid phase were the use of potassium glycyrrhizinate (KG) as an alternative to cholesterol and the addition of a cationic (stearylamine) or anionic (dicetyl phosphate) surfactant (qualitative factors); the percentage of ethanol and the total volume of the hydration phase (quantitative factors) were the variables for the hydrophilic phase. The combined influence of these factors on the considered responses (encapsulation efficiency (EE%) and percent drug permeated at 180 min (P%)) was evaluated by means of a D-optimal design strategy. Graphic analysis of the effects indicated that maximization of the selected responses required opposite levels of the considered factors: for example, KG and stearylamine were better for increasing EE%, and cholesterol and dicetyl phosphate for increasing P%. In the second step, the Doehlert design, applied for the response-surface study of the quantitative factors, pointed out a negative interaction between the percentage of ethanol and the volume of the hydration phase and allowed prediction of the best formulation for maximizing the drug permeation rate. Experimental P% data of the optimized formulation were inside the confidence interval (P < 0.05) calculated around the predicted value of the response. This proved the suitability of the proposed approach for optimizing the composition of liposomal formulations and predicting the effects of formulation variables on the considered experimental response. Moreover, the optimized formulation enabled a significant improvement (P < 0.05) of the drug's anaesthetic effect with respect to the starting reference liposomal formulation, thus demonstrating its better therapeutic effectiveness.

  7. Designing artificial enzymes from scratch: Experimental study and mesoscale simulation

    Science.gov (United States)

    Komarov, Pavel V.; Zaborina, Olga E.; Klimova, Tamara P.; Lozinsky, Vladimir I.; Khalatur, Pavel G.; Khokhlov, Alexey R.

    2016-09-01

    We present a new concept for designing biomimetic analogs of enzymatic proteins; these analogs are based on synthetic protein-like copolymers. α-Chymotrypsin is used as a prototype of the artificial catalyst. Our experimental study shows that in the course of free radical copolymerization of hydrophobic and hydrophilic monomers, the target globular nanostructures of a "core-shell" morphology appear in a selective solvent. Using mesoscale computer simulation, we show that the protein-like globules can have a large number of catalytic centers located at the hydrophobic core/hydrophilic shell interface.

  8. On the proper study design applicable to experimental balneology

    Science.gov (United States)

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies rest on the unproven hypothesis that the inorganic ingredients are responsible for the healing effects of bathing. A change of paradigm is highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A successful design for experimental mechanistic studies is proposed.

  10. Statistics for proteomics: experimental design and 2-DE differential analysis.

    Science.gov (United States)

    Chich, Jean-François; David, Olivier; Villers, Fanny; Schaeffer, Brigitte; Lutomski, Didier; Huet, Sylvie

    2007-04-15

    Proteomics relies on the separation of complex protein mixtures using two-dimensional electrophoresis (2-DE). This approach is largely used to detect expression variations of proteins prepared from two or more samples. Recently, attention has been drawn to the reliability of the results published in the literature. Among the critical points identified were experimental design, differential analysis, and the problem of missing data, all problems where statistics can be of help. Using examples and terms understandable by biologists, we describe how a collaboration between biologists and statisticians can improve the reliability of results and confidence in conclusions.

  11. Experimental design methodology: the scientific tool for performance evaluation

    Science.gov (United States)

    Sadjadi, Firooz A.

    1990-09-01

    The rapid growth of signal and image processing technology over the last several decades has created the need for means of evaluating and comparing the numerous algorithms and systems that have been created or are being developed. Performance evaluation has, in the past, been mostly ad hoc and incoherent. In this paper we present a systematic, step-by-step approach to the scientific evaluation of signal and image processing algorithms and systems. This approach is based on the methodology of experimental design. We illustrate the method by means of an example from the field of automatic object recognition.

  12. Optimal experimental design to position transducers in ultrasound breast imaging

    Science.gov (United States)

    Korta Martiartu, Naiara; Boehm, Christian; Vinard, Nicolas; Jovanović Balic, Ivana; Fichtner, Andreas

    2017-03-01

    We present methods to optimize the setup of a 3D ultrasound tomography scanner for breast cancer detection. This approach provides a systematic and quantitative tool to evaluate different designs and to optimize the configuration with respect to predefined design parameters. We consider both time-of-flight inversion using straight rays and time-domain waveform inversion governed by the acoustic wave equation for imaging the sound speed. In order to compare different designs, we measure their quality by extracting properties from the Hessian operator of the time-of-flight or waveform differences defined in the inverse problem, i.e., the second derivatives with respect to the sound speed. Spatial uncertainties and resolution can be related to the eigenvalues of the Hessian, which provide a good indication of the information contained in the data acquired with a given design. However, the complete spectrum is often prohibitively expensive to compute, so suitable approximations have to be developed and analyzed. We use the trace of the Hessian operator as the design criterion, which is equivalent to the sum of all eigenvalues and requires less computational effort. In addition, we suggest taking advantage of spatial symmetry to extrapolate the 3D experimental design from a set of 2D configurations. In order to maximize the quality criterion, we use a genetic algorithm to explore the space of possible design configurations. Numerical results show that the proposed strategies are capable of improving an initial configuration with uniformly distributed transducers, clustering them around regions with poor illumination and improving the ray coverage of the domain of interest.
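    The trace criterion is cheap because the trace of the Gauss-Newton Hessian J'J is just the sum of squared sensitivities, with no eigendecomposition required. A deliberately simplified 1D straight-ray sketch (our own toy geometry, far from a real 3D scanner) shows how such a criterion ranks two transducer layouts:

```python
import numpy as np

def ray_matrix(sources, receivers, n_cells=10):
    """Straight-ray path-length matrix J for a 1D slowness grid on [0, 1]:
    J[r, c] = length of ray r inside cell c (the time-of-flight Jacobian)."""
    edges = np.linspace(0, 1, n_cells + 1)
    rows = []
    for s in sources:
        for r in receivers:
            lo, hi = min(s, r), max(s, r)
            # overlap of the ray segment [lo, hi] with each grid cell
            seg = np.clip(np.minimum(edges[1:], hi) - np.maximum(edges[:-1], lo), 0, None)
            rows.append(seg)
    return np.array(rows)

def design_quality(sources, receivers):
    """Trace of the Gauss-Newton Hessian J'J = sum of its eigenvalues."""
    J = ray_matrix(sources, receivers)
    return np.trace(J.T @ J)

clustered = design_quality([0.0, 0.1], [0.2, 0.3])  # transducers bunched up
spread = design_quality([0.0, 0.5], [0.6, 1.0])     # transducers spread out
print(clustered, spread)  # the spread layout scores higher
```

    The spread layout covers more cells with longer rays, so its Hessian trace is larger; a search method such as the genetic algorithm in the paper would iterate on this score.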

  13. The effect of repeated measurements and working memory on the most comfortable level in the ANL test

    DEFF Research Database (Denmark)

    Brännström, K Jonas; Olsen, Steen Østergaard; Holm, Lucas

    2014-01-01

    interleaved methodology during one session using a non-semantic version. Phonological (PWM) and visuospatial working memory (VSWM) was measured. STUDY SAMPLE: Thirty-two normal-hearing adults. RESULTS: Repeated measures ANOVA, intraclass correlations, and the coefficient of repeatability (CR) were used...

  14. Cardiometabolic treatment decisions in patients with type 2 diabetes : the role of repeated measurements and medication burden

    NARCIS (Netherlands)

    Voorham, J.; Haaijer-Ruskamp, F. M.; Wolffenbuttel, B. H. R.; Stolk, R. P.; Denig, P.

    2010-01-01

    Purpose Clinical guidelines for cardiometabolic risk management indicate a simple threshold-based strategy for treatment, but physicians and their patients may be reluctant to modify drug treatment after a single elevated measurement. We determined how repeated measurements of blood pressure, choles

  16. Comparing Simulated Emission from Molecular Clouds Using Experimental Design

    CERN Document Server

    Yeremi, Miayan; Offner, Stella; Loeppky, Jason; Rosolowsky, Erik

    2014-01-01

    We propose a new approach to comparing simulated observations that enables us to determine the significance of the underlying physical effects. We utilize the methodology of experimental design, a subfield of statistical analysis, to establish a framework for comparing simulated position-position-velocity data cubes to each other. We propose three similarity metrics based on methods described in the literature: principal component analysis, the spectral correlation function, and the Cramer multi-variate two sample similarity statistic. Using these metrics, we intercompare a suite of mock observational data of molecular clouds generated from magnetohydrodynamic simulations with varying physical conditions. Using this framework, we show that all three metrics are sensitive to changing Mach number and temperature in the simulation sets, but cannot detect changes in magnetic field strength and initial velocity spectrum. We highlight the shortcomings of one-factor-at-a-time designs commonly used in astrophysics an...

  17. Parameters optimization for enzymatic assays using experimental design

    Directory of Open Access Journals (Sweden)

    J. F. M. Burkert

    2006-06-01

    Full Text Available The conditions for maximizing enzymatic activity were determined using experimental design and inulinase from Kluyveromyces marxianus ATCC 16045. The effects of substrate concentration (sucrose and inulin), pH and temperature on inulinase activity were verified using factorial designs and response surface analysis. Using sucrose as substrate, it was shown that the effect of sucrose on enzymatic activity is not statistically significant, and the best condition for the highest activity (110 U/mL) was achieved at temperatures between 60°C and 68°C and pH between 4.5 and 5.0. Using inulin as substrate, it was verified that temperature is the only statistically significant variable, and the maximum activity was 7.3 U/mL at temperatures between 50°C and 51°C.

  18. Bayesian experimental design for models with intractable likelihoods.

    Science.gov (United States)

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables.
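    The ABC rejection step described above can be illustrated with a toy simulator (our sketch, not the paper's code): prior draws are kept only when a summary statistic of their simulated data falls within a tolerance of the observed data, and the kept draws approximate the ABC posterior whose precision serves as the design utility.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=20):
    # Stand-in for an intractable-likelihood simulator.
    return rng.normal(theta, 1.0, size=n)

observed = simulate(2.0)  # synthetic "observed" data with true theta = 2

def abc_rejection(observed, n_draws=5000, eps=0.3):
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)  # draw from the prior
        # Accept when the summary statistic (sample mean) is close enough.
        if abs(simulate(theta).mean() - observed.mean()) < eps:
            accepted.append(theta)
    return np.array(accepted)

posterior_draws = abc_rejection(observed)
```

    The precision of posterior_draws (e.g. the inverse of its variance) would then be the quantity compared across candidate experimental designs.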

  19. Design of Experimental Suspended Footbridge with Deck Made of UHPC

    Directory of Open Access Journals (Sweden)

    Blank Marek

    2016-01-01

    Full Text Available This paper deals with the static and dynamic design of an experimental footbridge for pedestrians and cyclists in the municipality of Lužec nad Vltavou in the Czech Republic. This work aims to familiarize the reader with the calculations carried out and the results obtained, describing the static and dynamic properties of the proposed footbridge. The footbridge is designed as a suspended structure with a prestressed deck consisting of prefabricated UHPC panels and a reversed-“V”-shaped steel pylon approximately 40 meters high. The deck is anchored to the steel pylon by 24 steel hangers in one row: 17 in the main span and 7 on the other side. The main span is 99.18 m and the secondary span is 31.9 m. The deck is 4.5 m wide, with a 3.0 m clear passage. The bridge is designed to allow the passage of vehicles weighing up to 3.5 tons. The deck panels are made of reinforced UHPC. At the edge of the bridge on the side of the shorter span, the deck is firmly connected to the abutment; at the other end it rests on a pair of sliding bearings. Exploiting the excellent properties of UHPC allows the design of a very thin and lightweight deck, which could not be achieved with normal concrete.

  20. Logical Experimental Design and Execution in the Biomedical Sciences.

    Science.gov (United States)

    Holder, Daniel J; Marino, Michael J

    2017-03-17

    Lack of reproducibility has been highlighted as a significant problem in biomedical research. The present unit is devoted to describing ways to help ensure that research findings can be replicated by others, with a focus on the design and execution of laboratory experiments. Essential components of this include clearly defining the question being asked, using available information or information from pilot studies to aid in the design of the experiment, and choosing manipulations within a logical framework based on Mill's "methods of knowing" to build confidence in putative causal links. Final experimental design requires systematic attention to detail, including the choice of controls, sample selection, blinding to avoid bias, and the use of power analysis to determine the sample size. Execution of the experiment is done with care to ensure that the independent variables are controlled and the measurements of the dependent variables are accurate. While there are always differences among laboratories with respect to technical expertise, equipment, and suppliers, execution of the steps itemized in this unit will ensure well-designed and well-executed experiments to answer any question in biomedical research. © 2017 by John Wiley & Sons, Inc.

  1. GSTM1 and APE1 genotypes affect arsenic-induced oxidative stress: a repeated measures study

    Directory of Open Access Journals (Sweden)

    Quamruzzaman Quazi

    2007-12-01

    Full Text Available Abstract Background Chronic arsenic exposure is associated with an increased risk of skin, bladder and lung cancers. Generation of oxidative stress may contribute to arsenic carcinogenesis. Methods To investigate the association between arsenic exposure and oxidative stress, urinary 8-hydroxy-2'-deoxyguanosine (8-OHdG) was evaluated in a cohort of 97 women recruited from an arsenic-endemic region of Bangladesh in 2003. Arsenic exposure was measured in urine, toenails, and drinking water. Drinking water and urine samples were collected on three consecutive days. Susceptibility to oxidative stress was evaluated by genotyping relevant polymorphisms in the glutathione S-transferase mu (GSTM1), human 8-oxoguanine glycosylase (hOGG1) and apurinic/apyrimidinic endonuclease (APE1) genes using the Taqman method. Data were analyzed using random-effects Tobit regression to account for repeated measures and 8-OHdG values below the detection limit. Results A consistent negative effect for APE1 was observed across the water, toenail and urinary arsenic models. The APE1 148 glu/glu + asp/glu genotype was associated with a decrease in logged 8-OHdG of 0.40 (95% CI -0.73, -0.07) compared to APE1 148 asp/asp. An association between total urinary arsenic and 8-OHdG was observed among women with the GSTM1 null genotype but not in women with the GSTM1 positive genotype. Among women with GSTM1 null, a comparison of the second, third, and fourth quartiles of total urinary arsenic to the first quartile resulted in a 0.84 increase (95% CI 0.27, 1.42), a 0.98 increase (95% CI 0.33, 1.66) and a 0.85 increase (95% CI 0.27, 1.44) in logged 8-OHdG, respectively. No effects between 8-OHdG and toenail arsenic or drinking water arsenic were observed. Conclusion These results suggest that the APE1 variant genotype decreases repair of 8-OHdG and that arsenic exposure is associated with oxidative stress in women who lack a functional GSTM1 detoxification enzyme.

  2. Assessing variability and comparing short-term biomarkers of styrene exposure using a repeated measurements approach.

    Science.gov (United States)

    Fustinoni, S; Manini, P; Campo, L; De Palma, G; Andreoli, R; Mutti, A; Bertazzi, P A; Rappaport, S M

    2010-01-15

    The aim of this work is to compare several short-term biomarkers of styrene exposure, namely urinary styrene (StyU), mercapturic acids (M1+M2), mandelic acid (MA), phenylglyoxylic acid (PGA), phenylglycine (PHG), and 4-vinylphenol conjugates (VP), for use as biomarkers of exposure in epidemiologic studies. A repeated-measurements protocol (typically 4 measurements per worker over 6 weeks) was applied to measure airborne styrene (StyA) and urinary biomarkers in 10 varnish and 8 fiberglass-reinforced-plastic workers. Estimated geometric mean personal exposures to StyA were 2.96 mg/m^3 in varnish workers and 15.7 mg/m^3 in plastic workers. The corresponding levels of StyU, M1+M2, MA, PGA, MA+PGA, PHG and VP were 5.13 microg/L, 0.111, 38.2, 22.7, 62.6, 0.978, and 3.97 mg/g creatinine in varnish workers and 8.38 microg/L, 0.303, 146, 83.4, 232, 2.85 and 3.97 mg/g creatinine in plastic workers. Within-worker (sigma_wY^2) and between-worker (sigma_bY^2) variance components were estimated from the log-transformed data, as were the corresponding fold ranges containing 95% of the respective lognormal distributions of daily levels (wR_0.95) and subject-specific mean levels (bR_0.95). Estimates of wR_0.95 (range: 4-26) were generally smaller than those of bR_0.95 (range: 5-790) for both environmental and biological markers; this indicates that exposures varied much more between workers than within workers in these groups. Since attenuation bias in an estimated exposure-response relationship increases with the variance ratio lambda = sigma_wY^2/sigma_bY^2, we estimated values of lambda for all exposure measures in our study. Values of lambda were typically much less than one (median = 0.220) and ranged from 0.089 for M1+M2 in plastic workers to 1.38 for PHG in varnish workers. Since values of lambda were 0.147 and 0.271 for StyA in varnish workers and plastic workers, respectively, compared to 0.178 and 0.210 for MA in the same groups, our results suggest that either ...
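    The within- and between-worker variance components and the attenuation ratio lambda described above can be sketched with a balanced one-way random-effects ANOVA on log-transformed repeated measurements (our illustration on synthetic data, not the study's code):

```python
import numpy as np

def variance_components(data):
    """Balanced one-way random-effects ANOVA estimates.

    data: list of equal-length arrays, one per worker, of log biomarker levels.
    Returns (sigma2_within, sigma2_between).
    """
    k = len(data)                       # number of workers
    n = data[0].size                    # repeats per worker (balanced design)
    grand = np.mean([x.mean() for x in data])
    ms_within = np.mean([x.var(ddof=1) for x in data])
    ms_between = n * sum((x.mean() - grand) ** 2 for x in data) / (k - 1)
    return ms_within, max((ms_between - ms_within) / n, 0.0)

rng = np.random.default_rng(2)
# Synthetic workers: between-worker SD 1.0, within-worker SD 0.3 (log scale).
workers = [rng.normal(mu, 0.3, size=4) for mu in rng.normal(3.0, 1.0, size=10)]
s2w, s2b = variance_components(workers)

lam = s2w / s2b                      # attenuation ratio lambda
wR95 = np.exp(3.92 * np.sqrt(s2w))   # fold range holding 95% of daily levels
```

    With these synthetic values lambda comes out well below one, mirroring the finding that exposures varied more between than within workers.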

  3. Fast Bayesian optimal experimental design for seismic source inversion

    KAUST Repository

    Long, Quan

    2015-07-01

    We develop a fast method for optimally designing experiments in the context of statistical seismic source inversion. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by elastodynamic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the "true" parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem. © 2015 Elsevier B.V.

  4. Prediction uncertainty and optimal experimental design for learning dynamical systems

    Science.gov (United States)

    Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
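    The idea of prediction deviation can be illustrated with a one-parameter toy model (our construction, not the authors' code): among all parameters whose fit is within a tolerance of the best fit, the pair with the most different predictions at a new input bounds how well the data constrain that prediction.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.array([0.0, 1.0, 2.0])                 # sparse observed inputs
y = 1.5 * x + rng.normal(0.0, 0.1, size=3)    # synthetic observations

def loss(slope):
    return float(np.sum((y - slope * x) ** 2))

# All models (slopes) whose fit is within a tolerance of the best fit.
slopes = np.linspace(0.0, 3.0, 601)
best = min(loss(s) for s in slopes)
feasible = [s for s in slopes if loss(s) <= best + 0.5]

x_new = 10.0   # far from the observed inputs: weakly constrained prediction
# Prediction deviation: spread of predictions among well-fitting models.
deviation = (max(feasible) - min(feasible)) * x_new
```

    An additional experiment near x_new would shrink the feasible set and hence the deviation; choosing the experiment that most reduces it is the design strategy described above.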

  5. Fast Bayesian Optimal Experimental Design for Seismic Source Inversion

    KAUST Repository

    Long, Quan

    2016-01-06

    We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.

  6. Experimental study of elementary collection efficiency of aerosols by spray: Design of the experimental device

    Energy Technology Data Exchange (ETDEWEB)

    Ducret, D.; Vendel, J.; Garrec, S.L.

    1995-02-01

    The safety of a nuclear power plant containment building, in which pressure and temperature could increase during an overheating-reactor accident, can be maintained by spraying water drops. The spray reduces the pressure and temperature by condensing steam on the cold water drops. The most stringent thermodynamic conditions are a pressure of 5×10^5 Pa (due to steam emission) and a temperature of 413 K. Beyond its energy-dissipation function, the spray also washes out fission-product particles emitted into the reactor building atmosphere. The present study is part of a large program devoted to the evaluation of realistic washout rates. The aim of this work is to develop experiments to determine the collection efficiency of aerosols by a single drop. To do this, the experimental device has to be designed against fundamental criteria: thermodynamic conditions have to be representative of the post-accident atmosphere; thermodynamic equilibrium has to be attained between the water drops and the gaseous phase; thermophoretic, diffusiophoretic and mechanical effects have to be studied independently; and operating conditions have to be homogeneous and constant during each experiment. This paper presents the design of the experimental device. In practice, the consequences of each of these criteria for the design, and the necessity of being representative of real conditions, will be described.

  7. Experimental Design for the INL Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara

    2007-12-13

    This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated.” The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide desired confidence
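    The clearance statement quoted above has a standard acceptance-sampling reading: if n randomly placed samples all come back negative, one can claim X% confidence that at least Y% of the area is uncontaminated. A minimal sketch of the sample-size formula (a textbook result, not taken from this report):

```python
import math

def clearance_n(confidence_pct, clean_pct):
    # Smallest n with (clean_pct/100)^n <= 1 - confidence_pct/100,
    # i.e. n >= log(1 - X/100) / log(Y/100).
    return math.ceil(math.log(1.0 - confidence_pct / 100.0)
                     / math.log(clean_pct / 100.0))

print(clearance_n(95, 99))  # 299 negative samples for "95%/99%" clearance
```

    The steep growth of n as Y approaches 100% is one motivation for combining probabilistic samples with judgmental ones in a Bayesian design.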

  8. Bayesian optimal experimental design for priors of compact support

    KAUST Repository

    Long, Quan

    2016-01-08

    In this study, we optimize the experimental setup computationally by optimal experimental design (OED) in a Bayesian framework. We approximate the posterior probability density functions (pdf) using truncated Gaussian distributions in order to account for the bounded domain of the uniform prior pdf of the parameters. The underlying Gaussian distribution is obtained in the spirit of the Laplace method; more precisely, the mode is chosen as the maximum a posteriori (MAP) estimate, and the covariance is chosen as the negative inverse of the Hessian of the misfit function at the MAP estimate. The model-related entities are obtained from a polynomial surrogate. The optimality, quantified by the information gain measures, can be estimated efficiently by a rejection sampling algorithm against the underlying Gaussian probability distribution, rather than against the true posterior. This approach offers a significant error reduction when the magnitudes of the invariants of the posterior covariance are comparable to the size of the bounded domain of the prior. We demonstrate the accuracy and superior computational efficiency of our method for shock-tube experiments aiming to measure the model parameters of a key reaction which is part of the complex kinetic network describing hydrocarbon oxidation. In the experiments, the initial temperature and fuel concentration are optimized with respect to the expected information gain in the estimation of the parameters of the target reaction rate. We show that the expected information gain surface can change its shape dramatically according to the level of noise introduced into the synthetic data. The information that can be extracted from the data saturates as a logarithmic function of the number of experiments, and few experiments are needed when they are conducted at the optimal experimental design conditions.
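    The rejection step against the underlying Gaussian can be sketched as follows (our illustration, not the paper's code): draws from the Laplace-method Gaussian are kept only if they fall inside the bounded support of the uniform prior, yielding samples from the truncated Gaussian approximation of the posterior.

```python
import numpy as np

rng = np.random.default_rng(5)

def truncated_gaussian(mean, std, lo, hi, n):
    # Rejection sampling: propose from N(mean, std^2), keep draws that lie
    # inside the prior's bounded domain [lo, hi].
    kept = np.empty(0)
    while kept.size < n:
        x = rng.normal(mean, std, size=n)
        kept = np.concatenate([kept, x[(x >= lo) & (x <= hi)]])
    return kept[:n]

# Hypothetical MAP estimate near the edge of a [0, 1] prior box.
samples = truncated_gaussian(mean=0.9, std=0.5, lo=0.0, hi=1.0, n=1000)
```

    The acceptance rate drops as the Gaussian's mass leaks outside the box, which is exactly the regime where the truncation correction matters.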

  9. Experimental design in phylogenetics: testing predictions from expected information.

    Science.gov (United States)

    San Mauro, Diego; Gower, David J; Cotton, James A; Zardoya, Rafael; Wilkinson, Mark; Massingham, Tim

    2012-07-01

    Taxon and character sampling are central to phylogenetic experimental design; yet, we lack general rules. Goldman introduced a method to construct efficient sampling designs in phylogenetics, based on the calculation of expected Fisher information given a probabilistic model of sequence evolution. The considerable potential of this approach remains largely unexplored. In an earlier study, we applied Goldman's method to a problem in the phylogenetics of caecilian amphibians and made an a priori evaluation and testable predictions of which taxon additions would increase information about a particular weakly supported branch of the caecilian phylogeny by the greatest amount. We have now gathered mitogenomic and rag1 sequences (some newly determined for this study) from additional caecilian species and studied how information (both expected and observed) and bootstrap support vary as each new taxon is individually added to our previous data set. This provides the first empirical test of specific predictions made using Goldman's method for phylogenetic experimental design. Our results empirically validate the top 3 (more intuitive) taxon addition predictions made in our previous study, but only information results validate unambiguously the 4th (less intuitive) prediction. This highlights a complex relationship between information and support, reflecting that each measures different things: Information is related to the ability to estimate branch length accurately and support to the ability to estimate the tree topology accurately. Thus, an increase in information may be correlated with but does not necessitate an increase in support. Our results also provide the first empirical validation of the widely held intuition that additional taxa that join the tree proximal to poorly supported internal branches are more informative and enhance support more than additional taxa that join the tree more distally. 
Our work supports the view that adding more data for a single (well

  10. Experimental design schemes for learning Boolean network models

    Science.gov (United States)

    Atias, Nir; Gershenzon, Michal; Labazin, Katia; Sharan, Roded

    2014-01-01

    Motivation: A holy grail of biological research is a working model of the cell. Current modeling frameworks, especially in the protein–protein interaction domain, are mostly topological in nature, calling for stronger and more expressive network models. One promising alternative is logic-based or Boolean network modeling, which was successfully applied to model signaling regulatory circuits in humans. Learning such models requires observing the system under a sufficient number of different conditions. To date, the amount of measured data is the main bottleneck in learning informative Boolean models, underscoring the need for efficient experimental design strategies. Results: We developed novel design approaches that greedily select an experiment to be performed so as to maximize the difference or the entropy in the results it induces with respect to current best-fit models. Unique to our maximum difference approach is the ability to account for all (possibly exponentially many) Boolean models displaying high fit to the available data. We applied both approaches to simulated and real data from the EGFR and IL1 signaling systems in humans. We demonstrate the utility of the developed strategies in substantially improving on a random selection approach. Our design schemes highlight the redundancy in these datasets, leading to up to 11-fold savings in the number of experiments to be performed. Availability and implementation: Source code will be made available upon acceptance of the manuscript. Contact: roded@post.tau.ac.il PMID:25161232
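    The entropy-based greedy selection described above can be sketched on a toy model set (our illustration, not the paper's implementation): each candidate experiment partitions the plausible Boolean models by predicted outcome, and the experiment with the highest outcome entropy is performed next.

```python
import math

def outcome_entropy(models, experiment):
    # Entropy of the distribution of predicted outcomes across the models.
    counts = {}
    for model in models:
        out = model(experiment)
        counts[out] = counts.get(out, 0) + 1
    n = len(models)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Hypothetical candidate models: Boolean functions of a 2-bit perturbation.
models = [lambda e: e[0] & e[1],   # AND
          lambda e: e[0] | e[1],   # OR
          lambda e: e[0] ^ e[1],   # XOR
          lambda e: e[0]]          # copies the first input
experiments = [(0, 0), (0, 1), (1, 0), (1, 1)]

best = max(experiments, key=lambda e: outcome_entropy(models, e))
print(best)  # (0, 1): it splits the four models into two equal groups
```

    After performing the chosen experiment, models inconsistent with the observed outcome are discarded and the selection repeats on the reduced set.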

  11. Comparing simulated emission from molecular clouds using experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Yeremi, Miayan; Flynn, Mallory; Loeppky, Jason; Rosolowsky, Erik [University of British Columbia, Okanagan Campus, Departments of Physics and Statistics, 3333 University Way, Kelowna BC V1V 1V7 (Canada); Offner, Stella [Yale University Astronomy Department, 260 Whitney Avenue, New Haven, CT 06511 (United States)

    2014-03-10

    We propose a new approach to comparing simulated observations that enables us to determine the significance of the underlying physical effects. We utilize the methodology of experimental design, a subfield of statistical analysis, to establish a framework for comparing simulated position-position-velocity data cubes to each other. We propose three similarity metrics based on methods described in the literature: principal component analysis, the spectral correlation function, and the Cramer multi-variate two-sample similarity statistic. Using these metrics, we intercompare a suite of mock observational data of molecular clouds generated from magnetohydrodynamic simulations with varying physical conditions. Using this framework, we show that all three metrics are sensitive to changing Mach number and temperature in the simulation sets, but cannot detect changes in magnetic field strength and initial velocity spectrum. We highlight the shortcomings of one-factor-at-a-time designs commonly used in astrophysics and propose fractional factorial designs as a means to rigorously examine the effects of changing physical properties while minimizing the investment of computational resources.
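    A fractional factorial design of the kind advocated above can be sketched in a few lines (our illustration): a 2^(4-1) half fraction studies four two-level factors in eight runs by aliasing the fourth factor with the three-way interaction, D = ABC.

```python
from itertools import product

def fractional_factorial_2_4_1():
    # Coded levels -1/+1; the fourth column is the product of the first
    # three (defining relation I = ABCD), giving a resolution IV design.
    return [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]

design = fractional_factorial_2_4_1()
print(len(design))  # 8 runs instead of the 16 of the full 2^4 factorial
```

    Each column is balanced and orthogonal to the others, so main effects can be estimated free of one another, unlike one-factor-at-a-time designs.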

  12. EXPERIMENTAL RESEARCH REGARDING LEATHER APPLICATIONS IN PRODUCT DESIGN

    Directory of Open Access Journals (Sweden)

    PRALEA Jeni

    2015-05-01

    Full Text Available This paper presents the role and importance of experimental research in design activity. The designer, as a researcher and a project manager, proposes to establish a relationship between functional-aesthetic-constructive-technological-economic,based on the aesthetic possibilities of the materials used for the experiments. With the aim to identify areas for the application of leather waste resulted from the production process, the paper presents experiments conducted with this material in combination with wood, by using different techniques that lead to different aesthetic effects. Identifying the areas to use and creating products from leather and/or wood waste, is based on the properties of these materials. Leather, the subject of these experiments, has the advantage that it can be used on both sides. Tactile differences of the two sides of this material has both aesthetical and functional advantages, which makes it suitable for applications on products that meet the requirements of "design for all". With differentiated tactile characteristics, in combination with other materials, for these experiments wood, easily "read touch" products can be generated to help people with certain disabilities. Thus, experiments presented in this paper allows the establishment of aesthetic schemes applicable to products that are friendly both with the environment (based on the reuse of wood and leather waste and with the users (can be used as applications, accessories and concepts of products for people with certain disabilities. The designer’s choices or decisions can be based on the results of this experiment. The experiment enables the designer to develop creative, innovative and environmentally friendly products.

  13. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-05-12

    Experimental design can be vital when experiments are resource-exhaustive and time-consuming. In this work, we carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data about the model parameters. One of the major difficulties in evaluating the expected information gain is that it naturally involves nested integration over a possibly high-dimensional domain. We use the Multilevel Monte Carlo (MLMC) method to accelerate the computation of the nested high-dimensional integral. The advantages are twofold. First, MLMC can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the MLMC method imposes fewer assumptions, such as the asymptotic concentration of posterior measures, required for instance by the Laplace approximation (LA). We test the MLMC method using two numerical examples. The first example is the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Poisson equation. We place the sensors in the locations where the pressure is measured, and we model the conductivity field as a piecewise constant random vector with two parameters. The second is a chemical Enhanced Oil Recovery (EOR) core-flooding experiment assuming homogeneous permeability. We measure the cumulative oil recovery, from a horizontal core flooded by water, surfactant and polymer, for different injection rates. The model parameters consist of the endpoint relative permeabilities, the residual saturations and the relative permeability exponents for the three phases: water, oil and
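    The nested integral behind the expected information gain, which MLMC accelerates, can be written down for a linear-Gaussian toy model where a closed form exists for comparison (our illustration, not the thesis code):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 0.5   # measurement noise SD; prior is theta ~ N(0, 1)

def log_lik(y, theta):
    return -0.5 * ((y - theta) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def eig_nested(n_outer=2000, n_inner=2000):
    # Outer loop: average log[ p(y|theta) / p(y) ] over joint prior draws.
    # Inner loop: plain Monte Carlo estimate of the evidence p(y).
    thetas = rng.normal(size=n_outer)
    ys = thetas + sigma * rng.normal(size=n_outer)
    total = 0.0
    for theta, y in zip(thetas, ys):
        inner = rng.normal(size=n_inner)  # fresh prior draws for p(y)
        evidence = np.mean(np.exp(log_lik(y, inner)))
        total += log_lik(y, theta) - np.log(evidence)
    return total / n_outer

exact = 0.5 * np.log(1.0 + 1.0 / sigma**2)  # closed form for this toy model
estimate = eig_nested()
```

    The cost is n_outer times n_inner likelihood evaluations; MLMC reduces it by spreading the inner samples across levels of accuracy.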

  14. Design and construction of the IEA Grimethorpe experimental facility

    Energy Technology Data Exchange (ETDEWEB)

    Broadbent, D.H.; Wright, S.J.; Kaden, M.

    1979-06-01

    In December 1975 the Governments of the United Kingdom, the United States of America, and the Federal Republic of Germany, under the auspices of the International Energy Agency, entered into an agreement to build a large pressurized fluidized bed combustion experimental facility. The function of the facility would be to extend the range of fluidization and combustion characteristics investigated from those of the relatively small rigs then in operation across the whole range of conditions potentially applicable to combined cycle power generation systems. The ranges of conditions to be investigated in the facility are pressures 6 to 12 bar, fluidizing velocity 0.6 to 3.0 m/s and bed temperatures 750 to 950°C. The ultimate aim is to seek an optimum condition and establish a data base from which a demonstration plant could be designed and built.

  15. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels as well as methods for data preprocessing are covered.

  16. An experimental design method leading to chemical Turing patterns.

    Science.gov (United States)

    Horváth, Judit; Szalai, István; De Kepper, Patrick

    2009-05-08

    Chemical reaction-diffusion patterns often serve as prototypes for pattern formation in living systems, but only two isothermal single-phase reaction systems have produced sustained stationary reaction-diffusion patterns so far. We designed an experimental method to search for additional systems on the basis of three steps: (i) generate spatial bistability by operating autoactivated reactions in open spatial reactors; (ii) use an independent negative-feedback species to produce spatiotemporal oscillations; and (iii) induce a space-scale separation of the activatory and inhibitory processes with a low-mobility complexing agent. We successfully applied this method to a hydrogen-ion autoactivated reaction, the thiourea-iodate-sulfite (TuIS) reaction, and noticeably produced stationary hexagonal arrays of spots and parallel stripes of pH patterns attributed to a Turing bifurcation. This method could be extended to biochemical reactions.

  17. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This proposal will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  18. Simulation-based optimal Bayesian experimental design for nonlinear systems

    KAUST Repository

    Huan, Xun

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics. © 2012 Elsevier Inc.

  19. A retrospective mathematical analysis of controlled release design and experimentation.

    Science.gov (United States)

    Rothstein, Sam N; Kay, Jennifer E; Schopfer, Francisco J; Freeman, Bruce A; Little, Steven R

    2012-11-01

    The development and performance evaluation of new biodegradable polymer controlled release formulations relies on successful interpretation and evaluation of in vitro release data. However, depending upon the extent of empirical characterization, release data may be open to more than one qualitative interpretation. In this work, a predictive model for release from degradable polymer matrices was applied to a number of published release data sets in order to extend the characterization of release behavior. Where possible, the model was also used to interpolate and extrapolate upon the collected release data to clarify the overall duration of release as well as the kinetics of release between widely spaced data points. In each case examined, mathematical predictions of release coincide well with experimental results, offering a more definitive description of each formulation's performance than was previously available. This information may prove particularly helpful in the design of future studies, such as when calculating proper dosing levels or determining experimental end points, in order to more comprehensively evaluate a controlled release system's performance.

  20. Experimental Verification of Current Shear Design Equations for HSRC Beams

    Directory of Open Access Journals (Sweden)

    Attaullah Shah

    2012-07-01

    Full Text Available Experimental research on the shear capacity of HSRC (High Strength Reinforced Concrete) beams is very limited compared to that on NSRC (Normal Strength Reinforced Concrete) beams. Most building codes determine the shear strength of HSRC with the help of empirical equations based on experimental work on NSRC beams, and hence these equations are generally regarded as un-conservative for HSRC beams, particularly at low levels of longitudinal reinforcement. In this paper, 42 beams have been tested in two sets, such that in 21 beams no transverse reinforcement has been used, whereas in the remaining 21 beams, minimum transverse reinforcement has been used as per ACI-318 (American Concrete Institute) provisions. Two values of compressive strength (52 and 61 MPa), three values of longitudinal steel ratio and seven values of shear span to depth ratio have been used. The beams were tested under a concentrated load at mid span. The results are compared with the equations proposed by different international building codes, such as ACI, AASHTO LRFD, EC (Euro Code), the Canadian Code and the Japanese Code, for the shear strength of HSRC beams. From this comparison, it has been observed that some codes are less conservative for the shear design of HSRC beams and further research is required to rationalize these equations.

  1. Quasi-experimental designs in practice-based research settings: design and implementation considerations.

    Science.gov (United States)

    Handley, Margaret A; Schillinger, Dean; Shiboski, Stephen

    2011-01-01

    Although randomized controlled trials are often a gold standard for determining intervention effects, in the area of practice-based research (PBR) there are many situations in which individual randomization is not possible. Alternative approaches to evaluating interventions have received increased attention, particularly those that retain elements of randomization such that they can be considered "controlled" trials. Methodological design elements and practical implementation considerations for two quasi-experimental design approaches that have considerable promise in PBR settings--the stepped-wedge design, and a variant of this design, the wait-list cross-over design--are presented along with a case study from a recent PBR intervention for patients with diabetes. PBR-relevant design features include: creation of a cohort over time that collects control data but allows all participants (clusters or patients) to receive the intervention; staggered introduction of clusters; multiple data collection points; and one-way cross-over into the intervention arm. Practical considerations include randomization versus stratification, training run-in phases, and an extended time period for overall study completion. Several design features of practice-based research studies can be adapted to local circumstances yet retain elements that improve methodological rigor. Studies that utilize these methods, such as the stepped-wedge design and the wait-list cross-over design, can increase the evidence base for controlled studies conducted within the complex environment of PBR.
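
    The stepped-wedge rollout described above can be written down as a small exposure matrix with one row per cluster and one column per period. The sketch below (cluster count and one-switch-per-period layout are illustrative assumptions, not the case study's schedule) builds the one-way crossover pattern in which control data are collected while clusters switch in a staggered fashion:

    ```python
    import numpy as np

    def stepped_wedge(n_clusters):
        """Build a stepped-wedge exposure matrix: rows = clusters, cols = periods.

        0 = control, 1 = intervention. Every cluster starts in control and
        crosses over one-way into the intervention arm at a staggered period,
        so the first period is all-control and the last is all-intervention.
        """
        periods = n_clusters + 1            # baseline period + one switch per cluster
        design = np.zeros((n_clusters, periods), dtype=int)
        for c in range(n_clusters):
            design[c, c + 1:] = 1           # cluster c switches after period c
        return design

    D = stepped_wedge(4)
    print(D)
    # Each row has a single 0 -> 1 transition (one-way crossover), so every
    # cluster eventually receives the intervention while still contributing
    # control observations in its pre-switch periods.
    ```

    In an analysis, each cell of this matrix indexes whether a cluster-period's observations belong to the control or intervention condition.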

  2. MicroarrayDesigner: an online search tool and repository for near-optimal microarray experimental designs

    Directory of Open Access Journals (Sweden)

    Ferhatosmanoglu Nilgun

    2009-09-01

    Full Text Available Abstract Background Dual-channel microarray experiments are commonly employed for inference of differential gene expressions across varying organisms and experimental conditions. The design of dual-channel microarray experiments that can help minimize the errors in the resulting inferences has recently received increasing attention. However, a general and scalable search tool and a corresponding database of optimal designs were still missing. Description An efficient and scalable search method for finding near-optimal dual-channel microarray designs, based on a greedy hill-climbing optimization strategy, has been developed. It is empirically shown that this method can successfully and efficiently find near-optimal designs. Additionally, an improved interwoven loop design construction algorithm has been developed to provide an easily computable general class of near-optimal designs. Finally, in order to make the best results readily available to biologists, a continuously evolving catalog of near-optimal designs is provided. Conclusion A new search algorithm and database for near-optimal microarray designs have been developed. The search tool and the database are accessible via the World Wide Web at http://db.cse.ohio-state.edu/MicroarrayDesigner. Source code and binary distributions are available for academic use upon request.
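
    The greedy hill-climbing idea behind such a search tool can be illustrated on a generic design-selection problem. The sketch below is not the MicroarrayDesigner algorithm itself; the criterion (A-optimality), pool size, and single-swap move are assumptions chosen for illustration. It selects a subset of candidate runs by accepting only score-improving swaps:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def a_criterion(X):
        """A-optimality score: trace of the inverse information matrix (lower is better)."""
        M = X.T @ X
        if np.linalg.matrix_rank(M) < M.shape[0]:
            return np.inf                    # singular design: unusable
        return np.trace(np.linalg.inv(M))

    def greedy_hill_climb(candidates, k, iters=300):
        """Pick k runs from a candidate pool, improving by random single swaps."""
        n = len(candidates)
        chosen = rng.choice(n, size=k, replace=False)
        best = a_criterion(candidates[chosen])
        for _ in range(iters):
            out = rng.integers(k)                          # position to swap out
            pool = np.setdiff1d(np.arange(n), chosen)      # runs not yet chosen
            trial = chosen.copy()
            trial[out] = rng.choice(pool)                  # candidate to swap in
            score = a_criterion(candidates[trial])
            if score < best:                               # accept only improvements
                chosen, best = trial, score
        return chosen, best

    # Candidate runs: intercept plus two factors at random settings in [-1, 1]
    cand = np.column_stack([np.ones(50), rng.uniform(-1, 1, (50, 2))])
    idx, score = greedy_hill_climb(cand, k=8)
    print(sorted(idx), score)
    ```

    The same accept-if-better loop applies when the "runs" are hybridization pairings of a dual-channel design and the criterion is the variance of the contrasts of interest.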

  3. Estimating Intervention Effects across Different Types of Single-Subject Experimental Designs: Empirical Illustration

    Science.gov (United States)

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S. Natasha; Van den Noortgate, Wim

    2015-01-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs…

  4. Experimental Charging Behavior of Orion UltraFlex Array Designs

    Science.gov (United States)

    Golofaro, Joel T.; Vayner, Boris V.; Hillard, Grover B.

    2010-01-01

    The present ground based investigations give the first definitive look describing the charging behavior of Orion UltraFlex arrays in both the Low Earth Orbit (LEO) and geosynchronous (GEO) environments. Note the LEO charging environment also applies to the International Space Station (ISS). The GEO charging environment includes the bounding case for all lunar mission environments. The UltraFlex photovoltaic array technology is targeted to become the sole power system for life support and on-orbit power for the manned Orion Crew Exploration Vehicle (CEV). The purpose of the experimental tests is to gain an understanding of the complex charging behavior to answer some of the basic performance and survivability issues to ascertain if a single UltraFlex array design will be able to cope with the projected worst case LEO and GEO charging environments. Stage 1 LEO plasma testing revealed that all four arrays successfully passed arc threshold bias tests down to -240 V. Stage 2 GEO electron gun charging tests revealed that only the front side area of indium tin oxide coated array designs successfully passed the arc frequency tests.

  5. Large-scale experimental design for decentralized SLAM

    Science.gov (United States)

    Cunningham, Alex; Dellaert, Frank

    2012-06-01

    This paper presents an analysis of large scale decentralized SLAM under a variety of experimental conditions to illustrate design trade-offs relevant to multi-robot mapping in challenging environments. As a part of work through the MAST CTA, the focus of these robot teams is on the use of small-scale robots with limited sensing, communication and computational resources. To evaluate mapping algorithms with large numbers (50+) of robots, we developed a simulation incorporating sensing of unlabeled landmarks, line-of-sight blocking obstacles, and communication modeling. Scenarios are randomly generated with variable models for sensing, communication, and robot behavior. The underlying Decentralized Data Fusion (DDF) algorithm in these experiments enables robots to construct a map of their surroundings by fusing local sensor measurements with condensed map information from neighboring robots. Each robot maintains a cache of previously collected condensed maps from neighboring robots, and actively distributes these maps throughout the network to ensure resilience to communication and node failures. We bound the size of the robot neighborhoods to control the growth of the size of neighborhood maps. We present the results of experiments conducted in these simulated scenarios under varying measurement models and conditions while measuring mapping performance. We discuss the trade-offs between mapping performance and scenario design, including robot teams separating and joining, multi-robot data association, exploration bounding, and neighborhood sizes.

  6. Tabletop Games: Platforms, Experimental Games and Design Recommendations

    Science.gov (United States)

    Haller, Michael; Forlines, Clifton; Koeffel, Christina; Leitner, Jakob; Shen, Chia

    While the last decade has seen massive improvements in not only the rendering quality, but also the overall performance of console and desktop video games, these improvements have not necessarily led to a greater population of video game players. In addition to continuing these improvements, the video game industry is also constantly searching for new ways to convert non-players into dedicated gamers. Despite the growing popularity of computer-based video games, people still love to play traditional board games, such as Risk, Monopoly, and Trivial Pursuit. Both video and board games have their strengths and weaknesses, and an intriguing possibility is to merge the two worlds. We believe that a tabletop form factor provides an ideal interface for digital board games. The design and implementation of tabletop games will be influenced by the hardware platforms, form factors, sensing technologies, as well as the input techniques and devices that are available and chosen. This chapter is divided into three major sections. In the first section, we describe the most recent tabletop hardware technologies that have been used by tabletop researchers and practitioners. In the second section, we discuss a set of experimental tabletop games. The third section presents ten evaluation heuristics for tabletop game design.

  7. Numerical and experimental design of coaxial shallow geothermal energy systems

    Science.gov (United States)

    Raghavan, Niranjan

    Geothermal energy has emerged as one of the front runners in the energy race because of its performance efficiency, abundance and production competitiveness. Today, geothermal energy is used in many regions of the world as a sustainable solution for decreasing dependence on fossil fuels and reducing health hazards. However, projects related to geothermal energy have not received their deserved recognition due to the lack of computational tools associated with them and economic misconceptions related to their installation and functioning. This research focuses on numerical and experimental system design analysis of vertical shallow geothermal energy systems. The driving force is the temperature difference between a finite depth beneath the earth and its surface, which stimulates a continuous exchange of thermal energy from the sub-surface to the surface (a geothermal gradient is set up). This heat gradient is captured by the circulating refrigerant, thus tapping the geothermal energy from shallow depths. Traditionally, U-bend systems, which consist of two one-inch pipes with a U-bend connector at the bottom, have been widely used in geothermal applications. Alternative systems include coaxial pipes (pipe-in-pipe), which are the main focus of this research. Studies have shown that coaxial pipes have significantly higher thermal performance characteristics than U-bend pipes, with comparable production and installation costs. This makes them a viable design upgrade to traditional piping systems. Analytical and numerical heat transfer analysis of the coaxial system is carried out with the help of ABAQUS software. It is tested by varying independent parameters such as materials, soil conditions and the effect of thermal contact conductance on heat transfer characteristics. With the above information, this research aims at formulating a preliminary theoretical design setup for an experimental study to quantify and compare the heat transfer characteristics of U-bend and coaxial

  8. Incomplete quality of life data in lung transplant research: comparing cross sectional, repeated measures ANOVA, and multi-level analysis

    Directory of Open Access Journals (Sweden)

    van der Bij Wim

    2005-09-01

    Full Text Available Abstract Background In longitudinal studies on Health Related Quality of Life (HRQL) it frequently occurs that patients have one or more missing forms, which may cause bias and reduce the sample size. The aims of the present study were to address the problem of missing data in the field of lung transplantation (LgTX) and HRQL, to compare results obtained with different methods of analysis, and to show the value of each type of statistical method used to summarize data. Methods Results from cross-sectional analysis, repeated measures ANOVA on complete cases, and multi-level analysis were compared. Scores on the dimension 'energy' of the Nottingham Health Profile (NHP) after transplantation were used to illustrate the differences between methods. Results Compared to repeated measures ANOVA, the cross-sectional and multi-level analyses included more patients and allowed for a longer period of follow-up. In contrast to the cross-sectional analysis, the complete-case analysis and the multi-level analysis take the correlation between different time points into account. Patterns over time were comparable across the three methods. In general, results from repeated measures ANOVA showed the most favorable energy scores, and results from the multi-level analysis the least favorable. Due to the separate subgroups per time point in the cross-sectional analysis, and the relatively small number of patients in the repeated measures ANOVA, inclusion of predictors was only possible in the multi-level analysis. Conclusion Results obtained with the various methods of analysis differed, indicating that some reduction of bias took place. Multi-level analysis is a useful approach to study changes over time in a data set with missing data: it reduces bias, makes efficient use of the available data, and allows the inclusion of predictors in studies concerning the effects of LgTX on HRQL.
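
    The pattern reported above, with complete-case repeated-measures analysis giving the most favorable scores, can be reproduced in a small simulation. In this sketch (all numbers are made up for illustration, not taken from the study), dropout depends on the patient's previous score, so the complete-case subsample is systematically healthier than the full cohort:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, t = 500, 4                      # patients, measurement occasions

    # Latent trajectories: correlated within-patient scores around a baseline
    base = rng.normal(50, 10, n)
    scores = base[:, None] + np.cumsum(rng.normal(0, 3, (n, t)), axis=1)

    # Dropout: patients with a low previous score are more likely to miss
    # later visits (missing-at-random given the observed history)
    observed = np.ones((n, t), dtype=bool)
    for j in range(1, t):
        p_drop = 1 / (1 + np.exp((scores[:, j - 1] - 40) / 5))
        observed[:, j] = observed[:, j - 1] & (rng.random(n) > p_drop)

    complete = observed.all(axis=1)                 # repeated-measures ANOVA sample
    cc_mean = scores[complete].mean(axis=0)
    avail_mean = np.array([scores[observed[:, j], j].mean() for j in range(t)])
    true_mean = scores.mean(axis=0)                 # what full follow-up would show

    print("complete cases:", complete.sum(), "of", n)
    print("complete-case means:", cc_mean.round(1))
    print("all-available means:", avail_mean.round(1))
    print("full-cohort means  :", true_mean.round(1))
    ```

    With this dropout mechanism the complete cases are the healthier patients, so the complete-case means are biased upward relative to the full cohort, the same direction as the "most favorable energy scores" seen for repeated measures ANOVA; a multi-level (mixed) model instead uses every available observation rather than only completers.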

  9. Detection of quasi-periodic processes in repeated measurements: New approach for the fitting and clusterization of different data

    Science.gov (United States)

    Nigmatullin, R.; Rakhmatullin, R.

    2014-12-01

    Many experimentalists are accustomed to thinking that independent measurements are uncorrelated and depend only weakly on one another. We reconsider this conventional point of view and show that similar measurements form a strongly correlated sequence of random functions with memory. In other words, successive measurements "remember" each other, at least their nearest neighbors. This observation, justified on real data, allows a wide set of data to be fitted with Prony's function. The Prony decomposition follows from the quasi-periodic (QP) properties of the measured functions and includes the Fourier transform as a special case. This new type of decomposition yields a specific amplitude-frequency response (AFR) of the measured (random) functions analyzed, and each random function is described by fewer fitting parameters than its number of initial data points. The calculated AFR can be considered as the generalized Prony's spectrum (GPS), which is extremely useful in cases where no simple model pretending to describe the measured data is available but a quantitative description is still vitally needed. These possibilities open a new way for clusterization of the initial data, and the new information contained in these data gives a chance for their detailed analysis. The electron paramagnetic resonance (EPR) measurements realized for an empty resonator (pure noise data) and a resonator containing a sample (CeO2 in our case) confirmed the existence of QP processes in reality. We think that the detection of QP processes is a common feature of many repeated measurements, and this new property of successive measurements may attract the attention of many experimentalists. To formulate some general conditions that help to identify and then detect the presence of some QP process in the repeated experimental measurements. To find a functional equation and its solution that
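
    A minimal version of the Prony fit mentioned above can be sketched as follows. This is the textbook three-step linear-prediction variant on synthetic noise-free data, not the authors' generalized Prony spectrum; the signal and model order are illustrative assumptions:

    ```python
    import numpy as np

    def prony(x, p):
        """Classic Prony fit: model x[n] ~ sum_k A_k * z_k**n with p exponentials.

        Returns the complex modes z_k and amplitudes A_k.
        """
        N = len(x)
        # 1) Linear prediction: x[n] = -a1*x[n-1] - ... - ap*x[n-p] (least squares)
        H = np.column_stack([x[p - 1 - j : N - 1 - j] for j in range(p)])
        a = np.linalg.lstsq(H, -x[p:], rcond=None)[0]
        # 2) The modes are the roots of the prediction-error polynomial
        z = np.roots(np.concatenate(([1.0], a)))
        # 3) Amplitudes from a Vandermonde least-squares fit
        V = z[None, :] ** np.arange(N)[:, None]
        A = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
        return z, A

    # Synthetic test signal: a damped oscillation plus a pure decay (3 modes total)
    n = np.arange(60)
    x = 2.0 * 0.95**n * np.cos(0.4 * n) + 0.5 * 0.9**n
    z, A = prony(x, 3)
    recon = (z[None, :] ** n[:, None] @ A).real
    print(np.max(np.abs(x - recon)))
    ```

    On noise-free data with the correct model order the reconstruction is essentially exact; the moduli and phases of the recovered modes z_k play the role of the amplitude-frequency response discussed in the abstract.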

  10. Statistical vs. stochastic experimental design: an experimental comparison on the example of protein refolding.

    Science.gov (United States)

    Anselment, Bernd; Schoemig, Veronika; Kesten, Christopher; Weuster-Botz, Dirk

    2012-01-01

    Optimization of experimental problems is a challenging task in both engineering and science. In principle, two different design of experiments (DOE) strategies exist: statistical and stochastic methods. Both aim to efficiently and precisely identify optimal solutions inside the problem-specific search space. Here, we evaluate and compare both strategies on the same experimental problem, the optimization of the refolding conditions of the lipase from Thermomyces lanuginosus, with 26 variables under study. Protein refolding is one of the main bottlenecks in process development for recombinant proteins. Despite intensive effort, the prediction of refolding from sequence information alone is still not possible today. Instead, suitable refolding conditions are typically derived empirically in large screening experiments. Thus, protein refolding should constitute a good performance test for DOE strategies. We compared an iterative stochastic optimization applying a genetic algorithm with a standard statistical design consisting of a D-optimal screening step followed by optimization via response surface methodology. Our results revealed that only the stochastic optimization was able to identify optimal refolding conditions (~1.400 U g(-1) refolded activity), which were 3.4-fold higher than the standard. Additionally, the stochastic optimization proved quite robust, as three independent optimizations performed similarly. In contrast, the statistical DOE resulted in a suboptimal solution and failed to identify comparable activities. Interactions between process variables proved to be pivotal for this optimization. Hence, the linear screening model was not able to identify the most important process variables correctly. Thereby, this study highlights the limits of the classic two-step statistical DOE.
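
    A stochastic optimization of the kind that won out here can be sketched as a small genetic algorithm. The toy objective below deliberately contains a pairwise interaction that a linear screening model would miss; the encoding, objective and GA settings are illustrative assumptions, not the study's setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def fitness(pop):
        # Toy objective with a strong pairwise interaction: the last two bits
        # only pay off together, which a linear screening model would miss.
        return pop[:, :-2].sum(axis=1) + 5 * (pop[:, -2] & pop[:, -1])

    def genetic_algorithm(n_bits=12, pop_size=40, generations=60, p_mut=0.05):
        pop = rng.integers(0, 2, (pop_size, n_bits))
        for _ in range(generations):
            fit = fitness(pop)
            # Tournament selection: each slot gets the fitter of two random parents
            i, j = rng.integers(0, pop_size, (2, pop_size))
            parents = np.where((fit[i] >= fit[j])[:, None], pop[i], pop[j])
            # One-point crossover on consecutive parent pairs
            children = parents.copy()
            for k in range(pop_size // 2):
                c = rng.integers(1, n_bits)
                children[2 * k, c:] = parents[2 * k + 1, c:]
                children[2 * k + 1, c:] = parents[2 * k, c:]
            # Bit-flip mutation, then elitism: carry the best individual unchanged
            children ^= (rng.random(children.shape) < p_mut).astype(children.dtype)
            children[0] = pop[np.argmax(fit)]
            pop = children
        best = pop[np.argmax(fitness(pop))]
        return best, int(fitness(best[None, :])[0])

    best, score = genetic_algorithm()
    print(best, score)   # the optimum of this toy objective is (n_bits - 2) + 5 = 15
    ```

    Because selection acts on whole candidate settings rather than on one factor at a time, the interaction term is exploited rather than averaged away, which mirrors why the genetic algorithm outperformed the linear screening step in the study.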

  11. Design review of the Brazilian Experimental Solar Telescope

    Science.gov (United States)

    Dal Lago, A.; Vieira, L. E. A.; Albuquerque, B.; Castilho, B.; Guarnieri, F. L.; Cardoso, F. R.; Guerrero, G.; Rodríguez, J. M.; Santos, J.; Costa, J. E. R.; Palacios, J.; da Silva, L.; Alves, L. R.; Costa, L. L.; Sampaio, M.; Dias Silveira, M. V.; Domingues, M. O.; Rockenbach, M.; Aquino, M. C. O.; Soares, M. C. R.; Barbosa, M. J.; Mendes, O., Jr.; Jauer, P. R.; Branco, R.; Dallaqua, R.; Stekel, T. R. C.; Pinto, T. S. N.; Menconi, V. E.; Souza, V. M. C. E. S.; Gonzalez, W.; Rigozo, N.

    2015-12-01

    Brazil's National Institute for Space Research (INPE), in collaboration with the Engineering School of Lorena/University of São Paulo (EEL/USP), the Federal University of Minas Gerais (UFMG), and the Brazilian National Laboratory for Astrophysics (LNA), is developing a solar vector magnetograph and visible-light imager to study solar processes through observations of the solar surface magnetic field. The Brazilian Experimental Solar Telescope is designed to obtain full-disk magnetic field and line-of-sight velocity observations in the photosphere. Here we discuss the system requirements and the first design review of the instrument. The instrument is composed of a Ritchey-Chrétien telescope with a 500 mm aperture and 4000 mm focal length. LCD polarization modulators will be employed for the polarization analysis and a tunable Fabry-Perot filter for the wavelength scanning near the Fe II 630.25 nm line. Two large field-of-view, high-resolution 5.5 megapixel sCMOS cameras will be employed as sensors. Additionally, we describe the project management and system engineering approaches employed in this project. As the magnetic field anchored at the solar surface produces most of the structures and energetic events in the upper solar atmosphere and significantly influences the heliosphere, the development of this instrument plays an important role in advancing scientific knowledge in this field. In particular, the Brazilian Space Weather program will benefit most from the development of this technology. We expect that this project will be the starting point for establishing a strong research program on solar physics in Brazil. Our main aim is to progressively acquire the know-how to build state-of-the-art solar vector magnetographs and visible-light imagers for space-based platforms.

  12. Optimal experimental design with the sigma point method.

    Science.gov (United States)

    Schenkendorf, R; Kremling, A; Mangold, M

    2009-01-01

    Using mathematical models for a quantitative description of dynamical systems requires the identification of uncertain parameters by minimising the difference between simulation and measurement. Owing to measurement noise, the estimated parameters also possess an uncertainty, expressed by their variances. To obtain highly predictive models, very precise parameters are needed. Optimal experimental design (OED), as a numerical optimisation method, is used to reduce the parameter uncertainty by minimising the parameter variances iteratively. A frequently applied method to define a cost function for OED is based on the inverse of the Fisher information matrix. The application of this traditional method has at least two shortcomings for models that are nonlinear in their parameters: (i) it gives only a lower bound of the parameter variances and (ii) the bias of the estimator is neglected. Here, the authors show that by applying the sigma point (SP) method a better approximation of characteristic values of the parameter statistics can be obtained, which has a direct benefit for OED. An additional advantage of the SP method is that it can also be used to investigate the influence of the parameter uncertainties on the simulation results. The SP method is demonstrated on the example of a widely used biological model.
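
    The sigma point method referred to above is the unscented transform: the nonlinear function is evaluated at a small set of deterministically chosen points so that the propagated mean and covariance are exact for quadratic nonlinearities, where a first-order (Fisher-style) linearization is not. A minimal sketch, with a scalar toy model chosen purely for illustration:

    ```python
    import numpy as np

    def unscented_moments(mu, P, f, alpha=1.0, beta=2.0, kappa=0.0):
        """Sigma-point (unscented) approximation of the mean/cov of f(x), x ~ N(mu, P)."""
        n = len(mu)
        lam = alpha**2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * P)
        pts = np.vstack([mu, mu + S.T, mu - S.T])     # the 2n+1 sigma points
        wm = np.full(2 * n + 1, 1 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = wm[0] + (1 - alpha**2 + beta)
        y = np.array([f(p) for p in pts])
        mean = wm @ y
        cov = (wc * (y - mean).T) @ (y - mean)
        return mean, cov

    f = lambda x: np.array([x[0] ** 2])           # nonlinear measurement model
    mu, P = np.array([1.0]), np.array([[0.25]])
    m_sp, c_sp = unscented_moments(mu, P, f)
    m_lin = f(mu)                                  # first-order estimate: just f(mu)
    m_true = mu[0] ** 2 + P[0, 0]                  # exact mean of x^2 for a Gaussian
    print(m_sp[0], m_lin[0], m_true)
    ```

    For this quadratic model the sigma-point mean (1.25) and variance (1.125) match the exact Gaussian moments, while the linearized mean (1.0) misses the curvature term P, which is exactly the bias the abstract says the Fisher-matrix approach neglects.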

  13. Tokamak experimental power reactor conceptual design. Volume I

    Energy Technology Data Exchange (ETDEWEB)

    1976-08-01

    A conceptual design has been developed for a tokamak Experimental Power Reactor to operate at net electrical power conditions with a plant capacity factor of 50 percent for 10 years. The EPR operates in a pulsed mode at a frequency of approximately 1/min with an approximately 75 percent duty cycle, is capable of producing approximately 72 MWe, and requires 42 MWe. The annual tritium consumption is 16 kg. The EPR vacuum chamber is 6.25 m in major radius and 2.4 m in minor radius, is constructed of 2-cm thick stainless steel, and has 2-cm thick detachable, beryllium-coated coolant panels mounted on the interior. A 0.28 m stainless steel blanket and a shield ranging from 0.6 to 1.0 m surround the vacuum vessel. The coolant is H2O. Sixteen niobium-titanium superconducting toroidal-field coils provide a field of 10 T at the coil and 4.47 T at the plasma. Superconducting ohmic-heating and equilibrium-field coils provide 135 V-s to drive the plasma current. Plasma heating is accomplished by 12 neutral beam injectors, which provide 60 MW. The energy transfer and storage system consists of a central superconducting storage ring, a homopolar energy storage unit, and a variety of inductor-converters.

  14. Sparsely Sampling the Sky: A Bayesian Experimental Design Approach

    CERN Document Server

    Paykari, P

    2012-01-01

    The next generation of galaxy surveys will observe millions of galaxies over large volumes of the universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this work, by making use of the principles of Bayesian Experimental Design, we will investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45%. Conversely, investing the sam...

  15. [Experimental design and data handling in food microbiology].

    Science.gov (United States)

    Carbonell, E A

    1993-02-01

    A discussion of the problems associated with designing experiments in Food Microbiology research is presented. After defining what is meant by the Design of an Experiment, a series of questions are raised that, once answered, will help in properly designing the experiment. Emphasis is placed on the research-design-model-analysis-design chain and on the danger of blindly using well-known designs and canned programs.

  16. An efficient approach to bioconversion kinetic model generation based on automated microscale experimentation integrated with model driven experimental design

    DEFF Research Database (Denmark)

    Chen, B. H.; Micheletti, M.; Baganz, F.;

    2009-01-01

    design. It incorporates a model driven approach to the experimental design that minimises the number of experiments to be performed, while still generating accurate values of kinetic parameters. The approach has been illustrated with the transketolase mediated asymmetric synthesis of L...... experimental design. In comparison with conventional methodology, the modelling approach enabled a nearly 4-fold decrease in the number of experiments while the microwell experimentation enabled a 45-fold decrease in material requirements and a significant increase in experimental throughput. The approach......Reliable models of enzyme kinetics are required for the effective design of bioconversion processes. Kinetic expressions of the enzyme-catalysed reaction rate however, are frequently complex and establishing accurate values of kinetic parameters normally requires a large number of experiments...

  17. Experimental Design on Laminated Veneer Lumber Fiber Composite: Surface Enhancement

    Science.gov (United States)

    Meekum, U.; Mingmongkol, Y.

    2010-06-01

    Thick laminated veneer lumber (LVL) fibre-reinforced composites were constructed from perpendicularly alternated layers of peeled rubber wood. Woven glass was laid in between the layers. Native golden teak veneers were used as faces. An in-house epoxy formulation was employed as the wood adhesive. The hand lay-up laminate was cured at 150 °C for 45 min. The cut specimens were post-cured at 80 °C for at least 5 hours. A 2^k factorial design of experiments (DOE) was used to evaluate the parameters. Three parameters, namely silane content in the epoxy formulation (A), smoke treatment of the rubber wood surface (B) and anti-termite application on the wood surface (C), were analysed. Both low and high levels were further subcategorised into 2 sub-levels. Flexural properties were the main response obtained. ANOVA analysis with a Pareto chart was employed, and the main effect plot was also examined. The results showed that the interaction between silane quantity and termite treatment had a negative effect at the high level (AC+). Conversely, the interaction between silane and smoke treatment had a significant positive effect at the high level (AB+). According to this research work, the optimal settings to improve surface adhesion, and hence enhance flexural properties, were a high level of silane quantity (15% by weight), a high level of smoked wood layers (8 out of 14 layers), and a low level of anti-termite application. Further tests also revealed that the LVL composite had properties superior to the solid woods but slightly inferior flexibility. The screw withdrawal strength of the LVL was higher than that of solid wood. It also showed better resistance to moisture and termite attack than the rubber wood.
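The 2^k factorial analysis described in this record (main effects and interactions such as AC and AB, feeding a Pareto chart and ANOVA) can be sketched with simple contrast arithmetic. The run order and response values below are hypothetical illustrations, not the paper's data:

```python
from itertools import product

# Hypothetical 2^3 full factorial: factors A (silane), B (smoke), C (anti-termite),
# coded -1/+1. Responses are made-up flexural-strength values for illustration.
runs = list(product([-1, 1], repeat=3))
response = [52.0, 55.0, 50.0, 60.0, 48.0, 54.0, 41.0, 58.0]

def effect(term_indices):
    """Average contrast for a main effect or interaction, e.g. (0,) for A, (0, 2) for AC."""
    contrast = 0.0
    for run, y in zip(runs, response):
        sign = 1
        for i in term_indices:
            sign *= run[i]
        contrast += sign * y
    return contrast / (len(runs) / 2)  # divide by 2^(k-1), half the runs

main_A = effect((0,))      # main effect of A
inter_AC = effect((0, 2))  # A x C interaction
print(main_A, inter_AC)
```

Each effect is the average response difference between a term's high and low settings; the same contrasts are what a Pareto chart of effects displays.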

  18. Optimization of model parameters and experimental designs with the Optimal Experimental Design Toolbox (v1.0) exemplified by sedimentation in salt marshes

    Science.gov (United States)

    Reimer, J.; Schuerch, M.; Slawig, T.

    2015-03-01

    The geosciences are a highly suitable field of application for optimizing model parameters and experimental designs especially because many data are collected. In this paper, the weighted least squares estimator for optimizing model parameters is presented together with its asymptotic properties. A popular approach to optimize experimental designs called local optimal experimental designs is described together with a lesser known approach which takes into account the potential nonlinearity of the model parameters. These two approaches have been combined with two methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox whose structure and application is described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two existing models for sediment concentration in seawater and sediment accretion on salt marshes of different complexity served as an application example. The advantages and disadvantages of these approaches were compared based on these models. Thanks to optimized experimental designs, the parameters of these models could be determined very accurately with significantly fewer measurements compared to unoptimized experimental designs. The chosen optimization approach played a minor role for the accuracy; therefore, the approach with the least computational effort is recommended.
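The kind of local optimal experimental design the toolbox computes can be illustrated with a brute-force D-optimality search over candidate measurement times. The model (a simple exponential decay), the nominal parameters, and the candidate grid below are illustrative assumptions, not the toolbox's sediment models:

```python
import math
from itertools import combinations

# Sketch of a locally D-optimal design search for the illustrative nonlinear
# model y(t) = a * exp(-b * t). A "local" design depends on nominal parameters.
a0, b0 = 1.0, 0.5  # nominal parameter guesses (assumed, for illustration)

def jacobian_row(t):
    # Partial derivatives of y(t) with respect to (a, b) at the nominal values.
    return (math.exp(-b0 * t), -a0 * t * math.exp(-b0 * t))

def det_fisher(times):
    # Determinant of the 2x2 Fisher information (unit error variance):
    # sum of outer products of Jacobian rows over the chosen times.
    m11 = m12 = m22 = 0.0
    for t in times:
        ja, jb = jacobian_row(t)
        m11 += ja * ja
        m12 += ja * jb
        m22 += jb * jb
    return m11 * m22 - m12 * m12

candidates = [0.25 * k for k in range(1, 41)]  # candidate measurement times
best = max(combinations(candidates, 2), key=det_fisher)
print(best)
```

For this model the search recovers the classic result that one measurement should be as early as possible and the second about 1/b later, which is why an optimized design can match the accuracy of many more unoptimized measurements.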

  19. City Connects: Building an Argument for Effects on Student Achievement with a Quasi-Experimental Design

    Science.gov (United States)

    Walsh, Mary; Raczek, Anastasia; Sibley, Erin; Lee-St. John, Terrence; An, Chen; Akbayin, Bercem; Dearing, Eric; Foley, Claire

    2015-01-01

    While randomized experimental designs are the gold standard in education research concerned with causal inference, non-experimental designs are ubiquitous. For researchers who work with non-experimental data and are no less concerned with causal inference, the major problem is potential omitted variable bias. In this presentation, the authors…

  20. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Science.gov (United States)

    2010-01-01

    ... Conditions of an Experimental Permit § 437.85 Allowable design changes; modification of an experimental... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE...

  1. An integrated approach to biomonitoring exposure to styrene and styrene-(7,8)-oxide using a repeated measurements sampling design.

    Science.gov (United States)

    Fustinoni, S; Campo, L; Manini, P; Buratti, M; Waidyanatha, S; De Palma, G; Mutti, A; Foa, V; Colombi, A; Rappaport, S M

    2008-09-01

    The aim of this work was to investigate urinary analytes and haemoglobin and albumin adducts as biomarkers of exposure to airborne styrene (Sty) and styrene-(7,8)-oxide (StyOX) and to evaluate the influence of smoking habit and genetic polymorphism of metabolic enzymes GSTM1 and GSTT1 on these biomarkers. We obtained three or four air and urine samples from each exposed worker (eight reinforced plastics workers and 13 varnish workers), one air and one urine sample from each of 22 control workers (automobile mechanics) and one blood sample from all subjects. Median levels of exposure to Sty and StyOX, respectively, were 18.2 mg m(-3) and 133 microg m(-3) for reinforced plastics workers, 3.4 mg m(-3) and 12 microg m(-3) for varnish workers, and <0.3 mg m(-3) and <5 microg m(-3) for controls. Urinary levels of styrene, mandelic acid, phenylglyoxylic acid, phenylglycine (PHG), 4-vinylphenol (VP) and mercapturic acids (M1+M2), as well as cysteinyl adducts of serum albumin (but not those of haemoglobin) were significantly associated with exposure status (controls

  2. Artificial Warming of Arctic Meadow under Pollution Stress: Experimental design

    Science.gov (United States)

    Moni, Christophe; Silvennoinen, Hanna; Fjelldal, Erling; Brenden, Marius; Kimball, Bruce; Rasse, Daniel

    2014-05-01

    Boreal and arctic terrestrial ecosystems are central to the climate change debate, notably because future warming is expected to be disproportionate as compared to world averages. Likewise, greenhouse gas (GHG) release from terrestrial ecosystems exposed to climate warming is expected to be the largest in the Arctic. Arctic agriculture, in the form of cultivated grasslands, is a unique and economically relevant feature of Northern Norway (e.g. Finnmark Province). In Eastern Finnmark, these agro-ecosystems are under the additional stressor of heavy metal and sulfur pollution generated by metal smelters of NW Russia. Warming and its interaction with heavy metal dynamics will influence meadow productivity, species composition and GHG emissions, as mediated by responses of soil microbial communities. Adaptation and mitigation measures will be needed. Biochar application, which immobilizes heavy metals, is a promising adaptation method to promote a positive growth response in arctic meadows exposed to a warming climate. In the MeadoWarm project we conduct an ecosystem warming experiment combined with biochar adaptation treatments in the heavy-metal-polluted meadows of Eastern Finnmark. In summary, the general objective of this study is twofold: 1) to determine the response of arctic agricultural ecosystems under environmental stress to increased temperatures, both in terms of plant growth, soil organisms and GHG emissions, and 2) to determine if biochar application can serve as a positive adaptation (plant growth) and mitigation (GHG emission) strategy for these ecosystems under warming conditions. Here, we present the experimental site and the designed open-field warming facility. The selected site is an arctic meadow located at the Svanhovd Research Station less than 10 km west of the Russian mining city of Nikel. A split-plot design with 5 replicates for each treatment is used to test the effect of biochar amendment and a 3 °C warming on the Arctic meadow.
Ten circular

  3. A Finite Mixture of Nonlinear Random Coefficient Models for Continuous Repeated Measures Data.

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R; Zopluoglu, Cengiz

    2016-09-01

    Nonlinear random coefficient models (NRCMs) for continuous longitudinal data are often used for examining individual behaviors that display nonlinear patterns of development (or growth) over time in measured variables. As an extension of this model, this study considers the finite mixture of NRCMs that combine features of NRCMs with the idea of finite mixture (or latent class) models. The efficacy of this model is that it allows the integration of intrinsically nonlinear functions where the data come from a mixture of two or more unobserved subpopulations, thus allowing the simultaneous investigation of intra-individual (within-person) variability, inter-individual (between-person) variability, and subpopulation heterogeneity. Effectiveness of this model to work under real data analytic conditions was examined by executing a Monte Carlo simulation study. The simulation study was carried out using an R routine specifically developed for the purpose of this study. The R routine used maximum likelihood with the expectation-maximization algorithm. The design of the study mimicked the output obtained from running a two-class mixture model on task completion data.
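The EM machinery behind such finite mixture models can be sketched in a drastically simplified form: a two-class Gaussian mixture with a single observation per "subject" instead of a nonlinear growth curve. The E-step/M-step alternation is the same idea; all data and parameter values are illustrative:

```python
import math
import random

# Minimal EM sketch for a two-class mixture (toy stand-in for finite mixtures
# of nonlinear random coefficient models). Simulated, well-separated classes.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(4.0, 1.0) for _ in range(200)]

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

pi, mu1, mu2, sigma = 0.5, -1.0, 1.0, 1.0  # crude starting values; sigma held fixed
for _ in range(50):
    # E-step: posterior probability that each point belongs to class 1
    resp = [pi * normal_pdf(x, mu1, sigma) /
            (pi * normal_pdf(x, mu1, sigma) + (1 - pi) * normal_pdf(x, mu2, sigma))
            for x in data]
    # M-step: update mixing weight and the two class means
    n1 = sum(resp)
    pi = n1 / len(data)
    mu1 = sum(r * x for r, x in zip(resp, data)) / n1
    mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - n1)

print(round(pi, 2), round(mu1, 2), round(mu2, 2))
```

The estimated mixing weight and class means recover the two simulated subpopulations, mirroring how the paper's R routine separates latent classes of growth trajectories.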

  4. Design of a Combined Ballistic Simulator and Primer Force Experimental Fixture

    Science.gov (United States)

    2015-08-01

    ARL-MR-0896, AUG 2015, US Army Research Laboratory: Design of a Combined Ballistic Simulator and Primer Force Experimental Fixture. [Report documentation page and front matter; contents include Introduction, Background, and Technical Approach (Hardware Design, Experimental Setup).]

  5. Psychological impact and recovery after involvement in a patient safety incident: a repeated measures analysis

    Science.gov (United States)

    Van Gerven, Eva; Bruyneel, Luk; Panella, Massimiliano; Euwema, Martin; Sermeus, Walter; Vanhaecht, Kris

    2016-01-01

    Objective To examine individual, situational and organisational aspects that influence the psychological impact and recovery of a patient safety incident on physicians, nurses and midwives. Design Cross-sectional, retrospective surveys of physicians, midwives and nurses. Setting 33 Belgian hospitals. Participants 913 clinicians (186 physicians, 682 nurses, 45 midwives) involved in a patient safety incident. Main outcome measures The Impact of Event Scale was used to retrospectively measure the psychological impact of the safety incident at the time of the event and compare it with the psychological impact at the time of the survey. Results Individual, situational as well as organisational aspects influenced the psychological impact and recovery of a patient safety incident. Psychological impact is higher when the degree of harm for the patient is more severe, when healthcare professionals feel responsible for the incident, and among female healthcare professionals. Impact of degree of harm differed across clinicians. Psychological impact is lower among more optimistic professionals. Overall, impact decreased significantly over time. This effect was more pronounced for women and for those who feel responsible for the incident. The longer ago the incident took place, the more the impact had decreased. Also, higher psychological impact is related to the use of more active coping and planning coping strategies, and is unrelated to support-seeking coping strategies. Rendered support and a support culture reduce psychological impact, whereas a blame culture increases psychological impact. No associations were found with job experience and resilience of the health professional, the presence of a second victim support team or guideline, or working in a learning culture. Conclusions Healthcare organisations should anticipate providing their staff with appropriate and timely support structures that are tailored to the healthcare professional involved in the incident and to the specific

  6. Efficient Experimental Design Strategies in Toxicology and Bioassay

    Directory of Open Access Journals (Sweden)

    Timothy E. O'Brien

    2016-06-01

    Full Text Available Modelling in bioassay often uses linear or nonlinear logistic regression models, and relative potency is often the focus when two or more compounds are to be compared.  Estimation in these settings is typically based on likelihood methods.  Here, we focus on the 3-parameter model representation given in Finney (1978), in which the relative potency is a model parameter.  Using key matrix results and the general equivalence theorem of Kiefer & Wolfowitz (1960), this paper establishes key design properties of the optimal design for relative potency using this model.  We also highlight aspects of subset designs for the relative potency parameter and extend geometric designs to efficient design settings of bioassay.  These latter designs are thus useful for both parameter estimation and checking for goodness-of-fit.  A typical yet insightful example is provided from the field of toxicology to illustrate our findings.
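The role of relative potency as a model parameter can be illustrated with a Finney-style parallel logistic model, where the test compound behaves like the standard at rho times the dose. The parameter values below are illustrative assumptions, not the paper's example:

```python
import math

# Parallel-curve sketch: both compounds share one logistic shape in log-dose,
# and relative potency rho enters as a dose multiplier for the test compound.
def response(dose, a=0.0, b=1.5, rho=1.0):
    # 2-parameter logistic in log-dose; a, b, rho are illustrative values.
    return 1.0 / (1.0 + math.exp(-(a + b * math.log(rho * dose))))

rho = 4.0  # assume the test compound is 4x as potent as the standard
# Equal responses occur whenever doses differ exactly by the factor rho:
r_standard = response(10.0)
r_test = response(2.5, rho=rho)
print(round(r_standard, 4), round(r_test, 4))
```

Because rho acts as a horizontal shift of the whole curve, a single parameter captures the potency comparison, which is what makes design criteria for estimating rho tractable.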

  7. Repeated measures of inflammation and oxidative stress biomarkers in preeclamptic and normotensive pregnancies.

    Science.gov (United States)

    Ferguson, Kelly K; Meeker, John D; McElrath, Thomas F; Mukherjee, Bhramar; Cantonwine, David E

    2017-05-01

    Preeclampsia is a prevalent and enigmatic disease, in part characterized by poor remodeling of the spiral arteries. However, preeclampsia does not always clinically present when remodeling has failed to occur. Hypotheses surrounding the "second hit" that is necessary for the clinical presentation of the disease focus on maternal inflammation and oxidative stress. Yet, the studies to date that have investigated these factors have used cross-sectional study designs or small study populations. In the present study, we sought to explore longitudinal trajectories, beginning early in gestation, of a panel of inflammation and oxidative stress markers in women who went on to have preeclamptic or normotensive pregnancies. We examined 441 subjects from the ongoing LIFECODES prospective birth cohort, which included 50 mothers who experienced preeclampsia and 391 mothers with normotensive pregnancies. Participants provided urine and plasma samples at 4 time points during gestation (median, 10, 18, 26, and 35 weeks) that were analyzed for a panel of oxidative stress and inflammation markers. Oxidative stress biomarkers included 8-isoprostane and 8-hydroxydeoxyguanosine. Inflammation biomarkers included C-reactive protein, the cytokines interleukin-1β, -6, and -10, and tumor necrosis factor-α. We created Cox proportional hazard models to calculate hazard ratios based on time of preeclampsia diagnosis in association with biomarker concentrations at each of the 4 study visits. In adjusted models, hazard ratios of preeclampsia were significantly elevated for inflammation biomarkers that were measured at visit 2 (median, 18 weeks; hazard ratios, 1.31-1.83, in association with an interquartile range increase in biomarker). Hazard ratios at this time point were the most elevated for C-reactive protein, for interleukin-1β, -6, and -10, and for the oxidative stress biomarker 8-isoprostane (hazard ratio, 1.68; 95% confidence interval, 1.14-2.48) compared to other time points. Hazard ratios for

  8. Design, experimentation, and modeling of a novel continuous biodrying process

    Science.gov (United States)

    Navaee-Ardeh, Shahram

    Massive production of sludge in the pulp and paper industry has made the effective sludge management increasingly a critical issue for the industry due to high landfill and transportation costs, and complex regulatory frameworks for options such as sludge landspreading and composting. Sludge dewatering challenges are exacerbated at many mills due to improved in-plant fiber recovery coupled with increased production of secondary sludge, leading to a mixed sludge with a high proportion of biological matter which is difficult to dewater. In this thesis, a novel continuous biodrying reactor was designed and developed for drying pulp and paper mixed sludge to economic dry solids level so that the dried sludge can be economically and safely combusted in a biomass boiler for energy recovery. In all experimental runs the economic dry solids level was achieved, proving the process successful. In the biodrying process, in addition to the forced aeration, the drying rates are enhanced by biological heat generated through the microbial activity of mesophilic and thermophilic microorganisms naturally present in the porous matrix of mixed sludge. This makes the biodrying process more attractive compared to the conventional drying techniques because the reactor is a self-heating process. The reactor is divided into four nominal compartments and the mixed sludge dries as it moves downward in the reactor. The residence times were 4-8 days, which are 2-3 times shorter than the residence times achieved in a batch biodrying reactor previously studied by our research group for mixed sludge drying. A process variable analysis was performed to determine the key variable(s) in the continuous biodrying reactor. Several variables were investigated, namely: type of biomass feed, pH of biomass, nutrition level (C/N ratio), residence times, recycle ratio of biodried sludge, and outlet relative humidity profile along the reactor height. 
The key variables that were identified in the continuous

  9. Use of Experimental Design for Peuhl Cheese Process Optimization ...

    African Journals Online (AJOL)

    Devika

    added (7 mL), heating temperature (84.12 °C) and heating time (15 min). When these optimal ... conditions constitute one of the major obstacles for ... a two-level factorial design; • a design of ... balance (Sartorius GmbH, Göttingen, Germany).

  10. Mediation of the Relationship between Maternal Phthalate Exposure and Preterm Birth by Oxidative Stress with Repeated Measurements across Pregnancy.

    Science.gov (United States)

    Ferguson, Kelly K; Chen, Yin-Hsiu; VanderWeele, Tyler J; McElrath, Thomas F; Meeker, John D; Mukherjee, Bhramar

    2017-03-01

    Mediation analysis is useful for understanding mechanisms and has been used minimally in the study of the environment and disease. We examined mediation of the association between phthalate exposure during pregnancy and preterm birth by oxidative stress. This nested case-control study of preterm birth (n = 130 cases, 352 controls) included women who delivered in Boston, Massachusetts, from 2006 through 2008. Phthalate metabolites and 8-isoprostane, an oxidative stress biomarker, were measured in urine from three visits in pregnancy. We applied four counterfactual mediation methods: method 1, utilizing exposure and mediator averages; method 2, using averages but allowing for an exposure-mediator interaction; method 3, incorporating longitudinal measurements of the exposure and mediator; and method 4, using longitudinal measurements and allowing for an exposure-mediator interaction. We observed mediation of the associations between phthalate metabolites and all preterm births by 8-isoprostane, with the greatest estimated proportion mediated observed for spontaneous preterm births specifically. Fully utilizing repeated measures of the exposure and mediator improved precision of indirect (i.e., mediated) effect estimates, and including an exposure-mediator interaction increased the estimated proportion mediated. For example, for mono(2-ethyl-carboxy-propyl) phthalate (MECPP), a metabolite of di(2-ethylhexyl) phthalate (DEHP), the percent of the total effect mediated by 8-isoprostane increased from 47% to 60% with inclusion of an exposure-mediator interaction term, in reference to a total adjusted odds ratio of 1.67 or 1.48, respectively. This demonstrates mediation of the phthalate-preterm birth relationship by oxidative stress, and the utility of complex regression models in capturing mediated associations when repeated measures of exposure and mediator are available and an exposure-mediator interaction may exist. Citation: Ferguson KK, Chen YH, VanderWeele TJ, Mc
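The "proportion mediated" idea can be sketched in its simplest linear, continuous-outcome form (product of coefficients). This is a toy stand-in for the counterfactual methods the paper applies to a binary outcome; all effect sizes below are invented for illustration:

```python
import random

# Toy mediation: exposure X raises mediator M (alpha), M raises outcome Y (beta),
# plus a direct effect of X on Y. True indirect effect = alpha * beta.
random.seed(1)
n = 5000
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]                       # alpha = 0.5
Y = [0.3 * x + 0.8 * m + random.gauss(0, 1) for x, m in zip(X, M)]  # direct = 0.3, beta = 0.8

def slope(xs, ys):
    # Simple OLS slope with intercept (via centering).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sum((a - mx) ** 2 for a in xs)

def ols2(x1, x2, y):
    # Coefficient of x2 adjusting for x1, via residualization (Frisch-Waugh).
    g = slope(x1, x2)
    r2 = [b - g * a for a, b in zip(x1, x2)]
    gy = slope(x1, y)
    ry = [b - gy * a for a, b in zip(x1, y)]
    return slope(r2, ry)

alpha = slope(X, M)    # effect of X on M
beta = ols2(X, M, Y)   # effect of M on Y, adjusting for X
total = slope(X, Y)    # total effect of X on Y
prop_mediated = alpha * beta / total
print(round(prop_mediated, 2))
```

Here the true proportion mediated is 0.4/0.7 ≈ 0.57; repeated measures and exposure-mediator interactions, as in the paper, refine exactly this kind of decomposition.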

  11. Leveraging the Experimental Method to Inform Solar Cell Design

    Science.gov (United States)

    Rose, Mary Annette; Ribblett, Jason W.; Hershberger, Heather Nicole

    2010-01-01

    In this article, the underlying logic of experimentation is exemplified within the context of a photoelectrical experiment for students taking a high school engineering, technology, or chemistry class. Students assume the role of photochemists as they plan, fabricate, and experiment with a solar cell made of copper and an aqueous solution of…

  12. Experimental validation of micro endmill design for hard milling application

    NARCIS (Netherlands)

    Li, P.; Hoogstrate, A.M.; Oosterling, J.A.J.; Langen, H.H.; Munnig Schmidt, R.

    2008-01-01

    Abstract: In experimental investigations of micro milling of hardened tool steel SAE H11, with a hardness of 56 HRC, with commercially available micro square endmills of Ø 0.5mm, it was observed that the endmills suffered from severe tool wear/failure. Because of these problems, the quality of the m

  14. Gunnar Aagaard Andersen: Commercial Design and Experimental Art

    DEFF Research Database (Denmark)

    Gether, Vibeke Petersen

    2016-01-01

    in Copenhagen and the magazine Mobilia. These enterprises were pioneers within both the commercial–industrial creative field and experimental art from the 1950s. The industrial process, from idea and experiment to production,as well as the expanded field of visual art, was demonstrated in concrete painting...

  15. Experimental design for research on shock-turbulence interaction

    Science.gov (United States)

    Radcliffe, S. W.

    1969-01-01

    Report investigates the production of acoustic waves in the interaction of a supersonic shock and a turbulence environment. The five stages of the investigation are apparatus design, development of instrumentation, preliminary experiment, turbulence generator selection, and main experiments.

  16. Experimental techniques for design of impact-resistant material (poster)

    NARCIS (Netherlands)

    Fan, J.; Weerheijm, J.; Sluys, L.J.

    2013-01-01

    Some polymers are not only transparent and lightweight, but also impact and ballistic resistant. Designing and preparing such polymeric materials with a high impact‐resistant performance is of importance to e.g. aviation, military and windscreen applications.

  18. A Bayesian model for repeated measures zero-inflated count data with application to outpatient psychiatric service use

    Science.gov (United States)

    Neelon, Brian H.; O’Malley, A. James; Normand, Sharon-Lise T.

    2009-01-01

    In applications involving count data, it is common to encounter an excess number of zeros. In the study of outpatient service utilization, for example, the number of utilization days will take on integer values, with many subjects having no utilization (zero values). Mixed-distribution models, such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB), are often used to fit such data. A more general class of mixture models, called hurdle models, can be used to model zero-deflation as well as zero-inflation. Several authors have proposed frequentist approaches to fitting zero-inflated models for repeated measures. We describe a practical Bayesian approach which incorporates prior information, has optimal small-sample properties, and allows for tractable inference. The approach can be easily implemented using standard Bayesian software. A study of psychiatric outpatient service use illustrates the methods. PMID:21339863
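The zero-inflated Poisson mechanism described above is easy to state generatively: with some probability the count is a structural zero, otherwise it is a Poisson draw. The sketch below simulates this and checks the implied mean and zero fraction; the parameter values are illustrative, not estimates from the service-use data:

```python
import math
import random

# ZIP generative sketch: structural zero with probability p, else Poisson(lam).
random.seed(2)
p, lam = 0.3, 2.0  # illustrative parameters

def poisson(lam):
    # Knuth's multiplicative method for sampling a Poisson count.
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod < L:
            return k
        k += 1

counts = [0 if random.random() < p else poisson(lam) for _ in range(20000)]

mean = sum(counts) / len(counts)            # ZIP mean: (1 - p) * lam
zero_frac = counts.count(0) / len(counts)   # P(0) = p + (1 - p) * exp(-lam)
print(round(mean, 2), round(zero_frac, 2))
```

The excess of zeros relative to a plain Poisson with the same mean is exactly what ZIP (and the more general hurdle) models are built to capture.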

  19. Global sensitivity analysis for repeated measures studies with informative drop-out: A semi-parametric approach.

    Science.gov (United States)

    Scharfstein, Daniel; McDermott, Aidan; Díaz, Iván; Carone, Marco; Lunardon, Nicola; Turkoz, Ibrahim

    2017-05-23

    In practice, both testable and untestable assumptions are generally required to draw inference about the mean outcome measured at the final scheduled visit in a repeated measures study with drop-out. Scharfstein et al. (2014) proposed a sensitivity analysis methodology to determine the robustness of conclusions within a class of untestable assumptions. In their approach, the untestable and testable assumptions were guaranteed to be compatible; their testable assumptions were based on a fully parametric model for the distribution of the observable data. While convenient, these parametric assumptions have proven especially restrictive in empirical research. Here, we relax their distributional assumptions and provide a more flexible, semi-parametric approach. We illustrate our proposal in the context of a randomized trial for evaluating a treatment of schizoaffective disorder. © 2017, The International Biometric Society.

  20. Sources of Experimental Variation in 2-D Maps: The Importance of Experimental Design in Gel-Based Proteomics.

    Science.gov (United States)

    Valcu, Cristina-Maria; Valcu, Mihai

    2016-01-01

    The success of proteomic studies employing 2-D maps largely depends on the way surveys and experiments have been organized and performed. Planning gel-based proteomic experiments involves the selection of equipment, methodology, treatments, types and number of samples, experimental layout, and methods for data analysis. A good experimental design will maximize the output of the experiment while taking into account the biological and technical resources available. In this chapter we provide guidelines to assist proteomics researchers in all these choices and help them to design quantitative 2-DE experiments.

  1. Physics Design of Water Moderator Criticality Assembly in Experimental Research About ADS

    Institute of Scientific and Technical Information of China (English)

    LV; Niu

    2013-01-01

    In order to meet the experimental demand of ADS research, we need to design a suitable criticality assembly. The key problem of the design work is the core design; we designed a criticality assembly with a water moderator according to the available nuclear material (Fig. 1). The theoretical calculation have been

  2. Nanowire piezo-phototronic photodetector: theory and experimental design.

    Science.gov (United States)

    Liu, Ying; Yang, Qing; Zhang, Yan; Yang, Zongyin; Wang, Zhong Lin

    2012-03-15

    The piezo-phototronic effect refers to the use of the inner-crystal piezoelectric potential to tune/control charge carrier generation, separation, transport and/or recombination in optoelectronic devices. In this paper, a theoretical model for describing the characteristics of a metal-nanowire-metal structured piezo-phototronic photodetector is constructed. Numerical simulations fit well to the experimental results of CdS and ZnO nanowire based visible and UV detectors, respectively.

  3. Experimental Evaluation of Three Designs of Electrodynamic Flexural Transducers

    Science.gov (United States)

    Eriksson, Tobias J. R.; Laws, Michael; Kang, Lei; Fan, Yichao; Ramadas, Sivaram N.; Dixon, Steve

    2016-01-01

    Three designs for electrodynamic flexural transducers (EDFT) for air-coupled ultrasonics are presented and compared. An all-metal housing was used for robustness, which makes the designs more suitable for industrial applications. The housing is designed such that there is a thin metal plate at the front, with a fundamental flexural vibration mode at ∼50 kHz. By using a flexural resonance mode, good coupling to the load medium was achieved without the use of matching layers. The front radiating plate is actuated electrodynamically by a spiral coil inside the transducer, which produces an induced magnetic field when an AC current is applied to it. The transducers operate without the use of piezoelectric materials, which can simplify manufacturing and prolong the lifetime of the transducers, as well as open up possibilities for high-temperature applications. The results show that different designs perform best for the generation and reception of ultrasound. All three designs produced large acoustic pressure outputs, with a recorded sound pressure level (SPL) above 120 dB at a 40 cm distance from the highest output transducer. The sensitivity of the transducers was low, however, with single shot signal-to-noise ratio (SNR)≃15 dB in transmit–receive mode, with transmitter and receiver 40 cm apart. PMID:27571075

  4. Experimental Characterisation of Moreno Cross Slot Couplers for Blass Matrix Design

    Directory of Open Access Journals (Sweden)

    K. Jery Varghese

    1998-10-01

    Full Text Available This paper presents the experimental characterisation of the Moreno cross-slot coupler, which is the basic building block of a multiple-beam-forming network (Blass matrix). The lack of an exact theory for such couplers requires extensive experimental evaluation. A novel test jig has been designed, fabricated and tested for this purpose. The experimental results for the different scattering parameters are presented.

  5. Using Propensity Scores in Quasi-Experimental Designs to Equate Groups

    Science.gov (United States)

    Lane, Forrest C.; Henson, Robin K.

    2010-01-01

    Education research rarely lends itself to large scale experimental research and true randomization, leaving the researcher to quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…

  6. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    Science.gov (United States)

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…

  7. Optimization of preservatives in a topical formulation using experimental design.

    Science.gov (United States)

    Rahali, Y; Pensé-Lhéritier, A-M; Mielcarek, C; Bensouda, Y

    2009-12-01

    Optimizing the preservative regime for a preparation requires the antimicrobial effectiveness of several preservative combinations to be determined. In this study, three preservatives were tested: benzoic acid, sorbic acid and benzyl alcohol. Their preservative effects were evaluated using the antimicrobial preservative efficacy test (challenge test) of the European Pharmacopoeia (EP). A D-optimal mixture design was used to extract the maximum information from a limited number of experiments. The results of this study were analysed with the Design Expert software and enabled us to formulate emulsions satisfying both requirements A and B of the EP.
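    For intuition, a D-optimal design maximizes the determinant of the information matrix XᵀX. The sketch below does this by brute force for a simple straight-line model rather than the paper's mixture setting; the candidate levels and run count are illustrative assumptions:

```python
from itertools import combinations_with_replacement


def d_criterion(xs):
    # det(X'X) for the model y = b0 + b1*x, where each row of X is [1, x];
    # for this 2x2 matrix, det = n*sum(x^2) - (sum(x))^2.
    n, s1, s2 = len(xs), sum(xs), sum(x * x for x in xs)
    return n * s2 - s1 * s1


candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]  # coded factor levels (illustrative)
best = max(combinations_with_replacement(candidates, 3), key=d_criterion)
print(best)  # -> (-1.0, -1.0, 1.0): D-optimality replicates the extreme levels
```

    For a straight-line fit, the D-criterion pushes all runs to the extreme levels, which matches the textbook result that spreading runs to the design-space boundary minimizes the joint uncertainty of the coefficients.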

  8. A Short Guide to Experimental Design and Analysis for Engineers

    Science.gov (United States)

    2014-04-01

    ...study of women's ratings of men based on masculinity, Frederick and Haselton had participants performing octuple duty, providing ratings of... et al. (2012) CK-12 Middle School Math Grade 6. Vol. 1, CK-12 Foundation; Mitchell, M. L. and Jolley, J. M. (2010) Research Design Explained. 7th ed

  9. Design and experimental evaluation of cooperative adaptive cruise control

    NARCIS (Netherlands)

    Ploeg, J.; Scheepers, B.T.M.; Nunen, E. van; Wouw, N. van de; Nijmeijer, H.

    2011-01-01

    Road throughput can be increased by driving at small inter-vehicle time gaps. The amplification of velocity disturbances in upstream direction, however, poses limitations to the minimum feasible time gap. String-stable behavior is thus considered an essential requirement for the design of automatic

  10. Tokamak experimental power reactor conceptual design. Volume II

    Energy Technology Data Exchange (ETDEWEB)

    1976-08-01

    Volume II contains the following appendices: (1) summary of EPR design parameters, (2) impurity control, (3) plasma computational models, (4) structural support system, (5) materials considerations for the primary energy conversion system, (6) magnetics, (7) neutronics penetration analysis, (8) first wall stress analysis, (9) enrichment of isotopes of hydrogen by cryogenic distillation, and (10) noncircular plasma considerations. (MOW)

  11. Creativity in Advertising Design Education: An Experimental Study

    Science.gov (United States)

    Cheung, Ming

    2011-01-01

    Have you ever thought about why qualities whose definitions are elusive, such as those of a sunset or a half-opened rose, affect us so powerfully? According to de Saussure (Course in general linguistics, 1983), the making of meanings is closely related to the production and interpretation of signs. All types of design, including advertising…

  12. The Inquiry Flame: Scaffolding for Scientific Inquiry through Experimental Design

    Science.gov (United States)

    Pardo, Richard; Parker, Jennifer

    2010-01-01

    In the lesson presented in this article, students learn to organize their thinking and design their own inquiry experiments through careful observation of an object, situation, or event. They then conduct these experiments and report their findings in a lab report, poster, trifold board, slide, or video that follows the typical format of the…

  13. HEAO C-1 gamma-ray spectrometer. [experimental design

    Science.gov (United States)

    Mahoney, W. A.; Ling, J. C.; Willett, J. B.; Jacobson, A. S.

    1978-01-01

    The gamma-ray spectroscopy experiment to be launched on the third High Energy Astronomy Observatory (HEAO C) will perform a complete sky search for narrow gamma-ray line emission down to a level of about 0.0001 photons/(sq cm-sec) for steady point sources. The design of this experiment and its performance, based on testing and calibration to date, are discussed.

  15. OPTIMIZATION OF EXPERIMENTAL DESIGNS BY INCORPORATING NIF FACILITY IMPACTS

    Energy Technology Data Exchange (ETDEWEB)

    Eder, D C; Whitman, P K; Koniges, A E; Anderson, R W; Wang, P; Gunney, B T; Parham, T G; Koerner, J G; Dixit, S N; . Suratwala, T I; Blue, B E; Hansen, J F; Tobin, M T; Robey, H F; Spaeth, M L; MacGowan, B J

    2005-08-31

    For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm thick borosilicate disposable debris shields (DDS) block the majority of debris and shrapnel fragments from reaching the relatively expensive MDS's. However, DDS's cannot stop large, faster moving fragments. We have experimentally demonstrated one shrapnel mitigation technique showing that it is possible to direct fast moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDS's. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to set the allowed level of debris and shrapnel generation for all NIF experimental campaigns.

  16. The near-infrared spectroscopy-derived deoxygenated haemoglobin breaking-point is a repeatable measure that demarcates exercise intensity domains.

    Science.gov (United States)

    Iannetta, Danilo; Qahtani, Ahmad; Mattioni Maturana, Felipe; Murias, Juan Manuel

    2017-09-01

    A breaking-point in the near-infrared spectroscopy (NIRS)-derived deoxygenated haemoglobin ([HHb]) profile towards the end of a ramp incremental (RI) cycling test has been associated with the respiratory compensation point (RCP). Despite the physiological value of this measure, its repeatability remains unknown. The aim was to examine the repeatability of the [HHb] breaking-point ([HHb]BP) and its association with the RCP during an RI cycling test. A repeated-measures design was performed on 11 males (30.5±8.4 years; 76.5±8.4 kg) and 4 females (30.5±5.9 years; 61.9±4.4 kg). Gas exchange and NIRS [HHb] data were collected during RI tests performed on two different days separated by 48 h. The [HHb]BP and the RCP were determined and compared for each trial. The [HHb]BP and the RCP occurred at the same VO2 in test 1 and test 2 ([HHb]BP: 3.49±0.52 L min(-1) test 1; 3.48±0.45 L min(-1) test 2; RCP: 3.38±0.40 L min(-1) test 1; 3.38±0.44 L min(-1) test 2) (P>0.05). The VO2 associated with the [HHb]BP and the VO2 at the RCP were not significantly different from each other in either test 1 or test 2 (P>0.05). Bland & Altman plots showed no significant mean bias between the VO2 at the [HHb]BP and at the RCP in either test. The [HHb]BP is a repeatable measure that consistently occurs towards the end of an RI test. The association between the [HHb]BP and the RCP reinforces the idea that these parameters may share a similar mechanistic basis. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
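    The Bland & Altman comparison used above reduces to the mean difference (bias) between two measures and its 95% limits of agreement. A minimal sketch with made-up VO2 values, not the study's data:

```python
import statistics


def bland_altman(test1, test2):
    """Mean bias and 95% limits of agreement between two repeated measures."""
    diffs = [a - b for a, b in zip(test1, test2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)


# Illustrative VO2 values in L/min (not the study's raw data):
t1 = [3.1, 3.5, 3.4, 3.8, 3.2]
t2 = [3.0, 3.6, 3.3, 3.9, 3.2]
bias, limits = bland_altman(t1, t2)
print(round(bias, 3), limits)
```

    A bias near zero with narrow limits of agreement, as reported in the study, indicates no systematic difference between the two trials.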

  17. On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs.

    Science.gov (United States)

    Sánchez, M S; Sarabia, L A; Ortiz, M C

    2012-11-19

    Experimental designs for a given task should be selected on the basis of the problem being solved and of criteria that measure their quality. There are several such criteria because several aspects must be taken into account when making a choice. The most used are probably the so-called alphabetical optimality criteria (for example, the A-, E-, and D-criteria, related to the joint estimation of the coefficients, or the I- and G-criteria, related to the prediction variance). Selecting a proper design to solve a problem implies finding a balance among these criteria, which measure the performance of the design in different respects. Technically this is a problem of multi-criteria optimization, which can be tackled from different viewpoints. The approach presented here addresses the problem in its real vector nature, so that ad hoc experimental designs are generated with an evolutionary algorithm that finds the Pareto-optimal front. There is no theoretical limit to the number of criteria that can be studied and, contrary to other approaches, not just one experimental design is computed but a set of experimental designs, all of them Pareto-optimal in the criteria required by the user. Moreover, the use of an evolutionary algorithm makes it possible to search in both continuous and discrete domains and avoids the need for a set of candidate points, usual in exchange algorithms.
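    The Pareto-optimality notion used here is the standard one: a design is kept if no other candidate is at least as good on every criterion and strictly better on at least one. A minimal dominance filter; the two criteria and the scores are hypothetical, not taken from the paper:

```python
def dominates(a, b):
    """a dominates b: at least as good everywhere, strictly better somewhere
    (all criteria are to be maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))


def pareto_front(scores):
    """Keep the designs not dominated by any other candidate."""
    return [s for s in scores if not any(dominates(t, s) for t in scores)]


# Hypothetical (D-efficiency, G-efficiency) scores for four candidate designs:
designs = [(0.9, 0.4), (0.7, 0.8), (0.8, 0.6), (0.6, 0.5)]
print(pareto_front(designs))  # -> [(0.9, 0.4), (0.7, 0.8), (0.8, 0.6)]
```

    The last design is dominated (0.8, 0.6 beats it on both criteria) and drops out; the survivors are the trade-off set from which the user chooses.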

  18. Design and experimental validation of a compact collimated Knudsen source

    CERN Document Server

    Wouters, Steinar H W; Mutsaers, Peter H A; Vredenbregt, Edgar J D

    2015-01-01

    In this paper we discuss the design and performance of a collimated Knudsen source, which has the benefit of a simpler design than recirculating sources. Measurements of the flux, transverse velocity distribution and brightness at different temperatures were conducted to evaluate the performance. The scaling of the flux and brightness with the source temperature follows the theoretical predictions. The transverse velocity distribution in the transparent operation regime also agrees with the simulated data. The source was able to produce a flux of $10^{14}$ s$^{-1}$ at a temperature of 433 K. Furthermore, the transverse reduced brightness of an ion beam with the same properties as the atomic beam reads $1.7 \\times 10^2$ A/(m${}^2$ sr eV), which is sufficient for our goal: the creation of an ultra-cold ion beam by ionization of a laser-cooled and compressed atomic rubidium beam.

  19. Design and experimental demonstration of optomechanical paddle nanocavities

    CERN Document Server

    Healey, Chris; Wu, Marcelo; Khanaliloo, Behzad; Mitchell, Matthew; Hryciw, Aaron C; Barclay, Paul E

    2015-01-01

    We present the design, fabrication and initial characterization of a paddle nanocavity consisting of a suspended sub-picogram nanomechanical resonator optomechanically coupled to a photonic crystal nanocavity. The optical and mechanical properties of the paddle nanocavity can be systematically designed and optimized, and key characteristics including mechanical frequency easily tailored. Measurements under ambient conditions of a silicon paddle nanocavity demonstrate an optical mode with quality factor $Q_o$ ~ 6000 near 1550 nm, and optomechanical coupling to several mechanical resonances with frequencies $\\omega_m/2\\pi$ ~ 12-64 MHz, effective masses $m_\\text{eff}$ ~ 350-650 fg, and mechanical quality factors $Q_m$ ~ 44-327. Paddle nanocavities are promising for optomechanical sensing and nonlinear optomechanics experiments.

  20. Design and experimental demonstration of optomechanical paddle nanocavities

    Science.gov (United States)

    Healey, Chris; Kaviani, Hamidreza; Wu, Marcelo; Khanaliloo, Behzad; Mitchell, Matthew; Hryciw, Aaron C.; Barclay, Paul E.

    2015-12-01

    We present the design, fabrication, and initial characterization of a paddle nanocavity consisting of a suspended sub-picogram nanomechanical resonator optomechanically coupled to a photonic crystal nanocavity. The optical and mechanical properties of the paddle nanocavity can be systematically designed and optimized, and the key characteristics including mechanical frequency can be easily tailored. Measurements under ambient conditions of a silicon paddle nanocavity demonstrate an optical mode with a quality factor Q o ˜ 6000 near 1550 nm and optomechanical coupling to several mechanical resonances with frequencies ω m / 2 π ˜ 12 - 64 MHz, effective masses m eff ˜ 350 - 650 fg, and mechanical quality factors Q m ˜ 44 - 327 . Paddle nanocavities are promising for optomechanical sensing and nonlinear optomechanics experiments.

  1. High-power CMUTs: design and experimental verification.

    Science.gov (United States)

    Yamaner, F Yalçin; Olçum, Selim; Oğuz, H Kağan; Bozkurt, Ayhan; Köymen, Hayrettin; Atalar, Abdullah

    2012-06-01

    Capacitive micromachined ultrasonic transducers (CMUTs) have great potential to compete with piezoelectric transducers in high-power applications. As output pressures increase, the nonlinearity of the CMUT must be reconsidered and optimization is required to reduce harmonic distortion. In this paper, we describe a design approach in which uncollapsed CMUT array elements are sized so as to operate at the maximum radiation impedance and have gap heights such that the generated electrostatic force can sustain a plate displacement with full swing at the given drive amplitude. The proposed design enables high output pressures and low harmonic distortion at the output. An equivalent circuit model of the array is used that accurately simulates the uncollapsed mode of operation. The model facilitates the design of CMUT parameters for high-pressure output, without the intensive need for computationally involved FEM tools. The optimized design requires a relatively thick plate compared with a conventional CMUT plate. Thus, we used a silicon wafer as the CMUT plate. The fabrication process involves anodic bonding of the silicon plate to the glass substrate. To eliminate the bias voltage, which may cause charging problems, the CMUT array is driven with large continuous-wave signals at half the resonant frequency. The fabricated arrays are tested in an oil tank by applying a 125-V peak, 5-cycle sinusoidal burst at 1.44 MHz. The applied voltage is increased until the plate is about to touch the bottom electrode, to obtain the maximum peak displacement. The observed pressure is about 1.8 MPa with -28 dBc second harmonic at the surface of the array.

  2. Experimental design and construction of an enhanced solar battery charger

    OpenAIRE

    Faithpraise, Fina; Bassey, Donatus; Charles, Mfon; Osahon, Okoro; Udoh, Monday; Chatwin, Chris

    2016-01-01

    A Solar Battery Charger circuit is designed, built and tested. It acts as a control circuit to monitor and regulate the process of charging several batteries ranging from 4 volts to 12 volts, using a photovoltaic (PV) solar panel as the input source for the battery charging process. The circuit is economical and can be easily constructed from discrete electronic components. The circuit operation is based on matching the solar panel terminal load voltage to the input terminal of the charging c...

  3. Optimal experimental designs for dose–response studies with continuous endpoints

    OpenAIRE

    Holland-Letz, Tim; Kopp-Schneider, Annette

    2014-01-01

    In most areas of clinical and preclinical research, the required sample size determines the costs and effort for any project, and thus, optimizing sample size is of primary importance. An experimental design of dose–response studies is determined by the number and choice of dose levels as well as the allocation of sample size to each level. The experimental design of toxicological studies tends to be motivated by convention. Statistical optimal design theory, however, allows the setting of ex...

  4. Statistical Approaches in Analysis of Variance: from Random Arrangements to Latin Square Experimental Design

    OpenAIRE

    2009-01-01

    Background: The choices of experimental design as well as of statistical analysis are of huge importance in field experiments. These must be made correctly in order to obtain the best possible precision of the results. The random arrangements, randomized blocks and Latin square designs were reviewed and analyzed from the statistical perspective of error analysis. Material and Method: Random arrangements, randomized block and Latin square experimental designs were used as field experiments. ...

  5. Experimental burn plot trial in the Kruger National Park: history, experimental design and suggestions for data analysis

    Directory of Open Access Journals (Sweden)

    R. Biggs

    2003-12-01

    Full Text Available The experimental burn plot (EBP) trial initiated in 1954 is one of few ongoing long-term fire ecology research projects in Africa. The trial aims to assess the impacts of different fire regimes in the Kruger National Park. Recent studies on the EBPs have raised questions as to the experimental design of the trial, and the appropriate model specification when analysing data. Archival documentation reveals that the original design was modified on several occasions, related to changes in the park's fire policy. These modifications include the addition of extra plots, subdivision of plots and changes in treatments over time, and have resulted in a design which is only partially randomised. The representativity of the trial plots has been questioned on account of their relatively small size, the concentration of herbivores on especially the frequently burnt plots, and soil variation between plots. It is suggested that these factors be included as covariates in explanatory models or that certain plots be excluded from data analysis based on results of independent studies of these factors. Suggestions are provided for the specification of the experimental design when analysing data using Analysis of Variance. It is concluded that there is no practical alternative to treating the trial as a fully randomised complete block design.

  6. Gladstone-Dale constant for CF4. [experimental design

    Science.gov (United States)

    Burner, A. W., Jr.; Goad, W. K.

    1980-01-01

    The Gladstone-Dale constant, which relates the refractive index to density, was measured for CF4 by counting fringes of a two-beam interferometer, one beam of which passes through a cell containing the test gas. The experimental approach and the sources of systematic error and imprecision are discussed. The constant for CF4 was measured at several wavelengths in the visible region of the spectrum. A value of 0.122 cu cm/g with an uncertainty of plus or minus 0.001 cu cm/g was determined for use in the visible region. A procedure for noting the departure of the gas density from the ideal-gas law is discussed.
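    The Gladstone-Dale relation underlying the measurement is n - 1 = K * rho. A minimal sketch using the reported constant; the density value is an illustrative assumption, not from the paper:

```python
K_CF4 = 0.122  # Gladstone-Dale constant for CF4 reported above, cm^3/g


def refractive_index(density_g_cm3: float, k: float = K_CF4) -> float:
    """Gladstone-Dale relation: n - 1 = K * rho."""
    return 1.0 + k * density_g_cm3


# Illustrative CF4 density near ambient conditions (assumed, ~0.0039 g/cm^3):
print(refractive_index(0.0039))
```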

  7. Seasonal variation in objectively measured physical activity, sedentary time, cardio-respiratory fitness and sleep duration among 8–11 year-old Danish children: a repeated-measures study

    DEFF Research Database (Denmark)

    Hjorth, Mads F.; Chaput, Jean-Philippe; Michaelsen, Kim;

    2013-01-01

    BACKGROUND: Understanding fluctuations in lifestyle indicators is important to identify relevant time periods to intervene in order to promote a healthy lifestyle; however, objective assessment of multiple lifestyle indicators has never been done using a repeated-measures design. The primary aim...... was, therefore, to examine between-season and within-week variation in physical activity, sedentary behaviour, cardio-respiratory fitness and sleep duration among 8–11 year-old children. METHODS: A total of 1021 children from nine Danish schools were invited to participate and 834 accepted. Due...

  8. Experimental design and quality assurance: in situ fluorescence instrumentation

    Science.gov (United States)

    Conmy, Robyn N.; Del Castillo, Carlos E.; Downing, Bryan D.; Chen, Robert F.

    2014-01-01

    Both instrument design and the capabilities of fluorescence spectroscopy have greatly advanced over the last several decades. Advancements include solid-state excitation sources, integration of fiber optic technology, highly sensitive multichannel detectors, rapid-scan monochromators, sensitive spectral correction techniques, and improved data manipulation software (Christian et al., 1981; Lochmuller and Saavedra, 1986; Cabniss and Shuman, 1987; Lakowicz, 2006; Hudson et al., 2007). The cumulative effect of these improvements has pushed the limits and expanded the application of fluorescence techniques to numerous scientific research fields. One of the more powerful advancements is the ability to obtain in situ fluorescence measurements of natural waters (Moore, 1994). The development of submersible fluorescence instruments has been made possible by component miniaturization and power reduction, including advances in light source technologies (light-emitting diodes, xenon lamps, ultraviolet [UV] lasers) and the compatible integration of new optical instruments with various sampling platforms (Twardowski et al., 2005 and references therein). The development of robust field sensors skirts the need for cumbersome and/or time-consuming filtration techniques, the potential artifacts associated with sample storage, and coarse sampling designs, by increasing spatiotemporal resolution (Chen, 1999; Robinson and Glenn, 1999). The ability to obtain rapid, high-quality, highly sensitive measurements over steep gradients has revolutionized investigations of dissolved organic matter (DOM) optical properties, thereby enabling researchers to address novel biogeochemical questions regarding colored or chromophoric DOM (CDOM). This chapter is dedicated to the origin, design, calibration, and use of in situ field fluorometers. It will serve as a review of considerations to be accounted for during the operation of fluorescence field sensors and call attention to areas of concern when making

  9. The ISR Asymmetrical Capacitor Thruster: Experimental Results and Improved Designs

    Science.gov (United States)

    Canning, Francis X.; Cole, John; Campbell, Jonathan; Winet, Edwin

    2004-01-01

    A variety of Asymmetrical Capacitor Thrusters has been built and tested at the Institute for Scientific Research (ISR). The thrust produced for various voltages has been measured, along with the current flowing, both between the plates and to ground through the air (or other gas). VHF radiation due to Trichel pulses has been measured and correlated over short time scales to the current flowing through the capacitor. A series of designs were tested, which were increasingly efficient. Sharp features on the leading capacitor surface (e.g., a disk) were found to increase the thrust. Surprisingly, combining that with sharp wires on the trailing edge of the device produced the largest thrust. Tests were performed for both polarizations of the applied voltage, and for grounding one or the other capacitor plate. In general (but not always) it was found that the direction of the thrust depended on the asymmetry of the capacitor rather than on the polarization of the voltage. While no force was measured in a vacuum, some suggested design changes are given for operation in reduced pressures.

  10. Physics Design of Criticality Assembly in Experimental Research About Criticality Safety in Spent Fuel Dissolver

    Institute of Scientific and Technical Information of China (English)

    ZHOU; Qi

    2012-01-01

    In order to meet the experimental demands of criticality safety research on the spent fuel dissolver, we need to design a suitable criticality assembly. The key problem of the design work is the core design, because there are many constraints on it, such as the number of fuel rods loaded, the fissile materials present in the solution, reactivity control, core size, etc.

  11. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    Science.gov (United States)

    Smith, Justin D.

    2012-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have…

  12. Manufacturing cereal bars with high nutritional value through experimental design

    Directory of Open Access Journals (Sweden)

    Roberta Covino

    2015-01-01

    Full Text Available Organizations responsible for public health throughout the world have become increasingly concerned with how to feed populations, encouraging a nutritious and balanced diet in order to decrease the occurrence of chronic diseases, which are consistently related to an inadequate diet. At the same time, owing to modern lifestyles, consumers increasingly seek convenient products. Cereal bars are thus an option when what is wanted is a low-calorie fast food that is also a source of fiber. This study aimed at developing a cereal bar high in dietary fiber, iron, vitamin A and vitamin E, in order to enable the adult population to easily achieve the daily recommendation for such nutrients. Eight formulations plus the central point were produced through experimental planning; sensory analysis with 110 tasters for each block, and texture analysis, were conducted. Afterwards, we conducted centesimal analysis for the three formulations presenting the best sensory results. After statistical analysis and comparison to the means for products available in the market, it was possible to conclude that the product developed showed good acceptance and a fiber level more than twice the mean for commercial products.

  13. A projection method for underdetermined optimal experimental designs

    KAUST Repository

    Long, Quan

    2014-01-09

    A new implementation, based on the Laplace approximation, was developed in (Long, Scavino, Tempone, & Wang 2013) to accelerate the estimation of the post-experimental expected information gains in the model parameters and predictive quantities of interest. A closed-form approximation of the inner integral and the order of the corresponding dominant error term were obtained for the cases where the parameters are determined by the experiment. In this work, we extend that method to the general case where the model parameters cannot be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the corresponding Jacobian matrix, so that the information gain (Kullback–Leibler divergence) can be reduced to an integration against the marginal density of the transformed parameters that are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the projected posterior covariance matrix. To deal with the issue of dimensionality in a complex problem, we use Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear underdetermined numerical examples.

  14. A modified experimental hut design for studying responses of disease-transmitting mosquitoes to indoor interventions: the Ifakara experimental huts.

    Directory of Open Access Journals (Sweden)

    Fredros O Okumu

    Full Text Available Differences between individual human houses can confound the results of studies aimed at evaluating indoor vector control interventions such as insecticide-treated nets (ITNs) and indoor residual insecticide spraying (IRS). Specially designed and standardised experimental huts have historically provided a solution to this challenge, with the added advantage that they can be fitted with special interception traps to sample entering or exiting mosquitoes. However, many of these experimental hut designs have a number of limitations, for example: (1) inability to sample mosquitoes on all sides of the huts, (2) increased likelihood of live mosquitoes flying out of the huts, leaving mainly dead ones, (3) difficulties of cleaning the huts when a new insecticide is to be tested, and (4) the generally small size of the experimental huts, which can misrepresent actual local house sizes or airflow dynamics in the local houses. Here, we describe a modified experimental hut design, the Ifakara experimental huts, and explain how these huts can be used to more realistically monitor behavioural and physiological responses of wild, free-flying disease-transmitting mosquitoes, including the African malaria vectors of the species complexes Anopheles gambiae and Anopheles funestus, to indoor vector-control technologies including ITNs and IRS. Important characteristics of the Ifakara experimental huts include: (1) interception traps fitted onto eave spaces and windows, (2) use of eave baffles (panels that direct mosquito movement) to control the exit of live mosquitoes through the eave spaces, (3) use of replaceable wall panels and ceilings, which allow safe insecticide disposal and reuse of the huts to test different insecticides in successive periods, (4) the kit format of the huts, allowing portability, and (5) an improved suite of entomological procedures to maximise data quality.

  15. A passive exoskeleton with artificial tendons: design and experimental evaluation.

    Science.gov (United States)

    van Dijk, Wietse; van der Kooij, Herman; Hekman, Edsko

    2011-01-01

    We developed a passive exoskeleton that was designed to minimize joint work during walking. The exoskeleton makes use of passive structures, called artificial tendons, acting in parallel with the leg. Artificial tendons are elastic elements that are able to store and redistribute energy over the human leg joints. The elastic characteristics of the tendons have been optimized to minimize the mechanical work of the human leg joints. In simulation the maximal reduction was 40 percent. The performance of the exoskeleton was evaluated in an experiment in which nine subjects participated. Energy expenditure and muscle activation were measured during three conditions: Normal walking, walking with the exoskeleton without artificial tendons, and walking with the exoskeleton with the artificial tendons. Normal walking was the most energy efficient. While walking with the exoskeleton, the artificial tendons only resulted in a negligibly small decrease in energy expenditure.

  16. Expanded microchannel heat exchanger: design, fabrication and preliminary experimental test

    CERN Document Server

    Denkenberger, David C; Pearce, Joshua M; Zhai, John; 10.1177/0957650912442781

    2012-01-01

    This paper first reviews non-traditional heat exchanger geometry, laser welding, practical issues with microchannel heat exchangers, and high effectiveness heat exchangers. Existing microchannel heat exchangers have low material costs, but high manufacturing costs. This paper presents a new expanded microchannel heat exchanger design and accompanying continuous manufacturing technique for potential low-cost production. Polymer heat exchangers have the potential for high effectiveness. The paper discusses one possible joining method - a new type of laser welding named "forward conduction welding," used to fabricate the prototype. The expanded heat exchanger has the potential to have counter-flow, cross-flow, or parallel-flow configurations, be used for all types of fluids, and be made of polymers, metals, or polymer-ceramic precursors. The reduction in cost and in ineffectiveness may be an order of magnitude or more, saving a large fraction of primary energy. The measured effectiveness of the prototype with 28 micro...

  17. Design, fabrication and experimental research for an electrohydrodynamic micropump

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    This paper presents a novel electrohydrodynamic (EHD) micropump based on MEMS technology. The working mechanisms and classification of EHD micropumps are introduced. The fabrication process of the EHD micropump is presented, covering material selection, optimal design of the microelectrodes and the assembly process. Static pressure experiments and flow experiments were carried out using different fluids and channel depths. The results indicated that the micropump could achieve a maximum static pressure head of 268 Pa at an applied voltage of 90 V. The maximum flow rate of the micropump-driven fluid could reach 106 μL/min. The paper also analyses the prospect of combining the micropump with a heat pipe to deal with heat dissipation of high-power electronic chips. A maximum heat dissipation capacity of 87 W/cm2 can be realized by vaporizing the micropump-driven liquid on the vaporizing section of the heat pipe.

  18. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

    Science.gov (United States)

    Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P

    2013-01-01

    Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods, and particularly discrete-choice experiments (DCEs), have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health-A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs for the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. In addition, ISPOR members contributed to developing a consensus

  19. Using experimental designs for modelling of intermittent air filtration process

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Identification of the optimal operating conditions and evaluation of their robustness are critical issues for industrial processes. A standard procedure, for modelling a laboratory-scale wire-to-cylinder electrostatic precipitator and for guiding the search for the set point, is presented. The procedure consists of formulating a set of recommendations regarding the choice of parameter values for electrostatic precipitation. The experiments were carried out on a laboratory cylindrical precipitator, built by one of the authors, with samples of wood particles. The parameters considered are the applied high voltage U, the air flow F, and the quantity of dust in air m. Several "one-factor-at-a-time" experiments followed by factorial composite design experiments were performed, based on the following three-step strategy: 1) identify the domain of variation of the variables; 2) determine the mathematical model of the process outcome; 3) validate the mathematical model and optimise the process.
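The "one-factor-at-a-time followed by factorial composite design" strategy starts from a coded design matrix. A minimal sketch of generating a two-level full factorial with added center points (a simplified composite layout) for the three factors named above (U, F, m); the factor ranges below are purely illustrative, not the paper's values:

```python
from itertools import product

def factorial_design(levels_per_factor, n_center=3):
    """Two-level full factorial in coded units (-1/+1) plus center points (0).

    levels_per_factor: list of (midpoint, half_range) tuples, one per factor.
    Returns (coded_runs, physical_runs).
    """
    k = len(levels_per_factor)
    coded = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    coded += [[0.0] * k for _ in range(n_center)]
    # Decode each run into physical units: x = midpoint + half_range * coded level
    physical = [[mid + half * c for c, (mid, half) in zip(run, levels_per_factor)]
                for run in coded]
    return coded, physical

# Hypothetical (midpoint, half_range) for U, F, m -- illustrative values only
factors = [(25.0, 5.0), (10.0, 2.0), (1.0, 0.5)]
coded, physical = factorial_design(factors)   # 2^3 factorial runs + 3 center points
```

A full central composite design would also add axial (star) points; they are omitted here to keep the sketch short.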

  20. New experimental model design for VAPEX process experiments

    Energy Technology Data Exchange (ETDEWEB)

    Yazdani, J.A.; Maini, B.B. [Calgary Univ., AB (Canada)

    2004-07-01

    The vast resources of heavy oil and bitumen deposits in Venezuela and Canada have become targets for finding less costly and more effective hydrocarbon recovery methods. Solvent extraction of heavy deposits is an attractive option that is gaining much attention as an in-situ recovery method. Vapour extraction (VAPEX) is analogous to the steam assisted gravity drainage (SAGD) process for the recovery of heavy oil and tar sand reservoirs. However, in VAPEX, vaporized solvents are used instead of high temperature steam and the viscosity of the oil is reduced in situ. VAPEX is particularly suited for formations that are thin and where heat losses are unavoidable. It is also well suited in the presence of overlying gas caps; bottom water aquifers; low thermal conductivity; high water saturation; clay swelling; and, formation damage. Most modelling studies use rectangular shaped models, but these have limitations at high reservoir pressures. This study presents a new design of physical models that overcomes these limitations. The annular space between two cylindrical pipes is used for developing slice-type and sand-filled models. This newly developed model is more compatible with higher pressures. This paper compares results of VAPEX experiments using the cylindrical models and the rectangular models. The stabilized drainage rates from the newly developed cylindrical models are in very good agreement with those from the rectangular models. 16 refs., 3 tabs., 11 figs.

  1. A method to construct a points system to predict cardiovascular disease considering repeated measures of risk factors

    Science.gov (United States)

    Carbayo-Herencia, Julio Antonio; Vigo, Maria Isabel; Gil-Guillén, Vicente Francisco

    2016-01-01

    Current predictive models for cardiovascular disease based on points systems use the baseline situation of the risk factors as independent variables. These models do not take into account the variability of the risk factors over time. Predictive models for other types of disease also exist that do consider the temporal variability of a single biological marker in addition to the baseline variables. However, due to their complexity these other models are not used in daily clinical practice. Bearing in mind the clinical relevance of these issues, and that cardiovascular diseases are the leading cause of death worldwide, we show the properties and viability of a new methodological alternative for constructing cardiovascular risk scores that make predictions of cardiovascular disease from repeated measures of the risk factors while retaining the simplicity of the points systems so often used in clinical practice (construction, statistical validation by simulation and explanation of potential utilization). We have also applied the system clinically upon a set of simulated data, solely to help readers understand the procedure constructed. PMID:26893963
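Points systems of the kind described are conventionally built by scaling regression coefficients to small integers (Framingham-style). A minimal sketch of that conversion step; the coefficients, reference levels and scaling constant below are hypothetical, not the authors' model:

```python
def to_points(coef, value, ref_value, beta_per_point):
    """Integer points for one risk factor: round(beta * (x - ref) / scale).

    beta_per_point is the amount of linear-predictor change chosen to equal
    one point (classically, the coefficient for a fixed age increment).
    """
    return round(coef * (value - ref_value) / beta_per_point)

# Hypothetical regression coefficients and reference levels -- illustrative only
coefs = {"sbp": 0.018, "chol": 0.005, "smoker": 0.65}
refs = {"sbp": 120, "chol": 180, "smoker": 0}
beta_per_point = 0.18  # assumed scaling constant

profile = {"sbp": 160, "chol": 240, "smoker": 1}
total = sum(to_points(coefs[k], profile[k], refs[k], beta_per_point)
            for k in coefs)
```

The total is then mapped to an absolute risk via a lookup table built from the fitted survival model; the novelty of the record above is feeding repeated measures, rather than baseline values only, into this pipeline.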

  2. Selecting a linear mixed model for longitudinal data: repeated measures analysis of variance, covariance pattern model, and growth curve approaches.

    Science.gov (United States)

    Liu, Siwei; Rovine, Michael J; Molenaar, Peter C M

    2012-03-01

    With increasing popularity, growth curve modeling is more and more often considered as the 1st choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alternative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of Akaike information criterion and Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.
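The simulation above uses AIC and BIC to select among candidate models. The mechanics of such a comparison can be sketched on a simpler mean-structure choice (a hedged NumPy illustration on simulated data; it deliberately ignores the within-subject correlation that the article shows matters, and only demonstrates how the two criteria are computed and compared):

```python
import numpy as np

def aic_bic(y, yhat, k):
    """Gaussian AIC and BIC from residuals; k counts mean parameters + sigma."""
    n = len(y)
    sigma2 = float(np.sum((y - yhat) ** 2)) / n   # ML estimate of residual variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

rng = np.random.default_rng(0)
t = np.tile(np.arange(10.0), 20)                  # 20 "subjects" x 10 time points
y = 1.0 + 0.5 * t + rng.normal(0.0, 1.0, t.size)  # true mean trend is linear

fits = {}
for deg, k in [(1, 3), (2, 4)]:                   # k = polynomial coefs + sigma
    beta = np.polyfit(t, y, deg)
    fits[deg] = aic_bic(y, np.polyval(beta, t), k)  # (AIC, BIC) per candidate
```

BIC's per-parameter penalty, ln(n), exceeds AIC's penalty of 2 once n > e², so BIC favors the smaller model more strongly; the article's point is that the same comparison should be run over residual covariance structures, not just mean trends.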

  3. Design and experimental tests of free electron laser wire scanners

    Science.gov (United States)

    Orlandi, G. L.; Heimgartner, P.; Ischebeck, R.; Loch, C. Ozkan; Trovati, S.; Valitutti, P.; Schlott, V.; Ferianis, M.; Penco, G.

    2016-09-01

    SwissFEL is an X-ray free electron laser (FEL) driven by a 5.8 GeV linac under construction at the Paul Scherrer Institut. In SwissFEL, wire scanners (WSCs) will be complementary to view-screens for emittance measurements and routinely used to monitor the transverse profile of the electron beam during FEL operations. The SwissFEL WSC is composed of an in-vacuum beam probe, motorized by a stepper motor, and an out-vacuum pick-up of the wire signal. The mechanical stability of the WSC in-vacuum hardware has been characterized on a test bench. In particular, the motor-induced vibrations of the wire have been measured and mapped for different motor speeds. Electron-beam tests of the entire WSC setup together with different wire materials have been carried out at the 250 MeV SwissFEL Injector Test Facility (SITF, Paul Scherrer Institut, CH) and at FERMI (Elettra-Sincrotrone Trieste, Italy). In particular, a comparative study of the relative measurement accuracy and the radiation-dose release of Al(99):Si(1) and tungsten (W) wires has been carried out. On the basis of the outcome of the bench and electron-beam tests, the SwissFEL WSC can be qualified as a high-resolution and machine-saving diagnostic tool, in consideration of the mechanical stability of the scanning wire at the micrometer level and the choice of a wire material ensuring a drastic reduction of the radiation-dose release with respect to conventional metallic wires. The main aspects of the design, laboratory characterization and electron-beam tests of the SwissFEL WSCs are presented.

  4. STRONG LENS TIME DELAY CHALLENGE. I. EXPERIMENTAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Dobler, Gregory [Kavli Institute for Theoretical Physics, University of California Santa Barbara, Santa Barbara, CA 93106 (United States); Fassnacht, Christopher D.; Rumbaugh, Nicholas [Department of Physics, University of California, 1 Shields Avenue, Davis, CA 95616 (United States); Treu, Tommaso; Liao, Kai [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Marshall, Phil [Kavli Institute for Particle Astrophysics and Cosmology, P.O. Box 20450, MS29, Stanford, CA 94309 (United States); Hojjati, Alireza [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, B.C. V6T 1Z1 (Canada); Linder, Eric, E-mail: tt@astro.ucla.edu [Lawrence Berkeley National Laboratory and University of California, Berkeley, CA 94720 (United States)

    2015-02-01

    The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters. The number of lenses with measured time delays is growing rapidly; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ~10^3 strongly lensed quasars. In an effort to assess the present capabilities of the community to accurately measure the time delays, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we have invited the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders", each containing a group of simulated data sets to be analyzed blindly by participating teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs' data sets increasing in complexity and realism. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of data sets, and is designed to be used as a practice set by the participating teams. The (non-mandatory) deadline for completion of TDC0 was the TDC1 launch date, 2013 December 1. The TDC1 deadline was 2014 July 1. Here we give an overview of the challenge, we introduce a set of metrics that will be used to quantify the goodness of fit, efficiency, precision, and accuracy of the algorithms, and we present the results of TDC0. Thirteen teams participated in TDC0 using 47 different methods. Seven of those teams qualified for TDC1, which is described in the companion paper.

  5. Batch phenol biodegradation study and application of factorial experimental design

    Directory of Open Access Journals (Sweden)

    A. Hellal

    2010-01-01

    Full Text Available A bacterium, Pseudomonas aeruginosa (ATCC 27853), was investigated for its ability to grow and to degrade phenol as sole carbon source, in aerobic batch culture. The parameters which affect the substrate biodegradation, such as the adaptation of the bacteria to phenol, the temperature, and the nature of the bacteria, were investigated. The results show that, over a temperature range of 30 to 40°C, the best degradation of phenol at a concentration of 100 mg/l was observed at 30°C. The regeneration of the bacterium, which allows the reactivation of its enzymatic activity, shows that the degradation of 100 mg/l of substrate at 30°C required approximately 50 hours with revivified bacteria, while it only started after 72 hours for those not revivified. Adaptation to increasing concentrations allows the bacteria to degrade a substrate concentration of about 400 mg/l in less than 350 hours. A second part consisted of the determination of a substrate degradation model using factorial experimental design, as a function of temperature (30-40°C) and of the size of the inoculum (260.88-521.76 mg/l). The results were analyzed statistically using the Student's t-test, analysis of variance, and the F-test. The values of R2 (0.99872) and adjusted R2 (0.9962), close to 1.0, verify the good correlation between the observed and the predicted values, and indicate an excellent relationship between the independent variables (factors) and the response (the time of phenol degradation). The F-value, found to be above 200, indicates that the considered model is statistically significant.
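The statistics reported (R², adjusted R², F-value) follow mechanically from the least-squares fit of the factorial model. A self-contained NumPy sketch on simulated two-factor data with interaction; the coefficients and noise level are illustrative, not the paper's measurements:

```python
import numpy as np

def fit_factorial(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b12*x1*x2 with fit statistics."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, p = X.shape
    ss_res = float(np.sum((y - X @ beta) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    r2 = 1 - ss_res / ss_tot
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - p)
    f_value = ((ss_tot - ss_res) / (p - 1)) / (ss_res / (n - p))
    return beta, r2, r2_adj, f_value

rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 30)   # coded temperature
x2 = rng.uniform(-1, 1, 30)   # coded inoculum size
y = 50 + 8 * x1 - 5 * x2 + 3 * x1 * x2 + rng.normal(0, 0.5, 30)
beta, r2, r2_adj, f_value = fit_factorial(x1, x2, y)
```

As in the abstract, R² and adjusted R² close to 1 together with a large regression F statistic indicate that the fitted factorial model explains nearly all of the response variation.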

  6. Delineamento experimental e tamanho de amostra para alface cultivada em hidroponia Experimental design and sample size for hydroponic lettuce crop

    Directory of Open Access Journals (Sweden)

    Valéria Schimitz Marodim

    2000-10-01

    Full Text Available This study was carried out to establish the experimental design and sample size for hydroponic lettuce (Lactuca sativa) crop under the nutrient film technique (NFT). The experiment was conducted in the Laboratory of Soilless/Hydroponic Crops of the Horticulture Department of the Federal University of Santa Maria, and was based on plant weight data. Under hydroponic conditions on concrete benches with six ducts, the appropriate experimental design for lettuce is randomised blocks if the experimental unit is a strip transverse to the bench ducts, and completely randomised if the bench is the experimental unit. For plant weight, the sample size should be 40 and 7 plants, respectively, for a confidence-interval half-width, expressed as a percentage of the mean (d), equal to 5% and 20%.
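Sample sizes of this kind (n for a confidence half-width d expressed as a percentage of the mean) follow the classical rule n = (t·CV/d)². A sketch using the normal z quantile in place of the iterated Student's t quantile such studies typically use; the CV value in the comment is hypothetical, not taken from the record:

```python
import math

def sample_size(cv_percent, d_percent, z=1.96):
    """Smallest n whose 95% CI half-width is <= d% of the mean.

    cv_percent: coefficient of variation (100*s/xbar) from pilot data.
    Uses the normal z approximation; an exact version iterates on
    Student's t with df = n - 1 until n stabilizes.
    """
    return math.ceil((z * cv_percent / d_percent) ** 2)

# With a hypothetical CV of 16%: d = 5% gives n = 40, d = 20% gives n = 3
n5 = sample_size(16, 5)
n20 = sample_size(16, 20)
```

Because n scales with 1/d², tightening the half-width from 20% to 5% of the mean multiplies the required sample size by 16.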

  7. Design, construction and testing of a radon experimental chamber; Diseno, construccion y pruebas de una camara experimental de radon

    Energy Technology Data Exchange (ETDEWEB)

    Chavez B, A.; Balcazar G, M

    1991-10-15

    To carry out studies of radon behavior under controlled and stable conditions, a system was designed and constructed that consists of two parts: a container of uranium-rich mineral and a radon experimentation chamber, joined by a step valve. The container holds approximately 800 g of uranium mineral with a grade of 0.28%; the radon gas emanated by the mineral is held tightly by the container. When the valve is opened, the radon gas diffuses into the radon experimental chamber; this has 3 access ports that allow different types of detectors to be installed. The versatility of the system is exemplified with two experiments: 1. With the radon experimental chamber and an associated spectroscopic system, radon and two of its decay products are identified. 2. The design of the system allows the mineral container to be coupled to other experimental geometries; to demonstrate this, a new automatic exchanger system of passive radon detectors was coupled and tested. The results of the new automatic exchanger system, obtained when the radon is left to flow freely between the container and the automatic exchanger through a plastic membrane of 15 m, are shown. (Author)

  8. Scaffolded Instruction Improves Student Understanding of the Scientific Method & Experimental Design

    Science.gov (United States)

    D'Costa, Allison R.; Schlueter, Mark A.

    2013-01-01

    Implementation of a guided-inquiry lab in introductory biology classes, along with scaffolded instruction, improved students' understanding of the scientific method, their ability to design an experiment, and their identification of experimental variables. Pre- and postassessments from experimental versus control sections over three semesters…

  9. Experimental Device for Learning of Logical Circuit Design using Integrated Circuits

    OpenAIRE

    石橋, 孝昭

    2012-01-01

    This paper presents an experimental device for learning logical circuit design using integrated circuits and breadboards. The experimental device can be made at low cost and can be used for many subjects such as logical circuits, computer engineering, basic electricity, electrical circuits and electronic circuits. The proposed device is more effective for learning logical circuits than the usual lecture alone.

  10. Measuring and Advancing Experimental Design Ability in an Introductory Course without Altering Existing Lab Curriculum†

    Science.gov (United States)

    Shanks, Ryan A.; Robertson, Chuck L.; Haygood, Christian S.; Herdliksa, Anna M.; Herdliska, Heather R.; Lloyd, Steven A.

    2017-01-01

    Introductory biology courses provide an important opportunity to prepare students for future courses, yet existing cookbook labs, although important in their own way, fail to provide many of the advantages of semester-long research experiences. Engaging, authentic research experiences aid biology students in meeting many learning goals. Therefore, overlaying a research experience onto the existing lab structure allows faculty to overcome barriers involving curricular change. Here we propose a working model for this overlay design in an introductory biology course and detail a means to conduct this lab with minimal increases in student and faculty workloads. Furthermore, we conducted exploratory factor analysis of the Experimental Design Ability Test (EDAT) and uncovered two latent factors which provide valid means to assess this overlay model’s ability to increase advanced experimental design abilities. In a pre-test/post-test design, we demonstrate significant increases in both basic and advanced experimental design abilities in an experimental and comparison group. We measured significantly higher gains in advanced experimental design understanding in students in the experimental group. We believe this overlay model and EDAT factor analysis contribute a novel means to conduct and assess the effectiveness of authentic research experiences in an introductory course without major changes to the course curriculum and with minimal increases in faculty and student workloads. PMID:28904647

  11. Measuring and Advancing Experimental Design Ability in an Introductory Course without Altering Existing Lab Curriculum

    Directory of Open Access Journals (Sweden)

    Ryan A. Shanks

    2017-05-01

    Full Text Available Introductory biology courses provide an important opportunity to prepare students for future courses, yet existing cookbook labs, although important in their own way, fail to provide many of the advantages of semester-long research experiences. Engaging, authentic research experiences aid biology students in meeting many learning goals. Therefore, overlaying a research experience onto the existing lab structure allows faculty to overcome barriers involving curricular change. Here we propose a working model for this overlay design in an introductory biology course and detail a means to conduct this lab with minimal increases in student and faculty workloads. Furthermore, we conducted exploratory factor analysis of the Experimental Design Ability Test (EDAT) and uncovered two latent factors which provide valid means to assess this overlay model’s ability to increase advanced experimental design abilities. In a pre-test/post-test design, we demonstrate significant increases in both basic and advanced experimental design abilities in an experimental and comparison group. We measured significantly higher gains in advanced experimental design understanding in students in the experimental group. We believe this overlay model and EDAT factor analysis contribute a novel means to conduct and assess the effectiveness of authentic research experiences in an introductory course without major changes to the course curriculum and with minimal increases in faculty and student workloads.

  12. Analytical and experimental study of freeform object design and simultaneous manufacturing

    Science.gov (United States)

    Zhang, Weihan; Zhang, Wei; Wang, Xiaofang; Yan, Jingbin

    2003-04-01

    Applications of Virtual Reality (VR) technology in many fields have gained great success. In the product development field, VR is a good tool to provide an interactive and friendly human-machine interface. Freeform Object Design and Simultaneous Manufacturing (FODSM) uses VR to establish an interactive design environment and enable simultaneous manufacturing. It aims at improving design efficiency, creativity and ease of use, and at integrating design and manufacturing so that the designed object can be obtained by the designer independently and simultaneously. At the current stage, key technologies to implement FODSM include the algorithm for swept-volume calculation and the subsequent Boolean operation, and a mechanism to provide natural and intuitive feedback. This paper uses an analytical and experimental method to implement this novel design and manufacturing technology. Key issues are analyzed and tested. Experimental details are demonstrated.

  13. Detecting variable responses in time-series using repeated measures ANOVA: Application to physiologic challenges [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Paul M. Macey

    2016-07-01

    Full Text Available We present an approach to analyzing physiologic timetrends recorded during a stimulus by comparing means at each time point using repeated measures analysis of variance (RMANOVA). The approach allows temporal patterns to be examined without an a priori model of expected timing or pattern of response. The approach was originally applied to signals recorded from functional magnetic resonance imaging (fMRI) volumes-of-interest (VOI) during a physiologic challenge, but we have used the same technique to analyze continuous recordings of other physiological signals such as heart rate, breathing rate, and pulse oximetry. For fMRI, the method serves as a complement to whole-brain voxel-based analyses, and is useful for detecting complex responses within pre-determined brain regions, or as a post-hoc analysis of regions of interest identified by whole-brain assessments. We illustrate an implementation of the technique in the statistical software packages R and SAS. VOI timetrends are extracted from conventionally preprocessed fMRI images. A timetrend of average signal intensity across the VOI during the scanning period is calculated for each subject. The values are scaled relative to baseline periods, and time points are binned. In SAS, the procedure PROC MIXED implements the RMANOVA in a single step. In R, we present one option for implementing RMANOVA with the mixed model function “lme”. Model diagnostics, and predicted means and differences are best performed with additional libraries and commands in R; we present one example. The ensuing results allow determination of significant overall effects, and time-point specific within- and between-group responses relative to baseline. We illustrate the technique using fMRI data from two groups of subjects who underwent a respiratory challenge. RMANOVA allows insight into the timing of responses and response differences between groups, and so is suited to physiologic testing paradigms eliciting complex
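The abstract implements the RMANOVA with PROC MIXED in SAS and lme in R. The same within-subject F-test for a time effect can be sketched from scratch (an illustrative NumPy version, with simulated data standing in for binned VOI timetrends; this is not the authors' code):

```python
import numpy as np

def rm_anova(data):
    """One-way repeated measures ANOVA for the within-subject time effect.

    data: (n_subjects, n_timepoints) array, e.g. binned VOI signal per subject.
    Returns (F, df_time, df_error); the error term is the
    subject-by-time interaction, as in the classical decomposition.
    """
    n, k = data.shape
    grand = data.mean()
    ss_time = n * np.sum((data.mean(axis=0) - grand) ** 2)
    ss_subj = k * np.sum((data.mean(axis=1) - grand) ** 2)
    ss_err = np.sum((data - grand) ** 2) - ss_time - ss_subj
    df_time, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_time / df_time) / (ss_err / df_err), df_time, df_err

rng = np.random.default_rng(2)
pattern = np.array([0.0, 0.0, 1.5, 2.0, 1.0, 0.0])   # stimulus-locked mean response
data = (rng.normal(0, 1, (12, 1))                    # per-subject baseline shifts
        + pattern + rng.normal(0, 0.5, (12, 6)))     # 12 subjects x 6 time bins
F, df_time, df_err = rm_anova(data)
```

A real analysis would add a sphericity check and, if needed, a Greenhouse-Geisser correction to the degrees of freedom; PROC MIXED and lme sidestep this by modeling the residual covariance directly, which is why the authors use them.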

  14. Optimum Experimental Design applied to MEMS accelerometer calibration for 9-parameter auto-calibration model.

    Science.gov (United States)

    Ye, Lin; Su, Steven W

    2015-01-01

    Optimum Experimental Design (OED) is an information gathering technique used to estimate parameters, which aims to minimize the variance of parameter estimation and prediction. In this paper, we further investigate an OED for MEMS accelerometer calibration using the 9-parameter auto-calibration model. Based on a linearized 9-parameter accelerometer model, we show the proposed OED is both G-optimal and rotatable, which are the desired properties for the calibration of wearable sensors for which only simple calibration devices are available. The experimental design was carried out with a newly developed wearable health monitoring device, and the desired experimental results were achieved.
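G-optimality, claimed above, means the design minimizes the maximum scaled prediction variance over the design region; by the Kiefer-Wolfowitz equivalence theorem that maximum attains the bound p (the number of model parameters) at the optimum. A toy NumPy check for a one-factor straight-line model, standing in for the paper's 9-parameter accelerometer model:

```python
import numpy as np

def max_scaled_pred_variance(X, grid):
    """max over the grid of n * x^T (X^T X)^{-1} x (scaled prediction variance)."""
    info_inv = np.linalg.inv(X.T @ X)
    n = X.shape[0]
    return max(n * float(x @ info_inv @ x) for x in grid)

def rows(us):
    """Model row-vectors (1, u) for the straight line y = b0 + b1*u."""
    return np.array([[1.0, float(u)] for u in us])

# Half the runs at each end of [-1, 1] is D- and G-optimal for a straight line:
# the maximum scaled prediction variance hits the bound p = 2 exactly.
grid = rows(np.linspace(-1.0, 1.0, 201))
d_optimal = max_scaled_pred_variance(rows([-1, -1, 1, 1]), grid)
d_spread = max_scaled_pred_variance(rows([-1, -0.5, 0.5, 1]), grid)  # worse design
```

Spreading the same four runs over the interval inflates the worst-case prediction variance above the bound, which is exactly the deficiency a G-optimal calibration design avoids.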

  15. An evaluation of a structured learning program as a component of the clinical practicum in undergraduate nurse education: A repeated measures analysis.

    Science.gov (United States)

    Watt, Elizabeth; Murphy, Maria; MacDonald, Lee; Pascoe, Elizabeth; Storen, Heather; Scanlon, Andrew

    2016-01-01

    There is evidence that nursing students experience stress and anxiety, and a reduction in self-efficacy, when undertaking clinical placements. Previous reports have identified that a structured three-day program within the Bachelor of Nursing (BN) clinical practicum reduces the students' self-reported anxiety and increases self-efficacy. However, it is unreported whether these improved outcomes are sustained for the duration of the clinical placement. The aim of this study was to evaluate the duration of the effect of a three-day structured learning program within the clinical placement on final year Bachelor of Nursing students' reports of anxiety and self-efficacy pre- and post-program participation and following completion of the clinical practicum. The design was a repeated measures design; the setting was a university-based Clinical School of Nursing, acute care clinical practicum; the participants were final year Bachelor of Nursing students. The intervention comprised the three-day program on starting the clinical practicum. A questionnaire included the anxiety subscale of The Hospital Anxiety & Depression Scale (The HAD) and the General Self-Efficacy Scale (GSES-12). The questionnaire was completed on day one (time one), upon completion of the three-day program (time two) and upon completion of placement on day 18 (time three). The questionnaire response rate varied over time. There was a statistically significant effect in reducing anxiety over time, F(1.73,74.46)=25.20, p<0.001, following the structured learning program, and the benefit of the intervention was sustained for the clinical placement duration. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Experimental validation of systematically designed acoustic hyperbolic metamaterial slab exhibiting negative refraction

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Sigmund, Ole

    2016-01-01

    This Letter reports on the experimental validation of a two-dimensional acoustic hyperbolic metamaterial slab optimized to exhibit negative refractive behavior. The slab was designed using a topology optimization based systematic design method allowing for tailoring the refractive behavior. The e...

  17. A Sino-Finnish Initiative for Experimental Teaching Practices Using the Design Factory Pedagogical Platform

    Science.gov (United States)

    Björklund, Tua A.; Nordström, Katrina M.; Clavert, Maria

    2013-01-01

    The paper presents a Sino-Finnish teaching initiative, including the design and experiences of a series of pedagogical workshops implemented at the Aalto-Tongji Design Factory (DF), Shanghai, China, and the experimentation plans collected from the 54 attending professors and teachers. The workshops aimed to encourage trying out interdisciplinary…

  18. Mobilities Design – towards an experimental field of research and practice

    DEFF Research Database (Denmark)

    Jensen, Ole B.; Lanng, Ditte Bendix

    2016-01-01

    and physical form. The exchange value with design is twofold; first this means getting closer to the ‘material’ which is needed if mobilities research can claim to have understood contemporary mobilities, second it means that the creative, explorative and experimental approaches of the design world becomes...

  19. Overview of design development of FCC-hh Experimental Interaction Regions

    CERN Document Server

    Seryi, Andrei; Cruz Alaniz, Emilia; Van Riesen-Haupt, Leon; Benedikt, Michael; Besana, Maria Ilaria; Buffat, Xavier; Burkhardt, Helmut; Cerutti, Francesco; Langner, Andy Sven; Martin, Roman; Riegler, Werner; Schulte, Daniel; Tomas Garcia, Rogelio; Appleby, Robert Barrie; Rafique, Haroon; Barranco Garcia, Javier; Pieloni, Tatiana; Boscolo, Manuela; Collamati, Francesco; Nevay, Laurence James; Hofer, Michael

    2017-01-01

    The experimental interaction region (EIR) is one of the key areas that define the performance of the Future Circular Collider. In this overview we will describe the status and the evolution of the design of EIR of FCC-hh, focusing on design of the optics, energy deposition in EIR elements, beam-beam effects and machine detector interface issues.

  1. Experimental and Theoretical Progress of Linear Collider Final Focus Design and ATF2 Facility

    CERN Document Server

    Seryi, Andrei; Zimmermann, Frank; Kubo, Kiyoshi; Kuroda, Shigeru; Okugi, Toshiyuki; Tauchi, Toshiaki; Terunuma, Nobuhiro; Urakawa, Junji; White, Glen; Woodley, Mark; Angal-Kalinin, Deepa

    2014-01-01

    In this brief overview we will reflect on the process of the design of the linear collider (LC) final focus (FF) optics, and will also describe the theoretical and experimental efforts on design and practical realisation of a prototype of the LC FF optics implemented in the ATF2 facility at KEK, Japan, presently being commissioned and operated.

  2. Assessing the Effectiveness of a Computer Simulation for Teaching Ecological Experimental Design

    Science.gov (United States)

    Stafford, Richard; Goodenough, Anne E.; Davies, Mark S.

    2010-01-01

    Designing manipulative ecological experiments is a complex and time-consuming process that is problematic to teach in traditional undergraduate classes. This study investigates the effectiveness of using a computer simulation--the Virtual Rocky Shore (VRS)--to facilitate rapid, student-centred learning of experimental design. We gave a series of…

  3. Scaffolding a Complex Task of Experimental Design in Chemistry with a Computer Environment

    Science.gov (United States)

    Girault, Isabelle; d'Ham, Cédric

    2014-01-01

    When solving a scientific problem through experimentation, students may have the responsibility to design the experiment. When students work in a conventional condition, with paper and pencil, the designed procedures stay at a very general level. There is a need for additional scaffolds to help the students perform this complex task. We propose a…

  4. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties

    Science.gov (United States)

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological…

  5. Using a Discussion about Scientific Controversy to Teach Central Concepts in Experimental Design

    Science.gov (United States)

    Bennett, Kimberley Ann

    2015-01-01

    Students may need explicit training in informal statistical reasoning in order to design experiments or use formal statistical tests effectively. By using scientific scandals and media misinterpretation, we can explore the need for good experimental design in an informal way. This article describes the use of a paper that reviews the measles mumps…

  6. Visualizing Experimental Designs for Balanced ANOVA Models using Lisp-Stat

    Directory of Open Access Journals (Sweden)

    Philip W. Iversen

    2004-12-01

    Full Text Available The structure, or Hasse, diagram described by Taylor and Hilton (1981, American Statistician) provides a visual display of the relationships between factors for balanced complete experimental designs. Using the Hasse diagram, rules exist for determining the appropriate linear model, ANOVA table, expected mean squares, and F-tests in the case of balanced designs. This procedure has been implemented in Lisp-Stat using a software representation of the experimental design. The user can interact with the Hasse diagram to add, change, or delete factors and see the effect on the proposed analysis. The system has potential uses in teaching and consulting.
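
    The df bookkeeping that a Hasse diagram automates can be sketched for the simplest case, a balanced two-factor crossed design with replication (an illustration of the rules, not the Lisp-Stat implementation):

```python
# Degrees-of-freedom rules read off a Hasse diagram for a balanced a x b
# factorial with r replicates (our sketch): each term's df is its number of
# level combinations minus the df of every term above it in the diagram
# (including the grand mean), and the dfs partition the total.

def anova_df(a, b, r):
    df = {"A": a - 1,
          "B": b - 1,
          "A:B": (a - 1) * (b - 1),
          "Error": a * b * (r - 1)}
    assert sum(df.values()) == a * b * r - 1  # dfs partition the total df
    return df

print(anova_df(3, 4, 2))  # {'A': 2, 'B': 3, 'A:B': 6, 'Error': 12}
```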

  7. Two-Stage Experimental Design for Dose–Response Modeling in Toxicology Studies

    OpenAIRE

    Wang, Kai; Yang, Feng; Porter, Dale W; Wu, Nianqiang

    2013-01-01

    The efficient design of experiments (i.e., selection of experimental doses and allocation of animals) is important to establishing dose–response relationships in toxicology studies. The proposed procedure for design of experiments is distinct from those in the literature because it is able to adequately accommodate the special features of the dose–response data, which include non-normality, variance heterogeneity, possibly nonlinearity of the dose–response curve, and data scarcity. The design...

  8. An Empirical Study of Parameter Estimation for Stated Preference Experimental Design

    Directory of Open Access Journals (Sweden)

    Fei Yang

    2014-01-01

    Full Text Available The stated preference experimental design can affect the reliability of parameter estimation in discrete choice models. Scholars have proposed new experimental designs, such as D-efficient and Bayesian D-efficient designs, but insufficient empirical research has been conducted on the effectiveness of these new designs, and there has been little comparative analysis of them against traditional designs. In this paper, a new metro connecting Chengdu and its satellite cities is taken as the research subject to demonstrate the validity of the D-efficient and Bayesian D-efficient designs. Comparisons between these new designs and an orthogonal design were made through model fit and the standard deviation of the parameter estimates; the best model result was then used to analyze travel choice behavior. The results indicate that the Bayesian D-efficient design works better than the D-efficient design. Some variables, including waiting time and arrival time, significantly affect people's choice behavior. The D-efficient and Bayesian D-efficient designs for the MNL model can yield reliable results in the ML model, but the ML model cannot exploit the theoretical advantages of these two designs. Finally, the metro can handle over 40% of the passenger flow if it is operated in the future.
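
    The D-criterion underlying D-efficient designs can be illustrated with a deliberately small example of our own (a one-variable linear model, not the paper's stated-preference MNL setting): a design's D-value is det(X'X)^(1/p), larger being better, and the relative D-efficiency of one design against another is the ratio of their D-values.

```python
# D-value and relative D-efficiency for the two-parameter model f(x) = (1, x)
# (our illustrative sketch, not the paper's choice-model designs).

def xtx(design):
    """X'X for the model y = b0 + b1*x."""
    n = len(design)
    s1 = sum(design)
    s2 = sum(x * x for x in design)
    return [[n, s1], [s1, s2]]

def d_value(design, p=2):
    m = xtx(design)
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return det ** (1.0 / p)

orthogonal = [-1, -1, 1, 1]   # balanced two-level design
lopsided = [-1, 1, 1, 1]      # unbalanced alternative, same run count
eff = d_value(lopsided) / d_value(orthogonal)
print(round(eff, 4))  # 0.866: the unbalanced design is ~87% D-efficient
```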

  9. The Effects of Group Leader Learning Style on Student Knowledge Gain in a Leadership Camp Setting: A Repeated-Measures Experiment

    Science.gov (United States)

    Brown, Nicholas R.; Terry, Robert, Jr.

    2013-01-01

    Many state FFA associations conduct summer camps focusing on leadership and personal development for FFA members. Interestingly, little research has been conducted on the impact or outcomes of these common activities. The purpose of this split-plot factorial repeated-measures experiment was to assess the level of campers' learning of the…

  10. Wind refrigeration : design and results of an experimental facility; Refrigeracion eolica: Diseno y resultados de una instalacion experimental

    Energy Technology Data Exchange (ETDEWEB)

    Beltran, R. G.; Talero, A.

    2004-07-01

    This article describes the experimental setup used to obtain design parameters for a wind driven refrigeration equipment. The system compressor is directly coupled to the wind mill and will provide refrigeration to a community located in La Guajira in northern Colombia. The testing on the experimental installation assessed the refrigeration capacity that could be provided by an open type commercial compressor coupled to the wind mill axis. Power and torque requirements have been evaluated for different wind mill rotational speeds. An assessment of the local conditions relating to wind speed, frequency and preferred direction for the installation site has been made based on measurements by the Meteorological National Institute and independent data from other sources. (Author)

  11. Computerized errorless learning-based memory rehabilitation for Chinese patients with brain injury: a preliminary quasi-experimental clinical design study.

    Science.gov (United States)

    Dou, Z L; Man, D W K; Ou, H N; Zheng, J L; Tam, S F

    2006-03-01

    To evaluate the effectiveness of a computerized, errorless learning-based memory rehabilitation program for Chinese patients with traumatic brain injury (TBI). This study adopted a pre- and post-test quasi-experimental design. A total of 37 patients with TBI were randomly assigned to a Computer-Assisted Memory Training Group (CAMG), a Therapist-Administered Memory Training Group (TAMG) and a Control Group (CG). Except for the CG, patients in the CAMG and TAMG each received 1-month memory training programmes that were similar in content but differed in delivery mode. All patients were followed up 1 month after treatment. The outcome measures were the Neurobehavioural Cognitive Status Examination (NCSE, or Cognistat), the Rivermead Behavioural Memory Test (RBMT) and the Hong Kong List Learning Test (HKLLT). Repeated measures analyses were performed to investigate differences among the three groups. Patients in the CAMG and TAMG performed better than the CG on the NCSE and RBMT, but no significant differences were found between the CAMG and TAMG. The CAMG showed significant improvement on the HKLLT compared with the TAMG and CG. No statistically significant differences were found between the CAMG and TAMG when comparing the post-training outcome measures with the follow-up results. Although there was no difference between the CAMG and TAMG, efficacy was demonstrated relative to the CG. It is suggested that the combined use of errorless learning and a computerized approach may be an effective way of enhancing the memory of patients with TBI; this new method may facilitate the whole memory process and produce a better carryover treatment effect.

  12. Predictors and Variability of Repeat Measurements of Urinary Phenols and Parabens in a Cohort of Shanghai Women and Men

    Science.gov (United States)

    Buckley, Jessie P.; Yang, Gong; Liao, Linda M.; Satagopan, Jaya; Calafat, Antonia M.; Matthews, Charles E.; Cai, Qiuyin; Ji, Bu-Tian; Cai, Hui; Wolff, Mary S.; Rothman, Nathaniel; Zheng, Wei; Xiang, Yong-Bing; Shu, Xiao-Ou; Gao, Yu-Tang; Chow, Wong-Ho

    2014-01-01

    , under certain circumstances, among women. Citation: Engel LS, Buckley JP, Yang G, Liao LM, Satagopan J, Calafat AM, Matthews CE, Cai Q, Ji BT, Cai H, Engel SM, Wolff MS, Rothman N, Zheng W, Xiang YB, Shu XO, Gao YT, Chow WH. 2014. Predictors and variability of repeat measurements of urinary phenols and parabens in a cohort of Shanghai women and men. Environ Health Perspect 122:733–740; http://dx.doi.org/10.1289/ehp.1306830 PMID:24659570

  13. Determination of hydroxy acids in cosmetics by chemometric experimental design and cyclodextrin-modified capillary electrophoresis.

    Science.gov (United States)

    Liu, Pei-Yu; Lin, Yi-Hui; Feng, Chia Hsien; Chen, Yen-Ling

    2012-10-01

    A CD-modified CE method was established for quantitative determination of seven hydroxy acids in cosmetic products. This method involved chemometric experimental design aspects, including fractional factorial design and central composite design. Chemometric experimental design was used to enhance the method's separation capability and to explore the interactions between parameters. Compared to the traditional investigation that uses multiple parameters, the method that used chemometric experimental design was less time-consuming and lower in cost. In this study, the influences of three experimental variables (phosphate concentration, surfactant concentration, and methanol percentage) on the experimental response were investigated by applying a chromatographic resolution statistic function. The optimized conditions were as follows: a running buffer of 150 mM phosphate solution (pH 7) containing 0.5 mM CTAB, 3 mM γ-CD, and 25% methanol; 20 s sample injection at 0.5 psi; a separation voltage of -15 kV; temperature was set at 25°C; and UV detection at 200 nm. The seven hydroxy acids were well separated in less than 10 min. The LOD (S/N = 3) was 625 nM for both salicylic acid and mandelic acid. The correlation coefficient of the regression curve was greater than 0.998. The RSD and relative error values were all less than 9.21%. After optimization and validation, this simple and rapid analysis method was considered to be established and was successfully applied to several commercial cosmetic products.

  14. Adaptive combinatorial design to explore large experimental spaces: approach and validation.

    Science.gov (United States)

    Lejay, L V; Shasha, D E; Palenchar, P M; Kouranov, A Y; Cruikshank, A A; Chou, M F; Coruzzi, G M

    2004-12-01

    Systems biology requires mathematical tools not only to analyse large genomic datasets, but also to explore large experimental spaces in a systematic yet economical way. We demonstrate that two-factor combinatorial design (CD), shown to be useful in software testing, can be used to design a small set of experiments that would allow biologists to explore larger experimental spaces. Further, the results of an initial set of experiments can be used to seed further 'Adaptive' CD experimental designs. As a proof of principle, we demonstrate the usefulness of this Adaptive CD approach by analysing data from the effects of six binary inputs on the regulation of genes in the N-assimilation pathway of Arabidopsis. This CD approach identified the more important regulatory signals previously discovered by traditional experiments using far fewer experiments, and also identified examples of input interactions previously unknown. Tests using simulated data show that Adaptive CD suffers from fewer false positives than traditional experimental designs in determining decisive inputs, and succeeds far more often than traditional or random experimental designs in determining when genes are regulated by input interactions. We conclude that Adaptive CD offers an economical framework for discovering dominant inputs and interactions that affect different aspects of genomic outputs and organismal responses.
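
    The core idea of two-factor CD is pairwise coverage: far fewer runs than the full factorial can still exercise every level combination for every pair of factors. A minimal sketch follows, using 4 binary inputs and an assumed 5-run covering array of our own, not the paper's 6-input experiment:

```python
# Check that a small set of runs covers all pairwise combinations of binary
# factors: for every pair of factor positions, all four level combinations
# (0,0), (0,1), (1,0), (1,1) must occur in some run.

from itertools import combinations, product

def covers_all_pairs(runs, k):
    for i, j in combinations(range(k), 2):
        seen = {(run[i], run[j]) for run in runs}
        if seen != set(product([0, 1], repeat=2)):
            return False
    return True

runs = [  # assumed 5-run pairwise covering array; the full factorial has 16
    (0, 0, 0, 0),
    (0, 1, 1, 1),
    (1, 0, 1, 1),
    (1, 1, 0, 1),
    (1, 1, 1, 0),
]
print(covers_all_pairs(runs, 4))  # True
```

Every pair of the four factors sees all four level combinations in only five runs, which is the economy the Adaptive CD approach exploits.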

  15. Experimental Modelling of the Breakdown Voltage of Air Using Design of Experiments

    Directory of Open Access Journals (Sweden)

    REZOUGA, M.

    2009-02-01

    Full Text Available Many experimental and numerical studies have been devoted to the electric discharge of air, and some mathematical models have been proposed for the critical breakdown voltage. As the latter depends on several parameters, it is difficult to find a formula, theoretical or experimental, that considers many factors. The aim of this paper is to model the critical breakdown voltage in a "Sphere-Sphere" electrode system by using the methodology of experimental designs. Several factors were considered, such as geometrical factors (inter-electrode gap, diameter of the electrodes) and climatic factors (temperature, humidity). Two face-centred central composite designs (CCF) were carried out, the first for the geometrical factors and the second for the climatic factors. The results obtained made it possible to propose mathematical models and to study the interactions between the various factors.
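
    The run layout of a face-centred central composite design (CCF, axial distance alpha = 1) is easy to generate. The sketch below is ours and assumes three generic coded factors, not the exact factor sets used in the paper:

```python
# Generate the runs of a face-centred central composite design (CCF) in coded
# units: 2^k factorial corners, 2k axial points on the face centres (alpha=1),
# plus a centre point.

from itertools import product

def ccf_design(k):
    corners = list(product([-1, 1], repeat=k))
    axial = []
    for i in range(k):
        for a in (-1, 1):
            pt = [0] * k
            pt[i] = a
            axial.append(tuple(pt))
    centre = [tuple([0] * k)]
    return corners + axial + centre

runs = ccf_design(3)
print(len(runs))  # 15 runs: 8 corners + 6 face centres + 1 centre
```

In practice the centre point is replicated several times to estimate pure error; only one copy is listed here.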

  16. Experimental design for stable genetic manipulation in mammalian cell lines: lentivirus and alternatives.

    Science.gov (United States)

    Shearer, Robert F; Saunders, Darren N

    2015-01-01

    The use of third-generation lentiviral vectors is now commonplace in most areas of basic biology. These systems provide a fast, efficient means for modulating gene expression, but experimental design needs to be carefully considered to minimize potential artefacts arising from off-target effects and other confounding factors. This review offers a starting point for those new to lentiviral-based vector systems, addressing the main issues involved with the use of lentiviral systems in vitro and outlines considerations which should be taken into account during experimental design. Factors such as selecting an appropriate system and controls, and practical titration of viral transduction are important considerations for experimental design. We also briefly describe some of the more recent advances in genome editing technology. TALENs and CRISPRs offer an alternative to lentivirus, providing endogenous gene editing with reduced off-target effects often at the expense of efficiency.

  17. Conceptual design of superconducting magnet systems for the Argonne Tokamak Experimental Power Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Wang, S.T.; Turner, L.R.; Mills, F.E.; DeMichele, D.W.; Smelser, P.; Kim, S.H.

    1976-01-01

    As an integral effort in the Argonne Tokamak Experimental Power Reactor Conceptual Design, the conceptual design of a 10-tesla, pure-tension superconducting toroidal-field (TF) coil system has been developed in sufficient detail to define a realistic design for the TF coil system that could be built based upon the current state of technology with minimum technological extrapolations. A conceptual design study of the superconducting ohmic-heating (OH) coils and the superconducting equilibrium-field (EF) coils was also completed. These conceptual designs are developed in sufficient detail, with clear information on high-current ac conductor design, cooling, venting provisions, coil structural support, and zero-loss poloidal coil cryostat design. Also investigated is the EF penetration into the blanket and shield.

  18. A new experimental design method to optimize formulations focusing on a lubricant for hydrophilic matrix tablets.

    Science.gov (United States)

    Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon

    2012-09-01

    A robust experimental design method was developed, combining well-established response surface methodology with time series modeling, to facilitate the formulation development process for hydrophilic matrix tablets incorporating magnesium stearate. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors, x₁ and x₂: a formulation factor (the amount of magnesium stearate) and a processing factor (mixing time). Moreover, different batch sizes (100 and 500 tablet batches) were also evaluated to investigate the effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The obtained optimal settings of magnesium stearate for gelation were 0.46 g with a 2.76 min mixing time for a 100 tablet batch, and 1.54 g with 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g with 7.99 min for a 100 tablet batch, and 1.54 g with 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could be formulated according to the resulting hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing significant factors and hence obtaining optimum formulations, allowing for a systematic and reliable experimental design process.

  19. Perspectives on Prediction Variance and Bias in Developing, Assessing, and Comparing Experimental Designs

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.

    2010-12-01

    The vast majority of response surface methods used in practice to develop, assess, and compare experimental designs focus on variance properties of designs. Because response surface models only approximate the true unknown relationships, models are subject to bias errors as well as variance errors. Beginning with the seminal paper of Box and Draper (1959) and over the subsequent 50 years, methods that consider bias and mean-squared-error (variance and bias) properties of designs have been presented in the literature. However, these methods are not widely implemented in software and are not routinely used to develop, assess, and compare experimental designs in practice. Methods for developing, assessing, and comparing response surface designs that account for variance properties are reviewed. Brief synopses of publications that consider bias or mean-squared-error properties are provided. The difficulties and approaches for addressing bias properties of designs are summarized. Perspectives on experimental design methods that account for bias and/or variance properties and on future needs are presented.

  20. Optimal Design and Experimental characterisation of short optical pulse compression using CDPF

    DEFF Research Database (Denmark)

    Yujun, Qian; Quist, S.

    1999-01-01

    We present the optimal design and experimental characterisation of optical pulse compression using a comblike dispersion-profiled fibre (CDPF). A pulse train at 10 GHz with a pulsewidth of 1 ps and side-lobe suppression of 30 dB can be obtained.

  1. Experimental Investigations of Decentralised Control Design for The Stabilisation of Rotor-Gas Bearings

    DEFF Research Database (Denmark)

    Theisen, Lukas Roy Svane; Galeazzi, Roberto; Niemann, Hans Henrik;

    2015-01-01

    …directions. Hardening and softening P-lead controllers are designed based on the experimentally identified models, and salient features of both controllers are discussed. Both controllers are implemented and validated on the physical test rig. Experimental results confirm the validity of the proposed …-box identification for the design of stabilising controllers, capable of enabling the active lubrication of the journal. The root locus analysis shows that two different control solutions are feasible for dampening the first two eigenfrequencies of the rotor-gas bearing in the horizontal and vertical…

  2. Recent developments in optimal experimental designs for functional magnetic resonance imaging

    Institute of Scientific and Technical Information of China (English)

    Kao, Ming-Hung; Temkit, M'hamed; Wong, Weng Kee

    2014-01-01

    Functional magnetic resonance imaging (fMRI) is one of the leading brain mapping technologies for studying brain activity in response to mental stimuli. For neuroimaging studies utilizing this pioneering technology, there is a great demand for high-quality experimental designs that help to collect informative data to make precise and valid inferences about brain functions. This paper provides a survey of recent developments in experimental designs for fMRI studies. We briefly introduce some analytical and computational tools for obtaining good designs based on a specified design selection criterion. Research results about some commonly considered designs, such as blocked designs and m-sequences, are also discussed. Moreover, we present a recently proposed new type of fMRI design that can be constructed using a certain type of Hadamard matrices. Under certain assumptions, these designs can be shown to be statistically optimal. Some future research directions in the design of fMRI experiments are also discussed.
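
    One of the design families mentioned, m-sequences, can be generated with a linear feedback shift register. A small sketch follows; the 3-stage register and its taps are our assumption, chosen to give a maximal-length sequence of period 2^3 - 1 = 7:

```python
# Generate one period of an m-sequence from a 3-stage maximal-length LFSR
# (our illustrative register/taps, not a construction from the survey).
# Feedback XORs the newest and oldest stages; a maximal-length register
# cycles through all 7 nonzero states, giving period 2^3 - 1 = 7.

def m_sequence(register=(1, 0, 0), taps=(0, 2)):
    state = list(register)
    out = []
    for _ in range(2 ** len(state) - 1):
        out.append(state[-1])                     # output the oldest stage
        feedback = state[taps[0]] ^ state[taps[1]]
        state = [feedback] + state[:-1]           # shift in the feedback bit
    return out

seq = m_sequence()
print(len(seq), sum(seq))  # 7 4  (period 7, with 4 ones and 3 zeros)
```

The near-balance of ones and zeros (2^(n-1) ones vs. 2^(n-1) - 1 zeros) underlies the good estimation properties of m-sequence-based fMRI designs.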

  3. Visual Servoing Tracking Control of a Ball and Plate System: Design, Implementation and Experimental Validation

    Directory of Open Access Journals (Sweden)

    Ming-Tzu Ho

    2013-07-01

    Full Text Available This paper presents the design, implementation and validation of real-time visual servoing tracking control for a ball and plate system. The position of the ball is measured with a machine vision system. The image processing algorithms of the machine vision system are pipelined and implemented on a field programmable gate array (FPGA) device to meet real-time constraints. A detailed dynamic model of the system is derived for the simulation study. By neglecting the high-order coupling terms, the ball and plate system model is simplified into two decoupled ball and beam systems, and an approximate input-output feedback linearization approach is then used to design the controller for trajectory tracking. The designed control law is implemented on a digital signal processor (DSP). The validity of the performance of the developed control system is investigated through simulation and experimental studies. Experimental results show that the designed system functions well, with reasonable agreement with simulations.

  5. Single-Case Experimental Design: Current Standards and Applications in Occupational Therapy.

    Science.gov (United States)

    Lane, Justin D; Ledford, Jennifer R; Gast, David L

    Occupational therapy is a field with a long-standing history of recommending and implementing interventions designed to improve the quality of life of clients with disabilities. Often, the interventions are individualized to meet the needs of this diverse group of clients in dynamic settings. Identifying effective and efficient interventions for such a diverse group of clients and settings requires a flexible research approach. Single-case experimental designs (SCEDs) allow practitioners and researchers to answer experimental questions within the context of rigorous research designs. The purpose of this article is to highlight the similarities between the mission of occupational therapy and SCEDs. Recommendations for designing single-case studies with the framework provided by the Single-Case Reporting Guideline in Behavioral Interventions are provided. In addition, common problems and proposed solutions, along with implications for practitioners and researchers, are provided. Copyright © 2017 by the American Occupational Therapy Association, Inc.

  6. Design and Experimental Validation of a Simple Controller for a Multi-Segment Magnetic Crawler Robot

    Science.gov (United States)

    2015-04-01

    X., "Development of a wall climbing robot for ship rust removal," Int. Conf. on Mechatronics and Automation (ICMA), 4610-4615 (2009). [6] Leon...Design and experimental validation of a simple controller for a multi-segment magnetic crawler robot Leah Kelley*a, Saam Ostovari**b, Aaron B...magnetic crawler robot has been designed for ship hull inspection. In its simplest version, passive linkages that provide two degrees of relative

  7. EXPERIMENTAL DESIGN APPLIED TO MODELING OF THE AIR-TIGHTNESS OF A BUILDING

    OpenAIRE

    2015-01-01

    The paper presents experimental designs that can be used in modeling the air-tightness of buildings as second-order functions, using the response surface method and corresponding experiment designs. The factors assumed to be significant for a model of building air-tightness, and thus used in the experiment designs, are the heat transfer coefficient of the external walls, the heat transfer coefficient of the windows, and the position of the housing units with respect to the building envelope. We c...

  8. Experimental Design of a UCAV-Based High-Energy Laser Weapon

    Science.gov (United States)

    2016-12-01

    Experimental Design of a UCAV-Based High-Energy Laser Weapon, by Antonios Lionis. Master's thesis, December 2016. Thesis Advisor: Keith R. Cohn; Co-Advisor: Eugene Paulo.

  9. Using R in experimental design with BIBD: An application in health sciences

    Science.gov (United States)

    Oliveira, Teresa A.; Francisco, Carla; Oliveira, Amílcar; Ferreira, Agostinho

    2016-06-01

    Considering the implementation of an experimental design in any field, the experimenter must pay particular attention to, and seek the best strategies for, the following steps: planning the design selection, conducting the experiments, collecting the observed data, and proceeding to analysis and interpretation of the results. The focus is on providing both a deep understanding of the problem under research and a powerful experimental process at a reduced cost. Mainly because it allows sources of variation to be separated, experimental design has long been strongly recommended in the health sciences. Particular attention has been devoted to block designs, and more precisely to balanced incomplete block designs (BIBDs), whose relevance stems from the fact that they allow simultaneous testing of a number of treatments larger than the block size. Our example refers to a possible study of inter-rater reliability in Parkinson's disease, using the UPDRS (Unified Parkinson's Disease Rating Scale) to test whether there are significant differences between the specialists who evaluate the patients' performances. Statistical studies on this disease are described, for example, in Richards et al. (1994), where the authors investigate the inter-rater reliability of the UPDRS motor examination. We consider a simulation of a practical situation in which patients were observed by different specialists and the UPDRS assessment of the impact of Parkinson's disease was recorded. Assigning treatments to the subjects following a BIBD(9,24,8,3,2) structure, we illustrate that BIB designs can be used as a powerful tool to solve emerging problems in this area. Since a structure with repeated blocks allows some block contrasts to have minimum variance (see Oliveira et al., 2006), the design with cardinality 12 was selected for the example. R software was used for the computations.
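    The BIBD(9,24,8,3,2) cited above can be checked against the standard necessary conditions for a balanced incomplete block design. The snippet below is an illustrative sketch, not the R code used in the study, and the helper name `bibd_conditions_hold` is made up for this example:

```python
# Illustrative sketch: checking the necessary conditions for the
# BIBD(9, 24, 8, 3, 2) used in the example above. The helper name is
# hypothetical; parameters follow the usual (v, b, r, k, lambda) order.

def bibd_conditions_hold(v, b, r, k, lam):
    """Return True if (v, b, r, k, lam) satisfy the two standard
    necessary conditions for a balanced incomplete block design:
    b*k == v*r and lam*(v - 1) == r*(k - 1)."""
    return b * k == v * r and lam * (v - 1) == r * (k - 1)

print(bibd_conditions_hold(9, 24, 8, 3, 2))  # → True
```

Here v = 9 treatments, b = 24 blocks, r = 8 replicates per treatment, block size k = 3, and every treatment pair co-occurs in λ = 2 blocks, so 24·3 = 9·8 and 2·(9−1) = 8·(3−1) both hold.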

  10. Quantification of pore size distribution using diffusion NMR: experimental design and physical insights.

    Science.gov (United States)

    Katz, Yaniv; Nevo, Uri

    2014-04-28

    Pulsed field gradient (PFG) diffusion NMR experiments are sensitive to restricted diffusion within porous media and can thus reveal essential microstructural information about the confining geometry. Optimal design methods for inverse problems select preferred experimental settings to improve parameter estimation quality. However, in pore size distribution (PSD) estimation using NMR methods, as in other ill-posed problems, optimal design strategies and criteria are scarce. We formulate here a new optimization framework for ill-posed problems. This framework is suitable for optimizing PFG experiments for probing geometries that are solvable by the Multiple Correlation Function approach. The framework is based on a heuristic methodology designed to select experimental sets that balance lowering the inherent ill-posedness against increasing the NMR signal intensity. This method also selects favorable discrete pore sizes used for PSD estimation. Numerical simulations demonstrate that using this framework greatly improves the sensitivity of PFG experimental sets to the pore sizes. The optimization also sheds light on significant features of the preferred experimental sets. Increasing the gradient strength and varying multiple experimental parameters are found to be preferable for reducing the ill-posedness. We further evaluate the amount of pore size information that can be obtained by wisely selecting the durations of the diffusion and mixing times. Finally, we discuss the ramifications of using single PFG or double PFG sequences for PSD estimation. In conclusion, the above optimization method can serve as a useful tool for experimenters interested in quantifying the PSDs of different specimens. Moreover, the applicability of the suggested optimization framework extends far beyond the field of PSD estimation in diffusion NMR, reaching the design of sampling schemes for other ill-posed problems.

  11. Design characteristics and requirements of irradiation holes for research reactor experimental facilities

    Energy Technology Data Exchange (ETDEWEB)

    Park, Cheol; Lee, B. C.; Chae, H. T.; Lee, C. S.; Seo, C. G

    2003-07-01

    To support the design of a new high-performance research reactor, this report summarizes the applications of research reactors in various fields and the design characteristics of experimental facilities such as vertical irradiation holes and beam tubes. Basic requirements of such experimental facilities are also described. Research reactors have been widely utilized in industry, engineering, medicine, life science, the environment, and other fields, and their applications continue to expand with the development of technology. Judging from the research reactors recently constructed or planned, the trend is toward multi-purpose research reactors with intensive neutron-beam research capability. In the layout of the experimental facilities, the number and configuration of irradiation and beam holes should be optimized at the early design stage to meet required test conditions such as neutron flux; fundamentally, a high neutron flux is required to perform experiments efficiently. In this respect, neutron flux is regarded as one of the important parameters for judging research reactor performance. Key inputs for a new research reactor design are the utilization demands and requirements of the experimental holes. Basic requirements to be considered in a new design were therefore summarized from a survey of the experimental facilities of various research reactors of around 20 MW thermal power and from HANARO utilization experience. An example is also suggested of the minimum requirements for experimental holes, such as size, number, and neutron flux, in a new research reactor intended for export to developing countries such as Vietnam.

  12. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cooley, Scott K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vienna, John D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Crum, Jarrod V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-07-24

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer

  13. Digital learning material for experimental design and model building in molecular biology

    NARCIS (Netherlands)

    Aegerter-Wilmsen, T.

    2005-01-01

    Designing experimental approaches is a major cognitive skill in molecular biology research, and building models, including quantitative ones, is a cognitive skill which is rapidly gaining importance. Since molecular biology education at university level is aimed at educating future researchers, we c

  14. Whither Instructional Design and Teacher Training? The Need for Experimental Research

    Science.gov (United States)

    Gropper, George L.

    2015-01-01

    This article takes a contrarian position: an "instructional design" or "teacher training" model, because of the sheer number of its interconnected parameters, is too complex to assess or to compare with other models. Models may not be the way to go just yet. This article recommends instead prior experimental research on limited…

  15. Guided-Inquiry Labs Using Bean Beetles for Teaching the Scientific Method & Experimental Design

    Science.gov (United States)

    Schlueter, Mark A.; D'Costa, Allison R.

    2013-01-01

    Guided-inquiry lab activities with bean beetles ("Callosobruchus maculatus") teach students how to develop hypotheses, design experiments, identify experimental variables, collect and interpret data, and formulate conclusions. These activities provide students with real hands-on experiences and skills that reinforce their understanding of the…

  16. Optimal experimental design for non-linear models theory and applications

    CERN Document Server

    Kitsos, Christos P

    2013-01-01

    This book tackles the Optimal Non-Linear Experimental Design problem from an applications perspective. At the same time it offers extensive mathematical background material that avoids technicalities, making it accessible to non-mathematicians: Biologists, Medical Statisticians, Sociologists, Engineers, Chemists and Physicists will find new approaches to conducting their experiments. The book is recommended for Graduate Students and Researchers.

  17. Characterizing Variability in Smestad and Gratzel's Nanocrystalline Solar Cells: A Collaborative Learning Experience in Experimental Design

    Science.gov (United States)

    Lawson, John; Aggarwal, Pankaj; Leininger, Thomas; Fairchild, Kenneth

    2011-01-01

    This article describes a collaborative learning experience in experimental design that closely approximates what practicing statisticians and researchers in applied science experience during consulting. Statistics majors worked with a teaching assistant from the chemistry department to conduct a series of experiments characterizing the variation…

  18. An Experimental Two-Way Video Teletraining System: Design, Development and Evaluation.

    Science.gov (United States)

    Simpson, Henry; And Others

    1991-01-01

    Describes the design, development, and evaluation of an experimental two-way video teletraining (VTT) system by the Navy that consisted of two classrooms linked by a land line to enable two-way audio/video communication. Trends in communication and computer technology for training are described, and a cost analysis is included. (12 references)…

  20. Experimental Aeroelastic Models Design and Wind Tunnel Testing for Correlation with New Theory

    Directory of Open Access Journals (Sweden)

    2016-04-01

    Full Text Available Several examples of experimental model designs, wind tunnel tests and correlation with new theory are presented in this paper. The goal is not only to evaluate a new theory, computational method or aeroelastic phenomenon, but also to provide new insights into nonlinear aeroelastic phenomena, flutter, limit cycle oscillation (LCO) and gust response.

  2. Guided Inquiry in a Biochemistry Laboratory Course Improves Experimental Design Ability

    Science.gov (United States)

    Goodey, Nina M.; Talgar, Cigdem P.

    2016-01-01

    Many biochemistry laboratory courses expose students to laboratory techniques through pre-determined experiments in which students follow stepwise protocols provided by the instructor. This approach fails to provide students with sufficient opportunities to practice experimental design and critical thinking. Ten inquiry modules were created for a…

  3. Using Superstitions & Sayings To Teach Experimental Design in Beginning and Advanced Biology Classes.

    Science.gov (United States)

    Hoefnagels, Marielle H.; Rippel, Scott A.

    2003-01-01

    Presents a collaborative learning exercise intended to teach the unfamiliar terminology of experimental design both in biology classes and biochemistry laboratories. The exercise promotes discussion and debate, develops communication skills, and emphasizes peer review. The effectiveness of the exercise is supported by student surveys. (SOE)

  4. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    Science.gov (United States)

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  5. Trade-offs in experimental designs for estimating post-release mortality in containment studies

    Science.gov (United States)

    Rogers, Mark W.; Barbour, Andrew B; Wilson, Kyle L

    2014-01-01

    Estimates of post-release mortality (PRM) facilitate accounting for unintended deaths from fishery activities and contribute to the development of fishery regulations and harvest quotas. The most popular method for estimating PRM employs containers for comparing control and treatment fish, yet guidance on the experimental design of PRM container studies is lacking. We used simulations to evaluate trade-offs between the number of containers (replicates) employed and the number of fish per container when estimating tagging mortality. We also investigated the effects of control fish survival and how among-container variation in survival affects the ability to detect additive mortality. Simulations revealed that high experimental effort was required when: (1) additive treatment mortality was small, (2) control fish mortality was non-negligible, and (3) among-container variability in control fish mortality exceeded 10% of the mean. We provide programming code to allow investigators to compare alternative designs for their individual scenarios and expose trade-offs among experimental design options. Results from our simulations and simulation code will help investigators develop efficient PRM experimental designs for precise mortality assessment.
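    The containers-versus-fish-per-container trade-off described above can be explored with a quick Monte Carlo sketch. The function below is a hypothetical illustration, not the authors' simulation code: it draws per-fish deaths as Bernoulli trials and returns the additive-mortality estimate (treatment minus control mortality).

```python
import random

def simulate_prm(n_containers, fish_per, control_surv, added_mort, seed=1):
    """Hypothetical Monte Carlo sketch of a containment study.
    Each container holds `fish_per` fish; treatment containers suffer
    `added_mort` additional mortality on top of control mortality.
    Returns the additive-mortality estimate (treatment - control)."""
    rng = random.Random(seed)

    def container_deaths(p_die):
        # Bernoulli death trial for each fish in one container
        return sum(rng.random() < p_die for _ in range(fish_per))

    p_control = 1.0 - control_surv
    p_treat = min(1.0, p_control + added_mort)
    control = [container_deaths(p_control) for _ in range(n_containers)]
    treat = [container_deaths(p_treat) for _ in range(n_containers)]
    n_fish = n_containers * fish_per
    return sum(treat) / n_fish - sum(control) / n_fish

# With 10 containers of 30 fish, 95% control survival and 10% added
# mortality, the estimate should land near 0.10:
print(round(simulate_prm(10, 30, control_surv=0.95, added_mort=0.10), 3))
```

Re-running with more containers or more fish per container (or with higher control mortality) shows how the sampling noise around the true additive mortality shrinks or grows, which is the trade-off the study quantifies.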

  6. Return to Our Roots: Raising Radishes to Teach Experimental Design. Methods and Techniques.

    Science.gov (United States)

    Stallings, William M.

    1993-01-01

    Reviews research in teaching applied statistics. Concludes that students should analyze data from studies they have designed and conducted. Describes an activity in which students study germination and growth of radish seeds. Includes a table providing student instructions for both the experimental procedure and data analysis. (CFR)

  7. A Course on Experimental Design for Different University Specialties: Experiences and Changes over a Decade

    Science.gov (United States)

    Martinez Luaces, Victor; Velazquez, Blanca; Dee, Valerie

    2009-01-01

    We analyse the origin and development of an Experimental Design course which has been taught in several faculties of the Universidad de la Republica and other institutions in Uruguay, over a 10-year period. At the end of the course, students were assessed by carrying out individual work projects on real-life problems, which was innovative for…

  9. Bias Corrections for Standardized Effect Size Estimates Used with Single-Subject Experimental Designs

    Science.gov (United States)

    Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2014-01-01

    A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…

  10. An experimental design for total container impact response modeling at extreme temperatures

    Science.gov (United States)

    Kobler, V. P.; Wyskida, R. M.; Johannes, J. D.

    1979-01-01

    An experimental design (a drop test) was developed to test the effects of confinement upon cushions. The drop test produced consistent corner void cushion data from which mathematical models were developed. A mathematical relationship between temperature and drop height was found.

  11. Development of the Neuron Assessment for Measuring Biology Students' Use of Experimental Design Concepts and Representations

    Science.gov (United States)

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy J.

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students' competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not…

  12. Combined application of mixture experimental design and artificial neural networks in the solid dispersion development.

    Science.gov (United States)

    Medarević, Djordje P; Kleinebudde, Peter; Djuriš, Jelena; Djurić, Zorica; Ibrić, Svetlana

    2016-01-01

    This study demonstrates, for the first time, the combined application of mixture experimental design and artificial neural networks (ANNs) in solid dispersion (SD) development. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs were prepared by the solvent casting method to improve the carbamazepine dissolution rate. The influence of the composition of the prepared SDs on the carbamazepine dissolution rate was evaluated using a d-optimal mixture experimental design and multilayer perceptron ANNs. Physicochemical characterization proved the presence of the most stable carbamazepine polymorph III within the SD matrix. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs significantly improved the carbamazepine dissolution rate compared to the pure drug. Models developed by ANNs and mixture experimental design described well the relationship between the proportions of SD components and the percentage of carbamazepine released after 10 (Q10) and 20 (Q20) min, with the ANN model exhibiting better predictability on the test data set. The proportions of carbamazepine and poloxamer 188 exhibited the highest influence on the carbamazepine release rate. The highest release rate was observed for SDs with the lowest proportions of carbamazepine and the highest proportions of poloxamer 188. ANNs and mixture experimental design can be used as powerful data modeling tools in the systematic development of SDs. Taking into account the advantages and disadvantages of both techniques, their combined application should be encouraged.
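    As a simple illustration of mixture designs for a three-component system such as carbamazepine-Soluplus®-poloxamer 188, the sketch below enumerates a simplex-lattice design, in which every proportion is a multiple of 1/m and the proportions sum to one. This is a generic textbook design, not the d-optimal design actually used in the study:

```python
from itertools import product

def simplex_lattice(n_components, m):
    """Enumerate the {n_components, m} simplex-lattice design: every
    composition whose proportions are multiples of 1/m and sum to 1.
    A generic mixture design, not the study's d-optimal design."""
    return [tuple(c / m for c in combo)
            for combo in product(range(m + 1), repeat=n_components)
            if sum(combo) == m]

pts = simplex_lattice(3, 2)  # ternary mixture, proportions in steps of 1/2
print(len(pts))  # → 6
```

The six points are the three pure components and the three 50:50 binary blends; d-optimal algorithms instead pick points from a candidate set to maximize the information matrix for a chosen model.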

  13. Building upon the Experimental Design in Media Violence Research: The Importance of Including Receiver Interpretations.

    Science.gov (United States)

    Potter, W. James; Tomasello, Tami K.

    2003-01-01

    Argues that the inclusion of viewer interpretation variables in experimental design and analysis procedures can greatly increase the methodology's ability to explain variance. Focuses attention on the between-group differences, while an analysis of how individual participants interpret the cues in the stimulus material focused attention on the…

  16. An interactive visualization tool and data model for experimental design in systems biology.

    Science.gov (United States)

    Kapoor, Shray; Quo, Chang Feng; Merrill, Alfred H; Wang, May D

    2008-01-01

    Experimental design is important, but is often under-supported, in systems biology research. To improve experimental design, we extend the visualization of complex sphingolipid pathways to study biosynthetic origin in SphinGOMAP. We use the ganglio-series sphingolipid dataset as a test bed and the Java Universal Network / Graph Framework (JUNG) visualization toolkit. The result is an interactive visualization tool and data model for experimental design in lipid systems biology research. We improve the current SphinGOMAP in terms of interactive visualization by allowing (i) choice of four different network layouts, (ii) dynamic addition / deletion of on-screen molecules and (iii) mouse-over to reveal detailed molecule data. Future work will focus on integrating various lipid-relevant data systematically i.e. SphinGOMAP biosynthetic data, Lipid Bank molecular data (Japan) and Lipid MAPS metabolic pathway data (USA). We aim to build a comprehensive and interactive communication platform to improve experimental design for scientists globally in high-throughput lipid systems biology research.

  18. Intermediate experimental vehicle, ESA program aerodynamics-aerothermodynamics key technologies for spacecraft design and successful flight

    Science.gov (United States)

    Dutheil, Sylvain; Pibarot, Julien; Tran, Dac; Vallee, Jean-Jacques; Tribot, Jean-Pierre

    2016-07-01

    With the aim of placing Europe among the world's space players in the strategic area of atmospheric re-entry, several studies on experimental vehicle concepts and improvements of critical re-entry technologies have paved the way for the flight of an experimental spacecraft. The successful flight of the Intermediate eXperimental Vehicle (IXV), under ESA's Future Launchers Preparatory Programme (FLPP), is definitively a significant step forward from the Atmospheric Reentry Demonstrator flight (1998), establishing Europe as a key player in this field. The IXV project objectives were the design, development, manufacture, and ground and flight verification of an autonomous European lifting and aerodynamically controlled re-entry system that is highly flexible and maneuverable. The paper presents the role of aerodynamics and aerothermodynamics among the key technologies for designing an atmospheric re-entry spacecraft and securing a successful flight.

  19. Experimental system design for the integration of trapped-ion and superconducting qubit systems

    Science.gov (United States)

    De Motte, D.; Grounds, A. R.; Rehák, M.; Rodriguez Blanco, A.; Lekitsch, B.; Giri, G. S.; Neilinger, P.; Oelsner, G.; Il'ichev, E.; Grajcar, M.; Hensinger, W. K.

    2016-12-01

    We present a design for the experimental integration of ion trapping and superconducting qubit systems as a step towards the realization of a quantum hybrid system. The scheme addresses two key difficulties in realizing such a system: a combined microfabricated ion trap and superconducting qubit architecture, and the experimental infrastructure to facilitate both technologies. Developing upon work by Kielpinski et al. (Phys Rev Lett 108(13):130504, 2012. doi: 10.1103/PhysRevLett.108.130504), we describe the design, simulation and fabrication process for a microfabricated ion trap capable of coupling an ion to a superconducting microwave LC circuit with a coupling strength in the tens of kHz. We also describe existing difficulties in combining the experimental infrastructure of an ion trapping set-up into a dilution refrigerator with superconducting qubits and present solutions that can be immediately implemented using current technology.

  20. Simulated Conversations With Virtual Humans to Improve Patient-Provider Communication and Reduce Unnecessary Prescriptions for Antibiotics: A Repeated Measure Pilot Study

    Science.gov (United States)

    2017-01-01

    Background Despite clear evidence that antibiotics do not cure viral infections, the problem of unnecessary prescribing of antibiotics in ambulatory care persists, and in some cases, prescribing patterns have increased. The overuse of antibiotics for treating viral infections has created numerous economic and clinical consequences including increased medical costs due to unnecessary hospitalizations, antibiotic resistance, disruption of gut bacteria, and obesity. Recent research has underscored the importance of collaborative patient-provider communication as a means to reduce the high rates of unnecessary prescriptions for antibiotics. However, most patients and providers do not feel prepared to engage in such challenging conversations. Objectives The aim of this pilot study was to assess the ability of a brief 15-min simulated role-play conversation with virtual humans to serve as a preliminary step to help health care providers and patients practice, and learn how to engage in effective conversations about antibiotics overuse. Methods A total of 69 participants (35 providers and 34 patients) completed the simulation once in one sitting. A pre-post repeated measures design was used to assess changes in patients’ and providers’ self-reported communication behaviors, activation, and preparedness, intention, and confidence to effectively communicate in the patient-provider encounter. Changes in patients’ knowledge and beliefs regarding antibiotic use were also evaluated. Results Patients experienced a short-term positive improvement in beliefs about appropriate antibiotic use for infection (F(1,30)=14.10, P=.001). Knowledge scores regarding the correct uses of antibiotics improved immediately postsimulation, but decreased at the 1-month follow-up (F(1,30)=31.16, P.10) Patients with lower levels of activation exhibited positive, short-term benefits in increased intent and confidence to discuss their needs and ask questions in the clinic visit, positive attitudes

  1. Heart rate variability and DNA methylation levels are altered after short-term metal fume exposure among occupational welders: a repeated-measures panel study

    OpenAIRE

    2014-01-01

    Background: In occupational settings, boilermakers are exposed to high levels of metallic fine particulate matter (PM2.5) generated during the welding process. The effect of welding PM2.5 on heart rate variability (HRV) has been described, but the relationship between PM2.5, DNA methylation, and HRV is not known. Methods: In this repeated-measures panel study, we recorded resting HRV and measured DNA methylation levels in transposable elements Alu and long interspersed nuclear element-1 (LINE...

  2. Marginal biotin deficiency can be induced experimentally in humans using a cost-effective outpatient design.

    Science.gov (United States)

    Stratton, Shawna L; Henrich, Cindy L; Matthews, Nell I; Bogusiewicz, Anna; Dawson, Amanda M; Horvath, Thomas D; Owen, Suzanne N; Boysen, Gunnar; Moran, Jeffery H; Mock, Donald M

    2012-01-01

    To date, marginal, asymptomatic biotin deficiency has been successfully induced experimentally by the use of labor-intensive inpatient designs requiring rigorous dietary control. We sought to determine if marginal biotin deficiency could be induced in humans in a less expensive outpatient design incorporating a self-selected, mixed general diet. We sought to examine the efficacy of three outpatient study designs: two based on oral avidin dosing and one based on a diet high in undenatured egg white for a period of 28 d. In study design 1, participants (n = 4; 3 women) received avidin in capsules with a biotin binding capacity of 7 times the estimated dietary biotin intake of a typical self-selected diet. In study design 2, participants (n = 2; 2 women) received double the amount of avidin capsules (14 times the estimated dietary biotin intake). In study design 3, participants (n = 5; 3 women) consumed egg-white beverages containing avidin with a biotin binding capacity of 7 times the estimated dietary biotin intake. Established indices of biotin status [lymphocyte propionyl-CoA carboxylase activity; urinary excretion of 3-hydroxyisovaleric acid, 3-hydroxyisovaleryl carnitine (3HIA-carnitine), and biotin; and plasma concentration of 3HIA-carnitine] indicated that study designs 1 and 2 were not effective in inducing marginal biotin deficiency, but study design 3 was as effective as previous inpatient study designs that induced deficiency by egg-white beverage. Marginal biotin deficiency can be induced experimentally by using a cost-effective outpatient design by avidin delivery in egg-white beverages. This design should be useful to the broader nutritional research community.

  3. Experimental modeling of high-voltage corona discharge using design of experiments

    Institute of Scientific and Technical Information of China (English)

    Rezzouga M; Tilmatine A; Gouri R; Medics K; Dascalescu L

    2007-01-01

    Many studies, both experimental and numerical, have been devoted to the electric current of corona discharge, and several mathematical models have been proposed to express it. Because the current depends on several parameters, it is difficult to find a theoretical or experimental formula that accounts for all the factors. We therefore opted for the methodology of experimental designs, also called Taguchi's methodology, which represents a powerful tool generally employed when the process has many factors to consider. The objective of this paper is to model the current using this experimental methodology. The factors considered were geometrical factors (interelectrode interval, surface of the grounded plane electrode, curvature radius of the point electrode), climatic factors (temperature and relative humidity), and the applied high voltage. The results of the experiments made it possible to obtain mathematical models and to analyse the interactions between all factors.
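
The experimental-designs approach the abstract describes amounts to fitting main effects and two-factor interactions from a factorial design by least squares. A minimal sketch, with hypothetical coded factors and synthetic responses standing in for the paper's corona-current data:

```python
import itertools
import numpy as np

# Hypothetical two-level factorial: three coded factors (-1/+1), 8 runs.
runs = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
rng = np.random.default_rng(0)

# Synthetic "corona current" generated from known main effects and one
# interaction, plus a little measurement noise.
y = 5.0 + 2.0*runs[:, 0] - 1.5*runs[:, 1] + 0.8*runs[:, 2] \
    + 0.5*runs[:, 0]*runs[:, 1] + rng.normal(0, 0.05, len(runs))

# Design matrix: intercept, main effects, all two-factor interactions.
X = np.column_stack([
    np.ones(len(runs)), runs[:, 0], runs[:, 1], runs[:, 2],
    runs[:, 0]*runs[:, 1], runs[:, 0]*runs[:, 2], runs[:, 1]*runs[:, 2],
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))  # recovers the generating coefficients
```

Because the factorial design is orthogonal, each coefficient is estimated independently, which is what makes interaction analysis from such small designs possible.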

  4. Efficient experimental design and analysis strategies for the detection of differential expression using RNA-Sequencing

    Directory of Open Access Journals (Sweden)

    Robles José A

    2012-09-01

    Full Text Available Abstract Background RNA sequencing (RNA-Seq) has emerged as a powerful approach for the detection of differential gene expression, with both high-throughput and high-resolution capabilities possible depending upon the experimental design chosen. Multiplex experimental designs are now readily available; these can be utilised to increase the number of samples or replicates profiled at the cost of decreased sequencing depth generated per sample. These strategies impact on the power of the approach to accurately identify differential expression. This study presents a detailed analysis of the power to detect differential expression in a range of scenarios, including simulated null and differential expression distributions with varying numbers of biological or technical replicates, sequencing depths and analysis methods. Results Differential and non-differential expression datasets were simulated using a combination of negative binomial and exponential distributions derived from real RNA-Seq data. These datasets were used to evaluate the performance of three commonly used differential expression analysis algorithms and to quantify the changes in power with respect to true and false positive rates when simulating variations in sequencing depth, biological replication and multiplex experimental design choices. Conclusions This work quantitatively explores comparisons between contemporary analysis tools and experimental design choices for the detection of differential expression using RNA-Seq. We found that the DESeq algorithm performs more conservatively than edgeR and NBPSeq. With regard to testing of various experimental designs, this work strongly suggests that greater power is gained through the use of biological replicates relative to library (technical) replicates and sequencing depth. Strikingly, sequencing depth could be reduced to as low as 15% without substantial impacts on false positive or true positive rates.
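
The type of power simulation described can be sketched as follows: draw negative binomial counts for two groups at varying replicate numbers and count significant tests. A Welch t-test on log counts stands in for the DESeq/edgeR/NBPSeq algorithms the study actually compares; the means, dispersion and fold change below are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def nb_counts(mean, dispersion, size):
    """Negative binomial counts with variance = mean + dispersion * mean**2,
    the parameterisation commonly used for RNA-Seq read counts."""
    n = 1.0 / dispersion
    return rng.negative_binomial(n, n / (n + mean), size)

def power(n_reps, fold_change, mean=100, disp=0.1, n_sims=500, alpha=0.05):
    """Fraction of simulated genes called differentially expressed."""
    hits = 0
    for _ in range(n_sims):
        a = nb_counts(mean, disp, n_reps)
        b = nb_counts(mean * fold_change, disp, n_reps)
        # Welch t-test on log counts: a crude stand-in for DESeq/edgeR.
        _, p = stats.ttest_ind(np.log1p(a), np.log1p(b), equal_var=False)
        hits += p < alpha
    return hits / n_sims

p3, p10 = power(3, 2.0), power(10, 2.0)
print(p3, p10)  # more biological replicates -> higher power
```

Repeating the comparison while also scaling `mean` down (lower sequencing depth) reproduces the kind of replicates-versus-depth trade-off the study quantifies.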

  5. Designing specific protein-protein interactions using computation, experimental library screening, or integrated methods.

    Science.gov (United States)

    Chen, T Scott; Keating, Amy E

    2012-07-01

    Given the importance of protein-protein interactions for nearly all biological processes, the design of protein affinity reagents for use in research, diagnosis or therapy is an important endeavor. Engineered proteins would ideally have high specificities for their intended targets, but achieving interaction specificity by design can be challenging. There are two major approaches to protein design or redesign. Most commonly, proteins and peptides are engineered using experimental library screening and/or in vitro evolution. An alternative approach involves using protein structure and computational modeling to rationally choose sequences predicted to have desirable properties. Computational design has successfully produced novel proteins with enhanced stability, desired interactions and enzymatic function. Here we review the strengths and limitations of experimental library screening and computational structure-based design, giving examples where these methods have been applied to designing protein interaction specificity. We highlight recent studies that demonstrate strategies for combining computational modeling with library screening. The computational methods provide focused libraries predicted to be enriched in sequences with the properties of interest. Such integrated approaches represent a promising way to increase the efficiency of protein design and to engineer complex functionality such as interaction specificity.

  6. Bayesian experimental design for the active nitridation of graphite by atomic nitrogen

    CERN Document Server

    Terejanu, Gabriel; Miki, Kenji

    2011-01-01

    The problem of optimal data collection to efficiently learn the model parameters of a graphite nitridation experiment is studied in the context of Bayesian analysis, using both synthetic and real experimental data. The paper emphasizes that the optimal design can be obtained as a result of an information-theoretic sensitivity analysis: the preferred design is the one for which the statistical dependence between the model parameters and observables is the highest possible. In this paper, the statistical dependence between random variables is quantified by mutual information and estimated using a k-nearest neighbor based approximation. It is shown that, by monitoring the inference process via measures such as entropy or Kullback-Leibler divergence, one can determine when to stop the data collection process. The methodology is applied to select the most informative designs on both a simulated data set and an experimental data set previously published in the literature. It is also shown that the sequential Bayesian ...
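
The design criterion, preferring the observable that shares the most mutual information with the parameters, can be illustrated numerically. The paper estimates MI with a k-nearest-neighbor approximation; the sketch below substitutes a simple histogram (plug-in) estimator to avoid extra dependencies, and the two candidate "observables" are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_info(x, y, bins=20):
    """Plug-in mutual information estimate (nats) from a 2-D histogram;
    a simple stand-in for the k-nearest-neighbour estimator in the paper."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

theta = rng.normal(size=5000)                    # "model parameter"
obs_good = theta + 0.3 * rng.normal(size=5000)   # strongly coupled observable
obs_poor = rng.normal(size=5000)                 # independent observable

mi_hi, mi_lo = mutual_info(theta, obs_good), mutual_info(theta, obs_poor)
print(mi_hi, mi_lo)  # the design yielding obs_good would be preferred
```

Plug-in estimates carry an upward bias of roughly bins²/(2N), which is why the paper's k-NN estimator is preferable for real, higher-dimensional problems.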

  7. Thermoelastic Femoral Stress Imaging for Experimental Evaluation of Hip Prosthesis Design

    Science.gov (United States)

    Hyodo, Koji; Inomoto, Masayoshi; Ma, Wenxiao; Miyakawa, Syunpei; Tateishi, Tetsuya

    An experimental system using the thermoelastic stress analysis method and a synthetic femur was utilized to perform reliable and convenient mechanical biocompatibility evaluation of hip prosthesis designs. In contrast to conventional techniques, the unique advantage of the thermoelastic stress analysis method is its ability to image the whole-surface stress (Δ(σ1+σ2)) distribution in specimens. The mechanical properties of synthetic femurs agreed well with those of cadaveric femurs, with little variability between specimens. We applied this experimental system to visualize the stress distribution of the intact femur and of femurs implanted with an artificial joint. The surface stress distribution of the femurs sensitively reflected the prosthesis design and the contact condition between the stem and the bone. By analyzing the relationship between the stress distribution and the clinical results of the artificial joint, this technique can be used for mechanical biocompatibility evaluation and pre-clinical performance prediction of new artificial joint designs.

  8. Theoretical and Experimental Results of Substrate Effects on Microstrip Power Divider Designs

    Directory of Open Access Journals (Sweden)

    Suhair Mansoor Mahmood

    2011-01-01

    Full Text Available The effects of substrate materials on the design of microstrip power dividers are investigated theoretically and experimentally. Three dielectric substrate materials, Duroid 3003, G10/FR4 epoxy glass, and Duroid 3010, are chosen for study. A three-way two-stage power divider is designed at the S-band frequency of 2.25 GHz and etched on each studied substrate separately. The substrate effects on the characteristics and performance of the microstrip circuits are studied, taking into consideration the large differences in dielectric constant and dissipation factor. The circuit designs presented here are analyzed using the Genesys CAD program and implemented and tested experimentally. The simulated and measured results are compared and discussed; they indicate that significant changes in the characteristics of the microstrip power divider are observed.

  9. Comment: Spurious Correlation and Other Observations on Experimental Design for Engineering Dimensional Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.

    2013-08-01

    This article discusses the paper "Experimental Design for Engineering Dimensional Analysis" by Albrecht et al. (2013, Technometrics). That paper provides an overview of engineering dimensional analysis (DA) for use in developing DA models, and proposes methods for generating model-robust experimental designs to support the fitting of DA models. The specific approach is to develop a design that maximizes the efficiency of a specified empirical model (EM) in the original independent variables, subject to a minimum efficiency for a DA model expressed in terms of dimensionless groups (DGs). This discussion article raises several issues and makes recommendations regarding the proposed approach. The concept of spurious correlation is also raised and discussed: spurious correlation results from the response DG being calculated using several independent variables that are also used to calculate predictor DGs in the DA model.

  10. Experimental validation of systematically designed acoustic hyperbolic meta material slab exhibiting negative refraction

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Sigmund, Ole

    2016-01-01

    This Letter reports on the experimental validation of a two-dimensional acoustic hyperbolic metamaterial slab optimized to exhibit negative refractive behavior. The slab was designed using a topology-optimization-based systematic design method allowing for tailoring of the refractive behavior. The experimental results confirm the predicted refractive capability as well as the predicted transmission at an interface. The study simultaneously provides an estimate of the attenuation inside the slab stemming from boundary layer effects, insight which can be utilized in the further design of metamaterial slabs. The capability of tailoring the refractive behavior opens possibilities for different applications. For instance, a slab exhibiting zero refraction across a wide angular range is capable of funneling acoustic energy through it, while a material exhibiting negative refractive behavior…

  11. Design, Simulation and Experimental Investigation of a Solar System Based on PV Panels and PVT Collectors

    Directory of Open Access Journals (Sweden)

    Annamaria Buonomano

    2016-06-01

    Full Text Available This paper presents numerical and experimental analyses aimed at evaluating the technical and economic feasibility of photovoltaic/thermal (PVT) collectors. An experimental setup was purposely designed and constructed in order to compare the electrical performance of a PVT solar field with that achieved by an identical solar field consisting of conventional photovoltaic (PV) panels. The experimental analysis also aims at evaluating the potential advantages of PVT vs. PV in terms of enhancement of electrical efficiency and thermal energy production. The installed experimental set-up includes four flat polycrystalline silicon PV panels and four flat unglazed polycrystalline silicon PVT collectors. The total electrical power and area of the solar field are 2 kWe and 13 m², respectively. The experimental set-up is currently installed at the company AV Project Ltd., located in Avellino (Italy). This study also analyzes the system from a numerical point of view, including a thermo-economic dynamic simulation model for the design and the assessment of energy performance and economic profitability of solar systems consisting of glazed PVT and PV collectors. The experimental setup was modelled and partly simulated in the TRNSYS environment. The simulation model was useful to analyze the efficiencies and temperatures reached by such solar technologies, taking into account the reference technology of PVTs (consisting of glazed collectors), as well as to compare the numerical data obtained by dynamic simulations with the gathered experimental results for the PV technology. The numerical analysis shows that the PVT global efficiency is about 26%. Conversely, from the experimental point of view, the average thermal efficiency of PVT collectors is around 13%, and the electrical efficiencies of both technologies are almost coincident and equal to 15%.

  12. Man-machine Integration Design and Analysis System (MIDAS) Task Loading Model (TLM) experimental and software detailed design report

    Science.gov (United States)

    Staveland, Lowell

    1994-01-01

    This is the experimental and software detailed design report for the prototype task loading model (TLM) developed as part of the man-machine integration design and analysis system (MIDAS), as implemented and tested in phase 6 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The A3I program is an exploratory development effort to advance the capabilities and use of computational representations of human performance and behavior in the design, synthesis, and analysis of manned systems. The MIDAS TLM computationally models the demands that designs impose on operators, to aid engineers in the conceptual design of aircraft crewstations. This report describes the TLM and the results of a series of experiments which were run during this phase to test its capabilities as a predictive task demand modeling tool. Specifically, it includes discussions of: the inputs and outputs of the TLM, the theories underlying it, the results of the test experiments, the use of the TLM as both a stand-alone tool and as part of a complete human operator simulation, and a brief introduction to the TLM software design.

  13. Highly Efficient Design-of-Experiments Methods for Combining CFD Analysis and Experimental Data

    Science.gov (United States)

    Anderson, Bernhard H.; Haller, Harold S.

    2009-01-01

    It is the purpose of this study to examine the impact of "highly efficient" Design-of-Experiments (DOE) methods for combining sets of CFD-generated analysis data with smaller sets of experimental test data in order to accurately predict performance results where experimental test data were not obtained. The study examines the impact of micro-ramp flow control on the shock wave boundary layer (SWBL) interaction, where a complete paired set of data exists from both CFD analysis and experimental measurements. By combining the complete set of CFD analysis data, composed of fifteen (15) cases, with a smaller subset of experimental test data containing four/five (4/5) cases, compound data sets (CFD/EXP) were generated which allow the prediction of the complete set of experimental results. No statistical differences were found to exist between the combined (CFD/EXP) generated data sets and the complete experimental data set composed of fifteen (15) cases. The same optimal micro-ramp configuration was obtained using the (CFD/EXP) generated data as with the complete set of experimental data, and the DOE response surfaces generated by the two data sets were also not statistically different.

  14. An experimental school prototype: Integrating the 3Rs (Reduce, Reuse & Recycle) concept into architectural design

    Directory of Open Access Journals (Sweden)

    Kong Seng Yeap

    2012-06-01

    Full Text Available The authors conducted a design project to examine the use of a school as an ecological learning hub for children. Specifically, this study explores the ecological innovations that transform the physical environment into three-dimensional textbooks for environmental education. A series of design workshops was carried out to gain interdisciplinary input for ecological school design. The findings suggest integrating the concept of the 3Rs (Reduce, Reuse & Recycle) into the physical environment. As a result, an experimental school prototype was developed. It represents a series of recommendations rendered by novel ideas through the amalgamation of architecture, ecology and education. These findings promote the development of sustainable and interactive learning spaces through cross-disciplinary investigations in school architecture. Designers and practitioners interested in educational facilities design will find this article useful.

  15. Aspects of experimental design for plant metabolomics experiments and guidelines for growth of plant material.

    Science.gov (United States)

    Gibon, Yves; Rolin, Dominique

    2012-01-01

    Experiments involve the deliberate variation of one or more factors in order to provoke responses, the identification of which then provides the first step towards functional knowledge. Because environmental, biological, and/or technical noise is unavoidable, biological experiments usually need to be designed. Thus, once the major sources of experimental noise have been identified, individual samples can be grouped, randomised, and/or pooled. Like other 'omics approaches, metabolomics is characterised by the number of analytes largely exceeding the sample number. While this unprecedented singularity in biology dramatically increases false discovery, experimental error can nevertheless be decreased in plant metabolomics experiments. For this, each step from plant cultivation to data acquisition needs to be evaluated in order to identify the major sources of error, and then an appropriate design can be produced, as with any other experimental approach. The choice of technology, the time at which tissues are harvested, and the way metabolism is quenched also need to be taken into consideration, as these choices determine which metabolites can be studied. A further recommendation is to document data and metadata in a machine-readable way; the metadata should describe every aspect of the experiment. This should provide valuable hints for future experimental design and ultimately give metabolomic data a second life. To facilitate the identification of critical steps, a list of items to be considered before embarking on time-consuming and costly metabolomic experiments is proposed.
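
The grouping/randomisation step mentioned above can be made concrete. A minimal sketch of a randomised complete block layout, assuming a hypothetical experiment with four genotypes harvested in three batches, batch being the presumed main noise source:

```python
import random

random.seed(11)

genotypes = ["gt1", "gt2", "gt3", "gt4"]   # hypothetical factor levels
n_batches = 3                              # harvest batches act as blocks

# Randomised complete block design: every genotype appears once per batch,
# in a random harvest order, so batch effects are not confounded with genotype.
layout = {}
for batch in range(1, n_batches + 1):
    order = genotypes[:]
    random.shuffle(order)
    layout[batch] = order

for batch, order in layout.items():
    print(f"batch {batch}: harvest order {order}")
```

Recording the seed and the resulting layout in the experiment's metadata is exactly the kind of machine-readable documentation the abstract recommends.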

  16. Incorporating experimental design and error into coalescent/mutation models of population history.

    Science.gov (United States)

    Knudsen, Bjarne; Miyamoto, Michael M

    2007-08-01

    Coalescent theory provides a powerful framework for estimating the evolutionary, demographic, and genetic parameters of a population from a small sample of individuals. Current coalescent models have largely focused on population genetic factors (e.g., mutation, population growth, and migration) rather than on the effects of experimental design and error. This study develops a new coalescent/mutation model that accounts for unobserved polymorphisms due to missing data, sequence errors, and multiple reads for diploid individuals. The importance of accommodating these effects of experimental design and error is illustrated with evolutionary simulations and a real data set from a population of the California sea hare. In particular, a failure to account for sequence errors can lead to overestimated mutation rates, inflated coalescent times, and inappropriate conclusions about the population. This current model can now serve as a starting point for the development of newer models with additional experimental and population genetic factors. It is currently implemented as a maximum-likelihood method, but this model may also serve as the basis for the development of Bayesian approaches that incorporate experimental design and error.

  17. Design and application on experimental platform for high-speed bearing with grease lubrication

    Directory of Open Access Journals (Sweden)

    He Qiang

    2015-12-01

    Full Text Available The experimental platform for high-speed grease lubrication is an important tool for the research and development of high-speed motorized spindles with grease lubrication. In this article, the experimental platform for high-speed grease lubrication is designed and manufactured; it consists of the drive system, the test portion, the loading system, the lubrication system, the control system, and so on. In the meantime, the high-speed angular contact ceramic ball bearings B7005C/HQ1P4, taken as the research object, are tested and compared under grease lubrication and oil mist lubrication. The performance of the experimental platform is validated by the contrast experiment, and the performance of the high-speed lubricated bearing is also studied, especially the relationship among rotating speed, load and temperature rise. The results show that the experimental platform operates stably and gives accurate, reliable results in experimental testing, and that the grease-lubricated ceramic ball bearings B7005C/HQ1P4 can be used in a high-speed motorized spindle under circulating water cooling conditions when the rotating speed is lower than 40,000 r/min or the DN value (the bearing diameter times the rotating speed) is lower than 1.44 × 10⁶ mm·r/min. Grease lubrication instead of oil mist lubrication at high rotating speed will simplify the structural design of the high-speed motorized spindle and reduce pollution to the environment.

  18. Optimization of rheological parameter for micro-bubble drilling fluids by multiple regression experimental design

    Institute of Scientific and Technical Information of China (English)

    郑力会; 王金凤; 李潇鹏; 张燕; 李都

    2008-01-01

    In order to optimize the formula of a circulating micro-bubble drilling fluid with a plastic viscosity of 18 mPa·s, orthogonal and uniform experimental design methods were applied, and the plastic viscosities of 36 and 24 groups of agents were tested, respectively. These two experimental design methods were found to have drawbacks: the amount of agent is difficult to determine, and the results are not fully optimized. Therefore, a multiple regression experimental method was used to design the experimental formula. By randomly selecting arbitrary agents with amounts within the recommended range, 17 groups of drilling fluid formulas were designed, and the plastic viscosity of each experimental formula was measured. Taking plastic viscosity as the objective function, a quadratic regression model was obtained through multiple regression, with a correlation coefficient that meets the requirement. Setting target values of plastic viscosity to 18, 20 and 22 mPa·s and applying the trial method, 5 drilling fluid formulas were obtained with accuracies of 0.0003, 0.0001 and 0.0003, respectively. Two groups of formulas were arbitrarily selected under each target value for experimental verification, and the measurement errors between theoretical and tested plastic viscosities were less than 5%, confirming that the regression model can be applied to optimizing the plastic viscosity of circulating micro-bubble drilling fluid. Subject to the accuracy required of different drilling fluid formulations and other constraints, the method enables optimization of the circulating micro-bubble drilling fluid parameters.

  19. Optimal experimental designs for estimating Henry's law constants via the method of phase ratio variation.

    Science.gov (United States)

    Kapelner, Adam; Krieger, Abba; Blanford, William J

    2016-10-14

    When measuring Henry's law constants (kH) using the phase ratio variation (PRV) method via headspace gas chromatography (GC), the value of kH of the compound under investigation is calculated from the ratio of the slope to the intercept of a linear regression of the inverse GC response versus the ratio of gas to liquid volumes of a series of vials drawn from the same parent solution. Thus, an experimenter collects measurements consisting of the independent variable (the gas/liquid volume ratio) and the dependent variable (the inverse GC peak area). A review of the literature found that the common design is a simple uniform spacing of liquid volumes. We present an optimal experimental design which estimates kH with minimum error and provides multiple means for building confidence intervals for such estimates. We illustrate the performance improvements of our design with an example measuring the kH for naphthalene in aqueous solution as well as simulations on previous studies. Our designs are most applicable once a trial run has defined the linear GC response and the region where the phase ratio is linear in the inverse GC response (where the PRV method is suitable), after which a practitioner can collect measurements in bulk. The designs can be easily computed using our open-source software optDesignSlopeInt, an R package on CRAN.
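
The slope-to-intercept calculation at the core of the PRV method is a one-line regression. A sketch with an invented kH, response constant and volume ratios:

```python
import numpy as np

rng = np.random.default_rng(3)

kH_true = 0.017    # hypothetical dimensionless Henry's law constant
c = 4.0e-5         # arbitrary detector response constant

# Gas/liquid volume ratios (beta) for a series of vials; the inverse GC
# response is linear in beta, and kH = slope / intercept.
beta = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
inv_area = c * (1.0 + kH_true * beta) * (1.0 + rng.normal(0, 0.002, beta.size))

slope, intercept = np.polyfit(beta, inv_area, 1)
kH_est = slope / intercept
print(f"kH estimate: {kH_est:.4f}")
```

Varying the spacing of `beta`, the experimenter's design choice, changes the variance of `kH_est`, which is precisely the quantity the paper's optimal design minimizes.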

  20. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    Science.gov (United States)

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874

  1. Optimization of Protease Production from Aspergillus Oryzae Sp. Using Box-Behnken Experimental Design

    Directory of Open Access Journals (Sweden)

    G. Srinu Babu

    2007-01-01

    Full Text Available Protease production by Aspergillus oryzae was optimized in shake-flask cultures using a Box-Behnken experimental design. An empirical model was developed through response surface methodology to describe the relationship between the tested variables (peptone, glucose, soyabean meal and pH). Maximum enzyme activity was attained with peptone at 4 g/L, glucose at 6 g/L, temperature at 30 °C and pH at 10. Experimental verification of the model showed a validation of 95%, a more than 3-fold increase compared to the basal medium.
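
For reference, a Box-Behnken design in coded units (-1, 0, +1) can be generated directly: every pair of factors gets a 2² factorial while the remaining factors sit at their centre level. The sketch below assumes four factors (matching peptone, glucose, soyabean meal and pH) and three centre points.

```python
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    """Coded Box-Behnken design: a 2**2 factorial at +/-1 for each factor
    pair with all other factors at 0, plus centre-point runs."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[i], run[j] = a, b
            runs.append(run)
    runs.extend([0] * n_factors for _ in range(n_center))
    return runs

design = box_behnken(4)
print(len(design))  # 6 factor pairs x 4 corner runs + 3 centre points = 27
```

The coded levels are then mapped to physical units (e.g. -1/0/+1 for peptone might be 2/4/6 g/L) before the runs are executed and a quadratic response surface is fitted.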

  2. Experimental Investigations of Decentralised Control Design for The Stabilisation of Rotor-Gas Bearings

    DEFF Research Database (Denmark)

    Theisen, Lukas Roy Svane; Galeazzi, Roberto; Niemann, Hans Henrik

    2015-01-01

    Rotor-gas bearings are attracting increasing interest because of their high speed capabilities, low friction and clean operation. However, hydrostatic rotor-gas bearings show reduced damping characteristics, which makes it challenging to operate the rotating machine at and about the resonance… directions. Hardening and softening P-lead controllers are designed based on the models experimentally identified, and salient features of both controllers are discussed. Both controllers are implemented and validated on the physical test rig. Experimental results confirm the validity of the proposed…

  3. Increased performance in a bottom-up designed robot by experimentally guided redesign

    DEFF Research Database (Denmark)

    Larsen, Jørgen Christian

    2013-01-01

    the bottom-up, model-free approach, the authors used the robotic construction kit, LocoKit. This construction kit allows researchers to construct legged robots without having a mathematical model beforehand. The authors used no specific mathematical model to design the robot, but instead used intuition… and took inspiration from biology. The results were afterwards compared with results gained from biology, to see if the robot has some of the key elements the authors were looking for. Findings – With the use of LocoKit as the experimental platform, combined with known experimental measurement methods from…

  4. Intuitive web-based experimental design for high-throughput biomedical data.

    Science.gov (United States)

    Friedrich, Andreas; Kenar, Erhan; Kohlbacher, Oliver; Nahnsen, Sven

    2015-01-01

    Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data is accompanied by accurate metadata annotation. Particularly in high-throughput experiments, intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, human-readable format. Subsequently, sample sheets with identifiers and meta-information for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model.
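
The factor-based approach boils down to crossing condition levels and emitting a sample sheet. A minimal sketch, in which the factor names, levels and identifier scheme are hypothetical rather than the system's actual schema:

```python
import csv
import io
from itertools import product

factors = {                      # hypothetical experimental factors
    "genotype":  ["wildtype", "mutant"],
    "treatment": ["control", "drug"],
    "timepoint": ["0h", "24h"],
}

def sample_sheet(factors, replicates=2):
    """Cross all factor levels, attach replicate numbers and unique IDs."""
    rows, names = [], list(factors)
    for combo in product(*factors.values()):
        for rep in range(1, replicates + 1):
            ident = "QS{:04d}".format(len(rows) + 1)  # invented ID scheme
            rows.append(dict(zip(names, combo), replicate=rep, id=ident))
    return rows

rows = sample_sheet(factors)
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=list(factors) + ["replicate", "id"])
writer.writeheader()
writer.writerows(rows)
print(len(rows))  # 2 x 2 x 2 level combinations x 2 replicates = 16 samples
```

The stable identifier attached to each row is what lets measurement files be linked back to the design model after data generation.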

  5. Bi-functionally Graded Electrode Supported SOFC Modeling and Computational Thermal Fluid Analysis for Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Shi, J.; Xue, X.

    2011-01-01

    A comprehensive 3D CFD model is developed for a bi-electrode supported cell (BSC) SOFC. The model includes the complicated transport phenomena of mass/heat transfer, charge (electron and ion) migration, and electrochemical reaction. The uniqueness of the modeling study is that functionally graded porous electrode properties are taken into account, including not only linear but also nonlinear porosity distributions. Extensive numerical analysis is performed to elucidate the effects of both porous microstructure distributions and operating conditions on cell performance. Results indicate that cell performance is strongly dependent on both operating conditions and the porous microstructure distributions of the electrodes. Using the proposed fuel/gas feeding design, a uniform hydrogen distribution within the porous anode is achieved; the oxygen distribution within the cathode is dependent on porous microstructure distributions as well as pressure loss conditions. Simulation results show that a fairly uniform temperature distribution can be obtained with the proposed fuel/gas feeding design. The modeling results can be employed to guide the experimental design of BSC tests and provide pre-experimental analysis, thereby circumventing the high cost associated with trial-and-error experimental design and setup.

  6. Experimental Design of Electrocoagulation and Magnetic Technology for Enhancing Suspended Solids Removal from Synthetic Wastewater

    Directory of Open Access Journals (Sweden)

    Moh Faiqun Ni'am

    2014-10-01

    Full Text Available Design of experiments (DOE) is one of the statistical methods used as a tool to enhance and improve experimental quality. Deliberate changes to the variables of a process or system are intended to yield an optimal and satisfactory result (response). Experimental design can be defined as a test or series of tests in which the input variables (factors) of a process are varied in order to observe the resulting changes in the output (response). This paper presents the results of an experimental design of wastewater treatment by the electrocoagulation (EC) technique. A combined magnet and electrocoagulation (EC) technology was designed to increase settling velocity and to enhance suspended solids removal efficiencies from wastewater samples. In this experiment, synthetic wastewater samples were prepared by mixing 700 mg of milk powder in one litre of water and treated using an acidic buffer solution. Monopolar iron (Fe) plates were employed as anodes and cathodes. Direct current was varied in a range between 0.5 and 1.1 A, and flowrate in a range between 1.00 and 3.50 mL/s. One permanent magnet, namely AlNiCo with a magnetic strength of 0.16 T, was used in this experiment. The results show that the magnetic field and the flowrate have major influences on suspended solids removal. The removal efficiencies of suspended solids, turbidity and COD at optimum conditions were found to be more than 85%, 95%, and 75%, respectively.
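
    The factor-and-response scheme described above can be illustrated with a minimal sketch. The factor levels below merely span the current and flowrate ranges reported in the abstract; the three-level grid itself is hypothetical, not the authors' actual run plan:

```python
from itertools import product

# Hypothetical two-factor design spanning the ranges reported in the abstract:
# direct current 0.5-1.1 A and flowrate 1.00-3.50 mL/s, each at three levels.
current_levels = [0.5, 0.8, 1.1]      # A
flowrate_levels = [1.00, 2.25, 3.50]  # mL/s

# A full factorial design enumerates every combination of factor levels,
# so the effect of each factor (and their interaction) can be estimated.
design = list(product(current_levels, flowrate_levels))

for run, (current, flowrate) in enumerate(design, start=1):
    print(f"Run {run}: current = {current} A, flowrate = {flowrate} mL/s")

print(f"Total runs: {len(design)}")  # 3 x 3 = 9 runs
```

    A full factorial such as this estimates all main effects and interactions, at the cost of a run count that grows exponentially with the number of factors, which is why fractional and Taguchi designs are often preferred for larger studies.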

  7. RNA-seq Data: Challenges in and Recommendations for Experimental Design and Analysis.

    Science.gov (United States)

    Williams, Alexander G; Thomas, Sean; Wyman, Stacia K; Holloway, Alisha K

    2014-10-01

    RNA-seq is widely used to determine differential expression of genes or transcripts as well as identify novel transcripts, identify allele-specific expression, and precisely measure translation of transcripts. Thoughtful experimental design and choice of analysis tools are critical to ensure high-quality data and interpretable results. Important considerations for experimental design include number of replicates, whether to collect paired-end or single-end reads, sequence length, and sequencing depth. Common analysis steps in all RNA-seq experiments include quality control, read alignment, assigning reads to genes or transcripts, and estimating gene or transcript abundance. Our aims are two-fold: to make recommendations for common components of experimental design and assess tool capabilities for each of these steps. We also test tools designed to detect differential expression, since this is the most widespread application of RNA-seq. We hope that these analyses will help guide those who are new to RNA-seq and will generate discussion about remaining needs for tool improvement and development.

  8. Engineering at SLAC: Designing and constructing experimental devices for the Stanford Synchrotron Radiation Lightsource - Final Paper

    Energy Technology Data Exchange (ETDEWEB)

    Djang, Austin [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2015-08-22

    Thanks to the versatility of the beam lines at SSRL, research there is varied and benefits multiple fields. Each experiment requires a particular set of experimental equipment, which in turn requires its own particular assembly. As such, new engineering challenges arise from each new experiment. My role as an engineering intern has been to help solve these challenges by designing and assembling experimental devices. My first project was to design a heated sample holder, which will be used to investigate the effect of temperature on a sample's x-ray diffraction pattern. My second project was to help set up an imaging test, which involved designing a cooled grating holder and assembling multiple positioning stages. My third project was designing a 3D-printed pencil holder for the SSRL workstations.

  9. Experimental investigation of undesired stable equilibria in pumpkin shape super-pressure balloon designs

    Science.gov (United States)

    Schur, W. W.

    2004-01-01

    Excess in skin material of a pneumatic envelope beyond what is required for minimum enclosure of a gas bubble is a necessary but by no means sufficient condition for the existence of multiple equilibrium configurations for that pneumatic envelope. The very design of structurally efficient super-pressure balloons of the pumpkin shape type requires such excess. Undesired stable equilibria have been observed on experimental pumpkin shape balloons. These configurations contain regions with stress levels far higher than those predicted for the cyclically symmetric design configuration under maximum pressurization. Successful designs of pumpkin shape super-pressure balloons do not allow such undesired stable equilibria under full pressurization. This work documents efforts made so far and describes efforts still underway by the National Aeronautics and Space Administration's Balloon Program Office to arrive at guidance for the design of pumpkin shape super-pressure balloons that guarantees full and proper deployment.

  10. Design of passive directional acoustic devices using Topology Optimization - from method to experimental validation

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Fernandez Grande, Efren

    2016-01-01

    The paper presents a topology optimization based method for designing acoustic focusing devices, capable of tailoring the sound emission pattern of one or several sources, across a chosen frequency band. The method is demonstrated numerically considering devices optimized for directional sound...... emission in two dimensions and is experimentally validated using three dimensional prints of the optimized designs. The emitted fields exhibit a level difference of at least 15 dB on axis relative to the off-axis directions, over frequency bands of approximately an octave. It is demonstrated to be possible...... to outperform the latter in terms of directivity and maximum side-lobe level over nearly an octave band. A set of frequencies are considered simultaneously in the design formulation and performance robustness toward uniform spatial production errors in the designed devices is assured by including perturbations...

  11. Design and analysis of a high pressure and high temperature sulfuric acid experimental system

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Sung-Deok, E-mail: sdhong1@kaeri.re.kr [Korea Atomic Energy Research Institute, Yuseong-Gu, Daejeon 305-600 (Korea, Republic of); Kim, Chan-Soo; Kim, Yong-Wan [Korea Atomic Energy Research Institute, Yuseong-Gu, Daejeon 305-600 (Korea, Republic of); Seo, Dong-Un; Park, Goon-Cherl [Seoul National University, San56-1, Sillim-Dong, Kwanak-Gu, Seoul 151-742 (Korea, Republic of)

    2012-10-15

    We discuss the design and analysis of a small scale sulfuric acid experimental system that can simulate a part of the hydrogen production module. Because nuclear hydrogen coupled components such as an SO₃ decomposer and a sulfuric acid evaporator should be tested under high pressure and high temperature operating conditions, we developed the sulfuric acid loop to satisfy design specifications of 900 °C in temperature and 1.0 MPa in pressure. The components for the sulfuric acid loop were specially designed using a combination of materials with good corrosion resistance: a ceramic and Hastelloy-C276. The design of the loop was tested for performance in a 10 h sulfuric acid experiment and optimized using Aspen+ code simulation.

  12. Labeled experimental choice design for estimating attribute and availability cross effects with N attributes and specific brand attribute levels

    DEFF Research Database (Denmark)

    Nguyen, Thong Tien

    2011-01-01

    Experimental designs are required in widely used techniques in marketing research, especially for preference-based conjoint analysis and discrete-choice studies. Ideally, marketing researchers prefer orthogonal designs because this technique could give uncorrelated parameter estimates. However...

  13. Methodologic issues, theoretical considerations, and design criteria for experimental animal and cell culture experiments.

    Science.gov (United States)

    Birt, D F

    1997-12-01

    This article provides background information that is important when evaluating the relevance to humans of particular animal or in vitro experiments designed to assess the relations between fatty acids and cancer. Considerations in designing carcinogenesis studies to assess the relation between dietary fatty acids and human cancer include selection of the animal model and design of the experimental diets. Animal carcinogenesis models are generally best for evaluating the early phases of cancer development: the initiation and promotion of cancer. Transplantation protocols have been developed for evaluating the effect of diet on the growth and metastasis of partially or fully transformed cells. The variables that are important in such models are the origin and biology of the cell line, the animal host used for the implantation, the site of transplantation, whether the primary tumor is excised after a period of time to allow for metastasis, and when the diets are fed relative to the different phases of tumor growth and metastasis. Studies in cultured cells have been particularly useful for assessing the mechanisms by which fatty acids affect cancer. Considerations in designing studies with cultured cells include selection of the cell line, cell culture conditions, selection of biological endpoints that are relevant to human cancer, and in vivo confirmation of the mechanisms observed in vitro. Design considerations for each of these experimental approaches are discussed and the contributions of each approach are summarized.

  14. An experimental investigation of two 15 percent-scale wind tunnel fan-blade designs

    Science.gov (United States)

    Signor, David B.

    1988-01-01

    An experimental 3-D investigation of two fan-blade designs was conducted. The fan blades tested were 15 percent-scale models of blades to be used in the fan drive of the National Full-Scale Aerodynamic Complex at NASA Ames Research Center. NACA 65- and modified NACA 65-series sections incorporated increased thickness on the upper surface, between the leading edge and the one-half-chord position. Twist and taper were the same for both blade designs. The fan blades with modified 65-series sections were found to have an increased stall margin when they were compared with the unmodified blades.

  15. Performance of Different Experimental Absorber Designs in Absorption Heat Pump Cycle Technologies: A Review

    Directory of Open Access Journals (Sweden)

    Jonathan Ibarra-Bahena

    2014-02-01

    Full Text Available The absorber is a major component of absorption cycle systems, and its performance directly impacts the overall size and energy supplies of these devices. Absorption cooling and heating cycles have different absorber design requirements: in absorption cooling systems, the absorber works close to ambient temperature, and mass transfer is therefore the most important phenomenon for reducing the generator size; on the other hand, in heat transformer absorption systems, it is important to recover the heat delivered by the exothermic reactions produced in the absorber. In this paper a review of the main experimental results of different absorber designs reported in absorption heat pump cycles is presented.

  16. Active vibration absorber for the CSI evolutionary model - Design and experimental results. [Controls Structures Interaction

    Science.gov (United States)

    Bruner, Anne M.; Belvin, W. Keith; Horta, Lucas G.; Juang, Jer-Nan

    1991-01-01

    The development of control of large flexible structures technology must include practical demonstrations to aid in the understanding and characterization of controlled structures in space. To support this effort, a testbed facility has been developed to study practical implementation of new control technologies under realistic conditions. The paper discusses the design of a second order, acceleration feedback controller which acts as an active vibration absorber. This controller provides guaranteed stability margins for collocated sensor/actuator pairs in the absence of sensor/actuator dynamics and computational time delay. Experimental results in the presence of these factors are presented and discussed. The robustness of this design under model uncertainty is demonstrated.

  17. Experimental Comparison of Conventional Heat Exchangers with a Newly Designed Heat Exchanger

    Directory of Open Access Journals (Sweden)

    Tansel Koyun

    2014-04-01

    Full Text Available In this study, the newly designed air-water heat exchanger has been experimentally compared to conventional heat exchangers with and without fins. The same parameters (pump flow, heating power, etc.) have been used for all three heat exchangers. In the experiments, speed-flow adjustment has been made to maintain optimum heat transfer. As a result, during the circulation of water in the pipe of the air-water heat exchanger, no corrosion fouling has formed. In addition, the efficiency of the newly designed heat exchanger has been found to lie between the efficiencies of the finned and finless heat exchangers. The results are shown in the diagrams.

  18. Design, Analysis, Prototyping, and Experimental Evaluation of an Efficient Double Coil Magnetorheological Valve

    Directory of Open Access Journals (Sweden)

    Guoliang Hu

    2014-05-01

    Full Text Available A double coil magnetorheological (MR) valve with an outer annular resistance gap was designed and prototyped. The finite element modeling and analysis of the double coil MR valve were carried out using ANSYS/Emag software, and the optimal magnetic field distribution and magnetic flux density of the double coil MR valve were achieved. The pressure drop mechanism was studied by building a mathematical model of the pressure drop in the double coil MR valve. The proposed double coil MR valve was prototyped and its performance was experimentally evaluated. The new design improves the efficiency of the double coil MR valve significantly.

  19. Design and Implementation of the Microcomputer Principle and Interface Technology Virtual Experimental Platform Website

    Science.gov (United States)

    Gao, JinYue; Tang, Yin

    This paper discusses the design and implementation approach and methods for the Microcomputer Principle and Interface Technology virtual experimental platform website. The instructional design of this platform follows the student-oriented constructivist learning theory, and the overall structure is shaped by the teaching aims, teaching contents and interactive methods. The production and development of the virtual experiment platform should take full account of the characteristics of network operation and adopt relevant technologies to improve the effectiveness and speed of the software when run over the internet.

  20. Speculations on Quasi-Experimental Design in HIV/AIDS Prevention Research

    Directory of Open Access Journals (Sweden)

    Donald T. Campbell

    2012-10-01

    Full Text Available This paper provides a speculative discussion on what quasi-experimental designs might be useful in various aspects of HIV/AIDS research. The first author’s expertise is in research design, not HIV, while the second author has been active in HIV prevention research. It is hoped that it may help the HIV/AIDS research community in discovering and inventing an expanded range of possibilities for valid causal inference. DOI: 10.2458/azu_jmmss.v3i1.16113

  1. Statistical analysis of sonochemical synthesis of SAPO-34 nanocrystals using Taguchi experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Askari, Sima [Faculty of Chemical Engineering, Amirkabir University of Technology (Tehran Polytechnic), P.O. Box 15875-4413, Hafez Ave., Tehran (Iran, Islamic Republic of); Halladj, Rouein, E-mail: halladj@aut.ac.ir [Faculty of Chemical Engineering, Amirkabir University of Technology (Tehran Polytechnic), P.O. Box 15875-4413, Hafez Ave., Tehran (Iran, Islamic Republic of); Nazari, Mahdi [Faculty of Chemical Engineering, Amirkabir University of Technology (Tehran Polytechnic), P.O. Box 15875-4413, Hafez Ave., Tehran (Iran, Islamic Republic of)

    2013-05-15

    Highlights: ► Sonochemical synthesis of SAPO-34 nanocrystals. ► Using Taguchi experimental design (L9) for optimizing the experimental procedure. ► The significant effects of all the ultrasonic parameters on the response. - Abstract: SAPO-34 nanocrystals with high crystallinity were synthesized by means of a sonochemical method. An L9 orthogonal array of the Taguchi method was implemented to investigate the effects of sonication conditions on the preparation of SAPO-34 with respect to the crystallinity of the final product phase. The experimental data establish that phase crystallinity is improved by increasing the ultrasonic power and the sonication temperature. In the case of ultrasonic irradiation time, however, an initial increase in crystallinity from 5 min to 15 min is followed by a decrease in crystallinity for longer sonication times.
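
    For illustration, the L9 orthogonal array mentioned in the abstract can be written out directly. The array itself is the standard Taguchi L9; the factor levels below are hypothetical placeholders, not the paper's actual settings:

```python
# Standard Taguchi L9 orthogonal array: 9 runs for up to four 3-level factors
# (three columns used here). Each level appears equally often in each column.
L9 = [
    [1, 1, 1], [1, 2, 2], [1, 3, 3],
    [2, 1, 2], [2, 2, 3], [2, 3, 1],
    [3, 1, 3], [3, 2, 1], [3, 3, 2],
]

# Hypothetical factor levels for illustration only (the 5/15 min times echo
# the abstract; the other values are invented).
factors = {
    "power_W":       [100, 200, 300],
    "temperature_C": [30, 45, 60],
    "time_min":      [5, 15, 25],
}

names = list(factors)
for run, row in enumerate(L9, start=1):
    setting = {names[i]: factors[names[i]][level - 1] for i, level in enumerate(row)}
    print(f"Run {run}: {setting}")
```

    Because every column is balanced and any pair of columns contains each level combination equally often, nine runs suffice to estimate the main effect of each three-level factor, instead of the 27 runs a full factorial would require.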

  2. Pliocene Model Intercomparison (PlioMIP) Phase 2: scientific objectives and experimental design

    Science.gov (United States)

    Haywood, A. M.; Dowsett, H. J.; Dolan, A. M.; Rowley, D.; Abe-Ouchi, A.; Otto-Bliesner, B.; Chandler, M. A.; Hunter, S. J.; Lunt, D. J.; Pound, M.; Salzmann, U.

    2015-08-01

    The Pliocene Model Intercomparison Project (PlioMIP) is a co-ordinated international climate modelling initiative to study and understand climate and environments of the Late Pliocene, and their potential relevance in the context of future climate change. PlioMIP operates under the umbrella of the Palaeoclimate Modelling Intercomparison Project (PMIP), which examines multiple intervals in Earth history, the consistency of model predictions in simulating these intervals and their ability to reproduce climate signals preserved in geological climate archives. This paper provides a thorough model intercomparison project description, and documents the experimental design in a detailed way. Specifically, this paper describes the experimental design and boundary conditions that will be utilised for the experiments in Phase 2 of PlioMIP.

  3. Pliocene Model Intercomparison Project (PlioMIP): experimental design and boundary conditions (Experiment 2)

    Science.gov (United States)

    Haywood, A.M.; Dowsett, H.J.; Robinson, M.M.; Stoll, D.K.; Dolan, A.M.; Lunt, D.J.; Otto-Bliesner, B.; Chandler, M.A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere-only climate models. The second (Experiment 2) utilises fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  4. Pliocene Model Intercomparison Project (PlioMIP): Experimental Design and Boundary Conditions (Experiment 2)

    Science.gov (United States)

    Haywood, A. M.; Dowsett, H. J.; Robinson, M. M.; Stoll, D. K.; Dolan, A. M.; Lunt, D. J.; Otto-Bliesner, B.; Chandler, M. A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere only climate models. The second (Experiment 2) utilizes fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  5. Design and construction of an experimental pervious paved parking area to harvest reusable rainwater.

    Science.gov (United States)

    Gomez-Ullate, E; Novo, A V; Bayon, J R; Hernandez, Jorge R; Castro-Fresno, Daniel

    2011-01-01

    Pervious pavements are sustainable urban drainage systems already known as rainwater infiltration techniques which reduce runoff formation and diffuse pollution in cities. The present research is focused on the design and construction of an experimental parking area, composed of 45 pervious pavement parking bays. Every pervious pavement was experimentally designed to store rainwater and to allow measurement of the level and quality of the stored water over time. Six different pervious surfaces are combined with four different geotextiles in order to test which materials best preserve the quality of the stored rainwater over time under the specific weather conditions of the north of Spain. The aim of this research was to achieve pervious pavements that simultaneously provide a useful urban service and harvest rainwater of sufficient quality for non-potable uses.

  6. Experimental and numerical analysis for optimal design parameters of a falling film evaporator

    Indian Academy of Sciences (India)

    RAJNEESH KAUSHAL; RAJ KUMAR; GAURAV VATS

    2016-06-01

    The present study presents an experimental examination of the mass transfer coefficient and evaporative effectiveness of a falling film evaporator. Further, a statistical model is developed in order to find the optimal controlling parameters, viz. non-dimensional enthalpy potential, film Reynolds number of cooling water, Reynolds number of air and relative humidity of up-streaming air. The model not only gives an optimal solution but also helps in establishing correlations among the controlling parameters. In this context, response surface methodology is employed with the aid of a design-of-experiments approach. Later, the response surface curves are studied using ANOVA. Finally, the relations established are confirmed experimentally to validate the models. The relations thus established are useful for the design of evaporators. Additionally, the present study is among the first attempts to reveal the effect of humidity on the performance of a falling film evaporator.
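
    The core step of response surface methodology, fitting a full second-order polynomial to observed responses, can be sketched as follows. The data here are synthetic, and the factor names merely echo two of the controlling parameters listed in the abstract:

```python
import numpy as np

# Synthetic data: two coded factors on [-1, 1], e.g. coded film Reynolds
# number (x1) and coded air Reynolds number (x2). All values are invented.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 20)
x2 = rng.uniform(-1, 1, 20)
y = 2.0 + 0.5 * x1 - 0.3 * x2 + 0.2 * x1 * x2 - 0.4 * x1**2 + rng.normal(0, 0.01, 20)

# Design matrix of the full quadratic response surface model:
# y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(coef, 2))
```

    The fitted surface can then be interrogated for a stationary point (the predicted optimum) and each term tested for significance with ANOVA, which is the workflow the abstract describes.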

  7. Experimental design in supercritical fluid extraction of cocaine from coca leaves.

    Science.gov (United States)

    Brachet, A; Christen, P; Gauvrit, J Y; Longeray, R; Lantéri, P; Veuthey, J L

    2000-07-05

    An optimisation procedure for the supercritical fluid extraction (SFE) of cocaine from the leaves of Erythroxylum coca var. coca was investigated by means of experimental design. After preliminary experiments where the SFE rate-controlling mechanism was determined, a central composite design was applied to evaluate interactions between selected SFE factors such as pressure, temperature, nature and percentage of the polar modifier, as well as to optimise these factors. Predicted and experimental contents of cocaine were compared and robustness of the extraction method estimated by drawing response surfaces. The analysis of cocaine in crude extracts was carried out by capillary GC equipped with a flame ionisation detector (GC-FID), as well as by capillary GC coupled with a mass spectrometer (GC-MS) for peak identification.
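
    A central composite design like the one applied here combines factorial, axial and centre points. The sketch below generates the coded runs for a hypothetical two-factor case (the paper optimises more factors than this, and the factor names are illustrative):

```python
from itertools import product

# Rotatable axial distance for k = 2 factors: alpha = (2^k)^(1/4) = sqrt(2).
alpha = 2 ** 0.5

factorial = list(product([-1, 1], repeat=2))                 # 4 corner points
axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]   # 4 star points
center = [(0.0, 0.0)] * 3                                    # replicated centre runs

design = factorial + axial + center
print(f"{len(design)} runs")  # 4 + 4 + 3 = 11
for x1, x2 in design:
    print(f"pressure = {x1:+.3f}, temperature = {x2:+.3f} (coded units)")
```

    The axial points let the curvature (quadratic) terms be estimated, and the replicated centre runs provide an estimate of pure error, which is what makes the CCD the usual companion of response surface modelling.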

  8. Optimal design and experimental analyses of a new micro-vibration control payload-platform

    Science.gov (United States)

    Sun, Xiaoqing; Yang, Bintang; Zhao, Long; Sun, Xiaofen

    2016-07-01

    This paper presents a new payload-platform, for precision devices, which possesses the capability of isolating complex space micro-vibration in the low frequency range below 5 Hz. The novel payload-platform equipped with smart material actuators is investigated and designed through an optimization strategy based on the minimum energy loss rate, with the aim of achieving high drive efficiency and reducing the effect of the magnetic circuit nonlinearity. Then, the dynamic model of the driving element is established by using the Lagrange method and the performance of the designed payload-platform is further discussed through the combination of the controlled auto-regressive moving average (CARMA) model with a modified generalized prediction control (MGPC) algorithm. Finally, an experimental prototype is developed and tested. The experimental results demonstrate that the payload-platform has impressive potential for micro-vibration isolation.

  9. Towards an optimal experimental design for N2O model calibration during biological nitrogen removal

    DEFF Research Database (Denmark)

    Domingo Felez, Carlos; Valverde Pérez, Borja; Plósz, Benedek G.

    Process models describing nitrous oxide (N2O) production during biological nitrogen removal allow for the development of mitigation strategies for this potent greenhouse gas. N2O is an intermediate of nitrogen removal, hence its prediction is negatively affected by the uncertainty associated with its...... substrates. Improving experimental designs for model calibration reduces prediction uncertainties. Moreover, the individual analysis of autotrophic and heterotrophic contributions to the total NO and N2O pool was assessed for already proposed model structures under different experimental scenarios....... The results show the need for information-rich experimental designs to assess the predictive capabilities of N2O models. This work represents a step further in understanding the N2O production and emissions associated with conventional wastewater treatment. Moreover, it will facilitate the development...

  10. A case study on robust optimal experimental design for model calibration of ω-Transaminase

    DEFF Research Database (Denmark)

    Daele, Timothy, Van; Van Hauwermeiren, Daan; Ringborg, Rolf Hoffmeyer

    and measurement errors. Since the latter was not provided, a conservative standard deviation of 5% was assumed. The confidence analysis yielded that only two (Vr and Kac) out of five parameters were reliable estimates, which means that model predictions and decisions based on them are highly uncertain. The reason...... the experimental space. However, it is expected that more informative experiments can be designed to increase the confidence of the parameter estimates. Therefore, we apply Optimal Experimental Design (OED) to the calibrated model of Shin and Kim (1998). The total number of samples was retained to allow fair......” parameter values are not known before finishing the model calibration. However, it is important that the chosen parameter values are close to the real parameter values, otherwise the OED can possibly yield non-informative experiments. To counter this problem, one can use robust OED. The idea of robust OED...

  11. Selecting appropriate animal models and experimental designs for endocrine disruptor research and testing studies.

    Science.gov (United States)

    Stokes, William S

    2004-01-01

    Evidence that chemicals in the environment may cause developmental and reproductive abnormalities in fish and wildlife by disrupting normal endocrine functions has increased concern about potential adverse human health effects from such chemicals. US laws have now been enacted that require the US Environmental Protection Agency (EPA) to develop and validate a screening program to identify chemicals in food and water with potential endocrine-disrupting activity. EPA subsequently proposed an Endocrine Disruptor Screening Program that uses in vitro and in vivo test systems to identify chemicals that may adversely affect humans and ecologically important animal species. However, the endocrine system can be readily modulated by many experimental factors, including diet and the genetic background of the selected animal strain or stock. It is therefore desirable to minimize or avoid factors that cause or contribute to experimental variation in endocrine disruptor research and testing studies. Standard laboratory animal diets contain high and variable levels of phytoestrogens, which can modulate physiologic and behavioral responses similar to both endogenous estrogen as well as exogenous estrogenic chemicals. Other studies have determined that some commonly used outbred mice and rats are less responsive to estrogenic substances than certain inbred mouse and rat strains for various estrogen-sensitive endpoints. It is therefore critical to select appropriate biological models and diets for endocrine disruptor studies that provide optimal sensitivity and specificity to accomplish the research or testing objectives. An introduction is provided to 11 other papers in this issue that review these and other important laboratory animal experimental design considerations in greater detail, and that review laboratory animal and in vitro models currently being used or evaluated for endocrine disruptor research and testing. 

  12. Experimental design and analysis for accelerated degradation tests with Li-ion cells.

    Energy Technology Data Exchange (ETDEWEB)

    Doughty, Daniel Harvey; Thomas, Edward Victor; Jungst, Rudolph George; Roth, Emanuel Peter

    2003-08-01

    This document describes a general protocol (involving both experimental and data analytic aspects) that is designed to be a roadmap for rapidly obtaining a useful assessment of the average lifetime (at some specified use conditions) that might be expected from cells of a particular design. The proposed experimental protocol involves a series of accelerated degradation experiments. Through the acquisition of degradation data over time specified by the experimental protocol, an unambiguous assessment of the effects of accelerating factors (e.g., temperature and state of charge) on various measures of the health of a cell (e.g., power fade and capacity fade) will result. In order to assess cell lifetime, it is necessary to develop a model that accurately predicts degradation over a range of the experimental factors. In general, it is difficult to specify an appropriate model form without some preliminary analysis of the data. Nevertheless, assuming that the aging phenomenon relates to a chemical reaction with simple first-order rate kinetics, a data analysis protocol is also provided to construct a useful model that relates performance degradation to the levels of the accelerating factors. This model can then be used to make an accurate assessment of the average cell lifetime. The proposed experimental and data analysis protocols are illustrated with a case study involving the effects of accelerated aging on the power output from Gen-2 cells. For this case study, inadequacies of the simple first-order kinetics model were observed. However, a more complex model allowing for the effects of two concurrent mechanisms provided an accurate representation of the experimental data.
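
    The modeling idea described above, first-order degradation kinetics whose rate constant follows an Arrhenius dependence on temperature, can be sketched with synthetic data. All parameter values below are hypothetical, not those of the Gen-2 cells:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical "true" Arrhenius parameters, used only to generate data.
A_true, Ea_true = 1e5, 50e3  # pre-exponential (1/day), activation energy (J/mol)

temps_K = np.array([298.0, 313.0, 328.0])  # accelerated aging temperatures
t = np.linspace(0, 100, 11)                # aging time, days

# Simple first-order kinetics: fraction of performance remaining = exp(-k t).
rates = []
for T in temps_K:
    k = A_true * np.exp(-Ea_true / (R * T))
    frac = np.exp(-k * t)
    # Recover k from the "data" by a linear fit of ln(frac) versus t.
    k_hat = -np.polyfit(t, np.log(frac), 1)[0]
    rates.append(k_hat)

# Arrhenius plot: ln k versus 1/T is a straight line with slope -Ea/R,
# so the fitted slope gives back the activation energy.
slope, intercept = np.polyfit(1.0 / temps_K, np.log(rates), 1)
Ea_hat = -slope * R
print(f"estimated Ea = {Ea_hat / 1000:.1f} kJ/mol")  # recovers ~50 kJ/mol by construction
```

    With the activation energy in hand, rate constants measured at accelerated temperatures can be extrapolated to the use temperature to estimate average lifetime, which is exactly the role the simple first-order model plays in the protocol (before the more complex two-mechanism model is needed).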

  13. Survey of the quality of experimental design, statistical analysis and reporting of research using animals.

    Directory of Open Access Journals (Sweden)

    Carol Kilkenny

    Full Text Available For scientific, ethical and economic reasons, experiments involving animals should be appropriately designed, correctly analysed and transparently reported. This increases the scientific validity of the results, and maximises the knowledge gained from each experiment. A minimum amount of relevant information must be included in scientific publications to ensure that the methods and results of a study can be reviewed, analysed and repeated. Omitting essential information can raise scientific and ethical concerns. We report the findings of a systematic survey of reporting, experimental design and statistical analysis in published biomedical research using laboratory animals. Medline and EMBASE were searched for studies reporting research on live rats, mice and non-human primates carried out in UK and US publicly funded research establishments. Detailed information was collected from 271 publications, about the objective or hypothesis of the study, the number, sex, age and/or weight of animals used, and experimental and statistical methods. Only 59% of the studies stated the hypothesis or objective of the study and the number and characteristics of the animals used. Appropriate and efficient experimental design is a critical component of high-quality science. Most of the papers surveyed did not use randomisation (87%) or blinding (86%), to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods described their methods and presented the results with a measure of error or variability. This survey has identified a number of issues that need to be addressed in order to improve experimental design and reporting in publications describing research using animals.
Scientific publication is a powerful and important source of information; the authors of scientific publications therefore have a responsibility to describe their methods and results comprehensively, accurately and transparently, and peer

  14. Survey of the Quality of Experimental Design, Statistical Analysis and Reporting of Research Using Animals

    Science.gov (United States)

    Kilkenny, Carol; Parsons, Nick; Kadyszewski, Ed; Festing, Michael F. W.; Cuthill, Innes C.; Fry, Derek; Hutton, Jane; Altman, Douglas G.

    2009-01-01

    For scientific, ethical and economic reasons, experiments involving animals should be appropriately designed, correctly analysed and transparently reported. This increases the scientific validity of the results, and maximises the knowledge gained from each experiment. A minimum amount of relevant information must be included in scientific publications to ensure that the methods and results of a study can be reviewed, analysed and repeated. Omitting essential information can raise scientific and ethical concerns. We report the findings of a systematic survey of reporting, experimental design and statistical analysis in published biomedical research using laboratory animals. Medline and EMBASE were searched for studies reporting research on live rats, mice and non-human primates carried out in UK and US publicly funded research establishments. Detailed information was collected from 271 publications, about the objective or hypothesis of the study, the number, sex, age and/or weight of animals used, and experimental and statistical methods. Only 59% of the studies stated the hypothesis or objective of the study and the number and characteristics of the animals used. Appropriate and efficient experimental design is a critical component of high-quality science. Most of the papers surveyed did not use randomisation (87%) or blinding (86%), to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods described their methods and presented the results with a measure of error or variability. This survey has identified a number of issues that need to be addressed in order to improve experimental design and reporting in publications describing research using animals. Scientific publication is a powerful and important source of information; the authors of scientific publications therefore have a responsibility to describe their methods and results comprehensively, accurately and transparently, and peer reviewers and

  15. A GENERALIZED SAMPLING THEOREM OVER GALOIS FIELD DOMAINS FOR EXPERIMENTAL DESIGN

    Directory of Open Access Journals (Sweden)

    Yoshifumi Ukita

    2015-12-01

    Full Text Available In this paper, the sampling theorem for bandlimited functions over Galois field domains is generalized to one over products of Galois field domains. The generalized theorem is applicable to the experimental design model in which each factor has a different number of levels, and enables us to estimate the parameters in the model by using Fourier transforms. Moreover, the relationship between the proposed sampling theorem and orthogonal arrays is also provided.

  16. Experimental evaluation of the Battelle accelerated test design for the solar array at Mead, Nebraska

    Science.gov (United States)

    Frickland, P. O.; Repar, J.

    1982-01-01

    A previously developed test design for accelerated aging of photovoltaic modules was experimentally evaluated. The studies included a review of relevant field experience, environmental chamber cycling of full size modules, and electrical and physical evaluation of the effects of accelerated aging during and after the tests. The test results indicated that thermally induced fatigue of the interconnects was the primary mode of module failure as measured by normalized power output. No chemical change in the silicone encapsulant was detectable after 360 test cycles.

  17. Experimental evaluation of the Battelle accelerated test design for the solar array at Mead, Nebraska

    Science.gov (United States)

    Frickland, P. O.; Repar, J.

    1982-04-01

    A previously developed test design for accelerated aging of photovoltaic modules was experimentally evaluated. The studies included a review of relevant field experience, environmental chamber cycling of full size modules, and electrical and physical evaluation of the effects of accelerated aging during and after the tests. The test results indicated that thermally induced fatigue of the interconnects was the primary mode of module failure as measured by normalized power output. No chemical change in the silicone encapsulant was detectable after 360 test cycles.

  18. Survey of the quality of experimental design, statistical analysis and reporting of research using animals.

    Science.gov (United States)

    Kilkenny, Carol; Parsons, Nick; Kadyszewski, Ed; Festing, Michael F W; Cuthill, Innes C; Fry, Derek; Hutton, Jane; Altman, Douglas G

    2009-11-30

    For scientific, ethical and economic reasons, experiments involving animals should be appropriately designed, correctly analysed and transparently reported. This increases the scientific validity of the results, and maximises the knowledge gained from each experiment. A minimum amount of relevant information must be included in scientific publications to ensure that the methods and results of a study can be reviewed, analysed and repeated. Omitting essential information can raise scientific and ethical concerns. We report the findings of a systematic survey of reporting, experimental design and statistical analysis in published biomedical research using laboratory animals. Medline and EMBASE were searched for studies reporting research on live rats, mice and non-human primates carried out in UK and US publicly funded research establishments. Detailed information was collected from 271 publications, about the objective or hypothesis of the study, the number, sex, age and/or weight of animals used, and experimental and statistical methods. Only 59% of the studies stated the hypothesis or objective of the study and the number and characteristics of the animals used. Appropriate and efficient experimental design is a critical component of high-quality science. Most of the papers surveyed did not use randomisation (87%) or blinding (86%), to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods described their methods and presented the results with a measure of error or variability. This survey has identified a number of issues that need to be addressed in order to improve experimental design and reporting in publications describing research using animals. Scientific publication is a powerful and important source of information; the authors of scientific publications therefore have a responsibility to describe their methods and results comprehensively, accurately and transparently, and peer reviewers and

  19. The Experimental Design of Emotional Switching

    Institute of Scientific and Technical Information of China (English)

    雷文斌

    2011-01-01

    To study people's emotional switching ability, an experimental design for emotional switching was constructed along three dimensions: sex, stimulus type, and task type.

  20. Optimizing laboratory animal stress paradigms: The H-H* experimental design.

    Science.gov (United States)

    McCarty, Richard

    2017-01-01

    Major advances in behavioral neuroscience have been facilitated by the development of consistent and highly reproducible experimental paradigms that have been widely adopted. In contrast, many different experimental approaches have been employed to expose laboratory mice and rats to acute versus chronic intermittent stress. An argument is advanced in this review that more consistent approaches to the design of chronic intermittent stress experiments would provide greater reproducibility of results across laboratories and greater reliability relating to various neural, endocrine, immune, genetic, and behavioral adaptations. As an example, the H-H* experimental design incorporates control, homotypic (H), and heterotypic (H*) groups and allows for comparisons across groups, where each animal is exposed to the same stressor, but that stressor has vastly different biological and behavioral effects depending upon each animal's prior stress history. Implementation of the H-H* experimental paradigm makes possible a delineation of transcriptional changes and neural, endocrine, and immune pathways that are activated in precisely defined stressor contexts. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Life on rock. Scaling down biological weathering in a new experimental design at Biosphere-2

    Science.gov (United States)

    Zaharescu, D. G.; Dontsova, K.; Burghelea, C. I.; Chorover, J.; Maier, R.; Perdrial, J. N.

    2012-12-01

    Biological colonization and weathering of bedrock on Earth is a major driver of landscape and ecosystem development, its effects reaching out into other major systems such as climate and the geochemical cycles of elements. In order to understand how microbe-plant-mycorrhizae communities interact with bedrock in the first phases of mineral weathering, we developed a novel experimental design in the Desert Biome at Biosphere-2, University of Arizona (USA). This presentation will focus on the development of the experimental setup. Briefly, six enclosed modules were designed to hold 288 experimental columns that will accommodate 4 rock types and 6 biological treatments. Each module is developed on 3 levels. A lower volume, able to withstand the weight of both the rock material and the rest of the structure, accommodates the sampling elements. A middle volume houses the experimental columns in a dark chamber. A clear upper section forms the habitat exposed to sunlight. This volume is completely sealed from the exterior, allowing complete control of its air and water parameters. All modules are connected in parallel with a double air purification system that delivers a permanent air flow. This setup is expected to provide a model experiment able to test important processes in the rock-life interaction at grain-to-molecular scale.

  2. Quasi-experimental study designs series - paper 1: history and introduction.

    Science.gov (United States)

    Bärnighausen, Till; Røttingen, John-Arne; Rockers, Peter; Shemilt, Ian; Tugwell, Peter

    2017-07-07

    OBJECTIVE: To contrast the historical development of experiments and quasi-experiments and provide the motivation for a journal series on quasi-experimental designs in health research. STUDY DESIGN: A short historical narrative, with concrete examples, and arguments based on an understanding of the practice of health research and evidence synthesis. RESULTS: Health research has played a key role in developing today's gold standard for causal inference - the randomized controlled multiply blinded trial. Historically, allocation approaches developed from convenience and purposive allocation to alternate and, finally, to random allocation. This development was motivated both by concerns for manipulation in allocation as well as by statistical and theoretical developments demonstrating the power of randomization in creating counterfactuals for causal inference. In contrast to the sequential development of experiments, quasi-experiments originated at very different points in time, from very different scientific perspectives, and with frequent and long interruptions in their methodological development. Health researchers have only recently started to recognize the value of quasi-experiments for generating novel insights on causal relationships. While quasi-experiments are unlikely to replace experiments in generating the efficacy and safety evidence required for clinical guidelines and regulatory approval of medical technologies, quasi-experiments can play an important role in establishing the effectiveness of healthcare practice, programs and policies. The papers in this series describe and discuss a range of important issues in utilizing quasi-experimental designs for primary research and quasi-experimental results for evidence synthesis. Copyright © 2017. Published by Elsevier Inc.

  3. Effect of experimental design on the prediction performance of calibration models based on near-infrared spectroscopy for pharmaceutical applications.

    Science.gov (United States)

    Bondi, Robert W; Igne, Benoît; Drennen, James K; Anderson, Carl A

    2012-12-01

    Near-infrared spectroscopy (NIRS) is a valuable tool in the pharmaceutical industry, presenting opportunities for online analyses to achieve real-time assessment of intermediates and finished dosage forms. The purpose of this work was to investigate the effect of experimental designs on prediction performance of quantitative models based on NIRS using a five-component formulation as a model system. The following experimental designs were evaluated: five-level, full factorial (5-L FF); three-level, full factorial (3-L FF); central composite; I-optimal; and D-optimal. The factors for all designs were acetaminophen content and the ratio of microcrystalline cellulose to lactose monohydrate. Other constituents included croscarmellose sodium and magnesium stearate (content remained constant). Partial least squares-based models were generated using data from individual experimental designs that related acetaminophen content to spectral data. The effect of each experimental design was evaluated by determining the statistical significance of the difference in bias and standard error of the prediction for that model's prediction performance. The calibration model derived from the I-optimal design had similar prediction performance as did the model derived from the 5-L FF design, despite containing 16 fewer design points. It also outperformed all other models estimated from designs with similar or fewer numbers of samples. This suggested that experimental-design selection for calibration-model development is critical, and optimum performance can be achieved with efficient experimental designs (i.e., optimal designs).
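
The candidate designs compared in the study can be enumerated in a few lines. The sketch below is illustrative, not the authors' code: it builds the five-level and three-level full factorials and a face-centred central composite design for two coded factors (e.g. drug content and filler ratio), and scores each design by the log-determinant of X'X for a full quadratic model, which is the quantity D-optimal designs maximise.

```python
from itertools import product
import numpy as np

def full_factorial(levels):
    """All combinations of the given coded levels for two factors."""
    return np.array(list(product(levels, levels)), dtype=float)

ff5 = full_factorial([-2, -1, 0, 1, 2])   # five-level full factorial: 25 runs
ff3 = full_factorial([-1, 0, 1])          # three-level full factorial: 9 runs

# Face-centred central composite design: 2^2 factorial + 4 axial + centre point.
ccd = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                [-1, 0], [1, 0], [0, -1], [0, 1],
                [0, 0]], dtype=float)

def model_matrix(pts):
    """Full quadratic model in two coded factors."""
    x1, x2 = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), x1, x2, x1 * x2, x1**2, x2**2])

def log_det_information(pts):
    """D-criterion: log-determinant of the information matrix X'X."""
    x = model_matrix(pts)
    return np.linalg.slogdet(x.T @ x)[1]
```

Comparing criteria per run, rather than in total, is what lets a small I- or D-optimal design approach the performance of the 25-run full factorial noted in the abstract.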

  4. Advanced computational tools for PEM fuel cell design. Part 2. Detailed experimental validation and parametric study

    Science.gov (United States)

    Sui, P. C.; Kumar, S.; Djilali, N.

    This paper reports on the systematic experimental validation of a comprehensive 3D CFD-based computational model presented and documented in Part 1. Simulations for unit cells with straight channels, similar to the Ballard Mk902 hardware, are performed and analyzed in conjunction with detailed current mapping measurements and water mass distributions in the membrane-electrode assembly. The experiments were designed to display sensitivity of the cell over a range of operating parameters including current density, humidification, and coolant temperature, making the data particularly well suited for systematic validation. Based on the validation and analysis of the predictions, values of model parameters, including the electro-osmotic drag coefficient, capillary diffusion coefficient, and catalyst specific surface area, are adjusted to fit experimental data of current density and MEA water content. The predicted net water flux out of the anode (normalized by the total water generated) increases as anode humidification water flow rate is increased, in agreement with experimental results. A modification of the constitutive equation for the capillary diffusivity of water in the porous electrodes that attempts to incorporate the experimentally observed immobile (or irreducible) saturation yields a better fit of the predicted MEA water mass with experimental data. The specific surface area parameter used in the catalyst layer model is found to be effective in tuning the simulations to predict the correct cell voltage over a range of stoichiometries.

  5. Effects of repeatability measures on results of fMRI sICA: a study on simulated and real resting-state effects.

    Science.gov (United States)

    Remes, Jukka J; Starck, Tuomo; Nikkinen, Juha; Ollila, Esa; Beckmann, Christian F; Tervonen, Osmo; Kiviniemi, Vesa; Silven, Olli

    2011-05-15

    Spatial independent components analysis (sICA) has become a widely applied data-driven method for fMRI data, especially for resting-state studies. These sICA approaches are often based on iterative estimation algorithms and there are concerns about accuracy due to noise. Repeatability measures such as ICASSO, RAICAR and ARABICA have been introduced as remedies but information on their effects on estimates is limited. The contribution of this study was to provide more of such information and test if the repeatability analyses are necessary. We compared FastICA-based ordinary and repeatability approaches concerning mixing vector estimates. Comparisons included original FastICA, FSL4 Melodic FastICA and original and modified ICASSO. The effects of bootstrapping and convergence threshold were evaluated. The results show that there is only moderate improvement due to repeatability measures and only in the bootstrapping case. Bootstrapping attenuated power from time courses of resting-state network related ICs at frequencies higher than 0.1 Hz and made subsets of low frequency oscillations more emphasized IC-wise. The convergence threshold did not have a significant role concerning the accuracy of estimates. The performance results suggest that repeatability measures or strict convergence criteria might not be needed in sICA analyses of fMRI data. Consequently, the results in existing sICA fMRI literature are probably valid in this sense. A decreased accuracy of original bootstrapping ICASSO was observed and corrected by using centrotype mixing estimates, but the results warrant thorough evaluations of data-driven methods in general. Also, given the fMRI-specific considerations, further development of sICA methods is strongly encouraged. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Turinsky, Paul J [North Carolina State Univ., Raleigh, NC (United States); Abdel-Khalik, Hany S [North Carolina State Univ., Raleigh, NC (United States); Stover, Tracy E [North Carolina State Univ., Raleigh, NC (United States)

    2011-03-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept’s core attributes from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found to maximize the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desired to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because with improved data the reactor concept can be re-optimized itself. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. 
The representativity of the experiment

  7. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E

    2011-03-31

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept’s core attributes from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found to maximize the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desired to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because with improved data the reactor concept can be re-optimized itself. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. 
The representativity of the experiment

  8. Performance evaluation of broiler genotypes by repeated measures

    Directory of Open Access Journals (Sweden)

    Millor Fernandes do Rosário

    2005-12-01

    Full Text Available The objective of this study was to evaluate broiler genotypes through their performance using repeated measures. Treatments consisted of four genotypes (A, B, C and D) and two sexes, evaluated at six ages (7, 14, 21, 28, 35 and 42 days). The variables analysed were average feed intake (CONS), average live weight (PV) and feed conversion ratio (CA). The experimental design was an unbalanced incomplete block design in a 4x2 factorial arrangement in the experimental plots, with six measurements taken over the course of the experiment. The statistical analysis was carried out with the MIXED procedure of SAS®, testing five error variance-covariance structures. Means were estimated by least squares and compared by the Tukey-Kramer test. Quadratic response functions were fitted for CONS and CA, and a Gompertz function for PV, with their respective coefficients of determination, using the NLIN procedure of SAS®. Effects of some triple or double interactions were detected for all variables. Significant differences among genotypes within each age and sex were found for CONS and PV; for CA, differences among genotypes and sexes within each age were found only at 42 days and from 21 days onwards, respectively. Genotype D, despite showing the highest means for CONS and PV, did not show the lowest CA, which was observed in genotypes C and B. The fitted response functions adequately described each variable as a function of age; CONS and PV of genotype D males were higher from 14 days onwards and from 28 to 42 days, distinguishing this treatment from the others. The best CA was observed in males and in genotype C. It is possible to evaluate the performance of broilers by repeated measures; genotypes B and C showed the best performance.
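
The Gompertz growth curve used for live weight (PV) can be fitted with a simple linearisation. This is an illustrative sketch with invented weights, not the study's data or its SAS NLIN fit: with the asymptote fixed, log(log(a/w)) is linear in age, so ordinary least squares recovers the remaining two parameters.

```python
import numpy as np

def gompertz(t, a, b, c):
    """Gompertz growth curve: asymptotic weight a, shape b, rate c."""
    return a * np.exp(-b * np.exp(-c * t))

ages = np.array([7, 14, 21, 28, 35, 42], dtype=float)  # days, as in the study

# Hypothetical live-weight means in grams (not the paper's data).
A_TRUE, B_TRUE, C_TRUE = 4800.0, 4.5, 0.045
weights = gompertz(ages, A_TRUE, B_TRUE, C_TRUE)

# With the asymptote a fixed, the curve linearises:
# log(log(a / w)) = log(b) - c * t, so a straight-line fit recovers b and c.
y = np.log(np.log(A_TRUE / weights))
slope, intercept = np.polyfit(ages, y, 1)
b_hat, c_hat = np.exp(intercept), -slope
```

In practice the asymptote is unknown and all three parameters are estimated jointly by nonlinear least squares, which is what PROC NLIN does.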

  9. Two-dimensional dielectric collimator design and its experimental verification for microwave beam focusing

    Science.gov (United States)

    Kim, H.; Park, J.; Seo, I.; Yoo, J.

    2016-10-01

    A collimator is an electromagnetic device that focuses or aligns the direction of wave propagation to achieve a narrow, intense beam. In this study, we propose a two-dimensional dielectric collimator for microwave beam focusing. This is something that is difficult to achieve using theoretical- or intuition-based approaches. We therefore used a systematic design process, which is referred to as the phase field design method, to obtain an optimal topological configuration for the collimator. The phase field parameter determines the optimal configuration of the dielectric material and, as a consequence, it determines the relative permittivity of the component. To verify the design results, we fabricated a prototype via three-dimensional printing and performed an experimental verification using an electric field scanner to measure the near field distributions of the designed collimator positioned parallel to an incident wave. We also performed angle dependent experiments for which the collimator position was offset at various angles. We confirmed that the experimental results are consistent with the simulation results.

  10. Experimental design and Bayesian networks for enhancement of delta-endotoxin production by Bacillus thuringiensis.

    Science.gov (United States)

    Ennouri, Karim; Ayed, Rayda Ben; Hassen, Hanen Ben; Mazzarello, Maura; Ottaviani, Ennio

    2015-12-01

    Bacillus thuringiensis (Bt) is a Gram-positive bacterium. The entomopathogenic activity of Bt is related to the existence of the crystal consisting of protoxins, also called delta-endotoxins. In order to optimize and explain the production of delta-endotoxins of Bacillus thuringiensis kurstaki, we studied seven medium components: soybean meal, starch, KH₂PO₄, K₂HPO₄, FeSO₄, MnSO₄ and MgSO₄, and their relationships with the concentration of delta-endotoxins using an experimental design (Plackett-Burman design) and Bayesian network modelling. The effects of the ingredients of the culture medium on delta-endotoxin production were estimated. The developed model showed that different medium components are important for the Bacillus thuringiensis fermentation. The most important factors influencing the production of delta-endotoxins were FeSO₄, K₂HPO₄, starch and soybean meal. Indeed, it was found that soybean meal, K₂HPO₄, KH₂PO₄ and starch showed a positive effect on delta-endotoxin production, whereas FeSO₄ and MnSO₄ showed the opposite effect. The developed model, based on Bayesian techniques, can automatically learn emerging patterns in the data to serve in the prediction of delta-endotoxin concentrations. The model constructed in the present study implies that experimental design (Plackett-Burman design) combined with Bayesian network methods could be used to identify the effects of variables on delta-endotoxin variation.
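
A Plackett-Burman screening design like the one used here can be constructed from a cyclic generator. The sketch below (with hypothetical responses, not the paper's data) builds the standard eight-run design for seven two-level factors and estimates each main effect as the difference between the response means at the high and low level of that factor.

```python
import numpy as np

def plackett_burman_8():
    """Eight-run Plackett-Burman design for up to seven two-level factors."""
    gen = np.array([1, 1, 1, -1, 1, -1, -1])       # standard N=8 generator row
    rows = [np.roll(gen, i) for i in range(7)]     # seven cyclic shifts
    rows.append(-np.ones(7, dtype=int))            # plus the all-minus row
    return np.array(rows)

X = plackett_burman_8()  # columns are mutually orthogonal and balanced

# Hypothetical main effects for the seven medium components
# (soybean meal, starch, KH2PO4, K2HPO4, FeSO4, MnSO4, MgSO4).
true_effects = np.array([1.2, 0.8, 0.3, 0.9, -1.5, -0.4, 0.1])
y = 10.0 + X @ (true_effects / 2.0)  # so that effect = mean(+) - mean(-)

# Each estimated main effect is the difference between the response means
# at the high and low level of that factor.
est = np.array([y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
                for j in range(7)])
```

Because the columns are orthogonal, the seven main effects are estimated independently from only eight runs, which is what makes the design efficient for screening.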

  11. Polynomial-time Approximability Results for combinatorial problems arising in Optimal Experimental Design

    CERN Document Server

    Sagnol, Guillaume

    2010-01-01

    The theory of "optimal experimental design" explains how to best select experiments in order to estimate a set of parameters. The quality of the estimation can be measured by the confidence ellipsoids of a certain estimator. This leads to concave maximization problems in which the objective function is nondecreasing with respect to the Löwner ordering of symmetric matrices, and is applied to the "information matrix" describing the structure of these confidence ellipsoids. In a number of real-world applications, the variables controlling the experimental design are discrete, or binary. This paper provides approximability bounds for this NP-hard problem. In particular, we establish a matrix inequality which shows that the objective function is submodular, from which it follows that the greedy approach, which has often been used for this problem, always gives a design within $1-1/e$ of the optimum. We next study the design found by rounding the solution of the continuous relaxed problem, an approach which has ...
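
The greedy selection carrying the 1-1/e guarantee mentioned above can be sketched as follows. This is an illustrative toy version, not the paper's algorithm: candidate experiments are random regressor vectors, the objective is the log-determinant of a regularised information matrix (which is monotone and submodular in the chosen set), and experiments are added one at a time by largest marginal gain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate experiments: each row is the regressor vector of one experiment.
candidates = rng.normal(size=(30, 5))

def information_gain(selected, eps=1e-3):
    """Log-determinant of the regularised information matrix (D-criterion)."""
    m = eps * np.eye(5)
    for i in selected:
        x = candidates[i]
        m += np.outer(x, x)
    return np.linalg.slogdet(m)[1]

def greedy_design(budget):
    """Greedily add the experiment with the largest marginal gain.

    Because the objective is monotone submodular, the greedy design is
    within a factor 1 - 1/e of the optimal budget-constrained design.
    """
    selected = []
    for _ in range(budget):
        remaining = [i for i in range(len(candidates)) if i not in selected]
        best = max(remaining, key=lambda i: information_gain(selected + [i]))
        selected.append(best)
    return selected

design = greedy_design(8)
```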

  12. Repeated measurements of blood lactate concentration as a prognostic marker in horses with acute colitis evaluated with classification and regression trees (CART) and random forest analysis.

    Science.gov (United States)

    Petersen, M B; Tolver, A; Husted, L; Tølbøll, T H; Pihl, T H

    2016-07-01

    The objective of this study was to investigate the prognostic value of single and repeated measurements of blood l-lactate (Lac) and ionised calcium (iCa) concentrations, packed cell volume (PCV) and plasma total protein (TP) concentration in horses with acute colitis. A total of 66 adult horses admitted with acute colitis (2 mmol/L (sensitivity, 0.72; specificity, 0.8). In conclusion, blood lactate concentration measured at admission and repeated 6 h later aided the prognostic evaluation of horses with acute colitis in this population with a very high mortality rate. This should allow clinicians to give a more reliable prognosis for the horse.
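
The reported cutoff evaluation (lactate > 2 mmol/L; sensitivity 0.72, specificity 0.8) amounts to a two-by-two classification of marker against outcome. The sketch below shows the computation on invented data; the values and the resulting sensitivity and specificity are not the study's.

```python
# Illustrative data only: admission lactate (mmol/L) and non-survival (1 = died).
lactate = [1.1, 1.8, 1.9, 3.6, 0.9, 2.9, 4.2, 1.5, 2.2, 5.0]
died =    [0,   0,   1,   1,   0,   1,   1,   0,   0,   1]

def sens_spec(marker, outcome, cutoff):
    """Sensitivity and specificity of the rule 'marker > cutoff' for the outcome."""
    tp = sum(1 for m, d in zip(marker, outcome) if m > cutoff and d)
    fn = sum(1 for m, d in zip(marker, outcome) if m <= cutoff and d)
    tn = sum(1 for m, d in zip(marker, outcome) if m <= cutoff and not d)
    fp = sum(1 for m, d in zip(marker, outcome) if m > cutoff and not d)
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(lactate, died, 2.0)
```

Repeating the measurement 6 h after admission, as the study did, simply applies the same evaluation to the later value (or to the change between the two).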

  13. The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research.

    Science.gov (United States)

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim

    2014-09-01

    The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but many questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results, which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, the multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data. © The Author(s) 2014.
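    As a concrete illustration of one such design matrix, the sketch below codes a single case with a level change and a slope change at treatment onset (interrupted time-series coding); the session counts and coefficient values are hypothetical, not taken from the article.

```python
import numpy as np

# Hypothetical single case: 5 baseline (A) sessions, then 7 treatment (B) sessions.
n_a, n_b = 5, 7
t = np.arange(n_a + n_b)           # session number, 0-based
phase = (t >= n_a).astype(float)   # 0 = baseline, 1 = treatment

# Design matrix columns: intercept, baseline trend, level shift, change in trend.
# Centering the interaction at the first treatment session makes the level-shift
# coefficient the immediate treatment effect.
X = np.column_stack([
    np.ones_like(t, dtype=float),
    t,
    phase,
    phase * (t - n_a),
])

# With noiseless simulated data, ordinary least squares recovers the effects.
beta_true = np.array([10.0, 0.2, 3.0, 0.5])
y = X @ beta_true
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta_hat, 3))
```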

  14. Using an Animal Group Vigilance Practical Session to Give Learners a "Heads-Up" to Problems in Experimental Design

    Science.gov (United States)

    Rands, Sean A.

    2011-01-01

    The design of experimental ecological fieldwork is difficult to teach to classes, particularly when protocols for data collection are normally carefully controlled by the class organiser. Normally, reinforcement of some problems of experimental design, such as the avoidance of pseudoreplication and the use of appropriate sampling techniques, does not occur…

  15. Development of a Model for Measuring Scientific Processing Skills Based on Brain-Imaging Technology: Focused on the Experimental Design Process

    Science.gov (United States)

    Lee, Il-Sun; Byeon, Jung-Ho; Kim, Young-shin; Kwon, Yong-Ju

    2014-01-01

    The purpose of this study was to develop a model for measuring experimental design ability based on functional magnetic resonance imaging (fMRI) during biological inquiry. More specifically, the researchers developed an experimental design task that measures experimental design ability. Using the developed experimental design task, they measured…

  16. The Best Location for Speed Bump Installation Using Experimental Design Methodology

    Directory of Open Access Journals (Sweden)

    Alireza Khademi

    2013-12-01

    Full Text Available Speed bumps, as traffic calming devices, have been extensively used to reduce traffic speed on local streets. This study represents a unique application of experimental design methodology in which the effects of some controllable factors in determining the best location for installing speed bumps before stop points (e.g. entry gates, road junctions) were investigated. Through Classical Design of Experiments (DOE), the optimum location of the speed bump was obtained based on the graphical plots of the significant effects. The speed at the stop point was treated as the response, with minimum speed desirable. Design-Expert® software was used to evaluate and analyze the results obtained. The suggested mathematical model effectively explains the performance indicators within the ranges of the factors. Car speed is the most significant factor affecting the distance-time response; the other factors provide secondary contributions.

  17. On the optimal experimental design for heat and moisture parameter estimation

    CERN Document Server

    Berger, Julien; Mendes, Nathan

    2016-01-01

    In the context of estimating material properties of porous walls based on on-site measurements and an identification method, this paper presents the concept of Optimal Experiment Design (OED). It searches for the best experimental conditions in terms of the quantity and position of sensors and the boundary conditions imposed on the material. These optimal conditions maximize the accuracy of the identification method and thus of the estimated parameters. The search for the OED is done by using the Fisher information matrix and a priori knowledge of the parameters. The methodology is applied to two case studies. The first one deals with purely conductive heat transfer. The concept of optimal experiment design is detailed and verified with 100 inverse problems for different experiment designs. The second case study combines a strong coupling between heat and moisture transfer through a porous building material. The methodology presented is based on a scientific formalism for efficient planning of experim...
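    A minimal numeric sketch of the Fisher-information approach, using a toy exponential-decay model in place of the paper's heat and moisture transfer equations; the model, parameter values, noise level, and candidate designs are all assumptions for illustration. The D-criterion (determinant of the information matrix) ranks measurement schedules: a larger determinant means a smaller confidence ellipsoid.

```python
import numpy as np

# Toy model y(t; k, y0) = y0 * exp(-k t), a stand-in for a transfer model
# whose parameters are to be identified; sensitivities are analytic.
def fim(times, k=0.5, y0=1.0, sigma=0.05):
    # Sensitivity matrix: d y / d (k, y0) at each measurement time.
    S = np.array([[-y0 * t * np.exp(-k * t), np.exp(-k * t)] for t in times])
    return S.T @ S / sigma**2   # Fisher information for iid Gaussian noise

def d_criterion(times):
    return np.linalg.det(fim(times))

design_a = [0.1, 0.2, 0.3]      # all measurements clustered early
design_b = [0.5, 2.0, 4.0]      # measurements spread over the transient
print(d_criterion(design_a), d_criterion(design_b))
```

For clustered early times the two sensitivity columns are nearly collinear, so the spread design wins under the D-criterion.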

  18. Randomized block experimental designs can increase the power and reproducibility of laboratory animal experiments.

    Science.gov (United States)

    Festing, Michael F W

    2014-01-01

    Randomized block experimental designs have been widely used in agricultural and industrial research for many decades. Usually they are more powerful, have higher external validity, are less subject to bias, and produce more reproducible results than the completely randomized designs typically used in research involving laboratory animals. Reproducibility can be further increased by using time as a blocking factor. These benefits can be achieved at no extra cost. A small experiment investigating the effect of an antioxidant on the activity of a liver enzyme in four inbred mouse strains, which had two replications (blocks) separated by a period of two months, illustrates this approach. The widespread failure to use these designs more widely in research involving laboratory animals has probably led to a substantial waste of animals, money, and scientific resources and slowed down the development of new treatments for human and animal diseases.

  19. Statistical issues in quality control of proteomic analyses: good experimental design and planning.

    Science.gov (United States)

    Cairns, David A

    2011-03-01

    Quality control is becoming increasingly important in proteomic investigations as experiments become more multivariate and quantitative. Quality control applies to all stages of an investigation and statistics can play a key role. In this review, the role of statistical ideas in the design and planning of an investigation is described. This involves the design of unbiased experiments using key concepts from statistical experimental design, the understanding of the biological and analytical variation in a system using variance components analysis and the determination of a required sample size to perform a statistically powerful investigation. These concepts are described through simple examples and an example data set from a 2-D DIGE pilot experiment. Each of these concepts can prove useful in producing better and more reproducible data. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
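    The sample-size determination the review recommends at the planning stage can be illustrated with the standard two-sided normal-approximation formula for comparing two group means, n per group = 2((z_{1-α/2} + z_{1-β})σ/δ)²; this is a generic textbook formula, not the review's own worked example, and only the two quantiles used here are tabulated in the sketch.

```python
import math

# Per-group sample size to detect a mean difference delta with noise sd sigma,
# two-sided alpha = 0.05 and power = 0.8 (only these quantiles are tabulated).
def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    z = {0.975: 1.959964, 0.8: 0.841621}   # standard normal quantiles
    return math.ceil(2 * ((z[0.975] + z[power]) * sigma / delta) ** 2)

print(n_per_group(delta=1.0, sigma=1.0))   # detect a 1 SD difference
```

The familiar result, roughly 16 per group for a one-standard-deviation effect at 80% power, falls out directly.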

  20. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    Science.gov (United States)

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism, but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for the networks of Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and non-linear approach (S-criterion). Both approaches generate almost the same input mixture; however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further enhanced when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position one labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi
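    The quality-versus-cost trade-off reduces to finding non-dominated (Pareto-optimal) designs. The sketch below filters a handful of candidate tracer mixtures; the mixture names, information scores, and costs are made-up placeholders, since the paper's actual values come from its flux model.

```python
# Hypothetical tracer mixtures scored as (information, cost); all numbers
# are invented for illustration only.
designs = {
    "100% U-13C glc":           (0.40, 200.0),
    "80% 1,2-13C2 + 20% U-13C": (0.85, 210.0),
    "100% 1,2-13C2 glc":        (0.80, 190.0),
    "natural-abundance glc":    (0.05,  20.0),
}

def pareto_front(items):
    # Keep designs not dominated by any other design that has both
    # higher-or-equal information and lower-or-equal cost.
    front = []
    for name, (info, cost) in items.items():
        dominated = any(i2 >= info and c2 <= cost and (i2, c2) != (info, cost)
                        for i2, c2 in items.values())
        if not dominated:
            front.append(name)
    return sorted(front)

print(pareto_front(designs))
```

Here the pure U-13C mixture is dominated (the 1,2-13C2 mixture is both more informative and cheaper), so the decision maker chooses among the remaining compromise designs.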

  1. LOGICAL AND EXPERIMENTAL DESIGN FOR PHENOL DEGRADATION USING IMMOBILIZED ACINETOBACTER SP. CULTURE

    Directory of Open Access Journals (Sweden)

    Amro Abd Al Fattah Amara

    2010-05-01

    Full Text Available Phenol degradation proceeds through a series of enzymatic reactions and is affected by different components of the microbial metabolic flux. Optimization strategies such as mutagenesis can succeed, but they may also cause the loss of important microbial features or the release of new virulence or other unexpected characters. Plackett-Burman design closes much of the gap between optimization, safety, time, cost, man-hours, the complexity of the metabolic flux, etc. Using a Plackett-Burman experimental design maps the factors affecting the optimization process by clarifying the nutrient requirements and the best environmental conditions. In this study, nine variables, pH (X1), temperature (X2), glucose (X3), yeast extract (X4), meat extract (X5), NH4NO3 (X6), K-salt (X7), Mg-salt (X8) and trace elements (X9), were optimized during phenol degradation by Acinetobacter sp. using the Plackett-Burman design method. The design comprised 16 experiments, with each variable used at two levels: low [-1] and high [+1]. According to the Plackett-Burman design experiments, the maximum degradation rate was 31.25 mg/l/h. Logical and statistical analysis of the data led to the selection of pH, temperature and meat extract as the three factors affecting the phenol degradation rate. These three variables were then used in a Box-Behnken experimental design for further optimization. Meat extract, although not statistically recommended for optimization, was retained because it can substitute for trace elements, which are statistically significant. Glucose, although statistically significant, was not included because it had a negative effect and gave the best result at 0 g/l; it was therefore completely omitted from the medium. pH, temperature and meat extract were used in fifteen experiments, each at three levels (-1, 0 and +1) according to the Box-Behnken design. The Microsoft Excel 2002 Solver tool was used to optimize the model created from Box-Behnken. The
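    A 16-run, two-level orthogonal screening layout of the kind used here can be built generically from a Hadamard matrix; the abstract does not give the study's actual design table, so the Sylvester construction below only shows the kind of array involved (nine ±1 factor columns over 16 runs, orthogonal so main effects are estimated independently when interactions are ignored).

```python
import numpy as np

# Sylvester construction of a 16x16 Hadamard matrix.
H = np.array([[1]])
while H.shape[0] < 16:
    H = np.block([[H, H], [H, -H]])

# Drop the all-ones column and keep 9 columns: one per factor,
# with +1/-1 encoding each factor's high/low level.
design = H[:, 1:10]
print(design.shape)
```

Every pair of factor columns is balanced and orthogonal, which is the property that lets a 16-run screen estimate nine main effects.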

  2. Facility for Advanced Accelerator Experimental Tests at SLAC (FACET) Conceptual Design Report

    Energy Technology Data Exchange (ETDEWEB)

    Amann, J.; Bane, K.; /SLAC

    2009-10-30

    This Conceptual Design Report (CDR) describes the design of FACET. It will be updated to stay current with the developing design of the facility. This CDR begins as the baseline conceptual design and will evolve into an 'as-built' manual for the completed facility. The Executive Summary, Chapter 1, gives an introduction to the FACET project and describes the salient features of its design. Chapter 2 gives an overview of FACET. It describes the general parameters of the machine and the basic approaches to implementation. The FACET project does not include the implementation of specific scientific experiments, either for plasma wakefield acceleration or for other applications. Nonetheless, enough work has been done to define potential experiments to assure that the facility can meet the requirements of the experimental community. Chapter 3, Scientific Case, describes the planned plasma wakefield and other experiments. Chapter 4, Technical Description of FACET, describes the parameters and design of all technical systems of FACET. FACET uses the first two thirds of the existing SLAC linac to accelerate the beam to about 20 GeV, and compress it with the aid of two chicanes, located in Sector 10 and Sector 20. The Sector 20 area will include a focusing system, the generic experimental area and the beam dump. Chapter 5, Management of Scientific Program, describes the management of the scientific program at FACET. Chapter 6, Environment, Safety and Health and Quality Assurance, describes the existing programs at SLAC and their application to the FACET project. It includes a preliminary analysis of safety hazards and the planned mitigation. Chapter 7, Work Breakdown Structure, describes the structure used for developing the cost estimates, which will also be used to manage the project. The chapter defines the scope of work of each element down to level 3.

  3. Experimental Validation of an Electromagnet Thermal Design Methodology for Magnetized Dusty Plasma Research

    Science.gov (United States)

    Birmingham, W. J.; Bates, E. M.; Romero-Talamás, C. A.; Rivera, W. F.

    2016-10-01

    An analytic thermal design method developed to aid in the engineering design of Bitter-type magnets, as well as finite element calculations of heat transfer, are compared against experimental measurements of temperature evolution in a prototype magnet designed to operate continuously at 1 T fields while dissipating 9 kW of heat. The analytic thermal design method is used to explore a variety of configurations of cooling holes in the Bitter plates, including their geometry and radial placement. The prototype has diagnostic ports that can accommodate thermocouples, pressure sensors, and optical access to measure the water flow. We present temperature and pressure sensor data from the prototype compared to the analytic thermal model and finite element calculations. The data is being used to guide the design of a 10 T Bitter magnet capable of sustained fields of up to 10 T for at least 10 seconds, which will be used in dusty plasma experiments at the University of Maryland Baltimore County. Preliminary design plans and progress towards the construction of the 10 T electromagnet are also presented.

  4. Design Considerations and Experimental Verification of a Rail Brake Armature Based on Linear Induction Motor Technology

    Science.gov (United States)

    Sakamoto, Yasuaki; Kashiwagi, Takayuki; Hasegawa, Hitoshi; Sasakawa, Takashi; Fujii, Nobuo

    This paper describes the design considerations and experimental verification of an LIM rail brake armature. In order to generate power and maximize the braking force density despite the limited area between the armature and the rail and the limited space available for installation, we studied a design method that is suitable for designing an LIM rail brake armature; we considered adoption of a ring winding structure. To examine the validity of the proposed design method, we developed a prototype ring winding armature for the rail brakes and examined its electromagnetic characteristics in a dynamic test system with roller rigs. By repeating various tests, we confirmed that unnecessary magnetic field components, which were expected to be present under high-speed running conditions or when a ring winding armature was used, were not present. Further, the necessary magnetic field component and braking force attained the desired values. These studies have helped us to develop a basic design method that is suitable for designing LIM rail brake armatures.

  5. Experimental investigation of shaping disturbance observer design for motion control of precision mechatronic stages with resonances

    Science.gov (United States)

    Yang, Jin; Hu, Chuxiong; Zhu, Yu; Wang, Ze; Zhang, Ming

    2017-08-01

    In this paper, shaping disturbance observer (SDOB) is investigated for precision mechatronic stages with middle-frequency zero/pole type resonance to achieve good motion control performance in practical manufacturing situations. Compared with the traditional standard disturbance observer (DOB), in SDOB a pole-zero cancellation based shaping filter is cascaded to the mechatronic stage plant to meet the challenge of motion control performance deterioration caused by actual resonance. Noting that pole-zero cancellation is inevitably imperfect and the controller may consequently even become unstable in practice, frequency domain stability analysis is conducted to find out how each parameter of the shaping filter affects control stability. Moreover, the robust design criterion of the shaping filter and the design procedure of SDOB are both proposed to guide the actual design and facilitate practical implementation. The SDOB with the proposed design criterion is applied to a linear motor driven stage and a voice motor driven stage, respectively. Experimental results consistently validate the effectiveness of the proposed SDOB scheme in practical mechatronics motion applications. The proposed SDOB design could be an effective unit in the controller design for motion stages of mechanical manufacturing equipment.

  6. Study Design Rigor in Animal-Experimental Research Published in Anesthesia Journals.

    Science.gov (United States)

    Hoerauf, Janine M; Moss, Angela F; Fernandez-Bustamante, Ana; Bartels, Karsten

    2017-02-08

    Lack of reproducibility of preclinical studies has been identified as an impediment for translation of basic mechanistic research into effective clinical therapies. Indeed, the National Institutes of Health has revised its grant application process to require more rigorous study design, including sample size calculations, blinding procedures, and randomization steps. We hypothesized that the reporting of such metrics of study design rigor has increased over time for animal-experimental research published in anesthesia journals. PubMed was searched for animal-experimental studies published in 2005, 2010, and 2015 in primarily English-language anesthesia journals. A total of 1466 publications were graded on the performance of sample size estimation, randomization, and blinding. Cochran-Armitage test was used to assess linear trends over time for the primary outcome of whether or not a metric was reported. Interrater agreement for each of the 3 metrics (power, randomization, and blinding) was assessed using the weighted κ coefficient in a 10% random sample of articles rerated by a second investigator blinded to the ratings of the first investigator. A total of 1466 manuscripts were analyzed. Reporting for all 3 metrics of experimental design rigor increased over time (2005 to 2010 to 2015): for power analysis, from 5% (27/516), to 12% (59/485), to 17% (77/465); for randomization, from 41% (213/516), to 50% (243/485), to 54% (253/465); and for blinding, from 26% (135/516), to 38% (186/485), to 47% (217/465). The weighted κ coefficients and 98.3% confidence interval indicate almost perfect agreement between the 2 raters beyond that which occurs by chance alone (power, 0.93 [0.85, 1.0], randomization, 0.91 [0.85, 0.98], and blinding, 0.90 [0.84, 0.96]). Our hypothesis that reported metrics of rigor in animal-experimental studies in anesthesia journals have increased during the past decade was confirmed. More consistent reporting, or explicit justification for absence
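    The Cochran-Armitage trend test on the primary outcome can be reproduced from the counts reported in the abstract. Below is a hand-rolled version of the statistic, applied to the power-analysis outcome (27/516 in 2005, 59/485 in 2010, 77/465 in 2015) with equally spaced year scores, which is an assumption; the paper does not state its scoring.

```python
import numpy as np

r = np.array([27, 59, 77])     # studies reporting a power analysis
n = np.array([516, 485, 465])  # studies assessed in each year
t = np.array([0.0, 1.0, 2.0])  # equally spaced year scores (assumed)

# Cochran-Armitage statistic for a linear trend in proportions.
p_bar = r.sum() / n.sum()
T = (t * r).sum() - p_bar * (t * n).sum()
var = p_bar * (1 - p_bar) * ((n * t**2).sum() - (n * t).sum()**2 / n.sum())
z = T / np.sqrt(var)
print(round(float(z), 2))      # positive z indicates an increasing trend
```

The strongly positive z-statistic is consistent with the paper's conclusion that reporting of power analyses increased over the decade.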

  7. Design and Experimental Demonstration of Cherenkov Radiation Source Based on Metallic Photonic Crystal Slow Wave Structure

    Science.gov (United States)

    Fu, Tao; Yang, Zi-Qiang; Ouyang, Zheng-Biao

    2016-11-01

    This paper presents a kind of Cherenkov radiation source based on a metallic photonic crystal (MPC) slow-wave structure (SWS) cavity. The Cherenkov source designed by linear theory works at 34.7 GHz when the cathode voltage is 550 kV. A three-dimensional particle-in-cell (PIC) simulation of the SWS shows that the operating frequency of 35.56 GHz with a single TM01 mode is basically consistent with the theoretical one under the same parameters. An experiment was implemented to verify the results of theory and PIC simulation. The experimental system includes a cathode emitting unit, the SWS, a magnetic system, an output antenna, and detectors. Experimental results show that the operating frequency, determined from the propagation delay of the wave in the waveguides, is around 35.5 GHz with a single TM01 mode and an output power reaching 54 MW. This indicates that the MPC structure can reduce mode competition. The purpose of the paper is to show in theory and in preliminary experiment that an SWS with a photonic band gap can produce microwaves in the TM01 mode. It also provides a good experimental and theoretical foundation for designing high-power microwave devices.

  8. Pathobiology of aging mice and GEM: background strains and experimental design.

    Science.gov (United States)

    Brayton, C F; Treuting, P M; Ward, J M

    2012-01-01

    The use of induced and spontaneous mutant mice and genetically engineered mice (and combinations thereof) to study cancers and other aging phenotypes to advance improved functional human life spans will involve studies of aging mice. Genetic background contributes to pathology phenotypes and to causes of death as well as to longevity. Increased recognition of expected phenotypes, experimental variables that influence phenotypes and research outcomes, and experimental design options and rationales can maximize the utility of genetically engineered mice (GEM) models to translational research on aging. This review aims to provide resources to enhance the design and practice of chronic and longevity studies involving GEM. C57BL6, 129, and FVB/N strains are emphasized because of their widespread use in the generation of knockout, transgenic, and conditional mutant GEM. Resources are also included for the pathology of other inbred strain families, including A, AKR, BALB/c, C3H, C57L, C58, CBA, DBA, GR, NOD.scid, SAMP, and SJL/J, and non-inbred mice, including 4WC, AB6F1, Ames dwarf, B6, 129, B6C3F1, BALB/c,129, Het3, nude, SENCAR, and several Swiss stocks. Experimental strategies for long-term cross-sectional and longitudinal studies to assess causes of or contributors to death, disease burden, spectrum of pathology phenotypes, longevity, and functional healthy life spans (health spans) are compared and discussed.

  9. Experimental evolution in silico: a custom-designed mathematical model for virulence evolution of Bacillus thuringiensis.

    Science.gov (United States)

    Strauß, Jakob Friedrich; Crain, Philip; Schulenburg, Hinrich; Telschow, Arndt

    2016-08-01

    Most mathematical models on the evolution of virulence are based on epidemiological models that assume parasite transmission follows the mass action principle. In experimental evolution, however, mass action is often violated due to controlled infection protocols. This "theory-experiment mismatch" raises the question whether there is a need for new mathematical models to accommodate the particular characteristics of experimental evolution. Here, we explore the experimental evolution model system of Bacillus thuringiensis as a parasite and Caenorhabditis elegans as a host. Recent experimental studies with strict control of parasite transmission revealed that one-sided adaptation of B. thuringiensis with non-evolving hosts selects for intermediate or no virulence, sometimes coupled with parasite extinction. In contrast, host-parasite coevolution selects for high virulence and for hosts with strong resistance against B. thuringiensis. In order to explain the empirical results, we propose a new mathematical model that mimics the basic experimental set-up. The key assumptions are: (i) controlled parasite transmission (no mass action), (ii) discrete host generations, and (iii) context-dependent cost of toxin production. Our model analysis revealed the same basic trends as found in the experiments. Especially, we could show that resistant hosts select for highly virulent bacterial strains. Moreover, we found (i) that the evolved level of virulence is independent of the initial level of virulence, and (ii) that the average amount of bacteria ingested significantly affects the evolution of virulence with fewer bacteria ingested selecting for highly virulent strains. These predictions can be tested in future experiments. This study highlights the usefulness of custom-designed mathematical models in the analysis and interpretation of empirical results from experimental evolution.

  10. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination" (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling
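    The classical back-of-envelope behind "X%/Y%" clearance statements is worth making explicit: if n randomly placed samples are all negative, there is X% confidence that at least Y% of the area is free of detectable contamination once Y**n <= 1 - X. Note this is only the frequentist special case; the VSPWG's CJR method is Bayesian and also folds in judgmental samples.

```python
import math

# Smallest n such that n all-negative random samples support an
# "X% confidence / Y% clean" statement, i.e. Y**n <= 1 - X.
def samples_needed(x_conf, y_frac):
    return math.ceil(math.log(1 - x_conf) / math.log(y_frac))

print(samples_needed(0.95, 0.99))   # samples for a 95%/99% clearance statement
```

Tightening either the confidence X or the clean fraction Y drives the required sample count up sharply, which is exactly the cost pressure that motivates the combined judgmental/probabilistic approach.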

  11. AN INNOVATIVE APPROACH FOR PLANNING AND EXECUTION OF PRE-EXPERIMENTAL RUNS FOR DESIGN OF EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Arsalan Farooq, Muhammad

    2016-09-01

    Full Text Available This paper addresses the study of the pre-experimental planning phase of the Design of Experiments (DoE) in order to improve the final product quality. The pre-experimental planning phase includes a clear identification of the problem statement, selection of control factors and their respective levels and ranges. To improve production quality based on the DoE, a new approach for the pre-experimental planning phase, called the Non-Conformity Matrix (NCM), is presented. This article also addresses the key steps of the pre-experimental runs considering a consumer goods manufacturing process. Results of the application for an industrial case show that this methodology can support a clear definition of the problem and also a correct identification of the factor ranges in particular situations. The proposed new approach allows modeling the entire manufacturing system holistically and correctly defining the factor ranges and respective levels for a more effective application of DoE. This new approach can be a useful resource for both research and industrial practitioners who are dedicated to large DoE projects with unknown factor interactions, when the operational levels and ranges are not completely defined.

  12. Optimization of Conversion Treatment on Austenitic Stainless Steel Using Experimental Designs

    Directory of Open Access Journals (Sweden)

    S. El Hajjaji

    2013-01-01

    Full Text Available Conversion coating is commonly used as a treatment to improve the adherence of ceramic films. The conversion coating properties depend on the structure of the alloy as well as on the treatment parameters. These conversion coatings must be characterized by strong interfacial adhesion, high roughness, and high real surface area, which were measured by an electrochemical method. The influence of all the elaboration factors (temperature, time, bath composition: sulphuric acid, thiosulphate as accelerator, propargyl alcohol as inhibitor, and surface state), and also the interactions between these factors, was evaluated using statistical experimental design. The specific surface area and optical factor (α) correspond to the quantitative responses. The evaluation showed, by using a designed experimental procedure, that the most important factor was "surface state." A sanded surface allows the formation of a conversion coating with high real surface area. A further aim was to optimise two parameters, treatment time and temperature, using a Doehlert shell design and the simplex method. The growth of the conversion coating is also influenced by treatment time and temperature. With such optimized conditions, the real surface area of the conversion coating obtained was about 235 m2/m2.

  13. Computational simulations of frictional losses in pipe networks confirmed in experimental apparatuses designed by honors students

    Science.gov (United States)

    Pohlman, Nicholas A.; Hynes, Eric; Kutz, April

    2015-11-01

    Lectures in introductory fluid mechanics at NIU are a combination of students with standard enrollment and students seeking honors credit for an enriching experience. Most honors students dread the additional homework problems or an extra paper assigned by the instructor. During the past three years, honors students in my class have instead collaborated to design wet-lab experiments for their peers to predict variable volume flow rates of open reservoirs driven by gravity. Rather than doing extra work, the honors students learn the Bernoulli head-loss equation earlier in order to design appropriate systems for an experimental wet lab. Prior designs incorporated minor loss features such as sudden contraction or multiple unions and valves. The honors students from Spring 2015 expanded the repertoire of available options by developing large-scale set-ups with multiple pipe networks that could be combined together to test the flexibility of the student teams' computational programs. Bridging theory with practice engaged all of the students, and multiple teams were able to predict performance to within 4% accuracy. The challenges, schedules, and cost estimates of incorporating the experimental lab into an introductory fluid mechanics course will be reported.

  14. Micro-Randomized Trials: An Experimental Design for Developing Just-in-Time Adaptive Interventions

    Science.gov (United States)

    Klasnja, Predrag; Hekler, Eric B.; Shiffman, Saul; Boruvka, Audrey; Almirall, Daniel; Tewari, Ambuj; Murphy, Susan A.

    2015-01-01

    Objective: This paper presents an experimental design, the micro-randomized trial, developed to support optimization of just-in-time adaptive interventions (JITAIs). JITAIs are mHealth technologies that aim to deliver the right intervention components at the right times and locations to optimally support individuals’ health behaviors. Micro-randomized trials offer a way to optimize such interventions by enabling modeling of causal effects and time-varying effect moderation for individual intervention components within a JITAI. Methods: The paper describes the micro-randomized trial design, enumerates research questions that this experimental design can help answer, and provides an overview of the data analyses that can be used to assess the causal effects of studied intervention components and investigate time-varying moderation of those effects. Results: Micro-randomized trials enable causal modeling of proximal effects of the randomized intervention components and assessment of time-varying moderation of those effects. Conclusions: Micro-randomized trials can help researchers understand whether their interventions are having intended effects, when and for whom they are effective, and what factors moderate the interventions’ effects, enabling creation of more effective JITAIs. PMID:26651463
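The per-decision-point randomization that defines a micro-randomized trial can be sketched as a small simulation. The unadjusted difference-in-means estimator below is a simplified stand-in for the weighted, centered estimators used in the MRT literature, and all parameter values are invented for illustration:

```python
import random

def run_mrt(n_participants=50, n_points=100, p_treat=0.5,
            base=1.0, effect=0.3, noise=0.5, seed=7):
    """Simulate a micro-randomized trial: at each decision point each
    participant is independently randomized to receive the intervention
    component (A=1) or not (A=0), and the proximal outcome Y is then
    observed.  Returns a crude estimate of the proximal main effect."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n_participants):
        for _ in range(n_points):
            a = 1 if rng.random() < p_treat else 0
            y = base + effect * a + rng.gauss(0, noise)
            (treated if a else control).append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)
```

Because each person is randomized many times, even this crude estimator recovers the proximal effect with far fewer participants than a between-subjects design would need.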

  15. Design concept and preliminary experimental demonstration of MEMS gyroscopes with 4-DOF master-slave architecture

    Science.gov (United States)

    Acar, Cenk; Shkel, Andrei M.

    2002-07-01

    This paper reports a design concept for MEMS gyroscopes that shifts the complexity of the design from the control architecture to the system dynamics, utilizing the passive disturbance rejection capability of a 4-DOF dynamical system. Specifically, a novel wide-bandwidth micromachined gyroscope design approach based on increasing the degrees of freedom of the oscillatory system by the use of two independently oscillating interconnected proof masses is presented, along with a preliminary experimental demonstration of implementation feasibility. With the 4-DOF concept, inherent disturbance rejection is achieved thanks to the wide operating frequency range of the dynamic system, providing reduced sensitivity to structural and thermal parameter fluctuations. Thus, less demanding active control strategies are required for operation in the presence of perturbations. The fabricated prototype dual-mass gyroscopes successfully demonstrated a dramatically wide driving frequency range within which the drive-direction oscillation amplitude varies insignificantly without any active control, in contrast to conventional gyroscopes, where the mass has to be sustained in constant-amplitude oscillation in a very narrow frequency band. Mechanical amplification of the driven-mass oscillation by the sensing element was also experimentally demonstrated, providing the large oscillation amplitudes that are crucial for sensor performance.
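The passive disturbance-rejection idea can be illustrated with the frequency response of a lightly damped two-mass oscillator: between its two resonance peaks, the passive mass keeps a comparatively flat amplitude, which is the wide operating band such designs exploit. All parameter values below are illustrative, not taken from the fabricated device:

```python
def drive_response(omega, m1=1.0, m2=0.2, k1=1.0, k2=0.2,
                   c1=0.02, c2=0.02, F=1.0):
    """Steady-state amplitude of the passive (second) mass of a 2-DOF
    drive-mode oscillator driven by force F on the first mass.  The
    response shows two resonance peaks with a flat region between them."""
    a11 = k1 + k2 - m1 * omega**2 + 1j * c1 * omega
    a12 = -k2
    a22 = k2 - m2 * omega**2 + 1j * c2 * omega
    det = a11 * a22 - a12 * a12
    x2 = (k2 * F) / det  # Cramer's rule for the 2x2 complex system
    return abs(x2)
```

With these parameters the two resonances sit near omega = 0.80 and 1.25; inside that band the amplitude varies slowly with frequency, so parameter drift detunes the device far less than it would a single high-Q resonator.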

  16. A MILP-based flux alternative generation and NMR experimental design strategy for metabolic engineering.

    Science.gov (United States)

    Phalakornkule, C; Lee, S; Zhu, T; Koepsel, R; Ataai, M M; Grossmann, I E; Domach, M M

    2001-04-01

    A mixed-integer linear program (MILP) is described that can enumerate all the ways fluxes can distribute in a metabolic network while still satisfying the same constraints and objective function. The multiple solutions can be used to (1) generate alternative flux scenarios that can account for limited experimental observations, (2) forecast the potential responses to mutation (e.g., new reaction pathways may be used), and (3) (as illustrated) design (13)C NMR experiments such that different potential flux patterns in a mutant can be distinguished. The experimental design is enabled by using the MILP results as an input to an isotopomer mapping matrices (IMM)-based program, which accounts for the network circulation of (13)C from a precursor such as glucose. The IMM-based program can interface to common plotting programs with the result that the user is provided with predicted NMR spectra that are complete with splittings and Lorentzian line-shape features. The example considered is the trafficking of carbon in an Escherichia coli mutant, which has pyruvate kinase activity deleted for the purpose of eliminating acetate production. Similar yields and extracellular measurements would be manifested by the flux alternatives. The MILP-IMM results suggest how NMR experiments can be designed such that the spectra of glutamate for two flux distribution scenarios differ significantly.
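The core idea, that many flux distributions can satisfy the same balances and objective, can be shown on a toy branched network. The brute-force enumeration below is only a stand-in for the paper's MILP with integer cuts; the network and the numbers are invented for illustration:

```python
from itertools import product as cartesian

def alternate_optima(uptake=10):
    """Enumerate all integer flux splits (v2, v3) at a branch point in a
    toy network where uptake = v2 + v3 and both branches feed the product.
    Every split reaching the maximal product flux is an alternate optimum,
    mimicking the MILP's enumeration of equivalent flux scenarios."""
    best, solutions = -1, []
    for v2, v3 in cartesian(range(uptake + 1), repeat=2):
        if v2 + v3 != uptake:        # steady-state mass balance at the branch
            continue
        objective = v2 + v3          # product formation flux
        if objective > best:
            best, solutions = objective, [(v2, v3)]
        elif objective == best:
            solutions.append((v2, v3))
    return best, solutions
```

Since extracellular yields are identical across these optima, only an intracellular label, such as the 13C patterns the paper computes with isotopomer mapping matrices, can tell the scenarios apart.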

  17. Experimental research of the synthetic jet generator designs based on actuation of diaphragm with piezoelectric actuator

    Science.gov (United States)

    Rimasauskiene, R.; Matejka, M.; Ostachowicz, W.; Kurowski, M.; Malinowski, P.; Wandowski, T.; Rimasauskas, M.

    2015-01-01

    Experimental analyses of four in-house developed synthetic jet generator designs are presented in this paper. The main task of this work was to find the most appropriate design of the synthetic jet generator. Dynamic characteristics of the synthetic jet generator's diaphragm with piezoelectric material were measured using non-contact equipment, a Polytec® PSV 400 laser vibrometer. Temperatures of the piezoelectric diaphragms working at resonance frequency were measured with a Fiber Bragg Grating (FBG) sensor. Experimental analysis of the synthetic jet generator amplitude-frequency characteristics was performed using CTA (hot-wire anemometry) measuring techniques. A piezoelectric diaphragm 27 mm in diameter was excited by a sinusoidal voltage signal and fixed tightly inside the chamber of the synthetic jet generator. The number of synthetic jet generator orifices (1 or 3) and the cavity volume (cavity height varying from 0.5 mm to 1.5 mm) were changed. The highest synthetic jet velocity, 25 m/s, was obtained with the generator that has a 0.5 mm cavity and 1 orifice (resonance frequency of the piezoelectric diaphragm: 2.8 kHz). It can be concluded that this type of design is preferred in order to obtain the peak velocity of the synthetic jet.
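A rough way to see why the smallest cavity gave the highest jet velocity is a Helmholtz-resonator estimate of the cavity's acoustic resonance. The sketch below uses the 0.5 to 1.5 mm cavity heights and 27 mm diameter from the abstract, while the orifice dimensions are assumed purely for illustration; the actual device behavior also depends on the diaphragm's structural resonance:

```python
import math

def helmholtz_freq(orifice_d, cavity_d, cavity_h, orifice_l, c=343.0):
    """Helmholtz resonance estimate f = (c / 2*pi) * sqrt(A / (V * l_eff))
    for a cylindrical cavity with a single circular orifice.
    All dimensions in metres; neck length gets a simple end correction."""
    A = math.pi * (orifice_d / 2) ** 2              # orifice area
    V = math.pi * (cavity_d / 2) ** 2 * cavity_h    # cavity volume
    l_eff = orifice_l + 0.85 * orifice_d            # end-corrected neck length
    return (c / (2 * math.pi)) * math.sqrt(A / (V * l_eff))
```

Shrinking the cavity height reduces V and pushes the acoustic resonance up, consistent with the strongest jet appearing at the smallest cavity volume.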

  18. EXPERIMENTAL RESEARCH AND DESIGN ON HEAT TRANSFER OF EVAPORATOR USED IN THE LARGE QUICK FREEZE PLANT

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The evaporator is the main part of quick-freeze equipment. Many factors influence the heat transfer coefficient of an evaporator; the most important among them are the fin shape, tube diameter, fin spacing, frost, and air flow velocity. These mainly determine the thermal efficiency of the evaporator, which in turn has a direct relationship with the overall efficiency of the quick-freeze plant. Evaporators with different structural types have different heat transfer efficiency. In order to obtain a high-efficiency evaporator structure, 8 evaporator models with different fin shapes, tube diameters, and tube arrangements are analyzed and compared. The calculation results show that integral waved fins, equilateral-triangle-arranged small-diameter tubes, and varying fin spacing give the highest heat transfer coefficient. The experimental result also shows that the evaporator with this type of structure has better thermal efficiency, in good agreement with the calculation result. These results can guide routine engineering design. A full-scale quick-freeze plant was designed and put into production; compared with traditional domestic quick-freeze equipment, it decreases size by 40% and energy consumption by 20%.
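The influence of fin geometry on the air-side heat transfer can be illustrated with the classical straight-fin efficiency relation. This is a textbook formula, not the paper's calculation model, and the input values below are illustrative:

```python
import math

def fin_efficiency(h, k, thickness, length):
    """Efficiency of a straight rectangular fin with an adiabatic tip:
    eta = tanh(m*L) / (m*L), with m = sqrt(2*h / (k*t)).
    h: air-side coefficient [W/m^2K], k: fin conductivity [W/mK],
    thickness and length in metres."""
    m = math.sqrt(2 * h / (k * thickness))
    mL = m * length
    return math.tanh(mL) / mL
```

The formula makes the geometric trade-off explicit: longer or thinner fins add area but lose efficiency, which is exactly the kind of balance the 8 compared evaporator models explore.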

  19. Experimental characterization and numerical modeling of PEMFC stacks designed for different application fields

    Energy Technology Data Exchange (ETDEWEB)

    Jannelli, E.; Minutillo, M. [University of Naples, Parthenope, Centro Direzionale, Naples (Italy); Perna, A. [University of Cassino, Cassino (Italy)

    2011-12-15

    Proton exchange membrane fuel cells (PEMFC) are regarded as a potential future power technology for stationary and mobile applications due to their high efficiency (at full and partial load), rapid start-up, high power density, and low emissions. Depending on the particular application field (decentralized combined heat and power production, uninterrupted power supplies (UPS), or mobile applications), different operating conditions and design parameters are required and different performance can be expected. Thus, the aim of this paper is to investigate the behavior and performance of two stacks of the same size, developed with different approaches according to their application sectors. The first PEMFC stack is designed for UPS units or mobile purposes; the second is designed to supply heat and power in residential applications (CHP units). The analysis of the stacks' behavior has been carried out using both experimental and numerical investigations. Experimental results have allowed: (i) characterizing the stacks; (ii) calibrating the numerical model; (iii) supplying useful data for setting and improving the control system. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  20. Experimental characterization and multidisciplinary conceptual design optimization of a bendable load stiffened unmanned air vehicle wing

    Science.gov (United States)

    Jagdale, Vijay Narayan

    Demand for deployable MAVs and UAVs with wings designed to reduce aircraft storage volume led to the development of a bendable wing concept at the University of Florida (UF). The wing load stiffens in the flight load direction while remaining compliant in the opposite direction, enabling UAV storage inside smaller packing volumes. From the design perspective, when the wing shape parameters are treated as design variables, the performance requirements: high aerodynamic efficiency, structural stability under aggressive flight loads, and the desired compliance to prevent breaking while stored, in general conflict with each other. Creep deformation induced by long-term storage and its effect on the wing flight characteristics are additional considerations. Experimental characterization of candidate bendable UAV wings is performed in order to demonstrate and understand the aerodynamic and structural behavior of the bendable load-stiffened wing under flight loads and while the wings are stored inside a canister for long durations, in the process identifying some important wing shape parameters. A multidisciplinary, multiobjective design optimization approach is utilized for the conceptual design of a 24 inch span, 7 inch root chord bendable wing. Aerodynamic performance of the wing is studied using the extended vortex lattice method based Athena Vortex Lattice (AVL) program. An arc-length-method-based nonlinear FEA routine in ABAQUS is used to evaluate the structural performance of the wing and to determine the maximum flying velocity that the wing can withstand without buckling or failing under aggressive flight loads. An analytical approach is used to study the stresses developed in the composite wing during storage, and the Tsai-Wu criterion is used to check for failure of the composite wing due to the rolling stresses and to determine the minimum safe storage diameter.
Multidisciplinary wing shape and layup optimization is performed using an elitist non-dominated sorting
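The Tsai-Wu check described in the abstract can be sketched for a single ply under plane stress. The strength values in the test and the common empirical choice of the F12 interaction term are illustrative assumptions, not the paper's data:

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index for a composite ply.
    s1, s2: normal stresses along/transverse to the fibers; t12: shear.
    Xt/Xc, Yt/Yc: tensile/compressive strengths (positive magnitudes);
    S: shear strength.  Failure is predicted when the index reaches 1."""
    F1 = 1 / Xt - 1 / Xc
    F2 = 1 / Yt - 1 / Yc
    F11 = 1 / (Xt * Xc)
    F22 = 1 / (Yt * Yc)
    F66 = 1 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)  # common empirical interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2 * F12 * s1 * s2)
```

Applied over the bending stresses of a rolled wing, the smallest storage diameter with index below 1 in every ply is the minimum safe storage diameter the abstract refers to.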