WorldWideScience

Sample records for sampling time designs

  1. Low-sensitivity H ∞ filter design for linear delta operator systems with sampling time jitter

    Science.gov (United States)

    Guo, Xiang-Gui; Yang, Guang-Hong

    2012-04-01

    This article is concerned with the problem of designing H ∞ filters with low sensitivity to sampling time jitter for a class of linear discrete-time systems via the delta operator approach. A delta-domain model is used to avoid the inherent numerical ill-conditioning that results from using the standard shift-domain model at high sampling rates. Based on the projection lemma, in combination with the descriptor system approach often used to solve delay-related problems, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. The problem of designing a low-sensitivity filter can then be reduced to a convex optimisation problem. An important consideration in the design of such filters is the optimal trade-off between the standard H ∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.
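
    The delta operator referred to above has a standard definition, stated here for reference (it is not quoted from the article): it rescales the forward-shift operator q by the sampling period T, so that the model's parameters approach their continuous-time counterparts as T shrinks.

```latex
% Delta operator in terms of the forward-shift operator q and the
% sampling period T (standard definition, stated here for reference):
\delta x(k) \;=\; \frac{x(k+1) - x(k)}{T} \;=\; \frac{(q-1)\,x(k)}{T},
\qquad \delta \;\to\; \frac{\mathrm{d}}{\mathrm{d}t} \quad \text{as } T \to 0
```

    This is the reason the shift-domain model becomes ill-conditioned at high sampling rates (its poles cluster near z = 1), while the delta-domain parameterization remains well behaved.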

  2. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    Science.gov (United States)

    Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles

    2008-01-01

    The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time, and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage, which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct an LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data collection in emergency
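
    The precision trade-off among these three designs can be illustrated with the usual design-effect approximation, DEFF = 1 + (m − 1)ρ, where m is the cluster size and ρ the intra-cluster correlation. The sketch below is not taken from the study; the prevalence and ρ values are hypothetical, chosen only to show why many small clusters beat few large clusters when ρ is non-negligible.

```python
import math

def ci_halfwidth(p, clusters, per_cluster, icc):
    """Approximate 95% CI half-width for a prevalence estimate under
    cluster sampling, using the design effect DEFF = 1 + (m - 1) * icc."""
    n = clusters * per_cluster
    deff = 1.0 + (per_cluster - 1.0) * icc
    return 1.96 * math.sqrt(p * (1.0 - p) * deff / n)

# Hypothetical prevalence and intra-cluster correlation, for illustration only.
p, icc = 0.20, 0.05
for c, m in [(30, 30), (33, 6), (67, 3)]:
    print(f"{c} x {m} design: +/- {ci_halfwidth(p, c, m, icc):.3f}")
```

    With these made-up inputs, the 30 × 30 design still wins on precision because of its much larger total sample, but the gap narrows as ρ grows, which mirrors the immunization-coverage exception reported above.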

  3. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    Directory of Open Access Journals (Sweden)

    Deconinck Hedwig

    2008-05-01

    Full Text Available Abstract The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time, and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage, which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct an LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data

  4. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    OpenAIRE

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we comp...

  5. A Frequency Domain Design Method For Sampled-Data Compensators

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Jannerup, Ole Erik

    1990-01-01

    A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfies specific design criteria. The new design method will graphically show how the discrete...

  6. Design and development of multiple sample counting setup

    International Nuclear Information System (INIS)

    Rath, D.P.; Murali, S.; Babu, D.A.R.

    2010-01-01

    Full text: The analysis of active samples on a regular basis, for ambient air activity and floor contamination from the radiochemical lab, accounts for a major share of the operational activity in the Health Physicist's responsibilities. The requirement for daily air sample analysis with immediate and delayed counting from various labs, in addition to smear swipe checks of the labs, led to the need for a system that could carry out multiple sample analyses in a time-programmed manner from a single sample loading. A multiple alpha/beta counting system was designed and fabricated. It has arrangements for loading 10 samples in slots, in order, to be counted in a time-programmed manner, with results displayed and records maintained on a PC. The paper describes the design and development of the multiple sample counting setup presently in use at the facility, which has resulted in a reduction of the man-hours consumed in counting and recording the results.

  7. Sample design effects in landscape genetics

    Science.gov (United States)

    Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.

    2012-01-01

    An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially-explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), numbers of alleles per locus (5 and 10), numbers of individuals sampled (10-300), and generational times after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and numbers of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.

  8. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Directory of Open Access Journals (Sweden)

    Lauren Hund

    Full Text Available Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  9. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Science.gov (United States)

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  10. Adaptive designs for the one-sample log-rank test.

    Science.gov (United States)

    Schmidt, Rene; Faldum, Andreas; Kwiecien, Robert

    2017-09-22

    Traditional designs in phase IIa cancer trials are single-arm designs with a binary outcome, for example, tumor response. In some settings, however, a time-to-event endpoint might appear more appropriate, particularly in the presence of loss to follow-up. Then the one-sample log-rank test might be the method of choice. It allows one to compare the survival curve of the patients under treatment to a prespecified reference survival curve. The reference curve usually represents the expected survival under standard of care. In this work, convergence of the one-sample log-rank statistic to Brownian motion is proven using Rebolledo's martingale central limit theorem while accounting for staggered entry times of the patients. On this basis, a confirmatory adaptive one-sample log-rank test is proposed where provision is made for data-dependent sample size reassessment. The focus is to apply the inverse normal method. This is done in two different directions. The first strategy exploits the independent increments property of the one-sample log-rank statistic. The second strategy is based on the patient-wise separation principle. It is shown by simulation that the proposed adaptive test might help to rescue an underpowered trial and at the same time lowers the average sample number (ASN) under the null hypothesis as compared to a single-stage fixed sample design. © 2017, The International Biometric Society.
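
    For orientation, the (fixed-sample) one-sample log-rank statistic compares the observed number of events O against the expectation E accumulated from the reference cumulative hazard over each patient's follow-up; under the null hypothesis it is approximately standard normal. A minimal sketch follows; this is the classical single-stage version, not the paper's adaptive test, and the exponential reference and data are hypothetical.

```python
import math

def one_sample_logrank(times, events, cum_hazard_ref):
    """One-sample log-rank test (sketch): compare observed events O with
    the expectation E under a reference survival curve.
    cum_hazard_ref(t) must return the reference cumulative hazard at t."""
    O = sum(events)                              # observed events
    E = sum(cum_hazard_ref(t) for t in times)    # expected events under H0
    z = (O - E) / math.sqrt(E)                   # approximately N(0,1) under H0
    return O, E, z

# Illustration with an exponential reference, median survival 12 months.
lam = math.log(2) / 12.0
times = [3.0, 8.0, 14.0, 6.0, 20.0]   # follow-up times (hypothetical)
events = [1, 1, 0, 1, 0]              # 1 = death observed, 0 = censored
print(one_sample_logrank(times, events, lambda t: lam * t))
```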

  11. Designing an enhanced groundwater sample collection system

    International Nuclear Information System (INIS)

    Schalla, R.

    1994-10-01

    As part of an ongoing technical support mission to achieve excellence and efficiency in environmental restoration activities at the Laboratory for Energy and Health-Related Research (LEHR), Pacific Northwest Laboratory (PNL) provided guidance on the design and construction of monitoring wells and identified the most suitable type of groundwater sampling pump and accessories for monitoring wells. The goal was to utilize a monitoring well design that would allow for hydrologic testing and reduce turbidity to minimize the impact of sampling. The sampling results of the newly designed monitoring wells were clearly superior to those of the previously installed monitoring wells. The new wells exhibited reduced turbidity, in addition to improved access for instrumentation and hydrologic testing. The variable frequency submersible pump was selected as the best choice for obtaining groundwater samples. The literature references are listed at the end of this report. Despite some initial difficulties, the actual performance of the variable frequency submersible pump and its accessories was effective in reducing sampling time and labor costs, and its ease of use was preferred over that of the previously used bladder pumps. The surface seal system, called the Dedicator, proved to be a useful accessory for preventing surface contamination while providing easy access for water-level measurements and for connecting the pump. Cost savings resulted from the use of the pre-production pumps (beta units) donated by the manufacturer for the demonstration. However, larger savings resulted from shortened field time due to the ease of using the submersible pumps and the surface seal access system. Proper deployment of the monitoring wells also resulted in cost savings and ensured representative samples.

  12. Discrete-time control system design with applications

    CERN Document Server

    Rabbath, C A

    2014-01-01

    This book presents practical techniques of discrete-time control system design. In general, the design techniques lead to low-order dynamic compensators that ensure satisfactory closed-loop performance for a wide range of sampling rates. The theory is given in the form of theorems, lemmas, and propositions. The design of the control systems is presented as step-by-step procedures and algorithms. The proposed feedback control schemes are applied to well-known dynamic system models. This book also discusses: Closed-loop performance of generic models of mobile robot and airborne pursuer dynamic systems under discrete-time feedback control with limited computing capabilities Concepts of discrete-time models and sampled-data models of continuous-time systems, for both single- and dual-rate operation Local versus global digital redesign Optimal, closed-loop digital redesign methods Plant input mapping design Generalized holds and samplers for use in feedback control loops, Numerical simulation of fixed-point arithm...

  13. Planetary Sample Caching System Design Options

    Science.gov (United States)

    Collins, Curtis; Younse, Paulo; Backes, Paul

    2009-01-01

    Potential Mars Sample Return missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transfered to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization system (SHEC) has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other potential sample return missions. Requirements and design options for the SHEC system were studied and a recommended design concept developed. Two families of solutions were explored: 1)transfer of a raw sample from the tool to the SHEC subsystem and 2)transfer of a tube containing the sample to the SHEC subsystem. The recommended design utilizes sample tool bit change out as the mechanism for transferring tubes to and samples in tubes from the tool. The SHEC subsystem design, called the Bit Changeout Caching(BiCC) design, is intended for operations on a MER class rover.

  14. Design and development of a highly sensitive, field portable plasma source instrument for on-line liquid stream monitoring and real-time sample analysis

    International Nuclear Information System (INIS)

    Duan, Yixiang; Su, Yongxuan; Jin, Zhe; Abeln, Stephen P.

    2000-01-01

    The development of a highly sensitive, field portable, low-powered instrument for on-site, real-time liquid waste stream monitoring is described in this article. A series of factors, such as system sensitivity and portability, plasma source, sample introduction, desolvation system, power supply, and instrument configuration, was carefully considered in the design of the portable instrument. A newly designed, miniature, modified microwave plasma source was selected as the emission source for spectroscopy measurement, and an integrated small spectrometer with a charge-coupled device detector was installed for signal processing and detection. An innovative beam collection system with optical fibers was designed and used for emission signal collection. Microwave plasma can be sustained with various gases at relatively low power, and it possesses high detection capabilities for both metal and nonmetal pollutants, making it desirable to use for on-site, real-time, liquid waste stream monitoring. An effective in situ sampling system was coupled with a high efficiency desolvation device for direct sampling of liquid samples into the plasma. A portable computer control system is used for data processing. The new, integrated instrument can be easily used for on-site, real-time monitoring in the field. The system possesses a series of advantages, including high sensitivity for metal and nonmetal elements; in situ sampling; compact structure; low cost; and ease of operation and handling. These advantages will significantly overcome the limitations of previous monitoring techniques and make great contributions to environmental restoration and monitoring.

  15. Estimating time to pregnancy from current durations in a cross-sectional sample

    DEFF Research Database (Denmark)

    Keiding, Niels; Kvist, Kajsa; Hartvig, Helle

    2002-01-01

    A new design for estimating the distribution of time to pregnancy is proposed and investigated. The design is based on recording current durations in a cross-sectional sample of women, leading to statistical problems similar to estimating renewal time distributions from backward recurrence times....

  16. Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo

    2016-11-01

    We propose a cost-effective outcome-dependent sampling design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure is shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.
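
    The weighted partial likelihood idea can be stated compactly. As a point of reference, the generic inverse-probability-weighted form of the Cox score equation is shown below; the specific ODS weights of the paper are not reproduced here, and the notation is the usual counting-process notation, assumed rather than quoted.

```latex
% Inverse-probability-weighted Cox partial-likelihood score (generic form):
% w_i = 1/pi_i is the inverse of subject i's inclusion probability,
% Y_j(t) the at-risk indicator, N_i(t) the event counting process.
U(\beta) \;=\; \sum_{i=1}^{n} w_i \int_0^{\tau}
  \left\{ Z_i
        - \frac{\sum_{j} w_j \, Y_j(t) \, Z_j \, e^{\beta^{\top} Z_j}}
               {\sum_{j} w_j \, Y_j(t) \, e^{\beta^{\top} Z_j}} \right\}
  \, \mathrm{d}N_i(t) \;=\; 0
```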

  17. Mechanical design and simulation of an automatized sample exchanger

    International Nuclear Information System (INIS)

    Lopez, Yon; Gora, Jimmy; Bedregal, Patricia; Hernandez, Yuri; Baltuano, Oscar; Gago, Javier

    2013-01-01

    The design of a turntable-type sample exchanger for irradiation, with a capacity of up to 20 capsules, was performed. Its function is the automatic sending of samples contained in polyethylene capsules for irradiation in the grid position of the reactor core, using a pneumatic system, and their further analysis by neutron activation. This study presents the structural design analysis and the calculations involved in selecting the motors and actuators. This development will improve efficiency in the analysis, reducing the operators' workload and also the radiation exposure time. (authors).

  18. Eigenvalue sensitivity of sampled time systems operating in closed loop

    Science.gov (United States)

    Bernal, Dionisio

    2018-05-01

    The use of feedback to create closed-loop eigenstructures with high sensitivity has received some attention in the Structural Health Monitoring field. Although practical implementation is necessarily digital, and thus in sampled time, work thus far has centered on the continuous-time framework, both in design and in checking performance. It is shown in this paper that the performance in discrete time, at typical sampling rates, can differ notably from that anticipated in the continuous-time formulation, and that discrepancies can be particularly large in the real part of the eigenvalue sensitivities; a consequence is a potentially important error in the (linear) estimate of the level of damage at which closed-loop stability is lost. As one anticipates, explicit consideration of the sampling rate poses no special difficulties in the closed-loop eigenstructure design, and the relevant expressions are developed in the paper, including a formula for the efficient evaluation of the derivative of the matrix exponential based on the theory of complex perturbations. The paper presents an easily reproduced numerical example showing the level of error that can result when the discrete-time implementation of the controller is not considered.

  19. Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood

    Directory of Open Access Journals (Sweden)

    Olli Saarela

    2012-01-01

    Full Text Available Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.

  20. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
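
    The rule for placing the second sample can be made concrete: pick the time on the declining limb of the plasma curve at which the concentration returns to its value at the early sample. A minimal sketch under an assumed one-compartment oral-absorption profile follows; all parameter values are hypothetical and not taken from the paper.

```python
import math

# Hypothetical one-compartment oral-absorption plasma profile (illustration).
ka, ke, A = 1.5, 0.2, 10.0                 # absorption 1/h, elimination 1/h, scale
conc = lambda t: A * (math.exp(-ke * t) - math.exp(-ka * t))

t_early = 0.5                              # earliest time with BAL conc >= LOQ
target = conc(t_early)                     # plasma level to match later

# Bisection on the declining limb for conc(t_late) == conc(t_early);
# the bracket [2, 48] h lies past t_max (about 1.55 h for these values).
lo, hi = 2.0, 48.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if conc(mid) > target else (lo, mid)
print(f"early sample: {t_early} h, late sample: ~{lo:.2f} h")
```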

  1. Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo

    2016-01-01

    We propose a cost-effective outcome-dependent sampling design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure is shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study. PMID:28090134

  2. Design of 2D time-varying vector fields.

    Science.gov (United States)

    Chen, Guoning; Kwatra, Vivek; Wei, Li-Yi; Hansen, Charles D; Zhang, Eugene

    2012-10-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatial constrained optimizations at the sampled times. The key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects.
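
    The element-based design described here amounts to summing parameterized basis fields with weights that vary over the sampled times. The sketch below is a toy illustration of that idea (a source and a sink blended by a key-frame-style weight), not the authors' system; the field shapes and weight schedule are invented.

```python
import numpy as np

def radial_basis(px, py, cx, cy, strength):
    """A single radial basis vector field centred at (cx, cy) (sketch)."""
    dx, dy = px - cx, py - cy
    r2 = dx * dx + dy * dy + 1e-6          # small offset avoids division by zero
    return strength * dx / r2, strength * dy / r2

def field(px, py, t):
    """Time-varying field: two basis elements blended by a weight in t."""
    u1, v1 = radial_basis(px, py, -0.5, 0.0, +1.0)   # source element
    u2, v2 = radial_basis(px, py, +0.5, 0.0, -1.0)   # sink element
    w = 0.5 * (1.0 + np.cos(np.pi * t))              # key-frame style weight
    return w * u1 + (1 - w) * u2, w * v1 + (1 - w) * v2

px, py = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
u, v = field(px, py, t=0.25)               # evaluate at one sampled time
print(u.shape, v.shape)
```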

  3. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

    Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing a cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR) filter structure. The correction method for the interpolation compensation filter coefficients is deduced. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective in attenuating the spurious spurs and improving the dynamic performance of the system.
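
    The underlying idea can be demonstrated directly: samples captured on a skewed time grid are re-interpolated onto the ideal uniform grid with a cubic spline. The sketch below uses scipy's spline object directly rather than the paper's FIR realization of it, and the skew, tone frequency, and record length are made-up values.

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs, n = 4.0e9, 256                     # aggregate sample rate and record length
t_ideal = np.arange(n) / fs
skew = 8e-12                           # assumed channel-1 sample-time error (s)
t_actual = t_ideal + np.where(np.arange(n) % 2 == 1, skew, 0.0)

x = lambda t: np.sin(2 * np.pi * 397e6 * t)   # test tone
samples = x(t_actual)                          # what the skewed TIADC captures

# Resample the skewed grid onto the ideal uniform grid with a cubic spline.
corrected = CubicSpline(t_actual, samples)(t_ideal)
print("max error before:", np.abs(samples - x(t_ideal)).max())
print("max error after: ", np.abs(corrected - x(t_ideal)).max())
```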

  4. Design of 2D Time-Varying Vector Fields

    KAUST Repository

    Chen, Guoning

    2012-10-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatial constrained optimizations at the sampled times. The key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects. © 1995-2012 IEEE.

  5. Design of 2D Time-Varying Vector Fields

    KAUST Repository

    Chen, Guoning; Kwatra, Vivek; Wei, Li-Yi; Hansen, Charles D.; Zhang, Eugene

    2012-01-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatial constrained optimizations at the sampled times. The key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects. © 1995-2012 IEEE.

  6. Sampling designs and methods for estimating fish-impingement losses at cooling-water intakes

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-01-01

    Several systems for estimating fish impingement at power plant cooling-water intakes are compared to determine the most statistically efficient sampling designs and methods. Compared to a simple random sampling scheme, the stratified systematic random sampling scheme, the systematic random sampling scheme, and the stratified random sampling scheme yield higher efficiencies and better estimators for the parameters in two models of fish impingement as a time-series process. Mathematical results and illustrative examples of the applications of the sampling schemes to simulated and real data are given. Some sampling designs applicable to fish-impingement studies are presented in appendixes.

  7. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination" (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling
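
    As a frequentist point of comparison for such X%/Y% statements (the CJR approach itself is Bayesian and combines judgmental with random samples, which reduces the required sample numbers), the classic all-negative acceptance-sampling rule gives the number of purely random samples needed. This sketch is the textbook rule, not the VSPWG's method.

```python
import math

def n_clearance(conf, coverage):
    """Smallest n such that n all-negative random samples support the
    statement 'conf confidence that at least coverage of the area is
    clean' (frequentist analogue of the X%/Y% statement; sketch)."""
    return math.ceil(math.log(1.0 - conf) / math.log(coverage))

print(n_clearance(0.95, 0.99))   # about 299 samples for a 95%/99% statement
```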

  8. Practical reporting times for environmental samples

    International Nuclear Information System (INIS)

    Bayne, C.K.; Schmoyer, D.D.; Jenkins, R.A.

    1993-02-01

    Preanalytical holding times for environmental samples are specified because chemical and physical characteristics may change between sampling and chemical analysis. For example, the Federal Register prescribes a preanalytical holding time of 14 days for volatile organic compounds in soil stored at 4 degrees C. The American Society for Testing Materials (ASTM) uses a more technical definition: the preanalytical holding time is the day when the analyte concentration for an environmental sample falls below the lower 99% confidence interval on the analyte concentration at day zero. This study reviews various holding time definitions and suggests a new preanalytical holding time approach using acceptable error rates for measuring an environmental analyte. This practical reporting time (PRT) approach has been applied to nineteen volatile organic compounds and four explosives in three environmental soil samples. A PRT nomograph of error rates has been developed to estimate the consequences of missing a preanalytical holding time. This nomograph can be applied to a large class of analytes with concentrations that decay linearly or exponentially with time, regardless of sample matrices and storage conditions
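
    The holding-time logic rests on fitting a decay model to stability data and reading off when the fitted concentration crosses a lower acceptance bound. A minimal sketch with made-up stability data follows; a simple 10%-loss tolerance stands in for the lower 99% confidence bound used in the ASTM definition.

```python
import numpy as np

# Hypothetical stability data: analyte concentration vs. storage day.
days = np.array([0.0, 3.0, 7.0, 14.0, 28.0])
conc = np.array([102.0, 96.0, 91.0, 80.0, 63.0])

# Exponential decay C(t) = C0 * exp(-k t), fitted on the log scale.
k, logc0 = np.polyfit(days, np.log(conc), 1) * np.array([-1.0, 1.0])
c0 = np.exp(logc0)

# Holding time at which the fitted curve drops below the tolerance
# (here 10% loss from day zero, a stand-in for the lower 99% CI bound).
threshold = 0.90 * c0
t_hold = np.log(c0 / threshold) / k
print(f"k = {k:.4f} /day, estimated holding time = {t_hold:.1f} days")
```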

  9. Practical reporting times for environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Bayne, C.K.; Schmoyer, D.D.; Jenkins, R.A.

    1993-02-01

    Preanalytical holding times for environmental samples are specified because chemical and physical characteristics may change between sampling and chemical analysis. For example, the Federal Register prescribes a preanalytical holding time of 14 days for volatile organic compounds in soil stored at 4°C. The American Society for Testing Materials (ASTM) uses a more technical definition: the preanalytical holding time is the day when the analyte concentration for an environmental sample falls below the lower 99% confidence interval on the analyte concentration at day zero. This study reviews various holding time definitions and suggests a new preanalytical holding time approach using acceptable error rates for measuring an environmental analyte. This practical reporting time (PRT) approach has been applied to nineteen volatile organic compounds and four explosives in three environmental soil samples. A PRT nomograph of error rates has been developed to estimate the consequences of missing a preanalytical holding time. This nomograph can be applied to a large class of analytes with concentrations that decay linearly or exponentially with time, regardless of sample matrices and storage conditions.

  10. Method for Hot Real-Time Sampling of Gasification Products

    Energy Technology Data Exchange (ETDEWEB)

    Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-29

    The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a highly instrumented half-ton/day pilot-scale plant capable of demonstrating industrially relevant thermochemical technologies for lignocellulosic biomass conversion, including gasification. Gasification creates primarily syngas (a mixture of hydrogen and carbon monoxide) that can be utilized with synthesis catalysts to form transportation fuels and other valuable chemicals. Biomass-derived gasification products are a very complex mixture of chemical components that typically contain sulfur and nitrogen species that can act as catalyst poisons for tar reforming and synthesis catalysts. Real-time hot online sampling techniques, such as Molecular Beam Mass Spectrometry (MBMS), and gas chromatographs with sulfur- and nitrogen-specific detectors can provide real-time analysis, yielding operational indicators of performance. Sampling typically requires coated sampling lines to minimize trace sulfur interactions with steel surfaces. Other materials used inline have also shown conversion of sulfur species into new components and must be minimized. Residence time within the sampling lines must also be kept to a minimum to reduce further reaction chemistry. Solids from ash and char contribute to plugging and must be filtered out at temperature. Experience at NREL has shown several key factors to consider when designing and installing an analytical sampling system for biomass gasification products. They include minimizing sampling distance, effective filtering as close to the source as possible, proper line sizing, proper line materials or coatings, even heating of all components, minimizing pressure drops, and additional filtering or traps after pressure drops.

  11. Real-time recursive hyperspectral sample and band processing algorithm architecture and implementation

    CERN Document Server

    Chang, Chein-I

    2017-01-01

    This book explores recursive architectures in designing progressive hyperspectral imaging algorithms. In particular, it makes progressive imaging algorithms recursive by introducing the concept of Kalman filtering into algorithm design, so that hyperspectral imagery can be processed not only progressively, sample by sample or band by band, but also recursively via recursive equations. This book can be considered a companion to the author's book Real-Time Progressive Hyperspectral Image Processing, published by Springer in 2016. It explores recursive structures in algorithm architecture; implements algorithmic recursive architecture in conjunction with progressive sample and band processing; derives Recursive Hyperspectral Sample Processing (RHSP) techniques according to the Band-Interleaved Sample/Pixel (BIS/BIP) acquisition format; and develops Recursive Hyperspectral Band Processing (RHBP) techniques according to the Band SeQuential (BSQ) acquisition format for hyperspectral data.

  12. 30 CFR 71.208 - Bimonthly sampling; designated work positions.

    Science.gov (United States)

    2010-07-01

    30 CFR, UNDERGROUND COAL MINES, Sampling Procedures, § 71.208 Bimonthly sampling; designated work positions. (a) Each... standard when quartz is present), respirable dust sampling of designated work positions shall begin on the...

  13. Sampling designs matching species biology produce accurate and affordable abundance indices

    Directory of Open Access Journals (Sweden)

    Grant Harris

    2013-12-01

    sessions, which raised capture probabilities. The grid design was least biased (−10.5%), but imprecise (CV 21.2%), and used most effort (16,100 trap-nights). The targeted configuration was more biased (−17.3%), but most precise (CV 12.3%), with least effort (7,000 trap-nights). Targeted sampling generated encounter rates four times higher, and capture and recapture probabilities 11% and 60% higher than grid sampling, in a sampling frame 88% smaller. Bears had unequal probability of capture with both sampling designs, partly because some bears never had traps available to sample them. Hence, grid and targeted sampling generated abundance indices, not estimates. Overall, targeted sampling provided the most accurate and affordable design to index abundance. Targeted sampling may offer an alternative method to index the abundance of other species inhabiting expansive and inaccessible landscapes elsewhere, provided their attraction to resource concentrations.

  14. Sampling designs matching species biology produce accurate and affordable abundance indices.

    Science.gov (United States)

    Harris, Grant; Farley, Sean; Russell, Gareth J; Butler, Matthew J; Selinger, Jeff

    2013-01-01

    , which raised capture probabilities. The grid design was least biased (-10.5%), but imprecise (CV 21.2%), and used most effort (16,100 trap-nights). The targeted configuration was more biased (-17.3%), but most precise (CV 12.3%), with least effort (7,000 trap-nights). Targeted sampling generated encounter rates four times higher, and capture and recapture probabilities 11% and 60% higher than grid sampling, in a sampling frame 88% smaller. Bears had unequal probability of capture with both sampling designs, partly because some bears never had traps available to sample them. Hence, grid and targeted sampling generated abundance indices, not estimates. Overall, targeted sampling provided the most accurate and affordable design to index abundance. Targeted sampling may offer an alternative method to index the abundance of other species inhabiting expansive and inaccessible landscapes elsewhere, provided their attraction to resource concentrations.

  15. Sampling designs matching species biology produce accurate and affordable abundance indices

    Science.gov (United States)

    Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff

    2013-01-01

    raised capture probabilities. The grid design was least biased (−10.5%), but imprecise (CV 21.2%), and used most effort (16,100 trap-nights). The targeted configuration was more biased (−17.3%), but most precise (CV 12.3%), with least effort (7,000 trap-nights). Targeted sampling generated encounter rates four times higher, and capture and recapture probabilities 11% and 60% higher than grid sampling, in a sampling frame 88% smaller. Bears had unequal probability of capture with both sampling designs, partly because some bears never had traps available to sample them. Hence, grid and targeted sampling generated abundance indices, not estimates. Overall, targeted sampling provided the most accurate and affordable design to index abundance. Targeted sampling may offer an alternative method to index the abundance of other species inhabiting expansive and inaccessible landscapes elsewhere, provided their attraction to resource concentrations. PMID:24392290

  16. Architectural Design Space Exploration of an FPGA-based Compressed Sampling Engine

    DEFF Research Database (Denmark)

    El-Sayed, Mohammad; Koch, Peter; Le Moullec, Yannick

    2015-01-01

    We present the architectural design space exploration of a compressed sampling engine for use in a wireless heart-rate monitoring system. We show how parallelism affects execution time at the register transfer level. Furthermore, two example solutions (modified semi-parallel and full...

  17. On efficiency of some ratio estimators in double sampling design ...

    African Journals Online (AJOL)

    In this paper, three sampling ratio estimators in double sampling design were proposed with the intention of finding an alternative double sampling design estimator to the conventional ratio estimator in double sampling design discussed by Cochran (1997), Okafor (2002), Raj (1972) and Raj and Chandhok (1999).
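
    For context, the conventional double-sampling ratio estimator that the proposed alternatives are compared against has the standard textbook form below; the notation is assumed here rather than taken from the abstract.

```latex
% Conventional ratio estimator under double (two-phase) sampling:
% \bar{x}' = mean of x in the large, cheap first-phase sample;
% \bar{x}, \bar{y} = means in the smaller second-phase subsample.
\hat{\bar{Y}}_{R} \;=\; \frac{\bar{y}}{\bar{x}} \, \bar{x}'
```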

  18. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.
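
    The co-primality condition has a short constructive explanation: across the P symbol repetitions, the k-th sub-Nyquist sample lands at Nyquist-grid position kP mod N within the symbol, and these positions cover all N phases exactly when gcd(P, N) = 1. A quick check, with N and P chosen arbitrarily:

```python
from math import gcd

N, P = 31, 10          # symbol length in Nyquist samples; repetitions (co-prime)
phases = sorted((k * P) % N for k in range(N))   # positions visited in-symbol
print(gcd(P, N) == 1, phases == list(range(N)))  # True True: all N phases hit
```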

  19. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2014-01-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.

  20. Detecting chaos in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W

    2013-09-01

    Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum, which is computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series. However, the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
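
    The LSP substitution is easy to reproduce: scipy exposes a Lomb-Scargle periodogram that accepts arbitrary sample times. A minimal sketch on a synthetic, irregularly sampled tone; the frequency grid, noise level, and seed are arbitrary choices, not from the paper.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 400))        # irregular sample times (s)
y = np.sin(2 * np.pi * 0.31 * t) + 0.3 * rng.standard_normal(t.size)

freqs = np.linspace(0.01, 1.0, 2000) * 2 * np.pi  # angular frequency grid
power = lombscargle(t, y - y.mean(), freqs)       # LSP of the centered series
print("peak at ~%.3f Hz" % (freqs[power.argmax()] / (2 * np.pi)))
```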

  1. Design compliance matrix waste sample container filling system for nested, fixed-depth sampling system

    International Nuclear Information System (INIS)

    BOGER, R.M.

    1999-01-01

    This design compliance matrix document provides specific design-related functional characteristics, constraints, and requirements for the container filling system that is part of the nested, fixed-depth sampling system. This document addresses performance, external interfaces, ALARA, Authorization Basis, environmental, and design code requirements for the container filling system. The container filling system will interface with the waste stream from the fluidic pumping channels of the nested, fixed-depth sampling system and will fill containers with waste that meet the Resource Conservation and Recovery Act (RCRA) criteria for waste that contains volatile and semi-volatile organic materials. The specifications for the nested, fixed-depth sampling system are described in a Level 2 Specification document (HNF-3483, Rev. 1). The basis for this design compliance matrix document is the Tank Waste Remediation System (TWRS) desk instructions for design compliance matrix documents (PI-CP-008-00, Rev. 0).

  2. The current duration design for estimating the time to pregnancy distribution

    DEFF Research Database (Denmark)

    Gasbarra, Dario; Arjas, Elja; Vehtari, Aki

    2015-01-01

    This paper was inspired by the studies of Niels Keiding and co-authors on estimating the waiting time-to-pregnancy (TTP) distribution, and in particular on using the current duration design in that context. In this design, a cross-sectional sample of women is collected from those who are currently attempting to become pregnant, and then the time each has already spent attempting is recorded. Our aim here is to study the identifiability and the estimation of the waiting time distribution on the basis of current duration data. The main difficulty in this stems from the fact that very short waiting times are only rarely selected into the sample of current durations, and this renders their estimation unstable. We introduce here a Bayesian method for this estimation problem, prove its asymptotic consistency, and compare the method to some variants of the non-parametric maximum likelihood estimators

  3. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to increase, but the gain from each additional GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
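
    The point that both sources of fluctuation must enter the CI can be written down directly: the standard error of mean weekly hours has a between-GP component shrinking with n and a within-GP (measurement) component shrinking with n times m. A sketch with invented variance components; the 56 comes from one SMS per 3-h slot for a week (8 slots times 7 days), and the variances are illustrative only.

```python
import math

def ci_halfwidth(n_gps, m_per_gp, var_between, var_within):
    """95% CI half-width for mean weekly hours when both the sample of
    GPs (between-person variance) and the finite number of time-sampling
    measurements per GP (within-person variance) contribute (sketch)."""
    se = math.sqrt(var_between / n_gps + var_within / (n_gps * m_per_gp))
    return 1.96 * se

# Hypothetical variance components, for illustration only.
for n in (10, 50, 100, 300):
    print(n, round(ci_halfwidth(n, 56, 100.0, 400.0), 2))
```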

  4. Sample design for the residential energy consumption survey

    Energy Technology Data Exchange (ETDEWEB)

    1994-08-01

    The purpose of this report is to provide detailed information about the multistage area-probability sample design used for the Residential Energy Consumption Survey (RECS). It is intended as a technical report, for use by statisticians, to better understand the theory and procedures followed in the creation of the RECS sample frame. For a more cursory overview of the RECS sample design, refer to the appendix entitled "How the Survey was Conducted," which is included in the statistical reports produced for each RECS survey year.

  5. System design description for sampling fuel in K basins

    International Nuclear Information System (INIS)

    Baker, R.B.

    1996-01-01

    This System Design Description provides: (1) statements of the Spent Nuclear Fuel Project's (SNFP) needs requiring sampling of fuel in the K East and K West Basins, (2) the sampling equipment functions and requirements, (3) a general work plan and the design logic being followed to develop the equipment, and (4) a summary description of the design for the sampling equipment. The report summarizes the integrated application of both the subject equipment and the canister sludge sampler in near-term characterization campaigns at K Basins.

  6. Sampling design for use by the soil decontamination project

    International Nuclear Information System (INIS)

    Rutherford, D.W.; Stevens, J.R.

    1981-01-01

    This report proposes a general approach to the problem and discusses sampling of soil to map the contaminated area and to provide samples for characterization of soil components and contamination. Basic concepts in sample design are reviewed with reference to environmental transuranic studies. Common designs are reviewed and evaluated for use with specific objectives that might be required by the soil decontamination project. Examples of a hierarchical design pilot study and a combined hierarchical and grid study are proposed for the Rocky Flats 903 pad area.

  7. Design of real-time monitoring and control system of 222Rn/220Rn sampling for radon chamber

    International Nuclear Information System (INIS)

    Wu Rongyan; Zhao Xiuliang; Zhang Meiqin; Yu Hong

    2008-01-01

    This paper describes the design of a 222Rn/220Rn sampling monitoring and control system based on an Intel 51-series single-chip microcomputer. The hardware design involves the selection and use of sensor chips, an A/D conversion chip, a USB interface chip, a keyboard chip, a digital display chip, photoelectric coupling isolation chips, and the drive circuit chips for the direct-current pump. The software comprises a Personal Computer (PC) component and a Single-Chip Microcomputer (SCM) component. Data acquisition and conversion, and flow control of the direct-current pump, are implemented in Visual Basic and assembly language, and the program flow charts are given. Furthermore, the stability of the direct-current pump is improved by means of a PID control algorithm. (authors)
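
    As a rough illustration of the PID stabilization step, a minimal discrete PID loop driving a toy first-order pump model is sketched below; the gains, time step, and pump response are invented, and the paper's actual implementation targets an Intel 51-series single-chip microcomputer rather than Python.

        # Minimal discrete PID loop for a DC pump's flow rate (all values invented).
        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_err = 0.0

            def update(self, setpoint, measured):
                err = setpoint - measured
                self.integral += err * self.dt
                deriv = (err - self.prev_err) / self.dt
                self.prev_err = err
                return self.kp * err + self.ki * self.integral + self.kd * deriv

        pid = PID(kp=0.8, ki=0.5, kd=0.05, dt=0.1)
        flow = 0.0                                   # measured pump flow (arbitrary units)
        for _ in range(200):
            drive = pid.update(setpoint=1.0, measured=flow)
            flow += 0.1 * (drive - flow)             # toy first-order pump response
        print(round(flow, 3))                        # settles near the setpoint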

  8. [Saarland Growth Study: sampling design].

    Science.gov (United States)

    Danker-Hopfe, H; Zabransky, S

    2000-01-01

    The use of reference data to evaluate the physical development of children and adolescents is part of the daily routine in the paediatric outpatient clinic. The construction of such reference data is based on extensive data collection. There are different kinds of reference data: cross-sectional references, which are based on data collected from a large representative cross-sectional sample of the population; longitudinal references, which are based on follow-up surveys of usually smaller samples of individuals from birth to maturity; and mixed longitudinal references, which are a combination of longitudinal and cross-sectional reference data. The advantages and disadvantages of the different methods of data collection and the resulting reference data are discussed. The Saarland Growth Study was conducted for several reasons: growth processes are subject to secular changes, there are no specific reference data for children and adolescents from this part of the country, and the growth charts used in paediatric practice are possibly no longer appropriate. The Saarland Growth Study therefore served two purposes: a) to create current regional reference data, and b) to create a database for future studies on secular trends in the growth processes of children and adolescents from Saarland. The present contribution focuses on general remarks on the sampling design of (cross-sectional) growth surveys and its inferences for the design of the present study.

  9. A phoswich detector design for improved spatial sampling in PET

    Science.gov (United States)

    Thiessen, Jonathan D.; Koschan, Merry A.; Melcher, Charles L.; Meng, Fang; Schellenberg, Graham; Goertzen, Andrew L.

    2018-02-01

    Block detector designs, utilizing a pixelated scintillator array coupled to a photosensor array in a light-sharing design, are commonly used for positron emission tomography (PET) imaging applications. In practice, the spatial sampling of these designs is limited by the crystal pitch, which must be large enough for individual crystals to be resolved in the detector flood image. Replacing the conventional 2D scintillator array with an array of phoswich elements, each consisting of an optically coupled side-by-side scintillator pair, may improve spatial sampling in one direction of the array without requiring smaller crystal elements to be resolved. To test the feasibility of this design, a 4 × 4 phoswich array was constructed, with each phoswich element consisting of two optically coupled 3.17 × 1.58 × 10 mm³ LSO crystals co-doped with cerium and calcium. The amount of calcium doping was varied to create a 'fast' LSO crystal with a decay time of 32.9 ns and a 'slow' LSO crystal with a decay time of 41.2 ns. Using a Hamamatsu R8900U-00-C12 position-sensitive photomultiplier tube (PS-PMT) and a CAEN V1720 250 MS/s waveform digitizer, we were able to show effective discrimination of the fast and slow LSO crystals in the phoswich array. Although a side-by-side phoswich array is feasible, reflections at the crystal boundary due to a mismatch between the refractive index of the optical adhesive (n = 1.5) and LSO (n = 1.82) caused it to behave optically as an 8 × 4 array rather than a 4 × 4 array. Direct coupling of each phoswich element to individual photodetector elements may be necessary with the current phoswich array design. Alternatively, in order to implement this phoswich design with a conventional light-sharing PET block detector, a high-refractive-index optical adhesive is necessary to closely match the refractive index of LSO.

  10. The real-time fitting of radioactive decay curves. Pt. 3. Counting during sampling

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1994-01-01

    An analysis of a least-squares method for the real-time fitting of the theoretical total count function to the actual total count from radioactive decays has been given previously for the case where counting takes place after a sample is taken. The counting may be done in a number of different counting systems which distinguish between different types or energies of radiation emitted from the sample. The method would allow real-time determination of the numbers of atoms, and hence activities, of the individual isotopes present, and has been designated the Time Evolved Least-Squares (TELS) method. If the radioactivity which is to be measured exists as an aerosol, or in a form where a sample is taken at a constant rate, it may be possible to count during sampling and by so doing reduce the total time required to determine the activity of the individual isotopes present. The TELS method is extended here to the case where counting and the evaluation of the activity take place concurrently with the sampling. The functions which need to be evaluated are derived and the calculations required to implement the method are discussed. As with the TELS method of counting after sampling, the technique of counting during sampling and the simultaneous evaluation of activity could be achieved in real time. Results of testing the method by computer simulation for two counting schemes for the descendants of radon are presented. (orig.)

  11. Sampling design for long-term regional trends in marine rocky intertidal communities

    Science.gov (United States)

    Irvine, Gail V.; Shelley, Alice

    2013-01-01

    Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.

  12. Probability sampling design in ethnobotanical surveys of medicinal plants

    Directory of Open Access Journals (Sweden)

    Mariano Martinez Espinosa

    2012-07-01

    Non-probability sampling designs can be used in ethnobotanical surveys of medicinal plants. However, such methods do not allow statistical inferences to be made from the data generated. The aim of this paper is to present a probability sampling design that is applicable in ethnobotanical studies of medicinal plants. The sampling design employed in the research titled "Ethnobotanical knowledge of medicinal plants used by traditional communities of Nossa Senhora Aparecida do Chumbo district (NSACD), Poconé, Mato Grosso, Brazil" was used as a case study. Probability sampling methods (simple random and stratified sampling) were used in this study. In order to determine the sample size, the following data were considered: population size (N) of 1179 families; confidence coefficient of 95%; sampling error (d) of 0.05; and proportion (p) of 0.5. The application of this sampling method resulted in a sample size (n) of at least 290 families in the district. The present study concludes that probability sampling methods necessarily have to be employed in ethnobotanical studies of medicinal plants, particularly where statistical inferences have to be made using the data obtained. This can be achieved by applying different existing probability sampling methods, or better still, a combination of such methods.
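
    The reported sample size can be reproduced with the standard formula for estimating a proportion under simple random sampling with a finite-population correction; the short sketch below, using only the inputs quoted in the abstract, recovers n ≈ 290.

        import math

        # Reproduces the sample-size arithmetic reported above:
        # N = 1179 families, 95% confidence (z = 1.96), d = 0.05, p = 0.5.
        N, z, d, p = 1179, 1.96, 0.05, 0.5
        n0 = z**2 * p * (1 - p) / d**2      # Cochran's infinite-population size (~384)
        n = n0 / (1 + (n0 - 1) / N)         # finite-population correction
        print(math.ceil(n))                 # -> 290, matching the reported minimum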

  13. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    International Nuclear Information System (INIS)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B.

    2015-01-01

    A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems are used in a field measurement campaign in which sodium aerosol dispersion experiments have been conducted as part of environmental impact studies related to the sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each containing a custom-built sampling head and wireless control networking designed with a Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area or uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging

  14. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B. [Radiation Impact Assessment Section, Radiological Safety Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102 (India)

    2015-07-15

    A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems are used in a field measurement campaign in which sodium aerosol dispersion experiments have been conducted as part of environmental impact studies related to the sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each containing a custom-built sampling head and wireless control networking designed with a Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area or uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  15. Design and building of a homemade sample changer for automation of the irradiation in neutron activation analysis technique

    International Nuclear Information System (INIS)

    Gago, Javier; Hernandez, Yuri; Baltuano, Oscar; Bedregal, Patricia; Lopez, Yon; Urquizo, Rafael

    2014-01-01

    Because the RP-10 research reactor operates during weekends, it was necessary to design and build a sample changer for irradiation as part of the automation process of the neutron activation analysis technique. The device is formed by an aluminum turntable disk which can accommodate 19 polyethylene capsules containing samples to be sent, using the pneumatic transfer system, from the laboratory to the irradiation position. The system is operated by a control switchboard to send and return capsules at variable preset times and by two different routes, allowing the determination of short-, medium- and long-lived radionuclides. Another mechanism, called an 'exchange valve', is designed for changing travel paths (pipelines), allowing the irradiated samples to be stored for a longer time in the reactor hall. The system design has allowed complete automation of this technique, enabling the irradiation of samples without the presence of an analyst. The design, construction and operation of the device are described and presented in this article. (authors).

  16. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
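
    For intuition, a minimal sketch of the usual design-effect shortcut for inflating a sample size under two-stage cluster sampling is given below; the numbers (n_srs, m, rho) are invented, and the paper's nonparametric procedure, which also handles finite numbers of clusters, is more general than this textbook formula.

        import math

        # Textbook design-effect inflation for equal-sized clusters (a simplified
        # stand-in for the paper's nonparametric procedure; all numbers invented).
        n_srs = 19        # hypothetical LQAS sample size under simple random sampling
        m = 5             # observations per cluster (assumption)
        rho = 0.1         # intra-cluster correlation of the binary indicator (assumption)

        deff = 1 + (m - 1) * rho             # design effect
        n_cluster = math.ceil(n_srs * deff)  # inflated total sample size
        k = math.ceil(n_cluster / m)         # clusters to visit
        print(deff, n_cluster, k)            # -> 1.4, 27, 6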

  17. Method for Hot Real-Time Sampling of Pyrolysis Vapors

    Energy Technology Data Exchange (ETDEWEB)

    Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-29

    Biomass pyrolysis has been an increasing topic of research, in particular as a replacement for crude oil. This process utilizes moderate temperatures to thermally deconstruct the biomass, which is then condensed into a mixture of liquid oxygenates to be used as fuel precursors. Pyrolysis oils contain more than 400 compounds, up to 60 percent of which do not re-volatilize for subsequent chemical analysis. Vapor chemical composition is further complicated as additional condensation reactions occur during the condensation and collection of the product. Due to the complexity of the pyrolysis oil, and a desire to catalytically upgrade the vapor composition before condensation, online real-time analytical techniques such as Molecular Beam Mass Spectrometry (MBMS) are of great use. However, in order to properly sample hot pyrolysis vapors, many challenges must be overcome. Sampling must occur within a narrow range of temperatures to reduce product composition changes from overheating, or partial condensation or plugging of lines from condensed products. Residence times must be kept to a minimum to reduce further reaction chemistries. Pyrolysis vapors also form aerosols that are carried far downstream and can pass through filters, resulting in build-up in downstream locations. The co-produced bio-char and ash from the pyrolysis process can lead to plugging of the sample lines and must be filtered out at temperature, even with the use of cyclonic separators. A practical approach to sampling-system design considerations, as well as lessons learned, is integrated into the hot analytical sampling system of the National Renewable Energy Laboratory's (NREL) Thermochemical Process Development Unit (TCPDU) to provide industrially relevant demonstrations of thermochemical transformations of biomass feedstocks at the pilot scale.

  18. Thermal probe design for Europa sample acquisition

    Science.gov (United States)

    Horne, Mera F.

    2018-01-01

    The planned lander missions to the surface of Europa will access samples from the subsurface of the ice in a search for signs of life. A small thermal drill (probe) is proposed to meet the sample requirement of the Science Definition Team's (SDT) report for the Europa mission. The probe is 2 cm in diameter and 16 cm in length and is designed to access the subsurface to a depth of 10 cm and to collect five ice samples of approximately 7 cm³ each. The energy required to penetrate the top 10 cm of ice in a vacuum is approximately 26 Wh, and to melt 7 cm³ of ice is approximately 1.2 Wh. The requirement stated in the SDT report of collecting samples from five different sites can be accommodated with repeated use of the same thermal drill. For smaller sample sizes, a smaller probe of 1.0 cm in diameter with the same length of 16 cm could be utilized, requiring approximately 6.4 Wh to penetrate the top 10 cm of ice and 0.02 Wh to collect 0.1 g of sample. The thermal drill has the advantage of simplicity of design and operations and the ability to penetrate ice over a range of densities and hardness while maintaining sample integrity.

  19. Importance Sampling for Stochastic Timed Automata

    DEFF Research Database (Denmark)

    Jegourel, Cyrille; Larsen, Kim Guldstrand; Legay, Axel

    2016-01-01

    We present an importance sampling framework that combines symbolic analysis and simulation to estimate the probability of rare reachability properties in stochastic timed automata. By means of symbolic exploration, our framework first identifies states that cannot reach the goal. A state-wise cha…

  20. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  1. Time-Scale and Time-Frequency Analyses of Irregularly Sampled Astronomical Time Series

    Directory of Open Access Journals (Sweden)

    S. Roques

    2005-09-01

    We evaluate the quality of spectral restoration in the case of irregularly sampled signals in astronomy. We study in detail a time-scale method leading to a global wavelet spectrum comparable to the Fourier period, and a time-frequency matching pursuit allowing us to identify the frequencies and to control the error propagation. In both cases, the signals are first resampled with a linear interpolation. Both results are compared with those obtained using Lomb's periodogram and using the weighted wavelet Z-transform developed in astronomy for unevenly sampled variable-star observations. These approaches are applied to simulations and to light variations of four variable stars. This leads to the conclusion that the matching pursuit is more efficient for recovering the spectral contents of a pulsating star, even with a preliminary resampling. In particular, the results are almost independent of the quality of the initial irregular sampling.

  2. Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.

    Directory of Open Access Journals (Sweden)

    João Tiago Marques

    Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day, it was no longer advantageous to move the nets frequently; in bird surveys, that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency, but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey the species present, then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce the effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist-netting sampling strategies in other tropical areas.

  3. ACS sampling system: design, implementation, and performance evaluation

    Science.gov (United States)

    Di Marcantonio, Paolo; Cirami, Roberto; Chiozzi, Gianluca

    2004-09-01

    By means of the ACS (ALMA Common Software) framework we designed and implemented a sampling system which allows sampling of every Characteristic Component Property with a specific, user-defined, sustained frequency limited only by the hardware. Collected data are sent to various clients (one or more Java plotting widgets, a dedicated GUI or a COTS application) using the ACS/CORBA Notification Channel. The data transport is optimized: samples are cached locally and sent in packets with a lower and user-defined frequency to keep network load under control. Simultaneous sampling of the Properties of different Components is also possible. Together with the design and implementation issues, we present the performance of the sampling system evaluated on two different platforms: on a VME-based system using the VxWorks RTOS (currently adopted by ALMA) and on a PC/104+ embedded platform using the Red Hat 9 Linux operating system. The PC/104+ solution offers, as an alternative, a low-cost PC-compatible hardware environment with a free and open operating system.

  4. Mobile Variable Depth Sampling System Design Study

    International Nuclear Information System (INIS)

    BOGER, R.M.

    2000-01-01

    A design study is presented for a mobile, variable depth sampling system (MVDSS) that will support the treatment and immobilization of Hanford LAW and HLW. The sampler can be deployed in a 4-inch tank riser and has a design that is based on requirements identified in the Level 2 Specification (latest revision). The waste feed sequence for the MVDSS is based on Phase 1, Case 3S6 waste feed sequence. Technical information is also presented that supports the design study

  5. Mobile Variable Depth Sampling System Design Study

    Energy Technology Data Exchange (ETDEWEB)

    BOGER, R.M.

    2000-08-25

    A design study is presented for a mobile, variable depth sampling system (MVDSS) that will support the treatment and immobilization of Hanford LAW and HLW. The sampler can be deployed in a 4-inch tank riser and has a design that is based on requirements identified in the Level 2 Specification (latest revision). The waste feed sequence for the MVDSS is based on Phase 1, Case 3S6 waste feed sequence. Technical information is also presented that supports the design study.

  6. Design and characterization of poly(dimethylsiloxane)-based valves for interfacing continuous-flow sampling to microchip electrophoresis.

    Science.gov (United States)

    Li, Michelle W; Huynh, Bryan H; Hulvey, Matthew K; Lunte, Susan M; Martin, R Scott

    2006-02-15

    This work describes the fabrication and evaluation of a poly(dimethylsiloxane) (PDMS)-based device that enables the discrete injection of a sample plug from a continuous-flow stream into a microchannel for subsequent analysis by electrophoresis. Devices were fabricated by aligning valving and flow channel layers, followed by plasma sealing the combined layers onto a glass plate that contained fittings for the introduction of liquid sample and nitrogen gas. The design incorporates a reduced-volume pneumatic valve that actuates (on the order of hundreds of milliseconds) to allow analyte from a continuously flowing sampling channel to be injected into a separation channel for electrophoresis. The injector design was optimized to include a pushback channel to flush away stagnant sample associated with the injector dead volume. The effects of the valve actuation time, the pushback voltage, and the sampling stream flow rate on the performance of the device were characterized. Using the optimized design and an injection frequency of 0.64 Hz showed that the injection process is reproducible (RSD of 1.77%, n = 15). Concentration change experiments using fluorescein as the analyte showed that the device could achieve a lag time as small as 14 s. Finally, to demonstrate the potential uses of this device, the microchip was coupled to a microdialysis probe to monitor a concentration change and sample a fluorescein dye mixture.

  7. Toward time-based design: Creating an applied time evaluation checklist for urban design research

    Directory of Open Access Journals (Sweden)

    Amir Shakibamanesh

    2017-09-01

    The perception of a 3D space, in which movement takes place, is subjectively based on experience. Pedestrians' perception of subjective duration is one of the related issues that receive little attention in the urban design literature. Pedestrians often misperceive the time required to cover a given distance. A wide range of factors affects one's perception of time in urban environments. These factors include individual factors (e.g., gender, age, and psychological state), social and cultural contexts, purpose and motivation for being in the space, and knowledge of the given area. This study aims to create an applied checklist that can be used by urban designers in analyzing the effects of individual experience on subjective duration. This checklist will enable urban designers to perform a phenomenological assessment of time perception and compare this perception in different urban spaces, thereby improving pedestrians' experiences of time through purposeful design. A combination of exploratory and descriptive analytical research is used as the methodology due to the complexity of time perception.

  8. Robust Adaptive Stabilization of Linear Time-Invariant Dynamic Systems by Using Fractional-Order Holds and Multirate Sampling Controls

    Directory of Open Access Journals (Sweden)

    S. Alonso-Quesada

    2010-01-01

    This paper presents a strategy for designing a robust discrete-time adaptive controller for stabilizing linear time-invariant (LTI) continuous-time dynamic systems. Such systems may be unstable and non-inversely stable in the worst case. A reduced-order model is considered to design the adaptive controller. The control design is based on the discretization of the system with the use of a multirate sampling device with a fast-sampled control signal. A suitable on-line adaptation of the multirate gains guarantees the stability of the inverse of the discretized estimated model, which is used to parameterize the adaptive controller. A dead zone is included in the parameter estimation algorithm for robustness purposes under the presence of unmodeled dynamics in the controlled dynamic system. The adaptive controller guarantees the boundedness of the system's measured signal for all time. Some examples illustrate the efficacy of this control strategy.

  9. Sample-interpolation timing: an optimized technique for the digital measurement of time of flight for γ rays and neutrons at relatively low sampling rates

    International Nuclear Information System (INIS)

    Aspinall, M D; Joyce, M J; Mackin, R O; Jarrah, Z; Boston, A J; Nolan, P J; Peyton, A J; Hawkes, N P

    2009-01-01

    A unique digital time pick-off method, known as sample-interpolation timing (SIT), is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa s⁻¹. Events arising from the ⁷Li(p,n)⁷Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals that are sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential

  10. Reliability of impingement sampling designs: An example from the Indian Point station

    International Nuclear Information System (INIS)

    Mattson, M.T.; Waxman, J.B.; Watson, D.A.

    1988-01-01

    A 4-year data base (1976-1979) of daily fish impingement counts at the Indian Point electric power station on the Hudson River was used to compare the precision and reliability of three random-sampling designs: (1) simple random, (2) seasonally stratified, and (3) empirically stratified. The precision of daily impingement estimates improved logarithmically for each design as more days in the year were sampled. Simple random sampling was the least, and empirically stratified sampling was the most precise design, and the difference in precision between the two stratified designs was small. Computer-simulated sampling was used to estimate the reliability of the two stratified-random-sampling designs. A seasonally stratified sampling design was selected as the most appropriate reduced-sampling program for Indian Point station because: (1) reasonably precise and reliable impingement estimates were obtained using this design for all species combined and for eight common Hudson River fish by sampling only 30% of the days in a year (110 d); and (2) seasonal strata may be more precise and reliable than empirical strata if future changes in annual impingement patterns occur. The seasonally stratified design applied to the 1976-1983 Indian Point impingement data showed that selection of sampling dates based on daily species-specific impingement variability gave results that were more precise, but not more consistently reliable, than sampling allocations based on the variability of all fish species combined. 14 refs., 1 fig., 6 tabs

  11. Conditional estimation of exponential random graph models from snowball sampling designs

    NARCIS (Netherlands)

    Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng

    2013-01-01

    A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members…

  12. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This…

  13. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free, time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding becoming trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.

  14. ANL small-sample calorimeter system design and operation

    International Nuclear Information System (INIS)

    Roche, C.T.; Perry, R.B.; Lewis, R.N.; Jung, E.A.; Haumann, J.R.

    1978-07-01

    The Small-Sample Calorimetric System is a portable instrument designed to measure the thermal power produced by radioactive decay of plutonium-containing fuels. The small-sample calorimeter is capable of measuring samples producing power up to 32 milliwatts at a rate of one sample every 20 min. The instrument is contained in two packages: a data-acquisition module consisting of a microprocessor with an 8K-byte nonvolatile memory, and a measurement module consisting of the calorimeter and a sample preheater. The total weight of the system is 18 kg

  15. Baseline Design Compliance Matrix for the Rotary Mode Core Sampling System

    International Nuclear Information System (INIS)

    LECHELT, J.A.

    2000-01-01

    The purpose of the design compliance matrix (DCM) is to provide a single-source document of all design requirements associated with the fifteen subsystems that make up the rotary mode core sampling (RMCS) system. It is intended to be the baseline requirement document for the RMCS system and to be used in governing all future design and design verification activities associated with it. This document is the DCM for the RMCS system used on Hanford single-shell radioactive waste storage tanks. This includes the Exhauster System, Rotary Mode Core Sample Trucks, Universal Sampling System, Diesel Generator System, Distribution Trailer, X-Ray Cart System, Breathing Air Compressor, Nitrogen Supply Trailer, Casks and Cask Truck, Service Trailer, Core Sampling Riser Equipment, Core Sampling Support Trucks, Foot Clamp, Ramps and Platforms and Purged Camera System. Excluded items are tools such as light plants and light stands. Other items such as the breather inlet filter are covered by a different design baseline. In this case, the inlet breather filter is covered by the Tank Farms Design Compliance Matrix.

  16. Sampling returns for realized variance calculations: tick time or transaction time?

    NARCIS (Netherlands)

    Griffin, J.E.; Oomen, R.C.A.

    2008-01-01

    This article introduces a new model for transaction prices in the presence of market microstructure noise in order to study the properties of the price process on two different time scales, namely, transaction time, where prices are sampled with every transaction, and tick time, where prices are…

  17. Design of an automatic sample changer for the measurement of neutron flux by gamma spectrometry

    International Nuclear Information System (INIS)

    Gago, Javier; Bruna, Ruben; Baltuano, Oscar; Montoya, Eduardo; Descreaux, Killian

    2014-01-01

    This paper presents the calculations, selection and component design for the construction of an automatic system to measure neutron flux in a working nuclear reactor by the gamma spectrometry technique, using samples irradiated in the RP-10 core. This system will perform measurements by interchanging 100 samples in a programmed, automatic way, reducing the operating time required of the user and yielding more accurate measurements. (authors).

  18. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  19. Sample design considerations of indoor air exposure surveys

    International Nuclear Information System (INIS)

    Cox, B.G.; Mage, D.T.; Immerman, F.W.

    1988-01-01

    Concern about the potential for indoor air pollution has prompted recent surveys of radon and NO2 concentrations in homes and personal exposure studies of volatile organics, carbon monoxide and pesticides, to name a few. The statistical problems in designing sample surveys that measure the physical environment are diverse and more complicated than those encountered in traditional surveys of human attitudes and attributes. This paper addresses issues encountered when designing indoor air quality (IAQ) studies. General statistical concepts related to target population definition, frame creation, and sample selection for area household surveys and telephone surveys are presented. The implications of different measurement approaches are discussed, and response rate considerations are described

  20. A time-sorting pitfall trap and temperature datalogger for the sampling of surface-active arthropods

    Directory of Open Access Journals (Sweden)

    Marshall S. McMunn

    2017-04-01

    Nearly all arthropods display consistent patterns of activity according to time of day. These patterns of activity often limit the extent of animal co-occurrence in space and time. Quantifying when particular species are active and how activity varies with environmental conditions is difficult without the use of automated devices due to the need for continuous monitoring. Time-sorting pitfall traps passively collect active arthropods into containers with known beginning and end sample times. The trap described here, similar to previous designs, sorts arthropods by the time they fall into the trap using a rotating circular rack of vials. This trap represents a reduction in size, cost, and time of construction, while increasing the number of time windows sampled. The addition of temperature data collection extends functionality, while the use of store-bought components and inclusion of customizable software make the trap easy to reproduce and use.

  1. Accurate Sample Time Reconstruction of Inertial FIFO Data

    Directory of Open Access Journals (Sweden)

    Sebastian Stieber

    2017-12-01

    In the context of modern cyber-physical systems, the accuracy of the underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts (introduced by fabrication inaccuracies, temperature changes and wear-out effects) on the reconstruction of sampling times. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for accurate sample time reconstruction, independent of the actual clock drift, with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner as a robust and energy-saving solution. The proposed algorithm is able to lower the overall standard deviation of reconstructed sampling periods below 40 μs, while run-time savings of up to 42% are achieved compared to single-sample acquisition.
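
    A minimal sketch of the idea, assuming the sensor latches an internal sample counter with each FIFO read (the kind of internal timer the article exploits); the batch linear fit and all names below are illustrative, not the authors' forward-only algorithm.

        import numpy as np

        # Sketch: reconstruct per-sample times from batched FIFO reads by regressing
        # noisy host arrival times on the sensor's internal sample counter. The fitted
        # slope is the sensor's *effective* sampling period, so a constant clock drift
        # drops out, and host latency jitter is averaged away. (Batch fit for clarity;
        # the article's algorithm processes reads forward-only.)
        def reconstruct(host_read_times, sensor_counts, n_per_read):
            period, offset = np.polyfit(sensor_counts, host_read_times, 1)
            idx = np.concatenate([np.arange(c - n_per_read, c) + 1 for c in sensor_counts])
            return period * idx + offset

        # Example: a nominal 100 Hz sensor running 2% fast, read in batches of 25,
        # with up to 1 ms of host-side latency on each read.
        rng = np.random.default_rng(0)
        counts = np.arange(25, 2500, 25)
        reads = counts * (0.01 / 1.02) + rng.uniform(0, 1e-3, counts.size)
        print(reconstruct(reads, counts, 25)[:3])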

  2. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99, collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using the bootstrap and the jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSUs), and the sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSUs), i.e., households, are selected by systematic sampling with a random start. HIES used a single study variable. We have compared the HIES technique with some other designs: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). The jackknife and the bootstrap are used for replication-based variance estimation. Simple random sampling with sample sizes of 462 to 561 gave moderate variances by both the jackknife and the bootstrap. Applying systematic sampling, we obtained moderate variance with a sample size of 467. With the jackknife under systematic sampling, the variance of the regression estimator exceeded that of the ratio estimator for sample sizes of 467 to 631; at a sample size of 952, the variance of the ratio estimator exceeded that of the regression estimator. Ranked set sampling proved the most efficient design compared with the others: with the jackknife and the bootstrap, it gives minimum variance even at the smallest sample size (467). Two-phase sampling gave poor performance. The multi-stage sampling applied by HIES gave large variances, especially when used with a single study variable.
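
    As a reference point for the estimators named above, here is a small sketch of the ratio and regression estimators of mean income with household size as the auxiliary variable, plus a delete-one jackknife variance; the data are simulated stand-ins for the HIES variables, and the known population mean of x is an assumption the methods require.

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.poisson(6, 500) + 1                  # household size (auxiliary variable)
        y = 2000 * x + rng.normal(0, 3000, x.size)   # income (study variable)
        X_BAR = 6.8                                  # assumed known population mean of x

        def ratio_est(y, x):
            return y.mean() / x.mean() * X_BAR

        def regression_est(y, x):
            b = np.polyfit(x, y, 1)[0]               # fitted slope of y on x
            return y.mean() + b * (X_BAR - x.mean())

        def jackknife_var(est, y, x):
            n = y.size
            reps = np.array([est(np.delete(y, i), np.delete(x, i)) for i in range(n)])
            return (n - 1) / n * ((reps - reps.mean()) ** 2).sum()

        for est in (ratio_est, regression_est):
            print(est.__name__, est(y, x), jackknife_var(est, y, x))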

  3. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.

  4. Lagoa Real design. Description and evaluation of sampling system

    International Nuclear Information System (INIS)

    Hashizume, B.K.

    1982-10-01

    This report describes the preparation system for drilling samples from the Lagoa Real project, aimed at obtaining a representative fraction of the drill-core halves. The combined sampling-plus-analysis error and the analytical accuracy were determined by delayed neutron analysis. (author)

  5. Software Design Methods for Real-Time Systems

    Science.gov (United States)

    1989-12-01

    This module describes the concepts and methods used in the software design of real-time systems. It outlines the characteristics of real-time systems, describes the role of software design in real-time system development, surveys and compares some software design methods for real-time systems, and…

  6. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    Science.gov (United States)

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
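
    A skeleton of the Spatial Simulated Annealing loop might look like the sketch below; the objective is a placeholder (the study instead minimises the space-time averaged KED prediction error variance, which needs the full geostatistical model), and the move size, cooling rate, and gauge count are invented.

        import numpy as np

        rng = np.random.default_rng(42)

        def objective(locs):
            # Placeholder criterion rewarding spatial spread; the study instead
            # minimises the space-time averaged KED prediction error variance.
            d = np.linalg.norm(locs[:, None] - locs[None, :], axis=-1)
            return -d[np.triu_indices(len(locs), 1)].min()

        locs = rng.uniform(0, 100, (25, 2))   # 25 candidate rain-gauge locations
        f, T = objective(locs), 1.0
        for _ in range(5000):
            cand = locs.copy()
            i = rng.integers(len(cand))
            cand[i] = np.clip(cand[i] + rng.normal(0, 5, 2), 0, 100)  # shift one gauge
            fc = objective(cand)
            if fc < f or rng.random() < np.exp((f - fc) / T):  # annealing acceptance
                locs, f = cand, fc
            T *= 0.999                                          # cooling schedule
        print(f)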

  7. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available samples is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.

  8. Design review report for rotary mode core sample truck (RMCST) modifications for flammable gas tanks, preliminary design

    International Nuclear Information System (INIS)

    Corbett, J.E.

    1996-02-01

    This report documents the completion of a preliminary design review for the Rotary Mode Core Sample Truck (RMCST) modifications for flammable gas tanks. The RMCST modifications are intended to support core sampling operations in waste tanks requiring flammable gas controls. The objective of this review was to validate basic design assumptions and concepts to support a path forward leading to a final design. The conclusion reached by the review committee was that the design was acceptable and efforts should continue toward a final design review

  9. Design, analysis, and interpretation of field quality-control data for water-sampling projects

    Science.gov (United States)

    Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.

    2015-01-01

    The process of obtaining and analyzing water samples from the environment includes a number of steps that can affect the reported result. The equipment used to collect and filter samples, the bottles used for specific subsamples, any added preservatives, sample storage in the field, and shipment to the laboratory have the potential to affect how accurately samples represent the environment from which they were collected. During the early 1990s, the U.S. Geological Survey implemented policies to include the routine collection of quality-control samples in order to evaluate these effects and to ensure that water-quality data were adequately representing environmental conditions. Since that time, the U.S. Geological Survey Office of Water Quality has provided training in how to design effective field quality-control sampling programs and how to evaluate the resultant quality-control data. This report documents that training material and provides a reference for methods used to analyze quality-control data.

  10. Real-time PCR (qPCR) primer design using free online software.

    Science.gov (United States)

    Thornton, Brenda; Basu, Chhandak

    2011-01-01

    Real-time PCR (quantitative PCR or qPCR) has become the preferred method for validating results obtained from assays which measure gene expression profiles. The process uses reverse transcription polymerase chain reaction (RT-PCR), coupled with fluorescent chemistry, to measure variations in transcriptome levels between samples. The four most commonly used fluorescent chemistries are SYBR® Green dyes and TaqMan®, Molecular Beacon or Scorpion probes. SYBR® Green is very simple to use and cost efficient. As SYBR® Green dye binds to any double-stranded DNA product, its success depends greatly on proper primer design. Many types of online primer design software are available, which can be used free of charge to design desirable SYBR® Green-based qPCR primers. This laboratory exercise is intended for those who have a fundamental background in PCR. It addresses the basic fluorescent chemistries of real-time PCR, the basic rules and pitfalls of primer design, and provides a step-by-step protocol for designing SYBR® Green-based primers with free, online software. Copyright © 2010 Wiley Periodicals, Inc.

  11. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution

    International Nuclear Information System (INIS)

    Cho, Sanghee; Grazioso, Ron; Zhang Nan; Aykac, Mehmet; Schmand, Matthias

    2011-01-01

    The main focus of our study is to investigate how the performance of digital timing methods is affected by sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions, such as: what is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, due to the aliasing effect, some artifacts are produced in the timing resolution estimations; the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally over the Nyquist rate, a proper signal interpolation is important. A sharp roll-off (higher order) filter is required to separate the baseband signal from its replicates to avoid the aliasing, but in return the computational load will be higher. We demonstrated the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed that there is no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computation requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool that checks for constant timing-resolution behavior of a given timing pick-off method regardless of the source location. Lastly, the performance comparison for several digital timing methods is also shown.
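    As a minimal illustration of the interpolation step discussed above (not the authors' implementation; the pulse shape and rates are illustrative), the sketch below reconstructs a digitized scintillation-like pulse on a fine grid with ideal Whittaker-Shannon (sinc) interpolation and applies a simple leading-edge timing pick-off; a practical design would use a windowed, finite-order filter whose roll-off trades aliasing rejection against computation:

```python
import numpy as np

def sinc_interpolate(samples, fs, t_fine):
    """Whittaker-Shannon reconstruction of a band-limited signal."""
    n = np.arange(len(samples))
    # Each output instant is a sinc-weighted sum of all input samples.
    return np.array([np.sum(samples * np.sinc(fs * t - n)) for t in t_fine])

fs = 1.3e9                       # sampling rate (Hz)
t = np.arange(64) / fs           # coarse sample instants
pulse = np.exp(-t / 40e-9) - np.exp(-t / 7e-9)   # crude LSO-like pulse shape

t_fine = np.linspace(0, t[-1], 2048)
fine = sinc_interpolate(pulse, fs, t_fine)

# Timing pick-off: first fine-grid crossing of 20% of the pulse peak.
threshold = 0.2 * fine.max()
t_pick = t_fine[np.argmax(fine >= threshold)]
print(f"leading-edge pick-off at {t_pick * 1e9:.3f} ns")
```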

  12. Adaptive Sampling of Time Series During Remote Exploration

    Science.gov (United States)

    Thompson, David R.

    2012-01-01

    This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models
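    A minimal sketch of the information-gain criterion (assuming a stationary squared-exponential covariance, unit prior variance, and myopic one-step lookahead; the nonstationary flight model is not reproduced): the next sample time is the candidate with the largest GP predictive variance, which for a Gaussian predictive distribution is equivalent to maximal entropy:

```python
import numpy as np

def rbf(a, b, length=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def next_sample_time(t_obs, t_grid, noise=0.1):
    """Pick the candidate time with maximum GP predictive variance.

    For a GP the predictive variance (hence the entropy of the predictive
    distribution) depends only on *where* we sampled, not on the values.
    """
    K = rbf(t_obs, t_obs) + noise**2 * np.eye(len(t_obs))
    Ks = rbf(t_grid, t_obs)
    var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
    return t_grid[np.argmax(var)]

t_obs = np.array([0.0, 1.0, 1.5, 4.0])     # times already sampled
t_grid = np.linspace(0.0, 6.0, 200)        # candidate future times
print("sample next at t =", round(next_sample_time(t_obs, t_grid), 2))
```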

  13. Design Specifications for Adaptive Real-Time Systems

    Science.gov (United States)

    1991-12-01

    Technical Report CMU/SEI-91-TR-20 (ESD-91-TR-20), December 1991. Design Specifications for Adaptive Real-Time Systems. Randall W. Lichota (Hughes); Alice H. Muntz. Abstract: The design specification method described in this report treats a software

  14. Monitoring oil persistence on beaches : SCAT versus stratified random sampling designs

    International Nuclear Information System (INIS)

    Short, J.W.; Lindeberg, M.R.; Harris, P.M.; Maselko, J.M.; Pella, J.J.; Rice, S.D.

    2003-01-01

    In the event of a coastal oil spill, shoreline clean-up assessment teams (SCAT) commonly rely on visual inspection of the entire affected area to monitor the persistence of the oil on beaches. Occasionally, pits are excavated to evaluate the persistence of subsurface oil. This approach is practical for directing clean-up efforts directly following a spill. However, sampling of the 1989 Exxon Valdez oil spill in Prince William Sound 12 years later has shown that visual inspection combined with pit excavation does not offer estimates of contaminated beach area or stranded oil volumes. This information is needed to statistically evaluate the significance of change with time. Assumptions regarding the correlation of visually-evident surface oil and cryptic subsurface oil are usually not evaluated as part of the SCAT mandate. Stratified random sampling can avoid such problems and could produce precise estimates of oiled area and volume that allow for statistical assessment of major temporal trends and the extent of the impact. The 2001 sampling of the shoreline of Prince William Sound showed that 15 per cent of surface oil occurrences were associated with subsurface oil. This study demonstrates the usefulness of the stratified random sampling method and shows how sampling design parameters impact statistical outcome. Power analysis based on the study results indicates that optimum power is derived when unnecessary stratification is avoided. It was emphasized that sampling effort should be balanced between choosing sufficient beaches for sampling and the intensity of sampling.

  15. Modular time division multiplexer: Efficient simultaneous characterization of fast and slow transients in multiple samples

    Science.gov (United States)

    Kim, Stephan D.; Luo, Jiajun; Buchholz, D. Bruce; Chang, R. P. H.; Grayson, M.

    2016-09-01

    A modular time division multiplexer (MTDM) device is introduced to enable parallel measurement of multiple samples with both fast and slow decay transients spanning from millisecond to month-long time scales. This is achieved by dedicating a single high-speed measurement instrument for rapid data collection at the start of a transient, and by multiplexing a second low-speed measurement instrument for slow data collection of several samples in parallel for the later transients. The MTDM is a high-level design concept that can in principle measure an arbitrary number of samples, and the low cost implementation here allows up to 16 samples to be measured in parallel over several months, reducing the total ensemble measurement duration and equipment usage by as much as an order of magnitude without sacrificing fidelity. The MTDM was successfully demonstrated by simultaneously measuring the photoconductivity of three amorphous indium-gallium-zinc-oxide thin films with 20 ms data resolution for fast transients and an uninterrupted parallel run time of over 20 days. The MTDM has potential applications in many areas of research that manifest response times spanning many orders of magnitude, such as photovoltaics, rechargeable batteries, amorphous semiconductors such as silicon and amorphous indium-gallium-zinc-oxide.
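    A toy sketch of the scheduling idea under stated assumptions (one fast digitizer dedicated to each transient's onset and one slow instrument shared round-robin; the window and dwell times are illustrative, not the authors' hardware values):

```python
FAST_WINDOW_S = 10.0   # high-speed digitizer covers each transient onset
SLOW_DWELL_S = 20.0    # shared slow instrument dwell per multiplexed visit

def mtdm_schedule(n_samples, horizon_s):
    """Stagger transient starts so the single fast instrument is never
    double-booked, then round-robin the slow instrument over all samples."""
    events = [(i * FAST_WINDOW_S, 'fast', i) for i in range(n_samples)]
    t, i = n_samples * FAST_WINDOW_S, 0
    while t < horizon_s:
        events.append((t, 'slow', i % n_samples))
        t += SLOW_DWELL_S
        i += 1
    return events

for time_s, instrument, sample in mtdm_schedule(3, horizon_s=150):
    print(f"t={time_s:6.1f} s  {instrument:>4}  sample {sample}")
```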

  16. Observer-based output feedback control of networked control systems with non-uniform sampling and time-varying delay

    Science.gov (United States)

    Meng, Su; Chen, Jie; Sun, Jian

    2017-10-01

    This paper investigates the problem of observer-based output feedback control for networked control systems with non-uniform sampling and time-varying transmission delay. The sampling intervals are assumed to vary within a given interval. The transmission delay belongs to a known interval. A discrete-time model is first established, which contains time-varying delay and norm-bounded uncertainties coming from non-uniform sampling intervals. It is then converted to an interconnection of two subsystems in which the forward channel is delay-free. The scaled small gain theorem is used to derive the stability condition for the closed-loop system. Moreover, the observer-based output feedback controller design method is proposed by utilising a modified cone complementary linearisation algorithm. Finally, numerical examples illustrate the validity and superiority of the proposed method.

  17. Sampling design considerations for demographic studies: a case of colonial seabirds

    Science.gov (United States)

    Kendall, William L.; Converse, Sarah J.; Doherty, Paul F.; Naughton, Maura B.; Anders, Angela; Hines, James E.; Flint, Elizabeth

    2009-01-01

    For the purposes of making many informed conservation decisions, the main goal for data collection is to assess population status and allow prediction of the consequences of candidate management actions. Reducing the bias and variance of estimates of population parameters reduces uncertainty in population status and projections, thereby reducing the overall uncertainty under which a population manager must make a decision. In capture-recapture studies, imperfect detection of individuals, unobservable life-history states, local movement outside study areas, and tag loss can cause bias or precision problems with estimates of population parameters. Furthermore, excessive disturbance to individuals during capture-recapture sampling may be of concern because disturbance may have demographic consequences. We address these problems using as an example a monitoring program for Black-footed Albatross (Phoebastria nigripes) and Laysan Albatross (Phoebastria immutabilis) nesting populations in the northwestern Hawaiian Islands. To mitigate these estimation problems, we describe a synergistic combination of sampling design and modeling approaches. Solutions include multiple capture periods per season and multistate, robust design statistical models, dead recoveries and incidental observations, telemetry and data loggers, buffer areas around study plots to neutralize the effect of local movements outside study plots, and double banding and statistical models that account for band loss. We also present a variation on the robust capture-recapture design and a corresponding statistical model that minimizes disturbance to individuals. For the albatross case study, this less invasive robust design was more time efficient and, when used in combination with a traditional robust design, reduced the standard error of detection probability by 14% with only two hours of additional effort in the field. These field techniques and associated modeling approaches are applicable to studies of

  18. Timing system design and tests for the Gravity Probe B relativity mission

    International Nuclear Information System (INIS)

    Li, J; Keiser, G M; Ohshima, Y; Shestople, P; Lockhart, J M

    2015-01-01

    In this paper, we discuss the timing system design and tests for the NASA/Stanford Gravity Probe B (GP-B) relativity mission. The primary clock of GP-B, called the 16f_o clock, was an oven-controlled crystal oscillator that produced a 16.368 MHz master frequency. The 16f_o clock and the 10 Hz data strobe, which was divided down from the 16f_o clock, provided clock signals to all GP-B components and synchronized the data collection, transmission, and processing. The sampled data of science signals were stamped with the vehicle time, a counter of the 10 Hz data strobe. The time latency between the time of data sampling and the stamped vehicle time was compensated in the ground data processing. Two redundant global positioning system receivers onboard the GP-B satellite supplied an external reference for time transfer between the vehicle time and coordinated universal time (UTC), and the time conversion was established in the ground preprocessing of the telemetry timing data. The space flight operation showed that the error of time conversion between the vehicle time and UTC was less than 2 μs. Considering that the constant timing offsets were compensated in the ground processing of the GP-B science data, the time latency between the effective sampling time of GP-B science signals and the stamped vehicle time was verified to within 1 ms in the ground tests. (paper)

  19. Design development of robotic system for on line sampling in fuel reprocessing

    International Nuclear Information System (INIS)

    Balasubramanian, G.R.; Venugopal, P.R.; Padmashali, G.K.

    1990-01-01

    This presentation describes the design and development work being carried out on an automated sampling system for fast reactor fuel reprocessing plants. The plant proposes to use an integrated sampling system. The sample is taken across regular process streams from any intermediate hold-up pot. A robot system is planned to take the sample from the sample pot, transfer it to the sample bottle, cap the bottle, and transfer the bottle to a pneumatic conveying station. The system covers a large number of sample pots. Alternate automated systems are also examined (1). (author). 4 refs., 2 figs

  20. Failure-censored accelerated life test sampling plans for Weibull distribution under expected test time constraint

    International Nuclear Information System (INIS)

    Bai, D.S.; Chun, Y.R.; Kim, J.G.

    1995-01-01

    This paper considers the design of life-test sampling plans based on failure-censored accelerated life tests. The lifetime distribution of products is assumed to be Weibull with a scale parameter that is a log-linear function of a (possibly transformed) stress. Two levels of stress higher than the use-condition stress, high and low, are used. Sampling plans are obtained that have equal expected test times at the high and low test stresses, satisfy the producer's and consumer's risk requirements, and minimize the asymptotic variance of the test statistic used to decide lot acceptability. The properties of the proposed life-test sampling plans are investigated.
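    As a hedged illustration of the quantities involved (illustrative shape and life-stress parameters, not the paper's plans), the sketch below uses Monte Carlo simulation to estimate the expected duration of a failure-censored test, i.e., the expected time of the r-th order statistic of Weibull lifetimes whose scale follows a log-linear stress model:

```python
import numpy as np

rng = np.random.default_rng(0)

def weibull_scale(stress, gamma0=8.0, gamma1=-2.0):
    """Log-linear life-stress model: log(scale) = gamma0 + gamma1 * stress."""
    return np.exp(gamma0 + gamma1 * stress)

def expected_test_time(stress, n=20, r=8, shape=1.5, trials=10_000):
    """Monte Carlo expected duration of a test stopped at the r-th failure."""
    scale = weibull_scale(stress)
    lifetimes = scale * rng.weibull(shape, size=(trials, n))
    rth_failure = np.sort(lifetimes, axis=1)[:, r - 1]
    return rth_failure.mean()

for s in (1.0, 1.5):   # low and high (standardized) stress levels
    print(f"stress {s}: expected test time ~ {expected_test_time(s):.1f}")
```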

  1. Stratified sampling design based on data mining.

    Science.gov (United States)

    Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung

    2013-09-01

    To explore classification rules based on data mining methodologies which are to be used in defining strata in stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics, then, constructed decision trees on cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011 in this study. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and population density of provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
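    A minimal sketch of the two-step idea on synthetic data (scikit-learn assumed; the features, cluster count, and tree depth are illustrative, not the values fitted to the Korean claims database):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Hypothetical provider features: inpatients per specialist, number of beds.
X = np.column_stack([rng.gamma(2.0, 15.0, 500), rng.poisson(30, 500)])

# Step 1: group providers with similar characteristics.
labels = KMeans(n_clusters=5, n_init=10, random_state=1).fit_predict(X)

# Step 2: turn cluster labels into human-readable stratification rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, labels)
print(export_text(tree, feature_names=['inpatients_per_specialist', 'beds']))
```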

  2. Latent spatial models and sampling design for landscape genetics

    Science.gov (United States)

    Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.

  3. Identification of continuous-time systems from samples of input ...

    Indian Academy of Sciences (India)

    Abstract. This paper presents an introductory survey of the methods that have been developed for identification of continuous-time systems from samples of input-output data. The two basic approaches may be described as (i) the indirect method, where first a discrete-time model is estimated from the sampled data and then ...

  4. Adaptive clinical trial designs with pre-specified rules for modifying the sample size: understanding efficient types of adaptation.

    Science.gov (United States)

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2013-04-15

    Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Visual Sample Plan (VSP) Software: Designs and Data Analyses for Sampling Contaminated Buildings

    International Nuclear Information System (INIS)

    Pulsipher, Brent A.; Wilson, John E.; Gilbert, Richard O.; Nuffer, Lisa L.; Hassig, Nancy L.

    2005-01-01

    A new module of the Visual Sample Plan (VSP) software has been developed to provide sampling designs and data analyses for potentially contaminated buildings. An important application is assessing levels of contamination in buildings after a terrorist attack. This new module, funded by DHS through the Combating Terrorism Technology Support Office, Technical Support Working Group, was developed to provide a tailored, user-friendly and visually-orientated buildings module within the existing VSP software toolkit, the latest version of which can be downloaded from http://dqo.pnl.gov/vsp. In case of, or when planning against, a chemical, biological, or radionuclide release within a building, the VSP module can be used to quickly and easily develop and visualize technically defensible sampling schemes for walls, floors, ceilings, and other surfaces to statistically determine if contamination is present, its magnitude and extent throughout the building, and if decontamination has been effective. This paper demonstrates the features of this new VSP buildings module, which include: the ability to import building floor plans or to easily draw, manipulate, and view rooms in several ways; being able to insert doors, windows and annotations into a room; 3-D graphic room views with surfaces labeled and floor plans that show building zones that have separate air handling units. The paper will also discuss the statistical design and data analysis options available in the buildings module. Design objectives supported include comparing an average to a threshold when the data distribution is normal or unknown, and comparing measurements to a threshold to detect hotspots or to ensure most of the area is uncontaminated when the data distribution is normal or unknown.

  6. An investigation of the effects of relevant samples and a comparison of verification versus discovery based lab design

    Science.gov (United States)

    Rieben, James C., Jr.

    This study focuses on the effects of relevance and lab design on student learning within the chemistry laboratory environment. A general chemistry conductivity of solutions experiment and an upper level organic chemistry cellulose regeneration experiment were employed. In the conductivity experiment, the two main variables studied were the effect of relevant (or "real world") samples on student learning and a verification-based lab design versus a discovery-based lab design. With the cellulose regeneration experiment, the effect of a discovery-based lab design vs. a verification-based lab design was the sole focus. Evaluation surveys consisting of six questions were used at three different times to assess student knowledge of experimental concepts. In the general chemistry laboratory portion of this study, four experimental variants were employed to investigate the effect of relevance and lab design on student learning. These variants consisted of a traditional (or verification) lab design, a traditional lab design using "real world" samples, a new lab design employing real world samples/situations using unknown samples, and the new lab design using real world samples/situations that were known to the student. Data used in this analysis were collected during the Fall 08, Winter 09, and Fall 09 terms. For the second part of this study a cellulose regeneration experiment was employed to investigate the effects of lab design. A demonstration creating regenerated cellulose "rayon" was modified and converted to an efficient and low-waste experiment. In the first variant students tested their products and verified a list of physical properties. In the second variant, students filled in a blank physical property chart with their own experimental results for the physical properties. Results from the conductivity experiment show significant student learning of the effects of concentration on conductivity and how to use conductivity to differentiate solution types with the

  7. Rapid Sampling of Hydrogen Bond Networks for Computational Protein Design.

    Science.gov (United States)

    Maguire, Jack B; Boyken, Scott E; Baker, David; Kuhlman, Brian

    2018-05-08

    Hydrogen bond networks play a critical role in determining the stability and specificity of biomolecular complexes, and the ability to design such networks is important for engineering novel structures, interactions, and enzymes. One key feature of hydrogen bond networks that makes them difficult to rationally engineer is that they are highly cooperative and are not energetically favorable until the hydrogen bonding potential has been satisfied for all buried polar groups in the network. Existing computational methods for protein design are ill-equipped for creating these highly cooperative networks because they rely on energy functions and sampling strategies that are focused on pairwise interactions. To enable the design of complex hydrogen bond networks, we have developed a new sampling protocol in the molecular modeling program Rosetta that explicitly searches for sets of amino acid mutations that can form self-contained hydrogen bond networks. For a given set of designable residues, the protocol often identifies many alternative sets of mutations/networks, and we show that it can readily be applied to large sets of residues at protein-protein interfaces or in the interior of proteins. The protocol builds on a recently developed method in Rosetta for designing hydrogen bond networks that has been experimentally validated for small symmetric systems but was not extensible to many larger protein structures and complexes. The sampling protocol we describe here not only recapitulates previously validated designs with performance improvements but also yields viable hydrogen bond networks for cases where the previous method fails, such as the design of large, asymmetric interfaces relevant to engineering protein-based therapeutics.

  8. Considerations on sample holder design and custom-made non-polarizable electrodes for Spectral Induced Polarization measurements on unsaturated soils

    Science.gov (United States)

    Kaouane, C.; Chouteau, M. C.; Fauchard, C.; Cote, P.

    2014-12-01

    Spectral Induced Polarization (SIP) is a geophysical method sensitive to water content, saturation, and grain size distribution. It could be used as an alternative to nuclear probes to assess the compaction of soils in road works. To evaluate the potential of SIP as a practical tool, we designed an experiment for complex conductivity measurements on unsaturated soil samples. The literature presents a large variety of sample holders and designs, each depending on the context. Although precise descriptions of some sample holders can be found, exact replication is not always possible. Furthermore, the potential measurements are often done using custom-made Ag/AgCl electrodes, and very few indications are given on their reliability with time and temperature. Our objective is to perform complex conductivity measurements on soil samples compacted in a cylindrical PVC mould (10 cm long, 5 cm diameter) according to geotechnical standards. To obtain a homogeneous current density, electrical current is transmitted through the sample via chambers filled with agar gel. Agar gel is a good non-polarizable conductor within the frequency range of interest (1 mHz - 20 kHz), but its electrical properties are little documented. We measured an increase in agar gel electrical conductivity over time and modelled the influence of this variation on the measurement; the influence is minimized when the electrodes are located on the sample. Because of the dimensions at stake and the need for a simple design, the potential electrodes are located outside the sample; hence the gel contributes to the measurement. Since the gel is fairly conductive, we expect to overestimate the sample conductivity. The potential electrodes are non-polarizable Ag/AgCl electrodes. To avoid any leakage, the KCl solution in the electrodes is replaced by saturated KCl-agar gel. These electrodes are low cost and show a low, stable self-potential (<1 mV). In addition, the electrode fabrication technique can be easily reproduced, and storage and maintenance are simple.

  9. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  10. Algae viability over time in a ballast water sample

    Science.gov (United States)

    Gollasch, Stephan; David, Matej

    2018-03-01

    The biology of vessels' ballast water needs to be analysed for several reasons, one of these being performance tests of ballast water management systems. This analysis includes a viability assessment of phytoplankton. To overcome the logistical problems of getting algae sample processing gear on board a vessel to document algae viability, samples may be transported to land-based laboratories. Concerns were raised about how the storage conditions of the sample may impact algae viability over time and what the most appropriate storage conditions are. Here we answer these questions with a long-term algae viability study based on daily sample analysis using Pulse-Amplitude Modulated (PAM) fluorometry. The sample was analysed over 79 days. We tested different storage conditions: fridge and room temperature, with and without light. It seems that during the first two weeks of the experiment the viability remained almost unchanged, with a slight downward trend. In the following period, before the sample was split, a slightly stronger downward viability trend was observed, which continued at a similar rate towards the end of the experiment. After the sample was split, the strongest viability reduction was measured for the sample stored without light at room temperature. We conclude that the storage conditions, especially temperature and light exposure, have a stronger impact on algae viability than the storage duration, and that inappropriate storage conditions reduce algal viability. A sample storage time of up to two weeks in a dark and cool environment has little influence on organism viability. This indicates that a two-week interval between sample taking on board a vessel and the viability measurement in a land-based laboratory may not be very critical.

  11. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
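    As a toy illustration of the equivalent-time idea (ideal clocks, no drift, an illustrative waveform; the paper's BDU-based estimator is not reproduced), the numpy sketch below shows that a P-times-slower ADC fills every slot of the desired fine grid across P pulse repetitions exactly when the number of fine-grid periods per inter-pulse interval is co-prime with P:

```python
import math
import numpy as np

P = 4                  # slow-down factor: ADC rate = desired rate / P
N = 33                 # desired-rate periods per pulse interval; gcd(N, P) = 1
assert math.gcd(N, P) == 1, "co-prime condition for full interleaving"

def waveform(t):       # illustrative received waveform, pulse period 1.0
    return np.exp(-((t % 1.0) - 0.3) ** 2 / 0.002)

T = 1.0 / N            # desired (fine) sampling period
fine = np.full(N, np.nan)
# One continuous slow acquisition spanning P pulse repetitions:
for m in range(N):
    t = m * P * T                    # slow ADC samples every P*T seconds
    fine[(m * P) % N] = waveform(t)  # lands on a distinct fine-grid slot

print("fine-grid slots filled:", np.count_nonzero(~np.isnan(fine)), "of", N)
```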

  12. System Level Design of a Continuous-Time Delta-Sigma Modulator for Portable Ultrasound Scanners

    DEFF Research Database (Denmark)

    Llimos Muntal, Pere; Færch, Kjartan; Jørgensen, Ivan Harald Holger

    2015-01-01

    In this paper the system level design of a continuous-time ∆Σ modulator for portable ultrasound scanners is presented. The overall required signal-to-noise ratio (SNR) is derived to be 42 dB and the sampling frequency used is 320 MHz for an oversampling ratio of 16. In order to match these requirements, based on high-level VerilogA simulations, the performance of the ∆Σ modulator versus various block performance parameters is presented as trade-off curves. Based on these results, the block specifications are derived.
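    For context only (a textbook approximation, not the paper's VerilogA-based analysis), the ideal peak SQNR of an L-th-order modulator with an N-bit quantizer is commonly approximated as SQNR ≈ 6.02N + 1.76 - 10·log10(π^(2L)/(2L+1)) + (2L+1)·10·log10(OSR) dB; the sketch below evaluates this near the stated operating point (OSR = 16):

```python
import math

def ideal_sqnr_db(osr, order, bits=1):
    """Textbook peak SQNR of an ideal delta-sigma modulator (dB)."""
    quantizer = 6.02 * bits + 1.76
    shaping = -10 * math.log10(math.pi ** (2 * order) / (2 * order + 1))
    oversampling = (2 * order + 1) * 10 * math.log10(osr)
    return quantizer + shaping + oversampling

for order in (1, 2, 3):
    print(f"order {order}, OSR 16: {ideal_sqnr_db(16, order):.1f} dB")
```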

  13. Design and relevant sample calculations for a neutral particle energy diagnostic based on time of flight

    Energy Technology Data Exchange (ETDEWEB)

    Cecconello, M

    1999-05-01

    Extrap T2 will be equipped with a neutral particle energy diagnostic based on the time-of-flight technique. In this report, the expected neutral fluxes for Extrap T2 are estimated and discussed in order to determine the feasibility and the limits of such a diagnostic. The estimates are based on a 1D model of the plasma whose input parameters are the radial density and temperature profiles of electrons and ions and the density of neutrals at the edge and in the centre of the plasma. The atomic processes included in the model are charge exchange and electron-impact ionization. The results indicate that the plasma attenuation length varies from a/5 to a, a being the minor radius. Differential neutral fluxes, as well as the estimated power losses due to CX processes (2% of the input power), are in agreement with experimental results obtained in similar devices. The expected impurity influxes vary from 10^14 to 10^11 cm^-2 s^-1. The neutral particle detection and acquisition systems are discussed. The maximum detectable energy varies from 1 to 3 keV depending on the flight distance d. The time resolution is 0.5 ms. Output signals from the waveform recorder are foreseen in the range 0-200 mV. An 8-bit waveform recorder with a 2 MHz sampling frequency and 100k samples of memory capacity is the minimum requirement for the acquisition system. 20 refs, 19 figs.

  14. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for many management tasks. The planning of pressure observations, in terms of their number and spatial distribution, is termed sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakage, detect anomalies and bursts, guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed by considering optimal network segmentation and the modularity index within a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of the closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly on the basis of network topology and of weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
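    The sampling-oriented modularity index itself is not reproduced here; as a rough illustration of the segmentation idea it builds on, the sketch below (networkx assumed, toy graph) partitions a hypothetical pipe network into modules by greedy modularity maximization and nominates one candidate pressure-meter node per module:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical small pipe network; edge weights could encode pipe importance.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 0), (2, 3),
                  (3, 4), (4, 5), (5, 3), (5, 6), (6, 7), (7, 5)])

modules = greedy_modularity_communities(G)
for i, module in enumerate(modules):
    # Nominate the most connected node of each module as a meter location.
    meter = max(module, key=G.degree)
    print(f"module {i}: nodes {sorted(module)} -> meter at node {meter}")
```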

  15. On the sample transport time of a pneumatic transfer system

    International Nuclear Information System (INIS)

    Kondo, Yoshihide

    1983-01-01

    The counts accumulated in the measuring system are affected by variations in the transport time of the sample in cyclic activation experiments with a mechanical sample transfer system. With the pneumatic transfer system that has been set up, the transport time varies with factors such as the form, size, and weight of the sample and the pneumatic pressure. Understanding the relationships between the transport time and these factors is essential for experiments with this transfer system. (author)

  16. Quantum supremacy in constant-time measurement-based computation: A unified architecture for sampling and verification

    Science.gov (United States)

    Miller, Jacob; Sanders, Stephen; Miyake, Akimasa

    2017-12-01

    While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest includes how far one can reduce the resource requirement to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, leading output probability distributions to be generated in constant time, independent of the system size. Thus, it could be implemented in principle without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so they require the same physical resource requirements in contrast to the more demanding verification protocols seen elsewhere in the literature.

  17. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
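    For reference, a minimal sketch of the classical method-of-moments (Matheron) variogram estimator that the study evaluates, applied to a synthetic 1-D transect (the robust and residual-maximum-likelihood variants are not shown):

```python
import numpy as np

rng = np.random.default_rng(2)

def empirical_variogram(coords, values, bins):
    """Matheron's method-of-moments estimator on binned pairwise distances."""
    d = np.abs(coords[:, None] - coords[None, :])
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(coords), k=1)       # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = [sq[(d >= lo) & (d < hi)].mean() for lo, hi in zip(bins, bins[1:])]
    return np.asarray(gamma)

# 150 sampling locations on a 100 m transect; toy correlated field.
x = np.sort(rng.uniform(0, 100, 150))
z = np.cumsum(rng.normal(0, 1, 150))
bins = np.linspace(0, 50, 11)
print(np.round(empirical_variogram(x, z, bins), 2))
```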

  18. Adaptive control of theophylline therapy: importance of blood sampling times.

    Science.gov (United States)

    D'Argenio, D Z; Khakmahd, K

    1983-10-01

    A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.
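    As a minimal sketch (illustrative numbers; the volume of distribution is assumed known, and this is the commonly stated form of Chiou's two-point estimator rather than the authors' simulation code), clearance during a constant-rate infusion can be estimated from two concentrations as follows:

```python
def chiou_clearance(r0, c1, c2, t1, t2, vd):
    """Two-point clearance estimate during a constant-rate IV infusion.

    Commonly stated form of Chiou's method: the first term is the
    steady-state estimate, the second corrects for the concentration
    trend between the two sampling times t1 < t2.
    """
    return 2 * r0 / (c1 + c2) + 2 * vd * (c1 - c2) / ((c1 + c2) * (t2 - t1))

# Illustrative numbers only: 40 mg/h infusion, Vd = 35 L, samples at 2 h, 8 h.
cl = chiou_clearance(r0=40.0, c1=8.0, c2=9.5, t1=2.0, t2=8.0, vd=35.0)
print(f"estimated clearance ~ {cl:.2f} L/h")
```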

  19. A UAV-Based Fog Collector Design for Fine-Scale Aerobiological Sampling

    Science.gov (United States)

    Gentry, Diana; Guarro, Marcello; Demachkie, Isabella Siham; Stumfall, Isabel; Dahlgren, Robert P.

    2017-01-01

    Airborne microbes are found throughout the troposphere and into the stratosphere. Knowing how the activity of airborne microorganisms can alter water, carbon, and other geochemical cycles is vital to a full understanding of local and global ecosystems. Just as on the land or in the ocean, atmospheric regions vary in habitability; the underlying geochemical, climatic, and ecological dynamics must be characterized at different scales to be effectively modeled. Most aerobiological studies have focused on a high level: 'How high are airborne microbes found?' and 'How far can they travel?' Most fog and cloud water studies collect from stationary ground stations (point) or along flight transects (1D). To complement and provide context for this data, we have designed a UAV-based modified fog and cloud water collector to retrieve 4D-resolved samples for biological and chemical analysis. Our design uses a passive impacting collector hanging from a rigid rod suspended between two multi-rotor UAVs. The suspension design reduces the effect of turbulence and potential for contamination from the UAV downwash. The UAVs are currently modeled in a leader-follower configuration, taking advantage of recent advances in modular UAVs, UAV swarming, and flight planning. The collector itself is a hydrophobic mesh. Materials including Tyvek, PTFE, nylon, and polypropylene monofilament fabricated via laser cutting, CNC knife, or 3D printing were characterized for droplet collection efficiency using a benchtop atomizer and particle counter. Because the meshes can be easily and inexpensively fabricated, a set can be pre-sterilized and brought to the field for 'hot swapping' to decrease cross-contamination between flight sessions or use as negative controls. An onboard sensor and logging system records the time and location of each sample; when combined with flight tracking data, the samples can be resolved into a 4D volumetric map of the fog bank. Collected samples can be returned to the lab for

  20. Design of a sample acquistion system for the Mars exobiological penetrator

    Science.gov (United States)

    Thomson, Ron; Gwynne, Owen

    1988-01-01

    The Mars Exobiological Penetrator will be embedded into several locations on the Martian surface. It contains various scientific instruments, such as an Alpha-Particle Instrument (API), Differential Scanning Calorimeter (DSC), Evolved Gas Analyzer (EGA) and accelerometers. A sample is required for analysis in the API and DSC. To avoid impact-contaminated material, this sample must be taken from soil greater than 2 cm away from the penetrator shell. This study examines the design of a dedicated sampling system including deployment, suspension, fore/after body coupling, sample gathering and placement. To prevent subsurface material from entering the penetrator sampling compartment during impact, a plug is placed in the exit hole of the wall. A U-lever device is used to hold this plug in the penetrator wall. The U-lever rotates upon initial motion of the core-grinder mechanism (CGM), releasing the plug. Research points to a combination of coring and grinding as a plausible solution to the problem of dry drilling. The CGM, driven by two compressed springs, will be deployed along a tracking system. A slowly varying load, i.e., springs, is favored over a fixed displacement motion because of its adaptability to different material hardness. However, to accommodate sampling in a low density soil, two dash pots set a maximum transverse velocity. In addition, minimal power use is achieved by unidirectional motion of the CGM. The sample will be transported to the scientific instruments by means of a sample placement tray that is driven by a compressed spring to avoid unnecessary power usage. This paper also explores possible modifications for size, weight, and time as well as possible future studies.

  1. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore, the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to 3 major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated at the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold in that: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shortened acquisition time and improved the quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.

  2. Using forbidden ordinal patterns to detect determinism in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W; Chobot, J M; Niskala, B J; Needhammer, C J

    2016-02-01

    It is known that when symbolizing a time series into ordinal patterns using the Bandt-Pompe (BP) methodology, there will be ordinal patterns called forbidden patterns that do not occur in a deterministic series. The existence of forbidden patterns can be used to identify deterministic dynamics. In this paper, the ability to use forbidden patterns to detect determinism in irregularly sampled time series is tested on data generated from a continuous model system. The study is done in three parts. First, the effects of sampling time on the number of forbidden patterns are studied on regularly sampled time series. The next two parts focus on two types of irregular-sampling, missing data and timing jitter. It is shown that forbidden patterns can be used to detect determinism in irregularly sampled time series for low degrees of sampling irregularity (as defined in the paper). In addition, comments are made about the appropriateness of using the BP methodology to symbolize irregularly sampled time series.
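    A minimal sketch of the Bandt-Pompe symbolization and the forbidden-pattern count for regularly sampled data (order-3 patterns; the paper's irregular-sampling experiments with missing data and timing jitter are not reproduced):

```python
import itertools
import numpy as np

def ordinal_patterns(x, order=3):
    """Map each length-`order` window to its Bandt-Pompe rank pattern."""
    return {tuple(np.argsort(x[i:i + order]))
            for i in range(len(x) - order + 1)}

def count_forbidden(x, order=3):
    all_patterns = set(itertools.permutations(range(order)))
    return len(all_patterns - ordinal_patterns(x, order))

rng = np.random.default_rng(3)
logistic = [0.4]
for _ in range(2000):                  # fully chaotic logistic map, r = 4
    logistic.append(4.0 * logistic[-1] * (1.0 - logistic[-1]))

print("logistic map forbidden patterns:", count_forbidden(np.array(logistic)))
print("white noise forbidden patterns: ", count_forbidden(rng.random(2001)))
```

    The deterministic logistic map leaves the strictly descending pattern unvisited, while white noise of the same length visits all six order-3 patterns.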

  3. Real-time systems design and analysis

    CERN Document Server

    Laplante, Phillip A

    2004-01-01

    "Real-Time Systems Design and Analysis, Third Edition is essential for students and practicing software engineers who want improved designs, faster computation, and ultimate cost savings. Chapters discuss hardware considerations and software requirements, software systems design, the software production process, performance estimation and optimization, and engineering considerations."--Jacket.

  4. Within-otolith variability in chemical fingerprints: implications for sampling designs and possible environmental interpretation.

    Directory of Open Access Journals (Sweden)

    Antonio Di Franco

    Full Text Available Largely used as a natural biological tag in studies of dispersal/connectivity of fish, otolith elemental fingerprinting is usually analyzed by laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS). LA-ICP-MS produces an elemental fingerprint at a discrete time-point in the life of a fish and can generate data on within-otolith variability of that fingerprint. The presence of within-otolith variability has been previously acknowledged but not incorporated into experimental designs on the presumed, but untested, grounds of both its negligibility compared to among-otolith variability and of spatial autocorrelation among multiple ablations within an otolith. Here, using a hierarchical sampling design of spatial variation at multiple scales in otolith chemical fingerprints for two Mediterranean coastal fishes, we explore: 1) whether multiple ablations within an otolith can be used as independent replicates for significance tests among otoliths, and 2) the implications of incorporating within-otolith variability when assessing spatial variability in otolith chemistry at a hierarchy of spatial scales (different fish, from different sites, at different locations on the Apulian Adriatic coast). We find that multiple ablations along the same daily rings do not necessarily exhibit spatial dependency within the otolith and can be used to estimate residual variability in a hierarchical sampling design. Inclusion of within-otolith measurements reveals that individuals at the same site can show significant variability in elemental uptake. Within-otolith variability examined across the spatial hierarchy identifies differences between the two fish species investigated, and this finding leads to discussion of the potential for within-otolith variability to be used as a marker for fish exposure to stressful conditions. We also demonstrate that a 'cost'-optimal allocation of sampling effort should typically include some level of within

  5. Within-otolith variability in chemical fingerprints: implications for sampling designs and possible environmental interpretation.

    Science.gov (United States)

    Di Franco, Antonio; Bulleri, Fabio; Pennetta, Antonio; De Benedetto, Giuseppe; Clarke, K Robert; Guidetti, Paolo

    2014-01-01

    Largely used as a natural biological tag in studies of dispersal/connectivity of fish, otolith elemental fingerprinting is usually analyzed by laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS). LA-ICP-MS produces an elemental fingerprint at a discrete time-point in the life of a fish and can generate data on within-otolith variability of that fingerprint. The presence of within-otolith variability has been previously acknowledged but not incorporated into experimental designs on the presumed, but untested, grounds of both its negligibility compared to among-otolith variability and of spatial autocorrelation among multiple ablations within an otolith. Here, using a hierarchical sampling design of spatial variation at multiple scales in otolith chemical fingerprints for two Mediterranean coastal fishes, we explore: 1) whether multiple ablations within an otolith can be used as independent replicates for significance tests among otoliths, and 2) the implications of incorporating within-otolith variability when assessing spatial variability in otolith chemistry at a hierarchy of spatial scales (different fish, from different sites, at different locations on the Apulian Adriatic coast). We find that multiple ablations along the same daily rings do not necessarily exhibit spatial dependency within the otolith and can be used to estimate residual variability in a hierarchical sampling design. Inclusion of within-otolith measurements reveals that individuals at the same site can show significant variability in elemental uptake. Within-otolith variability examined across the spatial hierarchy identifies differences between the two fish species investigated, and this finding leads to discussion of the potential for within-otolith variability to be used as a marker for fish exposure to stressful conditions. We also demonstrate that a 'cost'-optimal allocation of sampling effort should typically include some level of within-otolith replication in the

  6. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    OpenAIRE

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than ...

  7. Human Exploration Using Real-Time Robotic Operations (HERRO)- Crew Telerobotic Control Vehicle (CTCV) Design

    Science.gov (United States)

    Oleson, Steven R.; McGuire, Melissa L.; Burke, Laura; Chato, David; Fincannon, James; Landis, Geoff; Sandifer, Carl; Warner, Joe; Williams, Glenn; Colozza, Tony; et al.

    2010-01-01

    The HERRO concept allows real-time investigation of planets and small bodies by sending astronauts to orbit these targets and telerobotically explore them using robotic systems. Several targets have been put forward by past studies, including Mars, Venus, and near-Earth asteroids. A conceptual design study was funded by the NASA Innovation Fund to explore what the HERRO concept and its vehicles would look like and what technological challenges need to be met. This design study chose Mars as the target destination. In this way the HERRO studies can define the endpoint design concepts for an all-up telerobotic exploration of the number one target of interest, Mars. This endpoint design will serve to help planners define combined precursor telerobotics science missions and technology development flights. A suggested set of these technologies and demonstrator missions is shown in Appendix B. The HERRO concept includes a crewed telerobotics orbit vehicle as well as three Truck rovers, each supporting two teleoperated geologist robots, Rockhounds (each truck/Rockhound set is landed using a commercially launched aeroshell landing system). Options include a sample ascent system teamed with an orbital telerobotic sample rendezvous and return spacecraft (S/C) (yet to be designed). Each truck rover would be landed in a science location with the ability to traverse a 100 km diameter area, carrying the Rockhounds to 100 m diameter science areas for multi-week science activities. The truck is not only responsible for transporting the Rockhounds to science areas, but also for relaying telecontrol and high-res communications to/from the Rockhound and powering/heating the Rockhound during the non-science times (including night-time). The Rockhounds take the place of human geologists by providing an agile robotic platform with real-time telerobotic control from the crew telerobotics orbiter. The designs of the Truck rovers and Rockhounds will be described in other

  8. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  9. Bionic Design for Mars Sampling Scoop Inspired by Himalayan Marmot Claw

    Directory of Open Access Journals (Sweden)

    Long Xue

    2016-01-01

    Full Text Available Cave animals are often adapted to digging and life underground, with claw toes similar in structure and function to a sampling scoop. In this paper, the clawed toes of the Himalayan marmot were selected as a biological prototype for bionic research. Based on geometric parameter optimization of the clawed toes, a bionic sampling scoop for use on Mars was designed. Using a 3D laser scanner, the point-cloud data of the second front claw toe were acquired. Parametric equations and contour curves for the claw were then built with cubic polynomial fitting. We obtained 18 characteristic curve equations for the internal and external contours of the claw. A bionic sampling scoop was designed according to the structural parameters of Curiosity's sampling shovel and the contours of the Himalayan marmot's claw. Verification tests showed that when the penetration angle was 45° and the sampling speed was 0.33 r/min, the bionic sampling scoop's resistance torque was 49.6% less than that of the prototype sampling scoop. When the penetration angle was 60° and the sampling speed was 0.22 r/min, the resistance torque of the bionic sampling scoop was 28.8% lower than that of the prototype sampling scoop.
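
    The contour-fitting step above reduces, in outline, to a least-squares cubic polynomial fit of digitised contour points. The sketch below shows that step with synthetic coordinates standing in for the scanner's point-cloud data; the coefficients are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 30.0, 60)                  # mm along the claw contour
        y = 0.002*x**3 - 0.10*x**2 + 1.2*x + rng.normal(0, 0.05, x.size)

        coeffs = np.polyfit(x, y, deg=3)                # cubic contour curve
        contour = np.poly1d(coeffs)                     # callable fitted contour
        print(np.round(coeffs, 3))                      # ~[0.002, -0.1, 1.2, 0.0]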

  10. Synchronization of a Class of Memristive Stochastic Bidirectional Associative Memory Neural Networks with Mixed Time-Varying Delays via Sampled-Data Control

    Directory of Open Access Journals (Sweden)

    Manman Yuan

    2018-01-01

    Full Text Available The paper addresses the issue of synchronization of memristive bidirectional associative memory neural networks (MBAMNNs) with mixed time-varying delays and stochastic perturbation via a sampled-data controller. First, we propose a new model of MBAMNNs with mixed time-varying delays, where the mixed delays include time-varying distributed delays and discrete delays. Second, we design a new sampled-data control method for the stochastic MBAMNNs. Traditional control methods lack the capability of reflecting variable synaptic weights; here the methods are carefully designed to ensure that the synchronization processes suit the features of the memristor. Third, sufficient criteria guaranteeing the synchronization of the systems are derived based on the drive-response concept. Finally, the effectiveness of the proposed mechanism is validated with numerical experiments.

  11. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation, we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations, each with an associated cost, to transform the time-series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
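
    To make the segment-to-segment cost idea concrete, here is a deliberately simplified toy cost: event times in consecutive windows are paired greedily and charged for time shifts, with a penalty lambda_0 for unmatched events. The published TACTS cost model is more elaborate (optimised matching, amplitude terms), so treat this purely as an illustration of turning an irregularly sampled record into a regularly sampled cost series.

        import numpy as np

        def segment_cost(seg_a, seg_b, lambda_0=1.0):
            # pair events in time order and charge |shift|; unmatched events
            # cost lambda_0 each (toy version of a transformation cost)
            a, b = sorted(seg_a), sorted(seg_b)
            n = min(len(a), len(b))
            shift = sum(abs(x - y) for x, y in zip(a[:n], b[:n]))
            return shift + lambda_0 * (max(len(a), len(b)) - n)

        times = np.sort(np.random.default_rng(0).uniform(0, 100, 200))
        windows = [times[(times >= t) & (times < t + 10)] - t for t in range(0, 90, 5)]
        tacts = [segment_cost(windows[i], windows[i + 1]) for i in range(len(windows) - 1)]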

  13. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...

  14. Sampling and Timing: A Task for the Environmental Process

    NARCIS (Netherlands)

    Hilderink, G.H.; Broenink, Johannes F.

    2003-01-01

    Sampling and timing is considered a responsibility of the environment of controller software. In this paper we will illustrate a concept whereby an environmental process and multi-way events play an important role in applying timing for untimed CSP software architectures. We use this timing concept

  15. Design Recovery Technology for Real-Time Systems.

    Science.gov (United States)

    1995-10-01

    RL-TR-95-208, Final Technical Report, October 1995: Design Recovery Technology for Real-Time Systems. The MITRE Corporation; Lester J. Holtzblatt, Richard Piazza, and Susan... (report period ...92 - Jan 95). ...behavior of real-time systems in general, our initial efforts have centered on recovering this information from one system in particular, the Modular

  16. Design/Operations review of core sampling trucks and associated equipment

    International Nuclear Information System (INIS)

    Shrivastava, H.P.

    1996-01-01

    A systematic review of the design and operations of the core sampling trucks was commissioned by Characterization Equipment Engineering of the Westinghouse Hanford Company in October 1995. The review team reviewed the design documents, specifications, operating procedure, training manuals and safety analysis reports. The review process, findings and corrective actions are summarized in this supporting document

  17. Target Tracking of a Linear Time Invariant System under Irregular Sampling

    Directory of Open Access Journals (Sweden)

    Jin Xue-Bo

    2012-11-01

    Full Text Available Due to event-triggered sampling, or the aim of reducing data storage, many tracking applications encounter irregular sampling times. By calculating the matrix exponential using an inverse Laplace transform, this paper transforms the irregular-sampling tracking problem into one of tracking with time-varying system parameters. Using the common Kalman filter, the developed method is applied to track a target in a simulated trajectory and in video tracking. Simulation experiments show that good estimation performance can be obtained even when the measurement sampling times are highly irregular.
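
    A minimal sketch of the core idea, assuming a continuous-time constant-velocity model: the irregular interval dt enters the standard Kalman recursion only through the transition matrix F(dt) = expm(A*dt), computed numerically here rather than via the inverse Laplace transform used in the paper, together with a crude dt-scaled process noise.

        import numpy as np
        from scipy.linalg import expm

        A = np.array([[0.0, 1.0],            # d/dt [position, velocity]
                      [0.0, 0.0]])
        H = np.array([[1.0, 0.0]])           # position-only measurements
        Q_rate, R = 0.01, 0.25               # assumed noise levels

        def kalman_step(x, P, z, dt):
            F = expm(A * dt)                 # time-varying parameter from dt
            Q = Q_rate * dt * np.eye(2)      # simple process-noise scaling
            x, P = F @ x, F @ P @ F.T + Q    # predict over the irregular step
            S = H @ P @ H.T + R              # innovation variance
            K = P @ H.T / S                  # Kalman gain
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            return x, P

        x, P = np.zeros(2), np.eye(2)
        for dt, z in [(0.10, 0.0), (0.45, 0.9), (0.05, 1.0)]:   # irregular steps
            x, P = kalman_step(x, P, z, dt)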

  18. Practical iterative learning control with frequency domain design and sampled data implementation

    CERN Document Server

    Wang, Danwei; Zhang, Bin

    2014-01-01

    This book is on iterative learning control (ILC), with a focus on design and implementation. We approach ILC design based on frequency-domain analysis and address ILC implementation based on sampled-data methods. This is the first book to treat ILC from frequency-domain and sampled-data methodologies. The frequency-domain design methods offer ILC users insights into convergence performance, which is of practical benefit. This book presents a comprehensive framework with various methodologies to ensure that the learnable bandwidth in the ILC system is set with a balance between learning performance and learning stability. The sampled-data implementation ensures effective execution of ILC in practical dynamic systems. The presented sampled-data ILC methods also ensure the balance of performance and stability of the learning process. Furthermore, the presented theories and methodologies are tested with an ILC-controlled robotic system. The experimental results show that the machines can work in much h...

  19. Sample requirements and design of an inter-laboratory trial for radiocarbon laboratories

    International Nuclear Information System (INIS)

    Bryant, Charlotte; Carmi, Israel; Cook, Gordon; Gulliksen, Steinar; Harkness, Doug; Heinemeier, Jan; McGee, Edward; Naysmith, Philip; Possnert, Goran; Scott, Marian; Plicht, Hans van der; Strydonck, Mark van

    2000-01-01

    An on-going inter-comparison programme which is focused on assessing and establishing consensus protocols to be applied in the identification, selection and sub-sampling of materials for subsequent 14C analysis is described. The outcome of the programme will provide a detailed quantification of the uncertainties associated with 14C measurements, including the issues of accuracy and precision. Such projects have become recognised as a fundamental aspect of continuing laboratory quality assurance schemes, providing a mechanism for the harmonisation of measurements and for demonstrating the traceability of results. The design of this study and its rationale are described. In summary, a suite of core samples has been defined which will be made available to both AMS and radiometric laboratories. These core materials are representative of routinely dated material and their ages span the full range of the applied 14C time-scale. Two of the samples are of wood from the German and Irish dendrochronologies, thus providing a direct connection to the master dendrochronological calibration curve. Further samples link this new inter-comparison to past studies. Sample size and precision have been identified as being of paramount importance in defining dating confidence, and so several core samples have been identified for more in-depth study of these practical issues. In addition to the core samples, optional samples have been identified and prepared specifically for either AMS and/or radiometric laboratories. For AMS laboratories, these include bone, textile, leather and parchment samples. Participation in the study requires a commitment to a minimum of 10 core analyses, with results to be returned within a year

  20. Design of sampling tools for Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Zhang Baoyin; Deng Li

    2012-01-01

    A class of sampling tools for the general Monte Carlo particle transport code JMCT is designed. Two ways are provided to sample from distributions: special sampling methods for special distributions, and general sampling methods for arbitrary discrete distributions and one-dimensional continuous distributions on a finite interval. Some open-source codes are included in the general sampling method for the maximum convenience of users. The sampling results show that distributions popular in particle transport can be sampled correctly with these tools, and that the user's convenience is assured. (authors)
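
    As an illustration of a general sampling method for an arbitrary discrete distribution, a table-lookup inverse-CDF sampler can be written in a few lines; this is a generic textbook construction, not JMCT's actual implementation.

        import bisect, random

        def make_discrete_sampler(probs, rng=random.Random(0)):
            # precompute the cumulative table once, then sample by bisection
            cdf, total = [], 0.0
            for p in probs:
                total += p
                cdf.append(total)
            return lambda: bisect.bisect_left(cdf, rng.random() * total)

        sample = make_discrete_sampler([0.2, 0.5, 0.3])
        counts = [0, 0, 0]
        for _ in range(10000):
            counts[sample()] += 1
        print(counts)   # roughly [2000, 5000, 3000]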

  1. Designing Garments to Evolve Over Time

    DEFF Research Database (Denmark)

    Riisberg, Vibeke; Grose, Lynda

    2017-01-01

    This paper proposes a REDO of the current fashion paradigm by investigating how garments might be designed to evolve over time. The purpose is to discuss ways of expanding the traditional role of the designer to include temporal dimensions of creating, producing and using clothes and to suggest...... to a REDO of design education, to further research and the future fashion and textile industry....

  2. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  3. Non-Cartesian MRI scan time reduction through sparse sampling

    NARCIS (Netherlands)

    Wajer, F.T.A.W.

    2001-01-01

    Magnetic resonance imaging (MRI) signals are measured in the Fourier domain, also called k-space. Samples of the MRI signal cannot be taken at will, but lie along k-space trajectories determined by the magnetic field gradients. MRI

  4. The Study on Mental Health at Work: Design and sampling.

    Science.gov (United States)

    Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit

    2017-08-01

    The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. S-MGA is a representative study of German employees aged 31-60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. Firstly, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Secondly, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used for comparing population, gross sample and respondents. In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment.

  6. Corganiser: a web-based software tool for planning time-sensitive sampling of whole rounds during scientific drilling

    DEFF Research Database (Denmark)

    Marshall, Ian

    2014-01-01

    Corganiser is a software tool developed to simplify the process of preparing whole-round sampling plans for time-sensitive microbiology and geochemistry sampling during scientific drilling. It was developed during the Integrated Ocean Drilling Program (IODP) Expedition 347, but is designed to work with a wide range of core and section configurations and can thus be used in future drilling projects. Corganiser is written in the Python programming language and is implemented both as a graphical web interface and a command-line interface. It can be accessed online at http://130.226.247.137/.

  7. In-Sample Confidence Bands and Out-of-Sample Forecast Bands for Time-Varying Parameters in Observation Driven Models

    NARCIS (Netherlands)

    Blasques, F.; Koopman, S.J.; Lasak, K.A.; Lucas, A.

    2016-01-01

    We study the performances of alternative methods for calculating in-sample confidence and out-of-sample forecast bands for time-varying parameters. The in-sample bands reflect parameter uncertainty, while the out-of-sample bands reflect not only parameter uncertainty, but also innovation

  8. Sampling effects on the identification of roadkill hotspots: Implications for survey design.

    Science.gov (United States)

    Santos, Sara M; Marques, J Tiago; Lourenço, André; Medinas, Denis; Barbosa, A Márcia; Beja, Pedro; Mira, António

    2015-10-01

    Although locating wildlife roadkill hotspots is essential to mitigate road impacts, the influence of study design on hotspot identification remains uncertain. We evaluated how sampling frequency affects the accuracy of hotspot identification, using a dataset of vertebrate roadkills (n = 4427) recorded over a year of daily surveys along 37 km of roads. "True" hotspots were identified using this baseline dataset, as the 500-m segments where the number of road-killed vertebrates exceeded the upper 95% confidence limit of the mean, assuming a Poisson distribution of roadkills per segment. "Estimated" hotspots were identified likewise, using datasets representing progressively lower sampling frequencies, which were produced by extracting data from the baseline dataset at appropriate time intervals (1-30 days). Overall, 24.3% of segments were "true" hotspots, concentrating 40.4% of roadkills. For different groups, "true" hotspots accounted for 6.8% (bats) to 29.7% (small birds) of road segments, concentrating up to 60% (lizards, lagomorphs, carnivores) of roadkills. Spatial congruence between "true" and "estimated" hotspots declined rapidly with increasing time interval between surveys, due primarily to increasing false negatives (i.e., missed "true" hotspots). There were also false positives (i.e., wrong "estimated" hotspots), particularly at low sampling frequencies. The decay in spatial accuracy with increasing time interval between surveys was greater for smaller-bodied (amphibians, reptiles, small birds, small mammals) than for larger-bodied species (birds of prey, hedgehogs, lagomorphs, carnivores). Results suggest that widely used surveys at weekly or longer intervals may produce poor estimates of roadkill hotspots, particularly for small-bodied species. Surveying daily or at two-day intervals may be required to achieve high accuracy in hotspot identification for multiple species. Copyright © 2015 Elsevier Ltd. All rights reserved.
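
    The hotspot definition used in the study translates almost directly into code: flag segments whose kill counts exceed the upper confidence limit of a Poisson distribution fitted to the mean kills per segment. A minimal sketch with invented counts:

        import numpy as np
        from scipy.stats import poisson

        def hotspot_segments(counts, conf=0.95):
            # segments whose counts exceed the Poisson upper confidence limit
            counts = np.asarray(counts)
            threshold = poisson.ppf(conf, mu=counts.mean())
            return np.where(counts > threshold)[0]

        print(hotspot_segments([0, 1, 2, 9, 1, 0, 14, 2, 1, 3]))   # -> [3 6]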

  9. AN EVALUATION OF PRIMARY DATA-COLLECTION MODES IN AN ADDRESS-BASED SAMPLING DESIGN.

    Science.gov (United States)

    Amaya, Ashley; Leclere, Felicia; Carris, Kari; Liao, Youlian

    2015-01-01

    As address-based sampling becomes increasingly popular for multimode surveys, researchers continue to refine data-collection best practices. While much work has been conducted to improve efficiency within a given mode, additional research is needed on how multimode designs can be optimized across modes. Previous research has not evaluated the consequences of mode sequencing on multimode mail and phone surveys, nor has significant research been conducted to evaluate mode sequencing on a variety of indicators beyond response rates. We conducted an experiment within the Racial and Ethnic Approaches to Community Health across the U.S. Risk Factor Survey (REACH U.S.) to evaluate two multimode case-flow designs: (1) phone followed by mail (phone-first) and (2) mail followed by phone (mail-first). We compared response rates, cost, timeliness, and data quality to identify differences across case-flow design. Because surveys often differ on the rarity of the target population, we also examined whether changes in the eligibility rate altered the choice of optimal case flow. Our results suggested that, on most metrics, the mail-first design was superior to the phone-first design. Compared with phone-first, mail-first achieved a higher yield rate at a lower cost with equivalent data quality. While the phone-first design initially achieved more interviews compared to the mail-first design, over time the mail-first design surpassed it and obtained the greatest number of interviews.

  10. An Optimization-Based Reconfigurable Design for a 6-Bit 11-MHz Parallel Pipeline ADC with Double-Sampling S&H

    Directory of Open Access Journals (Sweden)

    Wilmar Carvajal

    2012-01-01

    Full Text Available This paper presents a 6 bit, 11 MS/s time-interleaved pipeline A/D converter design. The specification process, from block level to elementary circuits, is gradually covered to draw a design methodology. Both power consumption and mismatch between the parallel chain elements are intended to be reduced by using some techniques such as double and bottom-plate sampling, fully differential circuits, RSD digital correction, and geometric programming (GP optimization of the elementary analog circuits (OTAs and comparators design. Prelayout simulations of the complete ADC are presented to characterize the designed converter, which consumes 12 mW while sampling a 500 kHz input signal. Moreover, the block inside the ADC with the most stringent requirements in power, speed, and precision was sent to fabrication in a CMOS 0.35 μm AMS technology, and some postlayout results are shown.

  11. Space-Time Code Designs for Broadband Wireless Communications

    National Research Council Canada - National Science Library

    Xia, Xiang-Gen

    2005-01-01

    The goal of this research is to design new space-time codes, such as complex orthogonal space-time block codes with rates above 1/2 from complex orthogonal designs, for QAM, PSK, and CPM signals...

  12. A weighted sampling algorithm for the design of RNA sequences with targeted secondary structure and nucleotide distribution.

    Science.gov (United States)

    Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme

    2013-07-01

    The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most current software uses similar local search strategies (i.e. a random seed is progressively adapted to acquire the desired folding properties) and, more importantly, does not allow the user to explicitly control the nucleotide distribution, such as the GC-content, of the sequences. The latter is, however, an important criterion for large-scale applications, as it could presumably be used to design sequences with better transcription rates and/or structural plasticity. In this article, we introduce IncaRNAtion, a novel algorithm to design RNA sequences folding into target secondary structures with a predefined nucleotide distribution. IncaRNAtion uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e. running time comparable to or better than local search methods), seedless (we remove the bias of the seed in local search heuristics), and successfully generates high-quality sequences (i.e. thermodynamically stable) for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, our 'glocal' methodology outperforms both local and global approaches for sampling sequences with a specific GC-content and target structure. IncaRNAtion is available at csb.cs.mcgill.ca/incarnation/. Supplementary data are available at Bioinformatics online.
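
    The GC-content control can be illustrated with a stripped-down weighted sampler: nucleotide weights are set so the expected GC fraction of sampled positions equals the target. This ignores the secondary-structure constraints that IncaRNAtion actually handles, so it is only a sketch of the weighting idea.

        import random

        def sample_sequence(length, gc_target, rng=random.Random(0)):
            # weights chosen so E[GC fraction] == gc_target
            w = {"G": gc_target / 2, "C": gc_target / 2,
                 "A": (1 - gc_target) / 2, "U": (1 - gc_target) / 2}
            return "".join(rng.choices(list(w), weights=list(w.values()), k=length))

        seq = sample_sequence(300, gc_target=0.6)
        print(round((seq.count("G") + seq.count("C")) / len(seq), 2))   # ~0.6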

  13. Adding Timing Requirements to the CODARTS Real-Time Software Design Method

    DEFF Research Database (Denmark)

    Bach, K.R.

    The CODARTS software design method considers how concurrent, distributed and real-time applications can be designed. Although accounting for the important issues of tasking and communication, the method does not provide means for expressing the timeliness of the tasks and communication directly...

  14. Dual Source Time-of-flight Mass Spectrometer and Sample Handling System

    Science.gov (United States)

    Brinckerhoff, W.; Mahaffy, P.; Cornish, T.; Cheng, A.; Gorevan, S.; Niemann, H.; Harpold, D.; Rafeek, S.; Yucht, D.

    We present details of an instrument under development for potential NASA missions to planets and small bodies. The instrument comprises a dual ionization source (laser and electron impact) time-of-flight mass spectrometer (TOF-MS) and a carousel sample handling system for in situ analysis of solid materials acquired by, e.g., a coring drill. This DSTOF instrument could be deployed on a fixed lander or a rover, and has an open design that would accommodate measurements by additional instruments. The sample handling system (SHS) is based on a multi-well carousel, originally designed for Champollion/DS4. Solid samples, in the form of drill cores or as loose chips or fines, are inserted through an access port, sealed in vacuum, and transported around the carousel to a pyrolysis cell and/or directly to the TOF-MS inlet. Samples at the TOF-MS inlet are xy-addressable for laser or optical microprobe. Cups may be ejected from their holders for analyzing multiple samples or caching them for return. Samples are analyzed with laser desorption and evolved-gas/electron-impact sources. The dual ion source permits studies of elemental, isotopic, and molecular composition of unprepared samples with a single mass spectrometer. Pulsed laser desorption permits the measurement of abundance and isotope ratios of refractory elements, as well as the detection of high-mass organic molecules in solid samples. Evolved gas analysis permits similar measurements of the more volatile species in solids and aerosols. The TOF-MS is based on previous miniature prototypes at JHU/APL that feature high sensitivity and a wide mass range. The laser mode, in which the sample cup is directly below the TOF-MS inlet, permits both ablation and desorption measurements, to cover elemental and molecular species, respectively. In the evolved gas mode, sample cups are raised into a small pyrolysis cell and heated, producing a neutral gas that is electron ionized and pulsed into the TOF-MS. (Any imaging

  15. Weighted statistical parameters for irregularly sampled time series

    Science.gov (United States)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
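
    One simple instance of such a density-adapted weighting (not necessarily the paper's exact scheme) weights each observation by half the time span to its neighbours, so clumped points are down-weighted and isolated points up-weighted:

        import numpy as np

        def interval_weights(t):
            # weight ~ the stretch of time each sample "covers"
            t = np.sort(np.asarray(t, float))
            gaps = np.diff(t)
            w = np.empty_like(t)
            w[0], w[-1] = gaps[0] / 2, gaps[-1] / 2
            w[1:-1] = (gaps[:-1] + gaps[1:]) / 2
            return w / w.sum()

        t = np.array([0.0, 0.1, 0.2, 5.0, 10.0])   # a clump followed by gaps
        x = np.array([1.0, 1.1, 0.9, 3.0, 5.0])
        w = interval_weights(t)
        print(np.sum(w * x))   # weighted mean, far less dominated by the clump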

  16. Constraining designs for synthesis and timing analysis a practical guide to synopsys design constraints (SDC)

    CERN Document Server

    Gangadharan, Sridhar

    2013-01-01

    This book serves as a hands-on guide to timing constraints in integrated circuit design. Readers will learn to maximize the performance of their IC designs by specifying timing requirements correctly. Coverage includes key aspects of the design flow impacted by timing constraints, including synthesis, static timing analysis and placement and routing. Concepts needed for specifying timing requirements are explained in detail and then applied to specific stages in the design flow, all within the context of Synopsys Design Constraints (SDC), the industry-leading format for specifying constraints. The book provides a hands-on guide to synthesis and timing analysis using SDC; includes key topics of interest to a synthesis, static timing analysis or place-and-route engineer; and explains which constraints command to use for ease of maintenance and reuse, given several options pos...

  17. Degradation of hydrocarbons in soil samples analyzed within accepted analytical holding times

    International Nuclear Information System (INIS)

    Jackson, J.; Thomey, N.; Dietlein, L.F.

    1992-01-01

    Samples which are collected in conjunction with subsurface investigations at leaking petroleum storage tank sites and petroleum refineries are routinely analyzed for benzene, toluene, ethylbenzene, xylenes (BTEX), and total petroleum hydrocarbons (TPH). Water samples are preserved by the addition of hydrochloric acid and maintained at four degrees centigrade prior to analysis. This is done to prevent bacterial degradation of hydrocarbons. Chemical preservation is not presently performed on soil samples. Instead, the samples are cooled and maintained at four degrees centigrade. This study was done to measure the degree of degradation of hydrocarbons in soil samples which are analyzed within accepted holding times. Soil samples were collected and representative subsamples were prepared from the initial sample. Subsamples were analyzed in triplicate for BTEX and TPH throughout the length of the approved holding times to measure the extent of sample constituent degradation prior to analysis. Findings imply that for sandy soils, BTEX and TPH concentrations can be highly dependent upon the length of time which elapses between sample collection and analysis

  18. Reducing Design Cycle Time and Cost Through Process Resequencing

    Science.gov (United States)

    Rogers, James L.

    2004-01-01

    In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.

  19. A clinical trial design using the concept of proportional time using the generalized gamma ratio distribution.

    Science.gov (United States)

    Phadnis, Milind A; Wetmore, James B; Mayo, Matthew S

    2017-11-20

    Traditional methods of sample size and power calculations in clinical trials with a time-to-event end point are based on the logrank test (and its variations), Cox proportional hazards (PH) assumption, or comparison of means of 2 exponential distributions. Of these, sample size calculation based on PH assumption is likely the most common and allows adjusting for the effect of one or more covariates. However, when designing a trial, there are situations when the assumption of PH may not be appropriate. Additionally, when it is known that there is a rapid decline in the survival curve for a control group, such as from previously conducted observational studies, a design based on the PH assumption may confer only a minor statistical improvement for the treatment group that is neither clinically nor practically meaningful. For such scenarios, a clinical trial design that focuses on improvement in patient longevity is proposed, based on the concept of proportional time using the generalized gamma ratio distribution. Simulations are conducted to evaluate the performance of the proportional time method and to identify the situations in which such a design will be beneficial as compared to the standard design using a PH assumption, piecewise exponential hazards assumption, and specific cases of a cure rate model. A practical example in which hemorrhagic stroke patients are randomized to 1 of 2 arms in a putative clinical trial demonstrates the usefulness of this approach by drastically reducing the number of patients needed for study enrollment. Copyright © 2017 John Wiley & Sons, Ltd.

  20. A confirmatory holding time study for purgeable VOCs in water samples

    International Nuclear Information System (INIS)

    West, O.R.; Bayne, C.K.; Siegrist, R.L.; Holden, W.H.; Bottrell, D.W.

    1996-01-01

    Analyte stability during pre-analytical storage is essential to the accurate quantification of contaminants in environmental samples. This is particularly true for volatile organic compounds (VOCs), which can easily volatilize and/or degrade during sample storage. Recognizing this, regulatory agencies require that water samples be collected in vials without headspace and stored at 4 degrees C, and that analyses be conducted within 14 days, even if samples are acid-preserved. Since the selection of a 14-day holding time was largely arbitrary, the appropriateness of this requirement must be re-evaluated. The goal of the study described here was to provide regulatory agencies with the necessary data to extend the maximum holding time for properly preserved VOC water samples to 28 days

  1. Statistical aspects of quantitative real-time PCR experiment design.

    Science.gov (United States)

    Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales

    2010-04-01

    Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, that gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
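
    The variance bookkeeping behind such power predictions is straightforward: in a nested design, each level's variance component is divided by the product of replicate counts down to that level. The sketch below uses invented variance components and mirrors the standard nested-design formula, not powerNest's internals.

        def mean_variance(var_levels, reps):
            # variance of a group mean in a nested design; var_levels and reps
            # are ordered from the top (biological) level downwards
            v, denom = 0.0, 1.0
            for var, n in zip(var_levels, reps):
                denom *= n
                v += var / denom
            return v

        # e.g. biological, reverse-transcription and qPCR replicate variances
        # with 6 subjects x 2 RT replicates x 3 PCR replicates:
        print(mean_variance([0.80, 0.20, 0.05], [6, 2, 3]))   # ~0.151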

  2. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 2: Extension to time-frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is however defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Monte Carlo Markov chain methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.

  3. Design of a gravity corer for near shore sediment sampling

    Digital Repository Service at National Institute of Oceanography (India)

    Bhat, S.T.; Sonawane, A.V.; Nayak, B.U.

    For the purpose of geotechnical investigation, a gravity corer has been designed and fabricated to obtain undisturbed sediment core samples from near-shore waters. The corer was successfully operated at 75 stations up to a water depth of 30 m. Simplicity...

  4. The significance of sampling time in therapeutic drug monitoring of clozapine

    DEFF Research Database (Denmark)

    Jakobsen, M I; Larsen, J R; Svensson, C K

    2017-01-01

    OBJECTIVE: Therapeutic drug monitoring (TDM) of clozapine is standardized to 12-h postdose samplings. In clinical settings, sampling time often deviates from this time point, although the importance of the deviation is unknown. To this end, serum concentrations (s-) of clozapine and its metabolite N-desmethyl-clozapine (norclozapine) were measured at 12 ± 1 and 2 h postdose. METHOD: Forty-six patients with a diagnosis of schizophrenia, and on stable clozapine treatment, were enrolled for hourly, venous blood sampling at 10-14 h postdose. RESULTS: Minor changes in median percentage values were...

  5. A binary logistic regression model with complex sampling design of ...

    African Journals Online (AJOL)

    2017-09-03

    Bi-variable and multi-variable binary logistic regression models with a complex sampling design were fitted. Data were entered into STATA-12 and analyzed using SPSS-21.

  6. Are quantitative trait-dependent sampling designs cost-effective for analysis of rare and common variants?

    Science.gov (United States)

    Yilmaz, Yildiz E; Bull, Shelley B

    2011-11-29

    Use of trait-dependent sampling designs in whole-genome association studies of sequence data can reduce total sequencing costs with modest losses of statistical efficiency. In a quantitative trait (QT) analysis of data from the Genetic Analysis Workshop 17 mini-exome for unrelated individuals in the Asian subpopulation, we investigate alternative designs that sequence only 50% of the entire cohort. In addition to a simple random sampling design, we consider extreme-phenotype designs that are of increasing interest in genetic association analysis of QTs, especially in studies concerned with the detection of rare genetic variants. We also evaluate a novel sampling design in which all individuals have a nonzero probability of being selected into the sample but in which individuals with extreme phenotypes have a proportionately larger probability. We take differential sampling of individuals with informative trait values into account by inverse probability weighting using standard survey methods which thus generalizes to the source population. In replicate 1 data, we applied the designs in association analysis of Q1 with both rare and common variants in the FLT1 gene, based on knowledge of the generating model. Using all 200 replicate data sets, we similarly analyzed Q1 and Q4 (which is known to be free of association with FLT1) to evaluate relative efficiency, type I error, and power. Simulation study results suggest that the QT-dependent selection designs generally yield greater than 50% relative efficiency compared to using the entire cohort, implying cost-effectiveness of 50% sample selection and worthwhile reduction of sequencing costs.
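
    The inverse-probability-weighting step that generalises a phenotype-dependent sample back to the source population can be sketched as weighted least squares with weights 1/Pr(selected). Everything below is simulated, and the selection rule is an invented stand-in for the designs compared in the workshop data.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 2000
        g = rng.binomial(2, 0.3, n)                  # genotype (0/1/2 copies)
        q = 0.5 * g + rng.normal(0, 1, n)            # quantitative trait

        # every individual selectable, extreme phenotypes more so
        p_sel = 0.2 + 0.6 * (np.abs(q - q.mean()) > q.std())
        sel = rng.random(n) < p_sel

        # inverse-probability-weighted least squares on the sequenced subset
        w = 1.0 / p_sel[sel]
        X = np.column_stack([np.ones(sel.sum()), g[sel]])
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ q[sel])
        print(beta)   # ~[0, 0.5]: effect estimate generalises to the cohort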

  7. A novel method for sampling the suspended sediment load in the tidal environment using bi-directional time-integrated mass-flux sediment (TIMS) samplers

    Science.gov (United States)

    Elliott, Emily A.; Monbureau, Elaine; Walters, Glenn W.; Elliott, Mark A.; McKee, Brent A.; Rodriguez, Antonio B.

    2017-12-01

    Identifying the source and abundance of sediment transported within tidal creeks is essential for studying the connectivity between coastal watersheds and estuaries. The fine-grained suspended sediment load (SSL) makes up a substantial portion of the total sediment load carried within an estuarine system and efficient sampling of the SSL is critical to our understanding of nutrient and contaminant transport, anthropogenic influence, and the effects of climate. Unfortunately, traditional methods of sampling the SSL, including instantaneous measurements and automatic samplers, can be labor intensive, expensive and often yield insufficient mass for comprehensive geochemical analysis. In estuaries this issue is even more pronounced due to bi-directional tidal flow. This study tests the efficacy of a time-integrated mass sediment sampler (TIMS) design, originally developed for uni-directional flow within the fluvial environment, modified in this work for implementation the tidal environment under bi-directional flow conditions. Our new TIMS design utilizes an 'L' shaped outflow tube to prevent backflow, and when deployed in mirrored pairs, each sampler collects sediment uniquely in one direction of tidal flow. Laboratory flume experiments using dye and particle image velocimetry (PIV) were used to characterize the flow within the sampler, specifically, to quantify the settling velocities and identify stagnation points. Further laboratory tests of sediment indicate that bidirectional TIMS capture up to 96% of incoming SSL across a range of flow velocities (0.3-0.6 m s-1). The modified TIMS design was tested in the field at two distinct sampling locations within the tidal zone. Single-time point suspended sediment samples were collected at high and low tide and compared to time-integrated suspended sediment samples collected by the bi-directional TIMS over the same four-day period. Particle-size composition from the bi-directional TIMS were representative of the array of

  8. A Discrete-Time Chattering Free Sliding Mode Control with Multirate Sampling Method for Flight Simulator

    Directory of Open Access Journals (Sweden)

    Yunjie Wu

    2013-01-01

    Full Text Available In order to improve the tracking accuracy of a flight simulator and expand its frequency response, a multirate-sampling-method-based discrete-time chattering-free sliding mode control is developed and introduced into the system. By constructing the multirate sampling sliding mode controller, the flight simulator can accurately track a given reference signal with an arbitrarily small dynamic tracking error, and the problems caused by the contradiction between reference-signal period and control period in the traditional design method are eliminated. Theoretical analysis proves that extremely high dynamic tracking precision can be obtained. Meanwhile, robustness is guaranteed by sliding mode control even in the presence of modeling mismatch, external disturbances and measurement noise. The validity of the proposed method is confirmed by experiments on a flight simulator.

  9. Sampling methods for rumen microbial counts by Real-Time PCR techniques

    Directory of Open Access Journals (Sweden)

    S. Puppo

    2010-02-01

    Full Text Available Fresh rumen samples were withdrawn from 4 cannulated buffalo females fed a fibrous diet in order to quantify bacteria concentration in the rumen by Real-Time PCR techniques. To obtain DNA of good quality from whole rumen fluid, eight different pre-filtration methods (M1-M8: cheesecloth, glass-fibre and nylon filters, in combination with centrifugation speeds of 1000, 5000 and 14,000 rpm) were tested. Genomic DNA extraction was performed either on fresh or frozen (-20°C) samples. The quantitative bacteria analysis was realized according to the Real-Time PCR procedure for Butyrivibrio fibrisolvens reported in the literature. M5 proved the best sampling procedure, allowing a suitable genomic DNA to be obtained. No differences were revealed between fresh and frozen samples.

  10. Sample Processor for Life on Icy Worlds (SPLIce): Design and Test Results

    Science.gov (United States)

    Chinn, Tori N.; Lee, Anthony K.; Boone, Travis D.; Tan, Ming X.; Chin, Matthew M.; McCutcheon, Griffin C.; Horne, Mera F.; Padgen, Michael R.; Blaich, Justin T.; Forgione, Joshua B.; hide

    2017-01-01

    We report the design, development, and testing of the Sample Processor for Life on Icy Worlds (SPLIce) system, a microfluidic sample processor to enable autonomous detection of signatures of life and measurements of habitability parameters in Ocean Worlds. This monolithic fluid processing-and-handling system (Figure 1; mass 0.5 kg) retrieves a 50-µL-volume sample and prepares it to supply a suite of detection instruments, each with unique preparation needs. SPLIce has potential applications in orbiter missions that sample ocean plumes, such as those of Saturn's icy moon Enceladus, or landed missions on the surface of icy satellites, such as Jupiter's moon Europa. Answering the question "Are we alone in the universe?" is captivating and exceptionally challenging. Even general criteria that define life very broadly include a significant role for water [1,2]. Searches for extinct or extant life therefore prioritize locations of abundant water, whether in ancient (Mars) or present (Europa and Enceladus) times. Only two previous planetary missions had onboard fluid processing: the Viking Biology Experiments [3] and Phoenix's Wet Chemistry Laboratory (WCL) [4]. SPLIce differs crucially from those systems, including its capability to process and distribute µL-volume samples and its integrated autonomous control of a wide range of fluidic functions, including: 1) retrieval of fluid samples from an evacuated sample chamber; 2) onboard multi-year storage of dehydrated reagents; 3) integrated pressure, pH, and conductivity measurement; 4) filtration and retention of insoluble particles for microscopy; 5) dilution or vacuum-driven concentration of samples to accommodate instrument working ranges; 6) removal of gas bubbles from sample aliquots; 7) unidirectional flow (check valves); 8) active flow-path selection (solenoid-actuated valves); 9) metered pumping in 100 nL volume increments. The SPLIce manifold, made of three thermally fused layers of precision-machined cyclo

  11. Analysis of sample composition using resonant ionization and time-of-flight techniques

    International Nuclear Information System (INIS)

    Cruz, A. de la; Ortiz, M.; Campos, J.

    1995-01-01

    This paper describes the setting up of a linear time-of-flight mass spectrometer that uses a tunable laser to produce resonant ionization of atoms and molecules in a pulsed supersonic beam. The ability of this kind of system to produce time-resolved signals for each species present in the sample allows quantitative analysis of its composition. By using a tunable laser beam of high spectral resolution to produce ionization, studies based on the structure of the photoionization spectra obtained are possible. In the present work several isotopic species of ordinary and deuterated benzene have been studied. Special care has been dedicated to the influence of the presence of a 13C in the ring. In this way values for spectroscopic constants and isotopic shifts have been obtained. Another system based on a homemade proportional counter has been designed and used as an auxiliary system. The results obtained with it are independent of those mentioned above and compatible with them. This system is of great utility for laser wavelength tuning to produce ionization in the mass spectrometer. (Author) 98 refs

  13. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities, such as the proportion of true null hypotheses and the distribution of effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool for determining the sample size of high-dimensional studies when, in the planning phase, there is high uncertainty regarding the expected effect sizes and variability.
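
    Two ingredients of such a design are easy to show in code: a Storey-type interim estimate of the proportion of true nulls (one of the unknowns the second-stage sample size is tuned against) and the Benjamini-Hochberg step-up rule applied at the final analysis. This sketch omits the sample-size re-estimation itself.

        import numpy as np

        def pi0_estimate(pvals, lam=0.5):
            # Storey-type estimate: null p-values are uniform on (lam, 1]
            p = np.asarray(pvals)
            return min(1.0, p[p > lam].size / (p.size * (1 - lam)))

        def bh_threshold(pvals, fdr=0.05):
            # Benjamini-Hochberg step-up rule; returns the rejection cutoff
            p = np.sort(np.asarray(pvals))
            k = np.arange(1, p.size + 1)
            passed = np.nonzero(p <= fdr * k / p.size)[0]
            return p[passed[-1]] if passed.size else 0.0

        rng = np.random.default_rng(0)
        p = np.concatenate([rng.uniform(0, 1, 900),      # true nulls
                            rng.uniform(0, 1e-4, 100)])  # strong signals
        print(pi0_estimate(p), bh_threshold(p))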

  14. Preconcentration and determination of ceftazidime in real samples using dispersive liquid-liquid microextraction and high-performance liquid chromatography with the aid of experimental design.

    Science.gov (United States)

    Razmi, Rasoul; Shahpari, Behrouz; Pourbasheer, Eslam; Boustanifar, Mohammad Hasan; Azari, Zhila; Ebadi, Amin

    2016-11-01

    A rapid and simple method for the extraction and preconcentration of ceftazidime in aqueous samples has been developed using dispersive liquid-liquid microextraction followed by high-performance liquid chromatography analysis. The extraction parameters, such as the volume of extraction solvent and disperser solvent, salt effect, sample volume, centrifuge rate, centrifuge time, extraction time, and temperature in the dispersive liquid-liquid microextraction process, were studied and optimized with experimental design methods. Firstly, the Taguchi design was used for the preliminary screening of the parameters, and then the fractional factorial design was used to optimize the significant factors. At the optimum conditions, the calibration curves for ceftazidime indicated good linearity over the range of 0.001-10 μg/mL with correlation coefficients higher than 0.98, and the limits of detection were 0.13 and 0.17 ng/mL for water and urine samples, respectively. The proposed method was successfully employed to determine ceftazidime in water and urine samples, and good agreement between the experimental data and predicted values was achieved. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Random sampling of evolution time space and Fourier transform processing

    International Nuclear Information System (INIS)

    Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor

    2006-01-01

    Application of the Fourier transform for processing 3D NMR spectra with random sampling of the evolution time space is presented. The 2D FT is calculated for pairs of frequencies, instead of as the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and 15N-edited NOESY-HSQC spectra of a 13C,15N-labeled ubiquitin sample. The results obtained reveal the general applicability of the proposed method and a significant improvement in resolution in comparison with conventional spectra recorded in the same time
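
    The core idea, evaluating the Fourier transform directly at frequency pairs rather than as nested one-dimensional transforms, can be sketched in a few lines. The snippet below is a minimal, unoptimized illustration assuming the sampled evolution times and complex data points are already in arrays; it is not the processing code used in the study.

```python
# Direct evaluation of a 2D Fourier transform over randomly sampled
# evolution times (illustrative, unoptimized).
import numpy as np

def ft_random_sampling(t1, t2, signal, w1_axis, w2_axis):
    """t1, t2: evolution times of each sampled point (1D arrays);
    signal: complex data points s(t1[k], t2[k]); returns S(w1, w2)."""
    spectrum = np.zeros((len(w1_axis), len(w2_axis)), dtype=complex)
    for i, w1 in enumerate(w1_axis):
        for j, w2 in enumerate(w2_axis):
            spectrum[i, j] = np.sum(signal * np.exp(-1j * (w1 * t1 + w2 * t2)))
    return spectrum
```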

  16. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor, which is computationally expensive, especially for large systems, is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H2 system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  17. Design and verification of a self-timed RAM

    DEFF Research Database (Denmark)

    Nielsen, Lars Skovby; Staunstrup, Jørgen

    1995-01-01

    This paper describes a self-timed static RAM. A single-bit RAM is described in the design language SYNCHRONIZED TRANSITIONS, and using the verification tools supporting this language it is shown that the design is speed-independent. Furthermore, a transistor-level implementation of the design...

  18. Progress on the design of a data push architecture for an array of optimized time tagging pixels

    International Nuclear Information System (INIS)

    Shapiro, S.; Cords, D.; Mani, S.; Holbrook, B.; Atlas, E.

    1993-06-01

    A pixel array has been proposed which features a completely data-driven architecture. A pixel cell has been designed that is optimized for this readout. It retains the features of preceding designs which allow low-noise operation, time stamping, analog signal processing, XY address recording, ghost elimination and sparse data transmission. The pixel design eliminates a number of problems inherent in previous designs by the use of sampled-data techniques, destructive readout, and current-mode output drivers. This architecture and pixel design are directed at applications such as a forward spectrometer at the SSC, an e+e- B factory at SLAC, and fixed-target experiments at FNAL

  19. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Science.gov (United States)

    Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical

  20. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Directory of Open Access Journals (Sweden)

    David C Pavlacky

    Full Text Available Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous

  1. Design for mosquito abundance, diversity, and phenology sampling within the National Ecological Observatory Network

    Science.gov (United States)

    Hoekman, D.; Springer, Yuri P.; Barker, C.M.; Barrera, R.; Blackmore, M.S.; Bradshaw, W.E.; Foley, D. H.; Ginsberg, Howard; Hayden, M. H.; Holzapfel, C. M.; Juliano, S. A.; Kramer, L. D.; LaDeau, S. L.; Livdahl, T. P.; Moore, C. G.; Nasci, R.S.; Reisen, W.K.; Savage, H. M.

    2016-01-01

    The National Ecological Observatory Network (NEON) intends to monitor mosquito populations across its broad geographical range of sites because of their prevalence in food webs, their sensitivity to abiotic factors and their relevance for human health. We describe the design of mosquito population sampling in the context of NEON's long-term, continental-scale monitoring program, emphasizing the sampling design schedule, priorities and collection methods. Freely available NEON data and associated field and laboratory samples will increase our understanding of how mosquito abundance, demography, diversity and phenology are responding to land use and climate change.

  2. Sampled-data-based vibration control for structural systems with finite-time state constraint and sensor outage.

    Science.gov (United States)

    Weng, Falu; Liu, Mingxin; Mao, Weijie; Ding, Yuanchun; Liu, Feifei

    2018-05-10

    The problem of sampled-data-based vibration control for structural systems with finite-time state constraint and sensor outage is investigated in this paper. The objective of designing the controllers is to guarantee the stability and anti-disturbance performance of the closed-loop systems when sensor outages happen. Firstly, based on matrix transformation, the state-space model of structural systems with sensor outages and uncertainties appearing in the mass, damping and stiffness matrices is established. Secondly, considering that most earthquakes and strong winds occur within a very short time, and that it is often the peak responses that damage structures, the finite-time stability analysis method is introduced to constrain the state responses in a given time interval, and H-infinity stability is adopted in the controller design to make sure that the closed-loop system has a prescribed level of disturbance attenuation performance during the whole control process. Furthermore, all stabilization conditions are expressed in the form of linear matrix inequalities (LMIs), whose feasibility can easily be checked by using the LMI Toolbox. Finally, numerical examples are given to demonstrate the effectiveness of the proposed theorems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Effects of storage time and temperature on pH, specific gravity, and crystal formation in urine samples from dogs and cats.

    Science.gov (United States)

    Albasan, Hasan; Lulich, Jody P; Osborne, Carl A; Lekcharoensuk, Chalermpol; Ulrich, Lisa K; Carpenter, Kathleen A

    2003-01-15

    To determine the effects of storage temperature and time on the pH and specific gravity of, and the number and size of crystals in, urine samples from dogs and cats. Randomized complete block design. 31 dogs and 8 cats. Aliquots of each urine sample were analyzed within 60 minutes of collection or after storage at room or refrigeration temperature (20 vs 6 degrees C [68 vs 43 degrees F]) for 6 or 24 hours. Crystals formed in samples from 11 of 39 (28%) animals. Calcium oxalate (CaOx) crystals formed in vitro in samples from 1 cat and 8 dogs. Magnesium ammonium phosphate (MAP) crystals formed in vitro in samples from 2 dogs. Compared with aliquots stored at room temperature, refrigeration increased the number and size of crystals that formed in vitro; however, the increase in the number and size of MAP crystals in stored urine samples was not significant. Increased storage time and decreased storage temperature were associated with a significant increase in the number of CaOx crystals formed. Greater numbers of crystals formed in urine aliquots stored for 24 hours than in aliquots stored for 6 hours. Storage time and temperature did not have a significant effect on pH or specific gravity. Urine samples should be analyzed within 60 minutes of collection to minimize temperature- and time-dependent effects on in vitro crystal formation. The presence of crystals observed in stored samples should be validated by reevaluation of a fresh urine sample.

  4. Multi-saline sample distillation apparatus for hydrogen isotope analyses : design and accuracy

    Science.gov (United States)

    Hassan, Afifa Afifi

    1981-01-01

    A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400 °C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated. (USGS)

  5. Time Pressure and Creativity in Industrial Design

    Science.gov (United States)

    Hsiao, Shih-Wen; Wang, Ming-Feng; Chen, Chien-Wie

    2017-01-01

    Creativity is a critical aspect of competitiveness in all trades and professions. In the case of designers, creativity is of the utmost importance. Based on the perspective of industrial design, the relationship between creativity and time pressure was investigated in this study using control and experimental groups. In the first part of the…

  6. Comparison of correlation analysis techniques for irregularly sampled time series

    Directory of Open Access Journals (Sweden)

    K. Rehfeld

    2011-06-01

    Full Text Available Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique and different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques.

    All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. We find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme in the analysis of highly irregular time series. For the cross-correlation function (CCF) the RMSE is then lower by 60 %. The application of the Lomb-Scargle technique gave results comparable to the kernel methods for the univariate, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods.

    We illustrate the performances of interpolation vs. Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory is strongly overestimated when using the standard, interpolation-based, approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques and suitable for large scale application to paleo-data.
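
    As a concrete illustration of the kernel approach favoured above, the sketch below estimates a cross-correlation function for two irregularly sampled series by weighting all pairwise products with a Gaussian kernel centred on the requested lag. It is a minimal reading of the method, with an assumed kernel-width parameter, not the authors' implementation.

```python
# Gaussian-kernel cross-correlation for irregularly sampled series
# (a sketch of the kernel idea, not the benchmarked implementation).
import numpy as np

def kernel_xcorr(tx, x, ty, y, lags, h):
    """tx, ty: observation times; x, y: values; h: kernel width,
    e.g. a fraction of the mean sampling interval (assumed choice)."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None]        # all pairwise time differences
    prods = xs[:, None] * ys[None, :]     # all pairwise products
    ccf = []
    for lag in np.atleast_1d(lags):
        w = np.exp(-0.5 * ((dt - lag) / h) ** 2)   # kernel weight per pair
        ccf.append(np.sum(w * prods) / np.sum(w))
    return np.array(ccf)
```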

  7. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
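
    The combination rule at the heart of such a scheme is typically a balance-heuristic weight. The fragment below sketches that weight for samples drawn from several techniques; it assumes the per-technique sample counts and pdf values are available and is illustrative rather than the paper's shader code.

```python
# Balance-heuristic weight for multiple importance sampling
# (illustrative; not the paper's rendering code).
def balance_weight(k, counts, pdfs):
    """Weight for a sample drawn from technique k, given the number of
    samples per technique and each technique's pdf at the sample point."""
    return counts[k] * pdfs[k] / sum(n * p for n, p in zip(counts, pdfs))

# The combined estimate then sums, over techniques k and their samples x,
#   balance_weight(k, counts, pdfs_at_x) * f(x) / (counts[k] * pdfs[k])
```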

  8. Shielding design of highly activated sample storage at reactor TRIGA PUSPATI

    International Nuclear Information System (INIS)

    Naim Syauqi Hamzah; Julia Abdul Karim; Mohamad Hairie Rabir; Muhd Husamuddin Abdul Khalil; Mohd Amin Sharifuldin Salleh

    2010-01-01

    Radiation protection has always been one of the most important considerations in Reaktor TRIGA PUSPATI (RTP) management. Currently, demand for sample activation has increased from a variety of applicants in different research fields. A radiological hazard may occur if the sample evaluations are misjudged or miscalculated. At present, there is no appropriate storage for highly activated samples. For that purpose, a special irradiated-sample storage box should be provided in order to segregate highly activated samples that produce high dose levels from typical activated samples that produce lower dose levels (1 - 2 mR/hr). In this study, the thicknesses required by common shielding materials such as lead and concrete to reduce a highly activated radiotracer sample (potassium bromide) with an initial exposure dose of 5 R/hr to background level (0.05 mR/hr) were determined. Analyses were done using several methods, including the conventional shielding equation, half-value-layer calculation and the MicroShield computer code. A design for a new irradiated-sample storage box for RTP capable of containing high-level gamma radioactivity is then proposed. (author)
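
    The half-value-layer part of such an analysis is simple arithmetic: the number of half-value layers needed is log2 of the ratio between the initial and target dose rates. The sketch below reproduces that calculation for the dose figures quoted above; the HVL thicknesses for lead and concrete are placeholder assumptions, since the actual values depend on the gamma energies involved.

```python
# Half-value-layer arithmetic for the dose figures quoted above.
# The HVL thicknesses are placeholder assumptions, not the study's values.
import math

dose_initial = 5000.0   # mR/hr (5 R/hr)
dose_target = 0.05      # mR/hr (background level)

n_hvl = math.log2(dose_initial / dose_target)   # number of half-value layers
print(f"{n_hvl:.1f} half-value layers needed")  # about 16.6

for material, hvl_cm in [("lead", 1.2), ("concrete", 6.0)]:  # assumed HVLs
    print(f"{material}: ~{n_hvl * hvl_cm:.0f} cm of shielding")
```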

  9. Design of Field Experiments for Adaptive Sampling of the Ocean with Autonomous Vehicles

    Science.gov (United States)

    Zheng, H.; Ooi, B. H.; Cho, W.; Dao, M. H.; Tkalich, P.; Patrikalakis, N. M.

    2010-05-01

    Due to the highly non-linear and dynamical nature of oceanic phenomena, the predictive capability of various ocean models depends on the availability of operational data. A practical method to improve the accuracy of the ocean forecast is to use a data assimilation methodology to combine in-situ measured and remotely acquired data with numerical forecast models of the physical environment. Autonomous surface and underwater vehicles with various sensors are economic and efficient tools for exploring and sampling the ocean for data assimilation; however there is an energy limitation to such vehicles, and thus effective resource allocation for adaptive sampling is required to optimize the efficiency of exploration. In this paper, we use physical oceanography forecasts of the coastal zone of Singapore for the design of a set of field experiments to acquire useful data for model calibration and data assimilation. The design process of our experiments relied on the oceanography forecast including the current speed, its gradient, and vorticity in a given region of interest for which permits for field experiments could be obtained and for time intervals that correspond to strong tidal currents. Based on these maps, resources available to our experimental team, including Autonomous Surface Craft (ASC) are allocated so as to capture the oceanic features that result from jets and vortices behind bluff bodies (e.g., islands) in the tidal current. Results are summarized from this resource allocation process and field experiments conducted in January 2009.

  10. Space-time design of the public city

    CERN Document Server

    Thomaier, Susanne; Könecke, Benjamin; Zedda, Roberto; Stabilini, Stefano

    2013-01-01

    Time has become an increasingly important topic in urban studies and urban planning. The spatial-temporal interplay is not only of relevance for the theory of urban development and urban politics, but also for urban planning and governance. The space-time approach focuses on the human being with its various habits and routines in the city. Understanding and taking those habits into account in urban planning and public policies offers a new way to improve the quality of life in our cities. Adapting the supply and accessibility of public spaces and services to the inhabitants' space-time needs calls for an integrated approach to the physical design of urban space and to the organization of cities. In the last two decades the body of practical and theoretical work on urban space-time topics has grown substantially. The book offers a state-of-the-art overview of the theoretical reasoning, the development of new analytical tools, and practical experience of the space-time design of public cities in major Europea...

  11. Design Aids for Real-Time Systems (DARTS)

    Science.gov (United States)

    Szulewski, P. A.

    1982-01-01

    Design-Aids for Real-Time Systems (DARTS) is a tool that assists in defining embedded computer systems through tree structured graphics, military standard documentation support, and various analyses including automated Software Science parameter counting and metrics calculation. These analyses provide both static and dynamic design quality feedback which can potentially aid in producing efficient, high quality software systems.

  12. Two specialized delayed-neutron detector designs for assays of fissionable elements in water and sediment samples

    International Nuclear Information System (INIS)

    Balestrini, S.J.; Balagna, J.P.; Menlove, H.O.

    1976-01-01

    Two specialized neutron-sensitive detectors are described which are employed for rapid assays of fissionable elements by sensing the delayed neutrons emitted by samples after they have been irradiated in a nuclear reactor. The more sensitive of the two detectors, designed to assay uranium in water samples, is 40% efficient; the other, designed for sediment sample assays, is 27% efficient. These detectors are also designed to operate under water as inexpensive shielding against neutron leakage from the reactor and neutrons from cosmic rays. (Auth.)

  13. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    Science.gov (United States)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

    The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters (topographic wetness index and potential incoming solar radiation) derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times, and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations has been done using
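
    The quantile-based stratification step can be sketched compactly. The following fragment assigns each candidate location to a stratum by cutting two land-surface parameters at their quantiles and crossing the classes with a geological-unit identifier; variable names and the number of classes are illustrative assumptions.

```python
# Quantile-based stratification sketch; names and class counts are
# illustrative assumptions, not the study's exact setup.
import numpy as np

def quantile_classes(values, n_classes):
    """Assign each location to a quantile class of one parameter."""
    edges = np.quantile(values, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(values, edges)

def strata(twi, solar, geology_id, n_classes=3):
    """Cross quantile classes of two land-surface parameters with the
    geological units; sampling points are then drawn within each stratum."""
    labels = np.stack([quantile_classes(twi, n_classes),
                       quantile_classes(solar, n_classes),
                       geology_id], axis=1)
    _, stratum = np.unique(labels, axis=0, return_inverse=True)
    return stratum
```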

  14. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  15. Engineering design knowledge recycling in near-real-time

    Science.gov (United States)

    Leifer, Larry; Baya, Vinod; Toye, George; Baudin, Catherine; Underwood, Jody Gevins

    1994-01-01

    It is hypothesized that the capture and reuse of machine-readable design records are cost-beneficial. This informal engineering-notebook design knowledge can be used to model the artifact and the design process. Design rationale is, in part, preserved and available for examination. Redesign cycle time is significantly reduced (Baya et al., 1992). These factors contribute to making it less costly to capture and reuse knowledge than to recreate comparable knowledge (current practice). To test the hypothesis, we have focused on validation of the concept and tools in two 'real design' projects this past year: (1) a short (8 month) turnaround project for NASA life science bioreactor researchers was done by a team of three mechanical engineering graduate students at Stanford University (in a class, ME210abc 'Mechatronic Systems Design and Methodology', taught by one of the authors, Leifer); and (2) a long-range (8 to 20 year) international consortium project for NASA's Space Science program (STEP: satellite test of the equivalence principle). Design knowledge capture was supported this year by assigning the use of a Team-Design PowerBook. Design records were cataloged in near-real time. These records were used to qualitatively model the artifact design as it evolved. Dedal, an 'intelligent librarian' developed at NASA-ARC, was used to navigate and retrieve captured knowledge for reuse.

  16. Discrete-Time Mixing Receiver Architecture for RF-Sampling Software-Defined Radio

    NARCIS (Netherlands)

    Ru, Z.; Klumperink, Eric A.M.; Nauta, Bram

    2010-01-01

    A discrete-time (DT) mixing architecture for RF-sampling receivers is presented. This architecture makes RF sampling more suitable for software-defined radio (SDR) as it achieves wideband quadrature demodulation and wideband harmonic rejection. The paper consists of two parts. In the first

  17. Quality-control design for surface-water sampling in the National Water-Quality Network

    Science.gov (United States)

    Riskin, Melissa L.; Reutter, David C.; Martin, Jeffrey D.; Mueller, David K.

    2018-04-10

    The data-quality objectives for samples collected at surface-water sites in the National Water-Quality Network include estimating the extent to which contamination, matrix effects, and measurement variability affect interpretation of environmental conditions. Quality-control samples provide insight into how well the samples collected at surface-water sites represent the true environmental conditions. Quality-control samples used in this program include field blanks, replicates, and field matrix spikes. This report describes the design for collection of these quality-control samples and the data management needed to properly identify these samples in the U.S. Geological Survey’s national database.

  18. Planning Considerations for a Mars Sample Receiving Facility: Summary and Interpretation of Three Design Studies

    Science.gov (United States)

    Beaty, David W.; Allen, Carlton C.; Bass, Deborah S.; Buxbaum, Karen L.; Campbell, James K.; Lindstrom, David J.; Miller, Sylvia L.; Papanastassiou, Dimitri A.

    2009-10-01

    It has been widely understood for many years that an essential component of a Mars Sample Return mission is a Sample Receiving Facility (SRF). The purpose of such a facility would be to take delivery of the flight hardware that lands on Earth, open the spacecraft and extract the sample container and samples, and conduct an agreed-upon test protocol, while ensuring strict containment and contamination control of the samples while in the SRF. Any samples that are found to be non-hazardous (or are rendered non-hazardous by sterilization) would then be transferred to long-term curation. Although the general concept of an SRF is relatively straightforward, there has been considerable discussion about implementation planning. The Mars Exploration Program carried out an analysis of the attributes of an SRF to establish its scope, including minimum size and functionality, budgetary requirements (capital cost, operating costs, cost profile), and development schedule. The approach was to arrange for three independent design studies, each led by an architectural design firm, and compare the results. While there were many design elements in common identified by each study team, there were significant differences in the way human operators were to interact with the systems. In aggregate, the design studies provided insight into the attributes of a future SRF and the complex factors to consider for future programmatic planning.

  19. Design a Fault Tolerance for Real Time Distributed System

    OpenAIRE

    Ban M. Khammas

    2012-01-01

    This paper presents the design of a fault-tolerance system for soft real-time distributed systems (FTRTDS). This system is designed to be independent of the specific mechanisms and facilities of the underlying real-time distributed system. It is designed to be distributed on all the computers in the distributed system and controlled by a central unit. Besides gathering information about a target program spontaneously, it provides information about the target operating system and the target hardware in order to diagno...

  20. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society Adult Patient Database (ANZICS-APD) for patient mortality and length(s) of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in the cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC) and between individual responses in the same cluster but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of
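
    A sample size calculation along these lines can be sketched with a design effect that inflates an individually randomised sample size by the clustering term and deflates it by the between-period correlation. The form used below, 1 + (m - 1)·WPC - m·BPC for the two-period cross-sectional design, is a commonly cited expression and should be treated here as an assumption rather than a quotation from the tutorial; note that it reduces to the familiar cluster-trial design effect when BPC = 0, matching the remark above.

```python
# CRXO sample-size sketch using an assumed design effect of the form
# 1 + (m - 1) * WPC - m * BPC for the two-period cross-sectional design.
import math
from scipy import stats

def crxo_n_per_arm(delta, sd, m, wpc, bpc, alpha=0.05, power=0.8):
    """delta: target difference; sd: outcome SD; m: subjects per
    cluster-period; wpc/bpc: within- and between-period correlations."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    n_ind = 2 * (z * sd / delta) ** 2      # individually randomised n per arm
    deff = 1 + (m - 1) * wpc - m * bpc     # assumed CRXO design effect
    return math.ceil(n_ind * deff)

# Illustrative numbers only (not ANZICS-APD estimates):
# print(crxo_n_per_arm(delta=0.25, sd=1.0, m=50, wpc=0.05, bpc=0.02))
```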

  1. Worst-case execution time analysis-driven object cache design

    DEFF Research Database (Denmark)

    Huber, Benedikt; Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Hard real-time systems need a time-predictable computing platform to enable static worst-case execution time (WCET) analysis. All performance-enhancing features need to be WCET analyzable. However, standard data caches containing heap-allocated data are very hard to analyze statically. In this paper we explore a new object cache design, which is driven by the capabilities of static WCET analysis. Simulations of standard benchmarks estimating the expected average-case performance usually drive computer architecture design. The design decisions derived from this methodology do not necessarily result in a WCET analysis-friendly design. Aiming for a time-predictable design, we therefore propose to employ WCET analysis techniques for the design space exploration of processor architectures. We evaluated different object cache configurations using static analysis techniques. The number of field...

  2. Re-orientating time in product design: a phenomenology-inspired perspective

    NARCIS (Netherlands)

    Stienstra, J.T.; Hengeveld, B.J.; Hummels, C.C.M.

    2015-01-01

    This paper presents a work in progress design case that is used to exemplify how a phenomenology-inspired perspective on time can impact the design of highly interactive systems and products. The design presents a calendar with a re-orientated layout that is based on a bodily relationship with time,

  3. 21 CFR 316.23 - Timing of requests for orphan-drug designation; designation of already approved drugs.

    Science.gov (United States)

    2010-04-01

    A sponsor may request orphan-drug designation at any time in the drug development process prior to... (21 CFR 316.23, Food and Drugs, revised as of 2010-04-01; Department of Health and Human Services, Drugs for Human Use, Orphan Drugs.)

  4. Electro-optic sampling for time resolving relativistic ultrafast electron diffraction

    International Nuclear Information System (INIS)

    Scoby, C. M.; Musumeci, P.; Moody, J.; Gutierrez, M.; Tran, T.

    2009-01-01

    The Pegasus laboratory at UCLA features a state-of-the-art electron photoinjector capable of producing ultrashort (<100 fs) high-brightness electron bunches at energies of 3.75 MeV. These beams have recently been used to produce static diffraction patterns from scattering off thin metal foils, and it is foreseen to take advantage of the ultrashort nature of these bunches in future pump-probe time-resolved diffraction studies. In this paper, single-shot 2D electro-optic sampling is presented as a potential technique for time-of-arrival stamping of electron bunches used for diffraction. Effects of relatively low bunch charge (a few tens of pC) and modestly relativistic beams are discussed, and background compensation techniques to obtain a high signal-to-noise ratio are explored. From these preliminary tests, electro-optic sampling is suitable to be a reliable nondestructive time stamping method for relativistic ultrafast electron diffraction at the Pegasus lab.

  5. Reasoning about repairability of workflows at design time

    NARCIS (Netherlands)

    Tagni, Gaston; Ten Teije, Annette; Van Harmelen, Frank

    2009-01-01

    This paper describes an approach for reasoning about the repairability of workflows at design time. We propose a heuristic-based analysis of a workflow that aims at evaluating its definition, considering different design aspects and characteristics that affect its repairability (called repairability

  6. Spatiotemporally Representative and Cost-Efficient Sampling Design for Validation Activities in Wanglang Experimental Site

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2017-11-01

    Full Text Available Spatiotemporally representative Elementary Sampling Units (ESUs) are required for capturing the temporal variations in surface spatial heterogeneity through field measurements. Since inaccessibility often coexists with heterogeneity, a cost-efficient sampling design is mandatory. We proposed a sampling strategy to generate spatiotemporally representative and cost-efficient ESUs based on the conditioned Latin hypercube sampling scheme. The proposed strategy was constrained by multi-temporal Normalized Difference Vegetation Index (NDVI) imagery, and the ESUs were limited to a sampling-feasible region established based on accessibility criteria. A novel criterion based on the Overlapping Area (OA) between the NDVI frequency distribution histogram from the sampled ESUs and that from the entire study area was used to assess the sampling efficiency. A case study in Wanglang National Nature Reserve in China showed that the proposed strategy improves the spatiotemporal representativeness of sampling (mean annual OA = 74.7%) compared to the single-temporally constrained (OA = 68.7%) and the random sampling (OA = 63.1%) strategies. The introduction of the feasible-region constraint significantly reduces in-situ labour-intensive characterization necessities at the expense of about a 9% loss in the spatiotemporal representativeness of the sampling. Our study will support the validation activities in the Wanglang experimental site, providing a benchmark for locating the nodes of automatic observation systems (e.g., LAINet) which need a spatially distributed and temporally fixed sampling design.
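
    The OA criterion itself is easy to compute: normalize the two NDVI histograms to relative frequencies and sum the bin-wise minima. The sketch below assumes the NDVI values are available as flat arrays and uses an arbitrary bin count; it mirrors the criterion as described, not the authors' code.

```python
# Overlapping Area (OA) between the sampled and study-area NDVI
# histograms; the bin count is an arbitrary assumed choice.
import numpy as np

def overlapping_area(ndvi_sampled, ndvi_all, bins=20):
    rng = (min(ndvi_all.min(), ndvi_sampled.min()),
           max(ndvi_all.max(), ndvi_sampled.max()))
    h_s, _ = np.histogram(ndvi_sampled, bins=bins, range=rng)
    h_a, _ = np.histogram(ndvi_all, bins=bins, range=rng)
    f_s = h_s / h_s.sum()               # relative frequencies
    f_a = h_a / h_a.sum()
    return np.minimum(f_s, f_a).sum()   # 1.0 means identical distributions
```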

  7. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    Science.gov (United States)

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.

  8. Effect of Sample Storage Temperature and Time Delay on Blood Gases, Bicarbonate and pH in Human Arterial Blood Samples.

    Science.gov (United States)

    Mohammadhoseini, Elham; Safavi, Enayat; Seifi, Sepideh; Seifirad, Soroush; Firoozbakhsh, Shahram; Peiman, Soheil

    2015-03-01

    Results of arterial blood gas analysis can be biased by pre-analytical factors, such as the time interval before analysis, temperature during storage and syringe type. To investigate the effects of sample storage temperature and time delay on blood gases, bicarbonate and pH results in human arterial blood samples, 2.5 mL arterial blood samples were drawn from 45 patients via an indwelling intra-arterial catheter. Each sample was divided into five equal samples and stored in multipurpose tuberculin plastic syringes. Blood gas analysis was performed on one of the five samples as soon as possible. The four other samples were divided into two groups stored at 22°C and 0°C. Blood gas analyses were repeated at 30 and 60 minutes after sampling. The PaO2 of the samples stored at 0°C was increased significantly after 60 minutes (P = 0.007). The PaCO2 of the samples kept for 30 and 60 minutes at 22°C was significantly higher than the initial result (P = 0.04, P < 0.05). In samples stored at 22°C, pH decreased significantly after 30 and 60 minutes (P = 0.017, P = 0.001). There were no significant differences in the other results of samples stored at 0°C or 22°C after 30 or 60 minutes. In samples stored in plastic syringes, overestimation of PaO2 levels should be noted if samples are cooled before analysis. In samples stored in plastic syringes, it is not necessary to store samples in iced water when analysis is delayed up to one hour.

  9. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    International Nuclear Information System (INIS)

    Albers, D.J.; Hripcsak, George

    2012-01-01

    Highlights: ► Time-delayed mutual information for irregularly sampled time-series. ► Estimation bias for the time-delayed mutual information calculation. ► Fast, simple, PDF-estimator-independent, time-delayed mutual information bias estimate. ► Quantification of the data-set-size limits of the time-delayed mutual information calculation. - Abstract: A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus, intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.
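
    The highlighted idea, that the bias is essentially the mutual information between samples separated by a very long time, suggests a direct empirical estimate. The sketch below uses a plain histogram estimator of mutual information and evaluates it at a large lag; the bin count and the histogram estimator are simplifying assumptions (the paper's approach is estimator-independent).

```python
# Histogram-based time-delayed mutual information and a large-lag bias
# estimate (a simplified sketch of the idea described above).
import numpy as np

def mutual_info(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                     # skip empty bins in the log
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

def tdmi_bias(x, large_lag, bins=16):
    """MI between points far enough apart to be effectively independent."""
    return mutual_info(x[:-large_lag], x[large_lag:], bins=bins)
```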

  10. Robust nonhomogeneous training samples detection method for space-time adaptive processing radar using sparse-recovery with knowledge-aided

    Science.gov (United States)

    Li, Zhihui; Liu, Hanwei; Zhang, Yongshun; Guo, Yiduo

    2017-10-01

    The performance of space-time adaptive processing (STAP) may degrade significantly when some of the training samples are contaminated by signal-like components (outliers) in nonhomogeneous clutter environments. To remove the training samples contaminated by outliers in nonhomogeneous clutter environments, a robust nonhomogeneous training-sample detection method using sparse recovery (SR) with knowledge-aided (KA) processing is proposed. First, the reduced-dimension (RD) overcomplete spatial-temporal steering dictionary is designed with prior knowledge of the system parameters and the possible target region. Second, the clutter covariance matrix (CCM) of the cell under test is efficiently estimated using a modified focal underdetermined system solver (FOCUSS) algorithm, where the RD overcomplete spatial-temporal steering dictionary is applied. Third, the proposed statistics are formed by combining the estimated CCM with the generalized inner product (GIP) method, and the contaminated training samples can be detected and removed. Finally, several simulation results validate the effectiveness of the proposed KA-SR-GIP method.
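
    The GIP screening stage can be illustrated independently of the sparse-recovery covariance estimate. The simplified sketch below forms a sample covariance from the training data, computes each sample's generalized inner product x^H R^-1 x, and discards samples whose GIP deviates strongly from the median; the threshold rule is an illustrative assumption.

```python
# Simplified GIP screen for contaminated training samples (without the
# sparse-recovery covariance estimate; the threshold rule is assumed).
import numpy as np

def gip_screen(samples, threshold=2.0):
    """samples: K x N complex matrix, one space-time snapshot per row."""
    K = samples.shape[0]
    R = samples.conj().T @ samples / K            # sample covariance (N x N)
    R_inv = np.linalg.pinv(R)
    # GIP_k = x_k^H R^{-1} x_k for each training sample
    gip = np.real(np.einsum('kn,nm,km->k', samples.conj(), R_inv, samples))
    keep = gip < threshold * np.median(gip)       # drop outlying samples
    return samples[keep], gip
```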

  11. Investigation of Bicycle Travel Time Estimation Using Bluetooth Sensors for Low Sampling Rates

    Directory of Open Access Journals (Sweden)

    Zhenyu Mei

    2014-10-01

    Full Text Available Filtering the data for bicycle travel time using Bluetooth sensors is crucial to the estimation of link travel times on a corridor. The current paper describes an adaptive filtering algorithm for estimating bicycle travel times using Bluetooth data, with consideration of low sampling rates. The data for bicycle travel time using Bluetooth sensors have two characteristics. First, the bicycle flow contains stable and unstable conditions. Second, the collected data have low sampling rates (less than 1%). To avoid erroneous inference, filters are introduced to "purify" multiple time series. The valid data are identified within a dynamically varying validity window with the use of a robust data-filtering procedure. The size of the validity window varies based on the number of preceding sampling intervals without a Bluetooth record. Applications of the proposed algorithm to the dataset from Genshan East Road and Moganshan Road in Hangzhou demonstrate its ability to track typical variations in bicycle travel time efficiently, while suppressing high-frequency noise signals.

  12. Learning Bounds of ERM Principle for Sequences of Time-Dependent Samples

    Directory of Open Access Journals (Sweden)

    Mingchen Yao

    2015-01-01

    Full Text Available Many generalization results in learning theory are established under the assumption that samples are independent and identically distributed (i.i.d.). However, numerous learning tasks in practical applications involve time-dependent data. In this paper, we propose a theoretical framework to analyze the generalization performance of the empirical risk minimization (ERM) principle for sequences of time-dependent samples (TDS). In particular, we first present the generalization bound of the ERM principle for TDS. By introducing some auxiliary quantities, we also give a further analysis of the generalization properties and the asymptotic behaviors of the ERM principle for TDS.

  13. Interference Cancellation Using Space-Time Processing and Precoding Design

    CERN Document Server

    Li, Feng

    2013-01-01

    Interference Cancellation Using Space-Time Processing and Precoding Design introduces original design methods to achieve interference cancellation, low-complexity decoding and full diversity for a series of multi-user systems. In multi-user environments, co-channel interference will diminish the performance of wireless communications systems. In this book, we investigate how to design robust space-time codes and pre-coders to suppress the co-channel interference when multiple antennas are available.   This book offers a valuable reference work for graduate students, academic researchers and engineers who are interested in interference cancellation in wireless communications. Rigorous performance analysis and various simulation illustrations are included for each design method.   Dr. Feng Li is a scientific researcher at Cornell University.

  14. Attributions, future time perspective and career maturity in nursing undergraduates: correlational study design.

    Science.gov (United States)

    Cheng, Cheng; Yang, Liu; Chen, Yuxia; Zou, Huijing; Su, Yonggang; Fan, Xiuzhen

    2016-01-25

    Career maturity is an important parameter as nursing undergraduates prepare for their future careers. However, little is known regarding the relationships between attributions, future time perspective and career maturity among nursing undergraduates. The purpose of this study was to investigate the degree of career maturity and its relationship with attributions and future time perspective. A cross-sectional survey was designed. This survey was administered to 431 Chinese nursing undergraduates. Independent-sample t-tests and one-way ANOVA were performed to examine the mean differences between categories of binary and categorical demographic characteristics, respectively. Pearson correlations and multiple linear regressions were used to test the relationships between attributions, future time perspective and career maturity. The degree of career maturity was moderate among nursing undergraduates, and internal attributions of academic achievement, future efficacy and future purpose consciousness were positively associated with career maturity (all p < 0.05), underscoring the value of attending to students' future time perspective and of facilitating their transition from school to clinical practice.

  15. Time constrained liner shipping network design

    DEFF Research Database (Denmark)

    Karsten, Christian Vad; Brouer, Berit Dangaard; Desaulniers, Guy

    2017-01-01

    We present a mathematical model and a solution method for the liner shipping network design problem. The model takes into account coordination between vessels and transit time restrictions on the cargo flow. The solution method is an improvement heuristic, where an integer program is solved...... iteratively to perform moves in a large neighborhood search. Our improvement heuristic is applicable as a real-time decision support tool for a liner shipping company. It can be used to find improvements to the network when evaluating changes in operating conditions or testing different scenarios...

  16. Design Flow Instantiation for Run-Time Reconfigurable Systems: A Case Study

    Directory of Open Access Journals (Sweden)

    Yang Qu

    2007-12-01

    Full Text Available Reconfigurable systems are a promising alternative for delivering both flexibility and performance at the same time. New reconfigurable technologies and technology-dependent tools have been developed, but a complete overview of the whole design flow for run-time reconfigurable systems is missing. In this work, we present a design flow instantiation for such systems using a real-life application. The design flow is roughly divided into two parts: system level and implementation. At the system level, our support for hardware resource estimation and performance evaluation is applied. At the implementation level, technology-dependent tools are used to realize the run-time reconfiguration. The design case is part of a WCDMA decoder on a commercially available reconfigurable platform. The results show that using run-time reconfiguration can save over 40% of the area compared to a functionally equivalent fixed system and achieve a 30-fold speedup in processing time compared to a functionally equivalent pure software design.

  17. Sampling Design of Soil Physical Properties in a Conilon Coffee Field

    Directory of Open Access Journals (Sweden)

    Eduardo Oliveira de Jesus Santos

    Full Text Available ABSTRACT Establishing the number of samples required to determine values of soil physical properties ultimately results in optimization of labor and allows better representation of such attributes. The objective of this study was to analyze the spatial variability of soil physical properties in a Conilon coffee field and propose a soil sampling method better attuned to the conditions of the management system. The experiment was performed in a Conilon coffee field in Espírito Santo state, Brazil, under a 3.0 × 2.0 × 1.0 m (4,000 plants ha-1) double-spacing design. An irregular grid, with dimensions of 107 × 95.7 m and 65 sampling points, was set up. Soil samples were collected from the 0.00-0.20 m depth at each sampling point. Data were analyzed using descriptive statistics and geostatistical methods. Using statistical parameters, the adequate number of samples for analyzing the attributes under study was established, ranging from 1 to 11 sampling points. With the exception of particle density, all soil physical properties showed a spatial dependence structure best fitted to the spherical model. Establishing the number of samples and the spatial variability of soil physical properties may be useful in developing sampling strategies that minimize costs for farmers within a tolerable and predictable level of error.
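
    A common way to turn such variability estimates into a required number of samples is the classical formula n = (t · CV / E)², with CV the coefficient of variation and E the tolerable error as a percentage of the mean. The sketch below implements that textbook formula as a plausible stand-in; the study's exact procedure may differ.

```python
# Textbook sample-number formula n = (t * CV / E)^2; a plausible stand-in
# for the study's procedure, with illustrative inputs.
import math
from scipy import stats

def n_samples(cv_percent, error_percent, n_pilot, alpha=0.05):
    """cv_percent: coefficient of variation from a pilot survey;
    error_percent: tolerable error as a percentage of the mean."""
    t = stats.t.ppf(1 - alpha / 2, df=n_pilot - 1)
    return math.ceil((t * cv_percent / error_percent) ** 2)

# e.g. n_samples(cv_percent=20, error_percent=10, n_pilot=65)
```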

  18. Exponential Synchronization for Stochastic Neural Networks with Mixed Time Delays and Markovian Jump Parameters via Sampled Data

    Directory of Open Access Journals (Sweden)

    Yingwei Li

    2014-01-01

    Full Text Available The exponential synchronization issue for stochastic neural networks (SNNs) with mixed time delays and Markovian jump parameters using a sampled-data controller is investigated. Based on a novel Lyapunov-Krasovskii functional, stochastic analysis theory, and the linear matrix inequality (LMI) approach, we derive some novel sufficient conditions that guarantee that the master systems exponentially synchronize with the slave systems. The design method for the desired sampled-data controller is also proposed. To reflect most of the dynamical behaviors of the system, both Markovian jump parameters and stochastic disturbance are considered, where the stochastic disturbances are given in the form of a Brownian motion. The results obtained in this paper are a little conservative compared with previous results in the literature. Finally, two numerical examples are given to illustrate the effectiveness of the proposed methods.

  19. REAL-TIME PCR DETECTION OF LISTERIA MONOCYTOGENES IN FOOD SAMPLES OF ANIMAL ORIGIN

    Directory of Open Access Journals (Sweden)

    Jaroslav Pochop

    2013-02-01

    Full Text Available The aim of this study was to follow the contamination of food with Listeria monocytogenes by using Step One real-time polymerase chain reaction (PCR). We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and the SensiFAST SYBR Hi-ROX Kit for the real-time PCR performance. Of 24 samples of food of animal origin analyzed without incubation, strains of Listeria monocytogenes were detected in 15 samples (swabs); nine samples were negative. Our results indicated that the real-time PCR assay developed in this study could sensitively detect Listeria monocytogenes in food of animal origin without incubation. This could prevent infection caused by Listeria monocytogenes, and could also benefit food manufacturing companies by extending their products' shelf-life as well as saving the cost of warehousing their food products while awaiting pathogen testing results. The rapid real-time PCR-based method performed very well compared to the conventional method. It is a fast, simple, specific and sensitive way to detect nucleic acids, which could be used in clinical diagnostic tests in the future.

  20. Sampling and sample processing in pesticide residue analysis.

    Science.gov (United States)

    Lehotay, Steven J; Cook, Jo Marie

    2015-05-13

    Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtain accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and provide meaningful results, then all costs, time, and efforts involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest and encourages further studies on sampling and sample mass reduction to produce a test portion.

  1. Highly efficient detection of paclobutrazol in environmental water and soil samples by time-resolved fluoroimmunoassay

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhenjiang, E-mail: lzj1984@ujs.edu.cn [School of the Environment and Safety Engineering, Jiangsu University, Zhenjiang 212013 (China); Wei, Xi [School of the Environment and Safety Engineering, Jiangsu University, Zhenjiang 212013 (China); The Affiliated First People's Hospital of Jiangsu University, Zhenjiang 212002 (China); Ren, Kewei; Zhu, Gangbing; Zhang, Zhen; Wang, Jiagao; Du, Daolin [School of the Environment and Safety Engineering, Jiangsu University, Zhenjiang 212013 (China)

    2016-11-01

    A fast and ultrasensitive indirect competitive time-resolved fluoroimmunoassay (TRFIA) was developed for the analysis of paclobutrazol in environmental water and soil samples. Paclobutrazol hapten was synthesized and conjugated to bovine serum albumin (BSA) for producing polyclonal antibodies. Under optimal conditions, the 50% inhibitory concentration (IC₅₀ value) and limit of detection (LOD, IC₂₀ value) were 1.09 μg L⁻¹ and 0.067 μg L⁻¹, respectively. The LOD of TRFIA was improved 30-fold compared to the previously reported ELISA. There was almost no cross-reactivity of the antibody with the other structural analogues of triazole compounds, indicating that the antibody had high specificity. The average recoveries from spiked samples were in the range from 80.2% to 104.7% with a relative standard deviation of 1.0–9.5%. The TRFIA results for the real samples were in good agreement with those obtained by high-performance liquid chromatography analyses. The results indicate that the established TRFIA has potential application for screening paclobutrazol in environmental samples. - Highlights: • The approach to design and synthesize the PBZ hapten was more straightforward. • A rapid and ultrasensitive TRFIA was developed and applied to the screening of PBZ. • The TRFIA for real soil samples showed reliability and high correlation with HPLC. • The PBZ TRFIA showed high sensitivity, simple operation, a wide range of quantitative analyses and no radioactive hazards.
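
    Competitive immunoassays such as this TRFIA are normally calibrated with a four-parameter logistic (4PL) curve, from which the IC₅₀ and the IC₂₀-based LOD follow directly. A minimal sketch with hypothetical calibration data; the fitting approach is standard, but none of the numbers below are from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """Four-parameter logistic: a = zero-dose signal, d = saturating signal,
    c = inflection point (the IC50), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# hypothetical calibration points: paclobutrazol (ug/L) vs. fluorescence counts
conc = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0, 100.0])
signal = np.array([98000, 90000, 75000, 52000, 30000, 15000, 5000])

(a, d, c, b), _ = curve_fit(four_pl, conc, signal, p0=(1e5, 5e3, 1.0, 1.0))
ic50 = c                           # concentration at 50% inhibition
ic20 = c * 0.25 ** (1.0 / b)       # 20% inhibition, used as the LOD
print(f"IC50 = {ic50:.2f} ug/L, LOD (IC20) = {ic20:.3f} ug/L")
```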

  2. Dealing with trade-offs in destructive sampling designs for occupancy surveys.

    Directory of Open Access Journals (Sweden)

    Stefano Canessa

    Full Text Available Occupancy surveys should be designed to minimise false absences. This is commonly achieved by increasing replication or increasing the efficiency of surveys. In the case of destructive sampling designs, in which searches of individual microhabitats represent the repeat surveys, minimising false absences leads to an inherent trade-off. Surveyors can sample more low-quality microhabitats, bearing the resultant financial costs and producing wider-spread impacts, or they can target high-quality microhabitats where the focal species is more likely to be found and risk more severe impacts on local habitat quality. We show how this trade-off can be solved with a decision-theoretic approach, using the Millewa Skink Hemiergis millewae from southern Australia as a case study. Hemiergis millewae is an endangered reptile that is best detected using destructive sampling of grass hummocks. Within sites that were known to be occupied by H. millewae, logistic regression modelling revealed that lizards were more frequently detected in large hummocks. If this model is an accurate representation of the detection process, searching large hummocks is more efficient and requires less replication, but this strategy also entails destruction of the best microhabitats for the species. We developed an optimisation tool to calculate the minimum combination of the number and size of hummocks to search to achieve a given cumulative probability of detecting the species at a site, incorporating weights to reflect the sensitivity of the results to a surveyor's priorities. The optimisation showed that placing high weight on minimising volume necessitates impractical replication, whereas placing high weight on minimising replication requires searching very large hummocks which are less common and may be vital for H. millewae. While destructive sampling methods are sometimes necessary, surveyors must be conscious of the ecological impacts of these methods. This study provides a
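
    The quantity that drives this optimisation is the cumulative detection probability, 1 − (1 − p)ⁿ for n independent searches with per-hummock detection probability p. A minimal sketch of the resulting minimum-replication calculation; the p values are illustrative, not the paper's fitted estimates:

```python
import math

def min_searches(p_detect: float, target: float = 0.95) -> int:
    """Smallest n with cumulative detection probability 1 - (1 - p)^n >= target,
    assuming independent searches of comparable microhabitats."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_detect))

# illustrative per-hummock detection probabilities (higher for larger hummocks)
for p in (0.10, 0.25, 0.50):
    print(f"p = {p:.2f} -> search at least {min_searches(p)} hummocks")
```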

  3. A test of alternative estimators for volume at time 1 from remeasured point samples

    Science.gov (United States)

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1993-01-01

    Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293), takes advantage of additional sample...

  4. A novel sampling design to explore gene-longevity associations

    DEFF Research Database (Denmark)

    De Rango, Francesco; Dato, Serena; Bellizzi, Dina

    2008-01-01

    To investigate the genetic contribution to familial similarity in longevity, we set up a novel experimental design where cousin-pairs born from siblings who were concordant or discordant for the longevity trait were analyzed. To check this design, two chromosomal regions already known to encompass ... from concordant and discordant siblings. In addition, we analyzed haplotype transmission from centenarians to offspring, and a statistically significant Transmission Ratio Distortion (TRD) was observed for both chromosomal regions in the discordant families (P=0.007 for 6p21.3 and P=0.015 for 11p15.5). In concordant families, a marginally significant TRD was observed at 6p21.3 only (P=0.06). Although no significant difference emerged between the two groups of cousin-pairs, our study gave new insights on the hindrances to recruiting a suitable sample to obtain significant IBD data on longevity...

  5. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    Science.gov (United States)

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, that study, without accounting for measurement error, reports that more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Assessment of long-term gas sampling design at two commercial manure-belt layer barns.

    Science.gov (United States)

    Chai, Li-Long; Ni, Ji-Qin; Chen, Yan; Diehl, Claude A; Heber, Albert J; Lim, Teng T

    2010-06-01

    Understanding temporal and spatial variations of aerial pollutant concentrations is important for designing air quality monitoring systems. In long-term and continuous air quality monitoring in large livestock and poultry barns, these systems usually use location-shared analyzers and sensors and can only sample air at a limited number of locations. To assess the validity of the gas sampling design at a commercial layer farm, a new methodology was developed to map pollutant gas concentrations using portable sensors under steady-state or quasi-steady-state barn conditions. Three assessment tests were conducted from December 2008 to February 2009 in two manure-belt layer barns. Each barn was 140.2 m long and 19.5 m wide and had 250,000 birds. Each test included four measurements of ammonia and carbon dioxide concentrations at 20 locations that covered all operating fans, including six of the fans used in the long-term sampling that represented three zones along the lengths of the barns, to generate data for complete-barn monitoring. To simulate the long-term monitoring, gas concentrations from the six long-term sampling locations were extracted from the 20 assessment locations. Statistical analyses were performed to test the variances (F-test) and sample means (t-test) between the 6- and 20-sample data. The study clearly demonstrated ammonia and carbon dioxide concentration gradients that were characterized by increasing concentrations from the west to east ends of the barns following the under-cage manure-belt travel direction. Mean concentrations increased from 7.1 to 47.7 parts per million (ppm) for ammonia and from 2303 to 3454 ppm for carbon dioxide from the west to the east of the barns. Variations of mean gas concentrations were much less apparent between the south and north sides of the barns, at 21.2 and 20.9 ppm for ammonia and 2979 and 2951 ppm for carbon dioxide, respectively. The null hypotheses that the variances and means between the 6- and 20
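
    The 6-versus-20-location comparison rests on a standard F-test for variances and t-test for means. A minimal sketch of that comparison on hypothetical data mimicking the reported west-to-east gradient; the six locations are drawn as a subset of the twenty, mirroring the study's nesting, and all numbers are illustrative:

```python
import numpy as np
from scipy import stats

# hypothetical NH3 concentrations (ppm): 20 assessment locations along the
# reported 7.1 -> 47.7 ppm west-to-east gradient, plus measurement noise
rng = np.random.default_rng(1)
twenty = np.linspace(7.1, 47.7, 20) + rng.normal(0.0, 2.0, 20)
six = twenty[[0, 3, 7, 11, 15, 19]]   # subset mimicking the 6 long-term fans

# F-test for equality of variances (two-sided) between the 6- and 20-sample data
f = six.var(ddof=1) / twenty.var(ddof=1)
p_f = 2 * min(stats.f.cdf(f, 5, 19), stats.f.sf(f, 5, 19))

# t-test for equality of means
t, p_t = stats.ttest_ind(six, twenty, equal_var=True)
print(f"F = {f:.2f} (p = {p_f:.2f}); t = {t:.2f} (p = {p_t:.2f})")
```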

  7. Synchronization of Hierarchical Time-Varying Neural Networks Based on Asynchronous and Intermittent Sampled-Data Control.

    Science.gov (United States)

    Xiong, Wenjun; Patel, Ragini; Cao, Jinde; Zheng, Wei Xing

    In this brief, our purpose is to apply asynchronous and intermittent sampled-data control methods to achieve the synchronization of hierarchical time-varying neural networks. The asynchronous and intermittent sampled-data controllers are proposed for two reasons: 1) the controllers may not transmit the control information simultaneously and 2) the controllers cannot always exist at any time. The synchronization is then discussed for a kind of hierarchical time-varying neural networks based on the asynchronous and intermittent sampled-data controllers. Finally, the simulation results are given to illustrate the usefulness of the developed criteria.

  8. Classifier-guided sampling for discrete variable, discontinuous design space exploration: Convergence and computational performance

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shahan, David W. [HRL Labs., LLC, Malibu, CA (United States); Seepersad, Carolyn Conner [Univ. of Texas, Austin, TX (United States)

    2014-04-22

    A classifier-guided sampling (CGS) method is introduced for solving engineering design optimization problems with discrete and/or continuous variables and continuous and/or discontinuous responses. The method merges concepts from metamodel-guided sampling and population-based optimization algorithms. The CGS method uses a Bayesian network classifier for predicting the performance of new designs based on a set of known observations or training points. Unlike most metamodeling techniques, however, the classifier assigns a categorical class label to a new design, rather than predicting the resulting response in continuous space, and thereby accommodates nondifferentiable and discontinuous functions of discrete or categorical variables. The CGS method uses these classifiers to guide a population-based sampling process towards combinations of discrete and/or continuous variable values with a high probability of yielding preferred performance. Accordingly, the CGS method is appropriate for discrete/discontinuous design problems that are ill-suited for conventional metamodeling techniques and too computationally expensive to be solved by population-based algorithms alone. In addition, the rates of convergence and computational properties of the CGS method are investigated when applied to a set of discrete variable optimization problems. Results show that the CGS method significantly improves the rate of convergence towards known global optima, on average, when compared to genetic algorithms.
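
    A minimal sketch of the classifier-guided sampling loop on a toy discrete problem, using scikit-learn's Gaussian naive Bayes as a stand-in for the Bayesian network classifier described in the paper; the objective function, variable ranges, and batch sizes are all illustrative assumptions:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def objective(x):
    """Toy discrete objective to minimize (stand-in for an expensive analysis)."""
    return np.sum((x - 3) ** 2, axis=1)

rng = np.random.default_rng(0)
X = rng.integers(0, 8, size=(30, 4))      # initial designs: 4 discrete variables
y = objective(X)

for generation in range(10):
    labels = (y <= np.median(y)).astype(int)        # 1 = "preferred" class
    clf = GaussianNB().fit(X, labels)               # stand-in classifier
    candidates = rng.integers(0, 8, size=(500, 4))  # cheap candidate pool
    p_good = clf.predict_proba(candidates)[:, 1]
    picks = candidates[np.argsort(p_good)[-10:]]    # most promising designs
    X = np.vstack([X, picks])
    y = np.append(y, objective(picks))              # expensive evaluations

print("best design:", X[np.argmin(y)], "objective:", y.min())
```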

  9. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  10. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    Science.gov (United States)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

    To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a novel two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first-rank sampling plan inspects the lot consisting of map sheets, and the second inspects the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size, with nonconformities modeled by a hypergeometric distribution function, and the second is for a larger lot size, with nonconformities modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
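
    For the large-lot case, a single-stage plan of this kind reduces to searching for the smallest sample size n and acceptance number c whose operating characteristic meets both risk points. A minimal sketch under a Poisson defect model; this is a simplification of the paper's two-rank formulation, and the quality and risk levels are illustrative:

```python
from scipy import stats

def find_plan(aql, ltpd, alpha=0.05, beta=0.10, n_max=2000):
    """Smallest single-sampling plan (n, c): accept the lot if the sample of
    size n has at most c nonconforming items. Poisson model for large lots;
    producer's risk <= alpha at the AQL, consumer's risk <= beta at the LTPD."""
    for n in range(1, n_max):
        # smallest acceptance number meeting the producer's risk at the AQL
        c = int(stats.poisson.ppf(1.0 - alpha, n * aql))
        if stats.poisson.cdf(c, n * ltpd) <= beta:  # consumer's risk check
            return n, c
    return None

print(find_plan(aql=0.01, ltpd=0.05))  # plan for 1% AQL, 5% limiting quality
```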

  11. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large scale sample surveys: biological sample survey of commercial landings (BSCL), experimental fishing sample survey (EFSS), and commercial landings and effort sample survey (CLES).

  12. Designers workbench: toward real-time immersive modeling

    Science.gov (United States)

    Kuester, Falko; Duchaineau, Mark A.; Hamann, Bernd; Joy, Kenneth I.; Ma, Kwan-Liu

    2000-05-01

    This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing, and computer-aided engineering systems has established a new backbone of modern industrial product development. However, traditionally a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technology, or 'digital,' gap experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.

  13. Identification and genotyping of molluscum contagiosum virus from genital swab samples by real-time PCR and Pyrosequencing.

    Science.gov (United States)

    Trama, Jason P; Adelson, Martin E; Mordechai, Eli

    2007-12-01

    Laboratory diagnosis of molluscum contagiosum virus (MCV) is important as lesions can be confused with those caused by Cryptococcus neoformans, herpes simplex virus, human papillomavirus, and varicella-zoster virus. To develop a rapid method for identifying patients infected with MCV via swab sampling. Two dual-labeled probe real-time PCR assays, one homologous to the p43K gene and one to the MC080R gene, were designed. The p43K PCR was designed to be used in conjunction with Pyrosequencing for confirmation of PCR products and discrimination between MCV1 and MCV2. Both PCR assays were optimized with respect to reaction components, thermocycling parameters, and primer and probe concentrations. The specificities of both PCR assays were confirmed by non-amplification of 38 known human pathogens. Sensitivity assays demonstrated detection of as few as 10 copies per reaction. Testing 703 swabs, concordance between the two real-time PCR assays was 99.9%. Under the developed conditions, Pyrosequencing of the p43K PCR product was capable of providing enough nucleotide sequence to definitively differentiate MCV1 and MCV2. These real-time PCR assays can be used for the rapid, sensitive, and specific detection of MCV and, when combined with Pyrosequencing, can further discriminate between MCV1 and MCV2.

  14. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    KAUST Repository

    Beck, Joakim

    2018-02-19

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters for which the average computational cost is minimized for a specified error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a single-loop Monte Carlo method that uses the Laplace approximation of the return value of the inner loop. The first demonstration example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.

  15. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    Science.gov (United States)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters for which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
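
    A minimal sketch of the classical double-loop Monte Carlo estimator that both versions of this work improve upon, for a toy nonlinear model with a Gaussian prior and likelihood; all model choices are illustrative, and the paper's contribution is to replace the inner prior draws with Laplace-based importance samples:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
sigma = 0.2                              # observation noise std

def g(theta):                            # toy nonlinear forward model
    return theta**3 + theta

def loglik(y, theta):                    # log p(y | theta), Gaussian noise
    return -0.5 * ((y - g(theta)) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

N, M = 2000, 2000                        # outer / inner sample sizes
theta = rng.standard_normal(N)           # outer draws from the N(0, 1) prior
y = g(theta) + sigma * rng.standard_normal(N)   # one synthetic datum per draw

# inner loop: log-evidence log p(y_n) via logsumexp over M fresh prior draws.
# With small M this estimate is poor (the underflow/bias the paper addresses);
# Laplace-based importance sampling instead centers the inner draws on the
# posterior mode, needing far fewer samples.
theta_in = rng.standard_normal(M)
log_evidence = logsumexp(loglik(y[:, None], theta_in[None, :]), axis=1) - np.log(M)

eig = np.mean(loglik(y, theta) - log_evidence)
print(f"double-loop Monte Carlo EIG estimate: {eig:.3f}")
```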

  16. Single-subject withdrawal designs in delayed matching-to-sample procedures

    OpenAIRE

    Eilifsen, Christoffer; Arntzen, Erik

    2011-01-01

    In most studies of delayed matching-to-sample (DMTS) and stimulus equivalence, the delay has remained fixed throughout a single experimental condition. We wanted to expand on the DMTS and stimulus equivalence literature by examining the effects of using titrating delays with different starting points during the establishment of conditional discriminations prerequisite for stimulus equivalence. In Experiment 1, a variation of a single-subject withdrawal design was used. Ten adults were exposed...

  17. An empirical comparison of respondent-driven sampling, time location sampling, and snowball sampling for behavioral surveillance in men who have sex with men, Fortaleza, Brazil.

    Science.gov (United States)

    Kendall, Carl; Kerr, Ligia R F S; Gondim, Rogerio C; Werneck, Guilherme L; Macena, Raimunda Hermelinda Maia; Pontes, Marta Kerr; Johnston, Lisa G; Sabin, Keith; McFarland, Willi

    2008-07-01

    Obtaining samples of populations at risk for HIV challenges surveillance, prevention planning, and evaluation. Methods used include snowball sampling, time location sampling (TLS), and respondent-driven sampling (RDS). Few studies have made side-by-side comparisons to assess their relative advantages. We compared snowball, TLS, and RDS surveys of men who have sex with men (MSM) in Fortaleza, Brazil, with a focus on the socio-economic status (SES) and risk behaviors of the samples relative to each other, to known AIDS cases and to the general population. RDS produced a sample with wider inclusion of lower SES than snowball sampling or TLS, a finding of health significance given that the majority of AIDS cases reported among MSM in the state were of low SES. RDS also achieved the sample size faster and at lower cost. For reasons of inclusion and cost-efficiency, RDS is the sampling methodology of choice for HIV surveillance of MSM in Fortaleza.

  18. Laboratory sample turnaround times: do they cause delays in the ED?

    Science.gov (United States)

    Gill, Dipender; Galvin, Sean; Ponsford, Mark; Bruce, David; Reicher, John; Preston, Laura; Bernard, Stephani; Lafferty, Jessica; Robertson, Andrew; Rose-Morris, Anna; Stoneham, Simon; Rieu, Romelie; Pooley, Sophie; Weetch, Alison; McCann, Lloyd

    2012-02-01

    Blood tests are requested for approximately 50% of patients attending the emergency department (ED). The time taken to obtain the results is perceived as a common reason for delay. The objective of this study was therefore to investigate the turnaround time (TAT) for blood results and whether this affects patient length of stay (LOS) and to identify potential areas for improvement. A time-in-motion study was performed at the ED of the John Radcliffe Hospital (JRH), Oxford, UK. The duration of each of the stages leading up to receipt of 101 biochemistry and haematology results was recorded, along with the corresponding patient's LOS. The findings reveal that the mean time for haematology results to become available was 1 hour 6 minutes (95% CI: 29 minutes to 2 hours 13 minutes), while biochemistry samples took 1 hour 42 minutes (95% CI: 1 hour 1 minute to 4 hours 21 minutes), with some positive correlation noted with the patient LOS, but no significant variation between different days or shifts. With the fastest 10% of samples being reported within 35 minutes (haematology) and 1 hour 5 minutes (biochemistry) of request, our study showed that delays can be attributable to laboratory TAT. Given the limited ability to further improve laboratory processes, the solutions to improving TAT need to come from a collaborative and integrated approach that includes strategies before samples reach the laboratory and downstream review of results. © 2010 Blackwell Publishing Ltd.

  19. Influence of sampling depth and post-sampling analysis time on the ...

    African Journals Online (AJOL)

    Bacteriological analysis was carried out for samples taken at water depth and at 1, 6, 12 and 24 hours post-sampling. It was observed that the total and faecal coliform bacteria were significantly higher in the 3 m water depth samples than in the surface water samples (ANOVA, F = 59.41, 26.751, 9.82 (T.C); 46.41, 26.81, ...

  20. Simple DNA extraction of urine samples: Effects of storage temperature and storage time.

    Science.gov (United States)

    Ng, Huey Hian; Ang, Hwee Chen; Hoe, See Ying; Lim, Mae-Lynn; Tai, Hua Eng; Soh, Richard Choon Hock; Syn, Christopher Kiu-Choong

    2018-06-01

    Urine samples are commonly analysed in cases with suspected illicit drug consumption. In events of alleged sample mishandling, urine sample source identification may be necessary. A simple DNA extraction procedure suitable for STR typing of urine samples was established on the Promega Maxwell® 16 paramagnetic silica bead platform. A small sample volume of 1.7 mL was used. Samples were stored at room temperature, 4°C and -20°C for 100 days to investigate the influence of storage temperature and time on extracted DNA quantity and success rate of STR typing. Samples stored at room temperature exhibited a faster decline in DNA yield with time and lower typing success rates as compared to those at 4°C and -20°C. This trend can likely be attributed to DNA degradation. In conclusion, this study presents a quick and effective DNA extraction protocol from a small urine volume stored for up to 100 days at 4°C and -20°C. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    Science.gov (United States)

    Debasish Saha; Armen R. Kemanian; Benjamin M. Rau; Paul R. Adler; Felipe Montes

    2017-01-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (...

  2. An Alternative View of Some FIA Sample Design and Analysis Issues

    Science.gov (United States)

    Paul C. Van Deusen

    2005-01-01

    Sample design and analysis decisions are the result of compromises and inputs from many sources. The end result would likely change if different individuals or groups were involved in the planning process. Discussed here are some alternatives to the procedures that are currently being used for the annual inventory. The purpose is to indicate that alternatives exist and...

  3. Study on auto-plating process time versus recovery for polonium, Po-210 in environmental sample

    International Nuclear Information System (INIS)

    Jalal Sharib; Zaharudin Ahmad; Abdul Kadir Ishak; Norfaizal Mohamed; Ahmad Sanadi Abu Bakar; Yii Mei Wo; Kamarozaman Ishak; Siti Aminah Yusoff

    2008-08-01

    This study was carried out to evaluate the time effectiveness and recovery of 16 samples from 4 Kuala Muda stations during the auto-plating procedure for determination of the polonium (Po-210) activity concentration in environmental samples. The study was performed using Kuala Muda sediment as the sample, following the same methodology throughout. The auto-plating process was run for 4, 12, 24 and 30 hours on a silver disc for 4 samples per station, and the discs were then counted for one (1) day using an alpha spectrometry counting system. The objective of this study was to justify the time duration of the auto-plating process affecting the chemical yield of Po-209. The results showed that recovery increases with time and becomes constant at 24 hours of auto-plating. This means that 24 hours is an optimum time for the auto-plating process for determination of the Po-210 activity concentration in environmental samples. (Author)

  4. Effects of Long-Term Storage Time and Original Sampling Month on Biobank Plasma Protein Concentrations

    Directory of Open Access Journals (Sweden)

    Stefan Enroth

    2016-10-01

    Full Text Available The quality of clinical biobank samples is crucial to their value for life sciences research. A number of factors related to the collection and storage of samples may affect the biomolecular composition. We have studied the effect of long-term freezer storage, chronological age at sampling, and season and month of the year on the abundance levels of 108 proteins in 380 plasma samples collected from 106 Swedish women. Storage time affected 18 proteins and explained 4.8–34.9% of the observed variance. Chronological age at sample collection after adjustment for storage time affected 70 proteins and explained 1.1–33.5% of the variance. Seasonal variation had an effect on 15 proteins, and month (number of sun hours) affected 36 proteins and explained up to 4.5% of the variance after adjustment for storage time and age. The results show that freezer storage time and collection date (month and season) exerted effect sizes similar to that of age on the protein abundance levels. This implies that information on the sample handling history, in particular storage time, should be regarded as equally prominent covariates as age or gender and needs to be included in epidemiological studies involving protein levels.

  5. Scheduling sampling to maximize information about time dependence in experiments with limited resources

    DEFF Research Database (Denmark)

    Græsbøll, Kaare; Christiansen, Lasse Engbo

    2013-01-01

    Looking for periodicity in sampled data requires that periods (lags) of different lengths are represented in the sampling plan. Here we present a method to assist in planning temporal studies with sparse resources, which optimizes the number of observed time lags for a fixed number of samples w...
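
    The design criterion here is essentially the number of distinct lags a schedule covers. A minimal sketch contrasting even and uneven spacing for the same number of samples; the schedules are illustrative, not those of the paper:

```python
from itertools import combinations

def distinct_lags(times):
    """Distinct pairwise lags represented by a sampling schedule."""
    return sorted({b - a for a, b in combinations(sorted(times), 2)})

# five samples over twelve days: uneven spacing covers far more lags
print(distinct_lags([0, 3, 6, 9, 12]))   # even   -> [3, 6, 9, 12]
print(distinct_lags([0, 1, 4, 9, 11]))   # uneven -> 10 distinct lags
```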

  6. Embedded real-time operating system micro kernel design

    Science.gov (United States)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require a real-time character. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section processing, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here specifies the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.

  7. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV − I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10⁸ ≤ M* ≤ 3 × 10¹¹ M⊙ h⁻² and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  8. Minimizing Experimental Setup Time and Effort at APS beamline 1-ID through Instrumentation Design

    Energy Technology Data Exchange (ETDEWEB)

    Benda, Erika; Almer, Jonathan; Kenesei, Peter; Mashayekhi, Ali; Okasinksi, John; Park, Jun-Sang; Ranay, Rogelio; Shastri, Sarvijt

    2016-01-01

    Sector 1-ID at the APS accommodates a number of different experimental techniques in the same spatial envelope of the E-hutch end station. These include high-energy small- and wide-angle X-ray scattering (SAXS and WAXS), high-energy diffraction microscopy (HEDM, both near- and far-field modes) and high-energy X-ray tomography. These techniques are frequently combined to allow the users to obtain multimodal data, often attaining 1 μm spatial resolution and <0.05° angular resolution. Furthermore, these techniques are utilized while the sample is thermo-mechanically loaded to mimic real operating conditions. The instrumentation required for each of these techniques and environments has been designed and configured in a modular way with a focus on stability and repeatability between changeovers. This approach allows the end station to be more versatile, capable of collecting multi-modal data in situ while reducing the time and effort typically required for setup and alignment, resulting in more efficient beam time use. Key instrumentation design features and layout of the end station are presented.

  9. Router Designs for an Asynchronous Time-Division-Multiplexed Network-on-Chip

    DEFF Research Database (Denmark)

    Kasapaki, Evangelia; Sparsø, Jens; Sørensen, Rasmus Bo

    2013-01-01

    In this paper we explore the design of an asynchronous router for a time-division-multiplexed (TDM) network-on-chip (NOC) that is being developed for a multi-processor platform for hard real-time systems. TDM inherently requires a common time reference, and existing TDM-based NOC designs are either....... This adds hardware complexity and increases area and power consumption. We propose to use asynchronous routers in order to achieve a simpler, more robust and globally-asynchronous NOC, and this represents an unexplored point in the design space. The paper presents a range of alternative router designs. All...... routers have been synthesized for a 65nm CMOS technology, and the paper reports post-layout figures for area, speed and energy and compares the asynchronous designs with an existing mesochronous clocked router. The results show that an asynchronous router is 2 times smaller, marginally slower...

  10. A sampling approach to constructing Lyapunov functions for nonlinear continuous–time systems

    NARCIS (Netherlands)

    Bobiti, R.V.; Lazar, M.

    2016-01-01

    The problem of constructing a Lyapunov function for continuous-time nonlinear dynamical systems is tackled in this paper via a sampling-based approach. The main idea of the sampling-based method is to verify a Lyapunov-type inequality for a finite number of points (known state vectors) in the
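
    A minimal sketch of the pointwise check underlying such sampling-based methods: verify that a candidate function decreases along the vector field at finitely many sampled states. The dynamics and candidate function below are illustrative assumptions; the actual method also adds continuity-based margins so that finitely many samples can certify the whole set:

```python
import numpy as np

def f(x):
    """Toy nonlinear continuous-time dynamics dx/dt = f(x)."""
    return np.array([-x[0] + x[1] ** 2, -x[1]])

def grad_V(x):
    """Gradient of the candidate quadratic Lyapunov function V(x) = x.x."""
    return 2.0 * x

# finite sample of the domain of interest, excluding a ball around the origin
rng = np.random.default_rng(0)
X = rng.uniform(-0.5, 0.5, size=(10_000, 2))
X = X[np.linalg.norm(X, axis=1) > 1e-3]

# pointwise Lyapunov-type inequality: Vdot(x) = grad_V(x) . f(x) < 0
FX = np.apply_along_axis(f, 1, X)
vdot = np.einsum('ij,ij->i', grad_V(X), FX)
print("Vdot < 0 at all sampled states:", bool(np.all(vdot < 0)))
```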

  11. Crystallite size variation of TiO₂ samples depending on heat treatment time

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO₂) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of residence time at a given temperature on the physical properties of TiO₂ powder was studied. After the powder synthesis, the samples were divided and heat treated at 650 °C with a heating ramp of 3 °C/min and a residence time ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analysis of the obtained diffraction patterns showed that, from a 5-hour residence time onward, two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
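
    The average crystallite size in such XRD studies is commonly obtained from the Scherrer equation, D = Kλ/(β cos θ), with β the peak FWHM in radians. A minimal sketch assuming Cu Kα radiation and illustrative anatase (101) peak widths, not values from the paper:

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Average crystallite size D = K * lambda / (beta * cos(theta)), with
    beta the peak FWHM in radians; defaults assume Cu K-alpha radiation."""
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * np.cos(theta))

# hypothetical anatase (101) peak at 2-theta = 25.3 deg, narrowing as the
# residence time of the heat treatment increases
for fwhm in (0.60, 0.45, 0.30):
    print(f"FWHM {fwhm:.2f} deg -> D = {scherrer_size(fwhm, 25.3):.1f} nm")
```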

  12. Accuracy assessment of the National Forest Inventory map of Mexico: sampling designs and the fuzzy characterization of landscapes

    Directory of Open Access Journals (Sweden)

    Stéphane Couturier

    2009-10-01

    Full Text Available There is no record so far in the literature of a comprehensive method to assess the accuracy of regional scale Land Cover/Land Use (LCLU) maps in the sub-tropical belt. The elevated biodiversity and the presence of highly fragmented classes hamper the use of sampling designs commonly employed in previous assessments of mainly temperate zones. A sampling design for assessing the accuracy of the Mexican National Forest Inventory (NFI) map at community level is presented. A pilot study was conducted on the Cuitzeo Lake watershed region covering 400 000 ha of the 2000 Landsat-derived map. Various sampling designs were tested in order to find a trade-off between operational costs, a good spatial distribution of the sample and the inclusion of all scarcely distributed classes (‘rare classes’). A two-stage sampling design, where the selection of Primary Sampling Units (PSU) was done under separate schemes for commonly and scarcely distributed classes, showed the best characteristics. A total of 2,023 point secondary sampling units were verified against their NFI map label. Issues regarding the assessment strategy and trends of class confusion are discussed.

  13. Sampling rare fluctuations of discrete-time Markov chains

    Science.gov (United States)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
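
    For a small chain, the large-deviation rate functions described here can be checked against an exact transfer-matrix computation: the scaled cumulant generating function is the log of the largest eigenvalue of a tilted transition matrix, and the rate function is its Legendre transform. A minimal sketch for an illustrative two-state chain (not a system from the paper):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])     # illustrative two-state transition matrix
g = np.array([0.0, 1.0])       # observable: time fraction spent in state 1

def scgf(s):
    """Scaled cumulant generating function: log of the largest eigenvalue of
    the tilted matrix P_tilde[x, y] = P[x, y] * exp(s * g[y])."""
    tilted = P * np.exp(s * g)[None, :]
    return np.log(np.max(np.abs(np.linalg.eigvals(tilted))))

# rate function I(a) = max_s (s * a - scgf(s)) by Legendre transform on a grid
s_grid = np.linspace(-5.0, 5.0, 2001)
lam = np.array([scgf(s) for s in s_grid])
for a in (0.05, 1.0 / 6.0, 0.40):   # steady-state occupancy of state 1 is 1/6
    print(f"I({a:.2f}) = {np.max(s_grid * a - lam):.4f}")
```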

  14. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach that considers the treatment effect to be a random variable having some distribution may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Economic Design of Acceptance Sampling Plans in a Two-Stage Supply Chain

    Directory of Open Access Journals (Sweden)

    Lie-Fern Hsu

    2012-01-01

    Full Text Available Supply Chain Management, which is concerned with material and information flows between facilities and the final customers, has been considered the most popular operations strategy for improving organizational competitiveness nowadays. With the advanced development of computer technology, it is getting easier to derive an acceptance sampling plan satisfying both the producer's and consumer's quality and risk requirements. However, all the available QC tables and computer software determine the sampling plan on a noneconomic basis. In this paper, we design an economic model to determine the optimal sampling plan in a two-stage supply chain that minimizes the producer's and the consumer's total quality cost while satisfying both the producer's and consumer's quality and risk requirements. Numerical examples show that the optimal sampling plan is quite sensitive to the producer's product quality. The product's inspection, internal failure, and postsale failure costs also have an effect on the optimal sampling plan.

  16. Sampling design and procedures for fixed surface-water sites in the Georgia-Florida coastal plain study unit, 1993

    Science.gov (United States)

    Hatzell, H.H.; Oaksford, E.T.; Asbury, C.E.

    1995-01-01

    The implementation of design guidelines for the National Water-Quality Assessment (NAWQA) Program has resulted in the development of new sampling procedures and the modification of existing procedures commonly used in the Water Resources Division of the U.S. Geological Survey. The Georgia-Florida Coastal Plain (GAFL) study unit began the intensive data collection phase of the program in October 1992. This report documents the implementation of the NAWQA guidelines by describing the sampling design and procedures for collecting surface-water samples in the GAFL study unit in 1993. This documentation is provided for agencies that use water-quality data and for future study units that will be entering the intensive phase of data collection. The sampling design is intended to account for large- and small-scale spatial variations, and temporal variations in water quality for the study area. Nine fixed sites were selected in drainage basins of different sizes and different land-use characteristics located in different land-resource provinces. Each of the nine fixed sites was sampled regularly for a combination of six constituent groups composed of physical and chemical constituents: field measurements, major ions and metals, nutrients, organic carbon, pesticides, and suspended sediments. Some sites were also sampled during high-flow conditions and storm events. Discussion of the sampling procedure is divided into three phases: sample collection, sample splitting, and sample processing. A cone splitter was used to split water samples for the analysis of the sampling constituent groups except organic carbon from approximately nine liters of stream water collected at four fixed sites that were sampled intensively. An example of the sample splitting schemes designed to provide the sample volumes required for each sample constituent group is described in detail. Information about onsite sample processing has been organized into a flowchart that describes a pathway for each of

  17. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    Science.gov (United States)

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow to stop early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Design-based estimators for snowball sampling

    OpenAIRE

    Shafie, Termeh

    2010-01-01

    Snowball sampling, where existing study subjects recruit further subjects from among their acquaintances, is a popular approach when sampling from hidden populations. Since people with many in-links are more likely to be selected, there will be a selection bias in the samples obtained. In order to eliminate this bias, the sample data must be weighted. However, the exact selection probabilities are unknown for snowball samples and need to be approximated in an appropriate way. This paper proposes d...

  19. Sampling and energy evaluation challenges in ligand binding protein design.

    Science.gov (United States)

    Dou, Jiayi; Doyle, Lindsey; Jr Greisen, Per; Schena, Alberto; Park, Hahnbeom; Johnsson, Kai; Stoddard, Barry L; Baker, David

    2017-12-01

    The steroid hormone 17α-hydroxyprogesterone (17-OHP) is a biomarker for congenital adrenal hyperplasia and hence there is considerable interest in development of sensors for this compound. We used computational protein design to generate protein models with binding sites for 17-OHP containing an extended, nonpolar, shape-complementary binding pocket for the four-ring core of the compound, and hydrogen bonding residues at the base of the pocket to interact with carbonyl and hydroxyl groups at the more polar end of the ligand. Eight of 16 designed proteins experimentally tested bind 17-OHP with micromolar affinity. A co-crystal structure of one of the designs revealed that 17-OHP is rotated 180° around a pseudo-two-fold axis in the compound and displays multiple binding modes within the pocket, while still interacting with all of the designed residues in the engineered site. Subsequent rounds of mutagenesis and binding selection improved the ligand affinity to the nanomolar range, while appearing to constrain the ligand to a single bound conformation that maintains the same "flipped" orientation relative to the original design. We trace the discrepancy in the design calculations to two sources: first, a failure to model subtle backbone changes which alter the distribution of sidechain rotameric states and second, an underestimation of the energetic cost of desolvating the carbonyl and hydroxyl groups of the ligand. The difference between design model and crystal structure thus arises from both sampling limitations and energy function inaccuracies that are exacerbated by the near two-fold symmetry of the molecule. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.

  20. Design and Development of a Robot-Based Automation System for Cryogenic Crystal Sample Mounting at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Shu, D.; Preissner, C.; Nocher, D.; Han, Y.; Barraza, J.; Lee, P.; Lee, W.-K.; Cai, Z.; Ginell, S.; Alkire, R.; Lazarski, K.; Schuessler, R.; Joachimiak, A.

    2004-01-01

    X-ray crystallography is the primary method to determine the 3D structures of complex macromolecules at high resolution. In the years to come, the Advanced Photon Source (APS) and similar 3rd-generation synchrotron sources elsewhere will become the most powerful tools for studying atomic structures of biological molecules. One of the major bottlenecks in the x-ray data collection process is the constant need to change and realign the crystal sample. This is a very time- and manpower-consuming task. An automated sample mounting system will help to solve this bottleneck problem. We have developed a novel robot-based automation system for cryogenic crystal sample mounting at the APS. Design of the robot-based automation system, as well as its on-line test results at the Argonne Structural Biology Center (SBC) 19-BM experimental station, are presented in this paper

  1. Time optimization of 90Sr measurements: Sequential measurement of multiple samples during ingrowth of 90Y

    International Nuclear Information System (INIS)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-01-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing 90Sr by making the Cherenkov measurement of the daughter nuclide 90Y more time efficient. There are many instances when an optimization of the measurement method might be favorable, such as situations requiring rapid results in order to make urgent decisions or, on the other hand, maximizing the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of 90Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be less compared to using the same measurement time for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, when assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. - Highlights: • An approach roughly a factor of three more efficient than an un-optimized method. • The optimization gives a more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
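
    The ingrowth underlying the model follows the Bateman relation: after strontium separation, the 90Y activity approaches equilibrium as 1 − e^(−λt). A minimal sketch of this ingrowth and its consequence for counting time; the half-life is the standard literature value, and the scaling comment assumes background-dominated counting:

```python
import numpy as np

T_HALF_Y90 = 64.1                      # hours, 90Y half-life
lam = np.log(2.0) / T_HALF_Y90

def ingrowth_fraction(t_hours):
    """Fraction of the equilibrium 90Y activity present t hours after the
    chemical separation of strontium: 1 - exp(-lambda * t)."""
    return 1.0 - np.exp(-lam * t_hours)

for t in (6, 12, 24, 48):
    print(f"{t:>2} h after separation: {ingrowth_fraction(t):.1%} of equilibrium 90Y")

# In the background-dominated limit, the counting time needed to reach a fixed
# MDA scales roughly as 1 / ingrowth_fraction(t)**2, which is why delaying some
# measurements can shorten the total analysis time for a series of samples.
```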

  2. Comparison of serum pools and oral fluid samples for detection of porcine circovirus type 2 by quantitative real-time PCR in finisher pigs

    DEFF Research Database (Denmark)

    Nielsen, Gitte Blach; Nielsen, Jens Peter; Haugegaard, John

    2018-01-01

    Porcine circovirus type 2 (PCV2) diagnostics in live pigs often involves pooled serum and/or oral fluid samples for group-level determination of viral load by quantitative real-time polymerase chain reaction (qPCR). The purpose of the study was to compare the PCV2 viral load determined by qPCR of paired samples at the pen level of pools of sera (SP) from 4 to 5 pigs and the collective oral fluid (OF) from around 30 pigs corresponding to one rope put in the same pen. Pigs in pens of 2 finishing herds were sampled by cross-sectional (Herd 1) and cross-sectional with follow-up (Herd 2) study designs. In Herd 1, 50 sample pairs consisting of SP from 4 to 5 pigs and OF from around 23 pigs were collected. In Herd 2, 65 sample pairs consisting of 4 (SP) and around 30 (OF) pigs were collected 4 times at 3-week intervals. A higher proportion of PCV2-positive pens (86% vs. 80% and 100% vs. 91%) and higher...

  3. A Real-Time PCR Detection of Genus Salmonella in Meat and Milk Samples

    Directory of Open Access Journals (Sweden)

    Jaroslav Pochop

    2013-05-01

    Full Text Available The aim of this study was to follow the contamination of ready-to-eat milk and meat products with Salmonella spp. using the Step One real-time PCR. Classical microbiological methods for the detection of food-borne bacteria involve the use of pre-enrichment and/or specific enrichment, followed by the isolation of the bacteria on solid media and a final confirmation by biochemical and/or serological tests. We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and the SensiFAST SYBR Hi-ROX Kit for the real-time PCR. In the investigated samples without incubation we could detect a strain of Salmonella sp. in five out of twenty-three samples (swabs). This Step One real-time PCR assay is extremely useful for any laboratory in possession of a real-time PCR instrument. It is a fast, reproducible, simple, specific and sensitive way to detect nucleic acids, and it could be used in clinical diagnostic tests in the future. Our results indicated that the Step One real-time PCR assay developed in this study can sensitively detect Salmonella spp. in ready-to-eat food.

  4. Sampling Methodologies for Epidemiologic Surveillance of Men Who Have Sex with Men and Transgender Women in Latin America: An Empiric Comparison of Convenience Sampling, Time Space Sampling, and Respondent Driven Sampling

    OpenAIRE

    Clark, J. L.; Konda, K. A.; Silva-Santisteban, A.; Peinado, J.; Lama, J. R.; Kusunoki, L.; Perez-Brumer, A.; Pun, M.; Cabello, R.; Sebastian, J. L.; Suarez-Ognio, L.; Sanchez, J.

    2014-01-01

    Alternatives to convenience sampling (CS) are needed for HIV/STI surveillance of most-at-risk populations in Latin America. We compared CS, time space sampling (TSS), and respondent driven sampling (RDS) for recruitment of men who have sex with men (MSM) and transgender women (TW) in Lima, Peru. During concurrent 60-day periods from June-August, 2011, we recruited MSM/TW for epidemiologic surveillance using CS, TSS, and RDS. A total of 748 participants were recruited through CS, 233 through T...

  5. Design of a real-time hybrid controller

    NARCIS (Netherlands)

    Lim, K.W.; Preisig, H.A.; Rauch, H.E.

    1998-01-01

    This paper describes the framework of an automated supervisory control system realisation. It is developed to support rapid prototyping of real-time hybrid control systems, described using a simple and flexible set of text descriptors. The design is versatile in allowing the user to define ...

  6. Visualization of Time-Series Sensor Data to Inform the Design of Just-In-Time Adaptive Stress Interventions.

    Science.gov (United States)

    Sharmin, Moushumi; Raij, Andrew; Epstien, David; Nahum-Shani, Inbal; Beck, J Gayle; Vhaduri, Sudip; Preston, Kenzie; Kumar, Santosh

    2015-09-01

    We investigate needs, challenges, and opportunities in visualizing time-series sensor data on stress to inform the design of just-in-time adaptive interventions (JITAIs). We identify seven key challenges: massive volume and variety of data, complexity in identifying stressors, scalability of space, multifaceted relationship between stress and time, a need for representation at multiple granularities, interperson variability, and limited understanding of JITAI design requirements due to its novelty. We propose four new visualizations based on one million minutes of sensor data (n=70). We evaluate our visualizations with stress researchers (n=6) to gain first insights into their usability and usefulness in JITAI design. Our results indicate that spatio-temporal visualizations help identify and explain between- and within-person variability in stress patterns and contextual visualizations enable decisions regarding the timing, content, and modality of intervention. Interestingly, a granular representation is considered informative but noise-prone; an abstract representation is the preferred starting point for designing JITAIs.

  7. Time-dependent rheoforging of A6061 aluminum alloy on a mechanical servo press and the effects of forming conditions on homogeneity of rheoforged samples

    Directory of Open Access Journals (Sweden)

    Meng Yi

    2015-01-01

    Full Text Available The solid and liquid phases in a semisolid metal slurry exhibit different forming behaviours during deformation, resulting in products with inhomogeneous quality. A6061 aluminum alloy was forged in the semisolid state on a mechanical servo press with the capability of multistage compression. To improve the homogeneity of rheoforged samples, a time-dependent rheoforging strategy was designed. The distributions of the microstructure and mechanical properties of the samples manufactured under various experimental conditions were investigated. The A6061 samples forged in the temperature range from 625 to 628 °C, with a short holding time of 4 s and the upper die preheated to 300 °C, exhibited a homogeneous microstructure and mechanical properties. The homogeneity of the rheoforged samples resulted from the controllable free-motion capability of the mechanical servo press and the adjustable fluidity and viscosity of the semisolid slurry.

  8. Optoelectronic time-domain characterization of a 100 GHz sampling oscilloscope

    International Nuclear Information System (INIS)

    Füser, H; Baaske, K; Kuhlmann, K; Judaschke, R; Pierz, K; Bieler, M; Eichstädt, S; Elster, C

    2012-01-01

    We have carried out an optoelectronic measurement of the impulse response of an ultrafast sampling oscilloscope with a nominal bandwidth of 100 GHz within a time window of approximately 100 ps. Our experimental technique also considers frequency components above the cut-off frequency of higher order modes of the 1.0 mm coaxial line, which is shown to be important for the specification of the impulse response of ultrafast sampling oscilloscopes. Additionally, we have measured the reflection coefficient of the sampling head induced by the mismatch of the sampling circuit and the coaxial connector which is larger than 0.5 for certain frequencies. The uncertainty analysis has been performed using the Monte Carlo method of Supplement 1 to the 'Guide to the Expression of Uncertainty in Measurement' and correlations in the estimated impulse response have been determined. Our measurements extend previous work which deals with the characterization of 70 GHz oscilloscopes and the measurement of 100 GHz oscilloscopes up to the cut-off frequency of higher order modes

  9. Study on the NaOH/metakaolin ratio and crystallization time for zeolite a synthesis from kaolin using statistical design

    Energy Technology Data Exchange (ETDEWEB)

    Silva Filho, Severino Higino da; Bieseki, Lindiane; Pergher, Sibele Berenice Castella, E-mail: sibelepergher@gmail.com [Universidade Federal do Rio Grande do Norte (LABPEMOL/UFRN), Natal, RN (Brazil). Lab. de Peneiras Moleculares; Maia, Ana Aurea B.; Angelica, Romulo Simoes [Universidade Federal do Para (UFPA), Belem PA (Brazil); Treichel, Helen [Universidade Federal da Fronteira Sul (UFFS), Erechim, RS (Brazil)

    2017-05-15

    The NaOH/metakaolin ratio and crystallization time were studied for the synthesis of zeolite NaA from a kaolin sample from a Capim mine. The tests were carried out using a statistical design with axial points and replication of the central point. The samples obtained were characterized by X-ray diffraction (XRD), scanning electron microscopy and chemical analysis using an EPMA microprobe. The results showed that there is a relationship between the amount of NaOH added and the crystallization time. The tests carried out using the lowest NaOH/metakaolin ratio (0.5) and the shortest time (4 h) produced a non-crystalline material. On the other hand, increasing the NaOH/metakaolin ratio and the crystallization time led to the formation of a highly ordered NaA phase, but with a sodalite phase present as an impurity. (author)

  10. Time, Interaction, and Design in Support of a Good Life

    DEFF Research Database (Denmark)

    Froes, Isabel Cristina G.; Laaksolahti, Jarmo Matti; Witzner Hansen, Dan

    This position paper outlines a research program for the IxD Research Group at the IT University of Copenhagen. As such, it presents a range of questions addressing different corners of the question of time in IxD from the point of view of HCI, social interaction, material expressions, and software development. Our ongoing investigation of Interaction Design for a Good Life (http://itu.dk/IxDLab/) has uncovered a broad diversity of perspectives on how to unpack ‘a good life’ through research and design, and indeed what constitutes a good life to begin with. Through our investigations, time has surfaced as pivotal to our budding understanding of elements of a good life and a useful framing for future investigations. Time is also an important topic for interaction design, because it is at the core of interaction, the practice of design, and, in many ways, our use and relationships with technological ...

  11. ADL: a graphical design language for real time parallel applications

    NARCIS (Netherlands)

    M.R. van Steen; T. Vogel; A. ten Dam

    1993-01-01

    Designing parallel applications is generally experienced as a tedious and difficult task, especially when hard real-time performance requirements have to be met. This paper discusses on-going work concerning the construction of a Design Entry System which supports the design phase of ...

  12. Cross-sample entropy of foreign exchange time series

    Science.gov (United States)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The calculation method for the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series in the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before the crisis. Comparison with the correlation coefficient shows that cross-SampEn is superior for describing the correlation between time series.
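
    As a concrete illustration, below is a compact sketch of cross-sample entropy in the usual Richman-Moorman style: count template matches of length m between the two series, then of length m+1, and take the negative log of their ratio. The template length m = 2 and tolerance r = 0.2 (in units of standard deviations) are conventional defaults, and the confidence-interval extension described in the abstract is not reproduced here.

        import numpy as np

        def cross_sampen(u, v, m=2, r=0.2):
            # Series are standardized so the tolerance r is in standard-deviation units
            u = (u - u.mean()) / u.std()
            v = (v - v.mean()) / v.std()
            n = min(len(u), len(v))

            def count_matches(mm):
                # All length-mm templates of u compared against all templates of v
                ut = np.array([u[i:i + mm] for i in range(n - mm)])
                vt = np.array([v[j:j + mm] for j in range(n - mm)])
                d = np.max(np.abs(ut[:, None, :] - vt[None, :, :]), axis=2)  # Chebyshev
                return np.sum(d <= r)

            b, a = count_matches(m), count_matches(m + 1)
            return -np.log(a / b)   # higher value = greater asynchrony

        rng = np.random.default_rng(0)
        x = rng.standard_normal(500)
        y = 0.7 * x + 0.3 * rng.standard_normal(500)       # partially synchronized series
        print(cross_sampen(x, y), cross_sampen(x, rng.standard_normal(500)))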

  13. Wuchereria bancrofti in Tanzania: microfilarial periodicity and effect of blood sampling time on microfilarial intensities

    DEFF Research Database (Denmark)

    Simonsen, Poul Erik; Niemann, L.; Meyrowitsch, Dan Wolf

    1997-01-01

    The circadian periodicity of Wuchereria bancrofti microfilarial (mf) intensities in peripheral blood was analysed in a group of infected individuals from an endemic community in north-eastern Tanzania. The mf density was quantified at two-hourly intervals for 24 hours. A clear nocturnal periodicity was found. ... The effect of blood sampling before peak time is discussed, and the importance of taking sampling time into consideration when analysing data from epidemiological studies is emphasized. A simple method is devised which can be used to adjust for the influence of time on mf intensities, in studies where accurate information on mf intensities is necessary, and where it is impossible to obtain all samples at peak time.

  14. OSIRIS-REx Touch-and-Go (TAG) Mission Design for Asteroid Sample Collection

    Science.gov (United States)

    May, Alexander; Sutter, Brian; Linn, Timothy; Bierhaus, Beau; Berry, Kevin; Mink, Ron

    2014-01-01

    The Origins Spectral Interpretation Resource Identification Security Regolith Explorer (OSIRIS-REx) mission is a NASA New Frontiers mission launching in September 2016 to rendezvous with the near-Earth asteroid Bennu in October 2018. After several months of proximity operations to characterize the asteroid, OSIRIS-REx flies a Touch-And-Go (TAG) trajectory to the asteroid's surface to collect at least 60 g of pristine regolith sample for Earth return. This paper provides mission and flight system overviews, with more details on the TAG mission design and key events that occur to safely and successfully collect the sample. An overview of the navigation performed relative to a chosen sample site, along with the maneuvers to reach the desired site is described. Safety monitoring during descent is performed with onboard sensors providing an option to abort, troubleshoot, and try again if necessary. Sample collection occurs using a collection device at the end of an articulating robotic arm during a brief five second contact period, while a constant force spring mechanism in the arm assists to rebound the spacecraft away from the surface. Finally, the sample is measured quantitatively utilizing the law of conservation of angular momentum, along with qualitative data from imagery of the sampling device. Upon sample mass verification, the arm places the sample into the Stardust-heritage Sample Return Capsule (SRC) for return to Earth in September 2023.
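
    The angular-momentum idea can be illustrated with a back-of-the-envelope sketch: if an identical thruster firing imparts the same angular impulse before and after TAG, the post-sampling spin is slightly slower, and the inertia change at a known arm radius yields the sample mass. All numbers below are hypothetical, and the real estimation accounts for effects (propellant usage, the full inertia tensor) that this simplification ignores.

        # Simplified sketch: identical firings impart the same angular momentum L
        # before and after sampling, so I = L / omega in both cases.
        L_imp = 100.0              # N*m*s angular impulse per firing (hypothetical)
        omega_before = 0.0200000   # rad/s spin observed before sampling
        omega_after = 0.0199964    # rad/s spin observed after (slower: inertia grew)
        r_arm = 3.0                # m, sample head distance from spin axis (hypothetical)

        I_before = L_imp / omega_before
        I_after = L_imp / omega_after

        # Treat the collected regolith as a point mass at the end of the extended arm
        m_sample = (I_after - I_before) / r_arm**2
        print(f"inferred sample mass: {m_sample * 1000:.0f} g")   # ~100 g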

  15. Towards the production of reliable quantitative microbiological data for risk assessment: Direct quantification of Campylobacter in naturally infected chicken fecal samples using selective culture and real-time PCR

    DEFF Research Database (Denmark)

    Garcia Clavero, Ana Belén; Vigre, Håkan; Josefsen, Mathilde Hasseldam

    2015-01-01

    of Campylobacter by real-time PCR was performed using standard curves designed for two different DNA extraction methods: Easy-DNA™ Kit from Invitrogen (Easy-DNA) and NucliSENS® MiniMAG® from bioMérieux (MiniMAG). Results indicated that the estimation of the numbers of Campylobacter present in chicken fecal samples...... and for the evaluation of control strategies implemented in poultry production. The aim of this study was to compare estimates of the numbers of Campylobacter spp. in naturally infected chicken fecal samples obtained using direct quantification by selective culture and by real-time PCR. Absolute quantification....... Although there were differences in terms of estimates of Campylobacter numbers between the methods and samples, the differences between culture and real-time PCR were not statistically significant for most of the samples used in this study....

  16. Robotic Irradiated Sample Handling Concept Design in Reactor TRIGA PUSPATI using Simulation Software

    International Nuclear Information System (INIS)

    Mohd Khairulezwan Abdul Manan; Mohd Sabri Minhat; Ridzuan Abdul Mutalib; Zareen Khan Abdul Jalil Khan; Nurfarhana Ayuni Joha

    2015-01-01

    This paper introduces the concept design of a Robotic Irradiated Sample Handling Machine using a graphical software application, designed as a general, flexible and open platform for work on robotics. Webots has proven to be a useful tool in many fields of robotics, such as manipulator programming, mobile robot control (wheeled, sub-aquatic and walking robots), distance computation, sensor simulation, collision detection, motion planning and so on. Webots is used as the common interface for all the applications. Some practical cases and applications of this concept design are illustrated in the paper to present the possibilities of this simulation software. (author)

  17. Sitting Time and Body Mass Index, in a Portuguese Sample of Men: Results from the Azorean Physical Activity and Health Study (APAHS)

    Directory of Open Access Journals (Sweden)

    Rute Santos

    2010-03-01

    Full Text Available The aim of this study was to verify the relation between body mass index (BMI) and sitting time in a sample of 4,091 Azorean men. BMI was calculated from self-reported weight and height. Total physical activity (PA) time and total sitting time were assessed with the IPAQ (short version). Linear regression analysis showed that total sitting time (hours/day) was positively associated with BMI (B = 0.078; p < 0.001) after adjustment for age, meal frequency, alcohol and tobacco consumption, island of residence, education level and total PA time. Although the cross-sectional design precludes us from establishing causality, our findings emphasize the importance of reducing sedentary behavior to decrease the risk of obesity.

  18. A novel heterogeneous training sample selection method on space-time adaptive processing

    Science.gov (United States)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground target detection performance of space-time adaptive processing (STAP) decreases when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. In order to solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts the training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance, so as to reject the contaminated training samples. Thirdly, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and mean-Hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted in order of value and the training samples with the larger values are preferentially selected to realize the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
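
    A rough sketch of the distance-based ranking is given below. It treats each complex snapshot as a set of (Re, Im) points, scores training samples by the modified (mean) Hausdorff distance to the cell under test after both are projected orthogonally to the target steering vector, and keeps the most distant ones. The point-set representation is an illustrative choice rather than the paper's exact formulation, and the surrounding STAP chain and GIP comparison are omitted.

        import numpy as np

        def mean_hausdorff(A, B):
            # Modified (mean) Hausdorff distance between point sets A, B (rows = points)
            d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
            return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

        def project_out(x, s):
            # Project snapshot x onto the subspace orthogonal to steering vector s
            s = s / np.linalg.norm(s)
            return x - s * np.vdot(s, x)

        def select_training(cut, samples, steer, k):
            # Rank snapshots by distance to the target-free CUT; keep the k largest
            as_points = lambda z: np.column_stack([z.real, z.imag])
            cut_p = as_points(project_out(cut, steer))
            scores = [mean_hausdorff(as_points(project_out(s, steer)), cut_p)
                      for s in samples]
            order = np.argsort(scores)[::-1]
            return [samples[i] for i in order[:k]]

        rng = np.random.default_rng(0)
        snaps = [rng.standard_normal(64) + 1j * rng.standard_normal(64) for _ in range(20)]
        steer = np.exp(1j * np.pi * 0.2 * np.arange(64))
        kept = select_training(snaps[0], snaps[1:], steer, k=8)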

  19. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  20. Issues in environmental survey design

    International Nuclear Information System (INIS)

    Iachan, R.

    1989-01-01

    Several environmental survey design issues are discussed and illustrated with surveys designed by Research Triangle Institute statisticians. Issues related to sampling and nonsampling errors are illustrated for indoor air quality surveys, radon surveys, pesticide surveys, and occupational and personal exposure surveys. Sample design issues include the use of auxiliary information (e.g. for stratification), and sampling in time. We also discuss the reduction and estimation of nonsampling errors, including nonresponse and measurement bias

  1. Sampling design for the Study of Cardiovascular Risks in Adolescents (ERICA)

    Directory of Open Access Journals (Sweden)

    Mauricio Teixeira Leite de Vasconcellos

    2015-05-01

    Full Text Available The Study of Cardiovascular Risk in Adolescents (ERICA) aims to estimate the prevalence of cardiovascular risk factors and metabolic syndrome in adolescents (12-17 years) enrolled in public and private schools of the 273 municipalities with over 100,000 inhabitants in Brazil. The study population was stratified into 32 geographical strata (27 capitals and five sets with other municipalities in each macro-region of the country) and a sample of 1,251 schools was selected with probability proportional to size. In each school three combinations of shift (morning and afternoon) and grade were selected, and within each of these combinations, one class was selected. All eligible students in the selected classes were included in the study. The design sampling weights were calculated as the product of the reciprocals of the inclusion probabilities in each sampling stage, and were later calibrated considering the projections of the numbers of adolescents enrolled in schools located in the geographical strata, by sex and age.
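
    To make the weighting step concrete, the toy calculation below follows the recipe in the abstract: a design weight equal to the product of the reciprocals of the stage-wise inclusion probabilities, then a calibration that rescales the weights to an external enrollment projection. All probabilities and totals are invented for illustration (including the assumption of 6 possible shift-grade combinations).

        import numpy as np

        # Stage inclusion probabilities: school, shift-grade combination, class
        p_school, p_combo, p_class = 0.10, 3 / 6, 1 / 4
        design_weight = 1.0 / (p_school * p_combo * p_class)   # 80 students per respondent

        # Post-stratification calibration: scale weights so they sum to the projected
        # number of enrolled adolescents in the stratum (by sex and age in the study)
        respondents = np.full(50, design_weight)
        projected_total = 4500.0
        calibrated = respondents * projected_total / respondents.sum()
        print(design_weight, calibrated[0])   # 80.0 -> 90.0 after calibration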

  2. Evaluation design of New York City's regulations on nutrition, physical activity, and screen time in early child care centers.

    Science.gov (United States)

    Breck, Andrew; Goodman, Ken; Dunn, Lillian; Stephens, Robert L; Dawkins, Nicola; Dixon, Beth; Jernigan, Jan; Kakietek, Jakub; Lesesne, Catherine; Lessard, Laura; Nonas, Cathy; O'Dell, Sarah Abood; Osuji, Thearis A; Bronson, Bernice; Xu, Ye; Kettel Khan, Laura

    2014-10-16

    This article describes the multi-method cross-sectional design used to evaluate New York City Department of Health and Mental Hygiene's regulations of nutrition, physical activity, and screen time for children aged 3 years or older in licensed group child care centers. The Center Evaluation Component collected data from a stratified random sample of 176 licensed group child care centers in New York City. Compliance with the regulations was measured through a review of center records, a facility inventory, and interviews of center directors, lead teachers, and food service staff. The Classroom Evaluation Component included an observational and biometric study of a sample of approximately 1,400 children aged 3 or 4 years attending 110 child care centers and was designed to complement the center component at the classroom and child level. The study methodology detailed in this paper may aid researchers in designing policy evaluation studies that can inform other jurisdictions considering similar policies.

  3. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

    Full Text Available In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.
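
    For reference, the criterion being approximated is the usual BIC; one common design-based route, shown here as a sketch rather than the article's exact construction, replaces the sample log-likelihood with a survey-weighted pseudo-log-likelihood:

        \mathrm{BIC} = -2\ln\hat{L} + k\ln n,
        \qquad
        \ln\hat{L}_w = \sum_{i\in s} w_i \ln f\!\left(y_i \mid \hat{\theta}\right)

    with k the number of parameters, n the (effective) sample size, and w_i the survey weights for the sampled units i in s.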

  4. Adaptation of G-TAG Software for Validating Touch-and-Go Comet Surface Sampling Design Methodology

    Science.gov (United States)

    Mandic, Milan; Acikmese, Behcet; Blackmore, Lars

    2011-01-01

    The G-TAG software tool was developed under the R&TD on Integrated Autonomous Guidance, Navigation, and Control for Comet Sample Return, and represents a novel, multi-body dynamics simulation software tool for studying TAG sampling. The G-TAG multi-body simulation tool provides a simulation environment in which a Touch-and-Go (TAG) sampling event can be extensively tested. TAG sampling requires the spacecraft to descend to the surface, contact the surface with a sampling collection device, and then to ascend to a safe altitude. The TAG event lasts only a few seconds but is mission-critical with potentially high risk. Consequently, there is a need for the TAG event to be well characterized and studied by simulation and analysis in order for the proposal teams to converge on a reliable spacecraft design. This adaptation of the G-TAG tool was developed to support the Comet Odyssey proposal effort, and is specifically focused to address comet sample return missions. In this application, the spacecraft descends to and samples from the surface of a comet. Performance of the spacecraft during TAG is assessed based on survivability and sample collection performance. For the adaptation of the G-TAG simulation tool to comet scenarios, models are developed that accurately describe the properties of the spacecraft, approach trajectories, and descent velocities, as well as the models of the external forces and torques acting on the spacecraft. The adapted models of the spacecraft, descent profiles, and external sampling forces/torques were more sophisticated and customized for comets than those available in the basic G-TAG simulation tool. Scenarios implemented include the study of variations in requirements, spacecraft design (size, locations, etc. of the spacecraft components), and the environment (surface properties, slope, disturbances, etc.). The simulations, along with their visual representations using G-View, contributed to the Comet Odyssey New Frontiers proposal

  5. A study on HCI design strategy using emergent features and response time

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Chang, Soon Heung; Park, Jin Gyun

    2001-01-01

    The existing user interface design process has a weakness: there is no feedback information and no quantitative information between sub-processes. If such information were available in the design process, the design cycle time would be decreased and user satisfaction with the HCI would be more easily achieved. In this study, a new design process with feedback information and quantitative information was proposed using emergent features and user response time. The proposed methodology consists of three main parts. The first part calculates the distinctiveness of a user interface, or an expanded user interface, taking emergent features into consideration. The second part expands a prototype user interface with design options to meet the design requirements, using directed structure graph (nodal graph) theory. The last part converts a non-realized value (distinctiveness) into a realized value (response time) using a response-time database or a response-time correlation in the form of the Hick-Hyman law. From the present validations, the usefulness of the proposed methodology was confirmed by simple testing. It was found that the treatment of emergent features should be improved to better reflect real user interfaces. For a reliable response-time database, extensive end-user experiments are necessary. The expansion algorithm and the representation of qualitative information should also be improved for a more efficient design process.
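
    The Hick-Hyman law mentioned above relates choice reaction time to the amount of information in the decision; a one-function illustration follows, with coefficients that are purely illustrative rather than taken from the paper's database.

        import math

        def hick_hyman_rt(n_choices, a=0.2, b=0.15):
            # RT = a + b * log2(n + 1); a and b are device/user constants
            # (the values here are illustrative assumptions)
            return a + b * math.log2(n_choices + 1)

        print(hick_hyman_rt(7))   # ~0.65 s for a choice among 8 alternatives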

  6. MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling

    Directory of Open Access Journals (Sweden)

    Kitchen James L

    2012-11-01

    Full Text Available Background: Next generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that implements primer reuse to design degenerate primers around SNPs, thus finding the fewest necessary primers at the lowest cost whilst maintaining acceptable coverage, providing a cost-effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. Results: After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved lower primer costs per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. Conclusions: MCMC-ODPR is a useful tool for designing primers at various melting temperatures with good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.

  7. MCMC-ODPR: primer design optimization using Markov Chain Monte Carlo sampling.

    Science.gov (United States)

    Kitchen, James L; Moore, Jonathan D; Palmer, Sarah A; Allaby, Robin G

    2012-11-05

    Next generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that implements primer reuse to design degenerate primers around SNPs, thus finding the fewest necessary primers at the lowest cost whilst maintaining acceptable coverage, providing a cost-effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved lower primer costs per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. MCMC-ODPR is a useful tool for designing primers at various melting temperatures with good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.
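
    The core Metropolis-Hastings loop behind this kind of optimizer is small. The sketch below minimizes a toy "number of distinct primers" cost over per-amplicon primer choices, standing in for MCMC-ODPR's real objective of primer nucleotides and degeneracy; the proposal move, cost function and data are invented for illustration.

        import math, random

        def metropolis_optimize(state, propose, cost, iters=20000, temp=0.5):
            # Accept cost-increasing moves with probability exp(-dC/temp); track the best
            c = cost(state)
            best, best_c = state, c
            for _ in range(iters):
                cand = propose(state)
                dc = cost(cand) - c
                if dc <= 0 or random.random() < math.exp(-dc / temp):
                    state, c = cand, c + dc
                    if c < best_c:
                        best, best_c = state, c
            return best, best_c

        # Toy problem: 6 amplicons, each amplifiable by two candidate primers;
        # reusing primers across amplicons lowers the count of distinct primers.
        choices = [[0, 1], [1, 2], [0, 2], [2, 3], [1, 3], [0, 3]]

        def propose(s):
            s2 = list(s)
            i = random.randrange(len(s2))
            s2[i] = random.choice(choices[i])
            return s2

        cost = lambda s: len(set(s))
        print(metropolis_optimize([opts[0] for opts in choices], propose, cost))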

  8. Defining an optimum pumping-time requirement for sampling ground-water wells on the Hanford site

    International Nuclear Information System (INIS)

    Scharnhorst, N.L.

    1982-04-01

    The objective was to determine the optimum time period necessary to pump water from a well before a representative sample of the ground water can be obtained. It was assumed that a representative sample has been collected if the concentration of chemical parameters is the same in a number of samples taken consecutively, so that the concentration of parameters does not vary with time of collection. Ground-water samples used in this project were obtained by pumping selected wells on the Hanford Site. At each well, samples were taken at two minute intervals, and on each sample various chemical analyses were performed. Samples were checked for pH, sulfate, iron, specific conductivity, chloride, nitrate and alkalinity. The data showed that pH, alkalinity, sulfate and specific conductivity levels stabilized almost immediately after pumping of the well began. In many wells, the chloride and nitrate levels were unstable throughout the 38-minute sampling period. Iron levels, however, did not behave in either fashion. The concentration of iron in the samples was high when pumping began but dropped rapidly as pumping continued. The best explanation for this is that iron is flushed from the sides of the casing into the well when pumping begins. After several minutes of pumping, most of the dissolved iron is washed from the well casing and the iron concentration reaches a stable plateau representative of the iron concentration in the ground water. Since iron concentration takes longest to stabilize, the optimum pumping time for a well is based on the iron stabilization time for that well
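
    The stabilization criterion lends itself to a simple check: declare the optimum pumping time reached once several consecutive readings agree within a tolerance. The tolerance, run length and data below are illustrative, not values from the report.

        def optimum_pumping_time(times, conc, rel_tol=0.05, runs=3):
            # First time at which `runs` consecutive readings agree within rel_tol,
            # i.e. the parameter has stopped varying with time of collection
            for i in range(len(conc) - runs + 1):
                window = conc[i:i + runs]
                if (max(window) - min(window)) <= rel_tol * max(window):
                    return times[i + runs - 1]
            return None

        # Iron flushed from the casing: high at first, dropping to a stable plateau
        t = [2, 4, 6, 8, 10, 12, 14, 16]                       # minutes of pumping
        fe = [1.40, 0.90, 0.55, 0.30, 0.21, 0.20, 0.20, 0.19]  # mg/L (made-up values)
        print(optimum_pumping_time(t, fe))                     # -> 14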

  9. Asymptotic theory for the sample covariance matrix of a heavy-tailed multivariate time series

    DEFF Research Database (Denmark)

    Davis, Richard A.; Mikosch, Thomas Valentin; Pfaffel, Olivier

    2016-01-01

    In this paper we give an asymptotic theory for the eigenvalues of the sample covariance matrix of a multivariate time series. The time series constitutes a linear process across time and between components. The input noise of the linear process has regularly varying tails with index α∈(0,4); in particular, the time series has infinite fourth moment. We derive the limiting behavior of the largest eigenvalues of the sample covariance matrix and show point process convergence of the normalized eigenvalues. The limiting process has an explicit form involving points of a Poisson process and eigenvalues of a non-negative definite matrix. Based on this convergence we derive limit theory for a host of other continuous functionals of the eigenvalues, including the joint convergence of the largest eigenvalues, the joint convergence of the largest eigenvalue and the trace of the sample covariance matrix ...

  10. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
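
    As a worked example of the a priori calculation the article describes, the sketch below solves for n in a two-independent-groups design with a continuous outcome, given a standardized effect size (which bundles the magnitude and variance of the effect). It uses the normal approximation, which slightly understates the exact t-based n, and assumes SciPy is available.

        import math
        from scipy.stats import norm

        def n_per_group(d, alpha=0.05, power=0.80):
            # n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2 per group, two-sided test
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            return math.ceil(2 * ((z_a + z_b) / d) ** 2)

        print(n_per_group(0.5))   # medium effect (d = 0.5) -> 63 per group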

  11. Baseline Design Compliance Matrix for the Type 4 In Situ Vapor Samplers and Supernate and Sludge and Soft Saltcake Grab Sampling

    International Nuclear Information System (INIS)

    BOGER, R.M.

    2000-01-01

    The DOE has identified a need to sample vapor space, exhaust ducts, supernate, sludge, and soft saltcake in waste tanks that store radioactive waste. This document provides the Design Compliance Matrix (DCM) for the Type 4 In-Situ Vapor Sampling (ISVS) system and the Grab Sampling System that are used for completing this type of sampling function. The DCM identifies the design requirements and the source of the requirements for the Type 4 ISVS system and the Grab Sampling system. The DCM is a single-source compilation of design requirements for sampling and sampling support equipment and supports the configuration management of these systems

  12. Studying time to pregnancy by use of a retrospective design

    DEFF Research Database (Denmark)

    Joffe, Michael; Key, Jane; Best, Nicky

    2005-01-01

    Biologic fertility can be measured using time to pregnancy (TTP). Retrospective designs, although lacking detailed timed information about behavior and exposure, are useful since they have a well-defined target population, often have good response rates, and are simpler and less expensive ... at the beginning of unprotected intercourse. More complete inference is possible if the study design covers the whole population, not just those who achieve a pregnancy.

  13. Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data

    Directory of Open Access Journals (Sweden)

    Silvia de Haan-Rietdijk

    2017-10-01

    Full Text Available The Experience Sampling Method is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that it is often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models, for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available.
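
    To see the practical point, the sketch below simulates an AR(1)-type (Ornstein-Uhlenbeck) process at ESM-like unequal intervals, then contrasts a naive DT lag-1 estimate, which lumps all gaps together, with a CT fit that profiles the likelihood over the true interval lengths. The parameter values and measurement design are invented for illustration and do not reproduce the paper's simulation study.

        import numpy as np
        rng = np.random.default_rng(1)

        theta, sigma = 0.10, 1.0                 # CT rate (per hour) and stationary SD
        gaps = np.tile([60, 90, 120, 45, 75, 600], 20).astype(float)  # minutes; 600 = night

        x = [0.0]
        for dt in gaps:                          # exact OU transition between observations
            phi = np.exp(-theta * dt / 60.0)
            x.append(phi * x[-1] + sigma * np.sqrt(1 - phi**2) * rng.standard_normal())
        x = np.array(x)

        phi_naive = np.corrcoef(x[:-1], x[1:])[0, 1]  # DT estimate ignoring interval lengths

        def loglik(th):                          # CT likelihood using the true intervals
            ph = np.exp(-th * gaps / 60.0)
            var = sigma**2 * (1 - ph**2)
            resid = x[1:] - ph * x[:-1]
            return -0.5 * np.sum(np.log(var) + resid**2 / var)

        grid = np.linspace(0.01, 1.0, 200)
        theta_hat = grid[np.argmax([loglik(th) for th in grid])]
        print(phi_naive, np.exp(-theta))         # naive phi vs true one-hour phi
        print(theta_hat, theta)                  # the CT fit recovers the rate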

  14. MUP, CEC-DES, STRADE. Codes for uncertainty propagation, experimental design and stratified random sampling techniques

    International Nuclear Information System (INIS)

    Amendola, A.; Astolfi, M.; Lisanti, B.

    1983-01-01

    The report describes how to use the codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design) for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems

  15. [Design of standard voice sample text for subjective auditory perceptual evaluation of voice disorders].

    Science.gov (United States)

    Li, Jin-rang; Sun, Yan-yan; Xu, Wen

    2010-09-01

    To design a speech voice sample text containing all phonemes in Mandarin for subjective auditory perceptual evaluation of voice disorders. The design principles for the speech voice sample text were: the short text should include the 21 initials and 39 finals, so as to cover all the phonemes in Mandarin, and the short text should be meaningful. A short text was produced. It had 155 Chinese words, and included 21 initials and 38 finals (the final ê was not included because it is rarely used in Mandarin). The text also covered 17 light tones and one "Erhua". The constituent ratios of the initials and finals presented in this short text were statistically similar to those in Mandarin according to the method of similarity of the sample and population (r = 0.742, P ...) ... were statistically not similar to those in Mandarin (r = 0.731, P > 0.05). A speech voice sample text with all phonemes in Mandarin was produced. The constituent ratios of the initials and finals presented in this short text are similar to those in Mandarin. Its value for subjective auditory perceptual evaluation of voice disorders needs further study.

  16. A Time of Flight Fast Neutron Imaging System Design Study

    Science.gov (United States)

    Canion, Bonnie; Glenn, Andrew; Sheets, Steven; Wurtz, Ron; Nakae, Les; Hausladen, Paul; McConchie, Seth; Blackston, Matthew; Fabris, Lorenzo; Newby, Jason

    2017-09-01

    LLNL and ORNL are designing an active/passive fast neutron imaging system that is flexible with respect to non-ideal detector positioning. It is often not possible to move an inspection object in fieldable imager applications such as safeguards, arms control treaty verification, and emergency response. In particular, we are interested in scenarios in which inspectors do not have access to all sides of an inspection object, due to interfering objects or walls. This paper will present the results of a simulation-based design parameter study that will determine the optimum system design parameters for a fieldable system performing time-of-flight-based imaging analysis. The imaging analysis is based on the use of an associated particle imaging deuterium-tritium (API DT) neutron generator to obtain the time of flight of radiation induced within an inspection object. This design study will investigate the optimum design parameters for such a system (e.g., detector size, ideal placement, etc.), as well as the upper and lower feasible design parameters within which the system can be expected to produce results in a reasonable amount of time (e.g., minimum/maximum detector efficiency, detector standoff, etc.). Ideally, the final prototype from this project will be capable of using full-access techniques, such as transmission imaging, when the measurement circumstances allow, but with the additional capability of producing results at reduced accessibility.

  17. Designing carbon markets. Part I: Carbon markets in time

    International Nuclear Information System (INIS)

    Fankhauser, Samuel; Hepburn, Cameron

    2010-01-01

    This paper analyses the design of carbon markets in time (i.e., intertemporally). It is part of a twin set of papers that ask, starting from first principles, what an optimal global carbon market would look like by around 2030. Our focus is on firm-level cap-and-trade systems, although much of what we say would also apply to government-level trading and carbon offset schemes. We examine the 'first principles' of temporal design that would help to maximise flexibility and to minimise costs, including banking and borrowing and other mechanisms to provide greater carbon price predictability and credibility over time.

  18. Derivation of time dependent design-values for SNR 300 structural material

    International Nuclear Information System (INIS)

    Lorenz, H.; Breitling, H.; de Heesen, E.

    1976-01-01

    Time-dependent design values were derived from long-term creep rupture data for steel X 6 CrNi 1811 in the unwelded and welded condition. The design values had to satisfy the ASME CC 1592 criteria with respect to creep rupture strength, time to reach 1% strain and transition to tertiary creep, as well as the requirement of German regulatory rules to properly account for weld behaviour. For the evaluation and extrapolation, two proven computer programmes were used. The design data derived under consideration of weld joints show relatively good agreement with the values of ASME CC 1592. Consideration of welds leads to lower design values above 550 °C and 5x10³ h, with the difference between rolled and weld material becoming larger with increasing time and temperature. (author)

  19. Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling

    Science.gov (United States)

    Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji

    We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.

  20. Statistical sampling approaches for soil monitoring

    NARCIS (Netherlands)

    Brus, D.J.

    2014-01-01

    This paper describes three statistical sampling approaches for regional soil monitoring: a design-based, a model-based and a hybrid approach. In the model-based approach a space-time model is exploited to predict global statistical parameters of interest such as the space-time mean. In the hybrid ...

  1. The economic analysis of power market architectures: application to real-time market design

    International Nuclear Information System (INIS)

    Saguan, M.

    2007-04-01

    This work contributes to the economic analysis of power market architectures. A modular framework is used to separate market design problems into different modules. The work's goal is to study real-time market design. A two-stage market equilibrium model is used to analyse the two main real-time designs: the 'market' and the 'mechanism' (with penalty). Numerical simulations show that the design applied in real time is not neutral vis-à-vis the sequence of energy markets and the dynamics of competition. Designs using penalties (mechanisms) cause distortions and inefficiencies and can create barriers to entry. The size of the distortions is determined by the temporal position of the gate that closes the forward markets. This model has also allowed us to show the key role of real-time integration between zones and the importance of good harmonization between the real-time designs of each zone. (author)

  2. Design and performance of a multi-channel, multi-sampling, PSD-enabling integrated circuit

    Energy Technology Data Exchange (ETDEWEB)

    Engel, G.L., E-mail: gengel@siue.ed [Department of Electrical and Computer Engineering, VLSI Design Research Laboratory, Southern Illinois University Edwardsville, Engineering Building, Room 3043 Edwardsville, IL 62026 1081 (United States); Hall, M.J.; Proctor, J.M. [Department of Electrical and Computer Engineering, VLSI Design Research Laboratory, Southern Illinois University Edwardsville, Engineering Building, Room 3043 Edwardsville, IL 62026 1081 (United States); Elson, J.M.; Sobotka, L.G.; Shane, R.; Charity, R.J. [Departments of Chemistry and Physics, Washington University, Saint Louis, MO 63130 (United States)

    2009-12-21

    This paper presents the design and test results of an eight-channel prototype integrated circuit chip intended to greatly simplify the pulse-processing electronics needed for large arrays of scintillation detectors. Because the chip design employs (user-controlled) multi-region charge integration, particle identification is incorporated into the basic design. Each channel on the chip also contains a time-to-voltage converter which provides relative time information. The pulse-height integrals and the relative time are all stored on capacitors and are either reset, after a user-controlled time, or sequentially read out if acquisition of the event is desired. Each of the three pulse-height sub-channels consists of a gated integrator with eight programmable charging rates and an externally programmable gate generator that defines the start (with four time ranges) and width (with four time ranges) of the gate relative to an external discriminator signal. The chip supports three triggering modes, two time ranges, two power modes, and produces four sparsified analog pulse trains (three for the integrators and another for the time) with synchronized addresses for off-chip digitization with a pipelined ADC. The eight-channel prototype chip occupies an area of 2.8 mm x 5.7 mm, dissipates 60 mW (low-power mode), and was fabricated in the AMI 0.5-μm process (C5N).

  3. Design and performance of a multi-channel, multi-sampling, PSD-enabling integrated circuit

    International Nuclear Information System (INIS)

    Engel, G.L.; Hall, M.J.; Proctor, J.M.; Elson, J.M.; Sobotka, L.G.; Shane, R.; Charity, R.J.

    2009-01-01

    This paper presents the design and test results of an eight-channel prototype integrated circuit chip intended to greatly simplify the pulse-processing electronics needed for large arrays of scintillation detectors. Because the chip design employs (user-controlled) multi-region charge integration, particle identification is incorporated into the basic design. Each channel on the chip also contains a time-to-voltage converter which provides relative time information. The pulse-height integrals and the relative time are all stored on capacitors and are either reset, after a user-controlled time, or sequentially read out if acquisition of the event is desired. Each of the three pulse-height sub-channels consists of a gated integrator with eight programmable charging rates and an externally programmable gate generator that defines the start (with four time ranges) and width (with four time ranges) of the gate relative to an external discriminator signal. The chip supports three triggering modes, two time ranges, two power modes, and produces four sparsified analog pulse trains (three for the integrators and another for the time) with synchronized addresses for off-chip digitization with a pipelined ADC. The eight-channel prototype chip occupies an area of 2.8 mm x 5.7 mm, dissipates 60 mW (low-power mode), and was fabricated in the AMI 0.5-μm process (C5N).
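
    The multi-region charge integration that the chip performs in analog hardware can be illustrated digitally: integrate the pulse over a total gate and over a delayed tail gate, and use the tail-to-total ratio to separate particle types. The pulse shapes and gate times below are generic scintillator-like stand-ins, not the chip's actual settings.

        import numpy as np

        def psd_ratio(pulse, t, gate_start, fast_end, total_end):
            # Two-region charge integration: tail charge divided by total charge
            total = pulse[(t >= gate_start) & (t < total_end)].sum()
            tail = pulse[(t >= fast_end) & (t < total_end)].sum()
            return tail / total

        # Toy scintillator pulses: fast + slow exponential components (arbitrary units)
        t = np.arange(0, 300, 2.0)   # ns, sampling grid
        def pulse(fast_frac, tau_fast=5.0, tau_slow=80.0):
            return fast_frac * np.exp(-t / tau_fast) + (1 - fast_frac) * np.exp(-t / tau_slow)

        gamma_like = pulse(0.9)      # mostly fast light -> small tail ratio
        neutron_like = pulse(0.7)    # more slow light -> larger tail ratio
        for p in (gamma_like, neutron_like):
            print(psd_ratio(p, t, gate_start=0.0, fast_end=20.0, total_end=300.0))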

  4. Gas and liquid sampling for closed canisters in K-West basins - functional design criteria

    International Nuclear Information System (INIS)

    Pitkoff, C.C.

    1994-01-01

    The purpose of this document is to provide functions and requirements for the design and fabrication of equipment for sampling closed canisters in the K-West basin. The samples will be used to help determine the state of the fuel elements in closed canisters. The characterization information obtained will support evaluation and development of processes required for safe storage and disposition of Spent Nuclear Fuel (SNF) materials

  5. The U-tube sampling methodology and real-time analysis of geofluids

    International Nuclear Information System (INIS)

    Freifeld, Barry; Perkins, Ernie; Underschultz, James; Boreham, Chris

    2009-01-01

    The U-tube geochemical sampling methodology, an extension of the porous cup technique proposed by Wood (1973), provides minimally contaminated aliquots of multiphase fluids from deep reservoirs and allows for accurate determination of dissolved gas composition. The initial deployment of the U-tube during the Frio Brine Pilot CO2 storage experiment, Liberty County, Texas, obtained representative samples of brine and supercritical CO2 from a depth of 1.5 km. A quadrupole mass spectrometer provided real-time analysis of dissolved gas composition. Since the initial demonstration, the U-tube has been deployed for (1) sampling of fluids down gradient of the proposed Yucca Mountain High-Level Waste Repository, Amargosa Valley, Nevada, (2) acquiring fluid samples beneath permafrost in Nunavut Territory, Canada, and (3) sampling at a CO2 storage demonstration project within a depleted gas reservoir, Otway Basin, Victoria, Australia. The addition of in-line high-pressure pH and EC sensors allows for continuous monitoring of fluid during sample collection. Difficulties have arisen during U-tube sampling, such as blockage of sample lines by naturally occurring waxes or by freezing conditions; however, workarounds such as solvent flushing or heating have been used to address these problems. The U-tube methodology has proven to be robust and, with careful consideration of the constraints and limitations, can provide high quality geochemical samples.

  6. Incorporating covariance estimation uncertainty in spatial sampling design for prediction with trans-Gaussian random fields

    Directory of Open Access Journals (Sweden)

    Gunter Spöck

    2015-05-01

    Full Text Available Recently, Spöck and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed into an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spöck and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data is transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
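
    The Box-Cox step can be illustrated in a few lines: estimate the transformation parameter by maximum likelihood and check that skewness drops, as trans-Gaussian kriging assumes. The synthetic data below stand in for the Upper Austria rainfall, and the design optimization itself is not reproduced; SciPy is assumed to be available.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        rainfall = rng.lognormal(mean=3.0, sigma=0.6, size=500)   # skewed, rainfall-like

        transformed, lam = stats.boxcox(rainfall)                 # ML estimate of lambda
        print(f"lambda = {lam:.2f}, skewness before/after: "
              f"{stats.skew(rainfall):.2f} / {stats.skew(transformed):.2f}")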

  7. Generation of artificial earthquake time histories for seismic design at Hanford, Washington

    International Nuclear Information System (INIS)

    Salmon, M.W.; Kuilanoff, G.

    1991-01-01

    The purpose of the development of artificial time histories is to provide the designer with ground motion estimates which will meet the requirements of the design guidelines at the Hanford site. In particular, the artificial time histories presented in this paper were prepared to assist designers of the Hanford Waste Vitrification Plant (HWVP) with time histories that envelop the requirements for both a large-magnitude earthquake (MI > 6.0) and a small-magnitude, near-field earthquake (MI < 5.0). A background of the requirements for both the large-magnitude and small-magnitude events is presented in this paper. The work done in generating time histories which produce response spectra matching those of the design seismic events is also presented. Finally, some preliminary results from studies performed using the small-magnitude near-field earthquake time history are presented.

  8. Sampling/classification of gasifier particulates

    International Nuclear Information System (INIS)

    Wegrzyn, J.

    1984-01-01

    A high-temperature, high-pressure, real-time extractive sampling probe for particulate monitoring was built at Brookhaven National Laboratory and tested on Morgantown Energy Technology Center's 42-inch fixed bed gasifier. The probe was specifically designed for the highly loaded particulate and condensable streams that exist at the outlet of a fixed bed gasifier. Some of the salient features of the probe are: porous tube gas injection, aerodynamic particle classification in the presence of condensable vapors, β-gauge particle detection, and microprocessor control. Three of the key design problems were the separation of the particles from the vapor without promoting condensation, the prevention of plugging, and real-time monitoring. Some plugging did occur over the seven-day sampling period, but by over-pressurizing and back-purging the clog was blown back into the process stream. The tests validate the proof of concept of the sampling probe and indicate that the particulate output from the bed came in bursts (several minutes in duration) rather than in a steady stream.

  9. Evaluation of design flood estimates with respect to sample size

    Science.gov (United States)

    Kobierska, Florian; Engeland, Kolbjorn

    2016-04-01

    Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If less than 30 years of local data are available, an index flood approach is recommended, where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a two-parameter distribution is recommended, and for more than 50 years of data, a three-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log-Pearson type III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary moments, L-moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing the stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not depend excessively on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.
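
    As a minimal illustration of one distribution/estimation-method combination from such a test bench (not the authors' full cross-validation setup), the sketch below fits a three-parameter GEV distribution by maximum likelihood and reads off a 100-year design flood; the annual-maximum series is synthetic.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic annual-maximum discharge series (m^3/s); a real study would
    # use observed station records.
    ams = np.array([120., 95., 140., 210., 160., 130., 175., 110., 190., 150.,
                    135., 165., 145., 200., 125., 180., 155., 170., 115., 185.])

    # Fit a three-parameter GEV distribution by maximum likelihood.
    shape, loc, scale = stats.genextreme.fit(ams)

    # The 100-year design flood is the quantile with annual exceedance
    # probability 1/100.
    q100 = stats.genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
    print(f"GEV shape={shape:.3f}, loc={loc:.1f}, scale={scale:.1f}")
    print(f"estimated 100-year flood: {q100:.0f} m^3/s")
    ```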

  10. Methodology for object-oriented real-time systems analysis and design: Software engineering

    Science.gov (United States)

    Schoeffler, James D.

    1991-01-01

    Successful application of software engineering methodologies requires an integrated analysis and design life-cycle in which the various phases flow smoothly ('seamlessly') from analysis through design to implementation. Furthermore, different analysis methodologies often lead to different structurings of the system, so that the transition from analysis to design may be awkward depending on the design methodology to be used. This is especially important when object-oriented programming is to be used for implementation and the original specification, and perhaps the high-level design, is non-object-oriented. Two approaches to real-time systems analysis which can lead to an object-oriented design are contrasted: (1) modeling the system using structured analysis with real-time extensions, which emphasizes data and control flows, followed by the abstraction of objects whose operations or methods correspond to processes in the data flow diagrams, and then designing in terms of these objects; and (2) modeling the system from the beginning as a set of naturally occurring concurrent entities (objects), each having its own time-behavior defined by a set of states and state-transition rules, and seamlessly transforming the analysis models into high-level design models. A new concept of a 'real-time systems-analysis object' is introduced and becomes the basic building block of a series of seamlessly connected models which progress from the object-oriented systems-analysis logical models through the physical architectural models to the high-level design stages. The methodology is appropriate to the overall specification including hardware and software modules. In software modules, the systems-analysis objects are transformed into software objects.
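
    A 'systems-analysis object' of the second kind, an entity defined by states and state-transition rules, might be sketched as follows; the class name, states, and events are hypothetical and are not notation from the paper.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class AnalysisObject:
        """A concurrent entity defined by states and state-transition rules."""
        name: str
        state: str = "idle"
        rules: dict = field(default_factory=dict)  # (state, event) -> next state

        def on(self, event: str) -> str:
            """Apply a transition rule if one matches the current state/event."""
            self.state = self.rules.get((self.state, event), self.state)
            return self.state

    valve = AnalysisObject(
        "coolant_valve",
        rules={("idle", "open_cmd"): "opening",
               ("opening", "limit_hit"): "open",
               ("open", "close_cmd"): "closing",
               ("closing", "limit_hit"): "idle"})
    print(valve.on("open_cmd"))   # opening
    print(valve.on("limit_hit"))  # open
    ```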

  11. Design of sample analysis device for iodine adsorption efficiency test in NPPs

    International Nuclear Information System (INIS)

    Ji Jinnan

    2015-01-01

    In nuclear power plants, the iodine adsorption efficiency test is used to check the iodine adsorption efficiency of the iodine adsorber. The iodine adsorption efficiency can be calculated through analysis of the test sample, and thus used to determine whether the performance of the adsorber meets the requirements on equipment operation and emission. Considering the test process and actual demand, this paper presents the design of a special device for the analysis of this kind of test sample. Application shows that the device offers convenient operation, high reliability and accurate calculation, improves experiment efficiency, and reduces experiment risk. (author)

  12. Design and test for the time module of HXMT

    International Nuclear Information System (INIS)

    Ji Jianfeng; Zhang Zhi; Liu Congzhan

    2007-01-01

    The time module is a key component of HXMT, which determines whether HXMT can correctly realize its scientific aims of scan imaging and timing analysis. The design in this paper splits the time information into two parts. The time information above one second, which can be synchronized by the control system of HXMT, is generated by the GPS second pulse. The time information below one second is generated by a high-precision crystal oscillator and revised by the GPS second pulse. To verify the correctness of the time module, a new method using timing analysis is proposed and implemented in this paper. (authors)
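
    The described split, whole seconds latched on the GPS pulse-per-second (PPS) edge plus a crystal counter providing the sub-second fraction, can be combined roughly as below; all names and the oscillator frequency are hypothetical.

    ```python
    def timestamp(pps_seconds, crystal_count, count_at_pps, crystal_hz):
        """Combine coarse GPS time with a fine crystal-counter fraction.

        pps_seconds   -- whole seconds latched at the last GPS 1-PPS edge
        crystal_count -- free-running oscillator counter reading "now"
        count_at_pps  -- counter value captured at that PPS edge
        crystal_hz    -- oscillator frequency, revised at each PPS
        """
        return pps_seconds + (crystal_count - count_at_pps) / crystal_hz

    # A 10 MHz crystal, 2.5 million ticks past the PPS edge -> +0.25 s
    print(timestamp(1234, 2_500_000, 0, 10e6))  # 1234.25
    ```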

  13. Practical experience applied to the design of injection and sample manifolds to perform in-place surveillance tests according to ANSI/ASME N-510

    Energy Technology Data Exchange (ETDEWEB)

    Banks, E.M.; Wikoff, W.O.; Shaffer, L.L. [NUCON International, Inc., Columbus, OH (United States)

    1997-08-01

    At the current level of maturity and experience in the nuclear industry regarding testing of air treatment systems, it is now possible to design and qualify injection and sample manifolds for most applications. While the qualification of sample manifolds is still in its infancy, injection manifolds have reached a mature stage that helps to eliminate the "hit or miss" type of design. During the design phase, manifolds can be adjusted to compensate for poor airflow distribution and laminar flow conditions, and to take advantage of any system attributes. Experience has shown that knowing the system attributes before the design phase begins is an essential element of a successful manifold design. The use of a spreadsheet-type program commonly found on most personal computers can afford greater flexibility and a reduction in time spent in the design phase. The experience gained from several generations of manifold design has culminated in a set of general design guidelines. Use of these guidelines, along with a good understanding of the type of testing (theoretical and practical), can result in a good manifold design requiring little or no field modification. The requirements for manifolds came about because of the use of multiple banks of components and unconventional housing inlet configurations. Multiple banks of adsorbers and pre- and post-HEPA filters required that each bank be tested to ensure that it does not exceed a specific allowable leakage criterion. 5 refs., 5 figs., 1 tab.

  14. Design and evaluation of a new Peltier-cooled laser ablation cell with on-sample temperature control.

    Science.gov (United States)

    Konz, Ioana; Fernández, Beatriz; Fernández, M Luisa; Pereiro, Rosario; Sanz-Medel, Alfredo

    2014-01-27

    A new custom-built Peltier-cooled laser ablation cell is described. The proposed cryogenic cell combines a small internal volume (20 cm³) with a unique and reliable on-sample temperature control. The use of a flexible temperature sensor, directly located on the sample surface, ensures rigorous control of the sample temperature throughout the entire analysis time and allows instant response to any possible fluctuation. In this way sample integrity and, therefore, reproducibility can be guaranteed during the ablation. The refrigeration of the proposed cryogenic cell combines an internal refrigeration system, controlled by a sensitive thermocouple, with an external refrigeration system. Cooling of the sample is carried out directly by eight small (1 cm × 1 cm) Peltier elements placed in a circular arrangement in the base of the cell. These Peltier elements are located below a copper plate on which the sample is placed. Because of the small size of the cooling electronics and their circular arrangement it was possible to maintain a peephole under the sample for illumination, allowing much better visualization of the sample, a factor especially important when working with structurally complex tissue sections. The analytical performance of the cryogenic cell was studied using a glass reference material (SRM NIST 612) at room temperature and at -20°C. The proposed cell design shows a reasonable signal washout (signal decay to background level within less than 10 s), high sensitivity and good signal stability (in the range 6.6-11.7%). Furthermore, high precision (0.4-2.6%) and accuracy (0.3-3.9%) in the isotope ratio measurements were also observed operating the cell both at room temperature and at -20°C. Finally, experimental results obtained for the cell application to qualitative elemental imaging of structurally complex tissue samples (e.g. eye sections from a native frozen porcine eye and fresh flower leaves) demonstrate that working in cryogenic conditions is critical in such applications.

  15. Impact of insufficient energy content in the design time history on the structure response

    International Nuclear Information System (INIS)

    Ma, D.C.; Gvildys, J.; Chang, Y.W.; Seidensticker, R.W.

    1989-01-01

    In the design of nuclear power plants, it is often desirable to use the time history method in the soil-structure interaction analysis to determine the plant floor response to seismic loads. Because many design criteria are specified in terms of design response spectra, the artificial time history needs to be generated under the requirement that the response spectra of the artificial history should envelop the given design response spectra. However, recent studies indicate that the artificial time history used in the plant design may have insufficient energy in the frequency range of interest, even though the response spectra of the design time history closely envelop the design response spectra. This paper presents an investigation of the effects of the insufficient energy content in the design time history on the response of the soil-structure system. Numerical studies were carried out. Both the real earthquake records and the artificial time histories were used as the input motions in a simple lumped-mass soil-structure interaction model. The results obtained from this study provide a better understanding of the effects of the insufficient energy content in the design time history on the structural response.

  16. Multi-saline sample distillation apparatus for hydrogen isotope analyses: design and accuracy. Water-resources investigations

    International Nuclear Information System (INIS)

    Hassan, A.A.

    1981-04-01

    A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400 degrees C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated.

  17. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation-of-uncertainty error. This error originated in methods to minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocation of blank and sample counting times. Correct uncertainty propagation showed that the time allocation equations have no solution. This publication presents the correct form of count-rate detection limits. Highlights: the paper demonstrates a proper method of propagating the uncertainty of count-rate differences; the standard count-rate detection limits were in error; count-time allocation methods for minimum uncertainty were in error; the correct form of the count-rate detection limit is presented; the confusion between count-rate uncertainty and count uncertainty is discussed.
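
    For reference, the standard propagation step the paper builds on can be written in a few lines: gross and blank counts are Poisson, so the net count rate and its standard uncertainty follow directly. This is generic textbook propagation, not the paper's corrected detection-limit formula.

    ```python
    import math

    def net_rate(n_gross, t_gross, n_blank, t_blank):
        """Net count rate and its propagated Poisson uncertainty.

        Counts are Poisson, so var(N) = N; a rate N/t then has variance
        N/t**2, and the variances of independent terms add.
        """
        rate = n_gross / t_gross - n_blank / t_blank
        sigma = math.sqrt(n_gross / t_gross**2 + n_blank / t_blank**2)
        return rate, sigma

    # 450 gross counts in 300 s, 120 blank counts in 600 s
    print(net_rate(450, 300, 120, 600))  # (1.3, ~0.073) counts/s
    ```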

  18. Field Investigation Plan for 1301-N and 1325-N Facilities Sampling to Support Remedial Design

    International Nuclear Information System (INIS)

    Weiss, S. G.

    1998-01-01

    This field investigation plan (FIP) provides for the sampling and analysis activities supporting the remedial design planning for the planned removal action for the 1301-N and 1325-N Liquid Waste Disposal Facilities (LWDFs), which are treatment, storage, and disposal (TSD) units (cribs/trenches). The planned removal action involves excavation, transportation, and disposal of contaminated material at the Environmental Restoration Disposal Facility (ERDF). An engineering study (BHI 1997) was performed to develop and evaluate various options that are predominantly influenced by the volume of high- and low-activity contaminated soil requiring removal. The study recommended that additional sampling be performed to supplement historical data for use in the remedial design.

  19. Design Considerations for Integration of Terahertz Time-Domain Spectroscopy in Microfluidic Platforms

    Directory of Open Access Journals (Sweden)

    Rasha Al-Hujazy

    2018-03-01

    Microfluidic platforms have received much attention in recent years. In particular, there is interest in combining spectroscopy with microfluidic platforms. This work investigates the integration of microfluidic platforms and terahertz time-domain spectroscopy (THz-TDS) systems. A semiclassical computational model is used to simulate the emission of THz radiation from a GaAs photoconductive THz emitter. This model incorporates white noise with increasing noise amplitude (corresponding to decreasing dynamic-range values). White noise is selected over other noise types because of its dominant contribution in THz-TDS systems. The results from this semiclassical computational model, in combination with defined sample thicknesses, provide the maximum measurable absorption coefficient for a microfluidic-based THz-TDS system. The maximum measurable frequencies for such systems can be extracted through the relationship between the maximum measurable absorption coefficient and the absorption coefficient of representative biofluids. The sample thickness of the microfluidic platform and the dynamic range of the THz-TDS system together define the maximum measurable frequency for microfluidic-based THz-TDS systems. The results of this work serve as a design tool for the development of such systems.
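
    One widely used way to relate dynamic range, sample thickness, and the largest absorption coefficient a THz-TDS measurement can resolve is the Jepsen-Fischer relation; whether the authors used exactly this form is an assumption, and the thickness, refractive index, and dynamic-range values below are illustrative only.

    ```python
    import math

    def alpha_max_per_cm(dynamic_range, thickness_m, n_sample=2.0):
        """Largest measurable absorption coefficient for a THz-TDS sample.

        Jepsen-Fischer relation: alpha_max = (2/d) * ln(DR * 4n / (n + 1)**2),
        with DR the amplitude dynamic range and d the sample thickness.
        """
        alpha = (2.0 / thickness_m) * math.log(
            dynamic_range * 4 * n_sample / (n_sample + 1.0) ** 2)
        return alpha / 100.0  # 1/m -> 1/cm

    # A 100 um channel and an amplitude dynamic range of 1000 (assumed values)
    print(f"alpha_max = {alpha_max_per_cm(1000, 100e-6):.0f} 1/cm")
    ```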

  20. Impact of insufficient energy content in the design time history on the structure response

    International Nuclear Information System (INIS)

    Ma, D.C.; Gvildys, J.; Chang, Y.W.; Seidensticker, R.W.

    1989-01-01

    In the design of nuclear power plants, it is often desirable to use the time history method in the soil-structure interaction analysis to determine the plant floor response to seismic loads. Because many design criteria are specified in terms of design response spectra, the artificial time history needs to be generated under the requirement that the response spectra of the artificial history envelop the given design response spectra. However, recent studies indicate that the artificial time history used in the plant design may have insufficient energy in the frequency range of interest, even though the response spectra of the design time history closely envelop the design response spectra. Therefore, the proposed changes in the NRC Standard Review Plan require that when a single time history is used in the seismic design, it must satisfy requirements both for response spectra enveloping and for matching a power spectral density (PSD) function in the frequency range of interest. The use of multiple artificial time histories (at least five) in the plant design is also suggested in the new Standard Review Plan. This paper presents an investigation of the effects of the insufficient energy content in the design time history on the response of the soil-structure system. Numerical studies were carried out. Both real earthquake records and artificial time histories were used as the input motions in a simple lumped-mass soil-structure interaction model. The results obtained from this study provide a better understanding of the effects of the insufficient energy content in the design time history on the structural response. 5 refs., 10 figs., 1 tab.
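
    Checking a candidate time history against a target PSD, the screening step this requirement implies, might look roughly like the sketch below; the accelerogram, sample rate, frequency band, and target PSD level are hypothetical stand-ins for the values an actual Standard Review Plan check would use.

    ```python
    import numpy as np
    from scipy import signal

    fs = 100.0                                 # sample rate of the record (Hz)
    t = np.arange(0, 40, 1 / fs)
    rng = np.random.default_rng(5)
    acc = rng.normal(size=t.size) * np.exp(-((t - 10) / 8) ** 2)  # toy accelerogram

    f, psd = signal.welch(acc, fs=fs, nperseg=1024)   # one-sided PSD estimate

    # Flag frequencies in the band of interest falling below 80% of a target PSD.
    band = (f > 0.5) & (f < 25.0)
    target = 1e-3                              # hypothetical target level
    deficient = band & (psd < 0.8 * target)
    print("frequencies with insufficient energy:", f[deficient][:5], "...")
    ```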

  1. Changes in tar yields and cigarette design in samples of Chinese cigarettes, 2009 and 2012.

    Science.gov (United States)

    Schneller, Liane M; Zwierzchowski, Benjamin A; Caruso, Rosalie V; Li, Qiang; Yuan, Jiang; Fong, Geoffrey T; O'Connor, Richard J

    2015-11-01

    China is home to the greatest number of smokers as well as the greatest number of smoking-related deaths. An active and growing market of cigarettes marketed as 'light' or 'low tar' may keep health-concerned smokers from quitting, wrongly believing that such brands are less harmful. This study sought to observe changes in cigarette design characteristics and reported tar, nicotine and carbon monoxide (TNCO) levels in a sample of cigarette brands obtained in seven Chinese cities from 2009 to 2012. Cigarettes were purchased and shipped to Roswell Park Cancer Institute, where 91 pairs of packs were selected for physical cigarette design characteristic testing and recording of TNCO values. Data analysis was conducted using SPSS: the data were initially characterised using descriptive statistics, then correlations and generalised estimating equations were used to observe changes in brand varieties over time. Mean reported tar, nicotine and CO levels on packs decreased from 2009 to 2012 by 7.9%, 4.5% and 6.0%, respectively. Ventilation was the only cigarette design feature that changed significantly over time (p<0.001), with an increase of 31.7%. Significant predictors of tar and CO yield overall were ventilation and per-cigarette tobacco weight, while for nicotine, tobacco moisture was also an independent predictor of yield. The use of ventilation to decrease TNCO emissions misleads smokers into believing that they are smoking a 'light/low' tar cigarette that is healthier, and is potentially forestalling the quitting behaviours that would begin to reduce the health burden of tobacco in China; it should therefore be prohibited.

  2. Air exposure and sample storage time influence on hydrogen release from tungsten

    Energy Technology Data Exchange (ETDEWEB)

    Moshkunov, K.A., E-mail: moshkunov@gmail.co [National Research Nuclear University 'MEPhI', Kashirskoe sh. 31, 115409 Moscow (Russian Federation); Schmid, K.; Mayer, M. [Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany); Kurnaev, V.A.; Gasparyan, Yu.M. [National Research Nuclear University 'MEPhI', Kashirskoe sh. 31, 115409 Moscow (Russian Federation)

    2010-09-30

    In investigations of hydrogen retention in first wall components the influence of the conditions of the implanted target storage prior to analysis and the storage time is often neglected. Therefore we have performed a dedicated set of experiments. The release of hydrogen from samples exposed to ambient air after irradiation was compared to samples kept in vacuum. For air exposed samples significant amounts of HDO and D2O are detected during TDS. Additional experiments have shown that heavy water is formed by recombination of releasing D and H atoms with O on the W surface. This water formation can alter hydrogen retention results significantly, in particular for low retention cases. In addition to the influence of ambient air exposure, the influence of storage time in vacuum was also investigated. After implantation at 300 K the samples were stored in vacuum for up to 1 week, during which the retained amount decreased significantly. The subsequently measured TDS spectra showed that D was lost from both the high and low energy peaks during storage at an ambient temperature of ~300 K. An attempt to simulate this release from both peaks during room temperature storage by TMAP 7 calculations showed that this effect cannot be explained by conventional diffusion/trapping models.

  3. Air exposure and sample storage time influence on hydrogen release from tungsten

    International Nuclear Information System (INIS)

    Moshkunov, K.A.; Schmid, K.; Mayer, M.; Kurnaev, V.A.; Gasparyan, Yu.M.

    2010-01-01

    In investigations of hydrogen retention in first wall components the influence of the conditions of the implanted target storage prior to analysis and the storage time is often neglected. Therefore we have performed a dedicated set of experiments. The release of hydrogen from samples exposed to ambient air after irradiation was compared to samples kept in vacuum. For air exposed samples significant amounts of HDO and D2O are detected during TDS. Additional experiments have shown that heavy water is formed by recombination of releasing D and H atoms with O on the W surface. This water formation can alter hydrogen retention results significantly, in particular for low retention cases. In addition to the influence of ambient air exposure, the influence of storage time in vacuum was also investigated. After implantation at 300 K the samples were stored in vacuum for up to 1 week, during which the retained amount decreased significantly. The subsequently measured TDS spectra showed that D was lost from both the high and low energy peaks during storage at an ambient temperature of ~300 K. An attempt to simulate this release from both peaks during room temperature storage by TMAP 7 calculations showed that this effect cannot be explained by conventional diffusion/trapping models.

  4. Air exposure and sample storage time influence on hydrogen release from tungsten

    Science.gov (United States)

    Moshkunov, K. A.; Schmid, K.; Mayer, M.; Kurnaev, V. A.; Gasparyan, Yu. M.

    2010-09-01

    In investigations of hydrogen retention in first wall components the influence of the conditions of the implanted target storage prior to analysis and the storage time is often neglected. Therefore we have performed a dedicated set of experiments. The release of hydrogen from samples exposed to ambient air after irradiation was compared to samples kept in vacuum. For air exposed samples significant amounts of HDO and D2O are detected during TDS. Additional experiments have shown that heavy water is formed by recombination of releasing D and H atoms with O on the W surface. This water formation can alter hydrogen retention results significantly, in particular for low retention cases. In addition to the influence of ambient air exposure, the influence of storage time in vacuum was also investigated. After implantation at 300 K the samples were stored in vacuum for up to 1 week, during which the retained amount decreased significantly. The subsequently measured TDS spectra showed that D was lost from both the high and low energy peaks during storage at an ambient temperature of ~300 K. An attempt to simulate this release from both peaks during room temperature storage by TMAP 7 calculations showed that this effect cannot be explained by conventional diffusion/trapping models.

  5. A research on motion design for APP's loading pages based on time perception

    Science.gov (United States)

    Cao, Huai; Hu, Xiaoyun

    2018-04-01

    Owing to objective constraints such as network bandwidth and hardware performance, waiting remains an inevitable part of using mobile applications. Relevant research shows that users' feelings in a waiting scenario can affect their evaluation of the whole product and the services it provides. With the development of user experience and interface design as disciplines, the role of motion effects in interface design has attracted more and more scholars' attention. The existing research theory on motion design in waiting scenarios is imperfect. This article uses the basic theory and experimental research methods of cognitive psychology to explore the impact of motion design on users' time perception while they wait for APP loading pages. First, the article analyzes the factors that affect the waiting experience of loading APP pages based on the theory of time perception, and then discusses the impact of motion design on time perception when loading pages and the corresponding design strategy. Moreover, through an analysis of existing loading motion designs, the article classifies the existing loading motions and designs an experiment to verify the impact of different types of motion on users' time perception. The results show that perceived waiting time in mobile APPs is related to the loading motion type; a combination of loading motion types can effectively shorten perceived waiting time.

  6. A two-phase sampling design for increasing detections of rare species in occupancy surveys

    Science.gov (United States)

    Pacifici, Krishna; Dorazio, Robert M.; Dorazio, Michael J.

    2012-01-01

    1. Occupancy estimation is a commonly used tool in ecological studies owing to the ease at which data can be collected and the large spatial extent that can be covered. One major obstacle to using an occupancy-based approach is the complications associated with designing and implementing an efficient survey. These logistical challenges become magnified when working with rare species when effort can be wasted in areas with none or very few individuals. 2. Here, we develop a two-phase sampling approach that mitigates these problems by using a design that places more effort in areas with higher predicted probability of occurrence. We compare our new sampling design to traditional single-season occupancy estimation under a range of conditions and population characteristics. We develop an intuitive measure of predictive error to compare the two approaches and use simulations to assess the relative accuracy of each approach. 3. Our two-phase approach exhibited lower predictive error rates compared to the traditional single-season approach in highly spatially correlated environments. The difference was greatest when detection probability was high (0.75) regardless of the habitat or sample size. When the true occupancy rate was below 0.4 (0.05-0.4), we found that allocating 25% of the sample to the first phase resulted in the lowest error rates. 4. In the majority of scenarios, the two-phase approach showed lower error rates compared to the traditional single-season approach, suggesting our new approach is fairly robust to a broad range of conditions and design factors and merits use under a wide variety of settings. 5. Synthesis and applications. Conservation and management of rare species are a challenging task facing natural resource managers. It is critical for studies involving rare species to efficiently allocate effort and resources as they are usually of a finite nature. We believe our approach provides a framework for optimal allocation of effort while
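
    A caricature of the two-phase idea (not the authors' estimator or error measure) can be simulated in a few lines: spend roughly 25% of the visit budget evenly in phase one, score sites by apparent occurrence, then concentrate the remaining visits on high-scoring sites. All probabilities, budgets, and the scoring rule are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_sites, budget, p_detect = 200, 800, 0.5
    psi = rng.beta(1, 4, n_sites)            # rare species: low occupancy probs
    z = rng.random(n_sites) < psi            # true occupancy states

    # Phase 1: spread 25% of the visit budget evenly across all sites.
    v1 = np.full(n_sites, (budget // 4) // n_sites)
    det1 = rng.binomial(v1, p_detect * z)    # phase-1 detections per site

    # Phase 2: allocate the rest in proportion to a crude occurrence score.
    score = (det1 + 0.5) / (v1 + 1.0)
    v2 = rng.multinomial(budget - v1.sum(), score / score.sum())

    print("phase-2 effort concentrated on", int((v2 > 0).sum()), "sites")
    ```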

  7. Chromosomal radiosensitivity of human leucocytes in relation to sampling time

    International Nuclear Information System (INIS)

    Buul, P.P.W. van; Natarajan, A.T.

    1980-01-01

    Frequencies of chromosomal aberrations after in vitro X-irradiation of peripheral blood lymphocytes were determined at different times after initiation of cultures. In each culture, the kinetics of cell multiplication was followed by using BrdU labelling and differential staining of chromosomes. The results indicate that the mixing of first- and second-cell-cycle cells at later sampling times cannot explain the observed variation in the frequencies of chromosomal aberrations, but that donor-to-donor variation is a predominant factor influencing aberration yields. The condition of a donor seems to be most important, because repeat experiments with the same donor also showed marked variability. (orig.)

  8. Measuring Sulfur Isotope Ratios from Solid Samples with the Sample Analysis at Mars Instrument and the Effects of Dead Time Corrections

    Science.gov (United States)

    Franz, H. B.; Mahaffy, P. R.; Kasprzak, W.; Lyness, E.; Raaen, E.

    2011-01-01

    The Sample Analysis at Mars (SAM) instrument suite comprises the largest science payload on the Mars Science Laboratory (MSL) "Curiosity" rover. SAM will perform chemical and isotopic analysis of volatile compounds from atmospheric and solid samples to address questions pertaining to habitability and geochemical processes on Mars. Sulfur is a key element of interest in this regard, as sulfur compounds have been detected on the Martian surface by both in situ and remote sensing techniques. Their chemical and isotopic composition can help constrain environmental conditions and mechanisms at the time of formation. A previous study examined the capability of the SAM quadrupole mass spectrometer (QMS) to determine sulfur isotope ratios of SO2 gas from a statistical perspective. Here we discuss the development of a method for determining sulfur isotope ratios with the QMS by sampling SO2 generated from heating of solid sulfate samples in SAM's pyrolysis oven. This analysis, which was performed with the SAM breadboard system, also required development of a novel treatment of the QMS dead time to accommodate the characteristics of an aging detector.
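
    The abstract does not spell out SAM's novel dead-time treatment, but the textbook starting point it departs from, the non-paralyzable detector model, is compact enough to sketch; the rate and dead-time values are illustrative.

    ```python
    def true_rate(measured_cps, dead_time_s):
        """Non-paralyzable dead-time correction: n = m / (1 - m * tau)."""
        loss = measured_cps * dead_time_s
        if loss >= 1.0:
            raise ValueError("measured rate saturates this detector model")
        return measured_cps / (1.0 - loss)

    # A 1 us dead time at 2e5 measured counts/s implies ~25% undercounting.
    print(true_rate(2e5, 1e-6) / 2e5)  # 1.25
    ```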

  9. DESIGN AND CALIBRATION OF A VIBRANT SAMPLE MAGNETOMETER: CHARACTERIZATION OF MAGNETIC MATERIALS

    Directory of Open Access Journals (Sweden)

    Freddy P. Guachun

    2018-01-01

    This paper presents the process followed in the implementation of a vibrating sample magnetometer (VSM), constructed with materials commonly found in an electromagnetism laboratory. It describes the design, construction, calibration and use of the instrument in the characterization of some magnetic materials. A VSM measures the magnetic moment of a sample when it is vibrated perpendicular to a uniform magnetic field; magnetization and magnetic susceptibility can be determined from these readings. The instrument stands out for its simplicity, versatility and low cost, yet it is very sensitive and capable of eliminating or minimizing many sources of error found in other measurement methods, allowing very accurate and reliable results. Its operation is based on the Faraday-Lenz law of magnetic induction: the voltage induced in detection coils by the variation of the magnetic flux crossing them is measured. The VSM was calibrated with a standard sample (magnetite) and verified with a test sample (nickel).

  10. Design of modified annulus air sampling system for the detection of leakage in waste transfer line

    International Nuclear Information System (INIS)

    Deokar, U.V.; Khot, A.R.; Mathew, P.; Ganesh, G.; Tripathi, R.M.; Srivastava, Srishti

    2018-01-01

    Various liquid waste streams are generated during the operation of a reprocessing plant. The High Level (HL), Intermediate Level (IL) and Low Level (LL) liquid wastes generated are transferred from the reprocessing plant to the Waste Management Facility. These waste streams are transferred through pipe-in-pipe lines along a shielded concrete trench. To detect radioactive leakage from the primary waste transfer line into the secondary line, the annulus air between the two pipes is sampled. The currently installed pressurized annulus air sampling system has no provision for online leakage detection; hence, there are chances of personnel exposure and airborne activity in the working area. To overcome these design flaws, a free-air-flow modified online annulus air sampling system with more safety features has been designed.

  11. Rapid, Real-time Methane Detection in Ground Water Using a New Gas-Water Equilibrator Design

    Science.gov (United States)

    Ruybal, C. J.; DiGiulio, D. C.; Wilkin, R. T.; Hargrove, K. D.; McCray, J. E.

    2014-12-01

    Recent increases in unconventional gas development have been accompanied by public concern about methane contamination in drinking water wells near production areas. Although not a regulated pollutant, methane may be a marker for contaminants that are less mobile in groundwater and thus may be detected later, or at a location closer to the source. In addition, methane poses an explosion hazard if exsolved concentrations reach 5-15% by volume in air. Methods for determining dissolved gases, such as methane, have evolved over 60 years. However, the response time of these methods is insufficient to monitor trends in methane concentration in real time. To enable rapid, real-time monitoring of aqueous methane concentrations during ground water purging, a new gas-water equilibrator (GWE) was designed that increases the gas-water mass exchange rate of methane for measurement. Monitoring of concentration trends allows a comparison of temporal trends between sampling events and comparison of baseline conditions with potential post-impact conditions. These trends may result from removal of stored casing water, pre-purge ambient borehole flow, formation physical and chemical heterogeneity, or flow outside of the well casing due to inadequate seals. Real-time information in the field can help focus an investigation, aid in determining when to collect a sample, save money by limiting costs (e.g. analytical, sample transport and storage), and provide an immediate assessment of local methane concentrations. Four domestic water wells, one municipal water well, and one agricultural water well were sampled for traditional laboratory analysis and compared to the field GWE results. Aqueous concentrations measured on the GWE ranged from non-detect to 1,470 μg/L methane. Some trends in aqueous methane concentrations measured on the GWE were observed during purging. A paired t-test comparing the new GWE method and traditional laboratory analysis yielded a p-value 0
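
    Assuming the equilibrator reaches gas-water equilibrium, a headspace methane reading can be converted to an aqueous concentration with Henry's law; this generic conversion is not described in the abstract, and the dimensionless Henry constant and example values below are approximate assumptions.

    ```python
    def aqueous_ch4_ug_per_L(gas_ppmv, h_cc=30.0, temp_k=298.15, p_atm=1.0):
        """Convert an equilibrated headspace CH4 reading to aqueous concentration.

        gas_ppmv -- CH4 mole fraction in the equilibrator headspace (ppmv)
        h_cc     -- dimensionless Henry constant C_gas/C_aq (~30 for CH4 at 25 C)
        """
        R = 0.082057                                    # L*atm/(mol*K)
        c_gas = gas_ppmv * 1e-6 * p_atm / (R * temp_k)  # mol/L in gas, ideal-gas law
        return c_gas / h_cc * 16.04e6                   # mol/L -> ug/L, M(CH4)=16.04

    print(f"{aqueous_ch4_ug_per_L(5000):.0f} ug/L")     # 0.5% CH4 headspace -> ~109
    ```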

  12. Design and building of a homemade sample changer for automation of the irradiation in neutron activation analysis technique

    Energy Technology Data Exchange (ETDEWEB)

    Gago, Javier; Hernandez, Yuri; Baltuano, Oscar; Bedregal, Patricia [Direccion de Investigacion y Desarrollo, Instituto Peruano de Energia Nuclear, Lima (Peru); Lopez, Yon [Universidad Nacional de Ingenieria, Lima (Peru); Urquizo, Rafael [Universidad Tecnologica del Peru, Lima (Peru)

    2014-07-01

    Because the RP-10 research reactor operates during weekends, it was necessary to design and build a sample changer for irradiation as part of the automation of the neutron activation analysis technique. The device consists of an aluminum turntable disk which can accommodate 19 polyethylene capsules containing samples, to be sent from the laboratory to the irradiation position using the pneumatic transfer system. The system is operated from a control switchboard that sends and returns capsules after a variable preset time and by two different paths, allowing the determination of short-, medium- and long-lived radionuclides. Another mechanism, called the 'exchange valve', changes the travel paths (pipelines), allowing irradiated samples to be stored for a longer time in the reactor hall. The system design has allowed complete automation of this technique, enabling the irradiation of samples without the presence of an analyst. The design, construction and operation of the device are described in this article. (authors)

  13. Analysis and Design of Timing Recovery Schemes for DMT Systems over Indoor Power-Line Channels

    Directory of Open Access Journals (Sweden)

    Cortés José Antonio

    2007-01-01

    Discrete multitone (DMT) modulation is a suitable technique to cope with the main impairments of broadband indoor power-line channels: spectral selectivity and cyclic time variations. Because of the high-density constellations employed to achieve the required bit rates, synchronization issues become an important concern in these scenarios. This paper analyzes the performance of a conventional DMT timing recovery scheme designed for linear time-invariant (LTI) channels when employed over indoor power lines. The influence of the channel's cyclic short-term variations and of sampling jitter on system performance is assessed. Bit-rate degradation due to timing errors is evaluated in a set of measured channels. It is shown that this synchronization mechanism limits the system performance in many residential channels. Two improvements are proposed to overcome this limitation: a new phase error estimator that takes into account the short-term changes in the channel response, and the introduction of notch filters in the timing recovery loop. Simulations confirm that the new scheme eliminates the bit-rate loss in most situations.
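
    The paper's phase-error estimator is not reproduced in the abstract, but the loop that consumes such estimates is typically a second-order proportional-plus-integral (PI) structure; a generic sketch, with invented gains and a constant phase-error input standing in for a sampling-frequency offset:

    ```python
    def timing_loop(phase_errors, kp=0.05, ki=0.005):
        """Second-order (PI) timing-recovery loop driving a sampling-phase NCO."""
        integ, phase, history = 0.0, 0.0, []
        for e in phase_errors:
            integ += ki * e           # integral path tracks a frequency offset
            phase += kp * e + integ   # proportional path tracks phase jitter
            history.append(phase)
        return history

    # A constant phase error per symbol mimics a sampling-frequency offset.
    print(timing_loop([0.01] * 50)[-1])
    ```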

  14. Optimization of sample preparation variables for wedelolactone from Eclipta alba using Box-Behnken experimental design followed by HPLC identification.

    Science.gov (United States)

    Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S

    2013-07-01

    Coumestan wedelolactone is an important phytocomponent of Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study is to develop and optimize supercritical carbon dioxide assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. Response surface methodology was employed to optimize the supercritical carbon dioxide assisted sample preparation: the quantitative effects of the sample preparation parameters, viz. operating pressure, temperature, modifier concentration and time, on the yield of wedelolactone were investigated using a Box-Behnken design. The wedelolactone content was determined using a validated HPLC method. The experimental data were fitted to a second-order polynomial equation using multiple regression analysis and analyzed using appropriate statistical methods. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44%; and extraction time, 60 min. Optimum extraction conditions gave a wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, in good agreement with the predicted values. Temperature and modifier concentration showed significant effects on the wedelolactone yield. Supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet-assisted extraction method.
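
    A four-factor Box-Behnken design and its second-order response-surface fit can be assembled by hand; the sketch below uses placeholder yields in place of the study's measured wedelolactone contents, and the coded-factor construction is the standard one rather than anything specific to this paper.

    ```python
    import itertools
    import numpy as np

    def box_behnken(k, n_center=3):
        """Coded Box-Behnken design: +/-1 on each factor pair, others at 0."""
        runs = []
        for i, j in itertools.combinations(range(k), 2):
            for a, b in itertools.product((-1, 1), repeat=2):
                row = [0.0] * k
                row[i], row[j] = a, b
                runs.append(row)
        runs += [[0.0] * k] * n_center          # center-point replicates
        return np.array(runs)

    X = box_behnken(4)  # coded pressure, temperature, modifier %, time
    # Second-order model: intercept, linear, quadratic, two-way interactions.
    cols = ([np.ones(len(X))] + [X[:, i] for i in range(4)]
            + [X[:, i] ** 2 for i in range(4)]
            + [X[:, i] * X[:, j] for i, j in itertools.combinations(range(4), 2)])
    M = np.column_stack(cols)
    y = np.random.default_rng(2).normal(15, 1, len(X))  # placeholder yields
    beta, *_ = np.linalg.lstsq(M, y, rcond=None)        # regression coefficients
    print(beta.round(2))
    ```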

  15. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account meteorological persistence, or temporal behavior, thereby identifying the memory of the analyzed process. This article presents the concept of the size of an equivalent sample, which helps to identify sub-periods with a similar structure within a data series. Moreover, the article examines the alternative of adjusting the variance of a series, keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. Two examples are presented: the first corresponds to seven simulated series with a first-order autoregressive structure, and the second to seven meteorological series of anomalies of surface air temperature in two Colombian regions.
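
    For a first-order autoregressive process, the equivalent sample size has a familiar closed form, n_eff = n(1 - rho)/(1 + rho), with rho the lag-1 autocorrelation; a minimal sketch, assuming AR(1) is an adequate model of the series' memory:

    ```python
    import numpy as np

    def effective_sample_size(x):
        """n_eff = n * (1 - rho) / (1 + rho) for an AR(1)-like series."""
        x = np.asarray(x, float) - np.mean(x)
        rho = np.dot(x[:-1], x[1:]) / np.dot(x, x)  # lag-1 autocorrelation
        return len(x) * (1 - rho) / (1 + rho)

    rng = np.random.default_rng(3)
    x = np.zeros(500)
    for t in range(1, 500):          # simulate AR(1) with rho = 0.6
        x[t] = 0.6 * x[t - 1] + rng.normal()
    print(round(effective_sample_size(x)))  # near 500 * 0.4 / 1.6 = 125
    ```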

  16. Standardised Resting Time Prior to Blood Sampling and Diurnal Variation Associated with Risk of Patient Misclassification

    DEFF Research Database (Denmark)

    Bøgh Andersen, Ida; Brasen, Claus L.; Christensen, Henry

    2015-01-01

    BACKGROUND: According to current recommendations, blood samples should be taken in the morning after 15 minutes' resting time. Some components exhibit diurnal variation and, in response to pressures to expand opening hours and reduce waiting time, the aims of this study were to investigate the impact of resting time prior to blood sampling and diurnal variation on biochemical components, including albumin, thyrotropin (TSH), total calcium and sodium in plasma. METHODS: All patients referred to an outpatient clinic for blood sampling were included in the period Nov 2011 until June 2014 (opening ...). RESULTS: ... (p = ...9×10-7) and sodium (p = 8.7×10-16). Only TSH and albumin were clinically significantly influenced by diurnal variation. Resting time had no clinically significant effect. CONCLUSIONS: We found no need for resting 15 minutes prior to blood sampling. However, diurnal variation was found to have a significant ...

  17. Impact of collection container material and holding times on sample integrity for mercury and methylmercury in water

    Energy Technology Data Exchange (ETDEWEB)

    Riscassi, Ami L [ORNL]; Miller, Carrie L [ORNL]; Brooks, Scott C [ORNL]

    2014-01-01

    Mercury (Hg) and methylmercury (MeHg) concentrations in streamwater can vary on short timescales (hourly or less) during storm flow and on a diel cycle; the frequency and timing of sampling required to accurately characterize these dynamics may be difficult to accomplish manually. Automated sampling can assist in sample collection; however, its use for Hg and MeHg analysis has been limited due to concerns about the stability of trace concentrations during extended storage times. We examined the viability of using automated samplers with disposable low-density polyethylene (LDPE) sample bags to collect industrially contaminated streamwater for unfiltered and filtered Hg and MeHg analysis. Specifically, we investigated the effect of holding times ranging from hours to days on streamwater collected during baseflow and storm flow. Unfiltered and filtered Hg and MeHg concentrations decreased with increasing time prior to sample processing; holding times of 24 hours or less resulted in concentration changes (mean 11 ± 7% different) similar to the variability in duplicates collected manually during analogous field conditions (mean 7 ± 10% different). Comparisons of samples collected with manual and automated techniques throughout a year, for a wide range of stream conditions, were also found to be similar to differences observed between duplicate grab samples. These results demonstrate that automated sampling into LDPE bags with holding times of 24 hours or less can be effectively used to collect streamwater for Hg and MeHg analysis, and encourage the testing of these materials and methods for implementation in other aqueous systems where high-frequency sampling is warranted.

  18. A Robust Pre-Filter and Power Loading Design for Time Reversal UWB Systems over Time-Correlated MIMO Channels

    Directory of Open Access Journals (Sweden)

    Sajjad Alizadeh

    2014-04-01

    The conventional Time Reversal (TR) technique suffers from performance degradation in time-varying Multiple-Input Multiple-Output Ultra-Wideband (MIMO-UWB) systems because the Channel State Information (CSI) becomes outdated as time progresses; outdated CSI degrades TR performance significantly in time-varying channels. The correlation property of time-correlated channels can improve TR performance relative to traditional TR designs. Based on this property, we first propose a robust TR-MIMO-UWB system design for a time-varying channel in which the CSI is updated only at the beginning of each block of data, where it is assumed known. As the channel varies over time, the pre-processor blindly pre-equalizes the channel during the next symbol time by using the correlation property. Then, a novel recursive power allocation strategy is derived for time-correlated, time-varying TR-MIMO-UWB channels. We show that the proposed power loading technique, together with the robust pre-filter, considerably improves the BER performance of the TR-MIMO-UWB system under imperfect CSI. The proposed algorithms lead to a cost-efficient CSI updating procedure for TR optimization. Simulation results confirm the performance of the new design against the traditional method.

  19. Time delay estimation in a reverberant environment by low rate sampling of impulsive acoustic sources

    KAUST Repository

    Omer, Muhammad

    2012-07-01

    This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results.
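
    The paper's contribution is recovering the RIR from low-rate samples via sparse reconstruction; as a point of reference only, the classic full-rate cross-correlation estimator that such methods are compared against fits in a few lines (the signal, noise level, and delay below are synthetic):

    ```python
    import numpy as np

    def tde_xcorr(x, y, fs):
        """Delay of y relative to x from the cross-correlation peak (seconds)."""
        c = np.correlate(y, x, mode="full")
        lag = int(np.argmax(np.abs(c))) - (len(x) - 1)
        return lag / fs

    fs, delay = 16000.0, 37                       # 37 samples = ~2.31 ms
    rng = np.random.default_rng(4)
    s = rng.normal(size=2048)                     # impulsive-like source
    x = s + 0.01 * rng.normal(size=s.size)
    y = np.concatenate([np.zeros(delay), s])[:s.size] + 0.01 * rng.normal(size=s.size)
    print(tde_xcorr(x, y, fs) * 1e3, "ms")
    ```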

  20. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    Energy Technology Data Exchange (ETDEWEB)

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and

  1. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    International Nuclear Information System (INIS)

    Shine, E. P.; Poirier, M. R.

    2013-01-01

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and

  2. Design of TIME2 code: time dependent effects on Land 2 type repositories for Department of the Environment

    International Nuclear Information System (INIS)

    1985-07-01

    Design details for the proposed TIME2 computer code are presented for information and planning purposes and to serve as a guideline during code development. The TIME2 code will describe the long-term evolution of the environments of Land 2 type radioactive waste disposal sites (also known as 'time dependent effects'). Outlines are presented of the code's purpose and utilisation, specification and structure, input and output design, verification and validation, quality assurance and documentation. (author)

  3. Design and Proof-of-Concept Use of a Circular PMMA Platform with 16-Well Sample Capacity for Microwave-Accelerated Bioassays.

    Science.gov (United States)

    Mohammed, Muzaffer; Aslan, Kadir

    2013-01-01

    We demonstrate the design and proof-of-concept use of a new, circular poly(methyl methacrylate)-based bioassay platform (PMMA platform), which allows 16 samples to be processed at once. The circular PMMA platform (5 cm in diameter) was coated with a silver nanoparticle film to accelerate the bioassay steps by microwave heating. A model colorimetric bioassay for biotinylated albumin (using streptavidin-labeled horseradish peroxidase) was performed on the PMMA platform coated with and without silver nanoparticles (a control experiment), both at room temperature and using microwave heating. The simulated temperature profile of the PMMA platform during microwave heating was comparable to the real-time temperature profile during actual microwave heating of the constructed platform in a commercial microwave oven. The model colorimetric bioassay for biotinylated albumin was successfully completed in ~2 min total assay time using microwave heating, compared to 90 min at room temperature, a ~45-fold decrease in assay time. Our PMMA platform design afforded a significant reduction in non-specific interactions and a low background signal compared to non-silvered PMMA surfaces when employed in a microwave-accelerated bioassay carried out in a conventional microwave cavity.

  4. Software documentation and user's manual for fish-impingement sampling design and estimation method computer programs

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-11-01

    This report contains a description of three computer programs that implement the theory of sampling designs and the methods for estimating fish impingement at the cooling-water intakes of nuclear power plants, as described in companion report ANL/ES-60. Complete FORTRAN listings of these programs, named SAMPLE, ESTIMA, and SIZECO, are given and augmented with examples of how they are used.

  5. Neural-Fuzzy Digital Strategy of Continuous-Time Nonlinear Systems Using Adaptive Prediction and Random-Local-Optimization Design

    Directory of Open Access Journals (Sweden)

    Zhi-Ren Tsai

    2013-01-01

    A tracking problem, time-delay, uncertainty and stability analysis of a predictive control system are considered. The predictive control design is based on the input and output of a neural plant model (NPM), and a recursive fuzzy predictive tracker has scaling factors that limit the value zone of measured data and cause the tuned parameters to converge, yielding robust control performance. To further improve control performance, the proposed random-local-optimization design (RLO) for the model/controller uses offline initialization to obtain a near-globally optimal model/controller. Other issues are the considerations of modeling error, input delay, sampling distortion, cost, greater flexibility, and highly reliable digital products of the model-based controller for the continuous-time (CT) nonlinear system. They are addressed by the recommended two-stage control design, with a first-stage (offline) RLO step and a second-stage (online) adaptive step. A theorizing method is then put forward to replace the sensitivity calculation, which reduces the computation of Jacobian matrices in the back-propagation (BP) method. Finally, the feedforward input of reference signals helps the digital fuzzy controller improve the control performance, and the technique works to control the CT systems precisely.

  6. Inclusion of mobile telephone numbers into an ongoing population health survey in New South Wales, Australia, using an overlapping dual-frame design: impact on the time series.

    Science.gov (United States)

    Barr, Margo L; Ferguson, Raymond A; Steel, David G

    2014-08-12

    Since 1997, the NSW Population Health Survey (NSWPHS) had selected its sample using random digit dialing of landline telephone numbers. When the survey began, coverage of the population by landline phone frames was high (96%). As landline coverage in Australia has declined, and continues to decline, a sample of mobile telephone numbers was added to the survey in 2012 using an overlapping dual-frame design. Details of the methodology are published elsewhere. This paper discusses the impacts of the sampling frame change on the time series and provides possible approaches to handling these impacts. Prevalence estimates were calculated for type of phone use and for a range of health indicators. Prevalence ratios (PR) for each of the health indicators were also calculated by type of phone use, using Poisson regression analysis with robust variance estimation. Health estimates for 2012 were compared to 2011, and the full time series was examined for selected health indicators. It was estimated from the 2012 NSWPHS that 20.0% of the NSW population were mobile-only phone users. Had the NSWPHS continued to be undertaken using only a landline frame, the full time series would have shown overweight or obesity continuing to increase and current smoking continuing to decrease. With the introduction of the overlapping dual-frame design in 2012, however, overweight or obesity increased until 2011 and then decreased in 2012, and current smoking decreased until 2011 and then increased in 2012. Our examination of these time series showed that the changes were a consequence of the sampling frame change and were not real changes. Both the backcasting method and the minimal coverage method could adequately adjust for the design change and allow for the continuation of the time series. The inclusion of the mobile telephone numbers, through an overlapping dual-frame design, did impact the time series for some of
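
    For readers unfamiliar with overlapping dual-frame estimation, the sketch below illustrates one common compositing approach in the spirit of Hartley-type estimators; the domain shares, prevalences and mixing constant theta are invented, and this is not the NSWPHS's actual estimator (the paper's backcasting and minimal coverage adjustments are separate, additional steps).

        # Composite estimate from a landline frame and a mobile frame that
        # overlap: weight the three domains by their population shares and
        # blend the two overlap-domain estimates with a constant theta.
        def dual_frame_estimate(p_ll_only, p_mob_only, p_olap_a, p_olap_b,
                                share_ll_only, share_mob_only, share_olap,
                                theta=0.5):
            p_olap = theta * p_olap_a + (1.0 - theta) * p_olap_b
            return (share_ll_only * p_ll_only
                    + share_mob_only * p_mob_only
                    + share_olap * p_olap)

        # Illustrative numbers; 20% mobile-only matches the 2012 NSWPHS estimate.
        print(dual_frame_estimate(0.18, 0.24, 0.20, 0.22, 0.25, 0.20, 0.55))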

  7. Iohexol plasma clearance measurement in older adults with chronic kidney disease-sampling time matters.

    Science.gov (United States)

    Ebert, Natalie; Loesment, Amina; Martus, Peter; Jakob, Olga; Gaedeke, Jens; Kuhlmann, Martin; Bartel, Jan; Schuchardt, Mirjam; Tölle, Markus; Huang, Tao; van der Giet, Markus; Schaeffner, Elke

    2015-08-01

    Accurate and precise measurement of GFR is important for patients with chronic kidney disease (CKD). The sampling times of exogenous filtration markers may have a great impact on measured GFR (mGFR) results, but there is still uncertainty about the optimal timing of plasma clearance measurement in patients with advanced CKD, for whom 24-h measurement is recommended. This satellite project of the Berlin Initiative Study evaluates whether 24-h iohexol plasma clearance reveals a clinically relevant difference compared with 5-h measurement in older adults. In 104 participants with a mean age of 79 years and diagnosed CKD, we performed standard GFR measurement over 5 h (mGFR300) using iohexol plasma concentrations at 120, 180, 240 and 300 min after injection. With an additional sample at 1440 min, we assessed 24-h GFR measurement (mGFR1440). The study design was cross-sectional. Calculation of mGFR used a one-compartment model with the Brochner-Mortensen equation to correct for the fast component. mGFR values were compared with estimated GFR values (MDRD, CKD-EPI, BIS1, Revised Lund-Malmö and Cockcroft-Gault). In all 104 subjects, mGFR1440 was lower than mGFR300 (23 ± 8 versus 29 ± 9 mL/min/1.73 m², mean ± SD; P < 0.001). Plasma clearance measured up to only 5 h thus leads to a clinically relevant overestimation of GFR compared with 24-h measurement. In clinical care, this effect should be borne in mind, especially for patients with considerably reduced GFR levels. A new correction formula has been developed to predict mGFR1440 from mGFR300. For accurate GFR estimates in elderly CKD patients, we recommend the Revised Lund-Malmö equation. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
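
    The slope-intercept calculation behind mGFR300 can be sketched in a few lines. The concentrations and dose below are invented; the widely cited Brochner-Mortensen coefficients are used for the fast-component correction, and body-surface-area normalization is omitted.

        import numpy as np

        t = np.array([120.0, 180.0, 240.0, 300.0])   # min after injection
        c = np.array([95.0, 70.0, 52.0, 39.0])       # iohexol, ug/mL (invented)
        dose_ug = 1.0e6                               # injected dose (invented)

        slope, ln_c0 = np.polyfit(t, np.log(c), 1)    # log-linear terminal fit
        k = -slope                                    # elimination rate, 1/min
        cl_slow = dose_ug * k / np.exp(ln_c0)         # one-pool clearance, mL/min

        # Brochner-Mortensen correction for the fast component.
        cl = 0.990778 * cl_slow - 0.001218 * cl_slow ** 2
        print(f"uncorrected {cl_slow:.1f} mL/min, corrected {cl:.1f} mL/min")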

  8. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists, and especially ecologists, use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying patterns in a statistically solid and reproducible manner, given the normal restrictions in labour, time and money. However, a technical guideline about an adequate sampling design to maximize prediction success under restricted resources is lacking. This study aims at developing such a solid and reproducible guideline for sampling along gradients in all fields of ecology and science in general. 2. We conducted simulations with artificial data for five common response types known in ecology, each represented by a simple function (no response, linear, exponential, symmetric unimodal and asymmetric unimodal)...
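
    A minimal version of such a simulation is easy to reproduce. The functional forms and noise level below are assumptions for illustration; they mirror the five response types named in the record.

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 1.0, 200)                  # position along gradient

        responses = {
            "no response":    np.zeros_like(x),
            "linear":         x,
            "exponential":    np.expm1(3.0 * x) / np.expm1(3.0),
            "sym. unimodal":  np.exp(-((x - 0.5) / 0.15) ** 2),
            "asym. unimodal": (x ** 2 * np.exp(-6.0 * x)) / ((1 / 3) ** 2 * np.exp(-2.0)),
        }

        pts = np.linspace(0.0, 1.0, 12)                 # one candidate design
        for name, y in responses.items():
            obs = np.interp(pts, x, y) + rng.normal(0.0, 0.1, pts.size)
            print(f"{name:15s}", np.round(obs[:4], 2))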

  9. [Design and implementation of real-time continuous glucose monitoring instrument].

    Science.gov (United States)

    Huang, Yonghong; Liu, Hongying; Tian, Senfu; Jia, Ziru; Wang, Zi; Pi, Xitian

    2017-12-01

    Real-time continuous glucose monitoring can help diabetics control blood sugar levels within the normal range. In practical monitoring, however, the output of a real-time continuous glucose monitoring system is susceptible to glucose sensor and environmental noise, which influences the measurement accuracy of the system. To address this problem, this paper proposes a dual-calibration algorithm combining moving-window double-layer filtering with a real-time self-compensating calibration algorithm, which compensates for signal drift in the current data. A real-time continuous glucose monitoring instrument based on this approach was designed, consisting of an adjustable excitation voltage module, a current-voltage converter module, a microprocessor and a wireless transceiver module. For portability, the size of the device was only 40 mm × 30 mm × 5 mm and its weight only 30 g. In addition, a communication command code algorithm was designed to ensure the security and integrity of data transmission. In vitro experiments showed that the device's current detection worked effectively, and a 5-hour monitoring of blood glucose level in vivo showed that the device could continuously monitor blood glucose in real time. The relative error of the device's monitoring results ranged from 2.22% to 7.17% compared with a portable blood glucose meter.
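
    The record does not publish the dual-calibration algorithm itself, but the general shape of a moving-window, two-stage filter for a noisy sensor current can be sketched as follows (window sizes and signal are invented): a median pass to reject spikes, then a mean pass to smooth.

        import numpy as np

        def double_layer_filter(x, w_median=5, w_mean=5):
            x = np.asarray(x, dtype=float)
            pad = w_median // 2
            xp = np.pad(x, pad, mode="edge")
            med = np.array([np.median(xp[i:i + w_median]) for i in range(x.size)])
            return np.convolve(med, np.ones(w_mean) / w_mean, mode="same")

        rng = np.random.default_rng(0)
        current = 100 + 10 * np.sin(np.linspace(0, 3, 60)) + rng.normal(0, 3, 60)
        current[25] += 40                      # a hypothetical sensor spike
        print(np.round(double_layer_filter(current)[20:30], 1))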

  10. Analysing designed experiments in distance sampling

    Science.gov (United States)

    Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block

    2009-01-01

    Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...
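
    The core calculation can be illustrated for a line transect with a half-normal detection function g(x) = exp(-x²/2σ²); the perpendicular distances and transect length below are invented.

        import numpy as np

        x = np.array([3.1, 0.4, 12.7, 5.0, 8.2, 1.1, 6.3, 2.2, 9.8, 4.4])  # m
        L = 2000.0                                  # total transect length, m

        s2 = np.mean(x ** 2)                        # half-normal MLE of sigma^2
        esw = np.sqrt(np.pi * s2 / 2.0)             # effective strip half-width
        density = x.size / (2.0 * L * esw)          # animals per m^2
        print(f"ESW {esw:.1f} m, density {density * 1e4:.2f} per ha")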

  11. Sampling times influence the estimate of parameters in the Weibull dissolution model

    Czech Academy of Sciences Publication Activity Database

    Čupera, J.; Lánský, Petr; Šklubalová, Z.

    2015-01-01

    Roč. 78, Oct 12 (2015), s. 171-176 ISSN 0928-0987 Institutional support: RVO:67985823 Keywords : dissolution * Fisher information * rate constant * optimal sampling times Subject RIV: BA - General Mathematics Impact factor: 3.773, year: 2015
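
    The record lists only keywords, but the idea it points to is easy to demonstrate: in the Weibull dissolution model F(t) = 1 − exp(−(t/td)^b), the information a set of sampling times carries about (td, b) can be compared through the determinant of a Fisher information matrix. The parameter values, time grids and unit error variance below are assumptions for illustration.

        import numpy as np

        def fim(times, td=30.0, b=1.5):
            t = np.asarray(times, dtype=float)
            z = (t / td) ** b
            e = np.exp(-z)
            g = np.column_stack([-e * z * b / td,           # dF/d(td)
                                 e * z * np.log(t / td)])   # dF/d(b)
            return g.T @ g                                  # unit-variance FIM

        early = [5, 10, 15, 20, 25, 30]
        spread = [5, 15, 30, 45, 60, 90]
        for name, grid in (("early", early), ("spread", spread)):
            print(name, np.linalg.det(fim(grid)))           # D-optimality criterion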

  12. A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses

    Science.gov (United States)

    Yondo, Raul; Andrés, Esther; Valero, Eusebio

    2018-01-01

    Full-scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models and flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems and restricted computational budgets. To reduce the computational burden and generate low-cost but accurate models that mimic full-order models at different values of the design variables, surrogate-based approaches have recently been introduced into real-time and many-query analyses as rapid and cheaper-to-simulate models. In this paper, a comprehensive, state-of-the-art survey of common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraint handling, and infill and stopping criteria. Benefits, drawbacks and comparative discussions of applying those methods are provided. Furthermore, the paper familiarizes readers with surrogate models that have been successfully applied to the general field of fluid dynamics but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field, and discussions of advanced sampling methodologies, are presented to give a glance at the various efficient possibilities for sampling the parameter space a priori. Closing remarks address future perspectives, challenges and shortcomings associated with the use of surrogate models by the aircraft industry.
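
    The basic surrogate workflow the review surveys can be sketched briefly: sample the design space with Latin hypercube sampling, evaluate the expensive model (here a stand-in analytic function), and fit a cheap radial-basis-function surrogate. This is a generic illustration, not a method from the paper; it assumes scipy >= 1.7.

        import numpy as np
        from scipy.stats import qmc
        from scipy.interpolate import RBFInterpolator

        def expensive_model(X):          # stand-in for CFD or wind tunnel data
            return np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1]) + X[:, 0] ** 2

        sampler = qmc.LatinHypercube(d=2, seed=0)
        X_train = sampler.random(40)     # 40 space-filling design points
        y_train = expensive_model(X_train)

        surrogate = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline")
        X_test = sampler.random(5)
        print(np.c_[surrogate(X_test), expensive_model(X_test)])  # pred vs truth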

  13. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    Science.gov (United States)

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest and sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
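
    The sampling rule itself is only a few lines; the sketch below (extent and step invented) returns site positions along one axis: a random start inside the first interval, then equidistant steps.

        import numpy as np

        def systematic_random_sites(extent, step, rng=None):
            rng = rng or np.random.default_rng()
            start = rng.uniform(0.0, step)         # random start inside one step
            return np.arange(start, extent, step)  # equidistant sites thereafter

        print(systematic_random_sites(extent=1000.0, step=150.0))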

  14. A Bayesian Justification for Random Sampling in Sample Survey

    Directory of Open Access Journals (Sweden)

    Glen Meeden

    2012-07-01

    In the usual Bayesian approach to survey sampling, the sampling design plays a minimal role at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.

  15. Sample pooling for real-time PCR detection and virulence determination of the footrot pathogen Dichelobacter nodosus.

    Science.gov (United States)

    Frosth, Sara; König, Ulrika; Nyman, Ann-Kristin; Aspán, Anna

    2017-09-01

    Dichelobacter nodosus is the principal cause of ovine footrot, and strain virulence is an important factor in disease severity. Detection and virulence determination of D. nodosus are therefore important for proper diagnosis of the disease, and both are now possible by real-time PCR analysis. Analysis of large numbers of samples is costly and laborious; therefore, pooling of individual samples is common in surveillance programs. However, pooling can reduce the sensitivity of the method. The aim of this study was to develop a pooling method for real-time PCR analysis that would allow sensitive detection and simultaneous virulence determination of D. nodosus. A total of 225 sheep from 17 flocks were sampled using ESwabs within the Swedish Footrot Control Program in 2014. Samples were first analysed individually and then in pools of five by real-time PCR assays targeting the 16S rRNA and aprV2/B2 genes of D. nodosus. Each pool consisted of four negative and one positive D. nodosus samples with varying amounts of the bacterium. In the individual analysis, 61 (27.1%) samples were positive in the 16S rRNA and aprV2/B2 PCR assays and 164 (72.9%) samples were negative. All samples positive in the aprV2/B2 PCR assay were of the aprB2 variant. The pooled analysis showed that all 41 pools were also positive for D. nodosus 16S rRNA and the aprB2 variant, so the diagnostic sensitivity for pooled and individual samples was similar. Our method includes concentration of the bacteria before DNA extraction, which may account for the maintenance of diagnostic sensitivity. Diagnostic sensitivity in the real-time PCR assays of the pooled samples was comparable to the sensitivity obtained for individually analysed samples. Even sub-clinical infections could be detected in the pooled PCR samples, which is important for control of the disease. This method may therefore be implemented in footrot control programs, where it can replace analysis of individual samples.
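
    A rough back-of-envelope calculation suggests why 1-in-5 pooling need not wreck qPCR sensitivity: diluting one positive sample five-fold delays the quantification cycle by only about log2(5) cycles at ~100% PCR efficiency, and the study's pre-extraction concentration step offsets part of that. The arithmetic below is illustrative, not from the paper.

        import math

        dilution = 5                       # 1 positive pooled with 4 negatives
        ct_shift = math.log2(dilution)     # extra cycles to reach threshold
        print(f"expected Ct shift ~ {ct_shift:.1f} cycles")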

  16. Graphical Data Analysis on the Circle: Wrap-Around Time Series Plots for (Interrupted) Time Series Designs.

    Science.gov (United States)

    Rodgers, Joseph Lee; Beasley, William Howard; Schuelke, Matthew

    2014-01-01

    Many data structures, particularly time series data, are naturally seasonal, cyclical, or otherwise circular. Past graphical methods for time series have focused on linear plots. In this article, we move graphical analysis onto the circle. We focus on 2 particular methods, one old and one new. Rose diagrams are circular histograms and can be produced in several different forms using the RRose software system. In addition, we propose, develop, illustrate, and provide software support for a new circular graphical method, called Wrap-Around Time Series Plots (WATS Plots), which is useful for supporting time series analyses in general and interrupted time series designs in particular. We illustrate the use of WATS Plots with an interrupted time series design evaluating the effect of the Oklahoma City bombing on birthrates in Oklahoma County during the 10 years surrounding the bombing of the Murrah Building. We compare WATS Plots with linear time series representations and overlay them with smoothing and error bands. Each method is shown to have advantages in relation to the other; in our example, the WATS Plots more clearly show the existence and effect size of the fertility differential.
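
    The wrap-around idea can be imitated with a generic polar plot: map each observation's position within the seasonal cycle to an angle and its value to the radius, so successive years overplot and an interruption separates visually. This sketch uses simulated data and matplotlib, not the authors' RRose/WATS software.

        import numpy as np
        import matplotlib.pyplot as plt

        months = np.arange(120)                         # ten years, monthly
        rate = 100 + 10 * np.sin(2 * np.pi * months / 12)
        rate[60:] -= 8                                  # hypothetical interruption

        theta = 2 * np.pi * (months % 12) / 12          # angle = month of year
        ax = plt.subplot(projection="polar")
        ax.plot(theta[:60], rate[:60], label="before")
        ax.plot(theta[60:], rate[60:], label="after")
        ax.legend()
        plt.show()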

  17. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected in a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min−1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher than in extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  18. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    International Nuclear Information System (INIS)

    Oliveira, Karina B. de; Oliveira, Bras H. de

    2013-01-01

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected in a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min−1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher than in extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)
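
    The central composite design used in both records above is straightforward to generate in coded units: a two-level factorial cube, axial "star" points at distance alpha, and replicated centre points. The choice of alpha and the number of centre replicates below are illustrative.

        import itertools
        import numpy as np

        k = 3                                       # three factors
        alpha = (2 ** k) ** 0.25                    # rotatable axial distance
        cube = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
        axial = np.vstack([sign * row for sign in (-alpha, alpha)
                           for row in np.eye(k)])
        center = np.zeros((3, k))                   # centre-point replicates
        ccd = np.vstack([cube, axial, center])
        print(ccd.shape)                            # (17, 3) coded design matrix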

  19. JY1 time scale: a new Kalman-filter time scale designed at NIST

    International Nuclear Information System (INIS)

    Yao, Jian; Parker, Thomas E; Levine, Judah

    2017-01-01

    We report on a new Kalman-filter hydrogen-maser time scale (the JY1 time scale) designed at the National Institute of Standards and Technology (NIST). The JY1 time scale is composed of a few hydrogen masers and a commercial Cs clock, the Cs clock serving as a reference to ease operations with existing data. Unlike other time scales, the JY1 time scale uses three basic time-scale equations instead of only one, and it can detect a clock error (i.e. a time error, frequency error, or frequency drift error) automatically. These features make the JY1 time scale stiff and less likely to be affected by an abnormal clock. Tests show that the JY1 time scale deviates from UTC by less than ±5 ns for ∼100 d when the time scale is initially aligned to UTC and then run completely free. Once the time scale is steered to a Cs fountain, it can maintain the time with little error even if the Cs fountain stops working for tens of days. This can be helpful when a continuously operated fountain is not available, when a continuously operated fountain accidentally stops, or when optical clocks run only occasionally. (paper)

  20. Understanding the Links Between Self-Report Emotional Intelligence and Suicide Risk: Does Psychological Distress Mediate This Relationship Across Time and Samples?

    Directory of Open Access Journals (Sweden)

    Sergio Mérida-López

    2018-05-01

    Objective: In recent decades, increasing attention has been paid to examining psychological resources that might contribute to our understanding of suicide risk. Although Emotional Intelligence (EI) is one dimension that has been linked with decreased suicidal ideation and behaviors, we detected several gaps in the literature in this area regarding the research designs and samples involved. In this research, we aimed to test a mediator model considering self-report EI, psychological distress and suicide risk across samples, adopting both cross-sectional and prospective designs in two independent studies. Method: In Study 1, our purpose was to examine the potential role of psychological distress as a mediator in the relationship between self-report EI and suicide risk in a community sample of 438 adults (270 women; mean age: 33.21 years). In Study 2, we sought to examine the proposed mediator model using a 2-month prospective design in a sample of college students (n = 330 in T1; n = 311 in T2; 264 women; mean age: 22.22 years). Results: In Study 1, we found that psychological distress partially mediated the effect of self-report EI on suicide risk. More interestingly, findings from Study 2 showed that psychological distress fully mediated the relationship between self-report EI and suicide risk at Time 2. Conclusion: These results point to the role of psychological distress as a mediator in the association between self-report EI and suicide risk and suggest an underlying process by which self-report EI may act as a protective factor against suicidal ideation and behaviors. In line with the limitations of our work, plausible avenues for future research and interventions are discussed.

  1. Design and construction of a prototype vaporization calorimeter for the assay of radioisotopic samples

    International Nuclear Information System (INIS)

    Tormey, T.V.

    1979-10-01

    A prototype vaporization calorimeter has been designed and constructed for use in the assay of low-power-output radioisotopic samples. The prototype calorimeter design was based on that of a previous experimental instrument used by H.P. Stephens to establish the feasibility of the vaporization calorimetry technique for this type of power measurement. The calorimeter comprises a mechanical calorimeter assembly together with a data acquisition and control system. Detailed drawings of the calorimeter assembly are included and additional drawings are referenced. The data acquisition system is based on an HP 9825A programmable calculator; a description of the hardware is provided together with a listing of all system software programs. The operating procedure is outlined, including initial setup and operation of all related equipment. Preliminary system performance was evaluated by making a series of four measurements on two nominal 1.5 W samples and on a nominal 0.75 W sample. Data for these measurements indicate that the absolute accuracy (one standard deviation) is approximately 0.0035 W in this power range, corresponding to an estimated relative one-standard-deviation accuracy of 0.24% at 1.5 W and 0.48% at 0.75 W.

  2. Designing a monitoring program to estimate estuarine survival of anadromous salmon smolts: simulating the effect of sample design on inference

    Science.gov (United States)

    Romer, Jeremy D.; Gitelman, Alix I.; Clements, Shaun; Schreck, Carl B.

    2015-01-01

    A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt migration. These data
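
    The gist of the simulation can be reproduced compactly. Below, daily survival probability follows one of the paper's patterns (increase then decrease) plus daily noise, and two tagging designs are compared under perfect detection; all shapes and numbers are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(7)
        days = np.arange(90)                         # outmigration season
        p = 0.6 + 0.2 * np.sin(np.pi * days / 90) + rng.normal(0, 0.03, days.size)
        p = p.clip(0.0, 1.0)                         # daily survival probability

        def estimate(tag_days, tags_per_day=20):
            survived = rng.binomial(tags_per_day, p[tag_days])
            return survived.sum() / (tags_per_day * len(tag_days))

        concentrated = np.arange(30, 40)             # one 10-day tagging burst
        spread = np.linspace(0, 89, 10).astype(int)  # 10 days across the season
        print("true mean", round(p.mean(), 3),
              "| concentrated", round(estimate(concentrated), 3),
              "| spread", round(estimate(spread), 3))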

  3. Design of real-time operating system software

    International Nuclear Information System (INIS)

    Stafford, D.L.; Hulbert, A.J.

    1976-01-01

    Design of real-time data collection and control systems is complicated by problems not found in common sequential programming projects. Timing and synchronization are critical; data collection equipment is often completely foreign to mainframe programming facilities; operating system response to real-time events may be far too slow or too cumbersome. The philosophy at the Inhalation Toxicology Research Institute (ITRI) is to allow the investigators to do as much of their own programming as possible. Inasmuch as programming is not their primary responsibility or area of training, it is unreasonable to assume that they are familiar with the subtleties involved. The problem, therefore, has been to develop a system whereby investigators are able, with minimal consultation, to write, modify and otherwise maintain their own data collection and control programs and to do so with a minimum of destructive interaction with other data collection and control tasks

  4. Designing life-time recycling loops for elastomer products

    NARCIS (Netherlands)

    Noordermeer, Jacques; Dierkes, Wilma; Blume, Anke; Dijkhuis, Kuno; van Hoek, Hans; Reuvekamp, Louis; Saiwari, Siti

    2017-01-01

    With depletion of natural resources and growing awareness of the limited capabilities of the globe to cope with pollution, the need to design life-time recycling loops for all types of products is steadily increasing. Rubber articles of all sorts and the need for a proper disposal of these at the

  5. Time-Efficiency of Sorting Chironomidae Surface-Floating Pupal Exuviae Samples from Urban Trout Streams in Northeast Minnesota, USA

    Directory of Open Access Journals (Sweden)

    Alyssa M Anderson

    2012-10-01

    Collections of Chironomidae surface-floating pupal exuviae (SFPE) provide an effective means of assessing water quality in streams. Although not widely used in the United States, the technique is not new, and it has been shown to be more cost-efficient than traditional dip-net sampling techniques in organically enriched streams in an urban landscape. The intent of this research was to document the efficiency of sorting SFPE samples relative to dip-net samples in trout streams whose catchments vary in the amount of urbanization and impervious surface. Samples of both SFPE and dip-nets were collected from 17 sample sites located on 12 trout streams in Duluth, MN, USA. We quantified the time needed to sort subsamples of 100 macroinvertebrates from dip-net samples, and of fewer or more than 100 chironomid exuviae from SFPE samples. For larger SFPE samples, the time required to subsample up to 300 exuviae was also recorded. The average time to sort subsamples of 100 specimens was 22.5 minutes for SFPE samples, compared to 32.7 minutes for 100 macroinvertebrates in dip-net samples. The average time to sort up to 300 exuviae was 37.7 minutes. These results indicate that sorting SFPE samples is more time-efficient than traditional dip-net techniques in trout streams with varying catchment characteristics. doi: 10.5324/fn.v31i0.1380. Published online: 17 October 2012.

  6. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE), or parallel tempering, is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter-choice problem can be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate of the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. RE-EDS thus presents an efficient method for the estimation of relative binding free energies.

  7. Modern linear control design: a time-domain approach

    CERN Document Server

    Caravani, Paolo

    2013-01-01

    This book offers a compact introduction to modern linear control design. The simplified overview it presents of linear time-domain methodology paves the way for the study of more advanced non-linear techniques. Only rudimentary knowledge of linear systems theory is assumed - no use of Laplace transforms or frequency design tools is required. Emphasis is placed on assumptions and logical implications, rather than abstract completeness; on interpretation and physical meaning, rather than theoretical formalism; on results and solutions, rather than derivation or solvability. The topics covered include transient performance and stabilization via state or output feedback; disturbance attenuation and robust control; regional eigenvalue assignment and constraints on input or output variables; asymptotic regulation and disturbance rejection. Lyapunov theory and Linear Matrix Inequalities (LMI) are discussed as key design methods. All methods are demonstrated with MATLAB to promote practical use and comprehension. ...

  8. A computational study of a fast sampling valve designed to sample soot precursors inside a forming diesel spray plume

    International Nuclear Information System (INIS)

    Dumitrescu, Cosmin; Puzinauskas, Paulius V.; Agrawal, Ajay K.; Liu, Hao; Daly, Daniel T.

    2009-01-01

    Accurate chemical reaction mechanisms are critically needed to fully optimize combustion strategies for modern internal-combustion engines. These mechanisms are needed to predict emission formation and the chemical heat release characteristics for traditional direct-injection diesel as well as recently-developed and proposed variant combustion strategies. Experimental data acquired under conditions representative of such combustion strategies are required to validate these reaction mechanisms. This paper explores the feasibility of developing a fast sampling valve which extracts reactants at known locations in the spray reaction structure to provide these data. CHEMKIN software is used to establish the reaction timescales which dictate the required fast sampling capabilities. The sampling process is analyzed using separate FLUENT and CHEMKIN calculations. The non-reacting FLUENT CFD calculations give a quantitative estimate of the sample quantity as well as the fluid mixing and thermal history. A CHEMKIN reactor network has been created that reflects these mixing and thermal time scales and allows a theoretical evaluation of the quenching process

  9. Use of Time-Series, ARIMA Designs to Assess Program Efficacy.

    Science.gov (United States)

    Braden, Jeffery P.; And Others

    1990-01-01

    Illustrates the use of time-series designs for determining the efficacy of interventions, with fictitious data describing a drug-abuse prevention program. Discusses problems and procedures associated with time-series data analysis using Autoregressive Integrated Moving Average (ARIMA) models. An example illustrates the application of ARIMA analysis for…
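
    A minimal interrupted-time-series ARIMA analysis along these lines can be sketched with statsmodels: simulate a series with a step change at the intervention, then estimate the step effect with an ARIMA(1,0,0) model and an exogenous step regressor. Everything here is simulated for illustration.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        n, t0 = 120, 80                        # series length, intervention point
        step = (np.arange(n) >= t0).astype(float)

        e = rng.normal(0.0, 2.0, n)            # AR(1) noise, phi = 0.6
        noise = np.zeros(n)
        for i in range(1, n):
            noise[i] = 0.6 * noise[i - 1] + e[i]
        y = 50 + 5 * step + noise              # true step effect of +5

        fit = ARIMA(y, exog=step, order=(1, 0, 0)).fit()
        print(fit.params)                      # includes the estimated step effect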

  10. The design of a real-time distributed system

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Tuynman, F.; Mullender, S.J.; Poletiek, G.; Vermeulen, J.C.; Renesse, R. van; Tanenbaum, A.S.

    1986-01-01

    In modern physics experiments an increasing number and variety of programmable processors is used. As a consequence, a software environment is needed that provides an integrated approach to development, testing and use of real-time distributed software. This contribution is based on work being done in the AMOEBA Distributed Operating System Project and the FADOS Real-Time Distributed Operating System Project. A short description of both systems is presented as an example of how basic real-time operating system services can be organized. AMOEBA is the result of fundamental research in the field of distributed operating systems, while FADOS has been designed for applications as encountered in experimental high-energy physics. (Auth.)

  11. Determination of melamine in soil samples using surfactant-enhanced hollow fiber liquid phase microextraction followed by HPLC–UV using experimental design

    Directory of Open Access Journals (Sweden)

    Ali Sarafraz Yazdi

    2015-11-01

    Surfactant-enhanced hollow-fiber liquid-phase microextraction (SE-HF-LPME) was applied for the extraction of melamine in conjunction with high performance liquid chromatography with UV detection (HPLC–UV). Sodium dodecyl sulfate (SDS) was first added to the sample solution at pH 1.9 to form a hydrophobic ion-pair with protonated melamine. The protonated melamine–dodecyl sulfate ion-pair (Mel–DS) was then extracted from the aqueous phase into the organic phase immobilized in the pores and lumen of the hollow fiber. After extraction, the analyte-enriched 1-octanol was withdrawn into the syringe and injected into the HPLC. A one-variable-at-a-time approach was first applied to select the type of extraction solvent. Then, in the screening step, the other variables that may affect the extraction efficiency of the analyte were studied using a fractional factorial design. In the next step, a central composite design was applied to optimize the significant factors having positive effects on extraction efficiency. The optimum operational conditions were: sample volume, 5 mL; surfactant concentration, 1.5 mM; pH 1.9; stirring rate, 1500 rpm; and extraction time, 60 min. Under the optimum conditions, the method was analytically evaluated. The detection limit, relative standard deviation and linear range were 0.005 μg mL−1, 4.0% (3 μg mL−1, n = 5) and 0.01–8 μg mL−1, respectively. The performance of the procedure in extracting melamine from soil samples was good, with relative recoveries at different spiking levels of 95–109%.

  12. Effects of sampling design on age ratios of migrants captured at stopover sites

    Science.gov (United States)

    Jeffrey F. Kelly; Deborah M. Finch

    2000-01-01

    Age classes of migrant songbirds often differ in migration timing. This difference creates the potential for age-ratios recorded at stopover sites to vary with the amount and distribution of sampling effort used. To test for these biases, we sub-sampled migrant capture data from the Middle Rio Grande Valley of New Mexico. We created data sets that reflected the age...

  13. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as concrete manifestations of financial time series carrying plenty of potential information, are often used in the study of financial time series. In this paper, we use stock data to recognize patterns through a dissimilarity matrix based on modified cross-sample entropy, and three-dimensional perceptual maps of the results are then provided through multidimensional scaling. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS); multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals a clear clustering in both synthetic data and 18 indices from diverse stock markets. This implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic-data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial-data experiment, the stock indices are clearly divided into five groups, corresponding respectively to Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China and Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in these experiments than MDSC.
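
    The embedding step is standard once a dissimilarity matrix is in hand. The sketch below uses a stand-in dissimilarity (distance between autocorrelation signatures) rather than the paper's modified cross-sample entropies, then applies metric MDS with a precomputed matrix via scikit-learn.

        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(3)
        series = rng.normal(size=(6, 500)).cumsum(axis=1)   # six toy "indices"

        def acf(x, nlags=20):                # autocorrelation signature
            x = x - x.mean()
            return np.array([np.corrcoef(x[:-k], x[k:])[0, 1]
                             for k in range(1, nlags)])

        sig = np.array([acf(s) for s in series])
        D = np.linalg.norm(sig[:, None] - sig[None, :], axis=-1)

        mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
        print(mds.fit_transform(D).shape)    # (6, 3) perceptual-map coordinates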

  14. Sampling Methodologies for Epidemiologic Surveillance of Men Who Have Sex with Men and Transgender Women in Latin America: An Empiric Comparison of Convenience Sampling, Time Space Sampling, and Respondent Driven Sampling

    Science.gov (United States)

    Clark, J. L.; Konda, K. A.; Silva-Santisteban, A.; Peinado, J.; Lama, J. R.; Kusunoki, L.; Perez-Brumer, A.; Pun, M.; Cabello, R.; Sebastian, J. L.; Suarez-Ognio, L.; Sanchez, J.

    2014-01-01

    Alternatives to convenience sampling (CS) are needed for HIV/STI surveillance of most-at-risk populations in Latin America. We compared CS, time space sampling (TSS), and respondent driven sampling (RDS) for recruitment of men who have sex with men (MSM) and transgender women (TW) in Lima, Peru. During concurrent 60-day periods from June–August, 2011, we recruited MSM/TW for epidemiologic surveillance using CS, TSS, and RDS. A total of 748 participants were recruited through CS, 233 through TSS, and 127 through RDS. The TSS sample included the largest proportion of TW (30.7 %) and the lowest percentage of subjects who had previously participated in HIV/STI research (14.9 %). The prevalence of newly diagnosed HIV infection, according to participants’ self-reported previous HIV diagnosis, was highest among TSS recruits (17.9 %) compared with RDS (12.6 %) and CS (10.2 %). TSS identified diverse populations of MSM/TW with higher prevalences of HIV/STIs not accessed by other methods. PMID:24362754

  15. Sampling methodologies for epidemiologic surveillance of men who have sex with men and transgender women in Latin America: an empiric comparison of convenience sampling, time space sampling, and respondent driven sampling.

    Science.gov (United States)

    Clark, J L; Konda, K A; Silva-Santisteban, A; Peinado, J; Lama, J R; Kusunoki, L; Perez-Brumer, A; Pun, M; Cabello, R; Sebastian, J L; Suarez-Ognio, L; Sanchez, J

    2014-12-01

    Alternatives to convenience sampling (CS) are needed for HIV/STI surveillance of most-at-risk populations in Latin America. We compared CS, time space sampling (TSS), and respondent driven sampling (RDS) for recruitment of men who have sex with men (MSM) and transgender women (TW) in Lima, Peru. During concurrent 60-day periods from June-August, 2011, we recruited MSM/TW for epidemiologic surveillance using CS, TSS, and RDS. A total of 748 participants were recruited through CS, 233 through TSS, and 127 through RDS. The TSS sample included the largest proportion of TW (30.7 %) and the lowest percentage of subjects who had previously participated in HIV/STI research (14.9 %). The prevalence of newly diagnosed HIV infection, according to participants' self-reported previous HIV diagnosis, was highest among TSS recruits (17.9 %) compared with RDS (12.6 %) and CS (10.2 %). TSS identified diverse populations of MSM/TW with higher prevalences of HIV/STIs not accessed by other methods.

  16. Optimization of the Extraction of the Volatile Fraction from Honey Samples by SPME-GC-MS, Experimental Design, and Multivariate Target Functions

    Directory of Open Access Journals (Sweden)

    Elisa Robotti

    2017-01-01

    Head-space (HS) solid phase microextraction (SPME) followed by gas chromatography with mass spectrometric detection (GC-MS) is the most widespread technique for studying the volatile profile of honey samples. In this paper, the experimental SPME conditions were optimized by a multivariate strategy. Both sensitivity and repeatability were optimized by experimental design techniques considering three factors: extraction temperature (from 50°C to 70°C), time of exposition of the fiber (from 20 min to 60 min), and amount of salt added (from 0 to 27.50%). Each experiment was evaluated by Principal Component Analysis (PCA), which allows all the analytes to be taken into consideration at the same time while preserving the information about their different characteristics. Optimal extraction conditions were identified independently for signal intensity (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) and repeatability (extraction temperature: 50°C; extraction time: 60 min; salt percentage: 27.50% w/w), and a final global compromise (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) was also reached. Considerations about the choice of the best internal standards are also drawn. The whole optimized procedure was then applied to the analysis of a multiflower honey sample, and more than 100 compounds were identified.

  17. Real-time PCR to supplement gold-standard culture-based detection of Legionella in environmental samples.

    Science.gov (United States)

    Collins, S; Jorgensen, F; Willis, C; Walker, J

    2015-10-01

    Culture remains the gold standard for the enumeration of environmental Legionella. However, it has several drawbacks, including long incubation and poor sensitivity, causing delays in response times to outbreaks of Legionnaires' disease. This study aimed to validate real-time PCR assays to quantify Legionella species (ssrA gene), Legionella pneumophila (mip gene) and Leg. pneumophila serogroup-1 (wzm gene) to support culture-based detection in a frontline public health laboratory. Each qPCR assay had 100% specificity, excellent sensitivity (5 GU/reaction) and reproducibility. Comparison of the assays to culture-based enumeration of Legionella from 200 environmental samples showed that they had a negative predictive value of 100%. Thirty-eight samples were positive for Legionella species by culture and qPCR, 100 samples were negative by both methods, and 62 samples were negative by culture but positive by qPCR. The average log10 increase between culture and qPCR for Legionella spp. and Leg. pneumophila was 0·72 (P = 0·0002) and 0·51 (P = 0·006), respectively. The qPCR assays can be conducted on the same 1 l water sample as culture and can thus be used as a supplementary technique to screen out negative samples and give more rapid indication of positive samples. The assays could prove informative in public health investigations to identify or rule out sources of Legionella, as well as to specifically identify Leg. pneumophila serogroup 1 in a timely manner not possible with culture. © 2015 The Society for Applied Microbiology.

  18. Real-time embedded systems design principles and engineering practices

    CERN Document Server

    Fan, Xiaocong

    2015-01-01

    This book integrates new ideas and topics from real time systems, embedded systems, and software engineering to give a complete picture of the whole process of developing software for real-time embedded applications. You will not only gain a thorough understanding of concepts related to microprocessors, interrupts, and system boot process, appreciating the importance of real-time modeling and scheduling, but you will also learn software engineering practices such as model documentation, model analysis, design patterns, and standard conformance. This book is split into four parts to help you

  19. Fixed-location hydroacoustic monitoring designs for estimating fish passage using stratified random and systematic sampling

    International Nuclear Information System (INIS)

    Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.

    1993-01-01

    Five alternative sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither the magnitude nor the direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is shown to provide unbiased point and variance estimates, as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimation are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs
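
    The STRS estimator the study favours is a textbook stratified estimator; a sketch with hourly strata and simulated 1-min hydroacoustic counts (all rates invented) looks like this.

        import numpy as np

        rng = np.random.default_rng(11)
        est, var = 0.0, 0.0
        for hour in range(24):                       # one day, hourly strata
            rate = 50 + 40 * np.exp(-(((hour - 2) % 24) ** 2) / 8)  # diel pattern
            minutes = rng.poisson(rate, 60)          # true 1-min passage counts
            sample = rng.choice(minutes, size=12, replace=False)    # 12 of 60 min
            n, N = sample.size, 60
            est += N * sample.mean()                 # expand to stratum total
            var += N ** 2 * (1 - n / N) * sample.var(ddof=1) / n    # with fpc
        print(f"daily passage {est:.0f} +/- {1.96 * np.sqrt(var):.0f} (95% CI)")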

  20. Analysis of volatile organic compounds in compost samples: A potential tool to determine appropriate composting time.

    Science.gov (United States)

    Zhu, Fengxiang; Pan, Zaifa; Hong, Chunlai; Wang, Weiping; Chen, Xiaoyang; Xue, Zhiyong; Yao, Yanlai

    2016-12-01

    Changes in volatile organic compound contents in compost samples during pig manure composting were studied using a headspace solid-phase micro-extraction method (HS-SPME) followed by gas chromatography with mass spectrometric detection (GC/MS). Parameters affecting the SPME procedure were optimized as follows: the coating was a carbon molecular sieve/polydimethylsiloxane (CAR/PDMS) fiber, the temperature was 60°C and the extraction time was 30 min. Under these conditions, 87 compounds were identified in 17 composting samples. Most of the volatile components could only be detected before day 22; benzenes, alkanes and alkenes, however, increased and eventually stabilized after day 22. Phenol and acid substances, which are important factors for compost quality, were almost undetectable by day 39 in natural compost (NC) samples and by day 13 in maggot-treated compost (MC) samples. Our results indicate that the approach can be used effectively to determine composting time by analysis of volatile substances in compost samples. An appropriate composting time not only ensures the quality of compost and reduces the loss of composting material but also reduces the generation of hazardous substances. The appropriate composting times for MC and NC were approximately 22 days and 40 days, respectively, during the summer in Zhejiang. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Design of time interval generator based on hybrid counting method

    International Nuclear Information System (INIS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-01-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the need for a custom test system or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. In this case, FPGA-based TIGs with a high delay step are preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail, and a specially designed multiplexer for tap selection is emphatically introduced. The special structure of the multiplexer is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.

  2. Design of time interval generator based on hybrid counting method

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yuan [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Zhaoqi [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Lu, Houbing [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hefei Electronic Engineering Institute, Hefei 230037 (China); Chen, Lian [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Jin, Ge, E-mail: goldjin@ustc.edu.cn [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the need for a custom test system or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. In this case, FPGA-based TIGs with a high delay step are preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail, and a specially designed multiplexer for tap selection is emphatically introduced. The special structure of the multiplexer is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.
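
    The arithmetic of the hybrid scheme in both records above is simple to state: a coarse counter of clock periods spans the wide range, and a fine tapped-delay-line step covers the sub-period remainder. The clock period and tap delay below are illustrative, not the paper's design values.

        CLK_PERIOD_PS = 4000          # coarse clock of 250 MHz (assumed)
        TAP_PS = 11                   # per-tap TDL delay (assumed)

        def settings_for(interval_ps):
            """Split a requested interval into coarse clock periods plus a
            fine TDL tap index - the essence of hybrid counting."""
            coarse, remainder = divmod(interval_ps, CLK_PERIOD_PS)
            return int(coarse), round(remainder / TAP_PS)

        print(settings_for(10_000_123))   # ~10 us requested, in picoseconds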

  3. Freeze core sampling to validate time-lapse resistivity monitoring of the hyporheic zone.

    Science.gov (United States)

    Toran, Laura; Hughes, Brian; Nyquist, Jonathan; Ryan, Robert

    2013-01-01

    A freeze core sampler was used to characterize hyporheic zone storage during a stream tracer test. The pore water from the frozen core showed tracer lingered in the hyporheic zone after the tracer had returned to background concentration in collocated well samples. These results confirmed evidence of lingering subsurface tracer seen in time-lapse electrical resistivity tomographs. The pore water exhibited brine exclusion (ion concentrations in ice lower than source water) in a sediment matrix, despite the fast freezing time. Although freeze core sampling provided qualitative evidence of lingering tracer, it proved difficult to quantify tracer concentration because the amount of brine exclusion during freezing could not be accurately determined. Nonetheless, the additional evidence for lingering tracer supports using time-lapse resistivity to detect regions of low fluid mobility within the hyporheic zone that can act as chemically reactive zones of importance in stream health. © 2012, The Author(s). GroundWater © 2012, National Ground Water Association.

  4. Multiobjective Sampling Design for Calibration of Water Distribution Network Model Using Genetic Algorithm and Neural Network

    Directory of Open Access Journals (Sweden)

    Kourosh Behzadian

    2008-03-01

    Full Text Available In this paper, a novel multiobjective optimization model is presented for selecting optimal locations in a water distribution network (WDN) for installing pressure loggers. The pressure data collected at these optimal locations will later be used to calibrate the WDN model. The objective functions are the maximization of calibrated model prediction accuracy and the minimization of the total cost of the sampling design. To decrease the model run time, an optimization model combining a multiobjective genetic algorithm with adaptive neural networks (MOGA-ANN) has been developed. Neural networks (NNs) are initially trained after a number of initial GA generations, and are periodically retrained and updated after generation of a specified number of full model-analyzed solutions. The trained NNs then replace the full fitness evaluation for a portion of the chromosomes as the GA progresses, and a cache prevents re-evaluation of repeated chromosomes within the GA. Optimal solutions are obtained as the Pareto-optimal front with respect to the two objective functions. Results show that using NNs within the MOGA to approximate the fitness of a portion of the chromosomes in each generation leads to considerable savings in model run time, and is promising for reducing run time in optimization models with significant computational effort.
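
    A toy, single-objective sketch of the two speed-ups described above (a chromosome cache and a periodically retrained neural-network surrogate standing in for the expensive model) might look as follows; the quadratic stand-in objective and all GA settings are illustrative assumptions.

```python
# Toy sketch: caching fitness values of repeated chromosomes, plus a
# neural-network surrogate trained on full-model evaluations so it can
# stand in for the expensive simulator on most generations.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_model(x):               # stand-in for a full WDN simulation
    return float(np.sum((x - 0.3) ** 2))

cache = {}                            # chromosome -> fitness
archive_X, archive_y = [], []         # full-model evaluations for training

def fitness(x, surrogate=None):
    key = tuple(np.round(x, 6))
    if key in cache:
        return cache[key]             # no re-evaluation of repeated chromosomes
    if surrogate is not None:
        return float(surrogate.predict(x[None, :])[0])
    y = expensive_model(x)            # full model run
    cache[key] = y
    archive_X.append(x); archive_y.append(y)
    return y

pop = rng.random((30, 5))
surrogate = None
for gen in range(40):
    use_sur = surrogate is not None and gen % 5 != 0   # full evals every 5 gens
    scores = np.array([fitness(x, surrogate if use_sur else None) for x in pop])
    parents = pop[np.argsort(scores)[:10]]             # simple truncation selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.05, (30, 5))
    pop = np.clip(children, 0, 1)
    if len(archive_X) >= 50:                           # (re)train the surrogate
        surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                 random_state=0).fit(np.array(archive_X),
                                                     np.array(archive_y))
print("best fitness found:", min(archive_y))
```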

  5. Endurance time method for seismic analysis and design of structures

    International Nuclear Information System (INIS)

    Estekanchi, H.E.; Vafai, A.; Sadeghazar, M.

    2004-01-01

    In this paper, a new method for performance-based earthquake analysis and design is introduced. In this method, the structure is subjected to accelerograms that impose increasing dynamic demand on the structure with time. Specified damage indexes are monitored up to the collapse level or another performance limit that defines the endurance limit point of the structure. A method for generating standard intensifying accelerograms is also described, and three accelerograms have been generated using it. Furthermore, the concept of Endurance Time is illustrated by applying these accelerograms to single- and multi-degree-of-freedom linear systems, and the application of the method to the analysis of complex nonlinear systems is explained. The Endurance Time method provides a uniform approach to seismic analysis and design of complex structures that can be applied in numerical and experimental investigations.
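
    The core loop of the method can be illustrated with a toy experiment: a linearly intensifying random accelerogram excites a linear single-degree-of-freedom oscillator, and the endurance time is the first instant a monitored response exceeds a performance limit. All parameters below are illustrative; real Endurance Time accelerograms are generated by a careful calibration procedure, not plain ramped noise.

```python
# Toy Endurance Time experiment: ramped random base acceleration drives a
# linear SDOF oscillator; the endurance time is the first crossing of a
# displacement limit. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(11)
dt, T = 0.01, 40.0
t = np.arange(0, T, dt)
accel = rng.normal(0, 3.0, t.size) * (t / T)     # amplitude grows with time

def sdof_endurance(omega=2 * np.pi, zeta=0.05, limit=0.2):
    """Central-difference integration of x'' + 2*zeta*w*x' + w^2*x = -a(t)."""
    x_prev, x = 0.0, 0.0
    for i in range(1, t.size):
        a_net = -accel[i] - 2 * zeta * omega * (x - x_prev) / dt - omega**2 * x
        x_next = 2 * x - x_prev + a_net * dt * dt
        if abs(x_next) > limit:
            return t[i]                           # endurance limit point reached
        x_prev, x = x, x_next
    return np.inf

print("endurance time: %.1f s" % sdof_endurance())
```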

  6. The Apollo lunar samples collection analysis and results

    CERN Document Server

    Young, Anthony

    2017-01-01

    This book focuses on the specific mission planning for lunar sample collection, the equipment used, and the analysis and findings concerning the samples at the Lunar Receiving Laboratory in Texas. Anthony Young documents the collection of Apollo samples for the first time for readers of all backgrounds, and includes interviews with many of those involved in planning and analyzing the samples. NASA contracted with the U.S. Geological Survey to perform classroom and field training of the Apollo astronauts. NASA’s Geology Group within the Manned Spacecraft Center in Houston, Texas, helped to establish the goals of sample collection, as well as the design of sample collection tools, bags, and storage containers. In this book, detailed descriptions are given of the design of the lunar sampling tools, the Modular Experiment Transporter used on Apollo 14, and the specific areas of the Lunar Rover vehicle used for the Apollo 15, 16, and 17 missions, which carried the sampling tools, bags, and other related equipment ...

  7. Exploiting H infinity sampled-data control theory for high-precision electromechanical servo control design

    NARCIS (Netherlands)

    Oomen, T.A.E.; Wal, van de M.M.J.; Bosgra, O.H.

    2006-01-01

    Optimal design of digital controllers for industrial electromechanical servo systems using an H∞ criterion is considered. Present industrial practice is to perform the control design in the continuous-time domain and to discretize the controller a posteriori. This procedure involves unnecessary

  8. Group SkSP-R sampling plan for accelerated life tests

    Indian Academy of Sciences (India)

    Muhammad Aslam

    2017-09-15

    Sep 15, 2017 ... SkSP-R sampling; life test; Weibull distribution; producer's risk; ... designed a sampling plan under a time-truncated life test .... adjusted using an acceleration factor. ... where P is the probability of lot acceptance for a single.

  9. Design of a scanning probe microscope with advanced sample treatment capabilities: An atomic force microscope combined with a miniaturized inductively coupled plasma source

    International Nuclear Information System (INIS)

    Hund, Markus; Herold, Hans

    2007-01-01

    We describe the design and performance of an atomic force microscope (AFM) combined with a miniaturized inductively coupled plasma source working at a radio frequency of 27.12 MHz. State-of-the-art scanning probe microscopes (SPMs) have limited in situ sample treatment capabilities. Aggressive treatments such as plasma etching, or harsh treatments such as etching in aggressive liquids, typically require the removal of the sample from the microscope. Consequently, time-consuming procedures are required if the same sample spot has to be imaged after successive processing steps. We have developed a first prototype of an SPM that features quasi in situ sample treatment, using a modified commercial atomic force microscope. A sample holder is positioned in a special reactor chamber; the AFM tip can be retracted by several millimeters so that the chamber can be closed for a treatment procedure. Most importantly, after the treatment the tip is moved back to the sample with a lateral drift per process step in the 20 nm regime. The performance of the prototype is characterized by consecutive plasma etching of a nanostructured polymer film.

  10. Just-in-time Design and Additive Manufacture of Patient-specific Medical Implants

    Science.gov (United States)

    Shidid, Darpan; Leary, Martin; Choong, Peter; Brandt, Milan

    Recent advances in medical imaging and manufacturing science have enabled the design and production of complex, patient-specific orthopaedic implants. Additive Manufacture (AM) generates three-dimensional structures layer by layer and is not subject to the constraints associated with traditional manufacturing methods. AM provides significant opportunities for the design of novel geometries and complex lattice structures with enhanced functional performance. However, the design and manufacture of patient-specific AM implant structures requires unique expertise in handling various optimization platforms, and the design process for complex structures is computationally intensive. The primary aim of this research is to enable the just-in-time customisation of AM prostheses, whereby AM implant design and manufacture can be completed within the time constraints of a single surgical procedure, while minimising prosthesis mass and optimising the lattice structure to match the stiffness of the surrounding bone tissue. In this research, a design approach using raw CT scan data is applied to the AM manufacture of femoral prostheses. Using the proposed just-in-time concept, a minimal-mass prosthesis was rapidly designed and manufactured while satisfying the associated structural requirements. Compressive testing of lattice structures manufactured using the proposed method shows that the load-carrying capacity of the resected composite bone can be recovered by up to 85%, and that the compressive stiffness of the AM prosthesis is statistically indistinguishable from the stiffness of the initial bone.

  11. Influence of sampling depth and post-sampling analysis time

    African Journals Online (AJOL)

    designed to select for growth of total coliform and faecal coliform bacteria. Five tubes were used for each of the three decimal dilutions (10 mL, 1 mL and 0.1 mL). ...

  12. A liquid scintillation counter specifically designed for samples deposited on a flat matrix

    International Nuclear Information System (INIS)

    Potter, C.G.; Warner, G.T.

    1986-01-01

    A prototype liquid scintillation counter has been designed to count samples deposited as a 6×16 array on a flat matrix. Applications include the counting of labelled cells processed by a cell harvester from 96-well microtitration plates onto glass fibre filters, and of DNA samples directly deposited onto nitrocellulose or nylon transfer membranes (e.g. 'Genescreen', NEN) for genetic studies by dot-blot hybridisation. The whole filter is placed in a bag with 4-12 ml of scintillant, sufficient to count all 96 samples. Nearest-neighbour intersample cross talk ranged from 0.004% for ³H to 0.015% for ³²P. Background was 1.4 counts/min for glass fibre and 0.7 counts/min for 'Genescreen' in the ³H channel; for ¹⁴C the respective figures were 5.3 and 4.3 counts/min. Counting efficiency for ³H-labelled cells on glass fibre was 54% (E²/B = 2053) and 26% for tritiated thymidine spotted on 'Genescreen' (E²/B = 980). Similar ¹⁴C samples gave figures of 97% (E²/B = 1775) and 81% (E²/B = 1526), respectively. Electron emission counting from samples containing ¹²⁵I and ⁵¹Cr was also possible. (U.K.)

  13. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives

    Science.gov (United States)

    2013-01-01

    Background The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life); therefore, the in vivo phenotype of slow clearance defines reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment has been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. Methods A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate “reference” half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated and compared to the reference half-life (time zero, 6, 12, 24 (A1); zero, 6, 18, 24 (A2); zero, 12, 18, 24 (A3); or zero, 12, 24 (A4) hours, and then every 12 hours). Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules, and the half-life estimates generated by each schedule were compared to the “true” half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. Results The median (range) parasite half-life for all clinical studies combined was 3.1 (0
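
    A small simulation in the spirit of this comparison can be sketched as follows: parasite densities decay log-linearly with noise, and the half-life is estimated by regressing log-density on time using either the full six-hourly schedule or the sparser A4-style schedule. The noise level and density scale are assumptions, and the real WWARN PCE additionally models lag and tail phases, which this sketch omits.

```python
# Simulation sketch of the schedule comparison: log-linear clearance with
# noise, half-life estimated by log-linear regression under two schedules.
import numpy as np

rng = np.random.default_rng(1)
TRUE_HALF_LIFE = 3.1                       # hours (median reported above)
k = np.log(2) / TRUE_HALF_LIFE

def simulate_profile(times):
    return np.log(1e5) - k * times + rng.normal(0, 0.3, len(times))

def fit_half_life(times, logp):
    slope = np.polyfit(times, logp, 1)[0]
    return np.log(2) / -slope

full = np.arange(0, 72 + 1, 6, dtype=float)            # six-hourly reference
a4 = np.concatenate(([0, 12, 24], np.arange(36, 72 + 1, 12))).astype(float)

est_full, est_a4 = [], []
for _ in range(2000):
    logp = simulate_profile(full)
    est_full.append(fit_half_life(full, logp))
    idx = np.isin(full, a4)                            # subsample same profile
    est_a4.append(fit_half_life(full[idx], logp[idx]))

print("six-hourly:  mean %.2f h, sd %.2f" % (np.mean(est_full), np.std(est_full)))
print("schedule A4: mean %.2f h, sd %.2f" % (np.mean(est_a4), np.std(est_a4)))
```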

  14. Microrandomized trials: An experimental design for developing just-in-time adaptive interventions.

    Science.gov (United States)

    Klasnja, Predrag; Hekler, Eric B; Shiffman, Saul; Boruvka, Audrey; Almirall, Daniel; Tewari, Ambuj; Murphy, Susan A

    2015-12-01

    This article presents an experimental design, the microrandomized trial, developed to support optimization of just-in-time adaptive interventions (JITAIs). JITAIs are mHealth technologies that aim to deliver the right intervention components at the right times and locations to optimally support individuals' health behaviors. Microrandomized trials offer a way to optimize such interventions by enabling modeling of causal effects and time-varying effect moderation for individual intervention components within a JITAI. The article describes the microrandomized trial design, enumerates research questions that this experimental design can help answer, and provides an overview of the data analyses that can be used to assess the causal effects of studied intervention components and investigate time-varying moderation of those effects. Microrandomized trials enable causal modeling of proximal effects of the randomized intervention components and assessment of time-varying moderation of those effects. Microrandomized trials can help researchers understand whether their interventions are having intended effects, when and for whom they are effective, and what factors moderate the interventions' effects, enabling creation of more effective JITAIs. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  15. Sample design and gamma-ray counting strategy of neutron activation system for triton burnup measurements in KSTAR

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Jungmin [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Cheon, Mun Seong [ITER Korea, National Fusion Research Institute, Daejeon (Korea, Republic of); Chung, Kyoung-Jae, E-mail: jkjlsh1@snu.ac.kr [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Hwang, Y.S. [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of)

    2016-11-01

    Highlights: • Sample design for triton burnup ratio measurement is carried out. • Samples for 14.1 MeV neutron measurements are selected for KSTAR. • Si and Cu are the most suitable materials for d-t neutron measurements. • Appropriate γ-ray counting strategies for each selected sample are established. - Abstract: For the purpose of triton burnup measurements in Korea Superconducting Tokamak Advanced Research (KSTAR) deuterium plasmas, appropriate neutron activation system (NAS) samples for 14.1 MeV d-t neutron measurements have been designed and a gamma-ray counting strategy has been established. Neutronics calculations are performed with the MCNP5 neutron transport code for KSTAR neutral-beam-heated deuterium plasma discharges. Based on those calculations and the assumed d-t neutron yield, the activities induced by d-t neutrons are estimated with the inventory code FISPACT-2007 for candidate sample materials: Si, Cu, Al, Fe, Nb, Co, Ti, and Ni. It is found that Si, Cu, Al, and Fe are suitable for the KSTAR NAS in terms of the minimum detectable activity (MDA), calculated from the standard deviation of blank measurements. Considering background gamma-rays radiated from surrounding structures activated by thermalized fusion neutrons, an appropriate gamma-ray counting strategy for each selected sample is established.

  16. Relative Efficiencies of a Three-Stage Versus a Two-Stage Sample Design For a New NLS Cohort Study. 22U-884-38.

    Science.gov (United States)

    Folsom, R. E.; Weber, J. H.

    Two sampling designs were compared for the planned 1978 national longitudinal survey of high school seniors with respect to statistical efficiency and cost. The 1972 survey used a stratified two-stage sample of high schools and seniors within schools. In order to minimize interviewer travel costs, an alternate sampling design was proposed,…

  17. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    Science.gov (United States)

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme, for assessing the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size required for data collection. The presence of intracluster correlation can dramatically impact the classification error associated with LQAS analysis.
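
    The simulation logic can be sketched as follows: clusters are drawn with within-cluster correlation induced by a beta-binomial construction, an LQAS decision rule is applied to the total count of positives, and the classification probabilities are estimated over many replicates. The 10% threshold and the decision value d are illustrative assumptions, not the study's actual rules.

```python
# Toy re-creation of the simulation idea: 67 clusters of 3 children with
# intracluster-correlated malnutrition status, classified by an LQAS rule.
import numpy as np

rng = np.random.default_rng(2)

def simulate_survey(p, icc, n_clusters=67, m=3):
    # Beta-binomial: cluster-level prevalence varies around p with ICC = icc.
    a = p * (1 - icc) / icc
    b = (1 - p) * (1 - icc) / icc
    cluster_p = rng.beta(a, b, n_clusters)
    return rng.binomial(m, cluster_p).sum()        # total positives out of 201

def p_flagged(p_true, d=14, icc=0.05, reps=5000):
    """P(classify 'GAM above the 10% threshold'), using the rule:
    flag when positives >= d out of 67*3 = 201 sampled children."""
    hits = sum(simulate_survey(p_true, icc) >= d for _ in range(reps))
    return hits / reps

print("P(flag | p = 5%) :", p_flagged(0.05))   # false alarms
print("P(flag | p = 15%):", p_flagged(0.15))   # correct detections
```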

  18. A minimax technique for time-domain design of preset digital equalizers using linear programming

    Science.gov (United States)

    Vaughn, G. L.; Houts, R. C.

    1975-01-01

    A linear programming technique is presented for the design of a preset finite-impulse-response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI, and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the fewest multipliers, at the cost of a significant increase in energy and a loss of waveform accuracy at the receiver.
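
    The minimax design translates directly into a linear program: minimize t subject to |Ac - d| ≤ t, where A is the channel convolution matrix, c the transversal taps, and d the desired waveform. A sketch using an assumed channel response, with the raised-cosine target simplified to a single clean pulse:

```python
# Minimax FIR equalizer via linear programming: choose taps c minimizing the
# maximum absolute error between the equalized response and a desired pulse.
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import linprog

h = np.array([0.1, 1.0, 0.4, 0.2, -0.1])      # assumed channel impulse response
n_taps = 9
n_out = len(h) + n_taps - 1

# Convolution matrix A so that (A @ c) = h * c.
col = np.concatenate([h, np.zeros(n_taps - 1)])
row = np.concatenate([[h[0]], np.zeros(n_taps - 1)])
A = toeplitz(col, row)                        # shape (n_out, n_taps)

d = np.zeros(n_out)
d[n_out // 2] = 1.0                           # desired: a single clean pulse

# Variables z = [c (free), t >= 0]; minimize t subject to |A c - d| <= t.
ones = np.ones((n_out, 1))
A_ub = np.block([[A, -ones], [-A, -ones]])
b_ub = np.concatenate([d, -d])
c_obj = np.concatenate([np.zeros(n_taps), [1.0]])
bounds = [(None, None)] * n_taps + [(0, None)]

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
taps, t = res.x[:n_taps], res.x[-1]
print("max |error| =", t)
print("equalized response:", np.round(A @ taps, 3))
```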

  19. Coastal California's Fog as a Unique Habitable Niche: Design for Autonomous Sampling and Preliminary Aerobiological Characterization

    Science.gov (United States)

    Gentry, Diana; Cynthia Ouandji; Arismendi, Dillon; Guarro, Marcello; Demachkie, Isabella; Crosbie, Ewan; Dadashazar, Hossein; MacDonald, Alex B.; Wang, Zhen; Sorooshian, Armin

    2017-01-01

    Just as on the land or in the ocean, atmospheric regions may be more or less hospitable to life. The aerobiosphere, or collection of living things in Earth's atmosphere, is poorly understood due to the small number and ad hoc nature of samples studied. However, we know viable airborne microbes play important roles, such as providing cloud condensation nuclei. Knowing the distribution of such microorganisms and how their activity can alter water, carbon, and other geochemical cycles is key to developing criteria for planetary habitability, particularly for potential habitats with wet atmospheres but little stable surface water. Coastal California has regular, dense fog known to play a major transport role in the local ecosystem. In addition to the significant local (1 km) geographical variation in typical fog, previous studies have found that changes in height above the surface of as little as a few meters can yield significant differences in typical concentrations, populations, and residence times. No single current sampling platform (ground-based impactors, towers, balloons, aircraft) is capable of accessing all of these regions of interest. A novel passive fog and cloud water sampler, consisting of a lightweight passive impactor suspended from autonomous aerial vehicles (UAVs), is being developed to allow 4D point sampling within a single fog bank, allowing closer study of small-scale (100 m) system dynamics. Fog and cloud droplet water samples from low-altitude aircraft flights in nearby coastal waters were collected and assayed to estimate the required sample volumes, flight times, and sensitivity thresholds of the system under design. 125 cloud water samples were collected from 16 flights of the Center for Interdisciplinary Remotely Piloted Aircraft Studies (CIRPAS) instrumented Twin Otter, equipped with a sampling tube collector, occurring between 18 July and 12 August 2016 below 1 km altitude off the central coast. The collector was flushed first with 70% ethanol

  20. Comparison of sampling designs for estimating deforestation from landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    Science.gov (United States)

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
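
    A quick synthetic simulation conveys the design comparison: blocks carry a MODIS-like proxy x and a TM-like truth y, and a simple random sample with mean extrapolation is compared against stratified sampling (strata built from x) with a regression estimator. All data-generating choices below are assumptions.

```python
# Synthetic comparison of SRS + extrapolation vs. stratified sampling with a
# linear-regression estimator, with strata built from a MODIS-like proxy.
import numpy as np

rng = np.random.default_rng(3)
N, n = 2000, 100                                # blocks in region, sample size
x = rng.gamma(0.4, 2.0, N)                      # MODIS hotspot proxy, skewed
y = 0.8 * x + rng.normal(0, 0.3, N).clip(-0.5)  # "true" TM deforestation
total_true = y.sum()

def srs_estimate():
    idx = rng.choice(N, n, replace=False)
    return N * y[idx].mean()                    # simple extrapolation

def stratified_regression_estimate():
    strata = np.digitize(x, np.quantile(x, [0.5, 0.8, 0.95]))  # 4 strata by x
    est = 0.0
    for s in np.unique(strata):
        block = np.flatnonzero(strata == s)
        take = rng.choice(block, max(4, n * len(block) // N), replace=False)
        b, a = np.polyfit(x[take], y[take], 1)  # regress y on x in the sample
        est += (a + b * x[block]).sum()         # predict all blocks in stratum
    return est

srs = np.array([srs_estimate() for _ in range(500)])
strat = np.array([stratified_regression_estimate() for _ in range(500)])
print("true total:      ", round(total_true, 1))
print("SRS rmse:        ", round(np.sqrt(np.mean((srs - total_true) ** 2)), 1))
print("stratified rmse: ", round(np.sqrt(np.mean((strat - total_true) ** 2)), 1))
```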

  1. Simulated tempering distributed replica sampling: A practical guide to enhanced conformational sampling

    Energy Technology Data Exchange (ETDEWEB)

    Rauscher, Sarah; Pomes, Regis, E-mail: pomes@sickkids.ca

    2010-11-01

    Simulated tempering distributed replica sampling (STDR) is a generalized-ensemble method designed specifically for simulations of large molecular systems on shared and heterogeneous computing platforms [Rauscher, Neale and Pomes (2009) J. Chem. Theor. Comput. 5, 2640]. The STDR algorithm consists of an alternation of two steps: (1) a short molecular dynamics (MD) simulation; and (2) a stochastic temperature jump. Repeating these steps thousands of times results in a random walk in temperature, which allows the system to overcome energetic barriers, thereby enhancing conformational sampling. The aim of the present paper is to provide a practical guide to applying STDR to complex biomolecular systems. We discuss the details of our STDR implementation, which is a highly-parallel algorithm designed to maximize computational efficiency while simultaneously minimizing network communication and data storage requirements. Using a 35-residue disordered peptide in explicit water as a test system, we characterize the efficiency of the STDR algorithm with respect to both diffusion in temperature space and statistical convergence of structural properties. Importantly, we show that STDR provides a dramatic enhancement of conformational sampling compared to a canonical MD simulation.
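
    The two-step alternation can be illustrated on a toy double-well potential, with a short Metropolis walk standing in for the MD burst and the standard simulated-tempering acceptance rule for the temperature jump. The weights g[i] below are left at zero as a crude placeholder; in practice they must be tuned (they play the role of log partition functions) to obtain a uniform random walk in temperature.

```python
# Minimal simulated-tempering sketch: short sampling bursts at the current
# temperature alternate with stochastic jumps between neighbouring rungs.
import numpy as np

rng = np.random.default_rng(4)
U = lambda x: (x**2 - 1.0) ** 2          # toy double-well potential
temps = np.array([0.1, 0.2, 0.4, 0.8])   # temperature ladder
g = np.zeros(len(temps))                 # simulated-tempering weights (untuned)

x, ti = -1.0, 0
trace = []
for step in range(20000):
    T = temps[ti]
    for _ in range(5):                   # short "MD" burst at temperature T
        xp = x + rng.normal(0, 0.3)
        if rng.random() < np.exp(-(U(xp) - U(x)) / T):
            x = xp
    tj = ti + rng.choice([-1, 1])        # propose a neighbouring temperature
    if 0 <= tj < len(temps):
        log_acc = -(1 / temps[tj] - 1 / temps[ti]) * U(x) + g[tj] - g[ti]
        if np.log(rng.random()) < log_acc:
            ti = tj                      # random walk in temperature space
    trace.append((temps[ti], x))

xs = np.array([s for _, s in trace])
print("fraction of samples in right well:", np.mean(xs > 0))
```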

  2. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures; a few voxels are selected from each cluster based on the preset sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3, with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality.
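
    The sampling step can be sketched as follows: boundary voxels of an organ mask are all retained, interior voxels are clustered on their influence-matrix rows, and a fixed fraction is drawn from each cluster. The synthetic mask and random influence matrix are stand-ins; the 10% rate follows the abstract.

```python
# Sketch of the voxel-sampling idea: keep all boundary voxels, cluster the
# interior by influence-matrix signatures, sample a fraction per cluster.
import numpy as np
from scipy.ndimage import binary_erosion
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
organ = np.zeros((24, 24, 24), bool)
organ[4:20, 4:20, 4:20] = True                   # toy organ mask

interior = binary_erosion(organ)
boundary = organ & ~interior                     # all boundary voxels kept
interior_idx = np.flatnonzero(interior.ravel())

D = rng.random((interior_idx.size, 40))          # synthetic influence rows
labels = KMeans(n_clusters=20, n_init=4, random_state=0).fit_predict(D)

rate = 0.10                                      # sample 10% of interior voxels
keep = []
for c in range(20):
    members = interior_idx[labels == c]
    take = max(1, int(rate * members.size))
    keep.append(rng.choice(members, take, replace=False))
keep = np.concatenate(keep)

print("boundary voxels kept:", boundary.sum())
print("interior voxels kept:", keep.size, "of", interior_idx.size)
```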

  3. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang

    2013-02-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied for simulation and complicated seismic data sets. © 2013 IEEE.

  4. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang; Hansen, Charles

    2013-01-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied for simulation and complicated seismic data sets. © 2013 IEEE.
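
    The KDE step at the heart of this approach can be sketched in a few lines: user-probed sample values define a density that is then used as a transfer-function weight, so voxels with values near the probed feature receive high opacity. The two-variable synthetic samples below are assumptions, and gaussian_kde is a generic stand-in for the paper's HDTF construction.

```python
# Sketch of the KDE-based transfer function: probed sample values yield a
# density, which maps multivariate voxel values to opacities.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
probed = rng.normal([0.7, 0.2], 0.03, (50, 2)).T   # user-selected samples (2, n)
kde = gaussian_kde(probed)

def opacity(values, gain=1.0):
    """Map multivariate voxel values (shape (2, m)) to [0, 1] opacities."""
    d = kde(values)
    return np.clip(gain * d / d.max(), 0.0, 1.0)

voxels = rng.random((2, 5))                        # five random voxel values
print(np.round(opacity(voxels), 3))
print("opacity at the probed feature:", opacity(np.array([[0.7], [0.2]]))[0])
```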

  5. Sparse-sampling with time-encoded (TICO) stimulated Raman scattering for fast image acquisition

    Science.gov (United States)

    Hakert, Hubertus; Eibl, Matthias; Karpf, Sebastian; Huber, Robert

    2017-07-01

    Modern biomedical imaging modalities aim to provide researchers a multimodal contrast for a deeper insight into a specimen under investigation. A very promising technique is stimulated Raman scattering (SRS) microscopy, which can unveil the chemical composition of a sample with very high specificity. Although the signal intensities are enhanced manifold to achieve a faster acquisition of images compared to standard Raman microscopy, there is a trade-off between specificity and acquisition speed. Commonly used SRS concepts either probe only very few Raman transitions, as the tuning of the applied laser sources is complicated, or record whole spectra with a spectrometer-based setup. While the first approach is fast, it reduces the specificity; the spectrometer approach records whole spectra (including energy differences where no Raman information is present), which limits the acquisition speed. Therefore, we present a new approach based on the TICO-Raman concept, which we call sparse sampling. The TICO sparse-sampling setup is fully electronically controllable and allows probing of only the characteristic peaks of a Raman spectrum instead of always acquiring a whole spectrum. By reducing the spectral points to the relevant peaks, the acquisition time can be greatly reduced compared to a uniformly, equidistantly sampled Raman spectrum, while the specificity and the signal-to-noise ratio (SNR) are maintained. Furthermore, all laser sources are completely fiber based. The synchronized detection enables a full resolution of the Raman signal, whereas the analogue and digital balancing allows shot-noise-limited detection. First imaging results with polystyrene (PS) and polymethylmethacrylate (PMMA) beads confirm the advantages of TICO sparse sampling. We achieved a pixel dwell time as low as 35 μs for an image differentiating both species. The mechanical properties of the applied voice coil stage for scanning the sample currently limit even faster acquisition.

  6. Time recording unit for a neutron time of flight spectrometer

    International Nuclear Information System (INIS)

    Puranik, Praful; Ajit Kiran, S.; Chandak, R.M.; Poudel, S.K.; Mukhopadhyay, R.

    2011-01-01

    Here the architecture and design of a Time Recording Unit for a Neutron Time-of-Flight Spectrometer are described. The spectrometer has an array of 50 one-meter-long linear Position Sensitive Detectors (PSDs) placed vertically around the sample at a distance of 2000 mm. The sample receives a periodic pulsed neutron beam coming through a Fermi chopper. The time and zone of detection of a scattered neutron in a PSD give information on its flight time and path length, which are used to calculate its energy. A neutron event zone (position) and time detection module for each PSD provides a 2-bit position/zone code and an event timing pulse. The path length assigned to a neutron detected in a zone (Z1, Z2, etc.) of a PSD is the mean path length seen by the neutrons detected in that zone. The Time Recording Unit described here receives the event zone code and timing pulse from all 50 detectors and tags a proper time window code to each event before streaming it to a computer for calculation of the energy distribution of neutrons scattered from the sample.
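
    The underlying energy reconstruction is the classical time-of-flight relation E = m_n(L/t)²/2, which is adequate (non-relativistically) for thermal and cold neutrons. A worked instance with the 2000 mm flight path from the text and an illustrative flight time:

```python
# Neutron kinetic energy from time of flight: E = m_n * (L/t)^2 / 2.
from scipy.constants import neutron_mass, eV

def neutron_energy_meV(path_m: float, tof_s: float) -> float:
    """Kinetic energy in meV from flight path and time of flight."""
    v = path_m / tof_s
    return 0.5 * neutron_mass * v * v / (1e-3 * eV)

# A 2000 mm path covered in 1.0 ms corresponds to 2000 m/s:
print(round(neutron_energy_meV(2.0, 1.0e-3), 2), "meV")  # ~20.9 meV
```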

  7. Clinical Digital Libraries Project: design approach and exploratory assessment of timely use in clinical environments*

    Science.gov (United States)

    MacCall, Steven L.

    2006-01-01

    Objective: The paper describes and evaluates the use of Clinical Digital Libraries Project (CDLP) digital library collections in terms of their facilitation of timely clinical information seeking. Design: A convenience sample of CDLP Web server log activity over a twelve-month period (7/2002 to 6/2003) was analyzed for evidence of timely information seeking after users were referred to digital library clinical topic pages from Web search engines. Sample searches were limited to those originating from medical schools (26% North American and 19% non-North American) and from hospitals or clinics (51% North American and 4% non-North American). Measurement: Timeliness was determined based on a calculation of the difference between the timestamps of the first and last Web server log “hit” during each search in the sample. The calculated differences were mapped into one of three ranges: less than one minute, one to three minutes, and three to five minutes. Results: Of the 864 searches analyzed, 48% were less than 1 minute, 41% were 1 to 3 minutes, and 11% were 3 to 5 minutes. These results were further analyzed by environment (medical schools versus hospitals or clinics) and by geographic location (North America versus non-North American). Searches reflected a consistent pattern of less than 1 minute in these environments. Though the results were not consistent on a month-by-month basis over the entire time period, data for 8 of 12 months showed that searches shorter than 1 minute predominated and data for 1 month showed an equal number of less than 1 minute and 1 to 3 minute searches. Conclusions: The CDLP digital library collections provided timely access to high-quality Web clinical resources when used for information seeking in medical education and hospital or clinic environments from North American and non–North American locations and consistently provided access to the sought information within the documented two-minute standard. The limitations of the use of

  8. Design of a semi-custom integrated circuit for the SLAC SLC timing control system

    International Nuclear Information System (INIS)

    Linstadt, E.

    1984-10-01

    A semi-custom (gate array) integrated circuit has been designed for use in the SLAC Linear Collider timing and control system. The design process and SLAC's experiences during the phases of the design cycle are described. Issues concerning the partitioning of the design into semi-custom and standard components are discussed. Functional descriptions of the semi-custom integrated circuit and the timing module in which it is used are given

  9. Application of multi-station time sequence aerosol sampling and proton induced x-ray emission analysis techniques to the St. Louis regional air pollution study for investigating sulfur-trace metal relationships

    International Nuclear Information System (INIS)

    Pilotte, J.O.; Nelson, J.W.; Winchester, J.W.

    1976-01-01

    Time-sequence streaker samplers, employing Nuclepore filters for aerosol collection, were deployed over the 25-station St. Louis regional air monitoring network and operated for the months of July and August 1975 so as to determine aerosol composition variations with 2-hour time resolution. Elemental analysis of the 84 individual time steps per station for each week of sampling is carried out by 5 MeV proton irradiation and X-ray counting with a Si(Li) detector, using a Van de Graaff accelerator with a special automated step-drive sample handling device. Computer resolution of the X-ray spectra for the elements S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, Br, and Pb is carried out at a rate equal to the proton irradiation rate, five minutes or less for each time-step analysis. The aerosol particle sampling equipment and conditions have been designed to take advantage of the high sensitivity of PIXE analysis, in the nanogram range for the elements determined.

  10. On the High-dimensional Power of Linear-time Kernel Two-Sample Testing under Mean-difference Alternatives

    OpenAIRE

    Ramdas, Aaditya; Reddi, Sashank J.; Poczos, Barnabas; Singh, Aarti; Wasserman, Larry

    2014-01-01

    Nonparametric two sample testing deals with the question of consistently deciding if two distributions are different, given samples from both, without making any parametric assumptions about the form of the distributions. The current literature is split into two kinds of tests - those which are consistent without any assumptions about how the distributions may differ (general alternatives), and those which are designed to specifically test easier alternatives, like a difference in me...
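
    A standard linear-time kernel two-sample statistic of the family analyzed in such work is the linear-time MMD of Gretton et al., which pairs up consecutive observations so the estimate is a single-pass average. A one-dimensional sketch with a Gaussian kernel (the bandwidth and test data are illustrative):

```python
# Linear-time MMD^2 estimator: average of a kernel-based h over disjoint
# pairs of observations, computable in one pass over the data.
import numpy as np

def linear_time_mmd2(x, y, sigma=1.0):
    """Unbiased linear-time MMD^2 estimate from equal-length samples x, y."""
    m = min(len(x), len(y)) // 2 * 2
    x1, x2 = x[:m:2], x[1:m:2]
    y1, y2 = y[:m:2], y[1:m:2]
    k = lambda a, b: np.exp(-(a - b) ** 2 / (2 * sigma ** 2))
    h = k(x1, x2) + k(y1, y2) - k(x1, y2) - k(x2, y1)
    return h.mean(), h.std(ddof=1) / np.sqrt(len(h))   # estimate and std error

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 10_000)
y = rng.normal(0.3, 1.0, 10_000)        # a mean-difference alternative
stat, se = linear_time_mmd2(x, y)
print(f"MMD^2 = {stat:.4f} (z ~ {stat / se:.1f})")
```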

  11. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical value in the analysis of stochastic, unstable time series with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting, or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next-step-ahead forecast rolls on by adding the most recently derived prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique owing to its improved forecasting accuracy, applicability in cases of limited and unstable data, and small computational effort. The general performance, the influence of sample size, the nonlinear dynamic mechanism, and the significance of observed trends, as well as the innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
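
    The rolling mechanism can be sketched directly: fit a small AR model on the current window by least squares, forecast one step, then append the forecast and drop the oldest observation. The order, window, and toy settlement-like series below are assumptions.

```python
# Rolling 1-step-ahead AR forecasting: refit on the current window, predict,
# then roll the window forward by appending the prediction.
import numpy as np

def ar_fit_predict(window, p=2):
    """Least-squares AR(p) fit on `window`, returning the 1-step forecast."""
    w = np.asarray(window, float)
    X = np.column_stack([w[p - 1 - j:-1 - j] for j in range(p)]
                        + [np.ones(len(w) - p)])
    coef, *_ = np.linalg.lstsq(X, w[p:], rcond=None)
    return coef[:p] @ w[-1:-p - 1:-1] + coef[-1]

def rolling_forecast(series, horizon=5, p=2):
    window = list(series)
    out = []
    for _ in range(horizon):
        yhat = ar_fit_predict(window, p)
        out.append(yhat)
        window = window[1:] + [yhat]     # roll: drop oldest, append prediction
    return out

settlement = [2.1, 3.0, 3.8, 4.5, 5.0, 5.4, 5.7, 5.9]  # toy settlement series
print(np.round(rolling_forecast(settlement), 2))
```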

  12. Fast underdetermined BSS architecture design methodology for real time applications.

    Science.gov (United States)

    Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R

    2015-01-01

    In this paper, we propose a high-speed architecture design methodology for the Under-determined Blind Source Separation (UBSS) algorithm using our recently proposed high-speed Discrete Hilbert Transform (DHT), targeting real-time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of more interest in real-time applications. The DHT architecture has been implemented based on a sub-matrix multiplication method to compute an M-point DHT, which uses an N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture are coded in VHDL for a 16-bit word length, and ASIC implementation is carried out using UMC 90-nm technology at VDD = 1 V and a 1 MHz clock frequency. The implementation and experimental comparison results show that the proposed DHT design is two times faster than the state-of-the-art architecture.

  13. A radial sampling strategy for uniform k-space coverage with retrospective respiratory gating in 3D ultrashort-echo-time lung imaging.

    Science.gov (United States)

    Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon

    2016-05-01

    The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and to demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns, such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with conventional retrospective respiratory gating, while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across k-space. Copyright © 2016 John Wiley & Sons, Ltd.
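
    A two-dimensional simplification of the segmentation idea: a single-shot ordering of radial view angles is split into S interleaves by assigning every S-th view to the same interleaf, so that each interleaf alone, and any gated subset of interleaves, covers angles roughly uniformly. The golden-angle ordering used here is a common stand-in, not necessarily the paper's trajectory.

```python
# Interleaved radial sampling sketch: segment a single-shot view ordering
# into S interleaves and check angular uniformity of one interleaf.
import numpy as np

GOLDEN = np.pi * (3 - np.sqrt(5))              # ~2.400 rad angular increment

def interleaved_views(n_views=1200, segmentation=8):
    angles = (np.arange(n_views) * GOLDEN) % (2 * np.pi)   # single-shot order
    return [angles[s::segmentation] for s in range(segmentation)]

leaves = interleaved_views()
# Crude uniformity check: histogram spread of one interleaf vs. the full set.
full = np.concatenate(leaves)
h_leaf, _ = np.histogram(leaves[0], bins=24, range=(0, 2 * np.pi))
h_full, _ = np.histogram(full, bins=24, range=(0, 2 * np.pi))
print("one interleaf   bin-count std:", h_leaf.std().round(2))
print("all interleaves bin-count std:", h_full.std().round(2))
```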

  14. Sensitivity of adaptive enrichment trial designs to accrual rates, time to outcome measurement, and prognostic variables

    Directory of Open Access Journals (Sweden)

    Tianchen Qian

    2017-12-01

    Full Text Available Adaptive enrichment designs involve rules for restricting enrollment to a subset of the population during the course of an ongoing trial. This can be used to target those who benefit from the experimental treatment. Trial characteristics such as the accrual rate and the prognostic value of baseline variables are typically unknown when a trial is being planned; these values are typically assumed based on information available before the trial starts. Because of the added complexity in adaptive enrichment designs compared to standard designs, it may be of special concern how sensitive the trial performance is to deviations from assumptions. Through simulation studies, we evaluate the sensitivity of Type I error, power, expected sample size, and trial duration to different design characteristics. Our simulation distributions mimic features of data from the Alzheimer's Disease Neuroimaging Initiative cohort study, and involve two subpopulations based on a genetic marker. We investigate the impact of the following design characteristics: the accrual rate, the time from enrollment to measurement of a short-term outcome and the primary outcome, and the prognostic value of baseline variables and short-term outcomes. To leverage prognostic information in baseline variables and short-term outcomes, we use a semiparametric, locally efficient estimator, and investigate its strengths and limitations compared to standard estimators. We apply information-based monitoring, and evaluate how accurately information can be estimated in an ongoing trial.

  15. Sampling inspection for the evaluation of time-dependent reliability of deteriorating systems under imperfect defect detection

    International Nuclear Information System (INIS)

    Kuniewski, Sebastian P.; Weide, Johannes A.M. van der; Noortwijk, Jan M. van

    2009-01-01

    The paper presents a sampling-inspection strategy for the evaluation of the time-dependent reliability of deteriorating systems, where the deterioration is assumed to initiate at random times and at random locations. After initiation, defects weaken the system's resistance, and the system becomes unacceptable when at least one defect reaches a critical depth. The defects are assumed to initiate at random times modeled as event times of a non-homogeneous Poisson process (NHPP) and to develop according to a non-decreasing, time-dependent gamma process. The intensity rate of the NHPP is assumed to be a combination of a known time-dependent shape function and an unknown proportionality constant. When sampling inspection (i.e. inspection of a selected subregion of the system) yields a number of defect initiations, Bayes' theorem can be used to update prior beliefs about the proportionality constant of the NHPP intensity rate to the posterior distribution. On the basis of a time- and space-dependent Poisson process for the defect initiation, an adaptive Bayesian model for sampling inspection is developed to determine the predictive probability distribution of the time to failure. A potential application is, for instance, the inspection of a large vessel or pipeline suffering pitting/localized corrosion in the oil industry. The possibility of imperfect defect detection is also incorporated in the model.
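
    The deterioration model can be simulated directly: initiation times are drawn from the NHPP by thinning, each defect then accumulates gamma-distributed depth increments, and failure occurs when any defect crosses the critical depth. All rates below are illustrative assumptions.

```python
# Simulation sketch: NHPP defect initiation (by thinning) plus gamma-process
# growth; the system fails when any defect reaches the critical depth.
import numpy as np

rng = np.random.default_rng(8)
T_MAX, CRITICAL = 40.0, 5.0
lam = lambda t: 0.05 + 0.01 * t           # NHPP intensity (defects per year)
LAM_MAX = lam(T_MAX)                      # upper bound for thinning

def nhpp_initiations():
    """Thinning: homogeneous candidates at rate LAM_MAX, kept w.p. lam/LAM_MAX."""
    t, out = 0.0, []
    while True:
        t += rng.exponential(1 / LAM_MAX)
        if t > T_MAX:
            return np.array(out)
        if rng.random() < lam(t) / LAM_MAX:
            out.append(t)

def failure_time(shape_rate=0.8, scale=0.5, dt=0.1):
    """First time any defect's gamma-process depth exceeds CRITICAL (inf if none)."""
    fail = np.inf
    for t0 in nhpp_initiations():
        depth, t = 0.0, t0
        while t < T_MAX:
            depth += rng.gamma(shape_rate * dt, scale)  # stationary gamma increment
            t += dt
            if depth >= CRITICAL:
                fail = min(fail, t)
                break
    return fail

times = np.array([failure_time() for _ in range(2000)])
print("P(failure before year 40):", np.mean(np.isfinite(times)).round(3))
print("median failure time given failure:",
      np.median(times[np.isfinite(times)]).round(1))
```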

  16. Comparison of Sampling Designs for Estimating Deforestation from Landsat TM and MODIS Imagery: A Case Study in Mato Grosso, Brazil

    Directory of Open Access Journals (Sweden)

    Shanyou Zhu

    2014-01-01

    Full Text Available Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.

  17. Design of Timing Synchronization Software on EAST-NBI

    International Nuclear Information System (INIS)

    Zhao Yuanzhe; Hu Chundong; Sheng Peng; Zhang Xiaodan

    2013-01-01

    To ensure the uniqueness and recognition of data, and to make it easy to analyze and process the data of all subsystems of the neutral beam injector (NBI), all subsystems are required to have a unified system time. In this paper, timing synchronization software is presented that draws on several technologies, such as shared memory, multithreading, and the TCP protocol. Shared memory lets the server store client information and the system time; multithreading lets it serve different clients with different threads. The server runs under the Linux operating system, while clients run under either Linux or Windows. With the help of this design, synchronization of all subsystems can be achieved to within one second, an accuracy that is sufficient for the NBI system and thus ensures the reliability of the data. (fusion engineering)
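
    A minimal sketch of such a client/server time service: a multithreaded TCP server hands out its system time, which clients adopt as the unified time base. The port and message format are invented for illustration, and the shared-memory bookkeeping described above is omitted.

```python
# Minimal multithreaded TCP time service: clients query the server's clock
# and can compare it against their own to stay within the one-second budget.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007

def handle(conn):
    with conn:
        conn.recv(16)                                  # any request byte(s)
        conn.sendall(repr(time.time()).encode())       # server epoch seconds

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((HOST, PORT))
        s.listen()
        while True:
            conn, _ = s.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                                        # let the server start

with socket.create_connection((HOST, PORT)) as c:      # a client subsystem
    c.sendall(b"time?")
    server_time = float(c.recv(64).decode())
offset = server_time - time.time()
print(f"clock offset vs. server: {offset:+.3f} s (well under one second)")
```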

  18. Dependability of Data Derived from Time Sampling Methods with Multiple Observation Targets

    Science.gov (United States)

    Johnson, Austin H.; Chafouleas, Sandra M.; Briesch, Amy M.

    2017-01-01

    In this study, generalizability theory was used to examine the extent to which (a) time-sampling methodology, (b) number of simultaneous behavior targets, and (c) individual raters influenced variance in ratings of academic engagement for an elementary-aged student. Ten graduate-student raters, with an average of 7.20 hr of previous training in…

  19. Design Review Report for formal review of safety class features of exhauster system for rotary mode core sampling

    International Nuclear Information System (INIS)

    JANICEK, G.P.

    2000-01-01

    Report documenting the formal design review conducted on the portable exhausters used to support rotary mode core sampling of Hanford underground radioactive waste tanks, with a focus on Safety Class design features and control requirements for operation in a flammable gas environment and for air discharge permitting compliance.

  20. Design Review Report for formal review of safety class features of exhauster system for rotary mode core sampling

    Energy Technology Data Exchange (ETDEWEB)

    JANICEK, G.P.

    2000-06-08

    Report documenting the formal design review conducted on the portable exhausters used to support rotary mode core sampling of Hanford underground radioactive waste tanks, with a focus on Safety Class design features and control requirements for operation in a flammable gas environment and for air discharge permitting compliance.

  1. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    Science.gov (United States)

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
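
    A toy version of the idea, for a simple two-parameter exponential model rather than a full nonlinear mixed effects model: random-walk Metropolis explores pairs of sampling times with probability proportional to a power of the D-optimality criterion det(J'J), and percentile ranges of the visited times serve as sampling windows. The model, bounds, and tempering power are all illustrative simplifications.

```python
# Toy MCMC sampling-window sketch for y(t) = A * exp(-k t) with two samples:
# Metropolis over time pairs, targeting det(J'J)^beta; percentile ranges of
# the visited times give windows around the optimal design points.
import numpy as np

rng = np.random.default_rng(9)
A, k = 10.0, 0.3

def log_det_fim(t):
    J = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])  # dy/dA, dy/dk
    sign, logdet = np.linalg.slogdet(J.T @ J)
    return logdet if sign > 0 else -np.inf

def mcmc_windows(n_iter=20000, beta=3.0, step=0.4):
    t = np.array([1.0, 8.0])                      # initial sampling times (h)
    cur = beta * log_det_fim(t)
    kept = []
    for _ in range(n_iter):
        prop = np.sort(t + rng.normal(0, step, 2))
        if prop[0] < 0.1 or prop[1] > 24.0:       # outside the allowed range
            kept.append(t.copy())
            continue
        new = beta * log_det_fim(prop)
        if np.log(rng.random()) < new - cur:
            t, cur = prop, new
        kept.append(t.copy())
    kept = np.array(kept[n_iter // 2:])           # discard burn-in
    return [np.percentile(kept[:, i], [5, 95]) for i in range(2)]

w1, w2 = mcmc_windows()
print("window for sample 1: %.1f-%.1f h" % tuple(w1))
print("window for sample 2: %.1f-%.1f h" % tuple(w2))
```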

  2. A simple method to adapt time sampling of the analog signal

    International Nuclear Information System (INIS)

    Kalinin, Yu.G.; Martyanov, I.S.; Sadykov, Kh.; Zastrozhnova, N.N.

    2004-01-01

    In this paper we briefly describe a time sampling method that is adapted to the speed of the signal change. In principle, this method is based on a simple idea: the combination of discrete integration with differentiation of the analog signal. The method can be used in nuclear electronics for research into the characteristics of detectors, the shape of the pulse signal, the pulse and transient characteristics of inertial signal-processing systems, etc.

  3. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  4. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  5. Detection of Mycobacterium chelonae, Mycobacterium abscessus Group, and Mycobacterium fortuitum Complex by a Multiplex Real-Time PCR Directly from Clinical Samples Using the BD MAX System.

    Science.gov (United States)

    Rocchetti, Talita T; Silbert, Suzane; Gostnell, Alicia; Kubasek, Carly; Campos Pignatari, Antonio C; Widen, Raymond

    2017-03-01

    A new multiplex PCR test was designed to detect Mycobacterium chelonae, Mycobacterium abscessus group, and Mycobacterium fortuitum complex on the BD MAX System. A total of 197 clinical samples previously submitted for mycobacterial culture were tested using the new protocol. Samples were first treated with proteinase K, and then each sample was inoculated into the BD MAX Sample Buffer Tube. Extraction and multiplex PCR were performed by the BD MAX System, using the BD MAX ExK TNA-3 extraction kit and BD TNA Master Mix, along with specific in-house designed primers and probes for each target. The limit of detection of each target, as well as specificity, was evaluated. Of 197 clinical samples included in this study, 133 were positive and 60 were negative for mycobacteria by culture, and another 4 negative samples were spiked with M. chelonae ATCC 35752. The new multiplex PCR on the BD MAX had 97% concordant results with culture for M. abscessus group detection, 99% for M. chelonae, and 100% for M. fortuitum complex. The new multiplex PCR test performed on the BD MAX System proved to be a sensitive and specific test to detect M. chelonae, M. abscessus group, and M. fortuitum complex by real-time PCR on an automated sample-in results-out platform. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  6. Design Optimization of Multi-Cluster Embedded Systems for Real-Time Applications

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2004-01-01

    We present an approach to design optimization of multi-cluster embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. In this paper, we address design problems which are characteristic of multi-clusters: partitioning of the system functionality into time-triggered and event-triggered domains, process mapping, and the optimization of parameters corresponding to the communication protocol. We present several heuristics for solving these problems. Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example.

  7. Design Optimization of Multi-Cluster Embedded Systems for Real-Time Applications

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2006-01-01

    We present an approach to design optimization of multi-cluster embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. In this paper, we address design problems which are characteristic of multi-clusters: partitioning of the system functionality into time-triggered and event-triggered domains, process mapping, and the optimization of parameters corresponding to the communication protocol. We present several heuristics for solving these problems. Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example.

  8. Latin hypercube sampling with inequality constraints

    International Nuclear Information System (INIS)

    Iooss, B.; Petelet, M.; Asserin, O.; Loredo, A.

    2010-01-01

    In some studies requiring predictive and CPU-time consuming numerical models, the sampling design of the model input variables has to be chosen with caution. For this purpose, Latin hypercube sampling has a long history and has shown its robustness capabilities. In this paper we propose and discuss a new algorithm to build a Latin hypercube sample (LHS) taking into account inequality constraints between the sampled variables. This technique, called constrained Latin hypercube sampling (cLHS), consists of performing permutations on an initial LHS to honor the desired monotonic constraints. The relevance of this approach is shown on a real example concerning numerical welding simulation, where the inequality constraints are caused by the physical decrease of some material properties as a function of temperature. (authors)
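    The permutation idea is easy to prototype. The following is a naive sketch rather than the authors' cLHS algorithm: it builds an ordinary LHS and then permutes values within columns, which preserves the one-point-per-stratum property, to honor a monotonic constraint between two variables on different ranges.

```python
# Naive sketch of the cLHS idea (illustration, not the paper's algorithm).
import numpy as np

def lhs(n: int, d: int, rng) -> np.ndarray:
    """Standard LHS on [0,1)^d: one point per stratum in each column."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

rng = np.random.default_rng(42)
x = lhs(50, 2, rng)
x[:, 1] = 0.2 + x[:, 1]           # second variable lives on [0.2, 1.2)
print("violations before:", int(np.sum(x[:, 0] > x[:, 1])))

# Rank-matching is a within-column permutation: pair the i-th smallest
# x1 with the i-th smallest x2, which here removes every violation of
# the monotonic constraint x1 <= x2.
x = np.sort(x, axis=0)
print("violations after: ", int(np.sum(x[:, 0] > x[:, 1])))
```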

  9. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test the hypothesis. The accepted risk of type I error, or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since even the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult to conduct and will result in a waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts involved in estimating the sample size.
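    As a concrete companion to these concepts, here is a minimal sketch of the standard two-group formula for comparing means, n per group = 2 * sigma^2 * (z_alpha/2 + z_beta)^2 / delta^2, assuming scipy for the normal quantiles and using the conventional alpha = 0.05 and 80% power from the abstract.

```python
# Sketch: subjects per group to detect a difference in means `delta`
# with common standard deviation `sd` (two-sided test; assumes scipy).
import math
from scipy.stats import norm

def n_per_group(delta: float, sd: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = norm.ppf(1.0 - alpha / 2.0)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)                # 0.84 for 80% power
    return math.ceil(2.0 * (sd * (z_alpha + z_beta) / delta) ** 2)

print(n_per_group(delta=5.0, sd=10.0))  # 63 subjects per group
```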

  10. A Conceptual Level Design for a Static Scheduler for Hard Real-Time Systems

    Science.gov (United States)

    1988-03-01

    The design of hard real-time systems is gaining a great deal of attention in the software engineering field as more and more real-world processes are... for these hard real-time systems. PSDL, as an executable design language, is supported by an execution support system consisting of a static scheduler, dynamic scheduler, and translator.

  11. Cool City Design: Integrating Real-Time Urban Canyon Assessment into the Design Process for Chinese and Australian Cities

    Directory of Open Access Journals (Sweden)

    Marcus White

    2016-09-01

    Many cities are undergoing rapid urbanisation and intensification with the unintended consequence of creating dense urban fabric with deep ‘urban canyons’. Urban densification can trap longwave radiation, impacting local atmospheric conditions and contributing to the phenomenon known as the Urban Heat Island (UHI). As global temperatures are predicted to increase, there is a critical need to better understand urban form and heat retention in cities and to integrate analysis tools into the design decision-making process to design cooler cities. This paper describes the application and validation of a novel three-dimensional urban canyon modelling approach calculating the Sky View Factor (SVF), an important indicator used in the prediction of UHI. Our modified daylighting-system-based approach within a design modelling environment allows iterative design decision making informed by SVF on an urban design scale. This approach is tested on urban fabric samples from cities in both Australia and China. The new approach extends the applicability of existing methods in the design process by providing ‘real-time’ SVF feedback for complex three-dimensional urban scenarios. The modelling approach enables city designers to mix intuitive compositional design modelling with dynamic canyon feedback. The approach allows a greater understanding of existing and proposed urban forms, identification of potential canyon problem areas, improved decision making and design advocacy, and can potentially have an impact on cities’ temperature.
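    For intuition about what SVF measures, here is a small Monte Carlo sketch for an idealized infinite street canyon. It is an illustrative stand-in for the paper's daylighting-system-based method; the geometry and sampling scheme are assumptions.

```python
# Sketch: Sky View Factor at the floor centerline of an infinite street
# canyon of height H and width W, by cosine-weighted hemisphere sampling.
import math, random

def svf_canyon(H: float, W: float, n: int = 100_000) -> float:
    """Fraction of the (cosine-weighted) sky hemisphere left unblocked."""
    rng = random.Random(1)
    visible = 0
    for _ in range(n):
        u1, u2 = rng.random(), rng.random()
        sin_t = math.sqrt(u1)          # cosine-weighted zenith angle
        cos_t = math.sqrt(1.0 - u1)
        phi = 2.0 * math.pi * u2
        dy = sin_t * math.sin(phi)     # across-canyon component
        dz = cos_t                     # vertical component
        # A ray escapes if it clears the nearer wall:
        # height reached at the wall = dz * (W/2) / |dy|.
        if dy == 0.0 or dz * (W / 2.0) / abs(dy) >= H:
            visible += 1
    return visible / n

print(svf_canyon(H=20.0, W=20.0))  # deep canyon: SVF well below 1
```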

  12. A quick method for species identification of Japanese eel (Anguilla japonica) using real-time PCR: an onboard application for use during sampling surveys.

    Science.gov (United States)

    Watanabe, Shun; Minegishi, Yuki; Yoshinaga, Tatsuki; Aoyama, Jun; Tsukamoto, Katsumi

    2004-01-01

    To compensate for the limited number of morphological characteristics of fish eggs and larvae, we established a convenient and robust method of species identification for eggs of the Japanese eel (Anguilla japonica) using a real-time polymerase chain reaction (PCR) that can be performed onboard research ships at sea. A total of about 1.2 kbp of the mitochondrial 16S ribosomal RNA gene sequences from all species of Anguilla and 3 other anguilliform species were compared to design specific primer pairs and a probe for A. japonica. This real-time PCR amplification was conducted for a total of 44 specimens including A. japonica, A. marmorata, A. bicolor pacifica, and 6 other anguilliform species. Immediate PCR amplification was only observed in A. japonica. We then tested this method under onboard conditions and obtained the same result as had been produced in the laboratory. These results suggest that real-time PCR can be a powerful tool for detecting Japanese eel eggs and newly hatched larvae immediately after onboard sampling during research cruises and will allow targeted sampling efforts to occur rapidly in response to any positive onboard identification of the eggs and larvae of this species.

  13. Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation

    Directory of Open Access Journals (Sweden)

    Xiaokun Wang

    2017-01-01

    In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surface of rigid bodies with boundary particles that interact with fluids. It contains two procedures, surface sampling and sampling relaxation, which ensures a uniform distribution of particles with fewer iterations. Furthermore, we present a rigid-fluid coupling scheme integrating individual time stepping into rigid-fluid coupling, which gains an obvious speedup compared to previous methods. The experimental results demonstrate the effectiveness of our approach.

  14. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.

  15. Design of cross-sensitive temperature and strain sensor based on sampled fiber grating

    Directory of Open Access Journals (Sweden)

    Zhang Xiaohang

    2017-02-01

    In this paper, a cross-sensitive temperature and strain sensor based on a sampled fiber grating is designed. Its temperature measurement range is -50 to 200 °C, and the strain measurement range is 0 to 2,000 με. The characteristics of the sensor are obtained using a simulation method. Utilizing SPSS software, we found the dual-parameter matrix equations for the measurement of temperature and strain, and calibrated the four sensing coefficients of the matrix equations.
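    The dual-parameter recovery amounts to inverting a 2x2 cross-sensitivity matrix. The sketch below shows the standard linear model with placeholder coefficients of typical fiber-Bragg-grating magnitude, not the calibrated values from the paper.

```python
# Sketch: solving the 2x2 temperature/strain cross-sensitivity system.
import numpy as np

# [d_lambda_1]   [K_T1  K_e1] [dT      ]
# [d_lambda_2] = [K_T2  K_e2] [d_strain]
K = np.array([[10.0e-3, 1.2e-3],   # nm/degC, nm/microstrain (assumed)
              [ 6.5e-3, 0.8e-3]])

d_lambda = np.array([0.25, 0.15])  # measured wavelength shifts, nm (assumed)
dT, d_strain = np.linalg.solve(K, d_lambda)
print(f"dT = {dT:.1f} degC, strain = {d_strain:.0f} microstrain")
```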

  16. Calorimetric assay of HTGR fuel samples

    International Nuclear Information System (INIS)

    Allen, E.J.; McNeany, S.R.; Jenkins, J.D.

    1979-04-01

    A calorimeter using a neutron source was designed and fabricated by Mound Laboratory, according to ORNL specifications. A calibration curve of the device for HTGR standard fuel rods was experimentally determined. The precision of a single measurement at the 95% confidence level was estimated to be ±0.8 μW. For a fuel sample containing 0.3 g of 235U and a neutron source containing 691 μg of 252Cf, this represents a relative standard deviation of 0.5%. Measurement time was approximately 5.5 h per sample. Use of the calorimeter is limited by its relatively poor precision, long measurement time, manual sample changing, sensitivity to the room environment, and the possibility of accumulated dust blocking water flow through the calorimeter. The calorimeter could be redesigned to resolve most of these difficulties, but not without significant development work.

  17. System design specification for rotary mode core sample trucks No. 2, 3, and 4 programmable logic controller

    International Nuclear Information System (INIS)

    Dowell, J.L.; Akers, J.C.

    1995-01-01

    The system this document describes controls several functions of the Core Sample Truck(s) used to obtain nuclear waste samples from various underground storage tanks at Hanford. The system will monitor the sampling process and provide alarms and other feedback to ensure the sampling process is performed within the prescribed operating envelope. The intended audience for this document is anyone associated with rotary or push mode core sampling. This document describes the Alarm and Control logic installed on Rotary Mode Core Sample Trucks (RMCST) #2, 3, and 4. It is intended to define the particular requirements of the RMCST alarm and control operation (not defined elsewhere) sufficiently for detailed design to implement on a Programmable Logic Controller (PLC).

  18. Spreading Design of Radioactivity in Sea Water, Algae and Fish Samples in the Coastal of Muria Peninsula Area

    International Nuclear Information System (INIS)

    Sutjipto; Muryono; Sumining

    2000-01-01

    The spreading of radioactivity in sea water, brown algae (Phaeophyceae) and kerapu fish (Epinephelus) samples in the coastal area of the Muria peninsula has been studied. This research was carried out to determine not only the spreading of each type of radioactivity, but also the spreading pattern in relation to the content of Pu-239 and Cs-137. Sample taking, preparation and analysis were based on the standard procedures for environmental radioactivity analysis. The instruments used for the radioactivity analysis were an alpha counter with a ZnS detector, a low-level beta counter modified by P3TM-BATAN with a GM detector, and a gamma spectrometer with a Ge(Li) detector. The alpha radioactivities obtained for sea water, algae and fish were fluctuations of the natural background. The radionuclide Pu-239 was not detected in the samples, because its concentration/radioactivity was still below the detection limit for Pu-239, which is 1.10 Bq/g for algae and fish and 0.07 Bq/mL for sea water. The highest alpha radioactivity was obtained for the kerapu fish, at 1.56 × 10⁻³ Bq/g; the highest beta radioactivity for sea water, at 1.75 × 10² mBq/L; the highest gamma radioactivity of K-40 for brown algae, at 3.72 × 10⁻² Bq/g; and the gamma radioactivity of Tl-208 for the fish mentioned above, at 1.35 × 10⁻² Bq/g. No peak at the gamma energy of Cs-137 was detected with the gamma counter, so the radionuclide Cs-137 is not present in the samples. The spreading of radioactivity in the coastal area of the Muria peninsula was thus found for alpha radioactivity on kerapu fish, for beta radioactivity on sea water, and for gamma radioactivity on brown algae and kerapu fish. (author)

  19. Statistical aspects of quantitative real-time PCR experiment design

    Czech Academy of Sciences Publication Activity Database

    Kitchen, R.R.; Kubista, Mikael; Tichopád, Aleš

    2010-01-01

    Vol. 50, No. 4 (2010), pp. 231-236 ISSN 1046-2023 R&D Projects: GA AV ČR IAA500520809 Institutional research plan: CEZ:AV0Z50520701 Keywords: Real-time PCR * Experiment design * Nested analysis of variance Subject RIV: EB - Genetics; Molecular Biology Impact factor: 4.527, year: 2010

  20. Effects of brief time delays on matching-to-sample abilities in capuchin monkeys (Sapajus spp.).

    Science.gov (United States)

    Truppa, Valentina; De Simone, Diego Antonio; Piano Mortari, Eva; De Lillo, Carlo

    2014-09-01

    Traditionally, studies of delayed matching-to-sample (DMTS) tasks in nonhuman species have focused on the assessment of the limits of the retrieval of information stored in short- and long-term memory systems. However, it is still unclear if visual recognition in these tasks is affected by very brief delay intervals, which are typically used to study rapidly decaying types of visual memory. This study aimed at evaluating if tufted capuchin monkeys' ability to recognise visual stimuli in a DMTS task is affected by (i) the disappearance of the sample stimulus and (ii) the introduction of delay intervals (0.5, 1.0, 2.0 and 3.0 s) between the disappearance of the sample and the presentation of the comparison stimuli. The results demonstrated that the simple disappearance of the sample and the introduction of a delay of 0.5 s did not affect capuchins' performance either in terms of accuracy or response time. A delay interval of 1.0 s produced a significant increase in response time but still did not affect recognition accuracy. By contrast, delays of 2.0 and 3.0 s determined a significant increase in response time and a reduction in recognition accuracy. These findings indicate the existence in capuchin monkeys of processes enabling a very accurate retention of stimulus features within time frames comparable to those reported for humans' sensory memory (0.5-1.0 s). The extent to which such processes can be considered analogous to the sensory memory processes observed in human visual cognition is discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Balanced sampling

    NARCIS (Netherlands)

    Brus, D.J.

    2015-01-01

    In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling

  2. A simplified time-domain design and implementation of cascaded PI ...

    Indian Academy of Sciences (India)

    LENIN PRAKASH

    Performance analysis based on field test results using real-time weather data validates the proposed design. ... applications such as photovoltaic (PV) solar systems, fuel cells, etc.

  3. Solar sail time-optimal interplanetary transfer trajectory design

    International Nuclear Information System (INIS)

    Gong Shengpin; Gao Yunfeng; Li Junfeng

    2011-01-01

    The fuel consumption associated with some interplanetary transfer trajectories using chemical propulsion is not affordable. A solar sail is a method of propulsion that does not consume fuel. Transfer time is one of the most pressing problems of solar sail transfer trajectory design. This paper investigates the time-optimal interplanetary transfer trajectories to a circular orbit of given inclination and radius. The optimal control law is derived from the maximum principle. An indirect method is used to solve the optimal control problem by selecting values for the initial adjoint variables, which are normalized within a unit sphere. The conditions for the existence of the time-optimal transfer are dependent on the lightness number of the sail and the inclination and radius of the target orbit. A numerical method is used to obtain the boundary values for the time-optimal transfer trajectories. For the cases where no time-optimal transfer trajectories exist, first-order necessary conditions of the optimal control are proposed to obtain feasible solutions. The results show that the transfer time decreases as the minimum distance from the Sun decreases during the transfer duration. For a solar sail with a small lightness number, the transfer time may be evaluated analytically for a three-phase transfer trajectory. The analytical results are compared with previous results and the associated numerical results. The transfer time of the numerical result here is smaller than the transfer time from previous results and is larger than the analytical result.

  4. Development of a real-time multiplex PCR assay for the detection of multiple Salmonella serotypes in chicken samples

    Directory of Open Access Journals (Sweden)

    Whyte Paul

    2008-09-01

    Background: A real-time multiplex PCR assay was developed for the detection of multiple Salmonella serotypes in chicken samples. Poultry-associated serotypes detected in the assay include Enteritidis, Gallinarum, Typhimurium, Kentucky and Dublin. The traditional cultural method according to EN ISO 6579:2002 for the detection of Salmonella in food was performed in parallel. The real-time PCR based method comprised a pre-enrichment step in Buffered Peptone Water (BPW) overnight, followed by a shortened selective enrichment in Rappaport Vassiliadis Soya Broth (RVS) for 6 hours and subsequent DNA extraction. Results: The real-time multiplex PCR assay and traditional cultural method showed 100% inclusivity and 100% exclusivity on all strains tested. The real-time multiplex PCR assay was as sensitive as the traditional cultural method in detecting Salmonella in artificially contaminated chicken samples and correctly identified the serotype. Artificially contaminated chicken samples resulted in a detection limit of between 1 and 10 CFU per 25 g sample for both methods. A total of sixty-three naturally contaminated chicken samples were investigated by both methods and relative accuracy, relative sensitivity and relative specificity of the real-time PCR method were determined to be 89, 94 and 87%, respectively. Thirty cultures blind tested were correctly identified by the real-time multiplex PCR method. Conclusion: Real-time PCR methodology can contribute to meet the need for rapid identification and detection methods in food testing laboratories.

  5. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Science.gov (United States)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of
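    Since the record turns on estimating β from irregular samples, here is a minimal sketch of the naive Lomb-Scargle approach, assuming numpy and scipy. Per the abstract's findings, this estimator should be expected to be biased (typically low) under irregular sampling; it is shown for orientation, not as a recommended method.

```python
# Minimal sketch: spectral slope (beta) from a Lomb-Scargle periodogram
# of an irregularly sampled series (assumes numpy and scipy).
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1000.0, size=500))   # irregular sample times
y = np.cumsum(rng.normal(size=t.size))            # toy Brown noise, beta ~ 2
y -= y.mean()                                     # lombscargle wants zero mean

span = np.ptp(t)
freqs = np.linspace(2 * np.pi / span, np.pi * t.size / span, 400)  # rad/time
power = lombscargle(t, y, freqs)

# beta is minus the slope of log(power) against log(frequency).
slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
print(f"estimated beta = {-slope:.2f}")  # expect a (biased) estimate near 2
```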

  6. EZH2 and CD79B mutational status over time in B-cell non-Hodgkin lymphomas detected by high-throughput sequencing using minimal samples

    Science.gov (United States)

    Saieg, Mauro Ajaj; Geddie, William R; Boerner, Scott L; Bailey, Denis; Crump, Michael; da Cunha Santos, Gilda

    2013-01-01

    BACKGROUND: Numerous genomic abnormalities in B-cell non-Hodgkin lymphomas (NHLs) have been revealed by novel high-throughput technologies, including recurrent mutations in EZH2 (enhancer of zeste homolog 2) and CD79B (B cell antigen receptor complex-associated protein beta chain) genes. This study sought to determine the evolution of the mutational status of EZH2 and CD79B over time in different samples from the same patient in a cohort of B-cell NHLs, through use of a customized multiplex mutation assay. METHODS: DNA that was extracted from cytological material stored on FTA cards as well as from additional specimens, including archived frozen and formalin-fixed histological specimens, archived stained smears, and cytospin preparations, were submitted to a multiplex mutation assay specifically designed for the detection of point mutations involving EZH2 and CD79B, using MassARRAY spectrometry followed by Sanger sequencing. RESULTS: All 121 samples from 80 B-cell NHL cases were successfully analyzed. Mutations in EZH2 (Y646) and CD79B (Y196) were detected in 13.2% and 8% of the samples, respectively, almost exclusively in follicular lymphomas and diffuse large B-cell lymphomas. In one-third of the positive cases, a wild type was detected in a different sample from the same patient during follow-up. CONCLUSIONS: Testing multiple minimal tissue samples using a high-throughput multiplex platform exponentially increases tissue availability for molecular analysis and might facilitate future studies of tumor progression and the related molecular events. Mutational status of EZH2 and CD79B may vary in B-cell NHL samples over time and support the concept that individualized therapy should be based on molecular findings at the time of treatment, rather than on results obtained from previous specimens. Cancer (Cancer Cytopathol) 2013;121:377–386. © 2013 American Cancer Society. PMID:23361872

  7. Effect of hip braces on brake response time: Repeated measures designed study.

    Science.gov (United States)

    Dammerer, Dietmar; Waidmann, Cornelia; Huber, Dennis G; Krismer, Martin; Haid, Christian; Liebensteiner, Michael C

    2017-08-01

    The question of whether or not a patient with a hip brace should drive a car is of obvious importance, because the advice given to patients on resuming driving is often anecdotal, as few scientific data are available on this specific subject. To assess driving ability (brake response time) with commonly used hip braces. Repeated measures design. Brake response time was assessed under six conditions: (1) without a brace (control), (2) with a typical postoperative hip brace with adjustable range of motion and the settings: unrestricted, (3) flexion limited to 70°, (4) extension blocked at 20° hip flexion, (5) both flexion and extension limited (20°/70°) and (6) an elastic hip bandage. Brake response time was assessed using a custom-made driving simulator as used in previous studies. The participants were a convenience sample of able-bodied participants. A total of 70 participants (35 women and 35 men) participated in our study. Mean age was 31.1 (standard deviation: 10.6; range: 21.7-66.4) years. A significant within-subject effect for brake response time was found (p = 0.009), but subsequent post hoc analyses revealed no significant differences between control and the other settings. Based on our findings, it does not seem mandatory to recommend driving abstinence for patients wearing a hip orthosis. We suggest that our results be interpreted with caution, because (1) an underlying pathological hip condition needs to be considered, (2) the ability to drive a car safely is multifactorial and brake response time is only one component thereof and (3) brake response time measurements were performed only with healthy participants. Clinical relevance: Hip braces are used in the context of joint-preserving and prosthetic surgery of the hip. Therefore, clinicians are confronted with the question of whether to allow driving a car with the respective hip brace or not. Our data suggest that hip braces do not impair brake response time.

  8. Designing a fuzzy scheduler for hard real-time systems

    Science.gov (United States)

    Yen, John; Lee, Jonathan; Pfluger, Nathan; Natarajan, Swami

    1992-01-01

    In hard real-time systems, tasks have to be performed not only correctly, but also in a timely fashion. If timing constraints are not met, there might be severe consequences. Task scheduling is the most important problem in designing a hard real-time system, because the scheduling algorithm ensures that tasks meet their deadlines. However, the inherent nature of uncertainty in dynamic hard real-time systems increases the problems inherent in scheduling. In an effort to alleviate these problems, we have developed a fuzzy scheduler to facilitate searching for a feasible schedule. A set of fuzzy rules are proposed to guide the search. The situation we are trying to address is the performance of the system when no feasible solution can be found, and therefore, certain tasks will not be executed. We wish to limit the number of important tasks that are not scheduled.
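    To give a flavor of how fuzzy rules can rank tasks while a feasible schedule is being searched, here is a small sketch with invented membership shapes, rule set, and weights; it does not reproduce the paper's actual rules.

```python
# Sketch: fuzzy task priority from deadline slack and importance
# (rule shapes and weights are invented for illustration).
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_priority(slack_ms: float, importance: float) -> float:
    """slack_ms in [0, 100], importance in [0, 1]; returns priority in [0, 1]."""
    tight = tri(slack_ms, -1, 0, 40)      # slack near 0  -> 'tight'
    loose = tri(slack_ms, 30, 100, 201)   # large slack   -> 'loose'
    r1 = min(tight, importance)           # tight AND important -> high
    r2 = 0.2 * loose                      # loose -> low (down-weighted)
    # Weighted-average defuzzification over outputs 1.0 (high) and 0.0 (low).
    return (r1 * 1.0 + r2 * 0.0) / (r1 + r2 + 1e-9)

print(fuzzy_priority(slack_ms=5, importance=0.9))   # high priority
print(fuzzy_priority(slack_ms=90, importance=0.4))  # low priority
```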

  9. Design of a radioactive gas sampling system for NESHAP compliance measurements of 41Ar

    International Nuclear Information System (INIS)

    Newton, G.J.; McDonald, M.J.; Ghanbari, F.; Hoover, M.D.; Barr, E.B.

    1994-01-01

    United States Department of Energy facilities are required to comply with the U.S. Environmental Protection Agency National Emission Standard for Hazardous Air Pollutants (NESHAP), 40 CFR part 61, subpart H. Compliance generally requires confirmatory measurements of emitted radionuclides. Although a number of standard procedures exist for extractive sampling of particle-associated radionuclides, sampling approaches for radioactive gases are less well defined. Real-time, flow-through sampling of radioactive gases can be done when concentrations are high compared to interferences from background radiation. Cold traps can be used to collect and concentrate condensible effluents in applications where cryogenic conditions can be established and maintained. Commercially available gas-sampling cylinders can be used to capture grab samples of contaminated air under ambient or compressed conditions, if suitable sampling and control hardware are added to the cylinders. The purpose of the current study was to develop an efficient and compact set of sampling and control hardware for use with commercially available gas-sampling cylinders, and to demonstrate its use in NESHAP compliance testing of 41Ar at two experimental research reactors.

  10. Laser-induced breakdown spectroscopy for the real-time analysis of mixed waste samples containing Sr

    International Nuclear Information System (INIS)

    Barefield, J.E. II; Koskelo, A.C.; Multari, R.A.; Cremers, D.A.; Gamble, T.K.; Han, C.Y.

    1995-01-01

    In this report, the use of laser-induced breakdown spectroscopy to analyze mixed waste samples containing Sr is discussed. The mixed waste samples investigated include vitrified waste glass and contaminated soil. Compared to traditional analysis techniques, the laser-based method is fast (i.e., analysis times on the order of minutes) and essentially waste free, since little or no sample preparation is required. Detection limits on the order of ppm Sr were determined. Detection limits obtained using a fiber optic cable to deliver laser pulses to soil samples containing Cr, Zr, Pb, Be, Cu, and Ni will also be discussed.

  11. Multi-hit time-to-amplitude CAMAC module (MTAC)

    International Nuclear Information System (INIS)

    Kang, H.

    1980-10-01

    A Multi-Hit Time-to-Amplitude Module (MTAC) for the SLAC Mark III drift chamber system has been designed to measure drift time by converting time-proportional chamber signals into analog levels, and converting the analog data by slow readout via a semi-autonomous controller in a CAMAC crate. The single width CAMAC module has 16 wire channels, each with a 4-hit capacity. An externally generated common start initiates an internal precision ramp voltage which is then sampled using a novel shift register gating scheme and CMOS sampling switches. The detailed design and performance specifications are described

  12. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 1: Frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion

  13. Development of a real-time PCR to detect Demodex canis DNA in different tissue samples.

    Science.gov (United States)

    Ravera, Ivan; Altet, Laura; Francino, Olga; Bardagí, Mar; Sánchez, Armand; Ferrer, Lluís

    2011-02-01

    The present study reports the development of a real-time polymerase chain reaction (PCR) to detect Demodex canis DNA in different tissue samples. The technique amplifies a 166 bp fragment of the D. canis chitin synthase gene (AB 080667), and it has been successfully tested on hairs extracted with their roots and on formalin-fixed, paraffin-embedded skin biopsies. The real-time PCR amplified from the hairs of all 14 dogs with a firm diagnosis of demodicosis and consistently failed to amplify in negative controls. Eleven of 12 skin biopsies with a morphologic diagnosis of canine demodicosis were also positive. Sampling hairs at two skin points (lateral face and interdigital skin), D. canis DNA was detected in nine of 51 healthy dogs (17.6%), a much higher percentage than previously reported in microscopic studies. Furthermore, it is foreseen that if the number of samples were increased, the percentage of positive dogs would probably also grow. Moreover, in four of the six dogs with demodicosis, the samples taken from non-lesioned skin were positive. This finding, if confirmed in further studies, suggests that demodicosis is a generalized phenomenon in canine skin, due to proliferation of local mite populations, even though macroscopic lesions only appear in certain areas. The real-time PCR technique to detect D. canis DNA described in this work is a useful tool to advance our understanding of canine demodicosis.

  14. GRAPHIC, time-sharing magnet design computer programs at Argonne

    International Nuclear Information System (INIS)

    Lari, R.J.

    1974-01-01

    This paper describes three magnet design computer programs in use at the Zero Gradient Synchrotron of Argonne National Laboratory. These programs are used in the time-sharing mode in conjunction with a Tektronix model 4012 graphic display terminal. The first program is called TRIM, the second MAGNET, and the third GFUN. (U.S.)

  15. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    Science.gov (United States)

    Harte, Philip T.

    2017-01-01

    A common assumption with groundwater sampling is that low […] time until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of time needed for adequate purging prior to sample collection (called optimal purge duration) is controlled by the in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time-volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), which was based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of the arrival time of formation water, as has been postulated, then in-well, vertical flow may be an important factor at wells where low-flow sampling is the sampling method of choice.
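    As a rough companion calculation, the in-well vertical travel time can be bounded by the water volume between the inflow zone and the pump intake divided by the pumping rate. The function below is a sketch under a plug-flow assumption; the well dimensions and rate are illustrative, not taken from the paper.

```python
# Sketch: in-well vertical travel time as a first estimate of optimal
# purge duration (plug-flow assumption; illustrative values only).
import math

def purge_time_minutes(casing_diameter_m: float,
                       intake_to_inflow_m: float,
                       pump_rate_L_per_min: float) -> float:
    """Time for water to travel from the inflow zone up to the pump intake."""
    radius = casing_diameter_m / 2.0
    volume_L = math.pi * radius**2 * intake_to_inflow_m * 1000.0  # m^3 -> L
    return volume_L / pump_rate_L_per_min

# Example: a 5 cm (2-inch) well, intake 2 m above the high-K inflow zone,
# pumped at 0.5 L/min (a typical low-flow rate) -> roughly 8 minutes.
print(f"{purge_time_minutes(0.05, 2.0, 0.5):.1f} minutes")
```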

  16. Minimum-Energy Sub-Threshold Self-Timed Circuits: Design Methodology and a Case Study

    DEFF Research Database (Denmark)

    Akgun, Omer Can; Rodrigues, Joachim; Sparsø, Jens

    2010-01-01

    ... which integrates the current sensing circuitry. The paper outlines a corresponding design flow, which is based on contemporary synchronous EDA tools, and which transforms a synchronous design into a corresponding self-timed circuit. The design flow and the current-sensing technique are validated...

  17. Advanced discrete-time control designs and applications

    CERN Document Server

    Abidi, Khalid

    2015-01-01

    This book covers a wide spectrum of systems such as linear and nonlinear multivariable systems as well as control problems such as disturbance, uncertainty and time-delays. The purpose of this book is to provide researchers and practitioners a manual for the design and application of advanced discrete-time controllers.  The book presents six different control approaches depending on the type of system and control problem. The first and second approaches are based on Sliding Mode control (SMC) theory and are intended for linear systems with exogenous disturbances. The third and fourth approaches are based on adaptive control theory and are aimed at linear/nonlinear systems with periodically varying parametric uncertainty or systems with input delay. The fifth approach is based on Iterative learning control (ILC) theory and is aimed at uncertain linear/nonlinear systems with repeatable tasks and the final approach is based on fuzzy logic control (FLC) and is intended for highly uncertain systems with heuristi...

  18. Analysis of Clinical Cohort Data Using Nested Case-control and Case-cohort Sampling Designs. A Powerful and Economical Tool.

    Science.gov (United States)

    Ohneberg, K; Wolkewitz, M; Beyersmann, J; Palomar-Martinez, M; Olaechea-Astigarraga, P; Alvarez-Lerma, F; Schumacher, M

    2015-01-01

    Sampling from a large cohort in order to derive a subsample that would be sufficient for statistical analysis is a frequently used method for handling large data sets in epidemiological studies with limited resources for exposure measurement. For clinical studies, however, when interest is in the influence of a potential risk factor, cohort studies are often the first choice, with all individuals entering the analysis. Our aim is to close the gap between epidemiological and clinical studies with respect to design and power considerations. Schoenfeld's formula for the number of events required for a Cox proportional hazards model is fundamental. Our objective is to compare the power of analyzing the full cohort and the power of a nested case-control and a case-cohort design. We compare formulas for power for sampling designs and cohort studies. In our data example we simultaneously apply a nested case-control design with a varying number of controls matched to each case, a case-cohort design with varying subcohort size, a random subsample and a full cohort analysis. For each design we calculate the standard error for estimated regression coefficients and the mean number of distinct persons for whom covariate information is required. The formula for the power of a nested case-control design and the power of a case-cohort design is directly connected to the power of a cohort study using the well-known Schoenfeld formula. The loss in precision of parameter estimates is relatively small compared to the saving in resources. Nested case-control and case-cohort studies, but not random subsamples, yield an attractive alternative for analyzing clinical studies in the situation of a low event rate. Power calculations can be conducted straightforwardly to quantify the loss of power compared to the savings in the number of patients when using a sampling design instead of analyzing the full cohort.
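    For reference, here is a minimal sketch of the Schoenfeld event-count formula mentioned above, assuming scipy for the normal quantiles; the sampling-design adjustments discussed in the paper are not included.

```python
# Sketch: Schoenfeld's event-count formula for a Cox model comparing two
# groups (two-sided test; assumes scipy).
import math
from scipy.stats import norm

def schoenfeld_events(hazard_ratio: float, alpha: float = 0.05,
                      power: float = 0.80, prop_exposed: float = 0.5) -> float:
    """Total number of events required to detect `hazard_ratio`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p = prop_exposed
    return (z_alpha + z_beta) ** 2 / (p * (1 - p) * math.log(hazard_ratio) ** 2)

# Example: HR = 2, 80% power, 5% alpha, equal groups -> about 65 events.
print(round(schoenfeld_events(2.0)))
```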

  19. Generation of synthetic time histories compatible with multiple-damping design response spectra

    International Nuclear Information System (INIS)

    Lilhanand, K.; Tseng, W.S.

    1987-01-01

    Seismic design of nuclear power plants as currently practiced requires that time history analyses be performed to generate floor response spectra for seismic qualification of piping, equipment, and components. Since design response spectra are normally prescribed in the form of smooth spectra, the generation of synthetic time histories whose response spectra closely match the "target" design spectra of multiple damping values is often required for seismic time history analysis. Various methods for generating synthetic time histories compatible with target response spectra have been proposed in the literature. Since the mathematical problem of determining a time history from a given set of response spectral values is not unique, an exact solution is not possible, and all the proposed methods resort to some form of approximate solution. In this paper, a new iteration scheme is described which effectively removes the difficulties encountered by existing methods. This new iteration scheme can not only improve the accuracy of spectrum matching for a single-damping target spectrum, but also automate the spectrum matching for multiple-damping target spectra. The applicability and limitations, as well as the method adopted to improve the numerical stability of this new iteration scheme, are presented. The effectiveness of this new iteration scheme is illustrated by two example applications.

  20. Pumping time required to obtain tube well water samples with aquifer characteristic radon concentrations

    International Nuclear Information System (INIS)

    Ricardo, Carla Pereira; Oliveira, Arno Heeren de

    2011-01-01

    Radon is an inert noble gas which comes from the natural radioactive decay of uranium and thorium in soil, rock and water. Radon isotopes emanating from radium-bearing grains of a rock or soil are released into the pore space. Radon that reaches the pore space is partitioned between the gaseous and aqueous phases. Thus, the groundwater presents a radon signature from the rock that is characteristic of the aquifer. The characteristic radon concentration of an aquifer, which is mainly related to the emanation, is also influenced by the degree of subsurface degassing, especially in the vicinity of a tube well, where the radon concentration is strongly reduced. To determine the pumping time required to take a tube well water sample that presents the characteristic radon concentration of the aquifer, an experiment was conducted in an 80 m deep tube well. In this experiment, after twenty-four hours without extraction, water samples were collected periodically, at about ten-minute intervals, during two hours of pumping. The radon concentrations of the samples were determined using the RAD7 Electronic Radon Detector from Durridge Company, a solid-state alpha spectrometric detector. It was found that the time necessary to reach the maximum radon concentration, that is, the characteristic radon concentration of the aquifer, is about sixty minutes. (author)

  1. A time-sorting pitfall trap and temperature datalogger for the sampling of surface-active arthropods

    OpenAIRE

    McMunn, Marshall S.

    2017-01-01

    Nearly all arthropods display consistent patterns of activity according to time of day. These patterns of activity often limit the extent of animal co-occurrence in space and time. Quantifying when particular species are active and how activity varies with environmental conditions is difficult without the use of automated devices due to the need for continuous monitoring. Time-sorting pitfall traps passively collect active arthropods into containers with known beginning and end sample times. ...

  2. Monitoring well design and sampling techniques at NAPL sites

    International Nuclear Information System (INIS)

    Collins, M.; Rohrman, W.R.; Drake, K.D.

    1992-01-01

    The existence of Non-Aqueous Phase Liquids (NAPLs) at many Superfund and RCRA hazardous waste sites has become a recognized problem in recent years. The large number of sites exhibiting this problem results from the fact that many of the most frequently used industrial solvents and petroleum products can exist as NAPLs. Hazardous waste constituents occurring as NAPLs possess a common characteristic that causes great concern during groundwater contamination evaluation: while solubility in water is generally very low, it is sufficient to cause groundwater to exceed Maximum Contaminant Levels (MCLs). Thus, even a small quantity of NAPL within a groundwater regime can act as a point source with the ability to contaminate vast quantities of groundwater over time. This property makes it imperative that groundwater investigations focus heavily on characterizing the nature, extent, and migration pathways of NAPLs at sites where they exist. Two types of NAPLs may exist in a groundwater system. Water-immiscible liquid constituents having a specific gravity greater than one are termed Dense Non-Aqueous Phase Liquids, while those with a specific gravity less than one are considered Light Non-Aqueous Phase Liquids. For a groundwater investigation to properly characterize the two types of NAPLs, careful consideration must be given to the placement and sampling of groundwater monitoring wells. Unfortunately, technical reviewers at EPA Region VII and the Corps of Engineers find that many groundwater investigations fall short in characterizing NAPLs because several basic considerations were overlooked. Included among these are monitoring well location and screen placement with respect to the water table and significant confining units, and the ability of the well sampling method to obtain samples of NAPL. Depending on the specific gravity of the NAPL that occurs at a site, various considerations can substantially enhance adequate characterization of NAPL contaminants.

  3. Investigation of Legionella Contamination in Bath Water Samples by Culture, Amoebic Co-Culture, and Real-Time Quantitative PCR Methods.

    Science.gov (United States)

    Edagawa, Akiko; Kimura, Akio; Kawabuchi-Kurata, Takako; Adachi, Shinichi; Furuhata, Katsunori; Miyamoto, Hiroshi

    2015-10-19

    We investigated Legionella contamination in bath water samples, collected from 68 bathing facilities in Japan, by culture, culture with amoebic co-culture, real-time quantitative PCR (qPCR), and real-time qPCR with amoebic co-culture. Using the conventional culture method, Legionella pneumophila was detected in 11 samples (11/68, 16.2%). Contrary to our expectation, the culture method with the amoebic co-culture technique did not increase the detection rate of Legionella (4/68, 5.9%). In contrast, a combination of the amoebic co-culture technique followed by qPCR successfully increased the detection rate (57/68, 83.8%) compared with real-time qPCR alone (46/68, 67.6%). Using real-time qPCR after culture with amoebic co-culture, more than 10-fold higher bacterial numbers were observed in 30 samples (30/68, 44.1%) compared with the same samples without co-culture. On the other hand, higher bacterial numbers were not observed after propagation by amoebae in 32 samples (32/68, 47.1%). Legionella was not detected in the remaining six samples (6/68, 8.8%), irrespective of the method. These results suggest that application of the amoebic co-culture technique prior to real-time qPCR may be useful for the sensitive detection of Legionella from bath water samples. Furthermore, a combination of amoebic co-culture and real-time qPCR might be useful to detect viable and virulent Legionella because their ability to invade and multiply within free-living amoebae is considered to correlate with their pathogenicity for humans. This is the first report evaluating the efficacy of the amoebic co-culture technique for detecting Legionella in bath water samples.

  4. Identification of driving network of cellular differentiation from single sample time course gene expression data

    Science.gov (United States)

    Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing

    Methods developed based on bifurcation theory have demonstrated their potential for driving network identification for complex human diseases, including the work by Chen et al. Recently, bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving network prediction: time course cellular differentiation studies often contain only one sample at each time point, while driving network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We will discuss the advantages and limitations of our proposed methods, as well as potential further improvements.
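    One simple way to operationalize the autocorrelation part of this idea is a sliding-window lag-1 autocorrelation, which tends to rise as a system approaches a bifurcation. The sketch below uses a toy trajectory, not the GSE48138 data, and omits the phase-locking analysis.

```python
# Sketch: sliding-window lag-1 autocorrelation as a critical-transition
# indicator (toy data; illustrative only).
import numpy as np

def rolling_lag1_autocorr(x: np.ndarray, window: int) -> np.ndarray:
    out = np.full(x.size, np.nan)
    for i in range(window, x.size + 1):
        w = x[i - window:i] - x[i - window:i].mean()
        denom = np.dot(w, w)
        if denom > 0:
            out[i - 1] = np.dot(w[:-1], w[1:]) / denom
    return out

# Toy trajectory that destabilizes halfway through (second half drifts).
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 100),
                    np.cumsum(rng.normal(0, 1, 100))])
ac = rolling_lag1_autocorr(x, window=40)
print("autocorr early:", np.nanmean(ac[:100]).round(2),
      " late:", np.nanmean(ac[100:]).round(2))
```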

  5. A multimodal logistics service network design with time windows and environmental concerns.

    Science.gov (United States)

    Zhang, Dezhi; He, Runzhong; Li, Shuangyan; Wang, Zhongwei

    2017-01-01

    The design of a multimodal logistics service network with customer service time windows and environmental costs is an important and challenging issue. Accordingly, this work establishes a model to minimize the total cost of multimodal logistics service network design with time windows and environmental concerns. The proposed model incorporates CO2 emission costs to determine the optimal transportation mode combinations and investment selections for transfer nodes, considering transport cost, transport time, carbon emission, and logistics service time window constraints. Furthermore, a genetic algorithm and a heuristic algorithm are proposed to solve the abovementioned model. A numerical example is provided to validate the model and the two algorithms, and comparisons of the performance of the two algorithms are provided. Finally, this work investigates the effects of the logistics service time windows and CO2 emission taxes on the optimal solution. Several important management insights are obtained.
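    A toy version of the objective being minimized might look like the following, with invented costs, emission factors, tax rate, and penalty weight; the paper's full model also covers transfer-node investment decisions and mode combinations.

```python
# Toy objective: transport cost + CO2 tax + soft time-window penalty.
# All numbers below are assumptions for illustration.
def route_cost(distance_km: float, mode: dict, arrival_h: float,
               window: tuple, co2_tax_per_kg: float) -> float:
    transport = distance_km * mode["cost_per_km"]
    co2_kg = distance_km * mode["co2_kg_per_km"]
    early, late = window
    outside_h = max(0.0, arrival_h - late) + max(0.0, early - arrival_h)
    return transport + co2_tax_per_kg * co2_kg + 50.0 * outside_h

rail = {"cost_per_km": 0.8, "co2_kg_per_km": 0.4, "speed_kmh": 60.0}
road = {"cost_per_km": 1.5, "co2_kg_per_km": 1.2, "speed_kmh": 80.0}

for name, mode in (("rail", rail), ("road", road)):
    arrival_h = 900.0 / mode["speed_kmh"]          # single 900 km leg
    cost = route_cost(900.0, mode, arrival_h, (8.0, 14.0), 0.05)
    print(f"{name}: {cost:.0f}")
```

    Raising the CO2 tax shifts the optimum toward the slower, cleaner mode even when it misses the time window; this is the kind of trade-off the model's genetic and heuristic algorithms search over.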

  6. Design and realization of real-time processing system for seismic exploration

    International Nuclear Information System (INIS)

    Zhang Sifeng; Cao Ping; Song Kezhu; Yao Lin

    2010-01-01

    To solve real-time seismic data processing problems, a high-speed, large-capacity, real-time data processing system (DRPS) is designed based on FPGA and ARM. With the advantage of multiple processors, DRPS has the characteristics of high-speed data receiving, large-capacity data storage, protocol analysis, data splicing, data conversion from time sequence into channel sequence, dead-time-free ping-pong data storage, etc. With the embedded Linux operating system, DRPS also has the characteristics of flexibility and reliability. (authors)

  7. Design Optimization of Time- and Cost-Constrained Fault-Tolerant Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Pop, Paul; Eles, Petru

    2005-01-01

    In this paper we present an approach to the design optimization of fault-tolerant embedded systems for safety-critical applications. Processes are statically scheduled and communications are performed using the time-triggered protocol. We use process re-execution and replication for tolerating transient faults. Our design optimization approach decides the mapping of processes to processors and the assignment of fault-tolerant policies to processes such that transient faults are tolerated and the timing constraints of the application are satisfied. We present several heuristics which are able...

  8. Influenza virus drug resistance: a time-sampled population genetics perspective.

    Directory of Open Access Journals (Sweden)

    Matthieu Foll

    2014-02-01

    The challenge of distinguishing genetic drift from selection remains a central focus of population genetics. Time-sampled data may provide a powerful tool for distinguishing these processes, and we here propose approximate Bayesian, maximum likelihood, and analytical methods for the inference of demography and selection from time course data. Utilizing these novel statistical and computational tools, we evaluate whole-genome datasets of an influenza A H1N1 strain in the presence and absence of oseltamivir (an inhibitor of neuraminidase) collected at thirteen time points. Results reveal a striking consistency amongst the three estimation procedures developed, showing strongly increased selection pressure in the presence of drug treatment. Importantly, these approaches re-identify the known oseltamivir resistance site, successfully validating the approaches used. Enticingly, a number of previously unknown variants have also been identified as being positively selected. Results are interpreted in the light of Fisher's Geometric Model, allowing for a quantification of the increased distance to optimum exerted by the presence of drug, and theoretical predictions regarding the distribution of beneficial fitness effects of contending mutations are empirically tested. Further, given the fit to expectations of the Geometric Model, results suggest the ability to predict certain aspects of viral evolution in response to changing host environments and novel selective pressures.
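    As a simplified stand-in for the paper's estimators, the selection coefficient of a rising variant can be read off as the slope of the logit-transformed allele frequency over time in a deterministic haploid model; everything below is illustrative toy data, not the influenza dataset.

```python
# Sketch: selection coefficient from time-sampled allele frequencies via
# the slope of logit(p) over generations (deterministic haploid model;
# much simpler than the paper's ABC / likelihood machinery).
import numpy as np

def estimate_s(gens: np.ndarray, freqs: np.ndarray) -> float:
    logit = np.log(freqs / (1.0 - freqs))
    slope, _ = np.polyfit(gens, logit, 1)
    return slope

t = np.arange(13)                               # thirteen time points
p = 1.0 / (1.0 + np.exp(-(0.5 * t - 4.0)))      # true s = 0.5
rng = np.random.default_rng(7)
p_obs = np.clip(p + rng.normal(0.0, 0.01, t.size), 1e-4, 1 - 1e-4)
print(f"estimated s = {estimate_s(t, p_obs):.2f}")  # close to 0.5
```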

  9. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy.

    Science.gov (United States)

    Shanmugam, Akshaya; Usmani, Mohammad; Mayberry, Addison; Perkins, David L; Holcomb, Daniel E

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples.

  10. Sensitive time-resolved fluoroimmunoassay for quantitative determination of clothianidin in agricultural samples.

    Science.gov (United States)

    Li, Ming; Sheng, Enze; Yuan, Yulong; Liu, Xiaofeng; Hua, Xiude; Wang, Minghua

    2014-05-01

    Europium (Eu³⁺)-labeled antibody was used as a fluorescent label to develop a highly sensitive time-resolved fluoroimmunoassay (TRFIA) for determination of clothianidin residues in agricultural samples. Toward this goal, the Eu³⁺-labeled polyclonal antibody and goat anti-rabbit antibody were prepared for developing and evaluating direct competitive TRFIA (dc-TRFIA) and indirect competitive TRFIA (ic-TRFIA). Under optimal conditions, the half-maximal inhibition concentration (IC50) and the limit of detection (LOD, IC10) of clothianidin were 9.20 and 0.0909 μg/L for the dc-TRFIA and 2.07 and 0.0220 μg/L for the ic-TRFIA, respectively. The ic-TRFIA has no obvious cross-reactivity with the analogues of clothianidin except for dinotefuran. The average recoveries of clothianidin from spiked water, soil, cabbage, and rice samples were estimated to range from 74.1 to 115.9 %, with relative standard deviations of 3.3 to 11.7 %. The results of TRFIA for the blind samples were largely consistent with gas chromatography (R² = 0.9902). The optimized ic-TRFIA might become a sensitive and satisfactory analytical method for the quantitative monitoring of clothianidin residues in agricultural samples.
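
    IC50 and IC10 values like those quoted are typically read off a four-parameter logistic standard curve. The sketch below fits such a curve with SciPy on made-up calibration data (all concentrations and signals hypothetical) and solves it for the IC50 and the IC10-defined LOD.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, bottom, ic50, hill):
    """Four-parameter logistic used for competitive immunoassay curves."""
    return bottom + (top - bottom) / (1 + (x / ic50) ** hill)

# Hypothetical standard-curve data: clothianidin (μg/L) vs. signal (%)
conc = np.array([0.01, 0.1, 1, 3, 10, 30, 100])
signal = np.array([99, 95, 78, 62, 48, 30, 12], dtype=float)

popt, _ = curve_fit(four_pl, conc, signal, p0=[100, 0, 5, 1])
top, bottom, ic50, hill = popt

# LOD defined as IC10: the concentration giving 10% inhibition
target = top - 0.10 * (top - bottom)
ic10 = ic50 * ((top - bottom) / (target - bottom) - 1) ** (1 / hill)
print(f"IC50 = {ic50:.2f} μg/L, LOD (IC10) = {ic10:.4f} μg/L")
```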

  11. The design of a semi-custom integrated circuit for the SLAC SLC timing control system

    International Nuclear Information System (INIS)

    Linstadt, E.

    1985-01-01

    A semi-custom (gate array) integrated circuit has been designed for use in the SLAC Linear Collider timing and control system. The design process and SLAC's experiences during the phases of the design cycle are described. Issues concerning the partitioning of the design into semi-custom and standard components are discussed. Functional descriptions of the semi-custom integrated circuit and the timing module in which it is used are given

  12. Design of a coil sensor for time domain electromagnetic system for uranium exploration

    International Nuclear Information System (INIS)

    Keshwani, R.T.; Bhattacharya, S.

    2011-01-01

    Time domain electromagnetic systems are used for exploration of deep-seated deposits under the Earth's surface. The basic principle is to set up eddy currents in conductors using a pulse-excited transmitter coil during the on-time of a pulse. The decay time of the eddy currents during the off-time of a pulse is a function of the conductivity, permeability and depth of the conductor located under the Earth's surface. The technology is being developed to carry out exploration of mineral deposits (basically uranium) under the Earth's surface. The decay of the eddy currents is measured using a coil sensor located coplanar with the transmitter coil. The depth up to which successful exploration can be carried out is a strong function of the design of the receiver coil. The design parameters include the number of turns, bandwidth, stray capacitance and resistance of the coil. This paper describes the various designs tried out and their characterization results. Field results for a ground-based system developed are also described. (author)
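
    The interplay of the design parameters listed above can be made concrete with two standard relations: the coil's self-resonance frequency, set by its inductance and stray capacitance, and the resistor that critically damps the resulting parallel-RLC response so that ringing does not mask late-time eddy-current decays. The values below are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical receiver-coil parameters
L = 50e-3          # inductance (H), set by turn count and geometry
C_stray = 200e-12  # stray capacitance (F)
R_coil = 150.0     # winding resistance (ohm), sets thermal noise floor

# Self-resonance bounds the usable bandwidth: f_res = 1 / (2*pi*sqrt(L*C))
f_res = 1 / (2 * np.pi * np.sqrt(L * C_stray))

# Resistor that critically damps the parallel RLC response, avoiding
# ringing that would mask late-time eddy-current decays
R_damp = 0.5 * np.sqrt(L / C_stray)

print(f"self-resonance = {f_res / 1e3:.1f} kHz, damping R = {R_damp / 1e3:.2f} kOhm")
```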

  13. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Directory of Open Access Journals (Sweden)

    Q. Zhang

    2018-02-01

    Full Text Available River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling – in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) – are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2), and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb–Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among
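
    For reference, the Lomb–Scargle route the paper evaluates amounts to computing a periodogram on the irregular samples and regressing log power on log frequency; the negated slope estimates β. A minimal sketch on synthetic data follows; the frequency grid and series are arbitrary choices, and the cumulative-sum series is only Brown-noise-like, not an exact Brownian path on irregular times.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Irregularly sampled synthetic series (e.g., a water-quality record)
t = np.sort(rng.uniform(0, 1000, 400))
y = np.cumsum(rng.standard_normal(400))   # Brown-noise-like, beta near 2
y -= y.mean()

# Lomb-Scargle periodogram (the function takes angular frequencies)
freqs = np.linspace(0.002, 0.5, 200)          # cycles per unit time
pgram = lombscargle(t, y, 2 * np.pi * freqs)

# Spectral slope: power ~ f^(-beta), so fit log10 P against log10 f
slope, _ = np.polyfit(np.log10(freqs), np.log10(pgram), 1)
print(f"estimated beta = {-slope:.2f}")
```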

  14. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  15. Sample processing, protocol, and statistical analysis of the time-of-flight secondary ion mass spectrometry (ToF-SIMS) of protein, cell, and tissue samples.

    Science.gov (United States)

    Barreto, Goncalo; Soininen, Antti; Sillat, Tarvo; Konttinen, Yrjö T; Kaivosoja, Emilia

    2014-01-01

    Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is increasingly being used in analysis of biological samples. For example, it has been applied to distinguish healthy and osteoarthritic human cartilage. This chapter discusses ToF-SIMS principle and instrumentation including the three modes of analysis in ToF-SIMS. ToF-SIMS sets certain requirements for the samples to be analyzed; for example, the samples have to be vacuum compatible. Accordingly, sample processing steps for different biological samples, i.e., proteins, cells, frozen and paraffin-embedded tissues and extracellular matrix for the ToF-SIMS are presented. Multivariate analysis of the ToF-SIMS data and the necessary data preprocessing steps (peak selection, data normalization, mean-centering, and scaling and transformation) are discussed in this chapter.
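
    The pretreatment steps named above (normalization, scaling, mean-centering) precede multivariate analysis such as PCA. The following is a generic sketch of that chain on a synthetic peak-intensity matrix; it illustrates the ideas rather than the chapter's exact protocol.

```python
import numpy as np

def preprocess(spectra):
    """Generic ToF-SIMS pretreatment: rows are spectra, columns are peaks."""
    # Normalization to total counts removes overall yield variation
    x = spectra / spectra.sum(axis=1, keepdims=True)
    # Poisson scaling stabilizes the variance of count data
    x = x / np.sqrt(x.mean(axis=0, keepdims=True))
    # Mean-centering puts the origin at the data centroid for PCA
    return x - x.mean(axis=0, keepdims=True)

# Synthetic peak-intensity matrix: 30 spectra x 200 selected peaks
spectra = np.random.default_rng(2).poisson(50, size=(30, 200)).astype(float)
x = preprocess(spectra)

# PCA by SVD on the pretreated matrix
_, s, vt = np.linalg.svd(x, full_matrices=False)
scores = x @ vt[:2].T   # first two principal-component scores
print(scores.shape)     # (30, 2)
```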

  16. Investigation of Legionella Contamination in Bath Water Samples by Culture, Amoebic Co-Culture, and Real-Time Quantitative PCR Methods

    Directory of Open Access Journals (Sweden)

    Akiko Edagawa

    2015-10-01

    Full Text Available We investigated Legionella contamination in bath water samples, collected from 68 bathing facilities in Japan, by culture, culture with amoebic co-culture, real-time quantitative PCR (qPCR), and real-time qPCR with amoebic co-culture. Using the conventional culture method, Legionella pneumophila was detected in 11 samples (11/68, 16.2%). Contrary to our expectation, the culture method with the amoebic co-culture technique did not increase the detection rate of Legionella (4/68, 5.9%). In contrast, a combination of the amoebic co-culture technique followed by qPCR successfully increased the detection rate (57/68, 83.8%) compared with real-time qPCR alone (46/68, 67.6%). Using real-time qPCR after culture with amoebic co-culture, more than 10-fold higher bacterial numbers were observed in 30 samples (30/68, 44.1%) compared with the same samples without co-culture. On the other hand, higher bacterial numbers were not observed after propagation by amoebae in 32 samples (32/68, 47.1%). Legionella was not detected in the remaining six samples (6/68, 8.8%), irrespective of the method. These results suggest that application of the amoebic co-culture technique prior to real-time qPCR may be useful for the sensitive detection of Legionella from bath water samples. Furthermore, a combination of amoebic co-culture and real-time qPCR might be useful to detect viable and virulent Legionella because their ability to invade and multiply within free-living amoebae is considered to correlate with their pathogenicity for humans. This is the first report evaluating the efficacy of the amoebic co-culture technique for detecting Legionella in bath water samples.

  17. Limited sampling strategy for determining metformin area under the plasma concentration-time curve

    DEFF Research Database (Denmark)

    Santoro, Ana Beatriz; Stage, Tore Bjerregaard; Struchiner, Claudio José

    2016-01-01

    AIM: The aim was to develop and validate limited sampling strategy (LSS) models to predict the area under the plasma concentration-time curve (AUC) for metformin. METHODS: Metformin plasma concentrations (n = 627) at 0-24 h after a single 500 mg dose were used for LSS development, based on all su...
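
    Although the abstract is truncated, the limited sampling strategy it describes is standard: regress the full AUC on concentrations at a small number of sampling times and check the fit. A toy sketch on synthetic data follows; the time points, coefficients and noise are all invented for illustration.

```python
import numpy as np

# Hypothetical LSS: predict AUC(0-24 h) from 2 h and 8 h concentrations
rng = np.random.default_rng(3)
n = 60
c2 = rng.lognormal(0.5, 0.3, n)    # 2 h concentration (mg/L)
c8 = rng.lognormal(0.2, 0.3, n)    # 8 h concentration (mg/L)
auc = 4.0 * c2 + 9.0 * c8 + rng.normal(0, 0.3, n)  # synthetic "true" AUC

# Ordinary least squares: AUC = b0 + b1*C2h + b2*C8h
X = np.column_stack([np.ones(n), c2, c8])
beta, *_ = np.linalg.lstsq(X, auc, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((auc - pred) ** 2) / np.sum((auc - auc.mean()) ** 2)
print(f"AUC = {beta[0]:.2f} + {beta[1]:.2f}*C2h + {beta[2]:.2f}*C8h, R² = {r2:.3f}")
```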

  18. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call it disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...

  19. Coordination of Conditional Poisson Samples

    Directory of Open Access Journals (Sweden)

    Grafström Anton

    2015-12-01

    Full Text Available Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP) sampling is a modification of the classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first one uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
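
    Classical Poisson sampling with permanent random numbers (PRNs), the building block this paper modifies, fits in a few lines: unit k enters the sample whenever its PRN is below its inclusion probability, and reusing or shifting the PRNs coordinates successive samples. The sketch below is the unconditional (random-size) version, not the fixed-size CP variant the paper develops; population size and probabilities are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1000                                  # population size
prn = rng.uniform(size=N)                 # permanent random numbers

def poisson_sample(pi, prn):
    """Poisson sampling: include unit k iff its PRN falls below pi_k."""
    return np.flatnonzero(prn < pi)

# Two occasions with different inclusion probabilities; reusing the
# same PRNs positively coordinates the samples (large overlap).
pi1 = np.full(N, 0.05)
pi2 = np.full(N, 0.08)
s1, s2 = poisson_sample(pi1, prn), poisson_sample(pi2, prn)
print("positive overlap:", len(np.intersect1d(s1, s2)), "of", len(s1))

# Negative coordination: shift the PRNs by a constant modulo 1
s2_neg = poisson_sample(pi2, (prn + 0.5) % 1.0)
print("negative overlap:", len(np.intersect1d(s1, s2_neg)))
```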

  20. Mars Rover Sample Return aerocapture configuration design and packaging constraints

    Science.gov (United States)

    Lawson, Shelby J.

    1989-01-01

    This paper discusses the aerodynamic requirements and the volume and mass constraints that lead to a biconic aeroshell vehicle design that protects the Mars Rover Sample Return (MRSR) mission elements from launch to Mars landing. The aerodynamic requirements for Mars aerocapture and entry and the packaging constraints for the MRSR elements result in a symmetric biconic aeroshell that develops an L/D of 1.0 at 27.0 deg angle of attack. A significant problem in the study is obtaining a cg that provides adequate aerodynamic stability and performance within the mission-imposed constraints. Packaging methods that relieve the cg problems include forward placement of aeroshell propellant tanks and incorporating aeroshell structure as lander structure. The MRSR missions developed during the pre-phase A study are discussed with dimensional and mass data included. Further study is needed for some missions to minimize MRSR element volume so that launch mass constraints can be met.

  1. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    Science.gov (United States)

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  2. Iterative Refinement Methods for Time-Domain Equalizer Design

    Directory of Open Access Journals (Sweden)

    Evans Brian L

    2006-01-01

    Full Text Available Commonly used time domain equalizer (TEQ) design methods have been recently unified as an optimization problem involving an objective function in the form of a Rayleigh quotient. The direct generalized eigenvalue solution relies on matrix decompositions. To reduce implementation complexity, we propose an iterative refinement approach in which the TEQ length starts at two taps and increases by one tap at each iteration. Each iteration involves matrix-vector multiplications and vector additions with matrices and two-element vectors. At each iteration, the optimization of the objective function either improves or the approach terminates. The iterative refinement approach provides a range of communication performance versus implementation complexity tradeoffs for any TEQ method that fits the Rayleigh quotient framework. We apply the proposed approach to three such TEQ design methods: maximum shortening signal-to-noise ratio, minimum intersymbol interference, and minimum delay spread.
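
    The Rayleigh quotient framework referred to above has a closed-form solution: the best tap vector is the top generalized eigenvector of the matrix pair defining the quotient. The sketch below shows that direct, decomposition-based baseline with stand-in matrices (A and B here are placeholders for the in-window and out-of-window channel energy matrices of a shortening-SNR formulation, not computed from a real channel model); the paper's contribution is an iterative tap-growing refinement that avoids exactly this decomposition.

```python
import numpy as np
from scipy.linalg import eigh, toeplitz

rng = np.random.default_rng(5)

# TEQ design as a Rayleigh quotient: maximize (w' A w) / (w' B w)
h = rng.standard_normal(32)                  # hypothetical channel taps
taps = 8
acf = np.correlate(h, h, "full")[31:31 + taps]   # autocorrelation lags 0..7
R = toeplitz(acf)
A = R + 1e-3 * np.eye(taps)                  # stand-in "signal" matrix
B = 0.5 * R + np.eye(taps)                   # stand-in "wall" matrix (PD)

# Direct solution: top generalized eigenvector of the pair (A, B)
vals, vecs = eigh(A, B)
w = vecs[:, -1]
rq = w @ A @ w / (w @ B @ w)
print(f"Rayleigh quotient at optimum: {rq:.4f} (= {vals[-1]:.4f})")
```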

  3. Combining censored and uncensored data in a U-statistic: design and sample size implications for cell therapy research.

    Science.gov (United States)

    Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colin O

    2011-01-01

    The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. Finally, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, a change that requires a less precise substitution in this subset of participants. A score function that is based on the U-statistic can address these issues of 1) intercurrent SCEs and 2) response variable ascertainments that use different measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
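
    The pairwise scoring idea can be illustrated with a small sketch: every pair of patients is compared first on the clinical event and, when that comparison is indeterminate, on the continuous endpoint. This is a generic win-score illustration in the spirit of the paper, not its exact statistic; the patient records are invented.

```python
import numpy as np

def u_score(patients):
    """Pairwise U-statistic-style score mixing censored and uncensored data.

    Each patient is (event_time, event_observed, delta_endpoint), where
    delta_endpoint may be None if the measurement was precluded.
    """
    n = len(patients)
    scores = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            ti, di, yi = patients[i]
            tj, dj, yj = patients[j]
            if dj and ti > tj:        # i outlived j's observed event
                scores[i] += 1
            elif di and tj > ti:      # j outlived i's observed event
                scores[i] -= 1
            elif yi is not None and yj is not None:
                scores[i] += np.sign(yi - yj)   # tie-break on endpoint
    return scores

pts = [(24, False, 5.2), (10, True, None), (24, False, 3.1), (18, True, 1.0)]
print(u_score(pts))
```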

  4. Evolution in the design of a low sheath-flow interface for CE-MS and application to biological samples.

    Science.gov (United States)

    González-Ruiz, Víctor; Codesido, Santiago; Rudaz, Serge; Schappler, Julie

    2018-03-01

    Although several interfaces for CE-MS hyphenation are commercially available, the development of new versatile, simple and yet efficient and sensitive alternatives remains an important field of research. In a previous work, a simple low sheath-flow interface was developed from inexpensive parts. This interface features a design that is easy to build, maintain, and adapt to particular needs. The present work introduces an improved design of the previous interface. By reducing the diameter of the separation capillary and the emitter, a smaller Taylor cone is spontaneously formed, minimizing zone dispersion while the analytes pass through the interface and leading to less peak broadening associated with the ESI process. Numerical modeling allowed the study of the mixing and diffusion processes taking place in the Taylor cone. The analytical performance of this new interface was tested with pharmaceutically relevant molecules and endogenous metabolites. The interface was eventually applied to the analysis of neural cell culture samples, allowing the identification of a panel of neurotransmission-related molecules. An excellent migration time repeatability was obtained (intra-day RSD 10 with an injected volume of 6.7 nL of biological extract. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    Science.gov (United States)

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel-group randomized clinical trial (RCT) design in the field of rare disease. However, the discussion is felt to lack a sufficient degree of consideration of the true virtues of the delayed-start design and of its implications in terms of required sample size, overall information, or interpretation of the estimate in the context of small populations. Our aim was to evaluate whether there are real advantages of the delayed-start design, particularly in terms of overall efficacy and sample size requirements, as a proposed alternative to the standard parallel-group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that the delayed-start design is not suitable for drugs that establish an immediate treatment effect, but rather for drugs whose effects develop over time. In addition, the sample size will always increase as a consequence of the reduced time on placebo, which results in a decreased treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as an appropriate alternative to the standard parallel-group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also impacts on benefit-risk assessment.

  6. Real-time mobile customer short message system design and implementation

    Science.gov (United States)

    Han, Qirui; Sun, Fang

    To expand the current mobile phone short message service, and to make contact between schools, teachers and parents, as well as feedback within the modern school office system, more timely and convenient, we designed and developed a Short Message System based on the Linux platform. The state-of-the-art principles and design proposals of the Short Message System based on the Linux platform are introduced. Finally, we propose an optimized secure access authentication method. At present, many schools, businesses and research institutions are gradually adopting and promoting the messaging system, which has shown promising market prospects.

  7. National Ignition Facility sub-system design requirements integrated timing system SSDR 1.5.3

    International Nuclear Information System (INIS)

    Wiedwald, J.; Van Aersau, P.; Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development, and test requirements for the Integrated Timing System, WBS 1.5.3 which is part of the NIF Integrated Computer Control System (ICCS). The Integrated Timing System provides all temporally-critical hardware triggers to components and equipment in other NIF systems

  8. Time perspective in hereditary cancer: psychometric properties of a short form of the Zimbardo Time Perspective Inventory in a community and clinical sample.

    Science.gov (United States)

    Wakefield, Claire E; Homewood, Judi; Taylor, Alan; Mahmut, Mehmet; Meiser, Bettina

    2010-10-01

    We aimed to assess the psychometric properties of a 25-item short form of the Zimbardo Time Perspective Inventory in a community sample (N = 276) and in individuals with a strong family history of cancer, considering genetic testing for cancer risk (N = 338). In the community sample, individuals with high past-negative or present-fatalistic scores had higher levels of distress, as measured by depression, anxiety, and aggression. Similarly, in the patient sample, past-negative time perspective was positively correlated with distress, uncertainty, and postdecision regret when making a decision about genetic testing. Past-negative-oriented individuals were also more likely to be undecided about, or against, genetic testing. Hedonism was associated with being less likely to read the educational materials they received at their clinic, and fatalism was associated with having lower knowledge levels about genetic testing. The assessment of time perspective in individuals at increased risk of cancer can provide valuable clinical insights. However, further investigation of the psychometric properties of the short form of this scale is warranted, as it did not meet the currently accepted criteria for psychometric validation studies.

  9. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    Science.gov (United States)

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in Python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
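
    The hierarchical idea can be sketched in plain NumPy: each replicate observes a shared function plus a replicate-specific deviation, so the covariance between two observations is a shared kernel term plus a replicate-matched term. The sketch below is a minimal illustration with made-up hyperparameters and synthetic irregular sampling, not the authors' implementation.

```python
import numpy as np

def rbf(t1, t2, var, ls):
    """Squared-exponential covariance between two sets of times."""
    d = t1[:, None] - t2[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

# Model: y_r(t) = g(t) + h_r(t) + noise, with g shared across replicates
# and h_r replicate-specific, so Cov(y_ri, y_sj) = k_g + [r == s] * k_h.
rng = np.random.default_rng(6)
times = [np.sort(rng.uniform(0, 10, n)) for n in (7, 5, 9)]  # irregular
reps = np.concatenate([np.full(len(t), r) for r, t in enumerate(times)])
t = np.concatenate(times)
y = np.sin(t) + 0.3 * rng.standard_normal(len(t))            # synthetic

K = rbf(t, t, 1.0, 2.0) + (reps[:, None] == reps[None, :]) * rbf(t, t, 0.2, 1.0)
K += 0.1 * np.eye(len(t))                    # observation noise variance

# Posterior mean of the shared function g at a grid of test times
t_star = np.linspace(0, 10, 50)
Ks = rbf(t_star, t, 1.0, 2.0)                # Cov(g(t*), y)
g_mean = Ks @ np.linalg.solve(K, y)
print(np.round(g_mean[:5], 3))
```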

  10. Salmonella detection in poultry samples. Comparison of two commercial real-time PCR systems with culture methods for the detection of Salmonella spp. in environmental and fecal samples of poultry.

    Science.gov (United States)

    Sommer, D; Enderlein, D; Antakli, A; Schönenbrücher, H; Slaghuis, J; Redmann, T; Lierz, M

    2012-01-01

    The efficiency of two commercial PCR methods based on real-time technology, the foodproof® Salmonella detection system and the BAX® PCR Assay Salmonella system, was compared to standardized culture methods (EN ISO 6579:2002 - Annex D) for the detection of Salmonella spp. in poultry samples. Four sample matrices (feed, dust, boot swabs, feces) obtained directly from poultry flocks, as well as artificially spiked samples of the same matrices, were used. All samples were first tested for Salmonella spp. using culture methods as the gold standard. In addition, samples spiked with Salmonella Enteritidis were tested to evaluate the sensitivity of both PCR methods. Furthermore, all methods were evaluated in an annual ring trial of the National Salmonella Reference Laboratory of Germany. Salmonella detection in the feed, dust and boot-swab matrices was comparable between the two PCR systems, whereas the results from feces differed markedly. The quality, especially the freshness, of the fecal samples had an influence on the sensitivity of the real-time PCR and on the results of the culture methods. In fresh fecal samples, an initial spiking level of 100 cfu/25 g Salmonella Enteritidis was detected. Fecal samples dried for two days allowed the detection of 14 cfu/25 g. Both real-time PCR protocols appear to be suitable for the detection of Salmonella spp. in all four matrices. The foodproof® system detected eight more samples as positive compared to the BAX® system, but had a potential false-positive result in one case. In samples dried for seven days, none of the methods was able to detect Salmonella, likely owing to lethal cell damage. In general, the advantage of PCR analyses over the culture method is the reduction of working time from 4-5 days to only 2 days. However, especially for the analysis of fecal samples, official validation should be conducted according to the requirements of EN ISO 6579:2002 - Annex D.

  11. Extracting Hydrologic Understanding from the Unique Space-time Sampling of the Surface Water and Ocean Topography (SWOT) Mission

    Science.gov (United States)

    Nickles, C.; Zhao, Y.; Beighley, E.; Durand, M. T.; David, C. H.; Lee, H.

    2017-12-01

    The Surface Water and Ocean Topography (SWOT) satellite mission is jointly developed by NASA and the French space agency (CNES), with participation from the Canadian and UK space agencies, to serve both the hydrology and oceanography communities. The SWOT mission will sample global surface water extents and elevations (lakes/reservoirs, rivers, estuaries, oceans, sea and land ice) at a finer spatial resolution than is currently possible, enabling hydrologic discovery, model advancements and new applications that are not currently possible or perhaps even conceivable. Although the mission will provide global coverage, analysis and interpolation of the data generated from the irregular space/time sampling represent a significant challenge. In this study, we explore the applicability of the unique space/time sampling for understanding river discharge dynamics throughout the Ohio River Basin. River network topology, SWOT sampling (i.e., orbit and identified SWOT river reaches) and spatial interpolation concepts are used to quantify the fraction of effective sampling of river reaches on each day of the three-year mission. Streamflow statistics for SWOT-generated river discharge time series are compared to continuous daily river discharge series. Relationships are presented to transform SWOT-generated streamflow statistics into equivalent continuous daily discharge time series statistics, intended to support hydrologic applications using low-flow and annual flow duration statistics.

  12. Design and real-time control of a robotic system for fracture manipulation.

    Science.gov (United States)

    Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S

    2015-08-01

    This paper presents the design, development and control of a new robotic system for fracture manipulation. The objective is to improve the precision, ergonomics and safety of the traditional surgical procedure to treat joint fractures. The achievements toward this goal are reported here and include the design, the real-time control architecture and the evaluation of a new robotic manipulator system. The robotic manipulator is a 6-DOF parallel robot with struts developed as linear actuators. The control architecture is also described here. The high-level controller implements a host-target structure composed of a host computer (PC), a real-time controller, and an FPGA. A graphical user interface was designed, allowing the surgeon to comfortably automate and monitor the robotic system. The real-time controller guarantees the determinism of the control algorithms, adding an extra level of safety to the robotic automation. The system's positioning accuracy and repeatability have been demonstrated, showing a maximum positioning RMSE of 1.18 ± 1.14 mm (translations) and 1.85 ± 1.54° (rotations).

  13. Assessment of SCAR markers to design real-time PCR primers for rhizosphere quantification of Azospirillum brasilense phytostimulatory inoculants of maize.

    Science.gov (United States)

    Couillerot, O; Poirier, M-A; Prigent-Combaret, C; Mavingui, P; Caballero-Mellado, J; Moënne-Loccoz, Y

    2010-08-01

    To assess the applicability of sequence characterized amplified region (SCAR) markers obtained from BOX, ERIC and RAPD fragments to design primers for real-time PCR quantification of the phytostimulatory maize inoculants Azospirillum brasilense UAP-154 and CFN-535 in the rhizosphere. Primers were designed based on strain-specific SCAR markers and were screened for successful amplification of the target strain and absence of cross-reaction with other Azospirillum strains. The specificity of the primers thus selected was verified under real-time PCR conditions using genomic DNA from a strain collection and DNA from rhizosphere samples. The detection limit was 60 fg DNA with pure cultures and 4 × 10³ (for UAP-154) and 4 × 10⁴ CFU g⁻¹ (for CFN-535) in the maize rhizosphere. Inoculant quantification was effective from 10⁴ to 10⁸ CFU g⁻¹ soil. BOX-based SCAR markers were useful to find primers for strain-specific real-time PCR quantification of each A. brasilense inoculant in the maize rhizosphere. Effective root colonization is a prerequisite for successful Azospirillum phytostimulation, but cultivation-independent monitoring methods were lacking. The real-time PCR methods developed here will help understand the effect of environmental conditions on root colonization and phytostimulation by A. brasilense UAP-154 and CFN-535.

  14. Sampling in ecology and evolution - bridging the gap between theory and practice

    Science.gov (United States)

    Albert, C.H.; Yoccoz, N.G.; Edwards, T.C.; Graham, C.H.; Zimmermann, N.E.; Thuiller, W.

    2010-01-01

    Sampling is a key issue for answering most ecological and evolutionary questions. The importance of developing a rigorous sampling design tailored to specific questions has already been discussed in the ecological and sampling literature and has provided useful tools and recommendations to sample and analyse ecological data. However, sampling issues are often difficult to overcome in ecological studies due to apparent inconsistencies between theory and practice, often leading to the implementation of simplified sampling designs that suffer from unknown biases. Moreover, we believe that classical sampling principles which are based on estimation of means and variances are insufficient to fully address many ecological questions that rely on estimating relationships between a response and a set of predictor variables over time and space. Our objective is thus to highlight the importance of selecting an appropriate sampling space and an appropriate sampling design. We also emphasize the importance of using prior knowledge of the study system to estimate models or complex parameters and thus better understand ecological patterns and processes generating these patterns. Using a semi-virtual simulation study as an illustration we reveal how the selection of the space (e.g. geographic, climatic), in which the sampling is designed, influences the patterns that can be ultimately detected. We also demonstrate the inefficiency of common sampling designs to reveal response curves between ecological variables and climatic gradients. Further, we show that response-surface methodology, which has rarely been used in ecology, is much more efficient than more traditional methods. Finally, we discuss the use of prior knowledge, simulation studies and model-based designs in defining appropriate sampling designs. We conclude by a call for development of methods to unbiasedly estimate nonlinear ecologically relevant parameters, in order to make inferences while fulfilling requirements of

  15. Three axis electronic flight motion simulator real time control system design and implementation.

    Science.gov (United States)

    Gao, Zhiyuan; Miao, Zhonghua; Wang, Xuyong; Wang, Xiaohua

    2014-12-01

    A three-axis electronic flight motion simulator is reported in this paper, including the modelling, the controller design and the hardware implementation. This flight motion simulator can be used for inertial navigation tests and for high-precision inertial navigation systems with good dynamic and static performance. A real-time control system is designed, and several control system implementation problems are solved, including time unification with a parallel port interrupt, a high-speed zero-finding method for the rotary inductosyn, and zero-crossing management with continuous rotation. Tests were carried out to show the effectiveness of the proposed real-time control system.

  16. Three axis electronic flight motion simulator real time control system design and implementation

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Zhiyuan; Miao, Zhonghua, E-mail: zhonghua-miao@163.com; Wang, Xiaohua [School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200072 (China); Wang, Xuyong [School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2014-12-15

    A three-axis electronic flight motion simulator is reported in this paper, including the modelling, the controller design and the hardware implementation. This flight motion simulator can be used for inertial navigation tests and for high-precision inertial navigation systems with good dynamic and static performance. A real-time control system is designed, and several control system implementation problems are solved, including time unification with a parallel port interrupt, a high-speed zero-finding method for the rotary inductosyn, and zero-crossing management with continuous rotation. Tests were carried out to show the effectiveness of the proposed real-time control system.

  17. Component-based engineering of real-time JAVA : applications on a polychronous design platform

    OpenAIRE

    Talpin , Jean-Pierre; Le Dez , Bruno; Gamatié , Abdoulaye; Le Guernic , Paul; Berner , David

    2003-01-01

    Rising complexity and performance of embedded systems, shortening time-to-market demands for digital equipment, and growing installed bases of intellectual property stress high-level design as a prominent research topic to compensate for a widening productivity gap. To this aim, we put the principles of polychronous design (i.e. multi-clocked and synchronous) to work in the context of the real-time Java programming language by introducing a method for modeling, transforming, verifying and simu...

  18. Lyapunov stability robust analysis and robustness design for linear continuous-time systems

    NARCIS (Netherlands)

    Luo, J.S.; Johnson, A.; Bosch, van den P.P.J.

    1995-01-01

    The linear continuous-time systems to be discussed are described by state space models with structured time-varying uncertainties. First, the explicit maximal perturbation bound for maintaining quadratic Lyapunov stability of the closed-loop systems is presented. Then, a robust design method is

  19. Symbol synchronization and sampling frequency synchronization techniques in real-time DDO-OFDM systems

    Science.gov (United States)

    Chen, Ming; He, Jing; Cao, Zizheng; Tang, Jin; Chen, Lin; Wu, Xian

    2014-09-01

    In this paper, we propose and experimentally demonstrate symbol synchronization and sampling frequency synchronization techniques in a real-time direct-detection optical orthogonal frequency division multiplexing (DDO-OFDM) system, over 100-km standard single mode fiber (SSMF), using a cost-effective directly modulated distributed feedback (DFB) laser. The experimental results show that the proposed symbol synchronization based on a training sequence (TS) has low complexity and high accuracy even at a sampling frequency offset (SFO) of 5000 ppm. Meanwhile, the proposed pilot-assisted sampling frequency synchronization between the digital-to-analog converter (DAC) and the analog-to-digital converter (ADC) is capable of estimating SFOs accurately; the technique can also compensate SFO effects when a small residual SFO remains due to deviations in SFO estimation or a low-precision or unstable clock source. The two synchronization techniques are suitable for high-speed DDO-OFDM transmission systems.
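
    Training-sequence-based symbol synchronization of the kind proposed reduces, in its simplest baseband form, to locating the known TS in the received stream by cross-correlation. The following generic sketch uses a synthetic signal with arbitrary TS length and noise level; it illustrates the principle, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

# Known training sequence embedded in a longer received stream
ts = rng.choice([-1.0, 1.0], 64)
frame = np.concatenate([rng.standard_normal(300), ts, rng.standard_normal(200)])
rx = frame + 0.3 * rng.standard_normal(frame.size)   # noisy reception

# Slide the TS over the received samples; the correlation peak marks
# the symbol (frame) start.
corr = np.correlate(rx, ts, mode="valid")
start = int(np.argmax(np.abs(corr)))
print(f"detected TS start: {start} (true: 300)")
```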

  20. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    Science.gov (United States)

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important to understand the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of the time and sample dimensions. Thus, the analysis of such time series data seeks to search for gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting three-dimensional data, i.e. gene-time-condition. The computational complexity of analyzing such data is very high, compared to the already difficult NP-hard two-dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression patterns in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high-throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools, and only TimesVector successfully detected clusters with differential expression patterns across conditions. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr. Supplementary data are available at
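
    Step (i) of the pipeline, clustering time-condition concatenated vectors, can be sketched directly. The toy example below (random data, an arbitrary cluster count of 10, and a crude stand-in for step (ii)'s similar-versus-distinct pattern check) unfolds a gene × time × condition array into vectors and clusters them with k-means; it is an illustration of the vectorized idea, not the TimesVector implementation. The `seed` keyword of `kmeans2` requires a recent SciPy.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(8)

# Three-dimensional data: genes x time points x conditions
n_genes, n_time, n_cond = 500, 8, 3
data = rng.standard_normal((n_genes, n_time, n_cond))

# Concatenate each gene's time courses across conditions, then cluster
vectors = data.reshape(n_genes, n_time * n_cond)
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)  # scale out magnitude

centroids, labels = kmeans2(vectors, k=10, seed=0, minit="++")

# Crude stand-in for step (ii): flag clusters whose mean time course
# differs across conditions (differentially expressed patterns).
for c in range(10):
    mean_pat = centroids[c].reshape(n_time, n_cond)
    spread = np.abs(mean_pat - mean_pat.mean(axis=1, keepdims=True)).max()
    print(c, "distinct across conditions" if spread > 0.05 else "similar")
```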