Using variance components to estimate power in a hierarchically nested sampling design.
Dzul, Maria C; Dixon, Philip M; Quist, Michael C; Dinsmore, Stephen J; Bower, Michael R; Wilson, Kevin P; Gaines, D Bailey
2013-01-01
We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). The sampling design for larval DHP included surveys (5 days each spring, 2007-2009), events, and plots. Each survey comprised three counting events, in which DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling was the most realistic option for increasing the survey's power, that fixed plot designs had greater power than random plot designs, and that the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.
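A minimal sketch of how variance-component estimates feed such a design comparison, for a two-level nested design (counting events within a survey, plots within each event). The component values and design sizes below are invented for illustration; the study estimated the real ones from the larval count data.

```python
# Hedged sketch: estimated variance components translate directly into the
# precision of a survey mean under different sample-size allocations.
# s2_event and s2_plot are invented illustration values, not study estimates.

def var_of_mean(s2_event, s2_plot, n_events, n_plots):
    """Variance of a survey mean for a two-level nested design
    (counting events within a survey, plots within each event)."""
    return s2_event / n_events + s2_plot / (n_events * n_plots)

base        = var_of_mean(4.0, 9.0, n_events=3, n_plots=9)   # current design
more_events = var_of_mean(4.0, 9.0, n_events=6, n_plots=9)   # double events
more_plots  = var_of_mean(4.0, 9.0, n_events=3, n_plots=18)  # double plots
```

Doubling events shrinks both variance terms, while doubling plots shrinks only the lowest-level term; weighing the statistically better option against the logistically cheaper one is exactly the trade-off a power analysis over such formulas makes explicit.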
DEFF Research Database (Denmark)
Thomadsen, Tommy
2005-01-01
The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme of different types of hierarchical networks. This is supplemented by a review of ring network design problems and a presentation of a model allowing for modeling most hierarchical networks. We use methods based on linear programming to design the hierarchical networks. Thus, a brief introduction to the various linear programming based methods is included. The thesis is thus suitable as a foundation for study of design of hierarchical networks. The major contribution of the thesis consists of seven papers which are included in the appendix. The papers address hierarchical network design and/or ring network design.
Advanced hierarchical distance sampling
Royle, Andy
2016-01-01
In this chapter, we cover a number of important extensions of the basic hierarchical distance-sampling (HDS) framework from Chapter 8. First, we discuss the inclusion of “individual covariates,” such as group size, in the HDS model. This is important in many surveys where animals form natural groups that are the primary observation unit, with the size of the group expected to have some influence on detectability. We also discuss HDS integrated with time-removal and double-observer or capture-recapture sampling. These “combined protocols” can be formulated as HDS models with individual covariates, and thus they have a commonality with HDS models involving group structure (group size being just another individual covariate). We cover several varieties of open-population HDS models that accommodate population dynamics. On one end of the spectrum, we cover models that allow replicate distance sampling surveys within a year, which estimate abundance relative to availability and temporary emigration through time. We consider a robust design version of that model. We then consider models with explicit dynamics based on the Dail and Madsen (2011) model and the work of Sollmann et al. (2015). The final major theme of this chapter is relatively newly developed spatial distance sampling models that accommodate explicit models describing the spatial distribution of individuals known as Point Process models. We provide novel formulations of spatial DS and HDS models in this chapter, including implementations of those models in the unmarked package using a hack of the pcount function for N-mixture models.
DEFF Research Database (Denmark)
Thomadsen, Tommy
2005-01-01
Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping ... the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks. The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme ...
Adaptive Sampling in Hierarchical Simulation
Energy Technology Data Exchange (ETDEWEB)
Knap, J; Barton, N R; Hornung, R D; Arsenlis, A; Becker, R; Jefferson, D R
2007-07-09
We propose an adaptive sampling methodology for hierarchical multi-scale simulation. The method utilizes a moving kriging interpolation to significantly reduce the number of evaluations of finer-scale response functions to provide essential constitutive information to a coarser-scale simulation model. The underlying interpolation scheme is unstructured and adaptive to handle the transient nature of a simulation. To handle the dynamic construction and searching of a potentially large set of finer-scale response data, we employ a dynamic metric tree database. We study the performance of our adaptive sampling methodology for a two-level multi-scale model involving a coarse-scale finite element simulation and a finer-scale crystal plasticity based constitutive law.
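A loose illustration of the adaptive-sampling idea (not the paper's kriging scheme: the tolerance, the stand-in fine-scale function, and the inverse-distance interpolation below are all invented): a surrogate answers most queries from previously stored fine-scale evaluations and only runs the expensive model when no stored sample is close enough.

```python
import math

def expensive_fine_scale(x):
    """Stand-in for a costly finer-scale response function (invented)."""
    return math.sin(3.0 * x) + 0.5 * x

class AdaptiveSampler:
    """Answers a query from cached fine-scale evaluations when a stored
    sample lies within `tol`; otherwise runs the fine-scale model and
    stores the result. Inverse-distance weighting stands in for the
    paper's moving kriging interpolation."""
    def __init__(self, tol):
        self.tol = tol
        self.xs, self.ys = [], []
        self.evals = 0  # number of fine-scale model runs

    def query(self, x):
        near = [(abs(x - xi), yi) for xi, yi in zip(self.xs, self.ys)
                if abs(x - xi) <= self.tol]
        if near:  # interpolate from nearby stored responses
            ws = [(1.0 / (d + 1e-12), y) for d, y in near]
            return sum(w * y for w, y in ws) / sum(w for w, _ in ws)
        y = expensive_fine_scale(x)
        self.evals += 1
        self.xs.append(x)
        self.ys.append(y)
        return y

sampler = AdaptiveSampler(tol=0.05)
results = [sampler.query(i / 1000.0) for i in range(1000)]
# A thousand queries are served with only a few dozen fine-scale runs.
```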
Ross, Kenneth N.
1987-01-01
This article considers various kinds of probability and non-probability samples in both experimental and survey studies. Throughout, how a sample is chosen is stressed. Size alone is not the determining consideration in sample selection. Good samples do not occur by accident; they are the result of a careful design. (Author/JAZ)
Design of Hierarchical Structures for Synchronized Deformations
Seifi, Hamed; Javan, Anooshe Rezaee; Ghaedizadeh, Arash; Shen, Jianhu; Xu, Shanqing; Xie, Yi Min
2017-01-01
In this paper we propose a general method for creating a new type of hierarchical structures at any level in both 2D and 3D. A simple rule based on a rotate-and-mirror procedure is introduced to achieve multi-level hierarchies. These new hierarchical structures have remarkably few degrees of freedom compared to existing designs by other methods. More importantly, these structures exhibit synchronized motions during opening or closure, resulting in uniform and easily-controllable deformations. Furthermore, a simple analytical formula is found which can be used to avoid collision of units of the structure during the closing process. The novel design concept is verified by mathematical analyses, computational simulations and physical experiments.
Hierarchical Design Method for Micro Device
Directory of Open Access Journals (Sweden)
Zheng Liu
2013-05-01
Traditional mask-level design flows for micro devices are unintuitive and cumbersome for designers. A hierarchical design method, together with the key technologies involved in the feature-mapping procedure, is presented. Within the feature-based design framework, the model of a micro device is organized by various features at different design stages, which can be converted into each other based on mapping rules. The feature technology is the foundation of the three-level design flow, which provides a more efficient way to design. At the system level, functional features provide the top-level schematic and functional description. After the functional mapping procedure, parametric design features construct the 3D model of the micro device at the device level, based on a hybrid model representation. By means of constraint features, the corresponding revision rules are applied to the rough model to optimize the original structure. The model reconstruction algorithm supports the model revision and constraint-feature mapping process. Moreover, the formal description of manufacturing-feature derivation provides an automatic way to convert models.
Hierarchical Codebook Design for Massive MIMO
Directory of Open Access Journals (Sweden)
Xin Su
2015-02-01
Massive MIMO is an emerging research area: the more antennas transmitters or receivers are equipped with, the higher the spectral efficiency and link reliability the system can provide. Because the feedback channel is limited, precoding and codebook design are important for exploiting the performance of massive MIMO. To improve precoding performance, we propose a novel hierarchical codebook with Fourier-based perturbation matrices as the subcodebook and the Kerdock codebook as the main codebook, which reduces storage and search complexity due to the finite alphabet. Moreover, to further reduce the search complexity and feedback overhead without noticeable performance degradation, we use an adaptive selection algorithm to decide whether to use the subcodebook. Simulation results show that the proposed codebook has remarkable performance gain compared to the conventional Kerdock codebook, without significant increase in feedback overhead and search complexity.
Hierarchical Network Design Using Simulated Annealing
DEFF Research Database (Denmark)
Thomadsen, Tommy; Clausen, Jens
2002-01-01
The hierarchical network problem is the problem of finding the least cost network, with nodes divided into groups, edges connecting nodes in each group, and groups ordered in a hierarchy. The idea of hierarchical networks comes from telecommunication networks where hierarchies exist. Hierarchical networks are described and a mathematical model is proposed for a two level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing, which as a sub-algorithm uses a construction algorithm to determine edges and route the demand. Performance for different versions of the algorithm is reported in terms of runtime and quality of the solutions. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes.
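The simulated-annealing heuristic can be sketched generically. The toy problem below (balanced grouping of six nodes so that few invented demand pairs are split between groups) merely stands in for the paper's network design problem; it is not the authors' formulation.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=1):
    """Generic simulated-annealing loop: worse moves are accepted with
    probability exp(-delta/T), letting the search escape local minima."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling  # geometric cooling schedule
    return best, best_c

# Toy stand-in for grouping in a hierarchical network: put 6 nodes into two
# balanced groups so that as few demand pairs as possible are split between
# groups. The demand pairs (a simple path) are invented for illustration.
demands = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]

def cost(assign):
    return sum(1 for a, b in demands if assign[a] != assign[b])

def neighbor(assign, rng):
    # Swap one node from each group, preserving the 3/3 balance.
    y = list(assign)
    i = rng.choice([k for k, v in enumerate(y) if v == 0])
    j = rng.choice([k for k, v in enumerate(y) if v == 1])
    y[i], y[j] = 1, 0
    return tuple(y)

best, best_c = simulated_annealing(cost, neighbor, (0, 1, 0, 1, 0, 1))
# The best balanced split cuts the demand path exactly once.
```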
Micromechanical design of hierarchical composites using global load sharing theory
Rajan, V. P.; Curtin, W. A.
2016-05-01
Hierarchical composites, embodied by natural materials ranging from bone to bamboo, may offer combinations of material properties inaccessible to conventional composites. Using global load sharing (GLS) theory, a well-established micromechanics model for composites, we develop accurate numerical and analytical predictions for the strength and toughness of hierarchical composites with arbitrary fiber geometries, fiber strengths, interface properties, and number of hierarchical levels, N. The model demonstrates that two key material properties at each hierarchical level, a characteristic strength and a characteristic fiber length, control the scalings of composite properties. One crucial finding is that short- and long-fiber composites behave radically differently. Long-fiber composites are significantly stronger than short-fiber composites, by a factor of 2N or more; they are also significantly tougher because their fiber breaks are bridged by smaller-scale fibers that dissipate additional energy. Indeed, an "infinite" fiber length appears to be optimal in hierarchical composites. However, at the highest level of the composite, long fibers localize on planes of pre-existing damage, and thus short fibers must be employed instead to achieve notch sensitivity and damage tolerance. We conclude by providing simple guidelines for microstructural design of hierarchical composites, including the selection of N, the fiber lengths, the ratio of length scales at successive hierarchical levels, the fiber volume fractions, and the desired properties of the smallest-scale reinforcement. Our model enables superior hierarchical composites to be designed in a rational way, without resorting either to numerical simulation or trial-and-error-based experimentation.
Urban pattern: Layout design by hierarchical domain splitting
Yang, Yongliang
2013-11-01
We present a framework for generating street networks and parcel layouts. Our goal is the generation of high-quality layouts that can be used for urban planning and virtual environments. We propose a solution based on hierarchical domain splitting using two splitting types: streamline-based splitting, which splits a region along one or multiple streamlines of a cross field, and template-based splitting, which warps pre-designed templates to a region and uses the interior geometry of the template as the splitting lines. We combine these two splitting approaches into a hierarchical framework, providing automatic and interactive tools to explore the design space.
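A hedged sketch of hierarchical domain splitting in its simplest form. A plain recursive binary split along the longer axis stands in for the paper's streamline- and template-based splitting rules; the region size, split bounds, and area threshold are invented.

```python
import random

def split_domain(rect, min_area, rng):
    """Recursively split an axis-aligned region (x, y, w, h) into parcels:
    each region larger than `min_area` is cut along its longer axis at a
    random position. A plain binary split stands in for the paper's
    streamline-based and template-based splitting operations."""
    x, y, w, h = rect
    if w * h <= min_area:
        return [rect]
    t = rng.uniform(0.3, 0.7)  # keep cuts away from the region boundary
    if w >= h:
        a, b = (x, y, w * t, h), (x + w * t, y, w * (1 - t), h)
    else:
        a, b = (x, y, w, h * t), (x, y + h * t, w, h * (1 - t))
    return split_domain(a, min_area, rng) + split_domain(b, min_area, rng)

rng = random.Random(7)
parcels = split_domain((0.0, 0.0, 100.0, 60.0), min_area=200.0, rng=rng)
total_area = sum(w * h for _, _, w, h in parcels)
# The parcels tile the domain exactly, and every parcel is below min_area.
```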
Hierarchical robust nonlinear switching control design for propulsion systems
Leonessa, Alexander
1999-09-01
The desire for developing an integrated control system- design methodology for advanced propulsion systems has led to significant activity in modeling and control of flow compression systems in recent years. In this dissertation we develop a novel hierarchical switching control framework for addressing the compressor aerodynamic instabilities of rotating stall and surge. The proposed control framework accounts for the coupling between higher-order modes while explicitly addressing actuator rate saturation constraints and system modeling uncertainty. To develop a hierarchical nonlinear switching control framework, first we develop generalized Lyapunov and invariant set theorems for nonlinear dynamical systems wherein all regularity assumptions on the Lyapunov function and the system dynamics are removed. In particular, local and global stability theorems are given using lower semicontinuous Lyapunov functions. Furthermore, generalized invariant set theorems are derived wherein system trajectories converge to a union of largest invariant sets contained in intersections over finite intervals of the closure of generalized Lyapunov level surfaces. The proposed results provide transparent generalizations to standard Lyapunov and invariant set theorems. Using the generalized Lyapunov and invariant set theorems, a nonlinear control-system design framework predicated on a hierarchical switching controller architecture parameterized over a set of moving system equilibria is developed. Specifically, using equilibria- dependent Lyapunov functions, a hierarchical nonlinear control strategy is developed that stabilizes a given nonlinear system by stabilizing a collection of nonlinear controlled subsystems. The switching nonlinear controller architecture is designed based on a generalized lower semicontinuous Lyapunov function obtained by minimizing a potential function over a given switching set induced by the parameterized system equilibria. The proposed framework provides a
Design of Hierarchical Ring Networks Using Branch-and-Price
DEFF Research Database (Denmark)
Thomadsen, Tommy; Stidsen, Thomas K.
2004-01-01
We consider the problem of designing hierarchical two layer ring networks. The top layer consists of a federal-ring which establishes connection between a number of node disjoint metro-rings in a bottom layer. The objective is to minimize the costs of links in the network, taking both the fixed link establishment costs and the link capacity costs into account. The hierarchical two layer ring network design problem is solved in two stages: First the bottom layer, i.e. the metro-rings, is designed, implicitly taking into account the capacity cost of the federal-ring. Then the federal-ring is designed connecting the metro-rings, minimizing fixed link establishment costs of the federal-ring. A branch-and-price algorithm is presented for the design of the bottom layer and it is suggested that existing methods are used for the design of the federal-ring. Computational results are given ...
Feiveson, A. H.; Chhikara, R. S.; Hallum, C. R. (Principal Investigator)
1979-01-01
The sampling design in LACIE consisted of two major components, one for wheat acreage estimation and one for wheat yield prediction. The acreage design was basically a classical survey for which the sampling unit was a 5- by 6-nautical mile segment; however, there were complications caused by measurement errors and loss of data. Yield was predicted by sampling meteorological data from weather stations within a region and then using those data as input to previously fitted regression equations. Wheat production was not estimated directly, but was computed by multiplying yield and acreage estimates. The allocation of samples to countries is discussed as well as the allocation and selection of segments in strata/substrata.
Data with hierarchical structure: impact of intraclass correlation and sample size on type-I error.
Musca, Serban C; Kamiejski, Rodolphe; Nugier, Armelle; Méot, Alain; Er-Rafiy, Abdelatif; Brauer, Markus
2011-01-01
Least squares analyses (e.g., ANOVAs, linear regressions) of hierarchical data lead to Type-I error rates that depart severely from the nominal Type-I error rate assumed. Thus, when least squares methods are used to analyze hierarchical data coming from designs in which some groups are assigned to the treatment condition and others to the control condition (i.e., the widely used "groups nested under treatment" experimental design), the Type-I error rate is seriously inflated, leading too often to the incorrect rejection of the null hypothesis (i.e., the incorrect conclusion of an effect of the treatment). To highlight the severity of the problem, we present simulations showing how the Type-I error rate is affected under different conditions of intraclass correlation and sample size. For all simulations, the Type-I error rate after application of the popular Kish (1965) correction is also considered, and the limitations of this correction technique are discussed. We conclude with suggestions on how one should collect and analyze data bearing a hierarchical structure.
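The inflation is easy to reproduce in a small simulation (a sketch with invented sizes and variances, not the authors' simulation design): under a true null, group-level random effects induce intraclass correlation, and a test that treats all observations as independent rejects far more often than 5%.

```python
import math
import random

def clustered_null_sim(reps=500, groups=4, m=25, group_sd=1.0, seed=42):
    """Simulate the 'groups nested under treatment' design under a true
    null. Group-level random effects (group_sd) induce intraclass
    correlation (here ICC = 0.5), but the naive z-test below treats all
    observations as independent. All sizes and variances are invented."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        def condition():
            obs = []
            for _ in range(groups):
                g = rng.gauss(0.0, group_sd)  # shared group effect
                obs.extend(g + rng.gauss(0.0, 1.0) for _ in range(m))
            return obs
        a, b = condition(), condition()
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        pooled = (sum((x - ma) ** 2 for x in a) +
                  sum((x - mb) ** 2 for x in b)) / (2 * n - 2)
        z = (ma - mb) / math.sqrt(2.0 * pooled / n)  # naive z statistic
        p = math.erfc(abs(z) / math.sqrt(2.0))       # two-sided p-value
        if p < 0.05:
            rejections += 1
    return rejections / reps

rate = clustered_null_sim()
# With ICC = 0.5 and 25 observations per group, the naive test rejects the
# true null many times more often than the nominal 5%.
```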
Hierarchical Ring Network Design Using Branch-and-Price
DEFF Research Database (Denmark)
Thomadsen, Tommy; Stidsen, Thomas K.
2005-01-01
We consider the problem of designing hierarchical two layer ring networks. The top layer consists of a federal-ring which establishes connection between a number of node disjoint metro-rings in a bottom layer. The objective is to minimize the costs of links in the network, taking both the fixed link establishment costs and the link capacity costs into account. Hierarchical ring network design problems combine the following optimization problems: Clustering, hub selection, metro ring design, federal ring design and routing problems. In this paper a branch-and-price algorithm is presented for jointly solving the clustering problem, the metro ring design problem and the routing problem. Computational results are given for networks with up to 36 nodes.
Impact of hierarchical memory systems on linear algebra algorithm design
Energy Technology Data Exchange (ETDEWEB)
Gallivan, K.; Jalby, W.; Meier, U.; Sameh, A.H.
1988-01-01
Linear algebra algorithms based on the BLAS or extended BLAS do not achieve high performance on multivector processors with a hierarchical memory system because of a lack of data locality. For such machines, block linear algebra algorithms must be implemented in terms of matrix-matrix primitives (BLAS3). Designing efficient linear algebra algorithms for these architectures requires analysis of the behavior of the matrix-matrix primitives and the resulting block algorithms as a function of certain system parameters. The analysis must identify the limits of performance improvement possible via blocking and any contradictory trends that require trade-off consideration. The authors propose a methodology that facilitates such an analysis and use it to analyze the performance of the BLAS3 primitives used in block methods. A similar analysis of the block size-performance relationship is also performed at the algorithm level for block versions of the LU decomposition and the Gram-Schmidt orthogonalization procedures.
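The blocking idea can be sketched in plain Python (illustrative only; real BLAS3 kernels are tuned native code). The blocked version walks the matrices one tile at a time, so each tile of A and B is reused many times while it would be resident in fast memory, which is the access pattern that gives matrix-matrix primitives their data locality.

```python
def matmul_naive(A, B):
    """Reference triple loop: row-times-column with little data reuse."""
    n, m, k = len(A), len(B[0]), len(B)
    return [[sum(A[i][l] * B[l][j] for l in range(k)) for j in range(m)]
            for i in range(n)]

def matmul_blocked(A, B, bs=2):
    """Blocked (tiled) multiply: the outer loops walk bs-by-bs sub-blocks;
    the inner loops compute a small matrix-matrix product per tile."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, bs):
        for l0 in range(0, k, bs):
            for j0 in range(0, m, bs):
                for i in range(i0, min(i0 + bs, n)):
                    for l in range(l0, min(l0 + bs, k)):
                        a = A[i][l]
                        row = B[l]
                        for j in range(j0, min(j0 + bs, m)):
                            C[i][j] += a * row[j]
    return C

A = [[float(i + j) for j in range(5)] for i in range(4)]      # 4 x 5
B = [[float(i * j + 1) for j in range(3)] for i in range(5)]  # 5 x 3
C_naive = matmul_naive(A, B)
C_blocked = matmul_blocked(A, B, bs=2)
```

Both versions do the same arithmetic on integer-valued floats, so the results match exactly; only the traversal order (and hence the memory behavior) differs.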
Determining the Bayesian optimal sampling strategy in a hierarchical system.
Energy Technology Data Exchange (ETDEWEB)
Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre
2010-09-01
Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
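A deliberately simplified, hypothetical stand-in for the paper's Bayes-risk procedure: for a series system with reliability prod_i p_i and independent Beta priors, approximate the system-reliability variance by the delta method and greedily assign each test to the component whose expected posterior-variance reduction helps most. (For a Beta(a, b) prior, one more Bernoulli test shrinks the expected posterior variance by the factor (a+b)/(a+b+1); posterior means are held fixed during planning. All prior parameters are invented.)

```python
def beta_var(a, b):
    """Variance of a Beta(a, b) distribution."""
    n = a + b
    return a * b / (n * n * (n + 1.0))

def greedy_allocation(priors, n_tests):
    """Greedy preposterior sketch: Var(prod p_i) is approximated as
    sum_i (prod_{j != i} mean_j^2) * Var(p_i), and each test goes to the
    component maximizing the expected drop in that approximation."""
    means = {k: a / (a + b) for k, (a, b) in priors.items()}
    var = {k: beta_var(a, b) for k, (a, b) in priors.items()}
    n = {k: float(a + b) for k, (a, b) in priors.items()}
    alloc = {k: 0 for k in priors}
    for _ in range(n_tests):
        def reduction(k):
            w = 1.0
            for j, mj in means.items():
                if j != k:
                    w *= mj * mj
            # Expected shrink factor for one more test is n/(n+1).
            return w * var[k] * (1.0 - n[k] / (n[k] + 1.0))
        k = max(priors, key=reduction)
        alloc[k] += 1
        var[k] *= n[k] / (n[k] + 1.0)
        n[k] += 1.0
    return alloc

# Component "A" has a weak prior (high uncertainty); "B" is well tested.
alloc = greedy_allocation({"A": (2.0, 2.0), "B": (20.0, 2.0)}, n_tests=10)
```

As expected, the budget flows to the poorly characterized component rather than being spread uniformly, which is the qualitative point of optimizing the sampling strategy instead of testing everything.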
Bellan, Steve E; Gimenez, Olivier; Choquet, Rémi; Getz, Wayne M
2013-04-01
Distance sampling is widely used to estimate the abundance or density of wildlife populations. Methods to estimate wildlife mortality rates have developed largely independently from distance sampling, despite the conceptual similarities between estimation of cumulative mortality and the population density of living animals. Conventional distance sampling analyses rely on the assumption that animals are distributed uniformly with respect to transects and thus require randomized placement of transects during survey design. Because mortality events are rare, however, it is often not possible to obtain precise estimates in this way without infeasible levels of effort. A great deal of wildlife data, including mortality data, is available via road-based surveys. Interpreting these data in a distance sampling framework requires accounting for the non-uniform sampling. Additionally, analyses of opportunistic mortality data must account for the decline in carcass detectability through time. We develop several extensions to distance sampling theory to address these problems. We build mortality estimators in a hierarchical framework that integrates animal movement data, surveillance effort data, and motion-sensor camera trap data, respectively, to relax the uniformity assumption, account for spatiotemporal variation in surveillance effort, and explicitly model carcass detection and disappearance as competing ongoing processes. Analysis of simulated data showed that our estimators were unbiased and that their confidence intervals had good coverage. We also illustrate our approach on opportunistic carcass surveillance data acquired in 2010 during an anthrax outbreak in the plains zebra of Etosha National Park, Namibia. The methods developed here will allow researchers and managers to infer mortality rates from opportunistic surveillance data.
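For orientation, a sketch of the conventional (uniform, randomized-transect) case that these extensions relax, with all numbers invented: detection follows a half-normal curve, sigma is estimated from the observed perpendicular distances, and the effective strip width converts the count into an abundance estimate.

```python
import math
import random

def strip_survey(n_animals=2000, strip_w=50.0, sigma=5.0, seed=3):
    """Conventional distance-sampling sketch: an animal at perpendicular
    distance d from the transect is detected with probability
    exp(-d^2 / (2 sigma^2)). Animals are uniform w.r.t. the transect,
    which is exactly the assumption the paper's extensions relax."""
    rng = random.Random(seed)
    seen = []
    for _ in range(n_animals):
        d = rng.uniform(0.0, strip_w)
        if rng.random() < math.exp(-d * d / (2.0 * sigma * sigma)):
            seen.append(d)
    # Half-normal MLE for sigma (truncation at strip_w is negligible here),
    # then the effective strip width scales detections up to abundance.
    sigma_hat = math.sqrt(sum(d * d for d in seen) / len(seen))
    esw = math.sqrt(math.pi / 2.0) * sigma_hat
    return len(seen) * strip_w / esw

n_hat = strip_survey()  # true abundance in the simulation is 2000
```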
Virtual Screening and Molecular Design Based on Hierarchical Qsar Technology
Kuz'min, Victor E.; Artemenko, A. G.; Muratov, Eugene N.; Polischuk, P. G.; Ognichenko, L. N.; Liahovsky, A. V.; Hromov, A. I.; Varlamova, E. V.
This chapter is devoted to the hierarchical QSAR technology (HiT QSAR) based on simplex representation of molecular structure (SiRMS) and its application to different QSAR/QSPR tasks. The essence of this technology is a sequential solution of the QSAR paradigm (using the information obtained in previous steps) by a series of enhanced models based on molecular structure description, in a specific order from 1D to 4D; in effect, it is a system of continually improved solutions. Different approaches for estimating the domain of applicability are implemented in HiT QSAR. In the SiRMS approach, every molecule is represented as a system of different simplexes (tetratomic fragments with fixed composition, structure, chirality, and symmetry). The level of detail of the simplex descriptors increases consecutively from the 1D to the 4D representation of the molecular structure. The advantages of the approach are the ability to solve QSAR/QSPR tasks for mixtures of compounds, the absence of the "molecular alignment" problem, consideration of different physical-chemical properties of atoms (e.g., charge, lipophilicity), the high adequacy and good interpretability of the obtained models, and clear routes for molecular design. The efficiency of HiT QSAR was demonstrated by comparison with the most popular modern QSAR approaches on two representative examination sets. Examples of successful application of HiT QSAR to various QSAR/QSPR investigations at the different levels (1D-4D) of molecular structure description are also highlighted. The reliability of the developed QSAR models as predictive virtual screening tools, and their ability to serve as the basis of directed drug design, was validated by subsequent synthetic, biological, and other experiments. HiT QSAR is realized as a suite of computer programs termed the "HiT QSAR" software, which includes powerful statistical capabilities and a number of useful utilities.
Reliability and Hierarchical Structure of DSM-5 Pathological Traits in a Danish Mixed Sample
DEFF Research Database (Denmark)
Bo, Sune; Bach, Bo; Mortensen, Erik Lykke
2016-01-01
In this study we assessed the DSM-5 trait model in a large Danish sample (n = 1,119) with respect to reliability of the applied Danish version of the Personality Inventory for DSM-5 (PID-5) self-report form by means of internal consistency and item discrimination. In addition, we tested whether the five-factor structure of the DSM-5 trait model can be replicated in a Danish independent sample using the PID-5 self-report form. Finally, we examined the hierarchical structure of DSM-5 traits. In terms of internal consistency and item discrimination, the applied PID-5 scales were generally found reliable and functional; our data resembled the five-factor structure of previous findings, and we identified a hierarchical structure from one to five factors that was conceptually reasonable and corresponded with existing findings. These results support the new DSM-5 trait model and suggest that it can ...
Institute of Scientific and Technical Information of China (English)
LIU Hu; TIAN Yongliang; ZHANG Chaoying; YIN Jiao; SUN Yijie
2012-01-01
In order to better take into account requirements for commercial operations or military missions in new flight vehicle design, a tri-hierarchical task classification model of "design for operation" is proposed, which covers basic man-object interaction tasks, complex collaborative operations, and large-scale joint operations. The corresponding general architecture of evaluation criteria is also depicted. Then a virtual simulation-based approach to implementing the evaluations at the three hierarchy levels is analyzed with a detailed example, which validates the feasibility and effectiveness of the evaluation architecture. Finally, extending the virtual simulation architecture from design to operation training is discussed.
Joint hierarchical models for sparsely sampled high-dimensional LiDAR and forest variables
Finley, Andrew O.; Banerjee, Sudipto; Zhou, Yuzhen; Cook, Bruce D; Babcock, Chad
2016-01-01
Recent advancements in remote sensing technology, specifically Light Detection and Ranging (LiDAR) sensors, provide the data needed to quantify forest characteristics at a fine spatial resolution over large geographic domains. From an inferential standpoint, there is interest in prediction and interpolation of the often sparsely sampled and spatially misaligned LiDAR signals and forest variables. We propose a fully process-based Bayesian hierarchical model for above ground biomass (AGB) and L...
Directory of Open Access Journals (Sweden)
Andrew Cron
Flow cytometry is the prototypical assay for multi-parameter single cell analysis, and is essential in vaccine and biomarker research for the enumeration of antigen-specific lymphocytes that are often found in extremely low frequencies (0.1% or less). Standard analysis of flow cytometry data relies on visual identification of cell subsets by experts, a process that is subjective and often difficult to reproduce. An alternative and more objective approach is the use of statistical models to identify cell subsets of interest in an automated fashion. Two specific challenges for automated analysis are to detect extremely low frequency event subsets without biasing the estimate by pre-processing enrichment, and the ability to align cell subsets across multiple data samples for comparative analysis. In this manuscript, we develop hierarchical modeling extensions to the Dirichlet Process Gaussian Mixture Model (DPGMM) approach we have previously described for cell subset identification, and show that the hierarchical DPGMM (HDPGMM) naturally generates an aligned data model that captures both commonalities and variations across multiple samples. HDPGMM also increases the sensitivity to extremely low frequency events by sharing information across multiple samples analyzed simultaneously. We validate the accuracy and reproducibility of HDPGMM estimates of antigen-specific T cells on clinically relevant reference peripheral blood mononuclear cell (PBMC) samples with known frequencies of antigen-specific T cells. These cell samples take advantage of retrovirally TCR-transduced T cells spiked into autologous PBMC samples to give a defined number of antigen-specific T cells detectable by HLA-peptide multimer binding. We provide open source software that can take advantage of both multiple processors and GPU-acceleration to perform the numerically-demanding computations. We show that hierarchical modeling is a useful probabilistic approach that can provide a ...
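A deliberately simplified, finite-mixture stand-in for the (H)DPGMM idea (a sketch, not the paper's model): EM for a two-component 1-D Gaussian mixture with unit variances, recovering a rare subpopulation from invented data.

```python
import math
import random

def em_two_gaussians(data, mu, iters=100):
    """EM for a two-component 1-D Gaussian mixture with unit variances:
    a simplified stand-in for the DP mixture used for cell subset
    identification, showing how a rare subpopulation is recovered."""
    pi = 0.5  # weight of component 1
    for _ in range(iters):
        resp = []
        for x in data:  # E-step: responsibility of component 1 per point
            p0 = (1.0 - pi) * math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = pi * math.exp(-0.5 * (x - mu[1]) ** 2)
            resp.append(p1 / (p0 + p1))
        n1 = sum(resp)  # M-step: update weight and means
        pi = n1 / len(data)
        mu = (sum((1.0 - r) * x for r, x in zip(resp, data)) / (len(data) - n1),
              sum(r * x for r, x in zip(resp, data)) / n1)
    return pi, mu

rng = random.Random(0)
# 90% "background" events near 0, 10% rare subset near 6 (invented data)
data = ([rng.gauss(0.0, 1.0) for _ in range(900)] +
        [rng.gauss(6.0, 1.0) for _ in range(100)])
pi, mu = em_two_gaussians(data, mu=(-1.0, 1.0))
```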
Directory of Open Access Journals (Sweden)
Ariful Azad
2016-08-01
Full Text Available We describe algorithms for discovering immunophenotypes from large collections of flow cytometry (FC) samples, and using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (a group of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is in the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust since templates describe phenotypic signatures common to cell populations in several samples, while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several data sets, including one representing a healthy immune system, and one of Acute Myeloid Leukemia (AML) samples. The last task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML, and were able to distinguish Acute Promyelocytic Leukemia from other subtypes of AML.
Hierarchical classifier design in high-dimensional, numerous class cases
Kim, Byungyong; Landgrebe, David A.
1991-01-01
As progress in new sensor technology continues, increasingly high spectral resolution sensors are being developed. These sensors give more detailed and complex data for each picture element and greatly increase the dimensionality of data over past systems. Three methods for designing a decision tree classifier are discussed: a top-down approach, a bottom-up approach, and a hybrid approach. Three feature extraction techniques are implemented. Canonical and extended canonical techniques are mainly dependent on the mean difference between two classes. An autocorrelation technique is dependent on the correlation differences. The mathematical relationship among sample size, dimensionality, and risk value is derived.
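As a hedged illustration of the top-down strategy (not the authors' exact procedure), the class set can be split recursively by distances between class means, yielding a binary tree of class groups; the data below are hypothetical:

```python
def class_mean(samples):
    """Mean feature vector of a class (list of equal-length vectors)."""
    n, dim = len(samples), len(samples[0])
    return [sum(s[d] for s in samples) / n for d in range(dim)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def top_down_split(classes):
    """Recursively split a set of classes into a binary tree of class groups.

    classes: dict mapping class label -> list of feature vectors.
    """
    labels = list(classes)
    if len(labels) == 1:
        return labels[0]                      # leaf: a single class
    means = {c: class_mean(classes[c]) for c in labels}
    # seed the two branches with the most separated pair of class means
    a, b = max(((x, y) for x in labels for y in labels if x < y),
               key=lambda p: dist(means[p[0]], means[p[1]]))
    g1, g2 = {a: classes[a]}, {b: classes[b]}
    for c in labels:
        if c not in (a, b):
            target = g1 if dist(means[c], means[a]) <= dist(means[c], means[b]) else g2
            target[c] = classes[c]
    return (top_down_split(g1), top_down_split(g2))

# hypothetical 2-D "spectral" data for four classes
data = {"w": [[0, 0], [1, 0]], "x": [[0, 1], [1, 1]],
        "y": [[10, 10], [11, 10]], "z": [[10, 11]]}
tree = top_down_split(data)
```

Each internal node of the returned tree corresponds to a two-class decision, which is where the per-node feature extraction (canonical, extended canonical, or autocorrelation) would be applied.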
Hierarchical sliding mode control for under-actuated cranes design, analysis and simulation
Qian, Dianwei
2015-01-01
This book reports on the latest developments in sliding mode overhead crane control, presenting novel research ideas and findings on sliding mode control (SMC), hierarchical SMC and compensator design-based hierarchical sliding mode. The results, which were previously scattered across various journals and conference proceedings, are now presented in a systematic and unified form. The book will be of interest to researchers, engineers and graduate students in control engineering and mechanical engineering who want to learn the methods and applications of SMC.
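A minimal toy example of the underlying SMC idea (not the book's hierarchical or compensator-based designs): first-order sliding mode control of a double integrator with a matched disturbance, using a boundary-layer "saturated sign" term to limit chattering; all gains are assumptions chosen for illustration:

```python
import math

def simulate_smc(c=1.0, k=5.0, dt=0.001, T=10.0):
    """Euler simulation of x'' = u + d(t) under sliding mode control."""
    x, dx, t = 1.0, 0.0, 0.0
    while t < T:
        s = dx + c * x                       # sliding surface s = x' + c*x
        sat = max(-1.0, min(1.0, s / 0.01))  # boundary layer of width 0.01
        u = -c * dx - k * sat                # equivalent control + switching term
        d = 0.5 * math.sin(2.0 * t)          # bounded matched disturbance, |d| < k
        dx += (u + d) * dt
        x += dx * dt
        t += dt
    return x, dx

x_final, dx_final = simulate_smc()
```

Because k exceeds the disturbance bound, s is driven to the boundary layer in finite time, after which x decays approximately as exp(-c*t) despite the disturbance.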
[Saarland Growth Study: sampling design].
Danker-Hopfe, H; Zabransky, S
2000-01-01
The use of reference data to evaluate the physical development of children and adolescents is part of the daily routine in the paediatric outpatient clinic. Constructing such references requires the collection of extensive data. There are different kinds of reference data: cross-sectional references, which are based on data collected from a large representative cross-sectional sample of the population; longitudinal references, which are based on follow-up surveys of usually smaller samples of individuals from birth to maturity; and mixed longitudinal references, which combine longitudinal and cross-sectional reference data. The advantages and disadvantages of the different methods of data collection and the resulting reference data are discussed. The Saarland Growth Study was conducted for several reasons: growth processes are subject to secular changes, there are no specific reference data for children and adolescents from this part of the country, and the growth charts in use in paediatric practice are possibly no longer appropriate. The Saarland Growth Study therefore served two purposes: (a) to create current regional reference data and (b) to create a database for future studies on secular trends in growth processes of children and adolescents from Saarland. The present contribution focuses on general remarks on the sampling design of (cross-sectional) growth surveys and its implications for the design of the present study.
Techniques for multivariate sample design
Energy Technology Data Exchange (ETDEWEB)
Williamson, M.A.
1990-04-01
In this report we consider sampling methods applicable to the multi-product Annual Fuel Oil and Kerosene Sales Report (Form EIA-821) Survey. For years prior to 1989, the purpose of the survey was to produce state-level estimates of total sales volumes for each of five target variables: residential No. 2 distillate, other retail No. 2 distillate, wholesale No. 2 distillate, retail residual, and wholesale residual. For the year 1989, the other retail No. 2 distillate and wholesale No. 2 distillate variables were replaced by a new variable defined to be the maximum of the two. The strata for this variable were crossed with the strata for the residential No. 2 distillate variable, resulting in a single stratified No. 2 distillate variable. Estimation for 1989 focused on the single No. 2 distillate variable and the two residual variables. Sampling accuracy requirements for each product were specified in terms of the coefficients of variation (CVs) for the various estimates based on data taken from recent surveys. The target population for the Form EIA-821 survey includes companies that deliver or sell fuel oil or kerosene to end-users. The Petroleum Product Sales Identification Survey (Form EIA-863) database and numerous state and commercial lists provide the basis of the sampling frame, which is updated as new data become available. In addition, company/state-level volumes for distillate fuel oil, residual fuel oil, and motor gasoline are added to aid the design and selection process. 30 refs., 50 figs., 10 tabs.
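When CV targets must be met across strata of very different variability, a standard building block is Neyman allocation of a fixed total sample size. A hedged sketch with hypothetical strata (sizes and standard deviations are made up, not taken from the EIA survey):

```python
def neyman_allocation(strata_sizes, strata_sds, n_total):
    """Allocate n_total sample units across strata proportional to N_h * S_h."""
    weights = [N * S for N, S in zip(strata_sizes, strata_sds)]
    total = sum(weights)
    # largest-remainder rounding so the allocations sum exactly to n_total
    raw = [n_total * w / total for w in weights]
    alloc = [int(r) for r in raw]
    remainders = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in remainders[: n_total - sum(alloc)]:
        alloc[i] += 1
    return alloc

# hypothetical: 3 strata of fuel-oil sellers with differing volume variability
alloc = neyman_allocation([500, 300, 200], [10.0, 40.0, 5.0], 100)
```

The middle stratum, though smaller, receives most of the sample because its variability dominates the N_h * S_h weights.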
Taming outliers in pulsar-timing datasets with hierarchical likelihoods and Hamiltonian sampling
Vallisneri, Michele
2016-01-01
Pulsar-timing datasets have been analyzed with great success using probabilistic treatments based on Gaussian distributions, with applications ranging from studies of neutron-star structure to tests of general relativity and searches for nanosecond gravitational waves. As for other applications of Gaussian distributions, outliers in timing measurements pose a significant challenge to statistical inference, since they can bias the estimation of timing and noise parameters, and affect reported parameter uncertainties. We describe and demonstrate a practical end-to-end approach to perform Bayesian inference of timing and noise parameters robustly in the presence of outliers, and to identify these probabilistically. The method is fully consistent (i.e., outlier-ness probabilities vary in tune with the posterior distributions of the timing and noise parameters), and it relies on the efficient sampling of the hierarchical form of the pulsar-timing likelihood. Such sampling has recently become possible with a "no-U-turn" Hamiltonian sampler coupled to a highly customized reparametrization of the likelihood; this code is described elsewhere, but it is already available online.
PROBABILITY SAMPLING DESIGNS FOR VETERINARY EPIDEMIOLOGY
Xhelil Koleci; Coryn, Chris L.S.; Kristin A. Hobson; Rruzhdi Keci
2011-01-01
The objective of sampling is to estimate population parameters, such as incidence or prevalence, from information contained in a sample. In this paper, the authors describe sources of error in sampling; basic probability sampling designs, including simple random sampling, stratified sampling, systematic sampling, and cluster sampling; estimating a population size if unknown; and factors influencing sample size determination for epidemiological studies in veterinary medicine.
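Of the basic designs listed, systematic sampling is the simplest to sketch: choose a random start and then take every k-th unit from the frame. A minimal illustration with a hypothetical frame of 100 herds:

```python
import random

def systematic_sample(population, n, seed=0):
    """1-in-k systematic sample: random start in [0, k), then every k-th unit."""
    k = len(population) // n
    random.seed(seed)
    start = random.randrange(k)
    return [population[start + i * k] for i in range(n)]

herds = list(range(100))          # hypothetical sampling frame of 100 herds
sample = systematic_sample(herds, 10)
```

Every unit has inclusion probability n/N, but unlike simple random sampling only k distinct samples are possible, so hidden periodicity in the frame ordering can bias estimates.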
Nadeem, Khurram; Moore, Jeffrey E; Zhang, Ying; Chipman, Hugh
2016-07-01
Stochastic versions of Gompertz, Ricker, and various other dynamics models play a fundamental role in quantifying strength of density dependence and studying long-term dynamics of wildlife populations. These models are frequently estimated using time series of abundance estimates that are inevitably subject to observation error and missing data. This issue can be addressed with a state-space modeling framework that jointly estimates the observed data model and the underlying stochastic population dynamics (SPD) model. In cases where abundance data are from multiple locations with a smaller spatial resolution (e.g., from mark-recapture and distance sampling studies), models are conventionally fitted to spatially pooled estimates of yearly abundances. Here, we demonstrate that a spatial version of SPD models can be directly estimated from short time series of spatially referenced distance sampling data in a unified hierarchical state-space modeling framework that also allows for spatial variance (covariance) in population growth. We also show that a full range of likelihood based inference, including estimability diagnostics and model selection, is feasible in this class of models using a data cloning algorithm. We further show through simulation experiments that the hierarchical state-space framework introduced herein efficiently captures the underlying dynamical parameters and spatial abundance distribution. We apply our methodology by analyzing a time series of line-transect distance sampling data for fin whales (Balaenoptera physalus) off the U.S. west coast. Although there were only seven surveys conducted during the study time frame, 1991-2014, our analysis detected presence of strong density regulation and provided reliable estimates of fin whale densities. In summary, we show that the integrative framework developed herein allows ecologists to better infer key population characteristics such as presence of density regulation and spatial variability in a
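The state-space structure described above separates process noise from observation error. A hedged sketch of the non-spatial data-generating model only (the abstract's spatial covariance, data cloning, and fitting machinery are not shown; all parameter values are assumptions): a stochastic Gompertz model on the log-abundance scale with Gaussian observation error added on top.

```python
import math, random

def simulate_gompertz(a, b, sigma_p, sigma_o, x0, T, seed=1):
    """Stochastic Gompertz state-space simulation on the log scale.

    State:       x_{t+1} = a + b * x_t + N(0, sigma_p)   (b < 1 => density dependence)
    Observation: y_t     = x_t + N(0, sigma_o)           (e.g. distance-sampling error)
    """
    random.seed(seed)
    x, xs, ys = x0, [], []
    for _ in range(T):
        xs.append(x)
        ys.append(x + random.gauss(0.0, sigma_o))   # abundance estimate with error
        x = a + b * x + random.gauss(0.0, sigma_p)
    return xs, ys

xs, ys = simulate_gompertz(a=0.5, b=0.9, sigma_p=0.1, sigma_o=0.2, x0=3.0, T=50)
```

With b = 0.9 the latent process is mean-reverting toward a/(1-b) = 5, which is the "density regulation" signal a state-space fit would try to recover from the noisy y series.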
Canivez, Gary L
2014-03-01
The Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is one of the most frequently used intelligence tests in clinical assessments of children with learning difficulties. Construct validity studies of the WISC-IV have generally supported the higher order structure with four correlated first-order factors and one higher-order general intelligence factor, but recent studies have supported an alternate model in which general intelligence is conceptualized as a breadth factor rather than a superordinate factor (M. W. Watkins, 2010, Structure of the Wechsler Intelligence Scale for Children-Fourth Edition among a national sample of referred students, Psychological Assessment, Vol. 22, pp. 782-787; M. W. Watkins, G. L. Canivez, T. James, K. & R. Good, in press, Construct validity of the WISC-IVUK with a large referred Irish sample, International Journal of School and Educational Psychology). WISC-IV core subtest data obtained from evaluations to assess learning difficulties in 345 children (224 boys, 121 girls) were examined. One through four, first order factor models and indirect versus direct hierarchical models were compared using confirmatory factor analyses. The correlated four-factor Wechsler model provided good fit to these data, but the direct hierarchical model showed statistically significant improvement over the indirect hierarchical model and correlated four-factor model. The direct hierarchical model was judged the best explanation of the WISC-IV factor structure, with the general factor accounting for 71.6% of the common variance while the first order factors accounted for 2.4-10.3% of the common variance. Thus, the results with the present sample of referred children were similar to those from other investigations (G. E. Gignac, 2005, Revisiting the factor structure of the WAIS-R: Insights through nested factor modeling, Assessment, Vol. 12, pp. 320-329; G. E. Gignac, 2006, The WAIS-III as a nested factors model: A useful alternative to
Helson, Ravenna; Jones, Constance; Kwan, Virginia S Y
2002-09-01
Normative personality change over 40 years was shown in 2 longitudinal cohorts with hierarchical linear modeling of California Psychological Inventory data obtained at multiple times between ages 21 and 75. Although themes of change and the paucity of differences attributable to gender and cohort largely supported findings of multiethnic cross-sectional samples, the authors also found much quadratic change and much individual variability. The form of quadratic change supported predictions about the influence of period of life and social climate as factors in change over the adult years: Scores on Dominance and Independence peaked in the middle age of both cohorts, and scores on Responsibility were lowest during peak years of the culture of individualism. The idea that personality change is most pronounced before age 30 and then reaches a plateau received no support.
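The "quadratic change" finding amounts to fitting score = c0 + c1*age + c2*age^2 and locating the vertex. A hedged, self-contained illustration with synthetic scores (not the CPI data), using ordinary least squares via the 3x3 normal equations:

```python
def quad_fit(ages, ys):
    """OLS fit of y = c0 + c1*a + c2*a^2 by solving the normal equations."""
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for a, y in zip(ages, ys):
        row = [1.0, a, a * a]
        for i in range(3):
            t[i] += row[i] * y
            for j in range(3):
                S[i][j] += row[i] * row[j]
    # Gauss-Jordan elimination with partial pivoting
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(S[r][col]))
        S[col], S[p] = S[p], S[col]
        t[col], t[p] = t[p], t[col]
        for r in range(3):
            if r != col:
                f = S[r][col] / S[col][col]
                S[r] = [u - f * v for u, v in zip(S[r], S[col])]
                t[r] -= f * t[col]
    return [t[i] / S[i][i] for i in range(3)]

ages = [21, 27, 43, 52, 61, 75]
scores = [50 - 0.02 * (a - 52) ** 2 for a in ages]   # synthetic peak at age 52
c0, c1, c2 = quad_fit(ages, scores)
peak_age = -c1 / (2 * c2)                             # vertex of the parabola
```

A negative c2 with a vertex in midlife is exactly the "peaked in middle age" pattern reported for Dominance and Independence.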
Using Dynamic Quantum Clustering to Analyze Hierarchically Heterogeneous Samples on the Nanoscale
Energy Technology Data Exchange (ETDEWEB)
Hume, Allison; /Princeton U. /SLAC
2012-09-07
Dynamic Quantum Clustering (DQC) is an unsupervised, highly visual data-mining technique. DQC was tested as an analysis method for X-ray Absorption Near Edge Structure (XANES) data from the Transmission X-ray Microscopy (TXM) group. The TXM group images hierarchically heterogeneous materials with nanoscale resolution and large field of view. XANES data consists of energy spectra for each pixel of an image. It was determined that DQC successfully identifies structure in data of this type without prior knowledge of the components in the sample. Clusters and sub-clusters clearly reflected features of the spectra that identified chemical component, chemical environment, and density in the image. DQC can also be used in conjunction with the established data analysis technique, which does require knowledge of components present.
An, Le; Adeli, Ehsan; Liu, Mingxia; Zhang, Jun; Lee, Seong-Whan; Shen, Dinggang
2017-01-01
Classification is one of the most important tasks in machine learning. Due to feature redundancy or outliers in samples, using all available data for training a classifier may be suboptimal. For example, Alzheimer's disease (AD) is correlated with certain brain regions or single nucleotide polymorphisms (SNPs), and identification of relevant features is critical for computer-aided diagnosis. Many existing methods first select features from structural magnetic resonance imaging (MRI) or SNPs and then use those features to build the classifier. However, with the presence of many redundant features, the most discriminative features are difficult to identify in a single step. Thus, we formulate a hierarchical feature and sample selection framework to gradually select informative features and discard ambiguous samples in multiple steps for improved classifier learning. To positively guide the data manifold preservation process, we utilize both labeled and unlabeled data during training, making our method semi-supervised. For validation, we conduct experiments on AD diagnosis by selecting mutually informative features from both MRI and SNP, and using the most discriminative samples for training. The superior classification results demonstrate the effectiveness of our approach, as compared with competing methods. PMID:28358032
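A hedged sketch of the two-step idea only (not the paper's exact algorithm): rank features by correlation with the label, then keep the samples a nearest-centroid rule is most confident about; the data below are hypothetical.

```python
def corr(xs, ys):
    """Pearson correlation; 0.0 for a constant (uninformative) feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return 0.0 if den == 0 else num / den

def centroid(rows):
    n = len(rows)
    return [sum(r[d] for r in rows) / n for d in range(len(rows[0]))]

def select(X, y, n_feats, keep_frac):
    # feature step: keep the features most correlated with the label
    fs = sorted(range(len(X[0])),
                key=lambda f: -abs(corr([r[f] for r in X], y)))[:n_feats]
    Xf = [[r[f] for f in fs] for r in X]
    c0 = centroid([r for r, t in zip(Xf, y) if t == 0])
    c1 = centroid([r for r, t in zip(Xf, y) if t == 1])
    # sample step: margin = |d(x, c0) - d(x, c1)|; drop low-margin samples
    def d(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    margins = [abs(d(r, c0) - d(r, c1)) for r in Xf]
    order = sorted(range(len(Xf)), key=lambda i: -margins[i])
    keep = sorted(order[: int(len(Xf) * keep_frac)])
    return fs, keep

# feature 0 separates the classes; features 1 and 2 are uninformative
X = [[0.0, 5, 1], [0.1, 5, 9], [1.0, 5, 2], [0.9, 5, 8]]
y = [0, 0, 1, 1]
feats, kept = select(X, y, n_feats=1, keep_frac=0.5)
```

Iterating the two steps, each time on the reduced data, mirrors the "multiple steps" of the framework described above.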
The Hierarchical Specification and Mechanical Verification of the SIFT Design
1983-01-01
The formal specification and proof methodology employed to demonstrate that the SIFT computer system meets its requirements are described. The hierarchy of design specifications is shown, from very abstract descriptions of system function down to the implementation. The most abstract design specifications, from which almost all details of the realization have been abstracted out, are simple and easy to understand; they are used to ensure that the system functions reliably and as intended. A succession of lower level specifications refines these specifications into more detailed, and more complex, views of the system design, culminating in the Pascal implementation. The rigorous mechanical proof that the abstract specifications are satisfied by the actual implementation is also described.
Raykov, Tenko
2011-01-01
Interval estimation of intraclass correlation coefficients in hierarchical designs is discussed within a latent variable modeling framework. A method accomplishing this aim is outlined, which is applicable in two-level studies where participants (or generally lower-order units) are clustered within higher-order units. The procedure can also be…
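The quantity being interval-estimated can be illustrated with its point estimate: the intraclass correlation from a balanced one-way random-effects ANOVA, ICC = (MSB - MSW) / (MSB + (k - 1) * MSW) for k members per cluster. A minimal sketch with hypothetical clustered data (the abstract's latent-variable interval method itself is not reproduced here):

```python
def icc_oneway(groups):
    """ANOVA estimator of the intraclass correlation for a balanced design."""
    k = len(groups[0])            # members per cluster
    n = len(groups)               # number of clusters
    grand = sum(sum(g) for g in groups) / (n * k)
    # between-cluster and within-cluster mean squares
    msb = k * sum((sum(g) / k - grand) ** 2 for g in groups) / (n - 1)
    msw = sum((x - sum(g) / k) ** 2 for g in groups for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

clusters = [[10, 11, 12], [20, 21, 19], [30, 29, 31]]   # hypothetical clusters
rho = icc_oneway(clusters)
```

Here nearly all variance lies between clusters, so the estimate is close to 1; in two-level designs this rho drives the design effect and hence required sample sizes.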
Design And Analysis Of Low Power Hierarchical Decoder
Directory of Open Access Journals (Sweden)
Abhinav Singh
2012-11-01
Full Text Available Due to the high degree of miniaturization possible today in semiconductor technology, the size and complexity of designs that may be implemented in hardware has increased dramatically. Process scaling has been used in the miniaturization process to reduce the area needed for logic functions in an effort to lower the product costs. Precharged Complementary Metal Oxide Semiconductor (CMOS) domino logic techniques may be applied to functional blocks to reduce power. Domino logic forms an attractive design style for high performance designs since its low switching threshold and reduced transistor count leads to fast and area efficient circuit implementations. In this paper all the necessary components required to form a 5-to-32 bit decoder using domino logic are designed to perform different analyses at 180 nm and 350 nm technologies. The decoder implemented through domino logic is compared to a static decoder.
Hierarchical Distributed Control Design for Multi-agent Systems Using Approximate Simulation
Institute of Scientific and Technical Information of China (English)
TANG Yu-Tao; HONG Yi-Guang
2013-01-01
In this paper, we consider a hierarchical control design for multi-agent systems based on approximate simulation. To reduce complexity, we first construct a simple abstract system to guide the agents, then we discuss the simulation relations between the abstract system and multiple agents. With the help of this abstract system, distributed hierarchical control is proposed to complete a coordination task. By virtue of a common Lyapunov function, we analyze the collective behaviors with switching multi-agent topology in light of simulation functions.
Design of Experiments for Factor Hierarchization in Complex Structure Modelling
Directory of Open Access Journals (Sweden)
C. Kasmi
2013-07-01
Full Text Available Modelling the power-grid network is of fundamental interest to analyse the conducted propagation of unintentional and intentional electromagnetic interferences. The propagation is indeed highly influenced by the channel behaviour. In this paper, we investigate the effects of appliances and the position of cables in a low voltage network. First, the power-grid architecture is described. Then, the principle of Experimental Design is recalled. Next, the methodology is applied to power-grid modelling. Finally, we propose an analysis of the statistical moments of the experimental design results. Several outcomes are provided to describe the effects induced by parameter variability on the conducted propagation of spurious compromising emanations.
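The core of the experimental-design step is estimating which factors dominate the response. A hedged sketch of a 2-level full factorial design with main-effect estimates (the response function below is a hypothetical stand-in for the power-grid channel model):

```python
from itertools import product

def main_effects(response, n_factors):
    """Run a 2-level full factorial design and return per-factor main effects
    (mean response at level +1 minus mean response at level -1)."""
    runs = list(product([-1, 1], repeat=n_factors))
    ys = [response(r) for r in runs]
    effects = []
    for f in range(n_factors):
        hi = [y for r, y in zip(runs, ys) if r[f] == 1]
        lo = [y for r, y in zip(runs, ys) if r[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# hypothetical channel model: factor 0 dominates, factor 2 is inert
model = lambda r: 10 + 4 * r[0] + 1 * r[1] + 0 * r[2]
eff = main_effects(model, 3)
```

Ranking the absolute effects hierarchizes the factors; fractional designs reduce the run count when the number of factors (appliances, cable positions) grows.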
Taming outliers in pulsar-timing datasets with hierarchical likelihoods and Hamiltonian sampling
Vallisneri, Michele; van Haasteren, Rutger
2017-01-01
Pulsar-timing datasets have been analyzed with great success using probabilistic treatments based on Gaussian distributions, with applications ranging from studies of neutron-star structure to tests of general relativity and searches for nanosecond gravitational waves. As for other applications of Gaussian distributions, outliers in timing measurements pose a significant challenge to statistical inference, since they can bias the estimation of timing and noise parameters, and affect reported parameter uncertainties. We describe and demonstrate a practical end-to-end approach to perform Bayesian inference of timing and noise parameters robustly in the presence of outliers, and to identify these probabilistically. The method is fully consistent (i.e., outlier-ness probabilities vary in tune with the posterior distributions of the timing and noise parameters), and it relies on the efficient sampling of the hierarchical form of the pulsar-timing likelihood. Such sampling has recently become possible with a "no-U-turn" Hamiltonian sampler coupled to a highly customized reparametrization of the likelihood; this code is described elsewhere, but it is already available online. We recommend our method as a standard step in the preparation of pulsar-timing-array datasets: even if statistical inference is not affected, follow-up studies of outlier candidates can reveal unseen problems in radio observations and timing measurements; furthermore, confidence in the results of gravitational-wave searches will only benefit from stringent statistical evidence that datasets are clean and outlier-free.
Energy Technology Data Exchange (ETDEWEB)
Grazzini, Jacopo [Los Alamos National Laboratory; Prasad, Lakshman [Los Alamos National Laboratory; Dillard, Scott [PNNL
2010-10-21
The automatic detection, recognition, and segmentation of object classes in remote sensed images is of crucial importance for scene interpretation and understanding. However, it is a difficult task because of the high variability of satellite data. Indeed, the observed scenes usually exhibit a high degree of complexity, where complexity refers to the large variety of pictorial representations of objects with the same semantic meaning and also to the extensive amount of available details. Therefore, there is still a strong demand for robust techniques for automatic information extraction and interpretation of satellite images. In parallel, there is a growing interest in techniques that can extract vector features directly from such imagery. In this paper, we investigate the problem of automatic hierarchical segmentation and vectorization of multispectral satellite images. We propose a new algorithm composed of the following steps: (i) a non-uniform sampling scheme extracting most salient pixels in the image, (ii) an anisotropic triangulation constrained by the sampled pixels taking into account both strength and directionality of local structures present in the image, (iii) a polygonal grouping scheme merging, through techniques based on perceptual information, the obtained segments to a smaller quantity of superior vectorial objects. Besides its computational efficiency, this approach provides a meaningful polygonal representation for subsequent image analysis and/or interpretation.
Taming outliers in pulsar-timing data sets with hierarchical likelihoods and Hamiltonian sampling
Vallisneri, Michele; van Haasteren, Rutger
2017-04-01
Pulsar-timing data sets have been analysed with great success using probabilistic treatments based on Gaussian distributions, with applications ranging from studies of neutron-star structure to tests of general relativity and searches for nanosecond gravitational waves. As for other applications of Gaussian distributions, outliers in timing measurements pose a significant challenge to statistical inference, since they can bias the estimation of timing and noise parameters, and affect reported parameter uncertainties. We describe and demonstrate a practical end-to-end approach to perform Bayesian inference of timing and noise parameters robustly in the presence of outliers, and to identify these probabilistically. The method is fully consistent (i.e. outlier-ness probabilities vary in tune with the posterior distributions of the timing and noise parameters), and it relies on the efficient sampling of the hierarchical form of the pulsar-timing likelihood. Such sampling has recently become possible with a 'no-U-turn' Hamiltonian sampler coupled to a highly customized reparametrization of the likelihood; this code is described elsewhere, but it is already available online. We recommend our method as a standard step in the preparation of pulsar-timing-array data sets: even if statistical inference is not affected, follow-up studies of outlier candidates can reveal unseen problems in radio observations and timing measurements; furthermore, confidence in the results of gravitational-wave searches will only benefit from stringent statistical evidence that data sets are clean and outlier-free.
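The per-measurement "outlier-ness" probability in such hierarchical likelihoods can be illustrated with a toy mixture: each residual is either "good" (Gaussian with the timing-noise sigma) or an outlier drawn from a broad, flat density, and Bayes' rule gives the posterior outlier probability. All numbers below are illustrative assumptions, not values from the paper:

```python
import math

def outlier_prob(residual, sigma, p_out=0.05, outlier_width=100.0):
    """Posterior probability that a residual came from the outlier component."""
    gauss = math.exp(-0.5 * (residual / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    unif = 1.0 / outlier_width            # broad "anything goes" outlier density
    num = p_out * unif
    return num / (num + (1 - p_out) * gauss)

# residuals at 0.3, 1, and 8 standard deviations
probs = [outlier_prob(r, sigma=1.0) for r in (0.3, 1.0, 8.0)]
```

In the full hierarchical treatment these probabilities are not fixed but co-vary with the posterior over timing and noise parameters, which is what the "fully consistent" claim refers to.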
Sampling designs dependent on sample parameters of auxiliary variables
Wywiał, Janusz L
2015-01-01
The book offers a valuable resource for students and statisticians whose work involves survey sampling. An estimation of the population parameters in finite and fixed populations assisted by auxiliary variables is considered. New sampling designs dependent on moments or quantiles of auxiliary variables are presented on the background of the classical methods. Accuracies of the estimators based on the original sampling designs are compared with classical estimation procedures. Specific conditional sampling designs are applied to problems of small area estimation as well as to estimation of quantiles of variables under study.
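A classical design in which an auxiliary variable enters directly is probability-proportional-to-size (PPS) systematic sampling, where a known size measure sets each unit's inclusion probability. A hedged sketch with hypothetical auxiliary sizes (this is the textbook device, not one of the book's new moment- or quantile-based designs):

```python
import random

def pps_systematic(sizes, n, seed=42):
    """Select n units with probability proportional to the auxiliary size."""
    total = sum(sizes)
    step = total / n
    random.seed(seed)
    point = random.uniform(0, step)       # random start on the cumulative scale
    chosen, cum = [], 0.0
    for idx, s in enumerate(sizes):
        cum += s
        while point < cum and len(chosen) < n:
            chosen.append(idx)            # unit covering the current point
            point += step
    return chosen

aux = [5, 1, 1, 20, 3, 10, 2, 8]          # hypothetical auxiliary variable values
picked = pps_systematic(aux, 3)
```

A unit whose size is at least the step length (here unit 3, size 20) is selected with certainty, which is exactly the behaviour one wants when the auxiliary variable is strongly correlated with the study variable.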
Rawal, Amit; Sharma, Sumit; Kumar, Vijay; Saraswat, Harshvardhan
2016-12-01
Hierarchical roughness and low surface energy are the main criteria for designing superhydrophobic surfaces with extreme water repellency. Herein, we present a step-wise approach to devise three-dimensional (3D) superhydrophobic disordered arrays of fibers in the form of nonwoven mats exhibiting hierarchical surface roughness and low surface energy. Key design parameters in the form of roughness factors at multiple length scales for 3D nonwoven mats have been quantified. The contact angles have been predicted for each of the wetting regimes that exists for nonwoven mats with predefined level of hierarchical surface roughness and surface energy. Experimental realization of superhydrophobic mats was attained by decorating the highly hydrophilic nonwoven viscose fibers with ZnO rods that effectively modulated the surface roughness at multiple length scales and subsequently, the surface energy was lowered using fluorocarbon treatment. Synergistic effects of hierarchical roughness and surface energy have systematically increased the static water contact angle of nonwoven mat (up to 164°) and simultaneously, lowered the roll-off angle (≈11°).
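The predicted contact angles for the composite (air-trapping) wetting regime come from the Cassie-Baxter relation, cos(theta*) = f*(r_f*cos(theta) + 1) - 1, with f the wetted solid fraction and r_f the roughness of the wetted area. A worked illustration with hypothetical numbers (not the paper's measured roughness factors):

```python
import math

def cassie_baxter(theta_deg, f, r_f=1.0):
    """Apparent contact angle (degrees) in the Cassie-Baxter state."""
    c = f * (r_f * math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))

# hypothetical: intrinsic angle 110 deg after fluorocarbon treatment,
# 8% wetted solid fraction from the hierarchical (fiber + ZnO rod) texture
apparent = cassie_baxter(110.0, f=0.08)
```

Even a moderately hydrophobic intrinsic angle is amplified past 160 degrees once the solid fraction is small, which is why hierarchical roughness matters more than chemistry alone.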
National Research Council Canada - National Science Library
Mira Kania Sabariah; Veronikha Effendy; Muhamad Fachmi Ichsan
2016-01-01
... of learning and characteristics of early childhood (4-6 years). Based on the results, the Hierarchical Task Analysis method generated a list of tasks that must be done in designing a user interface that represents the user experience in draw learning. Then, using the Heuristic Evaluation method, the usability of the model was found to reach a very good level of understanding, and the model can be further enhanced to produce a better design.
Institute of Scientific and Technical Information of China (English)
YAN Haixia; ZHOU Qiang; HONG Xianlong; LI Zhuoyuan
2009-01-01
A hierarchical approach was used to solve the mixed mode placement for three dimensional (3-D) integrated circuit design. The 3-D placement flow includes hierarchical clustering, hierarchical 3-D floorplanning, vertical via mapping, and recursive two dimensional (2-D) global/detailed placement phases. With state-of-the-art clustering and de-clustering phases, the design complexity was reduced to enhance the placement algorithm efficiency and capacity. The 3-D floorplanning phase solved the layer assignment problem and controlled the number of vertical vias. The vertical via mapping transformed the 3-D placement problem to a set of 2-D placement sub-problems, which not only simplifies the original 3-D placement problem, but also generates the vertical via assignment solution for the routing phase. The design optimizes both the wire length and the thermal load in the floorplan and placement phases to improve the performance and reliability of 3-D integrated circuits. Experiments on IBM benchmarks show that the total wire length is reduced from 15% to 35% relative to 2-D placement with two to four stacked layers, with the number of vertical vias minimized to satisfy a pre-defined upper bound constraint. The maximum temperature is reduced by 16% with two-stage optimization on four stacked layers.
Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems
Koch, Patrick N.
1997-01-01
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques including intermediate responses, linking variables, and compatibility constraints are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for
Kim, Tae-Hyun; Ha, Sung-Hun; Jang, Nam-Su; Kim, Jeonghyo; Kim, Ji Hoon; Park, Jong-Kweon; Lee, Deug-Woo; Lee, Jaebeom; Kim, Soo-Hyung; Kim, Jong-Man
2015-03-11
Optical transparency and mechanical flexibility are both of great importance for significantly expanding the applicability of superhydrophobic surfaces. Such features make it possible for functional surfaces to be applied to various glass-based products with different curvatures. In this work, we report on the simple and potentially cost-effective fabrication of highly flexible and transparent superhydrophobic films based on hierarchical surface design. The hierarchical surface morphology was easily fabricated by the simple transfer of a porous alumina membrane to the top surface of UV-imprinted polymeric micropillar arrays and subsequent chemical treatments. Through optimization of the hierarchical surface design, the resultant superhydrophobic films showed superior surface wetting properties (with a static contact angle of >170°) together with high optical transmittance (82% at 550 nm wavelength). The superhydrophobic films were also experimentally found to be robust, without significant degradation in the superhydrophobicity even under repetitive bending and pressing for up to 2000 cycles. Finally, the practical usability of the proposed superhydrophobic films was clearly demonstrated by examining the antiwetting performance in real time while pouring water on the film and submerging the film in water.
Planetary Sample Caching System Design Options
Collins, Curtis; Younse, Paulo; Backes, Paul
2009-01-01
Potential Mars Sample Return missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transferred to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization system (SHEC) has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other potential sample return missions. Requirements and design options for the SHEC system were studied and a recommended design concept developed. Two families of solutions were explored: 1) transfer of a raw sample from the tool to the SHEC subsystem and 2) transfer of a tube containing the sample to the SHEC subsystem. The recommended design utilizes sample tool bit change out as the mechanism for transferring tubes to, and samples in tubes from, the tool. The SHEC subsystem design, called the Bit Changeout Caching (BiCC) design, is intended for operations on a MER class rover.
Fang, Baizeng; Kim, Jung Ho; Kim, Min-Sik; Yu, Jong-Sung
2013-07-16
Nanostructured porous carbon materials have diverse applications including sorbents, catalyst supports for fuel cells, electrode materials for capacitors, and hydrogen storage systems. When these materials have hierarchical porosity, interconnected pores of different dimensions, their potential application is increased. Hierarchical nanostructured carbons (HNCs) that contain 3D-interconnected macroporous/mesoporous and mesoporous/microporous structures have enhanced properties compared with single-sized porous carbon materials, because they have improved mass transport through the macropores/mesopores and enhanced selectivity and increased specific surface area on the level of fine pore systems through mesopores/micropores. The HNCs with macro/mesoporosity are of particular interest because chemists can tailor specific applications through controllable synthesis of HNCs with designed nanostructures. An efficient and commonly used technique for creating HNCs is "nanocasting", a technique that first involves the creation of a sacrificial silica template with hierarchical porous nanostructure and then the impregnation of the silica template with an appropriate carbon source. This is followed by carbonization of the filled carbon precursor, and subsequent removal of the silica template. The resulting HNC is an inverse replica of its parent hierarchical nanostructured silica (HNS). Through such nanocasting, scientists can create different HNC frameworks with tailored pore structures and narrow pore size distribution. Generally, HNSs with specific structure and 3D-interconnected porosity are needed to fabricate HNCs using the nanocasting strategy. However, how can we fabricate a HNS framework with tailored structure and hierarchical porosity of meso-macropores? This Account reports on our recent work in the development of novel HNCs and their interesting applications. We have explored a series of strategies to address the challenges in synthesis of HNSs and HNCs. Through
Zhang, K.; Ju, X. D.; Lu, J. Q.; Men, B. Y.
2016-08-01
On the basis of modular and hierarchical design ideas, this study presents a debugging system for an azimuthally sensitive acoustic bond tool (AABT). The debugging system includes three parts: a personal computer (PC), an embedded front-end machine, and function expansion boards. Modular and hierarchical design ideas are applied throughout the design and debugging process. The PC communicates with the front-end machine via the Internet, and the front-end machine and function expansion boards are connected by an extended parallel bus. In this way, the three parts of the debugging system achieve stable, high-speed data communication. This study introduces not only the system-level and subsystem-level debugging of the tool but also the debugging of the analogue signal processing board, which is important and widely used in logging tools. Experiments illustrate that the debugging system greatly improves AABT verification and calibration efficiency and that board-level debugging can examine and improve analogue signal processing boards. The design approach is clear and the structure reasonable, making the debugging system easy to extend and upgrade.
Goyert, Holly F; Gardner, Beth; Sollmann, Rahel; Veit, Richard R; Gilbert, Andrew T; Connelly, Emily E; Williams, Kathryn A
2016-09-01
Proposed offshore wind energy development on the Atlantic Outer Continental Shelf has brought attention to the need for baseline studies of the distribution and abundance of marine birds. We compiled line transect data from 15 shipboard surveys (June 2012-April 2014), along with associated remotely sensed habitat data, in the lower Mid-Atlantic Bight off the coast of Delaware, Maryland, and Virginia, USA. We implemented a recently developed hierarchical community distance sampling model to estimate the seasonal abundance of 40 observed marine bird species. Treating each season separately, we included six oceanographic parameters to estimate seabird abundance: three static (distance to shore, slope, sediment grain size) and three dynamic covariates (sea surface temperature [SST], salinity, primary productivity). We expected that avian bottom-feeders would respond primarily to static covariates that characterize seafloor variability, and that surface-feeders would respond more to dynamic covariates that quantify surface productivity. We compared the variation in species-specific and community-level responses to these habitat features, including for rare species, and we predicted species abundance across the study area. While several protected species used the study area in summer during their breeding season, estimated abundance and observed diversity were highest for nonbreeding species in winter. Distance to shore was the most common significant predictor of abundance, and thus useful in estimating the potential exposure of marine birds to offshore development. In many cases, our expectations based on feeding ecology were confirmed, such as in the first winter season, when bottom-feeders associated significantly with the three static covariates (distance to shore, slope, and sediment grain size), and surface-feeders associated significantly with two dynamic covariates (SST, primary productivity). However, other cases revealed significant relationships between
Chapman, Benjamin P; Weiss, Alexander; Barrett, Paul; Duberstein, Paul
2013-03-01
The structure of the Eysenck Personality Inventory (EPI) is poorly understood, and applications have mostly been confined to the broad Neuroticism, Extraversion, and Lie scales. Using a hierarchical factoring procedure, we mapped the sequential differentiation of EPI scales from broad, molar factors to more specific, molecular factors, in a UK population sample of over 6500 persons. Replicable facets at the lowest tier of Neuroticism included emotional fragility, mood lability, nervous tension, and rumination. The lowest order set of replicable Extraversion facets consisted of social dynamism, sociotropy, decisiveness, jocularity, social information seeking, and impulsivity. The Lie scale consisted of an interpersonal virtue and a behavioral diligence facet. Users of the EPI may be well served in some circumstances by considering its broad Neuroticism, Extraversion, and Lie scales as multifactorial, a feature that was explicitly incorporated into subsequent Eysenck inventories and is consistent with other hierarchical trait structures.
Design and Co-simulation of Hierarchical Architecture for Demand Response Control and Coordination
DEFF Research Database (Denmark)
Bhattarai, Bishnu Prasad; Lévesque, Martin; Bak-Jensen, Birgitte
2017-01-01
Demand response (DR) plays a key role for optimum asset utilization and to avoid or delay the need of new infrastructure investment. However, coordinated execution of multiple DRs is desired to maximize the DR benefits. In this study, we propose a hierarchical DR architecture (HDRA) to control...... and coordinate the performance of various DR categories such that the operation of every DR category is backed-up by time delayed action of the others. A reliable, cost-effective communication infrastructure based on ZigBee, WiMAX, and fibers is designed to facilitate the HDRA execution. The performance...
Accuracy assessment with complex sampling designs
Raymond L. Czaplewski
2010-01-01
A reliable accuracy assessment of remotely sensed geospatial data requires a sufficiently large probability sample of expensive reference data. Complex sampling designs reduce cost or increase precision, especially with regional, continental and global projects. The General Restriction (GR) Estimator and the Recursive Restriction (RR) Estimator separate a complex...
Hierarchical star formation across the grand-design spiral NGC 1566
Gouliermis, Dimitrios A.; Elmegreen, Bruce G.; Elmegreen, Debra M.; Calzetti, Daniela; Cignoni, Michele; Gallagher, John S., III; Kennicutt, Robert C.; Klessen, Ralf S.; Sabbi, Elena; Thilker, David; Ubeda, Leonardo; Aloisi, Alessandra; Adamo, Angela; Cook, David O.; Dale, Daniel; Grasha, Kathryn; Grebel, Eva K.; Johnson, Kelsey E.; Sacchi, Elena; Shabani, Fayezeh; Smith, Linda J.; Wofford, Aida
2017-06-01
We investigate how star formation is spatially organized in the grand-design spiral NGC 1566 from deep Hubble Space Telescope photometry with the Legacy ExtraGalactic UV Survey. Our contour-based clustering analysis reveals 890 distinct stellar conglomerations at various levels of significance. These star-forming complexes are organized in a hierarchical fashion with the larger congregations consisting of smaller structures, which themselves fragment into even smaller and more compact stellar groupings. Their size distribution, covering a wide range in length-scales, shows a power law as expected from scale-free processes. We explain this shape with a simple 'fragmentation and enrichment' model. The hierarchical morphology of the complexes is confirmed by their mass-size relation that can be represented by a power law with a fractional exponent, analogous to that determined for fractal molecular clouds. The surface stellar density distribution of the complexes shows a lognormal shape similar to that for supersonic non-gravitating turbulent gas. Between 50 and 65 per cent of the recently formed stars, as well as about 90 per cent of the young star clusters, are found inside the stellar complexes, located along the spiral arms. We find an age difference between young stars inside the complexes and those in their direct vicinity in the arms of at least 10 Myr. This time-scale may relate to the minimum time for stellar evaporation, although we cannot exclude the in situ formation of stars. As expected, star formation preferentially occurs in spiral arms. Our findings reveal turbulent-driven hierarchical star formation along the arms of a grand-design galaxy.
A hierarchical approach for the design improvements of an Organocat biorefinery.
Abdelaziz, Omar Y; Gadalla, Mamdouh A; El-Halwagi, Mahmoud M; Ashour, Fatma H
2015-04-01
Lignocellulosic biomass has emerged as a potentially attractive renewable energy source. Processing technologies of such biomass, particularly its primary separation, still lack economic justification due to intense energy requirements. Establishing an economically viable and energy efficient biorefinery scheme is a significant challenge. In this work, a systematic approach is proposed for improving basic/existing biorefinery designs. This approach is based on enhancing the efficiency of mass and energy utilization through the use of a hierarchical design approach that involves mass and energy integration. The proposed procedure is applied to a novel biorefinery called Organocat to minimize its energy and mass consumption and total annualized cost. An improved heat exchanger network with minimum energy consumption of 4.5 MJ/kg dry biomass is designed. An optimal recycle network with zero fresh water usage and minimum waste discharge is also constructed, making the process more competitive and economically attractive. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mobile Variable Depth Sampling System Design Study
Energy Technology Data Exchange (ETDEWEB)
BOGER, R.M.
2000-08-25
A design study is presented for a mobile, variable depth sampling system (MVDSS) that will support the treatment and immobilization of Hanford LAW and HLW. The sampler can be deployed in a 4-inch tank riser and has a design that is based on requirements identified in the Level 2 Specification (latest revision). The waste feed sequence for the MVDSS is based on Phase 1, Case 3S6 waste feed sequence. Technical information is also presented that supports the design study.
Kim, Namhee; Zahran, Mai; Schlick, Tamar
2015-01-01
The modular organization of RNA structure has been exploited in various computational and theoretical approaches to identify RNA tertiary (3D) motifs and assemble RNA structures. Riboswitches exemplify this modularity in terms of both structural and functional adaptability of RNA components. Here, we extend our computational approach based on tree graph sampling to the prediction of riboswitch topologies by defining additional edges to mimic pseudoknots. Starting from a secondary (2D) structure, we construct an initial graph deduced from predicted junction topologies by our data-mining algorithm RNAJAG trained on known RNAs; we sample these graphs in 3D space guided by knowledge-based statistical potentials derived from bending and torsion measures of internal loops as well as radii of gyration for known RNAs. We present graph sampling results for 10 representative riboswitches, 6 of them with pseudoknots, and compare our predictions to solved structures based on global and local RMSD measures. Our results indicate that the helical arrangements in riboswitches can be approximated using our combination of modified 3D tree graph representations for pseudoknots, junction prediction, graph moves, and scoring functions. Future challenges in the field of riboswitch prediction and design are also discussed.
Missing observations in multiyear rotation sampling designs
Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)
1982-01-01
Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case when a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered when missing segments are not replaced, and when missing segments are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two year design and then, based on the observed two year design after segment losses have been taken into account, choose the best possible three year design having the observed two year parent design.
Yao, Haimin; Gao, Huajian
2006-06-01
Gecko and many insects have evolved specialized adhesive tissues with bottom-up designed (from nanoscale and up) hierarchical structures that allow them to maneuver on vertical walls and ceilings. The adhesion mechanisms of gecko must be robust enough to function on unknown rough surfaces and also easily releasable upon animal movement. How does nature design such macroscopic sized robust and releasable adhesion devices? How can an adhesion system designed for robust attachment simultaneously allow easy detachment? These questions have motivated the present investigation on mechanics of robust and releasable adhesion in biology. On the question of robust adhesion, we introduce a fractal gecko hairs model, which assumes self-similar fibrillar structures at multiple hierarchical levels mimicking gecko's spatula ultrastructure, to show that structural hierarchy plays a key role in robust adhesion: it allows the work of adhesion to be exponentially enhanced with each added level of hierarchy. We demonstrate that, barring fiber fracture, the fractal gecko hairs can be designed from nanoscale and up to achieve flaw tolerant adhesion at any length scales. However, consideration of crack-like flaws in the hairs themselves results in an upper size limit for flaw tolerant design. On the question of releasable adhesion, we hypothesize that the asymmetrically aligned seta hairs of gecko form a strongly anisotropic material with adhesion strength strongly varying with the direction of pulling. We use analytical solutions to show that a strongly anisotropic elastic solid indeed exhibits a strongly anisotropic adhesion strength when sticking on a rough surface. Furthermore, we perform finite element calculations to show that the adhesion strength of a strongly anisotropic attachment pad exhibits essentially two levels of adhesion strength depending on the direction of pulling, resulting in an orientation-controlled switch between attachment and detachment. These findings not only
Sample design effects in landscape genetics
Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.
2012-01-01
An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially-explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), alleles per locus (5 and 10), individuals sampled (10-300), and generational times after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and number of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.
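The five sampling regimes compared above can be illustrated with a small sketch. All coordinates, function names, and parameters here are hypothetical illustrations of the regime definitions, not the authors' simulation code (the single-study-site regime behaves like a cluster around a fixed location and is omitted):

```python
import random

def sample_locations(coords, regime, n, seed=0):
    """Select n sampling locations from candidate points under one of the
    regimes compared in the study: random, linear (transect), systematic
    (regular spacing), or cluster. coords: list of (x, y) tuples."""
    rng = random.Random(seed)
    if regime == "random":
        return rng.sample(coords, n)
    if regime == "linear":
        # points closest to a horizontal transect through the landscape middle
        mid_y = sum(y for _, y in coords) / len(coords)
        return sorted(coords, key=lambda p: abs(p[1] - mid_y))[:n]
    if regime == "systematic":
        # every k-th point of a fixed ordering approximates a regular grid
        step = max(1, len(coords) // n)
        return sorted(coords)[::step][:n]
    if regime == "cluster":
        # the n points nearest a randomly chosen cluster centre
        centre = rng.choice(coords)
        dist = lambda p: (p[0] - centre[0]) ** 2 + (p[1] - centre[1]) ** 2
        return sorted(coords, key=dist)[:n]
    raise ValueError(f"unknown regime: {regime}")

grid = [(x, y) for x in range(20) for y in range(20)]  # 20 x 20 landscape
for regime in ("random", "linear", "systematic", "cluster"):
    assert len(sample_locations(grid, regime, 25)) == 25
```

In a simulation study like the one described, each regime would feed the selected individuals' genotypes into the downstream landscape-genetic analysis.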
Design of Energy-efficient Hierarchical Scheduling for Integrated Modular Avionics Systems
Institute of Scientific and Technical Information of China (English)
ZHOU Tianran; XIONG Huagang
2012-01-01
Recently the integrated modular avionics (IMA) architecture, which introduces the concept of resource partitions, has become popular as an alternative to the traditional federated architecture. This study investigates the problem of designing hierarchical scheduling for IMA systems. The proposed scheduler model enables strong temporal partitioning, so that multiple hard real-time applications can be easily integrated into a uniprocessor platform. This paper derives the mathematical relationships among partition cycle, partition capacity and schedulability under the real-time condition, and then proposes an algorithm for optimizing partition parameters. Real-time tasks with arbitrary deadlines are considered for generality. To further improve the basic algorithm and reduce the energy consumption of embedded systems in aircraft, a power optimization approach is also proposed by exploiting the slack time. Experimental results show that the designed system can guarantee the hard real-time requirement and reduce the power consumption by at least 14%.
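The relationship between partition cycle, partition capacity, and schedulability can be sketched with simple necessary-condition checks. This is a minimal utilization-style sketch under assumed periodic servers, not the paper's actual derivation or optimization algorithm:

```python
def partitions_feasible(partitions):
    """Necessary condition for a set of temporal partitions on one
    processor: total utilization must not exceed 1.
    partitions: list of (cycle, capacity) pairs, meaning the partition
    receives `capacity` time units in every `cycle` time units."""
    return sum(cap / cyc for cyc, cap in partitions) <= 1.0

def partition_can_serve(cycle, capacity, tasks):
    """Rough check for the hard real-time tasks inside one partition:
    task utilization must fit the partition's share, and the worst-case
    blackout of (cycle - capacity) must be shorter than the shortest
    task period. tasks: list of (wcet, period) pairs."""
    share = capacity / cycle
    util = sum(wcet / period for wcet, period in tasks)
    min_period = min(period for _, period in tasks)
    return util <= share and (cycle - capacity) < min_period

# three partitions: e.g. (cycle=20, capacity=5) is a 25% processor share
assert partitions_feasible([(20, 5), (40, 10), (50, 10)])
assert not partitions_feasible([(10, 6), (10, 6)])
```

A partition-parameter optimizer of the kind the paper proposes would search over (cycle, capacity) pairs subject to checks like these, preferring parameters that also maximize slack for power management.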
Energy Technology Data Exchange (ETDEWEB)
Paik, Taejong; Yun, Hongseok; Fleury, Blaise; Hong, Sung-Hoon; Jo, Pil Sung; Wu, Yaoting; Oh, Soong-Ju; Cargnello, Matteo; Yang, Haoran; Murray, Christopher B.; Kagan, Cherie R.
2017-02-08
We demonstrate the fabrication of hierarchical materials by controlling the structure of highly ordered binary nanocrystal superlattices (BNSLs) on multiple length scales. Combinations of magnetic, plasmonic, semiconducting, and insulating colloidal nanocrystal (NC) building blocks are self-assembled into BNSL membranes via the liquid–interfacial assembly technique. Free-standing BNSL membranes are transferred onto topographically structured poly(dimethylsiloxane) molds via the Langmuir–Schaefer technique and then deposited in patterns onto substrates via transfer printing. BNSLs with different structural motifs are successfully patterned into various meso- and microstructures such as lines, circles, and even three-dimensional grids across large-area substrates. A combination of electron microscopy and grazing incidence small-angle X-ray scattering (GISAXS) measurements confirm the ordering of NC building blocks in meso- and micropatterned BNSLs. This technique demonstrates structural diversity in the design of hierarchical materials by assembling BNSLs from NC building blocks of different composition and size by patterning BNSLs into various size and shape superstructures of interest for a broad range of applications.
The impact of hierarchical memory systems on linear algebra algorithm design
Energy Technology Data Exchange (ETDEWEB)
Gallivan, K.; Jalby, W.; Meier, U.; Sameh, A.
1987-09-14
Performing an extremely detailed performance optimization analysis is counterproductive when the concern is performance behavior on a class of architecture, since general trends are obscured by the overwhelming number of machine-specific considerations required. Instead, in this paper, a methodology is used which identifies the effects of a hierarchical memory system on the performance of linear algebra algorithms on multivector processors; provides a means of producing a set of algorithm parameters, i.e., blocksizes, as functions of system parameters which yield near-optimal performance; and provides guidelines for algorithm designers which reduce the influence of the hierarchical memory system on algorithm performance to negligible levels and thereby allow them to concentrate on machine-specific optimizations. The remainder of this paper comprises five major discussions. First, the methodology and the attendant architectural model are discussed. Next, an analysis of the basic BLAS3 matrix-matrix primitive is presented. This is followed by a discussion of three block algorithms: a block LU decomposition, a block LDL^T decomposition and a block Gram-Schmidt algorithm. 22 refs., 9 figs.
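The blocking idea behind these algorithms can be illustrated with a minimal blocked matrix-multiply sketch (the BLAS3 matrix-matrix primitive). This is pure Python for clarity, and the blocksize `b` stands in for the parameter that would be tuned to the cache or local-memory size of the machine:

```python
def matmul_blocked(A, B, b=32):
    """C = A @ B computed block-by-block, so each b-by-b tile of A, B,
    and C is reused while resident in fast memory. The arithmetic is
    identical to the naive triple loop; only the iteration order changes."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, b):              # loop over blocks of C rows
        for kk in range(0, n, b):          # loop over blocks of the k dimension
            for jj in range(0, n, b):      # loop over blocks of C columns
                for i in range(ii, min(ii + b, n)):
                    for k in range(kk, min(kk + b, n)):
                        a_ik, row_b, row_c = A[i][k], B[k], C[i]
                        for j in range(jj, min(jj + b, n)):
                            row_c[j] += a_ik * row_b[j]
    return C

A = [[1.0 if i == j else 0.0 for j in range(8)] for i in range(8)]  # identity
B = [[float(i * 8 + j) for j in range(8)] for i in range(8)]
assert matmul_blocked(A, B, b=4) == B  # I @ B == B for any blocksize
```

Choosing `b` so that three b-by-b tiles fit in fast memory is exactly the kind of system-parameter-to-blocksize mapping the methodology above aims to produce.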
Hierarchical design of a polymeric nanovehicle for efficient tumor regression and imaging
An, Jinxia; Guo, Qianqian; Zhang, Peng; Sinclair, Andrew; Zhao, Yu; Zhang, Xinge; Wu, Kan; Sun, Fang; Hung, Hsiang-Chieh; Li, Chaoxing; Jiang, Shaoyi
2016-04-01
Effective delivery of therapeutics to disease sites significantly contributes to drug efficacy, toxicity and clearance. Here we designed a hierarchical polymeric nanoparticle structure for anti-cancer chemotherapy delivery by utilizing state-of-the-art polymer chemistry and co-assembly techniques. This novel structural design combines the most desired merits for drug delivery in a single particle, including a long in vivo circulation time, inhibited non-specific cell uptake, enhanced tumor cell internalization, pH-controlled drug release and simultaneous imaging. This co-assembled nanoparticle showed exceptional stability in complex biological media. Benefiting from the synergistic effects of zwitterionic and multivalent galactose polymers, drug-loaded nanoparticles were selectively internalized by cancer cells rather than normal tissue cells. In addition, the pH-responsive core retained their cargo within their polymeric coating through hydrophobic interaction and released it under slightly acidic conditions. In vivo pharmacokinetic studies in mice showed minimal uptake of nanoparticles by the mononuclear phagocyte system and excellent blood circulation half-lives of 14.4 h. As a result, tumor growth was completely inhibited and no damage was observed for normal organ tissues. This newly developed drug nanovehicle has great potential in cancer therapy, and the hierarchical design principle should provide valuable information for the development of the next generation of drug delivery systems.
Energy Technology Data Exchange (ETDEWEB)
Gorentla Venkata, Manjunath [ORNL; Shamis, Pavel [ORNL; Graham, Richard L [ORNL; Ladd, Joshua S [ORNL; Sampath, Rahul S [ORNL
2013-01-01
Many scientific simulations, using the Message Passing Interface (MPI) programming model, are sensitive to the performance and scalability of reduction collective operations such as MPI Allreduce and MPI Reduce. These operations are the most widely used abstractions to perform mathematical operations over all processes that are part of the simulation. In this work, we propose a hierarchical design to implement the reduction operations on multicore systems. This design aims to improve the efficiency of reductions by 1) tailoring the algorithms and customizing the implementations for various communication mechanisms in the system, 2) providing the ability to configure the depth of hierarchy to match the system architecture, and 3) providing the ability to independently progress each level of this hierarchy. Using this design, we implement MPI Allreduce and MPI Reduce operations (and their nonblocking variants MPI Iallreduce and MPI Ireduce) for all message sizes, and evaluate them on multiple architectures including InfiniBand and Cray XT5. We leverage and enhance our existing infrastructure, Cheetah, a framework for implementing hierarchical collective operations, to implement these reductions. The experimental results show that the Cheetah reduction operations outperform production-grade MPI implementations such as Open MPI default, Cray MPI, and MVAPICH2, demonstrating their efficiency, flexibility and portability. On InfiniBand systems, with a microbenchmark, a 512-process Cheetah nonblocking Allreduce and Reduce achieve speedups of 23x and 10x, respectively, compared to the default Open MPI reductions. The blocking variants of the reduction operations also show similar performance benefits. A 512-process nonblocking Cheetah Allreduce achieves a speedup of 3x, compared to the default MVAPICH2 Allreduce implementation. On a Cray XT5 system, a 6144-process Cheetah Allreduce outperforms the Cray MPI by 145%. The evaluation with an application kernel, Conjugate
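The hierarchical idea, reducing within each node first and then across node leaders, can be sketched without MPI. This is an illustrative two-level sum-allreduce over plain Python lists, not the Cheetah implementation or the MPI API:

```python
def hierarchical_allreduce(values, group_size):
    """Two-level sum-allreduce: each 'node' (a group of ranks) reduces
    locally to its leader, leaders reduce across nodes, and the global
    result is broadcast back to every rank. values[i] is rank i's input."""
    # level 1: intra-node reduction to each group leader
    groups = [values[i:i + group_size]
              for i in range(0, len(values), group_size)]
    leader_sums = [sum(g) for g in groups]
    # level 2: inter-node reduction across the leaders
    total = sum(leader_sums)
    # broadcast: every rank receives the same global result
    return [total] * len(values)

ranks = list(range(1, 9))  # 8 ranks, two 'nodes' of 4, values 1..8
assert hierarchical_allreduce(ranks, group_size=4) == [36] * 8
```

The benefit on real hardware comes from level 1 using fast shared-memory communication and only the leaders touching the slower inter-node network; configuring `group_size` corresponds to matching the hierarchy depth to the system architecture.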
Li, Xin; Yu, Jiaguo; Jaroniec, Mietek
2016-05-01
As a green and sustainable technology, semiconductor-based heterogeneous photocatalysis has received much attention in the last few decades because it has potential to solve both energy and environmental problems. To achieve efficient photocatalysts, various hierarchical semiconductors have been designed and fabricated at the micro/nanometer scale in recent years. This review presents a critical appraisal of fabrication methods, growth mechanisms and applications of advanced hierarchical photocatalysts. Especially, the different synthesis strategies such as two-step templating, in situ template-sacrificial dissolution, self-templating method, in situ template-free assembly, chemically induced self-transformation and post-synthesis treatment are highlighted. Finally, some important applications including photocatalytic degradation of pollutants, photocatalytic H2 production and photocatalytic CO2 reduction are reviewed. A thorough assessment of the progress made in photocatalysis may open new opportunities in designing highly effective hierarchical photocatalysts for advanced applications ranging from thermal catalysis, separation and purification processes to solar cells.
Fast Moving Sampling Designs in Temporal Networks
Thompson, Steven K
2015-01-01
In a study related to this one I set up a temporal network simulation environment for evaluating network intervention strategies. A network intervention strategy consists of a sampling design to select nodes in the network. An intervention is applied to nodes in the sample for the purpose of changing the wider network in some desired way. The network intervention strategies can represent natural agents such as viruses that spread in the network, programs to prevent or reduce the virus spread, and the agency of individual nodes, such as people, in forming and dissolving the links that create, maintain or change the network. The present paper examines idealized versions of the sampling designs used in that study. The purpose is to better understand the natural and human network designs in real situations and to provide simple design-based inferences that in turn measure properties of the time-changing network. The designs use link tracing and sometimes other probabilistic procedures to add units ...
Directory of Open Access Journals (Sweden)
Piergiorgi Paolo
2006-11-01
Abstract Background Uncertainty often affects molecular biology experiments and data for different reasons. Heterogeneity of gene or protein expression within the same tumor tissue is an example of biological uncertainty which should be taken into account when molecular markers are used in decision making. Tissue Microarray (TMA) experiments allow for large-scale profiling of tissue biopsies, investigating protein patterns that characterize specific disease states. TMA studies deal with multiple sampling of the same patient, and therefore with multiple measurements of the same protein target, to account for possible biological heterogeneity. The aim of this paper is to provide and validate a classification model that takes into consideration the uncertainty associated with measuring replicate samples. Results We propose an extension of the well-known Naïve Bayes classifier, which accounts for biological heterogeneity in a probabilistic framework, relying on Bayesian hierarchical models. The model, which can be efficiently learned from the training dataset, exploits a closed-form classification equation, thus incurring no additional computational cost with respect to the standard Naïve Bayes classifier. We validated the approach on several simulated datasets, comparing its performance with the Naïve Bayes classifier. Moreover, we demonstrated that explicitly dealing with heterogeneity can improve classification accuracy on a TMA prostate cancer dataset. Conclusion The proposed Hierarchical Naïve Bayes classifier can be conveniently applied in problems where within-sample heterogeneity must be taken into account, such as TMA experiments and biological contexts where several measurements (replicates) are available for the same biological sample. The performance of the new approach is better than that of the standard Naïve Bayes model, in particular when the within-sample heterogeneity differs between classes.
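The closed-form idea behind a hierarchical Naïve Bayes classifier can be illustrated with a one-feature Gaussian sketch. Assuming replicates x ~ N(theta, sigma_w^2) with a sample-level theta ~ N(mu_c, sigma_b_c^2), the replicate mean is marginally N(mu_c, sigma_b_c^2 + sigma_w^2 / n), which yields a classification rule at no extra computational cost. The class parameters and the known variances below are illustrative assumptions, not values from the paper, which learns them from training data.

```python
import math

def hierarchical_nb_classify(replicates, class_params, prior, sigma_w2):
    """Classify one sample from its replicate measurements.
    Model (single feature): replicate ~ N(theta, sigma_w2), with
    theta ~ N(mu_c, sigma_b2_c) per class c.  The replicate mean is
    then marginally N(mu_c, sigma_b2_c + sigma_w2 / n): a closed form."""
    n = len(replicates)
    xbar = sum(replicates) / n
    best, best_logp = None, -math.inf
    for c, (mu, sigma_b2) in class_params.items():
        var = sigma_b2 + sigma_w2 / n   # heterogeneity + replicate noise
        logp = (math.log(prior[c])
                - 0.5 * math.log(2 * math.pi * var)
                - (xbar - mu) ** 2 / (2 * var))
        if logp > best_logp:
            best, best_logp = c, logp
    return best
```

Note that as the number of replicates n grows, the replicate-noise term shrinks but the biological heterogeneity term sigma_b2 remains, which is exactly what a standard Naïve Bayes on the averaged replicates would miss.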
Directory of Open Access Journals (Sweden)
Mira Kania Sabariah
2016-05-01
Drawing instruction in early childhood is an important lesson, rich in stimulation of children's growth and development, and it helps train fine motor skills. Many applications are already available for such learning, including interactive learning applications. Our observations showed that the experiences offered by existing applications are very diverse and do not yet represent the learning model and characteristics of early childhood (4-6 years). Based on these results, the Hierarchical Task Analysis method generated a list of tasks that must be performed in designing a user interface that represents the user experience in drawing instruction. Using the Heuristic Evaluation method, we then found that the usability of the model reached a very good level of understanding and that the model can be enhanced to produce a better design.
A hierarchical layout design method based on rubber band potential-energy descending
Directory of Open Access Journals (Sweden)
Ou Cheng Yi
2016-01-01
The strip packing problem is an important sub-problem of the cutting stock problem. Its application domains include sheet metal, shipbuilding, wood, furniture, garments, shoes and glass. In this paper, a hierarchical layout design method based on rubber band potential-energy descending is proposed. The basic concept of the rubber band enclosing model is described in detail. We divide the layout process into three stages: an initial layout stage, a rubber band enclosing stage and a local adjustment stage. In each stage, the most efficient strategies are employed to further improve the layout solution. Computational results show that the proposed method performed better than the GLSHA algorithm in utilization for three out of nine instances.
Phipps, Denham L; Meakin, George H; Beatty, Paul C W
2011-07-01
While hierarchical task analysis (HTA) is well established as a general task analysis method, there appears to be a need to make more explicit both the cognitive elements of a task and the design requirements that arise from an analysis. One way of achieving this is to use extensions to the standard HTA. The aim of the current study is to evaluate the use of two such extensions--the sub-goal template (SGT) and the skills-rules-knowledge (SRK) framework--to analyse the cognitive activity that takes place during the planning and delivery of anaesthesia. In quantitative terms, the two methods were found to have relatively poor inter-rater reliability; qualitative evidence nevertheless suggests that they were of value in generating insights about anaesthetists' information handling and cognitive performance. Implications for the use of an extended HTA to analyse work systems are discussed.
Rational design of hierarchically nanostructured electrodes for solid oxide fuel cells
Çelikbilek, Ӧzden; Jauffrès, David; Siebert, Elisabeth; Dessemond, Laurent; Burriel, Mónica; Martin, Christophe L.; Djurado, Elisabeth
2016-11-01
Understanding, controlling and optimizing the mechanisms of electrode reactions must be addressed for high-performance energy conversion and storage devices. Hierarchically structured porous films of mixed ionic electronic conductors (MIECs) and their composites with ionic conductors offer unique properties. However, correlating the intrinsic properties of electrode components with microstructural features remains a challenging task. Here, La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) and La0.6Sr0.4Co0.2Fe0.8O3-δ: Ce0.9Gd0.1O2-δ (LSCF:CGO) composite cathodes with hierarchical porosity from the nano to the micro range are fabricated. The LSCF film exhibits exceptional electrode performance, with area specific resistance values of 0.021 and 0.065 Ω cm2 at 650 and 600 °C, respectively, whereas the LSCF:CGO composite is only slightly superior to pure LSCF below 450 °C. We report for the first time a numerical 3D Finite Element Model (FEM) that incorporates real micro/nanostructural parameters from 3D reconstructions into a simple geometry similar to the experimentally observed columnar features. The model demonstrates that heterogeneities in porosity within the film thickness and percolation of the ionically conducting phase significantly impact bulk transport at low temperatures. Design guidelines relating performance to microstructure and bulk material properties, in relation to the experimental results, are proposed. Our model has the potential to be extended to the rational design of larger, regular and heterogeneous microstructures.
Jang, K L; McCrae, R R; Angleitner, A; Riemann, R; Livesley, W J
1998-06-01
The common variance among personality traits can be summarized in the factors of the five-factor model, which are known to be heritable. This study examined heritability of the residual specific variance in facet-level traits from the Revised NEO Personality Inventory. Analyses of raw and residual facet scales across Canadian (183 monozygotic [MZ] and 175 dizygotic [DZ] pairs) and German (435 MZ and 205 DZ pairs) twin samples showed genetic and environmental influences of the same type and magnitude across the 2 samples for most facets. Additive genetic effects accounted for 25% to 65% of the reliable specific variance. Results provide strong support for hierarchical models of personality that posit a large number of narrow traits in addition to a few broader trait factors or domains. Facet-level traits are not simply exemplars of the broad factors they define; they are discrete constructs with their own heritable and thus biological basis.
Bagby, R Michael; Sellbom, Martin; Ayearst, Lindsay E; Chmielewski, Michael S; Anderson, Jaime L; Quilty, Lena C
2014-01-01
In this study our goal was to examine the hierarchical structure of personality pathology as conceptualized by Harkness and McNulty's (1994) Personality Psychopathology Five (PSY-5) model, as recently operationalized by the MMPI-2-RF (Ben-Porath & Tellegen, 2011) PSY-5r scales. We used Goldberg's (2006) "bass-ackwards" method to obtain factor structure using PSY-5r item data, successively extracting from 1 to 5 factors in a sample of psychiatric patients (n = 1,000) and a sample of university undergraduate students (n = 1,331). Participants from these samples had completed either the MMPI-2 or the MMPI-2-RF. The results were mostly consistent across the 2 samples, with some differences at the 3-factor level. In the patient sample a factor structure representing 3 broad psychopathology domains (internalizing, externalizing, and psychoticism) emerged; in the student sample the 3-factor level represented what is more commonly observed in "normal-range" personality models (negative emotionality, introversion, and disconstraint). At the 5-factor level the basic structure was similar across the 2 samples and represented well the PSY-5r domains.
Designing an enhanced groundwater sample collection system
Energy Technology Data Exchange (ETDEWEB)
Schalla, R.
1994-10-01
As part of an ongoing technical support mission to achieve excellence and efficiency in environmental restoration activities at the Laboratory for Energy and Health-Related Research (LEHR), Pacific Northwest Laboratory (PNL) provided guidance on the design and construction of monitoring wells and identified the most suitable type of groundwater sampling pump and accessories for monitoring wells. The goal was to utilize a monitoring well design that would allow for hydrologic testing and reduce turbidity to minimize the impact of sampling. The sampling results of the newly designed monitoring wells were clearly superior to those of the previously installed monitoring wells. The new wells exhibited reduced turbidity, in addition to improved access for instrumentation and hydrologic testing. The variable frequency submersible pump was selected as the best choice for obtaining groundwater samples. The literature references are listed at the end of this report. Despite some initial difficulties, the actual performance of the variable frequency submersible pump and its accessories was effective in reducing sampling time and labor costs, and its ease of use was preferred over the previously used bladder pumps. The surface seal system, called the Dedicator, proved to be a useful accessory that prevents surface contamination while providing easy access for water-level measurements and for connecting the pump. Cost savings resulted from the use of the pre-production pumps (beta units) donated by the manufacturer for the demonstration. However, larger savings resulted from the shortened field time due to the ease of using the submersible pumps and the surface seal access system. Proper deployment of the monitoring wells also resulted in cost savings and ensured representative samples.
2012-01-22
ICES Report 12-05, January 2012. M.J. Borden, E. Rank, T.J.R. Hughes, An Isogeometric Design-through-analysis Methodology based on Adaptive Hierarchical Refinement of NURBS, Immersed Boundary Methods, and T-spline CAD Surfaces.
2013-01-01
Distance sampling is widely used to estimate the abundance or density of wildlife populations. Methods to estimate wildlife mortality rates have developed largely independently from distance sampling, despite the conceptual similarities between estimation of cumulative mortality and the population density of living animals. Conventional distance sampling analyses rely on the assumption that animals are distributed uniformly with respect to transects and thus require randomized placement of tr...
Tang, Yichao; Lin, Gaojian; Han, Lin; Qiu, Songgang; Yang, Shu; Yin, Jie
2015-11-25
Applying hierarchical cuts to thin sheets of elastomer generates super-stretchable and reconfigurable metamaterials, exhibiting highly nonlinear stress-strain behaviors and tunable phononic bandgaps. The cut concept fails on brittle thin sheets due to severe stress concentration in the rotating hinges. By engineering the local hinge shapes and global hierarchical structure, cut-based reconfigurable metamaterials with largely enhanced strength are realized.
POLLUTION PREVENTION IN THE DESIGN OF CHEMICAL PROCESSES USING HIERARCHICAL DESIGN AND SIMULATION
The design of chemical processes is normally an interactive process of synthesis and analysis. When one also desires or needs to limit the amount of pollution generated by the process the difficulty of the task can increase substantially. In this work, we show how combining hier...
Stratified sampling design based on data mining.
Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung
2013-09-01
Our objective was to explore classification rules, based on data mining methodologies, to be used in defining strata in stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics and then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From this database, we used the data for single-specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and the population density of the provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and the number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by type of provider location and number of beds explained 2% and 0.2% of the variance, respectively. This study demonstrated that data mining methods can be used to design efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
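The first step of the stratification pipeline, clustering providers on a few profile variables, can be sketched with a plain k-means implementation; in the paper, a decision tree fitted on the resulting cluster labels then turns the clusters into explicit cutoff rules. The two-dimensional toy features below are illustrative, not the study's variables.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on small feature tuples (e.g. inpatients per
    specialist, number of beds); the labels serve as candidate strata."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest center
        labels = [min(range(k),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(pt, centers[c])))
                  for pt in points]
        # move each center to the mean of its members
        for c in range(k):
            members = [pt for pt, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return labels, centers
```

A shallow decision tree trained on these labels (not shown) would recover threshold rules such as "inpatients per specialist above x and beds above y", which is what makes the strata operational for survey administration.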
Liu, Ya; Kuksenok, Olga; Balazs, Anna C
2013-10-15
One of the challenges in creating high-performance polymer nanocomposites is establishing effective routes for tailoring the morphology of both the polymer mixture and the dispersed nanoparticles, which contribute desirable optical, electrical, and mechanical properties. Using computational modeling, we devise an effective method for simultaneously controlling the spatial regularity of the polymer phases and the distribution of the rods within this matrix. We focus on mixtures of photosensitive AB binary blends and A-coated nanorods; in the presence of light, the binary blends undergo a reversible chemical reaction and phase separation to yield a morphology resembling that of microphase-separated diblock copolymers. We simulate the effects of illuminating this sample with a uniform background light and a higher intensity, spatially localized light, which is rastered over the sample with a velocity v. The resulting material displays a periodically ordered, essentially defect-free morphology, with the A-like nanoparticles localized in lamellar A domains. The dynamic behavior of the rods within this system can be controlled by varying the velocity v and Γ2, the reaction rate coefficient produced by the higher intensity light. Specifically, the rastering light can drive the rods to be "pushed" along the lamellar domains or oriented perpendicular to these stripes. Given these attributes, we isolate scenarios where the system encompasses a complex hierarchical structure, with rods that are simultaneously ordered along two distinct directions within the periodic matrix. Namely, the rods form long nanowires that span the length of the sample and lie perpendicular to these wires in regularly spaced A lamellae. Hence, our approach points to new routes for producing self-organized rectangular grids, which can impart remarkable optoelectronic or mechanical properties to the materials.
Laplanche, Christophe
2010-04-01
The author compares 12 hierarchical models with the aim of estimating the abundance of fish in alpine streams by using removal sampling data collected at multiple locations. The most expanded model accounts for (i) variability of the abundance among locations, (ii) variability of the catchability among locations, and (iii) residual variability of the catchability among fish. Eleven model reductions are considered, depending on which variabilities are included in the model. The most restrictive model considers none of the aforementioned variabilities. Computations for the latter model can be achieved by using the algorithm presented by Carle and Strub (Biometrics 1978, 34, 621-630). Maximum a posteriori and interval estimates of the parameters, as well as the Akaike and Bayesian information criteria of model fit, are computed from samples simulated by a Markov chain Monte Carlo method. The models are compared using a trout (Salmo trutta fario) parr (0+) removal sampling data set collected at three locations in the Pyrénées mountain range (Haute-Garonne, France) in July 2006. Results suggest that, in this case study, variability of the catchability is not significant, either among fish or among locations. Variability of the abundance among locations is significant. 95% interval estimates of the abundances at the three locations are [0.15, 0.24], [0.26, 0.36], and [0.45, 0.58] parr per m². Such differences are likely the consequence of habitat variability.
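For intuition, the simplest member of this model family (constant catchability, none of the variability terms) can be fitted by a brute-force maximum-likelihood search rather than the Carle-Strub algorithm or MCMC: each pass removes a Binomial(remaining, p) catch. This is a sketch under those assumptions, not the author's hierarchical Bayesian fit.

```python
import math

def log_binom(n, k, p):
    """Binomial log-pmf via lgamma; -inf outside the support."""
    if k > n or p <= 0 or p >= 1:
        return -math.inf
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def removal_mle(catches, n_max=1000):
    """Grid-search ML estimate of abundance N and catchability p from
    successive removal-pass catches, assuming constant catchability."""
    total = sum(catches)
    best = (total, 0.5, -math.inf)
    for N in range(total, n_max + 1):
        for p in [i / 100 for i in range(1, 100)]:
            remaining, ll = N, 0.0
            for c in catches:
                ll += log_binom(remaining, c, p)
                remaining -= c
            if ll > best[2]:
                best = (N, p, ll)
    return best[0], best[1]
```

With three passes catching roughly half the remaining fish each time, the estimator recovers an abundance close to the true value; the hierarchical models in the paper then let N and p vary across locations and fish.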
Directory of Open Access Journals (Sweden)
Ohaeri Jude U
2010-07-01
Abstract Background An understanding of depressive symptomatology from the perspective of confirmatory factor analysis (CFA) could facilitate valid and interpretable comparisons across cultures. The objectives of the study were: (i) using the responses of a sample of Arab college students to the Beck Depression Inventory (BDI-II) in CFA, to compare the "goodness of fit" indices of the original dimensional three- and two-factor first-order models, and their modifications, with the corresponding hierarchical models (i.e., higher-order and bifactor models); (ii) to assess the psychometric characteristics of the BDI-II, including convergent/discriminant validity with the Hopkins Symptom Checklist (HSCL-25). Method Participants (N = 624) were Kuwaiti national college students, who completed the questionnaires in class. CFA was done with AMOS, version 16. Eleven models were compared using eight "fit" indices. Results In CFA, all the models met most "fit" criteria. While the higher-order model did not provide improved fit over the dimensional first-order factor models, the bifactor model (BFM) had the best fit indices (CMIN/DF = 1.73; GFI = 0.96; RMSEA = 0.034). All regression weights of the dimensional models were significantly different from zero. Conclusion The broadly adequate fit of the various models indicates that they have some merit and implies that the relationship between the domains of depression probably contains hierarchical and dimensional elements. The bifactor model is emerging as the best way to account for the clinical heterogeneity of depression. The psychometric characteristics of the BDI-II lend support to our CFA results.
Directory of Open Access Journals (Sweden)
Guiyang Xin
2015-09-01
This paper presents a novel hexapod robot, hereafter named PH-Robot, with three-degree-of-freedom (3-DOF) parallel leg mechanisms based on the concept of an integrated limb mechanism (ILM) for the integration of legged locomotion and arm manipulation. The kinematic model plays an important role in the parametric optimal design and motion planning of robots. However, models of parallel mechanisms are often difficult to obtain because of the implicit relationship between the motions of actuated joints and the motion of the moving platform. In order to derive the kinematic equations of the proposed hexapod robot, an extended hierarchical kinematic modelling method is proposed. According to the kinematic model, the geometrical parameters of the leg are optimized using a comprehensive objective function that considers both dexterity and payload. PH-Robot has distinct advantages in accuracy and load ability over a robot with serial leg mechanisms, as shown by a comparison of performance indices. The reachable workspace of the leg verifies its ability to walk and manipulate. The results of a trajectory tracking experiment demonstrate the correctness of the kinematic model of the hexapod robot.
Hierarchical Bayesian modeling and Markov chain Monte Carlo sampling for tuning-curve analysis.
Cronin, Beau; Stevenson, Ian H; Sur, Mriganka; Körding, Konrad P
2010-01-01
A central theme of systems neuroscience is to characterize the tuning of neural responses to sensory stimuli or the production of movement. Statistically, we often want to estimate the parameters of the tuning curve, such as preferred direction, as well as the associated degree of uncertainty, characterized by error bars. Here we present a new sampling-based, Bayesian method that allows the estimation of tuning-curve parameters, the estimation of error bars, and hypothesis testing. This method also provides a useful way of visualizing which tuning curves are compatible with the recorded data. We demonstrate the utility of this approach using recordings of orientation and direction tuning in primary visual cortex, direction of motion tuning in primary motor cortex, and simulated data.
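A toy version of this sampling-based approach can be written with a random-walk Metropolis sampler for a cosine tuning curve; the posterior draws of the preferred direction give both a point estimate and error bars. The cosine model, the known noise level, and the proposal scale are assumptions made for the sketch, not the paper's exact model.

```python
import math, random

def fit_tuning_curve(thetas, rates, n_samp=4000, sigma=1.0, seed=1):
    """Random-walk Metropolis sampling of (baseline b, amplitude a,
    preferred direction pref) for r = b + a*cos(theta - pref) with
    known Gaussian noise sigma.  Returns post-burn-in posterior draws."""
    rng = random.Random(seed)

    def loglik(b, a, pref):
        return sum(-(r - (b + a * math.cos(t - pref))) ** 2 / (2 * sigma ** 2)
                   for t, r in zip(thetas, rates))

    state = (sum(rates) / len(rates), 1.0, 0.0)   # start near the data mean
    ll = loglik(*state)
    draws = []
    for _ in range(n_samp):
        prop = tuple(s + rng.gauss(0, 0.1) for s in state)
        llp = loglik(*prop)
        if llp > ll or rng.random() < math.exp(llp - ll):  # Metropolis rule
            state, ll = prop, llp
        draws.append(state)
    return draws[n_samp // 2:]   # discard the first half as burn-in
```

The spread of the `pref` draws directly provides the error bar on the preferred direction, and plotting curves for a subset of draws shows which tuning curves are compatible with the data, as the abstract describes.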
Energy Technology Data Exchange (ETDEWEB)
Nasimi, E.; Gabbar, H.A. [Univ. of Ontario Inst. of Tech., Oshawa, Ontario (Canada)
2009-07-01
This paper proposes a new intelligent and highly automated Hierarchical Control Chart (HCC) and operations mapping solution for nuclear power plant operators. It provides control system designers, developers and operators with a single view of all elements and systems across a power plant, with integrated interactive data access and information retrieval capabilities that enable faster fault diagnostics and aid more efficient decision making for routine daily tasks. (author)
Hierarchical Linear Modeling Meta-Analysis of Single-Subject Design Research
Gage, Nicholas A.; Lewis, Timothy J.
2014-01-01
The identification of evidence-based practices continues to provoke issues of disagreement across multiple fields. One area of contention is the role of single-subject design (SSD) research in providing scientific evidence. The debate about SSD's utility centers on three issues: sample size, effect size, and serial dependence. One potential…
A Quality Control Design for Validating Hierarchical Sequencing of Programed Instruction.
Tennyson, Robert D.; Boutwell, Richard C.
A quality control model is proposed to facilitate development of effective instructional programs. The theories of R. M. Gagne and of M. D. Merrill provide the foundations for a theory of sequencing behavior into a hierarchical order to improve the learning potential of an instructional program. The initial step in the procedural model is…
Manual for the Sampling Design Tool for ArcGIS
Buja, Ken; Menza, Charles
2008-01-01
The Biogeography Branch’s Sampling Design Tool for ArcGIS provides a means to effectively develop sampling strategies in a geographic information system (GIS) environment. The tool was produced as part of an iterative process of sampling design development, whereby existing data informs new design decisions. The objective of this process, and hence a product of this tool, is an optimal sampling design which can be used to achieve accurate, high-precision estimates of population metrics at a m...
Analysing designed experiments in distance sampling
Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block
2009-01-01
Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...
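A minimal line-transect density estimator, assuming a half-normal detection function with a single scale parameter sigma, illustrates the core computation: estimate sigma from the observed perpendicular distances, integrate the detection function to get an effective strip half-width, and divide the count by the effectively surveyed area. This is a sketch of the basic method, not the designed-experiment analysis of the paper.

```python
import math

def halfnormal_density(distances, line_length, w):
    """Line-transect density estimate with detection function
    g(x) = exp(-x^2 / (2 sigma^2)).  sigma is estimated by the
    untruncated MLE, adequate when the truncation distance w is
    large relative to sigma."""
    n = len(distances)
    sigma = math.sqrt(sum(x * x for x in distances) / n)
    # effective strip half-width: integral of g(x) from 0 to w
    mu = sigma * math.sqrt(math.pi / 2) * math.erf(w / (sigma * math.sqrt(2)))
    # n animals detected along a transect of length line_length
    return n / (2 * line_length * mu)
```

The covariates mentioned in the abstract (species, age class, habitat, sex) would enter by letting sigma depend on them, which is what multiple-covariate distance sampling does.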
Designing optimal sampling schemes for field visits
CSIR Research Space (South Africa)
Debba, Pravesh
2008-10-01
This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...
Sampling Designs in Qualitative Research: Making the Sampling Process More Public
Onwuegbuzie, Anthony J.; Leech, Nancy L.
2007-01-01
The purpose of this paper is to provide a typology of sampling designs for qualitative researchers. We introduce the following sampling strategies: (a) parallel sampling designs, which represent a body of sampling strategies that facilitate credible comparisons of two or more different subgroups that are extracted from the same levels of study;…
Directory of Open Access Journals (Sweden)
Tomas Uricar
2010-01-01
The unavoidable parametrization of the wireless link represents a major problem in network-coded modulation synthesis in a two-way relay channel. The composite (hierarchical) codeword received at the relay is generally parametrized by the channel gain, forcing any processing at the relay to depend on the channel parameters. In this paper, we introduce codebook design criteria which ensure that all permissible hierarchical codewords have decision regions invariant to the channel parameters (as seen by the relay). We utilize the criterion for a parameter-invariant constellation space boundary to obtain codebooks with channel-parameter-invariant decision regions at the relay. Since the requirements on such codebooks are relatively strict, the construction of higher-order codebooks requires slightly relaxed design criteria. We show that a construction algorithm based on these relaxed criteria provides a feasible way to design codebooks of arbitrary cardinality. The promising performance benefits of the example codebooks (compared to classical linear modulation alphabets) are exemplified by a minimum distance analysis.
Design and evaluation of a hierarchical control architecture for an autonomous underwater vehicle
Institute of Scientific and Technical Information of China (English)
BIAN Xin-qian; QIN Zheng; YAN Zhe-ping
2008-01-01
This paper investigates a control architecture for autonomous underwater vehicles (AUVs). After describing the hybrid nature of the AUV control system, we present a hierarchical AUV control architecture organized in three layers: a mission layer, a task layer and an execution layer. The state supervisor and the task coordinator are two key modules handling discrete events, so we describe these two modules in detail. Finally, we carried out a series of tests to verify this architecture. The test results show that the AUV can perform autonomous missions effectively and safely. We conclude that the control architecture is valid and practical.
DEFF Research Database (Denmark)
Perez-Ramirez, Javier; Christensen, Claus H.; Egeblad, Kresten
2008-01-01
in these materials often imposes intracrystalline diffusion limitations, rendering low utilisation of the zeolite active volume in catalysed reactions. This critical review examines recent advances in the rapidly evolving area of zeolites with improved accessibility and molecular transport. Strategies to enhance ... the properties of the resulting materials and the catalytic function. We particularly dwell on the exciting field of hierarchical zeolites, which couple in a single material the catalytic power of micropores and the facilitated access and improved transport consequence of a complementary mesopore network...
Comaniciu, Cristina
2007-01-01
In this paper, a hierarchical cross-layer design approach is proposed to increase energy efficiency in ad hoc networks through joint adaptation of nodes' transmitting powers and route selection. The design maintains the advantages of the classic OSI model, while accounting for the cross-coupling between layers, through information sharing. The proposed joint power control and routing algorithm is shown to increase significantly the overall energy efficiency of the network, at the expense of a moderate increase in complexity. Performance enhancement of the joint design using multiuser detection is also investigated, and it is shown that the use of multiuser detection can increase the capacity of the ad hoc network significantly for a given level of energy consumption.
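The routing half of the joint design, taken alone with fixed per-hop transmit energies, reduces to a shortest-path computation; a sketch with Dijkstra's algorithm is below. The link-energy graph is a made-up example, and the paper's actual algorithm additionally adapts the transmit powers and shares information across layers.

```python
import heapq

def min_energy_route(links, src, dst):
    """Dijkstra over a graph whose edge weights are per-hop transmit
    energies.  links: {node: [(neighbor, energy), ...]}.
    Returns (total energy, path) or (inf, []) if dst is unreachable."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, energy in links.get(node, []):
            if neighbor not in seen:
                heapq.heappush(pq, (cost + energy, neighbor, path + [neighbor]))
    return float('inf'), []
```

In the cross-layer setting, the edge energies themselves would be outputs of the power-control loop (and of the multiuser detector), so route selection and power adaptation iterate rather than running once.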
Hierarchical hybrid control network design based on LON and master-slave RS-422/485 protocol
Institute of Scientific and Technical Information of China (English)
彭可; 陈际达; 陈岚
2002-01-01
Aiming at the weaknesses of the LON bus, and considering the coexistence of fieldbus and DCS (Distributed Control Systems) in control networks, the authors introduce a hierarchical hybrid control network design based on LON and the master-slave RS-422/485 protocol. This design adopts LON as the trunk; master-slave RS-422/485 control networks are connected to LON as special subnets by dedicated gateways. It is an implementation method for heterogeneous control network integration. Data management is ranked according to the real-time requirements of different network data. The core components, such as the control network nodes, router and gateway, are detailed in the paper. The design utilizes both the communication advantage of LonWorks technology and the more powerful control ability of universal MCUs or PLCs, thus greatly increasing system response speed and the performance-cost ratio.
Directory of Open Access Journals (Sweden)
Poor, H. Vincent
2007-01-01
A hierarchical cross-layer design approach is proposed to increase energy efficiency in ad hoc networks through joint adaptation of nodes' transmitting powers and route selection. The design maintains the advantages of the classic OSI model, while accounting for the cross-coupling between layers, through information sharing. The proposed joint power control and routing algorithm is shown to increase significantly the overall energy efficiency of the network, at the expense of a moderate increase in complexity. Performance enhancement of the joint design using multiuser detection is also investigated, and it is shown that the use of multiuser detection can increase the capacity of the ad hoc network significantly for a given level of energy consumption.
[Variance estimation considering multistage sampling design in multistage complex sample analysis].
Li, Yichong; Zhao, Yinjun; Wang, Limin; Zhang, Mei; Zhou, Maigeng
2016-03-01
Multistage sampling is a frequently used method in random sampling surveys in public health. Clustering or dependence between observations often exists in samples generated by multistage sampling, which are therefore called complex samples. Sampling error may be underestimated and the probability of type I error increased if the multistage sample design is not taken into consideration in the analysis. As the variance (error) estimator for a complex sample is often complicated, statistical software usually adopts the ultimate cluster variance estimate (UCVE) to approximate it, which simply assumes that the sample comes from one-stage sampling. However, with an increased sampling fraction of primary sampling units, the contribution from subsequent sampling stages is no longer trivial, and the ultimate cluster variance estimate may therefore lead to invalid variance estimation. This paper summarizes a method of variance estimation that accounts for the multistage sampling design. Its performance is compared with that of UCVE by simulating random sampling under different sampling schemes using real-world data. Simulation showed that as the primary sampling unit (PSU) sampling fraction increased, UCVE tended to generate increasingly biased estimates, whereas accurate estimates were obtained by the method that accounts for the multistage sampling design.
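The bias described above can be reproduced with a small simulation on a synthetic two-stage population (all sizes and variances below are illustrative assumptions): the average UCVE exceeds the empirical variance of the estimator when the PSU sampling fraction is large, because UCVE omits the first-stage finite population correction.

```python
import random
import statistics

random.seed(42)

# Synthetic two-stage population: 10 PSUs of 40 SSUs each, with
# substantial between-PSU variation (all numbers are illustrative).
N_PSU, M_SSU = 10, 40
pop = [[random.gauss(mu, 1.0) for _ in range(M_SSU)]
       for mu in [random.gauss(0.0, 2.0) for _ in range(N_PSU)]]

n, m = 5, 10                  # sample 5 of 10 PSUs (f1 = 0.5), 10 SSUs each

def one_survey():
    """Draw one two-stage sample; return the mean estimate and its UCVE."""
    psus = random.sample(pop, n)
    means = [statistics.mean(random.sample(p, m)) for p in psus]
    est = statistics.mean(means)
    ucve = statistics.variance(means) / n   # treats PSUs as drawn with replacement
    return est, ucve

ests, ucves = zip(*(one_survey() for _ in range(5000)))
emp_var = statistics.variance(ests)   # empirical ("true") variance of the estimator
avg_ucve = statistics.mean(ucves)     # overestimates emp_var here because f1 = 0.5
```

The gap between `avg_ucve` and `emp_var` is roughly f1 times the between-PSU variance component, so it grows with the PSU sampling fraction, matching the abstract's finding.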
Optimisation of sampling windows design for population pharmacokinetic experiments.
Ogungbenro, Kayode; Aarons, Leon
2008-08-01
This paper describes an approach for optimising sampling windows for population pharmacokinetic experiments. Sampling windows designs are more practical in late-phase drug development, where patients are enrolled at many centres and in out-patient clinic settings. Collecting samples at fixed times in the uncontrolled environment of these centres may be problematic and can result in uninformative data. A population pharmacokinetic sampling windows design provides an opportunity to control when samples are collected by allowing some flexibility while still providing satisfactory parameter estimation. The approach uses information obtained from previous experiments about the model and parameter estimates to optimise sampling windows within a space of admissible sampling windows sequences. The optimisation is based on a continuous design; in addition to the sampling windows, the structure of the population design is also optimised, in terms of the proportion of subjects in elementary designs, the number of elementary designs in the population design, and the number of sampling windows per elementary design. The results showed that optimal sampling windows designs obtained using this approach are very efficient for estimating population PK parameters and provide greater flexibility in when samples are collected. They also showed that the generalized equivalence theorem holds for this approach.
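The paper's population sampling-windows machinery is not reproduced here, but the underlying optimal-design idea can be sketched with a toy D-optimal problem: choosing the two sampling times for a one-compartment bolus model y(t) = (D/V)·exp(-k·t) that maximize the determinant of the Fisher information matrix. The parameter values and candidate time grid are illustrative assumptions.

```python
import itertools
import math

D, V, k = 100.0, 10.0, 0.2     # dose, volume, elimination rate (illustrative)

def sens(t):
    """Sensitivities of y = (D/V)*exp(-k*t) with respect to (V, k)."""
    y = D / V * math.exp(-k * t)
    return (-y / V, -t * y)

def det_fim(times):
    """D-optimality criterion: det of the 2x2 Fisher information matrix
    accumulated over the sampling times (homoscedastic error assumed)."""
    a = b = c = 0.0            # FIM = [[a, b], [b, c]]
    for t in times:
        sv, sk = sens(t)
        a += sv * sv
        b += sv * sk
        c += sk * sk
    return a * c - b * b

grid = [i * 0.5 for i in range(1, 49)]             # candidate times 0.5..24 h
best = max(itertools.combinations(grid, 2), key=det_fim)
```

For this model the determinant factorizes as exp(-2k(t1+t2))·(t2-t1)^2, so the optimum places one sample as early as allowed and the second one 1/k later; here that is (0.5, 5.5) h.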
Description of sampling designs using a comprehensive data structure
John C. Byrne; Albert R. Stage
1988-01-01
Maintaining permanent plot data with different sampling designs over long periods within an organization, as well as sharing such information between organizations, requires that common standards be used. A data structure for the description of the sampling design within a stand is proposed. It is based on the definition of subpopulations of trees sampled, the rules...
Zheng, Kaiwen; Li, Yuanyuan; Zhu, Ming; Yu, Xi; Zhang, Mengyan; Shi, Ling; Cheng, Jue
2017-10-01
A hierarchical porous water hyacinth-derived carbon (WHC) is fabricated by pre-carbonization and KOH activation for supercapacitors. The physicochemical properties of WHC are characterized by scanning electron microscopy (SEM), N2 adsorption-desorption measurements, X-ray diffraction (XRD), Raman spectroscopy and X-ray photoelectron spectroscopy (XPS). The results indicate that WHC exhibits a hierarchical porous structure and a high specific surface area of 2276 m2/g. The electrochemical properties of WHC are studied by cyclic voltammetry (CV), galvanostatic charge-discharge and electrochemical impedance spectroscopy (EIS) tests. In a three-electrode test system, WHC shows a considerable specific capacitance of 344.9 F/g at a current density of 0.5 A/g, good rate performance with 225.8 F/g even at a current density of 30 A/g, and good cycle stability with 95% capacitance retention after 10000 charge-discharge cycles at a current density of 5 A/g. Moreover, the WHC cell delivers an energy density of 23.8 Wh/kg at 0.5 A/g and a power density of 15.7 kW/kg at 10 A/g. Thus, using water hyacinth as a carbon source for supercapacitor electrodes is a promising approach to developing inexpensive, sustainable and high-performance carbon materials. Additionally, this study supports sustainable development and the control of biological invasion.
HIERARCHICAL DESIGN BASED INTRUSION DETECTION SYSTEM FOR WIRELESS AD HOC SENSOR NETWORK
Directory of Open Access Journals (Sweden)
Mohammad Saiful Islam Mamun
2010-07-01
Full Text Available In recent years, wireless ad hoc sensor networks have become popular in both civil and military applications. However, security is one of the significant challenges for sensor networks because of their deployment in open and unprotected environments. As cryptographic mechanisms are not enough to protect sensor networks from external attacks, intrusion detection systems need to be introduced. Though intrusion prevention is one of the major and most efficient defences against attacks, there may be attacks for which no prevention method is known. Besides protecting the system from known attacks, an intrusion detection system gathers information related to attack techniques and thereby helps in the development of intrusion prevention systems. In addition to reviewing the attacks known in wireless sensor networks, this paper examines current efforts toward intrusion detection systems for wireless sensor networks. We propose a hierarchical architectural design based intrusion detection system that fits the current demands and restrictions of wireless ad hoc sensor networks. In the proposed intrusion detection system architecture we follow a clustering mechanism to build a four-level hierarchical network, which enhances network scalability to large geographical areas, and we use both anomaly and misuse detection techniques for intrusion detection. We introduce a policy-based detection mechanism as well as an intrusion response, together with the GSM cell concept, for the intrusion detection architecture.
Uliaszek, Amanda A; Al-Dajani, Nadia; Bagby, R Michael
2015-12-01
Shifts in the conceptualization of psychopathology have favored a dimensional approach, with the five-factor model (FFM) playing a prominent role in this research. However, reservations about the utility of the FFM in differentiating disorders have arisen. In the current investigation, a "bottom-up" analytical method was used to ascertain the hierarchical structure of personality, with investigation of the specificity of the traits in categorizing diagnostic categories across an expanded array of psychiatric disorders. Following earlier investigations, which used a hierarchical structural approach, this study presents new results on the differentiation of several forms of psychopathology not included in those earlier analyses (bipolar disorder, psychotic disorders, problem gambling, posttraumatic stress disorder, and somatoform disorders) across distinct levels of a personality hierarchy based on the FFM. These results bolster the argument for the use of FFM personality traits in characterizing and differentiating psychiatric diagnostic groups.
Using Hierarchical Adaptive Neuro Fuzzy Systems And Design Two New Edge Detectors In Noisy Images
Directory of Open Access Journals (Sweden)
M. H. Olyaee
2013-10-01
Full Text Available One of the most important topics in image processing is edge detection. Many methods have been proposed to this end, but most perform poorly in noisy images because noise pixels are mistaken for edges. In this paper, two new methods are presented based on Hierarchical Adaptive Neuro Fuzzy Systems (HANFIS). Each method consists of a desired number of HANFIS operators that receive the values of some neighbouring pixels and decide whether the central pixel is an edge or not. Simple training images are used to set the internal parameters of each HANFIS operator. The presented methods are evaluated on several test images and compared with several popular edge detectors. The experimental results show that these methods are robust against impulse noise and extract edge pixels accurately.
Hierarchical Pathfinding and AI-Based Learning Approach in Strategy Game Design
Directory of Open Access Journals (Sweden)
Le Minh Duc
2008-01-01
Full Text Available Strategy games and simulation applications are an exciting area with many opportunities for study and research. Currently, most existing games and simulations apply hard-coded rules, so the intelligence of the computer-generated forces is limited. After some time, the player gets used to the simulation, making it less attractive and challenging. It is also costly and tedious to incorporate new rules into an existing game. The main motivation behind this research project is to improve the quality of artificial intelligence (AI) based on various techniques such as qualitative spatial reasoning (Forbus et al., 2002), near-optimal hierarchical pathfinding (HPA*) (Botea et al., 2004), and reinforcement learning (RL) (Sutton and Barto, 1998).
Directory of Open Access Journals (Sweden)
Anna J. Schulte
2011-05-01
Full Text Available Hierarchically structured flower leaves (petals) of many plants are superhydrophobic, but water droplets do not roll off when the surfaces are tilted. On such surfaces water droplets are in the “Cassie impregnating wetting state”, which is also known as the “petal effect”. By analyzing the petal surfaces of different species, we discovered interesting new wetting characteristics of the surface of the flower of the wild pansy (Viola tricolor). This surface is superhydrophobic with a static contact angle of 169° and very low hysteresis, i.e., the petal effect does not exist and water droplets roll off as from a lotus (Nelumbo nucifera) leaf. However, the surface of the wild pansy petal does not possess the wax crystals of the lotus leaf. Its petals exhibit high cone-shaped cells (average size 40 µm) with a high aspect ratio (2.1) and a very fine cuticular folding (width 260 nm) on top. The applied water droplets are in the Cassie–Baxter wetting state and roll off at inclination angles below 5°. Fabricated hydrophobic polymer replicas of the wild pansy were prepared in an easy two-step moulding process and possess the same wetting characteristics as the original flowers. In this work we present a technical surface with a new superhydrophobic, low-adhesion surface design, which combines the hierarchical structuring of petals with a wetting behavior similar to that of the lotus leaf.
From Continuous-Time Design to Sampled-Data Design of Nonlinear Observers
Karafyllis, Iasson; Kravaris, Costas
2008-01-01
In this work, a sampled-data nonlinear observer is designed using a continuous-time design coupled with an inter-sample output predictor. The proposed sampled-data observer is a hybrid system. It is shown that under certain conditions, the robustness properties of the continuous-time design are inherited by the sampled-data design, as long as the sampling period is not too large. The approach is applied to linear systems and to triangular globally Lipschitz systems.
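A minimal numerical sketch of the scheme, assuming a marginally stable linear plant (a harmonic oscillator) and an illustrative observer gain: between samples, the output predictor w integrates the observer's own output derivative, and at each sampling instant it is reset to the measured output.

```python
import math

# Plant: harmonic oscillator x' = A x, with sampled output y = C x.
A = [[0.0, 1.0], [-1.0, 0.0]]
C = [1.0, 0.0]
L = [2.0, 0.0]                     # illustrative gain: observer poles at -1, -1
dt, T_s, T_end = 0.001, 0.1, 30.0  # Euler step, sampling period, horizon

def Ax(x):
    return [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]

x = [1.0, 0.0]                     # true state
xh = [0.0, 0.0]                    # observer estimate
w = 0.0                            # inter-sample output predictor
t, next_sample = 0.0, 0.0
e0 = math.hypot(x[0] - xh[0], x[1] - xh[1])
while t < T_end:
    if t >= next_sample:           # a measurement arrives: reset the predictor
        w = C[0]*x[0] + C[1]*x[1]
        next_sample += T_s
    innov = w - (C[0]*xh[0] + C[1]*xh[1])
    dx = Ax(x)
    fxh = Ax(xh)
    dxh = [fxh[0] + L[0]*innov, fxh[1] + L[1]*innov]
    dw = C[0]*dxh[0] + C[1]*dxh[1]  # predictor follows the observer's output rate
    x = [x[0] + dt*dx[0], x[1] + dt*dx[1]]
    xh = [xh[0] + dt*dxh[0], xh[1] + dt*dxh[1]]
    w += dt*dw
    t += dt
e1 = math.hypot(x[0] - xh[0], x[1] - xh[1])  # estimation error after 30 s
```

With the sampling period (0.1 s) small relative to the observer time constant, the estimation error decays essentially as in the continuous-time design, illustrating the robustness claim in the abstract.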
System design description for sampling fuel in K basins
Energy Technology Data Exchange (ETDEWEB)
Ritter, G.A., Westinghouse Hanford
1996-09-17
This System Design Description provides: (1) statements of the Spent Nuclear Fuel Project's needs for sampling of fuel in the K East and K West Basins, (2) the sampling equipment functions and requirements, (3) a general work plan and the design logic followed to develop the equipment, and (4) a summary description of the design for the sampling equipment. This report summarizes the integrated application of both the subject equipment and the canister sludge sampling system in the characterization campaigns at K Basins.
Point-Mass Aircraft Trajectory Prediction Using a Hierarchical, Highly-Adaptable Software Design
Karr, David A.; Vivona, Robert A.; Woods, Sharon E.; Wing, David J.
2017-01-01
A highly adaptable and extensible method for predicting four-dimensional trajectories of civil aircraft has been developed. This method, Behavior-Based Trajectory Prediction, is based on taxonomic concepts developed for the description and comparison of trajectory prediction software. A hierarchical approach to the "behavioral" layer of a point-mass model of aircraft flight, a clear separation between the "behavioral" and "mathematical" layers of the model, and an abstraction of the methods of integrating differential equations in the "mathematical" layer have been demonstrated to support aircraft models of different types (in particular, turbojet vs. turboprop aircraft) using performance models at different levels of detail and in different formats, and promise to be easily extensible to other aircraft types and sources of data. The resulting trajectories predict location, altitude, lateral and vertical speeds, and fuel consumption along the flight path of the subject aircraft accurately and quickly, accounting for local conditions of wind and outside air temperature. The Behavior-Based Trajectory Prediction concept was implemented in NASA's Traffic Aware Planner (TAP) flight-optimizing cockpit software application.
Design of IP Camera Access Control Protocol by Utilizing Hierarchical Group Key
Directory of Open Access Journals (Sweden)
Jungho Kang
2015-08-01
Full Text Available Unlike CCTV, the security video surveillance devices we have generally known, IP cameras, which are connected to a network either with or without wires, provide monitoring services through a built-in web server. Because IP cameras can use a network such as the Internet, multiple IP cameras can be installed over long distances, and each IP camera can serve its web functions individually. Despite these advantages, IP cameras have difficulties in access control management and weaknesses in user certification. In particular, because the IP camera market is still relatively young, systems designed from a security perspective have not yet been built up. There are also severe weaknesses in terms of access authority to the IP camera web server, certification of users, and certification of IP cameras newly installed within a network. This research groups IP cameras hierarchically to manage them systematically, and provides access control and data confidentiality between groups by utilizing group keys. In addition, IP cameras and users are certified using PKI-based certification, and weak points of security such as confidentiality and integrity are improved by encrypting passwords. The paper presents the protocols of the entire process and proves through experiments that the method can be applied in practice.
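The hierarchical-group-key idea can be sketched with one-way key derivation, assuming a simple hash-chain construction (SHA-256 here; the paper's actual key management protocol is more involved): a controller holding a parent key can derive every descendant group key, while a compromised camera key reveals nothing about its ancestors.

```python
import hashlib

def child_key(parent_key: bytes, child_id: str) -> bytes:
    """Derive a child group key from its parent's key. The derivation is
    one-way: parents can compute descendant keys, but not vice versa."""
    return hashlib.sha256(parent_key + child_id.encode()).digest()

# Hypothetical hierarchy: root -> building -> floor -> camera.
root = hashlib.sha256(b"master-secret-demo").digest()   # illustrative secret
building_a = child_key(root, "building-A")
floor_2 = child_key(building_a, "floor-2")
camera_17 = child_key(floor_2, "camera-17")
```

A controller at the building level can reconstruct `floor_2` and `camera_17` on demand, so group membership changes only require redistributing keys below the affected node.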
Yang, Yanbing; Li, Peixu; Wu, Shiting; Li, Xinyang; Shi, Enzheng; Shen, Qicang; Wu, Dehai; Xu, Wenjing; Cao, Anyuan; Yuan, Quan
2015-04-13
Mesoporous carbon (m-C) has potential applications as a porous electrode for electrochemical energy storage, but its use has been severely limited by inherent fragility and low electrical conductivity. A rational strategy is presented to construct m-C into hierarchical porous structures with high flexibility by using a carbon nanotube (CNT) sponge as a three-dimensional template and grafting Pt nanoparticles onto the m-C surface. The method involves several controllable steps, including solution deposition of a mesoporous silica (m-SiO2) layer onto the CNTs, chemical vapor deposition of acetylene, and etching of the m-SiO2, resulting in a CNT@m-C core-shell or a CNT@m-C@Pt core-shell hybrid structure after Pt adsorption. The underlying CNT network provides a robust yet flexible support and high electrical conductivity, the m-C provides a large surface area, and the Pt nanoparticles improve interfacial electron and ion diffusion. Consequently, specific capacitances of 203 and 311 F/g have been achieved in these CNT@m-C and CNT@m-C@Pt sponges as supercapacitor electrodes, respectively, which retain 96% of their original capacitance under large-degree compression.
A Frequency Domain Design Method For Sampled-Data Compensators
DEFF Research Database (Denmark)
Niemann, Hans Henrik; Jannerup, Ole Erik
1990-01-01
A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfies specific design criteria. The new design method will graphically show how the discrete...
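The paper's graphical frequency-domain method is not reproduced here, but its starting point, carrying a continuous-time compensator over to discrete time, can be sketched with Tustin's (bilinear) approximation for an illustrative lead element; at frequencies well below the Nyquist rate the two frequency responses nearly coincide.

```python
import cmath

T = 0.01                         # sampling period (illustrative)

def H(s):
    """Continuous-time lead compensator H(s) = (s + 1)/(s + 10) (example)."""
    return (s + 1) / (s + 10)

def Hd(z):
    """Tustin discretization: substitute s = (2/T)(z - 1)/(z + 1)."""
    s = (2 / T) * (z - 1) / (z + 1)
    return H(s)

w = 5.0                          # rad/s, well below the Nyquist rate pi/T
Gc = H(1j * w)                   # continuous frequency response at w
Gd = Hd(cmath.exp(1j * w * T))   # discrete frequency response at w
```

The small residual gap comes from Tustin's frequency warping, (2/T)·tan(wT/2) versus w, which is negligible here since wT = 0.05; a frequency-domain design method must account for exactly this kind of discretization distortion near Nyquist.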
DEFF Research Database (Denmark)
Stolpe, Mathias; Stidsen, Thomas K.
2005-01-01
of minimizing the weight of a structure subject to displacement and local design-dependent stress constraints. The method iteratively solves a sequence of problems of increasing size of the same type as the original problem. The problems are defined on a design mesh which is initially coarse...... from global optimization, which have only recently become available, for solving the problems in the sequence. Numerical examples of topology design problems of continuum structures with local stress and displacement constraints are presented....
DEFF Research Database (Denmark)
Stolpe, Mathias; Stidsen, Thomas K.
2007-01-01
of minimizing the weight of a structure subject to displacement and local design-dependent stress constraints. The method iteratively treats a sequence of problems of increasing size of the same type as the original problem. The problems are defined on a design mesh which is initially coarse...... from global optimization, which have only recently become available, for solving the problems in the sequence. Numerical examples of topology design problems of continuum structures with local stress and displacement constraints are presented....
Probability sampling design in ethnobotanical surveys of medicinal plants
Directory of Open Access Journals (Sweden)
Mariano Martinez Espinosa
2012-12-01
Full Text Available Non-probability sampling designs can be used in ethnobotanical surveys of medicinal plants. However, such methods do not allow statistical inferences to be made from the data generated. The aim of this paper is to present a probability sampling design that is applicable in ethnobotanical studies of medicinal plants. The sampling design employed in the research titled "Ethnobotanical knowledge of medicinal plants used by traditional communities of Nossa Senhora Aparecida do Chumbo district (NSACD), Poconé, Mato Grosso, Brazil" was used as a case study. Probability sampling methods (simple random and stratified sampling) were used in this study. In order to determine the sample size, the following data were considered: population size (N) of 1179 families; confidence coefficient of 95%; sampling error (d) of 0.05; and a proportion (p) of 0.5. The application of this sampling method resulted in a sample size (n) of at least 290 families in the district. The present study concludes that probability sampling methods necessarily have to be employed in ethnobotanical studies of medicinal plants, particularly where statistical inferences have to be made using the data obtained. This can be achieved by applying different existing probability sampling methods, or better still, a combination of such methods.
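The reported sample size can be reproduced from the quoted inputs with the standard formula for estimating a proportion under simple random sampling with a finite population correction:

```python
import math

def sample_size(N, d=0.05, p=0.5, z=1.96):
    """Sample size for estimating a proportion with absolute error d at
    ~95% confidence (z = 1.96), with finite population correction."""
    n0 = z**2 * p * (1 - p) / d**2          # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))

n = sample_size(1179)   # inputs as reported in the abstract
```

With N = 1179, d = 0.05, p = 0.5 and z = 1.96 this gives n = 290, matching the "at least 290 families" reported above.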
Design of a gravity corer for near shore sediment sampling
Digital Repository Service at National Institute of Oceanography (India)
Bhat, S.T.; Sonawane, A; Nayak, B
For the purpose of geotechnical investigation a gravity corer has been designed and fabricated to obtain undisturbed sediment core samples from near shore waters. The corer was successfully operated at 75 stations up to water depth 30 m. Simplicity...
Design of perfect reconstruction rational sampling filter banks
Institute of Scientific and Technical Information of China (English)
[No author listed]
2001-01-01
The design of rational sampling filter banks based on a recombination structure can be formulated as a problem with two objective functions to be optimized. A new hybrid optimization method for designing perfect reconstruction rational sampling filter banks is presented, which can be used to solve a class of problems with two objective functions. The method has good convergence and moderate computational cost. Satisfactory results free of aliasing in the analysis and synthesis filters can be obtained by the proposed method.
Extending cluster lot quality assurance sampling designs for surveillance programs.
Hund, Lauren; Pagano, Marcello
2014-07-20
Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate.
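The clustering adjustment can be sketched with the classical design-effect inflation, a simplification of the paper's nonparametric procedure; the baseline LQAS size, cluster size, and intraclass correlation below are illustrative assumptions, not figures from the Kenya or South Sudan programs.

```python
import math

def cluster_lqas_n(n_srs, m, icc):
    """Inflate a simple-random-sample LQAS size by the design effect
    DEFF = 1 + (m - 1) * icc for clusters of size m, then round up to a
    whole number of clusters."""
    deff = 1 + (m - 1) * icc
    clusters = math.ceil(n_srs * deff / m)
    return clusters, clusters * m

# Illustrative: a 33-child SRS design, 10 children per village, ICC = 0.1.
clusters, n_total = cluster_lqas_n(n_srs=33, m=10, icc=0.1)
```

Here DEFF = 1.9, so the 33-child simple random design grows to 7 villages of 10 children (70 total); the paper's procedure additionally handles finite numbers of clusters, which this simple formula ignores.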
ANL small-sample calorimeter system design and operation
Energy Technology Data Exchange (ETDEWEB)
Roche, C.T.; Perry, R.B.; Lewis, R.N.; Jung, E.A.; Haumann, J.R.
1978-07-01
The Small-Sample Calorimetric System is a portable instrument designed to measure the thermal power produced by radioactive decay of plutonium-containing fuels. The small-sample calorimeter is capable of measuring samples producing power up to 32 milliwatts at a rate of one sample every 20 min. The instrument is contained in two packages: a data-acquisition module consisting of a microprocessor with an 8K-byte nonvolatile memory, and a measurement module consisting of the calorimeter and a sample preheater. The total weight of the system is 18 kg.
Experimental and Sampling Design for the INL-2 Sample Collection Operational Test
Energy Technology Data Exchange (ETDEWEB)
Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.
2009-02-16
This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination" (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling
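The CJR Bayesian calculation itself is not reproduced here, but the purely probabilistic baseline for an X%/Y% clearance statement follows from a classical argument: if the clean fraction were below Y, the chance that n randomly placed samples all test negative is below Y^n, so requiring Y^n <= 1 - X gives the required sample size.

```python
import math

def clearance_n(x_conf, y_cov):
    """Number of randomly placed samples that must all test negative to
    state 'x_conf confidence that at least y_cov of the area is free of
    detectable contamination' (judgment-free probabilistic baseline)."""
    return math.ceil(math.log(1 - x_conf) / math.log(y_cov))

n = clearance_n(0.95, 0.99)   # a 95%/99% clearance statement
```

A 95%/99% statement needs 299 all-negative random samples under this baseline; the point of the CJR approach described above is to reduce that burden by crediting judgmental samples.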
Performance of Random Effects Model Estimators under Complex Sampling Designs
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
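How sampling weights reduce selection bias can be shown with a small simulation (the population and the informative design below are illustrative): units are included with probability proportional to their value, so the unweighted sample mean is biased upward while the Hájek (inverse-probability-weighted) estimator is not.

```python
import random

random.seed(7)
N, n_exp = 100_000, 2_000
pop = [random.gauss(50.0, 10.0) for _ in range(N)]   # illustrative population
true_mean = sum(pop) / N

# Informative Poisson sampling: inclusion probability proportional to the
# value, so large units are over-represented in the sample.
tot = sum(pop)
pi = [n_exp * y / tot for y in pop]
sample = [(y, p) for y, p in zip(pop, pi) if random.random() < p]

naive = sum(y for y, _ in sample) / len(sample)       # biased upward
hajek = (sum(y / p for y, p in sample)
         / sum(1.0 / p for _, p in sample))           # weight = 1 / pi
```

The same inverse-probability weighting is the starting point for the pseudo-likelihood methods used to fit random effects models to complex survey data.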
Spatial Sampling Design for Estimating Regional GPP With Spatial Heterogeneities
Wang, J.H.; Ge, Y.; Heuvelink, G.B.M.; Zhou, C.H.
2014-01-01
The estimation of regional gross primary production (GPP) is a crucial issue in carbon cycle studies. One commonly used way to estimate the characteristics of GPP is to infer the total amount of GPP by collecting field samples. In this process, the spatial sampling design will affect the error
DESIGN SAMPLING AND REPLICATION ASSIGNMENT UNDER FIXED COMPUTING BUDGET
Institute of Scientific and Technical Information of China (English)
Loo Hay LEE; Ek Peng CHEW
2005-01-01
For many real-world problems, when the design space is huge and unstructured and time-consuming simulation is needed to estimate the performance measure, it is important to decide how many designs to sample and how long to run each design alternative, given only a fixed amount of computing time. In this paper, we present a simulation study of how the distribution of the performance measures and the distribution of the estimation errors (noise) affect this decision. From the analysis, it is observed that when the underlying distribution of the noise is bounded, and there is a high chance of obtaining the smallest noise, the decision should be to sample as many designs as possible; if the noise is unbounded, it becomes important to reduce the noise level first by assigning more replications to each design. On the other hand, if the distribution of the performance measure indicates a high chance of getting good designs, the suggestion is also to reduce the noise level; otherwise, more designs should be sampled to increase the chances of finding good ones. For the special case when the distributions of both the performance measures and the noise are normal, we are able to estimate the number of designs to sample and the number of replications to run in order to obtain the best performance.
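The trade-off can be reproduced with a short Monte Carlo sketch under assumed Gaussian performance and unbounded Gaussian noise (all parameters are illustrative): with unbounded noise, spending the fixed budget on more replications per design beats sampling more designs, in line with the conclusion above.

```python
import random

random.seed(3)
SIGMA = 5.0   # replication noise std dev; design performance std dev is 1
              # (lower performance values are better in this sketch)

def expected_best(k, r, reps=2000):
    """Average true value of the design that *looks* best when k designs
    are each estimated with r replications (fixed budget k * r = 1000)."""
    total = 0.0
    for _ in range(reps):
        truths = [random.gauss(0, 1) for _ in range(k)]
        # Each observed mean carries noise with std dev SIGMA / sqrt(r).
        obs = [t + random.gauss(0, SIGMA / r**0.5) for t in truths]
        total += truths[min(range(k), key=obs.__getitem__)]
    return total / reps

few_reps = expected_best(k=200, r=5)    # many designs, noisy estimates
many_reps = expected_best(k=20, r=50)   # few designs, precise estimates
```

Both allocations spend the same 1000 simulation runs, but with this noise level the precise-estimate allocation selects genuinely better designs on average.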
Alternative sampling designs and estimators for annual surveys
Paul C. Van Deusen
2000-01-01
Annual forest inventory systems in the United States have generally converged on sampling designs that: (1) measure equal proportions of the total number of plots each year; and (2) call for the plots to be systematically dispersed. However, there will inevitably be a need to deviate from the basic design to respond to special requests, natural disasters, and budgetary...
Implications of sampling design and sample size for national carbon accounting systems.
Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel
2011-11-08
Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, the information is generally obtained by sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four sampling design alternatives (simple random sampling, regression estimators, stratified sampling, and 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives combine in-situ and earth-observation data. The percent standard error over total survey cost was calculated under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, and population variability. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency but does not reverse the pattern of cost-efficiency among the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and in the selection of remote sensing techniques. The development of MRV systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.
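The kind of comparison described can be sketched for two of the four alternatives, simple random sampling versus two-phase sampling with a regression estimator, using the standard double-sampling variance approximation. All costs, variances, the mean, and the correlation below are illustrative assumptions, not values from the study.

```python
def pse(var, mean):
    """Percent standard error of the mean estimate."""
    return 100 * var**0.5 / mean

MEAN, S = 120.0, 60.0        # carbon stock mean and std dev (t/ha), illustrative
C_FIELD, C_RS = 500.0, 2.0   # cost per field plot / per remote-sensing unit
BUDGET = 100_000.0
RHO = 0.8                    # correlation between field and remote-sensing data

# Design 1: simple random sampling, all budget spent on field plots.
n = BUDGET / C_FIELD
var_srs = S**2 / n

# Design 2: two-phase sampling with a regression estimator; part of the
# budget buys a large remote-sensing phase, the rest buys field plots.
n_rs = 20_000
n_f = (BUDGET - n_rs * C_RS) / C_FIELD
var_2p = S**2 * (1 - RHO**2) / n_f + S**2 * RHO**2 / n_rs

pse_srs, pse_2p = pse(var_srs, MEAN), pse(var_2p, MEAN)
```

With these assumed costs and a correlation of 0.8, the two-phase design achieves a lower percent standard error for the same budget; cheap imagery and a high field/remote-sensing correlation are exactly the conditions the abstract identifies as decisive.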
Hierarchical auxetic mechanical metamaterials.
Gatt, Ruben; Mizzi, Luke; Azzopardi, Joseph I; Azzopardi, Keith M; Attard, Daphne; Casha, Aaron; Briffa, Joseph; Grima, Joseph N
2015-02-11
Auxetic mechanical metamaterials are engineered systems that exhibit the unusual macroscopic property of a negative Poisson's ratio due to sub-unit structure rather than chemical composition. Although their unique behaviour makes them superior to conventional materials in many practical applications, they are limited in availability. Here, we propose a new class of hierarchical auxetics based on the rotating rigid units mechanism. These systems retain the enhanced properties from having a negative Poisson's ratio with the added benefits of being a hierarchical system. Using simulations on typical hierarchical multi-level rotating squares, we show that, through design, one can control the extent of auxeticity, degree of aperture and size of the different pores in the system. This makes the system more versatile than similar non-hierarchical ones, making them promising candidates for industrial and biomedical applications, such as stents and skin grafts.
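The rotating rigid units mechanism underlying these hierarchical auxetics has a well-known idealized geometry: for hinged squares of side l opened by an angle theta, the unit cell has the same dimension in both in-plane directions, so the Poisson's ratio is -1 at every opening angle. A minimal numerical check of that idealized (non-hierarchical, single-level) case:

```python
import math

def cell_size(theta, l=1.0):
    """Unit-cell dimension (identical in x and y) of the idealized
    rotating-squares lattice: squares of side l hinged at their corners
    and opened by angle theta (theta = 0 is the fully closed state)."""
    return 2 * l * (math.cos(theta / 2) + math.sin(theta / 2))

theta, dtheta = math.radians(30), 1e-6
ex = (cell_size(theta + dtheta) - cell_size(theta)) / cell_size(theta)
ey = ex                       # identical by the symmetry of the square units
nu = -ey / ex                 # in-plane Poisson's ratio
```

Opening the hinges stretches the cell equally in both directions, hence nu = -1; the hierarchical variants proposed in the paper keep this auxetic response while adding control over aperture and pore sizes across levels.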
Multi-Scale Hierarchical and Topological Design of Structures for Failure Resistance
2013-10-04
Report excerpt (publications and activities): Molecular mechanics of mineralized collagen fibrils in bone, Nature Communications (April 2013), doi: 10.1038/ncomms2720; 14 papers published in total. Activities include multiscale modeling and applications of collagen materials (MURI project funded, start date end of 2009) focused on the design of disruptive fibers and mats of carbon nanotubes, and keynote/plenary talks at many conferences and workshops that enabled outreach to the academic and technical community.
Smart laser scanning sampling head design for image acquisition applications.
Amin, M Junaid; Riza, Nabeel A
2013-07-10
A smart laser scanning sampling head design is presented using an electronically controlled variable focal length lens to achieve the smallest sampling laser spot possible at target plane distances reaching 8 m. A proof-of-concept experiment is conducted using a 10 mW red 633 nm laser coupled with beam conditioning optics that includes an electromagnetically actuated deformable membrane liquid lens to demonstrate sampling laser spot radii under 1 mm over a target range of 20-800 cm. Applications for the proposed sampling head are diverse and include laser machining and component inspection.
A new design of groundwater sampling device and its application
Institute of Scientific and Technical Information of China (English)
Yih-Jin Tsai; Ming-Ching T.Kuo
2005-01-01
Compounds in the atmosphere can contaminate groundwater samples. An inexpensive and simple method for collecting groundwater samples was developed to prevent contamination when the background concentration of contaminants is high. This new design of groundwater sampling device involves a glass sampling bottle with a Teflon-lined valve at each end. A cleaned and dried sampling bottle was connected to a low flow-rate peristaltic pump with Teflon tubing and was filled with water. No headspace volume remained in the sampling bottle. The sample bottle was then packed in a PVC bag to prevent the target component from infiltrating into the water sample through the valves. In this study, groundwater was sampled at six wells using both the conventional method and the improved method. The analysis of trichlorofluoromethane (CFC-11) concentrations at these six wells indicates that all the groundwater samples obtained by the conventional sampling method were contaminated by CFC-11 from the atmosphere. The improved sampling method largely eliminated the problems of contamination, preservation, and quantitative analysis of natural water.
Experimental Design for the INL Sample Collection Operational Test
Energy Technology Data Exchange (ETDEWEB)
Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara
2007-12-13
This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated.” The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide desired confidence
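For the probabilistic strategy, the clearance statement above has a standard sample-size sketch: if n randomly placed samples all come back negative, one can claim with confidence X that at least a fraction Y of the area is uncontaminated once Yⁿ ≤ 1 − X. A minimal calculation (not the report's actual Bayesian machinery):

```python
import math

def clearance_n(confidence, clean_fraction):
    # Smallest n such that, if all n randomly placed samples are negative,
    # one can state with `confidence` that at least `clean_fraction` of the
    # surface is not contaminated: clean_fraction**n <= 1 - confidence.
    return math.ceil(math.log(1.0 - confidence) / math.log(clean_fraction))

print(clearance_n(0.95, 0.99))  # -> 299
print(clearance_n(0.95, 0.95))  # -> 59
```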
DNA-inspired hierarchical polymer design: electrostatics and hydrogen bonding in concert.
Hemp, Sean T; Long, Timothy E
2012-01-01
Nucleic acids and proteins, two of nature's biopolymers, assemble into complex structures to achieve desired biological functions and inspire the design of synthetic macromolecules containing a wide variety of noncovalent interactions including electrostatics and hydrogen bonding. Researchers have incorporated DNA nucleobases into a wide variety of synthetic monomers/polymers achieving stimuli-responsive materials, supramolecular assemblies, and well-controlled macromolecules. Recently, scientists utilized both electrostatics and complementary hydrogen bonding to orthogonally functionalize a polymer backbone through supramolecular assembly. Diverse macromolecules with noncovalent interactions will create materials with properties necessary for biomedical applications.
Research on Flexible Transfer Line Schematic Design Using Hierarchical Process Planning
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
Flexible transfer line (FTL) is now widely used in many manufacturing domains to realize efficient, high-quantity, and economical production. These manufacturing domains include automobile, tractor, internal-combustion engine, and so on. In today's competitive business environment, it is vitally important for machine tool manufacturers to design flexible transfer lines more effectively and efficiently according to a wider variety of customer demands. This paper proposes an approach to a bidding-based flexible tra...
3D hierarchically patterned tubular NiSe with nano-/microstructures for Li ion battery design.
Mi, Liwei; Sun, Hui; Ding, Qi; Chen, Weihua; Liu, Chuntai; Hou, Hongwei; Zheng, Zhi; Shen, Changyu
2012-10-28
Tubular nickel selenide (NiSe) crystals with hierarchical structures were successfully fabricated using a one-step solvothermal method under moderate conditions, in which ethylenediamine and ethylene glycol were used as the mixed solvent. The growth of hierarchical NiSe microtubes from NiSe microflakes was achieved without surfactants or other chemical additives by changing the reaction time. When the as-synthesized NiSe microtubes were employed as cathode materials for lithium-ion batteries, the initial discharge capacity of hierarchical NiSe microtubes reached 410.7 mAh g(-1).
Economic design of VSI GCCC charts for correlated samples
Directory of Open Access Journals (Sweden)
Y K Chen
2013-08-01
Generalised cumulative count of conforming (GCCC) charts have been proposed for monitoring a high-yield process that allows the items to be inspected sample by sample and not according to the production order. Recent studies have shown that the GCCC chart with a variable sampling interval (VSI) is superior to the traditional one with a fixed sampling interval (FSI) because of the additional flexibility of sampling interval it offers. However, the VSI chart is still costly when used for the prevention of defective products. This paper presents an economic model for the design problem of the VSI GCCC chart, taking into account the correlation of the production outputs within the same sample. In the economic design, a cost function is developed that includes the cost of sampling and inspection, the cost of false alarms, the cost of detecting and removing the assignable cause, and the cost when the process is out of control. An evolutionary search method using the cost function is presented for finding the optimal design parameters of the VSI GCCC chart. Comparisons between VSI and FSI charts for expected cost per unit time are also made for various process and cost parameters.
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.
Hund, Lauren; Bedrick, Edward J; Pagano, Marcello
2015-01-01
Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
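Ignoring clustering, the classical LQAS decision rule that the cluster designs extend can be sketched by its two binomial error rates; n = 19 with decision threshold d = 13 is a commonly cited rule for distinguishing 80% from 50% coverage (the numbers here are illustrative):

```python
from math import comb

def binom_cdf(d, n, p):
    # P(X <= d) for X ~ Binomial(n, p).
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(d + 1))

def lqas_risks(n, d, p_high, p_low):
    # Decision rule: classify coverage as adequate if more than d of the
    # n sampled individuals are "positive" (e.g., vaccinated).
    alpha = binom_cdf(d, n, p_high)      # P(reject | coverage = p_high)
    beta = 1.0 - binom_cdf(d, n, p_low)  # P(accept | coverage = p_low)
    return alpha, beta

alpha, beta = lqas_risks(n=19, d=13, p_high=0.80, p_low=0.50)
print(round(alpha, 3), round(beta, 3))
```

Accommodating clustering in the design phase, as the compared methods do, inflates these error rates relative to the plain binomial rule above.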
Latent spatial models and sampling design for landscape genetics
Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.
2016-01-01
We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.
A Designated Harmonic Suppression Technology for Sampled SPWM
Institute of Scientific and Technical Information of China (English)
YANG Ping
2005-01-01
Sampled SPWM is an excellent VVVF method for motor speed control, but the harmonic components of the output waveform impair its practical applications. A designated harmonic suppression technology is presented for sampled SPWM, which is an improved algorithm for harmonic suppression in the high-voltage, high-frequency spectrum. Because the technology is applied over the whole speed-adjustment range, the voltage can be conveniently controlled and the high-frequency harmonics of SPWM are also improved.
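The harmonic content that motivates such suppression schemes can be inspected directly by synthesizing a regularly sampled SPWM waveform and evaluating DFT bins. This is a generic illustration, not the paper's algorithm; the modulation index and carrier ratio are arbitrary choices:

```python
import math, cmath

def spwm_wave(m, f_ratio, n=2000):
    # Regularly sampled SPWM over one fundamental period: a zero-order-held
    # sine reference (modulation index m) is compared against a triangular
    # carrier at f_ratio times the fundamental; output switches between +-1.
    out = []
    for i in range(n):
        t = i / n
        carrier_phase = (t * f_ratio) % 1.0
        tri = 4.0 * abs(carrier_phase - 0.5) - 1.0        # triangle in [-1, 1]
        sample_t = math.floor(t * f_ratio) / f_ratio       # sample and hold
        ref = m * math.sin(2.0 * math.pi * sample_t)
        out.append(1.0 if ref >= tri else -1.0)
    return out

def harmonic(signal, k):
    # Magnitude of the k-th harmonic via a direct DFT bin.
    n = len(signal)
    s = sum(signal[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
    return 2.0 * abs(s) / n

wave = spwm_wave(m=0.8, f_ratio=20)
print(round(harmonic(wave, 1), 2))   # fundamental, close to modulation index
```

The large harmonics cluster near the carrier frequency (here the 20th harmonic and its sidebands), which is what suppression algorithms target.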
Design of a Mars rover and sample return mission
Bourke, Roger D.; Kwok, Johnny H.; Friedlander, Alan
1990-01-01
The design of a Mars Rover Sample Return (MRSR) mission that satisfies scientific and human exploration precursor needs is described. Elements included in the design are an imaging rover that finds and certifies safe landing sites and maps rover traverse routes, a rover that operates on the surface with an associated lander for delivery, and a Mars communications orbiter that allows full-time contact with surface elements. A graph of candidate MRSR launch vehicle performances is presented.
DeKosky, Brandon J; Dormer, Nathan H; Ingavle, Ganesh C; Roatch, Christopher H; Lomakin, Joseph; Detamore, Michael S; Gehrke, Stevin H
2010-12-01
A new method for encapsulating cells in interpenetrating network (IPN) hydrogels of superior mechanical integrity was developed. In this study, two biocompatible materials, agarose and poly(ethylene glycol) (PEG) diacrylate, were combined to create a new IPN hydrogel with greatly enhanced mechanical performance. Unconfined compression of hydrogel samples revealed that the IPN displayed a fourfold increase in shear modulus relative to a pure PEG-diacrylate network (39.9 vs. 9.9 kPa) and a 4.9-fold increase relative to a pure agarose network (8.2 kPa). PEG and IPN compressive failure strains were found to be 71% ± 17% and 74% ± 17%, respectively, while pure agarose gels failed around 15% strain. Similar mechanical property improvements were seen when IPNs encapsulated chondrocytes, and LIVE/DEAD cell viability assays demonstrated that cells survived the IPN encapsulation process. The majority of IPN-encapsulated chondrocytes remained viable 1 week postencapsulation, and chondrocytes exhibited glycosaminoglycan synthesis comparable to that of agarose-encapsulated chondrocytes at 3 weeks postencapsulation. The introduction of a new method for encapsulating cells in a hydrogel with enhanced mechanical performance is a promising step toward cartilage defect repair. This method can be applied to fabricate a broad variety of cell-based IPNs by varying monomers and polymers in type and concentration and by adding functional groups such as degradable sequences or cell adhesion groups. Further, this technology may be applicable in other cell-based applications where mechanical integrity of cell-containing hydrogels is of great importance.
Convolution kernel design and efficient algorithm for sampling density correction.
Johnson, Kenneth O; Pipe, James G
2009-02-01
Sampling density compensation is an important step in non-Cartesian image reconstruction. One of the common techniques to determine weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, attempting to minimize the error in a fully reconstructed image. The resulting weights obtained using this new kernel are compared with various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D. Copyright 2009 Wiley-Liss, Inc.
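The convolution-based weighting being improved on here is often computed with a Pipe-Menon-style fixed-point iteration, in which the weights are repeatedly divided by their own kernel-smoothed density. A 1-D toy sketch with a Gaussian kernel (the kernel shape and widths are arbitrary, not the kernel designed in the paper):

```python
import math

def conv_at(points, weights, x, sigma=0.1):
    # Kernel-smoothed weighted sample density at location x.
    return sum(w * math.exp(-((x - p) ** 2) / (2.0 * sigma ** 2))
               for p, w in zip(points, weights))

def density_compensate(points, iters=50, sigma=0.1):
    # Fixed point w <- w / (w convolved with kernel): at convergence the
    # smoothed weighted density is ~1 at every sample location.
    w = [1.0] * len(points)
    for _ in range(iters):
        w = [wi / conv_at(points, w, pi, sigma)
             for pi, wi in zip(points, w)]
    return w

# Non-uniform 1-D sampling: dense near 0, sparse near 1 (quadratic spacing).
pts = [(i / 40.0) ** 2 for i in range(41)]
w = density_compensate(pts)
print(w[-1] > w[0])  # sparse regions receive larger weights than dense ones
```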
Optimized design and analysis of sparse-sampling FMRI experiments.
Perrachione, Tyler K; Ghosh, Satrajit S
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase
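Recommendation (1), a physiologically informed model, amounts to predicting the sparse acquisition through a hemodynamic response convolution. A sketch using the conventional double-gamma HRF (parameter values are the usual defaults, taken here as assumptions), showing how the predicted signal at the acquisition depends on stimulation rate and TR delay:

```python
import math

def hrf(t, a1=6.0, a2=16.0, ratio=1.0 / 6.0):
    # Canonical double-gamma hemodynamic response function
    # (unit-dispersion, SPM-style default parameters, assumed).
    if t <= 0:
        return 0.0
    g = lambda x, a: x ** (a - 1) * math.exp(-x) / math.gamma(a)
    return g(t, a1) - ratio * g(t, a2)

def predicted_response(stim_times, acq_time):
    # Superposed HRFs from all stimuli, evaluated at the sparse acquisition.
    return sum(hrf(acq_time - s) for s in stim_times)

# One trial: stimuli at 0, 1, 2, 3 s; one volume acquired after a TR delay.
for delay in (4.0, 6.0, 10.0):
    print(round(predicted_response([0, 1, 2, 3], delay), 3))
```

Consistent with the recommendations above, a denser stimulus train raises the predicted response, and very long TR delays sample the post-stimulus undershoot rather than the peak.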
Hierarchically Nanoporous Bioactive Glasses for High Efficiency Immobilization of Enzymes
DEFF Research Database (Denmark)
He, W.; Min, D.D.; Zhang, X.D.
2014-01-01
Bioactive glasses with hierarchical nanoporosity and structures have been widely applied to the immobilization of enzymes. Because of meticulous design and ingenious hierarchical nanostructuration of porosities from yeast cell biotemplates, hierarchically nanostructured porous bioactive glasses can...
Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.
Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly
2015-09-01
Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.
Diniz, Daniel G.; Silva, Geane O.; Naves, Thaís B.; Fernandes, Taiany N.; Araújo, Sanderson C.; Diniz, José A. P.; de Farias, Luis H. S.; Sosthenes, Marcia C. K.; Diniz, Cristovam G.; Anthony, Daniel C.; da Costa Vasconcelos, Pedro F.; Picanço Diniz, Cristovam W.
2016-01-01
It is known that microglial morphology and function are related, but few studies have explored the subtleties of microglial morphological changes in response to specific pathogens. In the present report we quantitated microglia morphological changes in a monkey model of dengue disease with virus CNS invasion. To mimic multiple infections that usually occur in endemic areas, where higher dengue infection incidence and abundant mosquito vectors carrying different serotypes coexist, subjects received once a week subcutaneous injections of DENV3 (genotype III)-infected culture supernatant followed 24 h later by an injection of anti-DENV2 antibody. Control animals received either weekly anti-DENV2 antibodies, or no injections. Brain sections were immunolabeled for DENV3 antigens and IBA-1. Random and systematic microglial samples were taken from the polymorphic layer of dentate gyrus for 3-D reconstructions, where we found intense immunostaining for TNFα and DENV3 virus antigens. We submitted all bi- or multimodal morphological parameters of microglia to hierarchical cluster analysis and found two major morphological phenotypes designated types I and II. Compared to type I (stage 1), type II microglia were more complex; displaying higher number of nodes, processes and trees and larger surface area and volumes (stage 2). Type II microglia were found only in infected monkeys, whereas type I microglia was found in both control and infected subjects. Hierarchical cluster analysis of morphological parameters of 3-D reconstructions of random and systematic selected samples in control and ADE dengue infected monkeys suggests that microglia morphological changes from stage 1 to stage 2 may not be continuous. PMID:27047345
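The hierarchical cluster analysis step, which groups cells by their bi- or multimodal morphological parameters, can be sketched with a naive average-linkage implementation on hypothetical morphometric vectors (the feature names and values below are invented for illustration):

```python
def dist(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agglomerate(points, k):
    # Naive average-linkage agglomerative clustering down to k clusters.
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Toy morphometric vectors (no. of processes, total tree length in um):
# low-complexity "type I" cells vs more ramified "type II" cells.
cells = [(10, 400), (12, 450), (11, 420), (25, 900), (27, 950), (26, 930)]
groups = agglomerate(cells, 2)
print(sorted(len(g) for g in groups))  # -> [3, 3]
```

In practice the morphological features would be standardized before clustering so that no single parameter dominates the distance.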
Jandura, Louise
2010-01-01
The Sample Acquisition/Sample Processing and Handling (SA/SPaH) subsystem for the Mars Science Laboratory is a highly-mechanized, Rover-based sampling system that acquires powdered rock and regolith samples from the Martian surface, sorts the samples into fine particles through sieving, and delivers small portions of the powder into two science instruments inside the Rover. SA/SPaH utilizes 17 actuated degrees-of-freedom to perform the functions needed to produce 5 sample pathways in support of the scientific investigation on Mars. Both hardware redundancy and functional redundancy are employed in configuring this sampling system so some functionality is retained even with the loss of a degree-of-freedom. Intentional dynamic environments are created to move sample while vibration isolators attenuate this environment at the sensitive instruments located near the dynamic sources. In addition to the typical flight hardware qualification test program, two additional types of testing are essential for this kind of sampling system: characterization of the intentionally-created dynamic environment and testing of the sample acquisition and processing hardware functions using Mars analog materials in a low pressure environment. The overall subsystem design and configuration are discussed along with some of the challenges, tradeoffs, and lessons learned in the areas of fault tolerance, intentional dynamic environments, and special testing.
The OSIRIS-REx Asteroid Sample Return Mission Operations Design
Gal-Edd, Jonathan S.; Cheuvront, Allan
2015-01-01
OSIRIS-REx is an acronym that captures the scientific objectives: Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer. OSIRIS-REx will thoroughly characterize near-Earth asteroid Bennu (asteroid 101955, previously designated 1999 RQ36). The OSIRIS-REx Asteroid Sample Return Mission delivers its science using five instruments and radio science along with the Touch-And-Go Sample Acquisition Mechanism (TAGSAM). All of the instruments and data analysis techniques have direct heritage from flown planetary missions. The OSIRIS-REx mission employs a methodical, phased approach to ensure success in meeting the mission's science requirements. OSIRIS-REx launches in September 2016, with a backup launch period occurring one year later. Sampling occurs in 2019. The departure burn from Bennu occurs in March 2021. On September 24, 2023, the Sample Return Capsule (SRC) lands at the Utah Test and Training Range (UTTR). Stardust heritage procedures are followed to transport the SRC to Johnson Space Center, where the samples are removed and delivered to the OSIRIS-REx curation facility. After a six-month preliminary examination period the mission will produce a catalog of the returned sample, allowing the worldwide community to request samples for detailed analysis. Traveling to and returning a sample from an asteroid that has not been explored before requires unique operations considerations. The Design Reference Mission (DRM) ties together spacecraft, instrument and operations scenarios. Asteroid Touch and Go (TAG) has various options varying from ground-only to fully automated (natural feature tracking). Spacecraft constraints such as thermal limits and high-gain antenna pointing impact the timeline. The mission is sensitive to navigation errors, so a late command update has been implemented. The project implemented lessons learned from other "small body" missions. The key lesson learned was 'expect the unexpected' and implement planning tools early in the lifecycle.
Computer-methodology for designing pest sampling and monitoring programs
Werf, van der W.; Nyrop, J.P.; Binns, M.R.; Kovach, J.
1999-01-01
This paper evaluates two distinct enterprises: (1) an ongoing attempt to produce an introductory book plus accompanying software tools on sampling and monitoring in pest management; and (2) application of the modelling approaches discussed in that book to the design of monitoring methods for
Mixed Estimation for a Forest Survey Sample Design
Francis A. Roesch
1999-01-01
Three methods of estimating the current state of forest attributes over small areas for the USDA Forest Service Southern Research Station's annual forest sampling design are compared. The three methods were (I) simple moving average, (II) single imputation of plot data that had been updated by externally developed models, and (III) local application of a global...
Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization
Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.
2017-01-01
The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between
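The re-sampling and species accumulation machinery used to compare sub-sampling schedules can be sketched in a few lines; the capture records and species labels below are hypothetical:

```python
import random

def accumulation(samples, n_perm=200, seed=1):
    # Expected species richness after pooling 1..N samples, averaged over
    # random sample orderings (a basic species accumulation curve).
    rng = random.Random(seed)
    n = len(samples)
    totals = [0.0] * n
    for _ in range(n_perm):
        order = samples[:]
        rng.shuffle(order)
        seen = set()
        for i, s in enumerate(order):
            seen |= s
            totals[i] += len(seen)
    return [t / n_perm for t in totals]

# Hypothetical capture records: set of species detected per net-night.
nights = [{"A", "B"}, {"A", "C"}, {"B"}, {"A", "B", "D"}, {"C", "E"}]
curve = accumulation(nights)
print([round(x, 1) for x in curve])
```

Comparing such curves between a truncated (six-hour) schedule and full-night sampling is the core of the representativeness test described above.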
A Sample Handling System for Mars Sample Return - Design and Status
Allouis, E.; Renouf, I.; Deridder, M.; Vrancken, D.; Gelmi, R.; Re, E.
2009-04-01
A mission to return atmosphere and soil samples from Mars is highly desired by planetary scientists from around the world, and space agencies are starting preparation for the launch of a sample return mission in the 2020 timeframe. Such a mission would return approximately 500 grams of atmosphere, rock and soil samples to Earth by 2025. Development of a wide range of new technology will be critical to the successful implementation of such a challenging mission. Technical developments required to realise the mission include guided atmospheric entry, soft landing, sample handling robotics, biological sealing, Mars atmospheric ascent, sample rendezvous & capture and Earth return. The European Space Agency has been performing system definition studies along with numerous technology development studies under the framework of the Aurora programme. Within the scope of these activities Astrium has been responsible for defining an overall sample handling architecture in collaboration with European partners (sample acquisition and sample capture, Galileo Avionica; sample containment and automated bio-sealing, Verhaert). Our work has focused on the definition and development of the robotic systems required to move the sample through the transfer chain. This paper presents the Astrium team's high-level design for the surface transfer system and the orbiter transfer system. The surface transfer system is envisaged to use two robotic arms of different sizes to allow flexible operations and to enable sample transfer over relatively large distances (~2 to 3 metres): the first to deploy/retract the Drill Assembly used for sample collection, the second for the transfer of the Sample Container (the vessel containing all the collected samples) from the Drill Assembly to the Mars Ascent Vehicle (MAV). The sample transfer actuator also features a complex end-effector for handling the Sample Container. The orbiter transfer system will transfer the Sample Container from the capture
Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris
2017-01-25
The calibration of analytical systems is time-consuming, and the effort for daily calibration routines should therefore be minimized while maintaining analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data and thus cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and desired analyte concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients of the equation, collected over several calibration runs, are normally distributed. Considering that the coefficients of an actual calibration are a sample from this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange multipliers technique and Markov chain Monte Carlo sampling. The latter provides realistic estimates for coefficients and predictions, together with accurate error bounds, by simulating known measurement errors and system fluctuations. Performance criteria for validation and for optimal selection of a reduced set of calibration samples were developed and led to a setup that maintains the analytical performance of a full calibration. Strategies for rapid detection of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time.
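The coefficient-pooling idea above can be sketched numerically. The following minimal Python example is illustrative only: the Stern-Volmer-style response, the historical coefficient values, the standards, and the noise level are all invented for the sketch, and a simple grid-search MAP estimate stands in for the paper's hierarchical Bayesian model with MCMC sampling.

```python
import math
import random

random.seed(1)

def signal(c, a, b):
    # Stern-Volmer-style response of a fluorescence oxygen sensor: I = a / (1 + b*c)
    return a / (1.0 + b * c)

# Coefficients from previous full calibration runs (assumed normally distributed)
hist_a = [100.2, 99.5, 100.8, 99.9, 100.4]
hist_b = [0.051, 0.049, 0.052, 0.050, 0.048]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((v - m) ** 2 for v in xs) / (len(xs) - 1))

mu_a, s_a = mean(hist_a), sd(hist_a)
mu_b, s_b = mean(hist_b), sd(hist_b)

# Today's reduced calibration: only two standards, with measurement noise
true_a, true_b, noise = 100.6, 0.050, 0.3
standards = [5.0, 40.0]
obs = [signal(c, true_a, true_b) + random.gauss(0, noise) for c in standards]

def neg_log_post(a, b):
    # least-squares misfit plus a Gaussian penalty from the historical coefficients
    sse = sum((y - signal(c, a, b)) ** 2 for c, y in zip(standards, obs))
    return sse / (2 * noise ** 2) + 0.5 * ((a - mu_a) / s_a) ** 2 + 0.5 * ((b - mu_b) / s_b) ** 2

# coarse grid-search MAP estimate (a full analysis would sample the posterior instead)
grid_a = [mu_a + 0.05 * i for i in range(-40, 41)]
grid_b = [mu_b + 0.0002 * j for j in range(-40, 41)]
best_a, best_b = min(((a, b) for a in grid_a for b in grid_b), key=lambda p: neg_log_post(*p))
print("MAP estimate: a=%.2f, b=%.4f" % (best_a, best_b))
```

With only two standards, the historical distribution keeps the fitted coefficients in a plausible range instead of letting them chase the noise.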
Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood
Directory of Open Access Journals (Sweden)
Olli Saarela
2012-01-01
Full Text Available Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.
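The inverse probability weighting alternative mentioned above can be illustrated with a toy case-cohort design. The following Python sketch is not the paper's conditional likelihood method; all numbers and the data-generating model are invented, and the estimand is simply the mean of the expensive covariate.

```python
import random

random.seed(7)

# Hypothetical full cohort: a secondary outcome y is known for everyone, but the
# expensive covariate x is only measured on cases plus a random subcohort.
N = 20000
cohort = []
for _ in range(N):
    x = random.gauss(0, 1)                              # expensive covariate (e.g. a biomarker)
    case = random.random() < (0.04 if x > 0 else 0.02)  # time-to-event case status
    y = 1.0 + 0.5 * x + random.gauss(0, 1)              # secondary outcome
    cohort.append((x, case, y))

p_sub = 0.10                                            # subcohort sampling fraction
sampled = []
for x, case, y in cohort:
    if case or random.random() < p_sub:
        pi = 1.0 if case else p_sub                     # inclusion probability
        sampled.append((x, y, 1.0 / pi))                # inverse-probability weight

# The naive mean of x over the sampled units is biased (cases, who tend to have
# higher x, are oversampled); the weighted mean corrects for the design.
naive = sum(x for x, _, _ in sampled) / len(sampled)
ipw = sum(x * w for x, _, w in sampled) / sum(w for _, _, w in sampled)
print("naive: %.3f  weighted: %.3f" % (naive, ipw))
```

The same weights could be carried into a weighted regression of the secondary outcome on the covariate; the conditional likelihood approach of the paper avoids weighting altogether.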
Detecting Hierarchical Structure in Networks
DEFF Research Database (Denmark)
Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard
2012-01-01
Many real-world networks exhibit hierarchical organization. Previous models of hierarchies within relational data have focused on binary trees; however, for many networks it is unknown whether there is hierarchical structure, and if there is, a binary tree might not account well for it. We propose a generative Bayesian model that is able to infer whether hierarchies are present or not from a hypothesis space encompassing all types of hierarchical tree structures. For efficient inference we propose a collapsed Gibbs sampling procedure that jointly infers a partition and its hierarchical structure. On synthetic and real data we demonstrate that our model can detect hierarchical structure, leading to better link prediction than competing models. Our model can be used to detect whether a network exhibits hierarchical structure, thereby leading to a better comprehension and statistical account of the network.
Directory of Open Access Journals (Sweden)
Baljuk J.A.
2014-12-01
Full Text Available This paper presents an algorithm for an adaptive strategy of optimal spatial sampling for studying the spatial organisation of soil animal communities under urbanization. The operating variables were the principal components obtained from analysis of field data on soil penetration resistance, soil electrical conductivity and forest stand density, collected on a quasi-regular grid. The locations of the experimental polygons were determined by means of the program ESAP. Sampling was carried out on a regular grid within the experimental polygons. The biogeocoenological assessment of the experimental polygons was made on the basis of A. L. Belgard's ecomorphic analysis. The spatial configuration of biogeocoenosis types was established from remote sensing data and analysis of a digital elevation model. The suggested algorithm reveals the spatial organisation of soil animal communities at the level of the investigated point, the biogeocoenosis, and the landscape.
High speed sampling circuit design for pulse laser ranging
Qian, Rui-hai; Gao, Xuan-yi; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Guo, Xiao-kang; He, Shi-jie
2016-10-01
In recent years, with the rapid development of digital chips, high sampling rate analog-to-digital conversion chips can be used to sample narrow laser pulse echoes. Moreover, high speed processors are widely applied to implement digital laser echo signal processing algorithms. The development of digital chips has greatly improved laser ranging detection accuracy, and high speed sampling and processing circuits used in laser ranging detection systems have gradually become a research hotspot. In this paper, a pulse laser echo data logging and digital signal processing circuit system based on high speed sampling is studied. This circuit consists of two parts: the pulse laser echo data processing circuit and the data transmission circuit. The pulse laser echo data processing circuit includes a laser diode, a laser detector and a high sample rate data logging circuit. The data transmission circuit receives the processed data from the pulse laser echo data processing circuit; the sampled data are transmitted to the computer through a USB 2.0 interface. A PC interface is designed using the C# language, in which the sampled laser pulse echo signal is displayed and the processed laser pulse is plotted. Finally, a laser ranging experiment is carried out to test the pulse laser echo data logging and digital signal processing circuit system. The experimental results demonstrate that the hardware system achieves high speed data logging, high speed processing and high speed transmission of the sampled data.
DESIGN OF NOVEL HIGH PRESSURE-RESISTANT HYDROTHERMAL FLUID SAMPLE VALVE
Institute of Scientific and Technical Information of China (English)
LIU Wei; YANG Canjun; WU Shijun; XIE Yingjun; CHEN Ying
2008-01-01
Sampling study is an effective exploration method, but the extreme environments of hydrothermal vents pose considerable engineering challenges for sampling hydrothermal fluids. Moreover, traditional sampler systems with sample valves have difficulty maintaining samples at in situ pressure, and decompression changes affect microorganisms sensitive to such stresses. To address the technical difficulty of collecting samples from hydrothermal vents, a new bidirectional high pressure-resistant sample valve with a balanced poppet was designed. The sample valve uses the soft high performance plastic PEEK as the poppet material. A poppet with improper dimensions is prone to plastic deformation or rupture under the high working pressures encountered in experiments. To address this issue, simulated results on the stress distribution of the poppet with different structural parameters and preload spring forces were obtained from a finite element model, and the static axial deformations at the top of the poppet were measured experimentally. The simulated results agree with the experimental results. The new sample valve seals well and can withstand high working pressure.
Design-based inference in time-location sampling.
Leon, Lucie; Jauffret-Roustide, Marie; Le Strat, Yann
2015-07-01
Time-location sampling (TLS), also called time-space sampling or venue-based sampling, is a sampling technique widely used in populations at high risk of infectious diseases. The principle is to reach individuals in places and at times where they gather. For example, men who have sex with men meet in gay venues at certain times of the day, and homeless people or drug users come together to take advantage of services provided to them (accommodation, care, meals). The statistical analysis of data coming from TLS surveys has been comprehensively discussed in the literature. Two issues of particular importance are whether or not to include sampling weights and how to deal with the frequency of venue attendance (FVA) of individuals during the course of the survey. The objective of this article is to present TLS in the context of sampling theory, to calculate sampling weights, and to propose design-based inference taking the FVA into account. The properties of an estimator ignoring the FVA and of the design-based estimator are assessed and contrasted both through a simulation study and using real data from a recent cross-sectional survey conducted in France among drug users. We show that estimators of a prevalence or a total can be strongly biased if the FVA is ignored, while the design-based estimator taking FVA into account is unbiased even when declarative errors occur in the FVA.
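The FVA issue can be sketched in a few lines of Python. This is an invented toy population, not the paper's estimator or its French survey data: interception probability is made proportional to attendance frequency, and respondents are reweighted by 1/FVA, in the spirit of a design-based (Horvitz-Thompson-style) correction.

```python
import random

random.seed(3)

# Hypothetical population of venue attenders: the chance of being intercepted
# during the survey grows with the frequency of venue attendance (FVA), and the
# outcome is also associated with FVA, so ignoring FVA biases the estimate.
respondents = []
for _ in range(2000):
    fva = random.choice([1, 2, 4, 8])              # visits per month
    if random.random() < fva / 8.0:                # intercept probability ~ FVA
        infected = random.random() < 0.05 + 0.05 * fva
        respondents.append((fva, infected))

# Design-based estimate: weight each respondent inversely to attendance frequency
weights = [1.0 / fva for fva, _ in respondents]
prev_naive = sum(inf for _, inf in respondents) / len(respondents)
prev_weighted = sum(w for (_, inf), w in zip(respondents, weights) if inf) / sum(weights)
print("naive: %.3f  FVA-weighted: %.3f" % (prev_naive, prev_weighted))
```

Because frequent attenders are both oversampled and more often positive in this toy setup, the unweighted prevalence overstates the population prevalence, and the FVA weights pull it back down.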
Software Radio Sampling Rate Selection, Design and Synchronization
Venosa, Elettra; Palmieri, Francesco A N
2012-01-01
Software Radio represents the future of communication devices. By moving a radio's hardware functionality into software, SWR promises to change communication devices, creating radios that, built on DSP-based hardware platforms, are multiservice, multiband, reconfigurable and reprogrammable. This book describes the design of Software Radio (SWR). Rather than providing an overview of digital signal processing and communications, it focuses on topics that are crucial in the design and development of a SWR, explaining them in a simple yet precise manner and giving simulation results that confirm the effectiveness of the proposed designs. Readers will gain in-depth knowledge of key issues so they can actually implement a SWR. Specifically, the book addresses the following issues: proper low sampling rate selection in the multi-band received signal scenario, architecture design for both software radio receiver and transmitter devices, and radio synchronization. Addresses very precisely the most imp...
ACS sampling system: design, implementation, and performance evaluation
Di Marcantonio, Paolo; Cirami, Roberto; Chiozzi, Gianluca
2004-09-01
By means of the ACS (ALMA Common Software) framework we designed and implemented a sampling system which allows sampling of every Characteristic Component Property with a specific, user-defined, sustained frequency limited only by the hardware. Collected data are sent to various clients (one or more Java plotting widgets, a dedicated GUI or a COTS application) using the ACS/CORBA Notification Channel. The data transport is optimized: samples are cached locally and sent in packets with a lower, user-defined frequency to keep network load under control. Simultaneous sampling of the Properties of different Components is also possible. Together with the design and implementation issues we present the performance of the sampling system evaluated on two different platforms: a VME based system using the VxWorks RTOS (currently adopted by ALMA) and a PC/104+ embedded platform using the Red Hat 9 Linux operating system. The PC/104+ solution offers, as an alternative, a low cost PC compatible hardware environment with a free and open operating system.
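The cache-and-packetize transport described above can be sketched as follows. This Python class is an illustrative stand-in only; the class and method names are invented and do not correspond to the actual ACS API or its CORBA Notification Channel.

```python
from collections import deque

class SampledProperty:
    """Samples arrive at a high, sustained rate but are cached locally and sent
    in packets at a lower flush rate to keep network load under control
    (names invented for illustration, not the ACS API)."""

    def __init__(self, flush_every=100):
        self.buf = deque()
        self.flush_every = flush_every
        self.packets = []                         # stands in for network sends

    def sample(self, value):
        self.buf.append(value)
        if len(self.buf) >= self.flush_every:
            self.packets.append(list(self.buf))   # one network send per packet
            self.buf.clear()

prop = SampledProperty(flush_every=100)
for v in range(1000):                             # 1000 high-rate samples ...
    prop.sample(v)
print(len(prop.packets))                          # ... delivered in 10 packets
```

The design choice is the usual latency-for-throughput trade: the client sees data delayed by up to one flush interval, but the number of network operations drops by the packet size.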
The OSIRIS-REx Asteroid Sample Return: Mission Operations Design
Gal-Edd, Jonathan; Cheuvront, Allan
2014-01-01
The OSIRIS-REx mission employs a methodical, phased approach to ensure success in meeting the mission's science requirements. OSIRIS-REx launches in September 2016, with a backup launch period occurring one year later. Sampling occurs in 2019. The departure burn from Bennu occurs in March 2021. On September 24, 2023, the SRC lands at the Utah Test and Training Range (UTTR). Stardust heritage procedures are followed to transport the SRC to Johnson Space Center, where the samples are removed and delivered to the OSIRIS-REx curation facility. After a six-month preliminary examination period the mission will produce a catalog of the returned sample, allowing the worldwide community to request samples for detailed analysis. Traveling to and returning a sample from an asteroid that has not been explored before requires unique operations considerations. The Design Reference Mission (DRM) ties together spacecraft, instrument and operations scenarios. The project implemented lessons learned from other small body missions: NEAR (APL), Dawn (JPL) and Rosetta (ESA). The key lesson learned was to expect the unexpected and implement planning tools early in the lifecycle. In preparation for PDR, the project changed the asteroid arrival date to arrive one year earlier, providing additional time margin. STK is used for mission design and STK Scheduler for instrument coverage analysis.
Design, data analysis and sampling techniques for clinical research.
Suresh, Karthik; Thomas, Sanjeev V; Suresh, Geetha
2011-10-01
Statistical analysis is an essential technique that enables a medical research practitioner to draw meaningful inferences from their data. Improper application of study design and data analysis may render insufficient and improper results and conclusions. Converting a medical problem into a statistical hypothesis with an appropriate methodological and logical design, and then back-translating the statistical results into relevant medical knowledge, is a real challenge. This article explains various sampling methods that can be appropriately used in medical research in different scenarios and challenges.
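One of the basic design choices such articles cover, simple random versus stratified sampling, can be demonstrated by simulation. The population below is invented for illustration: two severity strata with different outcome means, where stratification with proportional allocation removes the between-stratum component of sampling variance.

```python
import random
import statistics

random.seed(11)

# Hypothetical patient population with two severity strata whose outcome means differ
strata = {
    "mild": [random.gauss(10, 2) for _ in range(8000)],
    "severe": [random.gauss(20, 2) for _ in range(2000)],
}
population = strata["mild"] + strata["severe"]
true_mean = statistics.mean(population)

def srs_estimate(n):
    # simple random sampling: n units drawn with equal probability
    return statistics.mean(random.sample(population, n))

def stratified_estimate(n):
    # stratified sampling with proportional allocation across strata
    est = 0.0
    for units in strata.values():
        share = len(units) / len(population)
        est += share * statistics.mean(random.sample(units, round(n * share)))
    return est

# Repeating each design shows the stratified estimator's smaller sampling error
srs = [srs_estimate(100) for _ in range(200)]
strat = [stratified_estimate(100) for _ in range(200)]
print("SRS sd: %.3f  stratified sd: %.3f" % (statistics.stdev(srs), statistics.stdev(strat)))
```

Both designs are unbiased for the population mean; the gain from stratification grows with the share of variance that lies between, rather than within, the strata.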
Hierarchical Porous Structures
Energy Technology Data Exchange (ETDEWEB)
Grote, Christopher John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-07
Materials design is often at the forefront of technological innovation. While there has always been a push to generate increasingly low density materials, such as aerogels and hydrogels, more recently the idea of bicontinuous structures has come into play. This review will cover some of the methods and applications for generating both porous and hierarchically porous structures.
The Study on Mental Health at Work: Design and sampling.
Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit
2017-08-01
The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. S-MGA is a representative study of German employees aged 31-60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. Firstly, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Secondly, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used for comparing the population, the gross sample and the respondents. In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, those residing in major cities and those with a non-German ethnic background were slightly underrepresented. There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment.
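The two-stage cluster draw described above has a simple structure: select primary sampling units, then select addresses within each. The Python sketch below is scaled down and fully invented (frame sizes, counts and naming are illustrative, not the S-MGA register).

```python
import random

random.seed(5)

# Stage 1: randomly select primary sampling units (municipalities) from the frame;
# stage 2: draw addresses within each selected municipality. All numbers are
# invented and scaled down from the S-MGA design for illustration.
municipalities = {
    m: ["addr_%d_%d" % (m, i) for i in range(random.randint(50, 500))]
    for m in range(1000)
}

n_psu, n_addr = 20, 30
selected = random.sample(sorted(municipalities), n_psu)

gross_sample = []
for m in selected:
    frame = municipalities[m]
    gross_sample.extend(random.sample(frame, min(n_addr, len(frame))))

print("%d municipalities, %d addresses drawn" % (len(selected), len(gross_sample)))
```

Clustering keeps fieldwork cost down (interviewers travel to 20 places, not 600), at the price of a design effect when outcomes are correlated within municipalities.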
Reliability of single sample experimental designs: comfortable effort level.
Brown, W S; Morris, R J; DeGroot, T; Murry, T
1998-12-01
This study was designed to ascertain the intrasubject variability across multiple recording sessions, which is most often disregarded in reporting group mean data or unavailable because of single sample experimental designs. Intrasubject variability was assessed within and across several experimental sessions from measures of speaking fundamental frequency, vocal intensity, and reading rate. Three age groups of men and women (young, middle-aged, and elderly) repeated the vowel /a/, read a standard passage, and spoke extemporaneously during each experimental session. Statistical analyses were performed to assess each speaker's variability from his or her own mean, and that which consistently varied for any one speaking sample type, both within and across days. Results indicated that intrasubject variability was minimal, with approximately 4% of the data exhibiting significant variation across experimental sessions.
1990-12-01
Following equation (2.1), define the virtual reference trajectory q_r = q_d - Λ ∫ e dt (3.1), where q_d is an n x 1 vector of desired joint coordinates and e = q - q_d is the tracking error; asymptotic stability is guaranteed despite the highly nonlinear nature of the system. Following [4], define the virtual reference trajectory q_r, its derivative q̇_r, and the virtual trajectory velocity error.
Incorporating the sampling design in weighting adjustments for panel attrition.
Chen, Qixuan; Gelman, Andrew; Tracy, Melissa; Norris, Fran H; Galea, Sandro
2015-12-10
We review weighting adjustment methods for panel attrition and suggest approaches for incorporating design variables, such as strata, clusters, and baseline sample weights. Design information can typically be included in attrition analysis using multilevel models or decision tree methods such as the chi-square automatic interaction detection algorithm. We use simulation to show that these weighting approaches can effectively reduce the bias in survey estimates that would occur from omitting the effect of design factors on attrition, while keeping the resulting weights stable. We provide a step-by-step illustration of creating weighting adjustments for panel attrition in the Galveston Bay Recovery Study, a survey of residents in a community following a disaster, and offer suggestions to analysts making decisions about weighting approaches.
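A minimal version of such an adjustment is the weighting-class method, sketched below in Python. The panel, strata, weights and attrition rates are all invented; the design stratum plays the role of the design variable, and the adjustment factor is the inverse of the weighted response rate within each class (a stand-in for the model-based cells the paper builds with multilevel models or CHAID).

```python
import random
from collections import defaultdict

random.seed(2)

# Hypothetical wave-1 panel: each unit has a design stratum, a baseline design
# weight, and a wave-2 response indicator whose probability differs by stratum.
panel = []
for _ in range(5000):
    stratum = random.choice(["urban", "rural"])
    base_w = 1.0 if stratum == "urban" else 2.0
    p_respond = 0.8 if stratum == "urban" else 0.6    # differential attrition
    panel.append((stratum, base_w, random.random() < p_respond))

# Weighting-class adjustment: within each design stratum, inflate respondent
# weights by the inverse of the weighted response rate.
total_w = defaultdict(float)
resp_w = defaultdict(float)
for stratum, w, responded in panel:
    total_w[stratum] += w
    if responded:
        resp_w[stratum] += w
adjust = {s: total_w[s] / resp_w[s] for s in total_w}

adjusted = [(s, w * adjust[s]) for s, w, responded in panel if responded]
# each stratum's weighted total is restored exactly by construction
print({s: round(f, 2) for s, f in adjust.items()})
```

Because attrition differs by stratum, the rural adjustment factor comes out larger; omitting the design variable would instead apply one average factor and leave the rural weighted total under-represented.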
Ai, Wei; Wang, Xuewan; Zou, Chenji; Du, Zhuzhu; Fan, Zhanxi; Zhang, Hua; Chen, Peng; Yu, Ting; Huang, Wei
2017-02-01
Hierarchically porous carbons are attracting tremendous attention in sustainable energy systems, such as lithium ion batteries (LIBs) and fuel cells, due to their excellent transport properties, which arise from their high surface area and rich porosity. The state-of-the-art approaches for synthesizing hierarchically porous carbons normally require chemical- and/or template-assisted activation techniques, which are complicated, time-consuming, and not feasible for large scale production. Here, a molecular-level design principle for the large-scale synthesis of nitrogen and phosphorus codoped hierarchically porous carbon (NPHPC) through an in situ self-activation process is proposed. The material is fabricated by direct pyrolysis of a well-designed polymer, melamine polyphosphate, which is capable of in situ self-activation to generate a large specific surface area (1479 m² g⁻¹) and hierarchical pores in the final NPHPC. As an anode material for LIBs, NPHPC delivers a high reversible capacity of 1073 mAh g⁻¹ and excellent cyclic stability over 300 cycles with negligible capacity decay. The peculiar structural properties and the synergistic effect of the N and P codopants also make NPHPC a promising electrocatalyst for the oxygen reduction reaction, a key cathodic process in many energy conversion devices (for example, fuel cells and metal-air batteries). Electrochemical measurements show that NPHPC has electrocatalytic performance comparable to a commercial Pt/C catalyst (onset potential of 0.88 V vs the reversible hydrogen electrode in alkaline medium) with excellent stability (89.8% retention after 20 000 s of continuous operation) and superior methanol tolerance.
Energy Technology Data Exchange (ETDEWEB)
Muench, Falk, E-mail: muench@ca.tu-darmstadt.de [Department of Material- and Geoscience, Technische Universität Darmstadt, Alarich-Weiss-Straße 2, 64287 Darmstadt (Germany); Seidl, Tim; Rauber, Markus [Department of Material- and Geoscience, Technische Universität Darmstadt, Alarich-Weiss-Straße 2, 64287 Darmstadt (Germany); Material Research Department, GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291 Darmstadt (Germany); Peter, Benedikt; Brötz, Joachim [Department of Material- and Geoscience, Technische Universität Darmstadt, Alarich-Weiss-Straße 2, 64287 Darmstadt (Germany); Krause, Markus; Trautmann, Christina [Department of Material- and Geoscience, Technische Universität Darmstadt, Alarich-Weiss-Straße 2, 64287 Darmstadt (Germany); Material Research Department, GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291 Darmstadt (Germany); Roth, Christina [Department of Chemistry and Biochemistry, Freie Universität Berlin, Takustraße 3, 14195 Berlin (Germany); Katusic, Stipan [Evonik Industries AG, Rodenbacher Chaussee 4, 63457 Hanau (Germany); Ensinger, Wolfgang [Department of Material- and Geoscience, Technische Universität Darmstadt, Alarich-Weiss-Straße 2, 64287 Darmstadt (Germany)
2014-12-15
Well-defined, porous carbon monoliths are highly promising materials for electrochemical applications, separation, purification and catalysis. In this work, we present an approach that allows the remarkable degree of synthetic control given by the ion-track etching technology to be transferred to the fabrication of carbon membranes with porosity structured on multiple length scales. The carbonization and pore formation processes were examined with Raman, Brunauer–Emmett–Teller (BET), scanning electron microscopy (SEM) and X-ray diffraction (XRD) measurements, while model experiments demonstrated the viability of the carbon membranes as catalyst support and pollutant adsorbent. Using ion-track etching, specifically designed, continuous channel-shaped pores were introduced into polyimide foils with precise control over channel diameter, orientation, density and interconnection. At a pyrolysis temperature of 950 °C, the artificially created channels shrank in size, but their shape was preserved, while the polymer was transformed to microporous, amorphous carbon. Channel diameters ranging from ~10 to several 100 nm could be achieved. The channels also gave access to previously closed micropore volume. A substantial surface increase was realized, as shown by introducing a network of 1.4 × 10¹⁰ channels per cm² of 30 nm diameter, which more than tripled the mass-normalized surface of the pyrolytic carbon from 205 m² g⁻¹ to 732 m² g⁻¹. At a pyrolysis temperature of 3000 °C, membranes consisting of highly ordered graphite were obtained. In this case, the channel shape was severely altered, resulting in a pronounced conical geometry in which the channel diameter quickly decreased with increasing distance to the membrane surface. - Highlights: • Pyrolysis of ion-track etched polyimide yields porous carbon membranes. • Hierarchic porosity: continuous nanochannels embedded in a microporous carbon matrix.
Hierarchical materials: Background and perspectives
DEFF Research Database (Denmark)
2016-01-01
Hierarchical design draws inspiration from analysis of biological materials and has opened new possibilities for enhancing performance and enabling new functionalities and extraordinary properties. With the development of nanotechnology, the necessary technological requirements for the manufactur...
Confronting the ironies of optimal design: Nonoptimal sampling designs with desirable properties
Casman, Elizabeth A.; Naiman, Daniel Q.; Chamberlin, Charles E.
1988-03-01
Two sampling designs are developed for the improvement of parameter estimate precision in nonlinear regression, one for when there is uncertainty in the parameter values, and the other for when the correct model formulation is unknown. Although based on concepts of optimal design theory, the design criteria emphasize efficiency rather than optimality. The development is illustrated using a Streeter-Phelps dissolved oxygen-biochemical oxygen demand model.
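The design problem above can be made concrete for the Streeter-Phelps model. The Python sketch below is illustrative, not the paper's efficiency-oriented criteria: it uses a plain D-criterion, det(JᵀJ) over numerical sensitivities of the dissolved-oxygen deficit to the two rate parameters, with nominal parameter values and the candidate time grid invented for the example.

```python
import math
from itertools import combinations

def deficit(t, kd, ka, L0=10.0):
    # Streeter-Phelps dissolved-oxygen deficit (initial deficit taken as zero):
    # D(t) = kd*L0/(ka-kd) * (exp(-kd*t) - exp(-ka*t))
    return kd * L0 / (ka - kd) * (math.exp(-kd * t) - math.exp(-ka * t))

def d_criterion(times, kd=0.3, ka=0.6, h=1e-5):
    # det(J^T J) for the two rate parameters, using central-difference sensitivities
    J = [[(deficit(t, kd + h, ka) - deficit(t, kd - h, ka)) / (2 * h),
          (deficit(t, kd, ka + h) - deficit(t, kd, ka - h)) / (2 * h)] for t in times]
    a = sum(r[0] * r[0] for r in J)
    b = sum(r[0] * r[1] for r in J)
    c = sum(r[1] * r[1] for r in J)
    return a * c - b * b

# choose 4 sampling times (days) from a candidate grid to maximize the criterion
candidates = [0.5 * i for i in range(1, 21)]
best = max(combinations(candidates, 4), key=d_criterion)
print("best times:", best)
```

This exhaustive search over a small grid is enough to show the point made in the abstract: many near-optimal time sets score almost as well as the optimum, so an "efficient" rather than strictly optimal design loses little precision while being more robust to wrong nominal parameter values.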
Evaluation of design flood estimates with respect to sample size
Kobierska, Florian; Engeland, Kolbjorn
2016-04-01
Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimation give recommendations on which data, probability distribution and method to use depending on the length of the local record. If less than 30 years of local data are available, an index flood approach is recommended, where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a two-parameter distribution is recommended, and for more than 50 years of data, a three-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log-Pearson type III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary moments, L-moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Do the answers to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing the stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not depend excessively on the data sample. The reliability indices describe the degree to which design flood predictions can be trusted.
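The stability criterion can be illustrated with a toy resampling experiment. The Python sketch below is not the study's test bench: the record is synthetic, the fit is a simple method-of-moments Gumbel fit, and sub-records of 30 versus 100 years are compared to show how the 100-year return level scatters with record length.

```python
import math
import random
import statistics

random.seed(9)

# Synthetic annual-maximum flood record drawn from a Gumbel(loc=100, scale=30)
def gumbel_draw(loc, scale):
    u = random.random()
    return loc - scale * math.log(-math.log(u))

record = [gumbel_draw(100.0, 30.0) for _ in range(200)]

def design_flood(sample, T=100):
    # method-of-moments Gumbel fit, then the T-year return level
    scale = statistics.stdev(sample) * math.sqrt(6) / math.pi
    loc = statistics.mean(sample) - 0.5772 * scale
    return loc - scale * math.log(-math.log(1.0 - 1.0 / T))

# stability: the 100-year estimate scatters more when fitted to short sub-records
short = [design_flood(random.sample(record, 30)) for _ in range(200)]
long_ = [design_flood(random.sample(record, 100)) for _ in range(200)]
print("sd with 30-year records: %.1f  with 100-year records: %.1f"
      % (statistics.stdev(short), statistics.stdev(long_)))
```

The spread of the short-record estimates is the kind of instability the guidelines address by falling back on index-flood and two-parameter methods when local records are short.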
Mars Rover Sample Return aerocapture configuration design and packaging constraints
Lawson, Shelby J.
1989-01-01
This paper discusses the aerodynamic requirements and the volume and mass constraints that lead to a biconic aeroshell vehicle design that protects the Mars Rover Sample Return (MRSR) mission elements from launch to Mars landing. The aerodynamic requirements for Mars aerocapture and entry and the packaging constraints for the MRSR elements result in a symmetric biconic aeroshell that develops an L/D of 1.0 at a 27.0 deg angle of attack. A significant problem in the study is obtaining a center of gravity (cg) that provides adequate aerodynamic stability and performance within the mission-imposed constraints. Packaging methods that relieve the cg problems include forward placement of aeroshell propellant tanks and incorporating aeroshell structure as lander structure. The MRSR missions developed during the pre-phase A study are discussed, with dimensional and mass data included. Further study is needed for some missions to minimize MRSR element volume so that launch mass constraints can be met.
Energy Technology Data Exchange (ETDEWEB)
Muray, L.P.; Anderson, E.H.; Boegli, V. [Ernest Orlando Lawrence Berkeley National Laboratory, M/S 2-400, Berkeley, California 94720 (United States)
1997-11-01
A farm of off-the-shelf microprocessors is evaluated for use as a real-time parallel postprocessing subsystem of the Lawrence Berkeley National Laboratory datapath, including backscatter proximity correction. The native data format is GDSII with embedded control. Data storage is fully hierarchical with no intermediate binary pattern data formats. Benchmarks of a four Pentium Pro™ farm, after optimization, demonstrate compatibility with exposure rates of 25 MHz for 32% area fill on a vector scan Gaussian beam e-beam tool. Scalability of the architecture is discussed in detail. © 1997 American Vacuum Society.
Sparsely Sampling the Sky: A Bayesian Experimental Design Approach
Paykari, P
2012-01-01
The next generation of galaxy surveys will observe millions of galaxies over large volumes of the universe. These surveys are expensive in both time and cost, raising questions regarding the optimal investment of this time and money. In this work we investigate criteria for selecting among observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. By making use of the principles of Bayesian experimental design, we investigate the advantages and disadvantages of sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45%. Conversely, investing the sam...
Effects-Driven Participatory Design: Learning from Sampling Interruptions.
Brandrup, Morten; Østergaard, Kija Lin; Hertzum, Morten; Karasti, Helena; Simonsen, Jesper
2017-01-01
Participatory design (PD) can play an important role in obtaining benefits from healthcare information technologies, but we contend that to fulfil this role PD must incorporate feedback from real use of the technologies. In this paper we describe an effects-driven PD approach that revolves around a sustained focus on pursued effects and uses the experience sampling method (ESM) to collect real-use feedback. To illustrate the use of the method we analyze a case that involves the organizational implementation of electronic whiteboards at a Danish hospital to support the clinicians' intra- and interdepartmental coordination. The hospital aimed to reduce the number of phone calls involved in coordinating work because many phone calls were seen as unnecessary interruptions. To learn about the interruptions we introduced an app for capturing quantitative data and qualitative feedback about the phone calls. The investigation showed that the electronic whiteboards had little potential for reducing the number of phone calls at the operating ward. The combination of quantitative data and qualitative feedback served both as a basis for aligning assumptions with data and demonstrated ESM as an instrument for triggering in-situ reflection. The participant-driven design and redesign of the way data were captured by means of ESM is a central contribution to the understanding of how to conduct effects-driven PD.
SAS procedures for designing and analyzing sample surveys
Stafford, Joshua D.; Reinecke, Kenneth J.; Kaminski, Richard M.
2003-01-01
Complex surveys often are necessary to estimate occurrence (or distribution), density, and abundance of plants and animals for purposes of research and conservation. Most scientists are familiar with simple random sampling, where sample units are selected from a population of interest (sampling frame) with equal probability. However, the goal of ecological surveys often is to make inferences about populations over large or complex spatial areas where organisms are not homogeneously distributed or sampling frames are inconvenient or impossible to construct. Candidate sampling strategies for such complex surveys include stratified, multistage, and adaptive sampling (Thompson 1992, Buckland 1994).
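For readers unfamiliar with the mechanics, the stratified strategy named above can be sketched in a few lines; this is a generic illustration with toy data (the strata, values, and sample sizes are invented, not from the paper):

```python
import random

def stratified_mean(pairs, n_per_stratum, seed=0):
    """Stratified estimate of a population mean from (stratum, value) pairs.

    Each stratum is sampled independently by simple random sampling and the
    stratum sample means are combined with weights proportional to stratum size.
    """
    rng = random.Random(seed)
    strata = {}
    for stratum, value in pairs:
        strata.setdefault(stratum, []).append(value)
    total = sum(len(values) for values in strata.values())
    estimate = 0.0
    for values in strata.values():
        sample = rng.sample(values, min(n_per_stratum, len(values)))
        estimate += (len(values) / total) * (sum(sample) / len(sample))
    return estimate

# Toy population: two habitat strata with very different densities.
pairs = [("wet", v) for v in [8, 9, 10, 11, 12]] + \
        [("dry", v) for v in [1, 2, 3]]
est = stratified_mean(pairs, 2)  # two units drawn per stratum
```

Because every stratum contributes to the estimate, no subpopulation can be missed entirely, which is the usual motivation for stratifying a heterogeneous area.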
Westfall, Jacob; Kenny, David A; Judd, Charles M
2014-10-01
Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
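A minimal Monte Carlo sketch of power estimation for a crossed participants-by-stimuli design follows. It is simplified: it tests participant-level condition means with a normal approximation rather than fitting the crossed mixed models the article advocates, and all parameter values are illustrative:

```python
import math
import random
import statistics

def simulate_power(n_subj, n_stim, effect=0.5, sd_subj=0.5, sd_stim=0.5,
                   sd_err=1.0, n_sims=500, seed=1):
    """Monte Carlo power for a design in which every participant responds to
    every stimulus in two within-subject conditions (crossed random factors).

    Simplification: the test is a paired test on participant-level condition
    means using a normal cutoff; a full analysis would fit a mixed model with
    crossed random effects for participants and stimuli.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        stim_fx = [rng.gauss(0, sd_stim) for _ in range(n_stim)]
        diffs = []
        for _ in range(n_subj):
            subj_fx = rng.gauss(0, sd_subj)
            cond_means = []
            for cond in (0, 1):
                ys = [subj_fx + s + cond * effect + rng.gauss(0, sd_err)
                      for s in stim_fx]
                cond_means.append(statistics.mean(ys))
            diffs.append(cond_means[1] - cond_means[0])
        m = statistics.mean(diffs)
        se = statistics.stdev(diffs) / math.sqrt(len(diffs))
        if abs(m / se) > 1.96:  # two-sided 5% level, normal approximation
            hits += 1
    return hits / n_sims
```

Note that because the stimulus effects in this sketch are additive and shared across conditions, they cancel in the paired differences; the power asymptote described in the abstract arises once stimulus-by-condition variability is added, which this sketch omits for brevity.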
Resilient 3D hierarchical architected metamaterials.
Meza, Lucas R; Zelhofer, Alex J; Clarke, Nigel; Mateos, Arturo J; Kochmann, Dennis M; Greer, Julia R
2015-09-15
Hierarchically designed structures with architectural features that span across multiple length scales are found in numerous hard biomaterials, like bone, wood, and glass sponge skeletons, as well as manmade structures, like the Eiffel Tower. It has been hypothesized that their mechanical robustness and damage tolerance stem from sophisticated ordering within the constituents, but the specific role of hierarchy remains to be fully described and understood. We apply the principles of hierarchical design to create structural metamaterials from three material systems: (i) polymer, (ii) hollow ceramic, and (iii) ceramic-polymer composites that are patterned into self-similar unit cells in a fractal-like geometry. In situ nanomechanical experiments revealed (i) a nearly theoretical scaling of structural strength and stiffness with relative density, which outperforms existing nonhierarchical nanolattices; (ii) recoverability, with hollow alumina samples recovering up to 98% of their original height after compression to ≥ 50% strain; (iii) suppression of brittle failure and structural instabilities in hollow ceramic hierarchical nanolattices; and (iv) a range of deformation mechanisms that can be tuned by changing the slenderness ratios of the beams. Additional levels of hierarchy beyond a second order did not increase the strength or stiffness, which suggests the existence of an optimal degree of hierarchy to amplify resilience. We developed a computational model that captures local stress distributions within the nanolattices under compression and explains some of the underlying deformation mechanisms as well as validates the measured effective stiffness to be interpreted as a metamaterial property.
Pearce, Dave; Walter, Anton; Lupton, W. F.; Warren-Smith, Rodney F.; Lawden, Mike; McIlwrath, Brian; Peden, J. C. M.; Jenness, Tim; Draper, Peter W.
2015-02-01
The Hierarchical Data System (HDS) is a file-based hierarchical data system designed for the storage of a wide variety of information. It is particularly suited to the storage of large multi-dimensional arrays (with their ancillary data) where efficient access is needed. It is a key component of the Starlink software collection (ascl:1110.012) and is used by the Starlink N-Dimensional Data Format (NDF) library (ascl:1411.023). HDS organizes data into hierarchies, broadly similar to the directory structure of a hierarchical filing system, but contained within a single HDS container file. The structures stored in these files are self-describing and flexible; HDS supports modification and extension of structures previously created, as well as functions such as deletion, copying, and renaming. All information stored in HDS files is portable between the machines on which HDS is implemented. Thus, there are no format conversion problems when moving between machines. HDS can write files in a private binary format (version 4), or be layered on top of HDF5 (version 5).
Yoo, JongTae; Cho, Sung-Ju; Jung, Gwan Yeong; Kim, Su Hwan; Choi, Keun-Ho; Kim, Jeong-Hoon; Lee, Chang Kee; Kwak, Sang Kyu; Lee, Sang-Young
2016-05-11
The hierarchical porous structure has garnered considerable attention as a multiscale engineering strategy to bring unforeseen synergistic effects in a vast variety of functional materials. Here, we demonstrate a "microporous covalent organic framework (COF) net on mesoporous carbon nanotube (CNT) net" hybrid architecture as a new class of molecularly designed, hierarchical porous chemical trap for lithium polysulfides (Li2Sx) in Li-S batteries. As a proof of concept for the hybrid architecture, self-standing COF-net on CNT-net interlayers (called "NN interlayers") are fabricated through CNT-templated in situ COF synthesis and then inserted between sulfur cathodes and separators. Two COFs with different micropore sizes (COF-1 (0.7 nm) and COF-5 (2.7 nm)) are chosen as model systems. The effects of the pore size and (boron-mediated) chemical affinity of microporous COF nets on Li2Sx adsorption phenomena are theoretically investigated through density functional theory calculations. Benefiting from the chemical/structural uniqueness, the NN interlayers effectively capture Li2Sx without impairing their ion/electron conduction. Notably, the COF-1 NN interlayer, driven by the well-designed microporous structure, allows for the selective deposition/dissolution (i.e., facile solid-liquid conversion) of electrically inert Li2S. As a consequence, the COF-1 NN interlayer provides a significant improvement in the electrochemical performance of Li-S cells (capacity retention after 300 cycles (at charge/discharge rate = 2.0 C/2.0 C) = 84% versus 15% for a control cell with no interlayer) that lies far beyond those accessible with conventional Li-S technologies.
Design unbiased estimation in line intersect sampling using segmented transects
David L.R. Affleck; Timothy G. Gregoire; Harry T. Valentine; Harry T. Valentine
2005-01-01
In many applications of line intersect sampling, transects consist of multiple, connected segments in a prescribed configuration. The relationship between the transect configuration and the selection probability of a population element is illustrated and a consistent sampling protocol, applicable to populations composed of arbitrarily shaped elements, is proposed. It...
Design of Automatic Sample Loading System for INAA
Institute of Scientific and Technical Information of China (English)
YAO; Yong-gang; XIAO; Cai-jin; WANG; Ping-sheng; JIN; Xiang-chun; HUA; Long; NI; Bang-fa
2015-01-01
Instrumental neutron activation analysis (INAA) is a technique in which the sample is bombarded with neutrons, causing the elements to form radioactive isotopes. It is possible to study the spectra of the emissions of the radioactive sample and determine the concentrations of the elements within it. Neutron activation analysis is a sensitive multi-element analytical technique used for both
Hierarchical DSE for multi-ASIP platforms
DEFF Research Database (Denmark)
Micconi, Laura; Corvino, Rosilde; Gangadharan, Deepak;
2013-01-01
This work proposes a hierarchical Design Space Exploration (DSE) for the design of multi-processor platforms targeted to specific applications with strict timing and area constraints. In particular, it considers platforms integrating multiple Application Specific Instruction Set Processors (ASIPs...
Functional annotation of hierarchical modularity.
Directory of Open Access Journals (Sweden)
Kanchana Padmanabhan
Full Text Available In biological networks of molecular interactions in a cell, network motifs that are biologically relevant are also functionally coherent, or form functional modules. These functionally coherent modules combine in a hierarchical manner into larger, less cohesive subsystems, thus revealing one of the essential design principles of system-level cellular organization and function: hierarchical modularity. Arguably, hierarchical modularity has not been explicitly taken into consideration by most, if not all, functional annotation systems. As a result, the existing methods would often fail to assign a statistically significant functional coherence score to biologically relevant molecular machines. We developed a methodology for hierarchical functional annotation. Given the hierarchical taxonomy of functional concepts (e.g., Gene Ontology) and the association of individual genes or proteins with these concepts (e.g., GO terms), our method will assign a Hierarchical Modularity Score (HMS) to each node in the hierarchy of functional modules; the HMS score and its p-value measure the functional coherence of each module in the hierarchy. While existing methods annotate each module with a set of "enriched" functional terms in a bag of genes, our complementary method provides the hierarchical functional annotation of the modules and their hierarchically organized components. A hierarchical organization of functional modules often comes as a by-product of cluster analysis of gene expression data or protein interaction data. Otherwise, our method will automatically build such a hierarchy by directly incorporating the functional taxonomy information into the hierarchy search process and by allowing multi-functional genes to be part of more than one component in the hierarchy. In addition, its underlying HMS scoring metric ensures that functional specificity of the terms across different levels of the hierarchical taxonomy is properly treated. We have evaluated our
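As a reference point, the conventional "enriched terms in a bag of genes" approach that the abstract contrasts with is a standard hypergeometric test; a generic sketch is below (this is the classic enrichment test, not the authors' HMS metric, and the counts are illustrative):

```python
from math import comb

def enrichment_p(total_genes, term_genes, module_genes, overlap):
    """Hypergeometric upper-tail probability that a module of size
    `module_genes`, drawn from `total_genes`, contains at least `overlap`
    genes annotated with a term carried by `term_genes` genes overall.
    Small values indicate the term is enriched in the module."""
    return sum(
        comb(term_genes, k) * comb(total_genes - term_genes, module_genes - k)
        for k in range(overlap, min(term_genes, module_genes) + 1)
    ) / comb(total_genes, module_genes)

# Toy example: 5 of 10 genes carry the term; a 5-gene module contains all 5.
p = enrichment_p(10, 5, 5, 5)  # 1 / C(10, 5)
```

Scoring each module independently this way ignores the parent-child structure of the taxonomy, which is the gap the hierarchical HMS approach is designed to fill.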
Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling
National Research Council Canada - National Science Library
Salganik, Matthew J
2006-01-01
.... A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence...
A Typology of Mixed Methods Sampling Designs in Social Science Research
Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.
2007-01-01
This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…
Conditional estimation of exponential random graph models from snowball sampling designs
Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng
2013-01-01
A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members
Using remote sensing images to design optimal field sampling schemes
CSIR Research Space (South Africa)
Debba, Pravesh
2008-08-01
Full Text Available In this presentation, the author discusses a statistical method for deriving optimal spatial sampling schemes, focusing first on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting...
Implications of sampling design and sample size for national carbon accounting systems
Michael Köhl; Andrew Lister; Charles T. Scott; Thomas Baldauf; Daniel. Plugge
2011-01-01
Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests the information is generally obtained by sample based surveys. Most operational sampling approaches utilize a combination of...
Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.
Ojeda, Mario Miguel; Sahai, Hardeo
2002-01-01
Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…
Design, data analysis and sampling techniques for clinical research
Karthik Suresh; Sanjeev V Thomas; Geetha Suresh
2011-01-01
Statistical analysis is an essential technique that enables a medical research practitioner to draw meaningful inferences from their data. Improper application of study design and data analysis may render insufficient and improper results and conclusions. Converting a medical problem into a statistical hypothesis with appropriate methodological and logical design, and then back-translating the statistical results into relevant medical knowledge, is a real challenge. This article explains...
Directory of Open Access Journals (Sweden)
Matthew J. Spittal
2016-10-01
Full Text Available Abstract Background: The Australian Longitudinal Study on Male Health (Ten to Men) used a complex sampling scheme to identify potential participants for the baseline survey. This raises important questions about when and how to adjust for the sampling design when analyzing data from the baseline survey. Methods: We describe the sampling scheme used in Ten to Men, focusing on four important elements: stratification, multi-stage sampling, clustering and sample weights. We discuss how these elements fit together when using baseline data to estimate a population parameter (e.g., population mean or prevalence) or to estimate the association between an exposure and an outcome (e.g., an odds ratio). We illustrate this with examples using a continuous outcome (weight in kilograms) and a binary outcome (smoking status). Results: Estimates of a population mean or disease prevalence using Ten to Men baseline data are influenced by the extent to which the sampling design is addressed in an analysis. Estimates of mean weight and smoking prevalence are larger in unweighted analyses than weighted analyses (e.g., mean = 83.9 kg vs. 81.4 kg; prevalence = 18.0 % vs. 16.7 %, for unweighted and weighted analyses respectively) and the standard error of the mean is 1.03 times larger in an analysis that acknowledges the hierarchical (clustered) structure of the data compared with one that does not. For smoking prevalence, the corresponding standard error is 1.07 times larger. Measures of association (mean group differences, odds ratios) are generally similar in unweighted or weighted analyses and whether or not adjustment is made for clustering. Conclusions: The extent to which the Ten to Men sampling design is accounted for in any analysis of the baseline data will depend on the research question. When the goals of the analysis are to estimate the prevalence of a disease or risk factor in the population or the magnitude of a population-level exposure
Spittal, Matthew J; Carlin, John B; Currier, Dianne; Downes, Marnie; English, Dallas R; Gordon, Ian; Pirkis, Jane; Gurrin, Lyle
2016-10-31
The Australian Longitudinal Study on Male Health (Ten to Men) used a complex sampling scheme to identify potential participants for the baseline survey. This raises important questions about when and how to adjust for the sampling design when analyzing data from the baseline survey. We describe the sampling scheme used in Ten to Men focusing on four important elements: stratification, multi-stage sampling, clustering and sample weights. We discuss how these elements fit together when using baseline data to estimate a population parameter (e.g., population mean or prevalence) or to estimate the association between an exposure and an outcome (e.g., an odds ratio). We illustrate this with examples using a continuous outcome (weight in kilograms) and a binary outcome (smoking status). Estimates of a population mean or disease prevalence using Ten to Men baseline data are influenced by the extent to which the sampling design is addressed in an analysis. Estimates of mean weight and smoking prevalence are larger in unweighted analyses than weighted analyses (e.g., mean = 83.9 kg vs. 81.4 kg; prevalence = 18.0 % vs. 16.7 %, for unweighted and weighted analyses respectively) and the standard error of the mean is 1.03 times larger in an analysis that acknowledges the hierarchical (clustered) structure of the data compared with one that does not. For smoking prevalence, the corresponding standard error is 1.07 times larger. Measures of association (mean group differences, odds ratios) are generally similar in unweighted or weighted analyses and whether or not adjustment is made for clustering. The extent to which the Ten to Men sampling design is accounted for in any analysis of the baseline data will depend on the research question. When the goals of the analysis are to estimate the prevalence of a disease or risk factor in the population or the magnitude of a population-level exposure-outcome association, our advice is to adopt an analysis that respects the
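The weighted-versus-unweighted contrast is easy to reproduce on toy microdata; the values and weights below are hypothetical, not the Ten to Men estimates:

```python
def weighted_prevalence(cases, weights):
    """Design-weighted prevalence: weighted count of cases over total weight."""
    return sum(w for c, w in zip(cases, weights) if c) / sum(weights)

# Hypothetical microdata: 1 = smoker; the oversampled stratum gets weight 0.5,
# the undersampled stratum gets weight 2.0.
smoker = [1, 0, 0, 1, 0, 0, 0, 1]
weight = [0.5, 0.5, 0.5, 2.0, 2.0, 2.0, 2.0, 0.5]

unweighted = sum(smoker) / len(smoker)           # 3 / 8  = 0.375
weighted = weighted_prevalence(smoker, weight)   # 3 / 10 = 0.3
```

Here the unweighted estimate exceeds the weighted one because smokers happen to be concentrated in the oversampled (down-weighted) stratum, mirroring the direction of the difference reported in the abstract.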
Wier, Timothy P; Moser, Cameron S; Grant, Jonathan F; First, Matthew R; Riley, Scott C; Robbins-Wamsley, Stephanie H; Drake, Lisa A
2015-09-15
By using an appropriate in-line sampling system, it is possible to obtain representative samples of ballast water from the main ballast line. An important parameter of the sampling port is its "isokinetic diameter" (DISO), which is the diameter calculated to determine the velocity of water in the sample port relative to the velocity of the water in the main ballast line. The guidance in the U.S. Environmental Technology Verification (ETV) program protocol suggests increasing the diameter from 1.0× DISO (in which velocity in the sample port is equivalent to velocity in the main line) to 1.5-2.0× DISO. In this manner, flow velocity is slowed-and mortality of organisms is theoretically minimized-as water enters the sample port. This report describes field and laboratory trials, as well as computational fluid dynamics modeling, to refine this guidance. From this work, a DISO of 1.0-2.0× (smaller diameter sample ports) is recommended.
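The DISO relationship lends itself to a quick numerical check. The sketch below assumes the standard continuity relation v = Q / A for a circular port; the flow numbers are hypothetical, not from the ETV trials:

```python
import math

def isokinetic_diameter(d_main, q_main, q_sample):
    """Sample-port diameter giving equal mean velocity in the port and the
    main ballast line (the 1.0x DISO condition). From v = Q / (pi d^2 / 4),
    matching velocities requires d_iso = d_main * sqrt(q_sample / q_main)."""
    return d_main * math.sqrt(q_sample / q_main)

def port_velocity(d_port, q_sample):
    """Mean velocity of the sample flow through a circular port."""
    return q_sample / (math.pi * (d_port / 2.0) ** 2)

# Hypothetical flows: 0.3 m main line carrying 0.5 m^3/s; 0.005 m^3/s sample.
d_iso = isokinetic_diameter(0.3, 0.5, 0.005)   # 0.03 m
v_main = 0.5 / (math.pi * 0.15 ** 2)           # velocity in the main line
v_1x = port_velocity(d_iso, 0.005)             # equals v_main at 1.0x DISO
v_2x = port_velocity(2.0 * d_iso, 0.005)       # 2.0x DISO: a quarter of v_1x
```

Since area scales with diameter squared, widening the port to 2.0× DISO cuts the entrance velocity to one quarter, which is the mechanism behind the protocol's mortality-reduction rationale.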
Design of a Hierarchical Grid Task Scheduler Based on the OGSA Grid
Institute of Scientific and Technical Information of China (English)
邓宾
2012-01-01
Starting from the requirements of grid task scheduling and its characteristic features, and building on an analysis of the general grid task-scheduling process, this paper considers properties of the grid computing environment such as virtualization, hierarchical organization, and autonomy, together with the resource dependence, coarse granularity, and repeated execution of grid tasks under collaborative-workflow demands. On this basis, an improved master-slave hierarchical scheduling model for grid workflow tasks is designed, and the corresponding scheduling policy and scheduling algorithm are given. The scheduler model has been applied with good results in a practical grid collaborative-workflow system.
Discussion on Hierarchical Design of Compiler Principles Practice
Institute of Scientific and Technical Information of China (English)
金永霞; 丁海军
2012-01-01
The design and implementation of the laboratory scheme for the Compiler Principles course plays an important role in the overall teaching quality of the course. This paper proposes a hierarchical laboratory design scheme: based on the pace of classroom teaching, the students' capacity to absorb the material, and the practical application of compiler theory, laboratory exercises of varying difficulty are designed and carried out in stages. This work has significance for the construction and teaching reform of the Compiler Principles course.
Designing a Repetitive Group Sampling Plan for Weibull Distributed Processes
Directory of Open Access Journals (Sweden)
Aijun Yan
2016-01-01
Full Text Available Acceptance sampling plans are useful tools to determine whether submitted lots should be accepted or rejected. An efficient and economic sampling plan is very desirable for the high quality levels required by production processes. The process capability index CL is an important quality parameter for measuring product quality. Utilizing the relationship between the CL index and the nonconforming rate, a repetitive group sampling (RGS) plan based on the CL index is developed in this paper for the case where the quality characteristic follows the Weibull distribution. The optimal parameters of the proposed RGS plan are determined by satisfying the commonly used producer's risk and consumer's risk simultaneously while minimizing the average sample number (ASN), and are tabulated for different combinations of acceptance quality level (AQL) and limiting quality level (LQL). The results show that the proposed plan outperforms the single sampling plan in terms of ASN. Finally, the proposed RGS plan is illustrated with an industrial example.
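The mechanics of repetitive group sampling can be sketched generically: each round draws a sample and computes a statistic; the lot is accepted or rejected when the statistic clears one of two limits, and resampled otherwise. Because the number of rounds is geometric, the acceptance probability and ASN have simple closed forms. The sketch below uses a normally distributed sample mean for illustration rather than the paper's Weibull-based CL index, and all limits are invented:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rgs_characteristics(mu, sigma, n, k_accept, k_reject):
    """Operating characteristics of a repetitive group sampling plan.

    Each round draws n items; the sample mean (assumed normal) leads to
    acceptance if >= k_accept, rejection if < k_reject, and resampling
    otherwise. Returns (prob_accept, average_sample_number).
    """
    se = sigma / math.sqrt(n)
    p_accept_once = 1.0 - norm_cdf((k_accept - mu) / se)
    p_reject_once = norm_cdf((k_reject - mu) / se)
    p_decide = p_accept_once + p_reject_once     # prob. a round is decisive
    prob_accept = p_accept_once / p_decide       # geometric-repetition result
    asn = n / p_decide                           # expected total items sampled
    return prob_accept, asn
```

Tightening the gap between k_reject and k_accept lowers the ASN (fewer indecisive rounds) at the cost of a less discriminating operating characteristic curve, which is the trade-off the paper's optimization resolves.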
A novel sampling design to explore gene-longevity associations
DEFF Research Database (Denmark)
De Rango, Francesco; Dato, Serena; Bellizzi, Dina
2008-01-01
To investigate the genetic contribution to familial similarity in longevity, we set up a novel experimental design where cousin-pairs born from siblings who were concordant or discordant for the longevity trait were analyzed. To check this design, two chromosomal regions already known to encompass longevity-related genes were examined: 6p21.3 (genes TNFalpha, TNFbeta, HSP70.1) and 11p15.5 (genes SIRT3, HRAS1, IGF2, INS, TH). Population pools of 1.6, 2.3 and 2.0 million inhabitants were screened, respectively, in Denmark, France and Italy to identify families matching the design requirements. A total of 234 trios composed by one centenarian, his/her child and a child of his/her concordant or discordant sib were collected. By using population-specific allele frequencies, we reconstructed haplotype phase and estimated the likelihood of Identical By Descent (IBD) haplotype sharing in cousin-pairs born
Designing waveforms for temporal encoding using a frequency sampling method
DEFF Research Database (Denmark)
Gran, Fredrik; Jensen, Jørgen Arendt
2007-01-01
, the amplitude spectrum of the transmitted waveform can be optimized, such that most of the energy is transmitted where the transducer has large amplification. To test the design method, a waveform was designed for a BK8804 linear array transducer. The resulting nonlinear frequency modulated waveform ... for the linear frequency modulated signal) were tested for both waveforms in simulation with respect to the Doppler frequency shift occurring when probing moving objects. It was concluded that the Doppler effect of moving targets does not significantly degrade the filtered output. Finally, in vivo measurements ...
Practical Tools for Designing and Weighting Survey Samples
Valliant, Richard; Dever, Jill A.; Kreuter, Frauke
2013-01-01
Survey sampling is fundamentally an applied field. The goal in this book is to put an array of tools at the fingertips of practitioners by explaining approaches long used by survey statisticians, illustrating how existing software can be used to solve survey problems, and developing some specialized software where needed. This book serves at least…
Effects-Driven Participatory Design: Learning from Sampling Interruptions
DEFF Research Database (Denmark)
Brandrup, Morten; Østergaard, Kija Lin; Hertzum, Morten
2017-01-01
a sustained focus on pursued effects and uses the experience sampling method (ESM) to collect real-use feedback. To illustrate the use of the method we analyze a case that involves the organizational implementation of electronic whiteboards at a Danish hospital to support the clinicians’ intra...
DESIGN, DEVELOPMENT AND FIELD DEPLOYMENT OF A TELEOPERATED SAMPLING SYSTEM
Energy Technology Data Exchange (ETDEWEB)
Dalmaso, M; Robert Fogle, R; Tony Hicks, T; Larry Harpring, L; Daniel Odell, D
2007-11-09
A teleoperated sampling system for the identification, collection and retrieval of samples following the detonation of an Improvised Nuclear Device (IND) or Radiological Dispersion Device (RDD) has been developed and tested in numerous field exercises. The system has been developed as part of the Defense Threat Reduction Agency's (DTRA) National Technical Nuclear Forensic (NTNF) Program. The system is based on a Remotec ANDROS Mark V-A1 platform. Extensive modifications and additions have been incorporated into the platform to enable it to meet the mission requirements. The Defense Science Board Task Force on Unconventional Nuclear Warfare Defense, 2000 Summer Study Volume III report recommended the Department of Defense (DOD) improve nuclear forensics capabilities to achieve accurate and fast identification and attribution. One of the strongest elements of protection is deterrence through the threat of reprisal, but to accomplish this objective a more rapid and authoritative attribution system is needed. The NTNF program provides the capability for attribution. Early on in the NTNF program, it was recognized that there would be a desire to collect debris samples for analysis as soon as possible after a nuclear event. Based on nuclear test experience, it was recognized that mean radiation fields associated with even low yield events could be several thousand R/Hr near the detonation point for some time after the detonation. In anticipation of pressures to rapidly sample debris near the crater, considerable effort is being devoted to developing a remotely controlled vehicle that could enter the high radiation field area and collect one or more samples for subsequent analysis.
Hierarchical topic modeling with nested hierarchical Dirichlet process
Institute of Scientific and Technical Information of China (English)
Yi-qun DING; Shan-ping LI; Zhen ZHANG; Bin SHEN
2009-01-01
This paper deals with the statistical modeling of latent topic hierarchies in text corpora. The height of the topic tree is assumed to be fixed, while the number of topics on each level is unknown a priori and is to be inferred from data. Taking a nonparametric Bayesian approach to this problem, we propose a new probabilistic generative model based on the nested hierarchical Dirichlet process (nHDP) and present a Markov chain Monte Carlo sampling algorithm for the inference of the topic tree structure as well as the word distribution of each topic and the topic distribution of each document. Our theoretical analysis and experiment results show that this model can produce a more compact hierarchical topic structure and captures more fine-grained topic relationships compared to the hierarchical latent Dirichlet allocation model.
Measuring Radionuclides in the environment: radiological quantities and sampling designs
Energy Technology Data Exchange (ETDEWEB)
Voigt, G. [ed.] [GSF - Forschungszentrum fuer Umwelt und Gesundheit Neuherberg GmbH, Oberschleissheim (Germany). Inst. fuer Strahlenschutz
1998-10-01
One aim of the workshop was to support and provide an ICRU report committee (International Union of Radiation Units) with actual information on techniques, data and knowledge of modern radioecology when radionuclides are to be measured in the environment. It has been increasingly recognised that some studies in radioecology, especially those involving both field sampling and laboratory measurements, have not paid adequate attention to the problem of obtaining representative, unbiased samples. This can greatly affect the quality of scientific interpretation, and the ability to manage the environment. Further, as the discipline of radioecology has developed, it has seen a growth in the numbers of quantities and units used, some of which are ill-defined and which are non-standardised. (orig.)
Restricted Repetitive Sampling in Designing of Control Charts
Directory of Open Access Journals (Sweden)
Muhammad Anwar Mughal
2017-06-01
Full Text Available In this article, criteria are defined to classify existing repetitive sampling into soft, moderate and strict conditions. This division rests on a suggested ratio, namely c2 (the constant used in the repetitive limits) to c1 (the constant used in the control limits), arranged in slabs. A restricted criterion is then devised on top of the existing repetitive sampling. Embedding the proposed scheme in the control chart makes it highly efficient at detecting shifts earlier, and it detects even smaller shifts at smaller ARLs. To give the user the best choice, the restricted criterion is further categorized into softly restricted, moderately restricted and strictly restricted. The restricted conditions depend on the value of the restriction parameter m, which varies from 2 to 6. The application of the proposed scheme to selected cases is given in self-explanatory tables.
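The repetitive-sampling control chart idea can be illustrated with a small ARL simulation: a subgroup mean outside the outer limits (±c1·se) signals, a mean inside the inner limits (±c2·se) is plotted as in-control, and a mean between the two triggers resampling without a decision. This is a generic sketch of the base (unrestricted) scheme, not the paper's restricted criterion, and the parameter values are illustrative:

```python
import math
import random

def arl_repetitive(shift, c1=3.0, c2=1.5, n=5, n_runs=1000, seed=2):
    """Simulated average run length (ARL, in plotted decisions) of an X-bar
    chart under repetitive sampling for a process with the given mean shift
    (in units of the individual-observation standard deviation)."""
    rng = random.Random(seed)
    se = 1.0 / math.sqrt(n)          # standard error of the subgroup mean
    total_decisions = 0
    for _ in range(n_runs):
        decisions = 0
        while True:
            xbar = rng.gauss(shift, se)
            if abs(xbar) > c1 * se:   # outside outer limits: signal
                decisions += 1
                break
            if abs(xbar) <= c2 * se:  # inside inner limits: in control
                decisions += 1
            # between c2*se and c1*se: repetitive region, resample
        total_decisions += decisions
    return total_decisions / n_runs

in_control_arl = arl_repetitive(0.0)   # long runs when the process is stable
shifted_arl = arl_repetitive(1.0)      # short runs once the mean shifts
```

The ratio of these two ARLs is the usual figure of merit: a good scheme keeps the in-control ARL long while driving the out-of-control ARL down, which is what the restricted criteria in the article aim to improve further.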
Sensory Hierarchical Organization and Reading.
Skapof, Jerome
The purpose of this study was to judge the viability of an operational approach aimed at assessing response styles in reading using the hypothesis of sensory hierarchical organization. A sample of 103 middle-class children from a New York City public school, between the ages of five and seven, took part in a three phase experiment. Phase one…
Design Method of the Semantic-driven Hierarchical Map Symbols%语义驱动的层次化地图符号设计方法
Institute of Scientific and Technical Information of China (English)
田江鹏; 贾奋励; 夏青; 吴金兵
2012-01-01
Research on map symbols is an important part of cartography. Current research on map symbols has focused mainly on visual graphics while paying little attention to semantics. This paper puts forward a method for semantic-driven hierarchical map symbol design, in which the semantic relations of map symbols serve as the benchmark for constructing symbol graphics, and the symbol graphics are controlled by a semantic model, so that the intrinsic value of the semantic components of map symbols can be fully exploited in design activities. We focus on four key steps of the method. The first is semantic feature extraction: we systematically summarize the semantic features of map symbols using ontological concepts. The second concerns morpheme design: the concept and design principles of morphemes, and their important role in map symbol design, are discussed. The third is modeling of associative semantic relations: modeling methods are discussed, and a practical case shows how to construct an associative semantic model based on common map symbols for public geographical information. The fourth is semantics-driven generation of map symbols: the processes and characteristics of symbol generation are analyzed. An existing map symbol standard was improved using the proposed design method, and a group of cognitive experiments showed that the proposed method performs better in cognitive efficiency and achieves relatively stable, high transmission efficiency in a simulated information-transmission process. In conclusion, the semantic relations of geospatial objects are the core of the semantic-driven hierarchical map symbol design method, which aims to improve the graphic design and understanding of map symbols. Characterized by symbol-design-oriented ontology, this method makes map symbols more semantically evident for better recognition.
Design of the CERN MEDICIS Collection and Sample Extraction System
Brown, Alexander
MEDICIS is a new facility at CERN ISOLDE that aims to produce radio-isotopes for medical research. Possible designs for the collection and transport system for the collection of radio-isotopes were investigated. A system using readily available equipment was devised with the aim of keeping costs to a minimum whilst maintaining the highest safety standards. FLUKA, a Monte Carlo radiation transport code, was used to simulate the radiation from the isotopes to be collected. By simulating the collection of all isotopes of interest to CERN's MEDICIS facility, 44Sc was found to give the largest dose. The simulations helped guide the amount of shielding used in the final design. Swiss regulations stipulating the allowed activity levels of individual isotopes were also considered within the body of the work.
Gu, Jiuwang; Khan, Javid; Chai, Zhisheng; Yuan, Yufei; Yu, Xiang; Liu, Pengyi; Wu, Mingmei; Mai, Wenjie
2016-01-01
Large surface area, sufficient light-harvesting and superior electron transport are the major factors for an ideal photoanode of dye-sensitized solar cells (DSSCs), which requires rational design of the nanoarchitectures and smart integration of state-of-the-art technologies. In this work, a 3D anatase TiO2 architecture consisting of vertically aligned 1D hierarchical TiO2 nanotubes (NTs) with ultra-dense branches (HTNTs, bottom layer) and 0D hollow TiO2 microspheres with rough surface (HTS, top layer) is first successfully constructed on transparent conductive fluorine-doped tin oxide glass through a series of facile processes. When used as photoanodes, the DSSCs achieve a very large short-circuit current density of 19.46 mA cm-2 and a high overall power conversion efficiency of 8.38%. The remarkable photovoltaic performance is predominantly ascribed to the enhanced charge transport capacity of the NTs (function as the electron highway), the large surface area of the branches (act as the electron branch lines), the pronounced light harvesting efficiency of the HTS (serve as the light scattering centers), and the engineered intimate interfaces between all of them (minimize the recombination effect). Our work demonstrates a possibility of fabricating superior photoanodes for high-performance DSSCs by rational design of nanoarchitectures and smart integration of multi-functional components.
Design-Based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation
Ojeda, Mario Miguel; Sahai, Hardeo
2002-01-01
Students in statistics service courses are frequently exposed to dogmatic approaches for evaluating the role of randomization in statistical designs, and inferential data analysis in experimental, observational and survey studies. In order to provide an overview for understanding the inference process, in this work some key statistical concepts in…
Structural integrity of hierarchical composites
Directory of Open Access Journals (Sweden)
Marco Paggi
2012-01-01
Full Text Available Interface mechanical problems are of paramount importance in engineering and materials science. Traditionally, due to the complexity of modelling their mechanical behaviour, interfaces are often treated as defects and their features are not explored. In this study, a different approach is illustrated, where the interfaces play an active role in the design of innovative hierarchical composites and are fundamental for their structural integrity. Numerical examples regarding cutting tools made of hierarchical cellular polycrystalline materials are proposed, showing that tailoring of interface properties at the different scales is the way to achieve superior mechanical responses that cannot be obtained using standard materials.
Prevalence of Mixed-Methods Sampling Designs in Social Science Research
Collins, Kathleen M. T.
2006-01-01
The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.
2007-01-01
A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.
2013-01-01
The goal of this chapter is to recommend quality criteria to guide evaluators' selections of sampling designs when mixing approaches. First, we contextualize our discussion of quality criteria and sampling designs by discussing the concept of interpretive consistency and how it impacts sampling decisions. Embedded in this discussion are…
Optimal adaptive group sequential design with flexible timing of sample size determination.
Cui, Lu; Zhang, Lanju; Yang, Bo
2017-04-26
Flexible sample size designs, including group sequential and sample size re-estimation designs, have been used as alternatives to fixed sample size designs to achieve more robust statistical power and better trial efficiency. In this work, a new representation of the sample size re-estimation design suggested by Cui et al. [5,6] is introduced as an adaptive group sequential design with flexible timing of sample size determination. This generalized adaptive group sequential design allows one-time sample size determination either before the start of a clinical study or in its mid-course. The new approach leads to possible design optimization on an expanded space of design parameters. Its equivalence to the sample size re-estimation design proposed by Cui et al. provides further insight on re-estimation design and helps to address common confusions and misunderstandings. Issues in designing flexible sample size trials, including design objectives, performance evaluation and implementation, are touched upon, with an example to illustrate. Copyright © 2017. Published by Elsevier Inc.
Hierarchical Multiagent Reinforcement Learning
2004-01-25
In this paper, we investigate the use of hierarchical reinforcement learning (HRL) to speed up the acquisition of cooperative multiagent tasks. We introduce a hierarchical multiagent reinforcement learning (RL) framework and propose a hierarchical multiagent RL algorithm called Cooperative HRL. In…
Hierarchically Nanostructured Materials for Sustainable Environmental Applications
Ren, Zheng; Guo, Yanbing; Liu, Cai-Hong; Gao, Pu-Xian
2013-11-01
This article presents a comprehensive overview of the hierarchical nanostructured materials with either geometry or composition complexity in environmental applications. The hierarchical nanostructures offer advantages of high surface area, synergistic interactions and multiple functionalities towards water remediation, environmental gas sensing and monitoring as well as catalytic gas treatment. Recent advances in synthetic strategies for various hierarchical morphologies such as hollow spheres and urchin-shaped architectures have been reviewed. In addition to the chemical synthesis, the physical mechanisms associated with the materials design and device fabrication have been discussed for each specific application. The development and application of hierarchical complex perovskite oxide nanostructures have also been introduced in photocatalytic water remediation, gas sensing and catalytic converter. Hierarchical nanostructures will open up many possibilities for materials design and device fabrication in environmental chemistry and technology.
Hierarchically nanostructured materials for sustainable environmental applications
Ren, Zheng; Guo, Yanbing; Liu, Cai-Hong; Gao, Pu-Xian
2013-01-01
This review presents a comprehensive overview of the hierarchical nanostructured materials with either geometry or composition complexity in environmental applications. The hierarchical nanostructures offer advantages of high surface area, synergistic interactions, and multiple functionalities toward water remediation, biosensing, environmental gas sensing and monitoring as well as catalytic gas treatment. Recent advances in synthetic strategies for various hierarchical morphologies such as hollow spheres and urchin-shaped architectures have been reviewed. In addition to the chemical synthesis, the physical mechanisms associated with the materials design and device fabrication have been discussed for each specific application. The development and application of hierarchical complex perovskite oxide nanostructures have also been introduced in photocatalytic water remediation, gas sensing, and catalytic converter. Hierarchical nanostructures will open up many possibilities for materials design and device fabrication in environmental chemistry and technology. PMID:24790946
Dual-Filter Estimation for Rotating-Panel Sample Designs
Directory of Open Access Journals (Sweden)
Francis A. Roesch
2017-06-01
Full Text Available Dual-filter estimators are described and tested for use in annual estimation for national forest inventories. The dual-filter approach uses a moving window estimator in the first pass, whose output serves as input to Theil's mixed estimator in the second pass. The moving window and dual-filter estimators are tested along with two other estimators in a sampling simulation of 152 simulated populations, which were developed from data collected in 38 states and Puerto Rico by the Forest Inventory and Analysis Program of the USDA Forest Service. The dual-filter estimators are shown to almost always provide some reduction in mean squared error (MSE) relative to the first-pass moving window estimators.
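The first-pass moving-window step can be sketched as follows (Theil's mixed-estimator second pass is omitted here, and the window width is an illustrative choice, not the paper's):

```python
def moving_window(values, width=5):
    """First-pass moving-window estimate for an annual panel series.
    For each year, return the mean of the observations inside a centred
    window of the given width, truncated at the ends of the series."""
    half = width // 2
    out = []
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

For example, `moving_window([1, 2, 3, 4, 5], width=3)` returns `[1.5, 2.0, 3.0, 4.0, 4.5]`; the end-year estimates average fewer observations, which is one reason a second smoothing pass can help.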
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
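The method-of-moments (Matheron) estimator discussed above can be sketched in a few lines; the equal-width distance binning is an illustrative assumption:

```python
import math

def empirical_variogram(coords, values, bin_width, max_dist):
    """Matheron's method-of-moments variogram estimator (a sketch):
    gamma(h) = (1 / (2 N(h))) * sum over pairs at lag ~h of (z_i - z_j)^2,
    with pairs grouped into equal-width distance bins."""
    nbins = int(max_dist / bin_width)
    sums = [0.0] * nbins
    counts = [0] * nbins
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(coords[i], coords[j])
            if d >= max_dist:
                continue
            b = int(d / bin_width)
            sums[b] += (values[i] - values[j]) ** 2
            counts[b] += 1
    # empty bins yield NaN rather than a spurious zero
    return [s / (2 * c) if c else float("nan") for s, c in zip(sums, counts)]
```

This non-robust estimator squares differences, which is exactly why the large outliers typical of throughfall data inflate it and drive up the sample sizes the study recommends.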
Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs
Directory of Open Access Journals (Sweden)
Faqir Muhammad
2007-01-01
Full Text Available In this study, a comparison has been made of different sampling designs using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99, collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been assessed using the bootstrap and the jackknife. HIES adopted a two-stage stratified random sample design: in the first stage, enumeration blocks and villages are treated as primary sampling units (PSUs), selected with probability proportional to size; secondary sampling units (SSUs, i.e. households) are then selected by systematic sampling with a random start. HIES used a single study variable. We have compared the HIES technique with four other designs: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables, income (y) and household size (x), and the jackknife and bootstrap were used for variance estimation. Simple random sampling with sample sizes of 462 to 561 gave moderate variances under both the jackknife and the bootstrap. Systematic sampling gave moderate variance with a sample size of 467. Under the jackknife with systematic sampling, the variance of the regression estimator exceeded that of the ratio estimator for sample sizes of 467 to 631; at a sample size of 952, the variance of the ratio estimator became greater than that of the regression estimator. Ranked set sampling proved the most efficient of the designs compared: with the jackknife and bootstrap it gave the minimum variance even at the smallest sample size (467). Two-phase sampling performed poorly. The multi-stage sampling applied by HIES gave large variances, especially when used with a single study variable.
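As a sketch of the ratio method and its jackknife variance under simple random sampling (the regression estimator and the stratified variants used in the study are analogous; the data and population total below are made up for illustration):

```python
def ratio_estimate(y, x, x_total):
    """Classical ratio estimator of a population total:
    Y_hat = (sum(y) / sum(x)) * X, where X is the known total of x."""
    return sum(y) / sum(x) * x_total

def jackknife_variance(y, x, x_total):
    """Delete-one jackknife variance of the ratio estimator:
    recompute the estimate n times, each time leaving one unit out."""
    n = len(y)
    reps = []
    for i in range(n):
        yi = y[:i] + y[i + 1:]
        xi = x[:i] + x[i + 1:]
        reps.append(ratio_estimate(yi, xi, x_total))
    mean = sum(reps) / n
    return (n - 1) / n * sum((r - mean) ** 2 for r in reps)
```

When y is exactly proportional to x (e.g. y = 2x), every delete-one replicate returns the same estimate and the jackknife variance is zero, which is the sense in which the ratio estimator gains from a strong y-x relationship such as income versus household size.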
On effects of trawling, benthos and sampling design.
Gray, John S; Dayton, Paul; Thrush, Simon; Kaiser, Michel J
2006-08-01
The evidence for the wider effects of fishing on the marine ecosystem demands that we incorporate these considerations into our management of human activities. The consequences of the direct physical disturbance of the seabed caused by towed bottom-fishing gear have been studied extensively, with over 100 manipulations reported in the peer-reviewed literature. The outcome of these studies varies according to the gear used and the habitat in which it was deployed. This variability in the response of different benthic systems concurs with established theoretical models of the response of community metrics to disturbance. Despite this powerful evidence, a recent FAO report wrongly concludes that the variability in the reported responses to fishing disturbance means that no firm conclusion as to the effects of fishing disturbance can be made. This thesis is further supported (incorrectly) by the supposition that current benthic sampling methodologies are inadequate to demonstrate the effects of fishing disturbance on benthic systems. The present article addresses these two erroneous conclusions, which may confuse non-experts and, in particular, policy-makers.
Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries
McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.
2013-01-01
Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
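A minimal Monte Carlo sketch of the kind of design comparison described above (the effort series, sample size, and simple expansion estimator are illustrative assumptions; the actual study simulated observed angling populations and also evaluated cluster designs):

```python
import random

def simulate_mse(effort, n, design, reps=5000, seed=1):
    """Monte Carlo MSE of an expansion estimate of total angling effort,
    sampling n of the len(effort) time periods by simple random (SRS)
    or systematic (SYS) sampling. Assumes len(effort) % n == 0 for SYS."""
    rng = random.Random(seed)
    N = len(effort)
    true_total = sum(effort)
    sq = 0.0
    for _ in range(reps):
        if design == "SRS":
            idx = rng.sample(range(N), n)
        else:  # SYS: random start, fixed sampling interval
            step = N // n
            start = rng.randrange(step)
            idx = [start + k * step for k in range(n)]
        est = N / n * sum(effort[i] for i in idx)
        sq += (est - true_total) ** 2
    return sq / reps
```

On an effort series with a within-day trend, the systematic design spreads the sample across the trend, so its MSE comes out well below that of simple random sampling, mirroring the ordering the study reports.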
Sampling flies or sampling flaws? Experimental design and inference strength in forensic entomology.
Michaud, J-P; Schoenly, Kenneth G; Moreau, G
2012-01-01
Forensic entomology is an inferential science because postmortem interval estimates are based on the extrapolation of results obtained in field or laboratory settings. Although enormous gains in scientific understanding and methodological practice have been made in forensic entomology over the last few decades, a majority of the field studies we reviewed do not meet the standards for inference, which are 1) adequate replication, 2) independence of experimental units, and 3) experimental conditions that capture a representative range of natural variability. Using a mock case-study approach, we identify design flaws in field and lab experiments and suggest methodological solutions for increasing inference strength that can inform future casework. Suggestions for improving data reporting in future field studies are also proposed.
DEFF Research Database (Denmark)
Jin, Zheming; Meng, Lexuan; Quintero, Juan Carlos Vasquez
2017-01-01
Due to the increasing need to reduce the cost and emissions of ships, shipboard applications are calling for advanced technologies to go onboard. Recently, cleaner power sources (i.e. gas turbines, fuel cells, solar and wind power), energy storage, advanced control and power/energy management have been introduced to meet the new requirements, thereby making the shipboard power system more like a microgrid. In this paper, a frequency-division based power sharing method is proposed to solve the contradiction between fuel efficiency and dynamic load conditions of marine vessels. With effective design...
Hierarchical self-assembly of designed 2x2-alpha-helix bundle proteins on Au(111) surfaces
DEFF Research Database (Denmark)
Wackerbarth, Hainer; Tofteng, A.P.; Jensen, K.J.
2006-01-01
Self-assembled monolayers of biomolecules on atomically planar surfaces offer the prospect of complex combinations of controlled properties, e.g., for bioelectronics. We have prepared a novel hemi-4-alpha-helix bundle protein by attaching two alpha-helical peptides to a cyclo-dithiothreitol (cyclo-DTT) template. The protein was de novo designed to self-assemble in solution to form a 4-alpha-helix bundle, whereas the disulfide moiety enables the formation of a self-assembled monolayer on a Au(111) surface by opening of the disulfide, thus giving rise to a two-step self-assembly process. The 2x2-alpha-helix bundle protein and its template were studied by X-ray photoelectron spectroscopy (XPS), electrochemical methods, and electrochemical in situ scanning tunneling microscopy (in situ STM). XPS showed that the cyclo-DTT opens on adsorption to a gold surface with the integrity of the 2x2-alpha-helix bundle...
A cheap and quickly adaptable in situ electrical contacting TEM sample holder design.
Börrnert, Felix; Voigtländer, Ralf; Rellinghaus, Bernd; Büchner, Bernd; Rümmeli, Mark H; Lichte, Hannes
2014-04-01
In situ electrical characterization of nanostructures inside a transmission electron microscope provides crucial insight into the mechanisms of functioning micro- and nano-electronic devices. For such in situ investigations, specialized sample holders are necessary. A simple and affordable but flexible design is important; especially when sample geometries change, a holder should be adaptable with minimum effort. Atomic-resolution imaging is standard nowadays, so a sample holder must preserve this capability. A sample holder design for on-chip samples is presented that fulfils these requirements. On-chip sample devices have the advantage that they can be manufactured via standard fabrication routes.
A comparison of two sampling designs for fish assemblage assessment in a large river
Kiraly, Ian A.; Coghlan Jr., Stephen M.; Zydlewski, Joseph; Hayes, Daniel
2014-01-01
We compared the efficiency of stratified random and fixed-station sampling designs to characterize fish assemblages in anticipation of dam removal on the Penobscot River, the largest river in Maine. We used boat electrofishing methods in both sampling designs. Multiple 500-m transects were selected randomly and electrofished in each of nine strata within the stratified random sampling design. Within the fixed-station design, up to 11 transects (1,000 m) were electrofished, all of which had been sampled previously. In total, 88 km of shoreline were electrofished during summer and fall in 2010 and 2011, and 45,874 individuals of 34 fish species were captured. Species-accumulation and dissimilarity curve analyses indicated that all sampling effort, other than fall 2011 under the fixed-station design, provided repeatable estimates of total species richness and proportional abundances. Overall, our sampling designs were similar in precision and efficiency for sampling fish assemblages. The fixed-station design was negatively biased for estimating the abundance of species such as Common Shiner Luxilus cornutus and Fallfish Semotilus corporalis and was positively biased for estimating biomass for species such as White Sucker Catostomus commersonii and Atlantic Salmon Salmo salar. However, we found no significant differences between the designs for proportional catch and biomass per unit effort, except in fall 2011. The difference observed in fall 2011 was due to limitations on the number and location of fixed sites that could be sampled, rather than an inherent bias within the design. Given the results from sampling in the Penobscot River, application of the stratified random design is preferable to the fixed-station design due to less potential for bias caused by varying sampling effort, such as what occurred in the fall 2011 fixed-station sample or due to purposeful site selection.
A strategy for sampling on a sphere applied to 3D selective RF pulse design.
Wong, S T; Roos, M S
1994-12-01
Conventional constant angular velocity sampling of the surface of a sphere results in a higher sampling density near the two poles relative to the equatorial region. More samples, and hence longer sampling time, are required to achieve a given sampling density in the equatorial region when compared with uniform sampling. This paper presents a simple expression for a continuous sample path through a nearly uniform distribution of points on the surface of a sphere. Sampling of concentric spherical shells in k-space with the new strategy is used to design 3D selective inversion and spin-echo pulses. These new 3D selective pulses have been implemented and verified experimentally.
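A generic spherical-spiral construction of the kind described (this is the common golden-angle spiral, not necessarily the authors' exact expression) produces a continuous path through a nearly uniform distribution of points:

```python
import math

def spiral_sphere(n):
    """n near-uniform points along a continuous spiral on the unit sphere.
    z descends in equal steps (equal-area bands), while the azimuth
    advances by the golden angle so successive turns interleave evenly."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle, ~2.39996 rad
    pts = []
    for k in range(n):
        z = 1.0 - 2.0 * (k + 0.5) / n          # uniform in z => uniform area
        r = math.sqrt(1.0 - z * z)             # radius of the z-slice
        phi = golden * k
        pts.append((r * math.cos(phi), r * math.sin(phi), z))
    return pts
```

Unlike constant-angular-velocity sampling, this path neither crowds the poles nor under-samples the equator, so fewer k-space samples reach a given equatorial density, which is the efficiency argument for the 3D selective pulse design above.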
Hierarchical Affinity Propagation
Givoni, Inmar; Frey, Brendan J
2012-01-01
Affinity propagation is an exemplar-based clustering algorithm that finds a set of data points that best exemplify the data, and associates each data point with one exemplar. We extend affinity propagation in a principled way to solve the hierarchical clustering problem, which arises in a variety of domains including biology, sensor networks and decision making in operational research. We derive an inference algorithm that operates by propagating information up and down the hierarchy, and is efficient despite the high-order potentials required for the graphical model formulation. We demonstrate that our method outperforms greedy techniques that cluster one layer at a time. We show that on an artificial dataset designed to mimic the HIV-strain mutation dynamics, our method outperforms related methods. For real HIV sequences, where the ground truth is not available, we show our method achieves better results, in terms of the underlying objective function, and show the results correspond meaningfully to geographi...
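For reference, the flat affinity propagation algorithm that the paper extends can be sketched in plain Python (the damping factor, iteration count, and the similarities in the usage note are illustrative; production code would use an optimized implementation such as scikit-learn's `AffinityPropagation`):

```python
def affinity_propagation(S, damping=0.5, iters=200):
    """Flat affinity propagation (the base algorithm extended in the paper).
    S[i][k] is the similarity of point i to candidate exemplar k, and the
    diagonal S[k][k] holds the preferences. Returns an exemplar index per
    point; assumes at least one exemplar emerges."""
    n = len(S)
    R = [[0.0] * n for _ in range(n)]   # responsibilities r(i, k)
    A = [[0.0] * n for _ in range(n)]   # availabilities a(i, k)
    for _ in range(iters):
        for i in range(n):              # r(i,k) = s(i,k) - max_{k'!=k}(a+s)
            comp = [A[i][k] + S[i][k] for k in range(n)]
            m1 = max(comp)
            k1 = comp.index(m1)
            m2 = max(v for k, v in enumerate(comp) if k != k1)
            for k in range(n):
                new = S[i][k] - (m2 if k == k1 else m1)
                R[i][k] = damping * R[i][k] + (1 - damping) * new
        for k in range(n):              # availability updates
            pos = [max(0.0, R[i][k]) for i in range(n)]
            tot = sum(pos)
            for i in range(n):
                if i == k:              # a(k,k) = sum of positive r(i',k)
                    new = tot - pos[k]
                else:                   # a(i,k) = min(0, r(k,k) + others)
                    new = min(0.0, R[k][k] + tot - pos[k] - pos[i])
                A[i][k] = damping * A[i][k] + (1 - damping) * new
    exemplars = [k for k in range(n) if R[k][k] + A[k][k] > 0]
    return [i if i in exemplars else max(exemplars, key=lambda k: S[i][k])
            for i in range(n)]
```

With similarities s(i,k) = -(x_i - x_k)^2 and a common negative preference on the diagonal, well-separated groups each elect a single exemplar; the hierarchical extension in the paper couples several such layers instead of running them greedily one at a time.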
Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B.
2015-07-01
A wireless-based, custom-built aerosol sampling network was designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems were used in a field measurement campaign in which sodium aerosol dispersion experiments were conducted as part of environmental impact studies related to a sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each with a custom-built sampling head and wireless control networking designed with a Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain where manual operation is difficult due to the requirement of simultaneous operation and status logging.
Energy Technology Data Exchange (ETDEWEB)
Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B. [Radiation Impact Assessment Section, Radiological Safety Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102 (India)
2015-07-15
A Pilot Sampling Design for Estimating Outdoor Recreation Site Visits on the National Forests
Stanley J. Zarnoch; S.M. Kocis; H. Ken Cordell; D.B.K. English
2002-01-01
A pilot sampling design is described for estimating site visits to National Forest System lands. The three-stage sampling design consisted of national forest ranger districts, site days within ranger districts, and last-exiting recreation visitors within site days. Stratification was used at both the primary and secondary stages. Ranger districts were stratified based...
A data structure for describing sampling designs to aid in compilation of stand attributes
John C. Byrne; Albert R. Stage
1988-01-01
Maintaining permanent plot data with different sampling designs over long periods within an organization, and sharing such information between organizations, requires that common standards be used. A data structure for the description of the sampling design within a stand is proposed. It is composed of just those variables and their relationships needed to compile...
An Optimal Spatial Sampling Design for Intra-Urban Population Exposure Assessment.
Kumar, Naresh
2009-02-01
This article offers an optimal spatial sampling design that captures maximum variance with the minimum sample size. The proposed sampling design addresses the weaknesses of the design that Kanaroglou et al. (2005) used for identifying 100 sites for capturing population exposure to NO₂ in Toronto, Canada. Their sampling design suffers from a number of weaknesses and fails to capture the spatial variability in NO₂ effectively. The demand surface they used is spatially autocorrelated and weighted by population size, which leads to the selection of redundant sites. The location-allocation model (LAM) available with commercial software packages, which they used to identify their sample sites, is not designed to solve spatial sampling problems using spatially autocorrelated data. A computer application (written in C++) that utilizes a spatial search algorithm was developed to implement the proposed sampling design. The design was implemented in three different urban environments, namely Cleveland, OH; Delhi, India; and Iowa City, IA, to identify optimal sample sites for monitoring airborne particulates.
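The abstract's core complaint is that autocorrelated, population-weighted demand surfaces lead location-allocation models to pick redundant (clustered) sites. A minimal pure-Python sketch of a greedy spatial search that avoids this by maximizing the minimum inter-site distance (a space-filling proxy for capturing spatial variability) is shown below; the candidate grid and the maximin criterion are illustrative assumptions, not the author's C++ implementation:

```python
import math
import random

def greedy_maximin_sites(candidates, k):
    """Greedy spatial search: choose k sites that maximize the minimum
    pairwise distance, so no two monitors end up redundantly close."""
    # Seed with the candidate farthest from the centroid of the region.
    cx = sum(x for x, _ in candidates) / len(candidates)
    cy = sum(y for _, y in candidates) / len(candidates)
    chosen = [max(candidates, key=lambda p: math.dist(p, (cx, cy)))]
    while len(chosen) < k:
        # Add the candidate whose nearest already-chosen site is farthest away.
        best = max((p for p in candidates if p not in chosen),
                   key=lambda p: min(math.dist(p, q) for q in chosen))
        chosen.append(best)
    return chosen

random.seed(1)
grid = [(random.random(), random.random()) for _ in range(200)]  # candidate locations
sites = greedy_maximin_sites(grid, 10)
```

A real implementation would weight the criterion by the demand surface after decorrelating it; this sketch only shows why a sequential spatial search avoids the redundant-site problem.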
Directory of Open Access Journals (Sweden)
Wei Lin Teoh
Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shift, ranging from highly skewed when the process is in control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and fair representation of the central tendency, especially for the right-skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on average while maintaining the same detection speed as the two former charts. An example involving potassium sorbate added in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed.
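The ARL-versus-MRL point above is easy to verify by simulation: for an in-control Shewhart-type chart the run-length distribution is geometric and strongly right-skewed, so the median run length sits well below the average. A small sketch (3-sigma limits on a unit-variance process; not the paper's DS chart, just the skewness argument):

```python
import random
import statistics

def run_length(shift, limit=3.0):
    """Samples until a point falls outside +/- limit for a unit-variance
    process whose mean has shifted by `shift` standard deviations."""
    n = 0
    while True:
        n += 1
        if abs(random.gauss(shift, 1.0)) > limit:
            return n

random.seed(42)
in_control = [run_length(0.0) for _ in range(2000)]
arl = statistics.fmean(in_control)    # mean run length (theory: ~370)
mrl = statistics.median(in_control)   # median run length (theory: ~257)
# The right skew pulls the mean far above the median, which is why the
# abstract argues the MRL is the fairer summary of chart performance.
```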
Theory, design, and performance of extended tuning range semiconductor lasers with sampled gratings
Energy Technology Data Exchange (ETDEWEB)
Jayaraman, V.; Chuang, Zuon-Min; Coldren, L.A. (Univ. of California, Santa Barbara, CA (United States))
1993-06-01
The authors have recently demonstrated 57 nm of tuning in a monolithic semiconductor laser using conventional DBR technology with grating elements removed in a periodic fashion. This paper describes the theory and design of these sampled-grating tunable lasers. They first calculate sampled grating reflectivity. They then present normalized design curves which quantify the tradeoffs involved in a sampled-grating DBR laser with two mismatched sampled-grating mirrors. These results are applied to a design example in the InP-InGaAsP system. The design example provides 70 nm of tuning while maintaining >30 dB MSR, with fractional index change Δμ/μ < 0.2% in the mirrors and only 1 mm of total sampled grating length. Section 4 summarizes recent experimental results and compares them to theory. They also analyze other device structures which make use of sampled gratings.
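The extended tuning mechanism can be sketched numerically: a sampled-grating mirror reflects in a comb of peaks spaced by roughly Δλ = λ²/(2·n_g·Z_s) for sampling period Z_s, and two mirrors with slightly different periods extend the tuning range through the Vernier effect. The group index and sampling periods below are illustrative assumptions, not the paper's design values:

```python
# Comb spacing and Vernier-extended tuning range of a two-mirror
# sampled-grating DBR laser (hypothetical numbers for illustration).
LAM = 1.55e-6       # centre wavelength, m
NG = 3.7            # group index (assumed)
Z_FRONT = 61.5e-6   # front-mirror sampling period, m (assumed)
Z_BACK = 68.3e-6    # back-mirror sampling period, m (assumed)

def comb_spacing(z_sample):
    """Reflectivity-peak spacing of a sampled-grating mirror."""
    return LAM ** 2 / (2 * NG * z_sample)

d_front = comb_spacing(Z_FRONT)   # ~5.3 nm between front-mirror peaks
d_back = comb_spacing(Z_BACK)     # ~4.8 nm between back-mirror peaks

# Vernier effect: a small differential index change hops the aligned
# peak pair, so the usable tuning range grows to roughly the product
# of the spacings over their difference.
extended_range = d_front * d_back / abs(d_front - d_back)   # ~48 nm here
```

With these assumed periods the ~0.5 nm peak mismatch leverages a few nanometres of mirror tuning into tens of nanometres of overall range, which is the effect behind the 57-70 nm figures in the abstract.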
Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H
2015-12-01
Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid are ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.
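The rule for the late sample, "a time on the declining limb where the plasma concentration equals the early-sample concentration", is a root-finding problem once a plasma model is characterized. A sketch with a hypothetical one-compartment oral-absorption profile (all parameters assumed, not from the paper):

```python
import math

def conc(t, dose=100.0, ka=2.0, ke=0.2, v=10.0):
    """Hypothetical one-compartment plasma profile: rise then decline."""
    return dose * ka / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def matching_late_time(t_early, t_peak, t_end=48.0):
    """Bisect on the declining limb for the time where the plasma
    concentration equals the early-sample concentration."""
    target = conc(t_early)
    lo, hi = t_peak, t_end        # conc(lo) > target > conc(hi)
    for _ in range(60):
        mid = (lo + hi) / 2
        if conc(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Early sample as soon as concentrations clear the LOQ (0.25 h assumed);
# the matching late sample then falls on the terminal decline.
t_late = matching_late_time(0.25, t_peak=1.28)
```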
The influence of sampling design on tree-ring-based quantification of forest growth.
Nehrbass-Ahles, Christoph; Babst, Flurin; Klesse, Stefan; Nötzli, Magdalena; Bouriaud, Olivier; Neukom, Raphael; Dobbertin, Matthias; Frank, David
2014-09-01
Tree-rings offer one of the few possibilities to empirically quantify and reconstruct forest growth dynamics over years to millennia. As the scientific community employing tree-ring parameters grows, recent research has suggested that commonly applied sampling designs (i.e. how and which trees are selected for dendrochronological sampling) may introduce considerable biases in quantifications of forest responses to environmental change. To date, a systematic assessment of the consequences of sampling design on dendroecological and dendroclimatological conclusions has not been performed. Here, we investigate potential biases by sampling a large population of trees and replicating diverse sampling designs. This is achieved by retroactively subsetting the population and specifically testing for biases emerging for climate reconstruction, growth response to climate variability, long-term growth trends, and quantification of forest productivity. We find that commonly applied sampling designs can impart systematic biases of varying magnitude to any type of tree-ring-based investigation, independent of the total number of samples considered. Quantifications of forest growth and productivity are particularly susceptible to biases, whereas growth responses to short-term climate variability are less affected by the choice of sampling design. The world's most frequently applied sampling design, focusing on dominant trees only, can bias absolute growth rates by up to 459% and trends in excess of 200%. Our findings challenge paradigms where a subset of samples is typically considered to be representative for the entire population. The only two sampling strategies meeting the requirements for all types of investigations are (i) sampling of all individuals within a fixed area; and (ii) fully randomized selection of trees. This result advertises the consistent implementation of a widely applicable sampling design to simultaneously reduce uncertainties in
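The dominant-tree bias described above can be reproduced in a toy simulation: if growth correlates with tree size, sampling only the largest trees overstates mean growth no matter how many trees are cored, while a fully randomized subset stays unbiased. The stand, size-growth relationship, and parameters below are invented for illustration:

```python
import random
import statistics

random.seed(7)
# Toy stand: diameter (lognormal, cm) and annual ring width (mm) that
# increases with diameter -- the assumed size-growth link.
stand = []
for _ in range(1000):
    dbh = random.lognormvariate(3.0, 0.4)
    growth = 0.05 * dbh + random.gauss(0, 0.3)
    stand.append((dbh, growth))

true_mean = statistics.fmean(g for _, g in stand)

# "Dominant trees only": the 100 largest-diameter trees (common design).
dominant = sorted(stand, key=lambda t: t[0], reverse=True)[:100]
dom_mean = statistics.fmean(g for _, g in dominant)

# Fully randomized selection of the same number of trees.
rand_mean = statistics.fmean(g for _, g in random.sample(stand, 100))
# dom_mean systematically exceeds true_mean; rand_mean does not.
```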
Precision and cost considerations for two-stage sampling in a panelized forest inventory design.
Westfall, James A; Lister, Andrew J; Scott, Charles T
2016-01-01
Due to the relatively high cost of measuring sample plots in forest inventories, considerable attention is given to sampling and plot designs during the forest inventory planning phase. A two-stage design can be efficient from a field work perspective as spatially proximate plots are grouped into work zones. A comparison between subsampling with units of unequal size (SUUS) and a simple random sample (SRS) design in a panelized framework assessed the statistical and economic implications of using the SUUS design for a case study in the Northeastern USA. The sampling errors for estimates of forest land area and biomass were approximately 1.5-2.2 times larger with SUUS prior to completion of the inventory cycle. Considerable sampling error reductions were realized by using the zones within a post-stratified sampling paradigm; however, post-stratification of plots in the SRS design always provided smaller sampling errors in comparison. Cost differences between the two designs indicated the SUUS design could reduce the field work expense by 2-7%. The results also suggest the SUUS design may provide substantial economic advantage for tropical forest inventories, where remote areas, poor access, and lower wages are typically encountered.
An improved adaptive sampling and experiment design method for aerodynamic optimization
Institute of Scientific and Technical Information of China (English)
Huang Jiangtao; Gao Zhenghong; Zhou Zhu; Zhao Ke
2015-01-01
The experiment design method is key to constructing a highly reliable surrogate model for numerical optimization in large-scale projects. Within the method, the experimental design criterion directly affects the accuracy of the surrogate model and the optimization efficiency. To address the shortcomings of traditional experimental design, an improved adaptive sampling method is proposed in this paper. The surrogate model is first constructed from basic sparse samples. The supplementary sampling position is then detected according to specified criteria, which introduce energy-function and curvature sampling criteria based on a radial basis function (RBF) network. The sampling detection criteria consider both the uniformity of the sample distribution and the description of hypersurface curvature, so as to significantly improve the prediction accuracy of the surrogate model with far fewer samples. For a surrogate model constructed with sparse samples, sample uniformity is an important factor in interpolation accuracy during the initial stage of adaptive sampling and surrogate model training. Along with the improvement of uniformity, the curvature description of the objective function surface gradually becomes more important. In consideration of these issues, a crowdness enhance function and a root mean square error (RMSE) feedback function are introduced in the C criterion expression. Thus, a new sampling method called RMSE and crowdness enhance (RCE) adaptive sampling is established. The validity of the RCE adaptive sampling method is studied first through typical test functions and then on an airfoil/wing aerodynamic optimization design problem, which has a high-dimensional design space. The results show that the RCE adaptive sampling method not only reduces the required number of samples, but also effectively improves the prediction accuracy of the surrogate model, which has broad prospects for applications.
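The loop described above (fit an RBF surrogate on sparse samples, then add points where curvature is high but crowding is low) can be sketched in one dimension. This is a miniature of the idea only: the criterion below multiplies an estimated surrogate curvature by the distance to the nearest existing sample, standing in for the paper's full RCE criterion (no RMSE feedback term), and the objective function is an invented stand-in for an expensive simulation:

```python
import math

def rbf_fit(xs, ys, eps=2.0):
    """Interpolate with Gaussian RBFs by solving the kernel system with
    naive partial-pivoting Gaussian elimination (fine for small n)."""
    n = len(xs)
    a = [[math.exp(-(eps * (xs[i] - xs[j])) ** 2) for j in range(n)] + [ys[i]]
         for i in range(n)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):             # back substitution
        w[r] = (a[r][n] - sum(a[r][c] * w[c] for c in range(r + 1, n))) / a[r][r]
    return lambda x: sum(wi * math.exp(-(eps * (x - xi)) ** 2)
                         for wi, xi in zip(w, xs))

def objective(x):                              # stand-in "expensive" function
    return math.sin(3 * x) + 0.5 * x

xs = [0.0, 0.5, 1.0, 1.5, 2.0]                 # basic sparse samples
for _ in range(5):                             # adaptive refinement loop
    s = rbf_fit(xs, [objective(x) for x in xs])
    h = 0.01
    def crit(c):
        curvature = abs(s(c - h) - 2 * s(c) + s(c + h))  # surface curvature
        crowd = min(abs(c - x) for x in xs)              # uniformity term
        return curvature * crowd
    cands = [i * 2.0 / 200 for i in range(1, 200)]
    xs.append(max(cands, key=crit))            # add the supplementary sample
```

Because the crowding factor vanishes at existing samples, each iteration places the new point in a poorly covered, highly curved region, which is the trade-off the abstract describes.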
Hierarchical Reverberation Mapping
Brewer, Brendon J
2013-01-01
Reverberation mapping (RM) is an important technique in studies of active galactic nuclei (AGN). The key idea of RM is to measure the time lag $\tau$ between variations in the continuum emission from the accretion disc and the subsequent response of the broad line region (BLR). The measurement of $\tau$ is typically used to estimate the physical size of the BLR and is combined with other measurements to estimate the black hole mass $M_{\rm BH}$. A major difficulty with RM campaigns is the large amount of data needed to measure $\tau$. Recently, Fine et al. (2012) introduced a new approach to RM where the BLR light curve is sparsely sampled, but this is counteracted by observing a large sample of AGN, rather than a single system. The results are combined to infer properties of the sample of AGN. In this letter we implement this method using a hierarchical Bayesian model and contrast this with the results from the previous stacked cross-correlation technique. We find that our inferences are more precise and allow fo...
Cost-effective Sampling Design Applied to Large-scale Monitoring of Boreal Birds
Directory of Open Access Journals (Sweden)
Matthew Carlson
2002-12-01
Despite their important roles in biodiversity conservation, large-scale ecological monitoring programs are scarce, in large part due to the difficulty of achieving an effective design under fiscal constraints. Using long-term avian monitoring in the boreal forest of Alberta, Canada as an example, we present a methodology that uses power analysis, statistical modeling, and partial derivatives to identify cost-effective sampling strategies for ecological monitoring programs. Empirical parameter estimates were used in simulations that estimated the power of sampling designs to detect trends in a variety of species' populations and community metrics. The ability to detect a trend with increased sample effort depended on the monitoring target's variability and how effort was allocated among sampling parameters. Power estimates were used to develop nonlinear models of the relationship between sample effort and power. A cost model was also developed, and partial derivatives of the power and cost models were evaluated to identify two cost-effective avian sampling strategies. To decrease sample error, sampling multiple plots at a site is preferable to multiple within-year visits to the site, and many sites should be sampled relatively infrequently rather than sampling few sites frequently, although the importance of frequent sampling increases for variable targets. We end by stressing the need for long-term, spatially extensive data for additional taxa, and by introducing optimal design as an alternative to power analysis for the evaluation of ecological monitoring program designs.
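The power-simulation step the abstract describes can be sketched directly: simulate a monitored index with a linear trend plus sampling noise, test the OLS slope, and count rejections. Everything here is a generic sketch (the approximate |t| > 2 cutoff and the trend and noise magnitudes are assumptions, not the study's parameter estimates):

```python
import random
import statistics

def detects_trend(years, trend, sd, t_crit=2.0):
    """One simulated monitoring series: linear trend + noise, then an
    OLS slope test (|t| > t_crit, roughly a 5% two-sided test)."""
    xs = list(range(years))
    ys = [trend * x + random.gauss(0, sd) for x in xs]
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    resid = [y - ybar - slope * (x - xbar) for x, y in zip(xs, ys)]
    se = (sum(r * r for r in resid) / (years - 2) / sxx) ** 0.5
    return abs(slope / se) > t_crit

def power(years, trend, sd, sims=1000):
    """Estimated power = fraction of simulations detecting the trend."""
    return sum(detects_trend(years, trend, sd) for _ in range(sims)) / sims

random.seed(3)
low_noise = power(years=10, trend=0.5, sd=1.0)   # precise monitoring target
high_noise = power(years=10, trend=0.5, sd=3.0)  # variable monitoring target
```

Wrapping `power` in a cost model and differentiating, as the abstract does, then tells you which sampling parameter buys the most power per dollar.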
Survey-Based Cross-Country Comparisons Where Countries Vary in Sample Design: Issues and Solutions
Directory of Open Access Journals (Sweden)
Kaminska Olena
2017-03-01
In multi-national surveys, different countries usually implement different sample designs. The sample designs affect the variance of estimates of differences between countries. When making such estimates, analysts often fail to take sufficient account of sample design. This failure occurs sometimes because variables indicating stratification, clustering, or weighting are unavailable, partially available, or in a form that is unsuitable for cross-national analysis. In this article, we demonstrate how complex sample design should be taken into account when estimating differences between countries, and we provide practical guidance to analysts and to data producers on how to deal with partial or inappropriately coded sample design indicator variables. Using EU-SILC as a case study, we evaluate the inverse misspecification effect (imeff) that results from ignoring clustering, stratification, or both in a between-country comparison where countries' sample designs differ. We present imeff for estimates of between-country differences in a number of demographic and economic variables for 19 European Union Member States. We assess the magnitude of imeff and the associated impact on standard error estimates. Our empirical findings illustrate that it is important for data producers to supply appropriate sample design indicators and for analysts to use them.
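Why ignoring clustering distorts variance estimates can be shown with the classic design effect, deff = 1 + (m − 1)·ICC for m interviews per cluster, checked against a brute-force simulation. The population parameters below (cluster-effect SD 0.5, within-cluster SD 1.0, so ICC = 0.2) are invented for illustration:

```python
import random
import statistics

def deff_formula(m, icc):
    """Approximate design effect of one-stage cluster sampling."""
    return 1 + (m - 1) * icc

random.seed(5)
# Toy population: 200 clusters of 20 people with a cluster-level random
# effect inducing intra-cluster correlation ICC = 0.25 / 1.25 = 0.2.
pop = [[random.gauss(u, 1.0) for _ in range(20)]
       for u in [random.gauss(0, 0.5) for _ in range(200)]]
flat = [x for c in pop for x in c]

def var_of_mean(draw, reps=2000):
    """Monte Carlo variance of the sample mean under a sampling scheme."""
    return statistics.variance([statistics.fmean(draw()) for _ in range(reps)])

# Same total n = 200: ten whole clusters vs. a simple random sample.
cluster_var = var_of_mean(lambda: [x for c in random.sample(pop, 10) for x in c])
srs_var = var_of_mean(lambda: random.sample(flat, 200))

deff_sim = cluster_var / srs_var          # simulated design effect
deff_theory = deff_formula(20, 0.2)       # = 4.8
```

An analyst who treats the clustered sample as an SRS understates the variance by roughly this factor, which is the kind of misspecification the imeff in the abstract quantifies (in the inverse direction).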
Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.
Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo
2016-11-01
We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.
The role of data analysis in sampling design of environmental monitoring
Energy Technology Data Exchange (ETDEWEB)
Shyr, L.J.; Herrera, H.; Haaker, R. [Sandia National Labs., Albuquerque, NM (United States). Environmental and Emergency Management Dept.
1998-03-01
The report is intended to address the need for data analysis in environmental sampling programs. Routine environmental sampling has been conducted at Sandia National Laboratories/New Mexico (SNL/NM) to ensure that site operations have not resulted in undue risk to the public and the environment. Over the years, large amounts of data have been accumulated. The richness of the data should be fully utilized to improve sampling design and prioritize sampling needs for a technically-sound, yet cost-effective sampling design. The report presents a methodology for analyzing environmental monitoring data and demonstrates the application by using SNL's historical monitoring data. Recommendations for sampling design modification were derived based on the results of the analyses.
Hybrid and hierarchical composite materials
Kim, Chang-Soo; Sano, Tomoko
2015-01-01
This book addresses a broad spectrum of areas in both hybrid materials and hierarchical composites, including recent development of processing technologies, structural designs, modern computer simulation techniques, and the relationships between the processing-structure-property-performance. Each topic is introduced at length with numerous and detailed examples and over 150 illustrations. In addition, the authors present a method of categorizing these materials, so that representative examples of all material classes are discussed.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Design of sample carrier for neutron irradiation facility at TRIGA MARK II nuclear reactor
Abdullah, Y.; Hamid, N. A.; Mansor, M. A.; Ahmad, M. H. A. R. M.; Yusof, M. R.; Yazid, H.; Mohamed, A. A.
2013-06-01
The objective of this work is to design a sample carrier for neutron irradiation experiments at the beam ports of a research nuclear reactor, the Reaktor TRIGA PUSPATI (RTP). The sample carrier was designed so that irradiation experiments can be performed safely by researchers. This development will resolve the sample-transfer issues faced by researchers at the facility when performing neutron irradiation studies. The function of the sample carrier is to ensure that samples can be transferred into and out of the reactor's beam port safely and effectively. The design model used was the House of Quality (HOQ) method, which is typically used to develop product specifications, set numerical targets to work toward, and determine how well the identified needs can be met. The chosen sample carrier (product) consists of a cylindrical casing transported by hydraulic cylinders. Sample placement is done manually, locomotion is by wheel, and the shielding is made of boron materials. The sample carrier design can shield thermal neutrons during irradiation so that only a low fluence of fast neutrons irradiates the sample.
Sampling designs matching species biology produce accurate and affordable abundance indices
Directory of Open Access Journals (Sweden)
Grant Harris
2013-12-01
Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps by sampling only where resources attract animals (i.e., targeted sampling), it would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and traps stationary or moved between capture sessions. We began by identifying when to sample, and if bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km² cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture
Sampling designs matching species biology produce accurate and affordable abundance indices
Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff
2013-01-01
Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps by sampling only where resources attract animals (i.e., targeted sampling), it would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and traps stationary or moved between capture sessions. We began by identifying when to sample, and if bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km² cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions, which
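The "equal probability of capture" assumption at the heart of this abstract underlies the classic two-session mark-recapture abundance estimator. A toy sketch using Chapman's bias-corrected Lincoln-Petersen form (the population size and capture counts are invented; real designs like those in the study use multi-session capture-recapture models):

```python
import random

random.seed(9)
N_TRUE = 500                       # true abundance in the simulated area
population = range(N_TRUE)

# Two capture sessions in which every individual is equally catchable --
# exactly the assumption grid designs try to satisfy.
session1 = set(random.sample(population, 120))
session2 = set(random.sample(population, 120))
recaptures = len(session1 & session2)      # marked animals caught again

# Chapman's bias-corrected Lincoln-Petersen abundance estimate.
n_hat = (len(session1) + 1) * (len(session2) + 1) / (recaptures + 1) - 1
```

If animals near attractive resources were twice as catchable (the targeted-sampling worry), recaptures would be inflated and `n_hat` would underestimate abundance, which is the bias mechanism the simulations in the abstract quantify.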
Sample size adjustment designs with time-to-event outcomes: A caution.
Freidlin, Boris; Korn, Edward L
2017-08-01
Sample size adjustment designs, which allow increasing the study sample size based on interim analysis of outcome data from a randomized clinical trial, have been increasingly promoted in the biostatistical literature. Although it is recognized that group sequential designs can be at least as efficient as sample size adjustment designs, many authors argue that a key advantage of these designs is their flexibility; interim sample size adjustment decisions can incorporate information and business interests external to the trial. Recently, Chen et al. (Clinical Trials 2015) considered sample size adjustment applications in the time-to-event setting using a design (CDL) that limits adjustments to situations where the interim results are promising. The authors demonstrated that while CDL provides little gain in unconditional power (versus fixed-sample-size designs), there is a considerable increase in conditional power for trials in which the sample size is adjusted. In time-to-event settings, sample size adjustment allows an increase in the number of events required for the final analysis. This can be achieved by either (a) following the original study population until the additional events are observed thus focusing on the tail of the survival curves or (b) enrolling a potentially large number of additional patients thus focusing on the early differences in survival curves. We use the CDL approach to investigate performance of sample size adjustment designs in time-to-event trials. Through simulations, we demonstrate that when the magnitude of the true treatment effect changes over time, interim information on the shape of the survival curves can be used to enrich the final analysis with events from the time period with the strongest treatment effect. In particular, interested parties have the ability to make the end-of-trial treatment effect larger (on average) based on decisions using interim outcome data. Furthermore, in "clinical null" cases where there is no
Hierarchical Optimization of Material and Structure
DEFF Research Database (Denmark)
Rodrigues, Helder C.; Guedes, Jose M.; Bendsøe, Martin P.
2002-01-01
This paper describes a hierarchical computational procedure for optimizing material distribution as well as the local material properties of mechanical elements. The local properties are designed using a topology design approach, leading to single scale microstructures, which may be restricted...... in various ways, based on design and manufacturing criteria. Implementation issues are also discussed and computational results illustrate the nature of the procedure....
Sample Design and Cohort Selection in the Hispanic Community Health Study/Study of Latinos
LaVange, Lisa M.; Kalsbeek, William; Sorlie, Paul D.; Avilés-Santa, Larissa M.; Kaplan, Robert C.; Barnhart, Janice; Liu, Kiang; Giachello, Aida; Lee, David J.; Ryan, John; Criqui, Michael H.; Elder, John P.
2010-01-01
PURPOSE The Hispanic Community Health Study (HCHS)/Study of Latinos (SOL) is a multi-center, community based cohort study of Hispanic/Latino adults in the United States. A diverse participant sample is required that is both representative of the target population and likely to remain engaged throughout follow-up. The choice of sample design, its rationale, and benefits and challenges of design decisions are described in this paper. METHODS The study design calls for recruitment and follow-up of a cohort of 16,000 Hispanics/Latinos aged 18-74 years, with 62.5% (10,000) over 44 years of age and adequate subgroup sample sizes to support inference by Hispanic/Latino background. Participants are recruited in community areas surrounding four field centers in the Bronx, Chicago, Miami, and San Diego. A two-stage area probability sample of households is selected with stratification and over-sampling incorporated at each stage to provide a broadly diverse sample, offer efficiencies in field operations, and ensure that the target age distribution is obtained. CONCLUSIONS Embedding probability sampling within this traditional, multi-site cohort study design enables competing research objectives to be met. However, the use of probability sampling requires developing solutions to some unique challenges in both sample selection and recruitment, as described here. PMID:20609344
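The two-stage probability sample with stratified oversampling described above leads to base weights equal to the inverse of the product of the stage-wise inclusion probabilities. A toy sketch (stratum labels, counts, and take rates are hypothetical, not the HCHS/SOL design parameters):

```python
import random

random.seed(11)
# Stage 1: block groups in two strata, with the smaller stratum oversampled.
strata = {"high_density": [f"bg_h{i}" for i in range(40)],
          "other": [f"bg_o{i}" for i in range(160)]}
take = {"high_density": 20, "other": 20}   # equal takes -> unequal rates
HOUSEHOLDS_PER_BG = 50
HH_TAKE = 10                               # Stage 2: households per block group

sample = []
for name, bgs in strata.items():
    p1 = take[name] / len(bgs)                 # stage-1 inclusion probability
    for bg in random.sample(bgs, take[name]):
        p2 = HH_TAKE / HOUSEHOLDS_PER_BG       # stage-2 inclusion probability
        base_weight = 1 / (p1 * p2)            # inverse inclusion probability
        sample.extend((bg, base_weight) for _ in range(HH_TAKE))

# The weights sum to the population household count (200 * 50 = 10,000),
# so oversampling changes precision by subgroup without biasing totals.
```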
A robust variable sampling time BLDC motor control design based upon μ-synthesis.
Hung, Chung-Wen; Yen, Jia-Yush
2013-01-01
The variable sampling rate system is encountered in many applications. When the speed information is derived from position marks along the trajectory, one obtains a speed-dependent sampling rate system. Conventional fixed- or multi-sampling-rate system theory may not work in these cases because the system dynamics include uncertainties resulting from the variable sampling rate. This paper derives a convenient expression for the speed-dependent sampling rate system. The varying sampling rate effect is then translated into multiplicative uncertainties on the system. The design then uses the popular μ-synthesis process to achieve a robust performance controller design. The implementation on a BLDC motor demonstrates the effectiveness of the design approach.
A Robust Variable Sampling Time BLDC Motor Control Design Based upon μ-Synthesis
Directory of Open Access Journals (Sweden)
Chung-Wen Hung
2013-01-01
The variable sampling rate system is encountered in many applications. When the speed information is derived from position marks along the trajectory, one obtains a speed-dependent sampling rate system. Conventional fixed- or multi-sampling-rate system theory may not work in these cases because the system dynamics include uncertainties resulting from the variable sampling rate. This paper derives a convenient expression for the speed-dependent sampling rate system. The varying sampling rate effect is then translated into multiplicative uncertainties on the system. The design then uses the popular μ-synthesis process to achieve a robust performance controller design. The implementation on a BLDC motor demonstrates the effectiveness of the design approach.
[Sampling plan, weighting process and design effects of the Brazilian Oral Health Survey].
Silva, Nilza Nunes da; Roncalli, Angelo Giuseppe
2013-12-01
To present aspects of the sampling plan of the Brazilian Oral Health Survey (SBBrasil Project), with theoretical and operational issues that should be taken into account in the primary data analyses. The studied population was composed of five demographic groups from urban areas of Brazil in 2010. Two- and three-stage cluster sampling was used, adopting different primary units. Sample weighting and design effects (deff) were used to evaluate sample consistency. In total, 37,519 individuals were reached. Although the majority of deff estimates were acceptable, some domains showed distortions. The majority (90%) of the samples showed results in concordance with the precision proposed in the sampling plan. The measures to prevent losses and the effects of the cluster sampling process on the minimum sample sizes proved to be effective for the deff, which did not exceed 2, even for results derived from weighting. The samples achieved in the SBBrasil 2010 survey were close to the main proposals for accuracy of the design. Some probabilities proved to be unequal among the primary units of the same domain. Users of this database should bear this in mind, introducing sample weighting in calculations of point estimates, standard errors, confidence intervals and design effects.
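The design effect the abstract bounds at 2 has a standard closed form for cluster sampling. A minimal sketch (the formula is the textbook one; the example numbers are illustrative, not from the SBBrasil data):

```python
# deff = 1 + (m - 1) * rho, where m is the average cluster size and rho the
# intraclass correlation; the effective sample size is n / deff.
def design_effect(m, rho):
    return 1.0 + (m - 1.0) * rho

def effective_sample_size(n, deff):
    return n / deff

# With rho = 0.1, clusters of 11 hit exactly the deff = 2 ceiling cited above.
deff = design_effect(m=11, rho=0.1)
print(deff, effective_sample_size(37519, deff))
```

Keeping deff at or below 2 means a clustered sample of 37,519 is worth at least about half that many independent observations, which is why the abstract treats 2 as the acceptability threshold.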
Tanilon, Jenny; Segers, Mien; Vedder, Paul; Tillema, Harm
2009-01-01
This study illustrates the development and validation of an admission test, labeled as Performance Samples on Academic Tasks in Educational Sciences (PSAT-Ed), designed to assess samples of performance on academic tasks characteristic of those that would eventually be encountered by examinees in an Educational Sciences program. The test was based…
Multi-saline sample distillation apparatus for hydrogen isotope analyses : design and accuracy
Hassan, Afifa Afifi
1981-01-01
A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400 °C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated. (USGS)
Willem W.S. van Hees
1999-01-01
An assessment of the vegetation resources of southwest Alaska was made by using an inventory design developed by the Pacific Northwest Research Station. Satellite imagery (LANDSAT MSS), high-altitude aerial photography, and ground sampling were the major components of the design. Estimates of area for all land cover classes in the southwest region were produced....
Statistical methods for genetic association studies with response-selective sampling designs
Balliu, Brunilda
2015-01-01
This dissertation describes new statistical methods designed to improve the power of genetic association studies. Of particular interest are studies with a response-selective sampling design, i.e. case-control studies of unrelated individuals and case-control studies of family members. The
Adaptive sampling in two-phase designs: a biomarker study for progression in arthritis
McIsaac, Michael A; Cook, Richard J
2015-01-01
Response-dependent two-phase designs are used increasingly often in epidemiological studies to ensure sampling strategies offer good statistical efficiency while working within resource constraints. Optimal response-dependent two-phase designs are difficult to implement, however, as they require specification of unknown parameters. We propose adaptive two-phase designs that exploit information from an internal pilot study to approximate the optimal sampling scheme for an analysis based on mean score estimating equations. The frequency properties of estimators arising from this design are assessed through simulation, and they are shown to be similar to those from optimal designs. The design procedure is then illustrated through application to a motivating biomarker study in an ongoing rheumatology research program. Copyright © 2015 © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:25951124
The Stampede Toward Hofstede's Framework: Avoiding the Sample Design Pit in Cross-Cultural Research
Sivakumar, K.; Cheryl Nakata
2001-01-01
We propose a method to design better multi-country samples for international business studies using Hofstede's framework and aimed at determining the effects of national culture on various business phenomena. We describe typical research scenarios, then develop sets of algorithms that calculate indexes reflecting the power of different samples for hypotheses testing. The indexes were computed from Hofstede's data, then rank ordered. The top multi-country samples are presented in tables for se...
Gruijter, de J.J.; Braak, ter C.J.F.
1992-01-01
Two fundamentally different sources of randomness exist on which design and inference in spatial sampling can be based: (a) variation that would occur on resampling the same spatial population with other sampling configurations generated by the same design, and (b) variation occurring on sampling
Hoekman, D.; Springer, Yuri P; Barker, C.M.; Barrera, R.; Blackmore, M.S.; Bradshaw, W.E.; Foley, D. H.; Ginsberg, Howard; Hayden, M. H.; Holzapfel, C. M.; Juliano, S. A.; Kramer, L. D.; LaDeau, S. L.; Livdahl, T. P.; Moore, C. G.; Nasci, R.S.; Reisen, W.K.; Savage, H. M.
2016-01-01
The National Ecological Observatory Network (NEON) intends to monitor mosquito populations across its broad geographical range of sites because of their prevalence in food webs, sensitivity to abiotic factors and relevance for human health. We describe the design of mosquito population sampling in the context of NEON's long-term, continental-scale monitoring program, emphasizing the sampling design schedule, priorities and collection methods. Freely available NEON data and associated field and laboratory samples will increase our understanding of how mosquito abundance, demography, diversity and phenology are responding to land use and climate change.
Baseline Design Compliance Matrix for the Rotary Mode Core Sampling System
Energy Technology Data Exchange (ETDEWEB)
LECHELT, J.A.
2000-10-17
The purpose of the design compliance matrix (DCM) is to provide a single-source document of all design requirements associated with the fifteen subsystems that make up the rotary mode core sampling (RMCS) system. It is intended to be the baseline requirement document for the RMCS system and to be used in governing all future design and design verification activities associated with it. This document is the DCM for the RMCS system used on Hanford single-shell radioactive waste storage tanks. This includes the Exhauster System, Rotary Mode Core Sample Trucks, Universal Sampling System, Diesel Generator System, Distribution Trailer, X-Ray Cart System, Breathing Air Compressor, Nitrogen Supply Trailer, Casks and Cask Truck, Service Trailer, Core Sampling Riser Equipment, Core Sampling Support Trucks, Foot Clamp, Ramps and Platforms and Purged Camera System. Excluded items are tools such as light plants and light stands. Other items such as the breather inlet filter are covered by a different design baseline. In this case, the inlet breather filter is covered by the Tank Farms Design Compliance Matrix.
Practical iterative learning control with frequency domain design and sampled data implementation
Wang, Danwei; Zhang, Bin
2014-01-01
This book is on iterative learning control (ILC), with a focus on design and implementation. We approach the ILC design based on frequency domain analysis and address the ILC implementation based on sampled data methods. This is the first book on ILC from the frequency domain and sampled data methodologies. The frequency domain design methods offer ILC users insights into the convergence performance, which is of practical benefit. This book presents a comprehensive framework with various methodologies to ensure that the learnable bandwidth in the ILC system is set with a balance between learning performance and learning stability. The sampled data implementation ensures effective execution of ILC in practical dynamic systems. The presented sampled data ILC methods also ensure the balance of performance and stability of the learning process. Furthermore, the presented theories and methodologies are tested with an ILC controlled robotic system. The experimental results show that the machines can work in much h...
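The convergence idea behind ILC can be shown with the classic first-order update u_{k+1} = u_k + L·e_k on a toy static plant; this is a generic textbook sketch, not the book's specific frequency-domain design. With plant gain g, the tracking error contracts by |1 − L·g| per trial, so L = 0.4 and g = 2 give a contraction factor of 0.2.

```python
# Toy ILC: repeat the same task over trials, updating the input from the
# previous trial's error. Plant is a static gain g (an assumption made to
# keep the sketch self-contained); error contracts by |1 - L*g| per trial.
def ilc_trials(g, L, ref, trials):
    u = [0.0] * len(ref)
    for _ in range(trials):
        y = [g * ui for ui in u]                       # plant response
        e = [r - yi for r, yi in zip(ref, y)]          # trial error
        u = [ui + L * ei for ui, ei in zip(u, e)]      # learning update
    return max(abs(r - g * ui) for r, ui in zip(ref, u))

print(ilc_trials(g=2.0, L=0.4, ref=[1.0, -1.0, 0.5], trials=20))
```

After 20 trials the worst-case error has shrunk by roughly 0.2^20, which is the "learning stability" side of the trade-off the book formalizes in the frequency domain.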
Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater.
Zahid, Erum; Hussain, Ijaz; Spöck, Gunter; Faisal, Muhammad; Shabbir, Javid; M AbdEl-Salam, Nasser; Hussain, Tajammal
Sodium is an integral part of water, and its excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled, and optimized sampling designs for selecting sampling locations are calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram model (i.e., exponential, Gaussian, spherical and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides minimum mean universal kriging variance for both adding and deleting locations during sampling design.
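Three of the variogram models named above have simple closed forms. A sketch using the standard parameterization (nugget c0, partial sill c, range a); the parameter values in the example call are illustrative, not fitted to the study's data:

```python
import math

# Standard variogram models gamma(h) for lag distance h >= 0.
def exponential(h, c0, c, a):
    return c0 + c * (1.0 - math.exp(-h / a))

def gaussian(h, c0, c, a):
    return c0 + c * (1.0 - math.exp(-((h / a) ** 2)))

def spherical(h, c0, c, a):
    if h >= a:
        return c0 + c          # flat at the sill beyond the range
    r = h / a
    return c0 + c * (1.5 * r - 0.5 * r ** 3)

print(spherical(2.0, 0.1, 0.9, 1.0))   # beyond the range: nugget + sill
```

Kriging weights, and hence the optimized designs from spatial simulated annealing, depend entirely on which of these curves (and which estimated c0, c, a) is plugged in, which is why the paper compares four estimation methods.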
Optimal two-phase sampling design for comparing accuracies of two binary classification rules.
Xu, Huiping; Hui, Siu L; Grannis, Shaun
2014-02-10
In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
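The abstract's finding, over-sample discordant results and under-sample concordant ones, is the same intuition as classical Neyman allocation, which assigns phase-two effort in proportion to stratum size times stratum variability. The sketch below uses Neyman allocation as a stand-in for the paper's optimal design (the paper's own derivation is variance-formula specific); stratum sizes and standard deviations are made-up numbers.

```python
# Neyman allocation: n_h proportional to N_h * S_h for stratum h.
def neyman_allocation(n, sizes, sds):
    products = [N * s for N, s in zip(sizes, sds)]
    total = sum(products)
    return [round(n * p / total) for p in products]

# Strata: concordant-positive, discordant, concordant-negative results of
# the two classification rules (sizes/SDs are illustrative assumptions).
sizes = [400, 100, 1500]
sds = [0.2, 0.5, 0.1]
print(neyman_allocation(300, sizes, sds))
```

The small, highly informative discordant stratum (100 units) receives 54 of the 300 gold-standard verifications, a 54% sampling fraction versus 15% overall, mirroring the paper's conclusion.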
Information-based sample size re-estimation in group sequential design for longitudinal trials.
Zhou, Jing; Adewale, Adeniyi; Shentu, Yue; Liu, Jiajun; Anderson, Keaven
2014-09-28
Group sequential design has become more popular in clinical trials because it allows for trials to stop early for futility or efficacy to save time and resources. However, this approach is less well-known for longitudinal analysis. We have observed repeated cases of studies with longitudinal data where there is an interest in early stopping for a lack of treatment effect or in adapting sample size to correct for inappropriate variance assumptions. We propose an information-based group sequential design as a method to deal with both of these issues. Updating the sample size at each interim analysis makes it possible to maintain the target power while controlling the type I error rate. We will illustrate our strategy with examples and simulations and compare the results with those obtained using fixed design and group sequential design without sample size re-estimation.
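The mechanics of information-based sample size re-estimation can be sketched for the simplest case, a two-arm comparison of means; this is a generic illustration of the principle, not the authors' longitudinal formulation. Statistical information is the inverse variance of the effect estimate, I = n / (2σ²) with n per arm, so at an interim look one plugs in the observed variance and re-solves for the n that reaches the target information.

```python
import math

# Target information for a two-sided 5% test with 80% power against
# effect size delta (z-quantiles 1.96 and 0.84).
def target_information(delta, alpha_z=1.96, power_z=0.84):
    return ((alpha_z + power_z) / delta) ** 2

# Per-arm n needed to reach that information given outcome variance sigma2.
def n_per_arm(info_target, sigma2):
    return math.ceil(info_target * 2 * sigma2)

info = target_information(delta=0.5)
print(n_per_arm(info, sigma2=1.0))   # planned variance assumption
print(n_per_arm(info, sigma2=1.5))   # inflated variance seen at interim
```

Because the target information is fixed in advance, updating n this way maintains the planned power while the group sequential boundaries control the type I error rate.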
Bionic Design for Mars Sampling Scoop Inspired by Himalayan Marmot Claw
2016-01-01
Cave animals are often adapted to digging and life underground, with claw toes similar in structure and function to a sampling scoop. In this paper, the clawed toes of the Himalayan marmot were selected as a biological prototype for bionic research. Based on geometric parameter optimization of the clawed toes, a bionic sampling scoop for use on Mars was designed. Using a 3D laser scanner, the point cloud data of the second front claw toe was acquired. Parametric equations and contour curves for the claw were then built with cubic polynomial fitting. We obtained 18 characteristic curve equations for the internal and external contours of the claw. A bionic sampling scoop was designed according to the structural parameters of Curiosity's sampling shovel and the contours of the Himalayan marmot's claw. Verifying test results showed that when the penetration angle was 45° and the sampling speed was 0.33 r/min, the bionic sampling scoops' resistance torque was 49.6% less than that of the prototype sampling scoop. When the penetration angle was 60° and the sampling speed was 0.22 r/min, the resistance torque of the bionic sampling scoop was 28.8% lower than that of the prototype sampling scoop. PMID:28127229
Design and performance of a complex-coupled DFB laser with sampled grating
Institute of Scientific and Technical Information of China (English)
王桓; 朱洪亮; 贾凌慧; 陈向飞; 王圩
2009-01-01
A complex-coupled DFB laser with sampled grating has been designed and fabricated. The method uses the +1st-order reflection of the sampled grating for laser single-mode operation. The typical threshold current of the sampled-grating-based DFB laser is 25 mA, and the optical output is about 10 mW at an injected current of 100 mA. The lasing wavelength of the device is 1.5385 μm, which is the +1st-order wavelength of the sampled grating.
Managing Clustered Data Using Hierarchical Linear Modeling
Warne, Russell T.; Li, Yan; McKyer, E. Lisako J.; Condie, Rachel; Diep, Cassandra S.; Murano, Peter S.
2012-01-01
Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence…
Micromechanics of hierarchical materials
DEFF Research Database (Denmark)
Mishnaevsky, Leon, Jr.
2012-01-01
A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them: a 3D FE model of hybrid composites with nanoengineered matrix, a fiber bundle model of UD composites with hierarchically clustered fibers, and a 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them: the investigation of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels, and of the use of nanoreinforcement effects. The main future directions...
Introduction into Hierarchical Matrices
Litvinenko, Alexander
2013-12-05
Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.
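The "cubic to almost linear" claim refers to storage and arithmetic complexity. A back-of-the-envelope sketch of the storage side: a dense n × n matrix needs n² entries, while an H-matrix whose admissible blocks are kept at rank k needs on the order of k·n·log n entries (constants omitted; the exact bound depends on the block cluster tree, so this is illustrative only).

```python
import math

def dense_storage(n):
    # entries of a full n x n matrix
    return n * n

def hmatrix_storage(n, k):
    # order-of-magnitude entry count for rank-k admissible blocks
    return int(k * n * math.log2(n))

n, k = 1 << 20, 8          # ~1e6 unknowns, rank-8 blocks (assumed)
print(dense_storage(n) / hmatrix_storage(n, k))
```

For a million unknowns the sketch predicts a storage reduction of roughly three to four orders of magnitude, which is what makes the covariance and precision matrices mentioned above tractable at all.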
Applied Bayesian Hierarchical Methods
Congdon, Peter D
2010-01-01
Bayesian methods facilitate the analysis of complex models and data structures. Emphasizing data applications, alternative modeling specifications, and computer implementation, this book provides a practical overview of methods for Bayesian analysis of hierarchical models.
Programming with Hierarchical Maps
DEFF Research Database (Denmark)
Ørbæk, Peter
This report desribes the hierarchical maps used as a central data structure in the Corundum framework. We describe its most prominent features, ague for its usefulness and briefly describe some of the software prototypes implemented using the technology....
Catalysis with hierarchical zeolites
DEFF Research Database (Denmark)
Holm, Martin Spangsberg; Taarning, Esben; Egeblad, Kresten
2011-01-01
Hierarchical (or mesoporous) zeolites have attracted significant attention during the first decade of the 21st century, and so far this interest continues to increase. There have already been several reviews giving detailed accounts of the developments, emphasizing different aspects of this research topic. Until now, the main reason for developing hierarchical zeolites has been to achieve heterogeneous catalysts with improved performance, but this particular facet has not yet been reviewed in detail. Thus, the present paper summarizes and categorizes the catalytic studies utilizing hierarchical zeolites that have been reported hitherto. Prototypical examples from some of the different categories of catalytic reactions that have been studied using hierarchical zeolite catalysts are highlighted. This clearly illustrates the different ways that improved performance can be achieved with this family...
Beaty, David W.; Allen, Carlton C.; Bass, Deborah S.; Buxbaum, Karen L.; Campbell, James K.; Lindstrom, David J.; Miller, Sylvia L.; Papanastassiou, Dimitri A.
2009-10-01
It has been widely understood for many years that an essential component of a Mars Sample Return mission is a Sample Receiving Facility (SRF). The purpose of such a facility would be to take delivery of the flight hardware that lands on Earth, open the spacecraft and extract the sample container and samples, and conduct an agreed-upon test protocol, while ensuring strict containment and contamination control of the samples while in the SRF. Any samples that are found to be non-hazardous (or are rendered non-hazardous by sterilization) would then be transferred to long-term curation. Although the general concept of an SRF is relatively straightforward, there has been considerable discussion about implementation planning. The Mars Exploration Program carried out an analysis of the attributes of an SRF to establish its scope, including minimum size and functionality, budgetary requirements (capital cost, operating costs, cost profile), and development schedule. The approach was to arrange for three independent design studies, each led by an architectural design firm, and compare the results. While there were many design elements in common identified by each study team, there were significant differences in the way human operators were to interact with the systems. In aggregate, the design studies provided insight into the attributes of a future SRF and the complex factors to consider for future programmatic planning.
On the repeated measures designs and sample sizes for randomized controlled trials.
Tango, Toshiro
2016-04-01
For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials.
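The sample size advantage the abstract claims for repeated measures can be made concrete with the standard compound-symmetry approximations; this is a generic sketch of the mechanism, not the authors' exact formulas. With m equally correlated post-randomization measures (correlation ρ), the variance of a subject's mean response shrinks by (1 + (m − 1)ρ)/m, and baseline adjustment multiplies it by (1 − ρ²).

```python
import math

# Per-arm n for a two-sided 5% test with 80% power against effect delta,
# outcome variance sigma2, m post measures with correlation rho, and
# optional baseline (ANCOVA-style) adjustment. Illustrative formulas.
def n_per_arm(delta, sigma2, m=1, rho=0.0, baseline_adjust=False):
    var = sigma2 * (1 + (m - 1) * rho) / m
    if baseline_adjust:
        var *= 1 - rho ** 2
    return math.ceil(2 * var * ((1.96 + 0.84) / delta) ** 2)

print(n_per_arm(0.5, 1.0))                                   # simple post design
print(n_per_arm(0.5, 1.0, m=3, rho=0.5, baseline_adjust=True))  # repeated measures
```

With ρ = 0.5, three post measures plus baseline adjustment roughly halve the per-arm sample size relative to a single post measurement, which is the "reduction in sample size compared with the simple pre-post design" claimed above.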
Treatment Protocols as Hierarchical Structures
Ben-Bassat, Moshe; Carlson, Richard W.; Puri, Vinod K.; Weil, Max Harry
1978-01-01
We view a treatment protocol as a hierarchical structure of therapeutic modules. The lowest level of this structure consists of individual therapeutic actions. Combinations of individual actions define higher level modules, which we call routines. Routines are designed to manage limited clinical problems, such as the routine for fluid loading to correct hypovolemia. Combinations of routines and additional actions, together with comments, questions, or precautions organized in a branching logic, in turn, define the treatment protocol for a given disorder. Adoption of this modular approach may facilitate the formulation of treatment protocols, since the physician is not required to prepare complex flowcharts. This hierarchical approach also allows protocols to be updated and modified in a flexible manner. By use of such a standard format, individual components may be fitted together to create protocols for multiple disorders. The technique is suited for computer implementation. We believe that this hierarchical approach may facilitate standarization of patient care as well as aid in clinical teaching. A protocol for acute pancreatitis is used to illustrate this technique.
Design, analysis, and interpretation of field quality-control data for water-sampling projects
Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.
2015-01-01
The process of obtaining and analyzing water samples from the environment includes a number of steps that can affect the reported result. The equipment used to collect and filter samples, the bottles used for specific subsamples, any added preservatives, sample storage in the field, and shipment to the laboratory have the potential to affect how accurately samples represent the environment from which they were collected. During the early 1990s, the U.S. Geological Survey implemented policies to include the routine collection of quality-control samples in order to evaluate these effects and to ensure that water-quality data were adequately representing environmental conditions. Since that time, the U.S. Geological Survey Office of Water Quality has provided training in how to design effective field quality-control sampling programs and how to evaluate the resultant quality-control data. This report documents that training material and provides a reference for methods used to analyze quality-control data.
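Two of the most common field quality-control summaries such a program produces are a bias check from field blanks and a precision estimate from split replicate pairs. A minimal sketch with toy concentrations (the pooled-RSD formula for duplicate pairs is standard; the numbers are invented, and this is not the report's specific procedure):

```python
import math

blanks = [0.02, 0.00, 0.05, 0.01]            # mg/L measured in field blanks
pairs = [(1.00, 1.10), (2.00, 1.90), (0.50, 0.54)]   # split replicate pairs

# Potential contamination: mean concentration found in blanks.
blank_mean = sum(blanks) / len(blanks)

# Pooled relative standard deviation from duplicate pairs:
# each pair contributes variance (a - b)^2 / 2 relative to its mean squared.
rel_vars = [((a - b) ** 2 / 2) / (((a + b) / 2) ** 2) for a, b in pairs]
pooled_rsd = math.sqrt(sum(rel_vars) / len(pairs))

print(round(blank_mean, 3), round(100 * pooled_rsd, 1))
```

Comparing the blank mean against environmental concentrations, and the pooled RSD against the data-quality objective, is the kind of evaluation the training material referenced above formalizes.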
Analytical quality-by-design approach for sample treatment of BSA-containing solutions
Institute of Scientific and Technical Information of China (English)
Lien Taevernier; Evelien Wynendaele; Matthias D’Hondt; Bart De Spiegeleer
2015-01-01
The sample preparation of samples containing bovine serum albumin (BSA), e.g., as used in transdermal Franz diffusion cell (FDC) solutions, was evaluated using an analytical quality-by-design (QbD) approach. Traditional precipitation of BSA by adding an equal volume of organic solvent, often successfully used with conventional HPLC-PDA, was found insufficiently robust when novel fused-core HPLC and/or UPLC-MS methods were used. In this study, three factors (acetonitrile (%), formic acid (%) and boiling time (min)) were included in the experimental design to determine an optimal and more suitable sample treatment of BSA-containing FDC solutions. Using a QbD and Derringer desirability (D) approach, combining BSA loss, dilution factor and variability, we constructed an optimal working space with the edge of failure defined as D < 0.9. The design space is modelled and is confirmed to have an ACN range of 83 ± 3% and an FA content of 1 ± 0.25%.
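The Derringer approach mentioned above combines several response desirabilities d_i, each scaled to [0, 1], into one overall score via their geometric mean; the working space is then the factor region where that score stays high. A minimal sketch (the d_i values are illustrative, not the study's):

```python
import math

# Derringer's overall desirability: geometric mean of per-response d_i.
def overall_desirability(ds):
    return math.prod(ds) ** (1.0 / len(ds))

# e.g. d values for BSA loss, dilution factor, and variability (invented)
print(round(overall_desirability([0.95, 0.92, 0.88]), 3))
```

Because it is a geometric mean, a single poor response drags the overall D down sharply, and any d_i = 0 forces D = 0, which is what makes it a useful edge-of-failure criterion.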
Directory of Open Access Journals (Sweden)
Dongsheng Chen
2016-01-01
Full Text Available Accurate biomass estimations are important for assessing and monitoring forest carbon storage. Bayesian theory has been widely applied to tree biomass models. Recently, a hierarchical Bayesian approach has received increasing attention for improving biomass models. In this study, tree biomass data were obtained by sampling 310 trees from 209 permanent sample plots from larch plantations in six regions across China. Non-hierarchical and hierarchical Bayesian approaches were used to model allometric biomass equations. We found that the total, root, stem wood, stem bark, branch and foliage biomass model relationships were statistically significant (p-values < 0.001) for both the non-hierarchical and hierarchical Bayesian approaches, but the hierarchical Bayesian approach increased the goodness-of-fit statistics over the non-hierarchical Bayesian approach. The R2 values of the hierarchical approach were higher than those of the non-hierarchical approach by 0.008, 0.018, 0.020, 0.003, 0.088 and 0.116 for the total tree, root, stem wood, stem bark, branch and foliage models, respectively. The hierarchical Bayesian approach significantly improved the accuracy of the biomass models (except for stem bark) and can reflect regional differences by using random parameters to improve the regional-scale model accuracy.
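The allometric equations behind such biomass models typically take the power-law form B = a·D^b, fitted on the log scale as ln B = ln a + b·ln D. A minimal least-squares sketch on invented (diameter, biomass) pairs; in the hierarchical version described above, (a, b) would vary by region around common means rather than being fit once.

```python
import math

# Toy data: (diameter in cm, biomass in kg) -- invented for illustration.
data = [(10, 35.0), (15, 95.0), (20, 190.0), (25, 330.0)]
xs = [math.log(d) for d, _ in data]
ys = [math.log(b) for _, b in data]

n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n

# Ordinary least squares on the log-log scale: slope b, intercept ln a.
b_hat = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
a_hat = math.exp(ybar - b_hat * xbar)

print(round(b_hat, 2), round(a_hat, 3))
```

An exponent near 2.4 on these toy numbers is in the range typical of stem-biomass allometries, but the point of the sketch is only the log-log fitting step that both the hierarchical and non-hierarchical models share.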
Tunesi, Luca; Armbruster, Philippe
2004-02-01
The objective of this paper is to demonstrate a suitable hierarchical networking solution to improve capabilities and performances of space systems, with significant recurrent cost savings and more efficient design & manufacturing flows. Classically, a satellite can be split into two functional sub-systems: the platform and the payload complement. The platform is in charge of providing power, attitude & orbit control and up/down-link services, whereas the payload represents the scientific and/or operational instruments/transponders and embodies the objectives of the mission. One major possibility to improve the performance of payloads, by limiting the data return to pertinent information, is to process data on board thanks to a proper implementation of the payload data system. In this way, it is possible to share non-recurring development costs by exploiting a system that can be adopted by the majority of space missions. It is believed that the Modular and Scalable Payload Data System, under development by ESA, provides a suitable solution to fulfil a large range of future mission requirements. The backbone of the system is the standardised high-data-rate SpaceWire network (http://www.ecss.nl/). As a complement, a lower-speed command and control bus connecting peripherals is required. For instance, at instrument level, there is a need for a "local" low-complexity bus, which gives the possibility to command and control sensors and actuators. Moreover, most of the connections at sub-system level are related to discrete signals management or simple telemetry acquisitions, which can easily and efficiently be handled by a local bus. An on-board hierarchical network can therefore be defined by interconnecting high-speed links and local buses. Additionally, it is worth stressing another important aspect of the design process: Agencies and ESA in particular are frequently confronted with a large consortium of geographically spread companies located in different countries, each one
Joint Hierarchical Category Structure Learning and Large-Scale Image Classification
Qu, Yanyun; Lin, Li; Shen, Fumin; Lu, Chang; Wu, Yang; Xie, Yuan; Tao, Dacheng
2017-09-01
We investigate the scalable image classification problem with a large number of categories. Hierarchical visual data structures are helpful for improving the efficiency and performance of large-scale multi-class classification. We propose a novel image classification method based on learning hierarchical inter-class structures. Specifically, we first design a fast algorithm to compute the similarity metric between categories, based on which a visual tree is constructed by hierarchical spectral clustering. Using the learned visual tree, a test sample label is efficiently predicted by searching for the best path over the entire tree. The proposed method is extensively evaluated on the ILSVRC2010 and Caltech 256 benchmark datasets. Experimental results show that our method obtains significantly better category hierarchies than other state-of-the-art visual tree-based methods and, therefore, much more accurate classification.
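As a sketch of the coarse-to-fine idea in this abstract (not the authors' actual algorithm; the data, cluster count, and distance metric below are hypothetical stand-ins), per-category centroids can be clustered into a "visual tree" and a test sample classified by searching the tree top-down:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical setup: 6 categories in a 4-D feature space, arranged in
# 3 well-separated groups of 2 categories each.
rng = np.random.default_rng(0)
centroids = np.vstack([rng.normal(loc=i // 2, scale=0.1, size=4) for i in range(6)])

tree = linkage(centroids, method="average")          # hierarchical clustering of categories
coarse = fcluster(tree, t=3, criterion="maxclust")   # cut the tree into 3 coarse clusters

def predict(x):
    # level 1: nearest coarse cluster (by distance to the cluster's mean centroid)
    best = min(set(coarse),
               key=lambda c: np.linalg.norm(x - centroids[coarse == c].mean(axis=0)))
    # level 2: nearest category inside that cluster
    members = np.flatnonzero(coarse == best)
    return int(members[np.argmin(np.linalg.norm(centroids[members] - x, axis=1))])

# sanity check: each category centroid should map back to its own label
labels = [predict(c) for c in centroids]
```

The two-level search touches only one coarse cluster's categories instead of all of them, which is the source of the efficiency gain with very many categories.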
Development of rotation sample designs for the estimation of crop acreages
Lycthuan-Lee, T. G. (Principal Investigator)
1981-01-01
The idea behind the use of rotation sample designs is that the variation of the crop acreage of a particular sample unit from year to year is usually less than the variation of crop acreage between units within a particular year. The estimation theory is based on an additive mixed analysis of variance model with years as fixed effects, (a sub t), and sample units as a variable factor. The rotation patterns are decided upon according to: (1) the number of sample units in the design each year; (2) the number of units retained in the following years; and (3) the number of years to complete the rotation pattern. Different analytic formulae are derived for the variance of (a sub t), and variance comparisons are made against a complete survey under the various rotation patterns.
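The premise above, that year-to-year variation within a unit is smaller than between-unit variation, is why retaining units pays off. A small simulation makes this concrete; the variance components and sample sizes below are illustrative, not from the report:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_u, sigma_e = 2.0, 1.0   # unit-effect and residual SDs (hypothetical values)
n = 200                        # sample units observed per year

def change_estimates(overlap, reps=2000):
    """Monte Carlo draws of the estimated year-2 minus year-1 mean
    when `overlap` of last year's units are retained (true change = 0)."""
    ests = np.empty(reps)
    for r in range(reps):
        u1 = rng.normal(0, sigma_u, n)                 # year-1 unit effects
        # retain `overlap` units, draw the rest fresh
        u2 = np.concatenate([u1[:overlap], rng.normal(0, sigma_u, n - overlap)])
        y1 = u1 + rng.normal(0, sigma_e, n)
        y2 = u2 + rng.normal(0, sigma_e, n)
        ests[r] = y2.mean() - y1.mean()
    return ests

var_rotating = change_estimates(overlap=n).var()       # all units retained
var_independent = change_estimates(overlap=0).var()    # fresh sample each year
```

With full retention the unit effects cancel in the difference, so the variance of the estimated change drops from roughly 2(sigma_u^2 + sigma_e^2)/n to 2 sigma_e^2/n.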
Distance software: design and analysis of distance sampling surveys for estimating population size.
Thomas, Len; Buckland, Stephen T; Rexstad, Eric A; Laake, Jeff L; Strindberg, Samantha; Hedley, Sharon L; Bishop, Jon Rb; Marques, Tiago A; Burnham, Kenneth P
2010-02-01
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modelling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark-recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modelling analysis engine for spatial and habitat modelling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software that implements these methods is described that makes the methods
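A minimal sketch of the first analysis step described above, fitting a half-normal detection function to perpendicular distances by maximum likelihood. This is a toy stand-in, not the Distance engines; the data are simulated and the truncation width is arbitrary:

```python
import numpy as np
from scipy.stats import halfnorm
from scipy.optimize import minimize_scalar

# Simulate detections: objects uniform out to w = 30 m, detected with
# probability g(x) = exp(-x^2 / (2 sigma^2)) (half-normal detection function).
rng = np.random.default_rng(2)
sigma_true, w = 10.0, 30.0
x = rng.uniform(0, w, 5000)
keep = rng.uniform(size=x.size) < np.exp(-x**2 / (2 * sigma_true**2))
distances = x[keep]                              # observed (detected) distances

def neg_loglik(sigma):
    # log-likelihood of a half-normal truncated at w
    g = halfnorm(scale=sigma)
    return -np.sum(g.logpdf(distances) - np.log(g.cdf(w)))

sigma_hat = minimize_scalar(neg_loglik, bounds=(1.0, 50.0), method="bounded").x

# effective strip half-width: integral of g(x) from 0 to w,
# the quantity that converts counts into a density estimate
esw = halfnorm(scale=sigma_hat).cdf(w) * sigma_hat * np.sqrt(np.pi / 2)
```

The fitted scale should recover the true value, and dividing the number of detections by (2 × line length × esw) would give the density estimate in the conventional engine.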
Guided transect sampling - a new design combining prior information and field surveying
Anna Ringvall; Goran Stahl; Tomas Lamas
2000-01-01
Guided transect sampling is a two-stage sampling design in which prior information is used to guide the field survey in the second stage. In the first stage, broad strips are randomly selected and divided into grid-cells. For each cell a covariate value is estimated from remote sensing data, for example. The covariate is the basis for subsampling of a transect through...
A design-based approximation to the Bayes Information Criterion in finite population sampling
Directory of Open Access Journals (Sweden)
Enrico Fabrizi
2014-05-01
In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.
A GENERALIZED SAMPLING THEOREM OVER GALOIS FIELD DOMAINS FOR EXPERIMENTAL DESIGN
Directory of Open Access Journals (Sweden)
Yoshifumi Ukita
2015-12-01
In this paper, the sampling theorem for bandlimited functions over Galois field domains is generalized to one over product (∏) domains. The generalized theorem is applicable to the experimental design model in which each factor has a different number of levels, and it enables us to estimate the parameters in the model by using Fourier transforms. Moreover, the relationship between the proposed sampling theorem and orthogonal arrays is also provided.
Evaluating Hierarchical Structure in Music Annotations.
McFee, Brian; Nieto, Oriol; Farbood, Morwaread M; Bello, Juan Pablo
2017-01-01
Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for "flat" descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.
Evaluating Hierarchical Structure in Music Annotations
Directory of Open Access Journals (Sweden)
Brian McFee
2017-08-01
Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for “flat” descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.
Architectural Design Space Exploration of an FPGA-based Compressed Sampling Engine
DEFF Research Database (Denmark)
El-Sayed, Mohammad; Koch, Peter; Le Moullec, Yannick
2015-01-01
We present the architectural design space exploration of a compressed sampling engine for use in a wireless heart-rate monitoring system. We show how parallelism affects execution time at the register transfer level. Furthermore, two example solutions (modified semi-parallel and full-parallel) selected from the design space are prototyped on an Altera Cyclone III FPGA platform; in both cases the FPGA resource usage is less than 1% and the maximum frequency is 250 MHz.
Wearable chemical sensing – sensor design and sampling techniques for real-time sweat analysis
2014-01-01
Wearable chemical sensors have the potential to provide new methods of non-invasive physiological measurement. The nature of chemical sensors involves an active surface where a chemical reaction must occur to elicit a response. This adds complexity to a wearable system which creates challenges in the design of a reliable long-term working system. This work presents the design of a real-time sweat sensing platform to analyse sweat loss and composition. Sampling methods have an impact on...
Efficient adaptive designs with mid-course sample size adjustment in clinical trials
Bartroff, Jay
2011-01-01
Adaptive designs have been proposed for clinical trials in which the nuisance parameters or alternative of interest are unknown or likely to be misspecified before the trial. Whereas most previous works on adaptive designs and mid-course sample size re-estimation have focused on two-stage or group sequential designs in the normal case, we consider here a new approach that involves at most three stages and is developed in the general framework of multiparameter exponential families. Not only does this approach maintain the prescribed type I error probability, but it also provides a simple but asymptotically efficient sequential test whose finite-sample performance, measured in terms of the expected sample size and power functions, is shown to be comparable to the optimal sequential design, determined by dynamic programming, in the simplified normal mean case with known variance and prespecified alternative, and superior to the existing two-stage designs and also to adaptive group sequential designs when the al...
Hierarchical Knowledge-Gradient for Sequential Sampling
Mes, Martijn R.K.; Powell, Warren B.; Frazier, Peter I.
2009-01-01
We consider the problem of selecting the best of a finite but very large set of alternatives. Each alternative may be characterized by a multi-dimensional vector and has independent normal rewards. This problem arises in various settings such as (i) ranking and selection, (ii) simulation
Chang, Xiaofeng; Bao, Xiaoying; Wang, Shiping; Zhu, Xiaoxue; Luo, Caiyun; Zhang, Zhenhua; Wilkes, Andreas
2016-05-15
The effects of climate change and human activities on grassland degradation and soil carbon stocks have become a focus of both research and policy. However, lack of research on appropriate sampling design prevents accurate assessment of soil carbon stocks and stock changes at community and regional scales. Here, we conducted an intensive survey with 1196 sampling sites over an area of 190 km² of degraded alpine meadow. Compared to lightly degraded meadow, soil organic carbon (SOC) stocks in moderately, heavily and extremely degraded meadow were reduced by 11.0%, 13.5% and 17.9%, respectively. Our field survey sampling design was overly intensive to estimate SOC status with a tolerable uncertainty of 10%. Power analysis showed that the optimal sampling density to achieve the desired accuracy would be 2, 3, 5 and 7 sites per 10 km² for lightly, moderately, heavily and extremely degraded meadows, respectively. If a subsequent paired sampling design with the optimum sample size were performed, assuming stock change rates predicted by experimental and modeling results, we estimate that about 5-10 years would be necessary to detect expected trends in SOC in the top 20 cm soil layer. Our results highlight the utility of conducting preliminary surveys to estimate the appropriate sampling density and avoid wasting resources due to over-sampling, and to estimate the sampling interval required to detect an expected sequestration rate. Future studies will be needed to evaluate spatial and temporal patterns of SOC variability. Copyright © 2016. Published by Elsevier Ltd.
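The link between a pilot estimate of variability and an appropriate sampling density can be sketched with the standard sample-size formula for estimating a mean within a relative tolerance. This is a generic back-of-envelope version of the idea, not the paper's power analysis, and the CV values are hypothetical:

```python
import math

def required_sites(cv, rel_tol=0.10, z=1.96):
    """Sites needed so the estimated mean is within +/- rel_tol of the
    true mean with ~95% confidence: n = (z * CV / d)^2, rounded up."""
    return math.ceil((z * cv / rel_tol) ** 2)

# hypothetical coefficients of variation for increasingly degraded meadow
sizes = {cv: required_sites(cv) for cv in (0.2, 0.4, 0.6)}
```

Because n grows with the square of the CV, more heterogeneous (more degraded) strata demand disproportionately more sites, which mirrors the abstract's increasing optimal densities across degradation classes.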
Sampling design for long-term regional trends in marine rocky intertidal communities.
Irvine, Gail V; Shelly, Alice
2013-08-01
Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of "vertical" line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.
Sampling design for long-term regional trends in marine rocky intertidal communities
Irvine, Gail V.; Shelley, Alice
2013-01-01
Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.
Data-driven soft sensor design with multiple-rate sampled data
DEFF Research Database (Denmark)
Lin, Bao; Recke, Bodil; Knudsen, Jørgen K.H.
2007-01-01
Multi-rate systems are common in industrial processes where quality measurements have slower sampling rate than other process variables. Since inter-sample information is desirable for effective quality control, different approaches have been reported to estimate the quality between samples...... are implemented to design quality soft sensors for cement kiln processes using data collected from a plant log system. Preliminary results reveal that the WPLS approach is able to provide accurate one-step-ahead prediction. The regularized data lifting technique predicts the product quality of cement kiln systems...
Hierarchical Control for Smart Grids
DEFF Research Database (Denmark)
Trangbæk, K; Bendtsen, Jan Dimon; Stoustrup, Jakob
2011-01-01
This paper deals with hierarchical model predictive control (MPC) of smart grid systems. The design consists of a high level MPC controller, a second level of so-called aggregators, which reduces the computational and communication-related load on the high-level control, and a lower level...... of autonomous consumers. The control system is tasked with balancing electric power production and consumption within the smart grid, and makes active use of the flexibility of a large number of power producing and/or power consuming units. The objective is to accommodate the load variation on the grid, arising...
Springer, Yuri P; Hoekman, David; Johnson, Pieter TJ; Duffy, Paul A; Hufft, Rebecca A.; Barnett, David T.; Allan, Brian F.; Amman, Brian R; Barker, Christopher M; Barrera, Roberto; Beard, Charles B; Beati, Lorenza; Begon, Mike; Blackmore, Mark S; Bradshaw, William E; Brisson, Dustin; Calisher, Charles H.; Childs, James E; Diuk-Wasser, Maria A.; Douglass, Richard J; Eisen, Rebecca J; Foley, Desmond H; Foley, Janet E.; Gaff, Holly D; Gardner, Scott L; Ginsberg, Howard; Glass, Gregory E; Hamer, Sarah A; Hayden, Mary H; Hjelle, Brian; Holzapfel, Christina M; Juliano, Steven A.; Kramer, Laura D.; Kuenzi, Amy J.; LaDeau, Shannon L.; Livdahl, Todd P.; Mills, James N.; Moore, Chester G.; Morand, Serge; Nasci, Roger S.; Ogden, Nicholas H.; Ostfeld, Richard S.; Parmenter, Robert R.; Piesman, Joseph; Reisen, William K.; Savage, Harry M.; Sonenshine, Daniel E.; Swei, Andrea; Yabsley, Michael J.
2016-01-01
Parasites and pathogens are increasingly recognized as significant drivers of ecological and evolutionary change in natural ecosystems. Concurrently, transmission of infectious agents among human, livestock, and wildlife populations represents a growing threat to veterinary and human health. In light of these trends and the scarcity of long-term time series data on infection rates among vectors and reservoirs, the National Ecological Observatory Network (NEON) will collect measurements and samples of a suite of tick-, mosquito-, and rodent-borne parasites through a continental-scale surveillance program. Here, we describe the sampling designs for these efforts, highlighting sampling priorities, field and analytical methods, and the data as well as archived samples to be made available to the research community. Insights generated by this sampling will advance current understanding of and ability to predict changes in infection and disease dynamics in novel, interdisciplinary, and collaborative ways.
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on i-band absolute magnitude (M_i), or, for a small subset of our sample, M_i and color (NUV - i). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_i and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
OSIRIS-REx Touch-and-Go (TAG) Mission Design for Asteroid Sample Collection
May, Alexander; Sutter, Brian; Linn, Timothy; Bierhaus, Beau; Berry, Kevin; Mink, Ron
2014-01-01
The Origins Spectral Interpretation Resource Identification Security Regolith Explorer (OSIRIS-REx) mission is a NASA New Frontiers mission launching in September 2016 to rendezvous with the near-Earth asteroid Bennu in October 2018. After several months of proximity operations to characterize the asteroid, OSIRIS-REx flies a Touch-And-Go (TAG) trajectory to the asteroid's surface to collect at least 60 g of pristine regolith sample for Earth return. This paper provides mission and flight system overviews, with more details on the TAG mission design and key events that occur to safely and successfully collect the sample. An overview of the navigation performed relative to a chosen sample site, along with the maneuvers to reach the desired site is described. Safety monitoring during descent is performed with onboard sensors providing an option to abort, troubleshoot, and try again if necessary. Sample collection occurs using a collection device at the end of an articulating robotic arm during a brief five second contact period, while a constant force spring mechanism in the arm assists to rebound the spacecraft away from the surface. Finally, the sample is measured quantitatively utilizing the law of conservation of angular momentum, along with qualitative data from imagery of the sampling device. Upon sample mass verification, the arm places the sample into the Stardust-heritage Sample Return Capsule (SRC) for return to Earth in September 2023.
Wejnert, Cyprian; Pham, Huong; Krishna, Nevin; Le, Binh; DiNenno, Elizabeth
2012-05-01
Respondent-driven sampling (RDS) has become increasingly popular for sampling hidden populations, including injecting drug users (IDU). However, RDS data are unique and require specialized analysis techniques, many of which remain underdeveloped. RDS sample size estimation requires knowing design effect (DE), which can only be calculated post hoc. Few studies have analyzed RDS DE using real world empirical data. We analyze estimated DE from 43 samples of IDU collected using a standardized protocol. We find the previous recommendation that sample size be at least doubled, consistent with DE = 2, underestimates true DE and recommend researchers use DE = 4 as an alternate estimate when calculating sample size. A formula for calculating sample size for RDS studies among IDU is presented. Researchers faced with limited resources may wish to accept slightly higher standard errors to keep sample size requirements low. Our results highlight dangers of ignoring sampling design in analysis.
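The recommendation above reduces to a simple calculation: compute the simple-random-sample size for estimating a proportion, then inflate it by the design effect (DE = 4 rather than the previously assumed 2). A sketch with illustrative parameter values, not a formula taken verbatim from the paper:

```python
import math

def srs_n(p, moe, z=1.96):
    """Classic SRS size for a proportion p with margin of error moe at ~95% confidence."""
    return math.ceil(z**2 * p * (1 - p) / moe**2)

def rds_n(p, moe, design_effect=4):
    """RDS sample size: SRS size inflated by the design effect."""
    return design_effect * srs_n(p, moe)

n_srs = srs_n(0.5, 0.05)   # worst-case p = 0.5, +/- 5 percentage points
n_rds = rds_n(0.5, 0.05)
```

The jump from DE = 2 to DE = 4 doubles the required sample, which is why the abstract notes that resource-limited studies may instead accept larger standard errors.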
Parallel hierarchical radiosity rendering
Energy Technology Data Exchange (ETDEWEB)
Carter, M.
1993-07-01
In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials at which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as it is expected because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
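Why the blinded one-sample variance estimator overestimates the variance, and hence overpowers, can be seen in a toy two-arm simulation: pooling arms whose means differ adds roughly delta²/4 to the within-arm variance. The values below are illustrative, not from the article:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, delta, m = 1.0, 0.8, 100_000   # within-arm SD, treatment effect, per-arm size
a = rng.normal(0.0, sigma, m)         # control arm
b = rng.normal(delta, sigma, m)       # treatment arm

# blinded analyst pools all outcomes without knowing arm labels
pooled_blinded = np.concatenate([a, b]).var(ddof=1)   # one-sample estimator
# unblinded benchmark: average of the within-arm variances
within = 0.5 * (a.var(ddof=1) + b.var(ddof=1))
```

Here pooled_blinded converges to sigma² + delta²/4 (about 1.16) while the within-arm variance is sigma² (about 1.0); plugging the inflated value into a sample-size formula yields a larger-than-needed trial.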
Time as a dimension of the sample design in national-scale forest inventories
Francis Roesch; Paul Van Deusen
2013-01-01
Historically, the goal of forest inventories has been to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The general view of the associated sample design was with selection probabilities based on land area observed at a discrete point in time. Time was not...
Mars Sample Return: A Low Cost, Direct and Minimum Risk Design
Wercinski, Paul F.; Arnold, James O. (Technical Monitor)
1994-01-01
Current NASA strategy for Mars exploration is seeking simpler, cheaper, and more reliable missions to Mars. This requirement has left virtually all previously proposed Mars Sample Return (MSR) missions as economically untenable. The MSR mission proposed in this paper represents an economical, back-to-basics approach of mission design by leveraging interplanetary trajectory design and limited surface science for shorter mission duration, advanced propulsion and thermal protection systems for mass reduction and simplified mission operations for high reliability. As a result, the proposed concept, called the Fast, Mini, Direct Mars Sample Return (FMD-MSR) mission represents the cheapest and fastest class of missions that could return a 0.5 kg sample from the surface of Mars to Earth with a total mission duration of less than 1.5 Earth years. The constraints require an aggressive mission design that dictates the use of advanced storable liquid propulsion systems and advanced TPS materials to minimize aeroshell mass. The mission does not have the high risk operations of other MSR missions such as orbit rendezvous at Mars, propulsive insertion at Mars, rover operations on the surface, and sample transfer. This paper details the key mission elements for such a mission and presents a feasible and cost effective design.
Sampling design for compliance monitoring of surface water quality: A case study in a Polder area
Brus, D.J.; Knotters, M.
2008-01-01
International agreements such as the EU Water Framework Directive (WFD) ask for efficient sampling methods for monitoring natural resources. In this paper a general methodology for designing efficient, statistically sound monitoring schemes is described. An important decision is the choice between a
An Alternative View of Some FIA Sample Design and Analysis Issues
Paul C. Van Deusen
2005-01-01
Sample design and analysis decisions are the result of compromises and inputs from many sources. The end result would likely change if different individuals or groups were involved in the planning process. Discussed here are some alternatives to the procedures that are currently being used for the annual inventory. The purpose is to indicate that alternatives exist and...
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
Debasish Saha; Armen R. Kemanian; Benjamin M. Rau; Paul R. Adler; Felipe Montes
2017-01-01
Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (...
Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen
2006-01-01
We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...
Designing sampling schemes for effect monitoring of nutrient leaching from agricultural soils.
Brus, D.J.; Noij, I.G.A.M.
2008-01-01
A general methodology for designing sampling schemes for monitoring is illustrated with a case study aimed at estimating the temporal change of the spatial mean P concentration in the topsoil of an agricultural field after implementation of the remediation measure. A before-after control-impact (BAC
Modeling place field activity with hierarchical slow feature analysis
Directory of Open Access Journals (Sweden)
Fabian eSchoenfeld
2015-05-01
In this paper we present six experimental studies from the literature on hippocampal place cells and replicate their main results in a computational framework based on the principle of slowness. Each of the chosen studies first allows rodents to develop stable place field activity and then examines a distinct property of the established spatial encoding, namely adaptation to cue relocation and removal; directional firing activity in the linear track and open field; and the results of morphing and stretching the overall environment. To replicate these studies we employ a hierarchical Slow Feature Analysis (SFA) network. SFA is an unsupervised learning algorithm extracting slowly varying information from a given stream of data, and hierarchical application of SFA allows high-dimensional input such as visual images to be processed efficiently and in a biologically plausible fashion. Training data for the network are produced in ratlab, a free basic graphics engine designed to quickly set up a wide range of 3D environments mimicking real-life experimental studies, simulate a foraging rodent while recording its visual input, and train and sample a hierarchical SFA network.
A Bayesian model for estimating population means using a link-tracing sampling design.
St Clair, Katherine; O'Connell, Daniel
2012-03-01
Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied.
Heidel, R. Eric
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power. PMID:27073717
Directory of Open Access Journals (Sweden)
R. Eric Heidel
2016-01-01
Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
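The five components listed above combine in a standard formula. As an editorial illustration (not from the abstract), a minimal a priori sample-size calculation for a two-group comparison, using the normal approximation, might look like:

```python
import math
from statistics import NormalDist

def sample_size_two_groups(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample comparison
    with standardized effect size d (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value of the test
    z_beta = z.inv_cdf(power)            # quantile for the target power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(sample_size_two_groups(0.5))  # medium effect: 63 per group
```

The exact t-based calculation adds roughly one subject per group to this normal-approximation figure; the isomorphism the author describes is visible directly in the formula, since halving the effect size d quadruples the required n.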
Design-based stereology: Planning, volumetry and sampling are crucial steps for a successful study.
Tschanz, Stefan; Schneider, Jan Philipp; Knudsen, Lars
2014-01-01
Quantitative data obtained by means of design-based stereology can add valuable information to studies performed on a diversity of organs, in particular when correlated to functional/physiological and biochemical data. Design-based stereology is based on a sound statistical background and can be used to generate accurate data which are in line with principles of good laboratory practice. In addition, by adjusting the study design an appropriate precision can be achieved to find relevant differences between groups. For the success of the stereological assessment detailed planning is necessary. In this review we focus on common pitfalls encountered during stereological assessment. An exemplary workflow is included, and based on authentic examples, we illustrate a number of sampling principles which can be implemented to obtain properly sampled tissue blocks for various purposes. Copyright © 2013 Elsevier GmbH. All rights reserved.
Optimized design and analysis of sparse-sampling fMRI experiments
Directory of Open Access Journals (Sweden)
Tyler K Perrachione
2013-04-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional timeseries. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) sparse analyses should utilize a physiologically-informed model that incorporates hemodynamic response convolution to reduce model error; (2) the design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size; (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to improve
Neutrosophic Hierarchical Clustering Algorithms
Directory of Open Access Journals (Sweden)
Rıdvan Şahin
2014-03-01
Interval neutrosophic set (INS) is a generalization of the interval-valued intuitionistic fuzzy set (IVIFS), whose membership and non-membership values of elements are fuzzy ranges, while the single-valued neutrosophic set (SVNS) is regarded as an extension of the intuitionistic fuzzy set (IFS). In this paper, we extend the hierarchical clustering techniques proposed for IFSs and IVIFSs to SVNSs and INSs, respectively. Based on the traditional hierarchical clustering procedure, the single-valued neutrosophic aggregation operator, and the basic distance measures between SVNSs, we define a single-valued neutrosophic hierarchical clustering algorithm for clustering SVNSs. Then we extend the algorithm to classify interval neutrosophic data. Finally, we present some numerical examples in order to show the effectiveness and availability of the developed clustering algorithms.
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica
2013-01-15
Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) at a flow rate of 1.0 mL min⁻¹ with detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)
Technique for fast and efficient hierarchical clustering
Stork, Christopher
2013-10-08
A fast and efficient technique for hierarchical clustering of samples in a dataset includes compressing the dataset to reduce a number of variables within each of the samples of the dataset. A nearest neighbor matrix is generated to identify nearest neighbor pairs between the samples based on differences between the variables of the samples. The samples are arranged into a hierarchy that groups the samples based on the nearest neighbor matrix. The hierarchy is rendered to a display to graphically illustrate similarities or differences between the samples.
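As an editorial sketch of the general idea behind this record (building a hierarchy by repeatedly merging nearest-neighbour pairs), a naive single-linkage agglomeration in pure Python, omitting the patent's compression and rendering steps:

```python
import math

def agglomerate(points, n_clusters):
    """Naive single-linkage agglomerative clustering: repeatedly
    merge the pair of clusters whose closest members are nearest,
    mirroring the nearest-neighbour-matrix idea."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # single linkage: distance between closest members
        return min(math.dist(x, y) for x in a for y in b)

    while len(clusters) > n_clusters:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] += clusters.pop(j)
    return clusters
```

Production implementations (e.g., SciPy's linkage routines) use a precomputed distance matrix and run in near-quadratic time; this quadratic-per-merge version is only meant to make the merging logic explicit.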
Järvelä, J.; Stenvall, A.; Mikkonen, R.
The electrical and stability properties of superconductive strands are often characterized by short sample testing. These tests are often done in a measurement system where the sample is cooled by liquid cryogen or cold gas flow. In both approaches, the sample temperature during a measurement is stabilized by the abundance of available cooling power. This also helps to protect the sample during a thermal runaway, i.e. quench. However, in some characterizations, e.g. minimum quench energy testing, the cooling conditions can have a significant effect on the results. Therefore a more adiabatic solution is preferable, as it enables easier comparison of the results from different measurement stations. One solution to achieving the desired adiabaticity is to use conduction-cooling and vacuum insulation. As there is no cooling fluid to rely on, a scheme for sample protection has to be implemented. In a conduction-cooled setup, one way to protect the sample is to use an active protection system in conjunction with a properly designed sample holder. In this publication, we present an electrical and thermal analysis of a conduction-cooled sample holder suitable for both critical current and minimum quench energy measurements. A coupled electro-thermal finite element method model was constructed to study the sample holder performance during measurement. For our application, the performance is defined by the ohmic losses in the holder components and by the recovery time from a sample quench.
Samples Selection for Artificial Neural Network Training in Preliminary Structural Design
Institute of Scientific and Technical Information of China (English)
TONG Fei; LIU Xila
2005-01-01
An artificial neural network (ANN) is applied in the preliminary structural design of reticulated shells. Major efforts are made to enhance the generalization ability of networks through well-selected training samples. Number-theoretic methods (NTMs) are adopted to generate samples with low discrepancy, i.e., uniformly scattered in the domain, where discrepancy is a quantitative measurement of the uniformity. The discrepancy of the NTM-based sample set is 1/6-1/7 that of samples with equal spacing. In a case study, networks trained by NTM-based samples are compared with those trained by equal-spaced samples in generalizing performance. The results show that both the computational precision and stability of the former ANNs are more satisfactory than those of the latter. It is concluded that the flexibility of ANNs in generalizing can be effectively increased by use of uniformly distributed training samples rather than simply piling data. More reliable uniformity should be obtained, however, through NTMs instead of equal-spaced samples.
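The abstract does not specify which NTM point sets were used; as a hedged illustration of low-discrepancy sampling in general, a Halton sequence (one common number-theoretic construction) fills the unit square far more uniformly than pseudo-random draws:

```python
def halton(i, base):
    """i-th element of the van der Corput sequence in the given base:
    reverse the base-b digits of i across the radix point."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton_points(n):
    """2-D low-discrepancy sample of the unit square (bases 2 and 3)."""
    return [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]

pts = halton_points(16)
```

Each prefix of the sequence is uniformly scattered, so training sets of any size inherit the low discrepancy the abstract attributes to NTM-based samples.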
Conflict-cost based random sampling design for parallel MRI with low rank constraints
Kim, Wan; Zhou, Yihang; Lyu, Jingyuan; Ying, Leslie
2015-05-01
In compressed sensing MRI, it is very important to design sampling pattern for random sampling. For example, SAKE (simultaneous auto-calibrating and k-space estimation) is a parallel MRI reconstruction method using random undersampling. It formulates image reconstruction as a structured low-rank matrix completion problem. Variable density (VD) Poisson discs are typically adopted for 2D random sampling. The basic concept of Poisson disc generation is to guarantee samples are neither too close to nor too far away from each other. However, it is difficult to meet such a condition especially in the high density region. Therefore the sampling becomes inefficient. In this paper, we present an improved random sampling pattern for SAKE reconstruction. The pattern is generated based on a conflict cost with a probability model. The conflict cost measures how many dense samples already assigned are around a target location, while the probability model adopts the generalized Gaussian distribution which includes uniform and Gaussian-like distributions as special cases. Our method preferentially assigns a sample to a k-space location with the least conflict cost on the circle of the highest probability. To evaluate the effectiveness of the proposed random pattern, we compare the performance of SAKEs using both VD Poisson discs and the proposed pattern. Experimental results for brain data show that the proposed pattern yields lower normalized mean square error (NMSE) than VD Poisson discs.
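The conflict-cost generator itself is not reproduced here; as a baseline sketch of variable-density undersampling (the kind of pattern that VD Poisson discs and the proposed method refine), a simple 1-D mask might be drawn as:

```python
import random

def vd_mask(n, accel=4, decay=2.0, seed=0):
    """1-D variable-density random undersampling mask: keep each
    k-space line with a probability that decays with distance from
    the centre, scaled to target roughly n/accel sampled lines."""
    rng = random.Random(seed)
    centre = (n - 1) / 2
    weights = [(1 - abs(k - centre) / (n / 2)) ** decay for k in range(n)]
    scale = (n / accel) / sum(weights)   # normalise to the target count
    return [rng.random() < min(1.0, scale * w) for w in weights]

mask = vd_mask(256)  # 4x acceleration, dense near the k-space centre
```

Purely independent draws like these can place samples arbitrarily close together or leave large gaps, which is exactly the inefficiency that Poisson-disc and conflict-cost designs are intended to avoid.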
AN EVALUATION OF PRIMARY DATA-COLLECTION MODES IN AN ADDRESS-BASED SAMPLING DESIGN.
Amaya, Ashley; Leclere, Felicia; Carris, Kari; Liao, Youlian
2015-01-01
As address-based sampling becomes increasingly popular for multimode surveys, researchers continue to refine data-collection best practices. While much work has been conducted to improve efficiency within a given mode, additional research is needed on how multimode designs can be optimized across modes. Previous research has not evaluated the consequences of mode sequencing on multimode mail and phone surveys, nor has significant research been conducted to evaluate mode sequencing on a variety of indicators beyond response rates. We conducted an experiment within the Racial and Ethnic Approaches to Community Health across the U.S. Risk Factor Survey (REACH U.S.) to evaluate two multimode case-flow designs: (1) phone followed by mail (phone-first) and (2) mail followed by phone (mail-first). We compared response rates, cost, timeliness, and data quality to identify differences across case-flow design. Because surveys often differ on the rarity of the target population, we also examined whether changes in the eligibility rate altered the choice of optimal case flow. Our results suggested that, on most metrics, the mail-first design was superior to the phone-first design. Compared with phone-first, mail-first achieved a higher yield rate at a lower cost with equivalent data quality. While the phone-first design initially achieved more interviews compared to the mail-first design, over time the mail-first design surpassed it and obtained the greatest number of interviews.
Institute of Scientific and Technical Information of China (English)
刘冉冉; 潘天红; 李正明
2015-01-01
A hierarchical multi-innovation stochastic gradient identification algorithm is proposed for Hammerstein-Wiener (H-W) nonlinear systems with non-uniform sampling. The corresponding state-space models are derived using the lifting technique. Considering causality constraints, the H-W system is first decomposed into two subsystems, and the model parameters are identified using a multi-innovation stochastic gradient algorithm with forgetting factors. To improve the convergence rate and disturbance rejection, a variable forgetting factor, determined online through a proposed correction function, is introduced. Simulation examples demonstrate that the proposed algorithm converges quickly and is robust to noise.
Romer, Jeremy D.; Gitelman, Alix I.; Clements, Shaun; Schreck, Carl B.
2015-01-01
A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt migration. These data
Directory of Open Access Journals (Sweden)
Jeremy D Romer
A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt
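As an editorial illustration of the study's central point, a toy simulation (with an assumed sinusoidal survival pattern and perfect detection; the numbers are not from the paper) shows spread-out tagging tracking seasonal-mean survival better than a mid-season pulse:

```python
import math
import random

rng = random.Random(7)
# illustrative daily survival over a 60-day season: rises, then falls
season = [0.5 + 0.4 * math.sin(math.pi * d / 59) for d in range(60)]
true_mean = sum(season) / len(season)

def estimate(tag_days, fish_per_day=20):
    """Tag fish on the given days; the estimate is the overall
    fraction surviving (perfect detection assumed)."""
    survived = total = 0
    for d in tag_days:
        for _ in range(fish_per_day):
            total += 1
            survived += rng.random() < season[d]
    return survived / total

spread = estimate(range(0, 60, 6))           # effort spread over the season
pulsed = estimate([28, 29, 30, 31, 32] * 2)  # effort clustered at the peak
```

With the same total number of tags, the pulsed design only samples the high-survival part of the season and so overestimates mean survival, mirroring the paper's finding that distribution of tagging effort matters more than tag count.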
Competitive Comparison of Optimal Designs of Experiments for Sampling-based Sensitivity Analysis
Janouchova, Eliska
2012-01-01
Nowadays, the numerical models of real-world structures are more precise, more complex and, of course, more time-consuming. Despite this growth in computational effort, exploring model behaviour remains a complex task. Sensitivity analysis is a basic tool for investigating the sensitivity of a model to its inputs. One widely used strategy to assess sensitivity is based on a finite set of simulations for given sets of input parameters, i.e., points in the design space. An estimate of the sensitivity can then be obtained by computing correlations between the input parameters and the chosen response of the model. The accuracy of the sensitivity prediction depends on the choice of design points, called the design of experiments. The aim of the presented paper is to review and compare available criteria determining the quality of designs of experiments suitable for sampling-based sensitivity analysis.
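The correlation-based strategy described above can be sketched in a few lines; the model and its coefficients below are illustrative assumptions, not from the paper:

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(1)
a = [rng.random() for _ in range(500)]   # influential input
b = [rng.random() for _ in range(500)]   # nearly inert input
# toy model response: dominated by input a, plus a little noise
y = [3 * ai + 0.1 * bi + rng.gauss(0, 0.05) for ai, bi in zip(a, b)]
sens = {"a": pearson(a, y), "b": pearson(b, y)}
```

Here crude Monte Carlo stands in for the designs of experiments the paper compares; a better design (e.g., Latin hypercube or a low-discrepancy set) would yield the same ranking of inputs with fewer simulations.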
Design of multishell sampling schemes with uniform coverage in diffusion MRI.
Caruyer, Emmanuel; Lenglet, Christophe; Sapiro, Guillermo; Deriche, Rachid
2013-06-01
In diffusion MRI, a technique known as diffusion spectrum imaging reconstructs the propagator with a discrete Fourier transform, from a Cartesian sampling of the diffusion signal. Alternatively, it is possible to directly reconstruct the orientation distribution function in q-ball imaging, providing so-called high angular resolution diffusion imaging. In between these two techniques, acquisitions on several spheres in q-space offer an interesting trade-off between the angular resolution and the radial information gathered in diffusion MRI. Careful design is central to the success of multishell acquisition and reconstruction techniques. The design of multishell acquisition schemes, however, remains an open and active field of research. In this work, we provide a general method to design multishell acquisitions with uniform angular coverage. This method is based on a generalization of electrostatic repulsion to multishell. We evaluate the impact of our method using simulations, on the angular resolution in one and two bundles of fiber configurations. Compared to more commonly used radial sampling, we show that our method improves the angular resolution, as well as fiber crossing discrimination. We propose a novel method to design sampling schemes with optimal angular coverage and show the positive impact on angular resolution in diffusion MRI. Copyright © 2012 Wiley Periodicals, Inc.
Energy Technology Data Exchange (ETDEWEB)
Backlund, Peter B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shahan, David W. [HRL Labs., LLC, Malibu, CA (United States); Seepersad, Carolyn Conner [Univ. of Texas, Austin, TX (United States)
2014-04-22
A classifier-guided sampling (CGS) method is introduced for solving engineering design optimization problems with discrete and/or continuous variables and continuous and/or discontinuous responses. The method merges concepts from metamodel-guided sampling and population-based optimization algorithms. The CGS method uses a Bayesian network classifier for predicting the performance of new designs based on a set of known observations or training points. Unlike most metamodeling techniques, however, the classifier assigns a categorical class label to a new design, rather than predicting the resulting response in continuous space, and thereby accommodates nondifferentiable and discontinuous functions of discrete or categorical variables. The CGS method uses these classifiers to guide a population-based sampling process towards combinations of discrete and/or continuous variable values with a high probability of yielding preferred performance. Accordingly, the CGS method is appropriate for discrete/discontinuous design problems that are ill-suited for conventional metamodeling techniques and too computationally expensive to be solved by population-based algorithms alone. In addition, the rates of convergence and computational properties of the CGS method are investigated when applied to a set of discrete variable optimization problems. Results show that the CGS method significantly improves the rate of convergence towards known global optima, on average, when compared to genetic algorithms.
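The CGS method uses a Bayesian network classifier; as a hedged toy stand-in (a 1-nearest-neighbour rule instead of a Bayesian network, on a one-dimensional continuous design space), the guide-then-evaluate loop might look like:

```python
import random

def cgs_minimize(f, bounds, n_init=20, n_iter=10, batch=50, seed=0):
    """Toy classifier-guided sampling: label evaluated designs
    good/bad by a median split, classify fresh candidates with a
    1-nearest-neighbour rule, and spend expensive evaluations only
    on candidates predicted to be good."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [rng.uniform(lo, hi) for _ in range(n_init)]
    Y = [f(x) for x in X]
    for _ in range(n_iter):
        thresh = sorted(Y)[len(Y) // 2]       # median objective value
        good = [y <= thresh for y in Y]       # class labels for training
        cands = [rng.uniform(lo, hi) for _ in range(batch)]
        # keep candidates whose nearest evaluated design is "good"
        keep = [c for c in cands
                if good[min(range(len(X)), key=lambda i: abs(X[i] - c))]]
        for c in keep[:5]:                    # evaluate a small batch
            X.append(c)
            Y.append(f(c))
    return min(zip(Y, X))

best_y, best_x = cgs_minimize(lambda x: (x - 2.0) ** 2, (0.0, 10.0))
```

The categorical good/bad labelling is what lets this family of methods handle discontinuous or discrete-variable objectives where a continuous metamodel would struggle, which is the point the abstract makes.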
Choi, Jinhyeok; Kim, Hyeonjin
2016-12-01
To improve the efficacy of undersampled MRI, a method of designing adaptive sampling functions is proposed that is simple to implement on an MR scanner and yet effectively improves the performance of the sampling functions. An approximation of the energy distribution of an image (E-map) is estimated from highly undersampled k-space data acquired in a prescan and efficiently recycled in the main scan. An adaptive probability density function (PDF) is generated by combining the E-map with a modeled PDF. A set of candidate sampling functions are then prepared from the adaptive PDF, among which the one with maximum energy is selected as the final sampling function. To validate its computational efficiency, the proposed method was implemented on an MR scanner, and its robust performance in Fourier-transform (FT) MRI and compressed sensing (CS) MRI was tested by simulations and in a cherry tomato. The proposed method consistently outperforms the conventional modeled PDF approach for undersampling ratios of 0.2 or higher in both FT-MRI and CS-MRI. To fully benefit from undersampled MRI, it is preferable that the design of adaptive sampling functions be performed online immediately before the main scan. In this way, the proposed method may further improve the efficacy of the undersampled MRI. Copyright © 2016 Elsevier Inc. All rights reserved.
Profit based phase II sample size determination when adaptation by design is adopted
Martini, D.
2014-01-01
Background. Adaptation by design consists in conservatively estimating the phase III sample size on the basis of phase II data, and can be applied in almost all therapeutic areas; it is based on the assumption that the effect size of the drug is the same in phase II and phase III trials, which is a very common scenario assumed in product development. Adaptation by design reduces the probability of underpowered experiments and can improve the overall success probability of phase II and III tria...
Hierarchical social networks and information flow
López, Luis; F. F. Mendes, Jose; Sanjuán, Miguel A. F.
2002-12-01
Using a simple model for the information flow on social networks, we show that the traditional hierarchical topologies frequently used by companies and organizations are poorly designed in terms of efficiency. Moreover, we prove that this type of structure is the result of the individual aim of monopolizing as much information as possible within the network. As information is an appropriate measurement of centrality, we conclude that this kind of topology is attractive to leaders because the global influence each actor has within the network is completely determined by the hierarchical level occupied.
Analyzing security protocols in hierarchical networks
DEFF Research Database (Denmark)
Zhang, Ye; Nielson, Hanne Riis
2006-01-01
Validating security protocols is a well-known hard problem even in a simple setting of a single global network. But a real network often consists of, besides the public-accessed part, several sub-networks and thereby forms a hierarchical structure. In this paper we first present a process calculus...... capturing the characteristics of hierarchical networks and describe the behavior of protocols on such networks. We then develop a static analysis to automate the validation. Finally we demonstrate how the technique can benefit the protocol development and the design of network systems by presenting a series...
Directory of Open Access Journals (Sweden)
Giles M. Foody
2017-08-01
Validation data are often used to evaluate the performance of a trained neural network and in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure such as overall classification accuracy. The latter is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes do vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets, respectively; both p < 0.05). The accuracy of the classifications that used a stratified sample in validation was smaller, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
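As an editorial illustration of this record's point, a toy simulation (class proportions and per-class accuracies are assumed, not from the paper) shows a balanced stratified validation set misestimating the accuracy achieved on an imbalanced population:

```python
import random

rng = random.Random(42)

# population: 90% class A (classified correctly 95% of the time),
# 10% class B (classified correctly 60% of the time) -- assumed rates
def correct(label):
    return rng.random() < (0.95 if label == "A" else 0.60)

population = ["A"] * 9000 + ["B"] * 1000
true_acc = 0.9 * 0.95 + 0.1 * 0.60             # population accuracy: 0.915

random_val = rng.sample(population, 1000)       # random validation set
acc_random = sum(correct(lab) for lab in random_val) / 1000

stratified_val = ["A"] * 500 + ["B"] * 500      # balanced validation set
acc_strat = sum(correct(lab) for lab in stratified_val) / 1000
```

The random sample weights each class by its abundance and lands near the population accuracy, while the balanced sample over-weights the rarer, harder class and reports a lower figure, matching the direction of the effect described in the abstract.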
Assessment of sediment contamination and sampling design in Savona Harbour, Italy.
Paladino, Ombretta; Massabò, Marco; Fissore, Francesca; Moranda, Arianna
2015-02-15
A method for assessing environmental contamination in harbour sediments and designing the forthcoming monitoring activities in enlarged coastal ecosystems is proposed herein. The method is based on coupling principal component analysis of previous sampling campaigns with a discrete optimisation of a value-for-money function. The objective function represents the utility derived from every sum of money spent on sampling and chemical analysis. The method was then used to assess actual contamination and was found to be well suited to reducing the number of chemicals to be searched for during extended monitoring activities and to identifying possible sources of contamination. Data collected in Savona Harbour (Porto Vado), Italy, where construction of a new terminal is planned, were used to illustrate the procedure. Twenty-three chemicals were searched for in a total of 213 samples from 68 sampling points during three monitoring campaigns. These data were used to test the procedure. Subsequently, 28 chemicals were searched for in 14 samples from 10 sampling points, and the collected data were used to evaluate the experimental error and to validate the proposed procedure. Copyright © 2014 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Noah D Charney
Full Text Available Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds.
Charney, Noah D; Kubel, Jacob E; Eiseman, Charles S
2015-01-01
Improving detection rates for elusive species with clumped distributions is often accomplished through adaptive sampling designs. This approach can be extended to include species with temporally variable detection probabilities. By concentrating survey effort in years when the focal species are most abundant or visible, overall detection rates can be improved. This requires either long-term monitoring at a few locations where the species are known to occur or models capable of predicting population trends using climatic and demographic data. For marbled salamanders (Ambystoma opacum) in Massachusetts, we demonstrate that annual variation in detection probability of larvae is regionally correlated. In our data, the difference in survey success between years was far more important than the difference among the three survey methods we employed: diurnal surveys, nocturnal surveys, and dipnet surveys. Based on these data, we simulate future surveys to locate unknown populations under a temporally adaptive sampling framework. In the simulations, when pond dynamics are correlated over the focal region, the temporally adaptive design improved mean survey success by as much as 26% over a non-adaptive sampling design. Employing a temporally adaptive strategy costs very little, is simple, and has the potential to substantially improve the efficient use of scarce conservation funds.
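The gain from concentrating effort in good years can be sketched with a toy Monte Carlo model. The detection probabilities, survey budget, and the assumption that "good" years are known from sentinel ponds are all hypothetical; the sketch only illustrates the direction of the effect, not the 26% figure from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed detection probabilities (hypothetical): regionally "good" years
# make larvae far easier to detect at every pond.
P_DETECT = {"good": 0.8, "bad": 0.2}
N_YEARS, N_SIM, BUDGET = 10, 2000, 20   # surveys to allocate over the years

def simulate(adaptive):
    successes = []
    for _ in range(N_SIM):
        years = rng.choice(["good", "bad"], size=N_YEARS)
        if adaptive:
            # Concentrate effort in good years (assumed known from sentinel
            # ponds); fall back to even effort if no good year occurs.
            good = [y for y in range(N_YEARS) if years[y] == "good"]
            pool = good if good else list(range(N_YEARS))
        else:
            pool = list(range(N_YEARS))
        alloc = rng.choice(pool, size=BUDGET)          # one pond visit each
        p = np.array([P_DETECT[years[y]] for y in alloc])
        successes.append((rng.random(BUDGET) < p).mean())
    return float(np.mean(successes))

rate_adaptive = simulate(adaptive=True)
rate_fixed = simulate(adaptive=False)
print(f"adaptive={rate_adaptive:.2f} fixed={rate_fixed:.2f}")
```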
Egger, C; Maurer, M
2015-04-15
Urban drainage design relying on observed precipitation series neglects the uncertainties associated with current and indeed future climate variability. Urban drainage design is further affected by the large stochastic variability of precipitation extremes and sampling errors arising from the short observation periods of extreme precipitation. Stochastic downscaling addresses anthropogenic climate impact by allowing relevant precipitation characteristics to be derived from local observations and an ensemble of climate models. This multi-climate model approach seeks to reflect the uncertainties in the data due to structural errors of the climate models. An ensemble of outcomes from stochastic downscaling allows for addressing the sampling uncertainty. These uncertainties are clearly reflected in the precipitation-runoff predictions of three urban drainage systems. They were mostly due to the sampling uncertainty. The contribution of climate model uncertainty was found to be of minor importance. Under the applied greenhouse gas emission scenario (A1B) and within the period 2036-2065, the potential for urban flooding in our Swiss case study is slightly reduced on average compared to the reference period 1981-2010. Scenario planning was applied to consider urban development associated with future socio-economic factors affecting urban drainage. The impact of scenario uncertainty was to a large extent found to be case-specific, thus emphasizing the need for scenario planning in every individual case. The results represent a valuable basis for discussions of new drainage design standards aiming specifically to include considerations of uncertainty. Copyright © 2015 Elsevier Ltd. All rights reserved.
Improving broadcast channel rate using hierarchical modulation
Meric, Hugo; Arnal, Fabrice; Lesthievent, Guy; Boucheret, Marie-Laure
2011-01-01
We investigate the design of a broadcast system where the aim is to maximise the throughput. This task is usually challenging due to channel variability. Forty years ago, Cover introduced and compared two schemes: time sharing and superposition coding. The second scheme was proved to be optimal for some channels. Modern satellite communications systems such as DVB-SH and DVB-S2 mainly rely on a time-sharing strategy to optimise throughput. They consider hierarchical modulation, a practical implementation of superposition coding, but only for unequal error protection or backward compatibility purposes. In this article we propose to combine time sharing and hierarchical modulation, and we show how this scheme can improve performance in terms of available rate. We present the gain on a simple channel modelling the broadcasting area of a satellite. Our work is applied to the DVB-SH standard, which considers hierarchical modulation as an optional feature.
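Cover's comparison can be made concrete on a two-receiver Gaussian broadcast channel. The SNRs below are illustrative values, not taken from the paper; the sketch shows that, at the same strong-user rate, superposition coding leaves the weak user a higher rate than time sharing does.

```python
import math

# Gaussian broadcast channel with one strong and one weak receiver
# (illustrative power and noise values, not the paper's figures).
P, N1, N2 = 10.0, 1.0, 10.0          # total power, noise at receivers 1 and 2

def superposition(alpha):
    """Cover's superposition coding: fraction alpha of P to the strong user."""
    r1 = 0.5 * math.log2(1 + alpha * P / N1)
    # The weak user decodes its message treating the strong user's as noise.
    r2 = 0.5 * math.log2(1 + (1 - alpha) * P / (alpha * P + N2))
    return r1, r2

def time_sharing_r2(r1):
    """Best weak-user rate when time sharing achieves strong-user rate r1."""
    c1 = 0.5 * math.log2(1 + P / N1)   # single-user capacities
    c2 = 0.5 * math.log2(1 + P / N2)
    t = r1 / c1                        # fraction of time serving user 1
    return (1 - t) * c2

r1_sc, r2_sc = superposition(alpha=0.5)
r2_ts = time_sharing_r2(r1_sc)
print(f"superposition R2={r2_sc:.3f}  time-sharing R2={r2_ts:.3f}")
```

Hierarchical modulation is the practical finite-constellation stand-in for the superposition scheme computed here.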
Towards a sustainable manufacture of hierarchical zeolites.
Verboekend, Danny; Pérez-Ramírez, Javier
2014-03-01
Hierarchical zeolites have been established as a superior type of aluminosilicate catalysts compared to their conventional (purely microporous) counterparts. An impressive array of bottom-up and top-down approaches has been developed during the last decade to design and subsequently exploit these exciting materials catalytically. However, the sustainability of the developed synthetic methods has rarely been addressed. This paper highlights important criteria to ensure the ecological and economic viability of the manufacture of hierarchical zeolites. Moreover, by using base leaching as a promising case study, we verify a variety of approaches to increase reactor productivity, recycle waste streams, prevent the combustion of organic compounds, and minimize separation efforts. By reducing their synthetic footprint, hierarchical zeolites are positioned as an integral part of sustainable chemistry. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Static and dynamic friction of hierarchical surfaces
Costagliola, Gianluca; Bosia, Federico; Pugno, Nicola M.
2016-12-01
Hierarchical structures are very common in nature, but only recently have they been systematically studied in materials science, in order to understand the specific effects they can have on the mechanical properties of various systems. Structural hierarchy provides a way to tune and optimize macroscopic mechanical properties starting from simple base constituents and new materials are nowadays designed exploiting this possibility. This can be true also in the field of tribology. In this paper we study the effect of hierarchical patterned surfaces on the static and dynamic friction coefficients of an elastic material. Our results are obtained by means of numerical simulations using a one-dimensional spring-block model, which has previously been used to investigate various aspects of friction. Despite the simplicity of the model, we highlight some possible mechanisms that explain how hierarchical structures can significantly modify the friction coefficients of a material, providing a means to achieve tunability.
Zhou, Liang
2013-02-01
Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user-selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices, and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high-dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality-reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine them using segmentation brushes. Our system is interactive, and its different views are tightly linked. Use cases show that the system has been successfully applied to simulation and complicated seismic data sets. © 2013 IEEE.
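The KDE step can be sketched on synthetic data: a handful of user-probed samples from a feature cluster define a density that acts as the high-dimensional transfer function, and voxels whose attribute values score above a threshold are assigned to the feature. The two-attribute data, probe count, and threshold rule are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Synthetic two-attribute volume: a "feature" cluster embedded in background
# (stand-ins for the real multivariate voxel values).
feature = rng.normal(loc=[3.0, -2.0], scale=0.3, size=(200, 2))
background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(2000, 2))
volume_values = np.vstack([feature, background])

# The user probes a handful of voxels inside the feature on a slice:
probed = feature[rng.choice(len(feature), size=15, replace=False)]

# KDE over the probed samples acts as the high-dimensional transfer function.
kde = gaussian_kde(probed.T)
density = kde(volume_values.T)

# Voxels above a density threshold are assigned to the feature.
threshold = kde(probed.T).min() * 0.5     # hypothetical, user-tunable rule
mask = density > threshold
recall = mask[:200].mean()                # fraction of true feature voxels kept
false_pos = mask[200:].mean()
print(f"recall={recall:.2f} false positives={false_pos:.3f}")
```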
Microbiology of subsurface environments: Deep Probe sampling design and planning workshop
Energy Technology Data Exchange (ETDEWEB)
Wobber, F.J.; Zachara, J.M.
1987-01-01
This report describes the results of an invited workshop in support of DOE's microbiology research program (Deep Probe). The report summarizes the major recommendations of the workshop and contains DOE's perspectives on tasks related to the following areas: drilling and coring procedures; tracer technologies and issues of contamination; supplementary geohydrological, sedimentological, and geological analysis relevant to the study of deep microbial habitats; statistical analysis of existing samples; experimental design for future coring; improvements in microbiological sampling, core handling, and acquisition of pore-water chemical samples; interlaboratory standardization of microbiological protocols; DOE's strategy for extending the research to other hydrogeological regimes; and accelerated information exchange with the scientific community.
Energy Technology Data Exchange (ETDEWEB)
ROMERO,VICENTE J.; SWILER,LAURA PAINTON; GIUNTA,ANTHONY A.
2000-04-25
This paper examines the modeling accuracy of finite element interpolation, kriging, and polynomial regression used in conjunction with the Progressive Lattice Sampling (PLS) incremental design-of-experiments approach. PLS is a paradigm for sampling a deterministic hypercubic parameter space by placing and incrementally adding samples in a manner intended to maximally reduce lack of knowledge in the parameter space. When combined with suitable interpolation methods, PLS is a formulation for progressive construction of response surface approximations (RSA) in which the RSA are efficiently upgradable, and upon upgrading, offer convergence information essential in estimating error introduced by the use of RSA in the problem. The three interpolation methods tried here are examined for performance in replicating an analytic test function as measured by several different indicators. The process described here provides a framework for future studies using other interpolation schemes, test functions, and measures of approximation quality.
Design and Performance of Sampled Data Loops for Subcarrier and Carrier Tracking
Aguirre, S.; Hurd, W. J.
1984-01-01
Design parameters and resulting performance are presented for the sampled-data analogues of continuous-time phase-locked loops of second and third order containing perfect integrators. Expressions for noise-equivalent bandwidth and steady-state errors are given. Stability and gain margin are investigated using z-plane root loci. Finally, an application is presented for Voyager subcarrier and carrier tracking under the dynamics of the encounters with Uranus and Neptune. For carrier tracking, loop bandwidths narrow enough for satisfactory loop signal-to-noise ratios can be achieved using third-order loops without rate aiding, whereas second-order loops would require aiding. For subcarrier tracking, third-order loops can be used when the sampling rate is limited to approximately once per second, as in the Baseband Assembly, whereas second-order loops wide enough to track the dynamics have stability problems at that sampling rate.
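The benefit of a perfect integrator in the loop filter can be shown in a few lines: a discrete-time second-order (type-2) loop drives the phase error to zero for a constant frequency offset, with the integrator state converging to that offset. The gains below are ad hoc values chosen for stability in this sketch, not the Voyager design parameters.

```python
import numpy as np

# Discrete-time second-order PLL with a perfect integrator in the loop
# filter; gains K1, K2 are illustrative, not the paper's design values.
K1, K2 = 0.3, 0.02
N, OMEGA = 400, 0.05          # samples, input frequency offset (rad/sample)

theta_hat = 0.0               # NCO phase estimate
integ = 0.0                   # loop-filter integrator state
errors = []
for k in range(N):
    theta_in = OMEGA * k      # phase ramp: a constant frequency offset
    e = theta_in - theta_hat  # phase detector (linearized, small error)
    integ += K2 * e           # perfect integrator accumulates the error
    theta_hat += K1 * e + integ
    errors.append(e)

steady_state_error = abs(np.mean(errors[-50:]))
print(f"steady-state phase error ~ {steady_state_error:.4f} rad")
```

A first-order loop (K2 = 0) would instead settle at a constant non-zero phase error under the same frequency offset, which is why the integrator matters for tracking encounter dynamics.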
The role of the upper sample size limit in two-stage bioequivalence designs.
Karalis, Vangelis
2013-11-01
Two-stage designs (TSDs) are currently recommended by the regulatory authorities for bioequivalence (BE) assessment. The TSDs presented until now rely on an assumed geometric mean ratio (GMR) value of the BE metric in stage I in order to avoid inflation of type I error. In contrast, this work proposes a more realistic TSD where sample size re-estimation relies not only on the variability of stage I, but also on the observed GMR. In these cases, an upper sample size limit (UL) is introduced in order to prevent inflation of type I error. The aim of this study is to unveil the impact of the UL on two TSD bioequivalence approaches which are based entirely on the interim results. Monte Carlo simulations were used to investigate several different scenarios of UL levels, within-subject variability, starting numbers of subjects, and GMR. The use of a UL leads to no inflation of type I error. As UL values increase, the probability of declaring BE becomes higher. The starting sample size and the variability of the study affect type I error. Increased UL levels result in higher total sample sizes of the TSD, which are more pronounced for highly variable drugs.
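The role of the UL cap can be sketched with the common normal-approximation sample-size formula for a 2x2 crossover BE study, re-estimating the total sample size from the observed stage-I CV and GMR and truncating at the limit. The constants (alpha, target power, BE limits 0.80-1.25) are the usual defaults, and the formula is a textbook approximation rather than the exact method simulated in the paper.

```python
import math
from scipy.stats import norm

# Sketch of stage-II sample-size re-estimation in a two-stage BE design;
# the UL value and scenario CVs are illustrative.
ALPHA, POWER, UL = 0.05, 0.80, 150     # UL = upper sample size limit

def reestimated_n(cv, gmr):
    """Total subjects needed given observed stage-I CV and GMR (2x2 crossover),
    via the normal-approximation formula, capped at UL."""
    s2 = math.log(cv**2 + 1)                       # within-subject log variance
    z = norm.ppf(1 - ALPHA) + norm.ppf(POWER)
    delta = math.log(1.25) - abs(math.log(gmr))    # distance to the BE limit
    n = 2 * z**2 * s2 / delta**2
    return min(math.ceil(n), UL)                   # enforce the upper limit

n_low = reestimated_n(cv=0.20, gmr=0.95)
n_high = reestimated_n(cv=0.70, gmr=0.95)          # highly variable drug
print(f"CV 20% -> n={n_low}, CV 70% -> n={n_high} (UL={UL})")
```

For the highly variable scenario the uncapped requirement exceeds the UL, so the cap binds, which is exactly the situation where the abstract notes the UL trades away power to protect type I error.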
Predictive Array Design. A method for sampling combinatorial chemistry library space.
Lipkin, M J; Rose, V S; Wood, J
2002-01-01
A method, Predictive Array Design, is presented for sampling combinatorial chemistry space and selecting a sub-array for synthesis based on the experimental design method of Latin squares. The method is appropriate for libraries with three sites of variation. Libraries with four sites of variation can be designed using the Graeco-Latin square. Simulated annealing is used to optimise the physicochemical property profile of the sub-array. The sub-array can be used to make predictions of the activity of compounds in the all-combinations array if we assume that each monomer has a relatively constant contribution to activity and that a compound's activity is the sum of the activities of its constituent monomers.
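The Latin-square construction for a three-site library can be written down directly. For n monomers per site, the sub-array contains n² of the n³ possible compounds, yet every cross-site monomer pair occurs exactly once, which is what lets each monomer's average contribution be estimated. The library size below is a placeholder, not a real reagent set.

```python
from itertools import product

# Latin-square sub-array for a hypothetical 3-site library with
# N monomers per site (monomer indices stand in for real reagents).
N = 6
sub_array = [(i, j, (i + j) % N) for i, j in product(range(N), repeat=2)]

# N^2 compounds out of N^3, with every pair of monomers across any two
# sites occurring exactly once:
pairs_ab = {(a, b) for a, b, c in sub_array}
pairs_ac = {(a, c) for a, b, c in sub_array}
pairs_bc = {(b, c) for a, b, c in sub_array}
print(len(sub_array), len(pairs_ab), len(pairs_ac), len(pairs_bc))
```

Under the additivity assumption stated in the abstract, fitting per-monomer contributions to the measured activities of these 36 compounds predicts all 216 combinations.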
Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.
Directory of Open Access Journals (Sweden)
João Tiago Marques
Full Text Available Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance, or net shyness, can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day, it was no longer advantageous to move the nets frequently. In bird surveys, this could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency, but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey the species present, then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce the effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas.
Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.
Marques, João Tiago; Ramos Pereira, Maria J; Marques, Tiago A; Santos, Carlos David; Santana, Joana; Beja, Pedro; Palmeirim, Jorge M
2013-01-01
Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance, or net shyness, can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day, it was no longer advantageous to move the nets frequently. In bird surveys, this could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency, but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey the species present, then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce the effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas.
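The trade-off between net shyness and lost netting time reduces to simple arithmetic. Using the reported day-2 declines (28% for birds, 47% for bats) and assuming, for illustration only, that days 3-4 stay at the day-2 level, moving nets daily wins when it is free but can lose for birds once each relocation costs a netting day.

```python
# Back-of-envelope comparison of net-moving strategies over a 4-day survey.
# The constant day-2-onward retention is an assumed simplification.
DAYS = 4
retention = {"birds": 0.72, "bats": 0.53}   # relative rate from day 2 onward

def total_captures(group, move_daily, lost_days=0):
    """Total captures in units of one day-1 netting session."""
    if move_daily:                 # every netting day performs like day 1
        return (DAYS - lost_days) * 1.0
    return 1.0 + (DAYS - 1) * retention[group]

for g in retention:
    fixed = total_captures(g, move_daily=False)
    moved = total_captures(g, move_daily=True)
    moved_cost = total_captures(g, move_daily=True, lost_days=1)
    print(f"{g}: fixed={fixed:.2f} moved={moved:.2f} moved-1day={moved_cost:.2f}")
```

With these numbers, losing one day already makes daily moves a net loss for birds (3.00 vs 3.16 fixed-net sessions), consistent with the abstract's caution about relocation costs.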
Energy Technology Data Exchange (ETDEWEB)
Scroppo, J.A.; Scroppo, G.L. [Bladon International, Inc., Oak Brook, IL (United States); Carty, R.H.; Chaimberg, M. [Institute of Gas Technology, Chicago, IL (United States); Timmons, R.D.; O'Donnell, M. [Timco Mfg., Inc., Prairie du Sac, WI (United States)
1992-06-01
There is a need for a quick, simple, reliable, and inexpensive on-site method for sampling soil pollutants before they reach the groundwater. Vadose zone monitoring is an important aspect of sound groundwater management. In the vadose zone, where water moves via percolation, this water medium possesses the ability to transfer hazardous wastes to the nation's groundwater system. Obtaining samples of moisture and contaminants from the vadose zone is necessary if potential problems are to be identified before they reach the water table. Accurate determination of the spatial distribution, movement, and concentrations of contaminants is essential to the selection of remediation technologies. There is a need for three-dimensional subsurface characterization technologies to identify the location of hazardous plumes and their migration. Current subsurface characterization methods for dispersed contaminants primarily involve a time-consuming, expensive process of drilling wells and taking samples. With no major water flow in the vadose zone, conventional monitoring wells will not function as designed. The multi-sampling lysimeter can be readily linked with physical and chemical sensors for on-site screening. The hydraulically installed suction lysimeter was capable of extracting soil pore liquid samples from unsaturated test soils without the need to predrill a well. Test results verified that lysimeters installed with a hydraulic or mechanical ram were able to collect soil pore liquid samples in excess of the amount typically required for monitoring and analysis on a daily basis. Modifications to the prototype design eliminated moving parts and the need for inflatable packers. The elimination of the packer system and the use of porous nickel contributed to increased system ruggedness.
Optimization of sampling pattern and the design of Fourier ptychographic illuminator.
Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan
2015-03-09
Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to three major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The approach reported in this paper significantly shortens acquisition time and improves the quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
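A non-uniform LED layout of the kind described can be generated as concentric rings whose angular density grows with radius, so low spatial frequencies are sampled densely and high ones sparsely. The ring radii and per-ring counts below are illustrative choices (scaled so the total matches the 68 raw images quoted in the abstract), not the paper's printed layout.

```python
import numpy as np

# Hypothetical ring layout: (normalized radius, LEDs on that ring).
# Counts are chosen so the total equals the 68 acquisitions reported.
rings = [(0.00, 1), (0.10, 6), (0.25, 10), (0.45, 15), (0.70, 18), (1.00, 18)]

positions = []
for radius, count in rings:
    angles = np.linspace(0.0, 2.0 * np.pi, count, endpoint=False)
    positions += [(radius * np.cos(a), radius * np.sin(a)) for a in angles]
positions = np.array(positions)

n_uniform = 137          # acquisitions in the original periodic-grid setup
print(f"non-uniform LEDs: {len(positions)} vs uniform grid: {n_uniform}")
```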
Bertol, Gustavo; Franco, Luzia; Oliveira, Brás Heleno de
2012-01-01
Uncaria tomentosa ("cat's claw") is widely used for the treatment of some infectious and inflammatory diseases. Oxindole alkaloids are regarded as the most important components responsible for the biological activities attributed to the plant. Their analysis requires efficient sample preparation and suitable reference standards, but few standards are commercially available. The objective was to develop and validate an HPLC analytical method for oxindole alkaloids in Uncaria tomentosa, with emphasis on sample preparation. Factorial experimental designs were used for the optimisation of both sample preparation and chromatographic separation. The optimised sample preparation involved extraction with aqueous ethanol, and the granulometry of the powdered plant material significantly influenced extraction yields. Mitraphylline was used as a calibration reference for the determination of total alkaloids. The method was fully validated and showed good selectivity, linearity (r² ≥ 0.9996), accuracy (≥ 96%) and precision (RSD < 2.4%). Detection and quantification limits for mitraphylline were 0.8 and 2.4 ppm, respectively. The optimised chromatographic method, using organic buffer in the mobile phase, provided baseline separation of tetracyclic and pentacyclic alkaloids in the samples. Calibration using mitraphylline provided more accurate estimates of total alkaloid content when compared to other available reference alkaloids. Copyright © 2011 John Wiley & Sons, Ltd.
Binford, Michael W; Lee, Tae Jeong; Townsend, Robert M
2004-08-03
Environmental variability is an important risk factor in rural agricultural communities. Testing models requires empirical sampling that generates data that are representative in both economic and ecological domains. Detrended correspondence analysis of satellite remote sensing data was used to design an effective low-cost sampling protocol for a field study to create an integrated socioeconomic and ecological database when no prior information on the ecology of the survey area existed. We stratified the sample for the selection of tambons from various preselected provinces in Thailand based on factor analysis of spectral land-cover classes derived from satellite data. We then conducted the survey for the sampled villages in the chosen tambons. The resulting data capture interesting variations in soil productivity and in the timing of good and bad years, which a purely random sample would likely have missed. Thus, this database will allow tests of hypotheses concerning the effect of credit on productivity, the sharing of idiosyncratic risks, and the economic influence of environmental variability.
Energy Technology Data Exchange (ETDEWEB)
McDonald, Benjamin S.; Zalavadia, Mital A.; Miller, Brian W.; Bliss, Mary; Olsen, Khris B.; Kasparek, Dustin M.; Clarke, Ardelia M.
2017-07-17
Environmental sampling and sample analyses by the International Atomic Energy Agency’s (IAEA) Network of Analytical Laboratories (NWAL) is a critical technical tool used to detect facility misuse under a Comprehensive Safeguards Agreement and to verify the absence of undeclared nuclear material activities under an Additional Protocol. Currently all environmental swipe samples (ESS) are screened using gamma spectrometry and x-ray fluorescence to estimate the amount of U and/or Pu in the ESS, to guide further analysis, and to assist in the shipment of ESS to the NWAL. Quantitative Digital Autoradiography for Environmental Samples (QDARES) is being developed to complement existing techniques through the use of a portable, real-time, high-spatial-resolution camera called the Ionizing-radiation Quantum Imaging Detector (iQID). The iQID constructs a spatial map of radionuclides within a sample or surface in real-time as charged particles (betas) and photons (gamma/x-rays) are detected and localized on an event-by-event basis. Knowledge of the location and nature of radioactive hot spots on the ESS could provide information for subsequent laboratory analysis. As a nondestructive technique, QDARES does not compromise the ESS chain of custody or subsequent laboratory analysis. In this paper we will present the system design and construction, characterization measurements with calibration sources, and initial measurements of ESS.
Adaptive k-space sampling design for edge-enhanced DCE-MRI using compressed sensing.
Raja, Rajikha; Sinha, Neelam
2014-09-01
The critical challenge in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is the trade-off between spatial and temporal resolution due to the limited availability of acquisition time. To address this, it is imperative to under-sample k-space and to develop specific reconstruction techniques. Our proposed method reconstructs high-quality images from under-sampled dynamic k-space data through two main improvements: (i) the design of an adaptive k-space sampling lattice and (ii) an edge-enhanced reconstruction technique. A high-resolution data set obtained before the start of the dynamic phase is utilized. The sampling pattern is designed to adapt to the k-space energy distribution obtained from the static high-resolution data. For image reconstruction, the well-known compressed sensing-based total variation (TV) minimization constrained reconstruction scheme is utilized, incorporating the gradient information obtained from the static high-resolution data. The proposed method is tested on seven real dynamic time series consisting of 2 breast data sets and 5 abdomen data sets spanning 1196 images in all. For data availability of only 10%, performance improvement is seen across various quality metrics. Average improvements in Universal Image Quality Index and Structural Similarity Index Metric of up to 28% and 24% on breast data and about 17% and 9% on abdomen data, respectively, are obtained for the proposed method as against the baseline TV reconstruction with a variable-density random sampling pattern. Copyright © 2014 Elsevier Inc. All rights reserved.
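The first improvement, adapting the sampling lattice to the reference scan's k-space energy, can be sketched by drawing a 10% sampling mask with probability proportional to that energy. The toy image, mask budget, and probabilistic draw are illustrative; the paper's exact lattice construction differs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "static high-resolution" reference image standing in for the
# pre-contrast scan (a bright block plus noise).
ref = np.zeros((64, 64))
ref[16:48, 16:48] = 1.0
ref += 0.05 * rng.standard_normal(ref.shape)

# Sampling probability proportional to the reference's k-space energy.
energy = np.abs(np.fft.fftshift(np.fft.fft2(ref))) ** 2
prob = energy / energy.sum()

BUDGET = int(0.10 * ref.size)                 # sample only 10% of k-space
flat_idx = rng.choice(ref.size, size=BUDGET, replace=False, p=prob.ravel())
mask = np.zeros(ref.size, dtype=bool)
mask[flat_idx] = True
mask = mask.reshape(ref.shape)

# The adaptive mask concentrates samples where the energy lives (low freq):
center = mask[24:40, 24:40].mean()            # central 16x16 block fill
overall = mask.mean()
print(f"center fill={center:.2f} overall fill={overall:.2f}")
```

This mask would then feed the TV-constrained reconstruction described in the abstract, replacing the baseline variable-density random pattern.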
García-Aguilar, J; Miguel-García, I; Berenguer-Murcia, Á; Cazorla-Amorós, D
2014-12-24
A synthetic procedure to prepare novel materials (surface-mediated fillings) based on robust hierarchical monoliths is reported. The methodology includes the deposition of a (micro- or mesoporous) silica thin film on the support followed by growth of a porous monolithic SiO2 structure. It has been demonstrated that this synthesis is viable for supports of different chemical nature with different inner diameters without shrinkage of the silica filling. The formation mechanism of the surface-mediated fillings is based on a solution/precipitation process and the anchoring of the silica filling to the deposited thin film. The interaction between the two SiO2 structures (monolith and thin film) depends on the porosity of the thin film and yields composite materials with different mechanical stability. By this procedure, capillary microreactors have been prepared and have been proved to be highly active and selective in the total and preferential oxidation of carbon monoxide (TOxCO and PrOxCO).
Collaborative Hierarchical Sparse Modeling
Sprechmann, Pablo; Sapiro, Guillermo; Eldar, Yonina C
2010-01-01
Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is done by solving an l_1-regularized linear regression problem, usually called Lasso. In this work we first combine the sparsity-inducing property of the Lasso model, at the individual feature level, with the block-sparsity property of the group Lasso model, where sparse groups of features are jointly encoded, obtaining a sparsity pattern hierarchically structured. This results in the hierarchical Lasso, which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level but not necessarily at the lower one. Signals then share the same active groups, or classes, but not necessarily the same active set. This is very well suited for applications such as source separation. An efficient optimization procedure, which guarantees convergence to the global opt...
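The hierarchical Lasso penalty combines the element-wise l1 term with a group-level l2 term, and its proximal operator is known to compose: soft-threshold each coefficient, then shrink each group's norm. The toy coefficient vector, grouping, and penalty weights below are illustrative.

```python
import numpy as np

def soft(x, t):
    """Element-wise soft-thresholding (the Lasso proximal operator)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_hilasso(x, groups, lam1, lam2):
    """Proximal operator of lam1*||x||_1 + lam2*sum_g ||x_g||_2.
    For this penalty the prox composes: soft-threshold, then group shrink."""
    z = soft(x, lam1)
    out = np.zeros_like(z)
    for g in groups:
        norm = np.linalg.norm(z[g])
        if norm > lam2:
            out[g] = z[g] * (1 - lam2 / norm)   # group-wise shrinkage
    return out

x = np.array([3.0, -0.2, 0.1, 0.05, 0.0, 0.02])   # toy coefficients
groups = [slice(0, 3), slice(3, 6)]
result = prox_hilasso(x, groups, lam1=0.15, lam2=0.5)
print(result)
```

The output exhibits exactly the structure the abstract describes: the second group is zeroed out entirely (group sparsity), while the surviving first group is itself sparse within (element sparsity).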
Sampling design considerations for demographic studies: a case of colonial seabirds
Kendall, William L.; Converse, Sarah J.; Doherty, Paul F.; Naughton, Maura B.; Anders, Angela; Hines, James E.; Flint, Elizabeth
2009-01-01
For the purposes of making many informed conservation decisions, the main goal for data collection is to assess population status and allow prediction of the consequences of candidate management actions. Reducing the bias and variance of estimates of population parameters reduces uncertainty in population status and projections, thereby reducing the overall uncertainty under which a population manager must make a decision. In capture-recapture studies, imperfect detection of individuals, unobservable life-history states, local movement outside study areas, and tag loss can cause bias or precision problems with estimates of population parameters. Furthermore, excessive disturbance to individuals during capture-recapture sampling may be of concern because disturbance may have demographic consequences. We address these problems using as an example a monitoring program for Black-footed Albatross (Phoebastria nigripes) and Laysan Albatross (Phoebastria immutabilis) nesting populations in the northwestern Hawaiian Islands. To mitigate these estimation problems, we describe a synergistic combination of sampling design and modeling approaches. Solutions include multiple capture periods per season and multistate, robust design statistical models, dead recoveries and incidental observations, telemetry and data loggers, buffer areas around study plots to neutralize the effect of local movements outside study plots, and double banding and statistical models that account for band loss. We also present a variation on the robust capture-recapture design and a corresponding statistical model that minimizes disturbance to individuals. For the albatross case study, this less invasive robust design was more time efficient and, when used in combination with a traditional robust design, reduced the standard error of detection probability by 14% with only two hours of additional effort in the field. These field techniques and associated modeling approaches are applicable to studies of
The design of circuit for THz time domain spectroscopy system based on asynchronous optical sampling
Wang, Ruike; Zhang, Mile; Li, Yihan; He, Jingsuo; Zhang, Cunlin; Cui, Hailin
2016-11-01
Terahertz time-domain spectroscopy (THz-TDS) is the most common means of measuring terahertz spectra. The time delay between the pump and probe lasers is the key technology enabling THz time-domain measurement. The usual approach adjusts the optical path difference between the pump and probe beams with a translation stage carrying two mirrors, producing the femtosecond-pulse time delay mechanically. Because of the limitations of the mechanical stage and the lock-in amplifier, this technique cannot scan spectra quickly, and acquiring a high-quality signal takes a long time. A faster and more convenient time-delay technology is therefore needed to replace the mechanical translation stage and enable rapid spectral measurement. Asynchronous optical sampling obtains the time delay by introducing a very small difference between the repetition frequencies of two femtosecond lasers. Replacing the mechanical stage with asynchronous optical sampling shortens the scan time, since no time is wasted on mechanical inertia and the measurement is unaffected by vibration. Implementing asynchronous optical sampling with fiber femtosecond lasers and highly integrated circuitry also greatly increases the degree of integration. To address these problems, a terahertz time-domain spectroscopy system based on asynchronous sampling is designed in this thesis. The system is built around two femtosecond lasers with a repetition frequency of 100 MHz. The control circuit that locks the two lasers is the most critical element for realizing asynchronous sampling, and this thesis focuses on its research, design and experimental testing. First, the overall circuit was designed, the key devices were selected, and the circuit schematic was designed by the author. Second, the circuit was tested to phase-lock the master and
An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures.
Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf
2016-01-01
Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements, and one important element, the pseudoknot, plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most existing RNA sequence design algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences designed by Enzymer with the results obtained from the state-of-the-art tools MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally, by using Enzymer and constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer.
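The core of defect-weighted sampling is to mutate sequence positions with probability proportional to their contribution to the ensemble defect. A hedged sketch of that selection step, with made-up per-position defect values (the real algorithm would obtain them from NUPACK's equilibrium calculation):

```python
import random

def sample_position(defects, rng=None):
    """Pick a sequence position to mutate, with probability proportional
    to its per-position ensemble-defect contribution (the roulette-wheel
    selection underlying defect-weighted sampling)."""
    rng = rng or random.Random(0)
    total = sum(defects)
    r = rng.uniform(0, total)
    acc = 0.0
    for i, d in enumerate(defects):
        acc += d
        if r <= acc:
            return i
    return len(defects) - 1  # guard against floating-point round-off
```

Positions that pair incorrectly in the ensemble thus get mutated most often, concentrating the search effort where the design is worst.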
Hierarchical manifold learning.
Bhatia, Kanwal K; Rao, Anil; Price, Anthony N; Wolz, Robin; Hajnal, Jo; Rueckert, Daniel
2012-01-01
We present a novel method of hierarchical manifold learning which aims to automatically discover regional variations within images. This involves constructing manifolds in a hierarchy of image patches of increasing granularity, while ensuring consistency between hierarchy levels. We demonstrate its utility in two very different settings: (1) to learn the regional correlations in motion within a sequence of time-resolved images of the thoracic cavity; (2) to find discriminative regions of 3D brain images in the classification of neurodegenerative disease,
Conceptual Design of a Communications Relay Satellite for a Lunar Sample Return Mission
Brunner, Christopher W.
2005-01-01
In 2003, NASA solicited proposals for a robotic exploration of the lunar surface. Submissions were requested for a lunar sample return mission from the South Pole-Aitken Basin. The basin is of interest because it is thought to contain some of the oldest accessible rocks on the lunar surface. A mission is under study that will land a spacecraft in the basin, collect a sample of rock fragments, and return the sample to Earth. Because the Aitken Basin is on the far side of the Moon, the lander will require a communications relay satellite (CRS) to maintain contact with the Earth during its surface operation. Design of the CRS's orbit is therefore critical. This paper describes a mission design which includes potential transfer and mission orbits, required changes in velocity, orbital parameters, and mission dates. Several different low lunar polar orbits are examined to compare their availability to the lander versus the distance over which they must communicate. In addition, polar orbits are compared to a halo orbit about the Earth-Moon L2 point, which would permit continuous communication at a cost of increased fuel requirements and longer transmission distances. This thesis also examines some general parameters of the spacecraft systems for the mission under study. Mission requirements for the lander dictate the eventual choice of mission orbit. This mission could be the first step in a period of renewed lunar exploration and eventual human landings.
Assessing usual dietary intake in complex sample design surveys: the National Dietary Survey
Directory of Open Access Journals (Sweden)
Flávia dos Santos Barbosa
2013-02-01
Full Text Available The National Cancer Institute (NCI) method allows the distributions of usual intake of nutrients and foods to be estimated. This method can be used in complex surveys. However, the user must perform additional calculations, such as balanced repeated replication (BRR), in order to obtain standard errors and confidence intervals for the percentiles and mean from the distribution of usual intake. The objective is to highlight adaptations of the NCI method using data from the National Dietary Survey. The application of the NCI method is exemplified by analyzing total energy (kcal) and fruit (g) intake, comparing estimates of the mean and standard deviation based on the complex design of the Brazilian survey with those assuming a simple random sample. Although mean point estimates were similar, estimates of the standard error using the complex design increased by up to 60% compared to the simple random sample. Thus, for valid estimates of food and energy intake for the population, all of the sampling characteristics of the surveys should be taken into account, because when these characteristics are neglected, statistical analysis may produce underestimated standard errors that would compromise the results and the conclusions of the survey.
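The BRR calculation the abstract refers to re-estimates the statistic under each set of half-sample replicate weights and takes the variance of the replicates around the full-sample estimate. A minimal sketch for a weighted mean, assuming the replicate weights (normally built from a Hadamard matrix over variance strata) are supplied:

```python
import numpy as np

def brr_standard_error(y, weights, replicate_weights):
    """Balanced repeated replication SE for a weighted mean.

    replicate_weights: (R, n) array of half-sample replicate weights.
    Returns (full-sample estimate, BRR standard error).
    """
    full = np.average(y, weights=weights)
    reps = np.array([np.average(y, weights=w) for w in replicate_weights])
    variance = np.mean((reps - full) ** 2)  # average squared deviation
    return full, np.sqrt(variance)
```

Production BRR often applies Fay's adjustment (down-weighting rather than zeroing the excluded half-sample); the unadjusted form above is the textbook version.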
Hierarchically Structured Electrospun Fibers
Directory of Open Access Journals (Sweden)
Nicole E. Zander
2013-01-01
Full Text Available Traditional electrospun nanofibers have a myriad of applications ranging from scaffolds for tissue engineering to components of biosensors and energy harvesting devices. The generally smooth one-dimensional structure of the fibers has stood as a limitation to several interesting novel applications. Control of fiber diameter, porosity and collector geometry will be briefly discussed, as will more traditional methods for controlling fiber morphology and fiber mat architecture. The remainder of the review will focus on new techniques to prepare hierarchically structured fibers. Fibers with hierarchical primary structures—including helical, buckled, and beads-on-a-string fibers, as well as fibers with secondary structures, such as nanopores, nanopillars, nanorods, and internally structured fibers and their applications—will be discussed. These new materials with helical/buckled morphology are expected to possess unique optical and mechanical properties with possible applications for negative refractive index materials, highly stretchable/high-tensile-strength materials, and components in microelectromechanical devices. Core-shell type fibers enable a much wider variety of materials to be electrospun and are expected to be widely applied in the sensing, drug delivery/controlled release fields, and in the encapsulation of live cells for biological applications. Materials with a hierarchical secondary structure are expected to provide new superhydrophobic and self-cleaning materials.
Hierarchical video summarization
Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.
1998-12-01
We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream, and propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
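One simple way to honor a temporal consecutiveness constraint while clustering key-frames is to restrict merges to temporally adjacent clusters, so every cluster stays a contiguous run of frames. This sketch is a hedged stand-in for the paper's pairwise K-means variant, not a reconstruction of it:

```python
import numpy as np

def cluster_keyframes(features, n_clusters):
    """Merge temporally adjacent key-frames until n_clusters remain.

    Clusters are contiguous runs of frame indices; at each step the two
    adjacent clusters whose feature centroids are closest (Euclidean)
    are merged, so temporal order is never violated.
    """
    clusters = [[i] for i in range(len(features))]
    centroids = [np.asarray(f, dtype=float) for f in features]
    while len(clusters) > n_clusters:
        dists = [np.linalg.norm(centroids[i] - centroids[i + 1])
                 for i in range(len(clusters) - 1)]
        j = int(np.argmin(dists))  # closest adjacent pair
        merged = clusters[j] + clusters[j + 1]
        clusters[j:j + 2] = [merged]
        centroids[j:j + 2] = [np.mean([features[i] for i in merged], axis=0)]
    return clusters
```

Applying this recursively to each level's representative frames yields the coarse-to-fine summary hierarchy the abstract describes.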
[Methodological Aspects of the Sampling Design for the 2015 National Mental Health Survey].
Rodríguez, Nelcy; Rodríguez, Viviana Alejandra; Ramírez, Eugenia; Cediel, Sandra; Gil, Fabián; Rondón, Martín Alonso
2016-12-01
The WHO has encouraged the development, implementation and evaluation of policies related to mental health all over the world. In Colombia, within this framework, promoted by the Ministry of Health and Social Protection and supported by Colciencias, the fourth National Mental Health Survey (NMHST) was conducted as an observational cross-sectional study. According to the context and following the guidelines and sampling design, a summary of the methodology used for this sampling process is presented. The fourth NMHST used the Homes Master Sample for Studies in Health from the National System of Studies and Population Surveys for Health to calculate its sample. This Master Sample was developed and implemented in 2013 by the Ministry of Social Protection. The study included the non-institutionalised civilian population in four age groups: children 7-11 years, adolescents 12-17 years, adults 18-44 years, and adults 45 years or older. The sample size calculation was based on the prevalences reported in other studies for the outcomes of mental disorders, depression, suicide, associated morbidity, and alcohol use. A probabilistic, cluster, stratified and multistage selection process was used. Expansion factors to the total population were calculated. A total of 15,351 completed surveys were collected, distributed by age group as follows: 2727 for 7-11 years, 1754 for 12-17 years, 5889 for 18-44 years, and 4981 for ≥45 years. All the surveys were distributed across five regions: Atlantic, Oriental, Bogotá, Central and Pacific. A sufficient number of surveys was collected in this study to obtain a more precise approximation of mental problems and disorders at the regional and national level. Copyright © 2016 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
Strobl, R O; Robillard, P D; Day, R L; Shannon, R D; McDonnell, A J
2006-11-01
In order to resolve the spatial component of the design of a water quality monitoring network, a methodology has been developed to identify the critical sampling locations within a watershed. This methodology, called Critical Sampling Points (CSP), focuses on the contaminant total phosphorus (TP), and is applicable to small, predominantly agricultural-forested watersheds. The CSP methodology was translated into a model, called Water Quality Monitoring Station Analysis (WQMSA). It incorporates a geographic information system (GIS) for spatial analysis and data manipulation purposes, a hydrologic/water quality simulation model for estimating TP loads, and an artificial intelligence technology for improved input data representation. The model input data include a number of hydrologic, topographic, soils, vegetative, and land use factors. The model also includes an economic and logistics component. The validity of the CSP methodology was tested on a small experimental Pennsylvanian watershed, for which TP data from a number of single storm events were available for various sampling points within the watershed. A comparison of the ratios of observed to predicted TP loads between sampling points revealed that the model's results were promising.
Design of a current Mode Sample and Hold Circuit at sampling rate of 150 MS/s
Directory of Open Access Journals (Sweden)
Prity Yadav
2014-10-01
Full Text Available A current-mode sample and hold circuit in 180 nm technology is presented in this paper. The major concerns in VLSI design are area, power, delay and speed. Hence, in the proposed architecture a MOSFET operating in the triode region is used for voltage-to-current conversion instead of the resistor used in previously proposed circuits. The proposed circuit achieves a higher sampling frequency and greater accuracy than the previous design. The performance of the proposed circuit is demonstrated through simulation results.
Multicollinearity in hierarchical linear models.
Yu, Han; Jiang, Shanhe; Land, Kenneth C
2015-09-01
This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model.
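The standard single-level diagnostic underlying such analyses is the variance inflation factor, VIF_j = 1/(1 - R²_j), computed by regressing each predictor on the others; in the top-down approach it would be applied first to the Level-2 predictors, then to Level-1. A minimal sketch (plain least squares, not a full HLM fit):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of a design matrix X.

    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    on an intercept plus the remaining columns.
    """
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out
```

Values near 1 indicate little collinearity; common rules of thumb flag columns with VIF above 5 or 10 for the remedies (variable selection, re-specification) the abstract lists.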
Small-Sample Behavior of Novel Phase I Cancer Trial Designs
Oron, Assaf P
2012-01-01
Novel dose-finding designs, using estimation to assign the best estimated maximum- tolerated-dose (MTD) at each point in the experiment, most commonly via Bayesian techniques, have recently entered large-scale implementation in Phase I cancer clinical trials. We examine the small-sample behavior of these "Bayesian Phase I" (BP1) designs, and also of non-Bayesian designs sharing the same main "long-memory" traits (hereafter: LMP1s). For all LMP1s examined, the number of cohorts treated at the true MTD (denoted here as n*) was highly variable between numerical runs drawn from the same toxicity-threshold distribution, especially when compared with "up-and-down" (U&D) short-memory designs. Further investigation using the same set of thresholds in permuted order, produced a nearly-identical magnitude of variability in n*. Therefore, this LMP1 behavior is driven by a strong sensitivity to the order in which toxicity thresholds appear in the experiment. We suggest that the sensitivity is related to LMP1's tenden...
Wu, Mingjie; Tang, Qiaowei; Dong, Fang; Wang, Yongzhen; Li, Donghui; Guo, Qinping; Liu, Yuyu; Qiao, Jinli
2016-07-28
A new type of Fe, N-doped hierarchically porous carbons (N-Fe-HPCs) has been synthesized via a cost-effective synthetic route, derived from nitrogen-enriched polyquaternium networks by combining a simple silicate-templated two-step graphitization of the impregnated carbon. The as-prepared N-Fe-HPCs present a high catalytic activity for the oxygen reduction reaction (ORR) with onset and half-wave potentials of 0.99 and 0.86 V in 0.1 M KOH, respectively, which are superior to the commercially available Pt/C catalyst (half-wave potential 0.86 V vs. RHE). Surprisingly, the diffusion-limited current density of N-Fe-HPCs approaches ∼7.5 mA cm(-2), much higher than that of Pt/C (∼5.5 mA cm(-2)). As a cathode electrode material used in Zn-air batteries, the unique configuration of the N-Fe-HPCs delivers a high discharge peak power density reaching up to 540 mW cm(-2) with a current density of 319 mA cm(-2) at 1.0 V of cell voltage and an energy density >800 Wh kg(-1). Additionally, outstanding ORR durability of the N-Fe-HPCs is demonstrated, as evaluated by the transient cell-voltage behavior of the Zn-air battery, which retains an open circuit voltage of 1.48 V over 10 hours at a discharge current density of 100 mA cm(-2).
A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin
Blaschek, Michael; Duttmann, Rainer
2015-04-01
The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km²-sized river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters - topographic wetness index and potential incoming solar radiation - derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times, and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations has been done using
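The stratification step described here, quantile bins of continuous covariates crossed with categorical geology, can be sketched as follows. The bin counts and function names are illustrative, assuming covariates sampled on a common grid; the study's own class construction yielded 30 classes:

```python
import numpy as np

def quantile_strata(values, n_bins):
    """Assign each location to a quantile bin (0..n_bins-1) of a covariate."""
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.searchsorted(edges, values, side='right')

def combined_strata(twi, radiation, geology, n_bins=3):
    """Cross quantile bins of two land-surface parameters (e.g. topographic
    wetness index and solar radiation) with geological units; each distinct
    tuple is one sampling stratum."""
    return list(zip(quantile_strata(twi, n_bins),
                    quantile_strata(radiation, n_bins),
                    geology))
```

First-stage polygons would then be drawn at random within each stratum, with second-stage points drawn inside the selected polygons, as in the abstract.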
Heath, Christopher M.
2012-01-01
An isokinetic dilution probe has been designed with the aid of computational fluid dynamics to sample sub-micron particles emitted from aviation combustion sources. The intended operational range includes standard day atmospheric conditions up to 40,000-ft. With dry nitrogen as the diluent, the probe is intended to minimize losses from particle microphysics and transport while rapidly quenching chemical kinetics. Initial results indicate that the Mach number ratio of the aerosol sample and dilution streams in the mixing region is an important factor for successful operation. Flow rate through the probe tip was found to be highly sensitive to the static pressure at the probe exit. Particle losses through the system were estimated to be on the order of 50% with minimal change in the overall particle size distribution apparent. Following design refinement, experimental testing and validation will be conducted in the Particle Aerosol Laboratory, a research facility located at the NASA Glenn Research Center to study the evolution of aviation emissions at lower stratospheric conditions. Particle size distributions and number densities from various combustion sources will be used to better understand particle-phase microphysics, plume chemistry, evolution to cirrus, and environmental impacts of aviation.
Staymates, Matthew; Gillen, Greg; Grandner, Jessica; Lukow, Stefan
2011-11-01
As part of an ongoing effort with the Transportation Security Laboratory, the National Institute of Standards and Technology has been developing a prototype shoe sampling system that relies on aerodynamic sampling as the primary mechanism for liberating, transporting, and collecting explosive contamination. This presentation will focus on the fluid dynamics associated with the current prototype design. This design includes several air jets and air blades that are used to dislodge particles from target areas of a shoe. A large blower then draws air and liberated particles into a collection device at several hundred liters per second. Experiments that utilize optical particle counting techniques have shown that the applied shear forces from these jets are capable of liberating particles efficiently from shoe surfaces. Results from real-world contamination testing also support the effectiveness of air jet impingement in this prototype. Many examples of flow visualization will be shown. The issues associated with air spillage, particle release efficiency, and particle transport will also be discussed.
A UAV-Based Fog Collector Design for Fine-Scale Aerobiological Sampling
Gentry, D.; Guarro, M.; Demachkie, I. S.; Stumfall, I.; Dahlgren, R. P.
2016-12-01
Airborne microbes are found throughout the troposphere and into the stratosphere. Knowing how the activity of airborne microorganisms can alter water, carbon, and other geochemical cycles is vital to a full understanding of local and global ecosystems. Just as on the land or in the ocean, atmospheric regions vary in habitability; the underlying geochemical, climatic, and ecological dynamics must be characterized at different scales to be effectively modeled. Most aerobiological studies have focused on a high level: 'How high are airborne microbes found?' and 'How far can they travel?' Most fog and cloud water studies collect from stationary ground stations (point) or along flight transects (1D). To complement and provide context for this data, we have designed a UAV-based modified fog and cloud water collector to retrieve 4D-resolved samples for biological and chemical analysis. Our design uses a passive impacting collector hanging from a rigid rod suspended between two multi-rotor UAVs. The suspension design reduces the effect of turbulence and potential for contamination from the UAV downwash. The UAVs are currently modeled in a leader-follower configuration, taking advantage of recent advances in modular UAVs, UAV swarming, and flight planning. The collector itself is a hydrophobic mesh. Materials including Tyvek, PTFE, nylon, and polypropylene monofilament fabricated via laser cutting, CNC knife, or 3D printing were characterized for droplet collection efficiency using a benchtop atomizer and particle counter. Because the meshes can be easily and inexpensively fabricated, a set can be pre-sterilized and brought to the field for 'hot swapping' to decrease cross-contamination between flight sessions or use as negative controls. An onboard sensor and logging system records the time and location of each sample; when combined with flight tracking data, the samples can be resolved into a 4D volumetric map of the fog bank. Collected samples can be returned to the lab
Institute of Scientific and Technical Information of China (English)
刘蔚; 苟鹏; 操安喜; 崔维成
2006-01-01
Multidisciplinary design optimization (MDO) is applied to the overall performance optimization of an autonomous underwater vehicle (AUV) as a new design approach. The MDO framework introduced in this paper is bilevel and hierarchical, consisting of a high-level controlling system and a lower level of parallel, independent subsystems. The methodology is used to pursue multiple design objectives for the AUV: maximizing the payload-section length and thrust force while minimizing gross weight. The commercial software iSIGHT and Fortran are used as the design tools to implement the MDO of the AUV.
Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model
Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.
2017-09-01
The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
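Spatial simulated annealing, as used here, perturbs one design point at a time and accepts worse designs with a Metropolis probability that shrinks as the temperature cools. A generic, hedged sketch on the unit square, with the expensive space-time averaged KED variance replaced by any user-supplied criterion function (names and defaults are illustrative):

```python
import math
import random

def spatial_simulated_annealing(points, criterion, n_iter=2000,
                                t0=1.0, cooling=0.995, step=0.1, rng=None):
    """Minimise `criterion(points)` over point locations in [0,1]^2 by SSA.

    Each iteration jitters one point; improvements are always accepted,
    deteriorations with probability exp(-increase / temperature).
    """
    rng = rng or random.Random(42)
    current = [tuple(p) for p in points]
    best, best_val = list(current), criterion(current)
    val, t = best_val, t0
    for _ in range(n_iter):
        cand = list(current)
        i = rng.randrange(len(cand))
        x, y = cand[i]
        cand[i] = (min(1.0, max(0.0, x + rng.uniform(-step, step))),
                   min(1.0, max(0.0, y + rng.uniform(-step, step))))
        cand_val = criterion(cand)
        if cand_val < val or rng.random() < math.exp((val - cand_val) / t):
            current, val = cand, cand_val
            if val < best_val:
                best, best_val = list(current), val
        t *= cooling  # geometric cooling schedule
    return best, best_val
```

In the study, `criterion` would be the space-time averaged KED variance recomputed under the non-stationary variance model; here any cheap spatial-coverage proxy can stand in for testing.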
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe
2017-04-01
Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We "sampled" the fluxes using a fixed interval (1-32 days) or a rule-based (decision tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to be used as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimation increased along with the fixed interval length. A 4- and 8-day fixed sampling interval was required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval rendered the same accuracy at Fort Collins and Pullman. Both the HI and the LI rule-based methods provided the same accuracy as the fixed interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and the fixed interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha-1 yr-1 in estimating cumulative N2O flux. These results suggest that using simulation models along with decision trees can reduce
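The fixed-interval evaluation in this abstract amounts to subsampling the simulated daily series, interpolating between sampling dates, and comparing the resulting annual total to the true cumulative flux. A minimal sketch, assuming linear interpolation between sampling dates (a common chamber-data practice; the study's exact integration rule is not stated here):

```python
import numpy as np

def cumulative_flux_bias(daily_flux, interval):
    """Relative bias of the cumulative flux when daily fluxes are observed
    only every `interval` days and linearly interpolated in between."""
    daily_flux = np.asarray(daily_flux, dtype=float)
    days = np.arange(len(daily_flux))
    sampled = days[::interval]
    est = np.interp(days, sampled, daily_flux[sampled]).sum()
    true = daily_flux.sum()
    return (est - true) / true
```

Episodic emission pulses that fall between sampling dates are missed entirely, which is why longer intervals inflate the bias most at sites with high temporal flux variability.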
Lu, Chao; Li, Xubin; Wu, Dongsheng; Zheng, Lianqing; Yang, Wei
2016-01-12
analysis suggests that because essential conformational events are mainly driven by the compensating fluctuations of essential solute-solvent and solute-solute interactions, commonly employed "predictive" sampling methods are unlikely to be effective on this seemingly "simple" system. The gOST development presented in this paper illustrates how to employ the OSS scheme for physics-based sampling method designs.
A novel sampling design to explore gene-longevity associations: the ECHA study.
De Rango, Francesco; Dato, Serena; Bellizzi, Dina; Rose, Giuseppina; Marzi, Erika; Cavallone, Luca; Franceschi, Claudio; Skytthe, Axel; Jeune, Bernard; Cournil, Amandine; Robine, Jean Marie; Gampe, Jutta; Vaupel, James W; Mari, Vincenzo; Feraco, Emidio; Passarino, Giuseppe; Novelletto, Andrea; De Benedictis, Giovanna
2008-02-01
To investigate the genetic contribution to familial similarity in longevity, we set up a novel experimental design in which cousin-pairs born from siblings who were concordant or discordant for the longevity trait were analyzed. To check this design, two chromosomal regions already known to encompass longevity-related genes were examined: 6p21.3 (genes TNFalpha, TNFbeta, HSP70.1) and 11p15.5 (genes SIRT3, HRAS1, IGF2, INS, TH). Population pools of 1.6, 2.3 and 2.0 million inhabitants were screened, respectively, in Denmark, France and Italy to identify families matching the design requirements. A total of 234 trios composed of one centenarian, his/her child and a child of his/her concordant or discordant sib were collected. By using population-specific allele frequencies, we reconstructed haplotype phase and estimated the likelihood of Identical By Descent (IBD) haplotype sharing in cousin-pairs born from concordant and discordant siblings. In addition, we analyzed haplotype transmission from centenarians to offspring, and a statistically significant Transmission Ratio Distortion (TRD) was observed for both chromosomal regions in the discordant families (P=0.007 for 6p21.3 and P=0.015 for 11p15.5). In concordant families, a marginally significant TRD was observed at 6p21.3 only (P=0.06). Although no significant difference emerged between the two groups of cousin-pairs, our study gave new insights into the hindrances to recruiting a suitable sample to obtain significant IBD data on longevity-related chromosomal regions. This will make it possible to dimension future sampling campaigns to study the genetic basis of human longevity.
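Transmission Ratio Distortion of the kind reported above is commonly assessed with an exact binomial test of transmitted versus non-transmitted haplotypes against the Mendelian 50% expectation. The sketch below uses hypothetical counts, not the study's data:

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value (sum of outcome probabilities no
    larger than that of the observed count), testing whether a haplotype is
    transmitted more or less often than the Mendelian 50% expectation."""
    probs = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    obs = probs[k]
    return min(1.0, sum(q for q in probs if q <= obs + 1e-12))

# Hypothetical counts: a haplotype transmitted 70 times out of 100 meioses.
pval = binomial_two_sided_p(70, 100)
print(f"two-sided p = {pval:.2e}")  # well below 0.05: significant distortion
```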
Designing to Sample the Unknown: Lessons from OSIRIS-REx Project Systems Engineering
Everett, David; Mink, Ronald; Linn, Timothy; Wood, Joshua
2017-01-01
On September 8, 2016, the third NASA New Frontiers mission launched on an Atlas V 411. The Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) will rendezvous with asteroid Bennu in 2018, collect a sample in 2020, and return that sample to Earth in September 2023. The development team has overcome a number of challenges in order to design and build a system that will make contact with an unexplored, airless, low-gravity body. This paper will provide an overview of the mission, then focus on the system-level challenges and some of the key system-level processes. Some of the lessons here are unique to this type of mission, such as operating at a largely unknown, low-gravity object. Other lessons, particularly from the build phase, have broad implications. The OSIRIS-REx risk management process was particularly effective in achieving an on-time and under-budget development effort. Systematic requirements management and verification, together with system validation, also helped identify numerous potential problems. The final assessment of the OSIRIS-REx performance will need to wait until the sample is returned in 2023, but this post-launch assessment captures some of the key systems-engineering lessons from the development team.
Sample size calculation for microarray experiments with blocked one-way design
Directory of Open Access Journals (Sweden)
Jung Sin-Ho
2009-05-01
Full Text Available Abstract Background One of the main objectives of microarray analysis is to identify differentially expressed genes for different types of cells or treatments. Many statistical methods have been proposed to assess the treatment effects in microarray experiments. Results In this paper, we consider discovery of the genes that are differentially expressed among K (> 2) treatments when each set of K arrays constitutes a block. In this case, the array data among K treatments tend to be correlated because of the block effect. We propose to use the blocked one-way ANOVA F-statistic to test if each gene is differentially expressed among K treatments. The marginal p-values are calculated using a permutation method accounting for the block effect, adjusting for the multiplicity of the testing procedure by controlling the false discovery rate (FDR). We propose a sample size calculation method for microarray experiments with a blocked one-way design. With the FDR level and effect sizes of genes specified, our formula provides a sample size for a given number of true discoveries. Conclusion The calculated sample size is shown via simulations to provide an accurate number of true discoveries while controlling the FDR at the desired level.
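A minimal sketch of the testing pipeline described above, assuming a randomized complete block layout, within-block label permutation, and Benjamini-Hochberg FDR adjustment; the data and the effect size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def blocked_anova_F(y):
    """Blocked one-way ANOVA F-statistic for one gene.
    y: (b blocks) x (K treatments) matrix of expression values."""
    b, K = y.shape
    grand = y.mean()
    ss_treat = b * ((y.mean(axis=0) - grand) ** 2).sum()
    ss_block = K * ((y.mean(axis=1) - grand) ** 2).sum()
    ss_error = ((y - grand) ** 2).sum() - ss_treat - ss_block
    return (ss_treat / (K - 1)) / (ss_error / ((K - 1) * (b - 1)))

def permutation_p(y, n_perm=999):
    """Permute treatment labels within each block, preserving block effects."""
    f_obs = blocked_anova_F(y)
    count = 1  # include the observed statistic
    for _ in range(n_perm):
        perm = np.array([rng.permutation(row) for row in y])
        if blocked_anova_F(perm) >= f_obs:
            count += 1
    return count / (n_perm + 1)

def benjamini_hochberg(pvals):
    """Step-up FDR-adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    adj = np.empty_like(p)
    running_min, m = 1.0, len(p)
    for rank, i in list(enumerate(order, start=1))[::-1]:
        running_min = min(running_min, p[i] * m / rank)
        adj[i] = running_min
    return adj

# Hypothetical data: 8 blocks x 3 treatments; one gene carries a block effect
# only, the other adds a treatment shift in the third treatment.
null_gene = rng.normal(size=(8, 3)) + rng.normal(size=(8, 1))
de_gene = null_gene + np.array([0.0, 0.0, 2.0])
pvals = [permutation_p(null_gene), permutation_p(de_gene)]
print("FDR-adjusted p-values:", benjamini_hochberg(pvals))
```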
Sampling design for the Study of Cardiovascular Risks in Adolescents (ERICA).
Vasconcellos, Mauricio Teixeira Leite de; Silva, Pedro Luis do Nascimento; Szklo, Moyses; Kuschnir, Maria Cristina Caetano; Klein, Carlos Henrique; Abreu, Gabriela de Azevedo; Barufaldi, Laura Augusta; Bloch, Katia Vergetti
2015-05-01
The Study of Cardiovascular Risk in Adolescents (ERICA) aims to estimate the prevalence of cardiovascular risk factors and metabolic syndrome in adolescents (12-17 years) enrolled in public and private schools of the 273 municipalities with over 100,000 inhabitants in Brazil. The study population was stratified into 32 geographical strata (27 capitals and five sets with other municipalities in each macro-region of the country) and a sample of 1,251 schools was selected with probability proportional to size. In each school three combinations of shift (morning and afternoon) and grade were selected, and within each of these combinations, one class was selected. All eligible students in the selected classes were included in the study. The design sampling weights were calculated by the product of the reciprocals of the inclusion probabilities in each sampling stage, and were later calibrated considering the projections of the numbers of adolescents enrolled in schools located in the geographical strata by sex and age.
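The basic (pre-calibration) weight construction described above can be sketched as follows; the stage inclusion probabilities are hypothetical, not ERICA's actual values:

```python
def design_weight(stage_inclusion_probs):
    """Basic design weight: the product of the reciprocals of the inclusion
    probabilities at each sampling stage (before calibration)."""
    w = 1.0
    for p in stage_inclusion_probs:
        w *= 1.0 / p
    return w

# Hypothetical student: school drawn with PPS probability 0.05, one
# shift-grade combination with probability 0.5, one class within it with
# probability 0.25, and all eligible students in the class included (1.0).
w = design_weight([0.05, 0.5, 0.25, 1.0])
print(w)  # this student "represents" w adolescents before calibration
```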
Directory of Open Access Journals (Sweden)
Andreas G. Andreou
2013-03-01
We discuss the architecture and design of parallel sampling front ends for analog-to-information (A2I) converters. By way of example, we detail the design of a custom 0.5 µm CMOS implementation of a mixed-signal parallel sampling encoder architecture. The system consists of configurable parallel analog processing channels, whose output is sampled by traditional analog-to-digital converters (ADCs). The analog front end modulates the signal of interest with a high-speed digital chipping sequence and integrates the result prior to sampling at a low rate. An FPGA is employed to generate the chipping sequences and process the digitized samples.
Gentry, Diana; Cynthia Ouandji; Arismendi, Dillon; Guarro, Marcello; Demachkie, Isabella; Crosbie, Ewan; Dadashazar, Hossein; MacDonald, Alex B.; Wang, Zhen; Sorooshian, Armin
2017-01-01
Just as on the land or in the ocean, atmospheric regions may be more or less hospitable to life. The aerobiosphere, or collection of living things in Earth's atmosphere, is poorly understood due to the small number and ad hoc nature of samples studied. However, we know viable airborne microbes play important roles, such as providing cloud condensation nuclei. Knowing the distribution of such microorganisms and how their activity can alter water, carbon, and other geochemical cycles is key to developing criteria for planetary habitability, particularly for potential habitats with wet atmospheres but little stable surface water. Coastal California has regular, dense fog known to play a major transport role in the local ecosystem. In addition to the significant local (1 km) geographical variation in typical fog, previous studies have found that changes in height above the surface of as little as a few meters can yield significant differences in typical concentrations, populations, and residence times. No single current sampling platform (ground-based impactors, towers, balloons, aircraft) is capable of accessing all of these regions of interest. A novel passive fog and cloud water sampler, consisting of a lightweight passive impactor suspended from unmanned aerial vehicles (UAVs), is being developed to allow 4D point sampling within a single fog bank, allowing closer study of small-scale (100 m) system dynamics. Fog and cloud droplet water samples from low-altitude aircraft flights in nearby coastal waters were collected and assayed to estimate the required sample volumes, flight times, and sensitivity thresholds of the system under design. 125 cloud water samples were collected from 16 flights of the Center for Interdisciplinary Remotely Piloted Aircraft Studies (CIRPAS) instrumented Twin Otter, equipped with a sampling tube collector, occurring between 18 July and 12 August 2016 below 1 km altitude off the central coast. The collector was flushed first with 70% ethanol
Context updates are hierarchical
Directory of Open Access Journals (Sweden)
Anton Karl Ingason
2016-10-01
This squib studies the order in which elements are added to the shared context of interlocutors in a conversation. It focuses on context updates within one hierarchical structure and argues that structurally higher elements are entered into the context before lower elements, even if the structurally higher elements are pronounced after the lower elements. The crucial data are drawn from a comparison of relative clauses in two head-initial languages, English and Icelandic, and two head-final languages, Korean and Japanese. The findings have consequences for any theory of a dynamic semantics.
Directed Hierarchical Patterning of Polycarbonate Bisphenol A Glass Surface along Predictable Sites
Directory of Open Access Journals (Sweden)
Mazen Khaled
2015-01-01
This paper reports a new approach in designing textured and hierarchical surfaces on polycarbonate bisphenol A type glass to improve hydrophobicity and dust-repellent application for solar panels. Solvent- and vapor-induced crystallization of thermoplastic glass polycarbonate bisphenol A (PC) is carried out to create hierarchically structured surfaces. In this approach dichloromethane (DCM) and acetone are used in sequence. Samples are initially immersed in DCM liquid to generate nanopores, followed by exposure to acetone vapor, resulting in the generation of hierarchical structure along the interporous sites. The effects of exposure time on the size, density, and distance of the generated spherules and gaps are studied and correlated with the optical transmittance and contact angle measurements at the surface. At optimized exposure time a contact angle of 98° was achieved with 80% optical transmittance. To further increase the hydrophobicity while maintaining optical properties, the hierarchical surfaces were coated with a transparent composite of tetraethyl orthosilicate as precursor and hexamethyldisilazane as silylation agent, resulting in an average contact angle of 135.8° and transmittance of around 70%. FTIR and AFM characterization techniques are employed to study the composition and morphology of the generated surfaces.
Importance of sampling design and analysis in animal population studies: a comment on Sergio et al.
Kery, M.; Royle, J. Andrew; Schmid, Hans
2008-01-01
1. The use of predators as indicators and umbrellas in conservation has been criticized. In the Trentino region, Sergio et al. (2006; hereafter SEA) counted almost twice as many bird species in quadrats located in raptor territories than in controls. However, SEA detected astonishingly few species. We used contemporary Swiss Breeding Bird Survey data from an adjacent region and a novel statistical model that corrects for overlooked species to estimate the expected number of bird species per quadrat in that region. 2. There are two anomalies in SEA which render their results ambiguous. First, SEA detected on average only 6.8 species, whereas a value of 32 might be expected. Hence, they probably overlooked almost 80% of all species. Secondly, the precision of their mean species counts was greater in two-thirds of cases than in the unlikely case that all quadrats harboured exactly the same number of equally detectable species. This suggests that they detected consistently only a biased, unrepresentative subset of species. 3. Conceptually, expected species counts are the product of true species number and species detectability p. Plenty of factors may affect p, including date, hour, observer, previous knowledge of a site and mobbing behaviour of passerines in the presence of predators. Such differences in p between raptor and control quadrats could have easily created the observed effects. Without a method that corrects for such biases, or without quantitative evidence that species detectability was indeed similar between raptor and control quadrats, the meaning of SEA's counts is hard to evaluate. Therefore, the evidence presented by SEA in favour of raptors as indicator species for enhanced levels of biodiversity remains inconclusive. 4. Synthesis and application. Ecologists should pay greater attention to sampling design and analysis in animal population estimation. Species richness estimation means sampling a community. Samples should be representative for the
Lee, S.; Lee, D.; Abu Salim, K.; Yun, H. M.; Han, S.; Lee, W. K.; Davies, S. J.; Son, Y.
2014-12-01
Mixed tropical forest structure is highly heterogeneous unlike plantation or mixed temperate forest structure, and therefore, different sampling approaches are required. However, the appropriate sampling design for estimating the above-ground biomass (AGB) in Bruneian lowland mixed dipterocarp forest (MDF) has not yet been fully clarified. The aim of this study was to provide supportive information in sampling design for Bruneian forest carbon inventory. The study site was located at Kuala Belalong lowland MDF, which is part of the Ulu Temburong National Park, Brunei Darussalam. Six 60 m × 60 m quadrats were established, separated by a distance of approximately 100 m and each was subdivided into quadrats of 10 m × 10 m, at an elevation between 200 and 300 m above sea level. At each plot all free-standing trees with diameter at breast height (dbh) ≥ 1 cm were measured. The AGB for all trees with dbh ≥ 10 cm was estimated by allometric models. In order to analyze changes in the diameter-dependent parameters used for estimating the AGB, different quadrat areas, ranging from 10 m × 10 m to 60 m × 60 m, were used across the study area, starting at the South-West end and moving towards the North-East end. The results were as follows: (a) Big trees (dbh ≥ 70 cm) with sparse distribution have remarkable contribution to the total AGB in Bruneian lowland MDF, and therefore, special consideration is required when estimating the AGB of big trees. Stem number of trees with dbh ≥ 70 cm comprised only 2.7% of all trees with dbh ≥ 10 cm, but 38.5% of the total AGB. (b) For estimating the AGB of big trees at the given acceptable limit of precision (p), it is more efficient to use large quadrats than to use small quadrats, because the total sampling area decreases with the former. Our result showed that 239 20 m × 20 m quadrats (9.6 ha in total) were required, while 15 60 m × 60 m quadrats (5.4 ha in total) were required when estimating the AGB of the trees
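The quadrat-size trade-off quoted above can be checked with simple area arithmetic (the quadrat counts are the ones reported in the abstract):

```python
def total_sampling_area_ha(n_quadrats, side_m):
    """Total sampled area in hectares for n square quadrats of a given side."""
    return n_quadrats * side_m ** 2 / 10_000

small = total_sampling_area_ha(239, 20)  # 239 quadrats of 20 m x 20 m
large = total_sampling_area_ha(15, 60)   # 15 quadrats of 60 m x 60 m
# 9.56 ha (reported as ~9.6 ha) vs 5.4 ha: fewer, larger quadrats need
# less total area for the same precision target.
print(small, large)
```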
On the geostatistical characterization of hierarchical media
Neuman, Shlomo P.; Riva, Monica; Guadagnini, Alberto
2008-02-01
The subsurface consists of porous and fractured materials exhibiting a hierarchical geologic structure, which gives rise to systematic and random spatial and directional variations in hydraulic and transport properties on a multiplicity of scales. Traditional geostatistical moment analysis allows one to infer the spatial covariance structure of such hierarchical, multiscale geologic materials on the basis of numerous measurements on a given support scale across a domain or "window" of a given length scale. The resultant sample variogram often appears to fit a stationary variogram model with constant variance (sill) and integral (spatial correlation) scale. In fact, some authors, who recognize that hierarchical sedimentary architecture and associated log hydraulic conductivity fields tend to be nonstationary, nevertheless associate them with stationary "exponential-like" transition probabilities and variograms, respectively, the latter being a consequence of the former. We propose that (1) the apparent ability of stationary spatial statistics to characterize the covariance structure of nonstationary hierarchical media is an artifact stemming from the finite size of the windows within which geologic and hydrologic variables are ubiquitously sampled, and (2) the artifact is eliminated upon characterizing the covariance structure of such media with the aid of truncated power variograms, which represent stationary random fields obtained upon sampling a nonstationary fractal over finite windows. To support our opinion, we note that truncated power variograms arise formally when a hierarchical medium is sampled jointly across all geologic categories and scales within a window; cite direct evidence that geostatistical parameters (variance and integral scale) inferred on the basis of traditional variograms vary systematically with support and window scales; demonstrate the ability of truncated power models to capture these variations in terms of a few scaling parameters
Study on Synthesis and Catalytic Performance of Hierarchical Zeolite
Institute of Scientific and Technical Information of China (English)
Zhang Lingling; Li Fengyan; ZhaoTianbo; Sun Guida
2007-01-01
A hierarchical zeolite catalyst was synthesized by a hydrothermal method. X-ray diffraction (XRD) and nitrogen adsorption-desorption measurements were used to study the phase and pore structure of the prepared catalyst. Infrared (IR) spectra of pyridine adsorbed on the sample showed that the hierarchical zeolite had many more Brønsted and Lewis acid sites than the HZSM-5 zeolite. The catalytic cracking of large hydrocarbon molecules showed that the hierarchical zeolite had a higher catalytic activity than the HZSM-5 zeolite.
Directory of Open Access Journals (Sweden)
Annu Saini
2014-09-01
This paper presents a low-power, high-performance sample-and-hold circuit with a high sampling speed. The proposed circuit is designed in 180 nm technology and has high linearity. The circuit can be used for ADC front-end applications and supports a double-sampling architecture. The proposed sample-and-hold circuit has a common-mode range beyond rail-to-rail and uses two differential-pair transistor stages connected in parallel as its input stage.
On designing data-sampling for Rasch model calibrating an achievement test
Directory of Open Access Journals (Sweden)
TAKUYA YANAGIDA
2009-12-01
In correspondence with pertinent statistical tests, it is of practical importance to design the data-sampling when the Rasch model is used for calibrating an achievement test; that is, to determine the sample size according to a given type-I and type-II risk, and according to a certain effect of model misfit which is of practical relevance. However, pertinent Rasch model tests use chi-squared distributed test statistics, whose degrees of freedom do not depend on the sample size or the number of testees, but only on the number of estimated parameters. We therefore suggest a new approach using an F-distributed statistic as applied within analysis of variance, where the sample size directly affects the degrees of freedom. The Rasch model's quality of specific objective measurement is in accordance with no interaction effect in a specific analysis of variance design. In analogy to Andersen's approach in his Likelihood-Ratio test, the testees must be divided into at least two groups according to some criterion suspected of causing differential item functioning (DIF). Then a three-way analysis of variance design ((A>B)xC) with mixed classification is the result: there is a (fixed) group factor A, a (random) factor B of testees within A, and a (fixed) factor C of items cross-classified with A>B; obviously, factor B is nested within A. Yet the data are dichotomous (a testee either solves an item or fails to solve it) and only one observation per cell exists. The latter is not assumed to do harm, though the design is a mixed classification. But the former suggests the need to perform a simulation study in order to test whether the type-I risk holds for the AxC interaction F-test; this interaction effect corresponds to the Rasch model's specific objectivity. If so, the critical number of testees is of interest for fulfilling the pertinent precision parameters. The simulation study (100,000 runs for each of several special cases) proved that the
Energy Technology Data Exchange (ETDEWEB)
Kwon, Oh-Sun [Department of Physics, University of Rhode Island, Kingston, RI 02881 (United States); Department of Chemistry, Interdisciplinary Program of Integrated Biotechnology, Sogang University, Seoul (Korea, Republic of); Shin, Kwanwoo [Department of Chemistry, Interdisciplinary Program of Integrated Biotechnology, Sogang University, Seoul (Korea, Republic of)]. E-mail: kwshin@sogang.ac.kr; Choi, Dong-Jin [Department of Chemistry, Interdisciplinary Program of Integrated Biotechnology, Sogang University, Seoul (Korea, Republic of); Hong, Kwang Pho [HANARO Utilization Technology Development Division, Korea Atomic Energy Research Institute, P.O.B. 105, Yuseong, Daejeon, 305-600 (Korea, Republic of); Moon, Myung Kook [HANARO Utilization Technology Development Division, Korea Atomic Energy Research Institute, P.O.B. 105, Yuseong, Daejeon, 305-600 (Korea, Republic of); Cho, Sang Jin [HANARO Utilization Technology Development Division, Korea Atomic Energy Research Institute, P.O.B. 105, Yuseong, Daejeon, 305-600 (Korea, Republic of); Choi, Young Hyun [HANARO Utilization Technology Development Division, Korea Atomic Energy Research Institute, P.O.B. 105, Yuseong, Daejeon, 305-600 (Korea, Republic of); Lee, Jeong Soo [HANARO Utilization Technology Development Division, Korea Atomic Energy Research Institute, P.O.B. 105, Yuseong, Daejeon, 305-600 (Korea, Republic of); Lee, Change-Hee [HANARO Utilization Technology Development Division, Korea Atomic Energy Research Institute, P.O.B. 105, Yuseong, Daejeon, 305-600 (Korea, Republic of)
2007-05-23
A new neutron reflectometer with a horizontal sample geometry was designed and is now under construction at HANARO, a 30 MW research reactor. It was originally built and operated at the H9-A beam port at BNL, and was relocated to HANARO in 2004. We performed neutron ray-tracing simulations to evaluate the performance of all of the optical components of the instrument with a Monte Carlo technique using the McStas code. The feasible wavelength of the incident neutron beam is 2.52 Å. It produces a q-range up to 0.126 Å^-1 with a supermirror as a deflector. Our studies indicated possibilities to improve the performance of the guide tube. Although the performance is limited (a limited q-range and flux due to multiple reflections prior to the deflector), it promises to be the first reflectometer in Korea for the study of free surfaces, which is currently in demand.
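The quoted wavelength and q-range imply a maximum incidence angle through the standard specular reflectivity relation q = (4π/λ) sin θ, which can be checked directly:

```python
import math

# Back out the maximum incidence angle from the quoted instrument parameters.
wavelength = 2.52  # Å, feasible incident wavelength
q_max = 0.126      # Å^-1, maximum momentum transfer
theta_max = math.degrees(math.asin(q_max * wavelength / (4 * math.pi)))
print(f"max incidence angle ~ {theta_max:.2f} deg")  # ~1.45 deg, a grazing angle
```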
On the Hierarchical Design of Teachers' Learning Content of Ecological Orientation
Institute of Scientific and Technical Information of China (English)
肖正德
2011-01-01
According to the niche principle, the hierarchical design of teachers' learning content of ecological orientation is a diverse design that meets the needs of teachers of different classes and levels; a sustainable design that promotes the integration of teachers' learning, work, and life; an interactive design that promotes the mutually supportive, common development of peer teachers; and a life-oriented design that arises from the needs of teachers' development and enhances teachers' sense of happiness. Teachers' learning of ecological orientation should be based on the hierarchy theory of teacher development, and a hierarchical framework of learning content should be designed according to teachers' developmental level to support their sustainable development.
Modular, Hierarchical Learning By Artificial Neural Networks
Baldi, Pierre F.; Toomarian, Nikzad
1996-01-01
A modular and hierarchical approach to supervised learning by artificial neural networks leads to networks more structured than those in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.
A note on hierarchical hubbing for a generalization of the VPN problem
Olver, N.K.
2014-01-01
Robust network design refers to a class of optimization problems that occur when designing networks to efficiently handle variable demands. The notion of "hierarchical hubbing" was introduced (in the narrow context of a specific robust network design question) by Olver and Shepherd [2010]. Hierarch
CYGNSS Spaceborne Constellation for Ocean Surface Winds: Mission Design and Sampling Properties
Ruf, Chris; Ridley, Aaron; Clarizia, Maria Paola; Gleason, Scott; Rose, Randall; Scherrer, John
2014-05-01
The NASA Earth Venture Cyclone Global Navigation Satellite System (CYGNSS) is a spaceborne mission scheduled to launch in October 2016 that is focused on tropical cyclone (TC) inner core process studies. CYGNSS is specifically designed to address the inadequacy in observations of the inner core that result from two causes: 1) much of the inner core ocean surface is obscured from conventional remote sensing instruments by intense precipitation in the eye wall and inner rain bands; and 2) the rapidly evolving (genesis and intensification) stages of the TC life cycle are poorly sampled in time by conventional polar-orbiting, wide-swath surface wind imagers. CYGNSS measurements of bistatic radar cross section of the ocean can be directly related to the near surface wind speed, in a manner roughly analogous to that of conventional ocean wind scatterometers. The technique has been demonstrated previously from space by the UK-DMC mission in 2005-6. CYGNSS will advance the wind measuring capability demonstrated by the experimental payload on UK-DMC to a more mature ocean science mission. The CYGNSS constellation is comprised of 8 observatories in 500 km circular orbits at a common inclination angle of 35°. Each observatory contains a Delay Doppler Mapping Instrument (DDMI) which consists of a multi-channel GPS receiver, a low gain zenith antenna and two high gain nadir antennas. Each DDMI measures simultaneous specular scattered signals from the 4 GPS transmitters with the highest probable signal-to-noise ratio. The receivers coherently integrate the received signals for 1 ms, then incoherently integrate on board for an additional one second. This results in 32 wind measurements per second. CYGNSS has spatial and temporal sampling properties that are distinctly different from conventional wide-swath polar imagers. Spatial sampling is marked by 32 simultaneous single pixel "swaths" that are 25 km wide and, typically, 100s of km long. They can be considered roughly
The design of high-temperature thermal conductivity measurements apparatus for thin sample size
Directory of Open Access Journals (Sweden)
Hadi Syamsul
2017-01-01
This study presents the design, construction, and validation of a thermal conductivity apparatus using steady-state heat-transfer techniques with the capability of testing a material at high temperatures. The design is an improvement on the ASTM D5470 standard, in which meter-bars with equal cross-sectional areas are used to extrapolate surface temperatures and measure heat transfer across a sample. There were two meter-bars in the apparatus, each fitted with three thermocouples. The apparatus uses a heater with a power of 1,000 watts and cooling water to reach a stable condition. The applied pressure was 3.4 MPa over the 113.09 mm2 cross-sectional area of the meter-bar, with thermal grease to minimize interfacial thermal contact resistance. To determine the performance, validation proceeded by comparing the results with thermal conductivities obtained with the THB 500 made by LINSEIS. The tests showed thermal conductivities of stainless steel and bronze of 15.28 Wm-1K-1 and 38.01 Wm-1K-1, differing from the THB 500 by −2.55% and 2.49%. Furthermore, the apparatus can measure thermal conductivity up to a temperature of 400°C, where the result for stainless steel is 19.21 Wm-1K-1 and the difference was 7.93%.
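A minimal sketch of the ASTM D5470-style data reduction described above: fit a line to each meter-bar's thermocouple readings, extrapolate to the sample faces, estimate the heat flux from the bar's gradient, then apply Fourier's law. The thermocouple positions and readings are made up for illustration:

```python
def linear_fit(x, y):
    """Least-squares line y = a + b*x, used to extrapolate bar temperatures."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    b = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
        sum((xi - xm) ** 2 for xi in x)
    return ym - b * xm, b

def sample_conductivity(z_hot, T_hot, z_cold, T_cold, k_bar, thickness):
    """Extrapolate both meter-bar temperature profiles to the sample faces
    (taken here at z = 0 and z = thickness), estimate the heat flux from the
    hot bar's gradient, then apply Fourier's law to the sample."""
    a_h, b_h = linear_fit(z_hot, T_hot)    # hot bar occupies z < 0
    a_c, b_c = linear_fit(z_cold, T_cold)  # cold bar occupies z > thickness
    q = -k_bar * b_h                       # heat flux density, W/m^2
    T_top = a_h                            # hot face temperature at z = 0
    T_bot = a_c + b_c * thickness          # cold face temperature
    return q * thickness / (T_top - T_bot)

# Hypothetical readings: stainless-steel bars (k_bar = 15.3 W/m/K), three
# thermocouples per bar, and a 2 mm thick sample.
z_hot = [-0.030, -0.020, -0.010]   # m
T_hot = [120.0, 110.0, 100.0]      # deg C (gradient -1000 K/m)
z_cold = [0.012, 0.022, 0.032]
T_cold = [60.0, 50.0, 40.0]
k = sample_conductivity(z_hot, T_hot, z_cold, T_cold, k_bar=15.3, thickness=0.002)
print(f"sample thermal conductivity = {k:.2f} W/m/K")  # 1.53 W/m/K
```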
Gray, Brian R.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.
2012-01-01
Variance components may play multiple roles (cf. Cox and Solomon 2003). First, magnitudes and relative magnitudes of the variances of random factors may have important scientific and management value in their own right. For example, variation in levels of invasive vegetation among and within lakes may suggest causal agents that operate at both spatial scales – a finding that may be important for scientific and management reasons. Second, variance components may also be of interest when they affect precision of means and covariate coefficients. For example, variation in the effect of water depth on the probability of aquatic plant presence in a study of multiple lakes may vary by lake. This variation will affect the precision of the average depth-presence association. Third, variance component estimates may be used when designing studies, including monitoring programs. For example, to estimate the numbers of years and of samples per year required to meet long-term monitoring goals, investigators need estimates of within and among-year variances. Other chapters in this volume (Chapters 7, 8, and 10) as well as extensive external literature outline a framework for applying estimates of variance components to the design of monitoring efforts. For example, a series of papers with an ecological monitoring theme examined the relative importance of multiple sources of variation, including variation in means among sites, years, and site-years, for the purposes of temporal trend detection and estimation (Larsen et al. 2004, and references therein).
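The third use above, feeding variance component estimates into design decisions such as numbers of years and samples per year, can be sketched with method-of-moments (one-way random effects ANOVA) estimators; the monitoring counts below are invented:

```python
def variance_components(groups):
    """Method-of-moments (one-way random effects ANOVA) estimates of the
    among-group and within-group variance components. `groups` is a list of
    equal-sized lists, e.g. samples per year."""
    k, n = len(groups), len(groups[0])
    group_means = [sum(g) / n for g in groups]
    grand = sum(group_means) / k
    ms_among = n * sum((m - grand) ** 2 for m in group_means) / (k - 1)
    ms_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means)
                    for x in g) / (k * (n - 1))
    var_among = max(0.0, (ms_among - ms_within) / n)  # truncate at zero
    return var_among, ms_within

def var_of_overall_mean(var_among, var_within, n_years, n_per_year):
    """Precision a planned design delivers: variance of the overall mean."""
    return var_among / n_years + var_within / (n_years * n_per_year)

# Hypothetical monitoring data: 3 years x 4 samples per year.
data = [[10, 12, 11, 13], [18, 17, 19, 20], [14, 13, 15, 14]]
va, vw = variance_components(data)
print(f"among-year variance: {va:.2f}, within-year variance: {vw:.2f}")
print("variance of a 5-year mean with 8 samples/yr:",
      var_of_overall_mean(va, vw, 5, 8))
```

Because the among-year component enters the mean's variance divided only by the number of years, adding years typically buys more precision than adding samples within a year when the among-year component dominates, which is the usual reason such estimates drive monitoring design.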
Design and Use of a Full Flow Sampling System (FFS) for the Quantification of Methane Emissions.
Johnson, Derek R; Covington, April N; Clark, Nigel N
2016-06-12
The use of natural gas continues to grow with increased discovery and production of unconventional shale resources. At the same time, the natural gas industry faces continued scrutiny for methane emissions from across the supply chain, due to methane's relatively high global warming potential (25-84x that of carbon dioxide, according to the Energy Information Administration). A variety of techniques of varying uncertainty exists to measure or estimate methane emissions from components or facilities. Currently, only one commercial system is available for quantification of component-level emissions, and recent reports have highlighted its weaknesses. In order to improve accuracy and increase measurement flexibility, we have designed, developed, and implemented a novel full flow sampling system (FFS) for quantification of methane emissions and greenhouse gases based on transportation emissions measurement principles. The FFS is a modular system that consists of an explosion-proof blower(s), mass airflow sensor(s) (MAF), thermocouple, sample probe, constant volume sampling pump, laser-based greenhouse gas sensor, data acquisition device, and analysis software. Depending upon the blower and hose configuration employed, the current FFS is able to achieve a flow rate ranging from 40 to 1,500 standard cubic feet per minute (SCFM). Utilization of laser-based sensors mitigates interference from higher hydrocarbons (C2+). Co-measurement of water vapor allows for humidity correction. The system is portable, with multiple configurations for a variety of applications ranging from being carried by a person to being mounted in a hand-drawn cart, on-road vehicle bed, or the bed of a utility terrain vehicle (UTV). The FFS is able to quantify methane emission rates with a relative uncertainty of ±4.4%. The FFS has proven real-world operation for the quantification of methane emissions occurring at conventional and remote facilities.
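A simplified sketch of the full-flow quantification principle (captured volumetric flow times the methane concentration rise over background); the flow value, concentrations, and the constant-density shortcut are illustrative assumptions, not the system's actual calibration:

```python
def methane_rate_g_per_hr(flow_scfm, ch4_sample_ppm, ch4_background_ppm,
                          ch4_density_g_per_m3=657.0):
    """Simplified full-flow emission estimate: captured flow multiplied by the
    methane volume-fraction rise over background. Assumes a fixed CH4 density
    of ~657 g/m^3 (approximate, near room temperature at 1 atm) and ignores
    humidity and standard-condition corrections a real FFS would apply."""
    m3_per_scf = 0.0283168
    flow_m3_hr = flow_scfm * m3_per_scf * 60
    delta_fraction = (ch4_sample_ppm - ch4_background_ppm) * 1e-6
    return flow_m3_hr * delta_fraction * ch4_density_g_per_m3

# Hypothetical leak: 500 SCFM of captured flow, 150 ppm CH4 vs 2 ppm background.
rate = methane_rate_g_per_hr(500, 150, 2)
print(f"~{rate:.0f} g CH4/hr")
```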
Sampling effects on the identification of roadkill hotspots: Implications for survey design.
Santos, Sara M; Marques, J Tiago; Lourenço, André; Medinas, Denis; Barbosa, A Márcia; Beja, Pedro; Mira, António
2015-10-01
Although locating wildlife roadkill hotspots is essential to mitigate road impacts, the influence of study design on hotspot identification remains uncertain. We evaluated how sampling frequency affects the accuracy of hotspot identification, using a dataset of vertebrate roadkills (n = 4427) recorded over a year of daily surveys along 37 km of roads. "True" hotspots were identified using this baseline dataset, as the 500-m segments where the number of road-killed vertebrates exceeded the upper 95% confidence limit of the mean, assuming a Poisson distribution of roadkills per segment. "Estimated" hotspots were identified likewise, using datasets representing progressively lower sampling frequencies, which were produced by extracting data from the baseline dataset at appropriate time intervals (1-30 days). Overall, 24.3% of segments were "true" hotspots, concentrating 40.4% of roadkills. For different groups, "true" hotspots accounted for from 6.8% (bats) to 29.7% (small birds) of road segments, concentrating up to 60% of roadkills (lizards, lagomorphs, carnivores). Spatial congruence between "true" and "estimated" hotspots declined rapidly with increasing time interval between surveys, due primarily to increasing false negatives (i.e., missed "true" hotspots). There were also false positives (i.e., wrong "estimated" hotspots), particularly at low sampling frequencies. The decay in spatial accuracy with increasing time interval between surveys was greater for smaller-bodied (amphibians, reptiles, small birds, small mammals) than for larger-bodied species (birds of prey, hedgehogs, lagomorphs, carnivores). Results suggest that widely used surveys at weekly or longer intervals may produce poor estimates of roadkill hotspots, particularly for small-bodied species. Surveying daily or at two-day intervals may be required to achieve high accuracy in hotspot identification for multiple species. Copyright © 2015 Elsevier Ltd. All rights reserved.
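The hotspot rule described above can be sketched directly; one common reading of "the upper 95% confidence limit of the mean, assuming a Poisson distribution" is mean + 1.96·√mean, since a Poisson variance equals its mean. The counts below are invented:

```python
# Sketch of the Poisson hotspot rule: a road segment is flagged when its
# roadkill count exceeds mean + z*sqrt(mean) (normal approximation to the
# Poisson upper limit). Segment counts are illustrative, not the study's.

import math

def hotspots(counts, z=1.96):
    mean = sum(counts) / len(counts)
    upper = mean + z * math.sqrt(mean)   # Poisson: variance == mean
    return [i for i, c in enumerate(counts) if c > upper]
```

With counts `[1, 2, 1, 0, 2, 10, 1, 1]` only the segment with 10 kills exceeds the limit and is flagged.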
Institute of Scientific and Technical Information of China (English)
Yaqin Li; Guoshu Jian; Shifa Wu
2006-01-01
The rational design of the sample cell may improve the sensitivity of surface-enhanced Raman scattering (SERS) detection to a high degree. Finite difference time domain (FDTD) simulations of the configuration of Ag film-Ag particles illuminated by plane wave and evanescent wave are performed to provide physical insight for design of the sample cell. Numerical solutions indicate that the sample cell can provide more "hot spots", and massive field intensity enhancement occurs in these "hot spots". More information on the nanometer character of the sample can be obtained because of the gradient-field Raman (GFR) of the evanescent wave.
Design and Demonstration of a Material-Plasma Exposure Target Station for Neutron Irradiated Samples
Energy Technology Data Exchange (ETDEWEB)
Rapp, Juergen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Aaron, A. M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bell, Gary L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burgess, Thomas W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ellis, Ronald James [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Giuliano, D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Howard, R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kiggans, James O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lessard, Timothy L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ohriner, Evan Keith [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Perkins, Dale E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Varma, Venugopal Koikal [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-10-20
The target station will expose samples to steady-state heat fluxes of 5–20 MW/m^{2} and ion fluxes up to 10^{24} m^{-2}s^{-1}. Since PFCs will have to withstand neutron irradiation displacement damage up to 50 dpa, the target station design must accommodate radioactive specimens (materials to be irradiated in HFIR or at SNS) to enable investigations of the impact of neutron damage on materials. Therefore, the system will have to be able to install and extract irradiated specimens using equipment and methods to avoid sample modification, control contamination, and minimize worker dose. Included in the design considerations will be an assessment of all the steps between neutron irradiation and post-exposure materials examination/characterization, as well as an evaluation of the facility hazard categorization. In particular, the factors associated with the acquisition of radioactive specimens and their preparation, transportation, experimental configuration at the plasma-specimen interface, post-plasma-exposure sample handling, and specimen preparation will be evaluated. Neutronics calculations to determine the dose rates of the samples were carried out for a large number of potential plasma-facing materials.
Directory of Open Access Journals (Sweden)
Shanyou Zhu
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
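The stratified-versus-simple-random comparison above can be illustrated with standard expansion estimators; the data and stratum construction below are synthetic stand-ins for the MODIS-derived strata:

```python
# Sketch comparing two design-based estimators of a total: simple random
# sampling (SRS) vs. stratified sampling with strata built from an auxiliary
# indicator (standing in for MODIS-derived deforestation hotspots).
# All values are synthetic.

import random

def estimate_total_srs(values, n, rng):
    sample = rng.sample(values, n)
    return len(values) * sum(sample) / n

def estimate_total_stratified(strata, n_per_stratum, rng):
    # Sum of per-stratum expansions: N_h * sample mean within stratum h.
    total = 0.0
    for values in strata:
        sample = rng.sample(values, min(n_per_stratum, len(values)))
        total += len(values) * sum(sample) / len(sample)
    return total
```

When the strata are internally homogeneous (as hotspot stratification aims for), the stratified estimator has much lower variance than SRS for the same total sample size.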
Melvin, Neal R; Poda, Daniel; Sutherland, Robert J
2007-10-01
When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
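The systematic random sampling scheme described above (random start, then equidistant sites) can be sketched for one axis; units and names are illustrative:

```python
# Sketch of systematic random sampling along one axis: draw one random start
# inside the first interval, then place sites at equidistant steps across the
# extent of the structure. Extent and step units are arbitrary.

import random

def systematic_sites(extent, step, rng=random):
    start = rng.uniform(0, step)   # single random start in [0, step)
    sites = []
    x = start
    while x < extent:
        sites.append(x)
        x += step
    return sites
```

A single random draw fixes the entire grid, which is what makes the design both unbiased and highly efficient.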
Silvia, Paul J; Kwapil, Thomas R; Walsh, Molly A; Myin-Germeys, Inez
2014-03-01
Experience-sampling research involves trade-offs between the number of questions asked per signal, the number of signals per day, and the number of days. By combining planned missing-data designs and multilevel latent variable modeling, we show how to reduce the items per signal without reducing the number of items. After illustrating different designs using real data, we present two Monte Carlo studies that explored the performance of planned missing-data designs across different within-person and between-person sample sizes and across different patterns of response rates. The missing-data designs yielded unbiased parameter estimates but slightly higher standard errors. With realistic sample sizes, even designs with extensive missingness performed well, so these methods are promising additions to an experience-sampler's toolbox.
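A common planned missing-data layout of the kind the abstract describes is a rotating "three-form" design: every signal asks a common item block plus two of three rotating blocks, so each item still appears on two-thirds of signals without lengthening any one questionnaire. A sketch with invented block labels:

```python
# Sketch of a planned missing-data ("three-form") assignment for an
# experience-sampling study: each signal carries the common block plus two of
# three rotating blocks, cycled across signals. Block names are invented.

from itertools import combinations

def assign_forms(n_signals, blocks=("A", "B", "C")):
    forms = list(combinations(blocks, 2))   # (A,B), (A,C), (B,C)
    return [("common",) + forms[i % len(forms)] for i in range(n_signals)]
```

Over six signals each rotating block is asked four times, i.e. on exactly two-thirds of signals, which is the planned missingness the multilevel model then accounts for.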
A Hierarchical Sensor Network Based on Voronoi Diagram
Institute of Scientific and Technical Information of China (English)
SHANG Rui-qiang; ZHAO Jian-li; SUN Qiu-xia; WANG Guang-xing
2006-01-01
A hierarchical sensor network is proposed which places the sensing and routing capacity at different layer nodes. It thus simplifies the hardware design and reduces cost. Adopting the Voronoi diagram in the partition of the backbone network, a mathematical model of data aggregation based on hierarchical architecture is given. Simulation shows that the number of transmitted data packages is sharply cut down in the network, reducing the demands on bandwidth and energy resources; the design is thus well adapted to sensor networks.
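Assigning each low-layer sensing node to its nearest backbone node is exactly the partition that a Voronoi diagram of the backbone induces; a toy sketch with invented coordinates:

```python
# Toy Voronoi-based partition: each sensing node reports to the nearest
# backbone node, which is the cell assignment a Voronoi diagram of the
# backbone sites induces. Coordinates are invented.

def voronoi_assign(sensors, backbones):
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return {s: min(range(len(backbones)), key=lambda i: d2(s, backbones[i]))
            for s in sensors}
```

Aggregating within each cell before forwarding is what cuts the number of transmitted packages in the simulation described above.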
Hierarchical partial order ranking.
Carlsen, Lars
2008-09-01
Assessing the potential impact on environmental and human health from the production and use of chemicals or from polluted sites involves a multi-criteria evaluation scheme. A priori, several parameters are to be addressed, e.g., production tonnage, specific release scenarios, and geographical and site-specific factors, in addition to various substance-dependent parameters. Further socio-economic factors may be taken into consideration. The number of parameters to be included may well appear prohibitive for developing a sensible model. The study introduces hierarchical partial order ranking (HPOR), which remedies this problem. In HPOR the original parameters are initially grouped based on their mutual connection, and a set of meta-descriptors is derived representing the ranking corresponding to the single groups of descriptors, respectively. A second partial order ranking is carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritization of polluted sites is given.
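The building block of HPOR is partial order ranking with average ranks: object a sits above b only when a is at least as good on every criterion, and the final rank of each object is averaged over all linear extensions of that partial order. A tiny sketch that enumerates linear extensions exhaustively (feasible only for a handful of objects; real tools approximate this), with invented descriptor values:

```python
# Sketch of partial order ranking with average ranks. An object dominates
# another when it is at least as good on every criterion; average ranks are
# taken over all rankings (linear extensions) consistent with the partial
# order. Exhaustive enumeration is for illustration only.

from itertools import permutations

def dominates(a, b):
    return a != b and all(x >= y for x, y in zip(a, b))

def average_ranks(objs):
    names = list(objs)
    n = len(names)
    totals = {k: 0 for k in names}
    count = 0
    for perm in permutations(names):
        # best-to-worst order: a later object must never dominate an earlier
        if all(not dominates(objs[perm[j]], objs[perm[i]])
               for i in range(n) for j in range(i + 1, n)):
            count += 1
            for rank, k in enumerate(perm, 1):
                totals[k] += rank
    return {k: totals[k] / count for k in names}
```

Incomparable objects (here b and c) receive equal average ranks, which is how partial order ranking expresses the lack of a forced ordering.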
Trees and Hierarchical Structures
Haeseler, Arndt
1990-01-01
The "raison d'être" of hierarchical clustering theory stems from one basic phenomenon: the notorious non-transitivity of similarity relations. In spite of the fact that very often two objects may be quite similar to a third without being that similar to each other, one still wants to classify objects according to their similarity. This should be achieved by grouping them into a hierarchy of non-overlapping clusters such that any two objects in one cluster appear to be more related to each other than they are to objects outside this cluster. In everyday life, as well as in essentially every field of scientific investigation, there is an urge to reduce complexity by recognizing and establishing reasonable classification schemes. Unfortunately, this is counterbalanced by the experience of seemingly unavoidable deadlocks caused by the existence of sequences of objects, each comparatively similar to the next, but the last rather different from the first.
Optimisation by hierarchical search
Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias
2015-03-01
Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
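The group-wise optimisation idea above can be caricatured as block updates on a small Ising-like cost: instead of searching over all variables at once, exhaustively optimise one small group of variables at a time with the rest held fixed. The instance, grouping, and update rule here are ours, not the authors':

```python
# Toy hierarchical/group-wise optimisation: repeatedly minimise an Ising-like
# cost over one group of +-1 variables at a time, holding the others fixed.
# The couplings J and the grouping are invented for illustration.

def cost(x, J):
    return -sum(J[i][j] * x[i] * x[j]
                for i in range(len(x)) for j in range(i + 1, len(x)))

def optimise_groups(x, J, groups, sweeps=5):
    x = list(x)
    for _ in range(sweeps):
        for g in groups:                     # one group at a time
            x = min(_group_states(x, g), key=lambda y: cost(y, J))
    return x

def _group_states(x, g):
    # All +-1 assignments of the variables in group g, rest held fixed.
    out = []
    for mask in range(2 ** len(g)):
        y = list(x)
        for k, i in enumerate(g):
            y[i] = 1 if (mask >> k) & 1 else -1
        out.append(y)
    return out
```

Because each group is small, its exhaustive sub-search is cheap, yet repeated sweeps can escape configurations where single-variable moves are stuck.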
How hierarchical is language use?
Frank, Stefan L.; Bod, Rens; Christiansen, Morten H.
2012-01-01
It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science.
Shirley, Matthew H.; Dorazio, Robert M.; Abassery, Ekramy; Elhady, Amr A.; Mekki, Mohammed S.; Asran, Hosni H.
2012-01-01
As part of the development of a management program for Nile crocodiles in Lake Nasser, Egypt, we used a dependent double-observer sampling protocol with multiple observers to compute estimates of population size. To analyze the data, we developed a hierarchical model that allowed us to assess variation in detection probabilities among observers and survey dates, as well as account for variation in crocodile abundance among sites and habitats. We conducted surveys from July 2008-June 2009 in 15 areas of Lake Nasser that were representative of 3 main habitat categories. During these surveys, we sampled 1,086 km of lake shore wherein we detected 386 crocodiles. Analysis of the data revealed significant variability in both inter- and intra-observer detection probabilities. Our raw encounter rate was 0.355 crocodiles/km. When we accounted for observer effects and habitat, we estimated a surface population abundance of 2,581 (2,239-2,987, 95% credible intervals) crocodiles in Lake Nasser. Our results underscore the importance of well-trained, experienced monitoring personnel in order to decrease heterogeneity in intra-observer detection probability and to better detect changes in the population based on survey indices. This study will assist the Egyptian government in establishing a monitoring program as an integral part of future crocodile harvest activities in Lake Nasser.
Associative Hierarchical Random Fields.
Ladický, L'ubor; Russell, Chris; Kohli, Pushmeet; Torr, Philip H S
2014-06-01
This paper makes two contributions: the first is the proposal of a new model, the associative hierarchical random field (AHRF), together with a novel algorithm for its optimization; the second is the application of this model to the problem of semantic segmentation. Most methods for semantic segmentation are formulated as a labeling problem for variables that might correspond to either pixels or segments such as superpixels. It is well known that the generation of superpixel segmentations is not unique. This has motivated many researchers to use multiple superpixel segmentations for problems such as semantic segmentation or single view reconstruction. These superpixels have not yet been combined in a principled manner; this is a difficult problem, as they may overlap, or be nested in such a way that the segmentations form a segmentation tree. Our new hierarchical random field model allows information from all of the multiple segmentations to contribute to a global energy. MAP inference in this model can be performed efficiently using powerful graph cut based move making algorithms. Our framework generalizes much of the previous work based on pixels or segments, and the resulting labelings can be viewed both as a detailed segmentation at the pixel level, or, at the other extreme, as a segment selector that pieces together a solution like a jigsaw, selecting the best segments from different segmentations as pieces. We evaluate its performance on some of the most challenging data sets for object class segmentation, and show that this ability to perform inference using multiple overlapping segmentations leads to state-of-the-art results.
Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John
2012-01-01
Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.
Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders
2017-10-01
Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support detecting temporal trends of contaminants. In the present study, based on real chemical analysis data on polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulation is performed to evaluate three sampling strategies by estimating the sample size required to detect an annual change of 5% with a statistical power of 80% and 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected sampling spot is the worst strategy, requiring many more individual analyses to achieve the above-mentioned criteria compared with the other two approaches. A fixed sampling site requires the lowest sample size but may not be representative of the intended study object, e.g. a lake, and is also sensitive to changes at that particular sampling site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost to collect and prepare individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as the criterion, the results demonstrated quantitatively the consequences of various sampling strategies, and could guide users with respect to required sample sizes, depending on sampling design, for long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
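The simulation-based power evaluation described above can be sketched as: simulate an annual 5% change with noise on a log scale, fit a linear trend, and count the fraction of significant slopes. The distributional choices and the normal approximation to the t cutoff are our simplifications:

```python
# Sketch of power estimation by simulation for a log-linear trend: generate
# noisy log-concentrations declining 5% per year, test the fitted slope, and
# report the fraction of significant outcomes. Effect size, noise SD, and
# the z cutoff are illustrative assumptions.

import math, random

def trend_power(n_years=10, n_per_year=10, annual_change=-0.05,
                sd=0.3, n_sim=200, rng=random):
    hits = 0
    for _ in range(n_sim):
        xs, ys = [], []
        for year in range(n_years):
            for _ in range(n_per_year):
                xs.append(year)
                ys.append(annual_change * year + rng.gauss(0.0, sd))
        hits += _slope_significant(xs, ys)
    return hits / n_sim

def _slope_significant(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    resid = [y - my - b * (x - mx) for x, y in zip(xs, ys)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return abs(b / se) > 1.96      # normal approx. to the t cutoff
```

Raising `n_per_year` until `trend_power` crosses 0.8 or 0.9 is the "required sample size" calculation the study performs for each sampling strategy.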
Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus
Jelonek, M
2006-01-01
The aim of this paper is to present the technique (and its linkage with physics) of overcoming problems connected to modeling social structures, which are typically hierarchical. Hierarchical Linear Models provide a conceptual and statistical mechanism for drawing conclusions regarding the influence of phenomena at different levels of analysis. In the social sciences they are used to analyze many problems, such as educational, organizational or market dilemmas. This paper introduces the logic of modeling hierarchical linear equations and estimation based on MPlus software. I present my own model to illustrate the impact of different factors on school acceptance level.
Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P
2013-11-01
Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which considers the treatment effect to be a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size re-estimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...
Bayesian hierarchical modeling of drug stability data.
Chen, Jie; Zhong, Jinglin; Nie, Lei
2008-06-15
Stability data are commonly analyzed using a linear fixed or random effects model. The linear fixed effects model does not take into account the batch-to-batch variation, whereas the random effects model may suffer from unreliable shelf-life estimates due to small sample sizes. Moreover, neither method utilizes any prior information that might be available. In this article, we propose a Bayesian hierarchical approach to modeling drug stability data. Under this hierarchical structure, we first use a Bayes factor to test the poolability of batches. Given the decision on poolability of batches, we then estimate the shelf-life that applies to all batches. The approach is illustrated with two example data sets, and its performance is compared in simulation studies with that of the commonly used frequentist methods. (c) 2008 John Wiley & Sons, Ltd.
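For context, the frequentist per-batch estimate the authors improve on looks roughly like this: regress potency on time and report when the lower 95% confidence bound for the mean response crosses the specification limit. The data, the 90% specification limit, and the normal-approximation z are invented:

```python
# Sketch of a frequentist shelf-life estimate from stability data: fit
# potency ~ time by least squares, then scan for the first month where the
# lower confidence bound for the mean potency falls below the specification
# limit. Data, spec limit, and z cutoff are illustrative.

import math

def shelf_life(times, potencies, spec=90.0, z=1.96, horizon=60):
    n = len(times)
    mx, my = sum(times) / n, sum(potencies) / n
    sxx = sum((t - mx) ** 2 for t in times)
    b = sum((t - mx) * (y - my) for t, y in zip(times, potencies)) / sxx
    a = my - b * mx
    s2 = sum((y - (a + b * t)) ** 2 for t, y in zip(times, potencies)) / (n - 2)
    for month in range(horizon + 1):
        se = math.sqrt(s2 * (1 / n + (month - mx) ** 2 / sxx))
        if a + b * month - z * se < spec:
            return month
    return horizon
```

With few batches this estimate is noisy, which is exactly the weakness the Bayesian hierarchy with batch pooling addresses.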
Energy Technology Data Exchange (ETDEWEB)
JANICEK, G.P.
2000-06-08
Report documenting the Formal Design Review conducted on portable exhausters used to support rotary mode core sampling of Hanford underground radioactive waste tanks, with focus on Safety Class design features, control requirements for operation in a flammable gas environment, and air discharge permitting compliance.
Saldanha, Luis
2016-01-01
This article reports on a classroom teaching experiment that engaged a group of high school students in designing sampling simulations within a computer microworld. The simulation-design activities aimed to foster students' abilities to conceive of contextual situations as stochastic experiments, and to engage them with the logic of hypothesis…
Energy Technology Data Exchange (ETDEWEB)
Romero, Vicente Jose
2011-11-01
This report explores some important considerations in devising a practical and consistent framework and methodology for utilizing experiments and experimental data to support modeling and prediction. A pragmatic and versatile 'Real Space' approach is outlined for confronting experimental and modeling bias and uncertainty to mitigate risk in modeling and prediction. The elements of experiment design and data analysis, data conditioning, model conditioning, model validation, hierarchical modeling, and extrapolative prediction under uncertainty are examined. An appreciation can be gained for the constraints and difficulties at play in devising a viable end-to-end methodology. Rationale is given for the various choices underlying the Real Space end-to-end approach. The approach adopts and refines some elements and constructs from the literature and adds pivotal new elements and constructs. Crucially, the approach reflects a pragmatism and versatility derived from working many industrial-scale problems involving complex physics and constitutive models, steady-state and time-varying nonlinear behavior and boundary conditions, and various types of uncertainty in experiments and models. The framework benefits from a broad exposure to integrated experimental and modeling activities in the areas of heat transfer, solid and structural mechanics, irradiated electronics, and combustion in fluids and solids.
Institute of Scientific and Technical Information of China (English)
张海涛; 饶志坚; 李俊杰; 高泉; 邢晓庆
2014-01-01
Course scheduling is an important task for ensuring the normal order of teaching in universities, but its conflict detection algorithms are complex and the workload is heavy, so most current course scheduling systems adopt a C/S structure. This article explains the main principles of course scheduling in universities using .NET technology, designs and implements a WEB-based university hierarchical course scheduling system, and proposes a more efficient binary occupancy conflict detection algorithm for course scheduling. The system has been running in practice at Yunnan Agricultural University with good results.
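The "binary occupancy" conflict check mentioned above can be sketched with bitmasks: encode each course's weekly time slots as bits, so two courses sharing a teacher, room, or class clash exactly when the AND of their masks is non-zero. The slot layout is invented:

```python
# Sketch of bitmask-based timetable conflict detection: each weekly slot is
# one bit, a course occupies a set of bits, and a clash is a non-zero AND.
# The 5-day x 8-period layout is an illustrative assumption.

SLOTS_PER_WEEK = 5 * 8   # e.g. 5 days x 8 periods

def mask(slot_indices):
    m = 0
    for s in slot_indices:
        m |= 1 << s
    return m

def conflicts(mask_a, mask_b):
    return (mask_a & mask_b) != 0
```

A single AND per candidate pair replaces a nested scan over slot lists, which is where the claimed efficiency gain comes from.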
Institute of Scientific and Technical Information of China (English)
李竹林; 马燕
2012-01-01
In a modern enterprise, the staff hierarchy is clearly defined and work tasks are finely divided, so flexibility in the business allocation mechanism is essential. How to achieve flexible and dynamic business allocation is therefore a problem that must be solved. This paper proposes a hierarchical control model based on user roles and departments to realize dynamic allocation of business within an enterprise, and applies it to the crude oil production management system of an oil production plant, where it has achieved good results.
Hierarchical Robot Control In A Multisensor Environment
Bhanu, Bir; Thune, Nils; Lee, Jih Kun; Thune, Mari
1987-03-01
Automatic recognition, inspection, manipulation and assembly of objects will be a common denominator in most of tomorrow's highly automated factories. These tasks will be handled by intelligent, computer-controlled robots with multisensor capabilities, which contribute to the desired flexibility and adaptability. The control of a robot in such a multisensor environment becomes of crucial importance, as the complexity of the problem grows exponentially with the number of sensors, tasks, commands and objects. In this paper we present an approach which uses CAD (Computer-Aided Design) based geometric and functional models of objects, together with action-oriented neuroschemas, to recognize and manipulate objects by a robot in a multisensor environment. The hierarchical robot control system is being implemented on a BBN Butterfly multiprocessor. Index terms: CAD, Hierarchical Control, Hypothesis Generation and Verification, Parallel Processing, Schemas
Institute of Scientific and Technical Information of China (English)
朱文忠
2014-01-01
To improve power data scheduling efficiency and shorten scheduling delay, an improved communication-conflict-free distributed power data aggregation scheduling approximation algorithm is proposed. A sink-rooted data aggregation tree performs hierarchical scheduling of the power resource data at each node of the wireless sensor network, and control information between power nodes is continuously fused on the basis of distributed data sets. A data aggregation tree rooted at the Sink is constructed on a maximum independent set, and each node is assigned a time slot so that it can transmit data without communication conflicts. Simulation results show that the improved algorithm significantly reduces aggregation delay, effectively guaranteeing the real-time performance of power dispatching control; the hierarchical fusion degree of power data reaches more than 90%, compared with only 10%-50% for the original algorithm.
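The paper's construction builds on a maximum independent set; as a much simpler illustration of conflict-free scheduling on a sink-rooted aggregation tree, every node can be given a distinct slot with children scheduled before their parents (toy topology and rule, not the authors' algorithm):

```python
# Toy sink-rooted aggregation tree: child -> parent. Node 0 is the sink.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}

def schedule(parent):
    """Assign each non-sink node a unique time slot, deepest nodes first,
    so every child transmits before its parent aggregates (trivially
    conflict-free because no two nodes ever share a slot)."""
    def depth(n):
        d = 0
        while n in parent:
            n = parent[n]
            d += 1
        return d
    order = sorted(parent, key=depth, reverse=True)
    return {node: slot for slot, node in enumerate(order)}

slots = schedule(parent)
# Sanity check: every child transmits strictly before its parent.
for child, par in parent.items():
    if par in slots:                 # the sink itself never transmits
        assert slots[child] < slots[par]
print(slots)
```

Real aggregation schedulers reuse slots among non-interfering nodes to shorten the delay; the one-slot-per-node rule here trades delay for an obviously conflict-free schedule.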
Design and Realization of Hierarchical Dynamic Real-time Scheduling Framework
Institute of Scientific and Technical Information of China (English)
胡家义; 张激; 刘玲
2013-01-01
Embedded systems development faces new trends, including varied usage scenarios, strict real-time requirements, complex upper-layer applications, and the need for strong robustness, which demand improved safety at the embedded operating system level. Temporal isolation is an important mechanism for improving system safety, and a hierarchical dynamic real-time scheduling framework is proposed here as its implementation strategy. The paper uses task homogeneity to generate task sets, which serve as the basis for task partitioning in the hierarchical framework; it proves the schedulability condition of the framework, designs the structure of the scheduling algorithm, and realizes dynamic switching among scheduling algorithms. Simulation results and theoretical analysis indicate that the proposed framework can improve system safety and dynamically adjust to variations in system load while keeping the time complexity of context switching stable.
Hierarchical Surface Architecture of Plants as an Inspiration for Biomimetic Fog Collectors.
Azad, M A K; Barthlott, W; Koch, K
2015-12-01
Fog collectors can enable us to alleviate the water crisis in certain arid regions of the world. A continuous fog-collection cycle, consisting of persistent capture of fog droplets and their fast transport to the target, is a prerequisite for developing an efficient fog collector. In this regard, a superior biological design has been found in the hierarchical surface architecture of barley (Hordeum vulgare) awns. We demonstrate here the highly wettable (advancing contact angle 16° ± 2.7°, receding contact angle 9° ± 2.6°) barbed (barb = conical structure) awn as a model for developing optimized fog collectors with a high fog-capturing capability, effective water transport, and above all efficient fog collection. We compare the fog-collection efficiency of the model sample with that of other plant samples naturally grown in foggy habitats that are supposed to be very efficient fog collectors. The model sample, consisting of dry hydrophilized awns (DH awns), is found to be about twice as efficient (fog-collection rate 563.7 ± 23.2 μg/cm² over 10 min) as any other sample investigated under controlled experimental conditions. Finally, a design based on the hierarchical surface architecture of the model sample is proposed for the development of optimized biomimetic fog collectors.
A Hierarchical Framework for Facial Age Estimation
Directory of Open Access Journals (Sweden)
Yuyu Liang
2014-01-01
Age estimation is a complex multiclass classification or regression problem. To address the uneven age distribution of face databases and the neglect of ordinal information, this paper presents a hierarchical age estimation system comprising age-group and specific-age estimation. In our system, two novel classifiers, sequence k-nearest neighbor (SKNN) and ranking-KNN, are introduced to predict age group and age value, respectively. Notably, ranking-KNN utilizes the ordinal information between samples in the estimation process rather than regarding samples as separate individuals. Tested on the FG-NET database, our system achieves a mean absolute error (MAE) of 4.97 for age estimation.
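For comparison with the paper's SKNN and ranking-KNN classifiers, a plain KNN age regressor evaluated by MAE can be sketched as follows (toy synthetic data, not FG-NET; this baseline ignores the ordinal refinements the paper introduces):

```python
import numpy as np

def knn_predict_age(train_x, train_age, x, k=3):
    """Predict age as the mean age of the k nearest training samples."""
    d = np.linalg.norm(train_x - x, axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(train_age[nearest]))

rng = np.random.default_rng(0)
ages = rng.uniform(5, 60, size=200)
# Toy 2-D "features" correlated with age plus noise.
feats = np.c_[ages + rng.normal(0, 2, 200), 0.5 * ages + rng.normal(0, 2, 200)]

test_idx, train_idx = np.arange(40), np.arange(40, 200)
preds = [knn_predict_age(feats[train_idx], ages[train_idx], feats[i])
         for i in test_idx]
mae = float(np.mean(np.abs(np.array(preds) - ages[test_idx])))
print("MAE:", round(mae, 2))
```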
Design of nonlinear discrete-time controllers using a parameter space sampling procedure
Young, G. E.; Auslander, D. M.
1983-01-01
The design of nonlinear discrete-time controllers is investigated where the control algorithm assumes a special form. State-dependent control actions are obtained from tables whose values are the design parameters. A new design methodology capable of dealing with nonlinear systems containing parameter uncertainty is used to obtain the controller design. Various controller strategies are presented and illustrated through an example.
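A minimal sketch of the table-based idea, in which state-dependent control actions are stored at sampled grid points and interpolated at run time (the toy plant, grid, and table values are illustrative assumptions, not the paper's design):

```python
import numpy as np

# Design parameters: control actions tabulated at sampled state values.
state_grid = np.linspace(-2.0, 2.0, 9)
u_table = -0.8 * state_grid        # here: a simple proportional-like table

def control(x):
    """Look up the state-dependent control action by linear interpolation."""
    return float(np.interp(x, state_grid, u_table))

def plant(x, u):
    """Toy nonlinear discrete-time plant."""
    return 0.9 * x + 0.1 * np.tanh(x) + u

x = 1.5
for _ in range(30):                # closed-loop simulation
    x = plant(x, control(x))
print("final state:", round(x, 6))
```

In the design methodology described above, the table entries themselves would be the tuned parameters; here they are fixed by hand purely to show the lookup-and-interpolate mechanism.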
Johnston, Lisa G; Chen, Yea-Hung; Silva-Santisteban, Alfonso; Raymond, H Fisher
2013-07-01
For studies using respondent driven sampling (RDS), the current practice of collecting a sample twice as large as that used in simple random sampling (SRS) (i.e., a design effect of 2.00) may not be sufficient. This paper provides empirical evidence of sample-to-sample variability in design effects using data from nine studies in six countries among injecting drug users, female sex workers, men who have sex with men, and male-to-female transgender (MTF) persons. We computed the design effect as the variance under RDS divided by the variance under SRS for a broad range of demographic and behavioral variables in each study. We also estimated several measures for each variable in each study that we hypothesized might be related to the design effect: the number of waves needed for equilibrium, homophily, and mean network size. Design effects for all studies ranged from 1.20 to 5.90. Mean design effects among all studies ranged from 1.50 to 3.70. A particularly high design effect was found for the employment status (design effect of 5.90) of MTF persons in Peru. This may be explained by a "bottleneck", defined as the occurrence of a relatively small number of recruitment ties between two groups in the population. A design effect of two for RDS studies may not be sufficient. Since the mean design effect across all studies was 2.33, an effect slightly above 2.00 may be adequate; however, an effect closer to 3.00 or 4.00 might be more appropriate.
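The design effect computation described above, and its sample-size consequence, are straightforward; a short sketch with made-up variance values, using 2.33 from the mean reported in the abstract:

```python
import math

def design_effect(var_rds, var_srs):
    """DEFF = variance of the estimator under RDS / variance under SRS."""
    return var_rds / var_srs

def required_n(n_srs, deff):
    """Inflate an SRS sample size by the design effect (rounded up)."""
    return math.ceil(n_srs * deff)

# Illustrative numbers only: an SRS analysis calls for n = 150, and the
# variable of interest shows a design effect of 2.33 (the mean reported
# across the nine studies above).
deff = design_effect(var_rds=0.00466, var_srs=0.002)
print(deff, required_n(150, deff))
```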
RISC & DSP Advanced Microprocessor System Design; Sample Projects, Fall 1991
Fredine, John E.; Goeckel, Dennis L.; Meyer, David G.; Sailer, Stuart E.; Schmottlach, Glenn E.
1992-01-01
RISC & DSP Microprocessor System Design (EE 595M) provides students with an overview of reduced instruction set (RISC) microprocessors and digital signal processing (DSP) microprocessors, with emphasis on incorporating these devices in general purpose and embedded system designs, respectively. The first half of the course emphasizes design considerations for RISC microprocessor based computer systems; a half-semester design project focuses on design principles that could be utilized in a gene...
Lotterhos, Katie E; Whitlock, Michael C
2015-03-01
Although genome scans have become a popular approach towards understanding the genetic basis of local adaptation, the field still does not have a firm grasp on how sampling design and demographic history affect the performance of genome scans on complex landscapes. To explore these issues, we compared 20 different sampling designs in equilibrium (i.e., island model and isolation by distance) and nonequilibrium (i.e., range expansion from one or two refugia) demographic histories in spatially heterogeneous environments. We simulated spatially complex landscapes, which allowed us to exploit local maxima and minima in the environment in 'pair' and 'transect' sampling strategies. We compared FST outlier and genetic-environment association (GEA) methods for each of two approaches that control for population structure: with a covariance matrix or with latent factors. We show that while the relative power of two methods in the same category (FST or GEA) depended largely on the number of individuals sampled, overall GEA tests had higher power in the island model and FST had higher power under isolation by distance. In the refugia models, however, these methods varied in their power to detect local adaptation at weakly selected loci. At weakly selected loci, paired sampling designs had equal or higher power than transect or random designs to detect local adaptation. Our results can inform sampling designs for studies of local adaptation and have important implications for the interpretation of genome scans based on landscape data. © 2015 John Wiley & Sons Ltd.
Zhang, Yu Xin; Kuang, Min; Hao, Xiao Dong; Liu, Yan; Huang, Ming; Guo, Xiao Long; Yan, Jing; Han, Gen Quan; Li, Jing
2014-12-01
A facile and large-scale strategy of mesoporous birnessite-type manganese dioxide (MnO2) nanosheets on one-dimensional (1D) H2Ti3O7 and anatase/TiO2 (B) nanowires (NWs) is developed for high performance supercapacitors. The morphological characteristics of MnO2 nanoflakes on H2Ti3O7 and anatase/TiO2 (B) NWs could be rationally designed with various characteristics (e.g., the sheet thickness, surface area). Interestingly, the MnO2/TiO2 NWs exhibit a more optimized electrochemical performance with specific capacitance of 120 F g-1 at current density of 0.1 A g-1 (based on MnO2 + TiO2) than MnO2/H2Ti3O7 NWs. An asymmetric supercapacitor of MnO2/TiO2//activated graphene (AG) yields a better energy density of 29.8 Wh kg-1 than the MnO2/H2Ti3O7//AG asymmetric supercapacitor, while maintaining desirable cycling stability. Indeed, the pseudocapacitive difference is related to the substrates, unique structure and surface area. Especially, the anatase/TiO2 (B) mixed-phase system can provide good electronic conductivity and high utilization of MnO2 nanosheets. © 2014 Elsevier B.V. All rights reserved.
Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus
Jelonek, Magdalena
2006-01-01
The aim of this paper is to present the technique (and its linkage with physics) of overcoming problems connected to modeling social structures, which are typically hierarchical. Hierarchical Linear Models provide a conceptual and statistical mechanism for drawing conclusions regarding the influence of phenomena at different levels of analysis. In the social sciences the technique is used to analyze many problems, such as educational, organizational or market dilemmas. This paper introduces the logic of m...
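The multilevel logic can be illustrated without specialized software: simulate two-level data, then recover the between- and within-group variance components from one-way ANOVA mean squares (a generic sketch, not the MPlus workflow discussed in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n_groups, n_per = 200, 30
sigma_b, sigma_w = 2.0, 1.0            # true between/within SDs

# Two-level data: a random group effect plus individual noise.
group_effects = rng.normal(0, sigma_b, n_groups)
y = group_effects[:, None] + rng.normal(0, sigma_w, (n_groups, n_per))

group_means = y.mean(axis=1)
msb = n_per * group_means.var(ddof=1)        # between-group mean square
msw = y.var(axis=1, ddof=1).mean()           # within-group mean square

var_within = msw
var_between = (msb - msw) / n_per            # classical ANOVA estimator
icc = var_between / (var_between + var_within)   # intraclass correlation
print(round(var_between, 2), round(var_within, 2), round(icc, 2))
```

With these true values (4.0 between, 1.0 within) the estimates should land near an ICC of 0.8, i.e. most of the variance sits at the group level, which is exactly the situation where a hierarchical model is needed.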
Modeling the deformation behavior of nanocrystalline alloy with hierarchical microstructures
Energy Technology Data Exchange (ETDEWEB)
Liu, Hongxi; Zhou, Jianqiu, E-mail: zhouj@njtech.edu.cn [Nanjing Tech University, Department of Mechanical Engineering (China); Zhao, Yonghao, E-mail: yhzhao@njust.edu.cn [Nanjing University of Science and Technology, Nanostructural Materials Research Center, School of Materials Science and Engineering (China)
2016-02-15
A mechanism-based plasticity model based on dislocation theory is developed to describe the mechanical behavior of hierarchical nanocrystalline alloys. The stress–strain relationship is derived by invoking the impeding effect of the intra-granular solute clusters and the inter-granular nanostructures on dislocation movement along the sliding path. We found that the interaction between dislocations and the hierarchical microstructures contributes to the strain hardening property and greatly influences the ductility of nanocrystalline metals. The analysis indicates that the proposed model can successfully describe the enhanced strength of the hierarchical nanocrystalline alloy. Moreover, the strain hardening rate is sensitive to the volume fraction of the hierarchical microstructures. The present model provides a new perspective for designing microstructures that optimize the mechanical properties of nanostructured metals.
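The paper's constitutive model is specific to hierarchical nanostructures, but the generic mechanism-based ingredients, a Taylor flow stress driven by Kocks-Mecking dislocation-density evolution, can be integrated numerically as follows (all parameter values are illustrative assumptions, not the authors' calibration):

```python
import math

# Taylor relation: sigma = sigma0 + M * alpha * G * b * sqrt(rho)
# Kocks-Mecking evolution: d(rho)/d(eps) = k1 * sqrt(rho) - k2 * rho
M, alpha, G, b = 3.06, 0.3, 45e9, 2.5e-10   # Taylor factor, const, Pa, m
sigma0 = 50e6                               # friction stress, Pa
k1, k2 = 3e8, 10.0                          # storage / recovery coefficients

rho = 1e12            # initial dislocation density (1/m^2)
deps = 1e-4           # strain increment
stress = []
for _ in range(3000):                        # integrate to 30% strain
    rho += deps * (k1 * math.sqrt(rho) - k2 * rho)
    stress.append(sigma0 + M * alpha * G * b * math.sqrt(rho))

print("flow stress at 30%% strain: %.0f MPa" % (stress[-1] / 1e6))
```

The interplay the abstract describes (obstacles raising storage, hence hardening rate) would enter through k1 and k2; here they are fixed constants purely to show the integration.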
The reflection of hierarchical cluster analysis of co-occurrence matrices in SPSS
Zhou, Q.; Leng, F.; Leydesdorff, L.
2015-01-01
Purpose: To discuss the problems arising from hierarchical cluster analysis of co-occurrence matrices in SPSS, and the corresponding solutions. Design/methodology/approach: We design different methods of using the SPSS hierarchical clustering module for co-occurrence matrices in order to compare the
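One remedy commonly discussed in this context is to convert raw co-occurrence counts into a proper proximity matrix before clustering, rather than feeding the counts to the clustering module as if they were distances; a small hand-rolled sketch (Ochiai normalization, toy counts, naive single linkage; not the paper's exact procedure):

```python
import numpy as np

# Co-occurrence counts for 4 terms (symmetric, diagonal = occurrence counts).
C = np.array([[10, 8, 1, 0],
              [ 8, 9, 1, 1],
              [ 1, 1, 7, 6],
              [ 0, 1, 6, 8]], dtype=float)

# Ochiai (cosine-like) similarity, then distance = 1 - similarity.
norms = np.sqrt(np.diag(C))
D = 1.0 - C / np.outer(norms, norms)
np.fill_diagonal(D, 0.0)

def single_linkage(D):
    """Naive agglomerative clustering; returns merges as (i, j, distance)."""
    clusters = {i: [i] for i in range(len(D))}
    merges = []
    while len(clusters) > 1:
        keys, best = list(clusters), None
        for a in range(len(keys)):
            for b in range(a + 1, len(keys)):
                i, j = keys[a], keys[b]
                d = min(D[p, q] for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((i, j, d))
        clusters[i] = clusters[i] + clusters.pop(j)
    return merges

merges = single_linkage(D)
print(merges)   # terms 0-1 and 2-3 co-occur heavily, so they merge first
```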
Petrov, Romain G; Boskri, Abdelkarim; Folcher, Jean-Pierre; Lagarde, Stephane; Bresson, Yves; Benkhaldoum, Zouhair; Lazrek, Mohamed; Rakshit, Suvendu
2014-01-01
The limiting magnitude is a key issue for optical interferometry. Pairwise fringe trackers based on the integrated optics concepts used, for example, in GRAVITY seem limited to about K=10.5 with the 8m Unit Telescopes of the VLTI, and there is a general "common sense" statement that the efficiency of fringe tracking, and hence the sensitivity of optical interferometry, must decrease as the number of apertures increases, at least in the near infrared where we are still limited by detector readout noise. Here we present a Hierarchical Fringe Tracking (HFT) concept with sensitivity at least equal to that of a two-aperture fringe tracker. HFT is based on the combination of the apertures in pairs, then in pairs of pairs, then in pairs of groups. The key HFT module is a device that behaves like a spatial filter for two telescopes (2TSF) and transmits all or most of the flux of a cophased pair in a single mode beam. We give an example of such an achromatic 2TSF, based on very broadband dispersed fringes analyzed by g...
Meneveau, Charles
2007-11-01
The massive datasets now generated by Direct Numerical Simulations (DNS) of turbulent flows create serious new challenges. During a simulation, DNS provides only a few time steps at any instant, owing to storage limitations within the computational cluster. Therefore, traditional numerical experiments done during the simulation examine each time slice only a few times before discarding it. Conversely, if a few large datasets from high-resolution simulations are stored, they are practically inaccessible to most in the turbulence research community, who lack the cyber resources to handle the massive amounts of data. Even those who can compute at that scale must run simulations again forward in time in order to answer new questions about the dynamics, duplicating computational effort. The result is that most turbulence datasets are vastly underutilized and not available as they should be for creative experimentation. In this presentation, we discuss the desired features and requirements of a turbulence database that will enable its widest access to the research community. The guiding principle of large databases is ``move the program to the data'' (Szalay et al. ``Designing and mining multi-terabyte Astronomy archives: the Sloan Digital Sky Survey,'' in ACM SIGMOD, 2000). However, in the case of turbulence research, the questions and analysis techniques are highly specific to the client and vary widely from one client to another. This poses particularly hard challenges in the design of database analysis tools. We propose a minimal set of such tools that are of general utility across various applications. And, we describe a new approach based on a Web services interface that allows a client to access the data in a user-friendly fashion while allowing maximum flexibility to execute desired analysis tasks. Sample applications will be discussed. This work is performed by the interdisciplinary ITR group, consisting of the author and Yi Li(1), Eric Perlman(2), Minping Wan(1
Nearly Cyclic Pursuit and its Hierarchical variant for Multi-agent Systems
DEFF Research Database (Denmark)
Iqbal, Muhammad; Leth, John-Josef; Ngo, Trung Dung
2015-01-01
The rendezvous problem for multiple agents under nearly cyclic pursuit and hierarchical nearly cyclic pursuit is discussed in this paper. The control law designed under the nearly cyclic pursuit strategy enables the agents to converge at a point dictated by a beacon. A hierarchical version of the nearly cyclic pursuit...
Hierarchical clustering for graph visualization
Clémençon, Stéphan; Rossi, Fabrice; Tran, Viet Chi
2012-01-01
This paper describes a graph visualization methodology based on hierarchical maximal modularity clustering, with interactive and significant coarsening and refining possibilities. An application of this method to HIV epidemic analysis in Cuba is outlined.
Direct hierarchical assembly of nanoparticles
Xu, Ting; Zhao, Yue; Thorkelsson, Kari
2014-07-22
The present invention provides hierarchical assemblies of a block copolymer, a bifunctional linking compound and a nanoparticle. The block copolymers form one micro-domain and the nanoparticles another micro-domain.
Hierarchical architecture of active knits
Abel, Julianna; Luntz, Jonathan; Brei, Diann
2013-12-01
Nature eloquently utilizes hierarchical structures to form the world around us. Applying the hierarchical architecture paradigm to smart materials can provide a basis for a new genre of actuators which produce complex actuation motions. One promising example of cellular architecture—active knits—provides complex three-dimensional distributed actuation motions with expanded operational performance through a hierarchically organized structure. The hierarchical structure arranges a single fiber of active material, such as shape memory alloys (SMAs), into a cellular network of interlacing adjacent loops according to a knitting grid. This paper defines a four-level hierarchical classification of knit structures: the basic knit loop, knit patterns, grid patterns, and restructured grids. Each level of the hierarchy provides increased architectural complexity, resulting in expanded kinematic actuation motions of active knits. The range of kinematic actuation motions are displayed through experimental examples of different SMA active knits. The results from this paper illustrate and classify the ways in which each level of the hierarchical knit architecture leverages the performance of the base smart material to generate unique actuation motions, providing necessary insight to best exploit this new actuation paradigm.
Quick Web Services Lookup Model Based on Hierarchical Registration
Institute of Scientific and Technical Information of China (English)
谢山; 朱国进; 陈家训
2003-01-01
Quick Web Services Lookup (Q-WSL) is a new model for the registration and lookup of complex services on the Internet. The model is designed to quickly find complex Web services by using a hierarchical registration method. The basic concepts of the Web services system are introduced, and the method of hierarchical registration of services is then described. In particular, the service query document description and the service lookup procedure are examined, addressing how to look up services that are registered in the Web services system. Furthermore, an example design and an evaluation of its performance are presented. Specifically, it is shown that the use of attribution-based service query document design and content-based hierarchical registration in Q-WSL allows service requesters to discover needed services more flexibly and rapidly. It is confirmed that Q-WSL is very suitable for Web services systems.
A framework for cut-off sampling in business survey design
Bee, Marco; Benedetti, Roberto; Espa, Giuseppe
2007-01-01
In sampling theory the large concentration of the population with respect to most surveyed variables constitutes a problem which is difficult to tackle by means of classical tools. One possible solution is given by cut-off sampling, which explicitly prescribes to discard part of the population; in particular, if the population is composed by firms or establishments, the method results in the exclusion of the “smallest” firms. Whereas this sampling scheme is common among practitioners, its the...
Simulation of design-unbiased point-to-particle sampling compared to alternatives on plantation rows
Thomas B. Lynch; David Hamlin; Mark J. Ducey
2016-01-01
Total quantities of tree attributes can be estimated in plantations by sampling on plantation rows using several methods. At random sample points on a row, either fixed row lengths or variable row lengths with a fixed number of sample trees can be assessed. Ratio of means or mean of ratios estimators can be developed for the fixed number of trees option but are not...
Designing a multiple dependent state sampling plan based on the coefficient of variation.
Yan, Aijun; Liu, Sanyang; Dong, Xiaojuan
2016-01-01
A multiple dependent state (MDS) sampling plan is developed based on the coefficient of variation of the quality characteristic which follows a normal distribution with unknown mean and variance. The optimal plan parameters of the proposed plan are solved by a nonlinear optimization model, which satisfies the given producer's risk and consumer's risk at the same time and minimizes the sample size required for inspection. The advantages of the proposed MDS sampling plan over the existing single sampling plan are discussed. Finally an example is given to illustrate the proposed plan.
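The decision structure of an MDS plan can be sketched generically: accept or reject outright on the current sample's coefficient of variation, and in the intermediate zone accept only if the preceding m lots were accepted (the constants k_a, k_r, and m below are illustrative, not the optimized plan parameters from the paper):

```python
import statistics

def coeff_variation(sample):
    """Sample coefficient of variation: SD / mean."""
    return statistics.stdev(sample) / statistics.fmean(sample)

def mds_decision(sample, history, k_a=0.08, k_r=0.15, m=2):
    """Multiple dependent state rule on the coefficient of variation:
    accept if CV <= k_a, reject if CV > k_r; otherwise accept only if
    the previous m lots were all accepted."""
    cv = coeff_variation(sample)
    if cv <= k_a:
        return "accept"
    if cv > k_r:
        return "reject"
    return "accept" if len(history) >= m and all(history[-m:]) else "reject"

good_lot = [100, 102, 99, 101, 100, 98]
print(mds_decision(good_lot, history=[]))
```

The optimization problem in the paper then chooses k_a, k_r, m, and the sample size so that producer's and consumer's risks are met at minimum inspection cost; the sketch above only shows the state-dependent decision rule itself.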
Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar; Ghosh, N C
2015-06-01
The design of a water quality monitoring network (WQMN) is a complicated decision-making process because each sampling station involves high installation, operational, and maintenance costs. Therefore, data with the highest information content should be collected. The effect of seasonal variation in point and diffuse pollution loadings on river water quality may have a significant impact on the optimal selection of sampling locations, but this possible effect has never been addressed in the evaluation and design of monitoring networks. The present study proposes a systematic approach for siting an optimal number and location of river water quality sampling stations based on seasonal or monsoonal variations in both point and diffuse pollution loadings. The proposed approach conceptualizes water quality monitoring as a two-stage process, the first stage of which is to consider all potential water quality sampling sites, selected based on existing guidelines or frameworks, and the locations of both point and diffuse pollution sources. Monitoring at all sampling sites thus identified should be continued for an adequate period of time to account for the effect of the monsoon season. In the second stage, the monitoring network is designed separately for monsoon and non-monsoon periods by optimizing the number and locations of sampling sites, using a modified Sanders approach. The impacts of human interventions on the design of the sampling network are quantified geospatially by estimating diffuse pollution loads and verified with a land use map. To demonstrate the proposed methodology, the Kali River basin in the western Uttar Pradesh state of India was selected as a study area. The final design suggests consequential pre- and post-monsoonal changes in the location and priority of water quality monitoring stations based on the seasonal variation of point and diffuse pollution loadings.
Small Sample Size in 2x2 Cross-Over Designs: Conditions of Determination
Directory of Open Access Journals (Sweden)
B SOLEYMANI
2001-09-01
Introduction. Determination of small sample sizes in some clinical trials is a matter of importance. In cross-over studies, one type of clinical trial, the matter is even more significant. In this article, the conditions under which small sample sizes can be determined for cross-over studies were considered, and the effect of deviation from normality on the matter is shown. Methods. The present study was conducted on 2x2 cross-over studies in which the variable of interest is quantitative and measurable on a ratio or interval scale. The method of consideration is based on the distributions of the variable and the sample mean, the central limit theorem, the method of sample size determination in two groups, and the cumulant or moment generating function. Results. For normal variables, or variables transformable to normal, there are no restricting factors other than the significance level and power of the test for determining sample size; in the case of non-normal variables, however, the sample should be large enough to guarantee the normality of the sample mean's distribution. Discussion. In cross-over studies in which theory suggests a few samples suffice, one should not proceed without taking the applied worth of the results into consideration. While determining sample size, in addition to the variance it is necessary to consider the distribution of the variable, particularly its skewness and kurtosis coefficients: the greater the deviation from normality, the more samples are needed. Since most continuous variables in medical studies are close to normally distributed, a small number of samples often seems adequate for convergence of the sample mean to normality.
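The two-group calculation referred to in the Methods has the standard normal-approximation form n = 2(z_{1-alpha/2} + z_{1-beta})^2 (sigma/delta)^2 per group, which the Python standard library can evaluate directly:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means (effect size delta, common SD sigma)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return math.ceil(2 * (z * sigma / delta) ** 2)

# A one-SD effect at 5% two-sided alpha and 80% power gives the
# classic answer of about 16 subjects per group.
print(n_per_group(delta=1.0, sigma=1.0))
```

As the Discussion notes, this formula leans on normality of the sample mean; for skewed or heavy-tailed variables the computed n is only a lower bound and should be inflated.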
YILMAZ, Sinan; Öztürk, Betül; ÖZDEMİR, Durmuş
2013-01-01
Adsorptive cathodic stripping voltammetric determination of aluminum at ng mL-1 levels in salt samples based on the metal complexation with Calcon (1-(2-hydroxynaphthylazo)-2 naphthol-4-sulfonic acid) and the subsequent adsorptive deposition onto a hanging mercury drop electrode was studied. Central composite design was used as a design method. Several chemical and instrumental parameters (pH, ligand concentration, deposition time, deposition potential, and complexing time) were invo...
Designing efficient surveys: spatial arrangement of sample points for detection of invasive species
Ludek Berec; John M. Kean; Rebecca Epanchin-Niell; Andrew M. Liebhold; Robert G. Haight
2015-01-01
Effective surveillance is critical to managing biological invasions via early detection and eradication. The efficiency of surveillance systems may be affected by the spatial arrangement of sample locations. We investigate how the spatial arrangement of sample points, ranging from random to fixed grid arrangements, affects the probability of detecting a target...
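The effect of the arrangement can be demonstrated with a small Monte Carlo experiment: for a fixed number of sample points, compare the probability that at least one point falls within the detection radius of a randomly located incursion under grid versus random placement (radius and counts are illustrative; the study above explores a much richer design space):

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_prob(points, radius, trials=20000):
    """P(at least one sample point lies within `radius` of an incursion
    centre drawn uniformly on the unit square)."""
    centres = rng.uniform(0, 1, size=(trials, 2))
    d = np.linalg.norm(centres[:, None, :] - points[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= radius))

# 25 sample points: a 5x5 fixed grid versus uniform random placement.
g = (np.arange(5) + 0.5) / 5
grid = np.array([(x, y) for x in g for y in g])
random_pts = rng.uniform(0, 1, size=(25, 2))

p_grid = detection_prob(grid, radius=0.15)
p_rand = detection_prob(random_pts, radius=0.15)
print(p_grid, p_rand)
```

With this radius the grid's worst-case gap (half a cell diagonal, about 0.141) is smaller than the detection radius, so the grid detects every incursion, while random placement leaves uncovered gaps; this is the intuition behind preferring regular arrangements for detection surveys.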
Representativeness-based sampling network design for the State of Alaska
Forrest M. Hoffman; Jitendra Kumar; Richard T. Mills; William W. Hargrove
2013-01-01
Resource and logistical constraints limit the frequency and extent of environmental observations, particularly in the Arctic, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent environmental variability at desired scales. A quantitative methodology for stratifying sampling domains, informing site selection,...
Effects of sampling design on age ratios of migrants captured at stopover sites
Jeffrey F. Kelly; Deborah M. Finch
2000-01-01
Age classes of migrant songbirds often differ in migration timing. This difference creates the potential for age-ratios recorded at stopover sites to vary with the amount and distribution of sampling effort used. To test for these biases, we sub-sampled migrant capture data from the Middle Rio Grande Valley of New Mexico. We created data sets that reflected the age...
Michael Arbaugh; Larry Bednar
1996-01-01
The sampling methods used to monitor ozone injury to ponderosa and Jeffrey pines depend on the objectives of the study, geographic and genetic composition of the forest, and the source and composition of air pollutant emissions. By using a standardized sampling methodology, it may be possible to compare conditions within local areas more accurately, and to apply the...
Designing a Sample Selection Plan to Improve Generalizations from Two Scale-Up Experiments
Tipton, Elizabeth; Sullivan, Kate; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Caverly, Sarah
2011-01-01
In this paper the authors present a new method for sample selection for scale-up experiments. This method uses propensity score matching methods to create a sample that is similar in composition to a well-defined generalization population. The method they present is flexible and practical in the sense that it identifies units to be targeted for…
Designing Robust Hierarchically Textured Oleophobic Fabrics.
Kleingartner, Justin A; Srinivasan, Siddarth; Truong, Quoc T; Sieber, Michael; Cohen, Robert E; McKinley, Gareth H
2015-12-08
Commercially available woven fabrics (e.g., nylon- or PET-based fabrics) possess inherently re-entrant textures in the form of cylindrical yarns and fibers. We analyze the liquid repellency of woven and nanotextured oleophobic fabrics using a nested model with n levels of hierarchy that is constructed from modular units of cylindrical and spherical building blocks. At each level of hierarchy, the density of the topographical features is captured using a dimensionless textural parameter Dn*. For a plain-woven mesh comprised of chemically treated fiber bundles (n = 2), the tight packing of individual fibers in each bundle (D2* ≈ 1) imposes a geometric constraint on the maximum oleophobicity that can be achieved solely by modifying the surface energy of the coating. For liquid droplets contacting such tightly bundled fabrics with modified surface energies, we show that this model predicts a lower bound on the equilibrium contact angle of θE ≈ 57° below which the Cassie–Baxter to Wenzel wetting transition occurs spontaneously, and this is validated experimentally. We demonstrate how the introduction of an additional higher order micro-/nanotexture onto the fibers (n = 3) is necessary to overcome this limit and create more robustly nonwetting fabrics. Finally, we show a simple experimental realization of the enhanced oleophobicity of fabrics by depositing spherical microbeads of poly(methyl methacrylate)/fluorodecyl polyhedral oligomeric silsesquioxane (fluorodecyl POSS) onto the fibers of a commercial woven nylon fabric.
Planned Missing Data Designs with Small Sample Sizes: How Small Is Too Small?
Jia, Fan; Moore, E. Whitney G.; Kinai, Richard; Crowe, Kelly S.; Schoemann, Alexander M.; Little, Todd D.
2014-01-01
Utilizing planned missing data (PMD) designs (e.g., 3-form surveys) enables researchers to ask participants fewer questions during the data collection process. An important question, however, is just how few participants are needed to effectively employ planned missing data designs in research studies. This article explores this question by using…
Adaptation of G-TAG Software for Validating Touch-and-Go Comet Surface Sampling Design Methodology
Mandic, Milan; Acikmese, Behcet; Blackmore, Lars
2011-01-01
The G-TAG software tool was developed under the R&TD on Integrated Autonomous Guidance, Navigation, and Control for Comet Sample Return, and represents a novel, multi-body dynamics simulation software tool for studying TAG sampling. The G-TAG multi-body simulation tool provides a simulation environment in which a Touch-and-Go (TAG) sampling event can be extensively tested. TAG sampling requires the spacecraft to descend to the surface, contact the surface with a sampling collection device, and then ascend to a safe altitude. The TAG event lasts only a few seconds but is mission-critical with potentially high risk. Consequently, there is a need for the TAG event to be well characterized and studied by simulation and analysis in order for the proposal teams to converge on a reliable spacecraft design. This adaptation of the G-TAG tool was developed to support the Comet Odyssey proposal effort, and is specifically focused on comet sample return missions. In this application, the spacecraft descends to and samples from the surface of a comet. Performance of the spacecraft during TAG is assessed based on survivability and sample collection performance. For the adaptation of the G-TAG simulation tool to comet scenarios, models are developed that accurately describe the properties of the spacecraft, approach trajectories, and descent velocities, as well as the models of the external forces and torques acting on the spacecraft. The adapted models of the spacecraft, descent profiles, and external sampling forces/torques were more sophisticated and customized for comets than those available in the basic G-TAG simulation tool. Scenarios implemented include the study of variations in requirements, spacecraft design (size, locations, etc. of the spacecraft components), and the environment (surface properties, slope, disturbances, etc.). The simulations, along with their visual representations using G-View, contributed to the Comet Odyssey New Frontiers proposal.
The PowerAtlas: a power and sample size atlas for microarray experimental design and research
Directory of Open Access Journals (Sweden)
Wang Jelai
2006-02-01
Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed the Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from the Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.
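The single-hypothesis power calculations that the PowerAtlas builds upon reduce, in the simplest case, to the textbook normal-approximation sample-size formula for a two-sample comparison. A minimal stdlib-only sketch (the effect size and variability values are illustrative, not from the PowerAtlas):

```python
import math
from statistics import NormalDist

def n_per_group(delta: float, sigma: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per group for a two-sided,
    two-sample test of a mean difference delta with common SD sigma:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma / delta)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = z.inv_cdf(power)            # quantile corresponding to target power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# Half-SD effect at 80% power needs roughly four times the replicates of a one-SD effect.
print(n_per_group(delta=0.5, sigma=1.0), n_per_group(delta=1.0, sigma=1.0))
```

Replicate-chip requirements grow quadratically as the detectable effect shrinks, which is why pilot estimates of variability (or borrowed estimates from similar GEO experiments) matter so much at the planning stage.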
A Simplified Approach for Two-Dimensional Optimal Controlled Sampling Designs
Directory of Open Access Journals (Sweden)
Neeraj Tiwari
2014-01-01
Full Text Available Controlled sampling is a unique method of sample selection that minimizes the probability of selecting nondesirable combinations of units. Extending the concept of linear programming with an effective distance measure, we propose a simple method for two-dimensional optimal controlled selection that assigns zero selection probability to nondesired samples. Alternative estimators for the population total and its variance have also been suggested. Some numerical examples have been considered to demonstrate the utility of the proposed procedure in comparison to the existing procedures.
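The defining property of a controlled design is that nondesired samples receive zero selection probability while every unit keeps its target first-order inclusion probability. The sketch below illustrates this property on a tiny hypothetical example (4 units, samples of size 2, two forbidden pairs); it is a conceptual check, not the authors' linear-programming procedure.

```python
from itertools import combinations

def inclusion_probs(design):
    """First-order inclusion probabilities implied by a sampling design,
    given as a dict mapping each sample (tuple of units) to its
    selection probability."""
    pi = {}
    for sample, p in design.items():
        for unit in sample:
            pi[unit] = pi.get(unit, 0.0) + p
    return pi

units = range(4)
nondesired = {(0, 1), (2, 3)}   # hypothetical forbidden combinations
allowed = [s for s in combinations(units, 2) if s not in nondesired]

# A controlled design: spread probability uniformly over allowed samples only.
design = {s: 1.0 / len(allowed) for s in allowed}

# Every unit still has inclusion probability n/N = 2/4 = 0.5,
# yet the forbidden pairs can never be drawn.
print(inclusion_probs(design))
```

In realistic two-dimensional problems the supporting design is not found by inspection; that is where the linear-programming formulation with a distance measure, as described in the abstract, comes in.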
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
Application of different spatial sampling patterns for sparse-array transducer design
DEFF Research Database (Denmark)
Nikolov, Svetoslav; Jensen, Jørgen Arendt
2000-01-01
… must be used if conventional phased array transducers are extrapolated to the two-dimensional case. To decrease the number of channels, sparse arrays with different aperture apodization functions in transmit and receive have to be designed. The design is usually carried out in 1D...... of the ultrasound fields show a decrease of the grating-lobe level of 10 dB for the diagonally optimized 2D array transducers compared to the previously designed 2D arrays, which didn't consider the diagonals....
Mcgwire, K.; Friedl, M.; Estes, J. E.
1993-01-01
This article describes research related to sampling techniques for establishing linear relations between land surface parameters and remotely-sensed data. Predictive relations are estimated between percentage tree cover in a savanna environment and a normalized difference vegetation index (NDVI) derived from the Thematic Mapper sensor. Spatial autocorrelation in original measurements and regression residuals is examined using semi-variogram analysis at several spatial resolutions. Sampling schemes are then tested to examine the effects of autocorrelation on predictive linear models in cases of small sample sizes. Regression models between image and ground data are affected by the spatial resolution of analysis. Reducing the influence of spatial autocorrelation by enforcing minimum distances between samples may also improve empirical models which relate ground parameters to satellite data.
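Semi-variogram analysis of the kind described above quantifies how similarity between measurements decays with separation distance. A minimal sketch of the classical (Matheron) estimator on a regularly spaced 1-D transect, with made-up data chosen to make the lag structure obvious:

```python
def semivariogram(values, lag):
    """Classical (Matheron) semivariance at an integer lag h on a
    regularly spaced transect:
    gamma(h) = sum_i (z_i - z_{i+h})^2 / (2 * n_pairs)."""
    pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))

# Alternating transect: maximal dissimilarity at lag 1, perfect similarity at lag 2.
z = [0, 1, 0, 1, 0, 1]
print(semivariogram(z, 1), semivariogram(z, 2))
```

Plotting gamma against lag for image pixels or ground plots reveals the range over which samples remain autocorrelated, which is exactly the information needed to enforce minimum inter-sample distances in the sampling schemes the article tests.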
Pharmacokinetic Studies in Neonates: The Utility of an Opportunistic Sampling Design.
Leroux, Stéphanie; Turner, Mark A; Guellec, Chantal Barin-Le; Hill, Helen; van den Anker, Johannes N; Kearns, Gregory L; Jacqz-Aigrain, Evelyne; Zhao, Wei
2015-12-01
The use of an opportunistic (also called scavenged) sampling strategy in a prospective pharmacokinetic study combined with population pharmacokinetic modelling has been proposed as an alternative strategy to conventional methods for accomplishing pharmacokinetic studies in neonates. However, the reliability of this approach in this particular paediatric population has not been evaluated. The objective of the present study was to evaluate the performance of an opportunistic sampling strategy for a population pharmacokinetic estimation, as well as dose prediction, and compare this strategy with a predetermined pharmacokinetic sampling approach. Three population pharmacokinetic models were derived for ciprofloxacin from opportunistic blood samples (SC model), predetermined (i.e. scheduled) samples (TR model) and all samples (full model used to previously characterize ciprofloxacin pharmacokinetics), using NONMEM software. The predictive performance of developed models was evaluated in an independent group of patients. Pharmacokinetic data from 60 newborns were obtained with a total of 430 samples available for analysis; 265 collected at predetermined times and 165 that were scavenged from those obtained as part of clinical care. All datasets were fit using a two-compartment model with first-order elimination. The SC model could identify the most significant covariates and provided reasonable estimates of population pharmacokinetic parameters (clearance and steady-state volume of distribution) compared with the TR and full models. Their predictive performances were further confirmed in an external validation by Bayesian estimation, and showed similar results. Monte Carlo simulation based on area under the concentration-time curve from zero to 24 h (AUC24)/minimum inhibitory concentration (MIC) using either the SC or the TR model gave similar dose prediction for ciprofloxacin. Blood samples scavenged in the course of caring for neonates can be used to estimate
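The dose predictions above hinge on the AUC24/MIC ratio computed from a two-compartment disposition model. As a minimal sketch, the snippet below evaluates the area under a bi-exponential (two-compartment, IV bolus) concentration profile over 0-24 h analytically; all macro-constants and the MIC are hypothetical placeholders, not the ciprofloxacin estimates from the study.

```python
import math

def conc(t, A, alpha, B, beta):
    """Two-compartment IV-bolus profile: C(t) = A*exp(-alpha*t) + B*exp(-beta*t)."""
    return A * math.exp(-alpha * t) + B * math.exp(-beta * t)

def auc24(A, alpha, B, beta):
    """Analytic AUC from 0 to 24 h for the bi-exponential profile above."""
    return (A / alpha * (1 - math.exp(-24 * alpha))
            + B / beta * (1 - math.exp(-24 * beta)))

# Hypothetical macro-constants (mg/L and 1/h) and MIC -- illustrative only.
A, alpha, B, beta = 8.0, 1.2, 2.0, 0.15
mic = 0.5
print(f"AUC24/MIC = {auc24(A, alpha, B, beta) / mic:.1f}")
```

In a Monte Carlo dose-prediction exercise like the one in the abstract, the individual parameters feeding this calculation would be simulated from the population model (SC or TR) rather than fixed, and the dose adjusted until a target AUC24/MIC is attained in most simulated neonates.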
Hierarchical Bayesian sparse image reconstruction with application to MRFM
Dobigeon, Nicolas; Tourneret, Jean-Yves
2008-01-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g. by maximizing the estimated posterior distribution. In our fully Bayesian approach the posteriors of all the parameters are available. Thus our algorithm provides more information than other previously proposed sparse reconstr...
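The sparsity-and-positivity prior described above is a weighted mixture of a point mass at zero and a positive exponential distribution. A minimal sketch of drawing coefficients from such a prior (the weight and rate are illustrative; this is the prior only, not the authors' Gibbs sampler or hyperparameter marginalization):

```python
import random

def draw_prior(w, lam, rng):
    """One draw from pi(x) = w * delta_0(x) + (1 - w) * Exp(lam):
    with probability w the coefficient is exactly zero (sparsity),
    otherwise it is a positive exponential variate (positivity)."""
    return 0.0 if rng.random() < w else rng.expovariate(lam)

rng = random.Random(42)
draws = [draw_prior(w=0.8, lam=1.0, rng=rng) for _ in range(10_000)]
zero_frac = sum(d == 0.0 for d in draws) / len(draws)
print(f"fraction exactly zero ~ {zero_frac:.2f}; "
      f"all non-negative: {all(d >= 0.0 for d in draws)}")
```

Because the mixture places an atom at zero, posterior samples inherit exact zeros rather than merely small values, which is what makes the reconstruction sparse; the Gibbs sampler in the paper alternates such indicator/amplitude draws conditioned on the data.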
Francis A. Roesch; Todd A. Schroeder; James T. Vogt
2017-01-01
The resilience of a National Forest Inventory and Monitoring sample design can sometimes depend upon the degree to which it can adapt to fluctuations in funding. If a budget reduction necessitates the observation of fewer plots per year, some practitioners weigh the problem as a tradeoff between reducing the total number of plots and measuring the original number of...
H.E. Anderson; J. Breidenbach
2007-01-01
Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...
Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system
He, Shengmao; Zhu, Zhengfan; Peng, Chao; Ma, Jian; Zhu, Xiaolong; Gao, Yang
2016-08-01
In the 6th edition of the Chinese Space Trajectory Design Competition held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parking in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences has reported a solution with an asteroid sample mass of 328 tons, which is ranked first in the competition. In this article, we will present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth-Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.