WorldWideScience

Sample records for array analysis revisited

  1. Nanoelectrode array for electrochemical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yelton, William G [Sandia Park, NM]; Siegal, Michael P [Albuquerque, NM]

    2009-12-01

A nanoelectrode array comprises a plurality of nanoelectrodes wherein the geometric dimensions of the electrodes control the electrochemical response, and the current density is independent of time. By combining a massive array of nanoelectrodes in parallel, the current signal can be amplified while still retaining the beneficial geometric advantages of nanoelectrodes. Such nanoelectrode arrays can be used in a sensor system for rapid, non-contaminating field analysis. For example, an array of suitably functionalized nanoelectrodes can be incorporated into a small, integrated sensor system that can identify many species rapidly and simultaneously under field conditions in high-resistivity water, without the need for chemical addition to increase conductivity.
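
As an illustration of the scaling argument in this record (a time-independent, diffusion-limited current per nanoelectrode, multiplied by the number of electrodes wired in parallel), the sketch below uses the standard steady-state current expression for an inlaid disc microelectrode; all numerical values are assumptions for illustration, not figures from the patent.

```python
# Back-of-envelope estimate of the diffusion-limited steady-state current
# for a single disc nanoelectrode and for a parallel array of them.
# Assumed formula: i_ss = 4 * n * F * D * C * r (inlaid disc ultramicroelectrode).

F = 96485.0          # Faraday constant, C/mol

def disc_steady_state_current(n_electrons, D, C_bulk, radius):
    """Steady-state current (A) of one inlaid disc electrode of given radius (m).

    D is the diffusion coefficient (m^2/s), C_bulk the bulk concentration (mol/m^3).
    """
    return 4.0 * n_electrons * F * D * C_bulk * radius

# Hypothetical example values (not from the record):
n = 1                # electrons transferred per molecule
D = 7.6e-10          # m^2/s, typical small-ion diffusion coefficient
C = 1.0              # 1 mM expressed in mol/m^3
r = 50e-9            # 50 nm electrode radius

i_single = disc_steady_state_current(n, D, C, r)
N = 1_000_000        # number of nanoelectrodes wired in parallel
i_array = N * i_single   # array current, assuming independent (non-overlapping) diffusion fields

print(f"single nanoelectrode: {i_single:.3e} A")
print(f"{N:,}-electrode array: {i_array:.3e} A")
```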

  2. Meta-analysis in clinical trials revisited.

    Science.gov (United States)

    DerSimonian, Rebecca; Laird, Nan

    2015-11-01

    In this paper, we revisit a 1986 article we published in this Journal, Meta-Analysis in Clinical Trials, where we introduced a random-effects model to summarize the evidence about treatment efficacy from a number of related clinical trials. Because of its simplicity and ease of implementation, our approach has been widely used (with more than 12,000 citations to date) and the "DerSimonian and Laird method" is now often referred to as the 'standard approach' or a 'popular' method for meta-analysis in medical and clinical research. The method is especially useful for providing an overall effect estimate and for characterizing the heterogeneity of effects across a series of studies. Here, we review the background that led to the original 1986 article, briefly describe the random-effects approach for meta-analysis, explore its use in various settings and trends over time and recommend a refinement to the method using a robust variance estimator for testing overall effect. We conclude with a discussion of repurposing the method for Big Data meta-analysis and Genome Wide Association Studies for studying the importance of genetic variants in complex diseases. Published by Elsevier Inc.
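
The random-effects calculation referred to above is simple enough to sketch directly. The following is a minimal numpy implementation of the method-of-moments (DerSimonian-Laird) estimator with invented study data; it does not include the robust variance refinement recommended in the paper.

```python
import numpy as np

# Minimal sketch of the DerSimonian-Laird random-effects estimator.
# y: per-study effect estimates (e.g., log odds ratios); v: their within-study variances.
# Values below are invented purely for illustration.
y = np.array([0.30, 0.10, -0.05, 0.25, 0.40])
v = np.array([0.04, 0.02, 0.03, 0.05, 0.06])

w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
mu_fixed = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
Q = np.sum(w * (y - mu_fixed) ** 2)           # Cochran's Q heterogeneity statistic
k = len(y)

# Method-of-moments estimate of the between-study variance tau^2 (truncated at 0).
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_star = 1.0 / (v + tau2)                     # random-effects weights
mu_re = np.sum(w_star * y) / np.sum(w_star)   # random-effects pooled estimate
se_re = np.sqrt(1.0 / np.sum(w_star))         # its (conventional, non-robust) standard error

print(f"Q = {Q:.3f}, tau^2 = {tau2:.4f}")
print(f"random-effects estimate = {mu_re:.3f} +/- {1.96 * se_re:.3f} (95% CI half-width)")
```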

  3. Revisited

    DEFF Research Database (Denmark)

    Tegtmeier, Silke; Meyer, Verena; Pakura, Stefanie

    2017-01-01

Purpose: Entrepreneurship is shaped by a male norm, which has been widely demonstrated in qualitative studies. The authors strive to complement these methods by a quantitative approach. First, gender role stereotypes were measured in entrepreneurship. Second, the explicit notions of participants were captured when they described entrepreneurs. Therefore, this paper aims to revisit gender role stereotypes among young adults. Design/methodology/approach: To measure stereotyping, participants were asked to describe entrepreneurs in general and either women or men in general. The Schein... Findings: The images of men and entrepreneurs show a high and significant congruence (r = 0.803), mostly in those adjectives that are untypical for men and entrepreneurs. The congruence of women and entrepreneurs was low (r = 0.152) and insignificant. Contrary to the participants’ beliefs, their explicit notions did...

  4. Leakage analysis of crossbar memristor arrays

    KAUST Repository

    Zidan, Mohammed A.; Salem, Ahmed Sultan; Fahmy, Hossam Aly Hassan; Salama, Khaled N.

    2014-01-01

Crossbar memristor arrays provide a promising high density alternative for the current memory and storage technologies. These arrays suffer from parasitic current components that significantly increase the power consumption, and could ruin the readout operation. In this work we study the trade-off between the crossbar array density and the power consumption required for its readout. Our analysis is based on simulating full memristor arrays on a SPICE platform.

  5. Leakage analysis of crossbar memristor arrays

    KAUST Repository

    Zidan, Mohammed A.

    2014-07-01

    Crossbar memristor arrays provide a promising high density alternative for the current memory and storage technologies. These arrays suffer from parasitic current components that significantly increase the power consumption, and could ruin the readout operation. In this work we study the trade-off between the crossbar array density and the power consumption required for its readout. Our analysis is based on simulating full memristor arrays on a SPICE platform.
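
The density-versus-readout-power trade-off can be illustrated with a toy nodal-analysis model in place of the SPICE simulations used in the paper: a small N x N resistive crossbar is solved with floating unselected lines, and the sneak-path contribution to the sensed current and read power grows with array size. The cell resistances, bias scheme and array sizes below are assumptions.

```python
import numpy as np

def crossbar_readout(N, R_lrs=1e3, R_hrs=1e5, V_read=1.0):
    """Solve an N x N resistive crossbar read with floating unselected lines.

    The selected cell (row 0, column 0) is in the high-resistance state while all
    other cells are in the low-resistance state (a worst case for sneak currents).
    Returns (current of the selected cell alone, total current sensed on the
    selected column, power delivered by the read driver).
    """
    G = np.full((N, N), 1.0 / R_lrs)
    G[0, 0] = 1.0 / R_hrs                 # cell being read
    sr, sc = 0, 0                         # selected row / column

    n_unk = 2 * (N - 1)                   # unknown voltages: floating rows + columns
    A = np.zeros((n_unk, n_unk))
    b = np.zeros(n_unk)
    ridx = lambda i: i - 1                # floating row i  -> unknown index
    cidx = lambda j: (N - 1) + (j - 1)    # floating col j  -> unknown index

    for i in range(1, N):                 # KCL at each floating row
        A[ridx(i), ridx(i)] = G[i, :].sum()
        for j in range(1, N):
            A[ridx(i), cidx(j)] -= G[i, j]
    for j in range(1, N):                 # KCL at each floating column
        A[cidx(j), cidx(j)] = G[:, j].sum()
        for i in range(1, N):
            A[cidx(j), ridx(i)] -= G[i, j]
        b[cidx(j)] = G[sr, j] * V_read    # drive from the selected row

    x = np.linalg.solve(A, b)
    V_row = np.concatenate(([V_read], x[:N - 1]))
    V_col = np.concatenate(([0.0], x[N - 1:]))

    I_cell = V_read * G[sr, sc]                       # desired signal
    I_column = np.sum(V_row * G[:, sc])               # what the sense amplifier sees
    P_read = V_read * np.sum((V_read - V_col) * G[sr, :])
    return I_cell, I_column, P_read

for N in (4, 16, 64):
    I_cell, I_col, P = crossbar_readout(N)
    print(f"N={N:3d}: cell {I_cell*1e6:7.2f} uA, sensed {I_col*1e6:9.2f} uA, power {P*1e3:8.3f} mW")
```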

  6. Waveguide Phased Array Antenna Analysis and Synthesis

    NARCIS (Netherlands)

    Visser, H.J.; Keizer, W.P.M.N.

    1996-01-01

    Results of two software packages for analysis and synthesis of waveguide phased array antennas are shown. The antennas consist of arrays of open-ended waveguides where irises can be placed in the waveguide apertures and multiple dielectric sheets in front of the apertures in order to accomplish a

  7. Solar array qualification through qualified analysis

    Science.gov (United States)

    Zijdemans, J.; Cruijssen, H. J.; Wijker, J. J.

    1991-04-01

Achieving qualification is in general a very expensive exercise. For solar arrays it is achieved through a dedicated test program. Owing to severe competition in the solar-array market, cheaper means are sought to provide a qualified product to the customers. One of the methods is to drastically limit the environmental test program and to qualify the solar-array structure against its environmental loads by analysis. Qualification by analysis is possible. The benefit is that a significant amount of development effort can be saved when such a powerful tool is available; extensive testing can be avoided, thus saving time and money.

  8. Classical and sequential limit analysis revisited

    Science.gov (United States)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity (in the absence of hardening and within a linearized geometrical framework), sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity, although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  9. Performance Analysis of Digital loudspeaker Arrays

    DEFF Research Database (Denmark)

    Pedersen, Bo Rohde; Kontomichos, Fotios; Mourjopoulos, John

    2008-01-01

An analysis of digital loudspeaker arrays shows that the ways in which bits are mapped to the drivers influence the quality of the audio result. Specifically, a "bit-summed" rather than the traditional "bit-mapped" strategy greatly reduces the number of times drivers make binary transitions per period of the input frequency. Detailed simulations compare the results for a 32-loudspeaker array with a similar configuration with analog excitation of the drivers. Ideally, drivers in digital arrays should be very small and span a small area, but that sets limits on the low-frequency response...
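
A small numpy experiment, not the paper's simulation setup, can make the "bit-summed" versus "bit-mapped" distinction concrete: one period of a sine tone is quantized and per-driver on/off transitions are counted for a binary-weighted ("bit-mapped") assignment and a unary/thermometer ("bit-summed") assignment. The mapping definitions are assumptions used only to illustrate the switching-activity argument.

```python
import numpy as np

# Toy comparison of driver switching activity for two ways of mapping a digital
# code onto an array of on/off loudspeaker drivers. Illustrative only; the
# mapping details are assumptions, not the paper's exact definitions.

def count_transitions(states):
    """Total number of 0<->1 transitions over time, summed across drivers."""
    return int(np.abs(np.diff(states, axis=0)).sum())

fs, f0, bits = 48000, 1000, 5          # sample rate, tone frequency, resolution
n = fs // f0                           # samples in one period of the tone
t = np.arange(n) / fs
codes = np.round((np.sin(2 * np.pi * f0 * t) + 1) / 2 * (2 ** bits - 1)).astype(int)

# "Bit-mapped": driver k reproduces bit k of the code (binary-weighted groups).
bit_mapped = np.array([[(c >> k) & 1 for k in range(bits)] for c in codes])

# "Bit-summed" (thermometer): the first `code` drivers out of 2**bits - 1 are on.
levels = 2 ** bits - 1
bit_summed = np.array([[1 if k < c else 0 for k in range(levels)] for c in codes])

print("transitions per period, bit-mapped :", count_transitions(bit_mapped))
print("transitions per period, bit-summed:", count_transitions(bit_summed))
```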

  10. Microbial Diagnostic Array Workstation (MDAW): a web server for diagnostic array data storage, sharing and analysis

    Directory of Open Access Journals (Sweden)

    Chang Yung-Fu

    2008-09-01

Full Text Available Abstract Background Microarrays are becoming a very popular tool for microbial detection and diagnostics. Although these diagnostic arrays are much simpler when compared to the traditional transcriptome arrays, due to the high-throughput nature of the arrays, the data analysis requirements still form a bottleneck for the widespread use of these diagnostic arrays. Hence we developed a new online data sharing and analysis environment customised for diagnostic arrays. Methods Microbial Diagnostic Array Workstation (MDAW) is a database-driven application designed in MS Access with a front end designed in ASP.NET. Conclusion MDAW is a new resource that is customised for the data analysis requirements of microbial diagnostic arrays.

  11. Fault Analysis in Solar Photovoltaic Arrays

    Science.gov (United States)

    Zhao, Ye

Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown, at times, to prevent the fault-current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low-irradiance conditions. The other is a fault evolution in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" and "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.

  12. Parametric analysis of ATM solar array.

    Science.gov (United States)

    Singh, B. K.; Adkisson, W. B.

    1973-01-01

    The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.
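
The polynomial-fit step described above (the third program) amounts to fitting cell characteristics as polynomials in temperature; a minimal numpy equivalent with invented data points is sketched below.

```python
import numpy as np

# Sketch of the "polynomial fit" step: fit solar-cell short-circuit current and
# open-circuit voltage as polynomials in cell temperature. The data points are
# invented for illustration; the real programs fit solar-simulator test data.
T = np.array([0.0, 25.0, 50.0, 75.0, 100.0])         # cell temperature, deg C
Isc = np.array([2.95, 3.00, 3.05, 3.11, 3.16])       # short-circuit current, A
Voc = np.array([0.66, 0.60, 0.55, 0.49, 0.44])       # open-circuit voltage, V

isc_coeff = np.polyfit(T, Isc, 2)   # quadratic in temperature
voc_coeff = np.polyfit(T, Voc, 2)

def predict(coeff, temp):
    """Evaluate the fitted polynomial at an arbitrary temperature."""
    return np.polyval(coeff, temp)

print("Isc(60 C) ~", round(predict(isc_coeff, 60.0), 3), "A")
print("Voc(60 C) ~", round(predict(voc_coeff, 60.0), 3), "V")
```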

  13. Exome Array Analysis of Nuclear Lens Opacity.

    Science.gov (United States)

    Loomis, Stephanie J; Klein, Alison P; Lee, Kristine E; Chen, Fei; Bomotti, Samantha; Truitt, Barbara; Iyengar, Sudha K; Klein, Ronald; Klein, Barbara E K; Duggal, Priya

    2018-06-01

Nuclear cataract is the most common subtype of age-related cataract, the leading cause of blindness worldwide. It results from advanced nuclear sclerosis, or opacity in the center of the optic lens, and is affected by both genetic and environmental risk factors, including smoking. We sought to understand the genetic factors associated with nuclear sclerosis through interrogation of rare and low-frequency coding variants using exome array data. We analyzed Illumina Human Exome Array data for 1,488 participants of European ancestry in the Beaver Dam Eye Study who were without cataract surgery for association with nuclear sclerosis grade, controlling for age and sex. We performed single-variant regression analysis for 32,138 variants with minor allele frequency (MAF) ≥0.003, as well as gene-based analysis of 11,844 genes containing at least two rarer variants. Although no association reached study-wide significance for nuclear sclerosis, the possible association with the RNF149 gene highlights a potential candidate gene for future studies that aim to understand the genetic architecture of nuclear sclerosis.
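
A single-variant test of the kind described (regression of nuclear sclerosis grade on genotype dosage, adjusting for age and sex) can be sketched as follows with simulated data and statsmodels; this is an illustrative stand-in, not the study's analysis pipeline.

```python
import numpy as np
import statsmodels.api as sm

# Simulated single-variant association test: regress a nuclear sclerosis grade
# on genotype dosage (0/1/2 minor alleles) adjusting for age and sex.
rng = np.random.default_rng(0)
n = 1488
maf = 0.05
genotype = rng.binomial(2, maf, size=n)        # additive coding
age = rng.normal(65, 10, size=n)
sex = rng.integers(0, 2, size=n)
grade = 0.03 * (age - 65) + 0.1 * genotype + rng.normal(0, 1, size=n)

X = sm.add_constant(np.column_stack([genotype, age, sex]))
fit = sm.OLS(grade, X).fit()

beta, pval = fit.params[1], fit.pvalues[1]     # the genotype term
print(f"per-allele effect = {beta:.3f}, p = {pval:.3g}")
```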

  14. Analysis of measurements on wind turbine arrays. Vol. 1

    International Nuclear Information System (INIS)

    Hoeg, E.

    1990-12-01

In 1989 a Danish electric power company initiated an analysis of eight wind turbine arrays. Data from this project are presented together with the results of the analyses and the output variations for individual arrays and for systems within the arrays. The prognosis models are compared and evaluated in order to find the most effective one. (AB)

  15. Spacecraft Multiple Array Communication System Performance Analysis

    Science.gov (United States)

    Hwu, Shian U.; Desilva, Kanishka; Sham, Catherine C.

    2010-01-01

The Communication Systems Simulation Laboratory (CSSL) at the NASA Johnson Space Center is tasked to perform spacecraft and ground network communication system simulations, design validation, and performance verification. The CSSL has developed simulation tools that model spacecraft communication systems and the space and ground environment in which the tools operate. In this paper, a spacecraft communication system with multiple arrays is simulated. A multiple-array combining technique is used to increase the radio frequency coverage and data rate performance. The technique achieves phase coherence among the phased arrays so that the signals combine constructively at the target receiver. There are many technical challenges in spacecraft integration with a high transmit power communication system. The array combining technique can improve the communication system data rate and coverage performance without increasing the system transmit power requirements. Example simulation results indicate significant performance improvement can be achieved with phase coherence implementation.
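
The combining principle described above can be illustrated with a generic numpy model: aligning the phases of M received copies before summation yields roughly a 10·log10(M) dB SNR gain over a single array, whereas summing without phase coherence does not. The signal model and parameters are assumptions, not the CSSL simulation.

```python
import numpy as np

# Generic illustration of coherent combining gain across M arrays.
rng = np.random.default_rng(1)
M, n = 4, 100_000                         # number of arrays, number of samples
signal = np.exp(1j * 2 * np.pi * 0.01 * np.arange(n))   # unit-power tone

# Each array's copy arrives with its own phase offset plus independent noise.
phases = rng.uniform(0, 2 * np.pi, size=M)
noise = (rng.normal(size=(M, n)) + 1j * rng.normal(size=(M, n))) / np.sqrt(2)
received = np.exp(1j * phases)[:, None] * signal + noise

def snr_db(x):
    s = np.abs(np.vdot(signal, x) / n) ** 2          # power of the signal component
    return 10 * np.log10(s / (np.mean(np.abs(x) ** 2) - s))

incoherent = received.sum(axis=0)                    # phases not aligned
coherent = (np.exp(-1j * phases)[:, None] * received).sum(axis=0)  # phases aligned

print(f"single array SNR  : {snr_db(received[0]):5.2f} dB")
print(f"non-coherent sum  : {snr_db(incoherent):5.2f} dB")
print(f"coherent sum      : {snr_db(coherent):5.2f} dB  (~{10*np.log10(M):.1f} dB gain expected)")
```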

  16. High-resolution SNP array analysis of patients with developmental disorder and normal array CGH results

    Directory of Open Access Journals (Sweden)

    Siggberg Linda

    2012-09-01

Full Text Available Abstract Background Diagnostic analysis of patients with developmental disorders has improved over recent years largely due to the use of microarray technology. Array methods that facilitate copy number analysis have enabled the diagnosis of up to 20% more patients with previously normal karyotyping results. A substantial number of patients remain undiagnosed, however. Methods and Results Using the Genome-Wide Human SNP array 6.0, we analyzed 35 patients with a developmental disorder of unknown cause and normal array comparative genomic hybridization (array CGH) results, in order to characterize previously undefined genomic aberrations. We detected no seemingly pathogenic copy number aberrations. Most of the vast amount of data produced by the array was polymorphic and non-informative. Filtering of this data, based on copy number variant (CNV) population frequencies as well as phenotypically relevant genes, enabled pinpointing regions of allelic homozygosity that included candidate genes correlating to the phenotypic features in four patients, but results could not be confirmed. Conclusions In this study, the use of an ultra high-resolution SNP array did not contribute to further diagnosing patients with developmental disorders of unknown cause. The statistical power of these results is limited by the small size of the patient cohort, and interpretation of these negative results can only be applied to the patients studied here. We present the results of our study and the recurrence of clustered allelic homozygosity present in this material, as detected by the SNP 6.0 array.

  17. Fabrication of plasmonic cavity arrays for SERS analysis

    Science.gov (United States)

    Li, Ning; Feng, Lei; Teng, Fei; Lu, Nan

    2017-05-01

Plasmonic cavity arrays are ideal substrates for surface enhanced Raman scattering analysis because they can provide hot spots with a large volume for analyte molecules. The large area increases the probability of placing more analyte molecules on hot spots and leads to high reproducibility. Therefore, it is important to develop a simple method for creating cavity arrays. Herein, we demonstrate how to fabricate V- and W-shaped cavity arrays by a simple method based on self-assembly. Briefly, the V- and W-shaped cavity arrays are fabricated by KOH etching of Si slides patterned with a nanohole array and a nanoring array, respectively. The nanohole array is generated by reactive ion etching of a Si slide assembled with a monolayer of polystyrene (PS) spheres. The nanoring array is generated by reactive ion etching of a Si slide covered with a monolayer of octadecyltrichlorosilane before self-assembling the PS spheres. Both the plasmonic V and W cavity arrays can provide a large hot area, which increases the probability for analyte molecules to deposit on the hot spots. Taking 4-Mercaptopyridine as the analyte probe, the enhancement factor can reach 2.99 × 10^5 and 9.97 × 10^5 for the plasmonic V cavity and W cavity arrays, respectively. The relative standard deviations of the plasmonic V and W cavity arrays are 6.5% and 10.2%, respectively, according to the spectra collected on 20 random spots.
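
The two figures of merit quoted above, the enhancement factor and the relative standard deviation over random spots, are commonly computed as in the sketch below; the intensities and molecule counts are placeholders rather than the paper's measurements.

```python
import numpy as np

# Common definitions used for SERS substrates (placeholder numbers):
#   EF  = (I_SERS / N_SERS) / (I_ref / N_ref)
#   RSD = 100 * std / mean of the peak intensity over repeated spots.
I_sers, N_sers = 4.5e4, 2.0e6      # peak counts and estimated molecules on the substrate
I_ref, N_ref = 1.2e3, 1.6e10       # same peak for the bulk/reference measurement

EF = (I_sers / N_sers) / (I_ref / N_ref)
print(f"enhancement factor ~ {EF:.2e}")

spot_intensities = np.array([980, 1015, 1060, 940, 1005, 990, 1030, 970,
                             1010, 1045, 955, 995, 1025, 985, 1000, 965,
                             1040, 975, 1020, 950], dtype=float)   # 20 random spots
rsd = 100 * spot_intensities.std(ddof=1) / spot_intensities.mean()
print(f"relative standard deviation ~ {rsd:.1f}%")
```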

  18. Romanian earthquakes analysis using BURAR seismic array

    International Nuclear Information System (INIS)

    Borleanu, Felix; Rogozea, Maria; Nica, Daniela; Popescu, Emilia; Popa, Mihaela; Radulian, Mircea

    2008-01-01

Bucovina seismic array (BURAR) is a medium-aperture array, installed in 2002 in the northern part of Romania (47.6148° N latitude, 25.2168° E longitude, 1150 m altitude), as a result of the cooperation between the Air Force Technical Applications Center, USA, and the National Institute for Earth Physics, Romania. The array consists of ten elements, located in boreholes and distributed over a 5 x 5 km² area; nine with short-period vertical sensors and one with a broadband three-component sensor. Since the new station has been operating, the earthquake survey of Romania's territory has been significantly improved. Data recorded by BURAR during the 01.01.2005 - 12.31.2005 time interval are first processed and analyzed, in order to establish the array detection capability for local earthquakes occurring in different Romanian seismic zones. Subsequently, a spectral ratio technique was applied in order to determine the calibration relationships for magnitude, using only the information gathered by the BURAR station. The spectral ratios are computed relative to a reference event, considered representative for each seismic zone. This method has the advantage of eliminating the path effects. The new calibration procedure is tested for the case of Vrancea intermediate-depth earthquakes and proved to be very efficient in constraining the size of these earthquakes. (authors)
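
One simple way to turn spectral ratios against a reference event into a relative magnitude is sketched below; it captures the spirit of the calibration described above (path effects cancel in the ratio), but the scaling factor and all numbers are assumptions, not the authors' calibration relationships.

```python
import numpy as np

# Toy relative-magnitude estimate from spectral ratios against a reference event.
# Assumes the low-frequency spectral level scales with seismic moment, so
# delta_M ~ (2/3) * log10(spectral ratio) on a moment-magnitude-like scale.
# Frequencies and amplitudes below are invented for illustration.
freqs = np.array([0.5, 0.8, 1.0, 1.5, 2.0])            # Hz, low-frequency plateau
amp_event = np.array([3.1e3, 2.9e3, 3.3e3, 2.8e3, 3.0e3])
amp_reference = np.array([1.0e3, 0.9e3, 1.1e3, 1.0e3, 1.05e3])

ratio = amp_event / amp_reference                      # path effects largely cancel
delta_M = (2.0 / 3.0) * np.log10(np.median(ratio))
M_reference = 4.0                                      # assumed known reference magnitude
print(f"median spectral ratio = {np.median(ratio):.2f}")
print(f"estimated magnitude   = {M_reference + delta_M:.2f}")
```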

  19. Update-in-Place Analysis for True Multidimensional Arrays

    Directory of Open Access Journals (Sweden)

    Steven M. Fitzgerald

    1996-01-01

    Full Text Available Applicative languages have been proposed for defining algorithms for parallel architectures because they are implicitly parallel and lack side effects. However, straightforward implementations of applicative-language compilers may induce large amounts of copying to preserve program semantics. The unnecessary copying of data can increase both the execution time and the memory requirements of an application. To eliminate the unnecessary copying of data, the Sisal compiler uses both build-in-place and update-in-place analyses. These optimizations remove unnecessary array copy operations through compile-time analysis. Both build-in-place and update-in-place are based on hierarchical ragged arrays, i.e., the vector-of-vectors array model. Although this array model is convenient for certain applications, many optimizations are precluded, e.g., vectorization. To compensate for this deficiency, new languages, such as Sisal 2.0, have extended array models that allow for both high-level array operations to be performed and efficient implementations to be devised. In this article, we introduce a new method to perform update-in-place analysis that is applicable to arrays stored either in hierarchical or in contiguous storage. Consequently, the array model that is appropriate for an application can be selected without the loss of performance. Moreover, our analysis is more amenable for distributed memory and large software systems.

  20. Ceramic ball grid array package stress analysis

    Science.gov (United States)

    Badri, S. H. B. S.; Aziz, M. H. A.; Ong, N. R.; Sauli, Z.; Alcain, J. B.; Retnasamy, V.

    2017-09-01

The ball grid array (BGA), a form of chip scale package (CSP), was developed as one of the most advanced surface mount devices, which may be assembled by an ordinary surface mount process; solder ball bumps are used instead of plated nickel and gold (Ni/Au) bumps. Assembly and reliability of the BGA's printed circuit board (PCB), which is soldered by conventional surface mount technology, are considered in this study. The Ceramic Ball Grid Array (CBGA) is a rectangular or square-shaped ceramic package that uses solder balls for external electrical connections instead of leads or wires. The solder balls are arranged in an array or grid at the bottom of the ceramic package body. In this study, ANSYS software is used to investigate the stress on the package for 2-ball and 4-ball CBGA packages, with forces in the range of 1-3 N applied to the top of the die, the top of the substrate and the side of the substrate. The highest maximum stress was analyzed and the maximum equivalent stress was observed on the solder ball and the die. From the simulation results, the CBGA package with fewer solder balls experiences higher stress compared to the package with more solder balls. Therefore, a smaller number of solder balls on the CBGA package results in higher stress and critically affects the reliability of the solder balls themselves, the substrate and the die, which can lead to solder cracks and also die cracks.

  1. An Empirical Study of Precise Interprocedural Array Analysis

    Directory of Open Access Journals (Sweden)

    Michael Hind

    1994-01-01

    Full Text Available In this article we examine the role played by the interprocedural analysis of array accesses in the automatic parallelization of Fortran programs. We use the PTRAN system to provide measurements of several benchmarks to compare different methods of representing interprocedurally accessed arrays. We examine issues concerning the effectiveness of automatic parallelization using these methods and the efficiency of a precise summarization method.

  2. EzArray: A web-based highly automated Affymetrix expression array data management and analysis system

    Directory of Open Access Journals (Sweden)

    Zhu Yuelin

    2008-01-01

Full Text Available Abstract Background Though microarray experiments are very popular in life science research, managing and analyzing microarray data are still challenging tasks for many biologists. Most microarray programs require users to have sophisticated knowledge of mathematics, statistics and computer skills for usage. With accumulating microarray data deposited in public databases, easy-to-use programs to re-analyze previously published microarray data are in high demand. Results EzArray is a web-based Affymetrix expression array data management and analysis system for researchers who need to organize microarray data efficiently and get data analyzed instantly. EzArray organizes microarray data into projects that can be analyzed online with predefined or custom procedures. EzArray performs data preprocessing and detection of differentially expressed genes with statistical methods. All analysis procedures are optimized and highly automated so that even novice users with limited pre-knowledge of microarray data analysis can complete initial analysis quickly. Since all input files, analysis parameters, and executed scripts can be downloaded, EzArray provides maximum reproducibility for each analysis. In addition, EzArray integrates with Gene Expression Omnibus (GEO) and allows instantaneous re-analysis of published array data. Conclusion EzArray is a novel Affymetrix expression array data analysis and sharing system. EzArray provides easy-to-use tools for re-analyzing published microarray data and will help both novice and experienced users perform initial analysis of their microarray data from the location of data storage. We believe EzArray will be a useful system for facilities with microarray services and laboratories with multiple members involved in microarray data analysis. EzArray is freely available from http://www.ezarray.com/.

  3. Hybrid Information Flow Analysis for Programs with Arrays

    Directory of Open Access Journals (Sweden)

    Gergö Barany

    2016-07-01

Full Text Available Information flow analysis checks whether certain pieces of (confidential) data may affect the results of computations in unwanted ways and thus leak information. Dynamic information flow analysis adds instrumentation code to the target software to track flows at run time and raise alarms if a flow policy is violated; hybrid analyses combine this with preliminary static analysis. Using a subset of C as the target language, we extend previous work on hybrid information flow analysis that handled pointers to scalars. Our extended formulation handles arrays, pointers to array elements, and pointer arithmetic. Information flow through arrays of pointers is tracked precisely while arrays of non-pointer types are summarized efficiently. A prototype of our approach is implemented using the Frama-C program analysis and transformation framework. Work on a full machine-checked proof of the correctness of our approach using Isabelle/HOL is well underway; we present the existing parts and sketch the rest of the correctness argument.
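
A highly simplified dynamic taint-tracking model in Python (the paper itself targets a subset of C via Frama-C) can illustrate the array policy described above: arrays of references carry per-element taint, while arrays of scalars are summarized by a single taint flag.

```python
# Very simplified dynamic taint tracking in the spirit of the analysis described
# above: per-element taint for arrays of references, a single summary flag for
# arrays of scalars. This is an illustrative model, not the paper's C semantics.

class ScalarArray:
    """Array of non-pointer values, summarized by one taint bit."""
    def __init__(self, values):
        self.values = list(values)
        self.tainted = False              # summary for the whole array

    def store(self, i, value, value_tainted):
        self.values[i] = value
        self.tainted = self.tainted or value_tainted   # weak (summarizing) update

    def load(self, i):
        return self.values[i], self.tainted


class RefArray:
    """Array of references, tracked element by element."""
    def __init__(self, refs):
        self.refs = list(refs)
        self.taint = [False] * len(refs)  # precise per-element taint

    def store(self, i, ref, ref_tainted):
        self.refs[i] = ref
        self.taint[i] = ref_tainted       # strong update is possible here

    def load(self, i):
        return self.refs[i], self.taint[i]


secret = "s3cr3t"                          # confidential input (taint source)
scalars = ScalarArray([0, 0, 0])
scalars.store(1, len(secret), value_tainted=True)
refs = RefArray([None, None, None])
refs.store(2, secret, ref_tainted=True)

print("scalar load 0:", scalars.load(0))   # (0, True) -> imprecise but sound
print("ref load 0   :", refs.load(0))      # (None, False) -> precise
print("ref load 2   :", refs.load(2)[1])   # True -> flow of confidential data detected
```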

  4. Revisiting Needs Analysis: A Cornerstone for Business English Courses

    Directory of Open Access Journals (Sweden)

    Hélder Fanha Martins

    2017-03-01

Full Text Available Needs analysis is critical in aligning the content and methodology and in achieving relevance for the participants in a business English course. The theory of learning is derived from practice, where stakeholders have a role in structuring the course and encouraging students to develop critical skills such as observing, analysing, and evaluating communicative behaviour and learning (Holden, 1993). In this respect, needs analysis is directly concerned with the planning and design of the course content, with a focus on improving learning and training. Importantly, these activities are aligned to the students' needs (West, 1994). In this respect, this article focuses on different kinds of needs, which form part of the needs analysis concept. In so doing, it becomes easy to discuss the structure and administration of stakeholders' needs. The paper further discusses relevancy, motivation, and stakeholder involvement, as well as limitations of needs analysis. The needs analysis discussion revolves around business English courses.

  5. Hydraulic modeling support for conflict analysis: The Manayunk canal revisited

    International Nuclear Information System (INIS)

    Chadderton, R.A.; Traver, R.G.; Rao, J.N.

    1992-01-01

This paper presents a study which used a standard hydraulic computer model to generate detailed design information to support conflict analysis of a water resource use issue. As an extension of previous studies, the conflict analysis in this case included several scenarios for stability analysis, all of which reached the conclusion that compromising, shared access to the available water resources would result in the most benefits to society. This expected equilibrium outcome was found to maximize benefit-cost estimates. 17 refs., 1 fig., 2 tabs

  6. Gravitational wave detection and data analysis for pulsar timing arrays

    NARCIS (Netherlands)

    Haasteren, Rutger van

    2011-01-01

Long-term precise timing of Galactic millisecond pulsars holds great promise for measuring long-period (months-to-years) astrophysical gravitational waves. In this work we develop a Bayesian data analysis method for projects called pulsar timing arrays; projects aimed to detect these gravitational waves.

  7. Rapid prenatal diagnosis of cytogenetic abnormalities by array CGH analysis

    Science.gov (United States)

    Array CGH analysis has been shown to be highly accurate for rapid detection of chromosomal aneuploidies and submicroscopic deletions or duplications on fetal DNA samples in a clinical prenatal diagnostic setting. The objective of this study is to present our "post-validation phase" experience with ...

  8. Analysis and synthesis of (SAR) waveguide phased array antennas

    Science.gov (United States)

    Visser, H. J.

    1994-02-01

This report describes work performed under ESA contract No. 101 34/93/NL/PB. It starts with a literature study on dual-polarized waveguide radiators, resulting in the choice of the open-ended square waveguide. After a thorough description of the mode-matching infinite-waveguide-array analysis method - including finiteness effects - that forms the basis for all further described analysis and synthesis methods, the accuracy of the analysis software is validated by comparison with measurements on two realized antennas. These antennas have centered irises in the waveguide apertures and a dielectric wide-angle impedance matching sheet in front of the antenna. A synthesis method, using simulated annealing and downhill simplex, is described next, and different antenna designs, based on the analysis of a single element in an infinite array environment, are presented. Next, designs of subarrays are presented, showing the paramount importance of including the array environment in the design of a subarray. A microstrip patch waveguide exciter and a subarray feeding network are discussed, and the depth of the waveguide radiator is estimated. A rectangular grid array with waveguides of 2.5 cm depth, without irises and without a dielectric sheet, grouped in linear 8-element subarrays, is chosen.

  9. HTGR core seismic analysis using an array processor

    International Nuclear Information System (INIS)

    Shatoff, H.; Charman, C.M.

    1983-01-01

    A Floating Point Systems array processor performs nonlinear dynamic analysis of the high-temperature gas-cooled reactor (HTGR) core with significant time and cost savings. The graphite HTGR core consists of approximately 8000 blocks of various shapes which are subject to motion and impact during a seismic event. Two-dimensional computer programs (CRUNCH2D, MCOCO) can perform explicit step-by-step dynamic analyses of up to 600 blocks for time-history motions. However, use of two-dimensional codes was limited by the large cost and run times required. Three-dimensional analysis of the entire core, or even a large part of it, had been considered totally impractical. Because of the needs of the HTGR core seismic program, a Floating Point Systems array processor was used to enhance computer performance of the two-dimensional core seismic computer programs, MCOCO and CRUNCH2D. This effort began by converting the computational algorithms used in the codes to a form which takes maximum advantage of the parallel and pipeline processors offered by the architecture of the Floating Point Systems array processor. The subsequent conversion of the vectorized FORTRAN coding to the array processor required a significant programming effort to make the system work on the General Atomic (GA) UNIVAC 1100/82 host. These efforts were quite rewarding, however, since the cost of running the codes has been reduced approximately 50-fold and the time threefold. The core seismic analysis with large two-dimensional models has now become routine and extension to three-dimensional analysis is feasible. These codes simulate the one-fifth-scale full-array HTGR core model. This paper compares the analysis with the test results for sine-sweep motion

  10. Smoothsort revisited

    NARCIS (Netherlands)

    Bron, Coenraad; Hesselink, Wim H.

    1991-01-01

    A renewed analysis of Dijkstra's array sorting algorithm Smoothsort showed that the removal of a complexity improved the performance. The resulting algorithm has a worst-case behaviour of order N.log N. In the average case, it has more or less the same speed as heapsort. For arrays that are

  11. Nonparametric analysis of blocked ordered categories data: some examples revisited

    Directory of Open Access Journals (Sweden)

    O. Thas

    2006-08-01

Full Text Available Nonparametric analysis for general block designs can be given by using the Cochran-Mantel-Haenszel (CMH) statistics. We demonstrate this with four examples and note that several well-known nonparametric statistics are special cases of CMH statistics.
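
For the simplest stratified 2 x 2 x K case, the CMH statistic can be computed directly as below; the data are invented, and the ordered-categories generalizations used in the paper add row and column scores to this basic form.

```python
import numpy as np
from scipy.stats import chi2

# Cochran-Mantel-Haenszel test for K stratified 2x2 tables (invented data).
# Each table is [[a, b], [c, d]] for one block/stratum.
tables = np.array([
    [[12,  8], [ 6, 14]],
    [[ 9, 11], [ 5, 15]],
    [[15,  5], [ 9, 11]],
], dtype=float)

a = tables[:, 0, 0]
row1 = tables[:, 0, :].sum(axis=1)      # n_1k
row2 = tables[:, 1, :].sum(axis=1)      # n_2k
col1 = tables[:, :, 0].sum(axis=1)      # m_1k
col2 = tables[:, :, 1].sum(axis=1)      # m_2k
n = tables.sum(axis=(1, 2))             # N_k

expected_a = row1 * col1 / n
var_a = row1 * row2 * col1 * col2 / (n ** 2 * (n - 1))

cmh = (abs(a.sum() - expected_a.sum()) - 0.5) ** 2 / var_a.sum()   # with continuity correction
p_value = chi2.sf(cmh, df=1)
print(f"CMH statistic = {cmh:.3f}, p = {p_value:.3f}")
```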

  12. Computer-Based Interaction Analysis with DEGREE Revisited

    Science.gov (United States)

    Barros, B.; Verdejo, M. F.

    2016-01-01

    We review our research with "DEGREE" and analyse how our work has impacted the collaborative learning community since 2000. Our research is framed within the context of computer-based interaction analysis and the development of computer-supported collaborative learning (CSCL) tools. We identify some aspects of our work which have been…

  13. Controller Area Network (CAN) schedulability analysis : refuted, revisited and revised

    NARCIS (Netherlands)

    Davis, R.I.; Burns, A.; Bril, R.J.; Lukkien, J.J.

    2007-01-01

    Controller Area Network (CAN) is used extensively in automotive applications, with in excess of 400 million CAN enabled microcontrollers manufactured each year. In 1994 schedulability analysis was developed for CAN, showing how worst-case response times of CAN messages could be calculated and hence

  14. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

The Internet of Things is built based on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and helps with viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used in various applications and analyzed in the research literature, there are few discussions comparing them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold of shooting distance for converged cameras is 7 m. In addition, we design a camera array in our work that can be used as a parallel camera array as well as a converged camera array, and take some images and videos with it to identify the threshold.
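
A minimal pinhole-geometry comparison of on-sensor horizontal disparity for a parallel rig versus a converged (toed-in) rig is sketched below; the focal length, baseline and convergence distance are arbitrary assumptions, and the 7 m threshold quoted above is the authors' result, not something this toy model reproduces.

```python
import numpy as np

# Simplified pinhole model of horizontal disparity on the sensor for a
# two-camera rig: parallel cameras vs. cameras converged on a point at Zc.
# Parameters are arbitrary; only the qualitative distance dependence matters.
f = 0.035        # focal length, m
b = 0.10         # baseline between the two cameras, m
Zc = 5.0         # convergence distance of the toed-in rig, m

def disparity_parallel(Z):
    return f * b / Z                       # goes to 0 only as Z -> infinity

def disparity_converged(Z):
    return f * b * (1.0 / Z - 1.0 / Zc)    # zero at the convergence distance

for Z in (2.0, 5.0, 7.0, 10.0, 20.0):
    print(f"Z = {Z:5.1f} m: parallel {disparity_parallel(Z)*1e3:6.3f} mm, "
          f"converged {disparity_converged(Z)*1e3:+6.3f} mm")
```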

  15. Network and system diagrams revisited: Satisfying CEA requirements for causality analysis

    International Nuclear Information System (INIS)

    Perdicoulis, Anastassios; Piper, Jake

    2008-01-01

    Published guidelines for Cumulative Effects Assessment (CEA) have called for the identification of cause-and-effect relationships, or causality, challenging researchers to identify methods that can possibly meet CEA's specific requirements. Together with an outline of these requirements from CEA key literature, the various definitions of cumulative effects point to the direction of a method for causality analysis that is visually-oriented and qualitative. This article consequently revisits network and system diagrams, resolves their reported shortcomings, and extends their capabilities with causal loop diagramming methodology. The application of the resulting composite causality analysis method to three Environmental Impact Assessment (EIA) case studies appears to satisfy the specific requirements of CEA regarding causality. Three 'moments' are envisaged for the use of the proposed method: during the scoping stage, during the assessment process, and during the stakeholder participation process

  16. Constraints on inflation revisited. An analysis including the latest local measurement of the Hubble constant

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Rui-Yun [Northeastern University, Department of Physics, College of Sciences, Shenyang (China); Zhang, Xin [Northeastern University, Department of Physics, College of Sciences, Shenyang (China); Peking University, Center for High Energy Physics, Beijing (China)

    2017-12-15

We revisit the constraints on inflation models by using the current cosmological observations involving the latest local measurement of the Hubble constant (H_0 = 73.00 ± 1.75 km s^-1 Mpc^-1). We constrain the primordial power spectra of both scalar and tensor perturbations with the observational data including the Planck 2015 CMB full data, the BICEP2 and Keck Array CMB B-mode data, the BAO data, and the direct measurement of H_0. In order to relieve the tension between the local determination of the Hubble constant and the other astrophysical observations, we consider the additional parameter N_eff in the cosmological model. We find that, for the ΛCDM+r+N_eff model, the scale invariance is only excluded at the 3.3σ level, and ΔN_eff > 0 is favored at the 1.6σ level. Comparing the obtained 1σ and 2σ contours of (n_s, r) with the theoretical predictions of selected inflation models, we find that both the convex and the concave potentials are favored at the 2σ level, the natural inflation model is excluded at more than the 2σ level, the Starobinsky R^2 inflation model is only favored at around the 2σ level, and the spontaneously broken SUSY inflation model is now the most favored model. (orig.)

  17. Design and Analysis of MEMS Linear Phased Array

    Directory of Open Access Journals (Sweden)

    Guoxiang Fan

    2016-01-01

Full Text Available A structure of micro-electro-mechanical system (MEMS) linear phased array based on a “multi-cell” element is designed to increase the radiation sound pressure of a transducer working in bending vibration mode at high frequency. In order to more accurately predict the resonant frequency of an element, theoretical analysis of the dynamic equation of a fixed rectangular composite plate and finite element method simulation are adopted. The effects of the parameters in both the lateral and elevation directions on the three-dimensional beam directivity characteristics are comprehensively analyzed. The key parameters in the analysis include the “cell” number per element, “cell” size, “inter-cell” spacing, the number of elements, and the element width. The simulation results show that optimizing the linear array parameters in both the lateral and elevation directions can greatly improve the three-dimensional beam focusing for the MEMS linear phased array, which is obviously different from the traditional linear array.
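
A generic uniform-linear-array factor calculation shows how element count, pitch and steering angle shape the beam; it is not the finite-element model used in the paper, and the frequency, sound speed and spacing below are placeholders.

```python
import numpy as np

# Array factor of a uniform linear phased array (all parameters are placeholders).
c = 1500.0                      # m/s, assumed sound speed (water)
f = 2.0e6                       # assumed operating frequency, Hz
lam = c / f
d = lam / 2                     # element pitch
N = 16                          # number of elements
steer_deg = 20.0                # steering angle, degrees

k = 2 * np.pi / lam
theta = np.radians(np.linspace(-90.0, 90.0, 721))
steer = np.radians(steer_deg)

n = np.arange(N)
af = np.abs(np.exp(1j * k * d * np.outer(np.sin(theta) - np.sin(steer), n)).sum(axis=1)) / N
af_db = 20 * np.log10(np.maximum(af, 1e-6))

main_lobe = np.degrees(theta[af_db >= -3.0])        # only the main lobe exceeds -3 dB here
print(f"main lobe peak at {np.degrees(theta[np.argmax(af_db)]):+.1f} deg "
      f"(steered to {steer_deg:+.1f} deg)")
print(f"-3 dB beamwidth ~ {main_lobe.max() - main_lobe.min():.1f} deg")
```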

  18. Revisiting Ulchin 4 SGTR Accident - Analysis for EOP Improvement

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun-Hye; Lee, Wook-Jo; Jerng, Dong-Wook [Chung-Ang University, Seoul (Korea, Republic of)

    2016-10-15

The Steam Generator Tube Rupture (SGTR) is an accident in which a U-tube inside the SG is breached so that reactor coolant is released through the broken U-tube; it is one of the design basis accidents. In operating Nuclear Power Plants (NPPs), maintaining the integrity of the core and preventing radiation release are the most important objectives. Because of the risks, many researchers have studied the scenarios, impacts and ways to mitigate SGTR accidents. A study to provide an experimental database of aerosol particle retention and to develop models to support accident management interventions during SGTR was performed. Scaled-down models of the NPP were used for the experiments; MELCOR and SCDAP/RELAP5 were also used to simulate a design basis SGTR accident. This study had a major role in resolving uncertainties of various physical models for aerosol mechanical resuspension. Another study analyzed the SGTR accident for the System-integrated Modular Advanced Reactor (SMART). That analysis focused on the amount of break flow and used the TASS/SMRS code. It assumed that the maximum leak was generated, and found that high RCS pressure, low core-inlet coolant temperature, and a low break location in the SG cassette contributed to the leakage. Although the leakage was large, there was no direct release to the atmosphere because the pressure of the secondary loop was maintained below the safety relief valve set point. In the present analysis, the mitigation procedures for an SGTR occurring in the shutdown condition and in the full power condition are compared. In the shutdown condition, core uncovery would not take place within 16 hours whether or not the cooling procedures are performed; therefore, only the integrated amount of break flow needs to be considered. From this point of view, cooling through the intact SG only (case 3) is the best way to minimize the amount of break flow. In the full power condition, the core water level changes due to the high reactor power. The important thing to protect the NPP is to keep

  19. Revisiting corpus creation and analysis tools for translation tasks

    Directory of Open Access Journals (Sweden)

    Claudio Fantinuoli

    2016-06-01

    Full Text Available Many translation scholars have proposed the use of corpora to allow professional translators to produce high quality texts which read like originals. Yet, the diffusion of this methodology has been modest, one reason being the fact that software for corpora analyses have been developed with the linguist in mind, which means that they are generally complex and cumbersome, offering many advanced features, but lacking the level of usability and the specific features that meet translators’ needs. To overcome this shortcoming, we have developed TranslatorBank, a free corpus creation and analysis tool designed for translation tasks. TranslatorBank supports the creation of specialized monolingual corpora from the web; it includes a concordancer with a query system similar to a search engine; it uses basic statistical measures to indicate the reliability of results; it accesses the original documents directly for more contextual information; it includes a statistical and linguistic terminology extraction utility to extract the relevant terminology of the domain and the typical collocations of a given term. Designed to be easy and intuitive to use, the tool may help translation students as well as professionals to increase their translation quality by adhering to the specific linguistic variety of the target text corpus.

  1. ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.

    Science.gov (United States)

    Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles

    2018-04-19

    Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminary rejected in the early image processing step are recognized by running a second segmentation step. We

  2. Dual-Polarized Planar Phased Array Analysis for Meteorological Applications

    Directory of Open Access Journals (Sweden)

    Chen Pang

    2015-01-01

Full Text Available This paper presents a theoretical analysis of the accuracy requirements of the planar polarimetric phased array radar (PPPAR) in meteorological applications. Among many factors that contribute to the polarimetric biases, four factors are considered and analyzed in this study, namely, the polarization distortion due to the intrinsic limitation of a dual-polarized antenna element, the antenna pattern measurement error, the entire array patterns, and the imperfect horizontal and vertical channels. Two operation modes, the alternately transmitting and simultaneously receiving (ATSR) mode and the simultaneously transmitting and simultaneously receiving (STSR) mode, are discussed. For each mode, the polarimetric biases are formulated. As the STSR mode with orthogonal waveforms is similar to the ATSR mode, the analysis is mainly focused on the ATSR mode and the impacts of the bias sources on the measurement of polarimetric variables are investigated through Monte Carlo simulations. Some insights into the accuracy requirements are obtained and summarized.

  3. Data-flow Analysis of Programs with Associative Arrays

    Directory of Open Access Journals (Sweden)

    David Hauzar

    2014-05-01

Full Text Available Dynamic programming languages, such as PHP, JavaScript, and Python, provide built-in data structures including associative arrays and objects with similar semantics—object properties can be created at run-time and accessed via arbitrary expressions. While a high level of security and safety of applications written in these languages can be of particular importance (consider a web application storing sensitive data and providing its functionality worldwide), dynamic data structures pose significant challenges for data-flow analysis, making traditional static verification methods both unsound and imprecise. In this paper, we propose a sound and precise approach for value and points-to analysis of programs with associative-array-like data structures, upon which data-flow analyses can be built. We implemented our approach in a web-application domain—in an analyzer of PHP code.

  4. Improvements in analysis techniques for segmented mirror arrays

    Science.gov (United States)

    Michels, Gregory J.; Genberg, Victor L.; Bisson, Gary R.

    2016-08-01

The employment of actively controlled segmented mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues compared to that of monolithic mirror designs. The work presented here is a review of current capabilities and improvements in the methodology of the analysis of mechanically induced surface deformation of such systems. The recent improvements include the capability to differentiate surface deformation at the array and segment level. This differentiation, which allows surface deformation analysis at the individual segment level, offers useful insight into the mechanical behavior of the segments that is unavailable from analysis solely at the parent array level. In addition, the capability to characterize the full displacement vector deformation of collections of points allows analysis of mechanical disturbance predictions of assembly interfaces relative to other assembly interfaces. This capability, called racking analysis, allows engineers to develop designs for segment-to-segment phasing performance in assembly integration, 0g release, and thermal stability of operation. The performance predicted by racking has the advantage of being comparable to the measurements used in assembly of hardware. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.

  5. CGHPRO – A comprehensive data analysis tool for array CGH

    Directory of Open Access Journals (Sweden)

    Lenzner Steffen

    2005-04-01

Full Text Available Abstract Background Array CGH (Comparative Genomic Hybridisation) is a molecular cytogenetic technique for the genome-wide detection of chromosomal imbalances. It is based on the co-hybridisation of differentially labelled test and reference DNA onto arrays of genomic BAC clones, cDNAs or oligonucleotides, and after correction for various intervening variables, loss or gain in the test DNA can be indicated from spots showing aberrant signal intensity ratios. Now that this technique is no longer confined to highly specialized laboratories and is entering the realm of clinical application, there is a need for a user-friendly software package that facilitates estimates of DNA dosage from raw signal intensities obtained by array CGH experiments, and which does not depend on a sophisticated computational environment. Results We have developed a user-friendly and versatile tool for the normalization, visualization, breakpoint detection and comparative analysis of array-CGH data. CGHPRO is a stand-alone JAVA application that guides the user through the whole process of data analysis. The import option for image analysis data covers several data formats, but users can also customize their own data formats. Several graphical representation tools assist in the selection of the appropriate normalization method. Intensity ratios of each clone can be plotted in a size-dependent manner along the chromosome ideograms. The interactive graphical interface offers the chance to explore the characteristics of each clone, such as the involvement of the clone's sequence in segmental duplications. Circular Binary Segmentation and unsupervised Hidden Markov Model algorithms facilitate objective detection of chromosomal breakpoints. The storage of all essential data in a back-end database allows the simultaneous comparative analysis of different cases. The various display options facilitate also the definition of shortest regions of overlap and simplify the
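
The core quantity handled by such tools is the normalized log2 test/reference ratio per clone. A minimal sketch with simulated intensities is given below, using median normalization and a naive per-clone threshold call in place of the Circular Binary Segmentation or Hidden Markov Model algorithms mentioned above.

```python
import numpy as np

# Simulated array-CGH clones: test/reference intensities with one duplicated region.
rng = np.random.default_rng(3)
n_clones = 200
test = rng.normal(1000, 80, n_clones)
ref = rng.normal(1000, 80, n_clones)
test[120:150] *= 1.5                      # simulated single-copy gain

log2_ratio = np.log2(test / ref)
log2_ratio -= np.median(log2_ratio)       # global median normalization

# Naive per-clone call (real tools use CBS or HMM segmentation instead).
gain = log2_ratio > 0.3
loss = log2_ratio < -0.3
print(f"clones called gained: {gain.sum()}, lost: {loss.sum()}")
print(f"gained clone indices: {np.flatnonzero(gain)[:10]} ...")
```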

  6. The On-Site Analysis of the Cherenkov Telescope Array

    CERN Document Server

    Bulgarelli, Andrea; Zoli, Andrea; Aboudan, Alessio; Rodríguez-Vázquez, Juan José; De Cesare, Giovanni; De Rosa, Adriano; Maier, Gernot; Lyard, Etienne; Bastieri, Denis; Lombardi, Saverio; Tosti, Gino; Bergamaschi, Sonia; Beneventano, Domenico; Lamanna, Giovanni; Jacquemier, Jean; Kosack, Karl; Antonelli, Lucio Angelo; Boisson, Catherine; Borkowski, Jerzy; Buson, Sara; Carosi, Alessandro; Conforti, Vito; Colomé, Pep; Reyes, Raquel de los; Dumm, Jon; Evans, Phil; Fortson, Lucy; Fuessling, Matthias; Gotz, Diego; Graciani, Ricardo; Gianotti, Fulvio; Grandi, Paola; Hinton, Jim; Humensky, Brian; Inoue, Susumu; Knödlseder, Jürgen; Flour, Thierry Le; Lindemann, Rico; Malaguti, Giuseppe; Markoff, Sera; Marisaldi, Martino; Neyroud, Nadine; Nicastro, Luciano; Ohm, Stefan; Osborne, Julian; Oya, Igor; Rodriguez, Jerome; Rosen, Simon; Ribo, Marc; Tacchini, Alessandro; Schüssler, Fabian; Stolarczyk, Thierry; Torresi, Eleonora; Testa, Vincenzo; Wegner, Peter

    2015-01-01

The Cherenkov Telescope Array (CTA) observatory will be one of the largest ground-based very high-energy gamma-ray observatories. The On-Site Analysis will be the first CTA scientific analysis of data acquired from the array of telescopes, at both the northern and southern sites. The On-Site Analysis will have two pipelines: the Level-A pipeline (also known as Real-Time Analysis, RTA) and the Level-B one. The RTA performs data quality monitoring and must be able to issue automated alerts on variable and transient astrophysical sources within 30 seconds from the last acquired Cherenkov event that contributes to the alert, with a sensitivity not worse than the one achieved by the final pipeline by more than a factor of 3. The Level-B Analysis has a better sensitivity (not worse than the final one by more than a factor of 2) and the results should be available within 10 hours from the acquisition of the data: for this reason this analysis could be performed at the end of an observation or the next morning. The latency (in part...

  7. Joint Analysis of BICEP2/Keck Array and Planck Data

    DEFF Research Database (Denmark)

    Ade, P. A. R.; Aghanim, N.; Ahmed, Z.

    2015-01-01

We report the results of a joint analysis of data from BICEP2/Keck Array and Planck. BICEP2 and Keck Array have observed the same approximately 400 deg² patch of sky centered on RA 0 h, Dec. -57.5°. The combined maps reach a depth of 57 nK deg in Stokes Q and U in a band centered at 150 GHz. Planck has observed the full sky in polarization in seven frequency bands from 30 to 353 GHz, but much less deeply in any given region. We fit the single- and cross-frequency power spectra at frequencies ≥ 150 GHz to a lensed-ΛCDM model that includes dust and a possible contribution from inflationary gravitational waves (as parametrized by the tensor-to-scalar ratio r), using a prior on the frequency spectral behavior of polarized dust emission from previous Planck analysis of other regions of the sky. We ... present an alternative analysis which is similar to a map-based cleaning of the dust contribution, and show that this gives similar constraints. The final result is expressed as a likelihood curve for r, and yields an upper limit r_0.05 < 0.12 at 95% confidence.

  8. Cluster Computing For Real Time Seismic Array Analysis.

    Science.gov (United States)

    Martini, M.; Giudicepietro, F.

    A seismic array is an instrument composed of a dense distribution of seismic sensors that allows the directional properties of the wavefield (slowness or wavenumber vector) radiated by a seismic source to be measured. Over the last years arrays have been widely used in different fields of seismological research. In particular they are applied in the investigation of seismic sources on volcanoes, where they can be successfully used for studying the volcanic microtremor and long-period events which are critical for getting information on the evolution of volcanic systems. For this reason arrays could be usefully employed for volcano monitoring; however, the huge amount of data produced by this type of instrument and the processing techniques, which are quite time consuming, have limited their potential for this application. In order to favour a direct application of array techniques to continuous volcano monitoring, we designed and built a small PC cluster able to compute in near real time the kinematic properties of the wavefield (slowness or wavenumber vector) produced by local seismic sources. The cluster is composed of 8 Intel Pentium-III bi-processor PCs working at 550 MHz, and has 4 Gigabytes of RAM memory. It runs under the Linux operating system. The developed analysis software package is based on the Multiple SIgnal Classification (MUSIC) algorithm and is written in Fortran. The message-passing part is based upon the LAM programming environment package, an open-source implementation of the Message Passing Interface (MPI). The developed software system includes modules devoted to receiving data via the internet and graphical applications for the continuous display of the processing results. The system has been tested with a data set collected during a seismic experiment conducted on Etna in 1999, when two dense seismic arrays were deployed on the northeast and the southeast flanks of this volcano. A real time continuous acquisition system has been simulated by
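
    The core of the processing described above is the MUSIC estimate of the slowness vector. The sketch below is a minimal narrowband MUSIC scan in Python on synthetic data, with an invented array geometry, frequency and noise level; the actual system is written in Fortran with MPI and operates on continuous recordings.

```python
# Minimal narrowband MUSIC sketch for estimating the slowness vector of a plane
# wave crossing a small-aperture array; synthetic data, illustrative geometry.
import numpy as np

rng = np.random.default_rng(1)
f = 3.0                                    # analysis frequency, Hz (assumed)
coords = rng.uniform(-0.75, 0.75, (8, 2))  # 8 sensors within ~1.5 km (x, y in km)
s_true = np.array([0.25, -0.15])           # true slowness vector, s/km

# Synthetic narrowband snapshots: plane wave plus complex noise.
n_snap = 200
t = rng.uniform(0, 10, n_snap)
delays = coords @ s_true                           # travel-time delays (s)
signal = np.exp(2j * np.pi * f * (t[None, :] - delays[:, None]))
data = signal + 0.3 * (rng.standard_normal((8, n_snap)) +
                       1j * rng.standard_normal((8, n_snap)))

# Sample covariance and noise subspace (one plane-wave source assumed).
R = data @ data.conj().T / n_snap
eigval, eigvec = np.linalg.eigh(R)
noise_sub = eigvec[:, :-1]                         # drop the largest eigenvalue

# Scan a slowness grid and keep the peak of the MUSIC spectrum.
grid = np.linspace(-0.5, 0.5, 201)
best, best_s = -np.inf, None
for sx in grid:
    for sy in grid:
        a = np.exp(-2j * np.pi * f * (coords @ np.array([sx, sy])))
        p = 1.0 / np.real(a.conj() @ noise_sub @ noise_sub.conj().T @ a)
        if p > best:
            best, best_s = p, (sx, sy)
print("estimated slowness (s/km):", best_s, "true:", tuple(s_true))
```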

  9. Balanced into array : genome-wide array analysis in 54 patients with an apparently balanced de novo chromosome rearrangement and a meta-analysis

    NARCIS (Netherlands)

    Feenstra, Ilse; Hanemaaijer, Nicolien; Sikkema-Raddatz, Birgit; Yntema, Helger; Dijkhuizen, Trijnie; Lugtenberg, Dorien; Verheij, Joke; Green, Andrew; Hordijk, Roel; Reardon, William; de Vries, Bert; Brunner, Han; Bongers, Ernie; de Leeuw, Nicole; van Ravenswaaij-Arts, Conny

    2011-01-01

    High-resolution genome-wide array analysis enables detailed screening for cryptic and submicroscopic imbalances of microscopically balanced de novo rearrangements in patients with developmental delay and/or congenital abnormalities. In this report, we added the results of genome-wide array analysis

  10. Stochastic resolution analysis of co-prime arrays in radar

    NARCIS (Netherlands)

    Pribic, R; Coutiño Minguez, M.A.; Leus, G.J.T.

    2016-01-01

    Resolution from co-prime arrays and from a full ULA of the size equal to the virtual size of co-prime arrays is investigated. We take into account not only the resulting beam width but also the fact that fewer measurements are acquired by co-prime arrays. This fact is relevant in compressive

  11. Sensitivity Analysis of WEC Array Layout Parameters Effect on the Power Performance

    DEFF Research Database (Denmark)

    Ruiz, Pau Mercadé; Ferri, Francesco; Kofoed, Jens Peter

    2015-01-01

    This study assesses the effect that the array layout choice has on the power performance. To this end, a sensitivity analysis is carried out with six array layout parameters, as the simulation inputs, the array power performance (q-factor), as the simulation output, and a simulation model special...

  12. Analysis of the modal behavior of an antiguide diode laser array with Talbot filter

    NARCIS (Netherlands)

    van Eijk, P.D.; van Eijk, Pieter D.; Reglat, Muriel; Vassilief, Georges; Krijnen, Gijsbertus J.M.; Driessen, A.; Mouthaan, A.J.

    An analysis of the filtering of the array modes in a resonant optical waveguide (ROW) array of antiguides by a diffractive spatial filter (a Talbot filter) is presented. A dispersion relation is derived for the array modes, allowing the calculation of the field distribution. The filtering is

  13. Analysis and Simulation of Multi-target Echo Signals from a Phased Array Radar

    OpenAIRE

    Jia Zhen; Zhou Rui

    2017-01-01

    The construction of digital radar simulation systems has been a research hotspot of the radar field. This paper focuses on theoretical analysis and simulation of multi-target echo signals produced in a phased array radar system, and constructs an array antenna element and a signal generation environment. The antenna element is able to simulate planar arrays and optimizes these arrays by adding window functions. And the signal environment can model and simulate radar transmission signals, rada...

  14. THE MURCHISON WIDEFIELD ARRAY 21 cm POWER SPECTRUM ANALYSIS METHODOLOGY

    Energy Technology Data Exchange (ETDEWEB)

    Jacobs, Daniel C.; Beardsley, A. P.; Bowman, Judd D. [Arizona State University, School of Earth and Space Exploration, Tempe, AZ 85287 (United States); Hazelton, B. J.; Sullivan, I. S.; Barry, N.; Carroll, P. [University of Washington, Department of Physics, Seattle, WA 98195 (United States); Trott, C. M.; Pindor, B.; Briggs, F.; Gaensler, B. M. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia); Dillon, Joshua S.; Oliveira-Costa, A. de; Ewall-Wice, A.; Feng, L. [MIT Kavli Institute for Astrophysics and Space Research, Cambridge, MA 02139 (United States); Pober, J. C. [Brown University, Department of Physics, Providence, RI 02912 (United States); Bernardi, G. [Department of Physics and Electronics, Rhodes University, Grahamstown 6140 (South Africa); Cappallo, R. J.; Corey, B. E. [MIT Haystack Observatory, Westford, MA 01886 (United States); Emrich, D., E-mail: daniel.c.jacobs@asu.edu [International Centre for Radio Astronomy Research, Curtin University, Perth, WA 6845 (Australia); and others

    2016-07-10

    We present the 21 cm power spectrum analysis approach of the Murchison Widefield Array Epoch of Reionization project. In this paper, we compare the outputs of multiple pipelines for the purpose of validating statistical limits on cosmological hydrogen at redshifts between 6 and 12. Multiple independent data calibration and reduction pipelines are used to make power spectrum limits on a fiducial night of data. Comparing the outputs of imaging and power spectrum stages highlights differences in calibration, foreground subtraction, and power spectrum calculation. The power spectra found using these different methods span a space defined by the various tradeoffs between speed, accuracy, and systematic control. Lessons learned from comparing the pipelines range from the algorithmic to the prosaically mundane; all demonstrate the many pitfalls of neglecting reproducibility. We briefly discuss the way these different methods attempt to handle the question of evaluating a significant detection in the presence of foregrounds.

  15. Revisiting Okun's Relationship

    NARCIS (Netherlands)

    Dixon, R.; Lim, G.C.; van Ours, Jan

    2016-01-01

    Our paper revisits Okun's relationship between observed unemployment rates and output gaps. We include in the relationship the effect of labour market institutions as well as age and gender effects. Our empirical analysis is based on 20 OECD countries over the period 1985-2013. We find that the
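
    A minimal sketch of an Okun-type regression is shown below, fitting the change in unemployment to the output gap by ordinary least squares on simulated data; the institutional, age and gender controls and the panel structure used in the paper are omitted, and the coefficient values are assumptions.

```python
# Minimal sketch of an Okun-type regression: change in unemployment rate on the
# output gap, with synthetic data standing in for the OECD panel.
import numpy as np

rng = np.random.default_rng(2)
n = 29 * 20                                   # ~29 years x 20 countries
output_gap = rng.normal(0.0, 2.0, n)          # percent of potential output
du = 0.3 - 0.45 * output_gap + rng.normal(0, 0.5, n)   # assumed "true" Okun slope

X = np.column_stack([np.ones(n), output_gap])
beta, *_ = np.linalg.lstsq(X, du, rcond=None)
print(f"intercept = {beta[0]:.3f}, Okun coefficient = {beta[1]:.3f}")
```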

  16. Revisiting the Okun relationship

    NARCIS (Netherlands)

    Dixon, R. (Robert); Lim, G.C.; J.C. van Ours (Jan)

    2017-01-01

    textabstractOur article revisits the Okun relationship between observed unemployment rates and output gaps. We include in the relationship the effect of labour market institutions as well as age and gender effects. Our empirical analysis is based on 20 OECD countries over the period 1985–2013. We

  17. Blind Time-Frequency Analysis for Source Discrimination in Multisensor Array Processing

    National Research Council Canada - National Science Library

    Amin, Moeness

    1999-01-01

    .... We have clearly demonstrated, through analysis and simulations, the offerings of time-frequency distributions in solving key problems in sensor array processing, including direction finding, source...

  18. Seismic Background Noise Analysis of BRTR (PS-43) Array

    Science.gov (United States)

    Ezgi Bakir, Mahmure; Meral Ozel, Nurcan; Umut Semin, Korhan

    2015-04-01

    The seismic background noise variation of the BRTR array, composed of two sub-arrays located in Ankara and in Ankara-Keskin, has been investigated by calculating Power Spectral Densities and Probability Density Functions for seasonal and diurnal noise variations between 2005 and 2011. PSDs were computed within the frequency range of 100 s - 10 Hz. The results show a slight change in noise conditions with time and location. In particular, noise level changes were observed at 3-5 Hz in the diurnal variations at the Keskin array, and there is a 5-7 dB difference between day and night time in the cultural noise band (1-10 Hz). On the other hand, the noise levels of the medium-period array are higher in the 1-2 Hz band than those of the short-period array. High noise levels were observed during daytime working hours compared to night-time in the cultural noise band. The seasonal background noise variation at both sites also shows very similar properties. Since these stations are borehole instruments and away from the coasts, only a small change in noise levels caused by microseisms was observed. A comparison between the Keskin short-period array and the Ankara medium-period array shows that the Keskin array is quieter than the Ankara array.
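
    The sketch below reproduces the flavour of this PSD-based noise characterisation on a synthetic trace using Welch's method and a percentile summary over the cultural-noise band; the sampling rate, window lengths and signal are assumptions, not the BRTR processing parameters.

```python
# Sketch of PSD/PDF-style noise characterisation on a synthetic seismic trace.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
fs = 40.0                                    # sampling rate, Hz (assumed)
t = np.arange(0, 3600, 1 / fs)               # one hour of data
trace = (rng.standard_normal(t.size)         # broadband background noise
         + 0.5 * np.sin(2 * np.pi * 3.0 * t) * (np.sin(2 * np.pi * t / 600) > 0))

# Power spectral densities of consecutive 10-minute windows.
win = int(600 * fs)
psds = []
for start in range(0, trace.size - win + 1, win):
    freqs, pxx = welch(trace[start:start + win], fs=fs, nperseg=4096)
    psds.append(10 * np.log10(pxx))          # dB relative to unit amplitude
psds = np.array(psds)

# Percentile summary (a crude stand-in for a probability density function plot).
band = (freqs >= 1) & (freqs <= 10)          # cultural-noise band, 1-10 Hz
for p in (10, 50, 90):
    print(f"{p}th percentile PSD in 1-10 Hz: {np.percentile(psds[:, band], p):.1f} dB")
```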

  19. arrayCGHbase: an analysis platform for comparative genomic hybridization microarrays

    Directory of Open Access Journals (Sweden)

    Moreau Yves

    2005-05-01

    Full Text Available Abstract Background The availability of the human genome sequence as well as the large number of physically accessible oligonucleotides, cDNA, and BAC clones across the entire genome has triggered and accelerated the use of several platforms for analysis of DNA copy number changes, amongst others microarray comparative genomic hybridization (arrayCGH). One of the challenges inherent to this new technology is the management and analysis of large numbers of data points generated in each individual experiment. Results We have developed arrayCGHbase, a comprehensive analysis platform for arrayCGH experiments consisting of a MIAME (Minimal Information About a Microarray Experiment) supportive database using MySQL underlying a data mining web tool, to store, analyze, interpret, compare, and visualize arrayCGH results in a uniform and user-friendly format. Owing to its flexible design, arrayCGHbase is compatible with all existing and forthcoming arrayCGH platforms. Data can be exported in a multitude of formats, including BED files to map copy number information on the genome using the Ensembl or UCSC genome browser. Conclusion ArrayCGHbase is a web based and platform independent arrayCGH data analysis tool that allows users to access the analysis suite through the internet or a local intranet after installation on a private server. ArrayCGHbase is available at http://medgen.ugent.be/arrayCGHbase/.

  20. Revisiting the Gun Ownership and Violence Link; a multi- level analysis of victimisation survey data.

    NARCIS (Netherlands)

    van Kesteren, J.N.

    2014-01-01

    The link between gun ownership and victimisation by violent crime remains one of the most contested issues in criminology. Some authors claim that high gun availability facilitates serious violence. Others claim that gun ownership prevents crime. This article revisits these issues using individual and

  1. Crosstalk analysis of silicon-on-insulator nanowire-arrayed waveguide grating

    International Nuclear Information System (INIS)

    Li Kai-Li; An Jun-Ming; Zhang Jia-Shun; Wang Yue; Wang Liang-Liang; Li Jian-Guang; Wu Yuan-Da; Yin Xiao-Jie; Hu Xiong-Wei

    2016-01-01

    The factors influencing the crosstalk of silicon-on-insulator (SOI) nanowire arrayed waveguide gratings (AWGs) are analyzed using the transfer function method. The analysis shows that wider and thicker arrayed waveguides, fracture on the outer side of the arrayed waveguides, and larger channel spacing could mitigate the deterioration of crosstalk. SOI nanowire AWGs with different arrayed waveguide widths are fabricated by using deep ultraviolet lithography (DUV) and inductively coupled plasma etching (ICP) technology. The measurement results show that the crosstalk performance is improved by about 7 dB by adopting an 800 nm arrayed waveguide width. (paper)

  2. Read margin analysis of crossbar arrays using the cell-variability-aware simulation method

    Science.gov (United States)

    Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon

    2018-02-01

    This paper proposes a new concept of read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered to predict the read margin characteristic of the crossbar array because the read margin depends on the number of word lines and bit lines. However, an excessively long CPU time is required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulation is also highly efficient in analyzing the characteristics of the crossbar memory array considering the statistical variations in the cell characteristics.
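
    A much-simplified, variability-aware read-margin estimate in the spirit of the approach above is sketched below: each cell resistance is drawn from a lognormal distribution and the worst-case margin of a voltage-divider read is found by Monte Carlo. The circuit model, bias scheme and parameter values are assumptions, and the full sneak-path network solved by the authors' MATLAB simulator is reduced to a lumped leakage term.

```python
# Simplified cell-variability-aware read-margin estimate for a resistive crossbar:
# the selected cell forms a divider with a sense resistor, while unselected cells
# on the same bit line contribute a lumped sneak-path leakage. Illustrative only.
import numpy as np

rng = np.random.default_rng(4)
V_READ, R_LOAD = 1.0, 10e3                 # read voltage (V) and sense resistor (ohm)
R_LRS, R_HRS, SIGMA = 20e3, 2e6, 0.25      # nominal cell resistances and log-variability
N_ROWS = 64                                # word lines sharing the selected bit line
TRIALS = 20000

def sensed_voltage(selected_is_lrs):
    r_cell = rng.lognormal(np.log(R_LRS if selected_is_lrs else R_HRS), SIGMA, TRIALS)
    # Unselected cells (assumed half-selected at V_READ/2) leak in parallel.
    r_sneak = rng.lognormal(np.log(R_HRS), SIGMA, (TRIALS, N_ROWS - 1))
    i_cell = V_READ / (r_cell + R_LOAD)
    i_leak = (V_READ / 2) / (r_sneak + R_LOAD)
    return R_LOAD * (i_cell + i_leak.sum(axis=1))

v_lrs = sensed_voltage(True)
v_hrs = sensed_voltage(False)
margin = np.percentile(v_lrs, 1) - np.percentile(v_hrs, 99)
print(f"worst-case read margin over {TRIALS} trials: {margin * 1e3:.1f} mV")
```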

  3. Physical analysis for designing nested-wire arrays on Z-pinch implosion

    International Nuclear Information System (INIS)

    Yang Zhenhua; Liu Quan; Ding Ning; Ning Cheng

    2005-01-01

    Z-pinch experiments have demonstrated that the X-ray power increases by 40% with a nested-wire array compared with a single-layered wire array. The design of the nested-wire array on the Z accelerator is studied through the implosion dynamics and the growth of RT instabilities. The analysis shows that the nested-wire array does not produce more total X-ray radiation energy than the single-layered wire array, but it obviously increases the X-ray power. The radius of the outer array of the nested-wire array could be determined based on the radius of the optimized single-layered array. The masses of the outer and inner arrays could be determined by the implosion time of the nested-wire array, which is roughly the same as that of the single-layered wire array. Some suggestions are put forward which may be helpful in nested-wire array design for Z-pinch experiments. (authors)

  4. Thermal analysis for folded solar array of spacecraft in orbit

    International Nuclear Information System (INIS)

    Yang, W.H.; Cheng, H.E.; Cai, A.

    2004-01-01

    The combined radiation-conduction heat transfer in a folded solar array was considered as three-dimensional anisotropic conduction without an inner heat source. The three-dimensional equivalent conductivities in the cell plate were obtained. The special discrete-equation coefficients of the nodes on the surfaces of adjacent cell plates were deduced by utilizing the simplified radiation network among the two adjacent cell plate surfaces and the deep cold space. All the factors influencing the temperature response of the folded solar array were considered carefully. The SIP method was used to solve the discrete equations. By comparing the calculation results for three cases, the temperature response and the maximum average temperature difference of the folded solar array were obtained for the period between radome jettison of the launch vehicle and deployment of the folded solar array. The obtained result is a valuable reference for the selection of the launch time of the spacecraft

  5. Performance analysis of solar cell arrays in concentrating light intensity

    International Nuclear Information System (INIS)

    Xu Yongfeng; Li Ming; Lin Wenxian; Wang Liuling; Xiang Ming; Zhang Xinghua; Wang Yunfeng; Wei Shengxian

    2009-01-01

    The performance of a concentrating photovoltaic/thermal system is investigated by experiment and simulation. The results show that the I-V curve of the GaAs cell array is better than that of the crystalline silicon solar cell arrays, and the exergy produced by the GaAs solar cell array, at 9.51% electrical efficiency, can reach 68.93% of that of the photovoltaic/thermal system. So improving the efficiency of the solar cell arrays can provide more exergy and upgrade the system value. At the same time, factors affecting the solar cell arrays, such as series resistance, temperature and solar irradiance, have also been analyzed. The output performance of a solar cell array with lower series resistance is better, and the working temperature has a negative impact on the voltage under concentrated light intensity. The output power has a -20 W/V coefficient and so a cooling fluid must be used. Both heat energy and electrical power are then obtained with a solar trough concentrating photovoltaic/thermal system. (semiconductor devices)
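
    To make the role of series resistance and temperature concrete, the sketch below evaluates a standard single-diode I-V model for a series string of cells; all parameter values are illustrative assumptions rather than those of the GaAs or silicon arrays measured above.

```python
# Single-diode model sketch of a solar cell array I-V curve, illustrating how
# series resistance and temperature shape the output. Parameter values are assumed.
import numpy as np
from scipy.optimize import brentq

Q, K = 1.602e-19, 1.381e-23       # electron charge (C), Boltzmann constant (J/K)

def iv_curve(v_points, i_ph=3.0, i_0=1e-7, n=1.3, r_s=0.05, n_cells=36, temp_c=45.0):
    """Current at each terminal voltage for a series string of identical cells."""
    vt = n * K * (temp_c + 273.15) / Q * n_cells    # modified thermal voltage of the string
    def residual(i, v):
        return i_ph - i_0 * (np.exp((v + i * r_s) / vt) - 1.0) - i
    # residual() is strictly decreasing in i, so a wide bracket always contains the root.
    return np.array([brentq(residual, -10.0, i_ph + 1.0, args=(v,)) for v in v_points])

v = np.linspace(0.0, 23.0, 200)
i = iv_curve(v)
p = v * i
print(f"short-circuit current ~{i[0]:.2f} A, "
      f"max power ~{p.max():.1f} W at {v[np.argmax(p)]:.1f} V")
```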

  6. Reliability analysis of the solar array based on Fault Tree Analysis

    International Nuclear Information System (INIS)

    Wu Jianing; Yan Shaoze

    2011-01-01

    The solar array is an important device used in the spacecraft, which influences the quality of in-orbit operation of the spacecraft and even the launches. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structural importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of the fault, so limiting damage is significant to prevent faults. Furthermore, recommendations for improving reliability associated with damage limitation are discussed, which can be used for the redesigning of the solar array and the reliability growth planning.
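
    The FTA calculation itself reduces to combining basic-event probabilities through AND/OR gates. The sketch below shows that arithmetic for a hypothetical hinge subtree under an independence assumption; the gate structure and probabilities are invented stand-ins, not the DFH-3 fault tree.

```python
# Minimal fault-tree evaluation sketch: top-event probability from basic-event
# probabilities, assuming independent events. Gates and numbers are hypothetical.
def p_or(*probs):
    """Probability that at least one independent basic event occurs."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*probs):
    """Probability that all independent basic events occur."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical basic events for a hinge subsystem.
seal_fault     = 1e-3
spring_torque  = 5e-4   # insufficient locking-spring torque
thermal_damage = 2e-4
friction_jam   = 8e-4

hinge_fails = p_or(seal_fault, p_and(spring_torque, friction_jam), thermal_damage)
# Top event: deployment fails if either of two (assumed identical) hinges fails.
top_event = p_or(hinge_fails, hinge_fails)
print(f"hinge failure probability: {hinge_fails:.2e}, top event: {top_event:.2e}")
```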

  7. Reliability analysis of the solar array based on Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wu Jianing; Yan Shaoze, E-mail: yansz@mail.tsinghua.edu.cn [State Key Laboratory of Tribology, Department of Precision Instruments and Mechanology, Tsinghua University,Beijing 100084 (China)

    2011-07-19

    The solar array is an important device used in the spacecraft, which influences the quality of in-orbit operation of the spacecraft and even the launches. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structural importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of the fault, so limiting damage is significant to prevent faults. Furthermore, recommendations for improving reliability associated with damage limitation are discussed, which can be used for the redesigning of the solar array and the reliability growth planning.

  8. Vrancea seismic source analysis using a small-aperture array

    International Nuclear Information System (INIS)

    Popescu, E.; Popa, M.; Radulian, M.; Placinta, A.O.

    2005-01-01

    A small-aperture seismic array (BURAR) was installed in 1999 in the northern part of the Romanian territory (Bucovina area). Since then, the array has been in operation under a joint cooperation programme between Romania and the USA. The array consists of 10 stations installed in boreholes (nine short-period instruments and one broadband instrument) with sufficiently high sensitivity to properly detect earthquakes generated in the Vrancea subcrustal domain (at about 250 km epicentral distance) with magnitude Mw below 3. Our main purpose is to investigate and calibrate the source parameters of the Vrancea intermediate-depth earthquakes using specific techniques provided by the BURAR array data. Forty earthquakes with magnitudes between 2.9 and 6.0 were selected, including the recent events of September 27, 2004 (45.70° N, 26.45° E, h = 166 km, Mw = 4.7), October 27, 2004 (45.84° N, 26.63° E, h = 105 km, Mw = 6.0) and May 14, 2005 (45.66° N, 26.52° E, h = 146 km, Mw = 5.1), which are the best ever recorded earthquakes on the Romanian territory. Empirical Green's function deconvolution and spectral ratio methods are applied for pairs of collocated events with similar focal mechanisms. Stability tests are performed for the retrieved source time function using the array elements. Empirical scaling and calibration relationships are also determined. Our study shows the capability of the BURAR array to determine the source parameters of the Vrancea intermediate-depth earthquakes as a stand-alone station and proves that the recordings of this array alone provide reliable and useful tools to efficiently constrain the source parameters and consequently the source scaling properties. (authors)
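
    The spectral ratio method mentioned above can be illustrated with purely synthetic spectra: when two collocated events share the same path and site response, that response cancels in the ratio and the low-frequency plateau approaches the seismic moment ratio. The source model, path term and noise level below are assumptions.

```python
# Sketch of the spectral-ratio idea: two collocated events with similar focal
# mechanisms share the same path/site response, which cancels in the ratio of
# their amplitude spectra. Everything here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(5)
freqs = np.fft.rfftfreq(4096, d=0.01)                 # 100 Hz sampling assumed

def brune_spectrum(moment, corner_hz):
    """Brune-type source spectrum: flat below the corner, falling as f^-2 above it."""
    return moment / (1.0 + (freqs / corner_hz) ** 2)

path = np.exp(-freqs / 15.0) * (1.0 + 0.2 * np.cos(freqs))   # shared path/site term
def noise():
    return 1e-3 * np.abs(rng.standard_normal(freqs.size))

# "Recorded" amplitude spectra: source x path + noise.
big = brune_spectrum(100.0, 1.0) * path + noise()     # larger event, low corner
small = brune_spectrum(1.0, 6.0) * path + noise()     # small empirical Green's function

ratio = big / small
band = (freqs > 0.05) & (freqs < 0.3)
print(f"low-frequency plateau of the spectral ratio: {ratio[band].mean():.0f} "
      "(true moment ratio: 100)")
```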

  9. Lakatos Revisited.

    Science.gov (United States)

    Court, Deborah

    1999-01-01

    Revisits and reviews Imre Lakatos' ideas on "Falsification and the Methodology of Scientific Research Programmes." Suggests that Lakatos' framework offers an insightful way of looking at the relationship between theory and research that is relevant not only for evaluating research programs in theoretical physics, but in the social…

  10. Adaptive Injection-locking Oscillator Array for RF Spectrum Analysis

    International Nuclear Information System (INIS)

    Leung, Daniel

    2011-01-01

    A highly parallel radio frequency receiver using an array of injection-locking oscillators for on-chip, rapid estimation of signal amplitudes and frequencies is considered. The oscillators are tuned to different natural frequencies, and variable gain amplifiers are used to provide negative feedback to adapt the locking bandwidth with the input signal to yield a combined measure of input signal amplitude and frequency detuning. To further this effort, an array of 16 two-stage differential ring oscillators and 16 Gilbert-cell mixers is designed for 40-400 MHz operation. The injection-locking oscillator array is assembled on a custom printed-circuit board. Control and calibration are achieved by an on-board microcontroller.

  11. Design and Analysis Tools for Deployable Solar Array Systems, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Large, lightweight, deployable solar array structures have been identified as a key enabling technology for NASA with analysis and design of these structures being...

  12. Design and Analysis Tools for Deployable Solar Array Systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Large, lightweight, deployable solar array structures have been identified as a key enabling technology for NASA with analysis and design of these structures being...

  13. Electromagnetic linear machines with dual Halbach array design and analysis

    CERN Document Server

    Yan, Liang; Peng, Juanjuan; Zhang, Lei; Jiao, Zongxia

    2017-01-01

    This book extends the conventional two-dimensional (2D) magnet arrangement into a 3D pattern for permanent magnet linear machines for the first time, and proposes a novel dual Halbach array. It can not only effectively increase the radial component of magnetic flux density and the output force of tubular linear machines, but also significantly reduce the axial flux density, radial force and thus system vibrations and noise. The book is also the first to address the fundamentals and provide a summary of conventional arrays, as well as novel concepts for PM pole design in electric linear machines. It covers theoretical study, numerical simulation, design optimization and experimental work systematically. The design concept and analytical approaches can be implemented to other linear and rotary machines with similar structures. The book will be of interest to academics, researchers, R&D engineers and graduate students in electronic engineering and mechanical engineering who wish to learn the core principles, met...

  14. Sensemaking Revisited

    DEFF Research Database (Denmark)

    Holt, Robin; Cornelissen, Joep

    2014-01-01

    We critique and extend theory on organizational sensemaking around three themes. First, we investigate sense arising non-productively and so beyond any instrumental relationship with things; second, we consider how sense is experienced through mood as well as our cognitive skills of manipulation ...... research by revisiting Weick’s seminal reading of Norman Maclean’s book surrounding the tragic events of a 1949 forest fire at Mann Gulch, USA....

  15. Online analysis by a fiber-optic diode array spectrophotometer

    International Nuclear Information System (INIS)

    Van Hare, D.R.; Prather, W.S.; O'Rourke, P.E.

    1987-01-01

    An online photometric analyzer has been developed which can make remote measurements over the 350 to 900 nm region at distances of up to 100 feet. The analyzer consists of a commercially available diode array spectrophotometer interfaced to a fiber-optic multiplexer to allow online monitoring of up to ten locations sequentially. The development of the fiber-optic interface is discussed and data from several online applications are presented to demonstrate the capabilities of the measurement system

  16. Signals analysis of fluxgate array for wire rope defaults

    International Nuclear Information System (INIS)

    Gu Wei; Chu Jianxin

    2005-01-01

    In order to detect the magnetic leakage fields of wire rope defects, a transducer made up of a fluxgate array is designed, and a series of characteristic values of the wire rope defect signals is defined. By processing the characteristic signals, the LF or LMA of the wire rope is distinguished, and the defect extent is estimated. Experimental results of the new method for detecting wire rope faults are presented

  17. Genetic analysis of presbycusis by arrayed primer extension.

    Science.gov (United States)

    Rodriguez-Paris, Juan; Ballay, Charles; Inserra, Michelle; Stidham, Katrina; Colen, Tahl; Roberson, Joseph; Gardner, Phyllis; Schrijver, Iris

    2008-01-01

    Using the Hereditary Hearing Loss arrayed primer extension (APEX) array, which contains 198 mutations across 8 hearing loss-associated genes (GJB2, GJB6, GJB3, GJA1, SLC26A4, SLC26A5, 12S-rRNA, and tRNA Ser), we compared the frequency of sequence variants in 94 individuals with early presbycusis to 50 unaffected controls and aimed to identify possible genetic contributors. This cross-sectional study was performed at Stanford University with presbycusis samples from the California Ear Institute. The patients were between ages 20 and 65 yr, with adult-onset sensorineural hearing loss of unknown etiology, and carried a clinical diagnosis of early presbycusis. Exclusion criteria comprised known causes of hearing loss such as significant noise exposure, trauma, ototoxic medication, neoplasm, and congenital infection or syndrome, as well as congenital or pediatric onset. Sequence changes were identified in 11.7% and 10% of presbycusis and control alleles, respectively. Among the presbycusis group, these solely occurred within the GJB2 and SLC26A4 genes. Homozygous and compound heterozygous pathogenic mutations were exclusively seen in affected individuals. We were unable to detect a statistically significant difference between our control and affected populations regarding the frequency of sequence variants detected with the APEX array. Individuals who carry two mild mutations in the GJB2 gene possibly have an increased risk of developing early presbycusis.
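
    The case-control comparison reported above can be reproduced in outline with a Fisher's exact test on allele counts. The counts below are reconstructed from the reported 11.7% and 10% variant allele frequencies and the stated sample sizes, so they are approximations rather than the study's raw data.

```python
# Quick check of the case-control comparison: variant allele counts in presbycusis
# patients vs. controls compared with Fisher's exact test. Counts are reconstructed
# from the reported frequencies (11.7% and 10%) and sample sizes (94 and 50 subjects).
from scipy.stats import fisher_exact

cases_total, controls_total = 94 * 2, 50 * 2        # two alleles per subject
cases_variant = round(0.117 * cases_total)          # ~22 variant alleles
controls_variant = round(0.10 * controls_total)     # ~10 variant alleles

table = [[cases_variant, cases_total - cases_variant],
         [controls_variant, controls_total - controls_variant]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
```

    A large p value here is consistent with the study's conclusion that no statistically significant difference in variant frequency was detected between the affected and control groups.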

  18. Oligonucleotide arrays vs. metaphase-comparative genomic hybridisation and BAC arrays for single-cell analysis: first applications to preimplantation genetic diagnosis for Robertsonian translocation carriers.

    Science.gov (United States)

    Ramos, Laia; del Rey, Javier; Daina, Gemma; García-Aragonés, Manel; Armengol, Lluís; Fernandez-Encinas, Alba; Parriego, Mònica; Boada, Montserrat; Martinez-Passarell, Olga; Martorell, Maria Rosa; Casagran, Oriol; Benet, Jordi; Navarro, Joaquima

    2014-01-01

    Comprehensive chromosome analysis techniques such as metaphase-Comparative Genomic Hybridisation (CGH) and array-CGH are available for single-cell analysis. However, while metaphase-CGH and BAC array-CGH have been widely used for Preimplantation Genetic Diagnosis, oligonucleotide array-CGH has not been used in an extensive way. A comparison between oligonucleotide array-CGH and metaphase-CGH has been performed analysing 15 single fibroblasts from aneuploid cell-lines and 18 single blastomeres from human cleavage-stage embryos. Afterwards, oligonucleotide array-CGH and BAC array-CGH were also compared analysing 16 single blastomeres from human cleavage-stage embryos. All three comprehensive analysis techniques provided broadly similar cytogenetic profiles; however, non-identical profiles appeared when extensive aneuploidies were present in a cell. Both array techniques provided an optimised analysis procedure and a higher resolution than metaphase-CGH. Moreover, oligonucleotide array-CGH was able to define extra segmental imbalances in 14.7% of the blastomeres and it better determined the specific unbalanced chromosome regions due to a higher resolution of the technique (≈ 20 kb). Applicability of oligonucleotide array-CGH for Preimplantation Genetic Diagnosis has been demonstrated in two cases of Robertsonian translocation carriers 45,XY,der(13;14)(q10;q10). Transfer of euploid embryos was performed in both cases and pregnancy was achieved by one of the couples. This is the first time that an oligonucleotide array-CGH approach has been successfully applied to Preimplantation Genetic Diagnosis for balanced chromosome rearrangement carriers.

  19. Oligonucleotide arrays vs. metaphase-comparative genomic hybridisation and BAC arrays for single-cell analysis: first applications to preimplantation genetic diagnosis for Robertsonian translocation carriers.

    Directory of Open Access Journals (Sweden)

    Laia Ramos

    Full Text Available Comprehensive chromosome analysis techniques such as metaphase-Comparative Genomic Hybridisation (CGH and array-CGH are available for single-cell analysis. However, while metaphase-CGH and BAC array-CGH have been widely used for Preimplantation Genetic Diagnosis, oligonucleotide array-CGH has not been used in an extensive way. A comparison between oligonucleotide array-CGH and metaphase-CGH has been performed analysing 15 single fibroblasts from aneuploid cell-lines and 18 single blastomeres from human cleavage-stage embryos. Afterwards, oligonucleotide array-CGH and BAC array-CGH were also compared analysing 16 single blastomeres from human cleavage-stage embryos. All three comprehensive analysis techniques provided broadly similar cytogenetic profiles; however, non-identical profiles appeared when extensive aneuploidies were present in a cell. Both array techniques provided an optimised analysis procedure and a higher resolution than metaphase-CGH. Moreover, oligonucleotide array-CGH was able to define extra segmental imbalances in 14.7% of the blastomeres and it better determined the specific unbalanced chromosome regions due to a higher resolution of the technique (≈ 20 kb. Applicability of oligonucleotide array-CGH for Preimplantation Genetic Diagnosis has been demonstrated in two cases of Robertsonian translocation carriers 45,XY,der(13;14(q10;q10. Transfer of euploid embryos was performed in both cases and pregnancy was achieved by one of the couples. This is the first time that an oligonucleotide array-CGH approach has been successfully applied to Preimplantation Genetic Diagnosis for balanced chromosome rearrangement carriers.

  20. Oligonucleotide Arrays vs. Metaphase-Comparative Genomic Hybridisation and BAC Arrays for Single-Cell Analysis: First Applications to Preimplantation Genetic Diagnosis for Robertsonian Translocation Carriers

    Science.gov (United States)

    Ramos, Laia; del Rey, Javier; Daina, Gemma; García-Aragonés, Manel; Armengol, Lluís; Fernandez-Encinas, Alba; Parriego, Mònica; Boada, Montserrat; Martinez-Passarell, Olga; Martorell, Maria Rosa; Casagran, Oriol; Benet, Jordi; Navarro, Joaquima

    2014-01-01

    Comprehensive chromosome analysis techniques such as metaphase-Comparative Genomic Hybridisation (CGH) and array-CGH are available for single-cell analysis. However, while metaphase-CGH and BAC array-CGH have been widely used for Preimplantation Genetic Diagnosis, oligonucleotide array-CGH has not been used in an extensive way. A comparison between oligonucleotide array-CGH and metaphase-CGH has been performed analysing 15 single fibroblasts from aneuploid cell-lines and 18 single blastomeres from human cleavage-stage embryos. Afterwards, oligonucleotide array-CGH and BAC array-CGH were also compared analysing 16 single blastomeres from human cleavage-stage embryos. All three comprehensive analysis techniques provided broadly similar cytogenetic profiles; however, non-identical profiles appeared when extensive aneuploidies were present in a cell. Both array techniques provided an optimised analysis procedure and a higher resolution than metaphase-CGH. Moreover, oligonucleotide array-CGH was able to define extra segmental imbalances in 14.7% of the blastomeres and it better determined the specific unbalanced chromosome regions due to a higher resolution of the technique (≈20 kb). Applicability of oligonucleotide array-CGH for Preimplantation Genetic Diagnosis has been demonstrated in two cases of Robertsonian translocation carriers 45,XY,der(13;14)(q10;q10). Transfer of euploid embryos was performed in both cases and pregnancy was achieved by one of the couples. This is the first time that an oligonucleotide array-CGH approach has been successfully applied to Preimplantation Genetic Diagnosis for balanced chromosome rearrangement carriers. PMID:25415307

  1. Basic Considerations for Dry Storage of Spent Nuclear Fuels and Revisited CFD Thermal Analysis on the Concrete Cask

    International Nuclear Information System (INIS)

    Noh, Jae Soo; Park, Younwon; Song, Sub Lee; Kim, Hyeun Min

    2016-01-01

    The integrity of storage facility and also of the spent nuclear fuel itself is considered very important. Storage casks can be located in a designated area on a site or in a designated storage building. A number of different designs for dry storage have been developed and used in different countries. Dry storage system was classified into two categories by IAEA. One is container including cask and silo, the other one is vault. However, there is various way of categorization for dry storage system. Dry silo and cask are usually classified separately, so the dry storage system can be classified into three different types. Furthermore, dry cask storage can be categorized into two types based on the type of the materials, concrete cask and metal cask. In this paper, the design characteristics of dry storage cask are introduced and computational fluid dynamics (CFD) based thermal analysis for concrete cask is revisited. Basic principles for dry storage cask design were described. Based on that, thermal analysis of concrete dry cask was introduced from the study of H. M. Kim et al. From the CFD calculation, the temperature of concrete wall was maintained under the safety criteria. From this fundamental analysis, further investigations are expected. For example, thermal analysis on the metal cask, thermal analysis on horizontally laid spent nuclear fuel assemblies for transportation concerns, and investigations on better performance of natural air circulation in dry cask can be promising candidates

  2. Basic Considerations for Dry Storage of Spent Nuclear Fuels and Revisited CFD Thermal Analysis on the Concrete Cask

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Jae Soo [ACT Co. Ltd., Daejeon (Korea, Republic of); Park, Younwon; Song, Sub Lee [BEES Inc., Daejeon (Korea, Republic of); Kim, Hyeun Min [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    The integrity of storage facility and also of the spent nuclear fuel itself is considered very important. Storage casks can be located in a designated area on a site or in a designated storage building. A number of different designs for dry storage have been developed and used in different countries. Dry storage system was classified into two categories by IAEA. One is container including cask and silo, the other one is vault. However, there is various way of categorization for dry storage system. Dry silo and cask are usually classified separately, so the dry storage system can be classified into three different types. Furthermore, dry cask storage can be categorized into two types based on the type of the materials, concrete cask and metal cask. In this paper, the design characteristics of dry storage cask are introduced and computational fluid dynamics (CFD) based thermal analysis for concrete cask is revisited. Basic principles for dry storage cask design were described. Based on that, thermal analysis of concrete dry cask was introduced from the study of H. M. Kim et al. From the CFD calculation, the temperature of concrete wall was maintained under the safety criteria. From this fundamental analysis, further investigations are expected. For example, thermal analysis on the metal cask, thermal analysis on horizontally laid spent nuclear fuel assemblies for transportation concerns, and investigations on better performance of natural air circulation in dry cask can be promising candidates.

  3. Array analysis of regional Pn and Pg wavefields from the Nevada Test Site

    International Nuclear Information System (INIS)

    Leonard, M.A.

    1991-06-01

    Small-aperture high-frequency seismic arrays with dimensions of a few kilometers or less can improve our ability to seismically monitor compliance with a low-yield Threshold Test Ban Treaty. This work studies the characteristics and effectiveness of array processing of the regional Pn and Pg wavefields generated by underground nuclear explosions at the Nevada Test Site. Waveform data from the explosion HARDIN (mb = 5.5) are recorded at a temporary 12-element, 3-component, 1.5 km-aperture array sited in an area of northern Nevada. The explosions VILLE (mb = 4.4) and SALUT (mb = 5.5) are recorded at two arrays sited in the Mojave desert, one a 96-element vertical-component 7 km-aperture array and the other a 155-element vertical-component 4 km-aperture array. Among the mean spectra for the mb = 5.5 events there are significant differences in low-frequency spectral amplitudes between array sites. The spectra become nearly identical beyond about 6 Hz. Spectral ratios are used to examine seismic source properties and the partitioning of energy between Pn and Pg. Frequency-wavenumber analysis at the 12-element array is used to obtain estimates of signal gain, phase velocity, and source azimuth. This analysis reveals frequency-dependent biases in velocity and azimuth of the coherent Pn and Pg arrivals. Signal correlation, the principal factor governing array performance, is examined in terms of spatial coherence estimates. The coherence is found to vary between the three sites. In all cases the coherence of Pn is greater than that for Pg. 81 refs., 92 figs., 5 tabs
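
    The frequency-wavenumber analysis referred to above amounts to scanning beam power over a slowness grid at a given frequency. The sketch below does this for a synthetic narrowband plane wave crossing an invented 12-station geometry; it is conventional beamforming, not the exact processing used in the report.

```python
# Sketch of conventional frequency-wavenumber (f-k) analysis: beam power is scanned
# over a slowness grid at one frequency; the peak gives apparent velocity and the
# direction of the slowness vector. Geometry and signal are synthetic.
import numpy as np

rng = np.random.default_rng(6)
coords = rng.uniform(-2.0, 2.0, (12, 2))        # station x, y offsets in km (assumed)
f = 2.0                                         # analysis frequency, Hz
s_true = np.array([0.08, 0.10])                 # slowness of the incoming Pn, s/km

# Complex narrowband snapshots (e.g. one FFT bin from successive windows).
n_snap = 64
phase = np.exp(-2j * np.pi * f * (coords @ s_true))
snaps = phase[:, None] * np.exp(2j * np.pi * rng.uniform(0, 1, n_snap))[None, :]
snaps += 0.2 * (rng.standard_normal((12, n_snap)) + 1j * rng.standard_normal((12, n_snap)))
R = snaps @ snaps.conj().T / n_snap             # cross-spectral matrix

grid = np.linspace(-0.2, 0.2, 101)
power = np.zeros((grid.size, grid.size))
for i, ux in enumerate(grid):
    for j, uy in enumerate(grid):
        a = np.exp(-2j * np.pi * f * (coords @ np.array([ux, uy])))
        power[i, j] = np.real(a.conj() @ R @ a) / coords.shape[0] ** 2

ibest, jbest = np.unravel_index(np.argmax(power), power.shape)
s_est = np.array([grid[ibest], grid[jbest]])
v_app = 1.0 / np.linalg.norm(s_est)             # apparent velocity, km/s
az = np.degrees(np.arctan2(s_est[0], s_est[1])) % 360.0   # azimuth of the slowness
print(f"estimated slowness {s_est} s/km, apparent velocity {v_app:.1f} km/s, "
      f"slowness azimuth {az:.0f} deg (x = east, y = north assumed)")
```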

  4. Array analysis of regional Pn and Pg wavefields from the Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    Leonard, M.A. (California Univ., Berkeley, CA (United States). Dept. of Geology and Geophysics Lawrence Berkeley Lab., CA (United States))

    1991-06-01

    Small-aperture high-frequency seismic arrays with dimensions of a few kilometers or less can improve our ability to seismically monitor compliance with a low-yield Threshold Test Ban Treaty. This work studies the characteristics and effectiveness of array processing of the regional Pn and Pg wavefields generated by underground nuclear explosions at the Nevada Test Site. Waveform data from the explosion HARDIN (mb = 5.5) are recorded at a temporary 12-element, 3-component, 1.5 km-aperture array sited in an area of northern Nevada. The explosions VILLE (mb = 4.4) and SALUT (mb = 5.5) are recorded at two arrays sited in the Mojave desert, one a 96-element vertical-component 7 km-aperture array and the other a 155-element vertical-component 4 km-aperture array. Among the mean spectra for the mb = 5.5 events there are significant differences in low-frequency spectral amplitudes between array sites. The spectra become nearly identical beyond about 6 Hz. Spectral ratios are used to examine seismic source properties and the partitioning of energy between Pn and Pg. Frequency-wavenumber analysis at the 12-element array is used to obtain estimates of signal gain, phase velocity, and source azimuth. This analysis reveals frequency-dependent biases in velocity and azimuth of the coherent Pn and Pg arrivals. Signal correlation, the principal factor governing array performance, is examined in terms of spatial coherence estimates. The coherence is found to vary between the three sites. In all cases the coherence of Pn is greater than that for Pg. 81 refs., 92 figs., 5 tabs.

  5. Nanomolar Trace Metal Analysis of Copper at Gold Microband Arrays

    Science.gov (United States)

    Wahl, A.; Dawson, K.; Sassiat, N.; Quinn, A. J.; O'Riordan, A.

    2011-08-01

    This paper describes the fabrication and electrochemical characterization of gold microband electrode arrays designed as highly sensitive sensors for trace metal detection of copper in drinking water samples. Gold microband electrodes have been routinely fabricated by standard photolithographic methods. Electrochemical characterization was conducted in 0.1 M H2SO4, and the electrodes were found to display the characteristic gold oxide formation and reduction peaks. The advantages of gold microband electrodes as trace metal sensors over currently used methods have been investigated by employing under-potential deposition anodic stripping voltammetry (UPD-ASV) at nanomolar Cu2+ concentrations. Linear correlations were observed for increasing Cu2+ concentrations, from which the concentration of an unknown sample of drinking water was estimated. The results obtained for the estimation of the unknown trace copper concentration in drinking water were in good agreement with expected values.
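
    The calibration step behind the UPD-ASV measurement is a straight-line fit of stripping peak current against standard concentration, inverted for the unknown sample. The numbers in the sketch below are made up for illustration; only the procedure mirrors the description above.

```python
# Minimal sketch of an anodic-stripping calibration: fit a line to peak current vs.
# Cu2+ concentration for standards, then invert it for an unknown sample.
import numpy as np

conc_nM = np.array([50.0, 100.0, 200.0, 400.0, 800.0])       # standards (illustrative)
peak_nA = np.array([12.1, 23.8, 49.5, 97.2, 196.4])          # stripping peak currents

slope, intercept = np.polyfit(conc_nM, peak_nA, 1)
r = np.corrcoef(conc_nM, peak_nA)[0, 1]

unknown_peak_nA = 61.0                                        # drinking-water sample
estimated_conc = (unknown_peak_nA - intercept) / slope
print(f"sensitivity {slope:.3f} nA/nM (r = {r:.4f}); "
      f"estimated Cu2+ concentration ~{estimated_conc:.0f} nM")
```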

  6. Nanomolar Trace Metal Analysis of Copper at Gold Microband Arrays

    International Nuclear Information System (INIS)

    Wahl, A; Dawson, K; Sassiat, N; Quinn, A J; O'Riordan, A

    2011-01-01

    This paper describes the fabrication and electrochemical characterization of gold microband electrode arrays designated as a highly sensitive sensor for trace metal detection of copper in drinking water samples. Gold microband electrodes have been routinely fabricated by standard photolithographic methods. Electrochemical characterization were conducted in 0.1 M H 2 SO 4 and found to display characteristic gold oxide formation and reduction peaks. The advantages of gold microband electrodes as trace metal sensors over currently used methods have been investigated by employing under potential deposition anodic stripping voltammetry (UPD-ASV) in Cu 2+ nanomolar concentrations. Linear correlations were observed for increasing Cu 2+ concentrations from which the concentration of an unknown sample of drinking water was estimated. The results obtained for the estimation of the unknown trace copper concentration in drinking was in good agreement with expected values.

  7. HARMONIC SPACE ANALYSIS OF PULSAR TIMING ARRAY REDSHIFT MAPS

    Energy Technology Data Exchange (ETDEWEB)

    Roebber, Elinore; Holder, Gilbert, E-mail: roebbere@physics.mcgill.ca [Department of Physics and McGill Space Institute, McGill University, 3600 rue University, Montréal, QC, H3A 2T8 (Canada)

    2017-01-20

    In this paper, we propose a new framework for treating the angular information in the pulsar timing array (PTA) response to a gravitational wave (GW) background based on standard cosmic microwave background techniques. We calculate the angular power spectrum of the all-sky gravitational redshift pattern induced at the Earth for both a single bright source of gravitational radiation and a statistically isotropic, unpolarized Gaussian random GW background. The angular power spectrum is the harmonic transform of the Hellings and Downs curve. We use the power spectrum to examine the expected variance in the Hellings and Downs curve in both cases. Finally, we discuss the extent to which PTAs are sensitive to the angular power spectrum and find that the power spectrum sensitivity is dominated by the quadrupole anisotropy of the gravitational redshift map.
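
    The statement that the sensitivity is dominated by the quadrupole can be checked numerically by expanding the Hellings and Downs curve in Legendre multipoles, as in the sketch below; the normalisation convention is an assumption and may differ from the paper's.

```python
# Numerical sketch: expand the Hellings and Downs curve in Legendre multipoles to
# show that the monopole and dipole vanish and the quadrupole dominates.
import numpy as np
from numpy.polynomial import legendre
from scipy.integrate import quad

def hellings_downs(theta):
    """HD correlation for pulsar pairs separated by angle theta (radians)."""
    x = (1.0 - np.cos(theta)) / 2.0
    x = np.maximum(x, 1e-300)                 # avoid log(0) at zero separation
    return 1.5 * x * np.log(x) - 0.25 * x + 0.5

def c_ell(ell):
    """C_ell = 2*pi * integral of HD(theta) * P_ell(cos theta) * sin(theta) dtheta."""
    p_ell = legendre.Legendre.basis(ell)
    integrand = lambda th: hellings_downs(th) * p_ell(np.cos(th)) * np.sin(th)
    value, _ = quad(integrand, 0.0, np.pi)
    return 2.0 * np.pi * value

for ell in range(6):
    print(f"l = {ell}: C_l = {c_ell(ell): .4f}")
```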

  8. Analysis of neighborhood behavior in lead optimization and array design.

    Science.gov (United States)

    Papadatos, George; Cooper, Anthony W J; Kadirkamanathan, Visakan; Macdonald, Simon J F; McLay, Iain M; Pickett, Stephen D; Pritchard, John M; Willett, Peter; Gillet, Valerie J

    2009-02-01

    Neighborhood behavior describes the extent to which small structural changes defined by a molecular descriptor are likely to lead to small property changes. This study evaluates two methods for the quantification of neighborhood behavior: the optimal diagonal method of Patterson et al. and the optimality criterion method of Horvath and Jeandenans. The methods are evaluated using twelve different types of fingerprint (both 2D and 3D) with screening data derived from several lead optimization projects at GlaxoSmithKline. The principal focus of the work is the design of chemical arrays during lead optimization, and the study hence considers not only biological activity but also important drug properties such as metabolic stability, permeability, and lipophilicity. Evidence is provided to suggest that the optimality criterion method may provide a better quantitative description of neighborhood behavior than the optimal diagonal method.

  9. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2012-01-01

    Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them......, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm...

  10. Theory and design of compact hybrid microphone arrays on two-dimensional planes for three-dimensional soundfield analysis.

    Science.gov (United States)

    Chen, Hanchi; Abhayapala, Thushara D; Zhang, Wen

    2015-11-01

    Soundfield analysis based on spherical harmonic decomposition has been widely used in various applications; however, a drawback is the three-dimensional geometry of the microphone arrays. In this paper, a method to design two-dimensional planar microphone arrays that are capable of capturing three-dimensional (3D) spatial soundfields is proposed. Through the utilization of both omni-directional and first order microphones, the proposed microphone array is capable of measuring soundfield components that are undetectable to conventional planar omni-directional microphone arrays, thus providing the same functionality as 3D arrays designed for the same purpose. Simulations show that the accuracy of the planar microphone array is comparable to traditional spherical microphone arrays. Due to its compact shape, the proposed microphone array greatly increases the feasibility of 3D soundfield analysis techniques in real-world applications.

  11. ArrayExpress update--trends in database growth and links to data analysis tools.

    Science.gov (United States)

    Rustici, Gabriella; Kolesnikov, Nikolay; Brandizi, Marco; Burdett, Tony; Dylag, Miroslaw; Emam, Ibrahim; Farne, Anna; Hastings, Emma; Ison, Jon; Keays, Maria; Kurbatova, Natalja; Malone, James; Mani, Roby; Mupo, Annalisa; Pedro Pereira, Rui; Pilicheva, Ekaterina; Rung, Johan; Sharma, Anjan; Tang, Y Amy; Ternent, Tobias; Tikhonov, Andrew; Welter, Danielle; Williams, Eleanor; Brazma, Alvis; Parkinson, Helen; Sarkans, Ugis

    2013-01-01

    The ArrayExpress Archive of Functional Genomics Data (http://www.ebi.ac.uk/arrayexpress) is one of three international functional genomics public data repositories, alongside the Gene Expression Omnibus at NCBI and the DDBJ Omics Archive, supporting peer-reviewed publications. It accepts data generated by sequencing or array-based technologies and currently contains data from almost a million assays, from over 30 000 experiments. The proportion of sequencing-based submissions has grown significantly over the last 2 years and has reached, in 2012, 15% of all new data. All data are available from ArrayExpress in MAGE-TAB format, which allows robust linking to data analysis and visualization tools, including Bioconductor and GenomeSpace. Additionally, R objects, for microarray data, and binary alignment format files, for sequencing data, have been generated for a significant proportion of ArrayExpress data.

  12. Design and analysis of high gain array antenna for wireless communication applications

    Directory of Open Access Journals (Sweden)

    Sri Jaya LAKSHMI

    2015-05-01

    Full Text Available An array of antennas is generally used for directing the radiated power towards a desired angular sector. Arrays can be used to synthesize a required pattern that cannot be achieved with a single element. The geometrical arrangement, the number of elements, the phases of the array elements and the relative amplitudes depend on the desired angular pattern. This paper is focused on the issues related to the design and implementation of a 4×1 microstrip antenna array with an aperture-coupled corporate feed for wireless local area network applications. A parametric analysis with change in element spacing is attempted in this work to understand the directional characteristics of the radiation pattern. A gain of more than 14 dB and an efficiency of more than 93% are achieved by the current design in the desired frequency band.
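
    The effect of element spacing on the pattern can be illustrated with the array factor of a 4x1 uniform linear array, as sketched below; the element pattern, substrate and feed details of the actual microstrip design are ignored, and the spacings and excitation are assumptions.

```python
# Array-factor sketch for a 4x1 uniform linear array with variable element spacing,
# broadside and uniformly excited; element and feed effects are ignored.
import numpy as np

def array_factor_db(n_elem, spacing_wl, theta_deg):
    """Normalized array factor (dB) vs. angle from broadside; spacing in wavelengths."""
    psi = 2.0 * np.pi * spacing_wl * np.sin(np.radians(theta_deg))
    af = np.abs(np.exp(1j * np.outer(psi, np.arange(n_elem))).sum(axis=1)) / n_elem
    return 20.0 * np.log10(np.maximum(af, 1e-6))

angles = np.linspace(0.0, 90.0, 9001)             # one side of broadside
for d in (0.5, 0.75, 1.0):
    af_db = array_factor_db(4, d, angles)
    first_off = np.argmax(af_db < -3.0)           # first sample below -3 dB
    hpbw = 2.0 * angles[first_off]                # main lobe is symmetric about broadside
    worst_far_lobe = af_db[angles > 40.0].max()
    print(f"d = {d:.2f} lambda: half-power beamwidth ~{hpbw:.1f} deg, "
          f"largest lobe beyond 40 deg: {worst_far_lobe:.1f} dB")
```

    Running the loop shows the classic trade-off: widening the spacing narrows the main beam but eventually raises a grating lobe to the same level as the main lobe.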

  13. Computational analysis of vertical axis wind turbine arrays

    Science.gov (United States)

    Bremseth, J.; Duraisamy, K.

    2016-10-01

    Canonical problems involving single, pairs, and arrays of vertical axis wind turbines (VAWTs) are investigated numerically with the objective of understanding the underlying flow structures and their implications on energy production. Experimental studies by Dabiri (J Renew Sustain Energy 3, 2011) suggest that VAWTs demand less stringent spacing requirements than their horizontal axis counterparts and additional benefits may be obtained by optimizing the placement and rotational direction of VAWTs. The flowfield of pairs of co-/counter-rotating VAWTs shows some similarities with pairs of cylinders in terms of wake structure and vortex shedding. When multiple VAWTs are placed in a column, the extent of the wake is seen to spread further downstream, irrespective of the direction of rotation of individual turbines. However, the aerodynamic interference between turbines gives rise to regions of excess momentum between the turbines which lead to significant power augmentations. Studies of VAWTs arranged in multiple columns show that the downstream columns can actually be more efficient than the leading column, a proposition that could lead to radical improvements in wind farm productivity.

  14. Complexity Level Analysis Revisited: What Can 30 Years of Hindsight Tell Us about How the Brain Might Represent Visual Information?

    Directory of Open Access Journals (Sweden)

    John K. Tsotsos

    2017-08-01

    Full Text Available Much has been written about how the biological brain might represent and process visual information, and how this might inspire and inform machine vision systems. Indeed, tremendous progress has been made, especially during the last decade in the latter area. However, a key question seems too often, if not mostly, to be ignored. This question is simply: do proposed solutions scale with the reality of the brain's resources? This scaling question applies equally to brain and to machine solutions. A number of papers have examined the inherent computational difficulty of visual information processing using theoretical and empirical methods. The main goal of this activity had three components: to understand the deep nature of the computational problem of visual information processing; to discover how well the computational difficulty of vision matches the fixed resources of biological seeing systems; and to abstract from the matching exercise the key principles that lead to the observed characteristics of biological visual performance. This set of components was termed complexity level analysis in Tsotsos (1987) and was proposed as an important complement to Marr's three levels of analysis. This paper revisits that work with the advantage that decades of hindsight can provide.

  15. Analysis of Chinese women with primary ovarian insufficiency by high resolution array-comparative genomic hybridization.

    Science.gov (United States)

    Liao, Can; Fu, Fang; Yang, Xin; Sun, Yi-Min; Li, Dong-Zhi

    2011-06-01

    Primary ovarian insufficiency (POI) is defined as a primary ovarian defect characterized by absent menarche (primary amenorrhea) or premature depletion of ovarian follicles before the age of 40 years. The etiology of primary ovarian insufficiency in human female patients is still unclear. The purpose of this study is to investigate the potential genetic causes in primary amenorrhea patients by high resolution array based comparative genomic hybridization (array-CGH) analysis. Following the standard karyotyping analysis, genomic DNA from whole blood of 15 primary amenorrhea patients and 15 normal control women was hybridized with Affymetrix cytogenetic 2.7M arrays following the standard protocol. Copy number variations identified by array-CGH were confirmed by real time polymerase chain reaction. All the 30 samples were negative by conventional karyotyping analysis. Microdeletions on chromosome 17q21.31-q21.32 with approximately 1.3 Mb were identified in four patients by high resolution array-CGH analysis. This included the female reproductive secretory pathway related factor N-ethylmaleimide-sensitive factor (NSF) gene. The results of the present study suggest that there may be critical regions regulating primary ovarian insufficiency in women with a 17q21.31-q21.32 microdeletion. This effect might be due to the loss of function of the NSF gene/genes within the deleted region or to effects on contiguous genes.

  16. Revisiting the effect of maternal smoking during pregnancy on offspring birthweight: a quasi-experimental sibling analysis in Sweden.

    Directory of Open Access Journals (Sweden)

    Sol Pía Juárez

    Full Text Available Maternal smoking during pregnancy (SDP) seems associated with reduced birthweight in the offspring. This observation, however, is based on conventional epidemiological analyses, and it might be confounded by unobserved maternal characteristics related to both smoking habits and offspring birthweight. Therefore, we apply a quasi-experimental sibling analysis to revisit previous findings. Using the Swedish Medical Birth Register, we identified 677,922 singletons born between 2002 and 2010 from native Swedish mothers. From this population, we isolated 62,941 siblings from 28,768 mothers with discrepant habits of SDP. We applied conventional and mother-specific multilevel linear regression models to investigate the association between maternal SDP and offspring birthweight. Depending on whether the mother was a light or heavy smoker and on the timing of exposure during pregnancy (i.e., first or third trimester), the effect of smoking on birthweight reduction was between 6 and 78 g less marked in the sibling analysis than in the conventional analysis. The sibling analysis showed that continuous smoking reduces birthweight by 162 g for mothers who were light smokers (1 to 9 cigarettes per day) and by 226 g on average for those who were heavy smokers throughout the pregnancy, in comparison to non-smoking mothers. Quitting smoking during pregnancy partly counteracted the smoking-related birthweight reduction by 1 to 29 g, and a subsequent smoking relapse during pregnancy reduced birthweight by 77 to 83 g. The sibling analysis provides strong evidence that maternal SDP reduces offspring birthweight, though this reduction was not as great as that observed in the conventional analysis. Our findings support public health interventions aimed to prevent SDP and to persuade those who already smoke to quit and not relapse throughout the pregnancy. In addition, further analyses are needed in order to explain the mechanisms through which smoking reduces birthweight and to identify
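
    The contrast between the conventional and the mother-specific (sibling) comparison can be sketched as below, assuming a pandas DataFrame with hypothetical columns 'birthweight' (g), 'smoking' (exposure category) and 'mother_id'; this is an illustration of the general modelling idea, not the authors' code.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    def conventional_and_sibling_estimates(df: pd.DataFrame):
        # Conventional analysis: compares birthweight across all mothers.
        ols_fit = smf.ols("birthweight ~ C(smoking)", data=df).fit()
        # Sibling analysis: mother-specific intercepts absorb unobserved
        # maternal characteristics shared by siblings of the same mother.
        mixed_fit = smf.mixedlm("birthweight ~ C(smoking)",
                                data=df, groups=df["mother_id"]).fit()
        return ols_fit.params, mixed_fit.params
    ```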

  17. ExonMiner: Web service for analysis of GeneChip Exon array data

    Directory of Open Access Journals (Sweden)

    Imoto Seiya

    2008-11-01

    Full Text Available Abstract Background Some splicing isoform-specific transcriptional regulations are related to disease. Therefore, detection of disease-specific splice variations is the first step for finding disease-specific transcriptional regulations. The Affymetrix Human Exon 1.0 ST Array can measure exon-level expression profiles that are suitable for finding differentially expressed exons on a genome-wide scale. However, exon arrays produce massive datasets that are more than we can handle and analyze on a personal computer. Results We have developed ExonMiner, the first all-in-one web service for analysis of exon array data to detect transcripts that have significantly different splicing patterns between two cell types, e.g. normal and cancer cells. ExonMiner can perform the following analyses: (1) data normalization, (2) statistical analysis based on two-way ANOVA, (3) finding transcripts with significantly different splice patterns, (4) efficient visualization based on heatmaps and barplots, and (5) meta-analysis to detect exon-level biomarkers. We implemented ExonMiner on a supercomputer system in order to perform genome-wide analysis for more than 300,000 transcripts in exon array data, which has the potential to reveal aberrant splice variations in cancer cells as exon-level biomarkers. Conclusion ExonMiner is well suited for analysis of exon array data and does not require any installation of software other than an internet browser. All users need to do is access the ExonMiner URL http://ae.hgc.jp/exonminer. Users can analyze a full exon array dataset within hours by high-level statistical analysis with a sound theoretical basis that finds aberrant splice variants as biomarkers.
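
    The exon-level statistic referred to in step (2) can be illustrated with a generic per-transcript two-way ANOVA, where a significant group-by-exon interaction flags a splice-pattern difference. The sketch below assumes a long-format table with hypothetical columns 'log_expression', 'group' and 'exon'; it is not the ExonMiner implementation.

    ```python
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def exon_anova(transcript_df):
        # Two-way ANOVA with interaction; the C(group):C(exon) term tests
        # whether exon usage differs between the two cell types.
        model = smf.ols("log_expression ~ C(group) * C(exon)",
                        data=transcript_df).fit()
        return sm.stats.anova_lm(model, typ=2)
    ```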

  18. Analysis of radiation damage in on-orbit solar array of Venus explorer Akatsuki

    International Nuclear Information System (INIS)

    Toyota, Hiroyuki; Shimada, Takanobu; Takahashi, You; Imamura, Takeshi; Hada, Yuko; Ishii, Takako T.; Isobe, Hiroaki; Asai, Ayumi; Shiota, Daikou

    2013-01-01

    This paper describes an analysis of the radiation damage observed on orbit in the solar array of the Venus explorer Akatsuki. The output voltage of the solar array has shown sudden drops, most reasonably associated with radiation damage, three times since launch. The analysis of this radiation damage is difficult because no directly observed data on the spectra and amount of the high-energy particles are available. We calculated the radiation damage using the relative damage coefficient (RDC) method, assuming a typical proton spectral shape. (author)

  19. OpenMSI Arrayed Analysis Toolkit: Analyzing Spatially Defined Samples Using Mass Spectrometry Imaging

    DEFF Research Database (Denmark)

    de Raad, Markus; de Rond, Tristan; Rübel, Oliver

    2017-01-01

    ...processing tools for the analysis of large arrayed MSI sample sets. The OpenMSI Arrayed Analysis Toolkit (OMAAT) is a software package that addresses the challenges of analyzing spatially defined samples in MSI data sets. OMAAT is written in Python and is integrated with OpenMSI (http://openmsi.nersc.gov), a platform for storing, sharing, and analyzing MSI data. By using a web-based python notebook (Jupyter), OMAAT is accessible to anyone without programming experience yet allows experienced users to leverage all features. OMAAT was evaluated by analyzing an MSI data set of a high-throughput glycoside...

  20. Study on dynamic behavior analysis of towed line array sensor

    Directory of Open Access Journals (Sweden)

    Hyun Kyoung Shin

    2012-03-01

    Full Text Available A set of equations of motion is derived for the vibratory motions of an underwater cable connected to a moving vehicle at one end and to drogues at the other end. From the static analysis, cable configurations are obtained for different vehicle speeds, and the towing pretensions are determined by the fluid resistance of the drogues. A dynamic analysis is also required to predict the cable's vibratory motion, since nonlinear fluid drag forces greatly influence the dynamic tension. In this study, a numerical analysis program was developed to investigate the characteristics of cable behaviour. The motion is described in terms of space and time coordinates based on Chebyshev polynomial expansions. The collocation method is employed for the spatial integration and the Newmark method is applied for the time integration. Dynamic tensions, displacements, velocities, and accelerations were predicted in the time domain, while natural frequencies and transfer functions were obtained in the frequency domain.
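
    For orientation, the Newmark time integration named above can be sketched for a generic linear system M*a + C*v + K*x = f(t) in its average-acceleration form; this is a textbook formulation, not the cable model of the paper.

    ```python
    import numpy as np

    def newmark(M, C, K, f, x0, v0, dt, n_steps, beta=0.25, gamma=0.5):
        """Implicit Newmark integration of M*a + C*v + K*x = f(t)."""
        x, v = np.asarray(x0, float), np.asarray(v0, float)
        a = np.linalg.solve(M, f(0.0) - C @ v - K @ x)
        K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
        history = [x.copy()]
        for n in range(1, n_steps + 1):
            rhs = (f(n * dt)
                   + M @ (x / (beta * dt**2) + v / (beta * dt)
                          + (1 / (2 * beta) - 1) * a)
                   + C @ (gamma / (beta * dt) * x + (gamma / beta - 1) * v
                          + dt * (gamma / (2 * beta) - 1) * a))
            x_new = np.linalg.solve(K_eff, rhs)
            a_new = ((x_new - x) / (beta * dt**2) - v / (beta * dt)
                     - (1 / (2 * beta) - 1) * a)
            v = v + dt * ((1 - gamma) * a + gamma * a_new)
            x, a = x_new, a_new
            history.append(x.copy())
        return np.array(history)
    ```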

  1. GENE ARRAY ANALYSIS OF THE VENTRAL PROSTATE IN RATS EXPOSED TO EITHER VINCLOZOLIN OR PROCYMIDONE

    Science.gov (United States)

    GENE ARRAY ANALYSIS OF THE VENTRAL PROSTATE IN RATS EXPOSED TO EITHER VINCLOZOLIN OR PROCYMIDONE. MB Rosen, VS Wilson, JE Schmid, and LE Gray Jr. US EPA, ORD, NHEERL, RTP, NC.Vinclozolin (Vi) and procymidone (Pr) are antiandrogenic fungicides. While changes in gene expr...

  2. Vertical nanowire arrays as a versatile platform for protein detection and analysis

    DEFF Research Database (Denmark)

    Rostgaard, Katrine R.; Frederiksen, Rune S.; Liu, Yi-Chi

    2013-01-01

    solutions. Here we show that vertical arrays of nanowires (NWs) can overcome several bottlenecks of using nanoarrays for extraction and analysis of proteins. The high aspect ratio of the NWs results in a large surface area available for protein immobilization and renders passivation of the surface between...

  3. Indoor air quality inspection and analysis system based on gas sensor array

    Science.gov (United States)

    Gao, Xiang; Wang, Mingjiang; Fan, Binwen

    2017-08-01

    A detection and analysis system capable of measuring the concentrations of four major gases in indoor air is designed. Four gas sensors form a gas sensor array that detects the concentrations of the four indoor gases, and the acquired data are further processed to reduce the cross-sensitivity between the gas sensors and thereby improve detection accuracy.
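
    One common way to perform the cross-sensitivity correction described above is to calibrate a sensitivity matrix for the array and invert it by least squares. The sketch below uses a hypothetical 4x4 sensitivity matrix S with illustrative numbers; it is not the processing chain of the paper.

    ```python
    import numpy as np

    # Row i: response of sensor i per unit concentration of each target gas,
    # obtained from a calibration step (values here are placeholders).
    S = np.array([
        [1.00, 0.08, 0.05, 0.02],
        [0.06, 1.00, 0.09, 0.04],
        [0.03, 0.07, 1.00, 0.05],
        [0.02, 0.04, 0.06, 1.00],
    ])

    def estimate_concentrations(readings):
        """Solve readings ~= S @ concentrations in the least-squares sense."""
        c, *_ = np.linalg.lstsq(S, np.asarray(readings, float), rcond=None)
        return c

    print(estimate_concentrations([0.52, 0.31, 0.77, 0.12]))
    ```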

  4. Two-Stage MAS Technique for Analysis of DRA Elements and Arrays on Finite Ground Planes

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2007-01-01

    A two-stage Method of Auxiliary Sources (MAS) technique is proposed for analysis of dielectric resonator antenna (DRA) elements and arrays on finite ground planes (FGPs). The problem is solved by first analysing the DRA on an infinite ground plane (IGP) and then using this solution to model the FGP...

  5. Evaluation of Influenza-Specific Humoral Response by Microbead Array Analysis

    Directory of Open Access Journals (Sweden)

    Yoav Keynan

    2011-01-01

    Full Text Available RATIONALE: Quantitation and determination of antigen specificity of systemic and mucosal immune responses to influenza vaccination are beneficial for future vaccine development. Previous methods to acquire this information were costly, time consuming and sample exhaustive. The benefits of suspension microbead array (MBA) analysis are numerous. The multiplex capabilities of the system conserve time, money and sample, while generating statistically powerful data.

  6. Noise analysis and performance of a selfscanned linear InSb detector array

    International Nuclear Information System (INIS)

    Finger, G.; Meyer, M.; Moorwood, A.F.M.

    1987-01-01

    A noise model for detectors operated in the capacitive discharge mode is presented. It is used to analyze the noise performance of the ESO nested timing readout technique applied to a linear 32-element InSb array which is multiplexed by a silicon switched-FET shift register. The analysis shows that kTC noise of the video line is the major noise contribution; it can be eliminated by weighted double-correlated sampling. The best noise performance of this array is achieved at the smallest possible reverse bias voltage (not more than 20 mV), whereas excess noise is observed at higher reverse bias voltages. 5 references

  7. MTF measurement and analysis of linear array HgCdTe infrared detectors

    Science.gov (United States)

    Zhang, Tong; Lin, Chun; Chen, Honglei; Sun, Changhong; Lin, Jiamu; Wang, Xi

    2018-01-01

    The slanted-edge technique is the main method for measuring detector MTF; however, it is commonly applied to planar array detectors. In this paper the authors present a modified slanted-edge method to measure the MTF of linear array HgCdTe detectors. Crosstalk is one of the major factors that degrade the MTF of such infrared detectors. This paper presents an ion-implantation guard-ring structure designed to effectively absorb photo-carriers that may laterally diffuse between adjacent pixels, thereby suppressing crosstalk. Measurement and analysis of the MTF of linear array detectors with and without a guard-ring were carried out. The experimental results indicate that the ion-implantation guard-ring structure effectively suppresses crosstalk and increases the MTF.
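
    The edge-based MTF idea behind the slanted-edge technique can be illustrated in one dimension: differentiate the edge spread function to obtain the line spread function, then take the magnitude of its Fourier transform and normalize by the DC value. The sketch below is a simplified illustration and omits the sub-pixel projection and binning of the full slanted-edge procedure.

    ```python
    import numpy as np

    def mtf_from_edge_profile(esf, sample_pitch):
        """Simplified 1-D edge-based MTF estimate."""
        esf = np.asarray(esf, float)
        lsf = np.gradient(esf, sample_pitch)       # line spread function
        lsf = lsf * np.hanning(lsf.size)           # taper to limit truncation ripple
        spectrum = np.abs(np.fft.rfft(lsf))
        freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch)  # cycles per unit length
        return freqs, spectrum / spectrum[0]
    ```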

  8. TiArA: a virtual appliance for the analysis of Tiling Array data.

    Directory of Open Access Journals (Sweden)

    Jason A Greenbaum

    2010-04-01

    Full Text Available Genomic tiling arrays have been described in the scientific literature since 2003, yet there is a shortage of user-friendly applications available for their analysis. Tiling Array Analyzer (TiArA) is a software program that provides a user-friendly graphical interface for the background subtraction, normalization, and summarization of data acquired through the Affymetrix tiling array platform. The background signal is empirically measured using a group of nonspecific probes with varying levels of GC content and normalization is performed to enforce a common dynamic range. TiArA is implemented as a standalone program for Linux systems and is available as a cross-platform virtual machine that will run under most modern operating systems using virtualization software such as Sun VirtualBox or VMware. The software is available as a Debian package or a virtual appliance at http://purl.org/NET/tiara.

  9. Numeric-modeling sensitivity analysis of the performance of wind turbine arrays

    Energy Technology Data Exchange (ETDEWEB)

    Lissaman, P.B.S.; Gyatt, G.W.; Zalay, A.D.

    1982-06-01

    An evaluation of the numerical model created by Lissaman for predicting the performance of wind turbine arrays has been made. Model predictions of the wake parameters have been compared with both full-scale and wind tunnel measurements. Only limited, full-scale data were available, while wind tunnel studies showed difficulties in representing real meteorological conditions. Nevertheless, several modifications and additions have been made to the model using both theoretical and empirical techniques and the new model shows good correlation with experiment. The larger wake growth rate and shorter near wake length predicted by the new model lead to reduced interference effects on downstream turbines and hence greater array efficiencies. The array model has also been re-examined and now incorporates the ability to show the effects of real meteorological conditions such as variations in wind speed and unsteady winds. The resulting computer code has been run to show the sensitivity of array performance to meteorological, machine, and array parameters. Ambient turbulence and windwise spacing are shown to dominate, while hub height ratio is seen to be relatively unimportant. Finally, a detailed analysis of the Goodnoe Hills wind farm in Washington has been made to show how power output can be expected to vary with ambient turbulence, wind speed, and wind direction.

  10. Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.

    Science.gov (United States)

    Watanabe, Leandro; Myers, Chris J

    2016-08-19

    The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large complex regular systems in a standard way, such as whole-cell and cellular population models. These models require a large number of variables to represent certain aspects of these types of models, such as the chromosome in the whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure. When expanding the array constructs within a model, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits using this approach with a modest cost in runtime.

  11. Fiber-optic microsphere-based antibody array for the analysis of inflammatory cytokines in saliva.

    Science.gov (United States)

    Blicharz, Timothy M; Siqueira, Walter L; Helmerhorst, Eva J; Oppenheim, Frank G; Wexler, Philip J; Little, Frédéric F; Walt, David R

    2009-03-15

    Antibody microarrays have emerged as useful tools for high-throughput protein analysis and candidate biomarker screening. We describe here the development of a multiplexed microsphere-based antibody array capable of simultaneously measuring 10 inflammatory protein mediators. Cytokine-capture microspheres were fabricated by covalently coupling monoclonal antibodies specific for cytokines of interest to fluorescently encoded 3.1 μm polymer microspheres. An optical fiber bundle containing approximately 50,000 individual 3.1 μm diameter fibers was chemically etched to create microwells in which cytokine-capture microspheres could be deposited. Microspheres were randomly distributed in the wells to produce an antibody array for performing a multiplexed sandwich immunoassay. The array responded specifically to recombinant cytokine solutions in a concentration-dependent fashion. The array was also used to examine endogenous mediator patterns in saliva supernatants from patients with pulmonary inflammatory diseases such as asthma and chronic obstructive pulmonary disease (COPD). This array technology may prove useful as a laboratory-based platform for inflammatory disease research and diagnostics, and its small footprint could also enable integration into a microfluidic cassette for use in point-of-care testing.

  12. Clearance Analysis of Node 3 Aft CBM to the Stowed FGB Solar Array

    Science.gov (United States)

    Liddle, Donn

    2014-01-01

    In early 2011, the ISS Vehicle Configuration Office began considering the relocation of the Permanent Multipurpose Module (PMM) to the aft facing Common Berthing Mechanism (CBM) on Node 3 to open a berthing location for visiting vehicles on the Node 1 nadir CBM. In this position, computer-aided design (CAD) models indicated that the aft end of the PMM would be only a few inches from the stowed Functional Cargo Block (FGB) port solar array. To validate the CAD model clearance analysis, in the late summer of 2011 the Image Science and Analysis Group (ISAG) was asked to determine the true geometric relationship between the on-orbit aft facing Node 3 CBM and the FGB port solar array. The desired measurements could be computed easily by photogrammetric analysis if current imagery of the ISS hardware were obtained. Beginning in the fall of 2011, ISAG used the Dynamic Onboard Ubiquitous Graphics (DOUG) program to design a way to acquire imagery of the aft face of Node 3, the aft end-cone of Node 1, the port side of pressurized mating adapter 1 (PMA1), and the port side of the FGB out to the tip of the port solar array using cameras on the Space Station Remote Manipulator System (SSRMS). This was complicated by the need to thread the SSRMS under the truss, past Node 3 and the Cupola, and into the space between the aft side of Node 3 and the FGB solar array to acquire more than 100 images from multiple positions. To minimize the number of SSRMS movements, the Special Purpose Dexterous Manipulator (SPDM) would be attached to the SSRMS. This would make it possible to park the SPDM in one position and acquire multiple images by changing the viewing orientation of the SPDM body cameras using the pan/tilt units on which the cameras are mounted. Using this implementation concept, ISAG identified four SSRMS/SPDM positions from which all of the needed imagery could be acquired. Based on a photogrammetric simulation, it was estimated that the location of the FGB solar array could be

  13. Discrimination of honeys using colorimetric sensor arrays, sensory analysis and gas chromatography techniques.

    Science.gov (United States)

    Tahir, Haroon Elrasheid; Xiaobo, Zou; Xiaowei, Huang; Jiyong, Shi; Mariod, Abdalbasit Adam

    2016-09-01

    Aroma profiles of six honey varieties of different botanical origins were investigated using colorimetric sensor array, gas chromatography-mass spectrometry (GC-MS) and descriptive sensory analysis. Fifty-eight aroma compounds were identified, including 2 norisoprenoids, 5 hydrocarbons, 4 terpenes, 6 phenols, 7 ketones, 9 acids, 12 aldehydes and 13 alcohols. Twenty abundant or active compounds were chosen as key compounds to characterize honey aroma. Discrimination of the honeys was subsequently implemented using multivariate analysis, including hierarchical clustering analysis (HCA) and principal component analysis (PCA). Honeys of the same botanical origin were grouped together in the PCA score plot and HCA dendrogram. SPME-GC/MS and colorimetric sensor array were able to discriminate the honeys effectively with the advantages of being rapid, simple and low-cost. Moreover, partial least squares regression (PLSR) was applied to indicate the relationship between sensory descriptors and aroma compounds. Copyright © 2016 Elsevier Ltd. All rights reserved.
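
    The multivariate step described above (PCA score plot and HCA dendrogram) can be sketched on a hypothetical samples-by-compounds matrix as follows; this is a generic illustration, not the authors' analysis script.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    from scipy.cluster.hierarchy import linkage

    def pca_hca(X):
        """X: (n_honeys, n_aroma_compounds) matrix of peak areas."""
        Xs = StandardScaler().fit_transform(X)
        scores = PCA(n_components=2).fit_transform(Xs)  # coordinates for the score plot
        tree = linkage(Xs, method="ward")               # input for the HCA dendrogram
        return scores, tree

    rng = np.random.default_rng(0)                      # placeholder data: 6 honeys, 20 compounds
    scores, tree = pca_hca(rng.random((6, 20)))
    ```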

  14. BEBC revisited

    CERN Multimedia

    1980-01-01

    A view of the BEBC interior, from below. At the centre one sees the 'fish-eyes', surrounded by a tangle of cables (a 'Kabelsalat') that allowed the magnetic field to be monitored. The array of proportional counters of the Internal Picket Fence is already installed at the periphery of the vacuum tank. See Annual Report 1981, p. 56

  15. Solar array deployment analysis considering path-dependent behavior of a tape spring hinge

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung Won; Park, Young Jin [KAIST, Daejeon (Korea, Republic of)

    2015-05-15

    Solar array deployment analysis is conducted considering the path-dependent nonlinear behavior of a tape spring hinge. Such hinges offer many advantages over rigid hinges; they are self-deployable, self-locking, lightweight, and simple. However, they show strongly nonlinear behavior with respect to rotation angle, making deployment analysis difficult. To accurately consider the characteristics of tape spring hinges in the deployment analysis, a path-dependent path identification (PI) method for tracing the previous path of the moment is introduced. To analyze the deployment motion, the governing equation for solar array deployment is derived within the framework of Kane's dynamic equation for three deployable solar panels. The numerical solution is compared with a RecurDyn multi-body dynamics solution based on experimentally measured moment-rotation profiles. Solar array deployment analysis is conducted both with and without the path-dependent PI method. This simulation example shows that the proposed path-dependent PI method is very effective for accurately predicting the deployment motion.

  16. Recovery and evolutionary analysis of complete integron gene cassette arrays from Vibrio

    Directory of Open Access Journals (Sweden)

    Gillings Michael R

    2006-01-01

    Full Text Available Abstract Background Integrons are genetic elements capable of the acquisition, rearrangement and expression of genes contained in gene cassettes. Gene cassettes generally consist of a promoterless gene associated with a recombination site known as a 59-base element (59-be). Multiple insertion events can lead to the assembly of large integron-associated cassette arrays. The most striking examples are found in Vibrio, where such cassette arrays are widespread and can range from 30 kb to 150 kb. Besides those found in completely sequenced genomes, no such array has yet been recovered in its entirety. We describe an approach to systematically isolate, sequence and annotate large integron gene cassette arrays from bacterial strains. Results The complete Vibrio sp. DAT722 integron cassette array was determined through the streamlined approach described here. To place it in an evolutionary context, we compare the DAT722 array to known vibrio arrays and performed phylogenetic analyses for all of its components (integrase, 59-be sites, gene cassette encoded genes). It differs extensively in terms of genomic context as well as gene cassette content and organization. The phylogenetic tree of the 59-be sites collectively found in the Vibrio gene cassette pool suggests frequent transfer of cassettes within and between Vibrio species, with slower transfer rates between more phylogenetically distant relatives. We also identify multiple cases where non-integron chromosomal genes seem to have been assembled into gene cassettes and others where cassettes have been inserted into chromosomal locations outside integrons. Conclusion Our systematic approach greatly facilitates the isolation and annotation of large integron gene cassette arrays. Comparative analysis of the Vibrio sp. DAT722 integron obtained through this approach to those found in other vibrios confirms the role of this genetic element in promoting lateral gene transfer and suggests a high rate of gene

  17. Data in support of proteomic analysis of pneumococcal pediatric clinical isolates to construct a protein array

    Directory of Open Access Journals (Sweden)

    Alfonso Olaya-Abril

    2016-03-01

    Full Text Available Surface proteins play key roles in the interaction between cells and their environment, and in pathogenic microorganisms they are the best targets for drug or vaccine discovery and/or development. In addition, surface proteins can be the basis for serodiagnostic tools aiming at developing more affordable techniques for early diagnosis of infection in patients. We carried out a proteomic analysis of a collection of pediatric clinical isolates of Streptococcus pneumoniae, an important human pathogen responsible for more than 1.5 million child deaths worldwide. For that, cultured live bacterial cells were “shaved” with trypsin, and the recovered peptides were analyzed by LC/MS/MS. We selected 95 proteins to be produced as recombinant polypeptides, and printed them on an array. We probed the protein array with a collection of patient sera to define serodiagnostic antigens. The mass spectrometry proteomics data correspond to those published in [1] and have been deposited to the ProteomeXchange Consortium [2] via the PRIDE partner repository [3] with the dataset identifier http://www.ebi.ac.uk/pride/archive/projects/PXD001740. The protein array raw data are provided as supplemental material in this article. Keywords: Pneumococcus, Protein arrays, Proteomics, Diagnostics

  18. SNP Arrays

    Directory of Open Access Journals (Sweden)

    Jari Louhelainen

    2016-10-01

    Full Text Available The papers published in this Special Issue “SNP arrays” (Single Nucleotide Polymorphism Arrays focus on several perspectives associated with arrays of this type. The range of papers vary from a case report to reviews, thereby targeting wider audiences working in this field. The research focus of SNP arrays is often human cancers but this Issue expands that focus to include areas such as rare conditions, animal breeding and bioinformatics tools. Given the limited scope, the spectrum of papers is nothing short of remarkable and even from a technical point of view these papers will contribute to the field at a general level. Three of the papers published in this Special Issue focus on the use of various SNP array approaches in the analysis of three different cancer types. Two of the papers concentrate on two very different rare conditions, applying the SNP arrays slightly differently. Finally, two other papers evaluate the use of the SNP arrays in the context of genetic analysis of livestock. The findings reported in these papers help to close gaps in the current literature and also to give guidelines for future applications of SNP arrays.

  19. An Introduction to MAMA (Meta-Analysis of MicroArray data) System.

    Science.gov (United States)

    Zhang, Zhe; Fenstermacher, David

    2005-01-01

    Analyzing microarray data across multiple experiments has been proven advantageous. To support this kind of analysis, we are developing a software system called MAMA (Meta-Analysis of MicroArray data). MAMA utilizes a client-server architecture with a relational database on the server-side for the storage of microarray datasets collected from various resources. The client-side is an application running on the end user's computer that allows the user to manipulate microarray data and analytical results locally. MAMA implementation will integrate several analytical methods, including meta-analysis within an open-source framework offering other developers the flexibility to plug in additional statistical algorithms.

  20. Array-CGH Analysis in a Cohort of Phenotypically Well-Characterized Individuals with "Essential" Autism Spectrum Disorders

    Science.gov (United States)

    Napoli, Eleonora; Russo, Serena; Casula, Laura; Alesi, Viola; Amendola, Filomena Alessandra; Angioni, Adriano; Novelli, Antonio; Valeri, Giovanni; Menghini, Deny; Vicari, Stefano

    2018-01-01

    Copy-number variants (CNVs) are associated with susceptibility to autism spectrum disorder (ASD). To detect the presence of CNVs, we conducted an array-comparative genomic hybridization (array-CGH) analysis in 133 children with "essential" ASD phenotype. Genetic analyses documented that 12 children had causative CNVs (C-CNVs), 29…

  1. OpenMSI Arrayed Analysis Toolkit: Analyzing Spatially Defined Samples Using Mass Spectrometry Imaging

    Energy Technology Data Exchange (ETDEWEB)

    de Raad, Markus [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); de Rond, Tristan [Univ. of California, Berkeley, CA (United States); Rübel, Oliver [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Keasling, Jay D. [Univ. of California, Berkeley, CA (United States); Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Technical Univ. of Denmark, Lyngby (Denmark); Northen, Trent R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Bowen, Benjamin P. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States)

    2017-05-03

    Mass spectrometry imaging (MSI) has primarily been applied in localizing biomolecules within biological matrices. Although well-suited, the application of MSI for comparing thousands of spatially defined spotted samples has been limited. One reason for this is a lack of suitable and accessible data processing tools for the analysis of large arrayed MSI sample sets. In this paper, the OpenMSI Arrayed Analysis Toolkit (OMAAT) is a software package that addresses the challenges of analyzing spatially defined samples in MSI data sets. OMAAT is written in Python and is integrated with OpenMSI (http://openmsi.nersc.gov), a platform for storing, sharing, and analyzing MSI data. By using a web-based python notebook (Jupyter), OMAAT is accessible to anyone without programming experience yet allows experienced users to leverage all features. OMAAT was evaluated by analyzing an MSI data set of a high-throughput glycoside hydrolase activity screen comprising 384 samples arrayed onto a NIMS surface at a 450 μm spacing, decreasing analysis time >100-fold while maintaining robust spot-finding. The utility of OMAAT was demonstrated for screening metabolic activities of different sized soil particles, including hydrolysis of sugars, revealing a pattern of size dependent activities. Finally, these results introduce OMAAT as an effective toolkit for analyzing spatially defined samples in MSI. OMAAT runs on all major operating systems, and the source code can be obtained from the following GitHub repository: https://github.com/biorack/omaat.

  2. Detachably assembled microfluidic device for perfusion culture and post-culture analysis of a spheroid array.

    Science.gov (United States)

    Sakai, Yusuke; Hattori, Koji; Yanagawa, Fumiki; Sugiura, Shinji; Kanamori, Toshiyuki; Nakazawa, Kohji

    2014-07-01

    Microfluidic devices permit perfusion culture of three-dimensional (3D) tissue, mimicking the flow of blood in vascularized 3D tissue in our body. Here, we report a microfluidic device composed of a two-part microfluidic chamber chip and multi-microwell array chip able to be disassembled at the culture endpoint. Within the microfluidic chamber, an array of 3D tissue aggregates (spheroids) can be formed and cultured under perfusion. Subsequently, detailed post-culture analysis of the spheroids collected from the disassembled device can be performed. This device facilitates uniform spheroid formation, growth analysis in a high-throughput format, controlled proliferation via perfusion flow rate, and post-culture analysis of spheroids. We used the device to culture spheroids of human hepatocellular carcinoma (HepG2) cells under two controlled perfusion flow rates. HepG2 spheroids exhibited greater cell growth at higher perfusion flow rates than at lower perfusion flow rates, and exhibited different metabolic activity and mRNA and protein expression under the different flow rate conditions. These results show the potential of perfusion culture to precisely control the culture environment in microfluidic devices. The construction of spheroid array chambers allows multiple culture conditions to be tested simultaneously, with potential applications in toxicity and drug screening. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Genetic profiles of gastroesophageal cancer: combined analysis using expression array and tiling array--comparative genomic hybridization

    DEFF Research Database (Denmark)

    Isinger-Ekstrand, Anna; Johansson, Jan; Ohlsson, Mattias

    2010-01-01

    We aimed to characterize the genomic profiles of adenocarcinomas in the gastroesophageal junction in relation to cancers in the esophagus and the stomach. Profiles of gains/losses as well as gene expression profiles were obtained from 27 gastroesophageal adenocarcinomas by means of 32k high-resolution array-based comparative genomic hybridization and 27k oligo gene expression arrays, and putative target genes were validated in an extended series. Adenocarcinomas in the distal esophagus and the gastroesophageal junction showed strong similarities with the most common gains at 20q13, 8q24, 1q21-23, 5p15, 13q34, and 12q13, whereas different profiles with gains at 5p15, 7p22, 2q35, and 13q34 characterized gastric cancers. CDK6 and EGFR were identified as putative target genes in cancers of the esophagus and the gastroesophageal junction, with upregulation in one quarter of the tumors. Gains...

  4. RoboSCell: An automated single cell arraying and analysis instrument

    KAUST Repository

    Sakaki, Kelly

    2009-09-09

    Single cell research has the potential to revolutionize experimental methods in biomedical sciences and contribute to clinical practices. Recent studies suggest analysis of single cells reveals novel features of intracellular processes, cell-to-cell interactions and cell structure. The methods of single cell analysis require mechanical resolution and accuracy that is not possible using conventional techniques. Robotic instruments and novel microdevices can achieve higher throughput and repeatability; however, the development of such instrumentation is a formidable task. A void exists in the state-of-the-art for automated analysis of single cells. With the increase in interest in single cell analyses in stem cell and cancer research the ability to facilitate higher throughput and repeatable procedures is necessary. In this paper, a high-throughput, single cell microarray-based robotic instrument, called the RoboSCell, is described. The proposed instrument employs a partially transparent single cell microarray (SCM) integrated with a robotic biomanipulator for in vitro analyses of live single cells trapped at the array sites. Cells, labeled with immunomagnetic particles, are captured at the array sites by channeling magnetic fields through encapsulated permalloy channels in the SCM. The RoboSCell is capable of systematically scanning the captured cells temporarily immobilized at the array sites and using optical methods to repeatedly measure extracellular and intracellular characteristics over time. The instrument's capabilities are demonstrated by arraying human T lymphocytes and measuring the uptake dynamics of calcein acetoxymethylester, all in a fully automated fashion. © 2009 Springer Science+Business Media, LLC.

  5. Transient three-dimensional thermal-hydraulic analysis of nuclear reactor fuel rod arrays: general equations and numerical scheme

    International Nuclear Information System (INIS)

    Wnek, W.J.; Ramshaw, J.D.; Trapp, J.A.; Hughes, E.D.; Solbrig, C.W.

    1975-11-01

    A mathematical model and a numerical solution scheme for thermal-hydraulic analysis of fuel rod arrays are given. The model alleviates the two major deficiencies associated with existing rod array analysis models, that of a correct transverse momentum equation and the capability of handling reversing and circulatory flows. Possible applications of the model include steady state and transient subchannel calculations as well as analysis of flows in heat exchangers, other engineering equipment, and porous media

  6. Revisiting the Psychology of Intelligence Analysis: From Rational Actors to Adaptive Thinkers

    Science.gov (United States)

    Puvathingal, Bess J.; Hantula, Donald A.

    2012-01-01

    Intelligence analysis is a decision-making process rife with ambiguous, conflicting, irrelevant, important, and excessive information. The U.S. Intelligence Community is primed for psychology to lend its voice to the "analytic transformation" movement aimed at improving the quality of intelligence analysis. Traditional judgment and decision making…

  7. Life quality index revisited

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2004-01-01

    The derivation of the life quality index (LQI) is revisited for a revision. This revision takes into account the unpaid but necessary work time needed to stay alive in clean and healthy conditions to be fit for effective wealth producing work and to enjoyable free time. Dimension analysis consistency problems with the standard power function expression of the LQI are pointed out. It is emphasized that the combination coefficient in the convex differential combination between the relative differential of the gross domestic product per capita and the relative differential of the expected life at birth should not vary between countries. Finally the distributional assumptions are relaxed as compared to the assumptions made in an earlier work by the author. These assumptions concern the calculation of the life expectancy change due to the removal of an accident source. Moreover a simple public...

  8. Circular Array of Magnetic Sensors for Current Measurement: Analysis for Error Caused by Position of Conductor.

    Science.gov (United States)

    Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi

    2018-02-14

    This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for AC or DC non-contact measurement, as it is low-cost, light-weight, has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure strongly reduces errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the position of the current-carrying conductor, including un-centeredness and un-perpendicularity, has not been analyzed in detail until now. In this paper, a theoretical analysis based on vector inner and exterior products is proposed with the aim of minimizing the measurement error. In the presented mathematical model of the relative error, the un-center offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. A comparison of the relative error caused by the position of the current-carrying conductor between four and eight sensors is conducted. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of circular arrays of magnetic sensors for current measurement in practical situations.
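
    The positional-error question can be illustrated numerically with an idealized model: point sensors placed on a circle with tangential sensitive axes, an infinite straight conductor displaced from the centre, and the current estimated from the mean tangential field (a discrete form of Ampere's law). The sketch below is such an illustration with arbitrary values, not the vector-product error model of the paper.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi

    def relative_error(n_sensors, radius, offset, current=100.0):
        """Relative current error for a conductor displaced by `offset` (m)."""
        phi = 2 * np.pi * np.arange(n_sensors) / n_sensors
        sensors = radius * np.column_stack((np.cos(phi), np.sin(phi)))
        tangents = np.column_stack((-np.sin(phi), np.cos(phi)))
        r = sensors - np.array([offset, 0.0])            # vectors from wire to sensors
        r2 = np.sum(r**2, axis=1)
        # Field of an infinite straight wire carrying `current` out of the plane.
        b = MU0 * current / (2 * np.pi * r2[:, None]) * np.column_stack((-r[:, 1], r[:, 0]))
        measured = np.sum(b * tangents, axis=1)          # tangential components seen by sensors
        i_est = measured.mean() * 2 * np.pi * radius / MU0
        return abs(i_est - current) / current

    for n in (4, 8):                                     # compare four vs eight sensors
        print(n, relative_error(n, radius=0.05, offset=0.01))
    ```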

  9. Clinical utility of an array comparative genomic hybridization analysis for Williams syndrome.

    Science.gov (United States)

    Yagihashi, Tatsuhiko; Torii, Chiharu; Takahashi, Reiko; Omori, Mikimasa; Kosaki, Rika; Yoshihashi, Hiroshi; Ihara, Masahiro; Minagawa-Kawai, Yasuyo; Yamamoto, Junichi; Takahashi, Takao; Kosaki, Kenjiro

    2014-11-01

    To reveal the relation between intellectual disability and the deleted intervals in Williams syndrome, we performed an array comparative genomic hybridization analysis and standardized developmental testing for 11 patients diagnosed as having Williams syndrome based on fluorescent in situ hybridization testing. One patient had a large 4.2-Mb deletion spanning distally beyond the common 1.5-Mb intervals observed in 10/11 patients. We formulated a linear equation describing the developmental age of the 10 patients with the common deletion; the developmental age of the patient with the 4.2-Mb deletion was significantly below the expectation (developmental age = 0.51 × chronological age). The large deletion may account for the severe intellectual disability; therefore, the use of array comparative genomic hybridization may provide practical information regarding individuals with Williams syndrome. © 2014 Japanese Teratology Society.

  10. Application of neural networks to digital pulse shape analysis for an array of silicon strip detectors

    Energy Technology Data Exchange (ETDEWEB)

    Flores, J.L. [Dpto de Ingeniería Eléctrica y Térmica, Universidad de Huelva (Spain); Martel, I. [Dpto de Física Aplicada, Universidad de Huelva (Spain); CERN, ISOLDE, CH 1211 Geneva, 23 (Switzerland); Jiménez, R. [Dpto de Ingeniería Electrónica, Sist. Informáticos y Automática, Universidad de Huelva (Spain); Galán, J., E-mail: jgalan@diesia.uhu.es [Dpto de Ingeniería Electrónica, Sist. Informáticos y Automática, Universidad de Huelva (Spain); Salmerón, P. [Dpto de Ingeniería Eléctrica y Térmica, Universidad de Huelva (Spain)

    2016-09-11

    The new generation of nuclear physics detectors used to study nuclear reactions is considering the use of digital pulse shape analysis (DPSA) techniques to obtain the (A,Z) values of the reaction products impinging on solid state detectors. This technique can be an important tool for selecting the relevant reaction channels at the HYDE (HYbrid DEtector ball array) silicon array foreseen for the Low Energy Branch of the FAIR facility (Darmstadt, Germany). In this work we study the feasibility of using artificial neural networks (ANNs) for particle identification with silicon detectors. Multilayer Perceptron networks were trained and tested with recent experimental data, showing excellent identification capabilities for signals of several isotopes ranging from ¹²C up to ⁸⁴Kr and yielding higher discrimination rates than any previously reported.
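
    The general approach (digitized pulses in, ion species out) can be sketched with an off-the-shelf multilayer perceptron; the sketch below uses scikit-learn with hypothetical inputs and is not the network architecture of the paper.

    ```python
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def train_pulse_shape_classifier(pulses, labels):
        """pulses: (n_events, n_samples) digitized signals; labels: ion species."""
        X_tr, X_te, y_tr, y_te = train_test_split(
            pulses, labels, test_size=0.25, random_state=0, stratify=labels)
        clf = make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0))
        clf.fit(X_tr, y_tr)
        return clf, clf.score(X_te, y_te)   # classifier and hold-out accuracy
    ```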

  11. Worldwide nuclear plant performance revisited. An analysis of 1978-81

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, S [Sussex Univ., Brighton (UK). Science Policy Research Unit

    1982-12-01

    Analysis of recent nuclear plant performance, by country and manufacturer, confirms trends and differences in performance identified in the author's previous analysis. The crucial roles of electric utilities and vendors in ensuring high quality of design, construction, operation and maintenance are identified. Evidence is also presented suggesting that technological leadership is passing from the USA to Europe, although Japan may also emerge as an important reactor supplier.

  12. The signal-to-noise analysis of the Little-Hopfield model revisited

    International Nuclear Information System (INIS)

    Bolle, D; Blanco, J Busquets; Verbeiren, T

    2004-01-01

    Using the generating functional analysis an exact recursion relation is derived for the time evolution of the effective local field of the fully connected Little-Hopfield model. It is shown that, by leaving out the feedback correlations arising from earlier times in this effective dynamics, one precisely finds the recursion relations usually employed in the signal-to-noise approach. The consequences of this approximation as well as the physics behind it are discussed. In particular, it is pointed out why it is hard to notice the effects, especially for model parameters corresponding to retrieval. Numerical simulations confirm these findings. The signal-to-noise analysis is then extended to include all correlations, making it a full theory for dynamics at the level of the generating functional analysis. The results are applied to the frequently employed extremely diluted (a)symmetric architectures and to sequence processing networks

  13. Revisiting the Majority Problem: Average-Case Analysis with Arbitrarily Many Colours

    OpenAIRE

    Kleerekoper, Anthony

    2016-01-01

    The majority problem is a special case of the heavy hitters problem. Given a collection of coloured balls, the task is to identify the majority colour or state that no such colour exists. Whilst the special case of two colours has been well studied, the average-case performance for arbitrarily many colours has not. In this paper, we present heuristic analysis of the average-case performance of three deterministic algorithms that appear in the literature. We empirically validate our analysis w...
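
    The abstract does not name the three algorithms it analyses, but the well-studied two-colour special case is classically solved by the Boyer-Moore majority vote, shown here for orientation (with a verification pass, since the vote alone only yields a candidate).

    ```python
    from collections import Counter

    def majority(balls):
        """Return the majority colour, or None if no colour exceeds half the balls."""
        balls = list(balls)
        candidate, count = None, 0
        for colour in balls:               # pass 1: only one candidate can survive
            if count == 0:
                candidate, count = colour, 1
            elif colour == candidate:
                count += 1
            else:
                count -= 1
        if candidate is not None and Counter(balls)[candidate] > len(balls) // 2:
            return candidate               # pass 2: verify it really is a majority
        return None

    assert majority("rrgrbrr") == "r"
    assert majority("rgb") is None
    ```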

  14. Revisiting a Meta-Analysis of Helpful Aspects of Therapy in a Community Counselling Service

    Science.gov (United States)

    Quick, Emma L; Dowd, Claire; Spong, Sheila

    2018-01-01

    This small scale mixed methods study examines helpful events in a community counselling setting, categorising impacts of events according to Timulak's [(2007). Identifying core categories of client-identified impact of helpful events in psychotherapy: A qualitative meta-analysis. "Psychotherapy Research," 17, 305-314] meta-synthesis of…

  15. The Stroop Revisited: A Meta-Analysis of Interference Control in AD/HD

    Science.gov (United States)

    Van Mourik, Rosa; Oosterlaan, Jaap; Sergeant, Joseph A.

    2005-01-01

    Background: An inhibition deficit, including poor interference control, has been implicated as one of the core deficits in AD/HD. Interference control is clinically measured by the Stroop Colour-Word Task. The aim of this meta-analysis was to investigate the strength of an interference deficit in AD/HD as measured by the Stroop Colour-Word Task…

  16. The Stroop revisited: a meta-analysis of interference control in AD/HD

    NARCIS (Netherlands)

    van Mourik, R.; Oosterlaan, J.; Sergeant, J.A.

    2005-01-01

    Background: An inhibition deficit, including poor interference control, has been implicated as one of the core deficits in AD/HD. Interference control is clinically measured by the Stroop Colour-Word Task. The aim of this meta-analysis was to investigate the strength of an interference deficit in

  17. On Stratification in Changing Higher Education: The "Analysis of Status" Revisited

    Science.gov (United States)

    Bloch, Roland; Mitterle, Alexander

    2017-01-01

    This article seeks to shed light on current dynamics of stratification in changing higher education and proposes an analytical perspective to account for these dynamics based on Martin Trow's work on "the analysis of status." In research on higher education, the term "stratification" is generally understood as a metaphor that…

  18. Determining patterns of variability in ecological communities: time lag analysis revisited

    NARCIS (Netherlands)

    Kampichler, C.; Van der Jeugd, H.P.

    2013-01-01

    All ecological communities experience change over time. One method to quantify temporal variation in the patterns of relative abundance of communities is time lag analysis (TLA). It uses a distance-based approach to study temporal community dynamics by regressing community dissimilarity over
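
    The regression at the heart of TLA can be sketched as follows: compute all pairwise community dissimilarities (e.g. Bray-Curtis) and regress them on the square root of the corresponding time lags; a positive slope indicates directional community change. This is a generic illustration with hypothetical inputs, not the authors' code.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import linregress

    def time_lag_analysis(abundance, times):
        """abundance: (n_times, n_species) matrix; times: sampling times (1-D)."""
        dissim = pdist(abundance, metric="braycurtis")          # pairwise dissimilarities
        lags = pdist(np.asarray(times, float).reshape(-1, 1))   # pairwise time lags
        return linregress(np.sqrt(lags), dissim)                # slope > 0: directional change
    ```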

  19. Crisis in Cataloging Revisited: The Year's Work in Subject Analysis, 1990.

    Science.gov (United States)

    Young, James Bradford

    1991-01-01

    Reviews the 1990 literature that concerns subject analysis. Issues addressed include subject cataloging, including Library of Congress Subject Headings (LCSH); classification, including Dewey Decimal Classification (DDC), Library of Congress Classification, and classification in online systems; subject access, including the online use of…

  20. Irregular Liesegang-type patterns in gas phase revisited. II. Statistical correlation analysis

    Science.gov (United States)

    Torres-Guzmán, José C.; Martínez-Mekler, Gustavo; Müller, Markus F.

    2016-05-01

    We present a statistical analysis of Liesegang-type patterns formed in a gaseous HCl-NH3 system by ammonium chloride precipitation along glass tubes, as described in Paper I [J. C. Torres-Guzmán et al., J. Chem. Phys. 144, 174701 (2016)] of this work. We focus on the detection and characterization of short and long-range correlations within the non-stationary sequence of apparently irregular precipitation bands. To this end we applied several techniques to estimate spatial correlations stemming from different fields, namely, linear auto-correlation via the power spectral density, detrended fluctuation analysis (DFA), and methods developed in the context of random matrix theory (RMT). In particular RMT methods disclose well pronounced long-range correlations over at least 40 bands in terms of both, band positions and intensity values. By using a variant of the DFA we furnish proof of the nonlinear nature of the detected long-range correlations.
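
    Of the methods listed, DFA is the most compact to illustrate; a minimal order-1 implementation is sketched below (applied, for instance, to the sequence of band intensities). It is a generic textbook version, not the authors' pipeline.

    ```python
    import numpy as np

    def dfa(signal, scales):
        """Order-1 detrended fluctuation analysis: returns F(s) for each scale s."""
        profile = np.cumsum(np.asarray(signal, float) - np.mean(signal))
        fluctuations = []
        for s in scales:
            n_windows = len(profile) // s
            rms = []
            for w in range(n_windows):
                seg = profile[w * s:(w + 1) * s]
                x = np.arange(s)
                trend = np.polyval(np.polyfit(x, seg, 1), x)   # local linear detrending
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            fluctuations.append(np.mean(rms))
        return np.array(fluctuations)

    # The scaling exponent alpha is the slope of log F(s) versus log s, e.g.:
    # alpha, _ = np.polyfit(np.log(scales), np.log(dfa(series, scales)), 1)
    ```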

  1. Co-movement of energy commodities revisited: Evidence from wavelet coherence analysis

    Czech Academy of Sciences Publication Activity Database

    Vácha, Lukáš; Baruník, Jozef

    2012-01-01

    Roč. 34, č. 1 (2012), s. 241-247 ISSN 0140-9883 R&D Projects: GA ČR GA402/09/0965; GA ČR GD402/09/H045; GA ČR GAP402/10/1610 Institutional research plan: CEZ:AV0Z10750506 Keywords : Correlation * Co-movement * Wavelet analysis * Wavelet coherence Subject RIV: AH - Economics Impact factor: 2.538, year: 2012

  2. Performances of the UNDERground SEISmic array for the analysis of seismicity in Central Italy

    Directory of Open Access Journals (Sweden)

    R. Scarpa

    2006-06-01

    Full Text Available This paper presents the first results from the operation of a dense seismic array deployed in the underground Physics Laboratories at Gran Sasso (Central Italy). The array consists of 13 short-period, three-component seismometers with an aperture of about 550 m and an average sensor spacing of 90 m. The reduced sensor spacing, combined with the spatially-white character of the background noise, allows for quick and reliable detection of coherent wavefront arrivals even under very poor SNR conditions. We apply high-resolution frequency-slowness and polarization analyses to a set of 27 earthquakes recorded between November 2002 and September 2003 at epicentral distances spanning the 20-140 km interval. We locate these events using inversion of P- and S-wave backazimuths and S-P delay times, and compare the results with data from the Centralized National Seismic Network catalog. For the S-waves, the discrepancies between the two sets of locations never exceed 10 km; the largest errors are instead observed for the P-waves. This observation may be due to the fact that the small array aperture does not allow for robust assessment of waves propagating at high apparent velocities. This information is discussed with special reference to the directions of future studies aimed at elucidating the location of seismogenetic structures in Central Italy from extended analysis of the micro-seismicity.

  3. The Design and Analysis of Split Row-Column Addressing Array for 2-D Transducer

    Directory of Open Access Journals (Sweden)

    Xu Li

    2016-09-01

    Full Text Available For 3-D ultrasound imaging, row-column addressing (RCA) with 2N connections for an N × N 2-D array makes fabrication and interconnection simpler than full addressing with N² connections. However, RCA degrades the image quality because of defocusing in the signal channel direction during the transmit event. To solve this problem, a split row-column addressing scheme (SRCA) is proposed in this paper. Rather than connecting all the elements in the signal channel direction together, this scheme divides the elements in the signal channel direction into several disconnected blocks, thus enabling focused beam access in both the signal channel and switch channel directions. Selecting an appropriate split scheme is the key for SRCA to maintain a reasonable tradeoff between image quality and the number of connections. Various split schemes for a 32 × 32 array are fully investigated with point spread function (PSF) analysis and imaging simulation. The results show that the split scheme with five blocks (4, 6, 12, 6, and 4 elements per block) can provide image quality similar to full addressing. Splitting schemes for different array sizes from 16 × 16 to 96 × 96 are also discussed.

  4. Revisiting regional flood frequency analysis in Slovakia: the region-of-influence method vs. traditional regional approaches

    Science.gov (United States)

    Gaál, Ladislav; Kohnová, Silvia; Szolgay, Ján.

    2010-05-01

    During the last 10-15 years, Slovak hydrologists and water resources managers have devoted considerable effort to developing statistical tools for modelling probabilities of flood occurrence in a regional context. Initially, these models followed concepts of regional flood frequency analysis based on fixed regions; later, the Hosking and Wallis (HW; 1997) theory was adopted and modified. Nevertheless, it turned out that delineating homogeneous regions using these approaches is not a straightforward task, mostly due to the complex orography of the country. In this poster we aim at revisiting the flood frequency analyses so far accomplished for Slovakia by adopting one of the pooling approaches, i.e. the region-of-influence (ROI) approach (Burn, 1990). In the ROI approach, unique pooling groups of similar sites are defined for each site under study. The similarity of sites is defined through Euclidean distance in the space of site attributes that had also proved applicable in former cluster analyses: catchment area, afforested area, hydrogeological catchment index and the mean annual precipitation. The homogeneity of the proposed pooling groups is evaluated by the built-in homogeneity test of Lu and Stedinger (1992). Two alternatives of the ROI approach are examined: in the first, the target size of the pooling groups is adjusted to the target return period T of the estimated flood quantiles, while in the other, the target size is fixed regardless of the target T. The statistical models of the ROI approach are compared with the conventional regionalization approach based on the HW methodology, in which the parameters of the flood frequency distributions were derived by means of L-moment statistics and a regional formula for the estimation of the index flood was derived by multiple regression using physiographic and climatic catchment characteristics. The inter-comparison of the different frequency models is evaluated by means of the
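
    The pooling step of the ROI approach described above amounts to ranking sites by Euclidean distance in standardized attribute space and keeping the nearest ones. A minimal sketch follows, using the four attributes named in the abstract but a hypothetical data layout; it is not the study's code.

    ```python
    import numpy as np
    import pandas as pd

    ATTRS = ["catchment_area", "afforested_area",
             "hydrogeological_index", "mean_annual_precip"]

    def region_of_influence(sites: pd.DataFrame, target_id, pool_size=20):
        """Return IDs of the `pool_size` sites nearest to `target_id`
        in standardized attribute space (Euclidean distance)."""
        z = (sites[ATTRS] - sites[ATTRS].mean()) / sites[ATTRS].std()
        d = np.sqrt(((z - z.loc[target_id]) ** 2).sum(axis=1))
        return d.drop(target_id).nsmallest(pool_size).index.tolist()
    ```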

  5. Intraneural stimulation using wire-microelectrode arrays: analysis of force steps in recruitment curves

    NARCIS (Netherlands)

    Smit, J.P.A.; Rutten, Wim; Boom, H.B.K.

    1996-01-01

    In acute experiments on six Wistar rats, a wire-microelectrode array was inserted into the common peroneal nerve. A 5-channel array and a 24-channel array were available. Each electrode in the array was used to generate a twitch contraction force recruitment curve for the extensor digitorum longus

  6. An open source software for analysis of dynamic contrast enhanced magnetic resonance images: UMMPerfusion revisited.

    Science.gov (United States)

    Zöllner, Frank G; Daab, Markus; Sourbron, Steven P; Schad, Lothar R; Schoenberg, Stefan O; Weisser, Gerald

    2016-01-14

    Perfusion imaging has become an important image based tool to derive the physiological information in various applications, like tumor diagnostics and therapy, stroke, (cardio-) vascular diseases, or functional assessment of organs. However, even after 20 years of intense research in this field, perfusion imaging still remains a research tool without a broad clinical usage. One problem is the lack of standardization in technical aspects which have to be considered for successful quantitative evaluation; the second problem is a lack of tools that allow a direct integration into the diagnostic workflow in radiology. Five compartment models, namely, a one compartment model (1CP), a two compartment exchange (2CXM), a two compartment uptake model (2CUM), a two compartment filtration model (2FM) and eventually the extended Toft's model (ETM) were implemented as plugin for the DICOM workstation OsiriX. Moreover, the plugin has a clean graphical user interface and provides means for quality management during the perfusion data analysis. Based on reference test data, the implementation was validated against a reference implementation. No differences were found in the calculated parameters. We developed open source software to analyse DCE-MRI perfusion data. The software is designed as plugin for the DICOM Workstation OsiriX. It features a clean GUI and provides a simple workflow for data analysis while it could also be seen as a toolbox providing an implementation of several recent compartment models to be applied in research tasks. Integration into the infrastructure of a radiology department is given via OsiriX. Results can be saved automatically and reports generated automatically during data analysis ensure certain quality control.
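
    For reference, the extended Tofts model mentioned above has the textbook form Ct(t) = vp*Cp(t) + Ktrans * integral of Cp(tau)*exp(-kep*(t - tau)) dtau, with kep = Ktrans/ve; a minimal discrete-convolution sketch is given below. It is a generic formulation, not the plugin's implementation.

    ```python
    import numpy as np

    def extended_tofts(t, cp, ktrans, ve, vp):
        """Tissue concentration curve for the extended Tofts model.
        t: uniformly sampled times (min); cp: arterial input function at t;
        ktrans in 1/min; ve, vp dimensionless volume fractions."""
        t = np.asarray(t, float)
        cp = np.asarray(cp, float)
        kep = ktrans / ve
        dt = t[1] - t[0]
        kernel = np.exp(-kep * t)
        conv = np.convolve(cp, kernel)[:len(t)] * dt   # Ktrans-weighted leakage term
        return vp * cp + ktrans * conv
    ```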

  7. An open source software for analysis of dynamic contrast enhanced magnetic resonance images: UMMPerfusion revisited

    International Nuclear Information System (INIS)

    Zöllner, Frank G.; Daab, Markus; Sourbron, Steven P.; Schad, Lothar R.; Schoenberg, Stefan O.; Weisser, Gerald

    2016-01-01

    Perfusion imaging has become an important image-based tool to derive physiological information in various applications, such as tumor diagnostics and therapy, stroke, (cardio-)vascular diseases, or functional assessment of organs. However, even after 20 years of intense research in this field, perfusion imaging still remains a research tool without broad clinical usage. One problem is the lack of standardization in technical aspects which have to be considered for successful quantitative evaluation; the second problem is a lack of tools that allow a direct integration into the diagnostic workflow in radiology. Five compartment models, namely a one-compartment model (1CP), a two-compartment exchange model (2CXM), a two-compartment uptake model (2CUM), a two-compartment filtration model (2FM) and finally the extended Tofts model (ETM), were implemented as a plugin for the DICOM workstation OsiriX. Moreover, the plugin has a clean graphical user interface and provides means for quality management during the perfusion data analysis. Based on reference test data, the implementation was validated against a reference implementation. No differences were found in the calculated parameters. We developed open source software to analyse DCE-MRI perfusion data. The software is designed as a plugin for the DICOM workstation OsiriX. It features a clean GUI and provides a simple workflow for data analysis, while it can also be seen as a toolbox providing an implementation of several recent compartment models to be applied in research tasks. Integration into the infrastructure of a radiology department is given via OsiriX. Results can be saved automatically, and reports generated during data analysis ensure a degree of quality control.

  8. Statistical analysis of exacerbation rates in COPD: TRISTAN and ISOLDE revisited

    DEFF Research Database (Denmark)

    Keene, O N; Calverley, P M A; Jones, P W

    2008-01-01

    different analysis methods, we have reanalysed data from two large studies which, among other objectives, investigated the effectiveness of inhaled corticosteroids in reducing COPD exacerbation rates. Using the negative binomial model to reanalyse data from the TRISTAN and ISOLDE studies, the overall...... estimates of exacerbation rates on each treatment arm are higher and the confidence intervals for comparisons between treatments are wider, but the overall conclusions of TRISTAN and ISOLDE regarding reduction of exacerbations remain unchanged. The negative binomial approach appears to provide a better fit...
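
    A minimal sketch of the kind of negative binomial rate comparison discussed, assuming made-up patient-level exacerbation counts, a treatment indicator and follow-up time used as exposure; the treatment effect is read off as a rate ratio.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 200
      treat = rng.integers(0, 2, n)                   # 0 = control, 1 = active
      years = rng.uniform(0.5, 3.0, n)                # follow-up time per patient
      mu = np.exp(0.3 - 0.35 * treat) * years         # true annual rate lower on treatment
      counts = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))   # overdispersed counts

      X = sm.add_constant(treat.astype(float))
      model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5),
                     exposure=years)
      res = model.fit()
      print("rate ratio (treated vs control): %.2f  (p = %.3g)"
            % (np.exp(res.params[1]), res.pvalues[1]))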

  9. Covered Interest-Rate Parity Revisited: an Extreme Value Copula Analysis

    Directory of Open Access Journals (Sweden)

    Mikel Ugando-Peñate

    2015-11-01

    This article studied the covered interest-rate parity (CIP) condition under extreme market movements using extreme value theory and extreme value copulas to characterize dependence between extreme interest rate differentials and forward premium. The empirical analysis for the CIP between interest rates for the US dollar and the British pound indicates that there is strong co-movement between interest rate differentials and forward premium at different maturities and in both upper and lower tails. This conclusion would support the existence of the CIP condition under extreme market movements.

  10. Revisiting of Multiscale Static Analysis of Notched Laminates Using the Generalized Method of Cells

    Science.gov (United States)

    Naghipour Ghezeljeh, Paria; Arnold, Steven M.; Pineda, Evan J.

    2016-01-01

    Composite material systems generally exhibit a range of behavior on different length scales (from constituent level to macro); therefore, a multiscale framework is beneficial for the design and engineering of these material systems. The complex nature of the observed composite failure during experiments suggests the need for a three-dimensional (3D) multiscale model to attain a reliable prediction. However, the size of a multiscale three-dimensional finite element model can become prohibitively large and computationally costly. Two-dimensional (2D) models are preferred due to computational efficiency, especially if many different configurations have to be analyzed for an in-depth damage tolerance and durability design study. In this study, various 2D and 3D multiscale analyses will be employed to conduct a detailed investigation into the tensile failure of a given multidirectional, notched carbon fiber reinforced polymer laminate. Three-dimensional finite element analysis is typically considered more accurate than a 2D finite element model, as compared with experiments. Nevertheless, in the absence of adequate mesh refinement, large differences may be observed between a 2D and 3D analysis, especially for a shear-dominated layup. This observed difference has not been widely addressed in previous literature and is the main focus of this paper.

  11. Analysis of oligonucleotide array experiments with repeated measures using mixed models

    Directory of Open Access Journals (Sweden)

    Getchell Thomas V

    2004-12-01

    Background: Mixed factorial experiments with two or more factors are becoming increasingly common in microarray data analysis. In this case study, the two factors are presence (patients with Alzheimer's disease) or absence (control) of the disease, and brain region, namely olfactory bulb (OB) or cerebellum (CER). In the design considered in this manuscript, OB and CER are repeated measurements from the same subject and, hence, are correlated. It is critical to identify sources of variability in the analysis of oligonucleotide array experiments with repeated measures, and correlations among data points have to be considered. In addition, multiple testing problems are more complicated in experiments with multi-level treatments or treatment combinations. Results: In this study we adopted a linear mixed model to analyze oligonucleotide array experiments with repeated measures. We first construct a generalized F test to select differentially expressed genes. The Benjamini and Hochberg (BH) procedure for controlling the false discovery rate (FDR) at 5% was applied to the P values of the generalized F test. For those genes with a significant generalized F test, we then categorize them based on whether the interaction terms were significant or not at the α-level (αnew = 0.0033) determined by the FDR procedure. Since simple effects may be examined for the genes with a significant interaction effect, we adopt the protected Fisher's least significant difference (LSD) test procedure at the level of αnew to control the family-wise error rate (FWER) for each gene examined. Conclusions: A linear mixed model is appropriate for the analysis of oligonucleotide array experiments with repeated measures. We constructed a generalized F test to select differentially expressed genes, and then applied a specific sequence of tests to identify factorial effects. This sequence of tests was designed to control the gene-based FWER.
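
    As a schematic of the workflow (not the authors' code), the sketch below fits a per-gene linear mixed model with subject as a random effect, collects the p-value of the disease-by-region interaction and applies Benjamini-Hochberg FDR control; all data are simulated.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf
      from statsmodels.stats.multitest import multipletests

      rng = np.random.default_rng(0)
      n_subj, n_genes = 12, 50
      rows = []
      for s in range(n_subj):
          disease = s < 6                        # half diseased, half control
          subj_eff = rng.normal(0, 0.5)          # subject-level random effect
          for region in ("OB", "CER"):           # repeated measures per subject
              for g in range(n_genes):
                  signal = 1.0 if (g < 5 and disease and region == "OB") else 0.0
                  rows.append(dict(gene=g, subject=s, disease=int(disease),
                                   region=region,
                                   expr=subj_eff + signal + rng.normal(0, 1)))
      df = pd.DataFrame(rows)

      pvals = []
      for g in range(n_genes):
          sub = df[df.gene == g]
          m = smf.mixedlm("expr ~ disease * region", sub, groups=sub["subject"]).fit()
          pvals.append(m.pvalues["disease:region[T.OB]"])

      reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
      print("genes with significant interaction after BH:", np.where(reject)[0])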

  12. Detecting periodic oscillations in astronomy data: revisiting wavelet analysis with coloured and white noise

    Science.gov (United States)

    Xu, Chang

    2017-04-01

    The intrinsic random variability of an astronomical source hampers the detection of possible periodicities that we are interested in. We find that a simple first-order autoregressive [AR(1)] process gives a poor fit to the power decay in the observed spectrum for astrophysical sources and geodetic observations. Thus, appropriate background noise models have to be chosen for significance tests to distinguish real features from the intrinsic variability of the source. Here we recall the wavelet analysis with significance and confidence testing but extend it with the generalized Gauss Markov stochastic model as the null hypothesis, which includes AR(1) and a power law as special cases. We exemplify this discussion with real data, such as sunspot number data, geomagnetic indices, X-ray observations, as well as a Global Positioning System (GPS) position time series.
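
    A minimal sketch of testing spectral peaks against a red-noise null, using an AR(1) background (one special case of the generalized model mentioned above) and a periodogram in place of the wavelet spectrum; the 95% level is the theoretical AR(1) spectrum scaled by a chi-squared factor, and the time series is synthetic.

      import numpy as np
      from scipy.signal import periodogram
      from scipy.stats import chi2

      rng = np.random.default_rng(2)
      n, alpha = 2048, 0.7                       # AR(1) lag-1 autocorrelation
      x = np.zeros(n)
      for i in range(1, n):                      # simulate red noise + weak sine
          x[i] = alpha * x[i - 1] + rng.normal()
      x += 0.5 * np.sin(2 * np.pi * 0.05 * np.arange(n))

      f, pxx = periodogram(x, detrend="constant")
      # Theoretical AR(1) spectrum, normalized to the mean periodogram level
      # so null and data share the same overall power.
      p_ar1 = (1 - alpha**2) / (1 + alpha**2 - 2 * alpha * np.cos(2 * np.pi * f))
      p_ar1 *= pxx.mean() / p_ar1.mean()
      threshold = p_ar1 * chi2.ppf(0.95, df=2) / 2   # 95% red-noise level

      peaks = f[(pxx > threshold) & (f > 0)]
      print("frequencies exceeding the AR(1) 95% level:", peaks[:10])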

  13. Entering the 'big data' era in medicinal chemistry: molecular promiscuity analysis revisited.

    Science.gov (United States)

    Hu, Ye; Bajorath, Jürgen

    2017-06-01

    The 'big data' concept plays an increasingly important role in many scientific fields. Big data involves more than unprecedentedly large volumes of data that become available. Different criteria characterizing big data must be carefully considered in computational data mining, as we discuss herein focusing on medicinal chemistry. This is a scientific discipline where big data is beginning to emerge and provide new opportunities. For example, the ability of many drugs to specifically interact with multiple targets, termed promiscuity, forms the molecular basis of polypharmacology, a hot topic in drug discovery. Compound promiscuity analysis is an area that is much influenced by big data phenomena. Different results are obtained depending on chosen data selection and confidence criteria, as we also demonstrate.

  14. Correlation analysis of the Korean stock market: Revisited to consider the influence of foreign exchange rate

    Science.gov (United States)

    Jo, Sang Kyun; Kim, Min Jae; Lim, Kyuseong; Kim, Soo Yong

    2018-02-01

    We investigated the effect of the foreign exchange rate in a correlation analysis of the Korean stock market using both random matrix theory and a minimum spanning tree. We collected data sets comprising two types of stock price: the original stock price in Korean won and the price converted into US dollars at the contemporaneous foreign exchange rate. Comparing the random matrix theory results based on the two different prices, a few particular sectors exhibited substantial differences while other sectors changed little. The particular sectors were closely related to economic circumstances and the influence of foreign financial markets during that period. The method introduced in this paper offers a way to pinpoint the effect of the exchange rate on an emerging stock market.
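
    A schematic of the two tools named above, with simulated returns standing in for the Korean data: the correlation-matrix eigenvalues are compared with the Marchenko-Pastur bounds, and a minimum spanning tree is built from the usual distance d = sqrt(2(1 - rho)).

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(3)
      T, N = 1000, 50                               # days, stocks (synthetic)
      market = rng.normal(size=T)
      returns = 0.3 * market[:, None] + rng.normal(size=(T, N))

      C = np.corrcoef(returns, rowvar=False)        # correlation matrix
      eigvals = np.linalg.eigvalsh(C)

      q = T / N                                     # Marchenko-Pastur bounds
      lam_max = (1 + 1 / np.sqrt(q)) ** 2
      lam_min = (1 - 1 / np.sqrt(q)) ** 2
      print("eigenvalues outside the random bulk:",
            eigvals[(eigvals > lam_max) | (eigvals < lam_min)].round(2))

      D = np.sqrt(2 * (1 - C))                      # correlation-based distance
      G = nx.from_numpy_array(D)
      mst = nx.minimum_spanning_tree(G)
      print("MST edges:", mst.number_of_edges())    # N - 1 for a connected graph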

  15. Entering the ‘big data’ era in medicinal chemistry: molecular promiscuity analysis revisited

    Science.gov (United States)

    Hu, Ye; Bajorath, Jürgen

    2017-01-01

    The ‘big data’ concept plays an increasingly important role in many scientific fields. Big data involves more than unprecedentedly large volumes of data that become available. Different criteria characterizing big data must be carefully considered in computational data mining, as we discuss herein focusing on medicinal chemistry. This is a scientific discipline where big data is beginning to emerge and provide new opportunities. For example, the ability of many drugs to specifically interact with multiple targets, termed promiscuity, forms the molecular basis of polypharmacology, a hot topic in drug discovery. Compound promiscuity analysis is an area that is much influenced by big data phenomena. Different results are obtained depending on chosen data selection and confidence criteria, as we also demonstrate. PMID:28670471

  16. Read/write schemes analysis for novel complementary resistive switches in passive crossbar memory arrays

    International Nuclear Information System (INIS)

    Yu Shimeng; Liang Jiale; Wu Yi; Wong, H-S Philip

    2010-01-01

    Recently a prototype of complementary resistive switches has been proposed to solve the sneak-path problem in passive crossbar memory arrays. To further evaluate the potential of this novel cell structure for practical applications, we present a modeling analysis to capture its switching dynamics and analyze its unique read/write schemes. The model is corroborated by experimental data. We found a trade-off between the read voltage window and the write voltage window. The constraint of avoiding disturbance of unselected cells is critical for proper functionality, and in turn limits the writing speed.

  17. Prostate alpha/beta revisited - an analysis of clinical results from 14 168 patients

    Energy Technology Data Exchange (ETDEWEB)

    Dasu, Alexandru [Dept. of Radiation Physics UHL, County Council of Oestergoetland, Linkoeping (Sweden); Radiation Physics, Dept. of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping Univ., Linkoeping (Sweden); Toma-Dasu, Iuliana [Medical Radiation Physics, Stockholm Univ. and Karolinska Institutet, Stockholm (Sweden)

    2012-11-15

    Purpose. To determine the dose response parameters and the fractionation sensitivity of prostate tumours from clinical results of patients treated with external beam radiotherapy. Material and methods. The study was based on five-year biochemical results from 14 168 patients treated with external beam radiotherapy. Treatment data from 11 330 patients treated with conventional fractionation have been corrected for overall treatment time and fitted with a logit equation. The results have been used to determine the optimum α/β values that minimise differences in predictions from 2838 patients treated with hypofractionated schedules. Results. Conventional fractionation data yielded logit dose response parameters for all risk groups and for all definitions of biochemical failures. The analysis of hypofractionation data led to very low α/β values (1-1.7 Gy) in all mentioned cases. Neglecting the correction for overall treatment time has little impact on the derivation of α/β values for prostate cancers. Conclusions. These results indicate that the high fractionation sensitivity is an intrinsic property of prostate carcinomas and they support the use of hypofractionation to increase the therapeutic gain for these tumours.
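
    As a worked illustration of why the fitted α/β matters clinically (a generic linear-quadratic calculation, not the paper's dose-response model), the sketch below converts a hypothetical hypofractionated schedule into the equivalent dose in 2 Gy fractions, EQD2 = D(d + α/β)/(2 + α/β).

      def eqd2(total_dose, dose_per_fraction, alpha_beta):
          """Equivalent dose in 2 Gy fractions from the linear-quadratic model."""
          return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

      # Hypothetical hypofractionated schedule: 20 x 3 Gy = 60 Gy
      for ab in (1.5, 3.0, 10.0):
          print(f"alpha/beta = {ab:>4} Gy  ->  EQD2 = {eqd2(60.0, 3.0, ab):.1f} Gy")
      # A low alpha/beta (~1.5 Gy) makes 3 Gy fractions considerably more potent
      # per gray than they would be for a typical early-responding tissue.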

  18. Revisiting Slow Slip Events Occurrence in Boso Peninsula, Japan, Combining GPS Data and Repeating Earthquakes Analysis

    Science.gov (United States)

    Gardonio, B.; Marsan, D.; Socquet, A.; Bouchon, M.; Jara, J.; Sun, Q.; Cotte, N.; Campillo, M.

    2018-02-01

    Slow slip events (SSEs) regularly occur near the Boso Peninsula, central Japan. Their recurrence time has decreased from 6.4 to 2.2 years between 1996 and 2014. It is important to better constrain the slip history of this area, especially as models show that the recurrence intervals could become shorter prior to the occurrence of a large interplate earthquake nearby. We analyze the seismic waveforms of more than 2,900 events (M≥1.0) taking place in the Boso Peninsula, Japan, from 1 April 2004 to 4 November 2015, calculating the correlation and the coherence between each pair of events in order to define groups of repeating earthquakes. The cumulative number of repeating earthquakes suggests the existence of two slow slip events that have escaped detection so far. Small transient displacements observed in the time series of nearby GPS stations confirm these results. The detection scheme coupling repeating earthquakes and GPS analysis allows the detection of small SSEs that were not seen before by classical methods. This work brings new information on the diversity of SSEs and demonstrates that the SSEs in the Boso area present a more complex history than previously considered.
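
    A minimal sketch of the waveform-similarity step used to define repeating earthquakes, on synthetic traces: the maximum normalized cross-correlation between a template and a candidate event is compared with a chosen threshold.

      import numpy as np

      def max_norm_xcorr(a, b):
          """Maximum normalized cross-correlation between two equal-length traces."""
          a = (a - a.mean()) / (a.std() * len(a))
          b = (b - b.mean()) / b.std()
          return np.max(np.correlate(a, b, mode="full"))

      rng = np.random.default_rng(4)
      t = np.linspace(0, 1, 500)
      template = np.sin(2 * np.pi * 8 * t) * np.exp(-4 * t)      # synthetic waveform
      pair_a = template + 0.05 * rng.normal(size=t.size)         # nearly identical event
      pair_b = np.roll(template, 40) + 0.5 * rng.normal(size=t.size)

      cc_threshold = 0.95
      for name, trace in (("pair_a", pair_a), ("pair_b", pair_b)):
          cc = max_norm_xcorr(template, trace)
          print(name, "cc = %.2f" % cc,
                "-> repeating" if cc > cc_threshold else "-> not repeating")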

  19. Analysis of LH Launcher Arrays (Like the ITER One) Using the TOPLHA Code

    International Nuclear Information System (INIS)

    Maggiora, R.; Milanesio, D.; Vecchi, G.

    2009-01-01

    TOPLHA (Torino Polytechnic Lower Hybrid Antenna) code is an innovative tool for the 3D/1D simulation of Lower Hybrid (LH) antennas, i.e. accounting for realistic 3D waveguide geometry and for accurate 1D plasma models, without restrictions on waveguide shape, including curvature. This tool provides a detailed performance prediction of any LH launcher, by computing the antenna scattering parameters, the current distribution, electric field maps and power spectra for any user-specified waveguide excitation. In addition, a fully parallelized and multi-cavity version of TOPLHA permits the analysis of large and complex waveguide arrays in a reasonable simulation time. A detailed analysis of the performance of the proposed ITER LH antenna geometry has been carried out, underlining the strong dependence of the antenna input parameters on plasma conditions. A preliminary optimization of the antenna dimensions has also been accomplished. The electric current distribution on conductors, the electric field distribution at the interface with plasma, and the power spectra have been calculated as well. The analysis shows the strong capabilities of the TOPLHA code as a predictive tool and its usefulness for the detailed design of LH launcher arrays.

  20. Revisiting pulmonary vein isolation alone for persistent atrial fibrillation: A systematic review and meta-analysis.

    Science.gov (United States)

    Voskoboinik, Aleksandr; Moskovitch, Jeremy T; Harel, Nadav; Sanders, Prashanthan; Kistler, Peter M; Kalman, Jonathan M

    2017-05-01

    Early studies demonstrated relatively low success rates for pulmonary vein isolation (PVI) alone in patients with persistent atrial fibrillation (PeAF). However, the advent of new technologies and the observation that additional substrate ablation does not improve outcomes have created a new focus on PVI alone for treatment of PeAF. The purpose of this study was to systematically review the recent medical literature to determine current medium-term outcomes when a PVI-only approach is used for PeAF. An electronic database search (MEDLINE, Embase, Web of Science, PubMed, Cochrane) was performed in August 2016. Only studies of PeAF patients undergoing a "PVI only" ablation strategy using contemporary radiofrequency (RF) technology or second-generation cryoballoon (CB2) were included. A random-effects model was used to assess the primary outcome of pooled single-procedure 12-month arrhythmia-free survival. Predictors of recurrence were also examined and a meta-analysis performed if ≥4 studies examined the parameter. Fourteen studies of 956 patients, of whom 45.2% underwent PVI only with RF and 54.8% with CB2, were included. Pooled single-procedure 12-month arrhythmia-free survival was 66.7% (95% confidence interval [CI] 60.8%-72.2%), with the majority of patients (80.5%) off antiarrhythmic drugs. Complication rates were very low, with cardiac tamponade occurring in 5 patients (0.6%) and persistent phrenic nerve palsy in 5 CB2 patients (0.9% of CB2). Blanking period recurrence (hazard ratio 4.68, 95% CI 1.70-12.9) was the only significant predictor of recurrence. A PVI-only strategy in PeAF patients with a low prevalence of structural heart disease using contemporary technology yields excellent outcomes comparable to those for paroxysmal AF ablation. Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
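
    A sketch of a moment-based random-effects pooling of made-up study-level success proportions on the logit scale; this illustrates the class of model described, not the actual meta-analysis code used in the study.

      import numpy as np
      from scipy.special import logit, expit

      # Made-up study-level data: successes / patients at 12 months
      success = np.array([60, 45, 80, 30, 55])
      total   = np.array([90, 70, 115, 50, 80])

      p = success / total
      y = logit(p)                                   # effect on the logit scale
      v = 1.0 / success + 1.0 / (total - success)    # approximate variance of logit(p)

      # Fixed-effect weights, Q statistic, then a moment estimate of tau^2
      w = 1.0 / v
      y_fixed = np.sum(w * y) / np.sum(w)
      Q = np.sum(w * (y - y_fixed) ** 2)
      tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

      w_re = 1.0 / (v + tau2)                        # random-effects weights
      y_re = np.sum(w_re * y) / np.sum(w_re)
      se = np.sqrt(1.0 / np.sum(w_re))
      lo, hi = y_re - 1.96 * se, y_re + 1.96 * se
      print("pooled 12-month success: %.1f%% (95%% CI %.1f-%.1f%%)"
            % (100 * expit(y_re), 100 * expit(lo), 100 * expit(hi)))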

  1. Revisiting drivers of energy intensity in China during 1997–2007: A structural decomposition analysis

    International Nuclear Information System (INIS)

    Zeng, Lin; Xu, Ming; Liang, Sai; Zeng, Siyu; Zhang, Tianzhu

    2014-01-01

    The decline of China's energy intensity slowed since 2000. During 2002–2005 it actually increased, reversing the long-term trend. Therefore, it is important to identify drivers of the fluctuation of energy intensity. We use input–output structural decomposition analysis to investigate the contributions of changes in energy mix, sectoral energy efficiency, production structure, final demand structure, and final demand category composition to China's energy intensity fluctuation during 1997–2007. We include household energy consumption in the study by closing the input–output model with respect to households. Results show that sectoral energy efficiency improvements contribute the most to the energy intensity decline during 1997–2007. The increase in China's energy intensity during 2002–2007 is instead explained by changes in final demand composition and production structure. Changes in final demand composition are mainly due to increasing share of exports, while changes in production structure mainly arise from the shift of Chinese economy to more energy-intensive industries. Changes in energy mix and final demand structure contribute little to China's energy intensity fluctuation. From the consumption perspective, growing exports of energy-intensive products and increasing infrastructure demands explain the majority of energy intensity increase during 2002–2007. - Highlights: • We analyzed energy intensity change from production and consumption perspectives. • We extended the research scope of energy intensity to cover household consumption. • Sectoral energy efficiency improvement contributed most to energy intensity decline. • Impact of production structure change on energy intensity varied at different times. • Growing export demand newly became main driver of China's energy intensity increase
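
    To make the idea of separating efficiency and structure effects concrete, here is a deliberately simplified two-factor decomposition (aggregate intensity = sum of sectoral share times sectoral intensity, split with polar averages); the actual study uses input-output structural decomposition with more factors, and the numbers below are invented.

      import numpy as np

      # Sectoral output shares and energy intensities (toy numbers, two periods)
      share_0 = np.array([0.5, 0.3, 0.2])           # base year
      share_1 = np.array([0.4, 0.3, 0.3])           # end year (shift to heavy industry)
      inten_0 = np.array([0.8, 2.0, 5.0])           # energy per unit output
      inten_1 = np.array([0.6, 1.6, 4.5])           # efficiency improved in all sectors

      I0 = share_0 @ inten_0
      I1 = share_1 @ inten_1

      # Two-polar (average of Laspeyres/Paasche) decomposition of the change
      struct_effect = 0.5 * ((share_1 - share_0) @ inten_0 + (share_1 - share_0) @ inten_1)
      eff_effect    = 0.5 * (share_0 @ (inten_1 - inten_0) + share_1 @ (inten_1 - inten_0))

      print("total change     :", round(I1 - I0, 3))
      print("structure effect :", round(struct_effect, 3))
      print("efficiency effect:", round(eff_effect, 3))
      # The two effects sum exactly to the total change in aggregate intensity.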

  2. Colours of minor bodies in the outer solar system. II. A statistical analysis revisited

    Science.gov (United States)

    Hainaut, O. R.; Boehnhardt, H.; Protopapa, S.

    2012-10-01

    We present an update of the visible and near-infrared colour database of Minor Bodies in the Outer Solar System (MBOSSes), which now includes over 2000 measurement epochs of 555 objects, extracted from over 100 articles. The list is fairly complete as of December 2011. The database is now large enough to enable any dataset with a large dispersion to be safely identified and rejected from the analysis. The selection method used is quite insensitive to individual outliers. Most of the rejected datasets were observed during the early days of MBOSS photometry. The individual measurements are combined in a way that avoids possible rotational artifacts. The spectral gradient over the visible range is derived from the colours, as well as the R absolute magnitude M(1,1). The average colours, absolute magnitude, and spectral gradient are listed for each object, as well as the physico-dynamical classes using a classification adapted from Gladman and collaborators. Colour-colour diagrams, histograms, and various other plots are presented to illustrate and investigate class characteristics and trends with other parameters, whose significances are evaluated using standard statistical tests. Except for a small discrepancy for the J-H colour, the largest objects, with M(1,1) < 5, are indistinguishable from the smaller ones. The larger ones are slightly bluer than the smaller ones in J-H. Short-period comets, Plutinos and other resonant objects, hot classical disk objects, scattered disk objects and detached disk objects have similar properties in the visible, while the cold classical disk objects and the Jupiter Trojans form two separate groups of their spectral properties in the visible wavelength range. The well-known colour bimodality of Centaurs is confirmed. The hot classical disk objects with large inclinations, or large orbital excitations are found to be bluer than the others, confirming a previously known result. Additionally, the hot classical disk objects with a

  3. Aligned Carbon Nanotube Arrays Bonded to Solid Graphite Substrates: Thermal Analysis for Future Device Cooling Applications

    Directory of Open Access Journals (Sweden)

    Betty T. Quinton

    2018-05-01

    Carbon nanotubes (CNTs) are known for high thermal conductivity and have potential use as nano-radiators or heat exchangers. This paper focuses on the thermal performance of carpet-like arrays of vertically aligned CNTs on solid graphite substrates, with the idea of investigating their behavior as a function of carpet dimensions and predicting their performance as a thermal interface material (TIM) for electronic device cooling. Vertically aligned CNTs were grown on a highly oriented pyrolytic graphite (HOPG) substrate, which creates a robust and durable all-carbon hierarchical structure. A multi-layer thermal analysis approach using the Netzsch laser flash analysis system was used to evaluate their performance as a function of carpet height, from which their thermal properties can be determined. It was seen that the thermal resistance of the CNT array varies linearly with CNT carpet height, providing a unique way of decoupling the properties of the CNT carpet from its interface. These data were used to estimate the thermal conductivity of individual multi-walled nanotube strands in this carpet, which was about 35 W/m-K. The influence of CNT carpet parameters (areal density, diameter, and length) on the thermal resistance of the CNT carpet and its potential advantages and limitations as an integrated TIM are discussed.
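
    A small sketch of the decoupling step described above: if the total resistance varies linearly with carpet height, a straight-line fit gives the carpet conductivity from the slope and the interface contribution from the intercept; the heights and resistances below are illustrative only.

      import numpy as np

      # Illustrative data: carpet height (m) vs total thermal resistance
      # per unit area (m^2 K / W)
      height = np.array([20e-6, 40e-6, 60e-6, 80e-6, 100e-6])
      r_total = np.array([1.5e-6, 2.5e-6, 3.5e-6, 4.5e-6, 5.5e-6])

      slope, intercept = np.polyfit(height, r_total, 1)
      k_carpet = 1.0 / slope                 # carpet thermal conductivity, W/(m K)
      r_interface = intercept                # height-independent interface term

      print("carpet conductivity ~ %.1f W/m-K" % k_carpet)
      print("interface resistance ~ %.2e m^2K/W" % r_interface)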

  4. Analytical Kinematics and Coupled Vibrations Analysis of Mechanical System Operated by Solar Array Drive Assembly

    Science.gov (United States)

    Sattar, M.; Wei, C.; Jalali, A.; Sattar, R.

    2017-07-01

    To address the impact of solar array (SA) anomalies and vibrations on the performance of precision space-based operations, it is important to carry out an accurate jitter analysis. This work provides a mathematical modelling scheme to approximate the kinematics and coupled micro-disturbance dynamics of a rigid load supported and operated by a solar array drive assembly (SADA). The SADA employed in the analysis provides a step-wave excitation torque to activate the system. The analytical investigation of the kinematics is accomplished by using generalized linear and Euler angle coordinates, applying multi-body dynamics concepts and transformation principles. The theoretical model is extended, to develop the equations of motion (EoM), through the energy method (Lagrange equation). The main emphasis is on the coupled frequency response, obtained by determining the energies dissipated and observing the dynamic behaviour of the internal vibratory systems of the SADA. The disturbance model captures discrete active harmonics of the SADA, natural modes and vibration amplifications caused by interactions between active harmonics and structural modes of the mechanical assembly. The proposed methodology can help to predict the true micro-disturbance behaviour of a SADA operating a rigid load. Moreover, performance outputs may be compared against actual mission requirements to assess whether a spacecraft controller design meets the stringent accuracy goals of the next generation of space missions.

  5. Oxidative phosphorylation revisited

    DEFF Research Database (Denmark)

    Nath, Sunil; Villadsen, John

    2015-01-01

    The fundamentals of oxidative phosphorylation and photophosphorylation are revisited. New experimental data on the involvement of succinate and malate anions respectively in oxidative phosphorylation and photophosphorylation are presented. These new data offer a novel molecular mechanistic...

  6. Generation of a genomic tiling array of the human Major Histocompatibility Complex (MHC) and its application for DNA methylation analysis

    Directory of Open Access Journals (Sweden)

    Ottaviani Diego

    2008-05-01

    Background: The major histocompatibility complex (MHC) is essential for human immunity and is highly associated with common diseases, including cancer. While the genetics of the MHC has been studied intensively for many decades, very little is known about the epigenetics of this most polymorphic and disease-associated region of the genome. Methods: To facilitate comprehensive epigenetic analyses of this region, we have generated a genomic tiling array of 2 Kb resolution covering the entire 4 Mb MHC region. The array has been designed to be compatible with chromatin immunoprecipitation (ChIP), methylated DNA immunoprecipitation (MeDIP), array comparative genomic hybridization (aCGH) and expression profiling, including non-coding RNAs. The array comprises 7832 features, consisting of two replicates of both forward and reverse strands of MHC amplicons and appropriate controls. Results: Using MeDIP, we demonstrate the application of the MHC array for DNA methylation profiling and the identification of tissue-specific differentially methylated regions (tDMRs). Based on the analysis of two tissues and two cell types, we identified 90 tDMRs within the MHC and describe their characterisation. Conclusion: A tiling array covering the MHC region was developed and validated. Its successful application for DNA methylation profiling indicates that this array represents a useful tool for molecular analyses of the MHC in the context of medical genomics.

  7. Pseudodiagnosticity Revisited

    Science.gov (United States)

    Crupi, Vincenzo; Tentori, Katya; Lombardi, Luigi

    2009-01-01

    In the psychology of reasoning and judgment, the pseudodiagnosticity task has been a major tool for the empirical investigation of people's ability to search for diagnostic information. A novel normative analysis of this experimental paradigm is presented, by which the participants' prevailing responses turn out not to support the generally…

  8. Electronic tongue - an array of non-specific chemical sensors - for analysis of radioactive solutions

    International Nuclear Information System (INIS)

    Legin, A.; Rudnitskaya, A.; Babain, V.

    2006-01-01

    Multisensor systems combining chemical sensor arrays with multivariate data processing engines (the 'electronic tongue'), which have been developing rapidly and successfully in recent years, are capable of simultaneous quantitative analysis of several species, e.g. metals, in complex real solutions. The expansion of the metals (metal ions) and species to be detected in radioactive waste requires continuous enhancement of sensing materials and sensors, with properties substantially different from those known earlier. A promising direction of R&D on novel sensing materials is the exploitation of radiochemical extraction systems and the application of extraction substances as active components of new sensors. The sensors based on bidentate organophosphorus compounds and their combinations with chlorinated cobalt dicarbollide displayed high sensitivity and selectivity to the rare-earth metal ions La3+, Pr3+, Nd3+ and Eu3+. The results indicated good promise for the development of novel analytical tools for the detection of multivalent metal cations in different media, particularly in corrosive solutions such as radioactive wastes and solutions derived from spent nuclear fuel. The sensors, and sensor arrays made on their basis, can play an important role in the development of 'electronic tongue' systems for rapid analytical determination of different components in complex radioactive solutions.

  9. Global analysis of aberrant pre-mRNA splicing in glioblastoma using exon expression arrays

    Directory of Open Access Journals (Sweden)

    Nixon Tamara J

    2008-05-01

    Background: Tumor-predominant splice isoforms were identified during comparative in silico sequence analysis of EST clones, suggesting that global aberrant alternative pre-mRNA splicing may be an epigenetic phenomenon in cancer. We used an exon expression array to perform an objective, genome-wide survey of glioma-specific splicing in 24 GBM and 12 nontumor brain samples. Validation studies were performed using RT-PCR on glioma cell lines, patient tumor and nontumor brain samples. Results: In total, we confirmed 14 genes with glioma-specific splicing; seven were novel events identified by the exon expression array (A2BP1, BCAS1, CACNA1G, CLTA, KCNC2, SNCB, and TPD52L2). Our data indicate that large changes (> 5-fold) in alternative splicing are infrequent in gliomagenesis. Conclusion: While we observed some tumor-specific alternative splicing, the number of genes showing exclusive tumor-specific isoforms was on the order of tens, rather than the hundreds suggested previously by in silico mining. Given the important role of alternative splicing in neural differentiation, there may be selective pressure to maintain a majority of splicing events in order to retain glial-like characteristics of the tumor cells.

  10. X-ray photoelectron spectroscopy (XPS) analysis of a photosensitive ZrO2 array

    Science.gov (United States)

    Li, Y.; Zhao, G.; Zhu, R.; Kou, Z.

    2018-03-01

    Starting from an organic zirconium source and adding chemical modifiers, a photosensitive ZrO2 sol was prepared, from which a uniform ZrO2 dot array with a mean dot diameter of around 800 nm was fabricated. UV-vis spectroscopy and X-ray photoelectron spectroscopy (XPS) were used to study the photochemical reaction process and the photosensitivity mechanism of the photosensitive ZrO2 gel film, to determine the zirconium-centred chelate structure formed in the reaction, and to establish a molecular model of the chelate. The evolution of the XPS spectra under ultraviolet irradiation was also studied: the Zr 3d5/2 peak at a binding energy of 184.9 eV, corresponding to the chelated state, gradually weakens, while the peak associated with the Zr-O bonded state gradually increases. This indicates that a photochemical reaction takes place during ultraviolet irradiation. This study is of significance for the microfabrication of ZrO2 arrays, not only for memory devices but also for optical devices.

  11. Independent component analysis reveals new and biologically significant structures in micro array data

    Directory of Open Access Journals (Sweden)

    Veerla Srinivas

    2006-06-01

    Background: An alternative to standard approaches for uncovering biologically meaningful structures in microarray data is to treat the data as a blind source separation (BSS) problem. BSS attempts to separate a mixture of signals into their different sources and refers to the problem of recovering signals from several observed linear mixtures. In the context of microarray data, "sources" may correspond to specific cellular responses or to co-regulated genes. Results: We applied independent component analysis (ICA) to three different microarray data sets: two tumor data sets and one time series experiment. To obtain reliable components we used iterated ICA to estimate component centrotypes. We found that many of the low-ranking components may indeed show a strong biological coherence and hence be of biological significance. Generally ICA achieved a higher resolution when compared with results based on correlated expression, and a larger number of gene clusters significantly enriched for Gene Ontology (GO) categories. In addition, components characteristic for molecular subtypes and for tumors with specific chromosomal translocations were identified. ICA also identified more than one gene cluster significant for the same GO categories and hence disclosed a higher level of biological heterogeneity, even within coherent groups of genes. Conclusion: Although the ICA approach primarily detects hidden variables, these surfaced as highly correlated genes in time series data and, in one instance, in the tumor data. This further strengthens the biological relevance of latent variables detected by ICA.

  12. Null stream analysis of Pulsar Timing Array data: localisation of resolvable gravitational wave sources

    Science.gov (United States)

    Goldstein, Janna; Veitch, John; Sesana, Alberto; Vecchio, Alberto

    2018-04-01

    Super-massive black hole binaries are expected to produce a gravitational wave (GW) signal in the nanohertz frequency band which may be detected by pulsar timing arrays (PTAs) in the coming years. The signal is composed of both stochastic and individually resolvable components. Here we develop a generic Bayesian method for the analysis of resolvable sources based on the construction of `null-streams' which cancel the part of the signal held in common for each pulsar (the Earth-term). For an array of N pulsars there are N - 2 independent null-streams that cancel the GW signal from a particular sky location. This method is applied to the localisation of quasi-circular binaries undergoing adiabatic inspiral. We carry out a systematic investigation of the scaling of the localisation accuracy with signal strength and number of pulsars in the PTA. Additionally, we find that source sky localisation with the International PTA data release one is vastly superior to that achieved by its constituent regional PTAs.

  13. Scattering Analysis of a Compact Dipole Array with Series and Parallel Feed Network including Mutual Coupling Effect

    Directory of Open Access Journals (Sweden)

    H. L. Sneha

    2013-01-01

    The current focus in the defense arena is on stealth technology, with an emphasis on controlling the radar cross-section (RCS). The scattering from antennas mounted on the platform is of prime importance, especially for a low-observable aerospace vehicle. This paper presents the analysis of the scattering cross section of a uniformly spaced linear dipole array. Two types of feed networks, that is, series and parallel feed networks, are considered. The total RCS of the phased array with either kind of feed network is obtained by following the signal as it enters through the aperture and travels through the feed network. The RCS of the array is estimated including the mutual coupling effect between the dipole elements in three configurations, that is, side-by-side, collinear, and parallel-in-echelon. The results presented can be useful while designing a phased array with optimum performance towards low observability.

  14. Analysis of mean time to data loss of fault-tolerant disk arrays RAID-6 based on specialized Markov chain

    Science.gov (United States)

    Rahman, P. A.; D'K Novikova Freyre Shavier, G.

    2018-03-01

    This paper is devoted to the analysis of the mean time to data loss of redundant RAID-6 disk arrays with alternation (striping) of data, considering different disk failure rates in the normal, degraded and rebuild states of the array, as well as a nonzero disk replacement time. The reliability model developed by the authors on the basis of a Markov chain, and the resulting calculation formula for estimating the mean time to data loss (MTTDL) of RAID-6 disk arrays, are also presented. Finally, a technique for estimating the initial reliability parameters and examples of calculation of the MTTDL of RAID-6 disk arrays for different numbers of disks are given.
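
    A minimal sketch of a Markov-chain MTTDL calculation of this general kind (not the authors' exact model): states 0, 1 and 2 count failed disks, a third concurrent failure is the absorbing data-loss state, and the mean time to absorption follows from a small linear solve; all rates are invented.

      import numpy as np

      n = 12                  # disks in the array
      lam_n = 1e-5            # per-disk failure rate in the normal state (1/h)
      lam_d = 2e-5            # per-disk failure rate in degraded/rebuild states (1/h)
      mu = 1.0 / 24.0         # repair (rebuild + replacement) rate (1/h)

      # Transient states: 0, 1, 2 failed disks; a third failure means data loss
      # (absorbing). Q holds transition rates among the transient states only.
      Q = np.array([
          [-n * lam_n,                 n * lam_n,                 0.0],
          [ mu,        -(mu + (n - 1) * lam_d),      (n - 1) * lam_d],
          [ 0.0,                       mu,      -(mu + (n - 2) * lam_d)],
      ])

      # Expected time to absorption from each transient state: solve Q m = -1
      m = np.linalg.solve(Q, -np.ones(3))
      print("MTTDL starting from a healthy array: %.3g hours (%.1f years)"
            % (m[0], m[0] / 8760.0))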

  15. Tectonic Tremor analysis with the Taiwan Chelungpu-Fault Drilling Program (TCDP) downhole seismometer array

    Science.gov (United States)

    Lin, Y.; Hillers, G.; Ma, K.; Campillo, M.

    2011-12-01

    We study tectonic tremor activity in the Taichung area, Taiwan, analyzing continuous seismic records from 6 short-period sensors of the TCDP borehole array situated around 1 km depth. The low background noise level facilitates the detection of low-amplitude tectonic tremor and low-frequency earthquake (LFE) waveforms. We apply a hierarchical analysis to first detect transient amplitude increases, and to subsequently verify its tectonic origin, i.e. to associate it with tremor signals. The frequency content of tremor usually exceeds the background noise around 2-8 Hz; hence, in the first step, we use BHS1, BHS4 and BHS7 (top, center, bottom sensor) records to detect amplitude anomalies in this frequency range. We calculate the smoothed spectra of 30 second non-overlapping windows taken daily from 5 night time hours to avoid increased day time amplitudes associated with cultural activities. Amplitude detection is then performed on frequency dependent median values of 5 minute advancing, 10 minute long time windows, yielding a series of threshold dependent increased-energy spectra-envelopes, indicating teleseismic waveforms, potential tremor records, or other transients related to anthropogenic or natural sources. To verify the transients' tectonic origin, potential tremor waveforms detected by the amplitude method are manually picked in the time domain. We apply the Brown et al. (2008) LFE matched filter technique to three-component data from the 6 available sensors. Initial few-second templates are taken from the analyst-picked, minute-long segments, and correlated component-wise with 24-h data. Significantly increased similarity between templates and matched waveform segments is detected using the array-average 7-fold MAD measure. Harvested waveforms associated with this initial `weak' detection are stacked, and the thus created master templates are used in an iterative correlation procedure to arrive at robust LFE detections. The increased similarity of waveforms

  16. A Structural Equation Model of Risk Perception of Rockfall for Revisit Intention

    OpenAIRE

    Ya-Fen Lee; Yun-Yao Chi

    2014-01-01

    The study aims to explore the relationship between risk perception of rockfall and revisit intention using a Structural Equation Modeling (SEM) analysis. A total of 573 valid questionnaires were collected from travelers to Taroko National Park, Taiwan. The findings show that the majority of travelers have a medium perception of rockfall risk and are willing to revisit Taroko National Park. The revisit intention to Taroko National Park is influenced by hazardous preferences, willingness-to-pa...

  17. Temperature-dependent photoluminescence analysis of ZnO nanowire array annealed in air

    Science.gov (United States)

    Sun, Yanan; Gu, Xiuquan; Zhao, Yulong; Wang, Linmeng; Qiang, Yinghuai

    2018-05-01

    ZnO nanowire arrays (NWAs) were prepared on transparent conducting fluorine-doped tin oxide (FTO) substrates through a facile hydrothermal method, followed by annealing at 500 °C to improve their crystalline quality and photoelectrochemical (PEC) activity. It was found that the annealing did not change the morphology, but resulted in a significant reduction of the donor concentration. Temperature-dependent photoluminescence (PL) was carried out for a comprehensive analysis of the effect of annealing. Notably, four dominant peaks were identified in the 10 K spectrum of a sample annealed at 500 °C, and they were assigned to FX, D0X, (e, D0) and (e, D0)-1LO, respectively. Of them, the FX emission existed only below 130 K, while the room-temperature (RT) PL spectrum was dominated by the D0X emission.

  18. Modeling and stability analysis for the upper atmosphere research satellite auxiliary array switch component

    Science.gov (United States)

    Wolfgang, R.; Natarajan, T.; Day, J.

    1987-01-01

    A feedback control system, called an auxiliary array switch, was designed to connect or disconnect auxiliary solar panel segments from a spacecraft electrical bus to meet fluctuating demand for power. A simulation of the control system was used to carry out a number of design and analysis tasks that could not economically be performed with a breadboard of the hardware. These tasks included: (1) the diagnosis of a stability problem, (2) identification of parameters to which the performance of the control system was particularly sensitive, (3) verification that the response of the control system to anticipated fluctuations in the electrical load of the spacecraft was satisfactory, and (4) specification of limitations on the frequency and amplitude of the load fluctuations.

  19. An Array of Qualitative Data Analysis Tools: A Call for Data Analysis Triangulation

    Science.gov (United States)

    Leech, Nancy L.; Onwuegbuzie, Anthony J.

    2007-01-01

    One of the most important steps in the qualitative research process is analysis of data. The purpose of this article is to provide elements for understanding multiple types of qualitative data analysis techniques available and the importance of utilizing more than one type of analysis, thus utilizing data analysis triangulation, in order to…

  20. Adapting Controlled-source Coherence Analysis to Dense Array Data in Earthquake Seismology

    Science.gov (United States)

    Schwarz, B.; Sigloch, K.; Nissen-Meyer, T.

    2017-12-01

    Exploration seismology deals with highly coherent wave fields generated by repeatable controlled sources and recorded by dense receiver arrays, whose geometry is tailored to back-scattered energy normally neglected in earthquake seismology. Owing to these favorable conditions, stacking and coherence analysis are routinely employed to suppress incoherent noise and regularize the data, thereby strongly contributing to the success of subsequent processing steps, including migration for the imaging of back-scattering interfaces or waveform tomography for the inversion of velocity structure. Attempts have been made to utilize wave field coherence on the length scales of passive-source seismology, e.g. for the imaging of transition-zone discontinuities or the core-mantle-boundary using reflected precursors. Results are however often deteriorated due to the sparse station coverage and interference of faint back-scattered with transmitted phases. USArray sampled wave fields generated by earthquake sources at an unprecedented density, and similar array deployments are ongoing or planned in Alaska, the Alps and Canada. This makes the local coherence of earthquake data an increasingly valuable resource to exploit. Building on the experience in controlled-source surveys, we aim to extend the well-established concept of beam-forming to the richer toolbox that is nowadays used in seismic exploration. We suggest adapted strategies for local data coherence analysis, where summation is performed with operators that extract the local slope and curvature of wave fronts emerging at the receiver array. Besides estimating wave front properties, we demonstrate that the inherent data summation can also be used to generate virtual station responses at intermediate locations where no actual deployment was performed. Owing to the fact that stacking acts as a directional filter, interfering coherent wave fields can be efficiently separated from each other by means of coherent subtraction. We

  1. Lorentz violation naturalness revisited

    Energy Technology Data Exchange (ETDEWEB)

    Belenchia, Alessio; Gambassi, Andrea; Liberati, Stefano [SISSA - International School for Advanced Studies, via Bonomea 265, 34136 Trieste (Italy); INFN, Sezione di Trieste, via Valerio 2, 34127 Trieste (Italy)

    2016-06-08

    We revisit here the naturalness problem of Lorentz invariance violations on a simple toy model of a scalar field coupled to a fermion field via a Yukawa interaction. We first review some well-known results concerning the low-energy percolation of Lorentz violation from high energies, presenting some details of the analysis not explicitly discussed in the literature and discussing some previously unnoticed subtleties. We then show how a separation between the scale of validity of the effective field theory and that of Lorentz invariance violations can hinder this low-energy percolation. While such a protection mechanism was previously considered in the literature, we provide here a simple illustration of how it works and of its general features. Finally, we consider a case in which dissipation is present, showing that the dissipative behaviour does not percolate generically to lower mass dimension operators, although dispersion does. Moreover, we show that a scale separation can protect from unsuppressed low-energy percolation also in this case.

  2. Gene array analysis of PD-1H overexpressing monocytes reveals a pro-inflammatory profile

    Directory of Open Access Journals (Sweden)

    Preeti Bharaj

    2018-02-01

    We have previously reported that overexpression of Programmed Death-1 Homolog (PD-1H) in human monocytes leads to activation and spontaneous secretion of multiple pro-inflammatory cytokines. Here we evaluate changes in monocyte gene expression after enforced PD-1H expression by gene array. The results show that there are significant alterations in 51 potential candidate genes that relate to immune response, cell adhesion and metabolism. Genes corresponding to pro-inflammatory cytokines showed the highest upregulation, with 7-, 3.2-, 3.0-, 5.8-, 4.4- and 3.1-fold upregulation of TNF-α, IL-1β, IFN-α, IFN-γ, IFN-λ and IL-27 relative to the vector control. The data are in agreement with cytometric bead array analysis showing induction of the pro-inflammatory cytokines IL-6, IL-1β and TNF-α by PD-1H. Other genes related to inflammation, including transglutaminase 2 (TG2), NF-κB (p65 and p50) and Toll-like receptors (TLR) 3 and 4, were upregulated 5-, 4.5- and 2.5-fold, respectively. Gene set enrichment analysis (GSEA) also revealed that signaling pathways related to the inflammatory response, such as NFκB, AT1R, PYK2, MAPK, RELA, TNFR1, MTOR and proteasomal degradation, were significantly upregulated in response to PD-1H overexpression. We validated the results utilizing a standard inflammatory sepsis model in humanized BLT mice, finding that PD-1H expression was highly correlated with pro-inflammatory cytokine production. We therefore conclude that PD-1H functions to enhance monocyte activation and the induction of a pro-inflammatory gene expression profile.

  3. Analysis of Correlation in MEMS Gyroscope Array and its Influence on Accuracy Improvement for the Combined Angular Rate Signal

    Directory of Open Access Journals (Sweden)

    Liang Xue

    2018-01-01

    Obtaining a correlation factor is a prerequisite for fusing the multiple outputs of a microelectromechanical system (MEMS) gyroscope array and evaluating the accuracy improvement. In this paper, a mathematical statistics method is established to analyze and obtain the practical correlation factor of a MEMS gyroscope array, which solves the problem of determining the Kalman filter (KF) covariance matrix Q and fusing the multiple gyroscope signals. The working principle and mathematical model of the sensor array fusion are briefly described, and then an optimal estimate of the input rate signal is achieved by using a steady-state KF gain in an off-line estimation approach. Both theoretical analysis and simulation show that a negative correlation factor has a favorable influence on accuracy improvement. Additionally, a four-gyro array system composed of four discrete individual gyroscopes was developed to test the correlation factor and its influence on KF accuracy improvement. The results showed that correlation factors have both positive and negative values; in particular, the correlation factor differs between the different units in the array. The test results also indicated that the Angular Random Walk (ARW) of 1.57°/h^0.5 and bias drift of 224.2°/h for a single gyroscope were reduced to 0.33°/h^0.5 and 47.8°/h with some negative correlation factors existing in the gyroscope array, giving a noise reduction factor of about 4.7, which is higher than that of an uncorrelated four-gyro array. The overall accuracy of the combined angular rate signal can be further improved if the negative correlation factors in the gyroscope array become larger in magnitude.
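
    A small sketch of why the sign of the correlation factor matters when fusing the array: with an equicorrelated noise covariance R, the minimum-variance combination has variance 1/(1' R^-1 1), which shrinks much faster for negative correlation; the numbers are illustrative, not the paper's.

      import numpy as np

      def fused_std(sigma, rho, n=4):
          """Std. dev. of the optimal linear combination of n equally noisy,
          equally correlated gyroscopes."""
          R = sigma**2 * (np.full((n, n), rho) + (1 - rho) * np.eye(n))
          ones = np.ones(n)
          var = 1.0 / (ones @ np.linalg.solve(R, ones))
          return np.sqrt(var)

      sigma = 1.57            # noise level of a single gyro (arbitrary units)
      for rho in (0.3, 0.0, -0.2, -0.3):
          print("rho = %+.1f  ->  fused noise = %.2f  (reduction x%.1f)"
                % (rho, fused_std(sigma, rho), sigma / fused_std(sigma, rho)))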

  4. The Cherenkov Telescope Array production system for Monte Carlo simulations and analysis

    Science.gov (United States)

    Arrabito, L.; Bernloehr, K.; Bregeon, J.; Cumani, P.; Hassan, T.; Haupt, A.; Maier, G.; Moralejo, A.; Neyroud, N.; for the CTA Consortium and the DIRAC Consortium

    2017-10-01

    The Cherenkov Telescope Array (CTA), an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale, is the next-generation instrument in the field of very high energy gamma-ray astronomy. An average data stream of about 0.9 GB/s for about 1300 hours of observation per year is expected, therefore resulting in 4 PB of raw data per year and a total of 27 PB/year, including archive and data processing. The start of CTA operation is foreseen in 2018 and it will last about 30 years. The installation of the first telescopes in the two selected locations (Paranal, Chile and La Palma, Spain) will start in 2017. In order to select the best site candidate to host CTA telescopes (in the Northern and in the Southern hemispheres), massive Monte Carlo simulations have been performed since 2012. Once the two sites have been selected, we have started new Monte Carlo simulations to determine the optimal array layout with respect to the obtained sensitivity. Taking into account that CTA may be finally composed of 7 different telescope types coming in 3 different sizes, many different combinations of telescope position and multiplicity as a function of the telescope type have been proposed. This last Monte Carlo campaign represented a huge computational effort, since several hundreds of telescope positions have been simulated, while for future instrument response function simulations, only the operating telescopes will be considered. In particular, during the last 18 months, about 2 PB of Monte Carlo data have been produced and processed with different analysis chains, with a corresponding overall CPU consumption of about 125 M HS06 hours. In these proceedings, we describe the employed computing model, based on the use of grid resources, as well as the production system setup, which relies on the DIRAC interware. Finally, we present the envisaged evolutions of the CTA production system for the off-line data processing during CTA operations and

  5. Analysis of Reverse Phase Protein Array Data: From Experimental Design towards Targeted Biomarker Discovery

    Directory of Open Access Journals (Sweden)

    Astrid Wachter

    2015-11-01

    Mastering the systematic analysis of tumor tissues on a large scale has long been a technical challenge for proteomics. In 2001, reverse phase protein arrays (RPPA) were added to the repertoire of existing immunoassays, which, for the first time, allowed the profiling of minute amounts of tumor lysates even after microdissection. A characteristic feature of RPPA is its outstanding sample capacity, permitting the analysis of thousands of samples in parallel as a routine task. To date, the RPPA approach has matured into a robust and highly sensitive high-throughput platform, which is ideally suited for biomarker discovery. Concomitant with technical advancements, new bioinformatic tools were developed for data normalization and data analysis as outlined in detail in this review. Furthermore, biomarker signatures obtained by different RPPA screens were compared with one another or with those obtained by other proteomic formats, where possible. Options for overcoming the downside of RPPA, which is the need to steadily validate new antibody batches, will be discussed. Finally, a debate on using RPPA to advance personalized medicine will conclude this article.

  6. Unsupervised neural spike sorting for high-density microelectrode arrays with convolutive independent component analysis.

    Science.gov (United States)

    Leibig, Christian; Wachtler, Thomas; Zeck, Günther

    2016-09-15

    Unsupervised identification of action potentials in multi-channel extracellular recordings, in particular from high-density microelectrode arrays with thousands of sensors, is an unresolved problem. While independent component analysis (ICA) achieves rapid unsupervised sorting, it ignores the convolutive structure of extracellular data, thus limiting the unmixing to a subset of neurons. Here we present a spike sorting algorithm based on convolutive ICA (cICA) to retrieve a larger number of accurately sorted neurons than with instantaneous ICA while accounting for signal overlaps. Spike sorting was applied to datasets with varying signal-to-noise ratios (SNR: 3-12) and 27% spike overlaps, sampled at either 11.5 or 23kHz on 4365 electrodes. We demonstrate how the instantaneity assumption in ICA-based algorithms has to be relaxed in order to improve the spike sorting performance for high-density microelectrode array recordings. Reformulating the convolutive mixture as an instantaneous mixture by modeling several delayed samples jointly is necessary to increase signal-to-noise ratio. Our results emphasize that different cICA algorithms are not equivalent. Spike sorting performance was assessed with ground-truth data generated from experimentally derived templates. The presented spike sorter was able to extract ≈90% of the true spike trains with an error rate below 2%. It was superior to two alternative (c)ICA methods (≈80% accurately sorted neurons) and comparable to a supervised sorting. Our new algorithm represents a fast solution to overcome the current bottleneck in spike sorting of large datasets generated by simultaneous recording with thousands of electrodes. Copyright © 2016 Elsevier B.V. All rights reserved.
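
    A toy sketch of the reformulation mentioned above, in which a convolutive mixture is treated as an instantaneous one by stacking delayed copies of each channel before ICA; scikit-learn's FastICA stands in for the cICA algorithms actually evaluated, and the recordings are synthetic.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(5)
      T, n_ch, L = 5000, 8, 3                     # samples, channels, embedding lags

      # Two synthetic "spiking" sources passed through random short FIRs per channel
      s = (rng.random((2, T)) < 0.01).astype(float) * rng.normal(3, 1, (2, T))
      x = np.zeros((n_ch, T))
      for c in range(n_ch):
          for k in range(2):
              h = rng.normal(size=4)              # unknown convolutive mixing filter
              x[c] += np.convolve(s[k], h, mode="same")
      x += 0.1 * rng.normal(size=x.shape)

      # Delay-embed: each channel contributes L shifted copies -> instantaneous model
      emb = np.vstack([np.roll(x, d, axis=1) for d in range(L)])[:, L:]

      ica = FastICA(n_components=6, random_state=0, max_iter=500)
      components = ica.fit_transform(emb.T).T     # estimated source traces
      print("embedded data shape:", emb.shape, "-> components:", components.shape)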

  7. Analysis of jet-airfoil interaction noise sources by using a microphone array technique

    Science.gov (United States)

    Fleury, Vincent; Davy, Renaud

    2016-03-01

    The paper is concerned with the characterization of jet noise sources and jet-airfoil interaction sources by using microphone array data. The measurements were carried out in the anechoic open test section wind tunnel of Onera, Cepra19. The microphone array technique relies on the convected Lighthill and Ffowcs-Williams and Hawkings acoustic analogy equation. The cross-spectrum of the source term of the analogy equation is sought. It is defined as the optimal solution to a minimal error equation using the measured microphone cross-spectra as reference. This inverse problem is, however, ill-posed. A penalty term based on a localization operator is therefore added to improve the recovery of jet noise sources. The analysis of isolated jet noise data in the subsonic regime shows the contribution of the conventional mixing noise source in the low frequency range, as expected, and of uniformly distributed, uncorrelated noise sources in the jet flow at higher frequencies. In the underexpanded supersonic regime, a shock-associated noise source is clearly identified, too. An additional source is detected in the vicinity of the nozzle exit both in supersonic and subsonic regimes. In the presence of the airfoil, the distribution of the noise sources is deeply modified. In particular, a strong noise source is localized on the flap. For high Strouhal numbers, higher than about 2 (based on the jet mixing velocity and diameter), a significant contribution from the shear layer near the flap is observed, too. Indications of acoustic reflections on the airfoil are also discerned.

  8. An analysis of seismic background noise variation and evaluation of detection capability of Keskin Array (BRTR PS-43) in Turkey

    Science.gov (United States)

    Bakir, M. E.; Ozel, N. M.; Semin, K. U.

    2011-12-01

    Bogazici University, Kandilli Observatory and Earthquake Research Institute (KOERI) is currently operating the Keskin seismic array (BRTR-PS 43), located in the town of Keskin and providing real-time data to the IDC. The instrumentation of the seismic array includes six short-period borehole seismometers and one broadband borehole seismometer. The seismic background noise variation of the Keskin array is studied in order to estimate the local and regional event detection capability in the frequency range from 1 Hz to 10 Hz. The power spectral density and the probability density function of Keskin array data were computed for seasonal and diurnal noise variations between 2008 and 2010. The computation will be extended to cover the period between 2005 and 2008. We attempt to determine the precise frequency characteristics of the background noise, which will help us to assess the station sensitivity. The minimum detectable magnitude versus distance for the Keskin seismic array will be analyzed based on the seismic noise analysis. Detailed analysis results of the seismic background noise and detection capability will be presented.
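
    For readers unfamiliar with this kind of noise study, the basic computation described (power spectral density estimates collected over many time windows and summarized as a probability density function) can be sketched as follows; the sampling rate, segment lengths, and synthetic trace below are placeholder assumptions, not parameters of the Keskin array.

```python
import numpy as np
from scipy.signal import welch

fs = 40.0                                                          # assumed sampling rate, Hz
trace = np.random.default_rng(1).standard_normal(int(fs * 3600))   # 1 h synthetic record

# PSD estimates (Welch) over consecutive 10-minute segments
seg = int(fs * 600)
psds = []
for start in range(0, len(trace) - seg + 1, seg):
    f, pxx = welch(trace[start:start + seg], fs=fs, nperseg=int(fs * 60))
    psds.append(10 * np.log10(pxx))
psds = np.array(psds)

# Probability density function of the PSD values in the 1-10 Hz band
band = (f >= 1.0) & (f <= 10.0)
pdf, edges = np.histogram(psds[:, band].ravel(), bins=50, density=True)
print(f"{psds.shape[0]} segments summarized into a {len(pdf)}-bin PSD probability density")
```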

  9. Study and characterization of arrays of detectors for dosimetric verification of radiotherapy, analysis of business solutions

    International Nuclear Information System (INIS)

    Gago Arias, A.; Brualla Gonzalez, L.; Gomez Rodriguez, F.; Gonzalez Castano, D. M.; Pardo Montero, J.; Luna Vega, V.; Mosquera Sueiro, J.; Sanchez Garcia, M.

    2011-01-01

    This paper presents a comparative study of the detector arrays developed by different commercial vendors in response to the demand for devices that speed up the verification process. We analyze the effect of the spatial response of the individual detectors on the measurement of dose distributions, modeling this response and assessing the ability of the arrays to detect variations in a treatment.

  10. Analysis of 3-D effects in segmented cylindrical quasi-Halbach magnet arrays

    NARCIS (Netherlands)

    Meessen, K.J.; Paulides, J.J.H.; Lomonova, E.

    2011-01-01

    To improve the performance of permanent magnet (PM) machines, quasi-Halbach PM arrays are used to increase the magnetic loading in these machines. In tubular PM actuators, these arrays are often approximated using segmented magnets resulting in a 3-D magnetic field effect. This paper describes the

  11. Efficient Full-Wave Analysis of Waveguide Arrays on Cylindrical Surfaces.

    NARCIS (Netherlands)

    Gerini, G.; Guglielmi, M.; Rozzi, T.; Zappelli, L.

    1999-01-01

    Conformal open-ended waveguide arrays received great attention in the early seventies. Recently, dielectric loaded waveguide radiators have again been proposed to achieve high-density microwave packaging [1], [2]. The efficient design of highly integrated array solutions, however, requires fast and

  12. Gaussian entanglement revisited

    Science.gov (United States)

    Lami, Ludovico; Serafini, Alessio; Adesso, Gerardo

    2018-02-01

    We present a novel approach to the separability problem for Gaussian quantum states of bosonic continuous variable systems. We derive a simplified necessary and sufficient separability criterion for arbitrary Gaussian states of m versus n modes, which relies on convex optimisation over marginal covariance matrices on one subsystem only. We further revisit the currently known results stating the equivalence between separability and positive partial transposition (PPT) for specific classes of Gaussian states. Using techniques based on matrix analysis, such as Schur complements and matrix means, we then provide a unified treatment and compact proofs of all these results. In particular, we recover the PPT-separability equivalence for: (i) Gaussian states of 1 versus n modes; and (ii) isotropic Gaussian states. In passing, we also retrieve (iii) the recently established equivalence between separability of a Gaussian state and its complete Gaussian extendability. Our techniques are then applied to progress beyond the state of the art. We prove that: (iv) Gaussian states that are invariant under partial transposition are necessarily separable; (v) the PPT criterion is necessary and sufficient for separability for Gaussian states of m versus n modes that are symmetric under the exchange of any two modes belonging to one of the parties; and (vi) Gaussian states which remain PPT under passive optical operations cannot be entangled by them either. This is not a foregone conclusion per se (since Gaussian bound entangled states do exist) and settles a question that had been left unanswered in the existing literature on the subject. This paper, enjoyable by both the quantum optics and the matrix analysis communities, overall delivers technical and conceptual advances which are likely to be useful for further applications in continuous variable quantum information theory, beyond the separability problem.

  13. A Multimode Equivalent Network Approach for the Analysis of a 'Realistic' Finite Array of Open Ended Waveguides

    NARCIS (Netherlands)

    Neto, A.; Bolt, R.; Gerini, G.; Schmitt, D.

    2003-01-01

    In this contribution we present a theoretical model for the analysis of finite arrays of open-ended waveguides mounted on finite mounting platforms or having radome coverages. This model is based on a Multimode Equivalent Network (MEN) [1] representation of the radiating waveguides complete with

  14. Noise characteristics analysis of short wave infrared InGaAs focal plane arrays

    Science.gov (United States)

    Yu, Chunlei; Li, Xue; Yang, Bo; Huang, Songlei; Shao, Xiumei; Zhang, Yaguang; Gong, Haimei

    2017-09-01

    The increasing application of InGaAs short wave infrared (SWIR) focal plane arrays (FPAs) in low light level imaging requires ultra-low noise FPAs. This paper presents a theoretical analysis of FPA noise and points out that both the dark current and the detector capacitance strongly affect the FPA noise. The impact of dark current and detector capacitance on FPA noise is compared in different situations. In order to obtain low noise FPAs, the demand for reducing the detector capacitance is higher especially when the pixel pitch is smaller, the integration time is shorter, and the integration capacitance is larger. Several InGaAs FPAs were measured and analyzed; the experimental results could be well fitted by the calculated results. The study found that the major contributor to FPA noise is coupled noise at shorter integration times. The influence of detector capacitance on FPA noise is more significant than that of dark current. To investigate the effect of detector performance on FPA noise, two kinds of photodiodes with different concentrations of the absorption layer were fabricated. The detectors' performance and noise characteristics were measured and analyzed; the results are consistent with the theoretical analysis.
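
    A rough numerical illustration of the trade-off discussed above (dark-current shot noise versus the capacitance-related noise charge) is sketched below; all component values are invented for illustration and are not taken from the paper.

```python
import numpy as np

q  = 1.602e-19   # electron charge, C
kB = 1.381e-23   # Boltzmann constant, J/K
T  = 300.0       # temperature, K

I_dark = 5e-15   # assumed dark current per pixel, A
C_det  = 50e-15  # assumed detector capacitance, F

for t_int in (1e-3, 10e-3, 100e-3):                  # integration times, s
    shot_e = np.sqrt(I_dark * t_int / q)             # dark-current shot noise, e- rms
    ktc_e  = np.sqrt(kB * T * C_det) / q             # capacitance-related (kTC) noise, e- rms
    total  = np.hypot(shot_e, ktc_e)
    print(f"t_int={t_int * 1e3:5.1f} ms  shot={shot_e:5.1f} e-  kTC={ktc_e:5.1f} e-  total={total:5.1f} e-")
```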

  15. Flat-plate solar array project. Volume 8: Project analysis and integration

    Science.gov (United States)

    Mcguire, P.; Henry, P.

    1986-01-01

    Project Analysis and Integration (PA&I) performed planning and integration activities to support management of the various Flat-Plate Solar Array (FSA) Project R&D activities. Technical and economic goals were established by PA&I for each R&D task within the project to coordinate the thrust toward the National Photovoltaic Program goals. A sophisticated computer modeling capability was developed to assess technical progress toward meeting the economic goals. These models included a manufacturing facility simulation, a photovoltaic power station simulation and a decision aid model incorporating uncertainty. This family of analysis tools was used to track the progress of the technology and to explore the effects of alternative technical paths. Numerous studies conducted by PA&I signaled the achievement of milestones or were the foundation of major FSA project and national program decisions. The most important PA&I activities during the project history are summarized. The PA&I planning function and its relation to project direction are discussed, and the important analytical models developed by PA&I for its analysis and assessment activities are reviewed.

  16. Single nucleotide polymorphism array analysis of bone marrow failure patients reveals characteristic patterns of genetic changes.

    Science.gov (United States)

    Babushok, Daria V; Xie, Hongbo M; Roth, Jacquelyn J; Perdigones, Nieves; Olson, Timothy S; Cockroft, Joshua D; Gai, Xiaowu; Perin, Juan C; Li, Yimei; Paessler, Michele E; Hakonarson, Hakon; Podsakoff, Gregory M; Mason, Philip J; Biegel, Jaclyn A; Bessler, Monica

    2014-01-01

    The bone marrow failure syndromes (BMFS) are a heterogeneous group of rare blood disorders characterized by inadequate haematopoiesis, clonal evolution, and increased risk of leukaemia. Single nucleotide polymorphism arrays (SNP-A) have been proposed as a tool for surveillance of clonal evolution in BMFS. To better understand the natural history of BMFS and to assess the clinical utility of SNP-A in these disorders, we analysed 124 SNP-A from a comprehensively characterized cohort of 91 patients at our BMFS centre. SNP-A were correlated with medical histories, haematopathology, cytogenetic and molecular data. To assess clonal evolution, longitudinal analysis of SNP-A was performed in 25 patients. We found that acquired copy number-neutral loss of heterozygosity (CN-LOH) was significantly more frequent in acquired aplastic anaemia (aAA) than in other BMFS (odds ratio 12·2, P < 0·01). Homozygosity by descent was most common in congenital BMFS, frequently unmasking autosomal recessive mutations. Copy number variants (CNVs) were frequently polymorphic, and we identified CNVs enriched in neutropenia and aAA. Our results suggest that acquired CN-LOH is a general phenomenon in aAA that is probably mechanistically and prognostically distinct from typical CN-LOH of myeloid malignancies. Our analysis of clinical utility of SNP-A shows the highest yield of detecting new clonal haematopoiesis at diagnosis and at relapse. © 2013 John Wiley & Sons Ltd.

  17. AnovArray: a set of SAS macros for the analysis of variance of gene expression data

    Directory of Open Access Journals (Sweden)

    Renard Jean-Paul

    2005-06-01

    Background: Analysis of variance is a powerful approach to identify differentially expressed genes in a complex experimental design for microarray and macroarray data. The advantage of the ANOVA model is the possibility to evaluate multiple sources of variation in an experiment. Results: AnovArray is a package implementing ANOVA for gene expression data using SAS® statistical software. The originality of the package is (1) to quantify the different sources of variation on all genes together, (2) to provide a quality control of the model, and (3) to propose two models for a gene's variance estimation and to perform a correction for multiple comparisons. Conclusion: AnovArray is freely available at http://www-mig.jouy.inra.fr/stat/AnovArray and requires only SAS® statistical software.
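
    AnovArray itself is a set of SAS macros; the snippet below is only a schematic Python analogue of the per-gene analysis-of-variance idea it implements (several sources of variation quantified for every gene, followed by a multiple-comparison correction), using simulated data and invented factor names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_genes, n_arrays = 200, 12
design = pd.DataFrame({
    "treatment": ["A"] * 6 + ["B"] * 6,     # hypothetical biological factor
    "dye":       ["Cy3", "Cy5"] * 6,        # hypothetical technical factor
})

pvals = []
for g in range(n_genes):
    df = design.assign(expr=rng.normal(size=n_arrays))   # pure-noise expression values
    fit = smf.ols("expr ~ C(treatment) + C(dye)", data=df).fit()
    anova = sm.stats.anova_lm(fit, typ=2)
    pvals.append(anova.loc["C(treatment)", "PR(>F)"])

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} genes called differentially expressed (expected ~0 for pure noise)")
```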

  18. Array comparative genomic hybridization and cytogenetic analysis in pediatric acute leukemias

    Science.gov (United States)

    Dawson, A.J.; Yanofsky, R.; Vallente, R.; Bal, S.; Schroedter, I.; Liang, L.; Mai, S.

    2011-01-01

    Most patients with acute lymphocytic leukemia (ALL) are reported to have acquired chromosomal abnormalities in their leukemic bone marrow cells. Many established chromosome rearrangements have been described, and their associations with specific clinical, biologic, and prognostic features are well defined. However, approximately 30% of pediatric and 50% of adult patients with ALL do not have cytogenetic abnormalities of clinical significance. Despite significant improvements in outcome for pediatric ALL, therapy fails in approximately 25% of patients, and these failures often occur unpredictably in patients with a favorable prognosis and “good” cytogenetics at diagnosis. It is well known that karyotype analysis in hematologic malignancies, although genome-wide, is limited because of altered cell kinetics (mitotic rate), a propensity of leukemic blasts to undergo apoptosis in culture, overgrowth by normal cells, and chromosomes of poor quality in the abnormal clone. Array comparative genomic hybridization (aCGH, “microarray”) has a greatly increased genomic resolution over classical cytogenetics. Cytogenetic microarray, which uses genomic DNA, is a powerful tool in the analysis of unbalanced chromosome rearrangements, such as copy number gains and losses, and it is the method of choice when the mitotic index is low and the quality of metaphases is suboptimal. The copy number profile obtained by microarray is often called a “molecular karyotype.” In the present study, microarray was applied to 9 retrospective cases of pediatric ALL either with initial high-risk features or with at least 1 relapse. The conventional karyotype was compared to the “molecular karyotype” to assess abnormalities as interpreted by classical cytogenetics. Not only were previously undetected chromosome losses and gains identified by microarray, but several karyotypes interpreted by classical cytogenetics were shown to be discordant with the microarray results. The

  19. Bounded Intention Planning Revisited

    OpenAIRE

    Sievers Silvan; Wehrle Martin; Helmert Malte

    2014-01-01

    Bounded intention planning provides a pruning technique for optimal planning that was proposed several years ago. In addition, partial order reduction techniques based on stubborn sets have recently been investigated for this purpose. In this paper, we revisit bounded intention planning in the light of stubborn sets.

  20. A Hydrostatic Paradox Revisited

    Science.gov (United States)

    Ganci, Salvatore

    2012-01-01

    This paper revisits a well-known hydrostatic paradox, observed when turning upside down a glass partially filled with water and covered with a sheet of light material. The phenomenon is studied in its most general form by including the mass of the cover. A historical survey of this experiment shows that a common misunderstanding of the phenomenon…

  1. The Faraday effect revisited

    DEFF Research Database (Denmark)

    Cornean, Horia; Nenciu, Gheorghe

    2009-01-01

    This paper is the second in a series revisiting the (effect of) Faraday rotation. We formulate and prove the thermodynamic limit for the transverse electric conductivity of Bloch electrons, as well as for the Verdet constant. The main mathematical tool is a regularized magnetic and geometric...

  2. Microelectrode array-induced neuronal alignment directs neurite outgrowth: analysis using a fast Fourier transform (FFT).

    Science.gov (United States)

    Radotić, Viktorija; Braeken, Dries; Kovačić, Damir

    2017-12-01

    Many studies have shown that the topography of the substrate on which neurons are cultured can promote neuronal adhesion and guide neurite outgrowth in the same direction as the underlying topography. To investigate this effect, isotropic substrate-complementary metal-oxide-semiconductor (CMOS) chips were used as one example of microelectrode arrays (MEAs) for directing neurite growth of spiral ganglion neurons. Neurons were isolated from 5 to 7-day-old rat pups, cultured 1 day in vitro (DIV) and 4 DIV, and then fixed with 4% paraformaldehyde. For analysis of neurite alignment and orientation, fast Fourier transformation (FFT) was used. Results revealed that on the micro-patterned surface of a CMOS chip, neurons orient their neurites along three directional axes at 30, 90, and 150° and that neurites aligned in straight lines between adjacent pillars and mostly followed a single direction while occasionally branching perpendicularly. We conclude that the CMOS substrate guides neurites towards electrodes by means of their structured pillar organization and can produce electrical stimulation of aligned neurons as well as monitoring their neural activities once neurites are in the vicinity of electrodes. These findings are of particular interest for neural tissue engineering with the ultimate goal of developing a new generation of MEA essential for improved electrical stimulation of auditory neurons.
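
    The orientation analysis described above (dominant neurite directions extracted from the 2-D FFT of an image) can be sketched as follows; the synthetic stripe image and the angular-binning details are placeholders rather than the authors' exact pipeline.

```python
import numpy as np

# Synthetic stripe image standing in for a neurite micrograph, with features at 120 deg
n = 256
y, x = np.mgrid[0:n, 0:n]
theta = np.deg2rad(30)                       # direction of the spatial-frequency content
img = (np.sin(2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / 16) > 0).astype(float)

# 2-D FFT power spectrum; oriented image features concentrate energy along one spectral direction
spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
angle = np.rad2deg(np.arctan2(ky, kx)) % 180
radius = np.hypot(kx, ky)

# Angular distribution of spectral energy (the DC region is excluded)
mask = radius > 4
bins = np.arange(0, 181, 2)
energy, _ = np.histogram(angle[mask], bins=bins, weights=spec[mask])
dominant = bins[np.argmax(energy)]
print(f"dominant spectral angle ~ {dominant} deg, i.e. image features at ~ {(dominant + 90) % 180} deg")
```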

  3. Rate equation analysis and non-Hermiticity in coupled semiconductor laser arrays

    Science.gov (United States)

    Gao, Zihe; Johnson, Matthew T.; Choquette, Kent D.

    2018-05-01

    Optically coupled semiconductor laser arrays are described by coupled rate equations. The coupled mode equations and carrier densities are included in the analysis, which inherently incorporate the carrier-induced nonlinearities including gain saturation and amplitude-phase coupling. We solve the steady-state coupled rate equations and consider the cavity frequency detuning and the individual laser pump rates as the experimentally controlled variables. We show that the carrier-induced nonlinearities play a critical role in the mode control, and we identify gain contrast induced by cavity frequency detuning as a unique mechanism for mode control. Photon-mediated energy transfer between cavities is also discussed. Parity-time symmetry and exceptional points in this system are studied. Unbroken parity-time symmetry can be achieved by judiciously combining cavity detuning and unequal pump rates, while broken symmetry lies on the boundary of the optical locking region. Exceptional points are identified at the intersection between broken symmetry and unbroken parity-time symmetry.
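
    As a schematic of the kind of coupled rate-equation model discussed (two optically coupled cavities with carrier-dependent gain, amplitude-phase coupling, frequency detuning, and individually controlled pump rates), a minimal dimensionless two-element sketch is given below; the normalization and parameter values are illustrative and are not taken from the paper.

```python
import numpy as np

# Two coupled semiconductor lasers: complex fields E1, E2 and excess carrier densities N1, N2
alpha  = 3.0      # linewidth-enhancement (amplitude-phase coupling) factor
kappa  = 0.05     # coupling coefficient between the two cavities
delta  = 0.02     # cavity frequency detuning (+delta and -delta)
P1, P2 = 1.2, 1.0 # normalized pump rates (the experimentally controlled variables)
T      = 1000.0   # carrier-to-photon lifetime ratio

def derivs(E1, E2, N1, N2):
    dE1 = 0.5 * (1 + 1j * alpha) * N1 * E1 + 1j * delta * E1 + 1j * kappa * E2
    dE2 = 0.5 * (1 + 1j * alpha) * N2 * E2 - 1j * delta * E2 + 1j * kappa * E1
    dN1 = (P1 - N1 - (1 + 2 * N1) * abs(E1) ** 2) / T
    dN2 = (P2 - N2 - (1 + 2 * N2) * abs(E2) ** 2) / T
    return dE1, dE2, dN1, dN2

# Simple fixed-step integration toward a steady (possibly phase-locked) state
E1, E2, N1, N2 = 0.1 + 0j, 0.1 + 0j, 0.0, 0.0
dt = 0.05
for _ in range(500_000):
    dE1, dE2, dN1, dN2 = derivs(E1, E2, N1, N2)
    E1, E2, N1, N2 = E1 + dt * dE1, E2 + dt * dE2, N1 + dt * dN1, N2 + dt * dN2

print(f"|E1|^2 = {abs(E1)**2:.3f}, |E2|^2 = {abs(E2)**2:.3f}, "
      f"phase difference = {np.angle(E1 * np.conj(E2)):.3f} rad")
```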

  4. Field programmable gate array reliability analysis using the dynamic flow graph methodology

    Energy Technology Data Exchange (ETDEWEB)

    McNelles, Phillip; Lu, Lixuan [Faculty of Energy Systems and Nuclear Science, University of Ontario Institute of Technology (UOIT), Ontario (Canada)

    2016-10-15

    Field programmable gate array (FPGA)-based systems are thought to be a practical option to replace certain obsolete instrumentation and control systems in nuclear power plants. An FPGA is a type of integrated circuit, which is programmed after being manufactured. FPGAs have some advantages over other electronic technologies, such as analog circuits, microprocessors, and Programmable Logic Controllers (PLCs), for nuclear instrumentation and control, and safety system applications. However, safety-related issues for FPGA-based systems remain to be verified. Owing to this, modeling FPGA-based systems for safety assessment has now become an important point of research. One potential methodology is the dynamic flowgraph methodology (DFM). It has been used for modeling software/hardware interactions in modern control systems. In this paper, FPGA logic was analyzed using DFM. Four aspects of FPGAs are investigated: the 'IEEE 1164 standard', registers (D flip-flops), configurable logic blocks, and an FPGA-based signal compensator. The ModelSim simulations confirmed that DFM was able to accurately model those four FPGA properties, proving that DFM has the potential to be used in the modeling of FPGA-based systems. Furthermore, advantages of DFM over traditional reliability analysis methods and FPGA simulators are presented, along with a discussion of potential issues with using DFM for FPGA-based system modeling.

  5. Simultaneous Profiling of DNA Mutation and Methylation by Melting Analysis Using Magnetoresistive Biosensor Array

    DEFF Research Database (Denmark)

    Rizzi, Giovanni; Lee, Jung-Rok; Dahl, Christina

    2017-01-01

    specificity. Genomic (mutation) or bisulphite-treated (methylation) DNA is amplified using nondiscriminatory primers, and the amplicons are then hybridized to a giant magnetoresistive (GMR) biosensor array followed by melting curve measurements. The GMR biosensor platform offers scalable multiplexed detection...

  6. Dynamical analysis of surface-insulated planar wire array Z-pinches

    Science.gov (United States)

    Li, Yang; Sheng, Liang; Hei, Dongwei; Li, Xingwen; Zhang, Jinhai; Li, Mo; Qiu, Aici

    2018-05-01

    The ablation and implosion dynamics of planar wire array Z-pinches with and without surface insulation are compared and discussed in this paper. This paper first presents a phenomenological model named the ablation and cascade snowplow implosion (ACSI) model, which accounts for the ablation and implosion phases of a planar wire array Z-pinch in a single simulation. The comparison between experimental data and simulation results shows that the ACSI model gives a fairly good description of the dynamical characteristics of planar wire array Z-pinches. Surface insulation introduces notable differences in the ablation phase of planar wire array Z-pinches. The ablation phase is divided into two stages: insulation layer ablation and tungsten wire ablation. The two-stage ablation process of insulated wires is simulated in the ACSI model by updating the formulas describing the ablation process.
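
    The implosion part of such a model can be illustrated with a standard zero-dimensional thin-shell (snowplow-type) sketch driven by a prescribed current pulse; this generic cylindrical example is not the ACSI model itself, and the current waveform, mass, and dimensions are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu0 = 4e-7 * np.pi
m_lin = 1e-5                 # assumed shell mass per unit length, kg/m
r0 = 0.01                    # assumed initial radius, m
I0, t_rise = 1.0e6, 150e-9   # assumed peak drive current (A) and time to peak (s)

def current(t):
    return I0 * np.sin(0.5 * np.pi * t / t_rise) ** 2   # simple sin^2 current pulse

def rhs(t, y):
    r, v = y
    accel = -mu0 * current(t) ** 2 / (4.0 * np.pi * max(r, 1e-4) * m_lin)  # magnetic force per unit mass
    return [v, accel]

def pinched(t, y):           # stop when the shell reaches 10% of its initial radius
    return y[0] - 0.1 * r0
pinched.terminal = True

sol = solve_ivp(rhs, (0.0, 2 * t_rise), [r0, 0.0], events=pinched, max_step=1e-10)
print(f"10% radius reached at t ~ {sol.t[-1] * 1e9:.0f} ns, "
      f"implosion velocity ~ {abs(sol.y[1, -1]) / 1e3:.0f} km/s")
```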

  7. Evaluation of SNP Data from the Malus Infinium Array Identifies Challenges for Genetic Analysis of Complex Genomes of Polyploid Origin.

    Directory of Open Access Journals (Sweden)

    Michela Troggio

    High throughput arrays for the simultaneous genotyping of thousands of single-nucleotide polymorphisms (SNPs) have made the rapid genetic characterisation of plant genomes and the development of saturated linkage maps a realistic prospect for many plant species of agronomic importance. However, the correct calling of SNP genotypes in divergent polyploid genomes using array technology can be problematic due to paralogy, and to divergence in probe sequences causing changes in probe binding efficiencies. An Illumina Infinium II whole-genome genotyping array was recently developed for the cultivated apple and used to develop a molecular linkage map for an apple rootstock progeny (M432), but a large proportion of segregating SNPs were not mapped in the progeny due to unexpected genotype clustering patterns. To investigate the causes of this unexpected clustering we performed BLAST analysis of all probe sequences against the 'Golden Delicious' genome sequence and discovered evidence for paralogous annealing sites and probe sequence divergence for a high proportion of the probes contained on the array. Following visual re-evaluation of the genotyping data generated for 8,788 SNPs for the M432 progeny using the array, we manually re-scored genotypes at 818 loci and mapped a further 797 markers to the M432 linkage map. The newly mapped markers included the majority of those that could not be mapped previously, as well as loci that were previously scored as monomorphic, but which segregated due to divergence leading to heterozygosity in probe annealing sites. An evaluation of the 8,788 probes in a diverse collection of Malus germplasm showed that more than half the probes returned genotype clustering patterns that were difficult or impossible to interpret reliably, highlighting implications for the use of the array in genome-wide association studies.

  8. Analysis of seismic waves crossing the Santa Clara Valley using the three-component MUSIQUE array algorithm

    Science.gov (United States)

    Hobiger, Manuel; Cornou, Cécile; Bard, Pierre-Yves; Le Bihan, Nicolas; Imperatori, Walter

    2016-10-01

    We introduce the MUSIQUE algorithm and apply it to seismic wavefield recordings in California. The algorithm is designed to analyse seismic signals recorded by arrays of three-component seismic sensors. It is based on the MUSIC and the quaternion-MUSIC algorithms. In a first step, the MUSIC algorithm is applied in order to estimate the backazimuth and velocity of incident seismic waves and to discriminate between Love and possible Rayleigh waves. In a second step, the polarization parameters of possible Rayleigh waves are analysed using quaternion-MUSIC, distinguishing retrograde and prograde Rayleigh waves and determining their ellipticity. In this study, we apply the MUSIQUE algorithm to seismic wavefield recordings of the San Jose Dense Seismic Array. This array was installed in 1999 in the Evergreen Basin, a sedimentary basin in the Eastern Santa Clara Valley. The analysis includes 22 regional earthquakes with epicentres between 40 and 600 km from the array and covering different backazimuths with respect to the array. The azimuthal distribution and the energy partition of the different surface wave types are analysed. Love waves dominate the wavefield for the vast majority of the events. For close events in the north, the wavefield is dominated by the first harmonic mode of Love waves; for farther events, the fundamental mode dominates. The energy distribution is different for earthquakes occurring northwest and southeast of the array. In both cases, the waves crossing the array mostly arrive from the respective hemicycle. However, scattered Love waves arriving from the south can be seen for all earthquakes. Combining the information of all events, it is possible to retrieve the Love wave dispersion curves of the fundamental and the first harmonic mode. The particle motion of the fundamental mode of Rayleigh waves is retrograde and for the first harmonic mode, it is prograde. For both modes, we can also retrieve dispersion and ellipticity
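
    The first step described above (a MUSIC estimate of backazimuth and slowness from array data) can be illustrated at a single frequency with a minimal sketch; the station geometry, wave parameters, and single-source assumption below are invented for illustration, and this is not the MUSIQUE code.

```python
import numpy as np

rng = np.random.default_rng(0)
freq = 0.5                                       # analysis frequency, Hz
coords = rng.uniform(-500, 500, size=(8, 2))     # station easting/northing, m (invented geometry)

def steering(baz_rad, slowness):
    """Phase delays of a plane wave parameterized by backazimuth and horizontal slowness
    (the same convention is used for synthesis and for the search below)."""
    k = slowness * np.array([np.sin(baz_rad), np.cos(baz_rad)])
    return np.exp(-2j * np.pi * freq * (coords @ k))

# Synthetic narrowband snapshots: one plane wave (backazimuth 60 deg, 0.5 s/km) plus noise
a_true = steering(np.deg2rad(60.0), 0.5e-3)
n_snap = 50
data = (a_true[:, None] * np.exp(2j * np.pi * rng.uniform(size=n_snap))
        + 0.1 * (rng.standard_normal((8, n_snap)) + 1j * rng.standard_normal((8, n_snap))))

# MUSIC: noise subspace of the covariance matrix, then a grid search over backazimuth and slowness
R = data @ data.conj().T / n_snap
_, vecs = np.linalg.eigh(R)
En = vecs[:, :-1]                                # noise subspace, assuming a single source
best = (None, None, -np.inf)
for baz_deg in np.arange(0.0, 360.0, 1.0):
    for s in np.linspace(0.1e-3, 1.0e-3, 91):
        a = steering(np.deg2rad(baz_deg), s)
        p = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
        if p > best[2]:
            best = (baz_deg, s * 1e3, p)
print(f"estimated backazimuth ~ {best[0]:.0f} deg, slowness ~ {best[1]:.2f} s/km")
```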

  9. Surface analysis and mechanical behaviour mapping of vertically aligned CNT forest array through nanoindentation

    Energy Technology Data Exchange (ETDEWEB)

    Koumoulos, Elias P.; Charitidis, C.A., E-mail: charitidis@chemeng.ntua.gr

    2017-02-28

    Highlights: • Structure and wall numbers are identified through TEM. • Static contact angle measurements revealed a super-hydrophobic behavior. • Hysteresis was observed (loading–unloading) due to the local stress distribution. • Hardness and modulus mapping for a grid of 70 μm² is conducted. • Resistance is clearly divided in 2 regions (MWCNT and MWCNT – MWCNT interface). - Abstract: Carbon nanotube (CNT) based architectures have attracted increasing scientific interest owing to their exceptional performance, rendering them promising candidates for advanced industrial applications in the nanotechnology field. Although individual CNTs are considered to be among the strongest known materials, much less is known about other CNT forms, such as CNT arrays, in terms of their mechanical performance (integrity). In this work, the thermal chemical vapor deposition (CVD) method is employed to produce vertically aligned multiwall (VA-MW) CNT carpets. Their structural properties were studied by means of scanning electron microscopy (SEM), X-ray diffraction (XRD) and Raman spectroscopy, while their hydrophobic behavior was investigated via contact angle measurements. The resistance to indentation deformation of VA-MWCNT carpets was investigated through the nanoindentation technique. The synthesized VA-MWCNT carpets consisted of well-aligned MWCNTs. Static contact angle measurements were performed with water and glycerol, revealing a rather super-hydrophobic behavior. The structural analysis, hydrophobic behavior and indentation response of VA-MWCNT carpets synthesized via the CVD method are clearly demonstrated. Additionally, cyclic indentation load-depth curves were applied and hysteresis loops were observed in the indenter loading–unloading cycle due to the local stress distribution. Hardness (as resistance to applied load) and modulus mapping at 200 nm of displacement for a grid of 70 μm² is presented. Through trajection, the resistance is clearly divided in 2

  10. SNP array analysis reveals novel genomic abnormalities including copy neutral loss of heterozygosity in anaplastic oligodendrogliomas.

    Directory of Open Access Journals (Sweden)

    Ahmed Idbaih

    Anaplastic oligodendrogliomas (AOD) are rare glial tumors in adults with relatively homogeneous clinical, radiological and histological features at the time of diagnosis but dramatically varying clinical courses. Studies have identified several molecular abnormalities with clinical or biological relevance to AOD (e.g. t(1;19)(q10;p10), IDH1, IDH2, CIC and FUBP1 mutations). To better characterize the clinical and biological behavior of this tumor type, the creation of a national multicentric network, named "Prise en charge des OLigodendrogliomes Anaplasiques" (POLA), has been supported by the Institut National du Cancer (InCA). Newly diagnosed and centrally validated AOD patients and their related biological material (tumor and blood samples) were prospectively included in the POLA clinical database and tissue bank, respectively. At the molecular level, we have conducted a high-resolution single nucleotide polymorphism array analysis, which included 83 patients. Despite a careful central pathological review, AOD were found to exhibit heterogeneous genomic features. A total of 82% of the tumors exhibited a 1p/19q co-deletion, while 18% harbored a distinct chromosome pattern. Novel focal abnormalities, including homozygously deleted, amplified and disrupted regions, have been identified. Recurring copy neutral losses of heterozygosity (CNLOH) inducing the modulation of gene expression have also been discovered. CNLOH in the CDKN2A locus was associated with protein silencing in 1/3 of the cases. In addition, FUBP1 homozygous deletion was detected in one case, suggesting a putative tumor suppressor role of FUBP1 in AOD. Our study showed that the genomic and pathological analyses of AOD are synergistic in detecting relevant clinical and biological subgroups of AOD.

  11. Parallel multispot smFRET analysis using an 8-pixel SPAD array

    Science.gov (United States)

    Ingargiola, A.; Colyer, R. A.; Kim, D.; Panzeri, F.; Lin, R.; Gulinatti, A.; Rech, I.; Ghioni, M.; Weiss, S.; Michalet, X.

    2012-02-01

    Single-molecule Förster resonance energy transfer (smFRET) is a powerful tool for extracting distance information between two fluorophores (a donor and acceptor dye) on a nanometer scale. This method is commonly used to monitor binding interactions or intra- and intermolecular conformations in biomolecules freely diffusing through a focal volume or immobilized on a surface. The diffusing geometry has the advantage of not interfering with the molecules and of giving access to fast time scales. However, separating photon bursts from individual molecules requires low sample concentrations. This results in long acquisition times (several minutes to an hour) to obtain sufficient statistics. It also prevents studying dynamic phenomena happening on time scales larger than the burst duration and smaller than the acquisition time. Parallelization of acquisition overcomes this limit by increasing the acquisition rate using the same low concentrations required for individual molecule burst identification. In this work we present a new two-color smFRET approach using multispot excitation and detection. The donor excitation pattern is composed of 4 spots arranged in a linear pattern. The fluorescent emission of donor and acceptor dyes is then collected and refocused on two separate areas of a custom 8-pixel SPAD array. We report smFRET measurements performed on DNA samples synthesized with various distances between the donor and acceptor fluorophores. We demonstrate that our approach provides identical FRET efficiency values to a conventional single-spot acquisition approach, but with a reduced acquisition time. Our work thus opens the way to high-throughput smFRET analysis on freely diffusing molecules.

  12. COHERENT NETWORK ANALYSIS FOR CONTINUOUS GRAVITATIONAL WAVE SIGNALS IN A PULSAR TIMING ARRAY: PULSAR PHASES AS EXTRINSIC PARAMETERS

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yan [MOE Key Laboratory of Fundamental Physical Quantities Measurements, School of Physics, Huazhong University of Science and Technology, 1037 Luoyu Road, Wuhan, Hubei Province 430074 (China); Mohanty, Soumya D.; Jenet, Fredrick A., E-mail: ywang12@hust.edu.cn [Department of Physics, University of Texas Rio Grande Valley, 1 West University Boulevard, Brownsville, TX 78520 (United States)

    2015-12-20

    Supermassive black hole binaries are one of the primary targets of gravitational wave (GW) searches using pulsar timing arrays (PTAs). GW signals from such systems are well represented by parameterized models, allowing the standard Generalized Likelihood Ratio Test (GLRT) to be used for their detection and estimation. However, there is a dichotomy in how the GLRT can be implemented for PTAs: there are two possible ways in which one can split the set of signal parameters for semi-analytical and numerical extremization. The straightforward extension of the method used for continuous signals in ground-based GW searches, where the so-called pulsar phase parameters are maximized numerically, was addressed in an earlier paper. In this paper, we report the first study of the performance of the second approach where the pulsar phases are maximized semi-analytically. This approach is scalable since the number of parameters left over for numerical optimization does not depend on the size of the PTA. Our results show that for the same array size (9 pulsars), the new method performs somewhat worse in parameter estimation, but not in detection, than the previous method where the pulsar phases were maximized numerically. The origin of the performance discrepancy is likely to be in the ill-posedness that is intrinsic to any network analysis method. However, the scalability of the new method allows the ill-posedness to be mitigated by simply adding more pulsars to the array. This is shown explicitly by taking a larger array of pulsars.

  13. Verification of computed tomographic estimates of cochlear implant array position: a micro-CT and histologic analysis.

    Science.gov (United States)

    Teymouri, Jessica; Hullar, Timothy E; Holden, Timothy A; Chole, Richard A

    2011-08-01

    To determine the efficacy of clinical computed tomographic (CT) imaging to verify postoperative electrode array placement in cochlear implant (CI) patients. Nine fresh cadaver heads underwent clinical CT scanning, followed by bilateral CI insertion and postoperative clinical CT scanning. Temporal bones were removed, trimmed, and scanned using micro-CT. Specimens were then dehydrated, embedded in either methyl methacrylate or LR White resin, and sectioned with a diamond wafering saw. Histology sections were examined by 3 blinded observers to determine the position of individual electrodes relative to soft tissue structures within the cochlea. Electrodes were judged to be within the scala tympani, scala vestibuli, or in an intermediate position between scalae. The position of the array could be estimated accurately from clinical CT scans in all specimens using micro-CT and histology as a criterion standard. Verification using micro-CT yielded 97% agreement, and histologic analysis revealed 95% agreement with clinical CT results. A composite, 3-dimensional image derived from a patient's preoperative and postoperative CT images using a clinical scanner accurately estimates the position of the electrode array as determined by micro-CT imaging and histologic analyses. Information obtained using the CT method provides valuable insight into numerous variables of interest to patient performance such as surgical technique, array design, and processor programming and troubleshooting.

  14. Structural model of standard ultrasonic transducer array developed for FEM analysis of mechanical crosstalk.

    Science.gov (United States)

    Celmer, M; Opieliński, K J; Dopierała, M

    2018-02-01

    One of the causes of distortion in ultrasonic imaging is crosstalk. Crosstalk effects can be divided into groups according to the way they arise. One of these groups is mechanical crosstalk, which is propagated through the construction of a multi-element array of piezoelectric transducers. When an individual transducer is excited, mechanical vibrations are transferred to adjacent construction components, thereby stimulating neighboring transducers into undesired operation. In order to explore how such vibrations propagate, the authors developed an FEM model of the array of piezoelectric transducers designed for calculations in COMSOL Multiphysics software. Time-domain simulations were performed in which individual transducers were activated and the electrical voltages appearing on transducers that were not intentionally stimulated were calculated, in order to assess the propagation velocity of different vibration modes through the construction elements. On this basis, conclusions were drawn in terms of the participation of various construction parts of the array of piezoelectric transducers in the process of creating the mechanical crosstalk. The developed FEM model also allowed ways of reducing the transmission of mechanical crosstalk vibrations through the components of the array to be examined. Studies showed that correct cuts in the fasteners and the front layer improve the reduction of the mechanical crosstalk effect. The model can become a helpful tool in the process of design and modification of manufactured ultrasonic arrays, particularly in terms of mechanical crosstalk reduction. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. The power reinforcement framework revisited

    DEFF Research Database (Denmark)

    Nielsen, Jeppe; Andersen, Kim Normann; Danziger, James N.

    2016-01-01

    Whereas digital technologies are often depicted as being capable of disrupting long-standing power structures and facilitating new governance mechanisms, the power reinforcement framework suggests that information and communications technologies tend to strengthen existing power arrangements within public organizations. This article revisits the 30-year-old power reinforcement framework by means of an empirical analysis of the use of mobile technology in a large-scale programme in Danish public sector home care. It explores whether and to what extent administrative management has controlled decision-making and gained most benefits from mobile technology use, relative to the effects of the technology on the street-level workers who deliver services. Current mobile technology-in-use might be less likely to be power reinforcing because it is far more decentralized and individualized than the mainly expert

  16. Ctrl+Shift+Enter mastering Excel array formulas a book about building efficient formulas, advanced formulas, and array formulas for data analysis

    CERN Document Server

    Girvin, Mike

    2013-01-01

    Designed with Excel gurus in mind, this handbook outlines how to create formulas that can be used to solve everyday problems with a series of data values that standard Excel formulas cannot solve, or for which they would be too arduous to attempt. Beginning with an introduction to array formulas, this manual examines topics such as how they differ from ordinary formulas, the benefits and drawbacks of their use, functions that can and cannot handle array calculations, and array constants and functions. Among the practical applications surveyed are how to extract data from tables and unique lists, how to get resu

  17. CFD Analysis of a Finite Linear Array of Savonius Wind Turbines

    Science.gov (United States)

    Belkacem, Belabes; Paraschivoiu, Marius

    2016-09-01

    Vertical axis wind turbines such as Savonius rotors have been shown to be suitable for the low wind speeds normally associated with wind resources in all corners of the world. However, the efficiency of the rotor is low. This paper presents results of Computational Fluid Dynamics (CFD) simulations for an array of Savonius rotors that show a significant increase in efficiency. It looks at identifying the effect on the energy yield of a number of turbines placed in a linear array. Results from this investigation suggest that an increase in the energy yield could be achieved which can reach almost twice that of the conventional Savonius wind turbine in the case of an array of 11 turbines with a spacing of 1.4R between them. The effect of different TSR values and different wind inlet speeds on the farm has been studied for both a synchronous and an asynchronous wind farm.

  18. CFD Analysis of a Finite Linear Array of Savonius Wind Turbines

    International Nuclear Information System (INIS)

    Belkacem, Belabes; Paraschivoiu, Marius

    2016-01-01

    Vertical axis wind turbines such as Savonius rotors have been shown to be suitable for the low wind speeds normally associated with wind resources in all corners of the world. However, the efficiency of the rotor is low. This paper presents results of Computational Fluid Dynamics (CFD) simulations for an array of Savonius rotors that show a significant increase in efficiency. It looks at identifying the effect on the energy yield of a number of turbines placed in a linear array. Results from this investigation suggest that an increase in the energy yield could be achieved which can reach almost twice that of the conventional Savonius wind turbine in the case of an array of 11 turbines with a spacing of 1.4R between them. The effect of different TSR values and different wind inlet speeds on the farm has been studied for both a synchronous and an asynchronous wind farm. (paper)

  19. Array CGH analysis of a cohort of Russian patients with intellectual disability.

    Science.gov (United States)

    Kashevarova, Anna A; Nazarenko, Lyudmila P; Skryabin, Nikolay A; Salyukova, Olga A; Chechetkina, Nataliya N; Tolmacheva, Ekaterina N; Sazhenova, Elena A; Magini, Pamela; Graziano, Claudio; Romeo, Giovanni; Kučinskas, Vaidutis; Lebedev, Igor N

    2014-02-15

    The use of array comparative genomic hybridization (array CGH) as a diagnostic tool in molecular genetics has facilitated the identification of many new microdeletion/microduplication syndromes (MMSs). Furthermore, this method has allowed for the identification of copy number variations (CNVs) whose pathogenic role has yet to be uncovered. Here, we report on our application of array CGH for the identification of pathogenic CNVs in 79 Russian children with intellectual disability (ID). Twenty-six pathogenic or likely pathogenic changes in copy number were detected in 22 patients (28%): 8 CNVs corresponded to known MMSs, and 17 were not associated with previously described syndromes. In this report, we describe our findings and comment on genes potentially associated with ID that are located within the CNV regions. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Eddy Current Signal Analysis for Transmit-Receive Pancake Coil on ECT Array Probe

    International Nuclear Information System (INIS)

    Lee, Hyang Beom

    2006-01-01

    In this paper, the eddy current signals coming from a pair of transmit-receive (T/R) pancake coils on an ECT array probe are analyzed with respect to variations of the lift-off and of the distance between the transmit and receive coils. To obtain the electromagnetic characteristics of the probes, the governing equation describing the eddy current problem is derived from Maxwell's equations and is solved using the three-dimensional finite element method. Eddy current signals from T/R coils on an ECT array probe have quite different characteristics compared with those from an impedance coil on a rotating pancake coil probe. The results in this paper can be helpful when field eddy current signals from ECT array probes are evaluated

  1. Analysis of Circularly Polarized Hemispheroidal Dielectric Resonator Antenna Phased Arrays Using the Method of Auxiliary Sources

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2007-01-01

    The method of auxiliary sources is employed to model and analyze probe-fed hemispheroidal dielectric resonator antennas and arrays. Circularly polarized antenna elements of different designs are analyzed, and impedance bandwidths of up to 14.7% are achieved. Selected element designs are subsequently

  2. Numerical analysis of ALADIN optics contamination due to outgassing of solar array materials

    International Nuclear Information System (INIS)

    Markelov, G; Endemann, M; Wernham, D

    2008-01-01

    ALADIN is the very first space-based lidar that will provide global wind profiles, and special attention has been paid to the contamination of the ALADIN optics. The paper presents a numerical approach, which is based on the direct simulation Monte Carlo method. The method allows one to accurately compute collisions between various species, in the case under consideration the free-stream flow and the outgassing from solar array materials. The collisions create a contamination flux onto the optics even though there is no line of sight from the solar arrays to the optics. Comparison of the obtained results with a simple analytical model prediction shows that the analytical model underpredicts the mass fluxes

  3. Numerical analysis of ALADIN optics contamination due to outgassing of solar array materials

    Energy Technology Data Exchange (ETDEWEB)

    Markelov, G [Advanced Operations and Engineering Services (AOES) Group BV, Postbus 342, 2300 AH Leiden (Netherlands); Endemann, M [ESA-ESTEC/EOP-PAS, Postbus 299, 2200 AG Noordwijk (Netherlands); Wernham, D [ESA-ESTEC/EOP-PAQ, Postbus 299, 2200 AG Noordwijk (Netherlands)], E-mail: Gennady.Markelov@aoes.com

    2008-03-01

    ALADIN is the very first space-based lidar that will provide global wind profiles, and special attention has been paid to the contamination of the ALADIN optics. The paper presents a numerical approach, which is based on the direct simulation Monte Carlo method. The method allows one to accurately compute collisions between various species, in the case under consideration the free-stream flow and the outgassing from solar array materials. The collisions create a contamination flux onto the optics even though there is no line of sight from the solar arrays to the optics. Comparison of the obtained results with a simple analytical model prediction shows that the analytical model underpredicts the mass fluxes.

  4. 'Felson Signs' revisited

    International Nuclear Information System (INIS)

    George, Phiji P.; Irodi, Aparna; Keshava, Shyamkumar N.; Lamont, Anthony C.

    2014-01-01

    In this article we revisit, with the help of images, those classic signs in chest radiography described by Dr Benjamin Felson himself, or other illustrious radiologists of his time, cited and discussed in 'Chest Roentgenology'. We briefly describe the causes of the signs, their utility and the differential diagnosis to be considered when each sign is seen. Wherever possible, we use CT images to illustrate the basis of some of these classic radiographic signs.

  5. Time functions revisited

    Science.gov (United States)

    Fathi, Albert

    2015-07-01

    In this paper we revisit our joint work with Antonio Siconolfi on time functions. We will give a brief introduction to the subject. We will then show how to construct a Lipschitz time function in a simplified setting. We will end with a new result showing that the Aubry set is not an artifact of our proof of existence of time functions for stably causal manifolds.

  6. Seven Issues, Revisited

    OpenAIRE

    Whitehead, Jim; De Bra, Paul; Grønbæk, Kaj; Larsen, Deena; Legget, John; schraefel, monica m.c.

    2002-01-01

    It has been 15 years since the original presentation by Frank Halasz at Hypertext'87 on seven issues for the next generation of hypertext systems. These issues are: Search and Query; Composites; Virtual Structures; Computation in/over the hypertext network; Versioning; Collaborative Work; and Extensibility and Tailorability. Since that time, these issues have formed the nucleus of multiple research agendas within the Hypertext community. Befitting this direction-setting role, the issues have been revisited ...

  7. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2008-01-01

    We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.

  8. Analysis of the directional dependence of the two-dimensional detector array 2D Array seven29: implications in the planning system

    International Nuclear Information System (INIS)

    Mora Melendez, R.; Seguro Fernandez, A.; Iborra Oquendo, M.; Urena Llinares, A.

    2013-01-01

    The main objective of our study is to find correction factors that depend on the angle of incidence on the 2D array and to account for this phenomenon, allowing the planning system to faithfully reproduce the data and curves measured experimentally. (Author)

  9. [Analysis of clinical outcomes of different embryo stage biopsy in array comparative genomic hybridization based preimplantation genetic diagnosis and screening].

    Science.gov (United States)

    Shen, J D; Wu, W; Shu, L; Cai, L L; Xie, J Z; Ma, L; Sun, X P; Cui, Y G; Liu, J Y

    2017-12-25

    Objective: To evaluate the efficiency of the application of array comparative genomic hybridization (array-CGH) in preimplantation genetic diagnosis or screening (PGD/PGS), and to compare the clinical outcomes of biopsy at different embryo stages. Methods: The outcomes of 381 PGD/PGS cycles referred to the First Affiliated Hospital of Nanjing Medical University from July 2011 to August 2015 were retrospectively analyzed. There were 320 PGD cycles with 156 cleavage-stage-biopsy cycles and 164 trophectoderm-biopsy cycles, and 61 PGS cycles with 23 cleavage-stage-biopsy cycles and 38 trophectoderm-biopsy cycles. Chromosomal analysis was performed by array-CGH technology combined with whole genome amplification. Single embryo transfer was performed in all transfer cycles. Live birth rate was calculated as the main clinical outcome. Results: The embryo diagnosis rate of PGD/PGS by array-CGH was 96.9%-99.1%. In PGD biopsy cycles, the live birth rate per embryo transfer cycle and the live birth rate per embryo biopsy cycle were 50.0% (58/116) and 37.2% (58/156) in the cleavage-stage-biopsy group, and 67.5% (85/126) and 51.8% (85/164) in the trophectoderm-biopsy group (both P 0.05). Conclusions: A high diagnosis rate and an ideal live birth rate are achieved in PGD/PGS cycles based on array-CGH technology. The live birth rate of the trophectoderm-biopsy group is significantly higher than that of the cleavage-stage-biopsy group in PGD cycles; the efficiency of trophectoderm biopsy is better.
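
    Using only the per-transfer-cycle counts quoted above, the difference between the two PGD biopsy groups can be checked with a standard two-by-two test; this is an illustrative reanalysis, not the authors' statistics.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Live births vs. no live birth per embryo-transfer cycle (counts quoted in the abstract)
cleavage      = [58, 116 - 58]     # 50.0% (58/116)
trophectoderm = [85, 126 - 85]     # 67.5% (85/126)
table = [cleavage, trophectoderm]

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, _ = chi2_contingency(table)
print(f"Fisher exact p = {p_fisher:.3f}, chi-square p = {p_chi2:.3f}")
```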

  10. MethLAB: a graphical user interface package for the analysis of array-based DNA methylation data.

    Science.gov (United States)

    Kilaru, Varun; Barfield, Richard T; Schroeder, James W; Smith, Alicia K; Conneely, Karen N

    2012-03-01

    Recent evidence suggests that DNA methylation changes may underlie numerous complex traits and diseases. The advent of commercial, array-based methods to interrogate DNA methylation has led to a profusion of epigenetic studies in the literature. Array-based methods, such as the popular Illumina GoldenGate and Infinium platforms, estimate the proportion of DNA methylated at single-base resolution for thousands of CpG sites across the genome. These arrays generate enormous amounts of data, but few software resources exist for efficient and flexible analysis of these data. We developed a software package called MethLAB (http://genetics.emory.edu/conneely/MethLAB) using R, an open source statistical language that can be edited to suit the needs of the user. MethLAB features a graphical user interface (GUI) with a menu-driven format designed to efficiently read in and manipulate array-based methylation data in a user-friendly manner. MethLAB tests for association between methylation and relevant phenotypes by fitting a separate linear model for each CpG site. These models can incorporate both continuous and categorical phenotypes and covariates, as well as fixed or random batch or chip effects. MethLAB accounts for multiple testing by controlling the false discovery rate (FDR) at a user-specified level. Standard output includes a spreadsheet-ready text file and an array of publication-quality figures. Considering the growing interest in and availability of DNA methylation data, there is a great need for user-friendly open source analytical tools. With MethLAB, we present a timely resource that will allow users with no programming experience to implement flexible and powerful analyses of DNA methylation data.
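
    MethLAB itself is an R package; the snippet below is only a schematic Python analogue of the analysis it automates (a separate linear model per CpG site with covariates, followed by FDR control), using simulated data and invented variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_samples, n_cpgs = 96, 500
covars = pd.DataFrame({
    "phenotype": rng.standard_normal(n_samples),      # continuous phenotype of interest
    "age":       rng.integers(20, 70, n_samples),
    "chip":      rng.integers(0, 4, n_samples),       # batch/chip indicator, modeled as a fixed effect
})
meth = rng.uniform(0.0, 1.0, size=(n_samples, n_cpgs))  # simulated methylation proportions

pvals = []
for j in range(n_cpgs):
    df = covars.assign(beta=meth[:, j])
    fit = smf.ols("beta ~ phenotype + age + C(chip)", data=df).fit()
    pvals.append(fit.pvalues["phenotype"])

reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} CpG sites associated with the phenotype at FDR 0.05")
```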

  11. Shear wave velocities in the upper mantle of the Western Alps: new constraints using array analysis of seismic surface waves

    Science.gov (United States)

    Lyu, Chao; Pedersen, Helle A.; Paul, Anne; Zhao, Liang; Solarino, Stefano

    2017-07-01

    It remains challenging to obtain absolute shear wave velocities of heterogeneities of small lateral extension in the uppermost mantle. This study presents a cross-section of Vs across the strongly heterogeneous 3-D structure of the western European Alps, based on array analysis of data from 92 broad-band seismic stations from the CIFALPS experiment and from permanent networks in France and Italy. Half of the stations were located along a dense sublinear array. Using a combination of these stations and off-profile stations, fundamental-mode Rayleigh wave dispersion curves were calculated using a combined frequency-time beamforming approach. We calculated dispersion curves for seven arrays of approximately 100 km aperture and 14 arrays of approximately 50 km aperture, the latter with the aim of obtaining a 2-D vertical cross-section of Vs beneath the western Alps. The dispersion curves were inverted for Vs(z), with crustal interfaces imposed from a previous receiver function study. The array approach proved feasible, as Vs(z) from independent arrays vary smoothly across the profile length. Results from the seven large arrays show that the shear velocity of the upper mantle beneath the European plate is overall low compared to AK135 with the lowest velocities in the internal part of the western Alps, and higher velocities east of the Alps beneath the Po plain. The 2-D Vs model is coherent with (i) a ∼100 km thick eastward-dipping European lithosphere west of the Alps, (ii) very high velocities beneath the Po plain, coherent with the presence of the Alpine (European) slab and (iii) a narrow low-velocity anomaly beneath the core of the western Alps (from the Briançonnais to the Dora Maira massif), and approximately colocated with a similar anomaly observed in a recent teleseismic P-wave tomography. This intriguing anomaly is also supported by traveltime variations of subvertically propagating body waves from two teleseismic events that are approximately located on

  12. Dynamic Topography Revisited

    Science.gov (United States)

    Moresi, Louis

    2015-04-01

    Dynamic Topography Revisited Dynamic topography is usually considered to be one of the trinity of contributing causes to the Earth's non-hydrostatic topography along with the long-term elastic strength of the lithosphere and isostatic responses to density anomalies within the lithosphere. Dynamic topography, thought of this way, is what is left over when other sources of support have been eliminated. An alternate and explicit definition of dynamic topography is that deflection of the surface which is attributable to creeping viscous flow. The problem with the first definition of dynamic topography is 1) that the lithosphere is almost certainly a visco-elastic / brittle layer with no absolute boundary between flowing and static regions, and 2) the lithosphere is a thermal / compositional boundary layer in which some buoyancy is attributable to immutable, intrinsic density variations and some is due to thermal anomalies which are coupled to the flow. In each case, it is difficult to draw a sharp line between each contribution to the overall topography. The second definition of dynamic topography does seem cleaner / more precise but it suffers from the problem that it is not measurable in practice. On the other hand, this approach has resulted in a rich literature concerning the analysis of large scale geoid and topography and the relation to buoyancy and mechanical properties of the Earth [e.g. refs 1,2,3]. In convection models with viscous, elastic, brittle rheology and compositional buoyancy, however, it is possible to examine how the surface topography (and geoid) are supported and how different ways of interpreting the "observable" fields introduce different biases. This is what we will do. References (a.k.a. homework) [1] Hager, B. H., R. W. Clayton, M. A. Richards, R. P. Comer, and A. M. Dziewonski (1985), Lower mantle heterogeneity, dynamic topography and the geoid, Nature, 313(6003), 541-545, doi:10.1038/313541a0. [2] Parsons, B., and S. Daly (1983), The

  13. Coherency analysis of accelerograms recorded by the UPSAR array during the 2004 Parkfield earthquake

    DEFF Research Database (Denmark)

    Konakli, Katerina; Kiureghian, Armen Der; Dreger, Douglas

    2014-01-01

    Spatial variability of near-fault strong motions recorded by the US Geological Survey Parkfield Seismograph Array (UPSAR) during the 2004 Parkfield (California) earthquake is investigated. Behavior of the lagged coherency for two horizontal and the vertical components is analyzed by separately...

  14. Power balance and loss mechanism analysis in RF transmit coil arrays.

    Science.gov (United States)

    Kuehne, Andre; Goluch, Sigrun; Waxmann, Patrick; Seifert, Frank; Ittermann, Bernd; Moser, Ewald; Laistler, Elmar

    2015-10-01

    To establish a framework for transmit array power balance calculations based on power correlation matrices to accurately quantify the loss contributions from different mechanisms such as coupling, lumped components, and radiation. Starting from Poynting's theorem, power correlation matrices are derived for all terms in the power balance, which is formulated as a matrix equation. Finite-difference time-domain simulations of two 7 T eight-channel head array coils at 297.2 MHz are used to verify the theoretical considerations and demonstrate their application. Care is taken to accurately incorporate all loss mechanisms. The power balance for static B1 phase shims as well as two-dimensional spatially selective transmit SENSE pulses is shown. The simulated power balance shows an excellent agreement with theory, with a maximum power imbalance of less than 0.11%. Power loss contributions from the different loss mechanisms vary significantly between the investigated setups, and depending on the excitation mode imposed on the coil. The presented approach enables a straightforward loss evaluation for an arbitrary excitation of transmit coil arrays. Worst-case power imbalance and losses are calculated in a straightforward manner. This allows for deeper insight into transmit array loss mechanisms, incorporation of radiated power components in specific absorption rate calculations and verification of electromagnetic simulations. © 2014 Wiley Periodicals, Inc.
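
    The bookkeeping step behind such a power balance can be sketched compactly: if each loss mechanism is represented by a Hermitian power correlation matrix Q_k, the power dissipated in that mechanism for a channel excitation vector v is v^H Q_k v, and the balance check compares the sum of the parts with the total. The matrices and the eight-channel phase shim below are synthetic placeholders, not the simulated head-coil data from the paper.

```python
# Sketch of power-correlation-matrix bookkeeping, assuming Hermitian loss matrices
# Q_k (one per mechanism, e.g. conductor, lumped-element, sample and radiation
# losses) are already available from a full-wave simulation. For an excitation
# vector v the power lost to mechanism k is v^H Q_k v. Matrices here are random
# placeholders used only to show the balance check.
import numpy as np

def power_balance(v, Q_total, Q_mechanisms):
    total = np.real(np.vdot(v, Q_total @ v))
    parts = {name: np.real(np.vdot(v, Q @ v)) for name, Q in Q_mechanisms.items()}
    imbalance = abs(total - sum(parts.values())) / total
    return parts, imbalance

rng = np.random.default_rng(1)
n_ch = 8

def random_psd(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return a @ a.conj().T                      # Hermitian positive semi-definite

Q_mech = {name: random_psd(n_ch) for name in ("coil", "lumped", "sample", "radiated")}
Q_tot = sum(Q_mech.values())
shim = np.exp(1j * 2 * np.pi * np.arange(n_ch) / n_ch)   # a static B1 phase shim
parts, imbalance = power_balance(shim, Q_tot, Q_mech)
print({k: round(p, 2) for k, p in parts.items()}, f"imbalance = {imbalance:.1e}")
```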

  15. Characterization of a patch-clamp microchannel array towards neuronal networks analysis

    DEFF Research Database (Denmark)

    Alberti, Massimo; Snakenborg, Detlef; Lopacinska, Joanna M.

    2010-01-01

    for simultaneous patch clamping of cultured cells or neurons in the same network. A disposable silicon/silicon dioxide (Si/SiO2) chip with a microhole array was integrated in a microfluidic system for cell handling, perfusion and electrical recording. Fluidic characterization showed that our PCμCA can work...

  16. All-diamond functional surface micro-electrode arrays for brain-slice neural analysis

    Czech Academy of Sciences Publication Activity Database

    Vahidpour, F.; Curley, L.; Biró, I.; McDonald, M.; Croux, D.; Pobedinskas, P.; Haenen, K.; Giugliano, M.; Vlčková Živcová, Zuzana; Kavan, Ladislav; Nesládek, M.

    2017-01-01

    Vol. 214, No. 2 (2017), Article No. 1532347. ISSN 1862-6300 R&D Projects: GA ČR GA13-31783S Institutional support: RVO:61388955 Keywords: impedance spectroscopy * microelectrode arrays * surface termination Subject RIV: CG - Electrochemistry OECD field: Electrochemistry (dry cells, batteries, fuel cells, corrosion metals, electrolysis) Impact factor: 1.775, year: 2016

  17. Mathematical analysis of the real time array PCR (RTA PCR) process

    NARCIS (Netherlands)

    Dijksman, Johan Frederik; Pierik, A.

    2012-01-01

    Real time array PCR (RTA PCR) is a recently developed biochemical technique that measures amplification curves (like with quantitative real time Polymerase Chain Reaction (qRT PCR)) of a multitude of different templates in a sample. It combines two different methods in order to profit from the

  18. HPLC-photodiode array detection analysis of curcuminoids in Curcuma species indigenous to Indonesia

    NARCIS (Netherlands)

    Bos, Rein; Windono, Tri; Woerdenbag, Herman J.; Boersma, Ykelien L.; Koulman, Albert; Kayser, Oliver

    An optimized HPLC method with photodiode array detection was developed and applied to analyse the curcuminoids curcumin, demethoxycurcumin, and bis-demethoxycurcumin in rhizomes of Curcuma mangga Val. & v. Zijp, C. heyneana Val. & v. Zijp, C. aeruginosa Roxb. and C. soloensis Val. (Zingiberaceae),

  19. Performance Analysis of Compact FD-MIMO Antenna Arrays in a Correlated Environment

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2017-03-06

    Full dimension multiple-input-multiple-output (FDMIMO) is one of the key technologies proposed in the 3rd Generation Partnership Project (3GPP) for the fifth generation (5G) communication systems. The reason can be attributed to its ability to yield significant performance gains through the deployment of active antenna elements at the base station in the vertical as well as the conventional horizontal directions, enabling several elevation beamforming strategies. The resulting improvement in spectral efficiency largely depends on the orthogonality of the sub-channels constituting the FD-MIMO system. Accommodating a large number of antenna elements with sufficient spacing poses several constraints for practical implementation, making it imperative to consider compact antenna arrangements that minimize the overall channel correlation. Two such configurations considered in this work are the uniform linear array (ULA) and the uniform circular array (UCA) of antenna ports, where each port is mapped to a group of physical antenna elements arranged in the vertical direction. The generalized analytical expression for the spatial correlation function (SCF) for the UCA is derived, exploiting results on spherical harmonics and Legendre polynomials. The mutual coupling between antenna dipoles is accounted for and the resulting SCF is also presented. The second part of this work compares the spatial correlation and mutual information (MI) performance of the ULA and UCA configurations in the 3GPP 3D urban-macro and urban-micro cell scenarios, utilizing results from Random Matrix Theory (RMT) on the deterministic equivalent of the MI for the Kronecker channel model. Simulation results study the performance patterns of the two arrays as a function of several channel and array parameters and identify applications and environments suitable for the deployment of each array.
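
    The following sketch is not the paper's derivation (which uses spherical harmonics, Legendre polynomials and mutual coupling); it only illustrates the general idea that antenna spacing sets spatial correlation, which in turn limits mutual information. It builds a uniform-linear-array correlation matrix under the classical 2-D isotropic-scattering model, R_mn = J0(2*pi*d*|m-n|/lambda), and estimates the ergodic mutual information of a transmit-correlated Rayleigh channel by Monte Carlo.

```python
# Illustrative only: ULA spatial correlation under 2-D isotropic scattering
# (Clarke's model) and the resulting ergodic mutual information of a
# transmit-correlated Rayleigh channel, estimated by Monte Carlo.
import numpy as np
from scipy.special import j0
from scipy.linalg import sqrtm

def ula_correlation(n_ports, spacing_wavelengths):
    idx = np.arange(n_ports)
    return j0(2 * np.pi * spacing_wavelengths * np.abs(idx[:, None] - idx[None, :]))

def ergodic_mi(R_tx, n_rx=4, snr_db=10.0, trials=500, seed=0):
    rng = np.random.default_rng(seed)
    n_tx = R_tx.shape[0]
    R_sqrt = np.real_if_close(sqrtm(R_tx))
    snr = 10.0 ** (snr_db / 10.0)
    total = 0.0
    for _ in range(trials):
        G = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
        H = G @ R_sqrt                                    # transmit-side correlation only
        M = np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T
        total += np.linalg.slogdet(M)[1] / np.log(2)      # log2 det
    return total / trials

for spacing in (0.1, 0.5):
    R = ula_correlation(8, spacing)
    print(f"d = {spacing} wavelengths: MI ~ {ergodic_mi(R):.2f} bit/s/Hz")
```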

  20. Analysis of O-glycans as 9-fluorenylmethyl derivatives and its application to the studies on glycan array.

    Science.gov (United States)

    Yamada, Keita; Hirabayashi, Jun; Kakehi, Kazuaki

    2013-03-19

    A method is proposed for the analysis of O-glycans as 9-fluorenylmethyl (Fmoc) derivatives. After releasing the O-glycans from the protein backbone in the presence of ammonia-based media, the glycosylamines thus formed are conveniently labeled with Fmoc-Cl and analyzed by HPLC and MALDI-TOF MS after easy purification. Fmoc-labeled O-glycans showed 3.5 times higher sensitivities than those labeled with 2-aminobenzoic acid in fluorescent detection. Various types of O-glycans having sialic acids, fucose, and/or sulfate residues were successfully labeled with Fmoc and analyzed by HPLC and MALDI-TOF MS. The method was applied to the comprehensive analysis of O-glycans expressed on MKN45 cells (human gastric adenocarcinoma). In addition, Fmoc-derivatized O-glycans were easily converted to free hemiacetal or glycosylamine-form glycans that are available for fabrication of glycan arrays and neoglycoproteins. To demonstrate the applicability of the method, we fabricated a glycan array with Fmoc-labeled glycans derived from mucin samples and cancer cells. The model studies using the glycan array showed clear interactions between immobilized glycans and some lectins.

  1. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    Science.gov (United States)

    Arendt, Richard; Kashlinsky, A.; Moseley, S.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs

  2. COSMIC INFRARED BACKGROUND FLUCTUATIONS IN DEEP SPITZER INFRARED ARRAY CAMERA IMAGES: DATA PROCESSING AND ANALYSIS

    International Nuclear Information System (INIS)

    Arendt, Richard G.; Kashlinsky, A.; Moseley, S. H.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (∼>30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (∼>1 nW m -2 sr -1 at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs inhabited by the populations producing these

  3. DNA-barcode directed capture and electrochemical metabolic analysis of single mammalian cells on a microelectrode array.

    Science.gov (United States)

    Douglas, Erik S; Hsiao, Sonny C; Onoe, Hiroaki; Bertozzi, Carolyn R; Francis, Matthew B; Mathies, Richard A

    2009-07-21

    A microdevice is developed for DNA-barcode directed capture of single cells on an array of pH-sensitive microelectrodes for metabolic analysis. Cells are modified with membrane-bound single-stranded DNA, and specific single-cell capture is directed by the complementary strand bound in the sensor area of the iridium oxide pH microelectrodes within a microfluidic channel. This bifunctional microelectrode array is demonstrated for the pH monitoring and differentiation of primary T cells and Jurkat T lymphoma cells. Single Jurkat cells exhibited an extracellular acidification rate of 11 milli-pH min(-1), while primary T cells exhibited only 2 milli-pH min(-1). This system can be used to capture non-adherent cells specifically and to discriminate between visually similar healthy and cancerous cells in a heterogeneous ensemble based on their altered metabolic properties.

  4. Design, fabrication, and calibration of a cryogenic search-coil array for harmonic analysis of quadrupole magnets

    International Nuclear Information System (INIS)

    Green, M.I.; Barale, P.J.; Hassenzahl, W.V.; Nelson, D.H.; O'Neill, J.W.; Schafer, R.V.; Taylor, C.E.

    1987-09-01

    A cryogenic search-coil array has been fabricated at LBL for harmonic error analysis of SSC model quadrupoles. It consists of three triplets of coils; the center-coil triplet is 10 cm long, and the end coil triplets are 70 cm long. Design objectives are a high bucking ratio for the dipole and quadrupole signals and utility at cryogenic operating currents (∼6 kA) with sufficient sensitivity for use at room-temperature currents (∼10 A). The design and fabrication are described. Individual coils are mechanically measured to ±5 μm, and their magnetic areas measured to 0.05%. A computer program has been developed to predict the quadrupole and dipole bucking ratios from the mechanical and magnetic measurements. The calibration procedure and accuracy of the array are specified. Results of measurements of SSC model quadrupoles are presented. 1 ref., 4 figs

  5. Numerical analysis of natural convection and radiation heat transfer from various shaped thin fin-arrays placed on a horizontal plate-a conjugate analysis

    International Nuclear Information System (INIS)

    Dogan, M.; Sivrioglu, Mecit; Yılmaz, Onder

    2014-01-01

    Highlights: • Optimum fin shape is determined for natural convection and radiation heat transfer. • Fin array with the optimum shape has a much greater average heat transfer coefficient. • The most important factors affecting the heat transfer coefficient are determined. - Abstract: Steady state natural convection and radiation heat transfer from various shaped thin fin-arrays on a horizontal base plate has been numerically investigated. A conjugate analysis has been carried out in which the conservation equations of mass, momentum and energy for the fluid in the two fin enclosure are solved together with the heat conduction equation in the fin and the base plate. Heat transfer by radiation is also considered in analysis. The heat transfer coefficient has been determined for each of the fin array considered in the present study at the same base and the same total area. The results of the analysis show that there are some important geometrical factors affecting the design of fin arrays. Taking into consideration these factors, an optimum fin shape that yields the highest average heat transfer coefficient has been determined

  6. Electron photoemission in plasmonic nanoparticle arrays: analysis of collective resonances and embedding effects

    DEFF Research Database (Denmark)

    Zhukovsky, Sergei V.; Babicheva, Viktoriia; Uskov, Alexander

    2014-01-01

    We theoretically study the characteristics of photoelectron emission in plasmonic nanoparticle arrays. Nanoparticles are partially embedded in a semiconductor, forming Schottky barriers at metal/semiconductor interfaces through which photoelectrons can tunnel from the nanoparticle...... into the semiconductor; photodetection in the infrared range, where photon energies are below the semiconductor band gap (insufficient for band-to-band absorption in semiconductor), is therefore possible. The nanoparticles are arranged in a sparse rectangular lattice so that the wavelength of the lattice......-induced Rayleigh anomalies can overlap the wavelength of the localized surface plasmon resonance of the individual particles, bringing about collective effects from the nanoparticle array. Using full-wave numerical simulations, we analyze the effects of lattice constant, embedding depth, and refractive index step...

  7. Application of an array processor to the analysis of magnetic data for the Doublet III tokamak

    International Nuclear Information System (INIS)

    Wang, T.S.; Saito, M.T.

    1980-08-01

    Discussed herein is a fast computational technique employing the Floating Point Systems AP-190L array processor to analyze magnetic data for the Doublet III tokamak, a fusion research device. Interpretation of the experimental data requires the repeated solution of a free-boundary nonlinear partial differential equation, which describes the magnetohydrodynamic (MHD) equilibrium of the plasma. For this particular application, we have found that the array processor is only 1.4 and 3.5 times slower than the CDC-7600 and CRAY computers, respectively. The overhead on the host DEC-10 computer was kept to a minimum by chaining the complete Poisson solver and free-boundary algorithm into one single-load module using the vector function chainer (VFC). A simple time-sharing scheme for using the MHD code is also discussed

  8. The Texcoco Seismic Array: Analysis of the Seismic Movement in the Deep Sediments of Mexico Basin.

    Science.gov (United States)

    Flores-Estrella, H.; Cardenas-Soto, M.; Lomnitz, C.

    2007-05-01

    The seismic movement in the Lake Zone of the Mexico Basin is characterized by long durations and late energy arrivals; many efforts have been made to find the origin of these late waves. In 1997 the Texcoco Seismic Array (TXC) was installed in the former Lake of Texcoco, in the northeastern part of the Mexico Basin. It is a natural reserve formed by the same lacustrine clays as the Lake Zone in Mexico City; however, we consider TXC a virgin site, as there are no buildings nearby and there is almost no human activity. We analyzed 7 earthquakes recorded at TXC in two instrumental arrays to identify late energy arrivals near the fundamental period, and we also analyzed these pulses with the F-K method to estimate the phase velocity and its origin.

  9. Transcriptome analysis of exosome-compromised human cells using high-density tiling arrays

    DEFF Research Database (Denmark)

    Jensen, Torben Heick

    The extent of RNA degradation in the nucleus has traditionally been underestimated. However, all major RNA species are synthesized, processed and can be degraded in this compartment and consequently an enormous amount of nucleosides are turned over and recycled. The RNA exosome, a multisubunit co......) tiling array that covers discrete regions from different chromosomes to represent a range of gene content and exonic/nonexonic conservation grades of the human genome....

  10. DNA micro array analysis of yeast global genome expression in response to ELF-MF exposure

    International Nuclear Information System (INIS)

    Shimizu, K.; Yamamoto, T.; Ishibashi, T.; Kyoh, B.

    2002-01-01

    There is widespread public concern over the possible health risk of ELF-MF. Electromagnetic fields may produce a variety of effects in several biological systems, including the elevation of cancer risk and reduction of cell growth. Epidemiological studies have shown weak correlations between exposure to ELF and the incidence of several cancers, but negative studies have also been reported. Moreover, there are some reports that basic biological events such as the cell cycle and DNA replication were affected by exposure to MF. However, to date the molecular mechanism of the MF effect on living organisms is not clear. In this study, we used a yeast DNA microarray to examine the transcriptional profile of all genes in response to ELF-MF. A few years ago it was difficult to carry out a global gene expression study to identify important genes regarding ELF-MF; however, today high-density DNA microarrays allow the study of gene regulation in response to ELF-MF exposure. Thus we used a microarray to analyze changes in mRNA abundance during ELF-MF exposure.

  11. Super-transition-arrays: A model for the spectral analysis of hot, dense plasma

    International Nuclear Information System (INIS)

    Bar-Shalom, A.; Oreg, J.; Goldstein, W.H.; Shvarts, D.; Zigler, A.

    1989-01-01

    A method is presented for calculating the bound-bound emission from a local thermodynamic equilibrium plasma. The total transition array of a specific single-electron transition, including all possible contributing configurations, is described by only a small number of super-transition-arrays (STA's). Exact analytic expressions are given for the first few moments of an STA. The method is shown to interpolate smoothly between the average-atom (AA) results and the detailed configuration accounting that underlies the unresolved transition array (UTA) method. Each STA is calculated in its own, optimized potential, and the model achieves rapid convergence in the number of STA's included. Comparisons of predicted STA spectra with the results of the AA and UTA methods are presented. It is shown that under certain plasma conditions the contributions of low-probability transitions can accumulate into an important component of the emission. In these cases, detailed configuration accounting is impractical. On the other hand, the detailed structure of the spectrum under such conditions is not described by the AA method. The application of the STA method to laser-produced plasma experiments is discussed

  12. A user-friendly workflow for analysis of Illumina gene expression bead array data available at the arrayanalysis.org portal.

    Science.gov (United States)

    Eijssen, Lars M T; Goelela, Varshna S; Kelder, Thomas; Adriaens, Michiel E; Evelo, Chris T; Radonjic, Marijana

    2015-06-30

    Illumina whole-genome expression bead arrays are a widely used platform for transcriptomics. Most of the tools available for the analysis of the resulting data are not easily applicable by less experienced users. ArrayAnalysis.org provides researchers with an easy-to-use and comprehensive interface to the functionality of R and Bioconductor packages for microarray data analysis. As a modular open source project, it allows developers to contribute modules that provide support for additional types of data or extend workflows. To enable data analysis of Illumina bead arrays for a broad user community, we have developed a module for ArrayAnalysis.org that provides a free and user-friendly web interface for quality control and pre-processing for these arrays. This module can be used together with existing modules for statistical and pathway analysis to provide a full workflow for Illumina gene expression data analysis. The module accepts data exported from Illumina's GenomeStudio, and provides the user with quality control plots and normalized data. The outputs are directly linked to the existing statistics module of ArrayAnalysis.org, but can also be downloaded for further downstream analysis in third-party tools. The Illumina bead arrays analysis module is available at http://www.arrayanalysis.org . A user guide, a tutorial demonstrating the analysis of an example dataset, and R scripts are available. The module can be used as a starting point for statistical evaluation and pathway analysis provided on the website or to generate processed input data for a broad range of applications in life sciences research.
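
    The module described above is built on R and Bioconductor, so the snippet below is only a loose Python illustration of the kind of pre-processing it performs on a GenomeStudio export: a log2 transform followed by quantile normalisation so that all samples share a common intensity distribution. The synthetic data frame, probe identifiers and column names are assumptions made for the example.

```python
# Loose illustration of log2 transform + quantile normalisation of Illumina-style
# probe intensities (not the arrayanalysis.org module, which is R/Bioconductor
# based). Probe IDs, sample columns and values are simulated.
import numpy as np
import pandas as pd

def quantile_normalize(df):
    """Give every sample (column) the same intensity distribution."""
    rank_mean = df.stack().groupby(df.rank(method="first").stack().astype(int)).mean()
    return df.rank(method="min").stack().astype(int).map(rank_mean).unstack()

rng = np.random.default_rng(0)
signal = pd.DataFrame(rng.lognormal(mean=7.0, sigma=1.0, size=(1000, 6)),
                      index=[f"ILMN_{i:07d}" for i in range(1000)],
                      columns=[f"Sample{i + 1}.AVG_Signal" for i in range(6)])
# In practice `signal` would be read from the GenomeStudio sample probe profile export.

log_expr = np.log2(signal.clip(lower=1.0))        # guard against non-positive values
norm_expr = quantile_normalize(log_expr)
print(norm_expr.describe().loc[["mean", "std"]])  # identical distributions per sample
```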

  13. Array CGH Analysis and Developmental Delay: A Diagnostic Tool for Neurologists.

    Science.gov (United States)

    Cameron, F; Xu, J; Jung, J; Prasad, C

    2013-11-01

    Developmental delay occurs in 1-3% of the population, with unknown etiology in approximately 50% of cases. Initial genetic work-up for developmental delay previously included chromosome analysis and subtelomeric FISH (fluorescent in situ hybridization). Array Comparative Genomic Hybridization (aCGH) has emerged as a tool to detect genetic copy number changes and uniparental disomy and is the most sensitive test in providing etiological diagnosis in developmental delay. aCGH allows for the provision of prognosis and recurrence risks, improves access to resources, helps limit further investigations and may alter medical management in many cases. aCGH has led to the delineation of novel genetic syndromes associated with developmental delay. An illustrative case of a 31-year-old man with long-standing global developmental delay and recently diagnosed 4q21 deletion syndrome with a deletion of a 20.8 Mb genomic interval is provided. aCGH is now recommended as a first-line test in children and adults with undiagnosed developmental delay and congenital anomalies. Array comparative genomic hybridization and developmental delay: a diagnostic tool for neurologists. Developmental delay occurs in 1 to 3% of the population and its etiology is unknown in approximately 50% of cases. The initial genetic evaluation for developmental delay previously included chromosome analysis and FISH (fluorescence in situ hybridization) analysis of subtelomeric regions. Array comparative genomic hybridization (aCGH) has become a tool for detecting gene copy number changes as well as uniparental disomy, and it is the most sensitive test for providing an etiological diagnosis in developmental delay. aCGH makes it possible to offer a prognosis and a recurrence risk, improves access to resources, helps limit further evaluations and may modify medical management in many cases

  14. Bottomonium spectrum revisited

    CERN Document Server

    Segovia, Jorge; Entem, David R.; Fernández, Francisco

    2016-01-01

    We revisit the bottomonium spectrum motivated by the recently exciting experimental progress in the observation of new bottomonium states, both conventional and unconventional. Our framework is a nonrelativistic constituent quark model which has been applied to a wide range of hadronic observables from the light to the heavy quark sector and thus the model parameters are completely constrained. Beyond the spectrum, we provide a large number of electromagnetic, strong and hadronic decays in order to discuss the quark content of the bottomonium states and give more insights about the better way to determine their properties experimentally.

  15. Metamorphosis in Craniiformea revisited

    DEFF Research Database (Denmark)

    Altenburger, Andreas; Wanninger, Andreas; Holmer, Lars E.

    2013-01-01

    We revisited the brachiopod fold hypothesis and investigated metamorphosis in the craniiform brachiopod Novocrania anomala. Larval development is lecithotrophic and the dorsal (brachial) valve is secreted by dorsal epithelia. We found that the juvenile ventral valve, which consists only of a thin...... brachiopods during metamorphosis to cement their pedicle to the substrate. N. anomala is therefore not initially attached by a valve but by material corresponding to pedicle cuticle. This is different to previous descriptions, which had led to speculations about a folding event in the evolution of Brachiopoda...

  16. Fiber array based hyperspectral Raman imaging for chemical selective analysis of malaria-infected red blood cells

    Energy Technology Data Exchange (ETDEWEB)

    Brückner, Michael [Leibniz Institute of Photonic Technology, 07745 Jena (Germany); Becker, Katja [Justus Liebig University Giessen, Biochemistry and Molecular Biology, 35392 Giessen (Germany); Popp, Jürgen [Leibniz Institute of Photonic Technology, 07745 Jena (Germany); Friedrich Schiller University Jena, Institute for Physical Chemistry, 07745 Jena (Germany); Friedrich Schiller University Jena, Abbe Centre of Photonics, 07745 Jena (Germany); Frosch, Torsten, E-mail: torsten.frosch@uni-jena.de [Leibniz Institute of Photonic Technology, 07745 Jena (Germany); Friedrich Schiller University Jena, Institute for Physical Chemistry, 07745 Jena (Germany); Friedrich Schiller University Jena, Abbe Centre of Photonics, 07745 Jena (Germany)

    2015-09-24

    A new setup for Raman spectroscopic wide-field imaging is presented. It combines the advantages of a fiber array based spectral translator with a tailor-made laser illumination system for high-quality Raman chemical imaging of sensitive biological samples. The Gaussian-like intensity distribution of the illuminating laser beam is shaped by a square-core optical multimode fiber to a top-hat profile with very homogeneous intensity distribution to fulfill the conditions of Koehler. The 30 m long optical fiber and an additional vibrator efficiently destroy the polarization and coherence of the illuminating light. This homogeneous, incoherent illumination is an essential prerequisite for stable quantitative imaging of complex biological samples. The fiber array translates the two-dimensional lateral information of the Raman stray light into separated spectral channels with very high contrast. The Raman image can be correlated with a corresponding white light microscopic image of the sample. The new setup enables simultaneous quantification of all Raman spectra across the whole spatial area with very good spectral resolution and thus outperforms other Raman imaging approaches based on scanning and tunable filters. The unique capabilities of the setup for fast, gentle, sensitive, and selective chemical imaging of biological samples were applied for automated hemozoin analysis. A special algorithm was developed to generate Raman images based on the hemozoin distribution in red blood cells without any influence from other Raman scattering. The new imaging setup in combination with the robust algorithm provides a novel, elegant way for chemical selective analysis of the malaria pigment hemozoin in early ring stages of Plasmodium falciparum infected erythrocytes. - Highlights: • Raman hyperspectral imaging allows for chemical selective analysis of biological samples with spatial heterogeneity. • A homogeneous, incoherent illumination is essential for reliable

  17. Fiber array based hyperspectral Raman imaging for chemical selective analysis of malaria-infected red blood cells

    International Nuclear Information System (INIS)

    Brückner, Michael; Becker, Katja; Popp, Jürgen; Frosch, Torsten

    2015-01-01

    A new setup for Raman spectroscopic wide-field imaging is presented. It combines the advantages of a fiber array based spectral translator with a tailor-made laser illumination system for high-quality Raman chemical imaging of sensitive biological samples. The Gaussian-like intensity distribution of the illuminating laser beam is shaped by a square-core optical multimode fiber to a top-hat profile with very homogeneous intensity distribution to fulfill the conditions of Koehler. The 30 m long optical fiber and an additional vibrator efficiently destroy the polarization and coherence of the illuminating light. This homogeneous, incoherent illumination is an essential prerequisite for stable quantitative imaging of complex biological samples. The fiber array translates the two-dimensional lateral information of the Raman stray light into separated spectral channels with very high contrast. The Raman image can be correlated with a corresponding white light microscopic image of the sample. The new setup enables simultaneous quantification of all Raman spectra across the whole spatial area with very good spectral resolution and thus outperforms other Raman imaging approaches based on scanning and tunable filters. The unique capabilities of the setup for fast, gentle, sensitive, and selective chemical imaging of biological samples were applied for automated hemozoin analysis. A special algorithm was developed to generate Raman images based on the hemozoin distribution in red blood cells without any influence from other Raman scattering. The new imaging setup in combination with the robust algorithm provides a novel, elegant way for chemical selective analysis of the malaria pigment hemozoin in early ring stages of Plasmodium falciparum infected erythrocytes. - Highlights: • Raman hyperspectral imaging allows for chemical selective analysis of biological samples with spatial heterogeneity. • A homogeneous, incoherent illumination is essential for reliable

  18. Angular acceptance analysis of an infrared focal plane array with a built-in stationary Fourier transform spectrometer.

    Science.gov (United States)

    Gillard, Frédéric; Ferrec, Yann; Guérineau, Nicolas; Rommeluère, Sylvain; Taboury, Jean; Chavel, Pierre

    2012-06-01

    Stationary Fourier transform spectrometry is an interesting concept for building reliable field or embedded spectroradiometers, especially for the mid- and far- IR. Here, a very compact configuration of a cryogenic stationary Fourier transform IR (FTIR) spectrometer is investigated, where the interferometer is directly integrated in the focal plane array (FPA). We present a theoretical analysis to explain and describe the fringe formation inside the FTIR-FPA structure when illuminated by an extended source positioned at a finite distance from the detection plane. The results are then exploited to propose a simple front lens design compatible with a handheld package.

  19. Analysis of thermal dispersion in an array of parallel plates with fully-developed laminar flow

    International Nuclear Information System (INIS)

    Xu Jiaying; Lu Tianjian; Hodson, Howard P.; Fleck, Norman A.

    2010-01-01

    The effect of thermal dispersion upon heat transfer across a periodic array of parallel plates is studied. Three basic heat transfer problems are addressed, each for steady, fully-developed, laminar fluid flow: (a) transient heat transfer due to an arbitrary initial temperature distribution within the fluid, (b) steady heat transfer with constant heat flux on all plate surfaces, and (c) steady heat transfer with constant wall temperatures. For problems (a) and (b), the effective thermal dispersivity scales with the Peclet number Pe according to 1 + C·Pe², where the coefficient C is independent of Pe. For problem (c) the coefficient C is a function of Pe.

  20. Methods for Room Acoustic Analysis and Synthesis using a Monopole-Dipole Microphone Array

    Science.gov (United States)

    Abel, J. S.; Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    In recent work, a microphone array consisting of an omnidirectional microphone and colocated dipole microphones having orthogonally aligned dipole axes was used to examine the directional nature of a room impulse response. The arrival of significant reflections was indicated by peaks in the power of the omnidirectional microphone response; the reflection direction of arrival was revealed by comparing the zero-lag crosscorrelations between the omnidirectional response and each dipole response with the omnidirectional response power, which yields estimates of the arrival direction cosines with respect to the dipole axes.
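
    The estimator described above reduces to a few lines: over a short window around a detected reflection, the zero-lag cross-correlation of the omnidirectional response with each dipole response, divided by the omnidirectional power, approximates the arrival direction cosine along that dipole axis. The sketch below assumes matched microphone sensitivities and a single dominant arrival, and uses synthetic signals rather than measured room responses.

```python
# Sketch of the direction-cosine estimate: within a window around a reflection,
# cos(angle to dipole axis i) ~ <omni * dipole_i> / <omni^2>, assuming matched
# sensitivities and one dominant arrival. Signals below are synthetic.
import numpy as np

def direction_cosines(omni, dipoles, start, length):
    """omni: (n,) omnidirectional response; dipoles: (3, n) x/y/z dipole responses."""
    w = slice(start, start + length)
    omni_power = float(np.dot(omni[w], omni[w]))
    raw = np.array([np.dot(d[w], omni[w]) / omni_power for d in dipoles])
    return raw / max(np.linalg.norm(raw), 1e-12)      # re-normalise to a unit vector

rng = np.random.default_rng(3)
n = 2048
pulse = np.zeros(n)
pulse[500:520] = rng.normal(size=20)                  # one "reflection"
true_direction = np.array([0.6, 0.64, 0.48])          # unit direction cosines
omni = pulse + 0.01 * rng.normal(size=n)
dipoles = np.array([c * pulse + 0.01 * rng.normal(size=n) for c in true_direction])
print(direction_cosines(omni, dipoles, start=480, length=80))   # close to true_direction
```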

  1. Measurement and analysis of flow wall shear stress in an interior subchannel of triangular array rods

    International Nuclear Information System (INIS)

    Fakori-Monazah, M.R.; Todreas, N.E.

    1977-08-01

    A simulated model of triangular array rods with pitch to diameter ratio of 1.10 (as a test section) and air as the fluid flow was used to study the LMFBR hydraulic parameters. The wall shear stress distribution around the rod periphery, friction factors, static pressure distributions and turbulence intensity corresponding to various Reynolds numbers ranging from 4140 to 36170 in the central subchannel were measured. Various approaches for measurement of wall shear stress were compared. The measurement was performed using the Preston tube technique with the probe outside diameter equal to 0.014 in

  2. Revisiting Nursing Research in Nigeria

    African Journals Online (AJOL)

    2016-08-18

    Aug 18, 2016 ... health care research, it is therefore pertinent to revisit the state of nursing research in the country. .... platforms, updated libraries with electronic resource ... benchmarks for developing countries of 26%, [17] the amount is still ...

  3. Revisiting a dogma: similar survival of patients with small bowel and gastric GIST. A population-based propensity score SEER analysis.

    Science.gov (United States)

    Guller, Ulrich; Tarantino, Ignazio; Cerny, Thomas; Ulrich, Alexis; Schmied, Bruno M; Warschkow, Rene

    2017-01-01

    The objective of the present analysis was to assess whether small bowel gastrointestinal stromal tumor (GIST) is associated with worse cancer-specific survival (CSS) and overall survival (OS) compared with gastric GIST on a population-based level. Data on patients aged 18 years or older with histologically proven GIST was extracted from the SEER database from 1998 to 2011. OS and CSS for small bowel GIST were compared with OS and CSS for gastric GIST by application of adjusted and unadjusted Cox regression analyses and propensity score analyses. GIST were located in the stomach (n = 3011, 59 %), duodenum (n = 313, 6 %), jejunum/ileum (n = 1288, 25 %), colon (n = 139, 3 %), rectum (n = 172, 3 %), and extraviscerally (n = 173, 3 %). OS and CSS of patients with GIST in the duodenum [OS, HR 0.95, 95 % confidence interval (CI) 0.76-1.19; CSS, HR 0.99, 95 % CI 0.76-1.29] and in the jejunum/ileum (OS, HR 0.97, 95 % CI 0.85-1.10; CSS, HR = 0.95, 95 % CI 0.81-1.10) were similar to those of patients with gastric GIST in multivariate analyses. Conversely, OS and CSS of patients with GIST in the colon (OS, HR 1.40; 95 % CI 1.07-1.83; CSS, HR 1.89, 95 % CI 1.41-2.54) and in an extravisceral location (OS, HR 1.42, 95 % CI 1.14-1.77; CSS, HR = 1.43, 95 % CI 1.11-1.84) were significantly worse than those of patients with gastric GIST. Contrary to common belief, OS and CSS of patients with small bowel GIST are not statistically different from those of patients with gastric GIST when adjustment is made for confounding variables on a population-based level. The prognosis of patients with nongastric GIST is worse because of a colonic and extravisceral GIST location. These findings have implications regarding adjuvant treatment of GIST patients. Hence, the dogma that small bowel GIST patients have worse prognosis than gastric GIST patients and therefore should receive adjuvant treatment to a greater extent must be revisited.
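
    As a hedged illustration of the kind of adjusted survival comparison reported above (and not the authors' actual SEER extraction, propensity score matching or variable set), the sketch below fits a Cox proportional-hazards model with a site indicator and covariates using the lifelines package on synthetic data. All column names and parameter values are invented.

```python
# Hedged sketch: an adjusted Cox proportional-hazards comparison of two tumour
# sites on synthetic data using the lifelines package. Column names, covariates
# and effect sizes are invented and do not reflect the SEER analysis above.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "small_bowel": rng.integers(0, 2, n),          # 1 = small bowel, 0 = gastric
    "age": rng.normal(62.0, 12.0, n),
    "tumor_size_cm": rng.gamma(3.0, 2.0, n),
})
# Simulate survival with no true site effect but an age effect.
hazard = 0.05 * np.exp(0.03 * (df["age"] - 62.0))
df["time"] = rng.exponential(1.0 / hazard)          # years
df["event"] = (df["time"] < 14.0).astype(int)       # administrative censoring
df.loc[df["event"] == 0, "time"] = 14.0

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                                 # HR for small_bowel should be ~1
```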

  4. Towards an integrated biosensor array for simultaneous and rapid multi-analysis of endocrine disrupting chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Scognamiglio, Viviana, E-mail: viviana.scognamiglio@mlib.ic.cnr.it [IC-CNR Istituto di Cristallografia, AdR1 Dipartimento Agroalimentare - Via Salaria Km 29.3 00015, Rome (Italy); Pezzotti, Italo; Pezzotti, Gianni; Cano, Juan; Manfredonia, Ivano [Biosensor S.r.l. - Via degli Olmetti 44 00060 Formello, Rome (Italy); Buonasera, Katia [IC-CNR Istituto di Cristallografia, AdR1 Dipartimento Agroalimentare - Via Salaria Km 29.3 00015, Rome (Italy); Arduini, Fabiana; Moscone, Danila; Palleschi, Giuseppe [Universita di Roma Tor Vergata, Dipartimento di Scienze e Tecnologie Chimiche - Via della Ricerca Scientifica 00133, Rome (Italy); Giardi, Maria Teresa [IC-CNR Istituto di Cristallografia, AdR1 Dipartimento Agroalimentare - Via Salaria Km 29.3 00015, Rome (Italy)

    2012-11-02

    Highlights: ► A multitask biosensor for the detection of endocrine disrupting chemicals is proposed. ► The sensing system employs an array of biological recognition elements. ► Amperometric and optical transduction methods are provided in an integrated biosensor together with flow control systems. ► The biosensing device results in an integrated, automatic and portable system for environmental and agrifood application. - Abstract: In this paper we propose the construction and application of a portable multi-purpose biosensor array for the simultaneous detection of a wide range of endocrine disruptor chemicals (EDCs), based on the recognition operated by various enzymes and microorganisms. The developed biosensor combines both electrochemical and optical transduction systems, in order to increase the number of chemical species which can be monitored. Considering the maximum residue level (MRL) of contaminants established by the European Commission, the biosensor system was able to detect most of the chemicals analysed with very high sensitivity. In particular, atrazine and diuron were detected with a limit of detection of 0.5 nM, with an RSD% less than 5%; paraoxon and chlorpyrifos were detected with a limit of detection of 5 μM and 4.5 μM, respectively, with an RSD% less than 6%; catechol and bisphenol A were identified with a limit of detection of 1 μM and 35 μM respectively, with an RSD% less than 5%.

  5. Microdeletion and microduplication analysis of chinese conotruncal defects patients with targeted array comparative genomic hybridization.

    Directory of Open Access Journals (Sweden)

    Xiaohui Gong

    Full Text Available OBJECTIVE: The current study aimed to develop a reliable targeted array comparative genomic hybridization (aCGH) to detect microdeletions and microduplications in congenital conotruncal defects (CTDs), especially on the 22q11.2 region, and for some other chromosomal aberrations, such as 5p15-5p, 7q11.23 and 4p16.3. METHODS: Twenty-seven patients with CTDs, including 12 pulmonary atresia (PA), 10 double-outlet right ventricle (DORV), 3 transposition of great arteries (TGA), 1 tetralogy of Fallot (TOF) and one ventricular septal defect (VSD), were enrolled in this study and screened for pathogenic copy number variations (CNVs), using Agilent 8 x 15K targeted aCGH. Real-time quantitative polymerase chain reaction (qPCR) was performed to test the molecular results of targeted aCGH. RESULTS: Four of 27 patients (14.8%) had 22q11.2 CNVs, 1 microdeletion and 3 microduplications. The qPCR test confirmed the microdeletion and microduplication detected by the targeted aCGH. CONCLUSION: Chromosomal abnormalities are a well-known cause of multiple congenital anomalies (MCA). This aCGH, using arrays with high-density coverage in the targeted regions, can detect genomic imbalances including 22q11.2 and 10 other kinds of CNVs effectively and quickly. This approach has the potential to be applied to detect aneuploidy and common microdeletion/microduplication syndromes on a single microarray.
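
    For readers unfamiliar with how a microdeletion or microduplication appears in aCGH data, the toy sketch below smooths probe-level log2(test/reference) ratios within a targeted region and applies simple thresholds. It is purely illustrative and is not the Agilent targeted-array pipeline or the calling criteria used in the study.

```python
# Toy illustration of how copy-number changes show up in aCGH probe data:
# smooth the log2(test/reference) ratios in a targeted region and threshold.
# Not the targeted-array pipeline or thresholds used in the study.
import numpy as np
import pandas as pd

def call_region(log2_ratios, window=11, deletion_thr=-0.4, duplication_thr=0.3):
    smooth = pd.Series(log2_ratios).rolling(window, center=True, min_periods=1).median()
    if smooth.median() <= deletion_thr:
        return "microdeletion"
    if smooth.median() >= duplication_thr:
        return "microduplication"
    return "no copy-number change"

rng = np.random.default_rng(7)
balanced_region = rng.normal(0.0, 0.15, 300)               # two copies
duplicated_region = rng.normal(np.log2(3 / 2), 0.15, 300)  # three copies, ~ +0.58
print(call_region(balanced_region), "/", call_region(duplicated_region))
```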

  6. Towards an integrated biosensor array for simultaneous and rapid multi-analysis of endocrine disrupting chemicals

    International Nuclear Information System (INIS)

    Scognamiglio, Viviana; Pezzotti, Italo; Pezzotti, Gianni; Cano, Juan; Manfredonia, Ivano; Buonasera, Katia; Arduini, Fabiana; Moscone, Danila; Palleschi, Giuseppe; Giardi, Maria Teresa

    2012-01-01

    Highlights: ► A multitask biosensor for the detection of endocrine disrupting chemicals is proposed. ► The sensing system employs an array of biological recognition elements. ► Amperometric and optical transduction methods are provided in an integrated biosensor together with flow control systems. ► The biosensing device results in an integrated, automatic and portable system for environmental and agrifood application. - Abstract: In this paper we propose the construction and application of a portable multi-purpose biosensor array for the simultaneous detection of a wide range of endocrine disruptor chemicals (EDCs), based on the recognition operated by various enzymes and microorganisms. The developed biosensor combines both electrochemical and optical transduction systems, in order to increase the number of chemical species which can be monitored. Considering the maximum residue level (MRL) of contaminants established by the European Commission, the biosensor system was able to detect most of the chemicals analysed with very high sensitivity. In particular, atrazine and diuron were detected with a limit of detection of 0.5 nM, with an RSD% less than 5%; paraoxon and chlorpyrifos were detected with a limit of detection of 5 μM and 4.5 μM, respectively, with an RSD% less than 6%; catechol and bisphenol A were identified with a limit of detection of 1 μM and 35 μM respectively, with an RSD% less than 5%.

  7. Analysis of an array of piezoelectric energy harvesters connected in series

    International Nuclear Information System (INIS)

    Lin, H C; Wu, P H; Lien, I C; Shu, Y C

    2013-01-01

    This paper investigates the electrical response of a series connection of piezoelectric energy harvesters (PEHs) attached to various interface electronics, including standard and parallel-/series-SSHI (synchronized switch harvesting on inductor) circuits. In contrast to the case of parallel connection of multiple oscillators, the system response is determined by the matrix formulation of charging on a capacitance. In addition, the adoption of an equivalent impedance approach shows that the capacitance matrix can be explicitly expressed in terms of the relevant load impedance. A model problem is proposed for performance evaluation of harvested power under different choices of interface circuits. The result demonstrates that the parallel-SSHI array system exhibits higher power output with moderate bandwidth improvement, while the series-SSHI system delivers a pronounced wideband at the cost of peak harvested power. The standard array system shows a mild ability in power harvesting between these two SSHI systems. Finally, comparisons between the series and parallel connection of oscillators are made, showing the striking contrast of these two cases. (paper)

  8. Numerical Analysis of CNC Milling Chatter Using Embedded Miniature MEMS Microphone Array System

    Directory of Open Access Journals (Sweden)

    Pang-Li Wang

    2018-01-01

    Full Text Available With the increasingly common use of industrial automation for mass production, there are many computer numerical control (CNC) machine tools that require the collection of data from intelligent sensors in order to analyze their processing quality. In general, for high speed rotating machines, an accelerometer can be attached to the spindle to collect the data from the detected vibration of the CNC. However, due to their cost, accelerometers have not been widely adopted for use with typical CNC machine tools. This study sought to develop an embedded miniature MEMS microphone array system (radius 5.25 cm, 8 channels) to discover the vibration source of the CNC through spatial phase-array processing. The proposed method utilizes voice activity detection (VAD) to distinguish between the presence and absence of abnormal noise in the pre-stage, and utilizes the traditional direction of arrival (DOA) method via multiple signal classification (MUSIC) to isolate the spatial orientation of the noise source in post-processing. In the numerical simulation, the non-interfering noise source location is calibrated in the anechoic chamber, and the system is then tested with real milling processing in the milling machine. Despite the resulting high background noise level, the vibration sound source is localized more accurately in the presented energy gradation graphs than with the traditional MUSIC method.
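
    The post-processing step named above, MUSIC-based direction of arrival estimation, can be illustrated with a few lines of linear algebra. The sketch below uses a uniform linear array with synthetic narrowband data rather than the paper's circular eight-microphone array and real milling noise, so the steering vectors and geometry are simplifications; only the MUSIC subspace step itself is shown.

```python
# Minimal narrowband MUSIC direction-of-arrival sketch on a uniform linear array
# with synthetic data. Illustrates only the MUSIC subspace step, not the full
# chain (VAD pre-stage, circular-array steering vectors, real milling noise).
import numpy as np

def music_spectrum(snapshots, n_sources, spacing_wl=0.5, angles=np.linspace(-90, 90, 361)):
    """snapshots: (n_mics, n_snapshots) complex narrowband data."""
    n_mics = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)                 # ascending eigenvalues
    En = eigvecs[:, : n_mics - n_sources]                # noise subspace
    k = np.arange(n_mics)
    p = []
    for th in np.deg2rad(angles):
        a = np.exp(1j * 2 * np.pi * spacing_wl * k * np.sin(th))
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(p)

# Synthetic test: one source at +20 degrees, 8 mics, half-wavelength spacing.
rng = np.random.default_rng(0)
n_mics, n_snap, theta = 8, 400, np.deg2rad(20)
a = np.exp(1j * 2 * np.pi * 0.5 * np.arange(n_mics) * np.sin(theta))
s = (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)) / np.sqrt(2)
x = np.outer(a, s) + 0.1 * (rng.normal(size=(n_mics, n_snap)) + 1j * rng.normal(size=(n_mics, n_snap)))
angles, p = music_spectrum(x, n_sources=1)
print(f"Estimated DOA: {angles[np.argmax(p)]:.1f} degrees")
```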

  9. Analysis of Precursor Properties of mixed Al/Alumel Cylindrical Wire Arrays*

    Science.gov (United States)

    Stafford, A.; Safronova, A. S.; Kantsyrev, V. L.; Esaulov, A. A.; Weller, M. E.; Shrestha, I.; Osborne, G. C.; Shlyaptseva, V. V.; Keim, S. F.; Coverdale, C. A.; Chuvatin, A. S.

    2012-10-01

    Previous studies of mid-Z (Cu and Ni) cylindrical wire arrays (CWAs) on Zebra have found precursors with high electron temperatures of >300 eV. However, past experiments with Al CWAs did not find the same high temperature precursors. New precursor experiments using mixed Al/Alumel (Ni 95%, Si 2%, and Al 2%) cylindrical wire arrays have been performed to understand how the properties of L-shell Ni precursor will change and whether Al precursor will be observed. Time gated spectra and pinholes are used to determine precursor plasma conditions for comparison with previous Alumel precursor experiments. A full diagnostic set which included more than ten different beam-lines was implemented. Future work in this direction is discussed. *This work was supported by NNSA under DOE Cooperative Agreements DE-FC52-06NA27588, and in part by DE-FC52-06NA27586, and DE-FC52-06NA27616. Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.

  10. Analysis of a Combined Antenna Arrays and Reverse-Link Synchronous DS-CDMA System over Multipath Rician Fading Channels

    Directory of Open Access Journals (Sweden)

    Kim Yong-Seok

    2005-01-01

    Full Text Available We present the BER analysis of an antenna array (AA) receiver in reverse-link asynchronous multipath Rician channels and analyze the performance of an improved AA system which applies a reverse-link synchronous transmission technique (RLSTT) in order to effectively make a better estimation of covariance matrices at a beamformer-RAKE receiver. In this work, we provide a comprehensive analysis of user capacity which reflects several important factors such as the ratio of the specular component power to the Rayleigh fading power, the shape of the multipath intensity profile, and the number of antennas. Theoretical analysis demonstrates that for the case of a strong specular path's power or for a high decay factor, the employment of RLSTT along with AA has the potential of improving the achievable capacity by an order of magnitude.

  11. Micro-hole array fluorescent sensor based on AC-Dielectrophoresis (DEP) for simultaneous analysis of nano-molecules

    Science.gov (United States)

    Kim, Hye Jin; Kang, Dong-Hoon; Lee, Eunji; Hwang, Kyo Seon; Shin, Hyun-Joon; Kim, Jinsik

    2018-02-01

    We propose a simple fluorescent bio-chip based on two types of alternating-current dielectrophoretic (AC-DEP) force, attractive (positive DEP) and repulsive (negative DEP), for simultaneous analysis of nano-molecules. Micro-holes of various radii on the bio-chip are designed to apply the different AC-DEP forces, and the nano-molecules are concentrated inside the micro-hole arrays according to the intensity of the DEP force. The bio-chip was fabricated by Micro Electro Mechanical System (MEMS) techniques and was composed of two layers: a SiO2 layer serving as an insulation layer and a Ta/Pt layer serving as a top electrode with micro-hole arrays for applying the electric fields that generate the DEP force. The SiO2 and Ta/Pt layers were deposited by thermal oxidation and sputtering, respectively, and the micro-hole arrays were fabricated with an Inductively Coupled Plasma (ICP) etching process. To generate positive and negative DEP at the micro-holes, we alternately applied two types of sine-wave AC voltage with different frequency ranges. The intensity of the DEP force was controlled by the radius of the micro-hole and the size of the nano-molecule, and was calculated with COMSOL Multiphysics. Three types of nano-molecules labelled with different fluorescent dyes were used, and their fluorescence intensities were examined by optical analysis after applying the DEP force. By analyzing the fluorescence intensities of the nano-molecules, we verify that the various nano-molecules in the analyte are successfully located inside the corresponding micro-holes of different radii according to their size.
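
    The frequency dependence that lets the chip switch between positive and negative DEP comes from the Clausius-Mossotti factor. For a homogeneous spherical particle the time-averaged DEP force is proportional to Re[K(w)], with K(w) = (eps_p* - eps_m*)/(eps_p* + 2 eps_m*) and eps* = eps - j*sigma/w, so the force sign flips at a crossover frequency. The sketch below evaluates this textbook expression for an assumed polystyrene-like bead in a low-conductivity buffer; the parameter values are not those of the nano-molecules or medium used in the paper.

```python
# Evaluate the real part of the Clausius-Mossotti factor versus frequency for a
# homogeneous spherical particle: Re[K] > 0 gives positive (attractive) DEP,
# Re[K] < 0 gives negative (repulsive) DEP. Parameter values are illustrative
# (a polystyrene-like bead in a low-conductivity buffer), not the paper's system.
import numpy as np

EPS0 = 8.854e-12                      # vacuum permittivity, F/m

def re_clausius_mossotti(freq_hz, eps_p, sig_p, eps_m, sig_m):
    w = 2 * np.pi * freq_hz
    ep = eps_p * EPS0 - 1j * sig_p / w    # complex permittivity of the particle
    em = eps_m * EPS0 - 1j * sig_m / w    # complex permittivity of the medium
    return np.real((ep - em) / (ep + 2 * em))

freqs = np.logspace(3, 8, 200)
re_k = re_clausius_mossotti(freqs, eps_p=2.5, sig_p=1e-2, eps_m=78.0, sig_m=1e-3)
crossover = freqs[np.argmin(np.abs(re_k))]
print(f"Re[K] changes sign near {crossover:.3g} Hz for these assumed parameters")
```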

  12. A control center design revisited: learning from users’ appropriation

    DEFF Research Database (Denmark)

    Souza da Conceição, Carolina; Cordeiro, Cláudia

    2014-01-01

    This paper aims to present the lessons learned during a control center design project by revisiting another control center from the same company designed two and a half years before by the same project team. In light of the experience with the first project and its analysis, the designers and res...

  13. Analysis of X-ray iron and nickel radiation and jets from planar wire arrays and X-pinches

    International Nuclear Information System (INIS)

    Safronova, A S; Kantsyrev, V L; Esaulov, A A; Ouart, N D; Shlyaptseva, V; Williamson, K M; Shrestha, I; Osborne, G C; Weller, M E

    2010-01-01

    University-scale Z-pinch devices are able to produce plasmas with a broad range of sizes, temperatures, densities, their gradients, and opacity properties. Radiative properties of such plasmas depend on material, mass, and configuration of the wire array loads. Experiments with two different types of loads, double planar wire arrays (DPWA) and X-pinches, performed on the 1 MA Zebra generator at UNR are analyzed. X-pinches are made from Stainless Steel (69% Fe, 20% Cr, and 9% Ni) wires. Combined DPWAs consist of one plane from SS wires and another plane from Alumel (95% Ni, 2% Al, 2% Si) wires. The main focus of this work is on the analysis of plasma jets at the early phase of plasma formation and the K-and L-shell radiation generation at the implosion and stagnation phases in experiments with the two aforementioned wire loads. The relevant theoretical tools that guide the data analysis include non-LTE collisional-radiative and wire ablation dynamics models. The astrophysical relevance of the plasma jets as well as of spectroscopic and imaging studies are demonstrated.

  14. One-leg hop kinematics 20 years following anterior cruciate ligament rupture: Data revisited using functional data analysis.

    Science.gov (United States)

    Hébert-Losier, Kim; Pini, Alessia; Vantini, Simone; Strandberg, Johan; Abramowicz, Konrad; Schelin, Lina; Häger, Charlotte K

    2015-12-01

    Despite interventions, anterior cruciate ligament ruptures can cause long-term deficits. To assist in identifying and treating deficiencies, 3D-motion analysis is used for objectivizing data. Conventional statistics are commonly employed to analyze kinematics, reducing continuous data series to discrete variables. Conversely, functional data analysis considers the entire data series. Here, we employ functional data analysis to examine and compare the entire time-domain of knee-kinematic curves from one-leg hops between and within three groups. All subjects (n=95) were part of a long-term follow-up study involving anterior cruciate ligament ruptures treated ~20 years ago conservatively with physiotherapy only or with reconstructive surgery and physiotherapy, and matched knee-healthy controls. Between-group differences (injured leg, treated groups; non-dominant leg, controls) were identified during the take-off and landing phases, and in the sagittal (flexion/extension) rather than coronal (abduction/adduction) and transverse (internal/external) planes. Overall, surgical and control groups demonstrated comparable knee-kinematic curves. However, compared to controls, the physiotherapy-only group exhibited less flexion during the take-off (0-55% of the normalized phase) and landing (44-73%) phases. Between-leg differences were absent in controls and the surgically treated group, but observed in the physiotherapy-only group during the flight (4-22%, injured leg > flexion) and landing (57-85%) phases. Functional data analysis identified specific functional knee-joint deviations from controls persisting 20 years post anterior cruciate ligament rupture, especially when treated conservatively. This approach is suggested as a means for comprehensively analyzing complex movements, adding to previous analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
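
    Functional data analysis compares whole curves rather than pre-selected discrete variables. The sketch below is not the interval-wise testing procedure used in the study; it is only a simplified pointwise stand-in on synthetic knee-flexion curves, with group sizes, curve shapes and noise levels invented for illustration.

```python
import numpy as np
from scipy import stats

# Synthetic knee-flexion curves over a normalized movement phase (0-100%);
# the real study used interval-wise functional tests on measured kinematics.
phase = np.linspace(0, 100, 101)
rng = np.random.default_rng(0)
group_a = 40 + 20 * np.sin(np.pi * phase / 100) + rng.normal(0, 3, (30, 101))  # e.g. controls
group_b = 35 + 18 * np.sin(np.pi * phase / 100) + rng.normal(0, 3, (25, 101))  # e.g. treated

# Pointwise Welch t-tests: which parts of the phase differ between groups?
t, p = stats.ttest_ind(group_a, group_b, axis=0, equal_var=False)
significant = p < 0.05
print("Phase points with group differences (%):", phase[significant].astype(int))
```

    Interval-wise testing, as used in the paper, additionally controls the error rate over intervals of the normalized phase rather than at isolated points.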

  15. Multiple and substitute addictions involving prescription drugs misuse among 12th graders: gateway theory revisited with Market Basket Analysis.

    Science.gov (United States)

    Jayawardene, Wasantha Parakrama; YoussefAgha, Ahmed Hassan

    2014-01-01

    This study aimed to identify the sequential patterns of drug use initiation, which included prescription drugs misuse (PDM), among 12th-grade students in Indiana. The study also tested the suitability of the data mining method Market Basket Analysis (MBA) for detecting common drug use initiation sequences in large-scale surveys. Data from the 2007 to 2009 Annual Surveys of Alcohol, Tobacco, and Other Drug Use by Indiana Children and Adolescents were used for this study. A close-ended, self-administered questionnaire was used to ask adolescents about the use of 21 substance categories and the age of first use. The "support%" and "confidence%" statistics of Market Basket Analysis detected multiple and substitute addictions, respectively. The lifetime prevalence of using any addictive substance was 73.3%, and it has been decreasing during the past few years. Although the lifetime prevalence of PDM was 19.2%, it has been increasing. Males and whites were more likely to use drugs and engage in multiple addictions. Market Basket Analysis identified common drug use initiation sequences that involved 11 drugs. High levels of support existed for associations among alcohol, cigarettes, and marijuana, whereas associations that included prescription drugs had medium levels of support. Market Basket Analysis is useful for the detection of common substance use initiation sequences in large-scale surveys. Before initiation of prescription drugs, physicians should consider the adolescents' risk of addiction. Prevention programs should address multiple addictions, substitute addictions, common sequences in drug use initiation, sex and racial differences in PDM, and normative beliefs of parents and adolescents in relation to PDM.
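
    The two MBA statistics named in the abstract are straightforward to compute from "baskets" of substances per respondent. The toy sketch below shows how support% and confidence% are obtained; the baskets are invented and the thresholds used in the study are not reproduced.

```python
# Toy "baskets": the set of substances each adolescent reports having initiated.
baskets = [
    {"alcohol", "cigarettes", "marijuana"},
    {"alcohol", "cigarettes"},
    {"alcohol", "marijuana", "prescription"},
    {"alcohol"},
    {"cigarettes", "prescription"},
]

def support(itemset, baskets):
    """Fraction of baskets containing every item in the itemset."""
    return sum(itemset <= b for b in baskets) / len(baskets)

def confidence(antecedent, consequent, baskets):
    """Estimated P(consequent | antecedent): support of the union over support of the antecedent."""
    return support(antecedent | consequent, baskets) / support(antecedent, baskets)

print("support(alcohol, cigarettes) =", support({"alcohol", "cigarettes"}, baskets))
print("confidence(alcohol -> marijuana) =", confidence({"alcohol"}, {"marijuana"}, baskets))
```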

  16. Break-even analysis revisited: the need to adjust for profitability, the collection rate and autonomous income.

    Science.gov (United States)

    Broyles, R W; Narine, L; Khaliq, A

    2003-08-01

    This paper modifies traditional break-even analysis and develops a model that reflects the influence of variation in payer mix, the collection rate, profitability and autonomous income on the desired volume alternative. The augmented model indicates that a failure to adjust for uncollectibles and the net surplus results in a systematic understatement of the desired volume alternative. Conversely, a failure to adjust for autonomous income derived from the operation of cafeterias, gift shops or an organization's investment in marketable securities produces an overstatement of the desired volume. In addition, this paper uses Microsoft Excel to develop a spreadsheet that constructs a pro forma income statement, expressed in terms of the contribution margin. The spreadsheet also relies on the percentage of sales or revenue approach to prepare a balance sheet from which indicators of fiscal performance are calculated. Hence, the analysis enables the organization to perform a sensitivity analysis of potential changes in the desired volume, the operating margin, the current ratio, the debt:equity ratio and the amount of cash derived from operations that are associated with expected variation in payer mix, the collection rate, grouped by payer, the net surplus and autonomous income.
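
    The adjustments listed in the abstract can be folded into a small break-even formula. The function below is one plausible reading of that model (collection rate applied to revenue per unit; desired surplus added to, and autonomous income subtracted from, the fixed-cost numerator); the exact specification in the paper may differ, and all numbers are illustrative.

```python
def desired_volume(fixed_costs, target_surplus, autonomous_income,
                   price, variable_cost, collection_rate):
    """Break-even-style volume adjusted for collections, surplus and autonomous income.

    Net realized revenue per unit = price * collection_rate; the numerator adds the
    desired surplus and subtracts income earned independently of patient volume.
    """
    contribution_per_unit = price * collection_rate - variable_cost
    return (fixed_costs + target_surplus - autonomous_income) / contribution_per_unit

# Illustrative numbers only.
q_naive = desired_volume(2_000_000, 0, 0, 400, 150, 1.0)
q_adjusted = desired_volume(2_000_000, 250_000, 100_000, 400, 150, 0.85)
print(f"naive volume: {q_naive:,.0f}  adjusted volume: {q_adjusted:,.0f}")
```

    As the abstract notes, ignoring uncollectibles and the surplus understates the required volume while ignoring autonomous income overstates it; the two example calls show the direction of the combined adjustment.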

  17. Revisiting Information Technology tools serving authorship and editorship: a case-guided tutorial to statistical analysis and plagiarism detection.

    Science.gov (United States)

    Bamidis, P D; Lithari, C; Konstantinidis, S T

    2010-12-01

    With the number of scientific papers published in journals, conference proceedings, and international literature ever increasing, authors and reviewers are not only facilitated with an abundance of information, but unfortunately continuously confronted with risks associated with the erroneous copying of another's material. In parallel, Information Communication Technology (ICT) tools provide to researchers novel and continuously more effective ways to analyze and present their work. Software tools regarding statistical analysis offer scientists the chance to validate their work and enhance the quality of published papers. Moreover, from the reviewers and the editor's perspective, it is now possible to ensure the (text-content) originality of a scientific article with automated software tools for plagiarism detection. In this paper, we provide a step-by-step demonstration of two categories of tools, namely, statistical analysis and plagiarism detection. The aim is not to come up with a specific tool recommendation, but rather to provide useful guidelines on the proper use and efficiency of either category of tools. In the context of this special issue, this paper offers a useful tutorial to specific problems concerned with scientific writing and review discourse. A specific neuroscience experimental case example is utilized to illustrate the young researcher's statistical analysis burden, while a test scenario is purpose-built using open access journal articles to exemplify the use and comparative outputs of seven plagiarism detection software pieces.

  18. Revisiting Information Technology tools serving authorship and editorship: a case-guided tutorial to statistical analysis and plagiarism detection

    Science.gov (United States)

    Bamidis, P D; Lithari, C; Konstantinidis, S T

    2010-01-01

    With the number of scientific papers published in journals, conference proceedings, and international literature ever increasing, authors and reviewers are not only facilitated with an abundance of information, but unfortunately continuously confronted with risks associated with the erroneous copying of another's material. In parallel, Information Communication Technology (ICT) tools provide to researchers novel and continuously more effective ways to analyze and present their work. Software tools regarding statistical analysis offer scientists the chance to validate their work and enhance the quality of published papers. Moreover, from the reviewers and the editor's perspective, it is now possible to ensure the (text-content) originality of a scientific article with automated software tools for plagiarism detection. In this paper, we provide a step-by-step demonstration of two categories of tools, namely, statistical analysis and plagiarism detection. The aim is not to come up with a specific tool recommendation, but rather to provide useful guidelines on the proper use and efficiency of either category of tools. In the context of this special issue, this paper offers a useful tutorial to specific problems concerned with scientific writing and review discourse. A specific neuroscience experimental case example is utilized to illustrate the young researcher's statistical analysis burden, while a test scenario is purpose-built using open access journal articles to exemplify the use and comparative outputs of seven plagiarism detection software pieces. PMID:21487489

  19. Array capabilities and future arrays

    International Nuclear Information System (INIS)

    Radford, D.

    1993-01-01

    Early results from the new third-generation instruments GAMMASPHERE and EUROGAM are confirming the expectation that such arrays will have a revolutionary effect on the field of high-spin nuclear structure. When completed, GAMMASPHERE will have a resolving power an order of magnitude greater than that of the best second-generation arrays. When combined with other instruments such as particle-detector arrays and fragment mass analysers, the capabilities of the arrays for the study of more exotic nuclei will be further enhanced. In order to better understand the limitations of these instruments, and to design improved future detector systems, it is important to have some intelligible and reliable calculation for the relative resolving power of different instrument designs. The derivation of such a figure of merit will be briefly presented, and the relative sensitivities of arrays currently proposed or under construction presented. The design of TRIGAM, a new third-generation array proposed for Chalk River, will also be discussed. It is instructive to consider how far arrays of Compton-suppressed Ge detectors could be taken. For example, it will be shown that an idealised 'perfect' third-generation array of 1000 detectors has a sensitivity an order of magnitude higher again than that of GAMMASPHERE. Less conventional options for new arrays will also be explored.

  20. The analysis of colour uniformity for a volumetric display based on a rotating LED array

    International Nuclear Information System (INIS)

    Wu, Jiang; Liu, Xu; Yan, Caijie; Xia, XinXing; Li, Haifeng

    2011-01-01

    There is a colour nonuniformity zone in three-dimensional (3D) volumetric displays based on a rotating colour light-emitting diode (LED) array. We analyse the reason for the colour nonuniformity zone by measuring the light intensity distribution and chromaticity coordinates of the LEDs in the volumetric display. Two boundaries of the colour nonuniformity zone are calculated. We measure the colour uniformity for a single cuboid of 3×3×4 voxels displaying red, green, blue and white at different horizontal viewing angles, and for 64 cuboids distributed over the whole cylindrical image space with a fixed viewpoint. To evaluate the colour uniformity of a 3D image, we propose three evaluation indices of colour uniformity: the average of the colour difference, the maximum colour difference and the variance of the colour difference. The measurement results show that the character of colour uniformity is different for the 3D volumetric display and a two-dimensional display.
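
    The three proposed indices can be computed directly from measured colour coordinates. The sketch below assumes a CIELAB Delta-E colour difference as the underlying metric, which the abstract does not specify, and uses made-up voxel colours.

```python
import numpy as np

def colour_uniformity_indices(ref_lab, samples_lab):
    """Average, maximum and variance of colour differences from a reference colour.

    Euclidean distance in CIELAB (Delta-E*ab) is assumed as the colour-difference measure.
    """
    diffs = np.linalg.norm(samples_lab - ref_lab, axis=1)
    return diffs.mean(), diffs.max(), diffs.var()

ref = np.array([60.0, 10.0, 5.0])                               # reference voxel colour (L*, a*, b*)
samples = ref + np.random.default_rng(1).normal(0, 2, (64, 3))  # e.g. 64 measured cuboids
avg, mx, var = colour_uniformity_indices(ref, samples)
print(f"average dE = {avg:.2f}, max dE = {mx:.2f}, variance = {var:.2f}")
```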

  1. Computer programs for the acquisition and analysis of eddy-current array probe data

    International Nuclear Information System (INIS)

    Pate, J.R.; Dodd, C.V.

    1996-07-01

    The objective of the Improved Eddy-Current ISI (in-service inspection) for Steam Generators Tubing program is to upgrade and validate eddy-current inspections, including probes, instrumentation, and data processing techniques for ISI of new, used, and repaired steam generator tubes; to improve defect detection, classification and characterization as affected by diameter and thickness variations, denting, probe wobble, tube sheet, tube supports, copper and sludge deposits, even when defect types and other variables occur in combination; and to transfer this advanced technology to NRC's mobile NDE laboratory and staff. This report documents computer programs that were developed for the acquisition of eddy-current data from specially designed 16-coil array probes. Complete code as well as instructions for use are provided.

  2. Field Programmable Gate Array Reliability Analysis Guidelines for Launch Vehicle Reliability Block Diagrams

    Science.gov (United States)

    Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.

    2017-01-01

    Field Programmable Gate Array (FPGA) integrated circuits (ICs) are among the key electronic components in the complex avionic systems of today's sophisticated launch and space vehicles, largely due to their superb reprogrammable and reconfigurable capabilities combined with relatively low non-recurring engineering (NRE) costs and a short design cycle. Consequently, FPGAs are prevalent ICs in communication protocols and control signal commands. This paper will identify reliability concerns and high-level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high-level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.

  3. Analysis of steady state temperature distribution in rod arrays exposed to stagnant gaseous environment

    International Nuclear Information System (INIS)

    Pal, G.; MaarKandeya, S.G.; Raj, V.V.

    1991-01-01

    This paper deals with the calculation of radiative heat exchange in a rod array exposed to a stagnant gaseous environment. A computer code has been developed for this purpose and has been used for predicting the steady-state temperature distribution in a nuclear fuel sub-assembly. Nuclear fuels continue to generate heat even after their removal from the core. During the transfer of nuclear fuel sub-assemblies from the core to the storage bay, they pass through a stagnant gaseous environment and may remain there for extended periods under abnormal conditions. Radiative heat exchange will be the dominant mode within the sub-assembly involved, although axial heat conduction through the fuel pins also needs to be accounted for. A computer code, RHEINA-3D (Radiative Heat Exchange In Nuclear Assemblies - 3D), has been developed based on a simplified numerical model which considers both of the above-mentioned phenomena. The analytical model and the results obtained are briefly discussed in this paper.
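
    As a loose illustration of why radiation dominates in a stagnant gas, a textbook two-surface gray-body exchange shows the T^4 scaling; this is not the RHEINA-3D model, which resolves rod-to-rod view factors and axial conduction, and the temperatures and emissivities below are assumed.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_flux(T_hot, T_cold, eps_hot, eps_cold):
    """Net radiative flux between two parallel gray surfaces (infinite-plate approximation)."""
    return SIGMA * (T_hot**4 - T_cold**4) / (1 / eps_hot + 1 / eps_cold - 1)

# Illustrative values for a decay-heated pin facing a cooler wrapper surface.
print(f"{radiative_flux(900.0, 600.0, 0.8, 0.7):.0f} W/m^2")
```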

  4. Luminance and image quality analysis of an organic electroluminescent panel with a patterned microlens array attachment

    International Nuclear Information System (INIS)

    Lin, Hoang Yan; Chen, Kuan-Yu; Ho, Yu-Hsuan; Fang, Jheng-Hao; Hsu, Sheng-Chih; Lee, Jiun-Haw; Lin, Jia-Rong; Wei, Mao-Kuo

    2010-01-01

    Luminance and image quality observed from the normal direction of a commercial 2.0 inch panel based on organic electroluminescence (OEL) technology, attached to regular and patterned microlens array films (MAFs), were studied and analyzed. When applying the regularly arranged MAF to the panel, a luminance enhancement of 23% was observed, accompanied by a reduction of the image quality index to as low as 74%. By removing the microlenses over the emitting areas, the patterned MAF enhances the luminance efficiency of the OEL by 52% while keeping the image quality index of the display as high as 94%, owing to the effective extraction of light in the glass substrate at angles less than the critical angle. A 3D simulation based on a ray-tracing model was also established to investigate the spatial distribution of light rays radiated from an OEL pixel with different microstructures, and its results were consistent with the experiments.

  5. Performance Analysis of Blind Beamforming Algorithms in Adaptive Antenna Array in Rayleigh Fading Channel Model

    International Nuclear Information System (INIS)

    Yasin, M; Akhtar, Pervez; Pathan, Amir Hassan

    2013-01-01

    In this paper, we analyze the performance of the adaptive blind algorithms Kaiser Constant Modulus Algorithm (KCMA) and Hamming CMA (HAMCMA), in comparison with CMA, in a wireless cellular communication system using a digital modulation technique. These blind algorithms are used in the digital signal processor of an adaptive antenna to make it smart and to change the weights of the antenna array system dynamically. The simulation results revealed that KCMA and HAMCMA provide a lower mean square error (MSE), antenna gain enhancements of 1.247 dB and 1.077 dB respectively, and a 75% reduction in bit error rate (BER) compared with CMA. Therefore, the KCMA and HAMCMA algorithms give a cost-effective solution for a communication system.
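
    The windowed variants in the paper build on the classic CMA(2,2) stochastic-gradient update, which is sketched below; the array geometry, step size and toy signal are assumptions, and the Kaiser/Hamming weighting of KCMA/HAMCMA is not implemented here.

```python
import numpy as np

def cma_beamform(X, mu=1e-3, n_iter=2000, R2=1.0):
    """Plain CMA(2,2) weight adaptation for an antenna array.

    X: (num_elements, num_snapshots) complex array snapshots.
    Minimizes E[(|y|^2 - R2)^2] with y = w^H x by a stochastic-gradient update.
    """
    n_elem, n_snap = X.shape
    w = np.zeros(n_elem, dtype=complex)
    w[0] = 1.0                                     # simple single-element initialization
    for k in range(n_iter):
        x = X[:, k % n_snap]
        y = np.vdot(w, x)                          # array output y = w^H x
        err = (abs(y) ** 2 - R2) * np.conj(y)      # CMA(2,2) error term
        w -= mu * err * x                          # gradient step on the constant-modulus cost
    return w

# Toy usage: a 4-element half-wavelength-spaced array (assumed geometry) receiving
# a unit-modulus source from 30 degrees in additive noise.
rng = np.random.default_rng(0)
steering = np.exp(1j * np.pi * np.arange(4) * np.sin(np.deg2rad(30)))
s = np.exp(1j * 2 * np.pi * rng.random(5000))
X = np.outer(steering, s) + 0.1 * (rng.standard_normal((4, 5000)) + 1j * rng.standard_normal((4, 5000)))
w = cma_beamform(X)
```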

  6. Analysis of Surface Electric Field Measurements from an Array of Electric Field Mills

    Science.gov (United States)

    Lucas, G.; Thayer, J. P.; Deierling, W.

    2016-12-01

    Kennedy Space Center (KSC) has operated a distributed array of over 30 electric field mills for the past 18 years, providing a unique long-term data set of surface electric field measurements. In addition to the electric field instruments, there are many meteorological towers around KSC that monitor the local meteorological conditions. Utilizing these datasets, we have found unique spatial and temporal signatures in the electric field data that are attributed to local meteorological effects and the global electric circuit. The local- and global-scale influences on the atmospheric electric field will be discussed, including the generation of space charge from the ocean surf, local cloud cover, and a local enhancement in the electric field that is seen at sunrise.

  7. Analysis of the Kanamycin in Raw Milk Using the Suspension Array

    Directory of Open Access Journals (Sweden)

    Yanfei Wang

    2013-01-01

    With the monoclonal antibody against kanamycin prepared successfully, a bead-based indirect competitive fluorescent immunoassay was developed to detect kanamycin in milk. The fact that there was no significant cross-reaction with other aminoglycoside antibiotics implied that the monoclonal antibody was highly specific for kanamycin. The limit of detection (LOD) and the 50% inhibition concentration (IC50) in raw milk were 3.2 ng/mL and 52.5 ng/mL, respectively. Using the method developed in this study, kanamycin concentrations were monitored in raw milk after the intramuscular administration of kanamycin to sick cows. Compared to the conventional enzyme-linked immunosorbent assay (ELISA), the method using the suspension array system was more sensitive. The results obtained in the present study showed a good correlation with those of the ELISA.
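
    The abstract reports an IC50 of 52.5 ng/mL from the competitive standard curve but does not state the curve model; a four-parameter logistic fit is a common choice for such assays and is sketched below with made-up standards and readings.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, bottom, ic50, slope):
    """Four-parameter logistic curve commonly used for competitive immunoassays."""
    return bottom + (top - bottom) / (1 + (x / ic50) ** slope)

# Hypothetical kanamycin standards (ng/mL) and normalized fluorescence readings.
conc = np.array([0.1, 1, 3, 10, 30, 100, 300, 1000])
signal = np.array([0.98, 0.95, 0.90, 0.75, 0.58, 0.35, 0.20, 0.12])

params, _ = curve_fit(four_pl, conc, signal, p0=[1.0, 0.1, 50.0, 1.0])
print(f"fitted IC50 ~= {params[2]:.1f} ng/mL")
```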

  8. Revisiting the analysis of passive plasma shutdown during an ex-vessel loss of coolant accident in ITER blanket

    International Nuclear Information System (INIS)

    Rivas, J.C.; Dies, J.; Fajarnés, X.

    2015-01-01

    Highlights: • We have repeated the safety analysis for the hypothesis of passive plasma shutdown by beryllium evaporation during an ex-vessel LOCA of the ITER first wall, with the AINA code. • We have performed a sensitivity analysis over some key parameters that represent uncertainties in physics and engineering, to identify cliff-edge effects. • The results obtained for the 500 MW inductive scenario, with an ex-vessel LOCA affecting a third of the first wall surface, are similar to those of previous studies and point to the possibility of a passive plasma shutdown in this safety case before serious damage is inflicted on the ITER wall. • The sensitivity analysis revealed a new scenario potentially damaging for the first wall if we increase the fusion power and the time delay for impurity transport, and decrease the fraction of affected first wall area and the initial beryllium fraction in plasma. • After studying the 700 MW inductive scenario, with an ex-vessel LOCA affecting 10% of the first wall surface, 0.5% of Be in plasma and a time delay twice the energy confinement time, it was found that the affected area of the first wall would melt before a passive plasma shutdown occurs. - Abstract: In this contribution, the analysis of passive safety during an ex-vessel loss of coolant accident (LOCA) in the first wall/shield blanket of ITER has been studied with the AINA safety code. In the past, this case has been studied using robust safety arguments, based on simple 0D models for the plasma balance equations and 1D models for wall heat transfer. The conclusion was that, after first wall heat-up due to the loss of all coolant, beryllium evaporation at the wall surface would induce a growing impurity flux into the core plasma that would finally end in a passive shutdown of the discharge. The analysis of plasma-wall transients in this work is based on results from AINA code simulations. AINA (Analyses of IN vessel Accidents) code is a safety code developed at Fusion Energy Engineering
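
    The abstract refers to safety arguments built on simple 0-D plasma balance models. The sketch below is a generic, illustrative stored-energy balance (not the AINA model) showing how a growing impurity-radiation term can quench the discharge; all powers, the confinement time and the radiation ramp are assumed, and alpha heating is held fixed for simplicity.

```python
import numpy as np

def evolve_stored_energy(P_aux, P_alpha, tau_E, P_rad, W0, dt=0.01, t_end=20.0):
    """Generic 0-D plasma energy balance: dW/dt = P_alpha + P_aux - W/tau_E - P_rad(t).

    A textbook-style balance for illustration only; a real safety code couples this
    to wall heat transfer and impurity influx, and lets the alpha power respond to W.
    """
    t = np.arange(0.0, t_end, dt)
    W = np.empty_like(t)
    W[0] = W0
    for i in range(1, len(t)):
        dWdt = P_alpha + P_aux - W[i - 1] / tau_E - P_rad(t[i - 1])
        W[i] = max(W[i - 1] + dt * dWdt, 0.0)
    return t, W

# Assumed numbers (MW, MJ, s): impurity radiation ramps up as the evaporated flux grows.
ramp = lambda t: 20.0 + 180.0 * min(t / 10.0, 1.0)
t, W = evolve_stored_energy(P_aux=50.0, P_alpha=100.0, tau_E=3.7, P_rad=ramp, W0=350.0)
print(f"stored energy after {t[-1]:.0f} s: {W[-1]:.0f} MJ")
```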

  9. Improved Protein Arrays for Quantitative Systems Analysis of the Dynamics of Signaling Pathway Interactions

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Chin-Rang [National Inst. of Health (NIH), Bethesda, MD (United States). National Heart, Lung and Blood Inst.

    2013-12-11

    Astronauts and workers in nuclear plants who are repeatedly exposed to low doses of ionizing radiation (IR, <10 cGy) are likely to incur specific changes in signal transduction and gene expression in various tissues of their body. Remarkable advances in high-throughput genomics and proteomics technologies enable researchers to broaden their focus from examining single gene/protein kinetics to better understanding global gene/protein expression profiling and biological pathway analyses, namely Systems Biology. An ultimate goal of systems biology is to develop dynamic mathematical models of interacting biological systems capable of simulating living systems in a computer. This Glue Grant complements Dr. Boothman’s existing DOE grant (No. DE-FG02-06ER64186) entitled “The IGF1/IGF-1R-MAPK-Secretory Clusterin (sCLU) Pathway: Mediator of a Low Dose IR-Inducible Bystander Effect” by developing sensitive and quantitative proteomic technology suitable for low-dose radiobiology research. An improved version of a quantitative protein array platform utilizing linear quantum dot signaling for systematically measuring protein levels and phosphorylation states for systems biology modeling is presented. The signals are amplified by a confocal laser quantum dot scanner, resulting in ~1000-fold higher sensitivity than traditional Western blots, with a linearity that HRP-amplified signals cannot provide. This improved protein array technology is therefore suitable for detecting the weak responses to low-dose radiation. Software was developed to facilitate the quantitative readout of signaling network activities. The kinetics of EGFRvIII mutant signaling was analyzed to quantify cross-talk between EGFR and other signaling pathways.

  10. Optical analysis of a III-V-nanowire-array-on-Si dual junction solar cell.

    Science.gov (United States)

    Chen, Yang; Höhn, Oliver; Tucher, Nico; Pistol, Mats-Erik; Anttu, Nicklas

    2017-08-07

    A tandem solar cell consisting of a III-V nanowire subcell on top of a planar Si subcell is a promising candidate for next generation photovoltaics due to the potential for high efficiency. However, for success with such applications, the geometry of the system must be optimized for absorption of sunlight. Here, we consider this absorption through optics modeling. As for a bulk dual-junction tandem system on a silicon bottom cell, a bandgap of approximately 1.7 eV is optimum for the nanowire top cell. First, we consider a simplified system of bare, uncoated III-V nanowires on the silicon substrate and optimize the absorption in the nanowires. We find that an optimum absorption in 2000 nm long nanowires is reached for a dense array of approximately 15 nanowires per square micrometer. However, when we coat such an array with a conformal indium tin oxide (ITO) top contact layer, a substantial absorption loss occurs in the ITO. This ITO could absorb 37% of the low energy photons intended for the silicon subcell. By moving to a design with a 50 nm thick, planarized ITO top layer, we can reduce this ITO absorption to 5%. However, such a planarized design introduces additional reflection losses. We show that these reflection losses can be reduced with a 100 nm thick SiO2 anti-reflection coating on top of the ITO layer. When we at the same time include a Si3N4 layer with a thickness of 90 nm on the silicon surface between the nanowires, we can reduce the average reflection loss of the silicon cell from 17% to 4%. Finally, we show that different approximate models for the absorption in the silicon substrate can lead to a 15% variation in the estimated photocurrent density in the silicon subcell.
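
    The coating thicknesses are optimized in the paper with full optics modeling; as a first-order sanity check, they sit close to quarter-wave values for assumed refractive indices (n of roughly 1.46 for SiO2 and 2.0 for Si3N4, not quoted from the paper).

```python
def quarter_wave_thickness(wavelength_nm, n_coating):
    """Quarter-wave anti-reflection coating thickness: d = lambda / (4 n)."""
    return wavelength_nm / (4.0 * n_coating)

# Assumed indices and design wavelengths; compare with the paper's 100 nm SiO2 and 90 nm Si3N4.
print(f"SiO2  (n~1.46): {quarter_wave_thickness(600, 1.46):.0f} nm")
print(f"Si3N4 (n~2.0):  {quarter_wave_thickness(720, 2.0):.0f} nm")
```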

  11. Responses of murine and human macrophages to leptospiral infection: a study using comparative array analysis.

    Directory of Open Access Journals (Sweden)

    Feng Xue

    Leptospirosis is a re-emerging tropical infectious disease caused by pathogenic Leptospira spp. The different host innate immune responses are partially related to the different severities of leptospirosis. In this study, we employed transcriptomics and cytokine arrays to comparatively calculate the responses of murine peritoneal macrophages (MPMs) and human peripheral blood monocytes (HBMs) to leptospiral infection. We uncovered a series of different expression profiles of these two immune cells. The percentages of regulated genes in several biological processes of MPMs, such as antigen processing and presentation, membrane potential regulation, and the innate immune response, etc., were much greater than those of HBMs (>2-fold). In MPMs and HBMs, the caspase-8 and Fas-associated protein with death domain (FADD)-like apoptosis regulator genes were significantly up-regulated, which supported previous results that the caspase-8 and caspase-3 pathways play an important role in macrophage apoptosis during leptospiral infection. In addition, the key component of the complement pathway, C3, was only up-regulated in MPMs. Furthermore, several cytokines, e.g. interleukin 10 (IL-10) and tumor necrosis factor alpha (TNF-alpha), were differentially expressed at both mRNA and protein levels in MPMs and HBMs. Some of the differential expressions were proved to be pathogenic Leptospira-specific regulations at mRNA level or protein level. Though it is still unclear why some animals are resistant and others are susceptible to leptospiral infection, this comparative study based on transcriptomics and cytokine arrays partially uncovered the differences of murine resistance and human susceptibility to leptospirosis. Taken together, these findings will facilitate further molecular studies on the innate immune response to leptospiral infection.

  12. Responses of Murine and Human Macrophages to Leptospiral Infection: A Study Using Comparative Array Analysis

    Science.gov (United States)

    Yang, Yingchao; Zhao, Jinping; Yang, Yutao; Cao, Yongguo; Hong, Cailing; Liu, Yuan; Sun, Lan; Huang, Minjun; Gu, Junchao

    2013-01-01

    Leptospirosis is a re-emerging tropical infectious disease caused by pathogenic Leptospira spp. The different host innate immune responses are partially related to the different severities of leptospirosis. In this study, we employed transcriptomics and cytokine arrays to comparatively calculate the responses of murine peritoneal macrophages (MPMs) and human peripheral blood monocytes (HBMs) to leptospiral infection. We uncovered a series of different expression profiles of these two immune cells. The percentages of regulated genes in several biological processes of MPMs, such as antigen processing and presentation, membrane potential regulation, and the innate immune response, etc., were much greater than those of HBMs (>2-fold). In MPMs and HBMs, the caspase-8 and Fas-associated protein with death domain (FADD)-like apoptosis regulator genes were significantly up-regulated, which supported previous results that the caspase-8 and caspase-3 pathways play an important role in macrophage apoptosis during leptospiral infection. In addition, the key component of the complement pathway, C3, was only up-regulated in MPMs. Furthermore, several cytokines, e.g. interleukin 10 (IL-10) and tumor necrosis factor alpha (TNF-alpha), were differentially expressed at both mRNA and protein levels in MPMs and HBMs. Some of the differential expressions were proved to be pathogenic Leptospira-specific regulations at mRNA level or protein level. Though it is still unclear why some animals are resistant and others are susceptible to leptospiral infection, this comparative study based on transcriptomics and cytokine arrays partially uncovered the differences of murine resistance and human susceptibility to leptospirosis. Taken together, these findings will facilitate further molecular studies on the innate immune response to leptospiral infection. PMID:24130911

  13. Cross-platform array comparative genomic hybridization meta-analysis separates hematopoietic and mesenchymal from epithelial tumors

    NARCIS (Netherlands)

    Jong, C.; Marchiori, E.; van der Vaart, A.W.; Chin, S.F.; Carvalho, B; Tijssen, M.; Eijk, P.P.; van den IJssel, P.; Grabsch, H.; Quirke, P.; Oudejans, J.J.; Meijer, G.J.; Caldas, C.; Ylstra, B.

    2007-01-01

    A series of studies have been published that evaluate the chromosomal copy number changes of different tumor classes using array comparative genomic hybridization (array CGH); however, the chromosomal aberrations that distinguish the different tumor classes have not been fully characterized.

  14. Genotype-phenotype analysis of recombinant chromosome 4 syndrome: an array-CGH study and literature review.

    Science.gov (United States)

    Hemmat, Morteza; Hemmat, Omid; Anguiano, Arturo; Boyar, Fatih Z; El Naggar, Mohammed; Wang, Jia-Chi; Wang, Borris T; Sahoo, Trilochan; Owen, Renius; Haddadin, Mary

    2013-05-02

    Recombinant chromosome 4, a rare constitutional rearrangement arising from pericentric inversion, comprises a duplicated segment of 4p13~p15→4pter and a deleted segment of 4q35→4qter. To date, 10 cases of recombinant chromosome 4 have been reported. We describe the second case in which array-CGH was used to characterize recombinant chromosome 4 syndrome. The patient was a one-year-old boy with consistent clinical features. Conventional cytogenetics and FISH documented a recombinant chromosome 4, derived from a paternal pericentric inversion, leading to partial trisomy 4p and partial monosomy of 4q. Array-CGH, performed to further characterize the rearranged chromosome 4 and delineate the breakpoints, documented a small (4.36 Mb) 4q35.1 terminal deletion and a large (23.81 Mb) 4p15.1 terminal duplication. Genotype-phenotype analysis of 10 previously reported cases and the present case indicated relatively consistent clinical features and breakpoints. This consistency was more evident in our case and another characterized by array-CGH, where both showed the common breakpoints of p15.1 and q35.1. A genotype-phenotype correlation study between rec(4), dup(4p), and del(4q) syndromes revealed that urogenital and cardiac defects are probably due to the deletion of 4q whereas the other clinical features are likely due to 4p duplication. Our findings support that the clinical features of patients with rec(4) are relatively consistent and specific to the regions of duplication or deletion. Recombinant chromosome 4 syndrome thus appears to be a discrete entity that can be suspected on the basis of clinical features or specific deleted and duplicated chromosomal regions.

  15. Conclusive evidence for hexasomic inheritance in chrysanthemum based on analysis of a 183 k SNP array.

    Science.gov (United States)

    van Geest, Geert; Voorrips, Roeland E; Esselink, Danny; Post, Aike; Visser, Richard Gf; Arens, Paul

    2017-08-07

    Cultivated chrysanthemum is an outcrossing hexaploid (2n = 6× = 54) with a disputed mode of inheritance. In this paper, we present a single nucleotide polymorphism (SNP) selection pipeline that was used to design an Affymetrix Axiom array with 183 k SNPs from RNA sequencing data (1). With this array, we genotyped four bi-parental populations (with sizes of 405, 53, 76 and 37 offspring plants respectively), and a cultivar panel of 63 genotypes. Further, we present a method for dosage scoring in hexaploids from signal intensities of the array based on mixture models (2) and validation of selection steps in the SNP selection pipeline (3). The resulting genotypic data is used to draw conclusions on the mode of inheritance in chrysanthemum (4), and to make an inference on allelic expression bias (5). With use of the mixture model approach, we successfully called the dosage of 73,936 out of 183,130 SNPs (40.4%) that segregated in any of the bi-parental populations. To investigate the mode of inheritance, we analysed markers that segregated in the large bi-parental population (n = 405). Analysis of segregation of duplex x nulliplex SNPs resulted in evidence for genome-wide hexasomic inheritance. This evidence was substantiated by the absence of strong linkage between markers in repulsion, which indicated absence of full disomic inheritance. We present the success rate of SNP discovery out of RNA sequencing data as affected by different selection steps, among which SNP coverage over genotypes and use of different types of sequence read mapping software. Genomic dosage highly correlated with relative allele coverage from the RNA sequencing data, indicating that most alleles are expressed according to their genomic dosage. The large population, genotyped with a very large number of markers, is a unique framework for extensive genetic analyses in hexaploid chrysanthemum. As starting point, we show conclusive evidence for genome-wide hexasomic inheritance.
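
    The paper scores allele dosage from array signal intensities with mixture models. The sketch below is a simplified stand-in, assuming a plain Gaussian mixture on a per-marker signal fraction with component means initialised at the seven expected hexaploid dosage fractions; it is not the published model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def call_dosage(b_allele_signal, ploidy=6):
    """Cluster per-sample allele-signal fractions into dosage classes 0..ploidy.

    Simplified mixture-model dosage scoring: component means start at 0/6, 1/6, ..., 6/6
    and fitted components are mapped back to dosages by sorting their means.
    """
    x = np.asarray(b_allele_signal).reshape(-1, 1)
    means_init = np.linspace(0, 1, ploidy + 1).reshape(-1, 1)
    gm = GaussianMixture(n_components=ploidy + 1, means_init=means_init,
                         covariance_type="spherical", random_state=0).fit(x)
    components = gm.predict(x)
    order = np.argsort(gm.means_.ravel())
    dosage_of = {comp: dose for dose, comp in enumerate(order)}
    return np.array([dosage_of[c] for c in components])

# Toy usage: noisy signal fractions clustered around multiples of 1/6.
rng = np.random.default_rng(0)
true_dosage = rng.integers(0, 7, 300)
signal = np.clip(true_dosage / 6 + rng.normal(0, 0.03, 300), 0, 1)
print("agreement with simulated truth:", (call_dosage(signal) == true_dosage).mean())
```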

  16. Revisiting the level scheme of the proton emitter 151Lu

    International Nuclear Information System (INIS)

    Wang, F.; Sun, B.H.; Liu, Z.; Scholey, C.; Eeckhaudt, S.; Grahn, T.; Greenlees, P.T.; Jones, P.; Julin, R.; Juutinen, S.; Kettelhut, S.; Leino, M.; Nyman, M.; Rahkila, P.; Saren, J.; Sorri, J.; Uusitalo, J.; Ashley, S.F.; Cullen, I.J.; Garnsworthy, A.B.; Gelletly, W.; Jones, G.A.; Pietri, S.; Podolyak, Z.; Steer, S.; Thompson, N.J.; Walker, P.M.; Williams, S.; Bianco, L.; Darby, I.G.; Joss, D.T.; Page, R.D.; Pakarinen, J.; Rigby, S.; Cullen, D.M.; Khan, S.; Kishada, A.; Gomez-Hornillos, M.B.; Simpson, J.; Jenkins, D.G.; Niikura, M.; Seweryniak, D.; Shizuma, Toshiyuki

    2015-01-01

    An experiment aiming to search for new isomers in the region of the proton emitter 151Lu was performed at the Accelerator Laboratory of the University of Jyväskylä (JYFL), by combining the high-resolution γ-ray array JUROGAM, the gas-filled RITU separator and the GREAT detectors with the triggerless total data readout (TDR) acquisition system. In this proceeding, we revisit the level scheme of 151Lu by using the proton-tagging technique. A level scheme consistent with the latest experimental results is obtained, and 3 additional levels are identified at high excitation energies. (author)

  17. GLINT: a user-friendly toolset for the analysis of high-throughput DNA-methylation array data.

    Science.gov (United States)

    Rahmani, Elior; Yedidim, Reut; Shenhav, Liat; Schweiger, Regev; Weissbrod, Omer; Zaitlen, Noah; Halperin, Eran

    2017-06-15

    GLINT is a user-friendly command-line toolset for fast analysis of genome-wide DNA methylation data generated using the Illumina human methylation arrays. GLINT, which does not require any programming proficiency, allows easy execution of an Epigenome-Wide Association Study (EWAS) analysis pipeline under different models while accounting for known confounders in methylation data. GLINT is command-line software, freely available at https://github.com/cozygene/glint/releases . It requires Python 2.7 and several freely available Python packages. Further information and documentation as well as a quick start tutorial are available at http://glint-epigenetics.readthedocs.io . elior.rahmani@gmail.com or ehalperin@cs.ucla.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  18. Revisiting persuasion in oral academic and professional genres: Towards a methodological framework for Multimodal Discourse Analysis of research dissemination talks

    Directory of Open Access Journals (Sweden)

    Julia Valeiras-Jurado

    2018-04-01

    Previous work on oral genres (Kress & Van Leeuwen, 2001; Kress, 2010; Bateman, 2011) as well as on persuasion (O’Keefe, 2002; Perloff, 2003; Poggi & Pelachaud, 2008) has indicated that effective persuasive oral communication depends heavily on the use of a wide range of different semiotic modes including words, gestures and intonation. However, little attention has been paid so far to how speakers convey their communicative intentions by orchestrating different modes into a coherent multimodal ensemble (Kress, 2010). In this paper we propose a methodological framework for Multimodal Discourse Analysis (MDA) of persuasion in oral academic and professional genres. Drawing on previous studies on persuasion (Fuertes-Olivera et al., 2001; O’Keefe, 2002; Perloff, 2003; Virtanen & Halmari, 2005; Dafouz-Milne, 2008), our framework combines earlier proposals for MDA (Querol-Julián, 2011; Querol-Julián & Fortanet-Gómez, 2014) with an ethnographic perspective (Rubin & Rubin, 1995). Our study focuses specifically on the analysis of persuasive strategies used in dissemination talks. The proposed MDA caters for the following modes: words, intonation, head movements and gestures. Preliminary findings hint at a relation between persuasion and so-called modal density (Norris, 2004). Finally, we propose a tentative taxonomy of persuasive strategies and how they are realised multimodally.

  19. Revisiting the impact of OXTR rs53576 on empathy: A population-based study and a meta-analysis.

    Science.gov (United States)

    Gong, Pingyuan; Fan, Huiyong; Liu, Jinting; Yang, Xing; Zhang, Kejin; Zhou, Xiaolin

    2017-06-01

    Oxytocin in the brain is related to empathy, which refers to the ability to understand and share others' internal states or responses. Previous studies have investigated the impact of OXTR rs53576, the most intensively examined polymorphism in the oxytocin receptor (OXTR) gene, on individual differences in empathy. However, these studies produced inconsistent results. In the current study, we reexamined the association of OXTR rs53576 with empathy in a relatively large population (N=1830) and also evaluated the association by a comprehensive meta-analysis (N=6631, 13 independent samples). The replication study indicated that OXTR rs53576 was indeed associated with individual differences in empathy. Individuals with a greater number of G alleles showed better empathic ability, particularly in fantasizing other's feelings and actions. The meta-analysis not only confirmed this association, but also indicated that the impact of this polymorphism was significant in both Europeans and Asians. These findings provide convincing evidence for the impact of OXTR rs53576 on empathy, highlighting the importance of OXTR gene in individuals' social cognition. Copyright © 2017. Published by Elsevier Ltd.

  20. Hierarchical structure of the energy landscape of proteins revisited by time series analysis. II. Investigation of explicit solvent effects

    Science.gov (United States)

    Alakent, Burak; Camurdan, Mehmet C.; Doruker, Pemra

    2005-10-01

    Time series analysis tools are employed on the principal modes obtained from the Cα trajectories from two independent molecular-dynamics simulations of α-amylase inhibitor (tendamistat). Fluctuations inside an energy minimum (intraminimum motions), transitions between minima (interminimum motions), and relaxations in different hierarchical energy levels are investigated and compared with those encountered in vacuum by using different sampling window sizes and intervals. The low-frequency low-indexed mode relationship, established in vacuum, is also encountered in water, which shows the reliability of the important dynamics information offered by principal components analysis in water. It has been shown that examining a short data collection period (100 ps) may result in a high population of overdamped modes, while some of the low-frequency oscillations lose memory: future conformations are less dependent on previous conformations due to the lowering of energy barriers in hierarchical levels of the energy landscape. In short-time dynamics this at first sight contradicts expectations; however, this comes about because water enhances the transitions between minima and forces the protein to reduce its already inherent inability to maintain oscillations observed in vacuum. Some of the frequencies lower than 10 cm-1 are found to be overdamped, while those higher than 20 cm-1 are slightly increased. As for the long-time dynamics in water, it is found that random-walk motion is maintained for approximately 200 ps (about five times that in vacuum) in the low-indexed modes, showing the lowering of energy barriers between the higher-level minima.

  1. High Throughput Sample Preparation and Analysis for DNA Sequencing, PCR and Combinatorial Screening of Catalysis Based on Capillary Array Technique

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yonghua [Iowa State Univ., Ames, IA (United States)

    2000-01-01

    Sample preparation has been one of the major bottlenecks for many high-throughput analyses. The purpose of this research was to develop new sample preparation and integration approaches for DNA sequencing, PCR-based DNA analysis and combinatorial screening of homogeneous catalysis based on multiplexed capillary electrophoresis with laser-induced fluorescence or imaging UV absorption detection. The author first introduced a method to integrate the front-end tasks to DNA capillary-array sequencers. Protocols for directly sequencing the plasmids from a single bacterial colony in fused-silica capillaries were developed. After the colony was picked, lysis was accomplished in situ in the plastic sample tube using either a thermocycler or heating block. Upon heating, the plasmids were released while chromosomal DNA and membrane proteins were denatured and precipitated to the bottom of the tube. After adding enzyme and Sanger reagents, the resulting solution was aspirated into the reaction capillaries by a syringe pump, and cycle sequencing was initiated. No deleterious effect upon the reaction efficiency, the on-line purification system, or the capillary electrophoresis separation was observed, even though the crude lysate was used as the template. Multiplexed on-line DNA sequencing data from 8 parallel channels allowed base calling up to 620 bp with an accuracy of 98%. The entire system can be automatically regenerated for repeated operation. For PCR-based DNA analysis, they demonstrated that capillary electrophoresis with UV detection can be used for DNA analysis starting from clinical samples without purification. After the PCR reaction using cheek cell, blood or HIV-1 gag DNA, the reaction mixtures were injected into the capillary either on-line or off-line by base stacking. The protocol was also applied to capillary array electrophoresis. The use of cheaper detection, and the elimination of purification of the DNA sample before or after the PCR reaction, will make this approach an

  2. Radiation Dose Escalation in Esophageal Cancer Revisited: A Contemporary Analysis of the National Cancer Data Base, 2004 to 2012

    International Nuclear Information System (INIS)

    Brower, Jeffrey V.; Chen, Shuai; Bassetti, Michael F.; Yu, Menggang; Harari, Paul M.; Ritter, Mark A.; Baschnagel, Andrew M.

    2016-01-01

    Purpose: To evaluate the effect of radiation dose escalation on overall survival (OS) for patients with nonmetastatic esophageal cancer treated with concurrent radiation and chemotherapy. Methods and Materials: Patients diagnosed with stage I to III esophageal cancer treated from 2004 to 2012 were identified from the National Cancer Data Base. Patients who received concurrent radiation and chemotherapy with radiation doses of ≥50 Gy and did not undergo surgery were included. OS was compared using Cox proportional hazards regression and propensity score matching. Results: A total of 6854 patients were included; 3821 (55.7%) received 50 to 50.4 Gy and 3033 (44.3%) received doses >50.4 Gy. Univariate analysis revealed no significant difference in OS between patients receiving 50 to 50.4 Gy and those receiving >50.4 Gy (P=.53). The dose analysis, binned as 50 to 50.4, 51 to 54, 55 to 60, and >60 Gy, revealed no appreciable difference in OS within any group compared with 50 to 50.4 Gy. Subgroup analyses investigating the effect of dose escalation by histologic type and in the setting of intensity modulated radiation therapy also failed to reveal a benefit. Propensity score matching confirmed the absence of a statistically significant difference in OS among the dose levels. The factors associated with improved OS on multivariable analysis included female sex, lower Charlson-Deyo comorbidity score, private insurance, cervical/upper esophagus location, squamous cell histologic type, lower T stage, and node-negative status (P<.01 for all analyses). Conclusions: In this large national cohort, dose escalation >50.4 Gy did not result in improved OS among patients with stage I to III esophageal cancer treated with definitive concurrent radiation and chemotherapy. These data suggest that despite advanced contemporary treatment techniques, OS for patients with esophageal cancer remains unaltered by escalation of radiation dose >50.4 Gy, consistent with the results of the INT-0123 trial. Furthermore, these data highlight that many radiation
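
    The survival comparison rests on Cox proportional hazards regression, with propensity score matching as a check. Below is a minimal sketch of the Cox step using the lifelines package on synthetic data; the column names, covariates and effect sizes are invented, and the propensity-matching step is omitted.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns mirroring the analysis: survival time, event flag,
# dose group (>50.4 Gy vs 50-50.4 Gy) and a few adjustment covariates.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "months": rng.exponential(24, n),
    "dead": rng.integers(0, 2, n),
    "high_dose": rng.integers(0, 2, n),
    "age": rng.normal(65, 10, n),
    "female": rng.integers(0, 2, n),
})

# Multivariable Cox proportional hazards model for overall survival.
cph = CoxPHFitter().fit(df, duration_col="months", event_col="dead")
print(cph.summary[["exp(coef)", "p"]])
```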

  3. Radiation Dose Escalation in Esophageal Cancer Revisited: A Contemporary Analysis of the National Cancer Data Base, 2004 to 2012

    Energy Technology Data Exchange (ETDEWEB)

    Brower, Jeffrey V. [Department of Human Oncology, University of Wisconsin Carbone Cancer Center, School of Medicine and Public Health, University of Wisconsin, Madison, Wisconsin (United States); Chen, Shuai [Department of Biostatistics and Medical Informatics, School of Medicine and Public Health, University of Wisconsin, Madison, Wisconsin (United States); Bassetti, Michael F. [Department of Human Oncology, University of Wisconsin Carbone Cancer Center, School of Medicine and Public Health, University of Wisconsin, Madison, Wisconsin (United States); Yu, Menggang [Department of Biostatistics and Medical Informatics, School of Medicine and Public Health, University of Wisconsin, Madison, Wisconsin (United States); Harari, Paul M.; Ritter, Mark A. [Department of Human Oncology, University of Wisconsin Carbone Cancer Center, School of Medicine and Public Health, University of Wisconsin, Madison, Wisconsin (United States); Baschnagel, Andrew M., E-mail: baschnagel@humonc.wisc.edu [Department of Human Oncology, University of Wisconsin Carbone Cancer Center, School of Medicine and Public Health, University of Wisconsin, Madison, Wisconsin (United States)

    2016-12-01

    Purpose: To evaluate the effect of radiation dose escalation on overall survival (OS) for patients with nonmetastatic esophageal cancer treated with concurrent radiation and chemotherapy. Methods and Materials: Patients diagnosed with stage I to III esophageal cancer treated from 2004 to 2012 were identified from the National Cancer Data Base. Patients who received concurrent radiation and chemotherapy with radiation doses of ≥50 Gy and did not undergo surgery were included. OS was compared using Cox proportional hazards regression and propensity score matching. Results: A total of 6854 patients were included; 3821 (55.7%) received 50 to 50.4 Gy and 3033 (44.3%) received doses >50.4 Gy. Univariate analysis revealed no significant difference in OS between patients receiving 50 to 50.4 Gy and those receiving >50.4 Gy (P=.53). The dose analysis, binned as 50 to 50.4, 51 to 54, 55 to 60, and >60 Gy, revealed no appreciable difference in OS within any group compared with 50 to 50.4 Gy. Subgroup analyses investigating the effect of dose escalation by histologic type and in the setting of intensity modulated radiation therapy also failed to reveal a benefit. Propensity score matching confirmed the absence of a statistically significant difference in OS among the dose levels. The factors associated with improved OS on multivariable analysis included female sex, lower Charlson-Deyo comorbidity score, private insurance, cervical/upper esophagus location, squamous cell histologic type, lower T stage, and node-negative status (P<.01 for all analyses). Conclusions: In this large national cohort, dose escalation >50.4 Gy did not result in improved OS among patients with stage I to III esophageal cancer treated with definitive concurrent radiation and chemotherapy. These data suggest that despite advanced contemporary treatment techniques, OS for patients with esophageal cancer remains unaltered by escalation of radiation dose >50.4 Gy, consistent with the results of

  4. Revisiting Ethnic Niches: A Comparative Analysis of the Labor Market Experiences of Asian and Latino Undocumented Young Adults

    Directory of Open Access Journals (Sweden)

    Esther Yoona Cho

    2017-07-01

    Drawing on thirty in-depth interviews with Korean- and Mexican-origin undocumented young adults in California, this comparative analysis explores how the intersection of immigration status and ethnoracial background affects social and economic incorporation. Respective locations of principal ethnic niches, and access to these labor market structures, lead to divergent pathways of employment when no legal recourse exists. Despite similar levels of academic achievement, Korean respondents were more likely to enter into a greater diversity of occupations relative to Mexican respondents. However, the experiences of Mexican respondents varied depending on their connection to pan-ethnic Latino nonprofit organizations. Illegality, therefore, is conditioned by opportunity structures that vary strongly by membership in different ethnoracial communities, leading to structured heterogeneity in experiences with undocumented status.

  5. Predictors and Outcomes of Revisits in Older Adults Discharged from the Emergency Department.

    Science.gov (United States)

    de Gelder, Jelle; Lucke, Jacinta A; de Groot, Bas; Fogteloo, Anne J; Anten, Sander; Heringhaus, Christian; Dekkers, Olaf M; Blauw, Gerard J; Mooijaart, Simon P

    2018-04-01

    To study predictors of emergency department (ED) revisits and the association between ED revisits and 90-day functional decline or mortality. Multicenter cohort study. One academic and two regional Dutch hospitals. Older adults discharged from the ED (N=1,093). At baseline, data on demographic characteristics, illness severity, and geriatric parameters (cognition, functional capacity) were collected. All participants were prospectively followed for an unplanned revisit within 30 days and for functional decline and mortality 90 days after the initial visit. The median age was 79 (interquartile range 74-84), and 114 participants (10.4%) had an ED revisit within 30 days of discharge. Age (hazard ratio (HR)=0.96, 95% confidence interval (CI)=0.92-0.99), male sex (HR=1.61, 95% CI=1.05-2.45), polypharmacy (HR=2.06, 95% CI=1.34-3.16), and cognitive impairment (HR=1.71, 95% CI=1.02-2.88) were independent predictors of a 30-day ED revisit. The area under the receiver operating characteristic curve to predict an ED revisit was 0.65 (95% CI=0.60-0.70). In a propensity score-matched analysis, individuals with an ED revisit were at higher risk (odds ratio=1.99, 95% CI=1.06-3.71) of functional decline or mortality. Age, male sex, polypharmacy, and cognitive impairment were independent predictors of a 30-day ED revisit, but no useful clinical prediction model could be developed. However, an early ED revisit is a strong new predictor of adverse outcomes in older adults. © 2018 The Authors. The Journal of the American Geriatrics Society published by Wiley Periodicals, Inc. on behalf of The American Geriatrics Society.
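
    The reported AUC of 0.65 comes from a multivariable prediction model for 30-day revisits. The sketch below shows how such an apparent AUC is computed from a logistic model, using simulated predictors that merely mimic the four reported risk factors; it does not reproduce the study's model or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated predictors of a 30-day ED revisit (age, male sex, polypharmacy,
# cognitive impairment), with made-up effect sizes.
rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.normal(79, 6, n),        # age
    rng.integers(0, 2, n),       # male sex
    rng.integers(0, 2, n),       # polypharmacy
    rng.integers(0, 2, n),       # cognitive impairment
])
logits = -2.5 + 0.5 * X[:, 1] + 0.7 * X[:, 2] + 0.5 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"apparent AUC = {auc:.2f}")   # the study reported 0.65 on real data
```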

  6. Large-scale analysis of antisense transcription in wheat using the Affymetrix GeneChip Wheat Genome Array

    Directory of Open Access Journals (Sweden)

    Settles Matthew L

    2009-05-01

    For the sense-antisense transcript pairs, analysis of the gene ontology terms showed a significant over-representation of transcripts involved in energy production. These included several representations of ATP synthase, photosystem proteins and RUBISCO, which indicated that photosynthesis is likely to be regulated by antisense transcripts. Conclusion: This study demonstrated the novel use of an adapted labeling protocol and a 3'IVT GeneChip array for large-scale identification of antisense transcription in wheat. The results show that antisense transcription is relatively abundant in wheat, and may affect the expression of valuable agronomic phenotypes. Future work should select potentially interesting transcript pairs for further functional characterization to determine biological activity.

  7. Quantum duel revisited

    International Nuclear Information System (INIS)

    Schmidt, Alexandre G M; Paiva, Milena M

    2012-01-01

    We revisit the quantum two-person duel. In this problem, both Alice and Bob possess a spin-1/2 particle which models the dead and alive states of each player. We review the Abbott and Flitney result—now considering non-zero α 1 and α 2 in order to decide whether it is better for Alice to shoot a second time or not—and we also consider a duel where the players do not necessarily start alive. This simple assumption allows us to explore several interesting special cases, namely how a dead player can win the duel by shooting just once, how Bob can revive Alice after one shot, and the best strategy for Alice—being either alive or in a superposition of alive and dead states—when fighting a dead opponent. (paper)

  8. Satellite failures revisited

    Science.gov (United States)

    Balcerak, Ernie

    2012-12-01

    In January 1994, the two geostationary satellites known as Anik-E1 and Anik-E2, operated by Telesat Canada, failed one after the other within 9 hours, leaving many northern Canadian communities without television and data services. The outage, which shut down much of the country's broadcast television for hours and cost Telesat Canada more than $15 million, generated significant media attention. Lam et al. used publicly available records to revisit the event; they looked at failure details, media coverage, recovery effort, and cost. They also used satellite and ground data to determine the precise causes of those satellite failures. The researchers traced the entire space weather event from conditions on the Sun through the interplanetary medium to the particle environment in geostationary orbit.

  9. Logistics Innovation Process Revisited

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Su, Shong-Iee Ivan; Yang, Su-Lan

    2011-01-01

    Purpose – The purpose of this paper is to learn more about logistics innovation processes and their implications for the focal organization as well as the supply chain, especially suppliers. Design/methodology/approach – The empirical basis of the study is a longitudinal action research project...... that was triggered by the practical needs of new ways of handling material flows of a hospital. This approach made it possible to revisit theory on logistics innovation process. Findings – Apart from the tangible benefits reported to the case hospital, five findings can be extracted from this study: the logistics...... innovation process model may include not just customers but also suppliers; logistics innovation in buyer-supplier relations may serve as an alternative to outsourcing; logistics innovation processes are dynamic and may improve supplier partnerships; logistics innovations in the supply chain are as dependent...

  10. Klein's double discontinuity revisited

    DEFF Research Database (Denmark)

    Winsløw, Carl; Grønbæk, Niels

    2014-01-01

    Much effort and research has been invested into understanding and bridging the ‘gaps’ which many students experience in terms of contents and expectations as they begin university studies with a heavy component of mathematics, typically in the form of calculus courses. We have several studies...... of bridging measures, success rates and many other aspects of these “entrance transition” problems. In this paper, we consider the inverse transition, experienced by university students as they revisit core parts of high school mathematics (in particular, calculus) after completing the undergraduate...... mathematics courses which are mandatory to become a high school teacher of mathematics. To what extent does the “advanced” experience enable them to approach the high school calculus in a deeper and more autonomous way ? To what extent can “capstone” courses support such an approach ? How could it be hindered...

  11. Reframing in dentistry: Revisited

    Directory of Open Access Journals (Sweden)

    Sivakumar Nuvvula

    2013-01-01

    Full Text Available The successful practice of dentistry involves a good combination of technical skills and soft skills. Soft skills or communication skills are not taught extensively in dental schools, and they can be challenging to learn and, at times, to apply when treating dental patients. Guiding the child's behavior in the dental operatory is one of the preliminary steps to be taken by the pediatric dentist, and one who can successfully modify the behavior can definitely pave the way for lifetime comprehensive oral care. This article is an attempt to revisit a simple behavior guidance technique, reframing, and to explain the possible psychological perspectives behind it for better use in clinical practice.

  12. Analysis of the phase control of the ITER ICRH antenna array. Influence on the load resilience and radiated power spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Messiaen, A., E-mail: a.messiaen@fz-juelich.de; Ongena, J.; Vervier, M. [Laboratory for Plasma Physics, ERM-KMS, TEC partner, Cycle, B1000-Brussels (Belgium); Swain, D. [US ITER Team, ORNL (United States)

    2015-12-10

    The paper analyses how the phasing of the ITER ICRH 24-strap array evolves from the power sources up to the strap currents of the antenna. The phasing control and its coherence through the feeding circuits, with prematching and an automatic matching and decoupling network, are studied by modeling, starting from the TOPICA matrix of the antenna array for a low-coupling plasma profile and for current-drive phasing (the worst case for mutual coupling effects). The main results of the analysis are: (i) the strap current amplitude is well controlled by the antinode amplitude V_max of the feeding lines, (ii) the best toroidal phasing control is obtained by adjusting the mean phase of V_max of each poloidal strap column, (iii) with a well-adjusted system the largest strap current phasing error is ±20°, (iv) the effect on load resilience remains well below the maximum affordable VSWR of the generators, and (v) the effect on the radiated power spectrum versus k_// computed by means of the coupling code ANTITER II remains small for the considered cases.

  13. Functional gene array-based analysis of microbial community structure in groundwaters with a gradient of contaminant levels

    Energy Technology Data Exchange (ETDEWEB)

    Waldron, P.J.; Wu, L.; Van Nostrand, J.D.; Schadt, C.W.; Watson, D.B.; Jardine, P.M.; Palumbo, A.V.; Hazen, T.C.; Zhou, J.

    2009-06-15

    To understand how contaminants affect microbial community diversity, heterogeneity, and functional structure, six groundwater monitoring wells from the Field Research Center of the U.S. Department of Energy Environmental Remediation Science Program (ERSP; Oak Ridge, TN), with a wide range of pH, nitrate, and heavy metal contamination were investigated. DNA from the groundwater community was analyzed with a functional gene array containing 2006 probes to detect genes involved in metal resistance, sulfate reduction, organic contaminant degradation, and carbon and nitrogen cycling. Microbial diversity decreased in relation to the contamination levels of the wells. Highly contaminated wells had lower gene diversity but greater signal intensity than the pristine well. The microbial composition was heterogeneous, with 17-70% overlap between different wells. Metal-resistant and metal-reducing microorganisms were detected in both contaminated and pristine wells, suggesting the potential for successful bioremediation of metal-contaminated groundwaters. In addition, results of Mantel tests and canonical correspondence analysis indicate that nitrate, sulfate, pH, uranium, and technetium have a significant (p < 0.05) effect on microbial community structure. This study provides an overall picture of microbial community structure in contaminated environments with functional gene arrays by showing that diversity and heterogeneity can vary greatly in relation to contamination.
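
    The record above relates community structure to geochemical variables with Mantel tests and canonical correspondence analysis. Purely as an illustration of the Mantel step, and not the authors' pipeline, the following Python sketch runs a permutation-based Mantel test between two distance matrices; the function name `mantel_test`, the toy data, and the one-tailed p-value convention are assumptions made for this example.

```python
import numpy as np

def mantel_test(dist_a, dist_b, permutations=999, seed=0):
    """Permutation Mantel test: Pearson correlation between the upper
    triangles of two square distance matrices, with a permutation p-value."""
    rng = np.random.default_rng(seed)
    n = dist_a.shape[0]
    iu = np.triu_indices(n, k=1)

    def corr(m):
        return np.corrcoef(dist_a[iu], m[iu])[0, 1]

    r_obs = corr(dist_b)
    count = 0
    for _ in range(permutations):
        p = rng.permutation(n)
        # Permute rows and columns of one matrix together, then re-correlate.
        if corr(dist_b[np.ix_(p, p)]) >= r_obs:
            count += 1
    return r_obs, (count + 1) / (permutations + 1)

# Toy usage: random "community" and "environmental" distance matrices for 6 wells.
rng = np.random.default_rng(1)
community = rng.random((6, 3))
environment = rng.random((6, 2))
d_comm = np.linalg.norm(community[:, None, :] - community[None, :, :], axis=-1)
d_env = np.linalg.norm(environment[:, None, :] - environment[None, :, :], axis=-1)
print(mantel_test(d_comm, d_env))
```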

  14. Force characteristic analysis of a magnetic gravity compensator with annular magnet array for magnetic levitation positioning system

    Science.gov (United States)

    Zhou, Yiheng; Kou, Baoquan; Liu, Peng; Zhang, He; Xing, Feng; Yang, Xiaobao

    2018-05-01

    The magnetic levitation positioning system (MLPS) is considered to be the state of the art in inspection and manufacturing systems in vacuum. In this paper, a magnetic gravity compensator with an annular magnet array (AMA-MGC) for MLPS is proposed. Benefiting from the double-layer annular Halbach magnet array on the stator, the proposed AMA-MGC possesses the advantages of symmetrical force, high force density and small force fluctuation. Firstly, the basic structure and operation principle of the AMA-MGC are introduced. Secondly, the basic characteristics of the AMA-MGC such as magnetic field distribution, levitation force, parasitic force and parasitic torque are analyzed by three-dimensional finite element analysis (3-D FEA). Thirdly, the influence of structural parameters on force density and force fluctuation is investigated, which is conducive to the design and optimization of the AMA-MGC. Finally, a prototype of the AMA-MGC is constructed, and the experiment shows good agreement with the 3-D FEA results.

  15. Fluctuations, Finite-Size Effects and the Thermodynamic Limit in Computer Simulations: Revisiting the Spatial Block Analysis Method

    Directory of Open Access Journals (Sweden)

    Maziar Heidari

    2018-03-01

    Full Text Available The spatial block analysis (SBA) method has been introduced to efficiently extrapolate thermodynamic quantities from finite-size computer simulations of a large variety of physical systems. In the particular case of simple liquids and liquid mixtures, by subdividing the simulation box into blocks of increasing size and calculating volume-dependent fluctuations of the number of particles, it is possible to extrapolate the bulk isothermal compressibility and Kirkwood–Buff integrals in the thermodynamic limit. Only by explicitly including finite-size effects, which are ubiquitous in computer simulations, into the SBA method can the extrapolation to the thermodynamic limit be achieved. In this review, we discuss two of these finite-size effects in the context of the SBA method, due to (i) the statistical ensemble and (ii) the finite integration domains used in computer simulations. To illustrate the method, we consider prototypical liquids and liquid mixtures described by truncated and shifted Lennard–Jones (TSLJ) potentials. Furthermore, we show some of the most recent developments of the SBA method, in particular its use to calculate chemical potentials of liquids in a wide range of density/concentration conditions.
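
    To illustrate the block-fluctuation bookkeeping behind the SBA idea described above, here is a minimal Python sketch, under the assumption of a set of particle coordinates in a cubic box; it computes the reduced number fluctuation per block size and does not include the ensemble or integration-domain corrections discussed in the review. The function name `block_number_fluctuations` and the toy configuration are illustrative only.

```python
import numpy as np

def block_number_fluctuations(positions, box_length, n_divisions):
    """Count particles in cubic sub-blocks and return the reduced
    fluctuation (<N^2> - <N>^2) / <N> for one block size.

    positions : (N, 3) array of coordinates in [0, box_length)
    n_divisions : blocks per box edge, so the block size is box_length / n_divisions
    """
    edges = np.linspace(0.0, box_length, n_divisions + 1)
    counts, _ = np.histogramdd(positions, bins=(edges, edges, edges))
    counts = counts.ravel()
    return counts.var() / counts.mean()

# Toy usage with an ideal-gas-like configuration (uniform random points);
# for an ideal gas the reduced fluctuation should approach 1 for large blocks.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(5000, 3))
for n_div in (8, 4, 2):
    print(n_div, block_number_fluctuations(pos, 10.0, n_div))
```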

  16. Revisiting the “Guns versus Butter” Argument in China (1950–2014): New Evidence from the Continuous Wavelet Analysis

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2016-07-01

    Full Text Available The long-lasting “guns versus butter” argument reflects the fact that China has been facing a difficult choice in terms of improving the defense and social welfare sectors while achieving fiscal sustainability. The evidence, however, is controversial. The present paper therefore re-examines the relationship between defense and social welfare by employing continuous wavelet analysis over the long period 1950–2014 in China. We focus in particular on their dynamic correlation and the lead-lag relationship across different frequency bands. Our results clearly show the absence of a crowding-out effect between defense expenditure and social welfare; moreover, an increase in defense (social welfare) expenditure could stimulate the expansion of social welfare (defense) spending. In addition, we find a positive relationship between defense and social welfare, with defense leading during 1961–1968 in the short term, when China suffered from the economic breakdown and the social turbulence caused by the Great Famine, the Sino-Soviet border conflict, etc. Notably, social welfare also led the progress in defense during 1984–1988 and 1995–1998 in the medium and long terms, driven by the further deepening of the opening-up policy and the enforcement of economic system reform.

  17. Neutrino assisted GUT baryogenesis revisited

    Science.gov (United States)

    Huang, Wei-Chih; Päs, Heinrich; Zeißner, Sinan

    2018-03-01

    Many grand unified theory (GUT) models conserve the difference between the baryon and lepton number, B−L. These models can create baryon and lepton asymmetries from heavy Higgs or gauge boson decays with B+L ≠ 0 but with B−L = 0. Since the sphaleron processes violate B+L, such GUT-generated asymmetries will finally be washed out completely, making GUT baryogenesis scenarios incapable of reproducing the observed baryon asymmetry of the Universe. In this work, we revisit the idea to revive GUT baryogenesis, proposed by Fukugita and Yanagida, where right-handed neutrinos erase the lepton asymmetry before the sphaleron processes can significantly wash out the original B+L asymmetry, and in this way one can prevent a total washout of the initial baryon asymmetry. By solving the Boltzmann equations numerically for baryon and lepton asymmetries in a simplified 1+1 flavor scenario, we can confirm the results of the original work. We further generalize the analysis to a more realistic scenario of three active and two right-handed neutrinos to highlight flavor effects of the right-handed neutrinos. Large regions in the parameter space of the Yukawa coupling and the right-handed neutrino mass featuring successful baryogenesis are identified.

  18. Amplitude correlation analysis of W7-AS Mirnov-coil array data and other transport relevant diagnostics

    International Nuclear Information System (INIS)

    Pokol, G.; Por, G.; Zoletnik, S.; Basse, N.P.

    2005-01-01

    This work is based on the amplitude correlation analysis of the signals from a poloidal Mirnov-coil array on the Wendelstein 7 - Advanced Stellarator (W7-AS). The motivation behind this work is an earlier finding that changes in the RMS amplitude of Mirnov-coil signals are correlated with the amplitude of small-scale density turbulence measured by CO2 laser scattering. Based on this and other measurements, the hypothesis was formed that some of the magnetic fluctuations are caused by transient MHD modes excited by large turbulent structures. The statistical dependencies between the power modulations of different eigenmodes can provide information about the statistics of these structures. Our amplitude correlation method is based on linear continuous time-frequency representations of the signal: we use the Short-Time Fourier Transform (STFT) with Gabor atoms to map the signal onto the time-frequency plane as two-dimensional power density distributions. From these transforms we can recover the power modulation of different frequency bands. Provided that the resolution of the transforms and the limits of the frequency bands are selected correctly, the time series calculated this way resembles the original power fluctuation of the selected eigenmode. The only distortion introduced is a convolution smoothing by the time window used in the transformation. Detailed correlation analyses between different bandpowers of the Mirnov-coil array signals were carried out and are presented for bad and good confinement states. In order to reveal the true structure and cause of the magnetic fluctuations, the Mirnov-coil diagnostic signals were also compared with lithium beam and CO2 laser scattering measurements. In our analysis we found a strong and systematic difference in the cross-correlations of power bands between different confinement states. (author)
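
    As a generic illustration of the bandpower-correlation idea in this record, and not a reproduction of the W7-AS analysis chain or its Gabor-atom transform, the sketch below extracts the power of one frequency band from an STFT and cross-correlates the bandpower of two synthetic "coil" signals. The sampling rate, mode frequency, and function names are assumptions for the example.

```python
import numpy as np
from scipy.signal import stft

def bandpower_series(x, fs, f_lo, f_hi, nperseg=256):
    """Power of one frequency band as a function of time, from an STFT."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    band = (f >= f_lo) & (f <= f_hi)
    return t, (np.abs(Z[band, :]) ** 2).sum(axis=0)

def normalized_crosscorr(a, b):
    """Cross-correlation of two zero-mean, unit-variance series."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full") / len(a)

# Two synthetic signals sharing a slowly amplitude-modulated 20 kHz mode.
fs = 250_000
t = np.arange(0, 0.2, 1 / fs)
rng = np.random.default_rng(0)
envelope = 1 + 0.5 * np.sin(2 * np.pi * 10 * t)
sig1 = envelope * np.sin(2 * np.pi * 20_000 * t) + 0.3 * rng.standard_normal(t.size)
sig2 = envelope * np.sin(2 * np.pi * 20_000 * t + 1.0) + 0.3 * rng.standard_normal(t.size)

_, p1 = bandpower_series(sig1, fs, 18_000, 22_000)
_, p2 = bandpower_series(sig2, fs, 18_000, 22_000)
print("peak bandpower correlation:", normalized_crosscorr(p1, p2).max())
```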

  19. Direct extraction of electron parameters from magnetoconductance analysis in mesoscopic ring array structures

    Science.gov (United States)

    Sawada, A.; Faniel, S.; Mineshige, S.; Kawabata, S.; Saito, K.; Kobayashi, K.; Sekine, Y.; Sugiyama, H.; Koga, T.

    2018-05-01

    We report an approach for examining electron properties using information about the shape and size of a nanostructure as a measurement reference. This approach quantifies the spin precession angles per unit length directly by considering the time-reversal interferences on chaotic return trajectories within mesoscopic ring arrays (MRAs). Experimentally, we fabricated MRAs using nanolithography in InGaAs quantum wells which had a gate-controllable spin-orbit interaction (SOI). As a result, we observed an Onsager symmetry related to relativistic magnetic fields, which provided us with indispensable information for the semiclassical billiard ball simulation. Our simulations, developed based on the real-space formalism of the weak localization/antilocalization effect including the degree of freedom for electronic spin, reproduced the experimental magnetoconductivity (MC) curves with high fidelity. The values of five distinct electron parameters (Fermi wavelength, spin precession angles per unit length for two different SOIs, impurity scattering length, and phase coherence length) were thereby extracted from a single MC curve. The methodology developed here is applicable to wide ranges of nanomaterials and devices, providing a diagnostic tool for exotic properties of two-dimensional electron systems.

  20. Design and analysis of a dual mode CMOS field programmable analog array

    International Nuclear Information System (INIS)

    Cheng Xiaoyan; Yang Haigang; Yin Tao; Wu Qisong; Zhang Hongfeng; Liu Fei

    2014-01-01

    This paper presents a novel field-programmable analog array (FPAA) architecture featuring a dual mode comprising discrete-time (DT) and continuous-time (CT) operation, along with a highly routable interconnection lattice based on connection boxes (CBs). The dual-mode circuit enables the FPAA to achieve the targeted optimal performance in different applications. The architecture utilizes the routing switches in a CB not only for signal interconnection but also for control of the electrical charge transfer required in switched-capacitor circuits. In this way, the performance of the circuit in either mode is not hampered by the added programmability. The proposed FPAA is designed and implemented in a 0.18 μm standard CMOS process with a 3.3 V supply voltage. Post-layout simulation shows that a maximum bandwidth of 265 MHz through the interconnection network is achieved. Measured results from demonstration examples show that a maximum signal bandwidth of up to 2 MHz in CT mode is obtained with a spurious-free dynamic range of 54 dB, while the signal processing precision in DT mode reaches 96.4%. (semiconductor integrated circuits)

  1. Qualitative and quantitative analysis of anthraquinones in rhubarbs by high performance liquid chromatography with diode array detector and mass spectrometry.

    Science.gov (United States)

    Wei, Shao-yin; Yao, Wen-xin; Ji, Wen-yuan; Wei, Jia-qi; Peng, Shi-qi

    2013-12-01

    Rhubarb is well known in traditional Chinese medicines (TCMs) mainly due to its effective purgative activity. Anthraquinones, including anthraquinone derivatives and their glycosides, are thought to be the major active components in rhubarb. To improve the quality control method for rhubarb, we studied the extraction method and carried out qualitative and quantitative analyses of the widely used rhubarbs, Rheum tanguticum Maxim. ex Balf. and Rheum palmatum L., by HPLC with photodiode array detection (HPLC-DAD) and HPLC with mass spectrometry (HPLC-MS) on a Waters SymmetryShield RP18 column (250 mm × 4.6 mm i.d., 5 μm). The amount of five anthraquinones was used as the evaluation standard. A standardized characteristic fingerprint of rhubarb is provided. The quantitative analysis demonstrated the rationality of the traditional practice of using these two species of rhubarb interchangeably. Under modern extraction methods, the amount of the five anthraquinones in Rheum tanguticum Maxim. ex Balf. is higher than that in Rheum palmatum L. Among the various extraction methods, ultrasonication with 70% methanol for 30 min is a promising one. For HPLC analysis, a mobile phase consisting of methanol and 0.1% phosphoric acid in water with a gradient program, together with detection wavelengths of 280 nm for fingerprint analysis and 254 nm for quantitative analysis, is a good choice. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. electrode array

    African Journals Online (AJOL)

    PROF EKWUEME

    A geoelectric investigation employing vertical electrical soundings (VES) using the Ajayi - Makinde Two-Electrode array and the ... arrangements used in electrical D.C. resistivity survey. These include ..... Refraction Tomography to Study the.

  3. Reporter-Based Synthetic Genetic Array Analysis: A Functional Genomics Approach for Investigating Transcript or Protein Abundance Using Fluorescent Proteins in Saccharomyces cerevisiae.

    Science.gov (United States)

    Göttert, Hendrikje; Mattiazzi Usaj, Mojca; Rosebrock, Adam P; Andrews, Brenda J

    2018-01-01

    Fluorescent reporter genes have long been used to quantify various cell features such as transcript and protein abundance. Here, we describe a method, reporter synthetic genetic array (R-SGA) analysis, which allows for the simultaneous quantification of any fluorescent protein readout in thousands of yeast strains using an automated pipeline. R-SGA combines a fluorescent reporter system with standard SGA analysis and can be used to examine any array-based strain collection available to the yeast community. This protocol describes the R-SGA methodology for screening different arrays of yeast mutants including the deletion collection, a collection of temperature-sensitive strains for the assessment of essential yeast genes and a collection of inducible overexpression strains. We also present an alternative pipeline for the analysis of R-SGA output strains using flow cytometry of cells in liquid culture. Data normalization for both pipelines is discussed.

  4. Trace analysis of tiamulin in honey by liquid chromatography-diode array-electrospray ionization mass spectrometry detection.

    Science.gov (United States)

    Nozal, M J; Bernal, J L; Martín, M T; Jiménez, J J; Bernal, J; Higes, M

    2006-05-26

    A liquid chromatography method with diode array or electrospray ionisation mass spectrometry detection (LC-DAD-ESI-MS) for the determination of tiamulin residues in honey is presented. The procedure employs solid-phase extraction (SPE) on polymeric cartridges for the isolation of tiamulin from honey samples diluted in an aqueous solution of tartaric acid. Chromatographic separation of the tiamulin is performed, in isocratic mode, on a C18 column using methanol and 0.1% ammonium carbonate in water in a 30:70 (v/v) proportion. Average analyte recoveries were from 88 to 106% in replicate sets of fortified honey samples. The LC-ESI-MS method detection limits range from 0.5 μg kg⁻¹ for clear honeys to 1.2 μg kg⁻¹ for dark honeys. The developed method has been applied to the analysis of tiamulin residues in multifloral honey samples collected from veterinary-treated beehives.

  5. Application of Microtremor Array Analysis to Estimate the Bedrock Depth in the Beijing Plain area

    Science.gov (United States)

    Xu, P.; Ling, S.; Liu, J.; Su, W.

    2013-12-01

    With the rapid expansion of large cities around the world, urban geological surveys provide key information regarding resource development and urban construction. Among the major cities of the world, China's capital city Beijing is one of the largest, and it possesses complex geological structures. The urban geological survey and study in Beijing involves the following aspects: (1) estimating the thickness of the Cenozoic deposit; (2) mapping the three-dimensional structure of the underlying bedrock, as well as its relations to faults and tectonic settings; and (3) assessing the capacity of the city's geological resources in order to support its urban development and operation safety. The geological study of Beijing in general was also intended to provide basic data regarding urban development and the appraisal of engineering and environmental geological conditions, as well as underground space resources. In this work, we utilized the microtremor exploration method to estimate the bedrock depth, in order to delineate the geological interfaces and improve the accuracy of the bedrock depth map. The microtremor observation sites were located in the Beijing Plain area. Traditional geophysical or geological survey methods were not effective in these areas due to the heavy traffic and dense buildings in the highly populated urban area. The microtremor exploration method is a Rayleigh-wave inversion technique which extracts the phase velocity dispersion curve from the vertical component of the microtremor array records using the spatial autocorrelation (SPAC) method, and then inverts the shear-wave velocity structure. A triple-circular array was adopted for acquiring microtremor data, with the observation radius ranging from 40 to 300 m, properly adjusted depending on the geological conditions (depth of the bedrock). The collected microtremor data are used for: (1) estimation of Rayleigh-wave phase velocities from the vertical components of the microtremor
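
    As an illustration of the SPAC idea mentioned in this record, and only as a sketch rather than the authors' processing chain, the Python snippet below computes a spatial autocorrelation coefficient for one station pair and fits a Rayleigh-wave phase velocity on a grid against the Bessel-function model used for circular arrays. The function names, the gridded fit, and the assumption of synchronously recorded vertical-component traces are all illustrative.

```python
import numpy as np
from scipy.signal import csd, welch
from scipy.special import j0

def spac_coefficient(x_center, x_ring, fs, nperseg=1024):
    """Spatial autocorrelation coefficient between the centre station and one
    ring station: Re(cross-spectrum) normalised by the two autospectra."""
    f, Pxy = csd(x_center, x_ring, fs=fs, nperseg=nperseg)
    _, Pxx = welch(x_center, fs=fs, nperseg=nperseg)
    _, Pyy = welch(x_ring, fs=fs, nperseg=nperseg)
    return f, np.real(Pxy) / np.sqrt(Pxx * Pyy)

def fit_phase_velocity(f, rho, radius, c_grid):
    """For a circular array of given radius, the azimuthally averaged SPAC
    coefficient is modelled as J0(2*pi*f*radius / c); pick the c on a grid
    that minimises the misfit. Frequencies near zero should be excluded
    before calling this in practice."""
    misfit = [np.sum((rho - j0(2 * np.pi * f * radius / c)) ** 2) for c in c_grid]
    return c_grid[int(np.argmin(misfit))]
```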

  6. Principal component analysis for predicting transcription-factor binding motifs from array-derived data

    Directory of Open Access Journals (Sweden)

    Vincenti Matthew P

    2005-11-01

    Full Text Available Abstract Background The responses to interleukin 1 (IL-1) in human chondrocytes constitute a complex regulatory mechanism, where multiple transcription factors interact combinatorially with transcription-factor binding motifs (TFBMs). In order to select a critical set of TFBMs from genomic DNA information and array-derived data, an efficient algorithm to solve a combinatorial optimization problem is required. Although computational approaches based on evolutionary algorithms are commonly employed, an analytical algorithm would be useful to predict TFBMs at nearly no computational cost and to evaluate varying modelling conditions. Singular value decomposition (SVD) is a powerful method to derive primary components of a given matrix. Applying SVD to a promoter matrix defined from regulatory DNA sequences, we derived a novel method to predict the critical set of TFBMs. Results The promoter matrix was defined to establish a quantitative relationship between the IL-1-driven mRNA alteration and genomic DNA sequences of the IL-1 responsive genes. The matrix was decomposed with SVD, and the effects of 8 potential TFBMs (5'-CAGGC-3', 5'-CGCCC-3', 5'-CCGCC-3', 5'-ATGGG-3', 5'-GGGAA-3', 5'-CGTCC-3', 5'-AAAGG-3', and 5'-ACCCA-3') were predicted from a pool of 512 random DNA sequences. The prediction included matches to the core binding motifs of biologically known TFBMs such as AP2, SP1, EGR1, KROX, GC-BOX, ABI4, ETF, E2F, SRF, STAT, IK-1, PPARγ, STAF, ROAZ, and NFκB, and their significance was evaluated numerically using Monte Carlo simulation and a genetic algorithm. Conclusion The described SVD-based prediction is an analytical method to provide a set of potential TFBMs involved in transcriptional regulation. The results would be useful to evaluate analytically the contribution of individual DNA sequences.
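
    To make the matrix-decomposition step concrete, here is a minimal Python sketch of ranking candidate motifs by their weight in the leading SVD component of a gene-by-motif matrix. The random counts, the fold-change weighting, and the variable names are stand-ins and not the authors' promoter matrix or scoring scheme.

```python
import numpy as np

# Rows: IL-1-responsive genes, columns: candidate 5-mer motifs.
# Entry (i, j): how often motif j occurs in the promoter of gene i,
# weighted here by an mRNA fold change (all values illustrative).
rng = np.random.default_rng(42)
motifs = ["CAGGC", "CGCCC", "CCGCC", "ATGGG", "GGGAA", "CGTCC", "AAAGG", "ACCCA"]
promoter_matrix = rng.poisson(lam=2.0, size=(30, len(motifs))).astype(float)
fold_change = rng.normal(loc=1.5, scale=0.5, size=30)
weighted = promoter_matrix * fold_change[:, None]

# SVD of the column-centred matrix: U holds gene loadings, Vt motif loadings.
U, s, Vt = np.linalg.svd(weighted - weighted.mean(axis=0), full_matrices=False)

# Rank motifs by their absolute weight in the first (dominant) component.
ranking = sorted(zip(motifs, np.abs(Vt[0])), key=lambda kv: -kv[1])
for motif, weight in ranking:
    print(f"{motif}\t{weight:.3f}")
```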

  7. Sao Paulo Lightning Mapping Array (SP-LMA): Deployment, Operation and Initial Data Analysis

    Science.gov (United States)

    Blakeslee, R.; Bailey, J. C.; Carey, L. D.; Rudlosky, S.; Goodman, S. J.; Albrecht, R.; Morales, C. A.; Anseimo, E. M.; Pinto, O.

    2012-01-01

    An 8-10 station Lightning Mapping Array (LMA) network is being deployed in the vicinity of Sao Paulo to create the SP-LMA for total lightning measurements in association with the international CHUVA [Cloud processes of the main precipitation systems in Brazil: A contribution to cloud resolving modeling and to the GPM (Global Precipitation Measurement)] field campaign. Besides supporting CHUVA science/mission objectives and the Sao Luiz do Paraitinga intensive operation period (IOP) in November-December 2011, the SP-LMA will support the generation of unique proxy data for the Geostationary Lightning Mapper (GLM) and Advanced Baseline Imager (ABI), both sensors on the NOAA Geostationary Operational Environmental Satellite-R (GOES-R), presently under development and scheduled for a 2015 launch. The proxy data will be used to develop and validate operational algorithms so that they will be ready for use on “day 1” following the launch of GOES-R. A preliminary survey of potential sites in the vicinity of Sao Paulo was conducted in December 2009 and January 2010, followed up by a detailed survey in July 2010, with initial network deployment scheduled for October 2010. However, due to a delay in the Sao Luiz do Paraitinga IOP, the SP-LMA will now be installed in July 2011 and operated for one year. Spacing between stations is on the order of 15-30 km, with the network “diameter” being on the order of 30-40 km, which provides good 3-D lightning mapping 150 km from the network center. Optionally, 1-3 additional stations may be deployed in the vicinity of Sao Jose dos Campos.

  8. Array-based genotyping and genetic dissimilarity analysis of a set of maize inbred lines belonging to different heterotic groups

    Directory of Open Access Journals (Sweden)

    Jambrović Antun

    2014-01-01

    Full Text Available Here we describe the results of detailed array-based genotyping, obtained using the Illumina MaizeSNP50 BeadChip, of eleven inbred lines belonging to different heterotic groups relevant for maize breeding in Southeast Europe - European Corn Belt. The objectives of this study were to assess the utility of the MaizeSNP50 BeadChip platform by determining its descriptive power and to assess the genetic dissimilarity of the inbred lines. The distribution of the SNPs was found to be not completely uniform among chromosomes, but the average call rate was very high (97.9%) and the number of polymorphic loci was 33200 out of 50074 SNPs with a known mapping position, indicating the descriptive power of the MaizeSNP50 BeadChip. The dendrogram obtained from UPGMA cluster analysis as well as principal component analysis (PCA) confirmed the pedigree information, undoubtedly distinguishing lines according to their background in the two population varieties Reid Yellow Dent and Lancaster Sure Crop. Dissimilarity analysis showed that all of the inbred lines could be distinguished from each other. Whereas cluster analysis did not definitely differentiate the Mo17 and Ohio inbred lines, PCA revealed clear genetic differences between them. The studied inbred lines were confirmed to be genetically diverse, representing a large proportion of the genetic variation occurring in the two maize heterotic groups.

  9. Thermoresponsive Arrays Patterned via Photoclick Chemistry: Smart MALDI Plate for Protein Digest Enrichment, Desalting, and Direct MS Analysis.

    Science.gov (United States)

    Meng, Xiao; Hu, Junjie; Chao, Zhicong; Liu, Ying; Ju, Huangxian; Cheng, Quan

    2018-01-10

    Sample desalting and concentration are crucial steps before matrix-assisted laser desorption/ionization-mass spectrometry (MALDI-MS) analysis. Current sample pretreatment approaches require tedious fabrication and operation procedures, which are unamenable to high-throughput analysis and also result in sample loss. Here, we report the development of a smart MALDI substrate for on-plate desalting, enrichment, and direct MS analysis of protein digests based on the thermoresponsive, hydrophilic/hydrophobic transition of surface-grafted poly(N-isopropylacrylamide) (PNIPAM) microarrays. Superhydrophilic 1-thioglycerol microwells are first constructed on alkyne-silane-functionalized rough indium tin oxide substrates based on two sequential thiol-yne photoclick reactions, whereas the surrounding regions are modified with hydrophobic 1H,1H,2H,2H-perfluorodecanethiol. Surface-initiated atom-transfer radical polymerization is then triggered in the microwells to form PNIPAM arrays, which facilitate sample loading and enrichment of protein digests by concentrating large-volume samples into small dots and achieving on-plate desalting through the PNIPAM configuration change at elevated temperature. The smart MALDI plate shows high performance for mass spectrometric analysis of cytochrome c and neurotensin in the presence of 1 M urea and 100 mM NaHCO3, as well as improved detection sensitivity and high sequence coverage for α-casein and cytochrome c digests in the femtomole range. The work presents a versatile sample pretreatment platform with great potential for proteomic research.

  10. Second-order Data by Flow Injection Analysis with Spectrophotometric Diode-array Detection and Incorporated Gel-filtration Chromatographic Column

    DEFF Research Database (Denmark)

    Bechmann, Iben Ellegaard

    1997-01-01

    A flow injection analysis (FIA) system furnished with a gel-filtration chromatographic column and with photodiode-array detection was used for the generation of second-order data. The system presented is a model system in which the analytes are blue dextran, potassium hexacyanoferrate(III) and he...

  11. Growth and analysis of micro and nano CdTe arrays for solar cell applications

    Science.gov (United States)

    Aguirre, Brandon Adrian

    CdTe is an excellent material for infrared detectors and photovoltaic applications. The efficiency of CdTe/CdS solar cells has increased very rapidly in the last 3 years to ˜20% but is still below the maximum theoretical value of 30%. Although the short-circuit current density is close to its maximum of 30 mA/cm², the open-circuit voltage has potential to be increased further to over 1 V. The main limitation that prevents further increase in the open-circuit voltage and therefore efficiency is the high defect density in the CdTe absorber layer. Reducing the defect density will increase the open-circuit voltage above 1 V through an increase in the carrier lifetime and concentration to τ > 10 ns and p > 10¹⁶ cm⁻³, respectively. However, the large lattice mismatch (10%) between CdTe and CdS and the polycrystalline nature of the CdTe film are the fundamental reasons for the high defect density and pose a difficult challenge to solve. In this work, a method to physically and electrically isolate the different kinds of defects at the nanoscale and understand their effect on the electrical performance of CdTe is presented. A SiO2 template with arrays of window openings was deposited between the CdTe and CdS to achieve selective-area growth of the CdTe via close-space sublimation. The diameter of the window openings was varied from the micro to the nanoscale to study the effect of size on nucleation, grain growth, and defect density. The resulting structures enabled the possibility to electrically isolate and individually probe micrometer and nanoscale sized CdTe/CdS cells. Electron back-scattered diffraction was used to observe grain orientation and defects in the miniature cells. Scanning and transmission electron microscopy was used to study the morphology, grain boundaries, grain orientation, defect structure, and strain in the layers. Finally, conducting atomic force microscopy was used to study the current-voltage characteristics of the solar cells. An

  12. Neurotoxicity screening of (illicit) drugs using novel methods for analysis of microelectrode array (MEA) recordings.

    Science.gov (United States)

    Hondebrink, L; Verboven, A H A; Drega, W S; Schmeink, S; de Groot, M W G D M; van Kleef, R G D M; Wijnolts, F M J; de Groot, A; Meulenbelt, J; Westerink, R H S

    2016-07-01

    Annual prevalence of the use of common illicit drugs and new psychoactive substances (NPS) is high, despite the often limited knowledge on the health risks of these substances. Recently, cortical cultures grown on multi-well microelectrode arrays (mwMEAs) have been used for neurotoxicity screening of chemicals, pharmaceuticals, and toxins with a high sensitivity and specificity. However, the use of mwMEAs to investigate the effects of illicit drugs on neuronal activity is largely unexplored. We therefore first characterised the cortical cultures using immunocytochemistry and show the presence of astrocytes, glutamatergic and GABAergic neurons. Neuronal activity is concentration-dependently affected following exposure to six neurotransmitters (glutamate, GABA, serotonin, dopamine, acetylcholine and nicotine). Most neurotransmitters inhibit neuronal activity, although glutamate and acetylcholine transiently increase activity at specific concentrations. These transient effects are not detected when activity is determined during the entire 30 min exposure window, potentially resulting in false-negative results. As expected, exposure to the GABAA-receptor antagonist bicuculline increases neuronal activity. Exposure to a positive allosteric modulator of the GABAA-receptor (diazepam) or to glutamate receptor antagonists (CNQX and MK-801) reduces neuronal activity. Further, we demonstrate that exposure to common drugs (3,4-methylenedioxymethamphetamine (MDMA) and amphetamine) and NPS (1-(3-chlorophenyl)piperazine (mCPP), 4-fluoroamphetamine (4-FA) and methoxetamine (MXE)) decreases neuronal activity. MXE most potently inhibits neuronal activity with an IC50 of 0.5 μM, whereas 4-FA is least potent with an IC50 of 113 μM. Our data demonstrate the importance of analysing neuronal activity within different time windows during exposure to prevent false-negative results. We also show that cortical cultures grown on mwMEAs can successfully be applied to investigate the effects of

  13. Surgical Workflow Analysis: Ideal Application of Navigated Linear Array Ultrasound in Low-Grade Glioma Surgery.

    Science.gov (United States)

    Lothes, Thomas Ernst; Siekmann, Max; König, Ralph Werner; Wirtz, Christian Rainer; Coburger, Jan

    2016-11-01

    Background  Intraoperative imaging in low-grade glioma (LGG) surgery can facilitate residual tumor control and improve surgical outcome. The aim of the study was to evaluate the ideal application and typical interactions of intraoperative MRI (iMRI), conventional low-frequency intraoperative ultrasound (cioUS), and high-frequency linear array intraoperative ultrasound (lioUS) to optimize surgical workflow. Methods  Prospectively, we included 11 patients with an LGG. Typical procedural workflow in the iMRI suite was recorded with a compatible software. We took notes of duration, frequency of application, the surgeon's evaluation of image quality, and the respective benefit of lioUS (15 MHz), cioUS (7 MHz), and iMRI (1.5 T). With the help of the workflow software, we meticulously analyzed ∼ 55 hours of surgery. Results  During the interventions, lioUS was used more often (76.3%) than cioUS (23.7%) and showed a better mean image quality (1 = best to 6 = worst) of 2.08 versus 3.26 with cioUS. The benefit of the lioUS application was rated with an average of 2.27, whereas the cioUS probe only reached a mean value of 3.83. The most common application of lioUS was resection control (42.6%); cioUS was used mainly for orientation (63.2%). Overall, lioUS was used more often and was rated better for both the purposes just described regarding image quality and benefit. Estimated residual tumor based on lioUS alone was lower than the final residual tumor detected with iMRI (7.5% versus 14.5%). The latter technique was rated as the best imaging modality for resection control in all cases followed by lioUS. Conclusion  We provide proof of principle for workflow assessment in cranial neurosurgery. Although iMRI remains the imaging method of choice, lioUS has shown to be beneficial in a combined setup. Evaluation of lioUS was significantly superior to cioUS in most indications except for subcortical lesions. Georg Thieme Verlag KG Stuttgart · New York.

  14. Open-array analysis of genetic variants in Egyptian patients with ...

    African Journals Online (AJOL)

    Hanaa R.M. Attia

    hypothesis: The ... A case - control study of 74 Egyptian participants; 37 patients with type 2 ..... A meta-analysis investigated the ..... amino acid at this SNP position [20,21]. .... glucose metabolism and obesity resulting in reduced beta cell function.

  15. RoboSCell: An automated single cell arraying and analysis instrument

    KAUST Repository

    Sakaki, Kelly; Foulds, Ian G.; Liu, William; Dechev, Nikolai; Burke, Robert Douglas; Park, Edward

    2009-01-01

    Single cell research has the potential to revolutionize experimental methods in biomedical sciences and contribute to clinical practices. Recent studies suggest analysis of single cells reveals novel features of intracellular processes, cell-to-cell

  16. Array comparative genomic hybridization analysis of a familial duplication of chromosome 13q: A recognizable syndrome

    NARCIS (Netherlands)

    Mathijssen, Inge B.; Hoovers, Jan M. N.; Mul, Adri N. P. M.; Man, Hai-Yen; Ket, Jan L.; Hennekam, Raoul C. M.

    2005-01-01

    We report on a family with six persons in three generations who have mild mental retardation, behavioral problems, seizures, hearing loss, strabismus, dental anomalies, hypermobility, juvenile hallux valgus, and mild dysmorphic features. Classical cytogenetic analysis showed a partial duplication of

  17. A microfluidic chip with a U-shaped microstructure array for multicellular spheroid formation, culturing and analysis

    International Nuclear Information System (INIS)

    Fu, Chien-Yu; Chang, Hwan-You; Tseng, Sheng-Yang; Yang, Shih-Mo; Hsu, Long; Liu, Cheng-Hsien

    2014-01-01

    Multicellular spheroids (MCS), formed by self-assembly of single cells, are commonly used as a three-dimensional cell culture model to bridge the gap between in vitro monolayer culture and in vivo tissues. However, current methods for MCS generation and analysis still suffer drawbacks such as being labor-intensive and of poor controllability, and are not suitable for high-throughput applications. This study demonstrates a novel microfluidic chip to facilitate MCS formation, culturing and analysis. The chip contains an array of U-shaped microstructures fabricated by photopolymerizing the poly(ethylene glycol) diacrylate hydrogel through defining the ultraviolet light exposure pattern with a photomask. The geometry of the U-shaped microstructures allowed trapping cells into the pocket through the actions of fluid flow and the force of gravity. The hydrogel is non-adherent for cells, promoting the formation of MCS. Its permselective property also facilitates exchange of nutrients and waste for MCS, while providing protection of MCS from shearing stress during the medium perfusion. Heterotypic MCS can be formed easily by manipulating the cell trapping steps. Subsequent drug susceptibility analysis and long-term culture could also be achieved within the same chip. This MCS formation and culture platform can be used as a micro-scale bioreactor and applied in many cell biology and drug testing studies. (paper)

  18. Multiplex preamplification of specific cDNA targets prior to gene expression analysis by TaqMan Arrays

    Directory of Open Access Journals (Sweden)

    Ribal María

    2008-06-01

    Full Text Available Abstract Background An accurate gene expression quantification using TaqMan Arrays (TA) could be limited by the low RNA quantity obtained from some clinical samples. The novel cDNA preamplification system, the TaqMan PreAmp Master Mix kit (TPAMMK), enables a multiplex preamplification of cDNA targets and, therefore, could provide a sufficient amount of specific amplicons for their subsequent analysis on TA. Findings A multiplex preamplification of 47 genes was performed in 22 samples prior to their analysis by TA, and relative gene expression levels of non-preamplified (NPA) and preamplified (PA) samples were compared. Overall, the mean cycle threshold (CT) decrement in the PA genes was 3.85 (ranging from 2.07 to 5.01). A high correlation (r) between the gene expression measurements of NPA and PA samples was found (mean r = 0.970, ranging from 0.937 to 0.994; p Conclusion We demonstrate that cDNA preamplification using the TPAMMK before TA analysis is a reliable approach to simultaneously measure gene expression of multiple targets in a single sample. Moreover, this procedure was validated in genes from degraded RNA samples and low-abundance expressed genes. This combined methodology could have wide applications in clinical research, where scarce amounts of degraded RNA are usually obtained and several genes need to be quantified in each sample.

  19. High-throughput analysis of ammonia oxidiser community composition via a novel, amoA-based functional gene array.

    Directory of Open Access Journals (Sweden)

    Guy C J Abell

    Full Text Available Advances in microbial ecology research are more often than not limited by the capabilities of available methodologies. Aerobic autotrophic nitrification is one of the most important and well-studied microbiological processes in terrestrial and aquatic ecosystems. We have developed and validated a microbial diagnostic microarray based on the ammonia-monooxygenase subunit A (amoA) gene, enabling the in-depth analysis of the community structure of bacterial and archaeal ammonia oxidisers. The amoA microarray has been successfully applied to analyse nitrifier diversity in marine, estuarine, soil and wastewater treatment plant environments. The microarray has moderate costs for labour and consumables and enables the analysis of hundreds of environmental DNA or RNA samples per week per person. The array has been thoroughly validated with a range of individual and complex targets (amoA clones and environmental samples, respectively), combined with parallel analysis using traditional sequencing methods. The moderate cost and high throughput of the microarray make it possible to adequately address broader questions of the ecology of microbial ammonia oxidation requiring high sample numbers and high resolution of the community composition.

  20. Nitrate-induced genes in tomato roots. Array analysis reveals novel genes that may play a role in nitrogen nutrition.

    Science.gov (United States)

    Wang, Y H; Garvin, D F; Kochian, L V

    2001-09-01

    A subtractive tomato (Lycopersicon esculentum) root cDNA library enriched in genes up-regulated by changes in plant mineral status was screened with labeled mRNA from roots of both nitrate-induced and mineral nutrient-deficient (-nitrogen [N], -phosphorus, -potassium [K], -sulfur, -magnesium, -calcium, -iron, -zinc, and -copper) tomato plants. A subset of cDNAs was selected from this library based on mineral nutrient-related changes in expression. Additional cDNAs were selected from a second mineral-deficient tomato root library based on sequence homology to known genes. These selection processes yielded a set of 1,280 mineral nutrition-related cDNAs that were arrayed on nylon membranes for further analysis. These high-density arrays were hybridized with mRNA from tomato plants exposed to nitrate at different time points after N was withheld for 48 h, for plants that were grown on nitrate/ammonium for 5 weeks prior to the withholding of N. One hundred-fifteen genes were found to be up-regulated by nitrate resupply. Among these genes were several previously identified as nitrate responsive, including nitrate transporters, nitrate and nitrite reductase, and metabolic enzymes such as transaldolase, transketolase, malate dehydrogenase, asparagine synthetase, and histidine decarboxylase. We also identified 14 novel nitrate-inducible genes, including: (a) water channels, (b) root phosphate and K(+) transporters, (c) genes potentially involved in transcriptional regulation, (d) stress response genes, and (e) ribosomal protein genes. In addition, both families of nitrate transporters were also found to be inducible by phosphate, K, and iron deficiencies. The identification of these novel nitrate-inducible genes is providing avenues of research that will yield new insights into the molecular basis of plant N nutrition, as well as possible networking between the regulation of N, phosphorus, and K nutrition.

  1. Filter arrays

    Science.gov (United States)

    Page, Ralph H.; Doty, Patrick F.

    2017-08-01

    The various technologies presented herein relate to a tiled filter array that can be used in connection with performance of spatial sampling of optical signals. The filter array comprises filter tiles, wherein a first plurality of filter tiles are formed from a first material, the first material being configured such that only photons having wavelengths in a first wavelength band pass therethrough. A second plurality of filter tiles is formed from a second material, the second material being configured such that only photons having wavelengths in a second wavelength band pass therethrough. The first plurality of filter tiles and the second plurality of filter tiles can be interspersed to form the filter array comprising an alternating arrangement of first filter tiles and second filter tiles.

  2. The critical catastrophe revisited

    International Nuclear Information System (INIS)

    De Mulatier, Clélia; Rosso, Alberto; Dumonteil, Eric; Zoia, Andrea

    2015-01-01

    The neutron population in a prototype model of nuclear reactor can be described in terms of a collection of particles confined in a box and undergoing three key random mechanisms: diffusion, reproduction due to fissions, and death due to absorption events. When the reactor is operated at the critical point, and fissions are exactly compensated by absorptions, the whole neutron population might in principle go to extinction because of the wild fluctuations induced by births and deaths. This phenomenon, which has been named critical catastrophe, is nonetheless never observed in practice: feedback mechanisms acting on the total population, such as human intervention, have a stabilizing effect. In this work, we revisit the critical catastrophe by investigating the spatial behaviour of the fluctuations in a confined geometry. When the system is free to evolve, the neutrons may display a wild patchiness (clustering). On the contrary, imposing a population control on the total population acts also against the local fluctuations, and may thus inhibit the spatial clustering. The effectiveness of population control in quenching spatial fluctuations will be shown to depend on the competition between the mixing time of the neutrons (i.e. the average time taken for a particle to explore the finite viable space) and the extinction time

  3. Magnetic moments revisited

    International Nuclear Information System (INIS)

    Towner, I.S.; Khanna, F.C.

    1984-01-01

    Consideration of core polarization, isobar currents and meson-exchange processes gives a satisfactory understanding of the ground-state magnetic moments in closed-shell-plus (or minus)-one nuclei, A = 3, 15, 17, 39 and 41. Ever since the earliest days of the nuclear shell model the understanding of magnetic moments of nuclear states of supposedly simple configurations, such as doubly closed LS shells ±1 nucleon, has been a challenge for theorists. The experimental moments, which in most cases are known with extraordinary precision, show a small yet significant departure from the single-particle Schmidt values. The departure, however, is difficult to evaluate precisely since, as will be seen, it results from a sensitive cancellation between several competing corrections, each of which can be as large as the observed discrepancy. This, then, is the continuing fascination of magnetic moments. In this contribution, we revisit the subject principally to identify the role played by isobar currents, which are of much concern at this conference. But in so doing we warn quite strongly of the dangers of considering just isobar currents in isolation; equal consideration must be given to competing processes which in this context are the mundane nuclear structure effects, such as core polarization, and the more popular meson-exchange currents

  4. Cross-interval histogram analysis of neuronal activity on multi-electrode arrays

    NARCIS (Netherlands)

    Castellone, P.; Rutten, Wim; Marani, Enrico

    2003-01-01

    Cross-neuron-interval histogram (CNIH) analysis has been performed in order to study correlated activity and connectivity between pairs of neurons in a spontaneously active developing cultured network of rat cortical cells. Thirty-eight histograms could be analyzed using two parameters, one for the

  5. Analysis of a new unidimensional model and lateral vibrations of 1-3 piezocomposite side scan sonar array

    CSIR Research Space (South Africa)

    Shatalov, MY

    2006-01-01

    Full Text Available Lamb modes of the 1-3 piezocomposites are investigated in terms of the Certon-Patat membrane model by means of a direct variational method. A new design of a 1-3 piezocomposite side scan SONAR array is considered. An implementation of the array...

  6. Design of Smart Ion-Selective Electrode Arrays Based on Source Separation through Nonlinear Independent Component Analysis

    Directory of Open Access Journals (Sweden)

    Duarte L.T.

    2014-03-01

    Full Text Available The development of chemical sensor arrays based on Blind Source Separation (BSS) provides a promising solution to overcome the interference problem associated with Ion-Selective Electrodes (ISE). The main motivation behind this new approach is to ease the time-demanding calibration stage. While the first works on this problem only considered the case in which the ions under analysis have equal valences, the present work aims at developing a BSS technique that works when the ions have different charges. In this situation, the resulting mixing model belongs to a particular class of nonlinear systems that have never been studied in the BSS literature. In order to tackle this sort of mixing process, we adopted a recurrent network as the separating system. Moreover, concerning the BSS learning strategy, we develop a mutual information minimization approach based on the notion of the differential of the mutual information. The method requires batch operation and thus can be used to perform off-line analysis. The validity of our approach is supported by experiments where the mixing model parameters were extracted from actual data.

  7. Equivalent-circuit model for stacked slot-based 2D periodic arrays of arbitrary geometry for broadband analysis

    Science.gov (United States)

    Astorino, Maria Denise; Frezza, Fabrizio; Tedeschi, Nicola

    2018-03-01

    The analysis of the transmission and reflection spectra of stacked slot-based 2D periodic structures of arbitrary geometry and the ability to devise and control their electromagnetic responses have been a matter of extensive research for many decades. The purpose of this paper is to develop an equivalent Π circuit model based on the transmission-line theory and Floquet harmonic interactions, for broadband and short longitudinal period analysis. The proposed circuit model overcomes the limits of identical and symmetrical configurations imposed by the even/odd excitation approach, exploiting both the circuit topology of a single 2D periodic array of apertures and the ABCD matrix formalism. The transmission spectra obtained through the equivalent-circuit model have been validated by comparison with full-wave simulations carried out with a finite-element commercial electromagnetic solver. This allowed for a physical insight into the spectral and angular responses of multilayer devices with arbitrary aperture shapes, guaranteeing a noticeable saving of computational resources.
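
    The record above builds its equivalent circuit on transmission-line theory and the ABCD matrix formalism. As a hedged, generic illustration of that bookkeeping only, and not the paper's Π network, the sketch below cascades two shunt-admittance sheets (standing in for periodic array layers) separated by a dielectric spacer and converts the result to a transmission coefficient; the capacitance value, spacer parameters, and function names are placeholders.

```python
import numpy as np

ETA0 = 376.730313668  # free-space wave impedance (ohms)
C0 = 299_792_458.0    # speed of light (m/s)

def shunt(Y):
    """ABCD matrix of a shunt admittance (a thin periodic array sheet)."""
    return np.array([[1.0, 0.0], [Y, 1.0]], dtype=complex)

def line(freq, length, eps_r=1.0):
    """ABCD matrix of a transmission-line section modelling a spacer layer."""
    z0 = ETA0 / np.sqrt(eps_r)
    bl = 2 * np.pi * freq * np.sqrt(eps_r) / C0 * length
    return np.array([[np.cos(bl), 1j * z0 * np.sin(bl)],
                     [1j * np.sin(bl) / z0, np.cos(bl)]], dtype=complex)

def transmission(abcd, z0=ETA0):
    """S21 of a two-port described by an ABCD matrix between equal ports."""
    A, B, C, D = abcd.ravel()
    return 2.0 / (A + B / z0 + C * z0 + D)

# Two identical capacitive sheets (placeholder 30 fF per unit cell) separated
# by a 3 mm spacer with eps_r = 2.2, swept over frequency.
C_sheet = 30e-15
for f in np.linspace(1e9, 30e9, 7):
    Y = 1j * 2 * np.pi * f * C_sheet
    total = shunt(Y) @ line(f, 3e-3, eps_r=2.2) @ shunt(Y)
    print(f"{f/1e9:5.1f} GHz  |S21| = {abs(transmission(total)):.3f}")
```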

  8. Leadership and Management Theories Revisited

    DEFF Research Database (Denmark)

    Madsen, Mona Toft

    2001-01-01

    The goal of the paper is to revisit and analyze key contributions to the understanding of leadership and management. As a part of the discussion a role perspective that allows for additional and/or integrated leader dimensions, including a change-centered, will be outlined. Seemingly, a major...

  9. Revisiting Inter-Genre Similarity

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Gouyon, Fabien

    2013-01-01

    We revisit the idea of ``inter-genre similarity'' (IGS) for machine learning in general, and music genre recognition in particular. We show analytically that the probability of error for IGS is higher than naive Bayes classification with zero-one loss (NB). We show empirically that IGS does...... not perform well, even for data that satisfies all its assumptions....

  10. Characterisation of acid dyes in forensic fibre analysis by high-performance liquid chromatography using narrow-bore columns and diode array detection.

    Science.gov (United States)

    Laing, D K; Gill, R; Blacklaws, C; Bickley, H M

    1988-06-17

    A gradient elution high-performance liquid chromatographic (HPLC) system with a diode array detector and a short narrow-bore (40 x 1 mm I.D.) column has been used to characterise a number of acid dyes. The resolution and reproducibility of the HPLC system have been evaluated and the detection limits for various dyes have been estimated. Comparisons are made with current methods of fibre dyestuff examination used in forensic science. The system has been applied to the analysis of dye extracted from single fibres. Using diode array detection, both chromatographic and spectral data can be produced in a single operation from casework sized samples.

  11. Source location in plates based on the multiple sensors array method and wavelet analysis

    International Nuclear Information System (INIS)

    Yang, Hong Jun; Shin, Tae Jin; Lee, Sang Kwon

    2014-01-01

    A new method for impact source localization in a plate is proposed based on multiple signal classification (MUSIC) and wavelet analysis. For source localization, the direction of arrival of the wave caused by an impact on a plate and the distance between the impact position and the sensor should be estimated. The direction of arrival can be estimated accurately using the MUSIC method. The distance can be obtained by using the time delay of arrival and the group velocity of the Lamb wave in a plate. The time delay is experimentally estimated using the continuous wavelet transform of the wave. Elastodynamic theory is used for the group velocity estimation.

  12. Source location in plates based on the multiple sensors array method and wavelet analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hong Jun; Shin, Tae Jin; Lee, Sang Kwon [Inha University, Incheon (Korea, Republic of)

    2014-01-15

    A new method for impact source localization in a plate is proposed based on multiple signal classification (MUSIC) and wavelet analysis. For source localization, the direction of arrival of the wave caused by an impact on a plate and the distance between the impact position and the sensor should be estimated. The direction of arrival can be estimated accurately using the MUSIC method. The distance can be obtained by using the time delay of arrival and the group velocity of the Lamb wave in a plate. The time delay is experimentally estimated using the continuous wavelet transform of the wave. Elastodynamic theory is used for the group velocity estimation.
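
    The two records above combine MUSIC for the direction of arrival with wavelet-based timing for the distance. As a generic illustration of the MUSIC step alone, and under the assumption of a uniform linear array with synthetic narrowband data (not the plate/Lamb-wave processing described in the abstracts), the sketch below builds the sample covariance, takes the noise subspace, and scans a pseudospectrum over angle; all names and parameters are illustrative.

```python
import numpy as np

def music_spectrum(snapshots, n_sources, d_over_lambda, angles_deg):
    """MUSIC pseudospectrum for a uniform linear array.

    snapshots : (n_sensors, n_snapshots) complex array of narrowband data
    d_over_lambda : element spacing in wavelengths
    """
    n_sensors = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigval, eigvec = np.linalg.eigh(R)                        # ascending eigenvalues
    En = eigvec[:, : n_sensors - n_sources]                   # noise subspace
    k = np.arange(n_sensors)
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))  # steering vector
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spectrum)

# Toy usage: one source at +20 degrees on an 8-element, half-wavelength array.
rng = np.random.default_rng(0)
n_sensors, n_snap, theta_true = 8, 200, np.deg2rad(20.0)
a_true = np.exp(2j * np.pi * 0.5 * np.arange(n_sensors) * np.sin(theta_true))
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((n_sensors, n_snap))
               + 1j * rng.standard_normal((n_sensors, n_snap)))
X = np.outer(a_true, s) + noise
angles = np.linspace(-90, 90, 361)
P = music_spectrum(X, n_sources=1, d_over_lambda=0.5, angles_deg=angles)
print("estimated DOA:", angles[int(np.argmax(P))], "degrees")
```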

  13. Criticality safety analysis of the fissile material storage arrays in the east end of building 6592

    International Nuclear Information System (INIS)

    McKeon, D.C.; Philbin, J.S.

    1981-03-01

    A criticality safety analysis of nine concrete storage holes that have been formed in the floor of the Materials Balance Area (MBA) in Building 6592 is reported. Unit cell dimensions and unit mass limits are defined for the most likely plutonium and uranium fuel types that will be stored there. Two tables of mass limits are derived. The first table is to be used for short units that can be stacked with fixed separation in the same hole. The second table will permit units greater than one foot in length, provided that the appropriate linear mass density limit (in kg/ft) is not exceeded.

  14. Optimization of oligonucleotide arrays and RNA amplification protocols for analysis of transcript structure and alternative splicing.

    Science.gov (United States)

    Castle, John; Garrett-Engele, Phil; Armour, Christopher D; Duenwald, Sven J; Loerch, Patrick M; Meyer, Michael R; Schadt, Eric E; Stoughton, Roland; Parrish, Mark L; Shoemaker, Daniel D; Johnson, Jason M

    2003-01-01

    Microarrays offer a high-resolution means for monitoring pre-mRNA splicing on a genomic scale. We have developed a novel, unbiased amplification protocol that permits labeling of entire transcripts. Also, hybridization conditions, probe characteristics, and analysis algorithms were optimized for detection of exons, exon-intron edges, and exon junctions. These optimized protocols can be used to detect small variations and isoform mixtures, map the tissue specificity of known human alternative isoforms, and provide a robust, scalable platform for high-throughput discovery of alternative splicing.

  15. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    Science.gov (United States)

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEAs) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs by processing the corresponding images acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of thresholding, the watershed transform, and object classification. The positions of the microelectrodes are obtained from the transmitted-light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis produced information on the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs are limited to simple qualitative assessment. The proposed framework also aims to standardize the image processing and to compute useful quantitative measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant for the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the estimation of the image processing parameters.
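
    The image-processing pipeline described above (thresholding and watershed for the neurons, circular Hough transform for the electrodes) can be sketched with standard scientific Python libraries. The snippet below is a rough illustration under the assumption that scikit-image and SciPy are available; parameter values such as electrode radii and peak distances are placeholders, not those used by the authors.

      import numpy as np
      from scipy import ndimage as ndi
      from scipy.spatial.distance import cdist
      from skimage import feature, filters, segmentation, transform

      def detect_electrodes(tl_image, radii):
          """Locate circular microelectrodes in the transmitted-light channel
          with a circular Hough transform; returns one (x, y, r) row per electrode."""
          edges = feature.canny(tl_image, sigma=2.0)
          hough = transform.hough_circle(edges, radii)
          _, cx, cy, r = transform.hough_circle_peaks(hough, radii, total_num_peaks=60)
          return np.column_stack([cx, cy, r])

      def segment_neurons(fluo_image):
          """Otsu threshold plus marker-based watershed on the fluorescence
          channel; returns a labelled image of neuron somata."""
          mask = fluo_image > filters.threshold_otsu(fluo_image)
          distance = ndi.distance_transform_edt(mask)
          peaks = feature.peak_local_max(distance, min_distance=10, labels=mask.astype(int))
          markers = np.zeros(mask.shape, dtype=int)
          markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
          return segmentation.watershed(-distance, markers, mask=mask)

      def neuron_to_electrode_distances(neuron_centroids, electrode_xy):
          """Distance from each neuron centroid to its nearest microelectrode."""
          return cdist(neuron_centroids, electrode_xy).min(axis=1)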

  16. Heat And Mass Transfer Analysis of a Film Evaporative MEMS Tunable Array

    Science.gov (United States)

    O'Neill, William J.

    This thesis details the heat and mass transfer analysis of a MEMS microthruster designed to provide propulsive, attitude control and thermal control capabilities to a CubeSat. This thruster is designed to function by retaining water as a propellant and applying resistive heating in order to increase the temperature of the liquid-vapor interface to either increase evaporation or induce boiling to regulate mass flow. The resulting vapor is then expanded out of a diverging nozzle to produce thrust. Because of the low operating pressure and small length scale of this thruster, unique forms of mass transfer analysis such as non-continuum gas flow were modeled using the Direct Simulation Monte Carlo method. Continuum fluid/thermal simulations using COMSOL Multiphysics have been applied to model heat and mass transfer in the solid and liquid portions of the thruster. The two methods were coupled through variables at the liquid-vapor interface and solved iteratively by the bisection method. The simulations presented in this thesis confirm the thermal valving concept. It is shown that, when power is applied to the thruster, there is a nearly linear increase in mass flow and thrust. Thus, mass flow can be regulated by regulating the applied power. This concept can also be used as a thermal control device for spacecraft.

  17. Analysis and Optimization of a Novel 2-D Magnet Array with Gaps and Staggers for a Moving-Magnet Planar Motor

    Science.gov (United States)

    Chen, Xuedong; Zeng, Lizhan

    2018-01-01

    This paper presents a novel 2-D magnet array with gaps and staggers, which is especially suitable for magnetically levitated planar motor with moving magnets. The magnetic flux density distribution is derived by Fourier analysis and superposition. The influences of gaps and staggers on high-order harmonics and flux density were analyzed, and the optimized design is presented. Compared with the other improved structures based on traditional Halbach magnet arrays, the proposed design has the lowest high-order harmonics percentage, and the characteristics of flux density meet the demand of high acceleration in horizontal directions. It is also lightweight and easy to manufacture. The proposed magnet array was built, and the calculation results have been verified with experiment. PMID:29300323

  18. Low concentration ratio solar array for low Earth orbit multi-100 kW application. Volume 1: Design, analysis and development tests

    Science.gov (United States)

    1983-01-01

    A preliminary design effort directed toward a low concentration ratio photovoltaic array system capable of delivering multihundred kilowatts (300 kW to 1000 kW range) in low Earth orbit is described. The array system consists of two or more array modules, each capable of delivering between 113 kW and 175 kW using silicon or gallium arsenide solar cells, respectively. The deployed area of an array module is 1320 square meters and consists of 4356 pyramidal concentrator elements. The module, when stowed in the Space Shuttle's payload bay, occupies a cube 3.24 meters on a side. The concentrator elements are sized for a geometric concentration ratio (GCR) of six with an aperture area of 0.25 sq. m. The structural analysis and design trades leading to the baseline design are discussed, together with the configuration and the optical, thermal and electrical performance analyses that support the design; overall performance estimates for the array are also given.

  19. 'Counterfeit deviance' revisited.

    Science.gov (United States)

    Griffiths, Dorothy; Hingsburger, Dave; Hoath, Jordan; Ioannou, Stephanie

    2013-09-01

    The field has seen a renewed interest in exploring the theory of 'counterfeit deviance' for persons with intellectual disability who sexually offend. The term was first presented in 1991 by Hingsburger, Griffiths and Quinsey as a means to differentiate in clinical assessment a subgroup of persons with intellectual disability whose behaviours appeared like paraphilia but served a function that was not related to paraphilic sexual urges or fantasies. Case observations were put forward to provide differential diagnosis of paraphilia in persons with intellectual disabilities compared to those with counterfeit deviance. The brief paper was published in a journal that is no longer available and, as such, much of what is currently written on the topic is based on secondary sources. The current paper presents a theoretical piece to revisit the original counterfeit deviance theory, to clarify the myths and misconceptions that have arisen and to evaluate the theory based on additional research and clinical findings. The authors also propose areas where there may be a basis for expansion of the theory. The theory of counterfeit deviance still has relevance as a consideration for clinicians when assessing the nature of a sexual offence committed by a person with an intellectual disability. Clinical differentiation of paraphilia from counterfeit deviance provides a foundation for intervention that is designed to specifically treat the underlying factors that contributed to the offence for a given individual. Counterfeit deviance is a concept that continues to provide areas for consideration for clinicians regarding the assessment and treatment of an individual with an intellectual disability who has sexually offended. It is not and never was an explanation for all sexually offending behaviour among persons with intellectual disabilities. © 2013 John Wiley & Sons Ltd.

  20. Izmit Foreshocks Revisited

    Science.gov (United States)

    Ellsworth, W. L.; Bulut, F.

    2016-12-01

    Much of what we know about the initiation of earthquakes comes from the temporal and spatial relationship of foreshocks to the initiation point of the mainshock. The 1999 Mw 7.6 Izmit, Turkey, earthquake was preceded by a 44-minute-long foreshock sequence. Bouchon et al. (Science, 2011) analyzed the foreshocks using a single seismic station, UCG, located to the north of the east-west fault, and concluded on the basis of waveform similarity that the foreshocks repeatedly re-ruptured the same fault patch, driven by slow slip at the base of the crust. We revisit the foreshock sequence using seismograms from 9 additional stations that recorded the four largest foreshocks (Mw 2.0 to 2.8) to better characterize the spatial and temporal evolution of the foreshock sequence and their relationship to the mainshock hypocenter. Cross-correlation timing and hypocentroid location with hypoDD reveal a systematic west-to-east propagation of the four largest foreshocks toward the mainshock hypocenter. Foreshock rupture dimensions estimated using spectral ratios imply no major overlap for the first three foreshocks. The centroid of the 4th and largest foreshock continues the eastward migration, but lies within the circular source area of the 3rd. The 3rd, however, has a low stress drop and strong directivity to the west. The mainshock hypocenter locates on the eastern edge of foreshock 4. We also re-analyzed the waveform similarity of all 18 foreshocks recorded at UCG by removing the common-mode signal and clustering the residual seismograms using the correlation coefficient as the distance metric. The smaller foreshocks cluster with the larger events in time order, sometimes as foreshocks and more commonly as aftershocks. These observations show that the Izmit foreshock sequence is consistent with a stress-transfer-driven cascade, moving systematically to the east along the fault, and that there is no observational requirement for creep as a driving mechanism.

  1. A Fusion Approach to Feature Extraction by Wavelet Decomposition and Principal Component Analysis in Transient Signal Processing of SAW Odor Sensor Array

    Directory of Open Access Journals (Sweden)

    Prashant SINGH

    2011-03-01

    This paper presents a theoretical analysis of a new approach for the development of a surface acoustic wave (SAW) sensor array based odor recognition system. The sensor array employs a single polymer interface for selective sorption of odorant chemicals in the vapor phase; the individual sensors are, however, coated with different thicknesses. The idea behind the coating thickness variation is to terminate the solvation and diffusion kinetics of vapors into the polymer at different stages of equilibration on different sensors. This is expected to generate diversity in the information content of the sensor transients. The analysis is based on wavelet decomposition of the transient signals. Single-sensor transients have been used earlier for generating odor identity signatures based on wavelet approximation coefficients. In the present work, however, we exploit the variability in diffusion kinetics due to the polymer thicknesses for making odor signatures. This is done by fusing the wavelet coefficients from the different sensors in the array and then applying principal component analysis. We find that the present approach substantially enhances the vapor class separability in feature space. The validation is done by generating synthetic sensor array data based on well-established SAW sensor theory.
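
    The fusion idea in this record (wavelet approximation coefficients from each sensor transient, concatenated across the array and passed to PCA) can be written compactly. The sketch below assumes PyWavelets and scikit-learn and uses random numbers in place of real SAW transients; the wavelet choice, decomposition level and array size are arbitrary assumptions, not the paper's settings.

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA

      def wavelet_features(transient, wavelet="db4", level=4):
          """Coarse approximation coefficients of a single sensor transient."""
          return pywt.wavedec(transient, wavelet, level=level)[0]

      def fused_features(array_transients, wavelet="db4", level=4):
          """Concatenate ('fuse') the wavelet features of every sensor in the array."""
          return np.concatenate([wavelet_features(t, wavelet, level)
                                 for t in array_transients])

      # Illustrative data: 6 sensors x 512 samples per exposure, 20 exposures.
      rng = np.random.default_rng(0)
      X = np.stack([fused_features(rng.standard_normal((6, 512))) for _ in range(20)])
      scores = PCA(n_components=2).fit_transform(X)   # 2-D feature space for class separation
      print(scores.shape)                             # (20, 2)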

  2. Analysis of ultrasonic beam profile due to change of elements' number for phased array transducer (part 2)

    International Nuclear Information System (INIS)

    Choi, Sang Woo; Lee, Joon Hyun

    1998-01-01

    The phased array offers many advantages and improvements over conventional single-element transducers such as straight-beam and angle-beam probes. The advantages of array sensors for large structures are twofold: first, array transducers provide a method of rapid beam steering and sequential addressing of a large area of interest without requiring mechanical or manual scanning, which is particularly important in real-time applications. Second, array transducers provide a method of dynamic focusing, in which the focal length of the ultrasonic beam varies as the pulse propagates through the material. Several parameters, such as the number, size and center-to-center spacing of the elements, must be chosen in designing a phased array transducer. In a previous study, the characteristics of beam steering and dynamic focusing were simulated for the ultrasonic SH-wave with a varying number of phased array transducer elements. In this study, the beam steering characteristics of a phased array transducer have been simulated for the ultrasonic SH-wave on the basis of Huygens' principle with varying center-to-center element spacing. Ultrasonic beam directivity and focusing due to changes in the time delay of each element are discussed for varying center-to-center element spacing.
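
    As a rough illustration of the Huygens-type superposition described above, the sketch below computes a linear steering delay law and the resulting far-field directivity of an ideal line array of point sources. It is a generic textbook calculation in Python, not the authors' simulation code; the element count, pitch, frequency and wave speed are made-up values.

      import numpy as np

      def steering_delays(n_elements, pitch_m, steer_deg, c_m_s):
          """Linear delay law that steers the array to steer_deg:
          tau_i = i * pitch * sin(theta) / c."""
          theta = np.deg2rad(steer_deg)
          return np.arange(n_elements) * pitch_m * np.sin(theta) / c_m_s

      def directivity(n_elements, pitch_m, freq_hz, c_m_s, steer_deg, angles_deg):
          """Normalised far-field pressure magnitude obtained by superposing the
          delayed contributions of the elements (treated as point sources)."""
          k = 2.0 * np.pi * freq_hz / c_m_s
          delays = steering_delays(n_elements, pitch_m, steer_deg, c_m_s)
          x = np.arange(n_elements) * pitch_m
          th = np.deg2rad(np.asarray(angles_deg))
          phase = k * np.outer(np.sin(th), x) - 2.0 * np.pi * freq_hz * delays
          return np.abs(np.exp(1j * phase).sum(axis=1)) / n_elements

      # Made-up example: 16 elements, 0.3 mm pitch, 5 MHz SH-wave at 3100 m/s, steered to 30 deg.
      angles = np.linspace(-90.0, 90.0, 721)
      pattern = directivity(16, 0.3e-3, 5e6, 3100.0, 30.0, angles)
      print(angles[pattern.argmax()])   # main lobe near +30 deg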

  3. Rotating positron tomographs revisited

    International Nuclear Information System (INIS)

    Townsend, D.; Defrise, M.; Geissbuhler, A.

    1994-01-01

    We have compared the performance of a PET scanner comprising two rotating arrays of detectors with that of the more conventional stationary-ring design. The same total number of detectors was used in each, and neither scanner had septa. For brain imaging, we find that the noise-equivalent count rate is greater for the rotating arrays by a factor of two. Rotating arrays have a sensitivity profile that peaks in the centre of the field of view, both axially and transaxially. In the transaxial plane, this effect offsets to a certain extent the decrease in the number of photons detected towards the centre of the brain due to self-absorption. We have also compared the performance of a rotating scanner to that of a full-ring scanner with the same number of rings. We find that a full-ring scanner with an axial extent of 16.2 cm (24 rings) is a factor of 3.5 more sensitive than a rotating scanner with 40% of the detectors and the same axial extent. (Author)

  4. Search and analysis of superdeformed and oblate states in 193Pb nucleus with the EUROGAM II multidetector array

    International Nuclear Information System (INIS)

    Ducroux, L.

    1997-01-01

    This work is devoted to the search for and analysis of superdeformed and oblate states in the 193 Pb nucleus. High-spin states of this isotope, populated via the fusion-evaporation reaction 168 Er ( 30 Si, 5n) 193 Pb, have been studied with the EUROGAM II γ multidetector array located near the VIVITRON accelerator in Strasbourg. New sorting and analysis programs have been developed, in particular for the treatment of the background. Angular distribution and linear polarisation analyses allowed us to assign the γ-transition multipolarities. Five dipole bands, corresponding to a weakly oblate-deformed shape of the nucleus, have been observed and connected to the low-lying states. The level scheme has been considerably extended, up to a spin of 61/2 ħ and an excitation energy of about 8 MeV. These structures have been interpreted as based on a high-K two-quasiproton excitation coupled to rotation-aligned quasineutrons. Six superdeformed bands, corresponding to a highly prolate-deformed shape of the nucleus, have been observed. These six bands have been interpreted as three pairs of signature partners based on quasineutron excitations. The g-factor of a K=9/2 neutron superdeformed orbital has been extracted for the first time in the lead isotopes, giving access to the magnetic properties of this extreme state of nuclear matter. All these results are discussed in terms of microscopic self-consistent mean-field Hartree-Fock calculations using the microscopic 'rotor + particle(s)' model. (author)

  5. Analysis of the impacts of Wave Energy Converter arrays on the nearshore wave climate in the Pacific Northwest

    Science.gov (United States)

    O'Dea, A.; Haller, M. C.

    2013-12-01

    As concerns over the use of fossil fuels increase, more and more effort is being put into the search for renewable and reliable sources of energy. Developments in ocean technologies have made the extraction of wave energy a promising alternative. Commercial exploitation of wave energy would require the deployment of arrays of Wave Energy Converters (WECs) that include several to hundreds of individual devices. Interactions between WECs and ocean waves result in both near-field and far-field changes in the incident wave field, including a significant decrease in wave height and a redirection of waves in the lee of the array, referred to as the wave shadow. Nearshore wave height and direction are directly related to the wave radiation stresses that drive longshore currents, rip currents and nearshore sediment transport, which suggests that significant far-field changes in the wave field due to WEC arrays could have an impact on littoral processes. The goal of this study is to investigate the changes in nearshore wave conditions and radiation stress forcing as a result of an offshore array of point-absorber type WECs using a nested SWAN model, and to determine how array size, configuration, spacing and distance from shore influence these changes. The two sites of interest are the Northwest National Marine Renewable Energy Center (NNMREC) test sites off the coast of Newport Oregon, the North Energy Test Site (NETS) and the South Energy Test Site (SETS). NETS and SETS are permitted wave energy test sites located approximately 4 km and 10 km offshore, respectively. Twenty array configurations are simulated, including 5, 10, 25, 50 and 100 devices in two and three staggered rows in both closely spaced (three times the WEC diameter) and widely spaced (ten times the WEC diameter) arrays. Daily offshore wave spectra are obtained from a regional WAVEWATCH III hindcast for 2011, which are then propagated across the continental shelf using SWAN. Arrays are represented in SWAN

  6. Electrophysiological Analysis of human Pluripotent Stem Cell-derived Cardiomyocytes (hPSC-CMs) Using Multi-electrode Arrays (MEAs).

    Science.gov (United States)

    Sala, Luca; Ward-van Oostwaard, Dorien; Tertoolen, Leon G J; Mummery, Christine L; Bellin, Milena

    2017-05-12

    Cardiomyocytes can now be derived with high efficiency from both human embryonic and human induced-Pluripotent Stem Cells (hPSC). hPSC-derived cardiomyocytes (hPSC-CMs) are increasingly recognized as having great value for modeling cardiovascular diseases in humans, especially arrhythmia syndromes. They have also demonstrated relevance as in vitro systems for predicting drug responses, which makes them potentially useful for drug-screening and discovery, safety pharmacology and perhaps eventually for personalized medicine. This would be facilitated by deriving hPSC-CMs from patients or susceptible individuals as hiPSCs. For all applications, however, precise measurement and analysis of hPSC-CM electrical properties are essential for identifying changes due to cardiac ion channel mutations and/or drugs that target ion channels and can cause sudden cardiac death. Compared with manual patch-clamp, multi-electrode array (MEA) devices offer the advantage of allowing medium- to high-throughput recordings. This protocol describes how to dissociate 2D cell cultures of hPSC-CMs to small aggregates and single cells and plate them on MEAs to record their spontaneous electrical activity as field potential. Methods for analyzing the recorded data to extract specific parameters, such as the QT and the RR intervals, are also described here. Changes in these parameters would be expected in hPSC-CMs carrying mutations responsible for cardiac arrhythmias and following addition of specific drugs, allowing detection of those that carry a cardiotoxic risk.

  7. WebaCGH: an interactive online tool for the analysis and display of array comparative genomic hybridisation data.

    Science.gov (United States)

    Frankenberger, Casey; Wu, Xiaolin; Harmon, Jerry; Church, Deanna; Gangi, Lisa M; Munroe, David J; Urzúa, Ulises

    2006-01-01

    Gene copy number variations occur both in normal cells and in numerous pathologies including cancer and developmental diseases. Array comparative genomic hybridisation (aCGH) is an emerging technology that allows detection of chromosomal gains and losses in a high-resolution format. When aCGH is performed on cDNA and oligonucleotide microarrays, the impact of DNA copy number on gene transcription profiles may be directly compared. We have created an online software tool, WebaCGH, that functions to (i) upload aCGH and gene transcription results from multiple experiments; (ii) identify significant aberrant regions using a local Z-score threshold in user-selected chromosomal segments subjected to smoothing with moving averages; and (iii) display results in a graphical format with full genome and individual chromosome views. In the individual chromosome display, data can be zoomed in/out in both dimensions (i.e. ratio and physical location) and plotted features can have 'mouse over' linking to outside databases to identify loci of interest. Uploaded data can be stored indefinitely for subsequent retrieval and analysis. WebaCGH was created as a Java-based web application using the open-source database MySQL. WebaCGH is freely accessible at http://129.43.22.27/WebaCGH/welcome.htm Xiaolin Wu (forestwu@mail.nih.gov) or Ulises Urzúa (uurzua@med.uchile.cl).
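
    The "local Z-score threshold in user-selected chromosomal segments subjected to smoothing with moving averages" mentioned above is, in essence, a smoothing of the log2 ratios along the chromosome followed by a threshold on their standardized deviation. A minimal Python sketch of that idea follows; the window size, threshold and toy data are assumptions for illustration and do not reproduce WebaCGH's actual implementation.

      import numpy as np

      def moving_average(values, window=5):
          """Centred moving average of log2 ratios ordered by genomic position."""
          kernel = np.ones(window) / window
          return np.convolve(values, kernel, mode="same")

      def aberrant_probes(log2_ratios, window=5, z_threshold=2.5):
          """Indices of probes whose smoothed ratio deviates from the chromosome
          mean by more than z_threshold local Z-scores (illustrative criterion)."""
          smooth = moving_average(np.asarray(log2_ratios, dtype=float), window)
          z = (smooth - smooth.mean()) / smooth.std(ddof=1)
          return np.flatnonzero(np.abs(z) > z_threshold)

      # Toy chromosome: 200 probes with a gained segment (~1.0 log2 ratio) at probes 90-99.
      rng = np.random.default_rng(1)
      ratios = rng.normal(0.0, 0.2, 200)
      ratios[90:100] += 1.0
      print(aberrant_probes(ratios))   # indices clustered around the gained segment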

  8. An intelligent sensor array distributed system for vibration analysis and acoustic noise characterization of a linear switched reluctance actuator.

    Science.gov (United States)

    Salvado, José; Espírito-Santo, António; Calado, Maria

    2012-01-01

    This paper proposes a distributed system for analysis and monitoring (DSAM) of vibrations and acoustic noise, which consists of an array of intelligent modules, sensor modules, a communication bus and a host PC acting as data center. The main advantages of the DSAM are its modularity, scalability, and flexibility in the use of different types of sensors/transducers, with analog or digital outputs, and for signals of different nature. Its final cost is also significantly lower than that of other available commercial solutions. The system is reconfigurable, can operate in either synchronous or asynchronous mode, with programmable sampling frequencies, 8-bit or 12-bit resolution and a memory buffer of 15 kbyte. It allows real-time data acquisition for signals of different nature in applications that require a large number of sensors, and thus it is suited to the monitoring of vibrations in Linear Switched Reluctance Actuators (LSRAs). The acquired data allow the full characterization of the LSRA in terms of its response to vibrations of structural origin, and the vibrations and acoustic noise emitted under normal operation. The DSAM can also be used for electrical machine condition monitoring, machine fault diagnosis, structural characterization and monitoring, among other applications.

  9. An Intelligent Sensor Array Distributed System for Vibration Analysis and Acoustic Noise Characterization of a Linear Switched Reluctance Actuator

    Directory of Open Access Journals (Sweden)

    Maria Calado

    2012-06-01

    This paper proposes a distributed system for analysis and monitoring (DSAM) of vibrations and acoustic noise, which consists of an array of intelligent modules, sensor modules, a communication bus and a host PC acting as data center. The main advantages of the DSAM are its modularity, scalability, and flexibility in the use of different types of sensors/transducers, with analog or digital outputs, and for signals of different nature. Its final cost is also significantly lower than that of other available commercial solutions. The system is reconfigurable, can operate in either synchronous or asynchronous mode, with programmable sampling frequencies, 8-bit or 12-bit resolution and a memory buffer of 15 kbyte. It allows real-time data acquisition for signals of different nature in applications that require a large number of sensors, and thus it is suited to the monitoring of vibrations in Linear Switched Reluctance Actuators (LSRAs). The acquired data allow the full characterization of the LSRA in terms of its response to vibrations of structural origin, and the vibrations and acoustic noise emitted under normal operation. The DSAM can also be used for electrical machine condition monitoring, machine fault diagnosis, structural characterization and monitoring, among other applications.

  10. Subclassification and Detection of New Markers for the Discrimination of Primary Liver Tumors by Gene Expression Analysis Using Oligonucleotide Arrays.

    Science.gov (United States)

    Hass, Holger G; Vogel, Ulrich; Scheurlen, Michael; Jobst, Jürgen

    2017-12-26

    The failure to correctly differentiate between intrahepatic cholangiocarcinoma [CC] and hepatocellular carcinoma [HCC] is a significant clinical problem, particularly in terms of the different treatment goals for both cancers. In this study, a specific gene expression profile to discriminate these two subgroups of liver cancer was established and potential diagnostic markers for clinical use were analyzed. To evaluate the gene expression profiles of HCC and intrahepatic CC, oligonucleotide arrays (Affymetrix U133A) were used. Overexpressed genes were checked for their potential use as new markers for discrimination, and their expression values were validated by reverse transcription polymerase chain reaction and immunohistochemistry analyses. 695 genes/expressed sequence tags (ESTs) in HCC (245 up-/450 down-regulated) and 552 genes/ESTs in CC (221 up-/331 down-regulated) were significantly dysregulated (p<0.05, fold change >2, ≥70%). Using a supervised learning method and one-way analysis of variance, a specific 270-gene expression profile that enabled rapid, reproducible differentiation between both tumors and non-malignant liver tissues was established. A panel of 12 genes (e.g. HSP90β, ERG1, GPC3, TKT, ACLY, and NME1 for HCC; SPT2, T4S3, CNX43, TTD1, HBD01 for CC) was detected and partly described for the first time as potential discrimination markers. A specific gene expression profile for the discrimination of primary liver cancer was identified and potential marker genes with feasible clinical impact were described.
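
    The record quotes the selection thresholds (p < 0.05, fold change > 2) but not the code behind them. The fragment below is a generic re-implementation of such a filter on a genes-by-samples expression matrix; the Welch t-test, the random data and the variable names are my assumptions, not the authors' actual statistics pipeline.

      import numpy as np
      from scipy import stats

      def dysregulated_genes(tumor, normal, p_cut=0.05, fc_cut=2.0):
          """Per-gene Welch t-test plus fold-change filter on linear-scale
          expression matrices (genes x samples); returns up- and down-regulated indices."""
          _, p = stats.ttest_ind(tumor, normal, axis=1, equal_var=False)
          fold = tumor.mean(axis=1) / normal.mean(axis=1)
          up = np.flatnonzero((p < p_cut) & (fold > fc_cut))
          down = np.flatnonzero((p < p_cut) & (fold < 1.0 / fc_cut))
          return up, down

      # Illustrative matrices: 1000 genes, 15 tumour and 10 non-malignant samples.
      rng = np.random.default_rng(2)
      tumor = rng.lognormal(5.0, 0.4, (1000, 15))
      normal = rng.lognormal(5.0, 0.4, (1000, 10))
      up_idx, down_idx = dysregulated_genes(tumor, normal)
      print(len(up_idx), len(down_idx))   # few or none expected for pure noise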

  11. Radiative corrections to neutrino deep inelastic scattering revisited

    International Nuclear Information System (INIS)

    Arbuzov, Andrej B.; Bardin, Dmitry Yu.; Kalinovskaya, Lidia V.

    2005-01-01

    Radiative corrections to neutrino deep inelastic scattering are revisited. One-loop electroweak corrections are re-calculated within the automatic SANC system. Terms with mass singularities are treated, including higher-order leading logarithmic corrections. The scheme dependence of corrections due to weak interactions is investigated. The results are implemented in the data analysis of the NOMAD experiment. The present theoretical accuracy in the description of the process is discussed.

  12. Ambulatory thyroidectomy: A multistate study of revisits and complications

    OpenAIRE

    Orosco, RK; Lin, HW; Bhattacharyya, N

    2015-01-01

    © 2015 American Academy of Otolaryngology - Head and Neck Surgery Foundation. Objective. Determine rates and reasons for revisits after ambulatory adult thyroidectomy. Study Design. Cross-sectional analysis of multistate ambulatory surgery and hospital databases. Setting. Ambulatory surgery data from the State Ambulatory Surgery Databases of California, Florida, Iowa, and New York for calendar years 2010 and 2011. Subjects and Methods. Ambulatory thyroidectomy cases were linked to state ambul...

  13. Tomographic array

    International Nuclear Information System (INIS)

    1976-01-01

    The configuration of a tomographic array in which the object can rotate about its axis is described. The X-ray detector is a cylindrical screen perpendicular to the axis of rotation. The X-ray source has a line-shaped focus coinciding with the axis of rotation. The beam is fan-shaped with one side of this fan lying along the axis of rotation. The detector screen is placed inside an X-ray image multiplier tube

  14. Tomographic array

    International Nuclear Information System (INIS)

    1976-01-01

    A tomographic array with the following characteristics is described. An X-ray screen serving as detector is placed before a photomultiplier tube, which itself is placed in front of a television camera connected to a set of image processors. The detector is concave towards the source and is replaceable. Different images of the object are obtained simultaneously. Optical fibers and lenses are used for transmission within the system.

  15. Protein expression profiling by antibody array analysis with use of dried blood spot samples on filter paper.

    Science.gov (United States)

    Jiang, Weidong; Mao, Ying Qing; Huang, Ruochun; Duan, Chaohui; Xi, Yun; Yang, Kai; Huang, Ruo-Pan

    2014-01-31

    Dried blood spot samples (DBSS) on filter paper offer several advantages compared to conventional serum/plasma samples: they do not require any phlebotomy or separation of blood by centrifugation; they are less invasive; they allow sample stability and shipment at room temperature; and they pose a negligible risk of infection with blood-borne viruses, such as HIV, HBV and HCV, to those who handle them. Therefore, dried blood spot samples (DBSS) on filter paper can be a quick, convenient and inexpensive means of obtaining blood samples for biomarker discovery, disease screening, diagnosis and treatment monitoring in non-hospitalized, public health settings. In this study, we investigated for the first time the potential application of dried blood spot samples (DBSS) in protein expression profiling using antibody array technology. First, optimal conditions for array assay performance using dried blood spot samples (DBSS) were established, including sample elution buffer, elution time, elution temperature and assay blocking buffer. Second, we analyzed dried blood spot samples (DBSS) using three distinct antibody array platforms, including sandwich-based antibody arrays, quantitative antibody arrays and biotin-label-based antibody arrays. In comparison with paired serum samples, detection of circulating proteins in dried blood spot samples (DBSS) correlated well for both low- and high-abundance proteins on all three antibody array platforms. In conclusion, our study strongly indicates that the novel application of multiplex antibody array platforms to analyze dried blood spot samples (DBSS) on filter paper represents a viable, cost-effective method for protein profiling, biomarker discovery and disease screening in a large, population-based survey. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Automated flow quantification in valvular heart disease based on backscattered Doppler power analysis: implementation on matrix-array ultrasound imaging systems.

    Science.gov (United States)

    Buck, Thomas; Hwang, Shawn M; Plicht, Björn; Mucci, Ronald A; Hunold, Peter; Erbel, Raimund; Levine, Robert A

    2008-06-01

    Cardiac ultrasound imaging systems are limited in the noninvasive quantification of valvular regurgitation due to indirect measurements and inaccurate hemodynamic assumptions. We recently demonstrated that the principle of integration of backscattered acoustic Doppler power times velocity can be used for flow quantification in valvular regurgitation directly at the vena contracta of a regurgitant flow jet. We now aimed to implement automated Doppler power flow analysis software on a standard cardiac ultrasound system utilizing novel matrix-array transducer technology, with a detailed description of the system requirements, components and software. This system, based on a 3.5 MHz matrix-array cardiac ultrasound scanner (Sonos 5500, Philips Medical Systems), was validated by means of comprehensive experimental signal generator trials, in vitro flow phantom trials and in vivo testing in 48 patients with mitral regurgitation of different severity and etiology, using magnetic resonance imaging (MRI) for reference. All measurements displayed good correlation to the reference values, indicating successful implementation of automated Doppler power flow analysis on a matrix-array ultrasound imaging system. Systematic underestimation of effective regurgitant orifice areas >0.65 cm(2) and volumes >40 ml was found due to the currently limited Doppler beam width, which could be readily overcome by the use of new-generation 2D matrix-array technology. Automated flow quantification in valvular heart disease based on backscattered Doppler power can be fully implemented on board a routinely used matrix-array ultrasound imaging system. Such automated Doppler power flow analysis quantifies valvular regurgitant flow directly, noninvasively, and user-independently, overcoming the practical limitations of current techniques.
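
    One way to read the stated principle, integration of backscattered Doppler power times velocity at the vena contracta, is as a discrete power-weighted velocity sum scaled by a calibration constant. The Python sketch below encodes only that reading, with invented numbers; it is not the validated algorithm implemented on the scanner.

      import numpy as np

      def doppler_power_flow(velocities_m_s, powers, power_per_unit_area):
          """Flow estimate from the power-velocity integral
          Q ~ (1 / P_area) * sum_i P_i * v_i,
          where P_area is the backscattered power per unit cross-sectional
          area (a calibration constant, assumed known here)."""
          v = np.asarray(velocities_m_s, dtype=float)
          p = np.asarray(powers, dtype=float)
          return float(np.sum(p * v) / power_per_unit_area)

      # Made-up Doppler spectrum: velocity bins in m/s with associated power.
      v = np.linspace(0.5, 5.0, 50)
      p = np.exp(-((v - 3.0) / 0.8) ** 2)
      print(doppler_power_flow(v, p, power_per_unit_area=1.0e3))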

  17. ArrayMining: a modular web-application for microarray analysis combining ensemble and consensus methods with cross-study normalization

    Directory of Open Access Journals (Sweden)

    Krasnogor Natalio

    2009-10-01

    Background: Statistical analysis of DNA microarray data provides a valuable diagnostic tool for the investigation of genetic components of diseases. To take advantage of the multitude of available data sets and analysis methods, it is desirable to combine both different algorithms and data from different studies. Applying ensemble learning, consensus clustering and cross-study normalization methods for this purpose in an almost fully automated process and linking different analysis modules together under a single interface would simplify many microarray analysis tasks. Results: We present ArrayMining.net, a web-application for microarray analysis that provides easy access to a wide choice of feature selection, clustering, prediction, gene set analysis and cross-study normalization methods. In contrast to other microarray-related web-tools, multiple algorithms and data sets for an analysis task can be combined using ensemble feature selection, ensemble prediction, consensus clustering and cross-platform data integration. By interlinking different analysis tools in a modular fashion, new exploratory routes become available, e.g. ensemble sample classification using features obtained from a gene set analysis and data from multiple studies. The analysis is further simplified by automatic parameter selection mechanisms and linkage to web tools and databases for functional annotation and literature mining. Conclusion: ArrayMining.net is a free web-application for microarray analysis combining a broad choice of algorithms based on ensemble and consensus methods, using automatic parameter selection and integration with annotation databases.

  18. Remembered Experiences and Revisit Intentions

    DEFF Research Database (Denmark)

    Barnes, Stuart; Mattsson, Jan; Sørensen, Flemming

    2016-01-01

    Tourism is an experience-intensive sector in which customers seek and pay for experiences above everything else. Remembering past tourism experiences is also crucial for an understanding of the present, including the predicted behaviours of visitors to tourist destinations. We adopt a longitudinal approach to memory data collection from psychological science, which has the potential to contribute to our understanding of tourist behaviour. In this study, we examine the impact of remembered tourist experiences in a safari park. In particular, using matched survey data collected longitudinally and PLS path modelling, we examine the impact of positive-affect tourist experiences on the development of revisit intentions. We find that longer-term remembered experiences have the strongest impact on revisit intentions, more so than predicted or immediate memory after an event. We also find that remembered...

  19. Revisiting Mutual Fund Performance Evaluation

    OpenAIRE

    Angelidis, Timotheos; Giamouridis, Daniel; Tessaromatis, Nikolaos

    2012-01-01

    Mutual fund manager excess performance should be measured relative to their self-reported benchmark rather than the return of a passive portfolio with the same risk characteristics. Ignoring the self-reported benchmark introduces biases in the measurement of stock selection and timing components of excess performance. We revisit baseline empirical evidence in mutual fund performance evaluation utilizing stock selection and timing measures that address these biases. We introduce a new factor e...

  20. Analysis of nonvolcanic tremor on the San Andreas fault near Parkfield, CA using U. S. Geological Survey Parkfield Seismic Array

    Science.gov (United States)

    Fletcher, Jon B.; Baker, Lawrence M.

    2010-10-01

    Reports by Nadeau and Dolenc (2005) that tremor had been detected near Cholame Valley spawned an effort to use UPSAR (U. S. Geological Survey Parkfield Seismic Array) to study characteristics of tremor. UPSAR was modified to record three channels of velocity at 40-50 sps continuously in January 2005 and ran for about 1 month, during which time we recorded numerous episodes of tremor. One tremor, on 21 January at 0728, was recorded with particularly high signal levels, as was another episode 3 days later. Both events were very emergent, had a frequency content between 2 and 8 Hz, and had numerous high-amplitude, short-duration arrivals within the tremor signal. Here, using the first episode as an example, we discuss an analysis procedure that yields the azimuth and apparent velocity of the tremor at UPSAR. We then provide locations for both tremor episodes. The emphasis here is on how the tremor episode evolves. Twelve stations were operating at the time of recording. The slowness of arrivals was determined using cross correlation of pairs of stations, the same method used in analyzing the main-shock data from 28 September 2004. A feature of this analysis is that 20 s of the time series were used at a time to calculate correlation; the longer windows resulted in more consistent estimates of slowness, but lower peak correlations. These values of correlation (peaks of about 0.25), however, are similar to those obtained for the S wave of a microearthquake. Observed peaks in slowness were traced back to source locations assumed to lie on the San Andreas fault. Our inferred locations for the two tremor events cluster near the locations of previously observed tremor, south of the Cholame Valley. Tremor source depths are in the 14-24 km range, which is below the seismogenic brittle zone, but above the Moho. Estimates of error do not preclude locations below the Moho, however. The tremor signal is very emergent but contains packets that are several times larger than the background
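
    The slowness analysis summarised above rests on two steps: a delay for each station pair from the peak of a windowed cross-correlation, and a least-squares fit of those delays to a plane wave crossing the array. The sketch below shows those two steps in generic Python; the plane-wave assumption, the toy array geometry and the variable names are mine, not details taken from the study.

      import numpy as np

      def arrival_delay(ref_trace, other_trace, dt_s):
          """Seconds by which 'other_trace' lags 'ref_trace', from the peak of
          their cross-correlation (e.g. over a 20 s window)."""
          a = ref_trace - ref_trace.mean()
          b = other_trace - other_trace.mean()
          cc = np.correlate(b, a, mode="full")
          return (cc.argmax() - (len(a) - 1)) * dt_s

      def horizontal_slowness(station_xy_m, delays_s):
          """Least-squares slowness vector (s_x, s_y) for a plane wave, from
          delays relative to a reference station at the origin: t_i = s . x_i."""
          s, *_ = np.linalg.lstsq(np.asarray(station_xy_m, float),
                                  np.asarray(delays_s, float), rcond=None)
          return s   # apparent velocity = 1/|s|; back-azimuth is the azimuth of -s

      # Toy example: three stations and delays consistent with ~5 km/s arriving from the east.
      xy = [[1000.0, 0.0], [0.0, 1000.0], [-800.0, -600.0]]
      print(horizontal_slowness(xy, [-0.2, 0.0, 0.16]))   # approx (-2e-4, 0) s/m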

  1. WE-D-BRA-07: Analysis of ArcCHECK Diode Array Performance for ViewRay Quality Assurance

    International Nuclear Information System (INIS)

    Ellefson, S; Culberson, W; Bednarz, B; DeWerd, L; Bayouth, J

    2015-01-01

    Purpose: Discrepancies in absolute dose values have been detected between the ViewRay treatment planning system and ArcCHECK readings when performing delivery quality assurance on the ViewRay system with the ArcCHECK-MR diode array (SunNuclear Corporation). In this work, we investigate whether these discrepancies are due to errors in the ViewRay planning and/or delivery system or due to errors in the ArcCHECK’s readings. Methods: Gamma analysis was performed on 19 ViewRay patient plans using the ArcCHECK. Frequency analysis on the dose differences was performed. To investigate whether discrepancies were due to measurement or delivery error, 10 diodes in low-gradient dose regions were chosen to compare with ion chamber measurements in a PMMA phantom with the same size and shape as the ArcCHECK, provided by SunNuclear. The diodes chosen all had significant discrepancies in absolute dose values compared to the ViewRay TPS. Absolute doses to PMMA were compared between the ViewRay TPS calculations, ArcCHECK measurements, and measurements in the PMMA phantom. Results: Three of the 19 patient plans had 3%/3mm gamma passing rates less than 95%, and ten of the 19 plans had 2%/2mm passing rates less than 95%. Frequency analysis implied a non-random error process. Out of the 10 diode locations measured, ion chamber measurements were all within 2.2% error relative to the TPS and had a mean error of 1.2%. ArcCHECK measurements ranged from 4.5% to over 15% error relative to the TPS and had a mean error of 8.0%. Conclusion: The ArcCHECK performs well for quality assurance on the ViewRay under most circumstances. However, under certain conditions the absolute dose readings are significantly higher compared to the planned doses. As the ion chamber measurements consistently agree with the TPS, it can be concluded that the discrepancies are due to ArcCHECK measurement error and not TPS or delivery system error. This work was funded by the Bhudatt Paliwal Professorship and the
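
    For orientation on the 3%/3mm and 2%/2mm figures quoted in the Results above: the gamma index combines a dose-difference criterion with a distance-to-agreement criterion, and a point "passes" when the combined value is at most 1. The following is a simplified, brute-force 1-D global gamma in Python, purely illustrative; it is not the ArcCHECK or clinical implementation used in this work.

      import numpy as np

      def gamma_index(ref_dose, meas_dose, coords_mm, dose_crit=0.03, dta_crit_mm=3.0):
          """Brute-force 1-D global gamma: for every measured point, the minimum
          combined dose-difference / distance-to-agreement metric over all
          reference points; dose_crit is a fraction of the reference maximum."""
          ref = np.asarray(ref_dose, float)
          meas = np.asarray(meas_dose, float)
          x = np.asarray(coords_mm, float)
          dd = (meas[:, None] - ref[None, :]) / (dose_crit * ref.max())
          dta = (x[:, None] - x[None, :]) / dta_crit_mm
          return np.sqrt(dd ** 2 + dta ** 2).min(axis=1)

      def passing_rate_percent(gamma):
          return 100.0 * np.mean(gamma <= 1.0)

      # Example: a flat 2 Gy reference against a measurement reading 4% high everywhere.
      x = np.arange(0.0, 100.0, 5.0)
      g = gamma_index(np.full_like(x, 2.0), np.full_like(x, 2.08), x)
      print(passing_rate_percent(g))   # 0.0 at 3%/3mm: every point fails on dose alone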

  2. Numerical Signal Analysis of Thermo-Cyclically Operated MOG Gas Sensor Arrays for Early Identification of Emissions from Overloaded Electric Cables

    Directory of Open Access Journals (Sweden)

    Rolf Seifert

    2015-10-01

    A thermo-cyclically operated multi metal oxide gas sensor (MOG) array is introduced together with a novel signal analysis approach (SimSens) for identifying the emissions from overheated cable insulation materials, thereby detecting fires originating in electrical cabinets at an early stage. The MOG array can yield specific conductance signatures suitable for identifying individual gases. The results obtained demonstrate good capability for the detection and identification of pyrolysis gas emissions at relatively low sample heating temperatures, even before a visible color change of the polyvinyl chloride (PVC) insulation material. The dynamic conductance signals were evaluated using SimSens, a numerical analysis tool designed for the simultaneous evaluation of conductance profiles. The results show promising pyrolysis gas identification and concentration determination capabilities in relation to the conductance profile shapes of model gases such as carbon monoxide (CO) and propene.

  3. Three-dimensional lithographically-defined organotypic tissue arrays for quantitative analysis of morphogenesis and neoplastic progression

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, Celeste M.; Inman, Jamie L.; Bissell, Mina J.

    2008-02-13

    Here we describe a simple micromolding method to construct three-dimensional arrays of organotypic epithelial tissue structures that approximate in vivo histology. An elastomeric stamp containing an array of posts of defined geometry and spacing is used to mold microscale cavities into the surface of type I collagen gels. Epithelial cells are seeded into the cavities and covered with a second layer of collagen. The cells reorganize into hollow tissues corresponding to the geometry of the cavities. Patterned tissue arrays can be produced in 3-4 h and will undergo morphogenesis over the following one to three days. The protocol can easily be adapted to study a variety of tissues and aspects of normal and neoplastic development.

  4. SU-F-T-458: Tracking Trends of TG-142 Parameters Via Analysis of Data Recorded by 2D Chamber Array

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrian, A; Kabat, C; Defoor, D; Saenz, D; Rasmussen, K; Kirby, N; Gutierrez, A; Papanikolaou, N; Stathakis, S [University of Texas HSC SA, San Antonio, TX (United States)

    2016-06-15

    Purpose: With the increasing QA demands on medical physicists in clinical radiation oncology, the need for an effective method of tracking clinical data has become paramount. A tool was produced which scans through data automatically recorded by a 2D chamber array and extracts the relevant information recommended by TG-142. Using this extracted information, a timely and comprehensive analysis of QA parameters can easily be performed, enabling efficient monthly checks on multiple linear accelerators simultaneously. Methods: A PTW STARCHECK chamber array was used to record several months of beam outputs from two Varian 2100 series linear accelerators and a Varian Novalis Tx. In conjunction with the chamber array, a beam quality phantom was used simultaneously to determine beam quality. A minimalist GUI was created in MATLAB that allows a user to set the file path of the data for each modality to be analyzed. These file paths are recorded to a MATLAB structure and subsequently accessed by a script written in Python (version 3.5.1), which then extracts the values required to perform monthly checks as outlined by the recommendations of TG-142. The script incorporates calculations to determine whether the values recorded by the chamber array fall within an acceptable threshold. Results: Values obtained by the script are written to a spreadsheet where results can be easily viewed, annotated with a “pass” or “fail”, and saved for further analysis. In addition to creating a new scheme for reviewing monthly checks, this application makes it possible to succinctly store data for follow-up analysis. Conclusion: By utilizing this tool, the parameters recommended by TG-142 for multiple linear accelerators can be rapidly obtained and analyzed for the evaluation of monthly checks.
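
    The thresholding-and-spreadsheet step described in the Methods (compare each extracted value with a baseline, mark pass or fail, store the row) is easy to picture in plain Python. The snippet below is a schematic stand-in using only the standard library; the parameter names, tolerances and file layout are placeholders, not the actual TG-142 tolerances or the authors' script.

      import csv

      # Placeholder relative tolerances per tracked parameter (not TG-142 values).
      TOLERANCES = {"output": 0.02, "flatness": 0.01, "symmetry": 0.01}

      def record_monthly_check(machine, measured, baseline, outfile="monthly_checks.csv"):
          """Compare measured monthly-QA values against baselines and append a
          pass/fail row per parameter to a spreadsheet-readable CSV file."""
          with open(outfile, "a", newline="") as fh:
              writer = csv.writer(fh)
              for key, tol in TOLERANCES.items():
                  rel_dev = abs(measured[key] - baseline[key]) / baseline[key]
                  writer.writerow([machine, key, measured[key], baseline[key],
                                   "pass" if rel_dev <= tol else "fail"])

      record_monthly_check("Linac-A",
                           measured={"output": 100.9, "flatness": 102.5, "symmetry": 100.6},
                           baseline={"output": 100.0, "flatness": 103.0, "symmetry": 100.5})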

  5. Numerical Analysis of Ultrasonic Beam Profile Due to the Change of the Number of Piezoelectric Elements for Phased Array Transducer

    International Nuclear Information System (INIS)

    Choi, Sang Woo; Lee, Joon Hyun

    1999-01-01

    A phased array is a multi-element piezoelectric device whose elements are individually excited by electric pulses at programmed delay times. One of the advantages of using a phased array over conventional ultrasonic transducers in nondestructive evaluation (NDE) applications is the great maneuverability of the ultrasonic beam. Several parameters, such as the number and size of the piezoelectric elements and the inter-element spacing, must be chosen in designing a phased array transducer. In this study, the characteristics of the ultrasonic beam of a phased array transducer as the number of elements is varied have been simulated for the ultrasonic SH-wave on the basis of Huygens' principle. Ultrasonic beam directivity and focusing due to changes in the time delay of each element were discussed for different numbers of piezoelectric elements. It was found that the ultrasonic beam spread much more, and hence its sound pressure decreased, as the steering angle of the beam was increased. In addition, the ability to focus the ultrasonic beam decreased gradually with increasing focal length for a fixed number of piezoelectric elements. However, the focusing ability improved as the number of constituent elements was increased.

  6. Analysis of taxines in Taxus plant material and cell cultures by hplc photodiode array and hplc-electrospray mass spectrometry

    NARCIS (Netherlands)

    Theodoridis, G.; Laskaris, G.; Rozendaal, E.L.M.; Verpoorte, R.

    2001-01-01

    A semi-purified Taxus baccata needle extract was analysed by RP-HPLC. More than 18 taxines and cinnamates were detected by photodiode array detection and LC-MS, 10 of them being positively identified. Furthermore, 10-deacetyl baccatin III (paclitaxel's main precursor) and other taxanes were also

  7. Secondary Education Students' Difficulties in Algorithmic Problems with Arrays: An Analysis Using the SOLO Taxonomy

    Science.gov (United States)

    Vrachnos, Euripides; Jimoyiannis, Athanassios

    2017-01-01

    Developing students' algorithmic and computational thinking is currently a major objective for primary and secondary education in many countries around the globe. The literature suggests that students face various difficulties in the programming process because of their mental models of basic programming constructs. Arrays constitute the first…

  8. Disposable micro-fluidic biosensor array for online parallelized cell adhesion kinetics analysis on quartz crystal resonators

    DEFF Research Database (Denmark)

    Cama, G.; Jacobs, T.; Dimaki, Maria

    2010-01-01

    among all the sensors of the array. As well, dedicated sensor interface electronics were developed and optimized for fast spectra acquisition of all 16 QCRs with a miniaturized impedance analyzer. This allowed performing cell cultivation experiments for the observation of fast cellular reaction kinetics...

  9. Novel ring resonator-based integrated photonic beamformer for broadband phased array receive antennas - part 1: design and performance analysis

    NARCIS (Netherlands)

    Meijerink, Arjan; Roeloffzen, C.G.H.; Meijerink, Roland; Zhuang, L.; Marpaung, D.A.I.; Bentum, Marinus Jan; Burla, M.; Verpoorte, Jaco; Jorna, Pieter; Huizinga, Adriaan; van Etten, Wim

    2010-01-01

    A novel optical beamformer concept is introduced that can be used for seamless control of the reception angle in broadband wireless receivers employing a large phased array antenna (PAA). The core of this beamformer is an optical beamforming network (OBFN), using ring resonator-based broadband

  10. An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.

    Science.gov (United States)

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system due to the virtual inclinometer's incompatibility with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without taking magnetic field effects into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analyses were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors utilizing beam information from the ViewRay TPS was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. It was found that the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded that the current ArcCHECK correction factors are invalid and/or inadequate to correct measurements on the ViewRay system. © 2017 The

  11. Bayes PCA Revisited

    DEFF Research Database (Denmark)

    Sporring, Jon

    Principal Component Analysis is a simple tool to obtain linear models for stochastic data and is used both for data reduction or, equivalently, noise elimination and for data analysis. Principal Component Analysis fits a multivariate Gaussian distribution to the data, and the typical method is by...
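
    As a reminder of the baseline that this Bayesian treatment revisits, classical PCA can be written as a Gaussian fit followed by an eigendecomposition of the sample covariance. The Python sketch below shows only that classical baseline, not the Bayesian variant the record discusses; the toy data are invented.

      import numpy as np

      def pca(X, n_components):
          """Classical PCA: centre the data, take the sample covariance (the
          Gaussian fit), and keep the leading eigenvectors as the linear model."""
          Xc = X - X.mean(axis=0)
          cov = np.cov(Xc, rowvar=False)
          eigval, eigvec = np.linalg.eigh(cov)                 # ascending order
          order = np.argsort(eigval)[::-1][:n_components]
          components = eigvec[:, order]
          scores = Xc @ components                             # reduced representation
          return components, eigval[order], scores

      # Toy data: 3-D samples that essentially live in a 2-D subspace.
      rng = np.random.default_rng(3)
      X = rng.multivariate_normal([0, 0, 0], [[3, 1, 0], [1, 2, 0], [0, 0, 0.1]], size=500)
      components, variances, scores = pca(X, 2)
      print(variances)   # two large eigenvalues; the near-noise direction is dropped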

  12. Cyclotron-Resonance-Maser Arrays

    International Nuclear Information System (INIS)

    Kesar, A.; Lei, L.; Dikhtyar, V.; Korol, M.; Jerby, E.

    1999-01-01

    The cyclotron-resonance-maser (CRM) array [1] is a radiation source which consists of CRM elements coupled together under a common magnetic field. Each CRM-element employs a low-energy electron-beam which performs a cyclotron interaction with the local electromagnetic wave. These waves can be coupled together among the CRM elements, hence the interaction is coherently synchronized in the entire array. The implementation of the CRM-array approach may alleviate several technological difficulties which impede the development of single-beam gyro-devices. Furthermore, it proposes new features, such as the phased-array antenna incorporated in the CRM-array itself. The CRM-array studies may lead to the development of compact, high-power radiation sources operating at low-voltages. This paper introduces new conceptual schemes of CRM-arrays, and presents the progress in related theoretical and experimental studies in our laboratory. These include a multi-mode analysis of a CRM-array, and a first operation of this device with five carbon-fiber cathodes

  13. Development of Very Long Baseline Interferometry (VLBI) techniques in New Zealand: Array simulation, image synthesis and analysis

    Science.gov (United States)

    Weston, S. D.

    2008-04-01

    This thesis presents the design and development of a process to model Very Long Baseline Interferometry (VLBI) aperture synthesis antenna arrays. In line with the Auckland University of Technology (AUT) Institute for Radiophysics and Space Research (IRSR) aims of developing the knowledge, skills and experience within New Zealand, extensive use of existing radio astronomical software has been incorporated into the process, namely AIPS (Astronomical Image Processing System), MIRIAD (a radio interferometry data reduction package) and DIFMAP (a program for synthesis imaging of visibility data from interferometer arrays of radio telescopes). This process has been used to model various antenna array configurations for two proposed New Zealand antenna sites in a VLBI array with existing Australian facilities and a possible antenna at Scott Base in Antarctica; the results are presented in an attempt to demonstrate the improvement to be gained by joint trans-Tasman VLBI observation. It is hoped that these results and this process will assist the planning and placement of proposed New Zealand radio telescopes for cooperation with groups such as the Australian Long Baseline Array (LBA), others in the Pacific Rim and possibly globally, as well as potential future involvement of New Zealand with the SKA. The developed process has also been used to model a phased building schedule for the SKA in Australia and the addition of two antennas in New Zealand. This has been presented to the wider astronomical community via the Royal Astronomical Society of New Zealand Journal, and is summarized in this thesis with some additional material. A new measure of quality ("figure of merit") for comparing the original model image and the final CLEAN image, based on normalized 2-D cross correlation, is evaluated as an alternative to the existing subjective visual comparison by an operator undertaken to date by other groups. This new unit of measure is then used in the presentation of the
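
    The "figure of merit" described here, a normalized 2-D cross correlation between the model image and the restored CLEAN image, reduces at zero lag to a single correlation coefficient. A minimal Python version of that zero-lag form is sketched below; it is my reading of the measure, applied to toy images, not the thesis code.

      import numpy as np

      def normalised_cross_correlation(model, restored):
          """Zero-lag normalised 2-D cross correlation between a model image and
          the corresponding restored (CLEAN) image; 1.0 means a perfect match."""
          a = model - model.mean()
          b = restored - restored.mean()
          return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

      # Toy check: a restored image equal to the model plus a little noise.
      rng = np.random.default_rng(4)
      truth = rng.random((64, 64))
      print(normalised_cross_correlation(truth, truth + 0.05 * rng.random((64, 64))))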

  14. Optimisation of the conditions for stripping voltammetric analysis at liquid-liquid interfaces supported at micropore arrays: a computational simulation.

    Science.gov (United States)

    Strutwolf, Jörg; Arrigan, Damien W M

    2010-10-01

    Micropore membranes have been used to form arrays of microinterfaces between immiscible electrolyte solutions (µITIES) as a basis for the sensing of non-redox-active ions. Implementation of stripping voltammetry as a sensing method at these arrays of µITIES was applied recently to detect drugs and biomolecules at low concentrations. The present study uses computational simulation to investigate the optimum conditions for stripping voltammetric sensing at the µITIES array. In this scenario, the diffusion of ions in both the aqueous and the organic phases contributes to the sensing response. The influence of the preconcentration time, the micropore aspect ratio, the location of the microinterface within the pore, the ratio of the diffusion coefficients of the analyte ion in the organic and aqueous phases, and the pore wall angle were investigated. The simulations reveal that the accessibility of the microinterfaces during the preconcentration period should not be hampered by a recessed interface and that diffusional transport in the phase where the analyte ions are preconcentrated should be minimized. This will ensure that the ions are accumulated within the micropores close to the interface and thus be readily available for back transfer during the stripping process. On the basis of the results, an optimal combination of the examined parameters is proposed, which together improve the stripping voltammetric signal and provide an improvement in the detection limit.

  15. An initial analysis of options for a UK feed-in tariff for photovoltaic energy, from an array owner's viewpoint

    International Nuclear Information System (INIS)

    Plater, Steve

    2009-01-01

    The UK government has announced the introduction from April 2010 of a feed-in tariff (FIT) for renewable energy, and initiated a consultation on its design. This paper compares three possible variants of a UK FIT for rooftop photovoltaic (PV) arrays, on the basis of calculated income and array cost payback time, and for three locations (north, central and southern England) and various levels of household electricity consumption. This modelling is based on an FIT rate equivalent to Germany's. It concludes that an FIT which paid only for PV electricity surplus to on-site needs, and exported to the grid, would mean a simple payback time too long to make array purchase appealing. Preferable would be either export to the grid of all PV electricity for FIT payment; or a lower FIT rate for electricity used on-site, plus full FIT for any surplus exported. The latter would involve significantly lower costs in feed-in tariff payments. Finally, the effect of the UK government's illustrative FIT rate for consultation is examined for the same locations and annual consumption levels.
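
    As a hedged sketch of the payback comparison the paper describes, the code below computes a simple (undiscounted) payback time for a rooftop PV array under the three tariff variants discussed: payment only for exported surplus, payment for all generation exported to the grid, and a lower rate for on-site use plus the full rate for any surplus. All figures and parameter names are invented for illustration and are not values from the paper.

```python
def payback_years(array_cost, annual_benefit):
    """Simple (undiscounted) payback time in years."""
    return array_cost / annual_benefit

# Illustrative assumptions only (not figures from the paper)
cost = 10_000.0      # array purchase cost, GBP
gen = 2_500.0        # annual generation, kWh
self_use = 0.4       # fraction of generation used on-site
retail = 0.13        # retail electricity price, GBP/kWh
fit_full = 0.43      # full feed-in tariff rate, GBP/kWh
fit_onsite = 0.30    # reduced rate for on-site use in the split variant

exported, used = gen * (1 - self_use), gen * self_use

# Variant A: FIT paid only on the exported surplus (self-use still avoids the retail price)
benefit_a = exported * fit_full + used * retail
# Variant B: all PV electricity exported to the grid and paid the full FIT rate
benefit_b = gen * fit_full
# Variant C: lower FIT rate on on-site use plus the full FIT rate on the exported surplus
benefit_c = used * (fit_onsite + retail) + exported * fit_full

for name, benefit in [("export-only", benefit_a), ("export-all", benefit_b), ("split-rate", benefit_c)]:
    print(f"{name:>12}: {payback_years(cost, benefit):5.1f} years")
```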

  16. Solution-based analysis of multiple analytes by a sensor array: toward the development of an electronic tongue

    Science.gov (United States)

    Savoy, Steven M.; Lavigne, John J.; Yoo, J. S.; Wright, John; Rodriguez, Marc; Goodey, Adrian; McDoniel, Bridget; McDevitt, John T.; Anslyn, Eric V.; Shear, Jason B.; Ellington, Andrew D.; Neikirk, Dean P.

    1998-12-01

    A micromachined sensor array has been developed for the rapid characterization of multi-component mixtures in aqueous media. The sensor functions in a manner analogous to that of the mammalian tongue, using an array composed of individually immobilized polystyrene-polyethylene glycol composite microspheres selectively arranged in micromachined etch cavities localized on silicon wafers. Sensing occurs via colorimetric or fluorometric changes to indicator molecules that are covalently bound to amine termination sites on the polymeric microspheres. The hybrid micromachined structure has been interfaced directly to a charge-coupled device that is used for the simultaneous acquisition of the optical data from the individually addressable 'taste bud' elements. With the miniature sensor array, data streams composed of red, green, and blue color patterns distinctive for the analytes in the solution are rapidly acquired. The unique combination of carefully chosen reporter molecules with water-permeable microspheres allows for the simultaneous detection and quantification of a variety of analytes. The fabrication of the sensor structures and the initial colorimetric and fluorescent responses for pH, Ca2+, Ce3+, and sugar are reported. Interfacing to microfluidic components should also be possible, producing a complete sampling/sensing system.

  17. Functionalized vertical GaN micro pillar arrays with high signal-to-background ratio for detection and analysis of proteins secreted from breast tumor cells.

    Science.gov (United States)

    Choi, Mun-Ki; Kim, Gil-Sung; Jeong, Jin-Tak; Lim, Jung-Taek; Lee, Won-Yong; Umar, Ahmad; Lee, Sang-Kwon

    2017-11-02

    The detection of cancer biomarkers has recently attracted significant attention as a means of determining the correct course of treatment with targeted therapeutics. However, because the concentration of these biomarkers in blood is usually relatively low, highly sensitive biosensors for fluorescence imaging and precise detection are needed. In this study, we have successfully developed vertical GaN micropillar (MP) based biosensors for fluorescence sensing and quantitative measurement of CA15-3 antigens. The highly ordered vertical GaN MP arrays result in the successful immobilization of CA15-3 antigens on each feature of the arrays, thereby allowing the detection of an individual fluorescence signal from the top surface of the arrays owing to the high regularity of fluorophore-tagged MP spots and relatively low background signal. Therefore, our fluorescence-labeled and CA15-3 functionalized vertical GaN-MP-based biosensor is suitable for the selective quantitative analysis of secreted CA15-3 antigens from MCF-7 cell lines, and helps in the early diagnosis and prognosis of serious diseases as well as the monitoring of the therapeutic response of breast cancer patients.

  18. Automated HAZOP revisited

    DEFF Research Database (Denmark)

    Taylor, J. R.

    2017-01-01

    Hazard and operability analysis (HAZOP) has developed from a tentative approach to hazard identification for process plants in the early 1970s to an almost universally accepted approach today, and a central technique of safety engineering. Techniques for automated HAZOP analysis were developed...

  19. Economics of vaccines revisited

    NARCIS (Netherlands)

    Postma, Maarten J.; Standaert, Baudouin A.

    2013-01-01

    Performing a total health economic analysis of a vaccine newly introduced into the market today is a challenge when using the conventional cost-effectiveness analysis we normally apply on pharmaceutical products. There are many reasons for that, such as: the uncertainty in the total benefit (direct

  20. Schroedinger's variational method of quantization revisited

    International Nuclear Information System (INIS)

    Yasue, K.

    1980-01-01

    Schroedinger's original quantization procedure is revisited in the light of Nelson's stochastic framework of quantum mechanics. It is clarified why Schroedinger's proposal of a variational problem led us to a true description of quantum mechanics. (orig.)

  1. Pipe failure probability - the Thomas paper revisited

    International Nuclear Information System (INIS)

    Lydell, B.O.Y.

    2000-01-01

    Almost twenty years ago, in Volume 2 of Reliability Engineering (the predecessor of Reliability Engineering and System Safety), a paper by H. M. Thomas of Rolls Royce and Associates Ltd. presented a generalized approach to the estimation of piping and vessel failure probability. The 'Thomas-approach' used insights from actual failure statistics to calculate the probability of leakage and conditional probability of rupture given leakage. It was intended for practitioners without access to data on the service experience with piping and piping system components. This article revisits the Thomas paper by drawing on insights from development of a new database on piping failures in commercial nuclear power plants worldwide (SKI-PIPE). Partially sponsored by the Swedish Nuclear Power Inspectorate (SKI), the R and D leading up to this note was performed during 1994-1999. Motivated by data requirements of reliability analysis and probabilistic safety assessment (PSA), the new database supports statistical analysis of piping failure data. Against the background of this database development program, the article reviews the applicability of the 'Thomas approach' in applied risk and reliability analysis. It addresses the question whether a new and expanded database on the service experience with piping systems would alter the original piping reliability correlation as suggested by H. M. Thomas

  2. Lectin-Array Blotting.

    Science.gov (United States)

    Pazos, Raquel; Echevarria, Juan; Hernandez, Alvaro; Reichardt, Niels-Christian

    2017-09-01

    Aberrant protein glycosylation is a hallmark of cancer, infectious diseases, and autoimmune or neurodegenerative disorders. Unlocking the potential of glycans as disease markers will require rapid and unbiased glycoproteomics methods for glycan biomarker discovery. The present method is a facile and rapid protocol for qualitative analysis of protein glycosylation in complex biological mixtures. While traditional lectin arrays only provide an average signal for the glycans in the mixture, which is usually dominated by the most abundant proteins, our method provides individual lectin binding profiles for all proteins separated in the gel electrophoresis step. Proteins do not have to be excised from the gel for subsequent analysis via the lectin array but are transferred by contact diffusion from the gel to a glass slide presenting multiple copies of printed lectin arrays. Fluorescently marked glycoproteins are trapped by the printed lectins via specific carbohydrate-lectin interactions and after a washing step their binding profile with up to 20 lectin probes is analyzed with a fluorescent scanner. The method produces the equivalent of 20 lectin blots in a single experiment, giving detailed insight into the binding epitopes present in the fractionated proteins. © 2017 by John Wiley & Sons, Inc.

  3. Genome-wide comparison of paired fresh frozen and formalin-fixed paraffin-embedded gliomas by custom BAC and oligonucleotide array comparative genomic hybridization: facilitating analysis of archival gliomas.

    Science.gov (United States)

    Mohapatra, Gayatry; Engler, David A; Starbuck, Kristen D; Kim, James C; Bernay, Derek C; Scangas, George A; Rousseau, Audrey; Batchelor, Tracy T; Betensky, Rebecca A; Louis, David N

    2011-04-01

    Array comparative genomic hybridization (aCGH) is a powerful tool for detecting DNA copy number alterations (CNA). Because diffuse malignant gliomas are often sampled by small biopsies, formalin-fixed paraffin-embedded (FFPE) blocks are often the only tissue available for genetic analysis; FFPE tissues are also needed to study the intratumoral heterogeneity that characterizes these neoplasms. In this paper, we present a combination of evaluations and technical advances that provide strong support for the ready use of oligonucleotide aCGH on FFPE diffuse gliomas. We first compared aCGH using bacterial artificial chromosome (BAC) arrays in 45 paired frozen and FFPE gliomas, and demonstrate a high concordance rate between FFPE and frozen DNA in an individual clone-level analysis of sensitivity and specificity, assuring that under certain array conditions, frozen and FFPE DNA can perform nearly identically. However, because oligonucleotide arrays offer advantages to BAC arrays in genomic coverage and practical availability, we next developed a method of labeling DNA from FFPE tissue that allows efficient hybridization to oligonucleotide arrays. To demonstrate utility in FFPE tissues, we applied this approach to biphasic anaplastic oligoastrocytomas and demonstrate CNA differences between DNA obtained from the two components. Therefore, BAC and oligonucleotide aCGH can be sensitive and specific tools for detecting CNAs in FFPE DNA, and novel labeling techniques enable the routine use of oligonucleotide arrays for FFPE DNA. In combination, these advances should facilitate genome-wide analysis of rare, small and/or histologically heterogeneous gliomas from FFPE tissues.

  4. Tourists' perceptions and intention to revisit Norway

    OpenAIRE

    Lazar, Ana Florina; Komolikova-Blindheim, Galyna

    2016-01-01

    Purpose - The overall purpose of this study is to explore tourists' perceptions and their intention to revisit Norway. The aim is to find out which factors drive the overall satisfaction, the willingness to recommend, and the revisit intention of international tourists who spend their holiday in Norway. Design-Method-Approach - the Theory of Planned Behavior (Ajzen 1991) is used as a framework to investigate tourists' intention and behavior towards Norway as a destination. The o...

  5. IVF and retinoblastoma revisited

    NARCIS (Netherlands)

    Dommering, Charlotte J.; van der Hout, Annemarie H.; Meijers-Heijboer, Hanne; Marees, Tamara; Moll, Annette C.

    2012-01-01

    Objective: To evaluate the suggested association between IVF, retinoblastoma, and tumor methylation characteristics. Design: Laboratory analysis. Setting: National Retinoblastoma Center in the Netherlands. Patient(s): Retinoblastoma tumors from seven children conceived by IVF or intracytoplasmic

  7. Coupling in reflector arrays

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1968-01-01

    In order to reduce the space occupied by a reflector array, it is desirable to arrange the array antennas as close to each other as possible; however, in this case coupling between the array antennas will reduce the reflecting properties of the reflector array. The purpose of the present communication...

  8. Teacher Communication Concerns Revisited: Calling into Question the Gnawing Pull towards Equilibrium

    Science.gov (United States)

    Dannels, Deanna P.

    2015-01-01

    This study revisits the long-standing teacher communication concerns framework originating over three decades ago. Analysis of 10 years of contemporary GTA teacher communication concerns reveals a typology of 10 concerns, which taken together construct teaching as a process of negotiating relationships, managing identities, and focusing attention.…

  9. Using destination image to predict visitors' intention to revisit three Hudson River Valley, New York, communities

    Science.gov (United States)

    Rudy M. Schuster; Laura Sullivan; Duarte Morais; Diane Kuehn

    2009-01-01

    This analysis explores the differences in Affective and Cognitive Destination Image among three Hudson River Valley (New York) tourism communities. Multiple regressions were used with six dimensions of visitors' images to predict future intention to revisit. Two of the three regression models were significant. The only significantly contributing independent...

  10. Classification of the medicinal plants of the genus Atractylodes using high-performance liquid chromatography with diode array and tandem mass spectrometry detection combined with multivariate statistical analysis.

    Science.gov (United States)

    Cho, Hyun-Deok; Kim, Unyong; Suh, Joon Hyuk; Eom, Han Young; Kim, Junghyun; Lee, Seul Gi; Choi, Yong Seok; Han, Sang Beom

    2016-04-01

    Analytical methods using high-performance liquid chromatography with diode array and tandem mass spectrometry detection were developed for the discrimination of the rhizomes of four Atractylodes medicinal plants: A. japonica, A. macrocephala, A. chinensis, and A. lancea. A quantitative study was performed, selecting five bioactive components, including atractylenolide I, II, III, eudesma-4(14),7(11)-dien-8-one and atractylodin, on twenty-six Atractylodes samples of various origins. Sample extraction was optimized to sonication with 80% methanol for 40 min at room temperature. High-performance liquid chromatography with diode array detection was established using a C18 column with a water/acetonitrile gradient system at a flow rate of 1.0 mL/min, and the detection wavelength was set at 236 nm. Liquid chromatography with tandem mass spectrometry was applied to certify the reliability of the quantitative results. The developed methods were validated by ensuring specificity, linearity, limit of quantification, accuracy, precision, recovery, robustness, and stability. Results showed that cangzhu contained higher amounts of atractylenolide I and atractylodin than baizhu; atractylodin contents in particular showed the greatest variation between baizhu and cangzhu. Multivariate statistical analyses, such as principal component analysis and hierarchical cluster analysis, were also employed for further classification of the Atractylodes plants. The established method was suitable for quality control of the Atractylodes plants. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
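
    As a hedged sketch of the multivariate classification step mentioned above, the example below applies principal component analysis and hierarchical cluster analysis to a matrix of marker-compound concentrations (rows are samples, columns are the five quantified components). The data, autoscaling step, and linkage choice are assumptions for illustration only and do not reproduce the paper's measurements.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Invented example matrix: 26 samples x 5 marker compounds
rng = np.random.default_rng(1)
concentrations = rng.lognormal(mean=0.0, sigma=0.5, size=(26, 5))

# Autoscale (zero mean, unit variance per compound) before multivariate analysis
Z = (concentrations - concentrations.mean(axis=0)) / concentrations.std(axis=0)

# Principal-component scores via SVD of the autoscaled matrix
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:2].T                                # first two PC scores for a scatter plot

# Hierarchical cluster analysis (Ward linkage on the autoscaled data)
tree = linkage(Z, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")   # e.g. split into two classes (baizhu vs cangzhu)
print(scores[:3], labels)
```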

  11. Revisiting the Dutch hypothesis

    NARCIS (Netherlands)

    Postma, Dirkje S.; Weiss, Scott T.; van den Berge, Maarten; Kerstjens, Huib A. M.; Koppelman, Gerard H.

    The Dutch hypothesis was first articulated in 1961, when many novel and advanced scientific techniques were not available, such as genomics techniques for pinpointing genes, gene expression, lipid and protein profiles, and the microbiome. In addition, computed tomographic scans and advanced analysis

  12. The quench front revisited

    International Nuclear Information System (INIS)

    Wendroff, B.

    1988-01-01

    The cooling of hot surfaces can be modeled in certain simple cases by a nonlinear eigenvalue problem describing the motion of a steady traveling cooling wave. Earlier work on the mathematical theory, the numerical analysis, and the asymptotics of this problem is reviewed

  13. Kilburn High Road Revisited

    Directory of Open Access Journals (Sweden)

    Cristina Capineri

    2016-07-01

    Full Text Available Drawing on John Agnew's (1987) theoretical framework for the analysis of place (location, locale and sense of place) and on Doreen Massey's (1991) interpretation of Kilburn High Road (London), the contribution develops an analysis of the notion of place in the case study of Kilburn High Road by comparing the semantics emerging from Doreen Massey's interpretation of Kilburn High Road in the late Nineties with those from a selection of noisy and unstructured volunteered geographic information collected from Flickr photos and Tweets harvested in 2014–2015. The comparison shows how sense of place is dynamic and changes over time, and explores Kilburn High Road through the categories of location, locale and sense of place derived from the qualitative analysis of VGI content and annotations. The contribution shows how VGI can contribute to discovering the unique relationship between people and place, which takes the form given by Doreen Massey to Kilburn High Road and then moves on to the many forms given by people experiencing Kilburn High Road through a photo, a Tweet or a simple narrative. Finally, the paper suggests that the analysis of VGI content can help detect the relevant features of street life, from infrastructure to citizens' perceptions, which should be taken into account for a more human-centered approach in planning or service management.

  14. B-waves revisited

    Directory of Open Access Journals (Sweden)

    Andreas Spiegelberg

    2016-12-01

    With the still unmet need for a clinically acceptable method for acquiring intracranial compliance, and the revival of ICP waveform analysis, B-waves are moving back into the research focus. Herein we provide a concise review of the literature on B-waves, including a critical assessment of non-invasive methods for obtaining B-wave surrogates.

  15. Revisit to diffraction anomalous fine structure

    International Nuclear Information System (INIS)

    Kawaguchi, T.; Fukuda, K.; Tokuda, K.; Shimada, K.; Ichitsubo, T.; Oishi, M.; Mizuki, J.; Matsubara, E.

    2014-01-01

    The diffraction anomalous fine structure method has been revisited by applying this measurement technique to polycrystalline samples and using an analytical method with the logarithmic dispersion relation. The diffraction anomalous fine structure (DAFS) method that is a spectroscopic analysis combined with resonant X-ray diffraction enables the determination of the valence state and local structure of a selected element at a specific crystalline site and/or phase. This method has been improved by using a polycrystalline sample, channel-cut monochromator optics with an undulator synchrotron radiation source, an area detector and direct determination of resonant terms with a logarithmic dispersion relation. This study makes the DAFS method more convenient and saves a large amount of measurement time in comparison with the conventional DAFS method with a single crystal. The improved DAFS method has been applied to some model samples, Ni foil and Fe3O4 powder, to demonstrate the validity of the measurement and the analysis of the present DAFS method.

  16. Design of a Weighted-Rotor Energy Harvester Based on Dynamic Analysis and Optimization of Circular Halbach Array Magnetic Disk

    Directory of Open Access Journals (Sweden)

    Yu-Jen Wang

    2015-03-01

    Full Text Available This paper proposes the design of a weighted-rotor energy harvester (WREH), in which the oscillation is caused by the periodic change of the tangential component of gravity, to harvest kinetic energy from a rotating wheel. When a WREH is designed with a suitable characteristic length, the rotor's natural frequency changes according to the wheel rotation speed and the rotor oscillates at a wide angle and high angular velocity to generate a large amount of power. The magnetic disk is designed according to an optimized circular Halbach array. The optimized circular Halbach array magnetic disk provides the largest induced EMF for different sector-angle ratios for the same magnetic disk volume. This study examined the output voltage and power by considering the constant and accelerating plate-rotation speeds, respectively. This paper discusses the effects of the angular acceleration of a rotating wheel corresponding to the dynamic behaviors of a weighted rotor. The average output power is 399 to 535 microwatts at plate-rotation speeds from 300 to 500 rpm, enabling the WREH to be a suitable power source for a tire-pressure monitoring system.

  17. Design and Analysis of Printed Yagi-Uda Antenna and Two-Element Array for WLAN Applications

    Directory of Open Access Journals (Sweden)

    Cai Run-Nan

    2012-01-01

    Full Text Available A printed director antenna with a compact structure is proposed. The antenna is fed by a balanced microstrip-slotline and makes good use of space to reduce the feeding network area and the size of the antenna. According to the simulation results of CST MICROWAVE STUDIO software, the broadband characteristics and directional radiation properties of the antenna are explained. The operating bandwidth is 1.8 GHz–3.5 GHz with reflection coefficient less than −10 dB. Antenna gain in band can achieve 4.5–6.8 dBi, and the overall size of the antenna is smaller than 0.34λ0 × 0.58λ0. The antenna is then developed into a two-element antenna array, the working frequency and relative bandwidth of which are 2.15–2.87 GHz and 28.7%, respectively. Compared with the single antenna, the gain of the antenna array has increased by 2 dB. Thus the proposed antenna has a compact structure, relatively small size, and wide bandwidth, and it can be widely used in PCS/UMTS/WLAN/WiMAX fields.

  18. Fast batch injection analysis of H2O2 using an array of Pt-modified gold microelectrodes obtained from split electronic chips

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco, Bruno D.; Valerio, Jaqueline [Centro de Ciencias e Humanidades - Universidade Presbiteriana Mackenzie, Rua da Consolacao, 896, 01302-907 Sao Paulo, SP (Brazil); Angnes, Lucio [Departamento de Quimica Fundamental, Instituto de Quimica da USP, Av. Prof. Lineu Prestes, 748, 05508-000 Cidade Universitaria, Sao Paulo, SP (Brazil); Pedrotti, Jairo J., E-mail: jpedrotti@mackenzie.br [Centro de Ciencias e Humanidades - Universidade Presbiteriana Mackenzie, Rua da Consolacao, 896, 01302-907 Sao Paulo, SP (Brazil)

    2011-06-24

    Highlights: an array of gold microelectrodes modified with Pt was used for batch injection analysis of H2O2 in rainwater; the microelectrode array (n = 14) was obtained from electronic chips developed for surface mounted device technology; the analytical frequency of the method can attain 300 determinations per hour; the volume-weighted mean concentration of H2O2 in the rainwater investigated (n = 25) was 14.2 µmol L⁻¹. Abstract: A fast and robust analytical method for amperometric determination of hydrogen peroxide (H2O2) based on batch injection analysis (BIA) on an array of gold microelectrodes modified with platinum is proposed. The gold microelectrode array (n = 14) was obtained from electronic chips developed for surface mounted device (SMD) technology, whose size offers advantages for adapting them to batch cells. The effects of the dispensing rate, the volume injected, the distance between the platinum microelectrodes and the pipette tip, as well as the volume of solution in the cell on the analytical response were evaluated. The method allows amperometric determination of H2O2 in the concentration range from 0.8 µmol L⁻¹ to 100 µmol L⁻¹. The analytical frequency can attain 300 determinations per hour and the detection limit was estimated at 0.34 µmol L⁻¹ (3σ). The anodic current peaks obtained after a series of 23 successive injections of 50 µL of 25 µmol L⁻¹ H2O2 showed an RSD < 0.9%. To ensure good selectivity for H2O2 detection, its determination was performed in a differential mode, with selective destruction of H2O2 by catalase in 10 mmol L⁻¹ phosphate buffer solution. Practical application of the analytical procedure involved H2O2 determination in rainwater of Sao Paulo City. A comparison of the results obtained by the proposed amperometric method with another one which

  19. High-Throughput Analysis With 96-Capillary Array Electrophoresis and Integrated Sample Preparation for DNA Sequencing Based on Laser Induced Fluorescence Detection

    Energy Technology Data Exchange (ETDEWEB)

    Xue, Gang [Iowa State Univ., Ames, IA (United States)

    2001-01-01

    The purpose of this research was to improve the fluorescence detection for the multiplexed capillary array electrophoresis, extend its use beyond the genomic analysis, and to develop an integrated micro-sample preparation system for high-throughput DNA sequencing. The authors first demonstrated multiplexed capillary zone electrophoresis (CZE) and micellar electrokinetic chromatography (MEKC) separations in a 96-capillary array system with laser-induced fluorescence detection. Migration times of four kinds of fluoresceins and six polyaromatic hydrocarbons (PAHs) are normalized to one of the capillaries using two internal standards. The relative standard deviations (RSD) after normalization are 0.6-1.4% for the fluoresceins and 0.1-1.5% for the PAHs. Quantitative calibration of the separations based on peak areas is also performed, again with substantial improvement over the raw data. This opens up the possibility of performing massively parallel separations for high-throughput chemical analysis for process monitoring, combinatorial synthesis, and clinical diagnosis. The authors further improved the fluorescence detection by step laser scanning. A computer-controlled galvanometer scanner is adapted for scanning a focused laser beam across a 96-capillary array for laser-induced fluorescence detection. The signal at a single photomultiplier tube is temporally sorted to distinguish among the capillaries. The limit of detection for fluorescein is 3 × 10⁻¹¹ M (S/N = 3) for 5 mW of total laser power scanned at 4 Hz. The observed cross-talk among capillaries is 0.2%. Advantages include the efficient utilization of light due to the high duty-cycle of step scan, good detection performance due to the reduction of stray light, ruggedness due to the small mass of the galvanometer mirror, low cost due to the simplicity of components, and flexibility due to the independent paths for excitation and emission.
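
    The normalization to internal standards described above can be illustrated with a short, hedged sketch: migration times in each capillary are mapped linearly onto a reference capillary using the two internal-standard peaks. The function and example values are assumptions made for illustration, not data from the thesis.

```python
def normalize_migration_times(times, std1, std2, ref_std1, ref_std2):
    """Linearly map migration times from one capillary onto a reference capillary.

    std1/std2 are the migration times of the two internal standards in this
    capillary; ref_std1/ref_std2 are the same standards in the reference
    capillary. Analyte times are mapped onto the line through those two points.
    """
    slope = (ref_std2 - ref_std1) / (std2 - std1)
    return [ref_std1 + slope * (t - std1) for t in times]

# Example: standards at 3.10 and 7.40 min here vs 3.00 and 7.20 min in the reference capillary
analyte_times = [4.25, 5.80, 6.95]
print(normalize_migration_times(analyte_times, 3.10, 7.40, 3.00, 7.20))
```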

  20. Revisiting software ecosystems research

    DEFF Research Database (Denmark)

    Manikas, Konstantinos

    2016-01-01

    ‘Software ecosystems’ is argued to have first appeared as a concept more than 10 years ago, and software ecosystem research started to take off in 2010. We conduct a systematic literature study, based on the most extensive literature review in the field to date, with two primary aims: (a) to provide an updated overview of the field and (b) to document evolution in the field. In total, we analyze 231 papers from 2007 until 2014 and provide an overview of the research in software ecosystems. Our analysis reveals a field that is rapidly growing both in volume and empirical focus while becoming more mature ... from evolving. We propose means for future research and the community to address them. Finally, our analysis shapes the view of the field as having evolved outside the existing definitions of software ecosystems and thus proposes an update of the definition of software ecosystems.

  1. Environmental policy performance revisited

    DEFF Research Database (Denmark)

    Daugbjerg, Carsten; Sønderskov, Kim Mannemar

    2012-01-01

    Studies of environmental policy performance tend to concentrate on the impact of particular policy institutions or of single policy instruments. However, environmental policies most often consist of a package of policy instruments. Further, these studies pay no or very little attention to policy instruments directed at the demand side of the market. Therefore this article develops a policy typology for government intervention aimed at creating green markets. The typology distinguishes between four types of policy based on the balance between the supply-side and demand-side policy instruments. On the basis of the typology, a hypothesis on their ability to expand green markets is generated and tested in a comparative analysis of the performance of organic food policies in Denmark, Sweden, the UK and the US, focusing on their impact on organic consumption. Our analysis demonstrates that cross...

  2. Scanner calibration revisited

    Directory of Open Access Journals (Sweden)

    Pozhitkov Alexander E

    2010-07-01

    Full Text Available Abstract. Background: Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2) reported usage of a Full Moon BioSystems slide for calibration. Inspired by the Shi et al. work, we have calibrated microarray scanners in our previous research. We were puzzled, however, that most of the signal intensities from a biological sample fell below the sensitivity threshold level determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide as well as the accuracy of the analysis performed by Shi et al. Methods: Signal intensities were recorded on three different microarray scanners at various photomultiplier gain levels using the same calibration slide from Full Moon BioSystems. Data analysis was conducted on raw signal intensities without normalization or transformation of any kind. A weighted least-squares method was used to fit the data. Results: We found that the initial analysis performed by Shi et al. did not take into account autofluorescence of the Full Moon BioSystems slide, which led to a grossly distorted microarray scanner response. Our analysis revealed that a power-law function, which explicitly accounts for the slide autofluorescence, perfectly described the relationship between signal intensities and fluorophore quantities. Conclusions: Microarray scanners respond in a much less distorted fashion than was reported by Shi et al. Full Moon BioSystems calibration slides are inadequate for performing calibration. We recommend against using these slides.
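
    A hedged sketch of the fitting approach described in this record: signal intensity is modeled as a power law of fluorophore quantity plus a constant autofluorescence offset and fitted by weighted least squares. The model form, parameter names, and data are illustrative assumptions, not the authors' code or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def scanner_response(quantity, a, gamma, background):
    """Power-law scanner response with a constant slide-autofluorescence term."""
    return a * quantity ** gamma + background

# Invented example data: fluorophore quantities (arbitrary units) and measured intensities
q = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
intensity = np.array([210.0, 260.0, 480.0, 1200.0, 3800.0, 11000.0, 36000.0])
sigma = 0.05 * intensity + 20.0         # assumed measurement uncertainties used as weights

params, cov = curve_fit(scanner_response, q, intensity, p0=(300.0, 1.0, 200.0), sigma=sigma)
a, gamma, background = params
print(f"gain={a:.1f}, exponent={gamma:.2f}, autofluorescence={background:.0f}")
```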

  3. ESPRIT And Uniform Linear Arrays

    Science.gov (United States)

    Roy, R. H.; Goldburg, M.; Ottersten, B. E.; Swindlehurst, A. L.; Viberg, M.; Kailath, T.

    1989-11-01

    ESPRIT is a recently developed and patented technique for high-resolution estimation of signal parameters. It exploits an invariance structure designed into the sensor array to achieve a reduction in computational requirements of many orders of magnitude over previous techniques such as MUSIC, Burg's MEM, and Capon's ML, and in addition achieves performance improvement as measured by parameter estimate error variance. It is also manifestly more robust with respect to sensor errors (e.g., gain, phase, and location errors) than other methods. Whereas ESPRIT only requires that the sensor array possess a single invariance, best visualized by considering two identical but otherwise arbitrary arrays of sensors displaced (but not rotated) with respect to each other, many arrays currently in use in various applications are uniform linear arrays of identical sensor elements. Phased array radars are commonplace in high-resolution direction finding systems, and uniform tapped delay lines (i.e., constant rate A/D converters) are the rule rather than the exception in digital signal processing systems. Such arrays possess many invariances and are amenable to other types of analysis, which is one of the main reasons such structures are so prevalent. Recent developments in high-resolution algorithms of the signal/noise subspace genre, including total least squares (TLS) ESPRIT applied to uniform linear arrays, are summarized. ESPRIT is also shown to be a generalization of the root-MUSIC algorithm (applicable only to the case of uniform linear arrays of omnidirectional sensors and unimodular cisoids). Comparisons with various estimator bounds, including Cramér-Rao bounds, are presented.
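
    As a hedged illustration of the rotational-invariance idea described above for a uniform linear array, the sketch below implements a basic least-squares ESPRIT direction-of-arrival estimate from a sample covariance matrix; the total-least-squares refinement mentioned in the record is omitted for brevity, and the array geometry, names, and parameters are assumptions for illustration.

```python
import numpy as np

def esprit_doa(snapshots, n_sources, spacing_in_wavelengths=0.5):
    """Least-squares ESPRIT for a uniform linear array.

    snapshots: (n_elements, n_snapshots) complex array of sensor outputs.
    Returns estimated directions of arrival in degrees.
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)
    Es = eigvecs[:, -n_sources:]                              # signal subspace (largest eigenvalues)
    Es1, Es2 = Es[:-1, :], Es[1:, :]                          # two subarrays displaced by one element
    Psi = np.linalg.lstsq(Es1, Es2, rcond=None)[0]            # rotational-invariance operator
    phases = np.angle(np.linalg.eigvals(Psi))                 # = 2*pi*d*sin(theta)/lambda
    return np.degrees(np.arcsin(phases / (2 * np.pi * spacing_in_wavelengths)))

# Example: two plane waves at -20 and 35 degrees on an 8-element, half-wavelength-spaced array
M, N, angles = 8, 200, np.radians([-20.0, 35.0])
steering = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(angles)))
rng = np.random.default_rng(0)
signals = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
print(np.sort(esprit_doa(steering @ signals + noise, n_sources=2)))
```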

  4. Seismometer array station processors

    International Nuclear Information System (INIS)

    Key, F.A.; Lea, T.G.; Douglas, A.

    1977-01-01

    A description is given of the design, construction and initial testing of two types of Seismometer Array Station Processor (SASP), one to work with data stored on magnetic tape in analogue form, the other with data in digital form. The purpose of a SASP is to detect the short period P waves recorded by a UK-type array of 20 seismometers and to edit these on to a digital library tape or disc. The edited data are then processed to obtain a rough location for the source and to produce seismograms (after optimum processing) for analysis by a seismologist. SASPs are an important component in the scheme for monitoring underground explosions advocated by the UK in the Conference of the Committee on Disarmament. With digital input a SASP can operate at 30 times real time using a linear detection process and at 20 times real time using the log detector of Weichert. Although the log detector is slower, it has the advantage over the linear detector that signals with lower signal-to-noise ratio can be detected and spurious large amplitudes are less likely to produce a detection. It is recommended, therefore, that where possible array data should be recorded in digital form for input to a SASP and that the log detector of Weichert be used. Trial runs show that a SASP is capable of detecting signals down to signal-to-noise ratios of about two with very few false detections, and at mid-continental array sites it should be capable of detecting most, if not all, the signals with magnitude above msub(b) 4.5; the UK argues that, given a suitable network, it is realistic to hope that sources of this magnitude and above can be detected and identified by seismological means alone. (author)

  5. Characterization and error analysis of an N×N unfolding procedure applied to filtered, photoelectric x-ray detector arrays. II. Error analysis and generalization

    Directory of Open Access Journals (Sweden)

    D. L. Fehl

    2010-12-01

    Full Text Available A five-channel, filtered-x-ray-detector (XRD) array has been used to measure the time-dependent, soft-x-ray flux emitted by z-pinch plasmas at the Z pulsed-power accelerator (Sandia National Laboratories, Albuquerque, New Mexico, USA). The preceding, companion paper [D. L. Fehl et al., Phys. Rev. ST Accel. Beams 13, 120402 (2010)] describes an algorithm for spectral reconstructions (unfolds) and spectrally integrated flux estimates from data obtained by this instrument. The unfolded spectrum S_unfold(E,t) is based on N = 5 first-order B-splines (histograms) in contiguous unfold bins j = 1, ..., N; the recovered x-ray flux F_unfold(t) is estimated as ∫S_unfold(E,t)dE, where E is x-ray energy and t is time. This paper adds two major improvements to the preceding unfold analysis: (a) Error analysis: both data noise and response-function uncertainties are propagated into S_unfold(E,t) and F_unfold(t). Noise factors ν are derived from simulations to quantify algorithm-induced changes in the noise-to-signal ratio (NSR) for S_unfold in each unfold bin j and for F_unfold (ν ≡ NSR_output/NSR_input): for S_unfold, 1 ≲ ν_j ≲ 30, an outcome that is strongly spectrally dependent; for F_unfold, 0.6 ≲ ν_F ≲ 1, a result that is less spectrally sensitive and corroborated independently. For nominal z-pinch experiments, the combined uncertainty (noise and calibrations) in F_unfold(t) at peak is estimated to be ∼15%. (b) Generalization of the unfold method: spectral sensitivities (called here passband functions) are constructed for S_unfold and F_unfold. Predicting how the unfold algorithm reconstructs arbitrary spectra is thereby reduced to quadratures. These tools allow one to understand and quantitatively predict algorithmic distortions (including negative artifacts), to identify potentially troublesome spectra, and to design more useful response functions.

  6. New inflation revisited

    International Nuclear Information System (INIS)

    Brandenberger, R.H.

    1986-01-01

    Cosmological phase transitions are examined using a new approach based on the dynamical analysis of the equations of motion of quantum fields rather than on static effective potential considerations. In many models the universe enters a period of exponential expansion required for an inflationary cosmology. Analytical methods show that this will be the case if the interaction rate due to quantum field nonlinearities is small compared to the expansion rate of the universe. They derive a heuristic criterion for the maximal value of the coupling constant for which they expect inflation. The prediction is in good agreement with numerical results

  7. PIXE: early NAA revisited

    International Nuclear Information System (INIS)

    Lyon, W.S.

    1982-01-01

    A short comparative evaluation of the methods of neutron activation analysis (NAA) and particle-induced X-ray emission (PIXE) is given, based on the Proceedings of the 2nd International Conference on PIXE and its analytical applications, which took place June 9-12, 1980, in Lund, Sweden. 'The PIXE people have paralleled NAA people - in work, problems and mistakes,' summarises the author, 'but PIXE was proved and established as a standard analytical method.' The method has proved extremely valuable in microprobe applications. (Sz.J.)

  8. Rainflow counting revisited

    Energy Technology Data Exchange (ETDEWEB)

    Soeker, H [Deutsches Windenergie-Institut (Germany)

    1996-09-01

    As the state-of-the-art method, the rainflow counting technique is presently applied everywhere in fatigue analysis. However, the author feels that the potential of the technique is not fully recognized in the wind energy industry, as it is used, most of the time, as a mere data reduction technique, disregarding some of the inherent information in the rainflow counting results. The ideas described in the following aim at exploiting this information and making it available for use in the design and verification process. (au)
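
    To make the counting step concrete, the sketch below is a minimal three-point rainflow counter in the spirit of ASTM E1049: the load history is reduced to reversals and cycles are extracted with a stack, with unclosed ranges reported as half cycles. It is a simplified, hedged illustration rather than a certified implementation.

```python
def turning_points(series):
    """Reduce a load history to its sequence of local extrema (reversals)."""
    pts = [series[0]]
    for x in series[1:]:
        if x == pts[-1]:
            continue
        if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) > 0:
            pts[-1] = x          # still moving in the same direction: extend the excursion
        else:
            pts.append(x)
    return pts

def rainflow(series):
    """Three-point rainflow counting (simplified, in the spirit of ASTM E1049).

    Returns a list of (range, count) pairs, where count is 1.0 for a closed
    cycle and 0.5 for a residual half cycle.
    """
    cycles, stack = [], []
    for point in turning_points(series):
        stack.append(point)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])   # most recent range
            y = abs(stack[-2] - stack[-3])   # previous range
            if x < y:
                break                        # need more data before y can be counted
            if len(stack) == 3:
                cycles.append((y, 0.5))      # range containing the start point: half cycle
                stack.pop(0)
            else:
                cycles.append((y, 1.0))      # closed cycle: remove its two inner points
                del stack[-3:-1]
    for a, b in zip(stack, stack[1:]):
        cycles.append((abs(a - b), 0.5))     # remaining residue counted as half cycles
    return cycles

print(rainflow([0, 5, -3, 4, -2, 6, -4, 2, 0]))
```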

  9. Economics of vaccines revisited.

    Science.gov (United States)

    Postma, Maarten J; Standaert, Baudouin A

    2013-05-01

    Performing a total health economic analysis of a vaccine newly introduced into the market today is a challenge when using the conventional cost-effectiveness analysis we normally apply to pharmaceutical products. There are many reasons for that, such as: the uncertainty in the total benefit (direct and indirect) to be measured in a population when using a cohort model; (1) appropriate rules about discounting the long-term impact of vaccines are absent, therefore jeopardizing their value at the initial investment; (2) the presence of opposite contexts when introducing the vaccine in the developed vs. the developing world, with high benefits and low initial health care investment for the latter vs. marginal benefit and high cost for the former, with a corresponding paradox of the vaccine becoming very cost-effective in low-income countries but rather medium in middle-low to high-middle income countries; (3) and the type of trial assessment for the newer vaccines is now often performed with immunogenicity reaction instead of clinical endpoints, which still leaves questions about their real impact and their head-to-head comparison. (4)

  10. Data Interactive Publications Revisited

    Science.gov (United States)

    Domenico, B.; Weber, W. J.

    2011-12-01

    A few years back, the authors presented examples of online documents that allowed the reader to interact directly with datasets, but there were limitations that restricted the interaction to specific desktop analysis and display tools that were not generally available to all readers of the documents. Recent advances in web service technology and related standards are making it possible to develop systems for publishing online documents that enable readers to access, analyze, and display the data discussed in the publication from the perspective and in the manner from which the author wants it to be represented. By clicking on embedded links, the reader accesses not only the usual textual information in a publication, but also data residing on a local or remote web server as well as a set of processing tools for analyzing and displaying the data. With the option of having the analysis and display processing provided on the server, there are now a broader set of possibilities on the client side where the reader can interact with the data via a thin web client, a rich desktop application, or a mobile platform "app." The presentation will outline the architecture of data interactive publications along with illustrative examples.

  11. The Lisse effect revisited.

    Science.gov (United States)

    Weeks, Edwin P

    2002-01-01

    The Lisse effect is a rarely noted phenomenon occurring when infiltration caused by intense rain seals the surface soil layer to airflow, trapping air in the unsaturated zone. Compression of air by the advancing front results in a pressure increase that produces a water-level rise in an observation well screened below the water table that is several times as large as the distance penetrated by the wetting front. The effect is triggered by intense rains and results in a very rapid water-level rise, followed by a recession lasting a few days. The Lisse effect was first noted and explained by Thal Larsen in 1932 from water-level observations obtained in a shallow well in the village of Lisse, Holland. The original explanation does not account for the increased air pressure pushing up on the bottom of the wetting front. Analysis of the effect of this upward pressure indicates that a negative pressure head at the base of the wetting front, ψ_f, analogous to that postulated by Green and Ampt (1911) to explain initially rapid infiltration rates into unsaturated soils, is involved in producing the Lisse effect. Analysis of recorded observations of the Lisse effect by Larsen and others indicates that the water-level rise, which typically ranges from 0.10 to 0.55 m, should be only slightly larger than ψ_f and that the depth of penetration of the wetting front is no more than several millimeters.

  12. MR Cygni revisited

    International Nuclear Information System (INIS)

    Linnell, A.P.; Kallrath, J.

    1986-08-01

    New analysis tools and additional unanalyzed observations justify a reanalysis of MR Cygni. The reanalysis applied successively more restrictive physical models, each with an optimization program. The final model assigned separate first and second order limb darkening coefficients, from model atmospheres, to individual grid points. Proper operation of the optimization procedure was tested on simulated observational data, produced by light synthesis with assigned system parameters, and modulated by simulated observational error. The iterative solution converged to a weakly-determined mass ratio of 0.75. Assuming the B3 primary component is on the main sequence, the HR diagram location of the secondary from the light ratio (ordinate) and adjusted T_eff (abscissa) was calculated. The derived mass ratio, together with a main-sequence mass for the B3 component, implies a main-sequence secondary spectral type of B4. The photometrically-determined secondary radii agree with this spectral type, in marginal disagreement with the B7 type from the HR diagram analysis. The individual masses, derived from the radial velocity curve of the primary component, the photometrically-determined i, and alternative values of derived mass ratio are seriously discrepant with main sequence objects. The imputed physical status of the system is in disagreement with representations that have appeared in the literature

  13. The Levy sections theorem revisited

    International Nuclear Information System (INIS)

    Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Silva, Sergio Da

    2007-01-01

    This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets

  14. The Levy sections theorem revisited

    Science.gov (United States)

    Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio

    2007-06-01

    This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets.

  15. Single-electron tunnel junction array

    International Nuclear Information System (INIS)

    Likharev, K.K.; Bakhvalov, N.S.; Kazacha, G.S.; Serdyukova, S.I.

    1989-01-01

    The authors have carried out an analysis of statics and dynamics of uniform one-dimensional arrays of ultrasmall tunnel junctions. The correlated single-electron tunneling in the junctions of the array results in its behavior qualitatively similar to that of the Josephson transmission line. In particular, external electric fields applied to the array edges can inject single-electron-charged solitons into the array interior. The shape of such a soliton and the character of its interactions with other solitons and the array edges are very similar to those of the Josephson vortices (sine-Gordon solitons) in the Josephson transmission line. Under certain conditions, a coherent motion of the soliton train along the array is possible, resulting in generation of narrowband SET oscillations with frequency f_s = ⟨I⟩/e, where ⟨I⟩ is the dc current flowing along the array
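
    The frequency relation quoted above (oscillation frequency equal to the time-averaged dc current divided by the electron charge) can be illustrated with a trivial, hedged calculation; the current value is an arbitrary example, not a figure from the record.

```python
ELECTRON_CHARGE = 1.602176634e-19   # coulombs

def set_oscillation_frequency(dc_current_amperes):
    """Narrowband SET oscillation frequency f_s = <I>/e for a given dc current."""
    return dc_current_amperes / ELECTRON_CHARGE

# Example: a dc current of 1 nA along the array corresponds to roughly 6.2 GHz
print(f"{set_oscillation_frequency(1e-9) / 1e9:.2f} GHz")
```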

  16. The effect of the atmospheric condition on the extensive air shower analysis at the Telescope Array experiment

    International Nuclear Information System (INIS)

    Kobayashi, Y.; Tsunesada, Y.; Tokuno, H.; Kakimoto, F.; Tomida, T.

    2011-01-01

    The accuracy of the determination of air shower parameters, such as longitudinal profiles or primary energies, with the fluorescence detection technique is strongly dependent on the atmospheric conditions of the molecular and aerosol components. Moreover, the air fluorescence photon yield depends on the atmospheric density, and the transparency of the air to fluorescence photons depends on the atmospheric conditions along the path from the EAS to the FDs. In this paper, we describe the atmospheric monitoring system of the Telescope Array (TA) experiment and the impact of the atmospheric conditions on air shower reconstruction. The systematic uncertainties in the determination of the primary cosmic-ray energies and in the measurement of the depth of maximum development (X_max) of EASs due to atmospheric variability are evaluated by Monte Carlo simulation.

  17. SCORE-EVET: a computer code for the multidimensional transient thermal-hydraulic analysis of nuclear fuel rod arrays

    International Nuclear Information System (INIS)

    Benedetti, R.L.; Lords, L.V.; Kiser, D.M.

    1978-02-01

    The SCORE-EVET code was developed to study multidimensional transient fluid flow in nuclear reactor fuel rod arrays. The conservation equations used were derived by volume averaging the transient compressible three-dimensional local continuum equations in Cartesian coordinates. No assumptions associated with subchannel flow have been incorporated into the derivation of the conservation equations. In addition to the three-dimensional fluid flow equations, the SCORE-EVET code contains: (a) a one-dimensional steady-state solution scheme to initialize the flow field, (b) steady-state and transient fuel rod conduction models, and (c) comprehensive correlation packages to describe fluid-to-fuel rod interfacial energy and momentum exchange. Velocity and pressure boundary conditions can be specified as a function of time and space to model reactor transient conditions such as a hypothesized loss-of-coolant accident (LOCA) or flow blockage

  18. Radiogenic leukemia revisited

    International Nuclear Information System (INIS)

    Moloney, W.C.

    1987-01-01

    Radiation-induced leukemia is considered to be similar to the de novo disease. However, following an analysis of clinical and hematological findings in leukemia occurring in irradiated cervical cancer patients, adult Japanese atomic-bomb survivors, and spondylitics treated with x-ray, striking differences were noted. Acute leukemias in cervical cancer patients and Japanese survivors were similar in type to acute de novo leukemias in adults. Cell types among spondylitics were very dissimilar; rare forms, eg, acute erythromyelocytic leukemia (AEL) and acute megakaryocytic leukemia, were increased. Pancytopenia occurred in 25 of 35 cases and erythromyelodysplastic disorders were noted in seven of 35 acute cases. The leukemias and myelodysplastic disorders closely resembled those occurring in patients treated with alkylating agents. This similarity suggests a common pathogenesis involving marrow stem cell injury and extra-medullary mediators of hematopoiesis. Investigation of early acute leukemias and myelodysplastic disorders with newer techniques may provide valuable insights into the pathogenesis of leukemia in humans

  19. The Guderley problem revisited

    International Nuclear Information System (INIS)

    Ramsey, Scott D.; Kamm, James R.; Bolstad, John H.

    2009-01-01

    The self-similar converging-diverging shock wave problem introduced by Guderley in 1942 has been the source of numerous investigations since its publication. In this paper, we review the simplifications and group invariance properties that lead to a self-similar formulation of this problem from the compressible flow equations for a polytropic gas. The complete solution to the self-similar problem reduces to two coupled nonlinear eigenvalue problems: the eigenvalue of the first is the so-called similarity exponent for the converging flow, and that of the second is a trajectory multiplier for the diverging regime. We provide a clear exposition concerning the reflected shock configuration. Additionally, we introduce a new approximation for the similarity exponent, which we compare with other estimates and numerically computed values. Lastly, we use the Guderley problem as the basis of a quantitative verification analysis of a cell-centered, finite volume, Eulerian compressible flow algorithm.

  20. THE FARMLAND VALUATION REVISITED

    Directory of Open Access Journals (Sweden)

    Xin Li

    2016-04-01

    Full Text Available Empirical research is scarce concerning the dynamics of farmland markets that inspire the decision to sell farmland. This paper explores the real option to postpone the sale of land in farmland valuation. A real options approach is used to analyze farmland price behavior using historical cash flow and land price information for Illinois. In general, rising farmland values are primarily dependent on agricultural commodity prices and interest rates. Results suggest that uncertainty about future growth and capital gains is a significant component of farmland market value. Furthermore, this research examines several shift factors of the option value of the state's farmland, taking uncertainty into account to improve the analysis of farmland market values.
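
    As a hedged sketch of the option to postpone a sale, the code below values the deferral right on a simple binomial price lattice: each year the owner either sells at the current price or holds, collects the cash rent, and lets the price move up or down; the premium over an immediate sale is the option value of waiting. The model structure and every parameter are illustrative assumptions, not the paper's estimates.

```python
def deferral_option_value(price, rent, up, down, p_up, discount_rate, years):
    """Value of the right to postpone a land sale on a binomial price lattice.

    Returns the time-zero value of the sell-or-hold policy and its premium
    over selling immediately at the current price.
    """
    # Terminal nodes: the owner must sell at whatever the price is in the final year
    values = [price * up ** i * down ** (years - i) for i in range(years + 1)]
    disc = 1.0 / (1.0 + discount_rate)
    for step in range(years - 1, -1, -1):
        node_prices = [price * up ** i * down ** (step - i) for i in range(step + 1)]
        values = [max(node_prices[i],                                   # sell now
                      rent + disc * (p_up * values[i + 1]               # or hold: rent plus
                                     + (1 - p_up) * values[i]))         # discounted continuation
                  for i in range(step + 1)]
    return values[0], values[0] - price

# Purely illustrative parameters (not estimates from the paper)
total_value, premium = deferral_option_value(price=8000.0, rent=250.0, up=1.15,
                                             down=0.90, p_up=0.5,
                                             discount_rate=0.05, years=10)
print(f"value with deferral option: {total_value:.0f}, option premium: {premium:.0f}")
```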

  1. Biphasic Effect of Curcumin on Morphine Tolerance: A Preliminary Evidence from Cytokine/Chemokine Protein Array Analysis

    Directory of Open Access Journals (Sweden)

    Jui-An Lin

    2011-01-01

    Full Text Available The aim of this study was to evaluate the effect of curcumin on morphine tolerance and the corresponding cytokine/chemokine changes. Male ICR mice were made tolerant to morphine by daily subcutaneous injection for 7 days. Intraperitoneal injections of vehicle, low-dose or high-dose curcumin were administered 15 min after morphine injection, either acutely or chronically for 7 days, to test the effect of curcumin on morphine-induced antinociception and the development of morphine tolerance. On day 8, cumulative dose-response curves were generated and the 50% maximal analgesic dose values were calculated and compared among groups. A corresponding set of mice was used for analyzing the cytokine responses by antibody-based cytokine protein array. Acute, high-dose curcumin enhanced morphine-induced antinociception. While morphine tolerance was attenuated by administration of low-dose curcumin following morphine injections for 7 days, it was aggravated by chronic high-dose curcumin following morphine injection, suggesting a biphasic effect of curcumin on morphine-induced tolerance. Of the 96 cytokines/chemokines analyzed by the mouse cytokine protein array, 14 cytokines exhibited significant changes after the different 7-day treatments. Mechanisms for the modulatory effects of low-dose and high-dose curcumin on morphine tolerance were discussed. Even though curcumin itself is a neuroprotectant and low doses of the compound serve to attenuate morphine tolerance, high doses of curcumin might cause neurotoxicity and aggravate morphine tolerance by inhibiting the expression of antiapoptotic cytokines and neuroprotective factors. Our results indicate that the effect of curcumin on morphine tolerance may be biphasic, and therefore curcumin should be used cautiously.

  2. Sensitivity of an eight-element phased array coil in 3 Tesla MR imaging: a basic analysis.

    Science.gov (United States)

    Hiratsuka, Yoshiyasu; Miki, Hitoshi; Kikuchi, Keiichi; Kiriyama, Ikuko; Mochizuki, Teruhito; Takahashi, Shizue; Sadamoto, Kazuhiko

    2007-01-01

    To evaluate the performance advantages of an 8-element phased array head coil (8 ch coil) over a conventional quadrature-type birdcage head coil (QD coil) with regard to the signal-to-noise ratio (SNR) and image uniformity in 3 Tesla magnetic resonance (MR) imaging, we scanned a phantom filled with silicon oil using an 8 ch coil and a QD coil in a 3T MR imaging system and compared the SNR and image uniformity obtained from T(1)-weighted spin echo (SE) images and T(2)-weighted fast SE images between the 2 coils. We also visually evaluated images from 4 healthy volunteers. The SNR with the 8 ch coil was approximately twice that with the QD coil in the region of interest (ROI), which was set as 75% of the area in the center of the phantom images. With regard to the spatial variation of sensitivity, the SNR with the 8 ch coil was lower at the center of the images than at the periphery, whereas the SNR with the QD coil exhibited an inverse pattern. At the center of the images with the 8 ch coil, the SNR was somewhat lower, and its distribution was relatively flat compared to that in the periphery. Image uniformity varied less with the 8 ch coil than with the QD coil on both imaging sequences. The 8 ch phased array coil was useful for obtaining high quality 3T images because of its higher SNR and improved image uniformity compared with the conventional quadrature-type birdcage head coil.
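
    As a minimal illustration of the kind of phantom-based comparison described above, the snippet below computes an SNR estimate and a simple uniformity metric from a central region of interest of an image array; the ROI definition, the uniformity formula, and the synthetic data are generic assumptions rather than the exact procedure used in the study.

      import numpy as np

      def central_roi(image, area_fraction=0.75):
          # Central rectangular ROI covering roughly `area_fraction` of the image area.
          h, w = image.shape
          scale = area_fraction ** 0.5
          dh, dw = int(h * (1 - scale) / 2), int(w * (1 - scale) / 2)
          return image[dh:h - dh, dw:w - dw]

      def snr_and_uniformity(image, noise_region):
          roi = central_roi(image)
          snr = roi.mean() / noise_region.std()          # signal mean over noise standard deviation
          uniformity = 1.0 - (roi.max() - roi.min()) / (roi.max() + roi.min())
          return snr, uniformity

      # Hypothetical phantom image and background (air) noise patch.
      rng = np.random.default_rng(0)
      phantom = 100.0 + rng.normal(0, 2.0, size=(256, 256))
      noise = rng.normal(0, 2.0, size=(32, 32))
      print(snr_and_uniformity(phantom, noise))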

  3. Histamine fish poisoning revisited.

    Science.gov (United States)

    Lehane, L; Olley, J

    2000-06-30

    distribution system, or in restaurants or homes. The key to keeping bacterial numbers and histamine levels low is the rapid cooling of fish after catching and the maintenance of adequate refrigeration during handling and storage. Despite the huge expansion in trade in recent years, great progress has been made in ensuring the quality and safety of fish products. This is largely the result of the introduction of international standards of food hygiene and the application of risk analysis and hazard analysis and critical control point (HACCP) principles.

  4. Ion channeling revisited

    Energy Technology Data Exchange (ETDEWEB)

    Doyle, Barney Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Corona, Aldo [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nguyen, Anh [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    An MS Excel program has been written that calculates accidental, or unintentional, ion channeling in cubic bcc, fcc and diamond lattice crystals or polycrystalline materials. This becomes an important issue when simulating the creation by energetic neutrons of point displacement damage and extended defects using beams of ions. All of the tables and graphs in the three Ion Beam Analysis Handbooks that previously had to be manually looked up and read from were programmed into Excel as handy lookup tables or, in the case of the graphs, parameterized using rather simple exponential functions with different powers of the argument. The program then offers an extremely convenient way to calculate axial and planar half-angles and minimum yield or dechanneling probabilities, effects on half-angles of amorphous overlayers, accidental channeling probabilities for randomly oriented crystals or crystallites, and finally a way to automatically generate stereographic projections of axial and planar channeling half-angles. The program can generate these projections and calculate these probabilities for axes and [hkl] planes up to (555).
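
    The record does not give the parameterizations the program uses, but the flavor of such a calculation can be illustrated with the textbook Lindhard characteristic angle for axial channeling; the formula choice, units, and example values below are assumptions for illustration and are not taken from the program itself.

      import math

      E2 = 14.4  # e^2/(4*pi*eps0) in eV*Angstrom

      def lindhard_psi1_deg(z1, z2, energy_ev, d_angstrom):
          """Lindhard characteristic axial channeling angle psi_1 (degrees) for an ion of
          atomic number z1 and energy energy_ev incident along atomic rows of atomic
          number z2 with row spacing d_angstrom (high-energy regime, illustrative only)."""
          psi1 = math.sqrt(2.0 * z1 * z2 * E2 / (energy_ev * d_angstrom))
          return math.degrees(psi1)

      # Example: 2 MeV He (Z=2) along Si <110> rows (Z=14, atomic spacing ~3.84 Angstrom).
      print(lindhard_psi1_deg(2, 14, 2.0e6, 3.84))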

  5. The Fortios disks revisited

    Directory of Open Access Journals (Sweden)

    António M. Monge Soares

    2017-07-01

    We have used EDXRF, Micro-PIXE and optical microscopy (metallographic analysis), complemented with SEM-EDS, first to determine the elemental content and, second, to identify the process used to join the components (disk, peripheral rod and tab) of several Iron Age gold buttons. These have a very similar typology and were found at three archaeological sites in the south-western part of the Iberian Peninsula. A set of 35 buttons from Castro dos Ratinhos (7), Outeiro da Cabeça (23) and Fortios (5) were analyzed and the results published in Trabajos de Prehistoria (Soares et al. 2010). Recently, Perea et al. (2016) have published analyses of 4 other gold buttons from Fortios with the same purpose, but using only one technique, SEM-EDS. As they only analysed the rough surface layer, their results are neither effective nor reliable, taking into account the constraints associated with the technique, namely the small depth reached (< 2 μm) by the incident beam and, consequently, its sensitivity to the topography of the analyzed surface. Despite these constraints, they have uncritically accepted their results and, at the same time, questioned our own analyses, results and interpretation. Here we discuss the approach of Perea et al. in order to determine not only the elemental content of the Fortios gold buttons, but also to identify the joining process used in their manufacture.

  6. Revisiting Appendicular Lump

    Directory of Open Access Journals (Sweden)

    R S Bhandari

    2010-06-01

    INTRODUCTION: Appendicular lump is a well-known sequela of acute appendicitis, encountered in 2-6% of patients. Management of appendicular lump is controversial, with different approaches advocated and many ongoing controversies. The aim of this study was to assess outcomes and evaluate the possible need to change our management strategy for appendicular lump. METHODS: A retrospective analysis of patients managed for appendicular lump was performed. All patients admitted with a diagnosis of appendicular lump and managed over a two-and-a-half-year period were included in the study. All age groups and both sexes were included. Any patient whose diagnosis changed after the initial diagnosis of appendicular lump was excluded from the study. RESULTS: In total, 75 patients had an appendicular lump, suggesting a 10% incidence. Age varied between 11 and 83 years, with nearly equal incidence in both sexes. The majority had onset of symptoms between 2 and 14 days, with an average of 4 days. Average stay was 3 to 4 days. During the study period, 12 (16%) patients returned with recurrence and 13 (17%) came for elective appendectomy. CONCLUSIONS: Based on our findings, there is insufficient evidence to change our classical management strategy for appendicular lump, and a long-term prospective study of this very common clinical condition is needed. KEYWORDS: appendicular lump, conservative management.

  7. Revisiting appendicular lump.

    Science.gov (United States)

    Bhandari, R S; Thakur, D K; Lakhey, P J; Singh, K P

    2010-01-01

    Appendicular lump is a well-known sequela of acute appendicitis, encountered in 2-6% of patients. Management of appendicular lump is controversial, with different approaches advocated and many ongoing controversies. The aim of this study was to assess outcomes and evaluate the possible need to change our management strategy for appendicular lump. A retrospective analysis of patients managed for appendicular lump was performed. All patients admitted with a diagnosis of appendicular lump and managed over a two-and-a-half-year period were included in the study. All age groups and both sexes were included. Any patient whose diagnosis changed after the initial diagnosis of appendicular lump was excluded from the study. In total, 75 patients had an appendicular lump, suggesting a 10% incidence. Age varied between 11 and 83 years, with nearly equal incidence in both sexes. The majority had onset of symptoms between 2 and 14 days, with an average of 4 days. Average stay was 3 to 4 days. During the study period, 12 (16%) patients returned with recurrence and 13 (17%) came for elective appendectomy. Based on our findings, there is insufficient evidence to change our classical management strategy for appendicular lump, and a long-term prospective study of this very common clinical condition is needed.

  8. Early tetrapod relationships revisited.

    Science.gov (United States)

    Ruta, Marcello; Coates, Michael I; Quicke, Donald L J

    2003-05-01

    In an attempt to investigate differences between the most widely discussed hypotheses of early tetrapod relationships, we assembled a new data matrix including 90 taxa coded for 319 cranial and postcranial characters. We have incorporated, where possible, original observations of numerous taxa spread throughout the major tetrapod clades. A stem-based (total-group) definition of Tetrapoda is preferred over apomorphy- and node-based (crown-group) definitions. This definition is operational, since it is based on a formal character analysis. A PAUP* search using a recently implemented version of the parsimony ratchet method yields 64 shortest trees. Differences between these trees concern: (1) the internal relationships of aïstopods, the three selected species of which form a trichotomy; (2) the internal relationships of embolomeres, with Archeria crassidisca and Pholiderpeton scut collapsed in a trichotomy with a clade formed by Anthracosaurus russelli and Pholiderpeton attheyi; (3) the internal relationships of derived dissorophoids, with four amphibamid species forming an unresolved node with a clade consisting of micromelerpetontids and branchiosaurids and a clade consisting of albanerpetontids plus basal crown-group lissamphibians; (4) the position of albanerpetontids and Eocaecilia micropoda, which form an unresolved node with a trichotomy subtending Karaurus sharovi, Valdotriton gracilis and Triadobatrachus massinoti; (5) the branching pattern of derived diplocaulid nectrideans, with Batrachiderpeton reticulatum and Diceratosaurus brevirostris collapsed in a trichotomy with a clade formed by Diplocaulus magnicornis and Diploceraspis burkei. The results of the original parsimony run--as well as those retrieved from several other treatments of the data set (e.g. exclusion of postcranial and lower jaw data; character reweighting; reverse weighting)--indicate a deep split of early tetrapods between lissamphibian- and amniote-related taxa. Colosteids, Crassigyrinus

  9. Asymmetric Gepner models (revisited)

    Energy Technology Data Exchange (ETDEWEB)

    Gato-Rivera, B. [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands)] [Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); Schellekens, A.N., E-mail: t58@nikhef.n [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands)] [Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain)] [IMAPP, Radboud Universiteit, Nijmegen (Netherlands)

    2010-12-11

    We reconsider a class of heterotic string theories studied in 1989, based on tensor products of N=2 minimal models with asymmetric simple current invariants. We extend this analysis from (2,2) and (1,2) spectra to (0,2) spectra with SO(10) broken to the Standard Model. In the latter case the spectrum must contain fractionally charged particles. We find that in nearly all cases at least some of them are massless. However, we identify a large subclass where the fractional charges are at worst half-integer, and often vector-like. The number of families is very often reduced in comparison to the 1989 results, but there are no new tensor combinations yielding three families. All tensor combinations turn out to fall into two classes: those where the number of families is always divisible by three, and those where it is never divisible by three. We find an empirical rule to determine the class, which appears to extend beyond minimal N=2 tensor products. We observe that distributions of physical quantities such as the number of families, singlets and mirrors have an interesting tendency towards smaller values as the gauge group approaches the Standard Model. We compare our results with an analogous class of free fermionic models. This displays similar features, but with less resolution. Finally we present a complete scan of the three family models based on the triply-exceptional combination (1,16*,16*,16*) identified originally by Gepner. We find 1220 distinct three family spectra in this case, forming 610 mirror pairs. About half of them have the gauge group SU(3) x SU(2)_L x SU(2)_R x U(1)^5, the theoretical minimum, and many others are trinification models.

  10. The Lanthanide Contraction Revisited

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, Michael; Oliver, Allen G.; Raymond, Kenneth N.

    2007-04-19

    A complete, isostructural series of lanthanide complexes (except Pm) with the ligand TREN-1,2-HOIQO has been synthesized and structurally characterized by means of single-crystal X-ray analysis. All complexes are 1D-polymeric species in the solid state, with the lanthanide being in an eight-coordinate, distorted trigonal-dodecahedral environment with a donor set of eight unique oxygen atoms. This series constitutes the first complete set of isostructural lanthanide complexes with a ligand of denticity greater than two. The geometric arrangement of the chelating moieties slightly deviates across the lanthanide series, as analyzed by a shape parameter metric based on the comparison of the dihedral angles along all edges of the coordination polyhedron. The apparent lanthanide contraction in the individual Ln-O bond lengths deviates considerably from the expected quadratic decrease that was found previously in a number of complexes with ligands of low denticity. The sum of all bond lengths around the trivalent metal cation, however, is more regular, showing an almost ideal quadratic behavior across the entire series. The quadratic nature of the lanthanide contraction is derived theoretically from Slater's model for the calculation of ionic radii. In addition, the sum of all distances along the edges of the coordination polyhedron shows exactly the same quadratic dependency as the Ln-X bond lengths. The universal validity of this coordination sphere contraction, concomitant with the quadratic decrease in Ln-X bond lengths, was confirmed by reexamination of four other, previously published, almost complete series of lanthanide complexes. Due to the importance of multidentate ligands for the chelation of rare-earth metals, this result provides a significant advance for the prediction and rationalization of the geometric features of the corresponding lanthanide complexes, with great potential impact for all aspects of lanthanide coordination.

  11. Hotspot swells revisited

    Science.gov (United States)

    King, Scott D.; Adam, Claudia

    2014-10-01

    The first attempts to quantify the width and height of hotspot swells were made more than 30 years ago. Since that time, topography, ocean-floor age, and sediment thickness datasets have improved considerably. Swell heights and widths have been used to estimate the heat flow from the core-mantle boundary, constrain numerical models of plumes, and as an indicator of the origin of hotspots. In this paper, we repeat the analysis of swell geometry and buoyancy flux for 54 hotspots, including the 37 considered by Sleep (1990) and the 49 considered by Courtillot et al. (2003), using the latest and most accurate data. We are able to calculate swell geometry for a number of hotspots that Sleep was only able to estimate by comparison with other swells. We find that in spite of the increased resolution in global bathymetry models there is significant uncertainty in our calculation of buoyancy fluxes due to differences in our measurement of the swells’ width and height, the integration method (volume integration or cross-sectional area), and the variations of the plate velocities between HS2-Nuvel1a (Gripp and Gordon, 1990) and HS3-Nuvel1a (Gripp and Gordon, 2002). We also note that the buoyancy flux for Pacific hotspots is in general larger than for Eurasian, North American, African and Antarctic hotspots. Considering that buoyancy flux is linearly related to plate velocity, we speculate that either the calculation of buoyancy flux using plate velocity over-estimates the actual vertical flow of material from the deep mantle or that convection in the Pacific hemisphere is more vigorous than the Atlantic hemisphere.

  12. The Phenylephrine Test Revisited.

    Science.gov (United States)

    Barsegian, Arpine; Botwinick, Adam; Reddy, Harsha S

    To characterize the phenylephrine test in ptotic patients and help clinicians perform the test more efficiently, adults with involutional ptosis (n = 24, 30 eyes) were assessed with digital photographs for response to topical 2.5% phenylephrine drop instillation. Patient characteristics (age, gender, iris color, dermatochalasis, brow ptosis, and baseline marginal reflex distance-1 [MRD-1] height) were recorded. From the photographs, the change in MRD-1, presence of conjunctival blanching, pupillary dilation, and Hering effect were recorded at specified time intervals, 1 minute to 1 hour after drop placement. Correlations between patient characteristics and measured outcomes were evaluated using analysis of variance, Pearson coefficient, or chi-square tests. The authors found that 73% of eyes had eyelid elevation with phenylephrine. Of these, 50% reached maximal eyelid elevation by 5 minutes, and 86% by 10 minutes after drop placement, but 14% did not reach maximal MRD-1 until 30 minutes. There was a negative correlation between the maximum MRD-1 and the baseline MRD-1 eyelid height (r = -0.5330). No patient characteristic studied affected the likelihood of eyelid response to phenylephrine or presence of Hering effect. Although most ptotic eyelids demonstrate a response to 2.5% phenylephrine within 10 minutes, there is a subset of patients that respond much later. More ptotic eyelids had greater eyelid elevation with phenylephrine. Pupillary dilation and conjunctival blanching are neither predictive of nor temporally associated with eyelid height elevation. The authors did not identify any patient factors (e.g., dermatochalasis, brow ptosis) that can predict the likelihood of response to phenylephrine.

  13. The OncoArray Consortium

    DEFF Research Database (Denmark)

    Amos, Christopher I; Dennis, Joe; Wang, Zhaoming

    2017-01-01

    by Illumina to facilitate efficient genotyping. The consortium developed standard approaches for selecting SNPs for study, for quality control of markers, and for ancestry analysis. The array was genotyped at selected sites and with prespecified replicate samples to permit evaluation of genotyping accuracy...... among centers and by ethnic background. RESULTS: The OncoArray consortium genotyped 447,705 samples. A total of 494,763 SNPs passed quality control steps with a sample success rate of 97% of the samples. Participating sites performed ancestry analysis using a common set of markers and a scoring...

  14. Analysis of the Spatial and Temporal Distribution of the Seismicity of the Mid-Atlantic Ridge Using the SIRENA and the South Azores Autonomous Hydrophone Arrays

    Science.gov (United States)

    Simão, N.; Goslin, J.; Perrot, J.; Haxel, J.; Dziak, R.

    2006-12-01

    Acoustic data recorded by two Autonomous Hydrophone Arrays (AHA) were jointly processed in Brest (IUEM) and Newport (PMEL-VENTS) to monitor the seismicity of the Mid-Atlantic Ridge (MAR) over a ten month period, at a wide range of spatial scales. Over the deployment period, nearly 6000 T-phase generating earthquakes were localized using a semi-automatic algorithm. Our analysis of the temporal and spatial distribution of these events, combined with their acoustic energy source levels, provides important insights into the generation mechanisms and characteristic behavior of MAR seismicity. For the AHA catalog, the cumulative number of events increases almost linearly with time. Taking into account the area inside the arrays, the section of the ridge north of the Azores is more seismically active than the section to the south, and the seismic activity occurs in large localized clusters. Our AHA catalog of acoustic events was used to compare locations, focal mechanisms and magnitude observations with correlated data from land-based stations of the NEIC global seismic network, to establish completeness levels both within and outside of the hydrophone array. The AHA catalog has a Source Level of Completeness (SLc) of 204 dB and a b-value of 0.0605. The NEIC catalog for this region during this period has a Magnitude of Completeness (Mc) of 4.6 and a b-value of 1.01. Regressing the AHA values onto the NEIC-derived Mc/b-value relationship suggests an Mc of 3.2 for the AHA catalog. By restricting the events to the region inside the AHA, the NEIC catalog has an Mc of 4.7 with a b-value of 1.09, while the AHA catalog has an SLc of 205 dB with a b-value of 0.0753. Comparing the b-values of the NEIC catalog with the AHA catalog, we obtain an improved Mc of 3.0 for the AHA inside the array. A time- and space-dependent Single-Link-Cluster algorithm was applied to the events localized inside the AHA. This allowed us to gather cluster sequences of earthquakes for higher
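
    Where the record above relates source-level completeness to magnitude of completeness, the generic steps are a frequency-size regression (b-value) plus a cross-calibration between the two catalogs for jointly detected events. The sketch below shows one simple way to do this on synthetic numbers; the regression form, thresholds, and data are assumptions for illustration, not the authors' procedure.

      import numpy as np

      def b_value(sizes, completeness, bin_width=0.1):
          """Least-squares slope of log10 cumulative counts vs event size
          (magnitude or source level) above the completeness threshold."""
          x = np.arange(completeness, sizes.max(), bin_width)
          logn = np.log10([np.sum(sizes >= xi) for xi in x])
          slope, _ = np.polyfit(x, logn, 1)
          return -slope

      rng = np.random.default_rng(1)
      mags = 4.6 + rng.exponential(0.43, 2000)          # NEIC-like magnitudes, b ~ 1.0
      source_levels = 204.0 + rng.exponential(7.2, 5000)  # AHA source levels (dB), b ~ 0.06

      print("NEIC b-value:", round(b_value(mags, 4.6), 2))
      print("AHA b-value :", round(b_value(source_levels, 204.0, bin_width=1.0), 3))

      # Cross-calibration: linear fit of magnitude vs source level for jointly detected
      # events, then map the AHA completeness level SLc to an equivalent Mc.
      joint_sl = np.array([210.0, 215.0, 220.0, 225.0, 230.0])   # hypothetical event pairs
      joint_mag = np.array([3.5, 3.9, 4.3, 4.8, 5.2])
      a, b = np.polyfit(joint_sl, joint_mag, 1)
      print("Equivalent Mc for SLc = 204 dB:", round(a * 204.0 + b, 1))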

  15. Simultaneous Determination of Iron, Copper and Cobalt in Food Samples by CCD-diode Array Detection-Flow Injection Analysis with Partial Least Squares Calibration Model

    International Nuclear Information System (INIS)

    Mi Jiaping; Li Yuanqian; Zhou Xiaoli; Zheng Bo; Zhou Ying

    2006-01-01

    A flow injection-CCD diode array detection spectrophotometry method with a partial least squares (PLS) program for the simultaneous determination of iron, copper and cobalt in food samples has been established. The method was based on the chromogenic reaction of the three metal ions with 2-(5-Bromo-2-pyridylazo)-5-diethylaminophenol (5-Br-PADAP) in acetic acid-sodium acetate buffer solution (pH 5) with Triton X-100 and ascorbic acid. The overlapped spectra of the colored complexes were collected by a charge-coupled device (CCD) diode array detector and the multi-wavelength absorbance data were processed using a partial least squares (PLS) algorithm. Optimum reaction conditions and parameters of the flow injection analysis were investigated. Samples of tea, sesame, laver, millet, cornmeal, mung bean and soybean powder were determined by the proposed method. The average recoveries of spiked samples were 91.80%~100.9% for iron, 92.50%~108.0% for copper and 93.00%~110.5% for cobalt, with relative standard deviations (R.S.D.) of 1.1%~12.1%. The sampling rate is 45 samples h⁻¹. The determination results for the food samples obtained by the proposed method were in good agreement with those obtained by ICP-AES.
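
    As a minimal sketch of the multi-wavelength PLS calibration idea described above (overlapping spectra resolved into individual analyte concentrations), the code below fits a PLS model on simulated spectra; the spectra, concentrations, and component count are illustrative assumptions, not the published calibration.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      wavelengths = 200                      # number of CCD-diode array channels
      n_train = 30

      # Simulated pure-component spectra for the Fe, Cu and Co complexes (broad overlapping bands).
      axis = np.linspace(0, 1, wavelengths)
      pure = np.stack([np.exp(-((axis - c) / 0.12) ** 2) for c in (0.35, 0.45, 0.55)])

      C_train = rng.uniform(0.1, 2.0, size=(n_train, 3))          # concentrations (arbitrary units)
      X_train = C_train @ pure + rng.normal(0, 0.01, (n_train, wavelengths))

      pls = PLSRegression(n_components=3)
      pls.fit(X_train, C_train)

      # Predict the three metals simultaneously in an "unknown" mixture spectrum.
      c_true = np.array([[0.8, 1.2, 0.5]])
      x_unknown = c_true @ pure + rng.normal(0, 0.01, (1, wavelengths))
      print(pls.predict(x_unknown).round(2), "vs true", c_true)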

  16. Simultaneous Determination of Iron, Copper and Cobalt in Food Samples by CCD-diode Array Detection-Flow Injection Analysis with Partial Least Squares Calibration Model

    Energy Technology Data Exchange (ETDEWEB)

    Mi Jiaping; Li Yuanqian; Zhou Xiaoli; Zheng Bo; Zhou Ying [West China School of Public Health, Sichuan University, Chengdu, 610041 (China)

    2006-01-01

    A flow injection-CCD diode array detection spectrophotometry method with a partial least squares (PLS) program for the simultaneous determination of iron, copper and cobalt in food samples has been established. The method was based on the chromogenic reaction of the three metal ions with 2-(5-Bromo-2-pyridylazo)-5-diethylaminophenol (5-Br-PADAP) in acetic acid-sodium acetate buffer solution (pH 5) with Triton X-100 and ascorbic acid. The overlapped spectra of the colored complexes were collected by a charge-coupled device (CCD) diode array detector and the multi-wavelength absorbance data were processed using a partial least squares (PLS) algorithm. Optimum reaction conditions and parameters of the flow injection analysis were investigated. Samples of tea, sesame, laver, millet, cornmeal, mung bean and soybean powder were determined by the proposed method. The average recoveries of spiked samples were 91.80%~100.9% for iron, 92.50%~108.0% for copper and 93.00%~110.5% for cobalt, with relative standard deviations (R.S.D.) of 1.1%~12.1%. The sampling rate is 45 samples h⁻¹. The determination results for the food samples obtained by the proposed method were in good agreement with those obtained by ICP-AES.

  17. Simultaneous Determination of Iron, Copper and Cobalt in Food Samples by CCD-diode Array Detection-Flow Injection Analysis with Partial Least Squares Calibration Model

    Science.gov (United States)

    Mi, Jiaping; Li, Yuanqian; Zhou, Xiaoli; Zheng, Bo; Zhou, Ying

    2006-01-01

    A flow injection-CCD diode array detection spectrophotometry method with a partial least squares (PLS) program for the simultaneous determination of iron, copper and cobalt in food samples has been established. The method was based on the chromogenic reaction of the three metal ions with 2-(5-Bromo-2-pyridylazo)-5-diethylaminophenol (5-Br-PADAP) in acetic acid-sodium acetate buffer solution (pH 5) with Triton X-100 and ascorbic acid. The overlapped spectra of the colored complexes were collected by a charge-coupled device (CCD) diode array detector and the multi-wavelength absorbance data were processed using a partial least squares (PLS) algorithm. Optimum reaction conditions and parameters of the flow injection analysis were investigated. Samples of tea, sesame, laver, millet, cornmeal, mung bean and soybean powder were determined by the proposed method. The average recoveries of spiked samples were 91.80%~100.9% for iron, 92.50%~108.0% for copper and 93.00%~110.5% for cobalt, with relative standard deviations (R.S.D.) of 1.1%~12.1%. The sampling rate is 45 samples h⁻¹. The determination results for the food samples obtained by the proposed method were in good agreement with those obtained by ICP-AES.

  18. The Effect of Brand Equity and Perceived Value on Customer Revisit Intention: A Study in Quick-Service Restaurants in Vietnam

    Directory of Open Access Journals (Sweden)

    Ly Thi Minh Pham

    2016-10-01

    The purpose of this study is to examine how brand equity, from a customer point of view, influences quick-service restaurant revisit intention. The authors propose a conceptual framework in which three dimensions of brand equity (brand associations combined with brand awareness, perceived quality, and brand loyalty) and perceived value are related to revisit intention. Data from 570 customers who had visited four quick-service restaurants in Ho Chi Minh City were used for the structural equation modelling (SEM) analysis. The results show that strong brand equity is significantly correlated with revisit intention. Additionally, the effect of brand equity on revisit intention was mediated by perceived value, among others. Overall, this study emphasizes the importance of lodging perceived value in the customer’s mind. Finally, managerial implications are presented based on the study results.

  19. Multivariate curve-resolution analysis of pesticides in water samples from liquid chromatographic-diode array data.

    Science.gov (United States)

    Maggio, Rubén M; Damiani, Patricia C; Olivieri, Alejandro C

    2011-01-30

    Liquid chromatographic-diode array detection data recorded for aqueous mixtures of 11 pesticides show the combined presence of strongly coeluting peaks, distortions in the time dimension between experimental runs, and the presence of potential interferents not modeled by the calibration phase in certain test samples. Due to the complexity of these phenomena, data were processed by a second-order multivariate algorithm based on multivariate curve resolution and alternating least-squares, which allows one to successfully model both the spectral and retention time behavior for all sample constituents. This led to the accurate quantitation of all analytes in a set of validation samples: aldicarb sulfoxide, oxamyl, aldicarb sulfone, methomyl, 3-hydroxy-carbofuran, aldicarb, propoxur, carbofuran, carbaryl, 1-naphthol and methiocarb. Limits of detection in the range 0.1-2 μg mL⁻¹ were obtained. Additionally, the second-order advantage for several analytes was achieved in samples containing several uncalibrated interferences. The limits of detection for all analytes were decreased by solid phase pre-concentration to values comparable to those officially recommended, i.e., in the order of 5 ng mL⁻¹. Copyright © 2010 Elsevier B.V. All rights reserved.
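
    The following is a bare-bones sketch of the multivariate curve resolution-alternating least-squares (MCR-ALS) idea used above: a data matrix D (elution time by wavelength) is factored into concentration profiles C and spectra S under non-negativity by alternating least-squares updates. The initialization, constraints, and synthetic data are simplified assumptions, not the published algorithm settings.

      import numpy as np

      def mcr_als(D, n_components, n_iter=200, seed=0):
          """Factor D (times x wavelengths) as C @ S with C, S >= 0 (simple ALS with clipping)."""
          rng = np.random.default_rng(seed)
          S = rng.uniform(size=(n_components, D.shape[1]))   # initial spectra guess
          for _ in range(n_iter):
              C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)
              S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
          return C, S

      # Tiny synthetic example: two coeluting peaks with overlapping spectra.
      t = np.linspace(0, 1, 120)[:, None]
      w = np.linspace(0, 1, 80)[None, :]
      C_true = np.hstack([np.exp(-((t - 0.45) / 0.08) ** 2), np.exp(-((t - 0.55) / 0.08) ** 2)])
      S_true = np.vstack([np.exp(-((w - 0.3) / 0.1) ** 2), np.exp(-((w - 0.5) / 0.1) ** 2)])
      D = C_true @ S_true + 0.01 * np.random.default_rng(1).normal(size=(120, 80))

      C_est, S_est = mcr_als(D, n_components=2)
      print(C_est.shape, S_est.shape)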

  20. Array-based FMR1 sequencing and deletion analysis in patients with a fragile X syndrome-like phenotype.

    Directory of Open Access Journals (Sweden)

    Stephen C Collins

    2010-03-01

    Fragile X syndrome (FXS) is caused by loss-of-function mutations in the FMR1 gene. Trinucleotide CGG-repeat expansions, resulting in FMR1 gene silencing, are the most common mutations observed at this locus. Even though the repeat expansion mutation is a functional null mutation, few conventional mutations have been identified at this locus, largely due to the clinical laboratory focus on the repeat tract. To more thoroughly evaluate the frequency of conventional mutations in FXS-like patients, we used an array-based method to sequence FMR1 in 51 unrelated males exhibiting several features characteristic of FXS but with normal CGG-repeat tracts of FMR1. One patient was identified with a deletion in FMR1, but none of the patients were found to have other conventional mutations. These data suggest that missense mutations in FMR1 are not a common cause of the FXS phenotype in patients who have normal-length CGG-repeat tracts. However, screening for small deletions of FMR1 may be of clinical utility.

  1. Design and reliability analysis of high-speed and continuous data recording system based on disk array

    Science.gov (United States)

    Jiang, Changlong; Ma, Cheng; He, Ning; Zhang, Xugang; Wang, Chongyang; Jia, Huibo

    2002-12-01

    In many real-time fields a sustained high-speed data recording system is required. This paper proposes a high-speed, sustained data recording system based on a complex RAID 3+0 arrangement. The system consists of an Array Controller Module (ACM), String Controller Modules (SCM) and a Main Controller Module (MCM). The ACM, implemented in an FPGA chip, is used to split the high-speed incoming data stream into several lower-speed streams and to generate one parity code stream synchronously; it can also inversely recover the original data stream while reading. SCMs record the lower-speed streams from the ACM onto the SCSI disk drives. In the SCM, dual-page buffering is adopted to implement the speed-matching function and satisfy the need for sustained recording. The MCM monitors the whole system and controls the ACM and SCMs to realize the data striping, reconstruction, and recovery functions. The method of determining the system scale is presented. At the end, two new schemes, Floating Parity Group (FPG) and full 2D-Parity Group (full 2D-PG), are proposed to improve the system reliability and compared with the Traditional Parity Group (TPG). This recording system can be used conveniently in many areas of data recording, storage, playback and remote backup owing to its high reliability.
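
    The splitting-plus-parity step performed by the ACM is essentially RAID-3-style byte striping with an XOR parity stream. The sketch below illustrates that generic operation in software (stripe, generate parity, and rebuild a lost stripe); it is an illustration of the principle only, not the FPGA implementation described in the record.

      from functools import reduce

      def stripe_with_parity(block: bytes, n_data: int):
          """Split a block into n_data interleaved streams plus one XOR parity stream."""
          streams = [bytearray(block[i::n_data]) for i in range(n_data)]
          width = max(len(s) for s in streams)
          for s in streams:                      # pad so all streams have equal length
              s.extend(b"\x00" * (width - len(s)))
          parity = bytearray(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), streams))
          return streams, parity

      def rebuild(streams, parity, lost_index):
          # Any single lost stream is the XOR of the parity stream and the surviving streams.
          survivors = [s for i, s in enumerate(streams) if i != lost_index] + [parity]
          return bytearray(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors))

      data = b"high-speed continuous recording payload"
      streams, parity = stripe_with_parity(data, n_data=4)
      recovered = rebuild(streams, parity, lost_index=2)
      assert recovered == streams[2]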

  2. Comparative Analysis of Human and Rodent Brain Primary Neuronal Culture Spontaneous Activity Using Micro-Electrode Array Technology.

    Science.gov (United States)

    Napoli, Alessandro; Obeid, Iyad

    2016-03-01

    Electrical activity in embryonic brain tissue has typically been studied using Micro Electrode Array (MEA) technology to make dozens of simultaneous recordings from dissociated neuronal cultures, brain stem cell progenitors, or brain slices from fetal rodents. Although the electrical properties of rodent primary neuronal cultures have been investigated extensively, it has not yet been established to what extent the electrical characteristics of rodent brain neuronal cultures can be generalized to those of humans. A direct comparison of spontaneous spiking activity between rodent and human primary neurons grown under the same in vitro conditions using MEA technology has never been carried out before and is described in the present study. Human and rodent dissociated fetal brain neuronal cultures were established in vitro by culturing neurons under identical conditions on a glass grid of 60 planar microelectrodes. Three different cultures of human neurons were produced from tissue sourced from a single aborted fetus (at 16-18 gestational weeks) and these were compared with seven different cultures of embryonic rat neurons (at 18 gestational days) originally isolated from a single rat. The results show that the human and rodent cultures behaved significantly differently. Whereas the rodent cultures demonstrated robust spontaneous activation and network activity after only 10 days, the human cultures required nearly 40 days to achieve a substantially weaker level of electrical function. These results suggest that rat neuron preparations may yield inferences that do not necessarily transfer to humans. © 2015 Wiley Periodicals, Inc.

  3. Elevated specific peripheral cytokines found in major depressive disorder patients with childhood trauma exposure: a cytokine antibody array analysis.

    Science.gov (United States)

    Lu, Shaojia; Peng, Hongjun; Wang, Lifeng; Vasish, Seewoobudul; Zhang, Yan; Gao, Weijia; Wu, Weiwei; Liao, Mei; Wang, Mi; Tang, Hao; Li, Wenping; Li, Weihui; Li, Zexuan; Zhou, Jiansong; Zhang, Zhijun; Li, Lingjiang

    2013-10-01

    Taking into consideration previous evidence revealing the relationship among early life adversity, major depressive disorder (MDD), and stress-linked immunological changes, we recruited 22 MDD patients with childhood trauma exposures (CTE), 21 MDD patients without CTE, and 22 healthy controls without CTE, and then utilized a novel cytokine antibody array methodology to detect potential biomarkers underlying MDD among 120 peripheral cytokines and to evaluate the effect of CTE on cytokine changes in MDD patients. Although 13 cytokines were identified with highly significant differences in expression between MDD patients and normal controls, this relationship was significantly attenuated and no longer significant after consideration of the effect of CTE in MDD patients. Depressed individuals with CTE (TD patients) were more likely to have higher peripheral levels of those cytokines. Severity of depression was associated with plasma levels of certain increased cytokines; meanwhile, the increased cytokines led to a proper separation of TD patients from normal controls during clustering analyses. Our research outcomes add great strength to the relationship between depression and cytokine changes and suggest that childhood trauma may play a vital role in the co-appearance of cytokine changes and depression. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Forensic seismology revisited

    Science.gov (United States)

    Douglas, A.

    2007-01-01

    the size of possible pP and sP. The relative-amplitude method is then used to search for orientations of the earthquake source that are compatible with the observations. If no such orientations are found the source must be shallow so that any surface reflections merge with direct P, and hence could be an explosion. The IMS when completed will be a global network of 321 monitoring stations, including 170 seismological stations principally to detect the seismic waves from earthquakes and underground explosions. The IMS will also have stations with hydrophones, microbarographs and radionuclide detectors to detect explosions in the oceans and the atmosphere and any isotopes in the air characteristic of a nuclear test. The Global Communications Infrastructure provides communications between the IMS stations and the International Data Centre (IDC), Vienna, where the recordings from the monitoring stations are collected, collated, and analysed. The IDC issues bulletins listing geophysical disturbances to States Signatories to the CTBT. The assessment of the disturbances, to decide whether any are possible explosions, is a task for State Signatories. For each Signatory to do a detailed analysis of all disturbances would be expensive and time-consuming. Fortunately many disturbances can be readily identified as earthquakes and removed from consideration, a process referred to as “event screening”. For example, many earthquakes with epicentres over the oceans can be distinguished from underwater explosions, because an explosion signal is of much higher frequency than that of earthquakes that occur below the ocean bed. Further, many earthquakes could clearly be identified at the IDC on the mb:Ms criterion, but there is a difficulty: how to set the decision line. The possibility has to be very small that an explosion will be classed by mistake as an earthquake. The decision line has therefore to be set conservatively, consequently with routine application of current
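
    The mb:Ms criterion mentioned above can be pictured as a simple decision line in the (mb, Ms) plane: earthquakes radiate relatively strong surface waves (large Ms) for a given body-wave magnitude mb, while explosions do not. The sketch below implements such a screening rule with a purely illustrative slope and offset; it does not reproduce the actual IDC screening line or its conservative margins.

      def screen_as_earthquake(mb: float, ms: float,
                               slope: float = 1.0, offset: float = -0.6) -> bool:
          """Return True if the event plots on the earthquake side of a decision line
          Ms >= slope * mb + offset (slope and offset are illustrative, not IDC values)."""
          return ms >= slope * mb + offset

      # Hypothetical events: (name, mb, Ms)
      events = [("event A", 4.8, 4.6), ("event B", 5.0, 3.4)]
      for name, mb, ms in events:
          verdict = "screened out (earthquake-like)" if screen_as_earthquake(mb, ms) else "retained for analysis"
          print(name, "->", verdict)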

  5. Analysis of an integrated 8-channel Tx/Rx body array for use as a body coil in 7-Tesla MRI

    Science.gov (United States)

    Orzada, Stephan; Bitz, Andreas K.; Johst, Sören; Gratz, Marcel; Völker, Maximilian N.; Kraff, Oliver; Abuelhaija, Ashraf; Fiedler, Thomas M.; Solbach, Klaus; Quick, Harald H.; Ladd, Mark E.

    2017-06-01

    Object: In this work an 8-channel array integrated into the gap between the gradient coil and bore liner of a 7-Tesla whole-body magnet is presented that would allow a workflow closer to that of systems at lower magnetic fields that have a built-in body coil; this integrated coil is compared to a local 8-channel array built from identical elements placed directly on the patient. Materials and Methods: SAR efficiency and the homogeneity of the right-rotating B1 field component (B_1^+) are investigated numerically and compared to the local array. Power efficiency measurements are performed in the MRI System. First in vivo gradient echo images are acquired with the integrated array. Results: While the remote array shows a slightly better performance in terms of B_1^+ homogeneity, the power efficiency and the SAR efficiency are inferior to those of the local array: the transmit voltage has to be increased by a factor of 3.15 to achieve equal flip angles in a central axial slice. The g-factor calculations show a better parallel imaging g-factor for the local array. The field of view of the integrated array is larger than that of the local array. First in vivo images with the integrated array look subjectively promising. Conclusion: Although some RF performance parameters of the integrated array are inferior to a tight-fitting local array, these disadvantages might be compensated by the use of amplifiers with higher power and the use of local receive arrays. In addition, the distant placement provides the potential to include more elements in the array design.
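
    Two of the figures of merit compared above are commonly computed as simple statistics over a region of interest: B1+ homogeneity (for example via the coefficient of variation) and SAR efficiency (B1+ achieved per square root of peak local SAR). The snippet below shows those generic definitions on made-up maps; the exact metrics and normalization used in the study may differ.

      import numpy as np

      def b1_homogeneity(b1_map):
          # Coefficient of variation of |B1+| over the ROI; lower means more homogeneous.
          return b1_map.std() / b1_map.mean()

      def sar_efficiency(b1_map, sar10g_map):
          # Mean |B1+| (uT) per sqrt of peak 10g-averaged SAR (W/kg), at fixed input power.
          return b1_map.mean() / np.sqrt(sar10g_map.max())

      rng = np.random.default_rng(0)
      b1_local = rng.normal(2.0, 0.5, (64, 64))      # hypothetical |B1+| map (uT)
      sar_local = rng.uniform(0.5, 2.0, (64, 64))    # hypothetical 10g SAR map (W/kg)
      print(b1_homogeneity(b1_local), sar_efficiency(b1_local, sar_local))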

  6. Analysis of an Integrated 8-Channel Tx/Rx Body Array for Use as a Body Coil in 7-Tesla MRI

    Directory of Open Access Journals (Sweden)

    Stephan Orzada

    2017-06-01

    Object: In this work an 8-channel array integrated into the gap between the gradient coil and bore liner of a 7-Tesla whole-body magnet is presented that would allow a workflow closer to that of systems at lower magnetic fields that have a built-in body coil; this integrated coil is compared to a local 8-channel array built from identical elements placed directly on the patient. Materials and Methods: SAR efficiency and the homogeneity of the right-rotating B1 field component (B1+) are investigated numerically and compared to the local array. Power efficiency measurements are performed in the MRI System. First in vivo gradient echo images are acquired with the integrated array. Results: While the remote array shows a slightly better performance in terms of B1+ homogeneity, the power efficiency and the SAR efficiency are inferior to those of the local array: the transmit voltage has to be increased by a factor of 3.15 to achieve equal flip angles in a central axial slice. The g-factor calculations show a better parallel imaging g-factor for the local array. The field of view of the integrated array is larger than that of the local array. First in vivo images with the integrated array look subjectively promising. Conclusion: Although some RF performance parameters of the integrated array are inferior to a tight-fitting local array, these disadvantages might be compensated by the use of amplifiers with higher power and the use of local receive arrays. In addition, the distant placement provides the potential to include more elements in the array design.

  7. Revisiting tourist behavior via destination brand worldness

    Directory of Open Access Journals (Sweden)

    Murat Kayak

    2016-11-01

    Taking tourists’ perspective rather than destination offerings as its core concept, this study introduces “perceived destination brand worldness” as a variable. Perceived destination brand worldness is defined as the positive perception that a tourist has of a country that is visited by tourists from all over the world. Then, the relationship between perceived destination brand worldness and intention to revisit is analyzed using partial least squares regression. This empirical study selects Taiwanese tourists as its sample, and the results show that perceived destination brand worldness is a direct predictor of intention to revisit. In light of these empirical findings and observations, practical and theoretical implications are discussed.

  8. Analysis of Naturally Occurring Phenolic Compounds in Aromatic Plants by RP-HPLC Coupled to Diode Array Detector (DAD and GC-MS after Silylation

    Directory of Open Access Journals (Sweden)

    Charalampos Proestos

    2013-03-01

    The following aromatic plants of Greek origin, Origanum dictamnus (dictamus), Eucalyptus globulus (eucalyptus), Origanum vulgare L. (oregano), Melissa officinalis L. (balm mint) and Sideritis cretica (mountain tea), were examined for their content of phenolic substances. Reversed-phase HPLC coupled to a diode array detector (DAD) was used for the analysis of the plant extracts. Gas chromatography-mass spectrometry (GC-MS) was also used for identification of phenolic compounds after silylation. The most abundant phenolic acids were gallic acid (1.5–2.6 mg/100 g dry sample), ferulic acid (0.34–6.9 mg/100 g dry sample) and caffeic acid (1.0–13.8 mg/100 g dry sample). (+)-Catechin and (−)-epicatechin were the main flavonoids identified in oregano and mountain tea. Quercetin was detected only in eucalyptus and mountain tea.

  9. Fiber Laser Array

    National Research Council Canada - National Science Library

    Simpson, Thomas

    2002-01-01

    ...., field-dependent, loss within the coupled laser array. During this program, Jaycor focused on the construction and use of an experimental apparatus that can be used to investigate the coherent combination of an array of fiber lasers...

  10. Multi-component determination and chemometric analysis of Paris polyphylla by ultra high performance liquid chromatography with photodiode array detection.

    Science.gov (United States)

    Chen, Pei; Jin, Hong-Yu; Sun, Lei; Ma, Shuang-Cheng

    2016-09-01

    Multi-source analysis of traditional Chinese medicine is key to ensuring its safety and efficacy. Compared with traditional experimental differentiation, chemometric analysis is a simpler strategy for identifying traditional Chinese medicines. Multi-component analysis plays an increasingly vital role in the quality control of traditional Chinese medicines. A novel strategy, based on chemometric analysis and quantitative analysis of multiple components, was proposed to easily and effectively control the quality of traditional Chinese medicines such as Chonglou, with ultra high performance liquid chromatography proving more convenient and efficient. Five species of Chonglou were distinguished by chemometric analysis, and nine saponins, namely Chonglou saponins I, II, V, VI, VII, D, and H, as well as dioscin and gracillin, were determined in 18 min. The method is feasible and credible and enables improved quality control of traditional Chinese medicines and natural products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Proteomics Analysis of Cancer Exosomes Using a Novel Modified Aptamer-based Array (SOMAscanTM) Platform*

    Science.gov (United States)

    Webber, Jason; Stone, Timothy C.; Katilius, Evaldas; Smith, Breanna C.; Gordon, Bridget; Mason, Malcolm D.; Tabi, Zsuzsanna; Brewis, Ian A.; Clayton, Aled

    2014-01-01

    We have used a novel affinity-based proteomics technology to examine the protein signature of small secreted extracellular vesicles called exosomes. The technology uses a new class of protein binding reagents called SOMAmers® (slow off-rate modified aptamers) and allows the simultaneous precise measurement of over 1000 proteins. Exosomes were highly purified from the Du145 prostate cancer cell line, by pooling selected fractions from a continuous sucrose gradient (within the density range of 1.1 to 1.2 g/ml), and examined under standard conditions or with additional detergent treatment by the SOMAscanTM array (version 3.0). Lysates of Du145 cells were also prepared, and the profiles were compared. Housekeeping proteins such as cyclophilin-A, LDH, and Hsp70 were present in exosomes, and we identified almost 100 proteins that were enriched in exosomes relative to cells. These included proteins of known association with cancer exosomes such as MFG-E8, integrins, and MET, and also those less widely reported as exosomally associated, such as ROR1 and ITIH4. Several proteins with no previously known exosomal association were confirmed as exosomally expressed in experiments using individual SOMAmer® reagents or antibodies in micro-plate assays. Western blotting confirmed the SOMAscanTM-identified enrichment of exosomal NOTCH-3, L1CAM, RAC1, and ADAM9. In conclusion, we describe here over 300 proteins of hitherto unknown association with prostate cancer exosomes and suggest that the SOMAmer®-based assay technology is an effective proteomics platform for exosome-associated biomarker discovery in diverse clinical settings. PMID:24505114

  12. A Framework for the Comparative Assessment of Neuronal Spike Sorting Algorithms towards More Accurate Off-Line and On-Line Microelectrode Arrays Data Analysis.

    Science.gov (United States)

    Regalia, Giulia; Coelli, Stefania; Biffi, Emilia; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2016-01-01

    Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of algorithms' performances on the properties of the neuronal signals at each channel, which calls for data-centric methods. Moreover, sorting is commonly performed off-line, which is time- and memory-consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting "building blocks" into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms to simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has been proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis.
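
    A typical feature-extraction-plus-clustering pipeline of the kind the framework above compares can be sketched in a few lines: detect threshold crossings, cut out waveforms, project them onto principal components, and cluster. The code below is a generic illustration in Python (the framework itself is Matlab-based), with synthetic data, a fixed cluster count, and simple threshold detection as simplifying assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      def detect_spikes(trace, fs, thresh_sd=4.5, win_ms=2.0):
          """Return waveform snippets around negative threshold crossings."""
          sigma = np.median(np.abs(trace)) / 0.6745          # robust noise estimate
          half = int(win_ms * 1e-3 * fs / 2)
          idx = np.where(trace < -thresh_sd * sigma)[0]
          idx = idx[(idx > half) & (idx < len(trace) - half)]
          return np.array([trace[i - half:i + half] for i in idx])

      def sort_spikes(waveforms, n_units=2, n_pcs=3):
          """PCA feature extraction followed by k-means clustering (one possible combination of 'building blocks')."""
          feats = PCA(n_components=n_pcs).fit_transform(waveforms)
          return KMeans(n_clusters=n_units, n_init=10, random_state=0).fit_predict(feats)

      # Synthetic trace: noise plus a few injected spikes of two amplitudes.
      fs = 25_000
      rng = np.random.default_rng(0)
      trace = rng.normal(0, 1.0, fs)
      for t0, amp in [(5000, -8.0), (12000, -12.0), (19000, -8.5)]:
          trace[t0:t0 + 10] += amp * np.hanning(10)
      waves = detect_spikes(trace, fs)
      if len(waves) >= 3:
          print(sort_spikes(waves, n_units=2))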

  13. Leukemia and ionizing radiation revisited

    Energy Technology Data Exchange (ETDEWEB)

    Cuttler, J.M. [Cuttler & Associates Inc., Vaughan, Ontario (Canada); Welsh, J.S. [Loyola University-Chicago, Dept. or Radiation Oncology, Stritch School of Medicine, Maywood, Illinois (United States)

    2016-03-15

    A world-wide radiation health scare was created in the late 1950s to stop the testing of atomic bombs and block the development of nuclear energy. In spite of the large amount of evidence that contradicts the cancer predictions, this fear continues. It impairs the use of low radiation doses in medical diagnostic imaging and radiation therapy. This brief article revisits the second of two key studies, which revolutionized radiation protection, and identifies a serious error that was missed. This error in analyzing the leukemia incidence among the 195,000 survivors, in the combined exposed populations of Hiroshima and Nagasaki, invalidates use of the LNT model for assessing the risk of cancer from ionizing radiation. The threshold acute dose for radiation-induced leukemia, based on about 96,800 humans, is identified to be about 50 rem, or 0.5 Sv. It is reasonable to expect that the thresholds for other cancer types are higher than this level. No predictions or hints of excess cancer risk (or any other health risk) should be made for an acute exposure below this value until there is scientific evidence to support the LNT hypothesis. (author)

  14. Individualist Biocentrism vs. Holism Revisited

    Directory of Open Access Journals (Sweden)

    Katie McShane

    2014-06-01

    While holist views such as ecocentrism have considerable intuitive appeal, arguing for the moral considerability of ecological wholes such as ecosystems has turned out to be a very difficult task. In the environmental ethics literature, individualist biocentrists have persuasively argued that individual organisms—but not ecological wholes—are properly regarded as having a good of their own. In this paper, I revisit those arguments and contend that they are fatally flawed. The paper proceeds in five parts. First, I consider some problems brought about by climate change for environmental conservation strategies and argue that these problems give us good pragmatic reasons to want a better account of the welfare of ecological wholes. Second, I describe the theoretical assumptions from normative ethics that form the background of the arguments against holism. Third, I review the arguments given by individualist biocentrists in favour of individualism over holism. Fourth, I review recent work in the philosophy of biology on the units of selection problem, work in medicine on the human biome, and work in evolutionary biology on epigenetics and endogenous viral elements. I show how these developments undermine both the individualist arguments described above as well as the distinction between individuals and wholes as it has been understood by individualists. Finally, I consider five possible theoretical responses to these problems.

  15. Revisiting the safety of aspartame.

    Science.gov (United States)

    Choudhary, Arbind Kumar; Pretorius, Etheresia

    2017-09-01

    Aspartame is a synthetic dipeptide artificial sweetener, frequently used in foods, medications, and beverages, notably carbonated and powdered soft drinks. Since 1981, when aspartame was first approved by the US Food and Drug Administration, researchers have debated both its recommended safe dosage (40 mg/kg/d) and its general safety to organ systems. This review examines papers published between 2000 and 2016 on both the safe dosage and higher-than-recommended dosages and presents a concise synthesis of current trends. Data on the safe aspartame dosage are controversial, and the literature suggests there are potential side effects associated with aspartame consumption. Since aspartame consumption is on the rise, the safety of this sweetener should be revisited. Most of the literature available on the safety of aspartame is included in this review. Safety studies are based primarily on animal models, as data from human studies are limited. The existing animal studies and the limited human studies suggest that aspartame and its metabolites, whether consumed in quantities significantly higher than the recommended safe dosage or within recommended safe levels, may disrupt the oxidant/antioxidant balance, induce oxidative stress, and damage cell membrane integrity, potentially affecting a variety of cells and tissues and causing a deregulation of cellular function, ultimately leading to systemic inflammation. © The Author(s) 2017. Published by Oxford University Press on behalf of the International Life Sciences Institute. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Depth-dependence of time-lapse seismic velocity change detected by a joint interferometric analysis of vertical array data

    Science.gov (United States)

    Sawazaki, K.; Saito, T.; Ueno, T.; Shiomi, K.

    2015-12-01

    In this study, utilizing the depth sensitivity of interferometric waveforms recorded by co-located Hi-net and KiK-net sensors, we resolve the depth range responsible for the seismic velocity change associated with the M6.3 earthquake that occurred on November 22, 2014, in central Japan. The Hi-net station N.MKGH is located about 20 km northeast of the epicenter, where the seismometer is installed at 150 m depth. At the same site, KiK-net has two strong-motion seismometers installed at depths of 0 and 150 m. To estimate the average velocity change around the N.MKGH station, we apply the stretching technique to the auto-correlation function (ACF) of ambient noise recorded by the Hi-net sensor. To evaluate the sensitivity of the Hi-net ACF to velocity changes above and below the 150 m depth, we perform a numerical wave propagation simulation using a 2-D FDM. To obtain the velocity change above the 150 m depth, we measure the response waveform from the 150 m depth to the surface by computing the deconvolution function (DCF) of earthquake records obtained by the two KiK-net vertical-array sensors. The background annual velocity variation is subtracted from the detected velocity change. From the KiK-net DCF records, the velocity reduction ratio above the 150 m depth is estimated to be 4.2% and 3.1% in the periods of 1-7 days and 7 days to 4 months after the mainshock, respectively. From the Hi-net ACF records, the velocity reduction ratio is estimated to be 2.2% and 1.8% in the same time periods, respectively. This difference in the estimated velocity reduction ratio is attributed to the depth dependence of the velocity change. By using the depth sensitivity obtained from the numerical simulation, we estimate the velocity reduction ratio below the 150 m depth to be lower than 1.0% for both time periods. Thus significant velocity reduction and recovery are observed above the 150 m depth only, which may be caused by the strong ground motion of the mainshock and subsequent healing in the shallow ground.
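
    The stretching technique referred to above estimates a relative velocity change dv/v by finding the time-axis stretch factor that best aligns a current correlation function with a reference one (dv/v = -epsilon at the best stretch). The snippet below is a generic implementation of that idea on synthetic waveforms; the window choice, stretch grid, and data are illustrative assumptions, not the settings used in the study.

      import numpy as np

      def stretching_dvv(reference, current, t, epsilons=np.linspace(-0.05, 0.05, 501)):
          """Grid-search the stretch factor maximizing the correlation between the
          reference trace and the current trace evaluated at stretched times t*(1+eps)."""
          best_cc, best_eps = -np.inf, 0.0
          for eps in epsilons:
              stretched = np.interp(t * (1.0 + eps), t, current)
              cc = np.corrcoef(reference, stretched)[0, 1]
              if cc > best_cc:
                  best_cc, best_eps = cc, eps
          return -best_eps, best_cc    # dv/v = -eps of the best-fitting stretch

      # Synthetic test: a "current" ACF whose time axis is dilated by 2% (velocity drop).
      t = np.linspace(0.0, 10.0, 2000)
      reference = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)
      current = np.sin(2 * np.pi * 1.5 * t / 1.02) * np.exp(-0.2 * t / 1.02)
      print(stretching_dvv(reference, current, t))   # dv/v close to -0.02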

  17. Hybrid Arrays for Chemical Sensing

    Science.gov (United States)

    Kramer, Kirsten E.; Rose-Pehrsson, Susan L.; Johnson, Kevin J.; Minor, Christian P.

    In recent years, multisensory approaches to environment monitoring for chemical detection as well as other forms of situational awareness have become increasingly popular. A hybrid sensor is a multimodal system that incorporates several sensing elements and thus produces data that are multivariate in nature and may be significantly increased in complexity compared to data provided by single-sensor systems. Though a hybrid sensor is itself an array, hybrid sensors are often organized into more complex sensing systems through an assortment of network topologies. Part of the reason for the shift to hybrid sensors is due to advancements in sensor technology and computational power available for processing larger amounts of data. There is also ample evidence to support the claim that a multivariate analytical approach is generally superior to univariate measurements because it provides additional redundant and complementary information (Hall, D. L.; Linas, J., Eds., Handbook of Multisensor Data Fusion, CRC, Boca Raton, FL, 2001). However, the benefits of a multisensory approach are not automatically achieved. Interpretation of data from hybrid arrays of sensors requires the analyst to develop an application-specific methodology to optimally fuse the disparate sources of data generated by the hybrid array into useful information characterizing the sample or environment being observed. Consequently, multivariate data analysis techniques such as those employed in the field of chemometrics have become more important in analyzing sensor array data. Depending on the nature of the acquired data, a number of chemometric algorithms may prove useful in the analysis and interpretation of data from hybrid sensor arrays. It is important to note, however, that the challenges posed by the analysis of hybrid sensor array data are not unique to the field of chemical sensing. Applications in electrical and process engineering, remote sensing, medicine, and of course, artificial

  18. Diagnostics of the BIOMASS feed array prototype

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Pivnenko, Sergey; Pontoppidan, Kennie Nybo

    2013-01-01

    The 3D reconstruction algorithm is applied to the prototype feed array of the BIOMASS synthetic aperture radar, recently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility in Denmark. Careful analysis of the measured feed array data has shown that the test support structure...

  19. Revisiting Francisella tularensis subsp. holarctica, Causative Agent of Tularemia in Germany With Bioinformatics: New Insights in Genome Structure, DNA Methylation and Comparative Phylogenetic Analysis

    Directory of Open Access Journals (Sweden)

    Anne Busch

    2018-03-01

    Full Text Available Francisella (F.) tularensis is a highly virulent, Gram-negative bacterial pathogen and the causative agent of the zoonotic disease tularemia. Here, we generated, analyzed and characterized a high-quality circular genome sequence of the F. tularensis subsp. holarctica strain 12T0050 that caused fatal tularemia in a hare. Besides the genomic structure, we focused on the analysis of oriC, which is unique to the Francisella genus and regulates replication in and outside hosts, and we present the first report on genomic DNA methylation of a Francisella strain. The high-quality genome was used to establish and evaluate a diagnostic whole-genome sequencing pipeline. A genotyping strategy for F. tularensis was developed using various bioinformatics tools. Additionally, whole-genome sequences of F. tularensis subsp. holarctica isolates collected in Germany between 2008 and 2015 were generated. A phylogenetic analysis allowed us to determine the genetic relatedness of these isolates and confirmed the highly conserved nature of F. tularensis subsp. holarctica.

  20. Development of a novel ozone- and photo-stable HyPer5 red fluorescent dye for array CGH and microarray gene expression analysis with consistent performance irrespective of environmental conditions

    Directory of Open Access Journals (Sweden)

    Kille Peter

    2008-11-01

    Full Text Available Background Array-based comparative genomic hybridization (CGH) and gene expression profiling have become vital techniques for identifying molecular defects underlying genetic diseases. Regardless of the microarray platform, cyanine dyes (Cy3 and Cy5) are one of the most widely used fluorescent dye pairs for microarray analysis owing to their brightness and ease of incorporation, enabling a high level of assay sensitivity. However, combining both dyes on arrays can become problematic during summer months when ozone levels rise to near 25 parts per billion (ppb). Under such conditions, Cy5 is known to rapidly degrade, leading to loss of signal from either "homebrew" or commercial arrays. Cy5 can also suffer disproportionately from dye photobleaching, resulting in distortion of (Cy5/Cy3) ratios used in copy number analysis. Our laboratory has been active in fluorescent dye research to find a suitable alternative to Cy5 that is stable to ozone and resistant to photo-bleaching. Here, we report on the development of such a dye, called HyPer5, and describe its exceptional ozone- and photo-stable properties on microarrays. Results Our results show HyPer5 signal to be stable to high ozone levels. Repeated exposure of mouse arrays hybridized with HyPer5-labeled cDNA to 300 ppb ozone at 5, 10 and 15 minute intervals resulted in no signal loss from the dye. In comparison, Cy5 arrays showed a dramatic 80% decrease in total signal during the same interval. Photobleaching experiments show HyPer5 to be resistant to light-induced damage, with a 3-fold improvement in dye stability over Cy5. In high-resolution array CGH experiments, HyPer5 is demonstrated to detect chromosomal aberrations at loci 2p21-16.3 and 15q26.3-26.2 from three patient samples using bacterial artificial chromosome (BAC) arrays. The photostability of HyPer5 is further documented by repeat array scanning without loss of detection. Additionally, HyPer5 arrays are shown to preserve sensitivity and
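
    A toy calculation (invented intensities, not data from the study) showing why the reported 80% Cy5 signal loss matters for copy-number calls: it shifts every log2(Cy5/Cy3) ratio by about -2.3, which would mimic a large loss at a balanced locus.

```python
# Toy numbers only: an 80% loss of Cy5 signal shifts log2(Cy5/Cy3) by log2(0.2) ≈ -2.32.
import math

cy3 = 1000.0                   # hypothetical reference-channel intensity
cy5_true = 1000.0              # hypothetical test-channel intensity at a balanced locus
cy5_degraded = 0.2 * cy5_true  # 80% signal loss, as reported for ozone-exposed Cy5

print(math.log2(cy5_true / cy3))      # 0.0   -> correctly read as "no change"
print(math.log2(cy5_degraded / cy3))  # -2.32 -> spurious apparent copy-number loss
```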

  1. Revisiting Hansen Solubility Parameters by Including Thermodynamics

    NARCIS (Netherlands)

    Louwerse, Manuel J; Fernández-Maldonado, Ana María; Rousseau, Simon; Moreau-Masselon, Chloe; Roux, Bernard; Rothenberg, Gadi

    2017-01-01

    The Hansen solubility parameter approach is revisited by implementing the thermodynamics of dissolution and mixing. Hansen's pragmatic approach has earned its spurs in predicting solvents for polymer solutions, but for molecular solutes improvements are needed. By going into the details of entropy

  2. The Future of Engineering Education--Revisited

    Science.gov (United States)

    Wankat, Phillip C.; Bullard, Lisa G.

    2016-01-01

    This paper revisits the landmark CEE series, "The Future of Engineering Education," published in 2000 (available free in the CEE archives on the internet) to examine the predictions made in the original paper as well as the tools and approaches documented. Most of the advice offered in the original series remains current. Despite new…

  3. Revisiting the formal foundation of Probabilistic Databases

    NARCIS (Netherlands)

    Wanders, B.; van Keulen, Maurice

    2015-01-01

    One of the core problems in soft computing is dealing with uncertainty in data. In this paper, we revisit the formal foundation of a class of probabilistic databases with the purpose to (1) obtain data model independence, (2) separate metadata on uncertainty and probabilities from the raw data, (3)

  4. Revisiting Weak Simulation for Substochastic Markov Chains

    DEFF Research Database (Denmark)

    Jansen, David N.; Song, Lei; Zhang, Lijun

    2013-01-01

    of the logic PCTL\\x, and its completeness was conjectured. We revisit this result and show that soundness does not hold in general, but only for Markov chains without divergence. It is refuted for some systems with substochastic distributions. Moreover, we provide a counterexample to completeness...

  5. Coccolithophorids in polar waters: Wigwamma spp. revisited

    DEFF Research Database (Denmark)

    Thomsen, Helge Abildhauge; Østergaard, Jette B.; Heldal, Mikal

    2013-01-01

    A contingent of weakly calcified coccolithophorid genera and species were described from polar regions almost 40 years ago. In the interim period a few additional findings have been reported enlarging the realm of some of the species. The genus Wigwamma is revisited here with the purpose of provi...... appearance of the coccolith armour of the cell...

  6. The Faraday effect revisited: General theory

    DEFF Research Database (Denmark)

    Cornean, Horia Decebal; Nenciu, Gheorghe; Pedersen, Thomas Garm

    2006-01-01

    This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. At zero temperature and zero frequency...

  7. The Faraday effect revisited: General theory

    DEFF Research Database (Denmark)

    Cornean, Horia Decebal; Nenciu, Gheorghe; Pedersen, Thomas Garm

    This paper is the first in a series revisiting the Faraday effect, or more generally, the theory of electronic quantum transport/optical response in bulk media in the presence of a constant magnetic field. The independent electron approximation is assumed. For free electrons, the transverse...

  8. Backwardation in energy futures markets: Metallgesellschaft revisited

    International Nuclear Information System (INIS)

    Charupat, N.; Deaves, R.

    2003-01-01

    Energy supply contracts negotiated by the US subsidiary of Metallgesellschaft Refining and Marketing (MGRM), which were the subject of much subsequent debate, are re-examined. The contracts were hedged by the US subsidiary barrel-for-barrel using short-dated energy derivatives. When the hedge program experienced difficulties, the derivatives positions were promptly liquidated by the parent company. Revisiting the MGRM contracts also provides the opportunity to explore the latest evidence on backwardation in energy markets. Accordingly, the paper first discusses the theoretical reasons for backwardation, followed by an empirical examination using the MGRM data available at the time of the hedge program in 1992 and a second set of data that became available in 2000. By using a more up-to-date data set covering a longer time period and by controlling for the time-series properties of the data, the authors expect to provide more reliable empirical evidence on the behaviour of energy futures prices. Results based on the 1992 data suggest that the strategy employed by MGRM could be expected to be profitable while the risks were relatively low. However, analysis based on the 2000 data shows lower, although still significant, profits but higher risks. The final conclusion was that the likelihood of problems similar to those faced by MGRM in 1992 is twice as high with the updated 2000 data, suggesting that the risk-return pattern of the stack-and-roll hedging strategy using short-dated energy futures contracts to hedge long-term contracts is less appealing now than when MGRM implemented its hedging program in 1992. 24 refs., 3 tabs., 6 figs
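
    As a rough, hypothetical illustration of the quantities discussed (not the MGRM positions or the authors' methodology), the sketch below computes the per-roll gain of a stack-and-roll hedge that holds the front-month contract and rolls into the next month at each expiry; positive values correspond to a market in backwardation.

```python
import numpy as np

# Hypothetical settlement prices at four consecutive roll dates (not MGRM data).
front = np.array([20.0, 19.5, 19.8, 20.4])   # expiring front-month contract
second = np.array([19.4, 19.0, 19.6, 20.6])  # next-month contract bought at the roll

# Positive values indicate backwardation; they are also the per-barrel roll gain
# earned by a stack-and-roll hedge that sells the front and buys the next month.
roll_gain = front - second
print(roll_gain)        # [ 0.6  0.5  0.2 -0.2]  (the last roll occurs in contango)
print(roll_gain.sum())  # cumulative roll gain over the hypothetical period: 1.1
```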

  9. Burgers vector analysis of large area misfit dislocation arrays from bend contour contrast in transmission electron microscope images

    CERN Document Server

    Spiecker, E

    2002-01-01

    A transmission electron microscopy method is described which allows us to determine the Burgers vectors (BVs) of a large number of interfacial misfit dislocations (MDs) in mismatched heterostructures. The method combines large-area plan-view thinning of the sample for creating a strongly bent electron transparent foil with the analysis of the splitting and displacement of bend contours at their crossings with the MDs. The BV analysis is demonstrated for 60 deg. MDs in a low-mismatched SiGe/Si(001) heterostructure. Crossings of various bend contours with the MDs are analysed with respect to their information content for the BV analysis. In future applications the method may be used for analysing such a large number of MDs that a quantitative comparison with x-ray diffraction experiments, especially with data on diffusely scattered x-rays originating from the strain fields around the dislocations, becomes possible.

  10. The Mimas ghost revisited - An analysis of the electron flux and electron microsignatures observed in the vicinity of Mimas at Saturn

    Science.gov (United States)

    Chenette, D. L.; Stone, E. C.

    1983-01-01

    An analysis of the electron-absorption signature observed by the cosmic-ray system on Voyager 2 near the orbit of Mimas is presented. It is found that these observations cannot be explained as the absorption signature of Mimas. By combining Pioneer 11 and Voyager 2 measurements of the electron flux at Mimas's orbit (L = 3.1), an electron spectrum is found in which most of the flux above about 100 keV is concentrated near 1 to 3 MeV. This spectral form is qualitatively consistent with the bandpass filter model of Van Allen et al. (1980). The expected Mimas absorption signature is calculated from this spectrum neglecting radial diffusion. Since no Mimas absorption signature was observed in the inbound Voyager 2 data, a lower limit on the diffusion coefficient for MeV electrons at L = 3.1 of D > 10^-8 R_S^2/s (Saturn radii squared per second) is obtained. With a diffusion coefficient this large, both the Voyager 2 and the Pioneer 11 small-scale electron-absorption-signature observations in Mimas's orbit are enigmatic. Thus the mechanism for producing these signatures is referred to as the Mimas ghost. A cloud of material in orbit with Mimas may account for the observed electron signature if the cloud is at least 1-percent opaque to electrons across a region extending over a few hundred kilometers.

  11. The Mimas ghost revisited: An analysis of the electron flux and electron microsignatures observed in the vicinity of Mimas at Saturn

    Science.gov (United States)

    Chenette, D. L.; Stone, E. C.

    1983-01-01

    An analysis of the electron absorption signature observed by the Cosmic Ray System (CRS) on Voyager 2 near the orbit of Mimas is presented. We find that these observations cannot be explained as the absorption signature of Mimas. Combining Pioneer 11 and Voyager 2 measurements of the electron flux at Mimas's orbit (L = 3.1), we find an electron spectrum where most of the flux above approximately 100 keV is concentrated near 1 to 3 MeV. The expected Mimas absorption signature is calculated from this spectrum neglecting radial diffusion. A lower limit on the diffusion coefficient for MeV electrons is obtained. With a diffusion coefficient this large, both the Voyager 2 and the Pioneer 11 small-scale electron absorption signature observations in Mimas's orbit are enigmatic. Thus we refer to the mechanism for producing these signatures as the Mimas ghost. A cloud of material in orbit with Mimas may account for the observed electron signature if the cloud is at least 1% opaque to electrons across a region extending over a few hundred kilometers.

  12. Evolution of social versus individual learning in a subdivided population revisited: comparative analysis of three coexistence mechanisms using the inclusive-fitness method.

    Science.gov (United States)

    Kobayashi, Yutaka; Ohtsuki, Hisashi

    2014-03-01

    Learning abilities are categorized into social (learning from others) and individual learning (learning on one's own). Despite the typically higher cost of individual learning, there are mechanisms that allow stable coexistence of both learning modes in a single population. In this paper, we investigate by means of mathematical modeling how the effect of spatial structure on evolutionary outcomes of pure social and individual learning strategies depends on the mechanisms for coexistence. We model a spatially structured population based on the infinite-island framework and consider three scenarios that differ in coexistence mechanisms. Using the inclusive-fitness method, we derive the equilibrium frequency of social learners and the genetic load of social learning (defined as average fecundity reduction caused by the presence of social learning) in terms of some summary statistics, such as relatedness, for each of the three scenarios and compare the results. This comparative analysis not only reconciles previous models that made contradictory predictions as to the effect of spatial structure on the equilibrium frequency of social learners but also derives a simple mathematical rule that determines the sign of the genetic load (i.e. whether or not social learning contributes to the mean fecundity of the population). Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Socio-economic development and emotion-health connection revisited: a multilevel modeling analysis using data from 162 counties in China.

    Science.gov (United States)

    Yu, Zonghuo; Wang, Fei

    2016-03-12

    Substantial research has shown that emotions play a critical role in physical health. However, most of these studies were conducted in industrialized countries, and it is still an open question whether the emotion-health connection is a "first-world problem". In the current study, we examined the influence of socio-economic development on the emotion-health connection by performing a multilevel-modeling analysis on a dataset of 33,600 individuals from 162 counties in China. Results showed that both positive and negative emotions predicted the level of physical health, and that regional Gross Domestic Product Per Capita (GDPPC) had some impact on the association between emotion and health through the accessibility of medical resources and educational status. However, these impacts were suppressed, and the total effects of GDPPC on emotion-health connections were not significant. These results support the universality of the emotion-health connection across levels of GDPPC and provide new insight into how socio-economic development might affect these connections.
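
    A minimal sketch of the kind of multilevel (mixed-effects) model the study describes, with individuals nested in counties and cross-level interactions between emotion and county GDPPC; the simulated data, the column names, and the use of statsmodels are assumptions, not the authors' analysis.

```python
# Random-intercept multilevel model: individuals nested within counties, emotion
# scores predicting self-rated health, with cross-level GDPPC interactions.
# The simulated data and all variable names below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_counties, n_per_county = 30, 40            # small simulated stand-in for 162 counties
county = np.repeat(np.arange(n_counties), n_per_county)
gdppc = np.repeat(rng.normal(size=n_counties), n_per_county)  # county-level predictor
pos = rng.normal(size=county.size)                            # positive emotion score
neg = rng.normal(size=county.size)                            # negative emotion score
county_effect = np.repeat(rng.normal(scale=0.5, size=n_counties), n_per_county)
health = 0.3 * pos - 0.2 * neg + 0.1 * pos * gdppc + county_effect + rng.normal(size=county.size)

df = pd.DataFrame(dict(health=health, positive_emotion=pos, negative_emotion=neg,
                       gdppc=gdppc, county_id=county))

model = smf.mixedlm("health ~ positive_emotion * gdppc + negative_emotion * gdppc",
                    data=df, groups=df["county_id"])   # random intercept per county
print(model.fit().summary())
```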

  14. The Analysis of the Possible Thermal Emission at Radio Frequencies from an Evolved Supernova Remnant HB 3 (G132.7+1.3): Revisited

    Directory of Open Access Journals (Sweden)

    Onić, D.

    2008-12-01

    Full Text Available It has recently been reported that some of the flux density values for the evolved supernova remnant (SNR) HB 3 (G132.7+1.3) are not accurate enough. In this work we therefore revised the analysis of the possible thermal emission at radio frequencies from this SNR using the recently published, corrected flux density values. A model including the sum of non-thermal (purely synchrotron) and thermal (bremsstrahlung) components is applied to fit the integrated radio spectrum of this SNR. The contribution of the thermal component to the total volume emissivity at 1 GHz is estimated to be ≈ 37%. The ambient density is also estimated to be n ≈ 9 cm^-3 for T = 10^4 K. Again we obtained a relatively significant presence of thermal emission at radio frequencies from the SNR, which can support an interaction between SNR HB 3 and the adjacent molecular cloud associated with the H II region W3. Our model estimates for the thermal component contribution to the total volume emissivity at 1 GHz and the ambient density are similar to those obtained earlier (≈ 40%, ≈ 10 cm^-3). It is thus obvious that the corrected flux density values do not affect the basic conclusions.
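
    The sketch below shows one generic way to fit such a two-component spectrum, modelling the non-thermal part as a synchrotron power law and the optically thin thermal bremsstrahlung part as varying roughly as nu^-0.1; the frequencies, flux densities, and initial guesses are placeholders, not the published values.

```python
# Generic two-component fit: S(nu) = S_nt * nu^-alpha + S_th * nu^-0.1  (nu in GHz),
# i.e., synchrotron power law plus optically thin thermal bremsstrahlung.
import numpy as np
from scipy.optimize import curve_fit

def model(nu_ghz, s_nt, alpha, s_th):
    return s_nt * nu_ghz ** (-alpha) + s_th * nu_ghz ** (-0.1)

nu = np.array([0.178, 0.408, 0.865, 1.4, 2.7, 4.8])         # GHz, placeholder frequencies
flux = np.array([390.0, 280.0, 190.0, 150.0, 110.0, 90.0])  # Jy, placeholder flux densities

popt, _ = curve_fit(model, nu, flux, p0=[100.0, 0.6, 50.0], bounds=(0, np.inf))
s_nt, alpha, s_th = popt
thermal_fraction_1ghz = s_th / model(1.0, *popt)  # thermal share of the emission at 1 GHz
print(alpha, thermal_fraction_1ghz)
```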

  15. Fingerprint enhancement revisited and the effects of blood enhancement chemicals on subsequent Profiler Plus fluorescent short tandem repeat DNA analysis of fresh and aged bloody fingerprints.

    Science.gov (United States)

    Frégeau, C J; Germain, O; Fourney, R M

    2000-03-01

    This study was aimed at determining the effect of seven blood enhancement reagents on the subsequent Profiler Plus fluorescent STR DNA analysis of fresh or aged bloody fingerprints deposited on various porous and nonporous surfaces. Amido Black, Crowle's Double Stain, 1,8-diazafluoren-9-one (DFO), Hungarian Red, leucomalachite green, luminol and ninhydrin were tested on linoleum, glass, metal, wood (pine, painted white), clothing (85% polyester/15% cotton, 65% polyester/35% cotton, and blue denim) and paper (Scott 2-ply and Xerox-grade). Preliminary experiments were designed to determine the optimal blood dilutions to use to ensure a DNA typing result following chemical enhancement. A 1:200 blood dilution deposited on linoleum and enhanced with Crowle's Double Stain generated enough DNA for one to two rounds of Profiler Plus PCR amplification. A comparative study of the DNA yields before and after treatment indicated that the quantity of DNA recovered from bloody fingerprints following enhancement was reduced by a factor of 2 to 12. Such a reduction in the DNA yields could potentially compromise DNA typing analysis in the case of small stains. The blood enhancement chemicals selected were also evaluated for their capability to reveal bloodmarks on the various porous and nonporous surfaces chosen in this study. Luminol, Amido Black, and Crowle's Double Stain showed the highest sensitivity of all seven chemicals tested and revealed highly diluted (1:200) bloody fingerprints. Both luminol and Amido Black produced excellent results on both porous and nonporous surfaces, but Crowle's Double Stain failed to produce any results on porous substrates. Hungarian Red, DFO, leucomalachite green and ninhydrin showed lower sensitivities. Enhancement of bloodmarks using any of the chemicals selected, and short-term exposure to these same chemicals (i.e., less than 54 days), had no adverse effects on the PCR amplification of the nine STR systems surveyed (D3S 1358, HumvWA, Hum

  16. Revisiting Sex Equality With Transcatheter Aortic Valve Replacement Outcomes: A Collaborative, Patient-Level Meta-Analysis of 11,310 Patients.

    Science.gov (United States)

    O'Connor, Stephen A; Morice, Marie-Claude; Gilard, Martine; Leon, Martin B; Webb, John G; Dvir, Danny; Rodés-Cabau, Josep; Tamburino, Corrado; Capodanno, Davide; D'Ascenzo, Fabrizio; Garot, Philippe; Chevalier, Bernard; Mikhail, Ghada W; Ludman, Peter F

    2015-07-21

    There has been conflicting clinical evidence as to the influence of female sex on outcomes after transcatheter aortic valve replacement. The aim of this study was to evaluate the impact of sex on early and late mortality and safety end points after transcatheter aortic valve replacement using a collaborative meta-analysis of patient-level data. From the MEDLINE, Embase, and Cochrane Library databases, data were obtained from 5 studies, and a database containing individual patient-level time-to-event data was generated from the registry of each selected study. The primary outcome of interest was all-cause mortality. The safety end point was the combined 30-day safety end point of major vascular complications, bleeding events, and stroke, as defined by the Valve Academic Research Consortium when available. Five studies and their ongoing registry data, comprising 11,310 patients, were included. Women constituted 48.6% of the cohort and had fewer comorbidities than men. Women had a higher rate of major vascular complications (6.3% vs. 3.4%; p < 0.001). Unadjusted mortality rates did not differ significantly between women and men (2.6% vs. 2.2% [p = 0.24] and 6.5% vs. 6.5% [p = 0.93]), but female sex was independently associated with improved survival at a median follow-up of 387 days (interquartile range: 192 to 730 days) from the index procedure (adjusted hazard ratio: 0.79; 95% confidence interval: 0.73 to 0.86; p = 0.001). Although women experience more bleeding events, as well as vascular and stroke complications, female sex is an independent predictor of late survival after transcatheter aortic valve replacement. This should be taken into account during patient selection for this procedure. Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  17. Spatial Analysis and Synthesis of Car Audio System and Car Cabin Acoustics with a Compact Microphone Array

    DEFF Research Database (Denmark)

    Sakari, Tervo; Pätynen, Jukka; Kaplanis, Neofytos

    2015-01-01

    This research proposes a spatial sound analysis and synthesis approach for automobile sound systems, where the acquisition of the measurement data is much faster than with the Binaural Car Scanning method. This approach avoids the problems that are typically found with binaural reproduction...

  18. Josephson junctions array resonators

    Energy Technology Data Exchange (ETDEWEB)

    Gargiulo, Oscar; Muppalla, Phani; Mirzaei, Iman; Kirchmair, Gerhard [Institute for Quantum Optics and Quantum Information, Innsbruck (Austria)

    2016-07-01

    We present an experimental analysis of the self- and cross-Kerr effect of extended plasma resonances in Josephson junction chains. The chain consists of 1600 individual junctions, and we can measure quality factors in excess of 10,000. The Kerr effect manifests itself as a frequency shift that depends linearly on the number of photons in a resonant mode. By changing the input power we are able to measure this frequency shift on a single mode (self-Kerr). By changing the input power of another mode while measuring the same one, we are able to evaluate the cross-Kerr effect, probing the resonance frequency of one mode while exciting another mode of the array with a microwave drive.
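
    In the commonly assumed Kerr-Hamiltonian picture (a sketch of the standard form, not necessarily the authors' notation), the measured shifts correspond to a mode frequency that decreases linearly with the photon numbers in its own and in the other modes:

```latex
% Hedged sketch: self- and cross-Kerr shifts of mode m; K_{mm} and K_{mm'} denote the
% self- and cross-Kerr coefficients and n the photon number in each mode.
\begin{equation}
  \omega_m(\{n\}) \;\approx\; \omega_m^{(0)} - K_{mm}\, n_m - \sum_{m' \neq m} K_{m m'}\, n_{m'}
\end{equation}
```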

  19. The clinical phenotype associated with myositis-specific and associated autoantibodies: a meta-analysis revisiting the so-called antisynthetase syndrome.

    Science.gov (United States)

    Lega, Jean-Christophe; Fabien, Nicole; Reynaud, Quitterie; Durieu, Isabelle; Durupt, Stéphane; Dutertre, Marine; Cordier, Jean-François; Cottin, Vincent

    2014-09-01

    To describe the clinical spectrum associated with aminoacyl-transfer RNA synthetase (ARS) autoantibodies in patients with idiopathic inflammatory myositis defined according to Peter and Bohan's criteria. Cohort studies were selected from MEDLINE and Embase up to August 2013. Two investigators independently extracted data on study design, patient characteristics, and clinical features (interstitial lung disease [ILD], fever, mechanic's hands [MH], Raynaud's phenomenon [RPh], arthralgia, sclerodactyly, cancer and dermatomyositis-specific rash) according to the presence of myositis-specific (anti-aminoacyl-transfer RNA synthetase [ARS], anti-signal recognition particle [anti-SRP] and anti-Mi2) and myositis-associated (anti-PM/Scl, anti-U1-RNP and anti-Ku) autoantibodies. 27 studies (3487 patients) were included in the meta-analysis. Arthralgia (75%, CI 67-81) and ILD (69%, CI 63-74) were the most prevalent clinical signs associated with anti-ARS autoantibodies. Anti-Mi2 and anti-SRP autoantibodies were associated with few extramuscular signs. ARS autoantibodies were identified in 13% of patients with cancer-associated myositis (5-25). Patients with non-anti-Jo1 ARS had greater odds of presenting fever (RR 0.63, CI 0.52-0.90) and ILD (RR 0.87, CI 0.81-0.93) compared to those with anti-Jo1 autoantibodies. The frequencies of myositis (RR 1.60, CI 1.38-1.85), arthralgia (RR 1.52, CI 1.32-1.76) and MH (RR 1.47, CI 1.11-1.94) were almost 50% higher in patients with anti-Jo1 compared to non-anti-Jo1 ARS autoantibodies. Patients with anti-PM/Scl differed from those with anti-ARS autoantibodies by a greater prevalence of RPh (RR 0.70, CI 0.53-0.94) and sclerodactyly (RR 0.47, CI 0.25-0.89). ILD was less frequent in patients with anti-U1-RNP autoantibodies (RR 3.35, CI 1.07-10.43). No difference was observed between anti-ARS and myositis-associated autoantibodies for other outcomes. The presence of anti-ARS autoantibodies delimits a heterogeneous subset of patients with a high

  20. Analyzing Array Manipulating Programs by Program Transformation

    Science.gov (United States)

    Cornish, J. Robert M.; Gange, Graeme; Navas, Jorge A.; Schachte, Peter; Sondergaard, Harald; Stuckey, Peter J.

    2014-01-01

    We explore a transformational approach to the problem of verifying simple array-manipulating programs. Traditionally, verification of such programs requires intricate analysis machinery to reason with universally quantified statements about symbolic array segments, such as "every data item stored in the segment A[i] to A[j] is equal to the corresponding item stored in the segment B[i] to B[j]." We define a simple abstract machine which allows for set-valued variables and we show how to translate programs with array operations to array-free code for this machine. For the purpose of program analysis, the translated program remains faithful to the semantics of array manipulation. Based on our implementation in LLVM, we evaluate the approach with respect to its ability to extract useful invariants and the cost in terms of code size.
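
    As a toy illustration of the idea (not the paper's translation or abstract machine), the sketch below pairs a concrete array-copying loop with an "array-free" counterpart in which each array segment is summarised by a set-valued variable, so the universally quantified segment invariant weakens to a simple set equality.

```python
# Toy illustration only: a concrete array-manipulating loop and a hand-written
# "array-free" counterpart in which segments are modelled as set-valued variables.

def copy_segment(a, b, i, j):
    """Concrete program: copy every item of a[i..j] into b[i..j]."""
    for k in range(i, j + 1):
        b[k] = a[k]
    return b

def copy_segment_abstract(a_segment_values):
    """Abstract counterpart: the segments a[i..j] and b[i..j] are represented by the
    sets of values they may contain, so the quantified invariant 'every b[k] equals
    the corresponding a[k]' weakens to equality of the two value sets."""
    b_segment_values = set(a_segment_values)
    assert b_segment_values == set(a_segment_values)
    return b_segment_values

a = [3, 1, 4, 1, 5, 9]
b = [0] * 6
print(copy_segment(a, b, 1, 4))       # [0, 1, 4, 1, 5, 0]
print(copy_segment_abstract(a[1:5]))  # {1, 4, 5}
```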