WorldWideScience

Sample records for probe set optimization

  1. An Optimized Set of Fluorescence In Situ Hybridization Probes for Detection of Pancreatobiliary Tract Cancer in Cytology Brush Samples.

    Science.gov (United States)

    Barr Fritcher, Emily G; Voss, Jesse S; Brankley, Shannon M; Campion, Michael B; Jenkins, Sarah M; Keeney, Matthew E; Henry, Michael R; Kerr, Sarah M; Chaiteerakij, Roongruedee; Pestova, Ekaterina V; Clayton, Amy C; Zhang, Jun; Roberts, Lewis R; Gores, Gregory J; Halling, Kevin C; Kipp, Benjamin R

    2015-12-01

    Pancreatobiliary cancer is detected by fluorescence in situ hybridization (FISH) of pancreatobiliary brush samples with UroVysion probes, originally designed to detect bladder cancer. We designed a set of new probes to detect pancreatobiliary cancer and compared its performance with that of UroVysion and routine cytology analysis. We tested a set of FISH probes on tumor tissues (cholangiocarcinoma or pancreatic carcinoma) and non-tumor tissues from 29 patients. We identified 4 probes that had high specificity for tumor vs non-tumor tissues; we called this set of probes pancreatobiliary FISH. We performed a retrospective analysis of brush samples from 272 patients who underwent endoscopic retrograde cholangiopancreatography for evaluation of malignancy at the Mayo Clinic; results were available from routine cytology and FISH with UroVysion probes. Archived residual specimens were retrieved and used to evaluate the pancreatobiliary FISH probes. Cutoff values for FISH with the pancreatobiliary probes were determined using 89 samples and validated in the remaining 183 samples. Clinical and pathologic evidence of malignancy in the pancreatobiliary tract within 2 years of brush sample collection was used as the standard; samples from patients without malignancies were used as negative controls. The validation cohort included 85 patients with malignancies (46.4%) and 114 patients with primary sclerosing cholangitis (62.3%). Samples containing cells above the cutoff for polysomy (copy number gain of ≥2 probes) were classified as positive in FISH with the UroVysion and pancreatobiliary probes. Multivariable logistic regression was used to estimate associations between clinical and pathology findings and results from FISH. The combination of FISH probes 1q21, 7p12, 8q24, and 9p21 identified cancer cells with 93% sensitivity and 100% specificity in pancreatobiliary tissue samples; these 4 probes were therefore included in the pancreatobiliary probe set. In the validation cohort of

  2. Standardless quantification by parameter optimization in electron probe microanalysis

    International Nuclear Information System (INIS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-01-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists of minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation of the concentration uncertainties.
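    The core of the approach is least-squares fitting of an analytical spectrum model. As a minimal sketch (not POEMA's actual model), the following fits a synthetic spectrum consisting of a linear bremsstrahlung background plus two Gaussian lines with fixed positions and widths, so the model is linear in its parameters and the quadratic difference is minimized in closed form; the parameter covariance then yields the uncertainties. All energies, line positions and intensities below are invented for illustration.

```python
import numpy as np

def gaussian(e, center, sigma):
    return np.exp(-0.5 * ((e - center) / sigma) ** 2)

# Hypothetical energy axis and two characteristic lines (keV); illustrative only.
energy = np.linspace(0.5, 10.0, 500)
lines = [(1.74, 0.06), (6.40, 0.08)]

# Synthetic "experimental" spectrum: linear background + peaks + noise.
rng = np.random.default_rng(0)
true_params = np.array([120.0, -8.0, 900.0, 400.0])  # bkg a0, a1, peak amplitudes
design = np.column_stack(
    [np.ones_like(energy), energy] + [gaussian(energy, c, s) for c, s in lines]
)
spectrum = design @ true_params + rng.normal(0.0, 2.0, energy.size)

# Least-squares fit: minimize the quadratic difference between the measured
# spectrum and the analytical model (linear in the parameters here).
fit, *_ = np.linalg.lstsq(design, spectrum, rcond=None)

# The parameter covariance gives the uncertainty estimates the abstract mentions.
residual = spectrum - design @ fit
dof = energy.size - fit.size
sigma2 = residual @ residual / dof
cov = sigma2 * np.linalg.inv(design.T @ design)
uncert = np.sqrt(np.diag(cov))
```

    In POEMA the model is nonlinear in most parameters, so the minimization is iterative rather than a single linear solve, but the objective is the same quadratic difference.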

  3. Standardless quantification by parameter optimization in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Limandri, Silvina P. [Instituto de Física Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Medina Allende s/n, (5016) Córdoba (Argentina); Bonetto, Rita D. [Centro de Investigación y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco (CINDECA), CONICET, 47 Street 257, (1900) La Plata (Argentina); Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1 and 47 Streets (1900) La Plata (Argentina); Josa, Víctor Galván; Carreras, Alejo C. [Instituto de Física Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Medina Allende s/n, (5016) Córdoba (Argentina); Trincavelli, Jorge C., E-mail: trincavelli@famaf.unc.edu.ar [Instituto de Física Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Medina Allende s/n, (5016) Córdoba (Argentina)

    2012-11-15

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists of minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation of the concentration uncertainties.

  4. Prederivatives of gamma paraconvex set-valued maps and Pareto optimality conditions for set optimization problems.

    Science.gov (United States)

    Huang, Hui; Ning, Jixian

    2017-01-01

    Prederivatives play an important role in the research of set optimization problems. First, we establish several existence theorems of prederivatives for γ-paraconvex set-valued mappings in Banach spaces with [Formula: see text]. Then, in terms of prederivatives, we establish both necessary and sufficient conditions for the existence of Pareto minimal solutions of set optimization problems.

  5. Optimized Chemical Probes for REV-ERBα

    OpenAIRE

    Trump, Ryan P.; Bresciani, Stefano; Cooper, Anthony W. J.; Tellam, James P.; Wojno, Justyna; Blaikley, John; Orband-Miller, Lisa A.; Kashatus, Jennifer A.; Dawson, Helen C.; Loudon, Andrew; Ray, David; Grant, Daniel; Farrow, Stuart N.; Willson, Timothy M.; Tomkinson, Nicholas C. O.

    2013-01-01

    REV-ERBα has emerged as an important target for regulation of circadian rhythm and its associated physiology. Herein, we report on the optimization of a series of REV-ERBα agonists based on GSK4112 (1) for potency, selectivity, and bioavailability. Potent REV-ERBα agonists 4, 10, 16, and 23 are detailed for their ability to suppress BMAL and IL-6 expression from human cells while also demonstrating excellent selectivity over LXRα. Amine 4 demonstrated in vivo bioavailability after either IV o...

  6. Eddy current testing probe optimization using a parallel genetic algorithm

    Directory of Open Access Journals (Sweden)

    Dolapchiev Ivaylo

    2008-01-01

    This paper uses the authors' parallel version of Michalewicz's Genocop III genetic algorithm (GA) to optimize the coil geometry of an eddy current non-destructive testing probe (ECTP). The electromagnetic field is computed using the FEMM 2D finite element code. The aim of this optimization was to determine coil dimensions and positions that improve ECTP sensitivity to the physical properties of the tested devices.
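    The optimization loop can be sketched as a plain generational GA (not Genocop III), with a toy analytic surrogate standing in for the FEMM field computation. The fitness function, geometry bounds and GA settings below are all invented for illustration:

```python
import random

random.seed(1)

# Stand-in fitness: a smooth surrogate for the FEMM-computed probe sensitivity.
# The real objective is evaluated with a 2D finite element model per candidate.
def sensitivity(r_in, r_out, liftoff):
    if r_out <= r_in:                       # infeasible coil geometry
        return -1.0
    return -((r_in - 2.0) ** 2 + (r_out - 5.0) ** 2 + (liftoff - 0.5) ** 2)

BOUNDS = [(0.5, 4.0), (1.0, 8.0), (0.1, 2.0)]  # hypothetical mm ranges

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.2):
    # Gaussian perturbation per gene, clamped to the bounds.
    return [
        min(hi, max(lo, g + random.gauss(0, 0.2))) if random.random() < rate else g
        for g, (lo, hi) in zip(ind, BOUNDS)
    ]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

pop = [random_individual() for _ in range(40)]
for _ in range(60):                          # generations
    pop.sort(key=lambda ind: sensitivity(*ind), reverse=True)
    elite = pop[:10]                         # truncation selection with elitism
    pop = elite + [
        mutate(crossover(random.choice(elite), random.choice(elite)))
        for _ in range(30)
    ]

best = max(pop, key=lambda ind: sensitivity(*ind))
```

    The paper's parallel aspect distributes the (expensive) fitness evaluations across processes; the loop structure is unchanged.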

  7. Improved precision and accuracy for microarrays using updated probe set definitions

    Directory of Open Access Journals (Sweden)

    Larsson Ola

    2007-02-01

    Background: Microarrays enable high-throughput detection of transcript expression levels. Different investigators have recently introduced updated probe set definitions to more accurately map probes to our current knowledge of genes and transcripts. Results: We demonstrate that updated probe set definitions provide both better precision and accuracy in probe set estimates compared to the original Affymetrix definitions. We show that the improved precision mainly depends on the increased number of probes that are integrated into each probe set, but we also demonstrate an improvement when the same number of probes is used. Conclusion: Updated probe set definitions not only offer expression levels that are more accurately associated with genes and transcripts but also improve the estimated transcript expression levels. These results support the use of updated probe set definitions for analysis and meta-analysis of microarray data.

  8. Optimal timing for intravenous administration set replacement.

    Science.gov (United States)

    Gillies, D; O'Riordan, L; Wallen, M; Morrison, A; Rankin, K; Nagy, S

    2005-10-19

    Administration of intravenous therapy is a common occurrence within the hospital setting. Routine replacement of administration sets has been advocated to reduce intravenous infusion contamination. If decreasing the frequency of changing intravenous administration sets does not increase infection rates, a change in practice could result in considerable cost savings. The objective of this review was to identify the optimal interval for the routine replacement of intravenous administration sets when infusate or parenteral nutrition (lipid and non-lipid) solutions are administered to people in hospital via central or peripheral venous catheters. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, CINAHL and EMBASE, all from inception to February 2004, as well as reference lists of identified trials and bibliographies of published reviews. We also contacted researchers in the field. We did not apply a language restriction. We included all randomized or quasi-randomized controlled trials addressing the frequency of replacing intravenous administration sets when parenteral nutrition (lipid and non-lipid containing solutions) or infusions (excluding blood) were administered to people in hospital via a central or peripheral catheter. Two authors assessed all potentially relevant studies. We resolved disagreements between the two authors by discussion with a third author. We collected data for the outcomes: infusate contamination; infusate-related bloodstream infection; catheter contamination; catheter-related bloodstream infection; all-cause bloodstream infection; and all-cause mortality. We identified 23 references for review. We excluded eight of these studies: five because they did not fit the inclusion criteria and three because of inadequate data. We extracted data from the remaining 15 references (13 studies) with 4783 participants. We conclude that there is no evidence that changing intravenous administration sets more often than every 96 hours

  9. Giant machine set to probe secrets of the universe

    CERN Multimedia

    2006-01-01

    "Deep underground on the Franco-Swiss border someone will throw a switch next year to start one of the most ambitious experiments in history, probing the secrets of the universe and possibly finding new dimensions." (1 page)

  10. Giant machine set to probe secrets of the universe

    CERN Multimedia

    2006-01-01

    "Deep underground on the Franco-Swiss border someone will throw a switch next year to start one of the most ambitious experiments in history, probing the secrets of the universe and possibly finding new dimensions." (2/3 page)

  11. Machine set to probe secrets of the universe

    CERN Multimedia

    Lovell, Jeremy

    2006-01-01

    "Deep underground on the Franco-Swiss border someone will throw a switch next year to start one of the most ambitious experiments in history, probing the secrets of the universe and possibly finding new dimensions."

  12. Improving probe set selection for microbial community analysis by leveraging taxonomic information of training sequences

    Directory of Open Access Journals (Sweden)

    Jiang Tao

    2011-10-01

    Background: Population levels of microbial phylotypes can be examined using a hybridization-based method that utilizes a small set of computationally designed DNA probes targeted to a gene common to all. Our previous algorithm attempts to select a set of probes such that each training sequence manifests a unique theoretical hybridization pattern (a binary fingerprint) to a probe set. It does so without taking into account similarity between training gene sequences or their putative taxonomic classifications, however. We present an improved algorithm for probe set selection that utilizes the available taxonomic information of training gene sequences and attempts to choose probes such that the resultant binary fingerprints cluster into real taxonomic groups. Results: Gene sequences manifesting identical fingerprints with probes chosen by the new algorithm are more likely to be from the same taxonomic group than with probes chosen by the previous algorithm. In cases where they are from different taxonomic groups, underlying DNA sequences of identical fingerprints are more similar to each other in probe sets made with the new versus the previous algorithm. Complete removal of large taxonomic groups from training data does not greatly decrease the ability of probe sets to distinguish those groups. Conclusions: Probe sets made with the new algorithm create fingerprints that more reliably cluster into biologically meaningful groups. The method can readily distinguish microbial phylotypes that were excluded from the training sequences, suggesting novel microbes can also be detected.

  13. Improving probe set selection for microbial community analysis by leveraging taxonomic information of training sequences.

    Science.gov (United States)

    Ruegger, Paul M; Della Vedova, Gianluca; Jiang, Tao; Borneman, James

    2011-10-10

    Population levels of microbial phylotypes can be examined using a hybridization-based method that utilizes a small set of computationally-designed DNA probes targeted to a gene common to all. Our previous algorithm attempts to select a set of probes such that each training sequence manifests a unique theoretical hybridization pattern (a binary fingerprint) to a probe set. It does so without taking into account similarity between training gene sequences or their putative taxonomic classifications, however. We present an improved algorithm for probe set selection that utilizes the available taxonomic information of training gene sequences and attempts to choose probes such that the resultant binary fingerprints cluster into real taxonomic groups. Gene sequences manifesting identical fingerprints with probes chosen by the new algorithm are more likely to be from the same taxonomic group than probes chosen by the previous algorithm. In cases where they are from different taxonomic groups, underlying DNA sequences of identical fingerprints are more similar to each other in probe sets made with the new versus the previous algorithm. Complete removal of large taxonomic groups from training data does not greatly decrease the ability of probe sets to distinguish those groups. Probe sets made from the new algorithm create fingerprints that more reliably cluster into biologically meaningful groups. The method can readily distinguish microbial phylotypes that were excluded from the training sequences, suggesting novel microbes can also be detected.
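    The central abstraction here is the binary fingerprint: each training sequence is reduced to a vector of probe hits, and a taxonomy-aware probe choice makes fingerprint groups coincide with taxonomic groups. A toy sketch (sequences, taxa and probes are all invented, and real hybridization is modelled thermodynamically rather than by exact substring match):

```python
# Toy training sequences with hypothetical taxonomic labels.
sequences = {
    "seq1": ("Alpha", "ACGTACGTGGCCAATT"),
    "seq2": ("Alpha", "ACGTACGTGGCCTTAA"),
    "seq3": ("Beta",  "TTGGCCAACGGTACGT"),
    "seq4": ("Beta",  "TTGGCCAACGGTTTTT"),
}

probes = ["ACGTACGT", "TTGGCCAA", "GGCC"]

def fingerprint(seq, probes):
    # Theoretical hybridization pattern: 1 if the probe matches the sequence.
    return tuple(int(p in seq) for p in probes)

# Group training sequences by fingerprint and collect their taxa.
groups = {}
for name, (taxon, seq) in sequences.items():
    groups.setdefault(fingerprint(seq, probes), []).append(taxon)

# With a good probe set, sequences sharing a fingerprint should come from
# the same taxonomic group -- the property the improved algorithm optimizes.
pure = all(len(set(taxa)) == 1 for taxa in groups.values())
```

    The algorithm in the paper searches the space of candidate probe sets for one that maximizes this kind of agreement between fingerprint clusters and taxonomy.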

  14. Vibration behavior optimization of planetary gear sets

    Directory of Open Access Journals (Sweden)

    Farshad Shakeri Aski

    2014-12-01

    This paper presents a global optimization method focused on reducing planetary gear vibration by means of tip-relief profile modifications. A nonlinear dynamic model is used to study the vibration behavior. To find the optimal tip-relief radius and amplitude, a brute-force optimization is used: every candidate solution is evaluated and the best one is selected afterwards. This approach is straightforward but requires considerable computational power. Results show the influence of the optimal profile on planetary gear vibrations.
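    Brute force in this setting means exhaustively scanning a grid of (radius, amplitude) pairs. A sketch under a made-up surrogate metric (the paper evaluates each pair with its nonlinear dynamic model instead):

```python
import itertools

# Surrogate vibration metric standing in for the nonlinear dynamic model:
# a hypothetical RMS vibration level as a function of tip-relief radius (mm)
# and amplitude (um). Purely illustrative.
def vibration_rms(radius, amplitude):
    return (radius - 1.2) ** 2 + 0.5 * (amplitude - 15.0) ** 2 / 100.0 + 0.05

radii = [0.8 + 0.05 * i for i in range(17)]        # 0.80 .. 1.60 mm
amplitudes = [5.0 + 1.0 * j for j in range(26)]    # 5 .. 30 um

# Brute force: evaluate every combination, then decide which one is best.
best = min(
    itertools.product(radii, amplitudes),
    key=lambda ra: vibration_rms(*ra),
)
```

    The cost is the product of the grid sizes (17 × 26 = 442 model evaluations here), which is why the method demands considerable computation when each evaluation is a full dynamic simulation.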

  15. A general framework for optimization of probes for gene expression microarray and its application to the fungus Podospora anserina.

    Science.gov (United States)

    Bidard, Frédérique; Imbeaud, Sandrine; Reymond, Nancie; Lespinet, Olivier; Silar, Philippe; Clavé, Corinne; Delacroix, Hervé; Berteaux-Lecellier, Véronique; Debuchy, Robert

    2010-06-18

    The development of new microarray technologies makes custom long-oligonucleotide arrays affordable for many experimental applications, notably gene expression analyses. Reliable results depend on probe design quality and selection. A probe design strategy should cope with the limited accuracy of de novo gene prediction programs and with annotation updating. We present a novel in silico procedure which addresses these issues and includes experimental screening, as an empirical approach is the best strategy to identify optimal probes among the in silico candidates. We used four criteria for in silico probe selection: cross-hybridization, hairpin stability, probe location relative to the coding sequence end, and intron position. The latter criterion is critical when exon-intron gene structure predictions for intron-rich genes are inaccurate. For each coding sequence (CDS), we selected a subset of four probes. These probes were included in a test microarray, which was used to evaluate the hybridization behavior of each probe. The best probe for each CDS was selected according to three experimental criteria: signal-to-noise ratio, signal reproducibility, and representative signal intensities. This procedure was applied to the development of a gene expression Agilent platform for the filamentous fungus Podospora anserina and the selection of a single 60-mer probe for each of the 10,556 P. anserina CDSs. A reliable gene expression microarray version based on the Agilent 44K platform was developed with four spot replicates of each probe to increase the statistical significance of analysis.
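    The experimental selection step can be sketched as scoring replicate signals for each candidate probe of one CDS. The data, noise floor and combined score below are invented for illustration; the paper ranks probes on signal-to-noise ratio, reproducibility and representative intensity rather than this exact formula.

```python
import statistics

# Hypothetical hybridization intensities for four candidate probes of one
# CDS, three replicate arrays each (arbitrary units).
candidates = {
    "probe_a": [820.0, 845.0, 810.0],
    "probe_b": [150.0, 900.0, 400.0],   # strong but irreproducible
    "probe_c": [55.0, 60.0, 52.0],      # reproducible but weak signal
    "probe_d": [700.0, 710.0, 695.0],
}
background_noise = 40.0                  # assumed array noise floor

def score(signals):
    mean = statistics.mean(signals)
    snr = mean / background_noise               # signal-to-noise ratio
    cv = statistics.stdev(signals) / mean       # reproducibility (lower is better)
    return snr * (1.0 - min(cv, 1.0))           # combined heuristic score

best_probe = max(candidates, key=lambda p: score(candidates[p]))
```

    In the actual pipeline this choice is made per CDS across the whole test microarray, so each of the 10,556 genes ends up with its single best 60-mer.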

  16. A general framework for optimization of probes for gene expression microarray and its application to the fungus Podospora anserina

    Directory of Open Access Journals (Sweden)

    Bidard Frédérique

    2010-06-01

    Background: The development of new microarray technologies makes custom long-oligonucleotide arrays affordable for many experimental applications, notably gene expression analyses. Reliable results depend on probe design quality and selection. A probe design strategy should cope with the limited accuracy of de novo gene prediction programs and with annotation updating. We present a novel in silico procedure which addresses these issues and includes experimental screening, as an empirical approach is the best strategy to identify optimal probes among the in silico candidates. Findings: We used four criteria for in silico probe selection: cross-hybridization, hairpin stability, probe location relative to the coding sequence end, and intron position. The latter criterion is critical when exon-intron gene structure predictions for intron-rich genes are inaccurate. For each coding sequence (CDS), we selected a subset of four probes. These probes were included in a test microarray, which was used to evaluate the hybridization behavior of each probe. The best probe for each CDS was selected according to three experimental criteria: signal-to-noise ratio, signal reproducibility, and representative signal intensities. This procedure was applied to the development of a gene expression Agilent platform for the filamentous fungus Podospora anserina and the selection of a single 60-mer probe for each of the 10,556 P. anserina CDSs. Conclusions: A reliable gene expression microarray version based on the Agilent 44K platform was developed with four spot replicates of each probe to increase the statistical significance of analysis.

  17. Estimating the similarity of alternative Affymetrix probe sets using transcriptional networks

    Science.gov (United States)

    2013-01-01

    Background: The usefulness of the data from Affymetrix microarray analysis depends largely on the reliability of the files describing the correspondence between probe sets, genes and transcripts. Particularly, when a gene is targeted by several probe sets, these files should give information about the similarity of each alternative probe set pair. Transcriptional networks integrate the multiple correlations that exist between all probe sets and supply much more information than a simple correlation coefficient calculated for two series of signals. In this study, we used the PSAWN (Probe Set Assignment With Networks) programme we developed to investigate whether similarity of alternative probe sets resulted in some specific properties. Findings: PSAWNpy delivered a full textual description of each probe set and information on the number and properties of secondary targets. PSAWNml calculated the similarity of each alternative probe set pair and allowed finding relationships between similarity and localisation of probes in common transcripts or exons. Similar alternative probe sets had very low negative correlation, high positive correlation and similar neighbourhood overlap. Using these properties, we devised a test that allowed grouping similar probe sets in a given network. By considering several networks, additional information concerning the similarity reproducibility was obtained, which allowed defining the actual similarity of alternative probe set pairs. In particular, we calculated the common localisation of probes in exons and in known transcripts and we showed that similarity was correctly correlated with them. The information collected on all pairs of alternative probe sets in the most popular 3’ IVT Affymetrix chips is available in tabular form at http://bns.crbm.cnrs.fr/download.html. Conclusions: These processed data can be used to obtain a finer interpretation when comparing microarray data between biological conditions. They are particularly well

  18. Estimating the similarity of alternative Affymetrix probe sets using transcriptional networks.

    Science.gov (United States)

    Bellis, Michel

    2013-03-21

    The usefulness of the data from Affymetrix microarray analysis depends largely on the reliability of the files describing the correspondence between probe sets, genes and transcripts. Particularly, when a gene is targeted by several probe sets, these files should give information about the similarity of each alternative probe set pair. Transcriptional networks integrate the multiple correlations that exist between all probe sets and supply much more information than a simple correlation coefficient calculated for two series of signals. In this study, we used the PSAWN (Probe Set Assignment With Networks) programme we developed to investigate whether similarity of alternative probe sets resulted in some specific properties. PSAWNpy delivered a full textual description of each probe set and information on the number and properties of secondary targets. PSAWNml calculated the similarity of each alternative probe set pair and allowed finding relationships between similarity and localisation of probes in common transcripts or exons. Similar alternative probe sets had very low negative correlation, high positive correlation and similar neighbourhood overlap. Using these properties, we devised a test that allowed grouping similar probe sets in a given network. By considering several networks, additional information concerning the similarity reproducibility was obtained, which allowed defining the actual similarity of alternative probe set pairs. In particular, we calculated the common localisation of probes in exons and in known transcripts and we showed that similarity was correctly correlated with them. The information collected on all pairs of alternative probe sets in the most popular 3' IVT Affymetrix chips is available in tabular form at http://bns.crbm.cnrs.fr/download.html. These processed data can be used to obtain a finer interpretation when comparing microarray data between biological conditions. 
They are particularly well adapted for searching 3' alternative

  19. Sulcal set optimization for cortical surface registration.

    Science.gov (United States)

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time-consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimation of an optimal subset of size N(C) from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N(C) curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N(C) sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N(C) constraint curves. The optimal subsets of sulci are presented and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
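    Under the multivariate Gaussian model, the error remaining after constraining a subset S of sulci is the conditional covariance of the unconstrained curves given S (the Schur complement of the S-block of the covariance). A small sketch with a random synthetic covariance, minimizing its trace over all subsets of size N(C); the paper estimates the covariance from traced sulcal data and works with larger curve sets:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical covariance of registration errors for 6 candidate sulcal curves.
A = rng.normal(size=(6, 6))
cov = A @ A.T + 6 * np.eye(6)        # symmetric positive definite

def residual_error(cov, constrained):
    """Trace of the conditional covariance of the unconstrained curves
    given the constrained ones (multivariate Gaussian model)."""
    s = list(constrained)
    u = [i for i in range(cov.shape[0]) if i not in constrained]
    c_uu = cov[np.ix_(u, u)]
    c_us = cov[np.ix_(u, s)]
    c_ss = cov[np.ix_(s, s)]
    # Schur complement: Sigma_UU - Sigma_US Sigma_SS^-1 Sigma_SU
    return np.trace(c_uu - c_us @ np.linalg.solve(c_ss, c_us.T))

n_c = 2
best = min(itertools.combinations(range(6), n_c),
           key=lambda s: residual_error(cov, s))
```

    For realistic N the exhaustive scan over combinations grows combinatorially, which is why jointly informative (correlated) curves matter: one traced curve can stand in for several untraced ones.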

  20. Set-valued optimization an introduction with applications

    CERN Document Server

    Khan, Akhtar A; Zalinescu, Constantin

    2014-01-01

    Set-valued optimization is a vibrant and expanding branch of mathematics that deals with optimization problems where the objective map and/or the constraint maps are set-valued maps acting between certain spaces. Since set-valued maps subsume single-valued maps, set-valued optimization provides an important extension and unification of the scalar as well as the vector optimization problems. Therefore this relatively new discipline has justifiably attracted a great deal of attention in recent years. This book presents, in a unified framework, basic properties on ordering relations, solution c

  1. Optimal timing for intravascular administration set replacement.

    Science.gov (United States)

    Ullman, Amanda J; Cooke, Marie L; Gillies, Donna; Marsh, Nicole M; Daud, Azlina; McGrail, Matthew R; O'Riordan, Elizabeth; Rickard, Claire M

    2013-09-15

    The tubing (administration set) attached to both venous and arterial catheters may contribute to bacteraemia and other infections. The rate of infection may be increased or decreased by routine replacement of administration sets. This review was originally published in 2005 and was updated in 2012. The objective of this review was to identify any relationship between the frequency with which administration sets are replaced and rates of microbial colonization, infection and death. We searched The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2012, Issue 6), MEDLINE (1950 to June 2012), CINAHL (1982 to June 2012), EMBASE (1980 to June 2012), reference lists of identified trials and bibliographies of published reviews. The original search was performed in February 2004. We also contacted researchers in the field. We applied no language restriction. We included all randomized or controlled clinical trials on the frequency of venous or arterial catheter administration set replacement in hospitalized participants. Two review authors assessed all potentially relevant studies. We resolved disagreements between the two review authors by discussion with a third review author. We collected data for seven outcomes: catheter-related infection; infusate-related infection; infusate microbial colonization; catheter microbial colonization; all-cause bloodstream infection; mortality; and cost. We pooled results from studies that compared different frequencies of administration set replacement; for instance, we pooled studies that compared replacement ≥ every 96 hours versus every 72 hours with studies that compared replacement ≥ every 48 hours versus every 24 hours. We identified 26 studies for this updated review, 10 of which we excluded: six did not fulfil the inclusion criteria and four did not report usable data. We extracted data from the remaining 18 references (16 studies) with 5001 participants: study designs included neonate and adult

  2. Optimizing Filter-Probe Diffusion Weighting in the Rat Spinal Cord for Human Translation

    Directory of Open Access Journals (Sweden)

    Matthew D. Budde

    2017-12-01

    Diffusion tensor imaging (DTI) is a promising biomarker of spinal cord injury (SCI). In the acute aftermath, DTI in SCI animal models consistently demonstrates high sensitivity and prognostic performance, yet translation of DTI to acute human SCI has been limited. In addition to technical challenges, interpretation of the resulting metrics is ambiguous, with contributions in the acute setting from both axonal injury and edema. Novel diffusion MRI acquisition strategies such as double diffusion encoding (DDE) have recently enabled detection of features not available with DTI or similar methods. In this work, we perform a systematic optimization of DDE using simulations and an in vivo rat model of SCI and subsequently implement the protocol in the healthy human spinal cord. First, two complementary DDE approaches were evaluated using an orientationally invariant or a filter-probe diffusion encoding approach. While the two methods were similar in their ability to detect acute SCI, the filter-probe DDE approach had greater predictive power for functional outcomes. Next, the filter-probe DDE was compared to an analogous single diffusion encoding (SDE) approach, with the results indicating that in the spinal cord, SDE provides similar contrast with improved signal-to-noise. In the SCI rat model, the filter-probe SDE scheme was coupled with a reduced field of view (rFOV) excitation, and the results demonstrate high-quality maps of the spinal cord without contamination from edema and cerebrospinal fluid, thereby providing high sensitivity to injury severity. The optimized protocol was demonstrated in the healthy human spinal cord using the commercially available diffusion MRI sequence with modifications only to the diffusion encoding directions. Maps of axial diffusivity devoid of CSF partial volume effects were obtained in a clinically feasible imaging time with a straightforward analysis and variability comparable to axial diffusivity derived from DTI

  3. Collimator setting optimization in intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Williams, M.; Hoban, P.

    2001-01-01

    Full text: The aim of this study was to investigate the role of collimator angle and bixel size settings in IMRT when using the step and shoot method of delivery. Of particular interest is minimisation of the total monitor units delivered. Beam intensity maps with bixel size 10 x 10 mm were segmented into MLC leaf sequences and the collimator angle optimised to minimise the total number of MUs. The monitor units were estimated from the maximum sum of positive-gradient intensity changes along the direction of leaf motion. To investigate the use of low resolution maps at optimum collimator angles, several high resolution maps with bixel size 5 x 5 mm were generated. These were resampled into bixel sizes of 5 x 10 mm and 10 x 10 mm and the collimator angle optimised to minimise the RMS error between the original and resampled maps. Finally, a clinical IMRT case was investigated with the collimator angle optimised. Both the dose distribution and dose-volume histograms were compared between the standard IMRT plan and the optimised plan. For the 10 x 10 mm bixel maps there was a variation of 5%-40% in monitor units across the different collimator angles. The maps with a high degree of radial symmetry showed little variation. For the resampled 5 x 5 mm maps, a small RMS error was achievable with a 5 x 10 mm bixel size at particular collimator positions. This was most noticeable for maps with an elongated intensity distribution. A comparison between the 5 x 5 mm bixel plan and the 5 x 10 mm plan showed no significant difference in dose distribution. The monitor units required to deliver an intensity modulated field can be reduced by rotating the collimator to align the direction of leaf motion with the axis of the fluence map that has the least intensity variation. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine
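The monitor-unit estimate described above (the maximum, over leaf pairs, of the summed positive intensity increments along the leaf-motion direction) can be sketched in a few lines; the fluence values and the 90-degree rotation modelled as a transpose are illustrative assumptions, not the study's data.

```python
import numpy as np

def mu_estimate(fluence):
    """MU proxy for step-and-shoot delivery: max over leaf rows of the
    summed positive intensity increments along leaf motion (axis 1)."""
    steps = np.diff(fluence, axis=1, prepend=0)
    return np.sum(np.clip(steps, 0, None), axis=1).max()

# Toy 2x3 bixel map: rotating the collimator by 90 degrees (transposing
# the map) aligns leaf motion with the less modulated axis and lowers MU.
fluence = np.array([[1.0, 0.0, 1.0],
                    [1.0, 0.0, 1.0]])
mu_0 = mu_estimate(fluence)     # leaf motion along the modulated axis
mu_90 = mu_estimate(fluence.T)  # collimator rotated 90 degrees
```

Comparing the proxy over a set of collimator angles is then a simple argmin, which mirrors the optimisation loop the abstract describes.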

  4. Elitism set based particle swarm optimization and its application

    Directory of Open Access Journals (Sweden)

    Yanxia Sun

    2017-01-01

    Full Text Available Topology plays an important role in enabling Particle Swarm Optimization (PSO) to achieve good optimization performance. It is difficult to find one topology structure that lets the particles outperform all others, since optimization performance depends not only on the searching abilities of the particles but also on the type of optimization problem. Three elitist-set-based PSO algorithms that use no explicit topology structure are proposed in this paper. An elitist set, built from the individual best experiences, is used for communication among the particles. Moreover, to avoid premature convergence of the particles, different statistical methods are used in the three proposed variants. The performance of the proposed PSOs is compared with the standard PSO 2011 and several PSOs with different topologies, and the simulation results and comparisons demonstrate that the proposed PSO with adaptive probabilistic preference can achieve good optimization performance.
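A minimal sketch of the idea, under the assumption that each particle is steered by a uniformly drawn member of an elite set of the k best personal bests rather than a fixed neighbourhood topology; the update constants and the sphere test function are illustrative, not taken from the paper.

```python
import numpy as np

def elitist_pso(f, dim=5, n_particles=20, k=5, iters=200, seed=0):
    """PSO variant: particles are guided by a random member of an elite
    set (the k best personal bests) instead of a topology neighbour."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))             # velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    w, c1, c2 = 0.72, 1.49, 1.49                 # common PSO constants
    for _ in range(iters):
        elite = pbest[np.argsort(pval)[:k]]      # elite set of k best pbests
        guide = elite[rng.integers(k, size=n_particles)]  # one elite each
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (guide - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
    return pval.min()

best = elitist_pso(lambda z: float(np.sum(z**2)))  # sphere function
```

The random draw from the elite set plays the role of the "communication" step in the abstract; swapping in different statistical selection rules over the elite set gives the paper's three variants.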

  5. Optimal set of selected uranium enrichments that minimizes blending consequences

    International Nuclear Information System (INIS)

    Nachlas, J.A.; Kurstedt, H.A. Jr.; Lobber, J.S. Jr.

    1977-01-01

    Identities, quantities, and costs associated with producing a set of selected enrichments and blending them to provide fuel for existing reactors are investigated using an optimization model constructed with appropriate constraints. Selected enrichments are required for either nuclear reactor fuel standardization or potential uranium enrichment alternatives such as the gas centrifuge. Using a mixed-integer linear program, the model minimizes present worth costs for a 39-product-enrichment reference case. For four ingredients, the marginal blending cost is only 0.18% of the total direct production cost. Natural uranium is not an optimal blending ingredient. Optimal values reappear in most sets of ingredient enrichments
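The blending logic can be illustrated with a toy linear program (a continuous sketch, not the paper's 39-product mixed-integer model): choose mass fractions of three hypothetical stocked enrichments to hit a target product enrichment at minimum cost. All enrichment levels and unit costs below are made-up illustrative values.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical stocked enrichments (% U-235) and unit costs ($/kg);
# the target product is a 3.4%-enriched blend.
enrich = np.array([2.6, 3.2, 4.4])
cost = np.array([900.0, 1200.0, 1900.0])
target = 3.4

res = linprog(
    c=cost,
    A_eq=[np.ones(3), enrich],  # mass balance and U-235 balance rows
    b_eq=[1.0, target],         # fractions sum to 1; enrichment hits target
    bounds=[(0, 1)] * 3,
    method="highs",
)
blend = res.x                   # optimal mass fractions of each ingredient
```

The real problem adds integer variables selecting which enrichments to produce at all, which is what makes a mixed-integer formulation necessary.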

  6. Optimal Set-Point Synthesis in HVAC Systems

    DEFF Research Database (Denmark)

    Komareji, Mohammad; Stoustrup, Jakob; Rasmussen, Henrik

    2007-01-01

    This paper presents optimal set-point synthesis for a heating, ventilating, and air-conditioning (HVAC) system. This HVAC system is made of two heat exchangers: an air-to-air heat exchanger and a water-to-air heat exchanger. The objective function is composed of the electrical power of the different components (fans, primary/secondary pump, tertiary pump, and the air-to-air heat exchanger wheel) and a fraction of the thermal power used by the HVAC system. The goals that the HVAC system has to achieve appear as constraints in the optimization problem. To solve the optimization problem, a steady-state model of the HVAC system is derived, while different supplying hydronic circuits are studied for the water-to-air heat exchanger. Finally, the optimal set-points and the optimal supplying hydronic circuit are obtained.

  7. Optimization of crack detection in steam generator tubes using a punctual probe

    International Nuclear Information System (INIS)

    Levy, R.; Ferre, C.

    1985-01-01

    The existence of cracks at the upper end of the expanded zone of a steam generator tube is a recent problem. A differential pencil probe was used for the detection of these cracks, with encouraging results. An optimization study was necessary to resolve difficulties in the evaluation of defects caused by the design of the first probe; the result is a probe that permits precise analysis of the detected signals.

  8. Global Optimization for Transport Network Expansion and Signal Setting

    OpenAIRE

    Liu, Haoxiang; Wang, David Z. W.; Yue, Hao

    2015-01-01

    This paper proposes a model to address an urban transport planning problem involving combined network design and signal setting in a saturated network. Conventional transport planning models usually deal with the network design problem and signal setting problem separately. However, the fact that network capacity design and capacity allocation determined by network signal setting combine to govern the transport network performance requires the optimal transport planning to consider the two pr...

  9. Modeling the Insertion Mechanics of Flexible Neural Probes Coated with Sacrificial Polymers for Optimizing Probe Design

    Directory of Open Access Journals (Sweden)

    Sagar Singh

    2016-03-01

    Full Text Available Single-unit recording neural probes have significant advantages towards improving signal-to-noise ratio and specificity for signal acquisition in brain-computer interface devices. Long-term effectiveness is unfortunately limited by the chronic injury response, which has been linked to the mechanical mismatch between rigid probes and compliant brain tissue. Small, flexible microelectrodes may overcome this limitation, but insertion of these probes without buckling requires supporting elements such as a stiff coating with a biodegradable polymer. For these coated probes, there is a design trade-off between the potential for successful insertion into brain tissue and the degree of trauma generated by the insertion. The objective of this study was to develop and validate a finite element model (FEM) to simulate insertion of coated neural probes of varying dimensions and material properties into brain tissue. Simulations were performed to predict the buckling and insertion forces during insertion of coated probes into a tissue phantom with material properties of brain. The simulations were validated with parallel experimental studies where probes were inserted into agarose tissue phantom, ex vivo chick embryonic brain tissue, and ex vivo rat brain tissue. Experiments were performed with uncoated copper wire and both uncoated and coated SU-8 photoresist and Parylene C probes. Model predictions were found to strongly agree with experimental results (<10% error). The ratio of the predicted buckling force to the predicted insertion force, where a value greater than one would ideally be expected to result in successful insertion, was plotted against the actual success rate from experiments. A sigmoidal relationship was observed, with a ratio of 1.35 corresponding to equal probability of insertion and failure, and a ratio of 3.5 corresponding to a 100% success rate. This ratio was dubbed the “safety factor”, as it indicated the degree to which the coating
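The trade-off can be illustrated with a back-of-the-envelope Euler buckling estimate (a pinned-pinned column closed form, whereas the study itself used FEM): the "safety factor" is the buckling load divided by the expected insertion force. The modulus, dimensions, and insertion force below are hypothetical round numbers, not the paper's measurements.

```python
import math

def euler_buckling_force(E, d_outer, length):
    """Critical load of a pinned-pinned solid circular column:
    F = pi^2 * E * I / L^2, with I = pi * d^4 / 64."""
    I = math.pi * d_outer**4 / 64.0
    return math.pi**2 * E * I / length**2

# Hypothetical coated probe: ~3 GPa polymer coating dominating stiffness,
# 2 mm shank, insertion force on the order of a millinewton.
F_insert = 1e-3                                # N, assumed insertion force
safety = {
    d: euler_buckling_force(3e9, d, 2e-3) / F_insert
    for d in (50e-6, 100e-6, 150e-6)           # coating outer diameters (m)
}
```

Because the buckling load scales with diameter to the fourth power, a modest increase in coating thickness raises the safety factor sharply, which is exactly the trade-off against insertion trauma that the study quantifies.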

  10. Global Optimization for Bus Line Timetable Setting Problem

    Directory of Open Access Journals (Sweden)

    Qun Chen

    2014-01-01

    Full Text Available This paper defines the bus timetable setting problem over time periods divided according to passenger flow intensity. Passengers are assumed to arrive uniformly, and bus runs are spaced evenly within each period; given the total number of runs, the problem is to assign runs to each time period so as to minimize the total waiting time of passengers on platforms. For such a multistage decision problem, this paper designs a dynamic programming algorithm to solve it. Global optimization procedures using dynamic programming are developed. A numerical example about bus run assignment optimization for a single line is given to demonstrate the efficiency of the proposed methodology, showing that optimizing buses’ departure times using dynamic programming can save computational time and find the global optimal solution.
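The multistage structure can be sketched with a small dynamic program. Under the stated assumptions (uniform arrivals, evenly spaced runs), a period of duration T with arrival rate q served by r runs has headway T/r, mean wait T/(2r), and total wait q·T²/(2r); the demand numbers below are made up for illustration.

```python
def assign_runs(periods, total_runs):
    """periods: list of (arrival_rate, duration). Allocate total_runs
    (at least 1 per period) to minimize sum of q*T^2/(2r).
    Returns (min_total_wait, runs_per_period)."""
    wait = lambda q, T, r: q * T * T / (2 * r)
    INF = float("inf")
    n = len(periods)
    # best[k][m]: min cost covering the first k periods with m runs
    best = [[INF] * (total_runs + 1) for _ in range(n + 1)]
    choice = [[0] * (total_runs + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for k, (q, T) in enumerate(periods, 1):
        for m in range(k, total_runs + 1):
            for r in range(1, m - k + 2):   # leave >= 1 run per earlier period
                c = best[k - 1][m - r] + wait(q, T, r)
                if c < best[k][m]:
                    best[k][m], choice[k][m] = c, r
    # backtrack the optimal allocation
    alloc, m = [], total_runs
    for k in range(n, 0, -1):
        alloc.append(choice[k][m]); m -= choice[k][m]
    return best[n][total_runs], alloc[::-1]

# Two 60-minute periods, the second three times busier, six runs in total.
cost, alloc = assign_runs([(10, 60), (30, 60)], total_runs=6)
```

As expected, the optimum assigns more runs to the busier period; the stage-by-stage recursion is what makes the global optimum cheap to find compared with enumerating all allocations.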

  11. Vector optimization set-valued and variational analysis

    CERN Document Server

    Chen, Guang-ya; Yang, Xiaoqi

    2005-01-01

    This book is devoted to vector or multiple criteria approaches in optimization. Topics covered include: vector optimization, vector variational inequalities, vector variational principles, vector minmax inequalities and vector equilibrium problems. In particular, problems with variable ordering relations and set-valued mappings are treated. The nonlinear scalarization method is extensively used throughout the book to deal with various vector-related problems. The results presented are original and should be interesting to researchers and graduates in applied mathematics and operations research

  12. An improved model for the oPtImal Measurement Probes Allocation tool

    Energy Technology Data Exchange (ETDEWEB)

    Sterle, C., E-mail: claudio.sterle@unina.it [Consorzio CREATE/Dipartimento di Ingegneria Elettrica e delle Tecnologie dell’Informazione, Università degli Studi di Napoli Federico II, Via Claudio 21, 80125 Napoli (Italy); Neto, A.C. [Fusion for Energy, 08019 Barcelona (Spain); De Tommasi, G. [Consorzio CREATE/Dipartimento di Ingegneria Elettrica e delle Tecnologie dell’Informazione, Università degli Studi di Napoli Federico II, Via Claudio 21, 80125 Napoli (Italy)

    2015-10-15

    Highlights: • The problem of optimally allocating the probes of a diagnostic system is tackled. • The problem is decomposed in two consecutive optimization problems. • Two original ILP models are proposed and sequentially solved to optimality. • The proposed ILP models improve and extend the previous work present in literature. • Real size instances have been optimally solved with very low computation time. - Abstract: The oPtImal Measurement Probes Allocation (PIMPA) tool has been recently proposed in [1] to maximize the reliability of a tokamak diagnostic system against the failure of one or more of the processing nodes. PIMPA is based on the solution of integer linear programming (ILP) problems, and it minimizes the effect of the failure of a data acquisition component. The first formulation of the PIMPA model did not support the concept of individual slots. This work presents an improved ILP model that addresses the above mentioned problem, by taking into account all the individual probes.

  14. Training set optimization under population structure in genomic selection.

    Science.gov (United States)

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all the traits except test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.

  15. Set optimization and applications the state of the art : from set relations to set-valued risk measures

    CERN Document Server

    Heyde, Frank; Löhne, Andreas; Rudloff, Birgit; Schrage, Carola

    2015-01-01

    This volume presents five surveys with extensive bibliographies and six original contributions on set optimization and its applications in mathematical finance and game theory. The topics range from more conventional approaches that look for minimal/maximal elements with respect to vector orders or set relations, to the new complete-lattice approach that comprises a coherent solution concept for set optimization problems, along with existence results, duality theorems, optimality conditions, variational inequalities and theoretical foundations for algorithms. Modern approaches to scalarization methods can be found as well as a fundamental contribution to conditional analysis. The theory is tailor-made for financial applications, in particular risk evaluation and [super-]hedging for market models with transaction costs, but it also provides a refreshing new perspective on vector optimization. There is no comparable volume on the market, making the book an invaluable resource for researchers working in vector o...

  16. Optimal regional biases in ECB interest rate setting

    NARCIS (Netherlands)

    Arnold, I.J.M.

    2005-01-01

    This paper uses a simple model of optimal monetary policy to consider whether the influence of national output and inflation rates on ECB interest rate setting should equal a country’s weight in the eurozone economy. The findings depend on assumptions regarding interest rate elasticities, exchange

  17. Level-Set Topology Optimization with Aeroelastic Constraints

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.

  18. Constructing DNA Barcode Sets Based on Particle Swarm Optimization.

    Science.gov (United States)

    Wang, Bin; Zheng, Xuedong; Zhou, Shihua; Zhou, Changjun; Wei, Xiaopeng; Zhang, Qiang; Wei, Ziqi

    2018-01-01

    Following the completion of the human genome project, a large amount of high-throughput bio-data was generated. To analyze these data, massively parallel sequencing, namely next-generation sequencing, was rapidly developed. DNA barcodes attached at the beginning or end of sequencing reads are used to identify which sample each sequence belongs to. Constructing DNA barcode sets provides the candidate DNA barcodes for this application. To increase the accuracy of DNA barcode sets, a particle swarm optimization (PSO) algorithm has been modified and used to construct the DNA barcode sets in this paper. Compared with the extant results, some lower bounds of DNA barcode sets are improved. The results show that the proposed algorithm is effective in constructing DNA barcode sets.
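As a baseline for what such a set must satisfy, here is a simple greedy filter (not the paper's PSO): scan candidate sequences in lexicographic order and keep those at Hamming distance at least d from every barcode kept so far, with a GC-content window of the kind often imposed in practice. The length, distance, and GC bounds are illustrative choices.

```python
from itertools import product

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def greedy_barcodes(length=4, d=3, gc_range=(0.25, 0.75)):
    """Greedily build a DNA barcode set with pairwise Hamming distance >= d
    and GC content inside gc_range."""
    kept = []
    for tup in product("ACGT", repeat=length):
        seq = "".join(tup)
        gc = (seq.count("G") + seq.count("C")) / length
        if not (gc_range[0] <= gc <= gc_range[1]):
            continue
        if all(hamming(seq, b) >= d for b in kept):
            kept.append(seq)
    return kept

codes = greedy_barcodes()
```

The PSO of the paper searches for larger sets than this greedy baseline can find, which is what "improving the lower bounds" means: exhibiting a valid set with more barcodes for the same length and distance constraints.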

  19. Probing Entanglement in Adiabatic Quantum Optimization with Trapped Ions

    Directory of Open Access Journals (Sweden)

    Philipp Hauke

    2015-04-01

    Full Text Available Adiabatic quantum optimization has been proposed as a route to solve NP-complete problems, with a possible quantum speedup compared to classical algorithms. However, the precise role of quantum effects, such as entanglement, in these optimization protocols is still unclear. We propose a setup of cold trapped ions that allows one to quantitatively characterize, in a controlled experiment, the interplay of entanglement, decoherence, and non-adiabaticity in adiabatic quantum optimization. We show that, in this way, a broad class of NP-complete problems becomes accessible for quantum simulations, including the knapsack problem, number partitioning, and instances of the max-cut problem. Moreover, a general theoretical study reveals correlations of the success probability with entanglement at the end of the protocol. From exact numerical simulations for small systems and linear ramps, however, we find no substantial correlations with the entanglement during the optimization. For the final state, we derive analytically a universal upper bound for the success probability as a function of entanglement, which can be measured in experiment. The proposed trapped-ion setups and the presented study of entanglement address pertinent questions of adiabatic quantum optimization, which may be of general interest across experimental platforms.

  20. Optimization of Actinide Quantification by Electron Probe Microanalysis

    International Nuclear Information System (INIS)

    Moy, A.; Merlet, C.; Llovet, X.; Dugne, O.

    2013-06-01

    Conventional quantitative electron probe microanalysis of actinides requires the use of reference standard samples. However, for such elements, standards are generally not available. To overcome this difficulty, standard-less methods of analysis are used, in which the x-ray intensity emitted by the standard is calculated. To be reliable, such calculations require accurate knowledge of physical data such as the x-ray production cross section. However, experimental data for this quantity are not always available for actinide elements. In the present work, experimental L and M x-ray production cross sections were measured for uranium and lead. Measurements were performed with two electron microprobes equipped with wavelength-dispersive spectrometers, using thin self-supporting targets. Experimental results are compared with calculated cross sections obtained from different analytical formulae and, whenever possible, with experimental data from the literature. (authors)

  1. Topology optimization of hyperelastic structures using a level set method

    Science.gov (United States)

    Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.

    2017-12-01

    Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.
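For reference, the Mooney-Rivlin model named above has a simple closed form in the incompressible uniaxial case: with stretch λ, the Cauchy stress is σ = 2(λ² − 1/λ)(C10 + C01/λ). The sketch below evaluates it; the material constants are illustrative, and this is only the constitutive law, not the level-set optimization itself.

```python
def mooney_rivlin_uniaxial(stretch, c10, c01):
    """Cauchy stress for incompressible uniaxial extension under the
    Mooney-Rivlin model: sigma = 2*(lam^2 - 1/lam)*(c10 + c01/lam)."""
    lam = stretch
    return 2.0 * (lam**2 - 1.0 / lam) * (c10 + c01 / lam)

# Illustrative constants (units of stress, e.g. MPa); the stress
# vanishes in the undeformed state (lam = 1) as it must.
stresses = {lam: mooney_rivlin_uniaxial(lam, c10=0.3, c01=0.1)
            for lam in (1.0, 1.5, 2.0)}
```

The stiffening of this curve at large λ is the material nonlinearity that, together with the geometric nonlinearity of the total Lagrange formulation, a linear elastic design study would miss.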

  2. Cluster analysis by optimal decomposition of induced fuzzy sets

    Energy Technology Data Exchange (ETDEWEB)

    Backer, E

    1978-01-01

    Nonsupervised pattern recognition is addressed and the concept of fuzzy sets is explored in order to provide the investigator (data analyst) additional information supplied by the pattern class membership values apart from the classical pattern class assignments. The basic ideas behind the pattern recognition problem, the clustering problem, and the concept of fuzzy sets in cluster analysis are discussed, and a brief review of the literature of the fuzzy cluster analysis is given. Some mathematical aspects of fuzzy set theory are briefly discussed; in particular, a measure of fuzziness is suggested. The optimization-clustering problem is characterized. Then the fundamental idea behind affinity decomposition is considered. Next, further analysis takes place with respect to the partitioning-characterization functions. The iterative optimization procedure is then addressed. The reclassification function is investigated and convergence properties are examined. Finally, several experiments in support of the method suggested are described. Four object data sets serve as appropriate test cases. 120 references, 70 figures, 11 tables. (RWR)
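The iterative optimization over fuzzy memberships described here is in the family that later became fuzzy c-means; a minimal sketch of that style of alternating membership/centroid update (the modern textbook formulation, not Backer's affinity decomposition) is shown below, with a tiny made-up dataset.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100):
    """Alternate fuzzy membership and centroid updates (fuzzifier m > 1)."""
    # init: c points spread across the dataset (a simple heuristic)
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float)
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = dist ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)   # memberships sum to 1 per point
        W = U**m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    # final membership update for the converged centers
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    U = dist ** (-2.0 / (m - 1.0))
    U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated pairs of points: memberships, not hard labels, come out.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
U, centers = fuzzy_c_means(X)
```

The membership matrix U is exactly the "additional information supplied by the pattern class membership values" the abstract emphasises: each point carries a degree of belonging to every cluster rather than a single assignment.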

  3. Optimal Strategies for Probing Terrestrial Exoplanet Atmospheres with JWST

    Science.gov (United States)

    Batalha, Natasha E.; Lewis, Nikole K.; Line, Michael

    2018-01-01

    It is imperative that the exoplanet community determines the feasibility and the resources needed to yield high fidelity atmospheric compositions from terrestrial exoplanets. In particular, LHS 1140b and the TRAPPIST-1 system, already slated for observations by JWST’s Guaranteed Time Observers, will be the first two terrestrial planets observed by JWST. I will discuss optimal strategies for observing these two systems, focusing on the NIRSpec Prism (1-5μm) and the combination of NIRISS SOSS (1-2.7μm) and NIRSpec G395H (3-5μm). I will also introduce currently unsupported JWST read modes that have the potential to greatly increase the precision of our atmospheric spectra. Lastly, I will use information content theory to compute the expected confidence interval on the retrieved abundances of key molecular species and temperature profiles as a function of JWST observing cycles.

  4. Optimal Set Anode Potentials Vary in Bioelectrochemical Systems

    KAUST Repository

    Wagner, Rachel C.

    2010-08-15

    In bioelectrochemical systems (BESs), the anode potential can be set to a fixed voltage using a potentiostat, but there is no accepted method for defining an optimal potential. Microbes can theoretically gain more energy by reducing a terminal electron acceptor with a more positive potential, for example oxygen compared to nitrate. Therefore, more positive anode potentials should allow microbes to gain more energy per electron transferred than a lower potential, but this can only occur if the microbe has metabolic pathways capable of capturing the available energy. Our review of the literature shows that there is a general trend of improved performance using more positive potentials, but there are several notable cases where biofilm growth and current generation improved or only occurred at more negative potentials. This suggests that even with diverse microbial communities, it is primarily the potential of the terminal respiratory proteins used by certain exoelectrogenic bacteria, and to a lesser extent the anode potential, that determines the optimal growth conditions in the reactor. Our analysis suggests that additional bioelectrochemical investigations of both pure and mixed cultures, over a wide range of potentials, are needed to better understand how to set and evaluate optimal anode potentials for improving BES performance. © 2010 American Chemical Society.

  5. Optimization Settings in the Fuzzy Combined Mamdani PID Controller

    Science.gov (United States)

    Kudinov, Y. I.; Pashchenko, F. F.; Pashchenko, A. F.; Kelina, A. Y.; Kolesnikov, V. A.

    2017-11-01

    In the present work, the problem of determining the optimal settings of a parallel fuzzy proportional-integral-derivative (PID) controller is considered for the control of nonlinear plants, which is not always possible with classical linear PID controllers. In contrast to linear PID controllers, there are no analytical methods for calculating the settings of fuzzy PID controllers. In this paper, we develop a numerical optimization approach to determining the coefficients of a fuzzy PID controller. A decomposition method of optimization is proposed, whose essence is as follows: all homogeneous coefficients are distributed into groups, for example the three error coefficients, the three change-of-error coefficients, and the three output coefficients of the P, I and D components. For each group in turn, a search algorithm determines the coefficients for which the transition process satisfies all applicable constraints. Thus, with the help of Matlab and Simulink, the coefficients of a fuzzy PID controller that meet the accepted limitations on the transition process were found in a reasonable time.
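The group-wise search can be sketched with a conventional (non-fuzzy) PID on a toy nonlinear plant, tuned by cyclic grid search over one gain at a time; the plant model, gain grids, and integral-squared-error cost are illustrative assumptions standing in for the paper's fuzzy coefficient groups and Simulink model.

```python
import numpy as np

def step_cost(gains, dt=0.05, steps=200):
    """Integral-squared-error of a unit step response for a PID driving a
    toy nonlinear first-order plant  y' = -y - 0.5*y**3 + u."""
    kp, ki, kd = gains
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        u = max(-10.0, min(10.0, u))   # actuator saturation keeps it bounded
        prev_e = e
        y += (-y - 0.5 * y**3 + u) * dt
        cost += e * e * dt
    return cost

def tune_by_groups(gains, grids, sweeps=3):
    """Cyclic coordinate descent: optimize one gain at a time over its grid,
    holding the other groups fixed, and repeat for several sweeps."""
    gains = list(gains)
    for _ in range(sweeps):
        for i, grid in enumerate(grids):
            candidates = [gains[:i] + [g] + gains[i + 1:] for g in grid]
            gains = min(candidates, key=step_cost)
    return gains

# Each grid contains the current gain, so the cost can never increase.
grids = [np.linspace(0.5, 8, 16), np.linspace(0, 4, 9), np.linspace(0, 1, 6)]
tuned = tune_by_groups([1.0, 0.0, 0.0], grids)
```

The fuzzy version in the paper follows the same pattern, only with groups of fuzzy-rule coefficients instead of the three scalar gains, and with a Simulink simulation in place of the toy plant.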

  6. Optimization of Comb-Drive Actuators [Nanopositioners for probe-based data storage and musical MEMS

    NARCIS (Netherlands)

    Engelen, Johannes Bernardus Charles

    2011-01-01

    The era of infinite storage seems near. To reach it, data storage capabilities need to grow, and new storage technologies must be developed. This thesis studies one aspect of one of the emergent storage technologies: optimizing electrostatic comb-drive actuation for a parallel probe-based data storage

  7. Three axis vector magnet set-up for cryogenic scanning probe microscopy

    International Nuclear Information System (INIS)

    Galvis, J. A.; Herrera, E.; Buendía, A.; Guillamón, I.; Vieira, S.; Suderow, H.; Azpeitia, J.; Luccas, R. F.; Munuera, C.; García-Hernandez, M.

    2015-01-01

    We describe a three axis vector magnet system for cryogenic scanning probe microscopy measurements. We discuss the magnet support system and the power supply, consisting of a compact three way 100 A current source. We obtain tilted magnetic fields in all directions with a maximum value of 5 T along the z-axis and of 1.2 T for in-plane (XY) magnetic fields. We describe a scanning tunneling microscopy-spectroscopy (STM-STS) set-up, operating in a dilution refrigerator, which includes a new high voltage ultralow noise piezodrive electronics, and discuss the noise level due to vibrations. STM images and STS maps show atomic resolution and the tilted vortex lattice at 150 mK in the superconductor β-Bi₂Pd. We observe a strongly elongated hexagonal lattice, which corresponds to the projection of the tilted hexagonal vortex lattice on the surface. We also discuss Magnetic Force Microscopy images taken in a variable temperature insert.

  8. Three axis vector magnet set-up for cryogenic scanning probe microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Galvis, J. A. [Laboratorio de Bajas Temperaturas, Departamento de Física de la Materia Condensada, Instituto de Ciencia de Materiales Nicolás Cabrera, Condensed Matter Physics Center (IFIMAC), Facultad de Ciencias Universidad Autónoma de Madrid, 28049 Madrid (Spain); Departamento de Ciencias Naturales Facultad de Ingeniería Universidad Central, Bogotá (Colombia); Herrera, E.; Buendía, A. [Laboratorio de Bajas Temperaturas, Departamento de Física de la Materia Condensada, Instituto de Ciencia de Materiales Nicolás Cabrera, Condensed Matter Physics Center (IFIMAC), Facultad de Ciencias Universidad Autónoma de Madrid, 28049 Madrid (Spain); Guillamón, I.; Vieira, S.; Suderow, H. [Laboratorio de Bajas Temperaturas, Departamento de Física de la Materia Condensada, Instituto de Ciencia de Materiales Nicolás Cabrera, Condensed Matter Physics Center (IFIMAC), Facultad de Ciencias Universidad Autónoma de Madrid, 28049 Madrid (Spain); Unidad Asociada de Bajas Temperaturas y Altos Campos Magnéticos, UAM, CSIC, Cantoblanco, E-28049 Madrid (Spain); Azpeitia, J.; Luccas, R. F.; Munuera, C.; García-Hernandez, M. [Unidad Asociada de Bajas Temperaturas y Altos Campos Magnéticos, UAM, CSIC, Cantoblanco, E-28049 Madrid (Spain); Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Científicas (ICMM-CSIC), Sor Juana Inés de la Cruz 3, 28049 Madrid (Spain); and others

    2015-01-15

    We describe a three-axis vector magnet system for cryogenic scanning probe microscopy measurements. We discuss the magnet support system and the power supply, consisting of a compact three-way 100 A current source. We obtain tilted magnetic fields in all directions, with a maximum value of 5 T along the z-axis and of 1.2 T for XY-plane magnetic fields. We describe a scanning tunneling microscopy-spectroscopy (STM-STS) set-up, operating in a dilution refrigerator, which includes new high-voltage, ultralow-noise piezodrive electronics, and discuss the noise level due to vibrations. STM images and STS maps show atomic resolution and the tilted vortex lattice at 150 mK in the superconductor β-Bi₂Pd. We observe a strongly elongated hexagonal lattice, which corresponds to the projection of the tilted hexagonal vortex lattice onto the surface. We also discuss magnetic force microscopy images taken in a variable temperature insert.

  9. Setting value optimization method in integration for relay protection based on improved quantum particle swarm optimization algorithm

    Science.gov (United States)

    Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong

    2018-03-01

    With the establishment of the integrated model of relay protection and the expanding scale of the power system, the global setting and optimization of relay protection is an extremely difficult task. This paper presents an application of an improved quantum particle swarm optimization algorithm to the global optimization of relay protection. Taking inverse-time current protection as an example, the reliability, selectivity, speed of action and flexibility of the relay protection are selected as the four requirements used to establish the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting values obtained by the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and that it is suitable for optimizing setting values in the relay protection of the whole power system.
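
    The quantum-behaved particle swarm update used in such settings can be sketched as follows. This is a minimal, generic QPSO on a toy sphere objective; the objective, search bounds, and contraction-expansion schedule are illustrative assumptions, not the paper's protection-setting model:

```python
import math
import random

def qpso(f, dim, n_particles=20, iters=200, seed=1):
    """Minimal quantum-behaved PSO (QPSO) minimizing f over [-5, 5]^dim.

    Each particle is drawn around a local attractor (a random mix of its
    personal best and the global best) with a spread proportional to its
    distance from the mean of all personal bests ("mbest").
    """
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters          # contraction-expansion coefficient
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                phi = rng.random()
                attractor = phi * pbest[i][d] + (1.0 - phi) * gbest[d]
                u = 1.0 - rng.random()        # u in (0, 1], avoids log(1/0)
                step = beta * abs(mbest[d] - X[i][d]) * math.log(1.0 / u)
                X[i][d] = attractor + step if rng.random() < 0.5 else attractor - step
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval
```

    In the relay-protection application, f would instead score a candidate vector of setting values against the reliability, selectivity, speed and flexibility targets.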

  10. Optimality Conditions in Differentiable Vector Optimization via Second-Order Tangent Sets

    International Nuclear Information System (INIS)

    Jimenez, Bienvenido; Novo, Vicente

    2004-01-01

    We provide second-order necessary and sufficient conditions for a point to be an efficient element of a set with respect to a cone in a normed space, so that there is only a small gap between necessary and sufficient conditions. To this aim, we use the common second-order tangent set and the asymptotic second-order cone utilized by Penot. As an application we establish second-order necessary conditions for a point to be a solution of a vector optimization problem with an arbitrary feasible set and a twice Frechet differentiable objective function between two normed spaces. We also establish second-order sufficient conditions when the initial space is finite-dimensional so that there is no gap with necessary conditions. Lagrange multiplier rules are also given

  11. Optimal projection of observations in a Bayesian setting

    KAUST Repository

    Giraldi, Loic; Le Maî tre, Olivier P.; Hoteit, Ibrahim; Knio, Omar

    2018-01-01

    ..., and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace and therefore the solution is computed using Riemannian optimization algorithms on the Grassmann manifold.

  12. Optimizing Distributed Machine Learning for Large Scale EEG Data Set

    Directory of Open Access Journals (Sweden)

    M Bilal Shaikh

    2017-06-01

    Full Text Available Distributed Machine Learning (DML) has gained more importance than ever in this era of Big Data. There are many challenges in scaling machine learning techniques on distributed platforms. When it comes to scalability, improving processor technology for high-level computation of data is at its limit; however, increasing machine nodes and distributing data along with computation looks like a viable solution. Different frameworks and platforms are available to solve DML problems. These platforms provide automated random data distribution of datasets, which misses the power of user-defined intelligent data partitioning based on domain knowledge. We have conducted an empirical study which uses an EEG data set collected through the P300 Speller component of an ERP (Event Related Potential), which is widely used in BCI problems; it helps in translating the intention of the subject while performing any cognitive task. EEG data contains noise due to waves generated by other activities in the brain, which contaminates the true P300 Speller signal. Use of machine learning techniques could help in detecting errors made by the P300 Speller. We solve this classification problem by partitioning data into different chunks and preparing distributed models using an Elastic CV Classifier. To present a case of optimizing distributed machine learning, we propose an intelligent user-defined data partitioning approach that can impact the accuracy of distributed machine learners on average. Our results show a better average AUC compared to the average AUC obtained after applying random data partitioning, which gives the user no control over data partitioning. It improves the average accuracy of the distributed learner due to the domain-specific intelligent partitioning by the user. Our customized approach achieves 0.66 AUC on individual sessions and 0.75 AUC on mixed sessions, whereas random/uncontrolled data distribution records 0.63 AUC.

  13. Topology optimization problems with design-dependent sets of constraints

    DEFF Research Database (Denmark)

    Schou, Marie-Louise Højlund

    Topology optimization is a design tool which is used in numerous fields. It can be used whenever the design is driven by weight and strength considerations. The basic concept of topology optimization is the interpretation of partial differential equation coefficients as effective material...... properties and designing through changing these coefficients. For example, consider a continuous structure. Then the basic concept is to represent this structure by small pieces of material that are coinciding with the elements of a finite element model of the structure. This thesis treats stress constrained...... structural topology optimization problems. For such problems a stress constraint for an element should only be present in the optimization problem when the structural design variable corresponding to this element has a value greater than zero. We model the stress constrained topology optimization problem...

  14. Progresses in optimization strategy for radiolabeled molecular probes targeting integrin αvβ3

    International Nuclear Information System (INIS)

    Chen Haojun; Wu Hua

    2012-01-01

    Tumor angiogenesis is critical in the growth, invasion and metastasis of malignant tumors. The integrins, which are expressed on many types of tumor cells and activated vascular endothelial cells, play an important role in the regulation of tumor angiogenesis. The RGD peptide, which contains the Arg-Gly-Asp sequence, binds specifically to integrin αvβ3. Therefore, radiolabeled RGD peptides may have broad application prospects in radionuclide imaging and therapy. Major research interests include the selection of radionuclides and the modification and improvement of RGD structures. In this article, we review research progress in optimization strategies for radiolabeled molecular probes targeting integrin αvβ3. (authors)

  15. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  16. Simulation-based robust optimization for signal timing and setting.

    Science.gov (United States)

    2009-12-30

    The performance of signal timing plans obtained from traditional approaches for pre-timed (fixed-time or actuated) control systems is often unstable under fluctuating traffic conditions. This report develops a general approach for optimizing the ...

  17. Setting of the Optimal Parameters of Melted Glass

    Czech Academy of Sciences Publication Activity Database

    Luptáková, Natália; Matejíčka, L.; Krečmer, N.

    2015-01-01

    Roč. 10, č. 1 (2015), s. 73-79 ISSN 1802-2308 Institutional support: RVO:68081723 Keywords : Striae * Glass * Glass melting * Regression * Optimal parameters Subject RIV: JH - Ceramics, Fire-Resistant Materials and Glass

  18. Optimal projection of observations in a Bayesian setting

    KAUST Repository

    Giraldi, Loic

    2018-03-18

    Optimal dimensionality reduction methods are proposed for the Bayesian inference of a Gaussian linear model with additive noise in presence of overabundant data. Three different optimal projections of the observations are proposed based on information theory: the projection that minimizes the Kullback–Leibler divergence between the posterior distributions of the original and the projected models, the one that minimizes the expected Kullback–Leibler divergence between the same distributions, and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace and therefore the solution is computed using Riemannian optimization algorithms on the Grassmann manifold. Regarding the maximization of the mutual information, it is shown that there exists an optimal subspace that minimizes the entropy of the posterior distribution of the reduced model; a basis of the subspace can be computed as the solution to a generalized eigenvalue problem; an a priori error estimate on the mutual information is available for this particular solution; and that the dimensionality of the subspace to exactly conserve the mutual information between the input and the output of the models is less than the number of parameters to be inferred. Numerical applications to linear and nonlinear models are used to assess the efficiency of the proposed approaches, and to highlight their advantages compared to standard approaches based on the principal component analysis of the observations.
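
    The claim that a low-dimensional projection can exactly conserve the mutual information can be checked on a toy scalar Gaussian model (a hand-rolled example, not the paper's formulation): with prior θ ~ N(0, σ₀²) and n observations yᵢ = θ + εᵢ, projecting the n observations onto their mean ȳ leaves the posterior variance, and hence the mutual information between θ and the data, unchanged.

```python
def posterior_var(prior_var, noise_var, n_obs):
    """Posterior variance of a Gaussian mean under n i.i.d. Gaussian
    observations (conjugate update: precisions add)."""
    return 1.0 / (1.0 / prior_var + n_obs / noise_var)

# Ten raw observations versus the single projected observation y_bar,
# whose effective noise variance is noise_var / n.
full = posterior_var(prior_var=1.0, noise_var=0.25, n_obs=10)
projected = posterior_var(prior_var=1.0, noise_var=0.25 / 10, n_obs=1)
```

    Here a 1-dimensional projection of 10 observations loses nothing, illustrating the result that the conserving subspace dimension is at most the number of inferred parameters.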

  19. Optimized Fast-FISH with α-satellite probes: acceleration by microwave activation

    Directory of Open Access Journals (Sweden)

    Durm M.

    1997-01-01

    Full Text Available It has been shown for several DNA probes that the recently introduced Fast-FISH (fluorescence in situ hybridization) technique is well suited for quantitative microscopy. For highly repetitive DNA probes, the hybridization (renaturation) time and the number of subsequent washing steps were reduced considerably by omitting denaturing chemical agents (e.g., formamide). The appropriate hybridization temperature and time allow a clear discrimination between major and minor binding sites by quantitative fluorescence microscopy. The well-defined physical conditions for hybridization permit automation of the procedure, e.g., by a programmable thermal cycler. Here, we present optimized conditions for a commercially available X-specific α-satellite probe. Highly fluorescent major binding sites were obtained for a 74°C hybridization temperature and 60 min hybridization time. They were clearly discriminated from some low-fluorescence minor binding sites on metaphase chromosomes as well as in interphase cell nuclei. On average, a total of 3.43 ± 1.59 binding sites were measured in metaphase spreads, and 2.69 ± 1.00 in interphase nuclei. Microwave activation for denaturation and hybridization was tested to accelerate the procedure. The slides with the target material and the hybridization buffer were placed in a standard microwave oven. After denaturation for 20 s at 900 W, hybridization was performed for 4 min at 90 W. The suitability of a microwave oven for Fast-FISH was confirmed by application to a chromosome 1-specific α-satellite probe. In this case, denaturation was performed at 630 W for 60 s and hybridization at 90 W for 5 min. In all cases, the results were analyzed quantitatively and compared to the results obtained by Fast-FISH. The major binding sites were clearly discriminated by their brightness

  20. Probing optimal measurement configuration for optical scatterometry by the multi-objective genetic algorithm

    Science.gov (United States)

    Chen, Xiuguo; Gu, Honggang; Jiang, Hao; Zhang, Chuanwei; Liu, Shiyuan

    2018-04-01

    Measurement configuration optimization (MCO) is a ubiquitous and important issue in optical scatterometry, whose aim is to probe the optimal combination of measurement conditions, such as wavelength, incidence angle, azimuthal angle, and/or polarization directions, to achieve a higher measurement precision for a given measuring instrument. In this paper, the MCO problem is investigated and formulated as a multi-objective optimization problem, which is then solved by the multi-objective genetic algorithm (MOGA). The case study on the Mueller matrix scatterometry for the measurement of a Si grating verifies the feasibility of the MOGA in handling the MCO problem in optical scatterometry by making a comparison with the Monte Carlo simulations. Experiments performed at the achieved optimal measurement configuration also show good agreement between the measured and calculated best-fit Mueller matrix spectra. The proposed MCO method based on MOGA is expected to provide a more general and practical means to solve the MCO problem in the state-of-the-art optical scatterometry.
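
    A core ingredient of any multi-objective genetic algorithm such as the MOGA above is the non-dominated (Pareto) filter that selects configurations no other configuration beats on every objective. A minimal sketch, assuming minimization of every objective and independent of the scatterometry application:

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples
    (minimization in every coordinate)."""
    def dominates(a, b):
        # a dominates b: no worse everywhere, strictly better somewhere.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

    In the MCO setting each tuple would hold the measurement-precision objectives evaluated for one candidate combination of wavelength, angles, and polarization.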

  1. Depression screening optimization in an academic rural setting.

    Science.gov (United States)

    Aleem, Sohaib; Torrey, William C; Duncan, Mathew S; Hort, Shoshana J; Mecchella, John N

    2015-01-01

    Primary care plays a critical role in the screening and management of depression. The purpose of this paper is to focus on leveraging the electronic health record (EHR) as well as work flow redesign to improve the efficiency and reliability of the process of depression screening in two adult primary care clinics of a rural academic institution in the USA. The authors utilized various process improvement tools from lean six sigma methodology, including a project charter, swim lane process maps, a critical-to-quality tree, process control charts, fishbone diagrams, a frequency impact matrix, mistake proofing and a monitoring plan, in Define-Measure-Analyze-Improve-Control format. Interventions included a change in the depression screening tool, optimization of data entry in the EHR, follow-up of positive screens, staff training and EHR redesign. The depression screening rate for office-based primary care visits improved from 17.0 percent at baseline to 75.9 percent in the post-intervention control phase (p<0.001). Follow-up of positive depression screens with Patient Health Questionnaire-9 data collection remained above 90 percent. Duplication of depression screening increased from 0.6 percent initially to 11.7 percent and then decreased to 4.7 percent after optimization of data entry by patients and flow staff. The impact of interventions on clinical outcomes could not be evaluated. Successful implementation, sustainability and revision of a process improvement initiative to facilitate the screening, follow-up and management of depression in primary care requires accounting for the voice of the process (performance metrics), system limitations and the voice of the customer (staff and patients) to overcome various system, customer and human resource constraints.

  2. COMPROMISE, OPTIMAL AND TRACTIONAL ACCOUNTS ON PARETO SET

    Directory of Open Access Journals (Sweden)

    V. V. Lahuta

    2010-11-01

    Full Text Available The problem of optimum traction calculations is considered as a problem of optimal resource distribution. The dynamic programming solution is based on a step-by-step calculation of the set of Pareto-optimal values of a criterion function (energy expenses) and a resource (time).
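
    The step-by-step dynamic programming idea — tabulating, for each total time budget, the minimal energy expense — can be sketched as follows. The per-segment energy tables are invented for illustration; the real traction model is far richer:

```python
def pareto_dp(energy_tables, total_time):
    """DP over a discrete time resource.

    energy_tables[i][t] is the energy spent on segment i when t time
    units are allocated to it. Returns best[t] = minimal total energy
    when exactly t time units are used overall; the resulting
    (time, energy) pairs trace out a Pareto front.
    """
    INF = float("inf")
    best = [0.0] + [INF] * total_time
    for table in energy_tables:
        new = [INF] * (total_time + 1)
        for used in range(total_time + 1):
            if best[used] == INF:
                continue
            for t, e in enumerate(table):
                if used + t <= total_time:
                    new[used + t] = min(new[used + t], best[used] + e)
        best = new
    return best
```

    Reading off best[t] for increasing t shows the trade-off: more time allotted, less energy spent.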

  3. Optimization models using fuzzy sets and possibility theory

    CERN Document Server

    Orlovski, S

    1987-01-01

    Optimization is of central concern to a number of disciplines. Operations Research and Decision Theory are often considered to be identical with optimization. But also in other areas such as engineering design, regional policy, logistics and many others, the search for optimal solutions is one of the prime goals. The methods and models which have been used over the last decades in these areas have primarily been "hard" or "crisp", i.e. the solutions were considered to be either feasible or unfeasible, either above a certain aspiration level or below. This dichotomous structure of methods very often forced the modeller to approximate real problem situations of the more-or-less type by yes-or-no-type models, the solutions of which might turn out not to be the solutions to the real problems. This is particularly true if the problem under consideration includes vaguely defined relationships, human evaluations, uncertainty due to inconsistent or incomplete evidence, if natural language has to be...

  4. Approximating the Pareto set of multiobjective linear programs via robust optimization

    NARCIS (Netherlands)

    Gorissen, B.L.; den Hertog, D.

    2012-01-01

    We consider problems with multiple linear objectives and linear constraints and use adjustable robust optimization and polynomial optimization as tools to approximate the Pareto set with polynomials of arbitrarily large degree. The main difference with existing techniques is that we optimize a

  5. Optimizing distance-based methods for large data sets

    Science.gov (United States)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
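
    The memory bottleneck the authors address comes from materializing the full n × n distance matrix. A streaming alternative with O(bins) memory can be sketched generically (this is not the authors' algorithm, which also shortens the running time; here time stays O(n²)):

```python
import math

def distance_histogram(points, max_dist, n_bins):
    """Bin all pairwise Euclidean distances without ever storing the
    n x n distance matrix: memory is O(n_bins), time is O(n^2)."""
    hist = [0] * n_bins
    width = max_dist / n_bins
    for i in range(len(points)):
        xi, yi = points[i]
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            d = math.hypot(xi - xj, yi - yj)
            hist[min(int(d / width), n_bins - 1)] += 1   # clamp overflow bin
    return hist
```

    Kernel-smoothed densities such as the D&O-Index can then be estimated from the histogram counts instead of the raw distance list.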

  6. Perturbing engine performance measurements to determine optimal engine control settings

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-12-30

    Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.
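
    The perturb-measure-adjust loop described above resembles a finite-difference extremum-seeking scheme. A toy sketch, where the cost function, gain, and perturbation size are illustrative stand-ins rather than values from the patent:

```python
def perturb_and_adjust(measure, x0, delta=0.1, gain=0.5, iters=50):
    """Repeatedly perturb a control parameter around its current value,
    estimate the local slope of the measured performance from the two
    perturbed readings, and step the parameter toward the optimum."""
    x = x0
    for _ in range(iters):
        grad = (measure(x + delta) - measure(x - delta)) / (2.0 * delta)
        x -= gain * grad          # descend toward the performance target
    return x
```

    With a quadratic performance measure the central difference recovers the exact slope, so the loop settles on the optimal setting quickly; a real engine controller would additionally filter the noisy measurements.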

  7. A set of rules for constructing an admissible set of D optimal exact ...

    African Journals Online (AJOL)

    In the search for a D-optimal exact design using the combinatorial iterative technique introduced by Onukogu and Iwundu, 2008, all the support points that make up the experimental region are grouped into H concentric balls according to their distances from the centre. Any selection of N support points from the balls defines ...

  8. Methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program

    DEFF Research Database (Denmark)

    Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen

    2010-01-01

    Both the efficient and weakly efficient sets of an affine fractional vector optimization problem, in general, are neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient...... and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case...... the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses Lagrangian bound coupling with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising....

  9. Cameras and settings for optimal image capture from UAVs

    Science.gov (United States)

    Smith, Mike; O'Connor, James; James, Mike R.

    2017-04-01

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low-payload platforms. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer-grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion, which can lead to experiments being difficult to accurately reproduce. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well-exposed and suitable imagery are derived. This then leads to a discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and have these comparable with future studies. We recommend providing open-access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.

  10. Comprehensive detection of diverse exon 19 deletion mutations of EGFR in lung Cancer by a single probe set.

    Science.gov (United States)

    Bae, Jin Ho; Jo, Seong-Min; Kim, Hak-Sung

    2015-12-15

    Detection of the exon 19 deletion mutation of EGFR, one of the most frequently occurring mutations in lung cancer, provides crucial information for diagnosis and treatment guidelines in non-small-cell lung cancer (NSCLC). Here, we demonstrate a simple and efficient method to detect various exon 19 deletion mutations of EGFR using a single probe set comprising an oligo-quencher (oligo-Q) and a molecular beacon (MB). While the MB hybridizes to both the wild-type and mutant target DNA, the oligo-Q only binds to the wild-type target DNA, leading to a fluorescent signal in the case of a deletion mutation. This enables the comprehensive detection of diverse exon 19 deletion mutations using a single probe set. We demonstrated the utility and efficiency of the approach by detecting the frequent exon 19 deletion mutations of EGFR through real-time PCR and in situ fluorescence imaging. Our approach enabled the detection of genomic DNA as low as 0.02 ng, showing a detection limit of 2% in a heterogeneous DNA mixture, and could be used for detecting mutations at the single-cell level. The present MB and oligo-Q dual-probe system can be used for diagnosis and treatment guidelines in NSCLC. Copyright © 2015 Elsevier B.V. All rights reserved.
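
    The dual-probe signal logic — fluorescence only when the molecular beacon hybridizes and the oligo-quencher cannot — can be mimicked with a toy sequence check. The sequences below are invented placeholders, not the actual EGFR exon 19 probe sequences:

```python
def fluoresces(target, beacon_site, quencher_site):
    """Sketch of the MB/oligo-Q logic: signal appears only when the
    beacon site is present (MB binds) and the quencher site is absent
    (oligo-Q cannot bind), as happens after a deletion."""
    mb_bound = beacon_site in target
    quencher_bound = quencher_site in target
    return mb_bound and not quencher_bound

BEACON = "GATC"        # hypothetical MB target, shared by wild type and mutant
QUENCHER = "TTTT"      # hypothetical oligo-Q target, lost in the deletion
WILD_TYPE = "AAGATCGGTTTTCC"
DELETION_MUTANT = "AAGATCGGCC"   # quencher region deleted
```

    Any deletion that removes the quencher site, whatever its exact boundaries, produces a signal, which is why a single probe set covers the diverse exon 19 deletions.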

  11. Energy optimized Gaussian basis sets for the atoms Tl-Rn

    International Nuclear Information System (INIS)

    Faegri, K. Jr.

    1987-01-01

    Energy optimized Gaussian basis sets have been derived for the atoms Tl-Rn. Two sets are presented - a (20,16,10,6) set and a (22,17,13,8) set. The smallest sets yield atomic energies 107 to 123 mH above the numerical Hartree-Fock values, while the larger sets give energies 11 mH above the numerical results. Energy trends from the smaller sets indicate that reduced shielding by p-electrons may place a greater demand on the flexibility of d- and f-orbital description for the lighter elements of the series

  12. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin

    2011-04-01

    In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its behavior through several computational experiments. © 2011 World Scientific Publishing Company.

  13. Optimal PID settings for first and second-order processes - Comparison with different controller tuning approaches

    OpenAIRE

    Pappas, Iosif

    2016-01-01

    PID controllers are extensively used in industry. Although many tuning methodologies exist, finding good controller settings is not an easy task and frequently optimization-based design is preferred to satisfy more complex criteria. In this thesis, the focus was to find which tuning approaches, if any, present close to optimal behavior. Pareto-optimal controllers were found for different first and second-order processes with time delay. Performance was quantified in terms of the integrat...

  14. Optimization problem in quantum cryptography

    International Nuclear Information System (INIS)

    Brandt, Howard E

    2003-01-01

    A complete optimization was recently performed, yielding the maximum information gain by a general unitary entangling probe in the four-state protocol of quantum cryptography. A larger set of optimum probe parameters was found than was known previously from an incomplete optimization. In the present work, a detailed comparison is made between the complete and incomplete optimizations. Also, a new set of optimum probe parameters is identified for the four-state protocol

  15. A Novel Rough Set Reduct Algorithm for Medical Domain Based on Bee Colony Optimization

    OpenAIRE

    Suguna, N.; Thanushkodi, K.

    2010-01-01

    Feature selection refers to the problem of selecting relevant features which produce the most predictive outcome. In particular, feature selection task is involved in datasets containing huge number of features. Rough set theory has been one of the most successful methods used for feature selection. However, this method is still not able to find optimal subsets. This paper proposes a new feature selection method based on Rough set theory hybrid with Bee Colony Optimization (BCO) in an attempt...

  16. Setting the optimal type of equipment to be adopted and the optimal time to replace it

    OpenAIRE

    Albici, Mihaela

    2009-01-01

    The mathematical models of equipment wear and replacement theory aim at deciding on the selection of a certain equipment type, the optimal exploitation time of the equipment, the time and ways to replace or repair it or to ensure its spare parts, the equipment's performance in the context of technical progress, the opportunities to modernize it, etc.

  17. Luminescence rigidochromism as a probe for the setting of gypsum plaster

    International Nuclear Information System (INIS)

    Kunkely, Horst; Vogler, Arnd

    2008-01-01

    The setting of gypsum plaster can be monitored by luminescence rigidochromism. The progress of the setting process which is accompanied by hardening is indicated by a blue shift of the phosphorescence of a suitable water soluble rhenium complex. This rigidity increase of the plaster/water mixture takes place in two phases. In the beginning the rigidity increase is rather large while in the second much longer phase it is relatively small. The addition of a plasticizer (or retarder) keeps the rigidity smaller in the beginning, but only slightly affects the final rigidity of the set plaster

  18. Level Set-Based Topology Optimization for the Design of an Electromagnetic Cloak With Ferrite Material

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Andkjær, Jacob Anders

    2013-01-01

    This paper presents a structural optimization method for the design of an electromagnetic cloak made of ferrite material. Ferrite materials exhibit a frequency-dependent degree of permeability, due to a magnetic resonance phenomenon that can be altered by changing the magnitude of an externally... A level set-based topology optimization method incorporating a fictitious interface energy is used to find optimized configurations of the ferrite material. The numerical results demonstrate that the optimization successfully found an appropriate ferrite configuration that functions as an electromagnetic...

  19. A multilevel, level-set method for optimizing eigenvalues in shape design problems

    International Nuclear Information System (INIS)

    Haber, E.

    2004-01-01

    In this paper, we consider optimal design problems that involve shape optimization. The goal is to determine the shape of a certain structure such that it is either as rigid or as soft as possible. To achieve this goal we combine two new ideas for an efficient solution of the problem. First, we replace the eigenvalue problem with an approximation by using inverse iteration. Second, we use a level set method but rather than propagating the front we use constrained optimization methods combined with multilevel continuation techniques. Combining these two ideas we obtain a robust and rapid method for the solution of the optimal design problem
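
    The first idea — replacing the repeated eigenvalue solve by inverse iteration — can be sketched for a symmetric 2 × 2 matrix (a toy stand-in; the paper applies the approximation inside a PDE-constrained design loop where each solve is expensive):

```python
import math

def inverse_iteration_2x2(A, iters=50):
    """Inverse iteration: repeatedly solve A y = x and normalize, which
    converges to the eigenvector of the smallest-magnitude eigenvalue;
    the Rayleigh quotient then estimates that eigenvalue."""
    (a, b), (c, d) = A
    det = a * d - b * c
    x = (1.0, 1.0)
    for _ in range(iters):
        # Solve A y = x directly via the 2x2 inverse (Cramer's rule).
        y = ((d * x[0] - b * x[1]) / det, (-c * x[0] + a * x[1]) / det)
        norm = math.hypot(*y)
        x = (y[0] / norm, y[1] / norm)
    # Rayleigh quotient x^T A x for the unit vector x.
    ax = (a * x[0] + b * x[1], c * x[0] + d * x[1])
    return x[0] * ax[0] + x[1] * ax[1]
```

    In the shape-design context one or two such iterations per optimization step are enough, since the design, and hence the eigenvector, changes only slightly between steps.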

  20. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast marching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  1. Optimization of ultrasonic arrays design and setting using a differential evolution

    International Nuclear Information System (INIS)

    Puel, B.; Chatillon, S.; Calmon, P.; Lesselier, D.

    2011-01-01

    Optimizing both the design and the settings of phased arrays is not easy when performed manually via parametric studies. An optimization method based on an evolutionary algorithm and numerical simulation is proposed and evaluated. The Randomized Adaptive Differential Evolution has been adapted to meet the specificities of non-destructive testing applications. In particular, multi-objective problems are addressed by implementing the concept of Pareto-optimal sets of solutions. The algorithm has been implemented and connected to the ultrasonic simulation modules of the CIVA software, used as the forward model. The efficiency of the method is illustrated on two realistic applications: optimization of the position and delay laws of a flexible array inspecting a nozzle, treated as a mono-objective problem; and optimization of the design of a surrounded array and its delay laws, treated as a constrained bi-objective problem. (authors)
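The Randomized Adaptive Differential Evolution variant used here is not spelled out in the abstract; a plain DE/rand/1/bin loop on a toy objective (a sphere function standing in for the simulated inspection criterion) illustrates the mechanics. All parameter values below are illustrative assumptions.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.5, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch: mutate with a scaled difference
    vector, apply binomial crossover, keep the better of trial/parent."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True     # ensure at least one gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:               # greedy replacement
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# Toy objective standing in for the simulated inspection quality
sphere = lambda x: float(np.sum(x ** 2))
lo, hi = np.array([-5.0] * 3), np.array([5.0] * 3)
x_best, f_best = differential_evolution(sphere, (lo, hi))
```

In the paper's setting, `f` would call the CIVA forward model on a candidate array design and delay law instead of a closed-form function.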

  2. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin; Matevosyan, Norayr; Wolfram, Marie-Therese

    2011-01-01

    analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its

  3. CHESS-changing horizon efficient set search: A simple principle for multiobjective optimization

    DEFF Research Database (Denmark)

    Borges, Pedro Manuel F. C.

    2000-01-01

    This paper presents a new concept for generating approximations to the non-dominated set in multiobjective optimization problems. The approximation set A is constructed by solving several single-objective minimization problems in which a particular function D(A, z) is minimized. A new algorithm t...

  4. Optimal Interest-Rate Setting in a Dynamic IS/AS Model

    DEFF Research Database (Denmark)

    Jensen, Henrik

    2011-01-01

    This note deals with interest-rate setting in a simple dynamic macroeconomic setting. The purpose is to present some basic and central properties of an optimal interest-rate rule. The model framework predates the New-Keynesian paradigm of the late 1990s and onwards (it is accordingly dubbed “Old...

  5. Homogeneity analysis with k sets of variables: An alternating least squares method with optimal scaling features

    NARCIS (Netherlands)

    van der Burg, Eeke; de Leeuw, Jan; Verdegaal, Renée

    1988-01-01

    Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple

  6. Utilization of reduced fuelling ripple set in ROP detector layout optimization

    International Nuclear Information System (INIS)

    Kastanya, Doddy

    2012-01-01

    Highlights: ► ADORE is an ROP detector layout optimization algorithm for CANDU reactors. ► The effect of using a reduced set of fuelling ripples in ADORE is assessed. ► Significant speedup can be realized by adopting this approach. ► The quality of the results is comparable to results from the full set of ripples. - Abstract: The ADORE (Alternative Detector layout Optimization for REgional overpower protection system) algorithm for performing the optimization of regional overpower protection (ROP) for CANDU® reactors has recently been developed. This algorithm utilizes the simulated annealing (SA) stochastic optimization technique to arrive at an optimized detector layout for the ROP systems. For each history in the SA iteration where a particular detector layout is evaluated, the goodness of that layout is measured by its trip set point value, which is obtained by performing a probabilistic trip set point calculation using the ROVER-F code. Since thousands of candidate detector layouts are evaluated during each optimization run, the overall optimization process is time consuming. Because the number of fuelling ripples controls the execution time of each ROVER-F evaluation, reducing the number of fuelling ripples reduces the overall execution time. This approach has been investigated and the results are presented in this paper. The challenge is to construct a set of representative fuelling ripples which significantly speeds up the optimization process while guaranteeing that the resulting detector layout has similar quality to the one produced when the complete set of fuelling ripples is employed.
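The SA loop described above can be sketched generically. In the sketch below, a synthetic scoring function stands in for the ROVER-F trip set point calculation, and the "layout" is a toy list of detector sites; everything except the SA skeleton is an assumption for illustration.

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=42):
    """Minimal SA loop: accept worse layouts with Boltzmann probability
    so the search can escape local optima, while the temperature cools."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling
    return best_x, best_c

# Toy stand-in: choose 4 of 20 candidate detector sites minimizing a
# synthetic "trip set point" score (ROVER-F would be called here instead).
sites = list(range(20))
score = lambda layout: sum((s - 9.5) ** 2 for s in layout)
def neighbor(layout, rng):
    out = list(layout)
    out[rng.randrange(len(out))] = rng.choice(sites)
    return out

layout, best = simulated_annealing(score, neighbor, [0, 1, 2, 3])
```

The paper's speedup comes from making each `cost` call cheaper (fewer fuelling ripples per ROVER-F evaluation), not from changing this loop.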

  7. Mixture modeling of multi-component data sets with application to ion-probe zircon ages

    Science.gov (United States)

    Sambridge, M. S.; Compston, W.

    1994-12-01

    A method is presented for detecting multiple components in a population of analytical observations for zircon and other ages. The procedure uses an approach known as mixture modeling to estimate the most likely ages, proportions and number of distinct components in a given data set. Particular attention is paid to estimating errors in the estimated ages and proportions. At each stage of the procedure several alternative numerical approaches are suggested, each having its own advantages in terms of efficiency and accuracy. The methodology is tested on synthetic data sets simulating two or more mixed populations of zircon ages; in this case the true ages and proportions of each population are known and compare well with the results of the new procedure. Two examples are presented of its use with sets of SHRIMP 238U-206Pb zircon ages from Palaeozoic rocks. A published data set for altered zircons from bentonite at Meishucun, South China, previously treated as a single-component population after screening for gross alteration effects, can be resolved into two components by the new procedure and their ages, proportions and standard errors estimated. The older component, at 530 +/- 5 Ma (2 sigma), is our best current estimate for the age of the bentonite. Mixture modeling of a data set for unaltered zircons from a tonalite elsewhere defines the magmatic 238U-206Pb age at high precision (2 sigma +/- 1.5 Ma), but one-quarter of the 41 analyses detect hidden and significantly older cores.
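Mixture modeling of this kind is commonly implemented as an expectation-maximization (EM) loop. The sketch below fits a two-component 1-D Gaussian mixture to synthetic "ages"; the 530/560 Ma values and sample sizes are illustrative, and this is not the authors' exact procedure (which also estimates the number of components and standard errors).

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM, returning
    component means, standard deviations, and mixing proportions."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread-out init
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each datum
        d = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd * w
        r = d / d.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means, spreads, proportions
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
        w = n / len(x)
    return mu, sd, w

# Synthetic "ages": two populations, e.g. near 530 Ma and 560 Ma
rng = np.random.default_rng(1)
ages = np.concatenate([rng.normal(530, 2.5, 120), rng.normal(560, 2.5, 40)])
mu, sd, w = em_two_gaussians(ages)
```

The normalization constant of the Gaussian density cancels in the responsibilities, so it is omitted; the recovered means and proportions are the analogue of the paper's component ages and proportions.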

  8. Optimal testing input sets for reduced diagnosis time of nuclear power plant digital electronic circuits

    International Nuclear Information System (INIS)

    Kim, D.S.; Seong, P.H.

    1994-01-01

    This paper describes the optimal testing input sets required for fault diagnosis of nuclear power plant digital electronic circuits. For complex systems such as very large scale integration (VLSI) circuits, nuclear power plants (NPPs), and aircraft, testing is the major factor in system maintenance, and diagnosis time grows quickly with component complexity. In this research, to reduce diagnosis time, the authors derived optimal testing sets, i.e., the minimal testing sets required for detecting a failure and for locating the failed component. Among many conventional methods, the technique presented by Hayes fits this approach to testing set generation best. However, that method has the following disadvantages: (a) it considers only simple networks, and (b) it determines only whether the system is in a failed state and does not provide a way to locate the failed component. The authors therefore derived optimal testing input sets that resolve these problems while preserving the advantages of Hayes' method. When they applied the optimal testing sets to the automatic fault diagnosis system (AFDS), which incorporates an advanced artificial intelligence fault diagnosis technique, they found that fault diagnosis using the optimal testing sets makes testing digital electronic circuits much faster than using exhaustive testing input sets; when they applied them to test the Universal (UV) Card, a nuclear power plant digital input/output solid state protection system card, testing time was reduced by a factor of up to about 100.

  9. Accuracy, reproducibility, and uncertainty analysis of thyroid-probe-based activity measurements for determination of dose calibrator settings.

    Science.gov (United States)

    Esquinas, Pedro L; Tanguay, Jesse; Gonzalez, Marjorie; Vuckovic, Milan; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna

    2016-12-01

    In the nuclear medicine department, the activity of radiopharmaceuticals is measured using dose calibrators (DCs) prior to patient injection. The DC consists of an ionization chamber that measures the current generated by ionizing radiation (emitted from the radiotracer). In order to obtain an activity reading, the current is converted into units of activity by applying an appropriate calibration factor (also referred to as the DC dial setting). Accurate determination of DC dial settings is crucial to ensure that patients receive the appropriate dose in diagnostic scans or radionuclide therapies. The goals of this study were (1) to describe a practical method to experimentally determine dose calibrator settings using a thyroid probe (TP) and (2) to investigate the accuracy, reproducibility, and uncertainties of the method. As an illustration, the TP method was applied to determine 188Re dial settings for two dose calibrator models: Atomlab 100plus and Capintec CRC-55tR. Using the TP to determine dose calibrator settings involved three measurements. First, the energy-dependent efficiency of the TP was determined from energy spectra measurements of two calibration sources (152Eu and 22Na). Second, the gamma emissions from the investigated isotope (188Re) were measured using the TP and its activity was determined using γ-ray spectroscopy methods. Ambient background, scatter, and source-geometry corrections were applied during the efficiency and activity determination steps. Third, the TP-based 188Re activity was used to determine the dose calibrator settings following the calibration curve method [B. E. Zimmerman et al., J. Nucl. Med. 40, 1508-1516 (1999)]. The interobserver reproducibility of TP measurements was determined by the coefficient of variation (COV), and the uncertainties associated with each step of the measuring process were estimated. The accuracy of activity measurements using the proposed method was evaluated by comparing the TP activity estimates of 99mTc

  10. OPTIMIZATION OF AGGREGATION AND SEQUENTIAL-PARALLEL EXECUTION MODES OF INTERSECTING OPERATION SETS

    Directory of Open Access Journals (Sweden)

    G. М. Levin

    2016-01-01

    Full Text Available A mathematical model and a method are proposed for the problem of optimizing the aggregation and the sequential-parallel execution modes of intersecting operation sets. The proposed method is based on a two-level decomposition scheme. At the top level the variant of aggregation for groups of operations is selected, and at the lower level the execution modes of operations are optimized for a fixed variant of aggregation.

  11. Ultrafuzziness Optimization Based on Type II Fuzzy Sets for Image Thresholding

    Directory of Open Access Journals (Sweden)

    Hudan Studiawan

    2010-11-01

    Full Text Available Image thresholding is one of the processing techniques used to provide a high quality preprocessed image. Image vagueness and bad illumination are common obstacles that yield poor image thresholding output. By treating an image as a fuzzy set, several different fuzzy thresholding techniques have been proposed to remove these obstacles during threshold selection. In this paper, we propose an algorithm for image thresholding that uses ultrafuzziness optimization to decrease the uncertainty of a fuzzy system built on common fuzzy sets, such as type II fuzzy sets. The optimization is conducted by measuring ultrafuzziness for the background and object fuzzy sets separately. Experimental results demonstrate that the proposed image thresholding method performs well for images with high vagueness, low contrast, and grayscale ambiguity.

  12. Multi-probe-based resonance-frequency electrical impedance spectroscopy for detection of suspicious breast lesions: improving performance using partial ROC optimization

    Science.gov (United States)

    Lederman, Dror; Zheng, Bin; Wang, Xingwei; Wang, Xiao Hui; Gur, David

    2011-03-01

    We have developed a multi-probe resonance-frequency electrical impedance spectroscopy (REIS) system to detect breast abnormalities. Based on assessing asymmetry in REIS signals acquired between left and right breasts, we developed several machine learning classifiers to classify younger women (i.e., under 50 years old) into two groups having high and low risk for developing breast cancer. In this study, we investigated a new method to optimize performance based on the area under a selected partial receiver operating characteristic (ROC) curve when optimizing an artificial neural network (ANN), and tested whether it could improve classification performance. From an ongoing prospective study, we selected a dataset of 174 cases for whom we have both REIS signals and diagnostic status verification. The dataset includes 66 "positive" cases recommended for biopsy due to detection of highly suspicious breast lesions and 108 "negative" cases determined by imaging-based examinations. A set of REIS-based feature differences, extracted from the two breasts using a mirror-matched approach, was computed and constituted an initial feature pool. Using a leave-one-case-out cross-validation method, we applied a genetic algorithm (GA) to train the ANN with an optimal subset of features. Two optimization criteria were separately used in GA optimization, namely the area under the entire ROC curve (AUC) and the partial area under the ROC curve up to a predetermined threshold (i.e., 90% specificity). The results showed that although the ANN optimized using the entire AUC yielded higher overall performance (AUC = 0.83 versus 0.76), the ANN optimized using the partial ROC area criterion achieved substantially higher operational performance (i.e., increasing the sensitivity level from 28% to 48% at 95% specificity and/or from 48% to 58% at 90% specificity).
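The partial-ROC criterion used as the GA fitness can be illustrated with a small routine that integrates the empirical ROC step curve only up to a chosen false-positive-rate cut-off (e.g., FPR 0.1, i.e., 90% specificity). This is a generic sketch of the metric, not the study's code.

```python
import numpy as np

def roc_points(pos_scores, neg_scores):
    """Empirical ROC as a dict {FPR: best TPR achieved at that FPR}."""
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    best = {0.0: 0.0, 1.0: 1.0}
    for t in np.unique(np.concatenate([pos, neg])):
        fpr = float(np.mean(neg >= t))
        tpr = float(np.mean(pos >= t))
        best[fpr] = max(best.get(fpr, 0.0), tpr)
    return best

def partial_auc(pos_scores, neg_scores, max_fpr=0.1):
    """Area under the ROC step curve for FPR in [0, max_fpr]
    (i.e., specificity >= 1 - max_fpr)."""
    best = roc_points(pos_scores, neg_scores)
    fs = sorted(best)
    area = 0.0
    for f0, f1 in zip(fs, fs[1:]):
        upper = min(f1, max_fpr)
        if upper > f0:
            area += (upper - f0) * best[f0]   # right-continuous step ROC
        if f1 >= max_fpr:
            break
    return area
```

With `max_fpr=1.0` this reduces to the full (step-curve) AUC, so the same routine covers both optimization criteria compared in the abstract.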

  13. Sensitivity of the optimal parameter settings for a LTE packet scheduler

    NARCIS (Netherlands)

    Fernandez-Diaz, I.; Litjens, R.; van den Berg, C.A.; Dimitrova, D.C.; Spaey, K.

    Advanced packet scheduling schemes in 3G/3G+ mobile networks provide one or more parameters to optimise the trade-off between QoS and resource efficiency. In this paper we study the sensitivity of the optimal parameter setting for packet scheduling in LTE radio networks with respect to various

  14. A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Lizhi Cui

    2014-01-01

    Full Text Available This paper proposes a separation method, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO), for High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data sets. Firstly, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on their physical principles. Then, a Generalized Reference Curve Measurement (GRCM) model is designed to transform these parameters to scalar values, which indicate the fitness of all parameters. Thirdly, rough solutions are found by searching an individual target for every parameter, and reinitialization is executed only around these rough solutions. Then, the Particle Swarm Optimization (PSO) algorithm is adopted to obtain the optimal parameters by minimizing the fitness of these new parameters given by the GRCM model. Finally, spectra for the compounds are estimated based on the optimal parameters and the HPLC-DAD data set. Through simulations and experiments, the following conclusions are drawn: (1) the GRCM-PSO method can separate the chromatogram peaks and spectra from the HPLC-DAD data set without knowing the number of compounds in advance, even when severe overlap and white noise exist; (2) the GRCM-PSO method is able to handle real HPLC-DAD data sets.
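A plain global-best PSO sketch shows the optimization machinery the GRCM-PSO method builds on. Here the fitness fits the center and width of a single synthetic Gaussian chromatogram peak, a deliberately simplified stand-in for the GRCM model; all names and parameter values are illustrative.

```python
import numpy as np

def pso(f, lo, hi, n_particles=30, w=0.7, c1=1.5, c2=1.5,
        iters=200, seed=0):
    """Minimal global-best PSO: particles track personal and swarm
    bests; velocity blends inertia, cognitive, and social pulls."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(f(g))

# Toy stand-in for the GRCM fitness: recover (center, width) of a
# noiseless synthetic Gaussian chromatogram peak.
t = np.linspace(0, 10, 101)
target = np.exp(-0.5 * ((t - 4.0) / 0.8) ** 2)
def fitness(p):
    center, width = p
    model = np.exp(-0.5 * ((t - center) / max(width, 1e-6)) ** 2)
    return float(np.sum((model - target) ** 2))

best, err = pso(fitness, np.array([0.0, 0.1]), np.array([10.0, 3.0]))
```

In the paper's method, each particle would encode the reference-curve parameters for all peaks, with the GRCM model supplying the scalar fitness.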

  15. Approximating the Pareto Set of Multiobjective Linear Programs via Robust Optimization

    NARCIS (Netherlands)

    Gorissen, B.L.; den Hertog, D.

    2012-01-01

    Abstract: The Pareto set of a multiobjective optimization problem consists of the solutions for which one or more objectives can not be improved without deteriorating one or more other objectives. We consider problems with linear objectives and linear constraints and use Adjustable Robust

  16. Ventilation area measured with eit in order to optimize peep settings in mechanically ventilated patients

    NARCIS (Netherlands)

    Blankman, P; Groot Jebbink, E; Preis, C; Bikker, I.; Gommers, D.

    2012-01-01

    INTRODUCTION. Electrical Impedance Tomography (EIT) is a non-invasive imaging technique, which can be used to visualize ventilation. Ventilation will be measured by impedance changes due to ventilation. OBJECTIVES. The aim of this study was to optimize PEEP settings based on the ventilation area of

  17. Internal combustion engine report: Spark ignited ICE GenSet optimization and novel concept development

    Energy Technology Data Exchange (ETDEWEB)

    Keller, J.; Blarigan, P. Van [Sandia National Labs., Livermore, CA (United States)

    1998-08-01

    In this manuscript the authors report on two projects, the goal of each being to produce cost-effective hydrogen utilization technologies. These projects are: (1) the development of an electrical generation system using a conventional four-stroke spark-ignited internal combustion engine generator combination (SI-GenSet) optimized for maximum efficiency and minimum emissions, and (2) the development of a novel internal combustion engine concept. The SI-GenSet will be optimized to run on either hydrogen or hydrogen blends. The novel concept seeks to develop an engine that optimizes the Otto cycle in a free-piston configuration while minimizing all emissions. To this end the authors are developing a rapid-combustion homogeneous charge compression ignition (HCCI) engine using a linear alternator for both power take-off and engine control. Targeted applications include stationary electrical power generation, stationary shaft power generation, hybrid vehicles, and nearly any other application now served by internal combustion engines.

  18. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    Science.gov (United States)

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2017-08-07

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many common MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. To match the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure; problem-related heuristic information is introduced in the constructive approach for efficiency. To address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

  19. An Eddy Current Testing Platform System for Pipe Defect Inspection Based on an Optimized Eddy Current Technique Probe Design

    Science.gov (United States)

    Rifai, Damhuji; Abdalla, Ahmed N.; Razali, Ramdan; Ali, Kharudin; Faraj, Moneer A.

    2017-01-01

    The use of the eddy current technique (ECT) for the non-destructive testing of conducting materials has become increasingly important in the past few years. Non-destructive ECT plays a key role in ensuring the safety and integrity of large industrial structures such as oil and gas pipelines. This paper introduces a novel ECT probe design integrated with a distributed ECT inspection system (DSECT) used for crack inspection of inner ferromagnetic pipes. The system consists of an array of giant magneto-resistive (GMR) sensors, a pneumatic system, a rotating magnetic field excitation source and a host PC acting as the data analysis center. The probe design parameters, namely the probe diameter, the excitation coil and the number of GMR sensors in the sensor array, are optimized using numerical optimization based on the desirability approach. The main benefits of DSECT are its modularity and flexibility regarding the use of different types of magnetic transducers/sensors, and of signals of a different nature with either digital or analog outputs, making it well suited for an ECT probe design using an array of GMR magnetic sensors. A real-time application of the DSECT distributed system is demonstrated on the inspection of a 70 mm carbon steel pipe. In order to predict axial and circumferential defect detection, a mathematical model is developed based on the technique known as response surface methodology (RSM). The inspection results for a carbon steel pipe sample with artificial defects indicate that the system design is highly efficient. PMID:28335399

  20. Trace element analysis in an optimized set-up for total reflection PIXE (TPIXE)

    International Nuclear Information System (INIS)

    Van Kan, J.A.; Vis, R.D.

    1996-01-01

    A newly constructed chamber for measuring with MeV proton beams at small incidence angles (0 to 35 mrad) is used to analyse trace elements on flat surfaces such as Si wafers, quartz substrates and perspex. This set-up is constructed in such a way that the X-ray detector can subtend very large solid angles, larger than 1 sr. Using these large solid angles in combination with the reduction of bremsstrahlung background, lower limits of detection (LODs) can be obtained with TPIXE than with PIXE in the conventional geometry. Standard solutions are used to determine the LODs obtainable with TPIXE in the optimized set-up. These solutions contain traces of As and Sr with concentrations down to 20 ppb in an insulin solution. The limits of detection found are compared with earlier ones obtained with TPIXE in a non-optimized set-up and with TXRF results. (author)

  1. Optimizing Geographic Allotment of Photovoltaic Capacity in a Distributed Generation Setting: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Urquhart, B.; Sengupta, M.; Keller, J.

    2012-09-01

    A multi-objective optimization was performed to allocate 2 MW of PV among four candidate sites on the island of Lanai such that energy was maximized and variability, in the form of ramp rates, was minimized. This resulted in an optimal solution set which provides a range of geographic allotment alternatives for the fixed PV capacity. Within the optimal set, a tradeoff between energy produced and variability experienced was found, whereby a decrease in variability always necessitates a simultaneous decrease in energy. A design point within the optimal set was selected for study which decreased extreme ramp rates by over 50% while only decreasing annual energy generation by 3% relative to the maximum-generation allocation. To quantify the selected allotment mix, a metric was developed, called the ramp ratio, which compares the ramping magnitude when all capacity is allotted to a single location to the aggregate ramping magnitude in a distributed scenario. The ramp ratio quantifies simultaneously how much smoothing a distributed scenario would experience over single-site allotment and how much a single site is being under-utilized in its ability to reduce aggregate variability. This paper creates a framework for use by cities and municipal utilities to reduce variability impacts while planning for high penetration of PV on the distribution grid.
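The ramp ratio is described only qualitatively above. One plausible reading, stated here as an assumption rather than the paper's exact definition, compares the worst single-site ramp (with that site scaled up to carry the full capacity alone) against the aggregate ramp of the distributed allocation:

```python
import numpy as np

def ramp_ratio(site_power):
    """Assumed form of the 'ramp ratio': worst-case ramp magnitude if all
    capacity sat at one site, divided by the aggregate ramp magnitude of
    the distributed scenario. Values > 1 mean distribution smooths output."""
    site_power = np.asarray(site_power, dtype=float)
    total = site_power.sum(axis=0)                  # aggregate output
    agg_ramp = np.max(np.abs(np.diff(total)))
    cap = total.mean()                              # capacity to reassign
    single = [np.max(np.abs(np.diff(p / p.mean() * cap)))
              for p in site_power]                  # each site scaled alone
    return max(single) / agg_ramp

# Two partially anti-correlated toy sites: distribution smooths the total
site_power = [[1.0, 2.0, 1.0, 2.0, 1.0],
              [2.0, 1.5, 2.0, 1.5, 2.0]]
rr = ramp_ratio(site_power)
```

With anti-correlated site profiles the aggregate ramp shrinks while each scaled single-site ramp stays large, so the ratio exceeds one, matching the smoothing interpretation in the abstract.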

  2. Topology optimization in acoustics and elasto-acoustics via a level-set method

    Science.gov (United States)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three space dimensions.

  3. Security Optimization for Distributed Applications Oriented on Very Large Data Sets

    Directory of Open Access Journals (Sweden)

    Mihai DOINEA

    2010-01-01

    Full Text Available The paper presents the main characteristics of applications that work with very large data sets and the issues related to their security. The first section addresses the optimization process and how it is approached when dealing with security. The second section describes the concept of very large dataset management, while the third section identifies and classifies the related risks. Finally, a security optimization schema is presented together with a cost-efficiency analysis of its feasibility. Conclusions are drawn and future approaches are identified.

  4. Optimization of in situ prompt gamma-ray analysis using a HPGe-252Cf probe

    International Nuclear Information System (INIS)

    Chien Chung; Jiunnhsing Chao

    1991-01-01

    The application of in situ measurements by the neutron-induced prompt gamma-ray activation analysis (PGAA) technique to geochemical analysis and mineral surveys has been investigated. An in situ survey of water pollutants by PGAA techniques was first proposed in the authors' previous study, where a 2.7-μg 252Cf neutron source used in connection with a gamma-ray detecting system to determine water pollutants was described. In this paper the authors describe a modified detection probe designed and constructed to find the optimum conditions for various-intensity 252Cf neutron sources in the measurement of some elements in lake water. Detection efficiencies in the high-energy region and detection limits for elements commonly found in polluted lakes were evaluated and predicted to investigate the potential application of the probe for in situ measurements

  5. A Binary Cat Swarm Optimization Algorithm for the Non-Unicost Set Covering Problem

    Directory of Open Access Journals (Sweden)

    Broderick Crawford

    2015-01-01

    Full Text Available The Set Covering Problem consists in finding a subset of columns in a zero-one matrix such that they cover all the rows of the matrix at a minimum cost. To solve the Set Covering Problem we use a metaheuristic called Binary Cat Swarm Optimization. This metaheuristic is a recent swarm metaheuristic technique based on the cat behavior. Domestic cats show the ability to hunt and are curious about moving objects. Based on this, the cats have two modes of behavior: seeking mode and tracing mode. We are the first ones to use this metaheuristic to solve this problem; our algorithm solves a set of 65 Set Covering Problem instances from OR-Library.
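For orientation, the classic greedy heuristic for the non-unicost Set Covering Problem is sketched below: repeatedly pick the column with the lowest cost per newly covered row. It is a simple baseline for the problem definition in the abstract, not the Binary Cat Swarm Optimization metaheuristic the paper proposes, and the instance data are made up.

```python
def greedy_set_cover(rows, columns, costs):
    """Greedy baseline for the (non-unicost) Set Covering Problem:
    repeatedly choose the column minimizing cost per newly covered row."""
    uncovered = set(rows)
    chosen, total = [], 0.0
    while uncovered:
        best_j, best_score = None, float("inf")
        for j, col in columns.items():
            new = len(uncovered & col)
            if new:
                score = costs[j] / new     # cost per newly covered row
                if score < best_score:
                    best_j, best_score = j, score
        if best_j is None:
            raise ValueError("instance is infeasible")
        chosen.append(best_j)
        total += costs[best_j]
        uncovered -= columns[best_j]
    return chosen, total

# Hypothetical instance: rows 1..5 must be covered at minimum cost
rows = range(1, 6)
columns = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {1, 5}}
costs = {"A": 2.0, "B": 1.0, "C": 1.5, "D": 2.5}
chosen, total = greedy_set_cover(rows, columns, costs)
```

A metaheuristic such as Binary Cat Swarm explores binary column-selection vectors instead, trading the greedy guarantee for the chance of finding cheaper covers on hard instances like the OR-Library set.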

  6. Optimal wage setting for an export oriented firm under labor taxes and labor mobility

    Directory of Open Access Journals (Sweden)

    Raúl Ponce Rodríguez

    2005-01-01

    Full Text Available In this paper a theoretical model is developed to study the incentives that a labor tax might create for the optimal wage setting of an export-oriented firm. In particular, we analyze the interaction of a labor tax, which tends to reduce the wage because the firm is induced to shift the tax burden backwards onto its employees to minimize the increase in payroll costs and the fall in profits, with labor mobility. However, a lower wage might not be an optimal response to the establishment of a labor tax, because it increases labor turnover, and as a result the firm faces both an output opportunity cost and a labor turnover cost. The firm thus optimally decides to respond to the qualification and labor taxes by increasing the after-tax wage.

  7. Probe code: a set of programs for processing and analysis of the left ventricular function - User's manual

    International Nuclear Information System (INIS)

    Piva, R.M.V.

    1987-01-01

    The User's Manual of the Probe Code is an addendum to the M.Sc. thesis entitled A Microcomputer System of Nuclear Probe to Check the Left Ventricular Function. The Probe Code is software developed for the processing and off-line analysis of left ventricular function curves obtained in vivo. These curves are produced by means of an external scintigraphic probe, collimated and placed over the left ventricle, after venous injection of Tc-99m. (author)

  8. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    Science.gov (United States)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
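
A minimal harmony search loop of the kind described (harmony memory, memory consideration, pitch adjustment, random improvisation) can be sketched as follows; the quadratic objective is a toy stand-in for the leakage model, and all parameter values (HMCR, PAR, memory size) are illustrative, not those of the paper.

```python
import random

def harmony_search(obj, bounds, hms=10, hmcr=0.9, par=0.3, iters=500, seed=1):
    """Minimal harmony search: keep a memory of candidate vectors, improvise
    new ones from it, and replace the worst member when improved."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [obj(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # memory consideration
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:              # pitch adjustment
                    x = min(hi, max(lo, x + rng.uniform(-0.1, 0.1) * (hi - lo)))
            else:                                   # random improvisation
                x = rng.uniform(lo, hi)
            new.append(x)
        s = obj(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                       # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Toy stand-in for the leakage objective: three valve settings should sit
# near hypothetical target values.
target = [0.4, 0.6, 0.5]
obj = lambda v: sum((vi - ti) ** 2 for vi, ti in zip(v, target))
settings, score = harmony_search(obj, [(0.0, 1.0)] * 3)
```

In the paper's setting the objective would instead be evaluated by the hydraulic simulator, with penalties added for violated hydraulic constraints.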

  9. Searching for optimal integer solutions to set partitioning problems using column generation

    OpenAIRE

    Bredström, David; Jörnsten, Kurt; Rönnqvist, Mikael

    2007-01-01

    We describe a new approach to produce integer feasible columns for a set partitioning problem directly while solving the linear programming (LP) relaxation using column generation. Traditionally, column generation aims to solve the LP relaxation as quickly as possible, without any concern for the integer properties of the columns formed. In our approach we aim to generate the columns forming the optimal integer solution while simultaneously solving the LP relaxation. By this we can re...

  10. A Method of Forming the Optimal Set of Disjoint Path in Computer Networks

    Directory of Open Access Journals (Sweden)

    As'ad Mahmoud As'ad ALNASER

    2017-04-01

    Full Text Available This work provides a short analysis of multipath routing algorithms. A modified algorithm for forming the maximum set of disjoint paths, taking their metrics into account, is proposed. Paths are optimized by reconfiguring them with an adjacent deadlock path. Reconfigurations are performed within subgraphs that include only the vertices of the main path and an adjacent deadlock path. This reduces the search region for forming an optimal path and the time complexity of its formation.
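
The idea of accumulating disjoint paths can be illustrated with a greedy sketch: repeatedly find a shortest path by breadth-first search, then ban its interior vertices so later paths cannot reuse them. This simplification omits the paper's deadlock-path reconfiguration step, and the graph is invented.

```python
from collections import deque

def bfs_path(adj, s, t, banned):
    """Shortest s-t path by BFS, avoiding banned vertices."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in prev and v not in banned:
                prev[v] = u
                q.append(v)
    return None

def disjoint_paths(adj, s, t):
    """Greedily collect vertex-disjoint s-t paths."""
    banned, paths = set(), []
    while True:
        p = bfs_path(adj, s, t, banned)
        if p is None:
            return paths
        paths.append(p)
        banned.update(p[1:-1])       # interior vertices cannot be reused

adj = {
    "s": ["a", "c"], "a": ["b"], "b": ["t"],
    "c": ["t"], "t": [],
}
paths = disjoint_paths(adj, "s", "t")
```

The greedy order matters (a poor first path can block others), which is exactly the kind of situation the reconfiguration step in the abstract is meant to repair.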

  11. Pareto Optimization Identifies Diverse Set of Phosphorylation Signatures Predicting Response to Treatment with Dasatinib.

    Science.gov (United States)

    Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph

    2015-01-01

    Multivariate biomarkers that can predict the effectiveness of targeted therapy in individual patients are highly desired. Previous biomarker discovery studies have largely focused on the identification of single biomarker signatures, aimed at maximizing prediction accuracy. Here, we present a different approach that identifies multiple biomarkers by simultaneously optimizing their predictive power, number of features, and proximity to the drug target in a protein-protein interaction network. To this end, we incorporated NSGA-II, a fast and elitist multi-objective optimization algorithm that is based on the principle of Pareto optimality, into the biomarker discovery workflow. The method was applied to quantitative phosphoproteome data of 19 non-small cell lung cancer (NSCLC) cell lines from a previous biomarker study. The algorithm successfully identified a total of 77 candidate biomarker signatures predicting response to treatment with dasatinib. Through filtering and similarity clustering, this set was trimmed to four final biomarker signatures, which then were validated on an independent set of breast cancer cell lines. All four candidates reached the same good prediction accuracy (83%) as the originally published biomarker. Although the newly discovered signatures were diverse in their composition and in their size, the central protein of the originally published signature - integrin β4 (ITGB4) - was also present in all four Pareto signatures, confirming its pivotal role in predicting dasatinib response in NSCLC cell lines. In summary, the method presented here allows for a robust and simultaneous identification of multiple multivariate biomarkers that are optimized for prediction performance, size, and relevance.
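
The Pareto-optimality principle underlying NSGA-II reduces, for a fixed candidate pool, to extracting the non-dominated set. A minimal sketch, with hypothetical signatures scored on the three objectives named in the abstract (prediction error, signature size, network distance to the drug target), all minimized:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical signatures scored by (prediction error, feature count,
# network distance to the drug target) -- all to be minimized.
candidates = [
    (0.17, 5, 2), (0.17, 9, 2), (0.20, 3, 1), (0.15, 12, 4), (0.25, 3, 1),
]
front = pareto_front(candidates)
```

NSGA-II builds on this dominance relation, adding fast non-dominated sorting and crowding-distance selection to evolve a whole front rather than a single best signature.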

  12. Optimization of the primary collimator settings for fractionated IMRT stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Tobler, Matt; Leavitt, Dennis D.; Watson, Gordon

    2004-01-01

    Advances in field-shaping techniques for stereotactic radiosurgery/radiotherapy have allowed dynamic adjustment of field shape with gantry rotation (dynamic conformal arc) in an effort to minimize dose to critical structures. Recent work evaluated the potential for increased sparing of normal tissues when the primary collimator setting is optimized to only the size necessary to cover the largest shape of the dynamic micro-multileaf field. Intensity-modulated radiotherapy (IMRT) is now a treatment option for patients receiving stereotactic radiotherapy treatments. This multisegmentation of the dose delivered through multiple fixed treatment fields provides for delivery of a uniform dose to the tumor volume while allowing sparing of critical structures, particularly for patients whose tumor volumes are less suited to rotational treatment. For these segmented fields, the total number of monitor units (MUs) delivered may be much greater than the number of MUs required if dose delivery occurred through an unmodulated treatment field. As a result, undesired dose delivered as leakage through the leaves to tissues outside the area of interest will be proportionally increased. This work evaluates the role of optimization of the primary collimator setting for these IMRT treatment fields and compares the results to treatment fields where the primary collimator settings have not been optimized.

  13. Application of HGSO to security based optimal placement and parameter setting of UPFC

    International Nuclear Information System (INIS)

    Tarafdar Hagh, Mehrdad; Alipour, Manijeh; Teimourzadeh, Saeed

    2014-01-01

    Highlights: • A new method for solving the security based UPFC placement and parameter setting problem is proposed. • The proposed method is a global method for all mixed-integer problems. • The proposed method has the ability of parallel search in binary and continuous spaces. • By using the proposed method, most of the problems due to line contingencies are solved. • Comparison studies are done to compare the performance of the proposed method. - Abstract: This paper presents a novel method to solve the security based optimal placement and parameter setting of unified power flow controller (UPFC) problem, based on the hybrid group search optimization (HGSO) technique. Firstly, HGSO is introduced in order to solve mixed-integer type problems. Afterwards, the proposed method is applied to the security based optimal placement and parameter setting of UPFC problem. The focus of the paper is to enhance power system security by eliminating or minimizing overloaded lines and bus voltage limit violations under single line contingencies. Simulation studies are carried out on the IEEE 6-bus, IEEE 14-bus and IEEE 30-bus systems in order to verify the accuracy and robustness of the proposed method. The results indicate that by using the proposed method, the power system remains secure under single line contingencies.

  14. Application of Multiple-Population Genetic Algorithm in Optimizing the Train-Set Circulation Plan Problem

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2017-01-01

    Full Text Available The train-set circulation plan problem (TCPP) belongs to the rolling stock scheduling (RSS) problem and is similar to the aircraft routing problem (ARP) in airline operations and the vehicle routing problem (VRP) in the logistics field. However, the TCPP involves additional complexity due to the maintenance constraint of train-sets: train-sets must conduct maintenance tasks after running for a certain time and distance. The TCPP is nondeterministic polynomial hard (NP-hard). There is no available algorithm that can obtain the globally optimal solution, and many factors such as the utilization mode and the maintenance mode impact the solution of the TCPP. This paper proposes a train-set circulation optimization model to minimize the total connection time and maintenance costs and describes the design of an efficient multiple-population genetic algorithm (MPGA) to solve this model. A realistic high-speed railway (HSR) case is selected to verify our model and algorithm, and then a comparison of different algorithms is carried out. Furthermore, a new maintenance mode is proposed, and related implementation requirements are discussed.

  15. An intelligent hybrid scheme for optimizing parking space: A Tabu metaphor and rough set based approach

    Directory of Open Access Journals (Sweden)

    Soumya Banerjee

    2011-03-01

    Full Text Available Congested roads, high traffic, and parking problems are major concerns for any modern city planning. Congestion of on-street spaces in official neighborhoods may give rise to inappropriate parking in office and shopping mall complexes during peak hours. This paper proposes an intelligent, optimized scheme to solve the parking space problem for a small city (e.g., Mauritius) using a reactive search technique (Tabu Search) assisted by rough sets. Rough sets are used to extract the uncertain rules that exist in databases of parking situations. Rough set theory provides the accuracy and roughness measures used to characterize the uncertainty of the parking lot, and approximation accuracy is employed to quantify the accuracy of a rough classification [1] under different dynamic parking scenarios. As such, the proposed hybrid metaphor comprising Tabu Search and rough sets could provide substantial research directions for other similarly hard optimization problems.
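
A minimal tabu search of the kind invoked here (best admissible move, tabu tenure, aspiration criterion) can be sketched on a toy binary allocation problem; the objective and the tenure value are illustrative, and the rough-set rule extraction is not modeled.

```python
def tabu_search(obj, n, iters=50, tenure=3):
    """Minimal tabu search over binary vectors: take the best admissible
    single-bit flip each step; recently flipped bits are tabu unless the
    move improves the best-known solution (aspiration criterion)."""
    x = [0] * n
    best, best_val = x[:], obj(x)
    tabu = {}                                 # bit index -> iteration it frees
    for it in range(iters):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1
            v = obj(y)
            if tabu.get(i, 0) <= it or v < best_val:   # non-tabu or aspiration
                candidates.append((v, i, y))
        if not candidates:
            break
        v, i, y = min(candidates)
        x = y
        tabu[i] = it + 1 + tenure             # forbid reversing this move
        if v < best_val:
            best, best_val = x[:], v
    return best, best_val

# Toy objective standing in for a parking allocation: exactly 3 of 6
# slots should be occupied.
obj = lambda x: (sum(x) - 3) ** 2
sol, val = tabu_search(obj, 6)
```

The tabu list is what lets the search walk through plateaus and escape local optima that would trap a plain greedy descent.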

  16. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence...... and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow...... field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study...

  17. Quality of Gaussian basis sets: direct optimization of orbital exponents by the method of conjugate gradients

    International Nuclear Information System (INIS)

    Kari, R.E.; Mezey, P.G.; Csizmadia, I.G.

    1975-01-01

    Expressions are given for calculating the energy gradient vector in the exponent space of Gaussian basis sets, and a technique to optimize orbital exponents using the method of conjugate gradients is described. The method is tested on the (9s5p) Gaussian basis space and optimum exponents are determined for the carbon atom. The analysis of the results shows that the calculated one-electron properties converge more slowly to their optimum values than the total energy converges to its optimum value. In addition, basis sets approximating the optimum total energy very well can still be markedly improved for the prediction of one-electron properties. For smaller basis sets, this improvement does not warrant the necessary expense.
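
The conjugate-gradient scheme referred to can be illustrated on a quadratic model, where minimizing 0.5*x'Ax - b'x is equivalent to solving Ax = b; the small matrix below is a standard textbook example, not an exponent-space gradient.

```python
def conjugate_gradient(A, b, x0, iters=None):
    """Linear conjugate gradients for minimizing 0.5*x'Ax - b'x
    (equivalently, solving Ax = b for symmetric positive-definite A)."""
    n = len(b)
    mat = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = list(x0)
    r = [bi - ai for bi, ai in zip(b, mat(A, x))]   # residual b - Ax
    p = r[:]                                        # first search direction
    for _ in range(iters or n):
        Ap = mat(A, p)
        alpha = dot(r, r) / dot(p, Ap)              # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r_new = [ri - alpha * api for ri, api in zip(r, Ap)]
        beta = dot(r_new, r_new) / dot(r, r)        # Fletcher-Reeves update
        p = [rn + beta * pi for rn, pi in zip(r_new, p)]
        r = r_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [0.0, 0.0])
```

For an n-dimensional quadratic, CG terminates in at most n steps in exact arithmetic; for exponent optimization the energy surface is not quadratic, so the same direction updates are applied with a numerical line search instead.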

  18. SkyProbeBV: dual-color absolute sky transparency monitor to optimize science operations

    Science.gov (United States)

    Cuillandre, Jean-Charles; Magnier, Eugene; Sabin, Dan; Mahoney, Billy

    2008-07-01

    Mauna Kea (4200 m elevation, Hawaii) is known for its pristine seeing conditions, but sky transparency can be an issue for science operations: 25% of the nights are not photometric, with cloud coverage mostly due to high-altitude thin cirrus. The Canada-France-Hawaii Telescope (CFHT) is upgrading its real-time sky transparency monitor in the optical domain (V-band) into a dual-color system by adding a B-band channel and redesigning the entire optical and mechanical assembly. Since 2000, the original single-channel SkyProbe has gathered one exposure every minute during each observing night using a small CCD camera with a very wide field of view (35 sq. deg.) encompassing the region pointed at by the telescope for science operations, with exposures long enough (30 seconds) to capture at least 100 stars of the Hipparcos Tycho catalog at high galactic latitudes (and up to 600 stars at low galactic latitudes). A key advantage of SkyProbe over direct thermal-infrared imaging detection of clouds is that it allows an accurate absolute measurement, within 5%, of the true atmospheric absorption by clouds affecting the data gathered by the telescope's main science instrument. This system has proven crucial for decision making in the CFHT queued service observing (QSO) mode, which today represents 95% of the telescope time: science exposures taken in non-photometric conditions are automatically registered to be re-observed later (at 1/10th of the original exposure time per pointing in the observed filters) to ensure a proper final absolute photometric calibration. If the absorption is too high, exposures can be repeated, or the observing can be done for a lower-ranked science program. The new dual-color system (simultaneous B and V bands) will allow a better characterization of the sky properties above Mauna Kea and should enable better detection of the thinnest cirrus (absorption down to 0.02 mag., i.e. 2%). SkyProbe is operated within the Elixir pipeline, a collection of tools

  19. A computerized traffic control algorithm to determine optimal traffic signal settings. Ph.D. Thesis - Toledo Univ.

    Science.gov (United States)

    Seldner, K.

    1977-01-01

    An algorithm was developed to optimally control the traffic signals at each intersection using a discrete-time traffic model applicable to heavy or peak traffic. Off-line optimization procedures were applied to compute the cycle splits required to minimize the lengths of the vehicle queues and the delay at each intersection. The method was applied to an extensive traffic network in Toledo, Ohio. Results obtained with the derived optimal settings are compared with the control settings presently in use.
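
The cycle-split computation can be sketched with a deterministic fluid-queue delay model for a two-phase intersection: queues build at the arrival rate during red, discharge at the saturation rate during green, and a grid search picks the split minimizing total delay. The arrival rates, saturation flow, and lost time below are invented, not taken from the thesis.

```python
def cycle_delay(arrival, saturation, green, cycle):
    """Deterministic fluid-queue delay per cycle for one approach (veh*s):
    the queue grows at `arrival` during red and clears at
    `saturation - arrival` during green. Returns None if green is too short."""
    red = cycle - green
    clear = arrival * red / (saturation - arrival)   # time to empty the queue
    if clear > green:
        return None                                  # oversaturated split
    # area under the triangular queue-length profile = total delay
    return 0.5 * arrival * red * (red + clear)

def best_split(a1, a2, saturation, cycle, lost=10):
    """Grid-search the two-phase green split (1 s steps) minimizing delay;
    `lost` seconds of the cycle are unusable (clearance intervals)."""
    usable = cycle - lost
    best = None
    for g1 in range(1, usable):
        d1 = cycle_delay(a1, saturation, g1, cycle)
        d2 = cycle_delay(a2, saturation, usable - g1, cycle)
        if d1 is not None and d2 is not None:
            total = d1 + d2
            if best is None or total < best[0]:
                best = (total, g1, usable - g1)
    return best

total, g1, g2 = best_split(a1=0.30, a2=0.15, saturation=1.0, cycle=90)
```

As expected, the heavier approach receives the larger share of the usable green time.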

  20. Optimal design and fabrication of three-dimensional calibration specimens for scanning probe microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Liu Xiaoning; Luo Tingting; Chen Yuhang; Huang Wenhao [Department of Precision Machinery and Instrumentation, University of Science and Technology of China, 230026 Hefei (China); Piaszenski, Guido [Raith GmbH, Konrad-Adenauer-Allee 8, 44263 Dortmund (Germany)

    2012-05-15

    Micro-/nano-scale roughness specimens are in high demand for comprehensive calibration of scanning probe microscopy (SPM) instruments. In this study, three-dimensional (3D) specimens with controllable main surface evaluation parameters were designed. In order to improve the design accuracy, a genetic algorithm was introduced into the conventional digital filter method. A primary 3D calibration specimen with dimensions of 10 µm × 10 µm was fabricated by electron beam lithography. Atomic force microscopy characterizations demonstrated that the statistical and spectral parameters of the fabricated specimen match well with the designed values. Such 3D specimens have the potential to calibrate SPM instruments for applications in quantitative surface evaluations.

  1. Statistically optimized near field acoustic holography using an array of pressure-velocity probes

    DEFF Research Database (Denmark)

    Jacobsen, Finn; Jaud, Virginie

    2007-01-01

    of a measurement aperture that extends well beyond the source can be relaxed. Both NAH and SONAH are based on the assumption that all sources are on one side of the measurement plane whereas the other side is source free. An extension of the SONAH procedure based on measurement with a double layer array...... of pressure microphones has been suggested. The double layer technique makes it possible to distinguish between sources on the two sides of the array and thus suppress the influence of extraneous noise coming from the “wrong” side. It has also recently been demonstrated that there are significant advantages...... in NAH based on an array of acoustic particle velocity transducers (in a single layer) compared with NAH based on an array of pressure microphones. This investigation combines the two ideas and examines SONAH based on an array of pressure-velocity intensity probes through computer simulations as well...

  2. Geminal embedding scheme for optimal atomic basis set construction in correlated calculations

    Energy Technology Data Exchange (ETDEWEB)

    Sorella, S., E-mail: sorella@sissa.it [International School for Advanced Studies (SISSA), Via Beirut 2-4, 34014 Trieste, Italy and INFM Democritos National Simulation Center, Trieste (Italy); Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr [Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Mazzola, G., E-mail: gmazzola@phys.ethz.ch [Theoretische Physik, ETH Zurich, 8093 Zurich (Switzerland); Casula, M., E-mail: michele.casula@impmc.upmc.fr [CNRS and Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France)

    2015-12-28

    We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculation of bulk materials, namely, containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.

  3. Structure Optimal Design of Electromagnetic Levitation Load Reduction Device for Hydroturbine Generator Set

    Directory of Open Access Journals (Sweden)

    Qingyan Wang

    2015-01-01

    Full Text Available The thrust bearing is the part with the highest failure rate in a hydroturbine generator set, primarily due to heavy axial load. Such heavy load often destroys the oil film, causing bearing friction and even burning, so it is necessary to study the load and methods for its reduction. The dynamic thrust is an important factor influencing the axial load and the reduction design of the electromagnetic device. Therefore, in this paper, combined with the structural features of the vertical turbine, the hydraulic thrust is analyzed accurately. Then, taking the turbine model HL-220-LT-550 as an example, the electromagnetic levitation load reduction device is designed and its mathematical model is built, whose purpose is to minimize excitation loss and total mass under the constraints of installation space, connection layout, and heat dissipation. Particle swarm optimization (PSO) is employed to search for the optimum solution; finally, the result is verified by the finite element method (FEM), which demonstrates that the optimized structure is more effective.
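
A minimal particle swarm optimization loop of the kind employed can be sketched as follows; the two design variables and the quadratic objective are toy stand-ins for the device geometry and its excitation-loss/mass objective, and all swarm parameters are illustrative.

```python
import random

def pso(obj, bounds, particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=7):
    """Minimal particle swarm optimization: each particle is pulled toward
    its own best position and the swarm's best position."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [obj(p) for p in pos]
    g = min(range(particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(particles):
            for d, (lo, hi) in enumerate(bounds):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = obj(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy stand-in for the loss/mass objective over two geometry variables
# (illustrative, not the paper's actual electromagnetic model).
obj = lambda x: (x[0] - 1.2) ** 2 + (x[1] - 0.8) ** 2
best, val = pso(obj, [(0.0, 2.0), (0.0, 2.0)])
```

In the paper's workflow the objective evaluation would come from the electromagnetic model, with the PSO result checked afterwards by FEM.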

  4. Level set method for optimal shape design of MRAM core. Micromagnetic approach

    International Nuclear Information System (INIS)

    Melicher, Valdemar; Cimrak, Ivan; Keer, Roger van

    2008-01-01

    We aim at optimizing the shape of the magnetic core in MRAM memories. The evolution of the magnetization during the writing process is described by the Landau-Lifshitz equation (LLE). The actual shape of the core in one cell is characterized by the coefficient γ. The cost functional f=f(γ) expresses the quality of the writing process, having in mind the competition between the full-select and the half-select element. We derive an explicit form of the derivative F=∂f/∂γ, which allows for the use of gradient-type methods for the actual computation of the optimized shape (e.g., the steepest descent method). The level set method (LSM) is employed for the representation of the piecewise constant coefficient γ.
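
With an explicit derivative F = ∂f/∂γ in hand, a gradient-type method is only a few lines; the sketch below runs plain steepest descent on a toy quadratic standing in for the cost functional f(γ), with an invented fixed step size.

```python
def steepest_descent(f, grad, x0, step=0.1, iters=200):
    """Plain steepest descent, x <- x - step * grad f(x): the kind of
    gradient-type iteration enabled by an explicit derivative F = df/dgamma."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x, f(x)

# Toy quadratic cost functional standing in for f(gamma), with its
# hand-coded gradient.
f = lambda x: (x[0] - 2.0) ** 2 + 4.0 * (x[1] + 1.0) ** 2
grad = lambda x: [2.0 * (x[0] - 2.0), 8.0 * (x[1] + 1.0)]
gamma, cost = steepest_descent(f, grad, [0.0, 0.0])
```

In the paper the "variable" is the level-set representation of the shape, so each gradient step deforms the core boundary rather than a plain vector.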

  5. Optimized probes for dose rate measurements at local government sites and in emergency planning zones and their integration into measurement networks

    International Nuclear Information System (INIS)

    Kuca, Petr; Helebrant, Jan; Cespirova, Irena; Judas, Libor; Skala, Lukas

    2015-01-01

    The results of a security project aimed at the development of a radiation situation monitoring system using optimized probes for dose rate measurements are described. The system is suitable for use at local government sites as well as at other sites. The system includes dose rate measurement probes with the variable configuration functionality (detection part), equipment for data transfer to a central workplace (communication part) and application for collection, storage and administration of the results and their presentation at a website (presentation part). The dosimetric and other operational properties of the probes were tested and the feasibility of their integration into measurement networks using the IMS central application was examined. (orig.)

  6. Reverse line blot probe design and polymerase chain reaction optimization for bloodmeal analysis of ticks from the eastern United States.

    Science.gov (United States)

    Scott, M C; Harmon, J R; Tsao, J I; Jones, C J; Hickling, G J

    2012-05-01

    Determining the host preference of vector ticks is vital to elucidating the eco-epidemiology of the diseases they spread. Detachment of ticks from captured hosts can provide evidence of feeding on those host species, but only for those species that are feasible to capture. Recently developed, highly sensitive molecular assays show great promise in allowing host selection to be determined from minute traces of host DNA that persist in recently molted ticks. Using methods developed in Europe as a starting point, we designed 12S rDNA mitochondrial gene probes suitable for use in a reverse line blot (RLB) assay of ticks feeding on common host species in the eastern United States. This is the first study to use the 12S mitochondrial gene in an RLB bloodmeal assay in North America. The assay combines conventional PCR with a biotin-labeled primer and reverse line blots that can be stripped and rehybridized up to 20 times, making the method less expensive and more straightforward to interpret than previous methods of tick bloodmeal identification. Probes were designed that target the species, genus, genus group, family, order, or class of eight reptile, 13 bird, and 32 mammal hosts. After optimization, the RLB assay correctly identified the current host species for 99% of ticks [Amblyomma americanum (L.) and eight other ixodid tick species] collected directly from known hosts. The method identified previous-host DNA for approximately half of all questing ticks assayed. Multiple bloodmeal determinations were obtained in some instances from feeding and questing ticks; this pattern is consistent with previous RLB studies but requires further investigation. Development of this probe library, suitable for eastern U.S. ecosystems, opens new avenues for eco-epidemiological investigations of this region's tick-host systems.

  7. On the optimal identification of tag sets in time-constrained RFID configurations.

    Science.gov (United States)

    Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel

    2011-01-01

    In Radio Frequency Identification facilities, the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time for the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area, and only for a bounded time (the sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. In addition, an identification strategy based on splitting the set of tags into smaller subsets is considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.
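
The quantities involved, identification within a bounded sojourn time and splitting the tag set into subsets, can be explored with a seeded Monte Carlo sketch of frame-slotted ALOHA, a common RFID anti-collision scheme. This does not reproduce the paper's Markov-chain analysis, and all sizes below are illustrative.

```python
import random

def slots_to_identify(n_tags, frame, rng):
    """Simulate frame-slotted ALOHA: each round, every unidentified tag picks
    a slot uniformly at random; tags alone in a slot (singletons) are
    identified. Returns the number of slots used until all are identified."""
    slots, pending = 0, n_tags
    while pending:
        counts = {}
        for _ in range(pending):
            s = rng.randrange(frame)
            counts[s] = counts.get(s, 0) + 1
        pending -= sum(1 for c in counts.values() if c == 1)
        slots += frame
    return slots

def prob_identified_in_time(n_tags, frame, budget, subsets=1, trials=2000, seed=3):
    """Monte Carlo estimate of P(all tags identified within `budget` slots),
    optionally splitting the set into equal subsets read one after another."""
    rng = random.Random(seed)
    per = n_tags // subsets
    ok = 0
    for _ in range(trials):
        total = sum(slots_to_identify(per, frame, rng) for _ in range(subsets))
        ok += total <= budget
    return ok / trials

p_single = prob_identified_in_time(32, frame=32, budget=160)
p_split = prob_identified_in_time(32, frame=16, budget=160, subsets=2)
```

The budget plays the role of the sojourn time: comparing such estimates across splitting configurations is the simulation analogue of the optimization criterion derived analytically in the paper.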

  8. Transcript profiling of common bean (Phaseolus vulgaris L.) using the GeneChip® Soybean Genome Array: optimizing analysis by masking biased probes

    Directory of Open Access Journals (Sweden)

    Gronwald John W

    2010-05-01

    Full Text Available Abstract Background Common bean (Phaseolus vulgaris L.) and soybean (Glycine max) both belong to the Phaseoleae tribe and share significant coding sequence homology. This suggests that the GeneChip® Soybean Genome Array (soybean GeneChip) may be used for gene expression studies using common bean. Results To evaluate the utility of the soybean GeneChip for transcript profiling of common bean, we hybridized cRNAs purified from nodule, leaf, and root of common bean and soybean in triplicate to the soybean GeneChip. Initial data analysis showed a decreased sensitivity and accuracy of measuring differential gene expression in common bean cross-species hybridization (CSH) GeneChip data compared to that of soybean. We employed a method that masked putative probes targeting inter-species variable (ISV) regions between common bean and soybean. A masking signal intensity threshold was selected that optimized both sensitivity and accuracy of measuring differential gene expression. After masking for ISV regions, the number of differentially-expressed genes identified in common bean was increased by 2.8-fold, reflecting increased sensitivity. Quantitative RT-PCR (qRT-PCR) analysis of 20 randomly selected genes and purine-ureide pathway genes demonstrated an increased accuracy of measuring differential gene expression after masking for ISV regions. We also evaluated masked probe frequency per probe set to gain insight into the sequence divergence pattern between common bean and soybean. The sequence divergence pattern analysis suggested that the genes for basic cellular functions and metabolism were highly conserved between soybean and common bean. Additionally, our results show that some classes of genes, particularly those associated with environmental adaptation, are highly divergent. Conclusions The soybean GeneChip is a suitable cross-species platform for transcript profiling in common bean when used in combination with the masking protocol described.

  9. Training set optimization and classifier performance in a top-down diabetic retinopathy screening system

    Science.gov (United States)

    Wigdahl, J.; Agurto, C.; Murray, V.; Barriga, S.; Soliz, P.

    2013-03-01

    Diabetic retinopathy (DR) affects more than 4.4 million Americans age 40 and over. Automatic screening for DR has been shown to be an efficient and cost-effective way to lower the burden on the healthcare system, by triaging diabetic patients and ensuring timely care for those presenting with DR. Several supervised algorithms have been developed to detect pathologies related to DR, but little work has been done in determining the size of the training set that optimizes an algorithm's performance. In this paper we analyze the effect of the training sample size on the performance of a top-down DR screening algorithm for different types of statistical classifiers. Results are based on partial least squares (PLS), support vector machines (SVM), k-nearest neighbor (kNN), and Naïve Bayes classifiers. Our dataset consisted of digital retinal images collected from a total of 745 cases (595 controls, 150 with DR). We varied the number of normal controls in the training set, while keeping the number of DR samples constant, and repeated the procedure 10 times using randomized training sets to avoid bias. Results show increasing performance in terms of area under the ROC curve (AUC) when the number of DR subjects in the training set increased, with similar trends for each of the classifiers. Of these, PLS and kNN had the highest average AUC. Lower standard deviation and a flattening of the AUC curve give evidence that there is a limit to the learning ability of the classifiers and an optimal number of cases to train on.
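
The shape of such an experiment, performance as a function of training-set size, can be mimicked with a seeded learning-curve sketch using a nearest-centroid classifier on synthetic one-dimensional data. Everything here is a stand-in: the paper's retinal features, classifiers, and AUC metric are not modeled.

```python
import random

def nearest_centroid_acc(train, test):
    """Fit a two-class nearest-centroid classifier and return test accuracy."""
    sums = {0: [0.0, 0], 1: [0.0, 0]}
    for x, y in train:
        sums[y][0] += x
        sums[y][1] += 1
    cent = {y: s / max(n, 1) for y, (s, n) in sums.items()}
    hits = sum(1 for x, y in test
               if min(cent, key=lambda c: abs(x - cent[c])) == y)
    return hits / len(test)

rng = random.Random(11)
# Two overlapping 1-D classes: N(0, 1) for class 0, N(2, 1) for class 1.
draw = lambda y: (rng.gauss(0.0 if y == 0 else 2.0, 1.0), y)
test = [draw(i % 2) for i in range(400)]
curve = [(n, nearest_centroid_acc([draw(i % 2) for i in range(n)], test))
         for n in (4, 16, 64, 256)]
```

With enough training data the accuracy approaches the Bayes limit of the overlapping classes and the curve flattens, the same qualitative behavior the abstract reports for the AUC.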

  10. Optimal structural inference of signaling pathways from unordered and overlapping gene sets.

    Science.gov (United States)

    Acharya, Lipi R; Judeh, Thair; Wang, Guangdi; Zhu, Dongxiao

    2012-02-15

    A plethora of bioinformatics analyses has led to the discovery of numerous gene sets, which can be interpreted as discrete measurements emitted from latent signaling pathways. Their potential to infer signaling pathway structures, however, has not been sufficiently exploited. Existing methods accommodating discrete data do not explicitly consider the signal cascading mechanisms that characterize a signaling pathway. Novel computational methods are thus needed to fully utilize gene sets and broaden the scope from focusing only on pairwise interactions to the more general cascading events in the inference of signaling pathway structures. We propose a gene set based simulated annealing (SA) algorithm for the reconstruction of signaling pathway structures. A signaling pathway structure is a directed graph containing up to a few hundred nodes and many overlapping signal cascades, where each cascade represents a chain of molecular interactions from the cell surface to the nucleus. Gene sets in our context refer to discrete sets of genes participating in signal cascades, the basic building blocks of a signaling pathway, with no prior information about gene orderings in the cascades. From a compendium of gene sets related to a pathway, SA aims to search for signal cascades that characterize the optimal signaling pathway structure. In the search process, the extent of overlap among signal cascades is used to measure the optimality of a structure. Throughout, we treat gene sets as random samples from a first-order Markov chain model. We evaluated the performance of SA in three case studies. In the first study, conducted on 83 KEGG pathways, SA demonstrated a significantly better performance than Bayesian network methods. Since both SA and Bayesian network methods accommodate discrete data, use a 'search and score' network learning strategy and output a directed network, they can be compared in terms of performance and computational time. In the second study, we compared SA and
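
The 'search and score' strategy with SA can be sketched on a miniature analogue of the ordering problem: recover a cascade order from unordered precedence constraints by proposing swaps, always accepting improvements, and accepting worsening moves with a temperature-dependent probability. The cascade and its constraints below are invented, and the score is a crude stand-in for the paper's overlap-based optimality measure.

```python
import math
import random

def sa_order(items, pairs, iters=5000, t0=2.0, cooling=0.999, seed=5):
    """Simulated annealing over orderings: the score is the number of
    observed precedence pairs (a before b) violated by the current order."""
    rng = random.Random(seed)
    order = list(items)
    rng.shuffle(order)

    def score(o):
        pos = {g: i for i, g in enumerate(o)}
        return sum(1 for a, b in pairs if pos[a] > pos[b])

    cur = score(order)
    best, best_val = order[:], cur
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(len(order)), 2)       # propose a swap
        order[i], order[j] = order[j], order[i]
        new = score(order)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new                                 # accept move
            if new < best_val:
                best, best_val = order[:], new
        else:
            order[i], order[j] = order[j], order[i]   # reject: undo swap
        t *= cooling                                  # cool down
    return best, best_val

# Hypothetical cascade A -> B -> C -> D -> E observed only through
# unordered pairwise precedence constraints.
items = list("ABCDE")
pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("A", "C"), ("B", "D")]
order, violations = sa_order(items, pairs)
```

The temperature schedule is what distinguishes this from greedy search: early on, order-worsening swaps are accepted often enough to escape local optima.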

  11. A topology optimization method based on the level set method for the design of negative permeability dielectric metamaterials

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro

    2012-01-01

    This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated optimization problem is to find optimized layouts of a dielectric material that achieve negative permeability. The presence of grayscale areas in the optimized configurations critically affects the performance of metamaterials, positively as well as negatively, but configurations that contain grayscale areas are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative...

  12. COMPARING INTRA- AND INTERENVIRONMENTAL PARAMETERS OF OPTIMAL SETTING IN BREEDING EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Domagoj Šimić

    2004-06-01

    Full Text Available A series of biometrical and quantitative-genetic parameters, not well known in Croatia, are being used for the most important agronomic traits to determine optimal genotype setting within a location as well as among locations. The objectives of the study are to estimate and compare (1) parameters of intra-environment setting: the effective mean square error (EMSE) in lattice design, the relative efficiency (RE) of the lattice design (LD) compared to the randomized complete block design (RCBD), and the repeatability (Rep) of a plot value; and (2) operative heritability (h2) as a parameter of inter-environment setting in an experiment with 72 maize hybrids. Trials were set up in four environments (two locations in two years) evaluating grain yield and stalk rot. EMSE values corresponded across environments for both traits, while the estimates of RE of LD varied inconsistently over environments and traits. Rep estimates differed more over environments than over traits. Rep values did not correspond with h2 estimates: Rep estimates for stalk rot were higher than those for grain yield, while h2 for grain yield was higher than for stalk rot in all instances. Our results suggest that, due to the importance of genotype × environment interaction, multi-environment trials are needed for both traits. If the experimental framework must be reduced for economic or other reasons, decreasing the number of locations in a year rather than the number of years of investigation is recommended.

  13. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Full Text Available Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the combinations of features escalate exponentially as the number of features increases. Unfortunately, in data mining, as in other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since it takes seemingly forever to use brute force in exhaustively trying every possible combination of features, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing the Swarm Search over some high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experiment results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
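The wrapper design described above, a plug-in fitness function driven by a stochastic search, can be sketched as follows. The fitness here is a synthetic stand-in for cross-validated classifier accuracy (features 0 and 3 are "informative" by construction, with a small penalty per selected feature), and the single-flip search is a minimal placeholder for the metaheuristics the paper plugs in:

```python
import random

def stochastic_wrapper(n_features, fitness, iters=300, seed=1):
    """Wrapper feature selection: flip one feature in or out at a time,
    keeping the subset whenever the plug-in fitness improves."""
    rng = random.Random(seed)
    subset = set()
    best_fit = fitness(subset)
    for _ in range(iters):
        cand = set(subset)
        f = rng.randrange(n_features)
        cand.symmetric_difference_update({f})  # flip feature f in/out
        if fitness(cand) > best_fit:
            subset, best_fit = cand, fitness(cand)
    return subset

# Synthetic fitness standing in for classifier accuracy: reward the two
# informative features, penalize subset size (hypothetical values).
INFORMATIVE = {0, 3}

def fitness(subset):
    return len(subset & INFORMATIVE) - 0.1 * len(subset)

best = stochastic_wrapper(6, fitness)
```

Because the fitness is a black box, any classifier score and any metaheuristic proposal rule can be dropped in without changing the search loop, which is the flexibility the abstract emphasizes.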

  14. Application of Fuzzy Sets for the Improvement of Routing Optimization Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    Mattas Konstantinos

    2016-12-01

    Full Text Available The determination of the optimal circular path has become widely known for its difficulty in producing a solution and for its numerous applications in the organization and management of passenger and freight transport. It is a mathematical combinatorial optimization problem for which several deterministic and heuristic models have been developed in recent years, applicable to route organization issues, passenger and freight transport, storage and distribution of goods, waste collection, supply and control of terminals, as well as human resource management. The scope of the present paper is the development, with the use of fuzzy sets, of a practical, comprehensible and speedy heuristic algorithm for improving the ability of the classical deterministic algorithms to identify the optimal, symmetrical or non-symmetrical, circular route. The proposed fuzzy heuristic algorithm is compared to the corresponding deterministic ones with regard to the deviation of the proposed solution from the best known solution and the complexity of the calculations needed to obtain this solution. It is shown that the use of fuzzy sets reduced by up to 35% the deviation of the solution identified by the classical deterministic algorithms from the best known solution.

  15. Optimal energy window setting depending on the energy resolution for radionuclides used in gamma camera imaging. Planar imaging evaluation

    International Nuclear Information System (INIS)

    Kojima, Akihiro; Watanabe, Hiroyuki; Arao, Yuichi; Kawasaki, Masaaki; Takaki, Akihiro; Matsumoto, Masanori

    2007-01-01

    In this study, we examined whether the optimal energy window (EW) setting depending on the energy resolution of a gamma camera, which we previously proposed, is valid for planar scintigraphic imaging using Tl-201, Ga-67, Tc-99m, and I-123. Image acquisitions for line sources and paper sheet phantoms containing each radionuclide were performed in air and with scattering materials. For the six photopeaks, excluding that of the Hg-201 characteristic x-rays, the conventional 20%-width energy window (EW20%) setting and the optimal energy window (optimal EW) setting (15%-width below 100 keV and 13%-width above 100 keV) were compared. For the Hg-201 characteristic x-ray photopeak, the conventional on-peak EW20% setting was compared with the off-peak EW setting (73 keV-25%) and the wider off-peak EW setting (77 keV-29%). Image-count ratio (defined as the ratio of the image counts obtained with an EW to the total image counts obtained with the EW covering the whole photopeak for a line source in air), image quality, spatial resolutions (full width at half maximum (FWHM) and full width at tenth maximum (FWTM) values), count-profile curves, and defect-contrast values were compared between the conventional EW setting and the optimal EW setting. Except for the Hg-201 characteristic x-rays, the image-count ratios were 94-99% for the EW20% setting, but 78-89% for the optimal EW setting. However, the optimal EW setting reduced the scatter fraction (defined as the scattered-to-primary counts ratio) effectively, as compared with the EW20% setting. Consequently, all the images with the optimal EW setting gave better image quality than those with the EW20% setting. For the Hg-201 characteristic x-rays, the off-peak EW setting showed great improvement in image quality in comparison with the EW20% setting, and the wider off-peak EW setting gave the best results. 
In conclusion, from our planar imaging study it was shown that although the optimal EW setting proposed by us gives less image-count ratio by

  16. Assessing the optimality of ASHRAE climate zones using high resolution meteorological data sets

    Science.gov (United States)

    Fils, P. D.; Kumar, J.; Collier, N.; Hoffman, F. M.; Xu, M.; Forbes, W.

    2017-12-01

    Energy consumed by built infrastructure constitutes a significant fraction of the nation's energy budget. According to a 2015 US Energy Information Administration report, 41% of the energy used in the US went to residential and commercial buildings. Additional research has shown that 32% of commercial building energy goes into heating and cooling the building. The American National Standards Institute and the American Society of Heating, Refrigerating and Air-Conditioning Engineers Standard 90.1 provides climate zones for the current state of practice, since heating and cooling demands are strongly influenced by spatio-temporal weather variations. For this reason, we have been assessing the optimality of the climate zones using high resolution daily climate data from NASA's DAYMET database. We analyzed time series of meteorological data sets for all ASHRAE climate zones from 1980 through 2016. We computed the mean, standard deviation, and other statistics for a set of meteorological variables (solar radiation, maximum and minimum temperature) within each zone. By plotting all the zonal statistics, we analyzed patterns and trends in those data over the past 36 years. We compared the mean of each zone to its standard deviation to determine the range of spatial variability that exists within each zone. If the band around the mean is too large, it indicates that regions in the zone experience a wide range of weather conditions and perhaps a common set of building design guidelines would lead to a non-optimal energy consumption scenario. In this study we have observed strong variation among the different climate zones. Some have shown consistent patterns over the past 36 years, indicating that the zone was well constructed, while others have deviated greatly from their mean, indicating that the zone needs to be reconstructed. We also looked at redesigning the climate zones based on high resolution climate data. 
We are using building simulations models like EnergyPlus to develop
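The zone-level variability check described above, comparing each zone's mean to its standard deviation, can be sketched with stdlib tools; the zone labels, temperature samples, and the 0.5 coefficient-of-variation threshold are hypothetical:

```python
from statistics import mean, stdev

# Hypothetical daily maximum-temperature samples per climate zone (degrees C).
zones = {
    "4A": [12.1, 13.4, 11.8, 14.0, 12.7],   # tight band around the mean
    "6B": [2.0, 15.5, -4.2, 18.9, 7.3],     # wide band around the mean
}

def zone_stats(samples):
    m, s = mean(samples), stdev(samples)
    # Coefficient of variation: spread relative to the mean.
    return {"mean": m, "std": s, "cv": s / abs(m)}

stats = {z: zone_stats(v) for z, v in zones.items()}

# A large spread around the mean suggests the zone groups dissimilar
# climates and may need to be reconstructed.
heterogeneous = [z for z, st in stats.items() if st["cv"] > 0.5]
```

Zones flagged as heterogeneous are candidates for redesign, since one set of building design guidelines is unlikely to be near-optimal across such a wide range of weather conditions.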

  17. Application of Bayesian statistical decision theory to the optimization of generating set maintenance

    International Nuclear Information System (INIS)

    Procaccia, H.; Cordier, R.; Muller, S.

    1994-07-01

    Statistical decision theory could be an alternative for the optimization of preventive maintenance periodicity. In effect, this theory concerns the situation in which a decision maker has to make a choice among a set of reasonable decisions, and where the loss associated with a given decision depends on a probabilistic risk, called the state of nature. In the case of maintenance optimization, the decisions to be analyzed are the different periodicities proposed by the experts, given the observed feedback experience; the states of nature are the associated failure probabilities; and the losses are the expectations of the induced cost of maintenance and of the consequences of the failures. As failure probabilities concern rare events, at the ultimate state of RCM analysis (failure of a sub-component), and as the expected foreseeable behaviour of equipment has to be evaluated by experts, a Bayesian approach is successfully used to compute the states of nature. In Bayesian decision theory, a prior distribution for failure probabilities is modeled from expert knowledge and is combined with the sparse stochastic information provided by feedback experience, giving a posterior distribution of failure probabilities. The optimized decision is the decision that minimizes the expected loss over the posterior distribution. This methodology has been applied to inspection and maintenance optimization of cylinders of diesel generator engines of 900 MW nuclear plants. In these plants, auxiliary electric power is supplied by 2 redundant diesel generators which are tested every 2 weeks for about 1 hour. Until now, during the yearly refueling of each plant, one endoscopic inspection of diesel cylinders is performed, and every 5 operating years all cylinders are replaced. RCM has shown that cylinder failures could be critical. So Bayesian decision theory has been applied, taking into account expert opinions and the possibility of aging when maintenance periodicity is extended. (authors). 8 refs., 5 figs., 1 tab
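The decision rule described above, minimizing expected loss over a posterior distribution for the failure probability, can be sketched as follows. The Beta prior/posterior counts and all cost figures are hypothetical, not taken from the diesel generator study:

```python
import random
from statistics import mean

def posterior_samples(alpha, beta_, n=10000, seed=0):
    """Monte Carlo samples from a Beta posterior for the failure probability."""
    rng = random.Random(seed)
    return [rng.betavariate(alpha, beta_) for _ in range(n)]

def expected_loss(maint_cost, failure_cost, p_fail_samples):
    # Loss = maintenance cost + failure consequence weighted by posterior risk.
    return maint_cost + failure_cost * mean(p_fail_samples)

# Expert prior Beta(1, 99) (~1% failure rate) updated with feedback
# experience of 1 failure in 50 inspections -> posterior Beta(2, 148).
samples = posterior_samples(2, 148)

# Candidate periodicities proposed by experts, with hypothetical costs:
# longer periodicity saves maintenance cost but raises failure consequences.
decisions = {
    "inspect yearly":        {"maint": 10.0, "fail": 1000.0},
    "inspect every 2 years": {"maint": 5.0,  "fail": 2500.0},
}

best = min(decisions, key=lambda d: expected_loss(
    decisions[d]["maint"], decisions[d]["fail"], samples))
```

With these numbers the posterior mean failure probability is about 1.3%, so the cheaper-but-riskier periodicity carries the larger expected loss and the yearly inspection is selected.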

  18. On the choice of an optimal value-set of qualitative attributes for information retrieval in databases

    International Nuclear Information System (INIS)

    Ryjov, A.; Loginov, D.

    1994-01-01

    The problem of choosing an optimal set of significances of qualitative attributes for information retrieval in databases is addressed. Given a particular database, a set of significances is called optimal if it results in the minimization of losses of information and of information noise for information retrieval in the database. Obviously, such a set of significances depends on the statistical parameters of the database. Software is described which, on the basis of the statistical parameters of a given database, calculates the losses of information and the information noise for arbitrary sets of significances of qualitative attributes. The software also permits comparison of various sets of significances of qualitative attributes and selection of the optimal set

  19. Development and validation of a numerical model for cross-section optimization of a multi-part probe for soft tissue intervention.

    Science.gov (United States)

    Frasson, L; Neubert, J; Reina, S; Oldfield, M; Davies, B L; Rodriguez Y Baena, F

    2010-01-01

    The popularity of minimally invasive surgical procedures is driving the development of novel, safer and more accurate surgical tools. In this context a multi-part probe for soft tissue surgery is being developed in the Mechatronics in Medicine Laboratory at Imperial College, London. This study reports an optimization procedure using finite element methods, for the identification of an interlock geometry able to limit the separation of the segments composing the multi-part probe. An optimal geometry was obtained and the corresponding three-dimensional finite element model validated experimentally. Simulation results are shown to be consistent with the physical experiments. The outcome of this study is an important step in the provision of a novel miniature steerable probe for surgery.

  20. Sets of RNA repeated tags and hybridization-sensitive fluorescent probes for distinct images of RNA in a living cell.

    Directory of Open Access Journals (Sweden)

    Takeshi Kubota

    Full Text Available BACKGROUND: Imaging the behavior of RNA in a living cell is a powerful means for understanding RNA functions and acquiring spatiotemporal information in a single cell. For more distinct RNA imaging in a living cell, a more effective chemical method to fluorescently label RNA is now required. In addition, development of the technology labeling with different colors for different RNA would make it easier to analyze plural RNA strands expressing in a cell. METHODOLOGY/PRINCIPAL FINDINGS: Tag technology for RNA imaging in a living cell has been developed based on the unique chemical functions of exciton-controlled hybridization-sensitive oligonucleotide (ECHO probes. Repetitions of selected 18-nucleotide RNA tags were incorporated into the mRNA 3'-UTR. Pairs with complementary ECHO probes exhibited hybridization-sensitive fluorescence emission for the mRNA expressed in a living cell. The mRNA in a nucleus was detected clearly as fluorescent puncta, and the images of the expression of two mRNAs were obtained independently and simultaneously with two orthogonal tag-probe pairs. CONCLUSIONS/SIGNIFICANCE: A compact and repeated label has been developed for RNA imaging in a living cell, based on the photochemistry of ECHO probes. The pairs of an 18-nt RNA tag and the complementary ECHO probes are highly thermostable, sequence-specifically emissive, and orthogonal to each other. The nucleotide length necessary for one tag sequence is much shorter compared with conventional tag technologies, resulting in easy preparation of the tag sequences with a larger number of repeats for more distinct RNA imaging.

  1. Adaptive Conflict-Free Optimization of Rule Sets for Network Security Packet Filtering Devices

    Directory of Open Access Journals (Sweden)

    Andrea Baiocchi

    2015-01-01

    Full Text Available Packet filtering and processing rules management in firewalls and security gateways has become commonplace in increasingly complex networks. On one side there is a need to maintain the logic of high level policies, which requires administrators to implement and update a large amount of filtering rules while keeping them conflict-free, that is, avoiding security inconsistencies. On the other side, traffic adaptive optimization of large rule lists is useful for general purpose computers used as filtering devices, without specific designed hardware, to face growing link speeds and to harden filtering devices against DoS and DDoS attacks. Our work joins the two issues in an innovative way and defines a traffic adaptive algorithm to find conflict-free optimized rule sets, by relying on information gathered with traffic logs. The proposed approach suits current technology architectures and exploits available features, like traffic log databases, to minimize the impact of ACO development on the packet filtering devices. We demonstrate the benefit entailed by the proposed algorithm through measurements on a test bed made up of real-life, commercial packet filtering devices.
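One common form of traffic-adaptive, conflict-free rule optimization, promoting frequently hit rules toward the front of the list without ever swapping them past a rule they conflict with, can be sketched as follows. The single-field port-range rules and log-derived hit counts are illustrative, not the paper's algorithm:

```python
def conflict(r1, r2):
    """Two rules conflict if their match ranges intersect and their
    actions differ; swapping such rules would change first-match semantics."""
    (lo1, hi1, act1), (lo2, hi2, act2) = r1, r2
    return lo1 <= hi2 and lo2 <= hi1 and act1 != act2

def optimize(rules, hits):
    """Bubble high-traffic rules forward, but never across a conflicting
    rule, so the filtering policy remains unchanged."""
    rules = list(rules)
    changed = True
    while changed:
        changed = False
        for i in range(len(rules) - 1):
            a, b = rules[i], rules[i + 1]
            if hits[b] > hits[a] and not conflict(a, b):
                rules[i], rules[i + 1] = b, a
                changed = True
    return rules

# Rules as (port_lo, port_hi, action); hit counts from traffic logs.
rules = [(1, 10, "deny"), (20, 30, "allow"), (5, 8, "allow")]
hits = {rules[0]: 3, rules[1]: 90, rules[2]: 50}
opt = optimize(rules, hits)
```

Here the busy (20, 30, "allow") rule moves to the front because it conflicts with nothing, while (5, 8, "allow") stays behind the overlapping (1, 10, "deny") rule despite its higher hit count, preserving the policy.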

  2. A New Methodology to Select the Preferred Solutions from the Pareto-optimal Set: Application to Polymer Extrusion

    International Nuclear Information System (INIS)

    Ferreira, Jose C.; Gaspar-Cunha, Antonio; Fonseca, Carlos M.

    2007-01-01

    Most real-world optimization problems involve multiple, usually conflicting, optimization criteria. Generating Pareto optimal solutions plays an important role in multi-objective optimization, and the problem is considered to be solved when the Pareto optimal set is found, i.e., the set of non-dominated solutions. Multi-Objective Evolutionary Algorithms based on the principle of Pareto optimality are designed to produce the complete set of non-dominated solutions. However, this is not always enough, since the aim is not only to know the Pareto set but also to obtain one solution from this Pareto set. Thus, the definition of a methodology able to select a single solution from the set of non-dominated solutions (or a region of the Pareto frontier), taking into account the preferences of a Decision Maker (DM), is necessary. A different method, based on a weighted stress function, is proposed. It is able to integrate the user's preferences in order to find the best region of the Pareto frontier in accordance with these preferences. This method was tested on some benchmark test problems, with two and three criteria, and on a polymer extrusion problem. The methodology is able to efficiently select the best Pareto-frontier region for the specified relative importance of the criteria
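The two stages described above, computing the non-dominated set and then selecting a single solution according to the DM's preferences, can be sketched as follows; the weighted-sum selection is a simple stand-in for the paper's weighted stress function, not the authors' method:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every criterion,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_set(points):
    """Keep only the non-dominated points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def pick_preferred(front, weights):
    # Weighted scalarization expressing the DM's relative importance
    # of the criteria (a placeholder for the weighted stress function).
    return min(front, key=lambda p: sum(w * x for w, x in zip(weights, p)))

points = [(1, 5), (2, 2), (5, 1), (4, 4)]
front = pareto_set(points)                   # (4, 4) is dominated by (2, 2)
choice = pick_preferred(front, (0.8, 0.2))   # DM favors criterion 1
```

With weights (0.8, 0.2) the selection lands in the region of the frontier that is best on the first criterion, which is the behavior the preference-integration step is meant to produce.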

  3. Optimal allocation of the limited oral cholera vaccine supply between endemic and epidemic settings.

    Science.gov (United States)

    Moore, Sean M; Lessler, Justin

    2015-10-06

    The World Health Organization (WHO) recently established a global stockpile of oral cholera vaccine (OCV) to be preferentially used in epidemic response (reactive campaigns) with any vaccine remaining after 1 year allocated to endemic settings. Hence, the number of cholera cases or deaths prevented in an endemic setting represents the minimum utility of these doses, and the optimal risk-averse response to any reactive vaccination request (i.e. the minimax strategy) is one that allocates the remaining doses between the requested epidemic response and endemic use in order to ensure that at least this minimum utility is achieved. Using mathematical models, we find that the best minimax strategy is to allocate the majority of doses to reactive campaigns, unless the request came late in the targeted epidemic. As vaccine supplies dwindle, the case for reactive use of the remaining doses grows stronger. Our analysis provides a lower bound for the amount of OCV to keep in reserve when responding to any request. These results provide a strategic context for the fulfilment of requests to the stockpile, and define allocation strategies that minimize the number of OCV doses that are allocated to suboptimal situations. © 2015 The Authors.
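The risk-averse allocation described above, choosing the epidemic/endemic split that maximizes the worst-case total utility, can be sketched as a small grid search; the per-dose utilities, scenario list, and campaign size are hypothetical, not the paper's fitted model:

```python
def minimax_allocation(total, scenarios, endemic_per_dose):
    """Pick the epidemic allocation whose worst-case total utility
    (over uncertain epidemic scenarios) is largest; leftover doses
    go to endemic use, which provides the guaranteed minimum utility."""
    def worst_case(epi):
        epidemic = min(rate * min(epi, need) for rate, need in scenarios)
        endemic = endemic_per_dose * (total - epi)
        return epidemic + endemic
    return max(range(total + 1), key=worst_case)

# Hypothetical scenarios: (cases averted per dose, doses usable in the
# campaign), e.g. a request arriving early vs late in the epidemic.
scenarios = [(3.0, 70), (1.5, 70)]
split = minimax_allocation(100, scenarios, endemic_per_dose=1.0)
```

Even under the pessimistic late-arrival scenario, each reactive dose here averts more cases than an endemic dose up to the campaign's absorption limit, so the minimax split sends the majority of doses to the reactive campaign, consistent with the abstract's finding.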

  4. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    Science.gov (United States)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

    In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, the elastic media can also be defined by Lamé constants and density, or by impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. From the results of advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, the staggered-grid finite difference method was applied to simulate an OBS survey. For the inversion, the l2-norm was set as the objective function. Further, the accurate computation of the gradient direction was performed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate inversion result with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of

  5. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review

    Science.gov (United States)

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J.; Mojaza, Matin

    2015-12-01

    A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme—this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale-setting approaches suggested in the literature. As a step forward, in the present review, we present an in-depth discussion of two well-established scale-setting methods based on RGI. One is the ‘principle of maximum conformality’ (PMC), in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the ‘principle of minimum sensitivity’ (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables, R_{e+e-} and Γ(H → b b̄), up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on

  6. Existence theorem and optimality conditions for a class of convex semi-infinite problems with noncompact index sets

    Directory of Open Access Journals (Sweden)

    Olga Kostyukova

    2017-11-01

    Full Text Available The paper is devoted to study of a special class of semi-infinite problems arising in nonlinear parametric Semi-infinite Programming, when the differential properties of the solutions are being studied. These problems are convex and possess noncompact index sets. In the paper, we present conditions guaranteeing the existence of optimal solutions, and prove new optimality criterion. An example illustrating the obtained results is presented.

  7. Dimensioning of optimal probe circuits for the non-destructive testing of materials by eddy-current using Buschbeck-Meinke chart

    International Nuclear Information System (INIS)

    Ott, A.

    1982-01-01

    By application of a modified form of the Buschbeck-Meinke diagram, known from transmission-line theory, easy-to-use dimensioning rules can be given for the probe circuits of single-frequency eddy-current test instruments. Dimensioning is found for circuits that work with amplitude or phase measurements and that optimally suppress the disturbance parameters in certain regions. In a similar way one can determine the dimensioning with which the measured quantity causes the highest possible signal change. (orig.) [de

  8. Optimization of GEANT4 settings for Proton Pencil Beam Scanning simulations using GATE

    Energy Technology Data Exchange (ETDEWEB)

    Grevillot, Loic, E-mail: loic.grevillot@gmail.co [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); IBA, B-1348 Louvain-la-Neuve (Belgium); Frisson, Thibault [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); Zahra, Nabil [Universite de Lyon, F-69622 Lyon (France); IPNL, CNRS UMR 5822, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); Bertrand, Damien; Stichelbaut, Frederic [IBA, B-1348 Louvain-la-Neuve (Belgium); Freud, Nicolas [Universite de Lyon, F-69622 Lyon (France); CNDRI, INSA-Lyon, F-69621 Villeurbanne Cedex (France); Sarrut, David [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France)

    2010-10-15

    This study reports the investigation of different GEANT4 settings for proton therapy applications in the context of Treatment Planning System comparisons. The GEANT4.9.2 release was used through the GATE platform. We focused on the Pencil Beam Scanning delivery technique, which allows for intensity modulated proton therapy applications. The most relevant options and parameters (range cut, step size, database binning) for the simulation that influence the dose deposition were investigated, in order to determine a robust, accurate and efficient simulation environment. In this perspective, simulations of depth-dose profiles and transverse profiles at different depths and energies between 100 and 230 MeV have been assessed against reference measurements in water and PMMA. These measurements were performed in Essen, Germany, with the IBA dedicated Pencil Beam Scanning system, using Bragg-peak chambers and radiochromic films. GEANT4 simulations were also compared to the PHITS.2.14 and MCNPX.2.5.0 Monte Carlo codes. Depth-dose simulations reached 0.3 mm range accuracy compared to NIST CSDA ranges, with a dose agreement of about 1% over a set of five different energies. The transverse profiles simulated using the different Monte Carlo codes showed discrepancies, with up to 15% difference in beam widening between GEANT4 and MCNPX in water. An 8% difference between the GEANT4 multiple scattering and single scattering algorithms was observed. The simulations showed the inability of reproducing the measured transverse dose spreading with depth in PMMA, corroborating the fact that GEANT4 underestimates the lateral dose spreading. GATE was found to be a very convenient simulation environment to perform this study. A reference physics-list and an optimized parameters-list have been proposed. Satisfactory agreement against depth-dose profile measurements was obtained. The simulation of transverse profiles using different Monte Carlo codes showed significant deviations. 
This point

  9. Optimization of super-resolution processing using incomplete image sets in PET imaging.

    Science.gov (United States)

    Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2008-12-01

    Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purpose, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. 
Line profiles of
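    The combining step described above, interleaving low-resolution images reconstructed at sub-pixel grid shifts into one high-resolution matrix, is commonly called shift-and-add. A minimal numpy sketch of that idea follows; the function name and the 2 x 2 toy images are hypothetical, this is not the authors' PET reconstruction code, and the paper's ISR-1/ISR-2 variants would combine only a subset of the shifted images rather than the complete set shown here.

    ```python
    import numpy as np

    def shift_and_add(lowres_images, n):
        """Interleave an n x n set of sub-pixel-shifted low-resolution
        images into one high-resolution image (shift-and-add SR).

        lowres_images[(di, dj)] is the image reconstructed with the
        grid shifted by (di/n, dj/n) of a pixel.
        """
        h, w = next(iter(lowres_images.values())).shape
        hires = np.zeros((h * n, w * n))
        for (di, dj), img in lowres_images.items():
            # each shifted image fills one interleaved sub-lattice
            hires[di::n, dj::n] = img
        return hires

    # 2 x 2 example: four 2x2 images combined into one 4x4 image
    imgs = {(di, dj): np.full((2, 2), 10 * di + dj)
            for di in range(2) for dj in range(2)}
    hr = shift_and_add(imgs, 2)
    ```

    Dropping entries from `imgs` before the call mimics the ISR idea of building the SR image from a subset of the low-resolution images.
    
    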

  10. Optimally setting up directed searches for continuous gravitational waves in Advanced LIGO O1 data

    Science.gov (United States)

    Ming, Jing; Papa, Maria Alessandra; Krishnan, Badri; Prix, Reinhard; Beer, Christian; Zhu, Sylvia J.; Eggenstein, Heinz-Bernd; Bock, Oliver; Machenschalk, Bernd

    2018-02-01

    In this paper we design a search for continuous gravitational waves from three supernova remnants: Vela Jr., Cassiopeia A (Cas A) and G347.3. These systems might harbor rapidly rotating neutron stars emitting quasiperiodic gravitational radiation detectable by the advanced LIGO detectors. Our search is designed to use the volunteer computing project Einstein@Home for a few months and assumes the sensitivity and duty cycles of the advanced LIGO detectors during their first science run. For all three supernova remnants, the sky positions of their central compact objects are well known but the frequency and spin-down rates of the neutron stars are unknown which makes the searches computationally limited. In a previous paper we have proposed a general framework for deciding on what target we should spend computational resources and in what proportion, what frequency and spin-down ranges we should search for every target, and with what search setup. Here we further expand this framework and apply it to design a search directed at detecting continuous gravitational wave signals from the most promising three supernova remnants identified as such in the previous work. Our optimization procedure yields broad frequency and spin-down searches for all three objects, at an unprecedented level of sensitivity: The smallest detectable gravitational wave strain h0 for Cas A is expected to be 2 times smaller than the most sensitive upper limits published to date, and our proposed search, which was set up and ran on the volunteer computing project Einstein@Home, covers a much larger frequency range.

  11. A two-level strategy to realize life-cycle production optimization in an operational setting

    NARCIS (Netherlands)

    Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.

    2012-01-01

    We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles

  12. A two-level strategy to realize life-cycle production optimization in an operational setting

    NARCIS (Netherlands)

    Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.

    2013-01-01

    We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles

  13. Simulation of neuro-fuzzy model for optimization of combine header setting

    Directory of Open Access Journals (Sweden)

    S Zareei

    2016-09-01

    of reel tine bar from cutter bar and vertical distance of reel tine bar from cutter bar could be recommended to minimize header loss. Conclusions In the final step, the designed controller was simulated in SIMULINK. The controller can change the settings of the header components according to their impact on gathering loss; in each step it compares the gathering loss with the optimal value and, if it exceeds the optimum, changes the settings again. The simulation results were judged satisfactory.

  14. Wind-break walls with optimized setting angles for natural draft dry cooling tower with vertical radiators

    International Nuclear Information System (INIS)

    Ma, Huan; Si, Fengqi; Kong, Yu; Zhu, Kangping; Yan, Wensheng

    2017-01-01

    Highlights: • Aerodynamic field around dry cooling tower is presented with numerical model. • Performances of cooling deltas are figured out by air inflow velocity analysis. • Setting angles of wind-break walls are optimized to improve cooling performance. • Optimized walls can reduce the interference on air inflow at low wind speeds. • Optimized walls create stronger outside secondary flow at high wind speeds. - Abstract: To get larger cooling performance enhancement for natural draft dry cooling tower with vertical cooling deltas under crosswind, setting angles of wind-break walls were optimized. Considering specific structure of each cooling delta, an efficient numerical model was established and validated by some published results. Aerodynamic fields around cooling deltas under various crosswind speeds were presented, and outlet water temperatures of the two columns of cooling delta were exported as well. It was found that for each cooling delta, there was a difference in cooling performance between the two columns, which is closely related to the characteristic of main airflow outside the tower. Using the present model, air inflow deviation angles at cooling deltas’ inlet were calculated, and the effects of air inflow deviation on outlet water temperatures of the two columns for corresponding cooling delta were explained in detail. Subsequently, at cooling deltas’ inlet along radial direction of the tower, setting angles of wind-break walls were optimized equal to air inflow deviation angles when no airflow separation appeared outside the tower, while equal to zero when outside airflow separation occurred. In addition, wind-break walls with optimized setting angles were verified to be extremely effective, compared to the previous radial walls.

  15. Theory of NMR probe design

    International Nuclear Information System (INIS)

    Schnall, M.D.

    1988-01-01

    The NMR probe is the intrinsic part of the NMR system which allows transmission of a stimulus to a sample and the reception of the resulting signal from the sample. NMR probes are used in both imaging and spectroscopy. Optimal probe design is important to the production of adequate signal/noise. It is important for anyone using NMR techniques to understand how NMR probes work and how to optimize probe design.

  16. Population health management as a strategy for creation of optimal healing environments in worksite and corporate settings.

    Science.gov (United States)

    Chapman, Larry S; Pelletier, Kenneth R

    2004-01-01

    This paper provides an overview of a population health management (PHM) approach to the creation of optimal healing environments (OHEs) in worksite and corporate settings. It presents a framework for consideration as the context for potential research projects to examine the health, well-being, and economic effects of a set of newer "virtual" prevention interventions operating in an integrated manner in worksite settings. The main topics discussed are the fundamentals of PHM, including basic terminology and core principles, a description of PHM core technology, and the implications of a PHM approach to creating OHEs.

  17. Optimization to the Culture Conditions for Phellinus Production with Regression Analysis and Gene-Set Based Genetic Algorithm

    Science.gov (United States)

    Li, Zhongwei; Xin, Yuezhen; Wang, Xun; Sun, Beibei; Xia, Shengyu; Li, Hui

    2016-01-01

    Phellinus is a fungus known as one of the elemental components in drugs used to prevent cancers. With the purpose of finding optimized culture conditions for Phellinus production in the laboratory, numerous single-factor experiments were conducted and a large volume of experimental data was generated. In this work, we use the data collected from these experiments for regression analysis, obtaining a mathematical model that predicts Phellinus production. Subsequently, a gene-set based genetic algorithm is developed to optimize the values of the parameters involved in the culture conditions, including inoculum size, pH value, initial liquid volume, temperature, seed age, fermentation time, and rotation speed. The optimized parameter values are in accordance with biological experimental results, which indicates that our method has good predictive power for culture condition optimization. PMID:27610365
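    The pipeline in this abstract (fit a regression model to single-factor data, then search its input parameters with a genetic algorithm) can be illustrated with a plain real-coded GA. Everything below is a hedged sketch: the fitness function is a hypothetical stand-in for the paper's regression model, the three parameter bounds and target values are invented for illustration, and the paper's gene-set based encoding is not reproduced.

    ```python
    import random

    # Bounds for three illustrative culture parameters (hypothetical):
    # pH, temperature (deg C), fermentation time (days)
    BOUNDS = [(4.0, 8.0), (20.0, 35.0), (5.0, 15.0)]

    def fitness(ind):
        """Stand-in for a regression model of Phellinus yield:
        a smooth surrogate peaking at pH 6, 28 degC, 10 days."""
        targets = [6.0, 28.0, 10.0]
        return -sum((x - t) ** 2 for x, t in zip(ind, targets))

    def random_individual():
        return [random.uniform(lo, hi) for lo, hi in BOUNDS]

    def crossover(a, b):
        # blend crossover: child is the midpoint of the two parents
        return [(x + y) / 2 for x, y in zip(a, b)]

    def mutate(ind, rate=0.2):
        # Gaussian perturbation, clipped back into the bounds
        return [min(hi, max(lo, x + random.gauss(0, 0.5 * (hi - lo) * rate)))
                if random.random() < rate else x
                for x, (lo, hi) in zip(ind, BOUNDS)]

    def evolve(pop_size=40, generations=60):
        pop = [random_individual() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]        # truncation selection
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            pop = parents + children              # elitist replacement
        return max(pop, key=fitness)

    random.seed(0)
    best = evolve()   # parameters near the surrogate model's optimum
    ```

    In the paper the fitness would instead be the regression model fitted to the experimental data, and the encoding would follow the gene-set scheme.
    
    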

  18. The Bayesian statistical decision theory applied to the optimization of generating set maintenance

    International Nuclear Information System (INIS)

    Procaccia, H.; Cordier, R.; Muller, S.

    1994-11-01

    The difficulty in the RCM methodology is the allocation of a new periodicity of preventive maintenance for a piece of equipment once a critical failure has been identified: until now this new allocation has been based on engineering judgment, and one must wait for a full cycle of feedback experience before validating it. Statistical decision theory could be a more rational alternative for the optimization of preventive maintenance periodicity. This methodology has been applied to the inspection and maintenance optimization of the cylinders of diesel generator engines in 900 MW nuclear plants, and has shown that the previous preventive maintenance periodicity can be extended. (authors). 8 refs., 5 figs

  19. Social welfare and the Affordable Care Act: is it ever optimal to set aside comparative cost?

    Science.gov (United States)

    Mortimer, Duncan; Peacock, Stuart

    2012-10-01

    The creation of the Patient-Centered Outcomes Research Institute (PCORI) under the Affordable Care Act has set comparative effectiveness research (CER) at centre stage of US health care reform. Comparative cost analysis has remained marginalised, and it now appears unlikely that the PCORI will require comparative cost data to be collected as an essential component of CER. In this paper, we review the literature to identify ethical and distributional objectives that might motivate calls to set priorities without regard to comparative cost. We then present argument and evidence to consider whether there is any plausible set of objectives and constraints against which priorities can be set without reference to comparative cost. We conclude that to set aside comparative cost, even after accounting for ethical and distributional constraints, would truly be to act as if money is no object. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Bayesian models of cognition revisited: Setting optimality aside and letting data drive psychological theory.

    Science.gov (United States)

    Tauber, Sean; Navarro, Daniel J; Perfors, Amy; Steyvers, Mark

    2017-07-01

    Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Optimizing the fabrication of aluminum-coated fiber probes and their application to optical near-field lithography

    DEFF Research Database (Denmark)

    Madsen, S; Holme, NCR; Ramanujam, PS

    1998-01-01

    in terms of roughness and the presence of leaking holes in the coating. We report on how the quality of the coating depends on parameters such as deposition rate and background pressure during evaporation. We have used aluminum-coated fiber probes in lithographical studies of different materials, like side...

  2. Optimization of a Solid-State Electron Spin Qubit Using Gate Set Tomography (Open Access, Publisher’s Version)

    Science.gov (United States)

    2016-10-13

    and addressed when the qubit is used within a fault-tolerant quantum computation scheme. 1. Introduction One of the main challenges in the physical...supplied in the supplementary material. Additionally, we have supplied the data files constructed from the experiments, along with the Python notebook used to...New J. Phys. 18 (2016) 103018 doi:10.1088/1367-2630/18/10/103018 PAPER Optimization of a solid-state electron spin qubit using gate set tomography

  3. Optimization of the size and shape of the set-in nozzle for a PWR reactor pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Murtaza, Usman Tariq, E-mail: maniiut@yahoo.com; Javed Hyder, M., E-mail: hyder@pieas.edu.pk

    2015-04-01

    Highlights: • The size and shape of the set-in nozzle of the RPV have been optimized. • The optimized nozzle ensures a mass reduction of around 198 kg per nozzle. • The mass of the RPV should be minimized for better fracture toughness. - Abstract: The objective of this research work is to optimize the size and shape of the set-in nozzle for a typical reactor pressure vessel (RPV) of a 300 MW pressurized water reactor. The analysis was performed by optimizing the four design variables which control the size and shape of the nozzle. These variables are the inner radius of the nozzle, the thickness of the nozzle, the taper angle at the nozzle-cylinder intersection, and the point where the taper of the nozzle starts. It is concluded that the optimum design of the nozzle is the one that minimizes the two conflicting state variables, i.e., the stress intensity (Tresca yield criterion) and the mass of the RPV.

  4. Research and Setting the Modified Algorithm "Predator-Prey" in the Problem of the Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2016-01-01

    Full Text Available We consider a class of algorithms for multi-objective optimization, Pareto-approximation algorithms, which presuppose a preliminary construction of a finite-dimensional approximation of the Pareto set, and thereby of the Pareto front of the problem. The article gives an overview of population and non-population Pareto-approximation algorithms, identifies their strengths and weaknesses, and presents the canonical "predator-prey" algorithm, showing its shortcomings. We offer a number of modifications of the canonical "predator-prey" algorithm with the aim of overcoming its drawbacks, and present the results of a broad study of the efficiency of these modifications. A peculiarity of the study is the use of Pareto-approximation quality indicators that previous publications have not used. In addition, we present the results of the meta-optimization of the modified algorithm, i.e., the determination of optimal values of some of its free parameters. The study of the efficiency of the modified "predator-prey" algorithm has shown that the proposed modifications improve the following indicators of the basic algorithm: the cardinality of the set of archive solutions, the uniformity of archive solutions, and the computation time. By and large, the results have shown that the modified and meta-optimized algorithm achieves the same approximation quality as the basic algorithm, but with an order of magnitude fewer prey; computational costs are proportionally reduced.

  5. Optimal Switching Control of Burner Setting for a Compact Marine Boiler Design

    DEFF Research Database (Denmark)

    Solberg, Brian; Andersen, Palle; Maciejowski, Jan M.

    2010-01-01

    This paper discusses optimal control strategies for switching between different burner modes in a novel compact marine boiler design. The ideal behaviour is defined by a performance index whose minimisation specifies an ideal trade-off between deviations in boiler pressure and water level...... approach is based on a generalisation of hysteresis control. The strategies are verified on a simulation model of the compact marine boiler for control of low/high burner load switches.

  6. The FERMI (at) Elettra Technical Optimization Study: Preliminary Parameter Set and Initial Studies

    International Nuclear Information System (INIS)

    Byrd, John; Corlett, John; Doolittle, Larry; Fawley, William; Lidia, Steven; Penn, Gregory; Ratti, Alex; Staples, John; Wilcox, Russell; Wurtele, Jonathan; Zholents, Alexander

    2005-01-01

    The goal of the FERMI (at) Elettra Technical Optimization Study is to produce a machine design and layout consistent with user needs for radiation in the approximate ranges 100 nm to 40 nm, and 40 nm to 10 nm, using seeded FELs. The Study will involve collaboration between Italian and US physicists and engineers, and will form the basis for the engineering design and the cost estimate

  7. SETTING OF TASK OF OPTIMIZATION OF THE ACTIVITY OF A MACHINE-BUILDING CLUSTER COMPANY

    Directory of Open Access Journals (Sweden)

    A. V. Romanenko

    2014-01-01

    Full Text Available This work develops methodological approaches to managing a machine-building enterprise through cost reduction, optimization of the order portfolio, and capacity utilization in operational management. The economic efficiency of such entities in the real sector of the economy is assessed, including order lead times, which depend on how the production facility is organized and on maintaining fixed assets at a given level. The key components of an economic-mathematical model of production activity are formulated, and an optimization criterion is defined: a formula for accumulated profit that accounts for production capacity, the technology used to produce the products, current direct variable costs, property tax, and the expenses arising from variance when production tasks are replaced within a single time period. The main component in optimizing the enterprise's production activity under this criterion is the vector of direct variable costs. It depends on the number of product types in the current order portfolio, the production schedules, the normative time for releasing a particular product, the available fund of time at efficient production positions, the current valuation of certain groups of technological operations, and the current priority of operations by the degree of readiness of internal orders. Modeling production activity on the basis of the proposed provisions would allow the enterprises of a machine-building cluster with active innovation to use available production resources more efficiently by optimizing current operations under high uncertainty in demand planning and in carrying out maintenance and routine repairs.

  8. Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.

    Science.gov (United States)

    Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan

    2018-04-01

    The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectually has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from unlabeled data. Our algorithm is degenerated into a special scenario of the previous algorithm when the extended candidate set is reduced into a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility. However, our algorithm and other algorithms have

  9. Performance Improvement of the Core Protection Calculator System (CPCS) by Introducing Optimal Function Sets

    International Nuclear Information System (INIS)

    Won, Byung Hee; Kim, Kyung O; Kim, Jong Kyung; Kim, Soon Young

    2012-01-01

    The Core Protection Calculator System (CPCS) is an automated device adopted to monitor safety parameters such as the Departure from Nucleate Boiling Ratio (DNBR) and Local Power Density (LPD) during normal operation. One function of the CPCS is to predict the axial power distributions using function sets in a cubic spline method. Another is to impose a penalty when the distribution estimated by the spline method disagrees with the data embedded in the CPCS (i.e., by over 8%). In the conventional CPCS, restricted function sets are used to synthesize the axial power shape, which can occasionally produce a disagreement between the synthesized data and the embedded data. For this reason, studies on improving power distribution synthesis in the CPCS have been conducted in many countries. In this study, many function sets (more than 18,000 types) differing from the conventional ones were evaluated for each power shape. Matlab code was used to calculate and arrange the numerous cases of function sets. Their synthesis performance was evaluated through the error between the conventional data and the results calculated with the new function sets
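    The spline-synthesis step can be illustrated with a small natural cubic spline fitted through a handful of axial power readings. This is a generic sketch only: the knot positions and power values below are hypothetical, and the actual CPCS function sets and detector inputs are plant-specific.

    ```python
    import numpy as np

    def natural_cubic_spline(x, y):
        """Return a callable natural cubic spline through (x, y).
        Illustrative stand-in for spline-based axial shape synthesis."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = len(x) - 1
        h = np.diff(x)
        # Tridiagonal system for interior second derivatives (M0 = Mn = 0)
        A = np.zeros((n - 1, n - 1))
        rhs = np.zeros(n - 1)
        for i in range(1, n):
            if i > 1:
                A[i - 1, i - 2] = h[i - 1]
            A[i - 1, i - 1] = 2 * (h[i - 1] + h[i])
            if i < n - 1:
                A[i - 1, i] = h[i]
            rhs[i - 1] = 6 * ((y[i + 1] - y[i]) / h[i]
                              - (y[i] - y[i - 1]) / h[i - 1])
        M = np.zeros(n + 1)
        M[1:n] = np.linalg.solve(A, rhs)

        def s(t):
            # locate the interval containing t, then evaluate the cubic
            i = np.clip(np.searchsorted(x, t) - 1, 0, n - 1)
            dx, d1 = t - x[i], x[i + 1] - t
            return (M[i] * d1**3 + M[i + 1] * dx**3) / (6 * h[i]) \
                 + (y[i] - M[i] * h[i]**2 / 6) * d1 / h[i] \
                 + (y[i + 1] - M[i + 1] * h[i]**2 / 6) * dx / h[i]
        return s

    # Synthesize an axial power shape from 5 hypothetical readings
    z = [0.0, 0.25, 0.5, 0.75, 1.0]   # normalized core height
    p = [0.6, 1.0, 1.2, 1.0, 0.6]     # relative power at each level
    shape = natural_cubic_spline(z, p)
    ```

    Evaluating `shape` between the knots gives the continuous axial profile; a CPCS-style penalty check would compare such a synthesized profile against embedded reference data.
    
    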

  10. Use of GIS to identify optimal settings for cancer prevention and control in African American communities

    Science.gov (United States)

    Alcaraz, Kassandra I.; Kreuter, Matthew W.; Bryan, Rebecca P.

    2009-01-01

    Objective Rarely have Geographic Information Systems (GIS) been used to inform community-based outreach and intervention planning. This study sought to identify community settings most likely to reach individuals from geographically localized areas. Method An observational study conducted in an urban city in Missouri during 2003–2007 placed computerized breast cancer education kiosks in seven types of community settings: beauty salons, churches, health fairs, neighborhood health centers, Laundromats, public libraries and social service agencies. We used GIS to measure distance between kiosk users’ (n=7,297) home ZIP codes and the location where they used the kiosk. Mean distances were compared across settings. Results Mean distance between individuals’ home ZIP codes and the location where they used the kiosk varied significantly across settings; it was smallest among kiosk users at Laundromats (2.3 miles) and public libraries (2.8 miles) and greatest among kiosk users at health fairs (7.6 miles). Conclusion Some community settings are more likely than others to reach highly localized populations. A better understanding of how and where to reach specific populations can complement the progress already being made in identifying populations at increased disease risk. PMID:19422844

  11. Training a whole-book LSTM-based recognizer with an optimal training set

    Science.gov (United States)

    Soheili, Mohammad Reza; Yousefi, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2018-04-01

    Despite the recent progress in OCR technologies, whole-book recognition is still a challenging task, particularly for old and historical books, where unknown font faces or the low quality of paper and print add to the challenge. Pre-trained recognizers and generic methods therefore do not usually perform up to the required standard, and performance usually degrades on larger-scale recognition tasks such as an entire book; methods with reportedly low error rates turn out to require a great deal of manual correction. Generally, such methodologies do not make effective use of concepts such as redundancy in whole-book recognition. In this work, we propose to train Long Short-Term Memory (LSTM) networks on a minimal training set obtained from the book to be recognized. We show that by clustering all the sub-words in the book and using the sub-word cluster centers as the training set for the LSTM network, we can train models that outperform an identical network trained on randomly selected pages of the book. In our experiments, we also show that although the sub-word cluster centers are equivalent to only about 8 pages of text for a 101-page book, an LSTM network trained on such a set performs competitively with an identical network trained on a set of 60 randomly selected pages of the book.
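    The core idea, cluster all sub-words and keep one representative per cluster as the training set, can be sketched with plain k-means over toy feature vectors. The toy data, the farthest-point seeding, and `select_training_set` are illustrative assumptions; the paper clusters actual sub-word images and then trains an LSTM on the selected representatives.

    ```python
    import numpy as np

    def select_training_set(X, k, iters=20):
        """Select k representative sample indices: farthest-point
        seeding, a few Lloyd iterations, then the sample nearest each
        centroid. Each row of X stands in for a sub-word feature vector."""
        X = np.asarray(X, float)
        centers = [X[0]]
        for _ in range(k - 1):                 # farthest-point init
            d = np.min([np.linalg.norm(X - c, axis=1) for c in centers],
                       axis=0)
            centers.append(X[d.argmax()])
        centers = np.array(centers)
        for _ in range(iters):                 # Lloyd's k-means
            d = np.linalg.norm(X[:, None] - centers[None], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        # index of the real sample closest to each final centroid
        return sorted(set(int(i) for i in d.argmin(axis=0)))

    # Toy data: three well-separated clusters of "sub-word" features
    rng = np.random.default_rng(1)
    X = np.concatenate([rng.normal(c, 0.1, size=(30, 2))
                        for c in ([0, 0], [5, 5], [10, 0])])
    reps = select_training_set(X, k=3)   # one representative per cluster
    ```

    The `reps` indices would be the minimal training set handed to the recognizer, in place of randomly selected pages.
    
    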

  12. Setting Optimal Bounds on Risk in Asset Allocation - a Convex Program

    Directory of Open Access Journals (Sweden)

    James E. Falk

    2002-10-01

    Full Text Available The 'Portfolio Selection Problem' is traditionally viewed as selecting a mix of investment opportunities that maximizes the expected return subject to a bound on risk. However, in reality, portfolios are made up of a few 'asset classes' that consist of similar opportunities. The asset classes are managed by individual 'sub-managers', under guidelines set by an overall portfolio manager. Once a benchmark (the 'strategic' allocation) has been set, an overall manager may choose to allow the sub-managers some latitude in which opportunities make up the classes. He may choose some overall bound on risk (as measured by the variance) and wish to set bounds that constrain the sub-managers. Mathematically, we show that the problem is equivalent to finding a hyper-rectangle of maximal volume within an ellipsoid. It is a convex program, albeit with potentially a large number of constraints. We suggest a cutting plane algorithm to solve the problem and include computational results on a set of randomly generated problems as well as a real-world problem taken from the literature.
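    For the special case of an axis-aligned ellipsoid and an axis-aligned hyper-rectangle, the maximal-volume box has a closed form: by Lagrange multipliers (or AM-GM), each corner coordinate sits at a_i / sqrt(n). The sketch below illustrates only that special case as a sanity check; it is not the authors' cutting-plane algorithm for the general convex program, and `max_volume_box` is a name invented here.

    ```python
    import math

    def max_volume_box(semi_axes):
        """Half-widths of the maximum-volume axis-aligned box inscribed
        in the axis-aligned ellipsoid sum((x_i / a_i)**2) <= 1.
        Optimality follows from Lagrange multipliers: x_i = a_i/sqrt(n)."""
        n = len(semi_axes)
        return [a / math.sqrt(n) for a in semi_axes]

    half = max_volume_box([3.0, 4.0])       # 2-D ellipse, semi-axes 3, 4
    volume = math.prod(2 * h for h in half)  # area of the inscribed box
    ```

    The corner (3/sqrt(2), 4/sqrt(2)) lies exactly on the ellipse, and no other axis-aligned inscribed rectangle has larger area; the general rotated-ellipsoid case is what requires the convex program discussed in the paper.
    
    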

  13. An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets

    DEFF Research Database (Denmark)

    Nielsen, Michael Bang; Museth, Ken

    2004-01-01

    enforced by the convex boundaries of an underlying cartesian computational grid. Here we present a novel very memory efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid independent high resolution level sets. The key features of our new data structure are

  14. Einstein Inflationary Probe (EIP)

    Science.gov (United States)

    Hinshaw, Gary

    2004-01-01

    I will discuss plans to develop a concept for the Einstein Inflation Probe: a mission to detect gravity waves from inflation via the unique signature they impart to the cosmic microwave background (CMB) polarization. A sensitive CMB polarization satellite may be the only way to probe physics at the grand-unified theory (GUT) scale, exceeding by 12 orders of magnitude the energies studied at the Large Hadron Collider. A detection of gravity waves would represent a remarkable confirmation of the inflationary paradigm and set the energy scale at which inflation occurred when the universe was a fraction of a second old. Even a strong upper limit to the gravity wave amplitude would be significant, ruling out many common models of inflation, and pointing to inflation occurring at much lower energy, if at all. Measuring gravity waves via the CMB polarization will be challenging. We will undertake a comprehensive study to identify the critical scientific requirements for the mission and their derived instrumental performance requirements. At the core of the study will be an assessment of what is scientifically and experimentally optimal within the scope and purpose of the Einstein Inflation Probe.

  15. Role of pharmacists in optimizing the use of anticancer drugs in the clinical setting

    Directory of Open Access Journals (Sweden)

    Ma CSJ

    2014-02-01

    Full Text Available Carolyn SJ Ma Department of Pharmacy Practice, Daniel K. Inouye College of Pharmacy, University of Hawaii at Hilo, Honolulu, HI, USA Abstract: Oncology pharmacists, also known as oncology pharmacy specialists (OPSs), have specialized knowledge of anticancer medications and their role in cancer care. As essential members of the interdisciplinary team, OPSs optimize the benefits of drug therapy, help to minimize toxicities, and work with patients on supportive care issues. The OPS's expanded role as an expert in drug therapy extends to seven key elements of medication management: selection, procurement, storage, preparation/dispensing, prescribing/dosing/transcribing, administration, and monitoring/evaluation/education. As front-line caregivers in hospitals, ambulatory care, long-term care facilities, and community specialty pharmacies, the OPS also helps patients in areas of supportive care, including nausea and vomiting, hematologic support, nutrition, and infection control. This role helps the patient in the recovery phase between treatment cycles and supports adherence to the chemotherapy treatment schedules essential for optimal treatment and outcome. Keywords: oncology pharmacist, oncology pharmacy specialist, medication management, chemotherapy

  16. Optimizing the Nutritional Support of Adult Patients in the Setting of Cirrhosis.

    Science.gov (United States)

    Perumpail, Brandon J; Li, Andrew A; Cholankeril, George; Kumari, Radhika; Ahmed, Aijaz

    2017-10-13

    The aim of this work is to develop a pragmatic approach in the assessment and management strategies of patients with cirrhosis in order to optimize the outcomes in this patient population. A systematic review of literature was conducted through 8 July 2017 on the PubMed Database looking for key terms, such as malnutrition, nutrition, assessment, treatment, and cirrhosis. Articles and studies looking at associations between nutrition and cirrhosis were reviewed. An assessment of malnutrition should be conducted in two stages: the first, to identify patients at risk for malnutrition based on the severity of liver disease, and the second, to perform a complete multidisciplinary nutritional evaluation of these patients. Optimal management of malnutrition should focus on meeting recommended daily goals for caloric intake and inclusion of various nutrients in the diet. The nutritional goals should be pursued by encouraging and increasing oral intake or using other measures, such as oral supplementation, enteral nutrition, or parenteral nutrition. Although these strategies to improve nutritional support have been well established, current literature on the topic is limited in scope. Further research should be implemented to test if this enhanced approach is effective.

  17. Optimizing the Nutritional Support of Adult Patients in the Setting of Cirrhosis

    Directory of Open Access Journals (Sweden)

    Brandon J. Perumpail

    2017-10-01

    Full Text Available Aim: The aim of this work is to develop a pragmatic approach in the assessment and management strategies of patients with cirrhosis in order to optimize the outcomes in this patient population. Method: A systematic review of literature was conducted through 8 July 2017 on the PubMed Database looking for key terms, such as malnutrition, nutrition, assessment, treatment, and cirrhosis. Articles and studies looking at associations between nutrition and cirrhosis were reviewed. Results: An assessment of malnutrition should be conducted in two stages: the first, to identify patients at risk for malnutrition based on the severity of liver disease, and the second, to perform a complete multidisciplinary nutritional evaluation of these patients. Optimal management of malnutrition should focus on meeting recommended daily goals for caloric intake and inclusion of various nutrients in the diet. The nutritional goals should be pursued by encouraging and increasing oral intake or using other measures, such as oral supplementation, enteral nutrition, or parenteral nutrition. Conclusions: Although these strategies to improve nutritional support have been well established, current literature on the topic is limited in scope. Further research should be implemented to test if this enhanced approach is effective.

  18. The Role of eHealth in Optimizing Preventive Care in the Primary Care Setting.

    Science.gov (United States)

    Carey, Mariko; Noble, Natasha; Mansfield, Elise; Waller, Amy; Henskens, Frans; Sanson-Fisher, Rob

    2015-05-22

    Modifiable health risk behaviors such as smoking, overweight and obesity, risky alcohol consumption, physical inactivity, and poor nutrition contribute to a substantial proportion of the world's morbidity and mortality burden. General practitioners (GPs) play a key role in identifying and managing modifiable health risk behaviors. However, these are often underdetected and undermanaged in the primary care setting. We describe the potential of eHealth to help patients and GPs to overcome some of the barriers to managing health risk behaviors. In particular, we discuss (1) the role of eHealth in facilitating routine collection of patient-reported data on lifestyle risk factors, and (2) the role of eHealth in improving clinical management of identified risk factors through provision of tailored feedback, point-of-care reminders, tailored educational materials, and referral to online self-management programs. Strategies to harness the capacity of the eHealth medium, including the use of dynamic features and tailoring to help end users engage with, understand, and apply information need to be considered and maximized. Finally, the potential challenges in implementing eHealth solutions in the primary care setting are discussed. In conclusion, there is significant potential for innovative eHealth solutions to make a contribution to improving preventive care in the primary care setting. However, attention to issues such as data security and designing eHealth interfaces that maximize engagement from end users will be important to moving this field forward.

  19. Using a Robust Design Approach to Optimize Chair Set-up in Wheelchair Sport

    Directory of Open Access Journals (Sweden)

    David S. Haydon

    2018-02-01

    Full Text Available Optimisation of wheelchairs for court sports is currently a difficult and time-consuming process due to the broad range of impairments across athletes, difficulties in monitoring on-court performance, and the trade-off effects that set-up parameters have on key performance variables. A robust design approach to this problem can potentially reduce the amount of testing required, and therefore allow for individual on-court assessments. This study used an orthogonal design with four set-up factors (seat height, depth, and angle, as well as tyre pressure) at three levels (current, decreased, and increased) for three elite wheelchair rugby players. Each player performed two maximal-effort sprints from a stationary position in nine different set-ups, allowing for detailed analysis of each factor and level. Whilst statistical significance is difficult to obtain due to the small sample size, meaningful differences aligning with previous research findings were identified and provide support for the use of this approach.

  20. Optimization of transversal phacoemulsification settings in peristaltic mode using a new transversal ultrasound machine.

    Science.gov (United States)

    Wright, Dannen D; Wright, Alex J; Boulter, Tyler D; Bernhisel, Ashlie A; Stagg, Brian C; Zaugg, Brian; Pettey, Jeff H; Ha, Larry; Ta, Brian T; Olson, Randall J

    2017-09-01

    To determine the optimum bottle height, vacuum, aspiration rate, and power settings in the peristaltic mode of the Whitestar Signature Pro machine with Ellips FX tip action (transversal). John A. Moran Eye Center Laboratories, University of Utah, Salt Lake City, Utah, USA. Experimental study. Porcine lens nuclei were hardened with formalin and cut into 2.0 mm cubes. Lens cubes were emulsified using transversal ultrasound; fragment removal time (efficiency) and fragment bounces off the tip (chatter) were measured to determine the optimum aspiration rate, bottle height, vacuum, and power settings in the peristaltic mode. Efficiency increased in a linear fashion with increasing bottle height and vacuum. The most efficient aspiration rate was 50 mL/min, with 60 mL/min statistically similar. Increasing power increased efficiency up to 90%, with increased chatter at 100%. The most efficient values for the settings tested were a bottle height of 100 cm, vacuum of 600 mm Hg, aspiration rate of 50 or 60 mL/min, and power at 90%. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  1. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
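    As a self-contained illustration of the representation this method builds on (not the authors' algorithm or their ImageJ/Fiji plugin), the sketch below evaluates a filament centre-line modelled as a uniform cubic B-spline of its coefficient (control) points; the control points are invented for the example.

    ```python
    # Evaluate a uniform cubic B-spline centre-line at parameter t.
    # Illustrative only: the control points are made up, and the paper's
    # method additionally fits such coefficients to image data.
    def cubic_bspline_point(ctrl, t):
        """Evaluate at t in [0, len(ctrl) - 3); ctrl holds (x, y) tuples."""
        i = min(int(t), len(ctrl) - 4)      # index of the active spline segment
        u = t - i                           # local coordinate within the segment
        # Uniform cubic B-spline basis functions (they sum to 1).
        b = [(1 - u) ** 3 / 6,
             (3 * u ** 3 - 6 * u ** 2 + 4) / 6,
             (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6,
             u ** 3 / 6]
        x = sum(w * ctrl[i + k][0] for k, w in enumerate(b))
        y = sum(w * ctrl[i + k][1] for k, w in enumerate(b))
        return x, y

    # Collinear control points give a straight filament segment.
    ctrl = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
    pt = cubic_bspline_point(ctrl, 0.5)
    ```

    Because the basis functions sum to one, the evaluated point stays on the line through collinear control points, which makes this a convenient sub-pixel curve representation.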

  2. An energy-saving set-point optimizer with a sliding mode controller for automotive air-conditioning/refrigeration systems

    International Nuclear Information System (INIS)

    Huang, Yanjun; Khajepour, Amir; Ding, Haitao; Bagheri, Farshid; Bahrami, Majid

    2017-01-01

    Highlights: • A novel two-layer energy-saving controller for automotive A/C-R systems is developed. • A set-point optimizer in the outer loop is designed based on the steady-state model. • A sliding mode controller in the inner loop is built. • Extensive experimental studies show that about 9% of energy can be saved by this controller. - Abstract: This paper presents an energy-saving controller for automotive air-conditioning/refrigeration (A/C-R) systems. With their extensive application in homes, industry, and vehicles, A/C-R systems consume considerable amounts of energy. The proposed controller consists of two layers operating on different time scales. The outer, slow layer, called a set-point optimizer, finds energy-efficient set points using the steady-state model, whereas the inner, fast layer tracks the obtained set points. In the inner loop, thanks to its robustness, a sliding mode controller (SMC) is utilized to track the set point of the cargo temperature. The currently used on/off controller is presented and employed as a basis for comparison with the proposed controller. More importantly, real experimental results under several disturbed scenarios are analysed to demonstrate how the proposed controller can improve performance while reducing energy consumption by 9% compared with the on/off controller. The controller is suitable for any type of A/C-R system, even though it is applied to an automotive A/C-R system in this paper.
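    The inner-loop idea can be sketched in a few lines. This is a hedged illustration only, not the authors' controller: the cargo temperature follows an invented first-order thermal model, and the gains, set point, and time step are all assumptions.

    ```python
    # Sliding-mode tracking of a cargo-temperature set point T_ref for the
    # invented model dT/dt = -a*(T - T_amb) - b*u, cooling effort u in [0, 1].
    # Sliding surface s = T - T_ref; u = equivalent control + switching term.
    def simulate_smc(T0=25.0, T_ref=4.0, T_amb=25.0, a=0.02, b=0.8,
                     K=0.3, dt=0.1, steps=5000):
        T = T0
        for _ in range(steps):
            s = T - T_ref                      # sliding surface
            u_eq = -a * (T - T_amb) / b        # equivalent control (keeps s' = 0)
            u = u_eq + K * (1 if s > 0 else -1 if s < 0 else 0)
            u = min(1.0, max(0.0, u))          # actuator saturation
            T += dt * (-a * (T - T_amb) - b * u)
        return T

    final_T = simulate_smc()
    ```

    With these made-up parameters the temperature reaches the set point and then chatters within a few hundredths of a degree around it, which is the well-known trade-off of the switching term in sliding mode control.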

  3. Community-based interventions to optimize early childhood development in low resource settings.

    Science.gov (United States)

    Maulik, P K; Darmstadt, G L

    2009-08-01

    Interventions targeting the early childhood period (0 to 3 years) help to improve neuro-cognitive functioning throughout life. Some of the more low cost, low resource-intensive community practices for this age-group are play, reading, music and tactile stimulation. This research was conducted to summarize the evidence regarding the effectiveness of such strategies on child development, with particular focus on techniques that may be transferable to developing countries and to children at risk of developing secondary impairments. PubMed, PsycInfo, Embase, ERIC, CINAHL and Cochrane were searched for studies involving the above strategies for early intervention. Reference lists of these studies were scanned and other studies were incorporated based on snow-balling. Overall, 76 articles corresponding to 53 studies, 24 of which were randomized controlled trials, were identified. Sixteen of those studies were from low- and middle-income countries. Play and reading were the two commonest interventions and showed positive impact on intellectual development of the child. Music was evaluated primarily in intensive care settings. Kangaroo Mother Care, and to a lesser extent massage, also showed beneficial effects. Improvement in parent-child interaction was common to all the interventions. Play and reading were effective interventions for early childhood interventions in low- and middle-income countries. More research is needed to judge the effectiveness of music. Kangaroo Mother Care is effective for low birth weight babies in resource poor settings, but further research is needed in community settings. Massage is useful, but needs more rigorous research prior to being advocated for community-level interventions.

  4. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

    Science.gov (United States)

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf

    2013-01-01

    The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals. The increased likelihood of severe radiation damage in such experiments forces crystallographers to acquire data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein. PMID:23897484

  5. Decision Optimization of Machine Sets Taking Into Consideration Logical Tree Minimization of Design Guidelines

    Science.gov (United States)

    Deptuła, A.; Partyka, M. A.

    2014-08-01

    The method of minimization of complex partial multi-valued logical functions determines the degree of importance of construction and exploitation parameters, which play the role of logical decision variables. Logical functions are taken into consideration in modelling machine sets. For multi-valued logical functions with weighting products, it is possible to use a modified Quine–McCluskey algorithm of multi-valued function minimization. Taking weighting coefficients into account in the logical tree minimization reflects a physical model of the object being analysed much better.

  6. Crop Evaluation System Optimization: Attribute Weights Determination Based on Rough Sets Theory

    Directory of Open Access Journals (Sweden)

    Ruihong Wang

    2017-01-01

    Full Text Available The present study is mainly a continuation of our previous study, which concerned the development of a crop evaluation system based on grey relational analysis. In that system, the attribute weight determination affects the evaluation result directly. Attribute weights are usually ascertained from the decision-maker's experience and knowledge. In this paper, we utilize rough sets theory to calculate attribute significance and then combine it with the weight given by the decision-maker. This method comprehensively considers both subjective experience knowledge and the objective situation, and can thus acquire much more reasonable results. Finally, based on this method, we improve the system using ASP.NET technology.
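    A minimal sketch of the rough-sets step described above: the significance of a condition attribute is the drop in the dependency degree γ(C, D) when that attribute is removed from the condition set C. The toy crop table and attribute names below are hypothetical, not the paper's data.

    ```python
    # Rough-set attribute significance via the dependency degree gamma(C, D):
    # the fraction of objects whose condition-equivalence class is contained
    # in a single decision class (the positive region).
    from collections import defaultdict

    def partition(rows, attrs):
        blocks = defaultdict(list)
        for i, row in enumerate(rows):
            blocks[tuple(row[a] for a in attrs)].append(i)
        return list(blocks.values())

    def gamma(rows, cond, dec):
        """Dependency degree: fraction of objects in the positive region."""
        dec_blocks = [set(b) for b in partition(rows, dec)]
        pos = 0
        for block in partition(rows, cond):
            if any(set(block) <= d for d in dec_blocks):
                pos += len(block)
        return pos / len(rows)

    def significance(rows, cond, dec, a):
        rest = [c for c in cond if c != a]
        return gamma(rows, cond, dec) - gamma(rows, rest, dec)

    # Toy crop table: yield class is decided by soil and rain (invented data).
    rows = [
        {"soil": "good", "rain": "high", "seed": "A", "yield": "high"},
        {"soil": "good", "rain": "low",  "seed": "B", "yield": "mid"},
        {"soil": "poor", "rain": "high", "seed": "A", "yield": "mid"},
        {"soil": "poor", "rain": "low",  "seed": "B", "yield": "low"},
    ]
    cond, dec = ["soil", "rain", "seed"], ["yield"]
    sig_soil = significance(rows, cond, dec, "soil")
    sig_seed = significance(rows, cond, dec, "seed")
    ```

    Here removing `seed` costs nothing (significance 0), while removing `soil` destroys the dependency entirely, so `soil` would receive the larger objective weight before being blended with the decision-maker's subjective weight.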

  7. OPTIMIZATION-BASED APPROACH TO TILING OF FINITE AREAS WITH ARBITRARY SETS OF WANG TILES

    Directory of Open Access Journals (Sweden)

    Marek Tyburec

    2017-11-01

    Full Text Available Wang tiles proved to be a convenient tool for the design of aperiodic tilings in computer graphics and in materials engineering. While there are several algorithms for generation of finite-sized tilings, they exploit the specific structure of individual tile sets, which prevents their general usage. In this contribution, we reformulate the NP-complete tiling generation problem as a binary linear program, together with its linear and semidefinite relaxations suitable for the branch and bound method. Finally, we assess the performance of the established formulations on generations of several aperiodic tilings reported in the literature, and conclude that the linear relaxation is better suited for the problem.
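    The edge-matching constraints behind the formulation can be illustrated with a small sketch. Note this is plain backtracking over a made-up tile set, not the binary linear program of the paper: tiles are (north, east, south, west) colour tuples, and adjacent tiles must agree on their shared edge.

    ```python
    # Backtracking generation of a valid Wang tiling on a small grid.
    # Tiles are (N, E, S, W) colour tuples; the tile set is an invented example.
    def tile_grid(tiles, rows, cols):
        grid = [[None] * cols for _ in range(rows)]

        def fits(t, r, c):
            if r > 0 and grid[r - 1][c][2] != t[0]:   # south edge of tile above
                return False
            if c > 0 and grid[r][c - 1][1] != t[3]:   # east edge of tile left
                return False
            return True

        def solve(k):
            if k == rows * cols:
                return True
            r, c = divmod(k, cols)
            for t in tiles:
                if fits(t, r, c):
                    grid[r][c] = t
                    if solve(k + 1):
                        return True
            grid[r][c] = None                         # backtrack
            return False

        return grid if solve(0) else None

    tiles = [(0, 0, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1), (1, 1, 1, 1)]
    result = tile_grid(tiles, 3, 3)
    ```

    The binary linear program in the paper encodes exactly these `fits` constraints as linear inequalities over 0/1 placement variables, which is what makes linear and semidefinite relaxations applicable.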

  8. Mobile Probing and Probes

    DEFF Research Database (Denmark)

    Duvaa, Uffe; Ørngreen, Rikke; Weinkouff Mathiasen, Anne-Gitte

    2013-01-01

    Mobile probing is a method, developed for learning about digital work situations, as an approach to discover new grounds. The method can be used when there is a need to know more about users and their work with certain tasks, but where users at the same time are distributed (in time and space). Mobile probing was inspired by the cultural probe method, and was influenced by qualitative interview and inquiry approaches. The method has been used in two subsequent projects, involving school children (young adults at 15-17 years old) and employees (adults) in a consultancy company. Findings point to mobile probing being a flexible method for uncovering the unknowns, as a way of getting rich data to the analysis and design phases. On the other hand it is difficult to engage users to give in depth explanations, which seem easier in synchronous dialogs (whether online or face2face). The development ...

  9. Mobile Probing and Probes

    DEFF Research Database (Denmark)

    Duvaa, Uffe; Ørngreen, Rikke; Weinkouff, Anne-Gitte

    2012-01-01

    Mobile probing is a method, which has been developed for learning about digital work situations, as an approach to discover new grounds. The method can be used when there is a need to know more about users and their work with certain tasks, but where users at the same time are distributed (in time and space). Mobile probing was inspired by the cultural probe method, and was influenced by qualitative interview and inquiry approaches. The method has been used in two subsequent projects, involving school children (young adults at 15-17 years old) and employees (adults) in a consultancy company. Findings point to mobile probing being a flexible method for uncovering the unknowns, as a way of getting rich data to the analysis and design phases. On the other hand it is difficult to engage users to give in depth explanations, which seem easier in synchronous dialogs (whether online or face2face ...

  10. An Approximate Method for Solving Optimal Control Problems for Discrete Systems Based on Local Approximation of an Attainability Set

    Directory of Open Access Journals (Sweden)

    V. A. Baturin

    2017-03-01

    Full Text Available An optimal control problem for discrete systems is considered. A method of successive improvements is suggested, along with a modernization based on expanding the main structures of the core algorithm in a parameter. The idea of the method is based on a local approximation of the attainability set, which is described by the zeros of the Bellman function in a special optimal control problem. The essence of that problem is as follows: from the end point of the trajectory, a path is sought that minimizes the norm of the deviation from the initial state. If the initial point belongs to the attainability set of the original controlled system, the value of the Bellman function equals zero; otherwise the value of the Bellman function is greater than zero. For this special task the Bellman equation is considered, and a supporting approximation of it is selected. The Bellman function is approximated by quadratic terms. Along an admissible trajectory this approximation gives nothing, because the Bellman function and its expansion coefficients are zero. A special trick is therefore used: an additional variable is introduced that characterizes the degree of deviation of the system from the initial state, yielding an expanded original chain. A nonzero initial condition is selected for the new variable, so that the obtained trajectory lies outside the attainability set and the corresponding Bellman function is greater than zero, which allows a non-trivial approximation to be maintained. As a result of these procedures, algorithms of successive improvement are designed. Relaxation conditions for the algorithms and the necessary conditions of optimality are also obtained.
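    The property the method exploits — the Bellman value of the deviation-minimization problem is zero exactly when the end point is attainable — can be illustrated with a generic backward dynamic program. This is not the authors' successive-improvement algorithm; the dynamics x' = x + u, the control set, and the state grid are invented for the example.

    ```python
    # Backward dynamic programming for a discrete-time optimal control problem
    # on a state grid, minimizing the deviation of the final state from a
    # target. A value of zero at x0 means the target is attainable from x0.
    def solve_dp(x0=0, target=7, horizon=5, controls=(-1, 0, 2),
                 states=range(-5, 15)):
        states = list(states)
        # Bellman value at the final stage: distance to the target state.
        V = {x: abs(x - target) for x in states}
        policy = []
        for _ in range(horizon):
            newV, step = {}, {}
            for x in states:
                # Leaving the grid is forbidden (infinite cost).
                newV[x], step[x] = min((V.get(x + u, float("inf")), u)
                                       for u in controls)
            V = newV
            policy.append(step)
        # Roll the policy forward from x0 (last-computed stage acts first).
        x, path = x0, [x0]
        for step in reversed(policy):
            x += step[x]
            path.append(x)
        return V[x0], path

    value, path = solve_dp()
    ```

    For these invented numbers the target 7 is attainable in five steps, so the Bellman value at the start state is zero and the rolled-out trajectory ends exactly at the target.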

  11. Quantum dot nanoparticle for optimization of breast cancer diagnostics and therapy in a clinical setting.

    Science.gov (United States)

    Radenkovic, Dina; Kobayashi, Hisataka; Remsey-Semmelweis, Ernö; Seifalian, Alexander M

    2016-08-01

    Breast cancer is the most common cancer in the world. Sentinel lymph node (SLN) biopsy is used for staging of axillary lymph nodes. Organic dyes and radiocolloid are currently used for SLN mapping, but expose patients to ionizing radiation, are unstable during surgery, and cause local tissue damage. Quantum dots (QDs) could be used for SLN mapping without the need for biopsy. Surgical resection of the primary tumor is the optimal treatment for early-diagnosed breast cancer, but because tumor margins are difficult to define, cancer cells often remain, leading to recurrences. Functionalized QDs could be used for image-guided tumor resection to allow visualization of cancer cells. Near-infrared QDs are photostable and have improved deep tissue penetration. Slow elimination of QDs raises concerns of potential accumulation. Nevertheless, promising findings with cadmium-free QDs in recent in vivo studies and a first in-human trial suggest huge potential for cancer diagnostics and therapy. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. The Incompatibility of Pareto Optimality and Dominant-Strategy Incentive Compatibility in Sufficiently-Anonymous Budget-Constrained Quasilinear Settings

    Directory of Open Access Journals (Sweden)

    Rica Gonen

    2013-11-01

    Full Text Available We analyze the space of deterministic, dominant-strategy incentive compatible, individually rational and Pareto optimal combinatorial auctions. We examine a model with multidimensional types, nonidentical items, private values and quasilinear preferences for the players with one relaxation; the players are subject to publicly-known budget constraints. We show that the space includes dictatorial mechanisms and that if dictatorial mechanisms are ruled out by a natural anonymity property, then an impossibility of design is revealed. The same impossibility naturally extends to other abstract mechanisms with an arbitrary outcome set if one maintains the original assumptions of players with quasilinear utilities, public budgets and nonnegative prices.

  13. Optimal set of agri-environmental indicators for the agricultural sector of Czech Republic

    Directory of Open Access Journals (Sweden)

    Jiří Hřebíček

    2013-01-01

    Full Text Available Current trends in agri-environmental indicator evaluation (i.e., the measurement of environmental performance and farm reporting) are discussed in the paper, focusing on the agriculture sector. From the perspective of agricultural policy, there are two broad decisions to make: which indicators to recommend and promote to farmers, and which indicators to collect to assist in agricultural policy-making. In the first part of the paper, we introduce several general approaches to indicators collected to assist in policy-making (European Union, Organization for Economic Cooperation and Development, and Food and Agriculture Organization of the United Nations), given the differences in decision-making problems faced by these sets of decision makers. In the second part of the paper, we continue with a proposal of indicators to recommend and promote to farmers in the Czech Republic.

  14. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

    International Nuclear Information System (INIS)

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf

    2013-01-01

    A systematic approach to the scaling and merging of data from multiple crystals in macromolecular crystallography is introduced and explained. The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein

  15. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

    Energy Technology Data Exchange (ETDEWEB)

    Foadi, James [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Imperial College, London SW7 2AZ (United Kingdom); Aller, Pierre [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Alguel, Yilmaz; Cameron, Alex [Imperial College, London SW7 2AZ (United Kingdom); Axford, Danny; Owen, Robin L. [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Armour, Wes [Oxford e-Research Centre (OeRC), Keble Road, Oxford OX1 3QG (United Kingdom); Waterman, David G. [Research Complex at Harwell (RCaH), Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0FA (United Kingdom); Iwata, So [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Imperial College, London SW7 2AZ (United Kingdom); Evans, Gwyndaf, E-mail: gwyndaf.evans@diamond.ac.uk [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)

    2013-08-01

    A systematic approach to the scaling and merging of data from multiple crystals in macromolecular crystallography is introduced and explained. The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.
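    One ingredient of such multi-crystal analysis — grouping data sets whose unit cells are similar — can be sketched as below. BLEND itself uses hierarchical clustering on the cell parameters; this simplified stand-in merges cells within a distance threshold using union-find, and the cell values and threshold are invented.

    ```python
    # Group data sets by unit-cell similarity: cells closer than `threshold`
    # (Euclidean distance over the cell edges) end up in the same cluster.
    import math

    def cluster_cells(cells, threshold):
        n = len(cells)
        parent = list(range(n))

        def find(i):                       # union-find with path compression
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for i in range(n):
            for j in range(i + 1, n):
                if math.dist(cells[i], cells[j]) < threshold:
                    parent[find(i)] = find(j)

        groups = {}
        for i in range(n):
            groups.setdefault(find(i), []).append(i)
        return sorted(groups.values())

    # (a, b, c) cell edges in ångström for five hypothetical data sets.
    cells = [(78.1, 78.3, 37.0), (78.0, 78.2, 37.1),
             (79.9, 80.1, 37.9), (80.0, 80.0, 38.0), (90.5, 91.0, 45.0)]
    clusters = cluster_cells(cells, threshold=1.0)
    ```

    Data sets in one cluster are candidates for merging; an outlier cell (the last one here) stays in its own group rather than contaminating the merged data.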

  16. Myosin-II sets the optimal response time scale of chemotactic amoeba

    Science.gov (United States)

    Hsu, Hsin-Fang; Westendorf, Christian; Tarantola, Marco; Bodenschatz, Eberhard; Beta, Carsten

    2014-03-01

    The response dynamics of the actin cytoskeleton to external chemical stimuli plays a fundamental role in numerous cellular functions. One of the key players that governs the dynamics of the actin network is the motor protein myosin-II. Here we investigate the role of myosin-II in the response of the actin system to external stimuli. We used a microfluidic device in combination with a photoactivatable chemoattractant to apply stimuli to individual cells with high temporal resolution. We directly compare the actin dynamics in Dictyostelium discoideum wild type (WT) cells to a knockout mutant that is deficient in myosin-II (MNL). Similar to the WT, a small population of MNL cells showed self-sustained oscillations even in the absence of external stimuli. The actin response of MNL cells to a short pulse of chemoattractant resembles the WT during the first 15 sec but is significantly delayed afterward. The amplitude of the dominant peak in the power spectrum from the response time series of MNL cells to periodic stimuli with varying period showed a clear resonance at a forcing period of 36 sec, significantly delayed compared to the resonance at 20 sec found for the WT. This shift indicates an important role of myosin-II in setting the response time scale of motile amoeba. Institute of Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Str. 24/25, 14476 Potsdam, Germany.
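    The spectral analysis described — reading the dominant response period off the power spectrum — can be sketched with a plain discrete Fourier transform. The signal below is synthetic, standing in for a measured actin response time series.

    ```python
    # Locate the dominant period in a time series from the power spectrum of
    # a discrete Fourier transform (illustration only; data are synthetic).
    import cmath
    import math

    def dominant_period(signal, dt):
        n = len(signal)
        mean = sum(signal) / n
        centred = [x - mean for x in signal]       # remove the DC offset
        best_k, best_power = 1, -1.0
        for k in range(1, n // 2):                 # positive-frequency bins
            coeff = sum(centred[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n))
            power = abs(coeff) ** 2
            if power > best_power:
                best_k, best_power = k, power
        return n * dt / best_k                     # period of the strongest bin

    # Synthetic response oscillating with a 36 s period, sampled at 1 Hz.
    sig = [math.sin(2 * math.pi * t / 36.0) for t in range(72)]
    period = dominant_period(sig, 1.0)
    ```

    Applied to the real response series, the forcing period whose spectrum carries the largest such peak marks the resonance discussed above.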

  17. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Directory of Open Access Journals (Sweden)

    Vasanthan Maruthapillai

    Full Text Available In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker and the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
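    The three statistical features named above can be computed directly from the marker distances; the eight marker coordinates below are made-up stand-ins for webcam tracking output, not data from the study.

    ```python
    # Marker-distance features: distance of each virtual marker to the face
    # centre, summarized by mean, variance, and root mean square.
    import math

    def marker_distances(markers, centre):
        cx, cy = centre
        return [math.hypot(x - cx, y - cy) for x, y in markers]

    def features(values):
        n = len(values)
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / n
        rms = math.sqrt(sum(v * v for v in values) / n)
        return mean, var, rms

    # Eight hypothetical marker positions around a face centre at (100, 100).
    markers = [(100, 60), (140, 100), (100, 140), (60, 100),
               (130, 70), (130, 130), (70, 130), (70, 70)]
    mean, var, rms = features(marker_distances(markers, (100, 100)))
    ```

    In the pipeline above, the same three statistics would also be computed on the change in marker distance between the original and tracked positions, giving the feature vector fed to the classifiers.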

  18. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Science.gov (United States)

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  19. Optimization of metabolite basis sets prior to quantitation in magnetic resonance spectroscopy: an approach based on quantum mechanics

    International Nuclear Information System (INIS)

    Lazariev, A; Graveron-Demilly, D; Allouche, A-R; Aubert-Frécon, M; Fauvelle, F; Piotto, M; Elbayed, K; Namer, I-J; Van Ormondt, D

    2011-01-01

    High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role for diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires an automatic quantitation of HRMAS 1H signals. However, for several metabolites, the values of chemical shifts of proton groups may slightly differ according to the micro-environment in the tissue or cells, in particular to its pH. This hampers the accurate estimation of the metabolite concentrations mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are not correct anymore. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
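    The correction step can be illustrated in simplified form (this is not the QM-QUEST implementation): slide the simulated metabolite signal against the measured one and keep the shift that maximizes their normalized cross-correlation; the Gaussian peaks below stand in for real spectra.

    ```python
    # Find the integer shift of a simulated signal that maximizes its
    # normalized cross-correlation with a measured signal.
    import math

    def ncc(a, b):
        """Normalized cross-correlation of two equal-length sequences."""
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                        sum((y - mb) ** 2 for y in b))
        return num / den if den else 0.0

    def best_shift(measured, simulated, max_shift):
        scores = {}
        n = len(measured)
        for s in range(-max_shift, max_shift + 1):
            pairs = [(measured[i], simulated[i - s])
                     for i in range(n) if 0 <= i - s < n]
            a, b = zip(*pairs)
            scores[s] = ncc(a, b)
        return max(scores, key=scores.get)

    # A Gaussian peak simulated at index 40 but actually observed at 44.
    def peak(centre, n=100, width=3.0):
        return [math.exp(-((i - centre) / width) ** 2) for i in range(n)]

    shift = best_shift(peak(44), peak(40), max_shift=10)
    ```

    The recovered shift would then be translated back into a corrected chemical-shift value for that metabolite's entry in the basis set.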

  20. Optimization of metabolite basis sets prior to quantitation in magnetic resonance spectroscopy: an approach based on quantum mechanics

    Science.gov (United States)

    Lazariev, A.; Allouche, A.-R.; Aubert-Frécon, M.; Fauvelle, F.; Piotto, M.; Elbayed, K.; Namer, I.-J.; van Ormondt, D.; Graveron-Demilly, D.

    2011-11-01

    High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role for diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires an automatic quantitation of HRMAS 1H signals. However, for several metabolites, the values of chemical shifts of proton groups may slightly differ according to the micro-environment in the tissue or cells, in particular to its pH. This hampers the accurate estimation of the metabolite concentrations mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are not correct anymore. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
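The correction step at the heart of QM-QUEST, as described in both records above, maximizes the normalized cross-correlation between a simulated basis signal and the signal under analysis. A toy, grid-search version over integer spectral-point shifts might look like this (the published method optimizes continuous chemical-shift parameters; `best_shift` is our simplified stand-in):

```python
import numpy as np

def best_shift(measured, simulated, max_shift=50):
    """Return the integer shift (in spectral points) of `simulated` that
    maximizes the normalized cross-correlation with `measured`.

    A discretized sketch of the basis-set correction idea: slide the
    simulated metabolite signal along the chemical-shift axis and keep
    the position of best agreement.
    """
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    shifts = list(range(-max_shift, max_shift + 1))
    scores = [ncc(measured, np.roll(simulated, s)) for s in shifts]
    return shifts[int(np.argmax(scores))]
```

In the real setting each metabolite fingerprint would be shifted (and possibly re-simulated quantum mechanically) before the quantitation algorithm fits concentrations.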

  1. Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.

    Science.gov (United States)

    Götz, Andreas W; Kollmar, Christian; Hess, Bernd A

    2005-09-01

    We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li–F, and Na–Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to an uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.
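The Löwdin-style projection mentioned above is closely related to the standard cure for near-linear dependence in an expansion basis: diagonalize the overlap matrix and discard directions with vanishing eigenvalues before orthogonalizing. A minimal sketch of that canonical-orthogonalization step follows; it is illustrative only and not the LEDO-DFT implementation, whose projector acts on the expansion densities themselves.

```python
import numpy as np

def canonical_orthogonalize(S, tol=1e-8):
    """Build X such that X.T @ S @ X = I on the subspace where the
    overlap matrix S is numerically nonsingular (eigenvalues > tol).

    Dropping the small-eigenvalue directions is a standard remedy for
    near-linear dependence in an expansion basis.
    """
    w, V = np.linalg.eigh(S)           # S is symmetric positive semidefinite
    keep = w > tol
    return V[:, keep] / np.sqrt(w[keep])   # scale each kept eigenvector
```

The returned columns span the well-conditioned part of the basis; quantities expanded in them no longer suffer from the numerical blow-up that near-dependence causes.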

  2. Optimization of Ventilation and Alarm Setting During the Process of Ammonia Leak in Refrigeration Machinery Room Based on Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Dongliang Liu

    2017-03-01

    Full Text Available In order to optimize the ventilation effect after ammonia leakage in a refrigeration machinery room, a food processing enterprise is selected as the subject of investigation. The velocity and concentration field distributions during the process of ammonia leakage are discussed through simulation of the refrigeration machinery room using CFD software. The ventilation system of the room is optimized in three respects, namely air distribution, ventilation volume and discharge outlet location. The influence of ventilation on the ammonia alarm system is also analyzed. The results show that it is better to set the discharge outlet at the top of the plant than on the side wall, and the smaller the distance between the air outlet and the area where ammonia gathers, the better the ventilation effect. The air flow can be improved and vortex flow reduced if the ventilation volume, the number of air vents and the exhaust velocity are reasonably arranged. If the detectors are set on the ceiling above the refrigeration units or the ammonia storage vessel, not only is the function of the alarm ensured, but the scope of the detection area is also enlarged.

  3. An optimized process flow for rapid segmentation of cortical bones of the craniofacial skeleton using the level-set method.

    Science.gov (United States)

    Szwedowski, T D; Fialkov, J; Pakdel, A; Whyne, C M

    2013-01-01

    Accurate representation of skeletal structures is essential for quantifying structural integrity, for developing accurate models, for improving patient-specific implant design and in image-guided surgery applications. The complex morphology of thin cortical structures of the craniofacial skeleton (CFS) represents a significant challenge with respect to accurate bony segmentation. This technical study presents optimized processing steps to segment the three-dimensional (3D) geometry of thin cortical bone structures from CT images. In this procedure, anisotropic filtering and a connected components scheme were utilized to isolate and enhance the internal boundaries between craniofacial cortical and trabecular bone. Subsequently, the shell-like nature of cortical bone was exploited using boundary-tracking level-set methods with optimized parameters determined from large-scale sensitivity analysis. The process was applied to clinical CT images acquired from two cadaveric CFSs. The accuracy of the automated segmentations was determined based on their volumetric concurrencies with visually optimized manual segmentations, without statistical appraisal. The full CFSs demonstrated volumetric concurrencies of 0.904 and 0.719; accuracy increased to concurrencies of 0.936 and 0.846 when considering only the maxillary region. The highly automated approach presented here is able to segment the cortical shell and trabecular boundaries of the CFS in clinical CT images. The results indicate that initial scan resolution and cortical-trabecular bone contrast may impact performance. Future application of these steps to larger data sets will enable the determination of the method's sensitivity to differences in image quality and CFS morphology.

  4. Perceived Enablers and Barriers to Optimal Health among Music Students: A Qualitative Study in the Music Conservatoire Setting.

    Science.gov (United States)

    Perkins, Rosie; Reid, Helen; Araújo, Liliana S; Clark, Terry; Williamon, Aaron

    2017-01-01

    Student health and wellbeing within higher education has been documented as poor in relation to the general population. This is a particular problem among students at music conservatoires, who are studying within a unique educational context that is known to generate both physical and psychological challenges. This article examines how conservatoire students experience health and wellbeing within their institutional context, using a framework from health promotion to focus attention on perceived enablers and barriers to optimal health in relation to three levels: lifestyle, support services, and conservatoire environment. In order to respond to the individuality of students' experiences, a qualitative approach was taken based on semi-structured interviews with 20 current or recent conservatoire students in the United Kingdom. Thematic analysis revealed a complex set of enablers and barriers: (i) lifestyle enablers included value placed on the importance of optimal health and wellbeing for musicians and daily practices to enable this; lifestyle barriers included struggling to maintain healthy lifestyles within the context of musical practice and learning; (ii) support enablers included accessible support sources within and beyond the conservatoire; support barriers included a perceived lack of availability or awareness of appropriate support; (iii) environmental enablers included positive and enjoyable experiences of performance as well as strong relationships and communities; environmental barriers included experiences of comparison and competition, pressure and stress, challenges with negative performance feedback, psychological distress, and perceived overwork. The findings reveal a need for health promotion to focus not only on individuals but also on the daily practices and routines of conservatoires. Additionally, they suggest that continued work is required to embed health and wellbeing support as an integral component of conservatoire education, raising

  5. Perceived Enablers and Barriers to Optimal Health among Music Students: A Qualitative Study in the Music Conservatoire Setting

    Directory of Open Access Journals (Sweden)

    Rosie Perkins

    2017-06-01

    Full Text Available Student health and wellbeing within higher education has been documented as poor in relation to the general population. This is a particular problem among students at music conservatoires, who are studying within a unique educational context that is known to generate both physical and psychological challenges. This article examines how conservatoire students experience health and wellbeing within their institutional context, using a framework from health promotion to focus attention on perceived enablers and barriers to optimal health in relation to three levels: lifestyle, support services, and conservatoire environment. In order to respond to the individuality of students’ experiences, a qualitative approach was taken based on semi-structured interviews with 20 current or recent conservatoire students in the United Kingdom. Thematic analysis revealed a complex set of enablers and barriers: (i) lifestyle enablers included value placed on the importance of optimal health and wellbeing for musicians and daily practices to enable this; lifestyle barriers included struggling to maintain healthy lifestyles within the context of musical practice and learning; (ii) support enablers included accessible support sources within and beyond the conservatoire; support barriers included a perceived lack of availability or awareness of appropriate support; (iii) environmental enablers included positive and enjoyable experiences of performance as well as strong relationships and communities; environmental barriers included experiences of comparison and competition, pressure and stress, challenges with negative performance feedback, psychological distress, and perceived overwork. The findings reveal a need for health promotion to focus not only on individuals but also on the daily practices and routines of conservatoires. Additionally, they suggest that continued work is required to embed health and wellbeing support as an integral component of conservatoire

  6. Analysis and optimization of three main organic Rankine cycle configurations using a set of working fluids with different thermodynamic behaviors

    Science.gov (United States)

    Hamdi, Basma; Mabrouk, Mohamed Tahar; Kairouani, Lakdar; Kheiri, Abdelhamid

    2017-06-01

    Different configurations of organic Rankine cycle (ORC) systems are potential thermodynamic concepts for power generation from low grade heat. The aim of this work is to investigate and optimize the performances of the three main ORC systems configurations: basic ORC, ORC with internal heat exchange (IHE) and regenerative ORC. The evaluation for those configurations was performed using seven working fluids with typical different thermodynamic behaviours (R245fa, R601a, R600a, R227ea, R134a, R1234ze and R1234yf). The optimization has been performed using a genetic algorithm under a comprehensive set of operative parameters such as the fluid evaporating temperature, the fraction of flow rate or the pressure at the steam extracting point in the turbine. Results show that there is no general best ORC configuration for all those fluids. However, there is a suitable configuration for each fluid. Contribution to the topical issue "Materials for Energy harvesting, conversion and storage II (ICOME 2016)", edited by Jean-Michel Nunzi, Rachid Bennacer and Mohammed El Ganaoui
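A real-coded genetic algorithm of the kind used for this optimization can be sketched compactly. Everything below is generic and hypothetical (tournament selection, blend crossover, Gaussian mutation, elitism); the paper's actual operators, encoding and thermodynamic objective are not specified in the abstract.

```python
import numpy as np

def genetic_maximize(f, bounds, pop=40, gens=60, seed=0):
    """Tiny real-coded genetic algorithm maximizing f over box bounds.

    bounds: list of (lo, hi) per decision variable.
    Returns (best vector, best objective value) found.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(bounds)))
    best_x, best_f = P[0].copy(), -np.inf
    for _ in range(gens):
        fit = np.array([f(x) for x in P])
        i = int(np.argmax(fit))
        if fit[i] > best_f:
            best_x, best_f = P[i].copy(), float(fit[i])

        def pick():                       # binary tournament selection
            a, b = rng.integers(0, pop, size=2)
            return P[a] if fit[a] > fit[b] else P[b]

        offspring = [best_x]              # elitism: keep the incumbent
        for _ in range(pop - 1):
            w = rng.uniform(size=len(bounds))
            child = w * pick() + (1 - w) * pick()          # blend crossover
            child += 0.05 * (hi - lo) * rng.standard_normal(len(bounds))
            offspring.append(np.clip(child, lo, hi))       # mutate + clip
        P = np.array(offspring)
    return best_x, best_f
```

For an ORC study, `f` would be cycle efficiency as a function of, e.g., evaporating temperature, extraction flow fraction and extraction pressure, evaluated through a thermodynamic model of the chosen configuration and working fluid.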

  7. Optimal clinical time for reliable measurement of transcutaneous CO2 with ear probes: counterbalancing overshoot and the vasodilatation effect.

    Science.gov (United States)

    Domingo, Christian; Canturri, Elisa; Moreno, Amalia; Espuelas, Humildad; Vigil, Laura; Luján, Manel

    2010-01-01

    To determine the optimal clinical reading time for the transcutaneous measurement of oxygen saturation (SpO(2)) and transcutaneous CO(2) (TcPCO(2)) in awake spontaneously breathing individuals, considering the overshoot phenomenon (transient overestimation of arterial PaCO(2)). EXPERIMENTAL SECTION: Observational study of 91 (75 men) individuals undergoing forced spirometry, measurement of SpO(2) and TcPCO(2) with the SenTec monitor every two minutes until minute 20 and arterial blood gas (ABG) analysis. Overshoot severity: (a) mild (0.1-1.9 mm Hg); (b) moderate (2-4.9 mm Hg); (c) severe: (>5 mm Hg). The mean difference was calculated for SpO(2) and TcPCO(2) and arterial values of PaCO(2) and SpO(2). The intraclass correlation coefficient (ICC) between monitor readings and blood values was calculated as a measure of agreement. The mean age was 63.1 ± 11.8 years. Spirometric values: FVC: 75.4 ± 6.2%; FEV(1): 72.9 ± 23.9%; FEV(1)/FVC: 70 ± 15.5%. ABG: PaO(2): 82.6 ± 13.2; PaCO(2): 39.9.1 ± 4.8 mmHg; SaO(2): 95.3 ± 4.4%. Overshoot analysis: overshoot was mild in 33 (36.3%) patients, moderate in 20 (22%) and severe in nine (10%); no overshoot was observed in 29 (31%) patients. The lowest mean difference between arterial blood gas and TcPCO(2) was -0.57 mmHg at minute 10, although the highest ICC was obtained at minutes 12 and 14 (>0.8). The overshoot lost its influence after minute 12. For SpO(2), measurements were reliable at minute 2. The optimal clinical reading measurement recommended for the ear lobe TcPCO(2) measurement ranges between minute 12 and 14. The SpO(2) measurement can be performed at minute 2.

  8. Optimal Clinical Time for Reliable Measurement of Transcutaneous CO2 with Ear Probes: Counterbalancing Overshoot and the Vasodilatation Effect

    Directory of Open Access Journals (Sweden)

    Manel Luján

    2010-01-01

    Full Text Available OBJECTIVES: To determine the optimal clinical reading time for the transcutaneous measurement of oxygen saturation (SpO2) and transcutaneous CO2 (TcPCO2) in awake spontaneously breathing individuals, considering the overshoot phenomenon (transient overestimation of arterial PaCO2). EXPERIMENTAL SECTION: Observational study of 91 (75 men) individuals undergoing forced spirometry, measurement of SpO2 and TcPCO2 with the SenTec monitor every two minutes until minute 20 and arterial blood gas (ABG) analysis. Overshoot severity: (a) mild (0.1–1.9 mm Hg); (b) moderate (2–4.9 mm Hg); (c) severe: (>5 mm Hg). The mean difference was calculated for SpO2 and TcPCO2 and arterial values of PaCO2 and SpO2. The intraclass correlation coefficient (ICC) between monitor readings and blood values was calculated as a measure of agreement. RESULTS: The mean age was 63.1 ± 11.8 years. Spirometric values: FVC: 75.4 ± 6.2%; FEV1: 72.9 ± 23.9%; FEV1/FVC: 70 ± 15.5%. ABG: PaO2: 82.6 ± 13.2; PaCO2: 39.9.1 ± 4.8 mmHg; SaO2: 95.3 ± 4.4%. Overshoot analysis: overshoot was mild in 33 (36.3%) patients, moderate in 20 (22%) and severe in nine (10%); no overshoot was observed in 29 (31%) patients. The lowest mean difference between arterial blood gas and TcPCO2 was –0.57 mmHg at minute 10, although the highest ICC was obtained at minutes 12 and 14 (>0.8). The overshoot lost its influence after minute 12. For SpO2, measurements were reliable at minute 2. CONCLUSIONS: The optimal clinical reading measurement recommended for the ear lobe TcPCO2 measurement ranges between minute 12 and 14. The SpO2 measurement can be performed at minute 2.

  9. Optimal Clinical Time for Reliable Measurement of Transcutaneous CO2 with Ear Probes: Counterbalancing Overshoot and the Vasodilatation Effect

    Science.gov (United States)

    Domingo, Christian; Canturri, Elisa; Moreno, Amalia; Espuelas, Humildad; Vigil, Laura; Luján, Manel

    2010-01-01

    OBJECTIVES: To determine the optimal clinical reading time for the transcutaneous measurement of oxygen saturation (SpO2) and transcutaneous CO2 (TcPCO2) in awake spontaneously breathing individuals, considering the overshoot phenomenon (transient overestimation of arterial PaCO2). EXPERIMENTAL SECTION: Observational study of 91 (75 men) individuals undergoing forced spirometry, measurement of SpO2 and TcPCO2 with the SenTec monitor every two minutes until minute 20 and arterial blood gas (ABG) analysis. Overshoot severity: (a) mild (0.1–1.9 mm Hg); (b) moderate (2–4.9 mm Hg); (c) severe: (>5 mm Hg). The mean difference was calculated for SpO2 and TcPCO2 and arterial values of PaCO2 and SpO2. The intraclass correlation coefficient (ICC) between monitor readings and blood values was calculated as a measure of agreement. RESULTS: The mean age was 63.1 ± 11.8 years. Spirometric values: FVC: 75.4 ± 6.2%; FEV1: 72.9 ± 23.9%; FEV1/FVC: 70 ± 15.5%. ABG: PaO2: 82.6 ± 13.2; PaCO2: 39.9.1 ± 4.8 mmHg; SaO2: 95.3 ± 4.4%. Overshoot analysis: overshoot was mild in 33 (36.3%) patients, moderate in 20 (22%) and severe in nine (10%); no overshoot was observed in 29 (31%) patients. The lowest mean difference between arterial blood gas and TcPCO2 was −0.57 mmHg at minute 10, although the highest ICC was obtained at minutes 12 and 14 (>0.8). The overshoot lost its influence after minute 12. For SpO2, measurements were reliable at minute 2. CONCLUSIONS: The optimal clinical reading measurement recommended for the ear lobe TcPCO2 measurement ranges between minute 12 and 14. The SpO2 measurement can be performed at minute 2. PMID:22315552
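The agreement statistic used in these three records, the intraclass correlation coefficient, has a standard ANOVA form. Below is a sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement, after Shrout and Fleiss); the abstracts do not state which ICC variant was computed, so treat this as one plausible choice rather than the study's definition.

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    Y: (n subjects x k raters/methods) matrix of measurements,
    e.g. one column of arterial PaCO2 values and one of TcPCO2 readings.
    """
    n, k = Y.shape
    mean = Y.mean()
    row_m = Y.mean(axis=1, keepdims=True)   # per-subject means
    col_m = Y.mean(axis=0, keepdims=True)   # per-method means
    msr = k * ((row_m - mean) ** 2).sum() / (n - 1)          # between subjects
    msc = n * ((col_m - mean) ** 2).sum() / (k - 1)          # between methods
    mse = ((Y - row_m - col_m + mean) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) measures absolute agreement, a systematic offset between methods (such as the overshoot described above) lowers the coefficient even when the readings are perfectly correlated.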

  10. Transcript profiling of two alfalfa genotypes with contrasting cell wall composition in stems using a cross-species platform: optimizing analysis by masking biased probes

    Directory of Open Access Journals (Sweden)

    Jung Hans-Joachim G

    2010-05-01

    Full Text Available Abstract Background The GeneChip® Medicago Genome Array, developed for Medicago truncatula, is a suitable platform for transcript profiling in tetraploid alfalfa [Medicago sativa (L.) subsp. sativa]. However, previous research involving cross-species hybridization (CSH) has shown that sequence variation between two species can bias transcript profiling by decreasing sensitivity (number of expressed genes detected) and the accuracy of measuring fold-differences in gene expression. Results Transcript profiling using the Medicago GeneChip® was conducted with elongating stem (ES) and post-elongation stem (PES) internodes from alfalfa genotypes 252 and 1283 that differ in stem cell wall concentrations of cellulose and lignin. A protocol was developed that masked probes targeting inter-species variable (ISV) regions of alfalfa transcripts. A probe signal intensity threshold was selected that optimized both sensitivity and accuracy. After masking for both ISV regions and previously identified single-feature polymorphisms (SFPs), the number of differentially expressed genes between the two genotypes in both ES and PES internodes was approximately 2-fold greater than the number detected prior to masking. Regulatory genes, including transcription factor and receptor kinase genes that may play a role in development of secondary xylem, were significantly over-represented among genes up-regulated in 252 PES internodes compared to 1283 PES internodes. Several cell wall-related genes were also up-regulated in genotype 252 PES internodes. Real-time quantitative RT-PCR of differentially expressed regulatory and cell wall-related genes demonstrated increased sensitivity and accuracy after masking for both ISV regions and SFPs. Over 1,000 genes that were differentially expressed in ES and PES internodes of genotypes 252 and 1283 were mapped onto putative orthologous loci on M. truncatula chromosomes. Clustering simulation analysis of the differentially expressed genes

  11. Optimizing the Relaxivity of MRI Probes at High Magnetic Field Strengths With Binuclear GdIII Complexes

    Directory of Open Access Journals (Sweden)

    Loredana Leone

    2018-05-01

    Full Text Available The key criteria to optimize the relaxivity of a Gd(III) contrast agent at high fields (defined as the region ≥ 1.5 T) can be summarized as follows: (i) the occurrence of a rotational correlation time τR in the range of ca. 0.2–0.5 ns; (ii) the rate of water exchange is not critical, but a τM < 100 ns is preferred; (iii) a relevant contribution from water molecules in the second sphere of hydration. In addition, the use of macrocycle-based systems ensures the formation of thermodynamically and kinetically stable Gd(III) complexes. Binuclear Gd(III) complexes could potentially meet these requirements. Their efficiency depends primarily on the degree of flexibility of the linker connecting the two monomeric units, the absence of local motions and the presence of contribution from the second sphere water molecules. With the aim to maximize relaxivity (per Gd) over a wide range of magnetic field strengths, two binuclear Gd(III) chelates derived from the well-known macrocyclic systems DOTA-monopropionamide and HPDO3A (Gd2L1 and Gd2L2, respectively) were synthesized through a multistep synthesis. Chemical Exchange Saturation Transfer (CEST) experiments carried out on Eu2L2 at different pH showed the occurrence of a CEST effect at acidic pH that disappears at neutral pH, associated with the deprotonation of the hydroxyl groups. Then, a complete 1H and 17O NMR relaxometric study was carried out in order to evaluate the parameters that govern the relaxivity associated with these complexes. The relaxivities of Gd2L1 and Gd2L2 (20 MHz, 298 K) are 8.7 and 9.5 mM−1 s−1, respectively, +77% and +106% higher than the relaxivity values of the corresponding mononuclear GdDOTAMAP-En and GdHPDO3A complexes. A significant contribution of second-sphere water molecules accounted for the strong relaxivity enhancement of Gd2L2. MR phantom images of the dinuclear complexes compared to GdHPDO3A, recorded at 7 T, confirmed the superiority of Gd2L2. Finally, ab initio

  12. Thermodynamic limits set relevant constraints to the soil-plant-atmosphere system and to optimality in terrestrial vegetation

    Science.gov (United States)

    Kleidon, Axel; Renner, Maik

    2016-04-01

    , which then links this thermodynamic approach to optimality in vegetation. We also contrast this approach to common, semi-empirical approaches of surface-atmosphere exchange and discuss how thermodynamics may set a broader range of transport limitations and optimality in the soil-plant-atmosphere system.

  13. Building versatile bipartite probes for quantum metrology

    Science.gov (United States)

    Farace, Alessandro; De Pasquale, Antonella; Adesso, Gerardo; Giovannetti, Vittorio

    2016-01-01

    We consider bipartite systems as versatile probes for the estimation of transformations acting locally on one of the subsystems. We investigate what resources are required for the probes to offer a guaranteed level of metrological performance, when the latter is averaged over specific sets of local transformations. We quantify such a performance via the average skew information (AvSk), a convex quantity which we compute in closed form for bipartite states of arbitrary dimensions, and which is shown to be strongly dependent on the degree of local purity of the probes. Our analysis contrasts and complements the recent series of studies focused on the minimum, rather than the average, performance of bipartite probes in local estimation tasks, which was instead determined by quantum correlations other than entanglement. We provide explicit prescriptions to characterize the most reliable states maximizing the AvSk, and elucidate the role of state purity, separability and correlations in the classification of optimal probes. Our results can help in the identification of useful resources for sensing, estimation and discrimination applications when complete knowledge of the interaction mechanism realizing the local transformation is unavailable, and access to pure entangled probes is technologically limited.

  14. Building versatile bipartite probes for quantum metrology

    International Nuclear Information System (INIS)

    Farace, Alessandro; Pasquale, Antonella De; Giovannetti, Vittorio; Adesso, Gerardo

    2016-01-01

    We consider bipartite systems as versatile probes for the estimation of transformations acting locally on one of the subsystems. We investigate what resources are required for the probes to offer a guaranteed level of metrological performance, when the latter is averaged over specific sets of local transformations. We quantify such a performance via the average skew information (AvSk), a convex quantity which we compute in closed form for bipartite states of arbitrary dimensions, and which is shown to be strongly dependent on the degree of local purity of the probes. Our analysis contrasts and complements the recent series of studies focused on the minimum, rather than the average, performance of bipartite probes in local estimation tasks, which was instead determined by quantum correlations other than entanglement. We provide explicit prescriptions to characterize the most reliable states maximizing the AvSk, and elucidate the role of state purity, separability and correlations in the classification of optimal probes. Our results can help in the identification of useful resources for sensing, estimation and discrimination applications when complete knowledge of the interaction mechanism realizing the local transformation is unavailable, and access to pure entangled probes is technologically limited. (paper)
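The figure of merit in the two records above is built from the Wigner-Yanase skew information, I(ρ, K) = −½ Tr([√ρ, K]²). A numeric sketch for a single Hermitian generator K follows; the paper's AvSk additionally averages this quantity in closed form over sets of local transformations, which is not reproduced here.

```python
import numpy as np

def skew_information(rho, K):
    """Wigner-Yanase skew information I(rho, K) = -1/2 Tr([sqrt(rho), K]^2).

    rho: density matrix (Hermitian, positive semidefinite, unit trace).
    K:   Hermitian observable / generator of the local transformation.
    """
    w, V = np.linalg.eigh(rho)
    sqrt_rho = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T
    comm = sqrt_rho @ K - K @ sqrt_rho          # [sqrt(rho), K]
    return float(np.real(-0.5 * np.trace(comm @ comm)))
```

Two limiting cases are useful sanity checks: for a pure state the skew information equals the variance of K, while for the maximally mixed state (which commutes with everything) it vanishes, reflecting the strong dependence on local purity noted in the abstract.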

  15. A set cover approach to fast beam orientation optimization in intensity modulated radiation therapy for total marrow irradiation

    International Nuclear Information System (INIS)

    Lee, Chieh-Hsiu Jason; Aleman, Dionne M; Sharpe, Michael B

    2011-01-01

    The beam orientation optimization (BOO) problem in intensity modulated radiation therapy (IMRT) treatment planning is a nonlinear problem, and existing methods to obtain solutions to the BOO problem are time consuming due to the complex nature of the objective function and size of the solution space. These issues become even more difficult in total marrow irradiation (TMI), where many more beams must be used to cover a vastly larger treatment area than typical site-specific treatments (e.g., head-and-neck, prostate, etc). These complications result in excessively long computation times to develop IMRT treatment plans for TMI, so we attempt to develop methods that drastically reduce treatment planning time. We transform the BOO problem into the classical set cover problem (SCP) and use existing methods to solve SCP to obtain beam solutions. Although SCP is NP-Hard, our methods obtain beam solutions that result in quality treatments in minutes. We compare our approach to an integer programming solver for the SCP to illustrate the speed advantage of our approach.

  16. A set cover approach to fast beam orientation optimization in intensity modulated radiation therapy for total marrow irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chieh-Hsiu Jason; Aleman, Dionne M [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, ON M5S 3G8 (Canada); Sharpe, Michael B, E-mail: chjlee@mie.utoronto.ca, E-mail: aleman@mie.utoronto.ca, E-mail: michael.sharpe@rmp.uhn.on.ca [Princess Margaret Hospital, Department of Radiation Oncology, University of Toronto, 610 University Avenue, Toronto, ON M5G 2M9 (Canada)

    2011-09-07

    The beam orientation optimization (BOO) problem in intensity modulated radiation therapy (IMRT) treatment planning is a nonlinear problem, and existing methods to obtain solutions to the BOO problem are time consuming due to the complex nature of the objective function and size of the solution space. These issues become even more difficult in total marrow irradiation (TMI), where many more beams must be used to cover a vastly larger treatment area than typical site-specific treatments (e.g., head-and-neck, prostate, etc). These complications result in excessively long computation times to develop IMRT treatment plans for TMI, so we attempt to develop methods that drastically reduce treatment planning time. We transform the BOO problem into the classical set cover problem (SCP) and use existing methods to solve SCP to obtain beam solutions. Although SCP is NP-Hard, our methods obtain beam solutions that result in quality treatments in minutes. We compare our approach to an integer programming solver for the SCP to illustrate the speed advantage of our approach.
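The classical greedy heuristic is one of the standard "existing methods" for the set cover problem and carries an ln(n) approximation guarantee. A sketch in the BOO spirit, where each candidate beam is identified with the subset of target elements it can cover (our framing of the analogy, not the paper's exact formulation):

```python
def greedy_set_cover(universe, subsets):
    """Classical greedy heuristic for set cover: repeatedly pick the
    subset covering the most still-uncovered elements.

    universe: iterable of elements to cover (e.g., target volume elements).
    subsets:  dict mapping a name (e.g., a candidate beam) to the set of
              elements it covers.
    Returns the list of chosen subset names.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        name, cover = max(subsets.items(),
                          key=lambda kv: len(kv[1] & uncovered))
        if not cover & uncovered:
            raise ValueError("universe not coverable by given subsets")
        chosen.append(name)
        uncovered -= cover
    return chosen
```

Although set cover is NP-hard in general, this greedy pass runs in polynomial time, which is consistent with the abstract's claim of obtaining beam solutions in minutes rather than hours.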

  17. ISP: an optimal out-of-core image-set processing streaming architecture for parallel heterogeneous systems.

    Science.gov (United States)

    Ha, Linh Khanh; Krüger, Jens; Dihl Comba, João Luiz; Silva, Cláudio T; Joshi, Sarang

    2012-06-01

    Image population analysis is the class of statistical methods that plays a central role in understanding the development, evolution, and disease of a population. However, these techniques often require excessive computational power and memory that are compounded with a large number of volumetric inputs. Restricted access to supercomputing power limits its influence in general research and practical applications. In this paper we introduce ISP, an Image-Set Processing streaming framework that harnesses the processing power of commodity heterogeneous CPU/GPU systems and attempts to solve this computational problem. In ISP, we introduce specially designed streaming algorithms and data structures that provide an optimal solution for out-of-core multi-image processing problems both in terms of memory usage and computational efficiency. ISP makes use of the asynchronous execution mechanism supported by parallel heterogeneous systems to efficiently hide the inherent latency of the processing pipeline of out-of-core approaches. Consequently, with computationally intensive problems, the ISP out-of-core solution can achieve the same performance as the in-core solution. We demonstrate the efficiency of the ISP framework on synthetic and real datasets.
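The latency-hiding idea, overlapping out-of-core loading with computation so the pipeline approaches in-core speed, can be illustrated with a bounded prefetch queue. This is a toy CPU-only sketch (ISP itself targets heterogeneous CPU/GPU systems with specially designed streaming data structures); `volume_loader` and `streamed_mean` are our hypothetical names.

```python
import queue
import threading

import numpy as np

def streamed_mean(volume_loader, n_volumes, prefetch=2):
    """Out-of-core running mean over an image set.

    A background thread prefetches volumes into a bounded queue so that
    loading (I/O) overlaps with accumulation (compute). At most
    `prefetch` volumes are resident at once, bounding memory use.
    volume_loader(i) must return the i-th volume as an ndarray.
    """
    q = queue.Queue(maxsize=prefetch)

    def producer():
        for i in range(n_volumes):
            q.put(volume_loader(i))   # blocks when the buffer is full
        q.put(None)                   # end-of-stream sentinel

    threading.Thread(target=producer, daemon=True).start()
    acc, seen = None, 0
    while (vol := q.get()) is not None:
        acc = vol.astype(np.float64) if acc is None else acc + vol
        seen += 1
    return acc / seen
```

Because the queue is bounded, peak memory stays at a couple of volumes regardless of how many images are in the population, while the consumer rarely waits on I/O once the pipeline is primed.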

  18. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    Surrogate-based simulation-optimization is an effective technique for optimizing the surfactant-enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is the key issue in such studies. However, previous studies have generally relied on a stand-alone surrogate model and have rarely tried to improve the surrogate's approximation accuracy by combining several methods. In this regard, we present set pair analysis (SPA) as a new method for building an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural networks (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance, and the other assembles several Kriging models (the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals between the outputs of the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while simultaneously maintaining high accuracy.
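The first ensemble pattern, weighting stand-alone surrogates by their performance, can be sketched with a simple inverse-error weighting. Note this is a stand-in: the paper derives its weights via set pair analysis (identity, discrepancy and contrary degrees), which is not reproduced here; inverse validation RMSE is the simplest comparable heuristic.

```python
import numpy as np

def ensemble_weights(errors):
    """Accuracy-based weights for combining surrogate models: inverse of
    each model's validation error, normalized to sum to one. A stand-in
    for the set-pair-analysis weights used in the paper."""
    inv = 1.0 / np.asarray(errors, dtype=float)
    return inv / inv.sum()

def ensemble_predict(preds, weights):
    """Weighted combination of the stand-alone surrogates' predictions.

    preds:   (n_models, n_points) array of individual predictions.
    weights: (n_models,) array summing to one.
    """
    return np.asarray(weights) @ np.asarray(preds)
```

With weights fixed from a validation set, the ensemble prediction is a single matrix-vector product, so embedding it in the outer optimization loop adds negligible cost compared with running the DNAPL transport simulation itself.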

  19. On the Optimal Policy for the Single-product Inventory Problem with Set-up Cost and a Restricted Production Capacity

    NARCIS (Netherlands)

    Foreest, N. D. van; Wijngaard, J.

    2010-01-01

    The single-product, stationary inventory problem with set-up cost is one of the classical problems in stochastic operations research. Theories have been developed to cope with finite production capacity in periodic review systems, and it has been proved that optimal policies for these cases are not

  20. Assessment of electricity demand-supply in health facilities in resource-constrained settings : optimization and evaluation of energy systems for a case in Rwanda

    NARCIS (Netherlands)

    Palacios, S.G.

    2015-01-01

    In health facilities in resource-constrained settings, a lack of access to sustainable and reliable electricity can result in sub-optimal delivery of healthcare services, as facilities lack lighting for medical procedures and power to run essential equipment and devices to treat their patients.

  1. The role of therapeutic optimism in recruitment to a clinical trial in a peripartum setting: balancing hope and uncertainty.

    Science.gov (United States)

    Hallowell, Nina; Snowdon, Claire; Morrow, Susan; Norman, Jane E; Denison, Fiona C; Lawton, Julia

    2016-06-01

    Hope has therapeutic value because it enables people to cope with uncertainty about their future health. Indeed, hope, or therapeutic optimism (TO), is seen as an essential aspect of the provision and experience of medical care. The role of TO in clinical research has been briefly discussed, but the concept, and whether it can be transferred from care to research and from patients to clinicians, has not been fully investigated. The role played by TO in research emerged during interviews with staff involved in a peripartum trial. This paper unpacks the concept of TO in this setting and considers the role it may play in the wider delivery of clinical trials. The Got-it trial is a UK-based, randomised placebo-controlled trial that investigates the use of sublingual glyceryl trinitrate (GTN) spray to treat retained placenta. Qualitative data were collected in open-ended interviews with obstetricians, research and clinical midwives (n =27) involved in trial recruitment. Data were analysed using the method of constant comparison. TO influenced staff engagement with Got-it at different points in the trial and in different ways. Prior knowledge of, and familiarity with, GTN meant that from the outset staff perceived the trial as low risk. TO facilitated staff involvement in the trial; staff who already understood GTN's effects were optimistic that it would work, and staff collaborated because they hoped that the trial would address what they identified as an important clinical need. TO could fluctuate over the course of the trial, and was sustained or undermined by unofficial observation of clinical outcomes and speculations about treatment allocation. Thus, TO appeared to be influenced by key situational factors: prior knowledge and experience, clinical need and observed participant outcomes. Situational TO plays a role in facilitating staff engagement with clinical research. TO may affect trial recruitment by enabling staff to sustain the levels of uncertainty, or

  2. Optimization for set-points and robust model predictive control for steam generator in nuclear power plants

    International Nuclear Information System (INIS)

    Osgouee, Ahmad

    2010-01-01

    Despite the many advanced control methods proposed for the control of nuclear steam generator (SG) water level, operators still experience difficulties, especially at low powers. Therefore, a suitable controller to replace manual operation is still needed. In this paper, optimization of SG level set-points and the design of a robust model predictive controller for the SG level control system are discussed.

  3. WE-AB-BRB-01: Development of a Probe-Format Graphite Calorimeter for Practical Clinical Dosimetry: Numerical Design Optimization, Prototyping, and Experimental Proof-Of-Concept

    International Nuclear Information System (INIS)

    Renaud, J; Seuntjens, J; Sarfehnia, A

    2015-01-01

    Purpose: In this work, the feasibility of performing absolute dose to water measurements using a constant temperature graphite probe calorimeter (GPC) in a clinical environment is established. Methods: A numerical design optimization study was conducted by simulating the heat transfer in the GPC resulting from irradiation using a finite element method software package. The choice of device shape, dimensions, and materials was made to minimize the heat loss in the sensitive volume of the GPC. The resulting design, which incorporates a novel aerogel-based thermal insulator and 15 temperature-sensitive resistors capable of both Joule heating and measuring temperature, was constructed in-house. A software-based process controller was developed to stabilize the temperatures of the GPC's constituent graphite components to within a few tens of µK. This control system enables the GPC to operate in either the quasi-adiabatic or isothermal mode, two well-known and independent calorimetry techniques. Absorbed dose to water measurements were made using these two methods under standard conditions in a 6 MV 1000 MU/min photon beam and subsequently compared against TG-51 derived values. Results: Compared to an expected dose to water of 76.9 cGy/100 MU, the average GPC-measured doses were 76.5 ± 0.5 and 76.9 ± 0.5 cGy/100 MU for the adiabatic and isothermal modes, respectively. The Monte Carlo calculated graphite to water dose conversion was 1.013, and the adiabatic heat loss correction was 1.003. With an overall uncertainty of about 1%, the most significant contributions were the specific heat capacity (type B, 0.8%) and the repeatability (type A, 0.6%). Conclusion: While the quasi-adiabatic mode of operation had been validated in previous work, this is the first time that the GPC has been successfully used isothermally. This proof-of-concept will serve as the basis for further study into the GPC's application to small fields and MRI-linac dosimetry. This work has been
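    The dose arithmetic in the abstract can be traced in a few lines. The 1.013 graphite-to-water conversion and 1.003 heat-loss correction are the abstract's values; the specific heat capacity of graphite and the temperature rise are illustrative assumptions. A clinical dose of roughly 77 cGy heats graphite by only about a millikelvin, which is why µK-level temperature control matters:

    ```python
    # Illustrative numbers: c_p for graphite and the temperature rise are
    # assumptions; the 1.013 graphite-to-water conversion and 1.003
    # heat-loss correction come from the abstract.
    c_p = 710.0           # J/(kg K), specific heat capacity of graphite (assumed)
    delta_T = 1.07e-3     # K, radiation-induced temperature rise (assumed)
    k_heat_loss = 1.003   # quasi-adiabatic heat-loss correction (abstract)
    f_g_to_w = 1.013      # Monte Carlo graphite-to-water conversion (abstract)

    # 1 Gy = 1 J/kg, so dose follows directly from c_p * delta_T
    dose_graphite = c_p * delta_T * k_heat_loss
    dose_water = dose_graphite * f_g_to_w
    print(round(dose_water * 100, 1), "cGy")   # ~77.2 cGy for these inputs
    ```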

  4. Probe Storage

    NARCIS (Netherlands)

    Gemelli, Marcellino; Abelmann, Leon; Engelen, Johannes Bernardus Charles; Khatib, M.G.; Koelmans, W.W.; Zaboronski, Olog; Campardo, Giovanni; Tiziani, Federico; Laculo, Massimo

    2011-01-01

    This chapter gives an overview of probe-based data storage research over the last three decades, encompassing all aspects of a probe recording system. Following the division found in all mechanically addressed storage systems, the different subsystems (media, read/write heads, positioning, data

  5. Cultural probes

    DEFF Research Database (Denmark)

    Madsen, Jacob Østergaard

    The aim of this study was thus to explore cultural probes (Gaver, Boucher et al. 2004) as a possible methodical approach, supporting knowledge production on situated and contextual aspects of occupation.

  6. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set

    Directory of Open Access Journals (Sweden)

    Jinshui Zhang

    2017-04-01

    Full Text Available This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by outlier pixels located adjacent to the target class in spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covered only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window sizes. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, the tradeoff coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.
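    A hedged sketch of the parameter-selection idea: scikit-learn's OneClassSVM (a close relative of SVDD, not the identical formulation) is trained on target-only samples, and the (nu, gamma) pair, standing in for the paper's tradeoff coefficient C and kernel width s, is chosen by accuracy on a validation set containing both target and nearby outlier samples. All data here are synthetic.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(1)
    # Training: target-class "spectra" only; validation: target + nearby outliers
    X_train = rng.normal(0.0, 0.3, size=(200, 4))
    X_val = np.vstack([rng.normal(0.0, 0.3, size=(50, 4)),
                       rng.normal(1.2, 0.3, size=(50, 4))])
    y_val = np.array([1] * 50 + [-1] * 50)   # 1 = target, -1 = outlier

    best = None
    for nu in (0.05, 0.1, 0.2):              # ~ tradeoff coefficient C
        for gamma in (0.1, 0.5, 1.0, 2.0):   # ~ kernel width s
            model = OneClassSVM(nu=nu, gamma=gamma).fit(X_train)
            acc = np.mean(model.predict(X_val) == y_val)
            if best is None or acc > best[0]:
                best = (acc, nu, gamma, model)

    acc, nu, gamma, model = best
    ```

    The tightened-hypersphere effect in the paper comes from the outlier pixels sitting close to the target class in feature space, which penalizes overly wide kernel settings during this selection.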

  7. Simulation of fruit-set and trophic competition and optimization of yield advantages in six Capsicum cultivars using functional-structural plant modelling.

    Science.gov (United States)

    Ma, Y T; Wubs, A M; Mathieu, A; Heuvelink, E; Zhu, J Y; Hu, B G; Cournède, P H; de Reffye, P

    2011-04-01

    Many indeterminate plants can have wide fluctuations in the pattern of fruit-set and harvest. Fruit-set in these types of plants depends largely on the balance between source (assimilate supply) and sink strength (assimilate demand) within the plant. This study aims to evaluate the ability of functional-structural plant models to simulate different fruit-set patterns among Capsicum cultivars through source-sink relationships. A greenhouse experiment with six Capsicum cultivars characterized by different fruit weight and fruit-set was conducted. Fruit-set patterns and potential fruit sink strength were determined through measurement. Source and sink strength of other organs were determined via the GREENLAB model, with a description of plant organ weight and dimensions according to plant topological structure, established from the measured data, as inputs. Parameter optimization was performed using a generalized least squares method for the entire growth cycle. Fruit sink strength differed among cultivars. Vegetative sink strength was generally lower for large-fruited cultivars than for small-fruited ones. The larger the fruit, the larger the variation in fruit-set and fruit yield. Large-fruited cultivars need a higher source-sink ratio for fruit-set, which means a higher demand for assimilates. Temporal heterogeneity of fruit-set affected both the number and yield of fruit. The simulation study showed that heterogeneity of fruit-set could be reduced by different approaches: for example, increasing source strength; decreasing vegetative sink strength, the source-sink ratio for fruit-set, and flower appearance rate; and harvesting individual fruits earlier, before full ripeness. Simulation results showed that, when we increased source strength or decreased vegetative sink strength, fruit-set and fruit weight increased. However, no significant differences were found between large-fruited and small-fruited groups of cultivars regarding the effects of source

  8. Estimation of an optimal chemotherapy utilisation rate for cancer: setting an evidence-based benchmark for quality cancer care.

    Science.gov (United States)

    Jacob, S A; Ng, W L; Do, V

    2015-02-01

    There is wide variation in the proportion of newly diagnosed cancer patients who receive chemotherapy, indicating the need for a benchmark rate of chemotherapy utilisation. This study describes an evidence-based model that estimates the proportion of new cancer patients in whom chemotherapy is indicated at least once (defined as the optimal chemotherapy utilisation rate). The optimal chemotherapy utilisation rate can act as a benchmark for measuring and improving the quality of care. Models of optimal chemotherapy utilisation were constructed for each cancer site based on indications for chemotherapy identified from evidence-based treatment guidelines. Data on the proportion of patient- and tumour-related attributes for which chemotherapy was indicated were obtained, using population-based data where possible. Treatment indications and epidemiological data were merged to calculate the optimal chemotherapy utilisation rate. Monte Carlo simulations and sensitivity analyses were used to assess the effect of controversial chemotherapy indications and variations in epidemiological data on our model. Chemotherapy is indicated at least once in 49.1% (95% confidence interval 48.8-49.6%) of all new cancer patients in Australia. The optimal chemotherapy utilisation rates for individual tumour sites ranged from a low of 13% in thyroid cancers to a high of 94% in myeloma. The optimal chemotherapy utilisation rate can serve as a benchmark for planning chemotherapy services on a population basis. The model can be used to evaluate service delivery by comparing the benchmark rate with patterns of care data. The overall estimate for other countries can be obtained by substituting the relevant distribution of cancer types. It can also be used to predict future chemotherapy workload and can be easily modified to take into account future changes in cancer incidence, presentation stage or chemotherapy indications. Copyright © 2014 The Royal College of Radiologists. Published by
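    The benchmark model reduces to simple expected-value arithmetic: the optimal utilisation rate is the incidence-weighted sum of per-site indication rates. The numbers below are made-up placeholders, not the paper's data; only the 13% thyroid and 94% myeloma figures echo the abstract:

    ```python
    # Hypothetical illustration of the utilisation-tree arithmetic.
    cancer_mix = {            # share of new cancer cases by site (assumed)
        "breast": 0.25, "colorectal": 0.20, "lung": 0.20,
        "thyroid": 0.05, "myeloma": 0.05, "other": 0.25,
    }
    chemo_indicated = {       # fraction with >=1 chemo indication (assumed,
        "breast": 0.60, "colorectal": 0.55, "lung": 0.70,   # except thyroid
        "thyroid": 0.13, "myeloma": 0.94, "other": 0.30,    # and myeloma)
    }
    # Optimal rate = incidence-weighted average of per-site indication rates
    optimal_rate = sum(cancer_mix[s] * chemo_indicated[s] for s in cancer_mix)
    ```

    Substituting another country's cancer-type distribution into `cancer_mix`, as the abstract suggests, changes only the weights, not the structure of the calculation.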

  9. Congestion management of deregulated power systems by optimal setting of Interline Power Flow Controller using Gravitational Search algorithm

    Directory of Open Access Journals (Sweden)

    Akanksha Mishra

    2017-05-01

    Full Text Available In a deregulated electricity market it may at times become difficult to dispatch all the power that is scheduled to flow, due to congestion in transmission lines. An Interline Power Flow Controller (IPFC) can be used to reduce system loss and the power flow in heavily loaded lines, and to improve the stability and loadability of the system. This paper proposes a Disparity Line Utilization Factor (DLUF) for the optimal placement, and Gravitational Search algorithm-based optimal tuning, of an IPFC to control congestion in transmission lines. DLUF ranks the transmission lines in terms of relative line congestion. The IPFC is accordingly placed in the most congested and the least congested line connected to the same bus. Optimal sizing of the IPFC is carried out using the Gravitational Search algorithm. A multi-objective function has been chosen for tuning the parameters of the IPFC. The proposed method is implemented on an IEEE 30-bus test system. Graphical representations included in the paper show the reduction in the LUF of the transmission lines after placement of the IPFC. A reduction in the active and reactive power loss of the system by about 6% is observed after an optimally tuned IPFC is included in the power system. The effectiveness of the proposed tuning method is also shown through the reduction in the values of the objective functions.
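    A minimal Gravitational Search Algorithm sketch, following the common Rashedi-style formulation as an assumption about what the paper uses. The sphere function stands in for the IPFC multi-objective function, and all constants (G0, alpha, population size) are illustrative:

    ```python
    import numpy as np

    def gsa_minimize(f, dim, n_agents=30, iters=200, lo=-5.0, hi=5.0, seed=0):
        """Minimal Gravitational Search Algorithm (Rashedi-style variant)."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(lo, hi, (n_agents, dim))
        V = np.zeros_like(X)
        G0, alpha, eps = 100.0, 20.0, 1e-12
        for t in range(iters):
            fit = np.array([f(x) for x in X])
            best, worst = fit.min(), fit.max()
            m = (worst - fit) / (worst - best + eps)   # better fitness -> larger mass
            M = m / (m.sum() + eps)
            G = G0 * np.exp(-alpha * t / iters)        # gravity decays over time
            A = np.zeros_like(X)
            for i in range(n_agents):
                diff = X - X[i]
                R = np.linalg.norm(diff, axis=1) + eps
                # Stochastic sum of attractions toward every other agent
                A[i] = (rng.random(n_agents) * G * M / R) @ diff
            V = rng.random(X.shape) * V + A
            X = np.clip(X + V, lo, hi)
        fit = np.array([f(x) for x in X])
        return X[fit.argmin()], fit.min()

    # Toy stand-in for the multi-objective IPFC tuning problem
    x_best, f_best = gsa_minimize(lambda x: np.sum(x ** 2), dim=3)
    ```

    In the paper's setting, `f` would evaluate the multi-objective function (losses, LUF terms) from a load-flow run with the candidate IPFC parameters.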

  10. Always looking on the bright side of life? Exploring optimism and health in three UK post-industrial urban settings.

    Science.gov (United States)

    Walsh, David; McCartney, Gerry; McCullough, Sarah; van der Pol, Marjon; Buchanan, Duncan; Jones, Russell

    2015-09-01

    Many theories have been proposed to explain the high levels of 'excess' mortality (i.e. higher mortality over and above that explained by differences in socio-economic circumstances) shown in Scotland-and, especially, in its largest city, Glasgow-compared with elsewhere in the UK. One such proposal relates to differences in optimism, given previously reported evidence of the health benefits of an optimistic outlook. A representative survey of Glasgow, Liverpool and Manchester was undertaken in 2011. Optimism was measured by the Life Orientation Test (Revised) (LOT-R), and compared between the cities by means of multiple linear regression models, adjusting for any differences in sample characteristics. Unadjusted analyses showed LOT-R scores to be similar in Glasgow and Liverpool (mean score (SD): 14.7 (4.0) for both), but lower in Manchester (13.9 (3.8)). This was consistent in analyses by age, gender and social class. Multiple regression confirmed the city results: compared with Glasgow, optimism was either similar (Liverpool: adjusted difference in mean score: -0.16 (95% CI -0.45 to 0.13)) or lower (Manchester: -0.85 (-1.14 to -0.56)). The reasons for high levels of Scottish 'excess' mortality remain unclear. However, differences in psychological outlook such as optimism appear to be an unlikely explanation. © The Author 2015. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Optimal Load-Tracking Operation of Grid-Connected Solid Oxide Fuel Cells through Set Point Scheduling and Combined L1-MPC Control

    Directory of Open Access Journals (Sweden)

    Siwei Han

    2018-03-01

    Full Text Available An optimal load-tracking operation strategy for a grid-connected tubular solid oxide fuel cell (SOFC) is studied based on the steady-state analysis of the system thermodynamics and electrochemistry. Control of the SOFC is achieved by a two-level hierarchical control system. In the upper level, the optimal set points of output voltage and current corresponding to the unit load demand are obtained through a nonlinear optimization that minimizes the SOFC's internal power waste. In the lower level, a combined L1-MPC control strategy is designed to achieve fast set point tracking under system nonlinearities, while maintaining a constant fuel utilization factor. To prevent fuel starvation during the transient state resulting from output power surges, a fuel flow constraint is imposed on the MPC with direct electron balance calculation. The proposed control schemes are tested on the grid-connected SOFC model.
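    The upper-level set-point optimization can be illustrated with a toy lumped cell model: for a fixed EMF E and internal resistance R (both assumed values, not the paper's SOFC parameters), the demand constraint (E - RI)I = P admits two feasible currents, and the smaller one minimizes the internal waste I²R:

    ```python
    import numpy as np

    # Toy lumped model: EMF and internal resistance are assumptions,
    # not the paper's SOFC parameters.
    E, R = 0.9, 0.02        # V, ohm (stack-equivalent, assumed)

    def optimal_setpoint(p_demand):
        """Current/voltage set point meeting p_demand with minimal I^2 R waste.

        (E - R*I)*I = p_demand  ->  R*I^2 - E*I + p_demand = 0; of the two
        positive roots, the smaller current wastes less internal power.
        """
        roots = np.roots([R, -E, p_demand])
        i_opt = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
        v_opt = E - R * i_opt
        return i_opt, v_opt

    i_opt, v_opt = optimal_setpoint(8.0)     # 8 W demanded
    ```

    The lower-level L1-MPC controller would then track (i_opt, v_opt) while enforcing the fuel-flow constraint; that part is beyond this sketch.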

  12. An optimal set of features for predicting type IV secretion system effector proteins for a subset of species based on a multi-level feature selection approach.

    Directory of Open Access Journals (Sweden)

    Zhila Esna Ashari

    Full Text Available Type IV secretion systems (T4SS are multi-protein complexes in a number of bacterial pathogens that can translocate proteins and DNA to the host. Most T4SSs function in conjugation and translocate DNA; however, approximately 13% function to secrete proteins, delivering effector proteins into the cytosol of eukaryotic host cells. Upon entry, these effectors manipulate the host cell's machinery for their own benefit, which can result in serious illness or death of the host. For this reason recognition of T4SS effectors has become an important subject. Much previous work has focused on verifying effectors experimentally, a costly endeavor in terms of money, time, and effort. Having good predictions for effectors will help to focus experimental validations and decrease testing costs. In recent years, several scoring and machine learning-based methods have been suggested for the purpose of predicting T4SS effector proteins. These methods have used different sets of features for prediction, and their predictions have been inconsistent. In this paper, an optimal set of features is presented for predicting T4SS effector proteins using a statistical approach. A thorough literature search was performed to find features that have been proposed. Feature values were calculated for datasets of known effectors and non-effectors for T4SS-containing pathogens for four genera with a sufficient number of known effectors, Legionella pneumophila, Coxiella burnetii, Brucella spp, and Bartonella spp. The features were ranked, and less important features were filtered out. Correlations between remaining features were removed, and dimensional reduction was accomplished using principal component analysis and factor analysis. Finally, the optimal features for each pathogen were chosen by building logistic regression models and evaluating each model. The results based on evaluation of our logistic regression models confirm the effectiveness of our four optimal sets of
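    A compressed sketch of the multi-level pipeline on synthetic data: univariate ranking/filtering, PCA for de-correlation and dimensional reduction, and a logistic-regression model scored by cross-validation. The dataset, feature counts, and hyperparameters are all illustrative assumptions, not the paper's:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for effector/non-effector feature vectors
    X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                               random_state=0)

    # Mirror the multi-level approach: rank/filter features, reduce
    # correlation and dimensionality, then score a logistic-regression model
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=20),   # keep top-ranked features
                         PCA(n_components=5),            # decorrelate, reduce
                         LogisticRegression(max_iter=1000))
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    ```

    In the paper this loop is repeated per genus, and the winning feature subset per pathogen is read off the best-scoring model.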

  13. Analysis of Regional Timelines To Set Up a Global Phase III Clinical Trial in Breast Cancer: the Adjuvant Lapatinib and/or Trastuzumab Treatment Optimization Experience

    OpenAIRE

    Metzger-Filho, Otto; Azambuja, Evandro de; Bradbury, Ian; Saini, Kamal S.; Bines, Jose; Simon, Sergio D. [UNIFESP; Van Dooren, Veerle; Aktan, Gursel; Pritchard, Kathleen I.; Wolff, Antonio C.; Smith, Ian; Jackisch, Christian; Lang, Istvan; Untch, Michael; Boyle, Frances

    2013-01-01

    Purpose. This study measured the time taken for setting up the different facets of Adjuvant Lapatinib and/or Trastuzumab Treatment Optimization (ALTTO), an international phase III study being conducted in 44 participating countries.Methods. Time to regulatory authority (RA) approval, time to ethics committee/institutional review board (EC/IRB) approval, time from study approval by EC/IRB to first randomized patient, and time from first to last randomized patient were prospectively collected i...

  14. Electronic structure of crystalline uranium nitrides UN, U{sub 2}N{sub 3} and UN{sub 2}: LCAO calculations with the basis set optimization

    Energy Technology Data Exchange (ETDEWEB)

    Evarestov, R A; Panin, A I; Bandura, A V; Losev, M V [Department of Quantum Chemistry, St. Petersburg State University, University Prospect 26, Stary Peterghof, St. Petersburg, 198504 (Russian Federation)], E-mail: re1973@re1973.spb.edu

    2008-06-01

    The results of LCAO DFT calculations of lattice parameters, cohesive energy and bulk modulus of the crystalline uranium nitrides UN, U{sub 2}N{sub 3} and UN{sub 2} are presented and discussed. The LCAO computer codes Gaussian03 and Crystal06 are applied. The calculations are made with the uranium atom relativistic effective small core potential by Stuttgart-Cologne group (60 electrons in the core). The calculations include the U atom basis set optimization. Powell, Hooke-Jeeves, conjugated gradient and Box methods are implemented in the author's optimization package, being external to the codes for molecular and periodic calculations. The basis set optimization in LCAO calculations improves the agreement of the lattice parameter and bulk modulus of UN crystal with the experimental data, the change of the cohesive energy due to the optimization is small. The mixed metallic-covalent chemical bonding is found both in LCAO calculations of UN and U{sub 2}N{sub 3} crystals; UN{sub 2} crystal has the semiconducting nature.

  15. Electronic structure of crystalline uranium nitrides UN, U2N3 and UN2: LCAO calculations with the basis set optimization

    International Nuclear Information System (INIS)

    Evarestov, R A; Panin, A I; Bandura, A V; Losev, M V

    2008-01-01

    The results of LCAO DFT calculations of lattice parameters, cohesive energy and bulk modulus of the crystalline uranium nitrides UN, U 2 N 3 and UN 2 are presented and discussed. The LCAO computer codes Gaussian03 and Crystal06 are applied. The calculations are made with the uranium atom relativistic effective small core potential by Stuttgart-Cologne group (60 electrons in the core). The calculations include the U atom basis set optimization. Powell, Hooke-Jeeves, conjugated gradient and Box methods are implemented in the author's optimization package, being external to the codes for molecular and periodic calculations. The basis set optimization in LCAO calculations improves the agreement of the lattice parameter and bulk modulus of UN crystal with the experimental data, the change of the cohesive energy due to the optimization is small. The mixed metallic-covalent chemical bonding is found both in LCAO calculations of UN and U 2 N 3 crystals; UN 2 crystal has the semiconducting nature

  16. Aero-structural optimization of wind turbine blades using a reduced set of design load cases including turbulence

    DEFF Research Database (Denmark)

    Sessarego, Matias; Shen, Wen Zhong

    2018-01-01

    Modern wind turbine aero-structural blade design codes generally use a smaller fraction of the full design load base (DLB) or neglect turbulent inflow as defined by the International Electrotechnical Commission standards. The current article describes an automated blade design optimization method based on surrogate modeling that includes a very large number of design load cases (DLCs) including turbulence. In the present work, 325 DLCs representative of the full DLB are selected based on the message-passing-interface (MPI) limitations in Matlab. Other methods are currently being investigated, e.g. a Python MPI implementation, to overcome the limitations in Matlab MPI and ultimately achieve a full DLB optimization framework. The reduced DLB and the annual energy production are computed using the state-of-the-art aero-servo-elastic tool HAWC2. Furthermore, some of the interior dimensions of the blade

  17. Mobile probes

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Jørgensen, Anna Neustrup; Noesgaard, Signe Schack

    2016-01-01

    A project investigating the effectiveness of a collection of online resources for teachers' professional development used mobile probes as a data collection method. Teachers received questions and tasks on their mobile in a dialogic manner while in their everyday context, as opposed to in an interview. This method provided valuable insight into the contextual use, i.e. how the online resource transferred to the work practice. However, the research team also found that mobile probes may provide the scaffolding necessary for individual and peer learning at a very local (intra-school) community level. This paper is an initial investigation of how the mobile probes process proved to engage teachers in their efforts to improve teaching. It also highlights some of the barriers emerging when applying mobile probes as a scaffold for learning.

  18. Optical probe

    International Nuclear Information System (INIS)

    Denis, J.; Decaudin, J.M.

    1984-01-01

    The probe includes optical means of refractive index n, refracting an incident light beam from a medium with a refractive index n1 > n and reflecting an incident light beam from a medium with a refractive index n2

  19. Counting probe

    International Nuclear Information System (INIS)

    Matsumoto, Haruya; Kaya, Nobuyuki; Yuasa, Kazuhiro; Hayashi, Tomoaki

    1976-01-01

    An electron counting method has been devised and tested for the purpose of measuring electron temperature and density, the most fundamental quantities characterizing plasma conditions. Electron counting is a method to count the electrons in plasma directly by equipping a probe with a secondary electron multiplier. It has three advantages: adjustable sensitivity, the high sensitivity of the secondary electron multiplier, and directionality. Sensitivity adjustment is performed by changing the size of the collecting hole (pinhole) on the incident front of the multiplier. The probe is usable as a direct-reading thermometer of electron temperature because it requires collecting only a very small number of electrons, and thus does not disturb the surrounding plasma, and a narrow sweep width of the probe voltage suffices. It can therefore measure anisotropy more sensitively than a Langmuir probe, and it can be used for very low density plasma. Though many problems remain concerning anisotropy, computer simulation has been carried out. It is also planned to install a Helmholtz coil in the vacuum chamber to eliminate the effect of the Earth's magnetic field. In practical experiments, measurement with a Langmuir probe and an emission probe mounted on a movable structure, comparison with results obtained in a reversed magnetic field using the Helmholtz coil, and measurement of ion acoustic waves are scheduled. (Wakatsuki, Y.)

  20. Optimum Electrode Configurations for Two-Probe, Four-Probe and Multi-Probe Schemes in Electrical Resistance Tomography for Delamination Identification in Carbon Fiber Reinforced Composites

    Directory of Open Access Journals (Sweden)

    Luis Waldo Escalona-Galvis

    2018-04-01

    Full Text Available Internal damage in Carbon Fiber Reinforced Polymer (CFRP composites modifies the internal electrical conductivity of the composite material. Electrical Resistance Tomography (ERT is a non-destructive evaluation (NDE technique that determines the extent of damage based on electrical conductivity changes. Implementation of ERT for damage identification in CFRP composites requires the optimal selection of the sensing sites for accurate results. This selection depends on the measuring scheme used. The present work uses an effective independence (EI measure for selecting the minimum set of measurements for ERT damage identification using three measuring schemes: two-probe, four-probe and multi-probe. The electrical potential field in two CFRP laminate layups with 14 electrodes is calculated using finite element analyses (FEA for a set of specified delamination damage cases. The measuring schemes consider the cases of 14 electrodes distributed on both sides and seven electrodes on only one side of the laminate for each layup. The effectiveness of EI reduction is demonstrated by comparing the inverse identification results of delamination cases for the full and the reduced sets using the measuring schemes and electrode sets. This work shows that the EI measure optimally reduces electrode and electrode combinations in ERT based damage identification for different measuring schemes.
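    The EI reduction itself is a short iterative computation: each candidate row's EI value is its diagonal entry in the projection matrix Phi (Phi^T Phi)^-1 Phi^T, and the lowest-EI candidate is dropped until the target count is reached. The sensitivity matrix below is random, standing in for the FEA-derived electrode sensitivities:

    ```python
    import numpy as np

    def effective_independence(Phi, n_keep):
        """Iteratively drop candidate sensing sites with the lowest EI value.

        Phi: (n_candidates, n_params) sensitivity matrix; rows are candidate
        measurements. EI_i is the i-th diagonal of the projection matrix
        Phi (Phi^T Phi)^-1 Phi^T, i.e. each row's contribution to the
        independence of the parameter estimates.
        """
        keep = np.arange(Phi.shape[0])
        while keep.size > n_keep:
            A = Phi[keep]
            # diag(A (A^T A)^-1 A^T) without forming the full projection matrix
            ei = np.einsum('ij,ji->i', A, np.linalg.solve(A.T @ A, A.T))
            keep = np.delete(keep, ei.argmin())   # least informative row goes
        return keep

    rng = np.random.default_rng(2)
    Phi = rng.normal(size=(14, 4))   # e.g. 14 electrode combinations, 4 parameters
    selected = effective_independence(Phi, n_keep=6)
    ```

    For the multi-probe schemes in the paper, each row of `Phi` would correspond to one electrode combination's sensitivity to the delamination parameters, and the retained rows define the reduced measurement set.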

  1. A pseudo-optimal inexact stochastic interval T2 fuzzy sets approach for energy and environmental systems planning under uncertainty: A case study for Xiamen City of China

    International Nuclear Information System (INIS)

    Jin, L.; Huang, G.H.; Fan, Y.R.; Wang, L.; Wu, T.

    2015-01-01

    Highlights: • Propose a new energy PIS-IT2FSLP model for Xiamen City under uncertainties. • Analyze the energy supply, demand, and flow structure of this city. • Use real energy statistics to prove the superiority of the PIS-IT2FSLP method. • Obtain optimal solutions that reflect environmental requirements. • Help local authorities devise an optimal energy strategy for this local area. - Abstract: In this study, a new Pseudo-optimal Inexact Stochastic Interval Type-2 Fuzzy Sets Linear Programming (PIS-IT2FSLP) energy model is developed to support energy system planning and environmental requirements under uncertainties for Xiamen City. The PIS-IT2FSLP model is based on an integration of interval Type 2 (T2) Fuzzy Sets (FS) boundary programming and stochastic linear programming techniques, which enables it to robustly tackle uncertainties expressed as T2 FS intervals and probabilistic distributions within a general optimization framework. This new model can facilitate system analysis of energy supply and energy conversion processes and environmental requirements, as well as provide capacity expansion options over multiple periods. The PIS-IT2FSLP model was applied to a real case study of the Xiamen energy system. Based on a robust two-step solution algorithm, reasonable solutions have been obtained, which reflect tradeoffs between economic and environmental requirements, and among the seasonally volatile energy demands in the right-hand-side constraints of the Xiamen energy system. The lower and upper solutions of the PIS-IT2FSLP model would thus help local energy authorities adjust current energy patterns and discover an optimal energy strategy for the development of Xiamen City

  2. An adaptive control algorithm for optimization of intensity modulated radiotherapy considering uncertainties in beam profiles, patient set-up and internal organ motion

    International Nuclear Information System (INIS)

    Loef, Johan; Lind, Bengt K.; Brahme, Anders

    1998-01-01

    A new general beam optimization algorithm for inverse treatment planning is presented. It utilizes a new formulation of the probability to achieve complication-free tumour control. The new formulation explicitly describes the dependence of the treatment outcome on the incident fluence distribution, the patient geometry, the radiobiological properties of the patient and the fractionation schedule. In order to account for both measured and non-measured positioning uncertainties, the algorithm is based on a combination of dynamic and stochastic optimization techniques. Because of the difficulty in measuring all aspects of the intra- and interfractional variations in the patient geometry, such as internal organ displacements and deformations, these uncertainties are primarily accounted for in the treatment planning process by intensity modulation using stochastic optimization. The information about the deviations from the nominal fluence profiles and the nominal position of the patient relative to the beam that is obtained by portal imaging during treatment delivery, is used in a feedback loop to automatically adjust the profiles and the location of the patient for all subsequent treatments. Based on the treatment delivered in previous fractions, the algorithm furnishes optimal corrections for the remaining dose delivery both with regard to the fluence profile and its position relative to the patient. By dynamically refining the beam configuration from fraction to fraction, the algorithm generates an optimal sequence of treatments that very effectively reduces the influence of systematic and random set-up uncertainties to minimize and almost eliminate their overall effect on the treatment. Computer simulations have shown that the present algorithm leads to a significant increase in the probability of uncomplicated tumour control compared with the simple classical approach of adding fixed set-up margins to the internal target volume. (author)

  3. Results on Parity-Check Matrices With Optimal Stopping And/Or Dead-End Set Enumerators

    NARCIS (Netherlands)

    Weber, J.H.; Abdel-Ghaffar, K.A.S.

    2008-01-01

    The performance of iterative decoding techniques for linear block codes correcting erasures depends very much on the sizes of the stopping sets associated with the underlying Tanner graph, or, equivalently, the parity-check matrix representing the code. In this correspondence, we introduce the

  4. Zeroth-order exchange energy as a criterion for optimized atomic basis sets in interatomic force calculations

    International Nuclear Information System (INIS)

    Varandas, A.J.C.

    1980-01-01

A suggestion is made for using the zeroth-order exchange term, at the one-exchange level, in the perturbation development of the interaction energy as a criterion for optimizing the atomic basis sets in interatomic force calculations. The approach is illustrated for the case of two helium atoms. (orig.)

  5. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
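As a compact illustration of the two decomposition routes the abstract contrasts, the sketch below runs an eigenvalue-eigenvector decomposition on both the covariance matrix and a pairwise dissimilarity matrix for synthetic two-class data. The data are stand-ins for TSFS spectra, and the choice of 1 − correlation as the dissimilarity measure is an assumption; the abstract does not specify which measure is used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for TSFS spectra (illustrative only): two classes whose
# dominant source of variance is shared, plus a smaller class-specific
# difference in spectral shape.
n, p = 20, 50
shared = rng.normal(size=p)                            # dominant, class-independent variation
offset = np.where(np.arange(p) < p // 2, 1.0, -1.0)    # subtler class-B shape change
X = np.vstack([
    rng.normal(size=(n, 1)) * shared + 0.3 * rng.normal(size=(n, p)),           # class A
    rng.normal(size=(n, 1)) * shared + 0.3 * rng.normal(size=(n, p)) + offset,  # class B
])

# 1) Conventional EED: eigen-analysis of the covariance matrix of the data.
#    The leading factor chases the overall largest source of variance,
#    regardless of class membership.
Xc = X - X.mean(axis=0)
_, V_cov = np.linalg.eigh(np.cov(Xc, rowvar=False))    # eigenvalues in ascending order
pc1 = Xc @ V_cov[:, -1]                                # sample scores on the leading factor

# 2) EED performed on the pairwise dissimilarity matrix instead.
D = 1.0 - np.corrcoef(X)                               # (2n x 2n) dissimilarity matrix
_, V_dis = np.linalg.eigh(D)
scores = V_dis[:, -1]                                  # leading eigenvector as class scores

print("covariance-EED scores, class A then B:", pc1[:3].round(2), pc1[n:n + 3].round(2))
print("dissimilarity-EED scores, class A then B:", scores[:3].round(2), scores[n:n + 3].round(2))
```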

  7. DNA probes

    International Nuclear Information System (INIS)

    Castelino, J.

    1992-01-01

The creation of DNA probes for detection of specific nucleotide segments differs from ligand detection in that it is a chemical rather than an immunological reaction. Complementary DNA or RNA is used in place of the antibody and is labelled with 32P. So far, DNA probes have been successfully employed in the diagnosis of inherited disorders, infectious diseases, and for identification of human oncogenes. The latest approach to the diagnosis of communicable and parasitic infections is based on the use of deoxyribonucleic acid (DNA) probes. The genetic information of all cells is encoded by DNA, and the DNA probe approach to identification of pathogens is unique because the focus of the method is the nucleic acid content of the organism rather than the products that the nucleic acid encodes. Since every properly classified species has some unique nucleotide sequences that distinguish it from every other species, each organism's genetic composition is in essence a fingerprint that can be used for its identification. In addition to this specificity, DNA probes offer other advantages in that pathogens may be identified directly in clinical specimens

  8. DNA probes

    Energy Technology Data Exchange (ETDEWEB)

    Castelino, J

    1993-12-31

The creation of DNA probes for detection of specific nucleotide segments differs from ligand detection in that it is a chemical rather than an immunological reaction. Complementary DNA or RNA is used in place of the antibody and is labelled with 32P. So far, DNA probes have been successfully employed in the diagnosis of inherited disorders, infectious diseases, and for identification of human oncogenes. The latest approach to the diagnosis of communicable and parasitic infections is based on the use of deoxyribonucleic acid (DNA) probes. The genetic information of all cells is encoded by DNA, and the DNA probe approach to identification of pathogens is unique because the focus of the method is the nucleic acid content of the organism rather than the products that the nucleic acid encodes. Since every properly classified species has some unique nucleotide sequences that distinguish it from every other species, each organism's genetic composition is in essence a fingerprint that can be used for its identification. In addition to this specificity, DNA probes offer other advantages in that pathogens may be identified directly in clinical specimens. 10 figs, 2 tabs

  9. Synthesis and inhibition of N-alkyl-2-(4-hydroxybut-2-ynyl) pyridinium bromide for mild steel in acid solution: Box–Behnken design optimization and mechanism probe

    International Nuclear Information System (INIS)

    Gu, Tianbin; Chen, Zhengjun; Jiang, Xiaohui; Zhou, Limei; Liao, Yunwen; Duan, Ming; Wang, Hu; Pu, Qiang

    2015-01-01

Highlights: • The prepared N-alkyl-2-(4-hydroxybut-2-ynyl) pyridinium bromide is a new type of inhibitor. • A Box–Behnken experiment design-based optimization model is used to maximize inhibition efficiency. • O-n adsorbing on the X70 steel surface enhances the resistance of the steel to acid corrosion. • O-n acts as a mixed-type inhibitor to suppress both the anodic and cathodic reactions of X70 steel. - Abstract: N-alkyl-2-(4-hydroxybut-2-ynyl) pyridinium bromides (designated as O-n) were synthesized and characterized by 1H and 13C NMR and FTIR. Box–Behnken design (BBD)-based optimization was employed to analyze the factors, and the interactions of the factors, that influence the corrosion inhibition efficiency of O-n for X70 steel. The inhibition mechanism was also probed by means of X-ray photoelectron spectroscopy (XPS), Tafel polarization and electrochemical impedance spectroscopy (EIS) techniques
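Box–Behnken designs are standard response-surface designs, so their construction can be sketched independently of this paper. A minimal generator for the three-level coded design follows; the use of three factors and three centre runs here is an illustrative assumption, not the paper's actual setting:

```python
from itertools import combinations

def box_behnken(k, center_points=3):
    """Three-level Box-Behnken design in coded units (-1, 0, +1) for k factors.
    Each pair of factors gets a full 2x2 factorial while all other factors sit
    at their centre level, plus replicated centre runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs.extend([[0] * k for _ in range(center_points)])
    return runs

design = box_behnken(3)
for row in design:
    print(row)
print(f"{len(design)} runs")   # 12 edge runs + 3 centre runs = 15
```

A quadratic response-surface model fitted to the measured inhibition efficiencies at these runs is what lets the interactions between factors be analyzed and the efficiency maximized.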

  10. When beauty is only skin deep; optimizing the sensitivity of specular neutron reflectivity for probing structure beneath the surface of thin films

    Science.gov (United States)

    Majkrzak, Charles F.; Carpenter, Elisabeth; Heinrich, Frank; Berk, Norman F.

    2011-11-01

    Specular neutron reflectometry has become an established probe of the nanometer scale structure of materials in thin film and multilayered form. It has contributed especially to our understanding of soft condensed matter of interest in polymer science, organic chemistry, and biology and of magnetic hard condensed matter systems. In this paper we examine a number of key factors which have emerged that can limit the sensitivity of neutron reflection as such a probe. Among these is loss of phase information, and we discuss how knowledge about material surrounding a film of interest can be applied to help resolve the problem. In this context we also consider what role the quantum phenomenon of interaction-free measurement might play in enhancing the statistical efficiency for obtaining reflectivity or transmission data.

  11. An objective method to optimize the MR sequence set for plaque classification in carotid vessel wall images using automated image segmentation.

    Directory of Open Access Journals (Sweden)

    Ronald van 't Klooster

A typical MR imaging protocol to study the status of atherosclerosis in the carotid artery consists of the application of multiple MR sequences. Since scanner time is limited, a balance has to be reached between the duration of the applied MR protocol and the quantity and quality of the resulting images which are needed to assess the disease. In this study an objective method to optimize the MR sequence set for classification of soft plaque in vessel wall images of the carotid artery using automated image segmentation was developed. The automated method employs statistical pattern recognition techniques and was developed based on an extensive set of MR contrast weightings and corresponding manual segmentations of the vessel wall and soft plaque components, which were validated by histological sections. Evaluation of the results from nine contrast weightings showed the tradeoff between scan duration and automated image segmentation performance. For our dataset the best segmentation performance was achieved by selecting five contrast weightings. Similar performance was achieved with a set of three contrast weightings, which resulted in a reduction of scan time by more than 60%. The presented approach can help others to optimize MR imaging protocols by investigating the tradeoff between scan duration and automated image segmentation performance possibly leading to shorter scanning times and better image interpretation. This approach can potentially also be applied to other research fields focusing on different diseases and anatomical regions.

  12. Innovation and optimization of a method of pump-probe polarimetry with pulsed laser beams in view of a precise measurement of parity violation in atomic cesium

    International Nuclear Information System (INIS)

    Chauvat, D.

    1997-10-01

While Parity Violation (PV) experiments on highly forbidden transitions have relied on the detection of fluorescence signals, our experiment uses a pump-probe scheme to detect the PV signal directly on a transmitted probe beam. A pulsed laser beam of linear polarisation ε1 excites the atoms on the 6S-7S cesium transition in a collinear electric field E || k(ex). The probe beam (k(pr) || k(ex)) of linear polarisation ε2, tuned to the 7S-6P(3/2) transition, is amplified. The small asymmetry (∼10⁻⁶) in the gain that depends on the handedness of the trihedron (E, ε1, ε2) is the manifestation of the PV effect. It is measured as an E-odd apparent rotation of the plane of polarization of the probe beam, using balanced-mode polarimetry. New selection criteria have been devised that allow us to distinguish the true PV signal from fake rotations due to electromagnetic interference, geometrical effects, polarization imperfections, or stray transverse electric and magnetic fields. These selection criteria exploit the symmetry of the PV rotation - a linear dichroism - and the revolution symmetry of the experiment. Using these criteria it is possible not only to reject fake signals, but also to elucidate the underlying physical mechanisms and to measure the relevant defects of the apparatus. The present signal-to-noise ratio allows embarking on PV measurements to reach a 10% statistical accuracy. A 1% measurement still requires improvements. Two methods have been demonstrated. The first exploits the amplification of the asymmetry at high gain - one major advantage of our detection method based on stimulated emission. The second uses both a much higher incident intensity and a special dichroic component which magnifies tiny polarization rotations. (author)

  13. Conductivity Probe

    Science.gov (United States)

    2008-01-01

    The Thermal and Electrical Conductivity Probe (TECP) for NASA's Phoenix Mars Lander took measurements in Martian soil and in the air. The needles on the end of the instrument were inserted into the Martian soil, allowing TECP to measure the propagation of both thermal and electrical energy. TECP also measured the humidity in the surrounding air. The needles on the probe are 15 millimeters (0.6 inch) long. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  14. An Optimized Spline-Based Registration of a 3D CT to a Set of C-Arm Images

    Directory of Open Access Journals (Sweden)

    2006-01-01

We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers.
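The core idea, minimizing a least-squares dissimilarity between acquired images and projections of the volume, can be shown with a deliberately tiny toy. Everything below is a simplification and an assumption on my part: it works in 2-D, recovers only an integer translation by brute force, and uses axis sums as "projections", omitting the paper's rigid-body 3-D model, cubic B-spline interpolation, analytic gradients, and coarse-to-fine pyramids entirely:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "CT volume" (a 2-D image) and a stand-in "C-arm" acquisition of it,
# displaced by an unknown translation that the registration must recover.
vol = rng.random((32, 32))
true_shift = (3, -2)
reference = np.roll(vol, true_shift, axis=(0, 1))

def ssd(a, b):
    """Sum-of-squared-differences (least-squares) dissimilarity."""
    return float(np.sum((a - b) ** 2))

# Brute-force search over candidate shifts instead of gradient descent:
# each candidate is scored by the SSD between the "projections" (axis sums)
# of the shifted volume and of the reference.
best, best_cost = None, np.inf
for dy in range(-5, 6):
    for dx in range(-5, 6):
        shifted = np.roll(vol, (dy, dx), axis=(0, 1))
        cost = (ssd(shifted.sum(axis=0), reference.sum(axis=0))
                + ssd(shifted.sum(axis=1), reference.sum(axis=1)))
        if cost < best_cost:
            best, best_cost = (dy, dx), cost

print("recovered shift:", best)     # matches true_shift (3, -2)
```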

  15. Sequential Convex Programming for Power Set-point Optimization in a Wind Farm using Black-box Models, Simple Turbine Interactions, and Integer Variables

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Jørgensen, John Bagterp

    2012-01-01

    We consider the optimization of power set-points to a large number of wind turbines arranged within close vicinity of each other in a wind farm. The goal is to maximize the total electric power extracted from the wind, taking the wake effects that couple the individual turbines in the farm into a...... is far superior to, a more naive distribution scheme. We employ a fast convex quadratic programming solver to carry out the iterations in the range of microseconds for even large wind farms....

  16. Setting up processes and standardization of the equipment in order to optimize analyses of the wavelength dispersion X-ray fluorescence (WDXRF) system

    International Nuclear Information System (INIS)

    Phan Trong Phuc; Luu Anh Tuyen; La Ly Nguyen; Nguyen Thi Ngoc Hue; Pham Thi Hue; Do Duy Khiem

    2015-01-01

To operate and optimize analyses on the wavelength dispersion X-ray fluorescence (WDXRF) equipment (model S8 TIGER, from the Enhancing Equipment Project (TCTTB) 2011-2012), we set up sampling and analytical processes for different kinds of samples, constructed a multi-element calibration curve for clay samples, and analysed the elemental concentrations of 5 clay samples by the XRF method, comparing the results with those given by the NAA method. Equipment sensitivity was tested by analysing the elemental concentrations of 2 kaolin standard samples. The results show that the S8 TIGER equipment is in good condition and is able to analyze powdered clay samples accurately. (author)

  17. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2012-10-01

    In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Optimizing the management of neuromyelitis optica and spectrum disorders in resource poor settings: Experience from the Mangalore demyelinating disease registry

    Directory of Open Access Journals (Sweden)

    Lekha Pandit

    2013-01-01

Background: In resource-poor settings, the management of neuromyelitis optica (NMO) and NMO spectrum (NMOS) disorders is limited because of delayed diagnosis and financial constraints. Aim: To devise a cost-effective strategy for the management of NMO and related disorders in India. Materials and Methods: A cost-effective and disease-specific protocol was used for evaluating the course and treatment outcome of 70 consecutive patients. Results: Forty-five patients (65%) had a relapse from the onset and included NMO (n = 20), recurrent transverse myelitis (RTM; n = 10), and recurrent optic neuritis (ROPN; n = 15). In 38 (84.4%) patients presenting after multiple attacks, the diagnosis was made clinically. Only 7 patients with a relapsing course were seen at the onset and included ROPN (n = 5), NMO (n = 1), and RTM (n = 1). They had a second attack after a median interval of 1 ± 0.9 years, which was captured through our dedicated review process. Twenty-five patients had isolated longitudinally extensive transverse myelitis (LETM), of which 20 (80%) remained ambulant at follow-up of 3 ± 1.9 years. Twelve patients (17%) with a median expanded disability status scale (EDSS) score of 8.5 at entry had a fatal outcome. Serum NMO-IgG testing was done in selected patients, and it was positive in 7 of 18 patients (39%). Irrespective of the NMO-IgG status, the treatment-compliant patients (44.4%) showed significant improvement in EDSS (P ≤ 0.001). Conclusions: Early clinical diagnosis and treatment compliance were important for a good outcome. Isolated LETM was most likely a post-infectious demyelinating disorder in our set-up. NMO and NMOS disorders contributed 14.9% (45/303) of all demyelinating disorders in our registry.

  19. Characterization of axial probes used in eddy current testing

    International Nuclear Information System (INIS)

    Wache, G.; Nourrisson, Ph.; Garet, Th.

    2001-01-01

Customized reference tubes reduce the sensitivity discrepancies that can be observed from one probe to another, which arise from the gain adjustment required to reach a pre-defined amplitude response for the artificial notch. Using a reference circuit in place of a reference part makes the characterization of a probe matched to its generator more accurate: - the material dependence is cancelled during the compensation process, - the reference signal can be adjusted more accurately in amplitude and phase, - the manufacturing cost is lower than that of machining a reference part, - the amplitude and phase response of the reference circuit can be simply modelled using the transformer relations, so that variations in the probe's defining parameters and in its connection to the generator can be assessed and made optimal for use. The method proposed by ALSTOM for the characterization of condenser and exchanger tubing probes takes into account the amplitude and phase response of a reference circuit versus frequency, as can be done using SURECA tubing provided by ASCOT: it allows verification that the frequency values of the probe required for use lie inside the useful bandwidth defined by the -6 dB attenuation from the maximum amplitude response of the reference circuit versus frequency. Examples drawn from measurements on more than 200 probes, for which faults were observed and replacements made by the manufacturer, are displayed and commented on. (authors)

  20. Probe specificity

    International Nuclear Information System (INIS)

    Laget, J.M.

    1986-11-01

Specificity and complementarity of hadron and electron probes must be systematically developed to answer three questions currently asked in intermediate energy nuclear physics: what is the structure of the nucleus at short distances, what is the nature of short-range correlations, and what is the nature of the three-body force? [fr]

  1. Time-Dependent Selection of an Optimal Set of Sources to Define a Stable Celestial Reference Frame

    Science.gov (United States)

    Le Bail, Karine; Gordon, David

    2010-01-01

Temporal statistical position stability is required for VLBI sources to define a stable Celestial Reference Frame (CRF) and has been studied in many recent papers. This study analyzes the sources from the latest realization of the International Celestial Reference Frame (ICRF2) with the Allan variance, in addition to taking into account the apparent linear motions of the sources. Focusing on the 295 defining sources shows how they are a good compromise between different criteria, such as statistical stability and sky distribution, as well as having a sufficient number of sources, despite the fact that the most stable sources of the entire ICRF2 are mostly in the Northern Hemisphere. Nevertheless, the selection of a stable set is not unique: studying different solutions (GSF005a and AUG24 from GSFC and OPA from the Paris Observatory) over different time periods (1989.5 to 2009.5 and 1999.5 to 2009.5) leads to selections that can differ in up to 20% of the sources. Improvements in observing, recording, and networks are some of the causes, the CRF showing better stability over the last decade than over the last twenty years. But this may also be explained by the assumption of stationarity, which is not necessarily valid for some sources.
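The Allan variance used for this kind of stability analysis can be sketched as follows. The position series and drift rate below are synthetic, and the simple non-overlapping estimator is only one of several variants; it is not claimed to match the authors' exact implementation:

```python
import numpy as np

def allan_variance(y, tau):
    """Non-overlapping Allan variance at averaging length tau (in samples):
    half the mean squared difference between consecutive tau-averages."""
    m = len(y) // tau
    if m < 2:
        raise ValueError("series too short for this tau")
    means = y[: m * tau].reshape(m, tau).mean(axis=1)
    return 0.5 * float(np.mean(np.diff(means) ** 2))

rng = np.random.default_rng(1)
n = 1024
white = rng.normal(size=n)              # "stable" source: white position noise only
drifting = white + 0.01 * np.arange(n)  # source with an apparent linear motion

# For pure white noise the Allan variance falls off as 1/tau; a linear drift
# makes it grow again at long tau, flagging the source as non-stationary.
for tau in (1, 4, 16, 64):
    print(f"tau={tau:3d}  stable={allan_variance(white, tau):.4f}"
          f"  drifting={allan_variance(drifting, tau):.4f}")
```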

  2. Where do pulse oximeter probes break?

    Science.gov (United States)

    Crede, S; Van der Merwe, G; Hutchinson, J; Woods, D; Karlen, W; Lawn, J

    2014-06-01

    Pulse oximetry, a non-invasive method for accurate assessment of blood oxygen saturation (SPO2), is an important monitoring tool in health care facilities. However, it is often not available in many low-resource settings, due to expense, overly sophisticated design, a lack of organised procurement systems and inadequate medical device management and maintenance structures. Furthermore medical devices are often fragile and not designed to withstand the conditions of low-resource settings. In order to design a probe, better suited to the needs of health care facilities in low-resource settings this study aimed to document the site and nature of pulse oximeter probe breakages in a range of different probe designs in a low to middle income country. A retrospective review of job cards relating to the assessment and repair of damaged or faulty pulse oximeter probes was conducted at a medical device repair company based in Cape Town, South Africa, specializing in pulse oximeter probe repairs. 1,840 job cards relating to the assessment and repair of pulse oximeter probes were reviewed. 60.2 % of probes sent for assessment were finger-clip probes. For all probes, excluding the neonatal wrap probes, the most common point of failure was the probe wiring (>50 %). The neonatal wrap most commonly failed at the strap (51.5 %). The total cost for quoting on the broken pulse oximeter probes and for the subsequent repair of devices, excluding replacement components, amounted to an estimated ZAR 738,810 (USD $98,508). Improving the probe wiring would increase the life span of pulse oximeter probes. Increasing the life span of probes will make pulse oximetry more affordable and accessible. This is of high priority in low-resource settings where frequent repair or replacement of probes is unaffordable or impossible.

  3. Estimation of optimal b-value sets for obtaining apparent diffusion coefficient free from perfusion in non-small cell lung cancer.

    Science.gov (United States)

    Karki, Kishor; Hugo, Geoffrey D; Ford, John C; Olsen, Kathryn M; Saraiya, Siddharth; Groves, Robert; Weiss, Elisabeth

    2015-10-21

The purpose of this study was to determine optimal sets of b-values in diffusion-weighted MRI (DW-MRI) for obtaining a monoexponential apparent diffusion coefficient (ADC) close to the perfusion-insensitive intravoxel incoherent motion (IVIM) model ADC (ADCIVIM) in non-small cell lung cancer. Ten subjects had 40 DW-MRI scans before and during radiotherapy in a 1.5 T MRI scanner. Respiratory triggering was applied to the echo-planar DW-MRI with TR ≈ 4500 ms, TE = 74 ms, eight b-values of 0-1000 μs μm⁻², pixel size = 1.98 × 1.98 mm², slice thickness = 6 mm, interslice gap = 1.2 mm, 7 axial slices and total acquisition time ≈ 6 min. One or more DW-MRI scans together covered the whole tumour volume. Monoexponential model ADC values using various b-value sets were compared to reference-standard ADCIVIM values using all eight b-values. The intra-scan coefficient of variation (CV) of active tumour volumes was computed to compare the relative noise in ADC maps. ADC values for one pre-treatment DW-MRI scan of each of the 10 subjects were computed using b-value pairs from DW-MRI images synthesized for b-values of 0-2000 μs μm⁻² from the estimated IVIM parametric maps and corrupted by various Rician noise levels. The square root of the mean squared error percentage (RMSE) of the ADC value relative to the corresponding ADCIVIM for the tumour volume of the scan was computed. Monoexponential ADC values for the b-value sets of 250 and 1000; 250, 500 and 1000; 250, 650 and 1000; 250, 800 and 1000; and 250-1000 μs μm⁻² were not significantly different from ADCIVIM values (p > 0.05, paired t-test). Mean errors in ADC values for these sets relative to ADCIVIM were within 3.5%. Intra-scan CVs for these sets were comparable to that for ADCIVIM. The monoexponential ADC values for the other sets (0-1000; 50-1000; 100-1000; 500-1000; and 250 and 800 μs μm⁻²) were significantly different from the ADCIVIM values. From Rician noise
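The monoexponential-versus-IVIM comparison can be illustrated with a small sketch: fit ln S against b by least squares for different b-value subsets of an IVIM-generated signal. The IVIM parameter values below are generic illustrative numbers, not those estimated in the study, and units are written as s/mm² (numerically equal to the μs μm⁻² used above):

```python
import numpy as np

def ivim_signal(b, S0=1.0, f=0.15, D=1.2e-3, Dstar=20e-3):
    """Two-compartment IVIM signal: S(b) = S0*(f*exp(-b*D*) + (1-f)*exp(-b*D)),
    with b in s/mm^2 and diffusivities in mm^2/s. Parameter values here are
    generic illustrative choices, not the study's estimates."""
    return S0 * (f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D))

def mono_adc(bvals, S):
    """Monoexponential ADC from a least-squares line fit of ln S versus b."""
    slope, _ = np.polyfit(bvals, np.log(S), 1)
    return -slope

# Sets that include b = 0 pick up the fast pseudodiffusion compartment and
# overestimate the tissue diffusivity D; sets starting at b = 250 land close
# to the perfusion-insensitive value.
D_true = 1.2e-3
for subset in ([0, 1000], [250, 1000], [250, 500, 1000], [250, 650, 1000]):
    b = np.array(subset, dtype=float)
    adc = mono_adc(b, ivim_signal(b))
    print(f"b-values {subset}: ADC = {adc:.2e} mm^2/s, "
          f"error vs true D: {100 * (adc - D_true) / D_true:+.1f}%")
```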

  4. Mobile Probes in Mobile Learning

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Blomhøj, Ulla; Duvaa, Uffe

    In this paper experiences from using mobile probes in educational design of a mobile learning application is presented. The probing process stems from the cultural probe method, and was influenced by qualitative interview and inquiry approaches. In the project, the mobile phone was not only acting...... as an agent for acquiring empirical data (as the situation in hitherto mobile probe settings) but was also the technological medium for which data should say something about (mobile learning). Consequently, not only the content of the data but also the ways in which data was delivered and handled, provided...... a valuable dimension for investigating mobile use. The data was collected at the same time as design activities took place and the collective data was analysed based on user experience goals and cognitive processes from interaction design and mobile learning. The mobile probe increased the knowledge base...

  5. Gas fluxing of aluminum: a bubble probe for optimization of bubbles/bubble distribution and minimization of splashing/droplet formation

    International Nuclear Information System (INIS)

    James W. Evans; Autumn Fjeld

    2006-01-01

    totaling 1.549 million lbs. for only thirteen of the twenty three primary smelters then in operation in the US. The research work described in the body of this report (the doctoral dissertation of Dr. Autumn M. Fjeld) had as its objective the improvement of gas fluxing technology to reduce emissions while still maintaining fluxing unit metal throughput. A second objective was a better understanding of the splashing and droplet emission that occurs during fluxing at high gas throughput rates. In the extreme such droplets can form undesired accretions on the walls and gas exit lines of the fluxing unit. Consequently, the productivity of a fluxing unit is sometimes limited by the need to avoid such spraying of droplets produced as gas bubbles break at the metal surface. The approach used was a combination of experimental work in laboratories at UC Berkeley and at the Alcoa Technical Center. The experimental work was mostly on a bubble probe that could be used to determine the extent of dispersion of gas bubbles in the fluxing unit (a parameter affecting the utilization of the injected chlorine). Additionally a high speed digital movie camera was used to study droplet formation due to gas bubbles bursting at the surface of a low melting point alloy. The experimental work was supported by mathematical modeling. In particular, two FLUENT? base mathematical models were developed to compute the metal flow and distribution of the gas within a fluxing unit. Results from these models were then used in a third model to compute emissions and the progress of impurity removal as a function of parameters such as rotor speed. The project was successful in demonstrating that the bubble probe could detect bubbles in a gas fluxing unit at the Alcoa technical Center outside Pittsburgh, PA. This unit is a commercial sized one and the probe, with its associated electronics, was subjected to the hostile molten aluminum, electrical noise etc. 
Despite this, the probes were, on several occasions

  6. A novel probe density controllable electrochemiluminescence biosensor for ultra-sensitive detection of Hg2+ based on DNA hybridization optimization with gold nanoparticles array patterned self-assembly platform.

    Science.gov (United States)

    Gao, Wenhua; Zhang, An; Chen, Yunsheng; Chen, Zixuan; Chen, Yaowen; Lu, Fushen; Chen, Zhanguang

    2013-11-15

    Biosensors based on DNA hybridization hold great potential for higher sensitivity, as optimal DNA hybridization efficiency can be achieved by controlling the distribution and orientation of probe strands on the transducer surface. In this work, an innovative strategy is reported to tap the sensitivity potential of the current electrochemiluminescence (ECL) biosensing system by dispersedly anchoring the DNA beacons on an array of gold nanoparticles (GNPs) electrodeposited on the glassy carbon electrode surface, rather than simply sprawling the coil-like strands onto a planar gold surface. The strategy was developed by designing a "signal-on" ECL biosensing switch fabricated on the GNP-nanopatterned electrode surface for ultra-sensitive detection of Hg(2+). A 57-mer hairpin-DNA labeled with ferrocene as ECL quencher and a 13-mer DNA labeled with Ru(bpy)3(2+) as reporter were hybridized to construct the signal generator in the off-state. A 31-mer thymine (T)-rich capture-DNA was introduced to form T-T mismatches with the loop sequence of the hairpin-DNA in the presence of Hg(2+), inducing the stem-loop to open and triggering the ECL "signal-on". Peak sensitivity, with a detection limit of 0.1 nM, was achieved at the optimal GNP number density, whereas excessive GNP deposition degraded the sensitivity of the biosensor. We expect the present strategy to lead to a renovation of existing probe-immobilized ECL genosensor designs, yielding even higher sensitivity in ultralow-level target detection, such as the identification of genetic diseases and disorders in basic research and clinical applications. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Previous study for the setting up and optimization of detection of ZnS(Ag) scintillation applied to the measure of alpha radioactivity index

    International Nuclear Information System (INIS)

    Pujol, L.; Suarez-Navarro, J.A.; Montero, M.

    1998-01-01

    The determination of radiological water quality is useful for a wide range of environmental studies. In these cases, the gross alpha activity is one of the parameters to determine. This parameter makes it possible to decide whether further radiological analyses are necessary in order to identify and quantify the presence of alpha emitters in water. The usual method for monitoring the gross alpha activity includes sample evaporation to dryness on a disk and counting using a ZnS(Ag) scintillation detector. The detector electronics has two user-adjustable components: the high voltage applied to the photomultiplier tubes and the low-level discriminator used to eliminate electronic noise. Optimizing the high voltage and the low-level discriminator is necessary to reach the best counting conditions. This paper is a preliminary study of the procedure followed for the setting up and optimization of the detector electronics in the laboratories of CEDEX for the measurement of gross alpha activity. (Author)
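
    The high-voltage/discriminator optimization described above is commonly cast as maximizing a counting figure of merit. A minimal sketch, assuming the conventional E²/B criterion (efficiency squared over background rate); the criterion choice and all calibration numbers below are illustrative, not values from the paper:

```python
# Hypothetical sketch: choose high-voltage (HV) and low-level discriminator
# (LLD) settings by maximizing E^2/B (counting efficiency squared over
# background rate), so counting time for a given precision is minimized.

def figure_of_merit(efficiency, background_cpm):
    """E^2/B; higher is better."""
    return efficiency ** 2 / background_cpm

def optimize_settings(measurements):
    """measurements maps (hv_volts, lld_channel) -> (efficiency, background_cpm)."""
    best_setting, _ = max(measurements.items(),
                          key=lambda kv: figure_of_merit(*kv[1]))
    return best_setting

# Illustrative calibration data (invented for the example):
data = {
    (900, 30): (0.30, 0.50),
    (950, 30): (0.34, 0.60),
    (950, 40): (0.33, 0.35),
    (1000, 40): (0.35, 0.90),
}
print(optimize_settings(data))  # (950, 40): best efficiency/background trade-off
```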

  8. Data Mining Empowers the Generation of a Novel Class of Chromosome-specific DNA Probes

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Hui; Weier, Heinz-Ulrich G.; Kwan, Johnson; Wang, Mei; O'Brien, Benjamin

    2011-03-08

    Probes that allow accurate delineation of chromosome-specific DNA sequences in interphase or metaphase cell nuclei have become important clinical tools that deliver life-saving information about the gender or chromosomal make-up of a product of conception or the probability that an embryo will implant, as well as the definition of tumor-specific genetic signatures. Often such highly specific DNA probes are proprietary in nature and have been the result of extensive probe selection and optimization procedures. We describe a novel approach that eliminates costly and time-consuming probe selection and testing by applying data mining and common bioinformatics tools. Similar to a rational drug design process in which drug-protein interactions are modeled in the computer, the rational probe design described here uses a set of criteria and publicly available bioinformatics software to select the desired probe molecules from libraries comprising hundreds of thousands of probe molecules. Examples describe the selection of DNA probes for the human X and Y chromosomes, both with unprecedented performance, but in a similar fashion, this approach can be applied to other chromosomes or species.

  9. Optimization method to branch-and-bound large SBO state spaces under dynamic probabilistic risk assessment via use of LENDIT scales and S2R2 sets

    International Nuclear Information System (INIS)

    Nielsen, Joseph; Tokuhiro, Akira; Khatry, Jivan; Hiromoto, Robert

    2014-01-01

    Traditional probabilistic risk assessment (PRA) methods have been developed to evaluate risk associated with complex systems; however, PRA methods lack the capability to evaluate complex dynamic systems. In these systems, time and energy scales associated with transient events may vary as a function of transition times and energies to arrive at a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Unfortunately DPRA methods introduce issues associated with combinatorial explosion of states. In order to address this combinatorial complexity, a branch-and-bound optimization technique is applied to the DPRA formalism to control the combinatorial state explosion. In addition, a new characteristic scaling metric (LENDIT – length, energy, number, distribution, information and time) is proposed as linear constraints that are used to guide the branch-and-bound algorithm to limit the number of possible states to be analyzed. The LENDIT characterization is divided into four groups or sets – 'state, system, resource and response' (S2R2) – describing reactor operations (normal and off-normal). In this paper we introduce the branch-and-bound DPRA approach and the application of LENDIT scales and S2R2 sets to a station blackout (SBO) transient. (author)
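
    The pruning step can be illustrated with a generic branch-and-bound sketch; the state tree, risk score, and bound function below are illustrative stand-ins for the paper's DPRA state space and LENDIT/S2R2 linear constraints:

```python
# Generic branch-and-bound over a state tree (illustrative stand-in for the
# DPRA state space). `bound(state)` must be an optimistic (lower) bound on the
# score of any completion of `state`; branches that cannot beat the incumbent
# are pruned, which is how the combinatorial explosion is controlled.

def branch_and_bound(root, expand, score, bound):
    best = float("inf")
    best_state = None
    stack = [root]
    while stack:
        state = stack.pop()
        children = expand(state)
        if not children:              # leaf: a complete event sequence
            s = score(state)
            if s < best:
                best, best_state = s, state
            continue
        for child in children:
            if bound(child) < best:   # prune branches that cannot improve
                stack.append(child)
    return best_state, best

# Tiny illustrative tree: tuples are partial sequences, values accumulate.
tree = {(): [(1,), (2,)], (1,): [(1, 1), (1, 2)], (2,): [(2, 3)]}
state, cost = branch_and_bound((), lambda s: tree.get(s, []),
                               sum, sum)  # score = bound = running sum
print(state, cost)  # (1, 1) 2
```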

  10. Optimized image processing with modified preprocessing of image data sets of a transparent imaging plate by way of the lateral view of the cervical spine

    International Nuclear Information System (INIS)

    Reissberg, S.; Hoeschen, C.; Redlich, U.; Scherlach, C.; Preuss, H.; Kaestner, A.; Doehring, W.; Woischneck, D.; Schuetze, M.; Reichardt, K.; Firsching, R.

    2002-01-01

    Purpose: To improve the diagnostic quality of lateral radiographs of the cervical spine by pre-processing the image data sets produced by a transparent imaging plate with both-side reading and to evaluate any possible impact on minimizing the number of additional radiographs and supplementary investigations. Material and Methods: One hundred lateral digital radiographs of the cervical spine were processed with two different methods: processing of each data set using the system-immanent parameters and using the manual mode. The difference between the two types of processing is the level of the latitude value. Hard copies of the processed images were judged by five radiologists and three neurosurgeons. The evaluation applied the image criteria score (ICS) without conventional reference images. Results: In 99% of the lateral radiographs of the cervical spine, all vertebral bodies could be completely delineated using the manual mode, but only 76% of the images processed with the system-immanent parameters showed all vertebral bodies. Thus, the manual mode enabled the evaluation of up to two additional, more caudal vertebral bodies. The manual mode processing was significantly better concerning object size and processing artifacts. This optimized image processing and the resultant minimization of supplementary investigations was calculated to correspond to a theoretical dose reduction of about 50%. (orig.) [de

  11. Optimally Stopped Optimization

    Science.gov (United States)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is one to two orders of magnitude faster than the HFS solver.
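
    For intuition, a simplified sketch (not the authors' exact estimator): if each call to a randomized solver succeeds independently with probability p and costs c, the expected total cost of restarting until success is the mean of a geometric distribution:

```python
# Simplified reading of the expected-total-cost figure of merit: with
# independent per-call success probability p and per-call cost c (run time
# plus a fixed cost-per-call), restarting until success costs c/p on average
# (mean of a geometric distribution).

def expected_total_cost(p_success, cost_per_call):
    if not 0.0 < p_success <= 1.0:
        raise ValueError("p_success must be in (0, 1]")
    return cost_per_call / p_success

# A solver that succeeds 20% of the time at cost 1 beats one that succeeds
# 90% of the time at cost 10, in expectation:
print(expected_total_cost(0.2, 1.0), expected_total_cost(0.9, 10.0))
```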

  12. ASPIC: a novel method to predict the exon-intron structure of a gene that is optimally compatible to a set of transcript sequences

    Directory of Open Access Journals (Sweden)

    Pesole Graziano

    2005-10-01

    Full Text Available Abstract Background: Currently available methods to predict splice sites are mainly based on the independent and progressive alignment of transcript data (mostly ESTs) to the genomic sequence. Apart from often being computationally expensive, this approach is vulnerable to several problems – hence the need to develop novel strategies. Results: We propose a method, based on a novel multiple genome-EST alignment algorithm, for the detection of splice sites. To avoid limitations of splice site prediction (mainly, over-predictions due to independent single EST alignments to the genomic sequence), our approach performs a multiple alignment of transcript data to the genomic sequence based on the combined analysis of all available data. We recast the problem of predicting constitutive and alternative splicing as an optimization problem, where the optimal multiple transcript alignment minimizes the number of exons and hence of splice site observations. We have implemented a splice site predictor based on this algorithm in the software tool ASPIC (Alternative Splicing PredICtion). It is distinguished from other methods based on BLAST-like tools by the incorporation of entirely new ad hoc procedures for accurate and computationally efficient transcript alignment and adopts dynamic programming for the refinement of intron boundaries. ASPIC also provides the minimal set of non-mergeable transcript isoforms compatible with the detected splicing events. The ASPIC web resource is dynamically interconnected with the Ensembl and Unigene databases and also implements an upload facility. Conclusion: Extensive benchmarking shows that ASPIC outperforms other existing methods in the detection of novel splicing isoforms and in the minimization of over-predictions. ASPIC also requires a lower computation time for processing a single gene and an EST cluster. The ASPIC web resource is available at http://aspic.algo.disco.unimib.it/aspic-devel/.

  13. Analysis of regional timelines to set up a global phase III clinical trial in breast cancer: the adjuvant lapatinib and/or trastuzumab treatment optimization experience.

    Science.gov (United States)

    Metzger-Filho, Otto; de Azambuja, Evandro; Bradbury, Ian; Saini, Kamal S; Bines, José; Simon, Sergio D; Dooren, Veerle Van; Aktan, Gursel; Pritchard, Kathleen I; Wolff, Antonio C; Smith, Ian; Jackisch, Christian; Lang, Istvan; Untch, Michael; Boyle, Frances; Xu, Binghe; Baselga, Jose; Perez, Edith A; Piccart-Gebhart, Martine

    2013-01-01

    This study measured the time taken for setting up the different facets of adjuvant lapatinib and/or trastuzumab treatment optimization (ALTTO), an international phase III study being conducted in 44 participating countries. Time to regulatory authority (RA) approval, time to ethics committee/institutional review board (EC/IRB) approval, time from study approval by EC/IRB to first randomized patient, and time from first to last randomized patient were prospectively collected in the ALTTO study. Analyses were conducted by grouping countries into either geographic regions or economic classes as per the World Bank's criteria. South America had a significantly longer time to RA approval (median: 236 days, range: 21-257 days) than Europe (median: 52 days, range: 0-151 days), North America (median: 26 days, range: 22-30 days), and Asia-Pacific (median: 62 days, range: 37-75 days). Upper-middle-income economies had longer times to RA approval (median: 123 days, range: 21-257 days) than high-income (median: 47 days, range: 0-112 days) and lower-middle-income economies (median: 57 days, range: 37-62 days). No significant difference was observed for time to EC/IRB approval across the studied regions (median: 59 days, range 0-174 days). Overall, the median time from EC/IRB approval to first recruited patient was 169 days (range: 26-412 days). This study highlights the long time intervals required to activate a global phase III trial. Collaborative research groups, pharmaceutical industry sponsors, and regulatory authorities should analyze the current system and enter into dialogue for optimizing local policies. This would enable faster access of patients to innovative therapies and enhance the efficiency of clinical research.

  14. The Optimal Timing of Stage 2 Palliation for Hypoplastic Left Heart Syndrome: An Analysis of the Pediatric Heart Network Single Ventricle Reconstruction Trial Public Data Set.

    Science.gov (United States)

    Meza, James M; Hickey, Edward J; Blackstone, Eugene H; Jaquiss, Robert D B; Anderson, Brett R; Williams, William G; Cai, Sally; Van Arsdell, Glen S; Karamlou, Tara; McCrindle, Brian W

    2017-10-31

    In infants requiring 3-stage single-ventricle palliation for hypoplastic left heart syndrome, attrition after the Norwood procedure remains significant. The effect of the timing of stage 2 palliation (S2P), a physician-modifiable factor, on long-term survival is not well understood. We hypothesized that an optimal interval between the Norwood and S2P that both minimizes pre-S2P attrition and maximizes post-S2P survival exists and is associated with individual patient characteristics. The National Institutes of Health/National Heart, Lung, and Blood Institute Pediatric Heart Network Single Ventricle Reconstruction Trial public data set was used. Transplant-free survival (TFS) was modeled from (1) Norwood to S2P and (2) S2P to 3 years by using parametric hazard analysis. Factors associated with death or heart transplantation were determined for each interval. To account for staged procedures, risk-adjusted, 3-year, post-Norwood TFS (the probability of TFS at 3 years given survival to S2P) was calculated using parametric conditional survival analysis. TFS from the Norwood to S2P was first predicted. TFS after S2P to 3 years was then predicted and adjusted for attrition before S2P by multiplying by the estimate of TFS to S2P. The optimal timing of S2P was determined by generating nomograms of risk-adjusted, 3-year, post-Norwood, TFS versus the interval from the Norwood to S2P. Of 547 included patients, 399 survived to S2P (73%). Of the survivors to S2P, 349 (87%) survived to 3-year follow-up. The median interval from the Norwood to S2P was 5.1 (interquartile range, 4.1-6.0) months. The risk-adjusted, 3-year, TFS was 68±7%. A Norwood-S2P interval of 3 to 6 months was associated with greatest 3-year TFS overall and in patients with few risk factors. In patients with multiple risk factors, TFS was severely compromised, regardless of the timing of S2P and most severely when S2P was performed early. No difference in the optimal timing of S2P existed when stratified by
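
    The conditional-survival bookkeeping described above — multiplying pre-S2P survival by post-S2P conditional survival — can be checked against the reported raw counts; note that the published 68 ± 7% figure is risk-adjusted via the parametric hazard model, so the raw product differs slightly:

```python
# Sanity check of the staged-survival arithmetic using the reported counts:
# overall 3-year transplant-free survival (TFS) is the product of survival to
# S2P and conditional TFS thereafter. The published 68 +/- 7% is model-adjusted,
# so this raw product differs slightly from it.

norwood_to_s2p = 399 / 547   # survived Norwood to stage 2 palliation
s2p_to_3yr = 349 / 399       # TFS at 3 years, given survival to S2P
overall_3yr = norwood_to_s2p * s2p_to_3yr
print(round(overall_3yr, 3))  # 0.638
```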

  15. WE-G-18C-02: Estimation of Optimal B-Value Set for Obtaining Apparent Diffusion Coefficient Free From Perfusion in Non-Small Cell Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Karki, K; Hugo, G; Ford, J; Saraiya, S; Weiss, E [Radiation Oncology, Virginia Commonwealth University, Richmond, VA (United States); Olsen, K; Groves, R [Radiology, Virginia Commonwealth University, Richmond, VA (United States)

    2014-06-15

    Purpose: Diffusion-weighted MRI (DW-MRI) is increasingly being investigated for radiotherapy planning and response assessment. Selection of a limited number of b-values in DW-MRI is important to keep geometrical variations low and imaging time short. We investigated various b-value sets to determine an optimal set for obtaining monoexponential apparent diffusion coefficient (ADC) values close to the perfusion-insensitive intravoxel incoherent motion (IVIM) model ADC (ADCIVIM) in non-small cell lung cancer. Methods: Seven patients had 27 DW-MRI scans before and during radiotherapy in a 1.5 T scanner. Respiratory triggering was applied to the echo-planar DW-MRI with TR ≈ 4500 ms, TE = 74 ms, pixel size = 1.98 × 1.98 mm², slice thickness = 4–6 mm and 7 axial slices. Diffusion gradients were applied to all three axes, producing trace-weighted images with eight b-values of 0–1000 μs/μm². Monoexponential model ADC values using various b-value sets were compared to ADCIVIM using all b-values. To compare the relative noise in ADC maps, the intra-scan coefficient of variation (CV) of active tumor volumes was computed. Results: ADCIVIM, perfusion coefficient and perfusion fraction for tumor volumes were in the range of 880–1622 μm²/s, 8119–33834 μm²/s and 0.104–0.349, respectively. ADC values using sets of 250, 800 and 1000; 250, 650 and 1000; and 250–1000 μs/μm² only were not significantly different from ADCIVIM (p>0.05, paired t-test). Errors in ADC values for 0–1000, 50–1000, 100–1000, 250–1000, 500–1000, and the three b-value sets 250, 500 and 1000; 250, 650 and 1000; and 250, 800 and 1000 μs/μm² were 15.0, 9.4, 5.6, 1.4, 11.7, 3.7, 2.0 and 0.2% relative to the reference-standard ADCIVIM, respectively. Mean intra-scan CV was 20.2, 20.9, 21.9, 24.9, 32.6, 25.8, 25.4 and 24.8%, respectively, whereas that for ADCIVIM was 23.3%. Conclusion: ADC values of two 3 b-value sets
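
    The monoexponential model referred to above, S(b) = S0·exp(−b·ADC), is typically fitted log-linearly. A minimal sketch with synthetic, noise-free data (the fitting routine and numbers are illustrative, not the study's analysis pipeline):

```python
import numpy as np

# Illustrative log-linear fit of the monoexponential model S(b) = S0*exp(-b*ADC).
# Units follow the abstract: b in us/um^2, so ADC comes out in um^2/us
# (equivalently, 1.2e-3 um^2/us = 1200 um^2/s).

def fit_adc(b_values, signals):
    """Least-squares slope of log(S) vs b; returns ADC in inverse units of b."""
    slope, _intercept = np.polyfit(np.asarray(b_values, float),
                                   np.log(np.asarray(signals, float)), 1)
    return -slope

b_set = [250, 650, 1000]                  # one of the 3-b-value sets studied
true_adc = 1.2e-3                         # i.e. 1200 um^2/s
signals = [np.exp(-b * true_adc) for b in b_set]
print(round(float(fit_adc(b_set, signals)), 6))  # 0.0012
```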

  16. Short rest interval lengths between sets optimally enhance body composition and performance with 8 weeks of strength resistance training in older men.

    Science.gov (United States)

    Villanueva, Matthew G; Lane, Christianne Joy; Schroeder, E Todd

    2015-02-01

    To determine if 8 weeks of periodized strength resistance training (RT) utilizing relatively short rest interval lengths (RI) in between sets (SS) would induce greater improvements in body composition and muscular performance, compared to the same RT program utilizing extended RI (SL). 22 male volunteers (SS: n = 11, 65.6 ± 3.4 years; SL: n = 11, 70.3 ± 4.9 years) were assigned to one of two strength RT groups, following 4 weeks of periodized hypertrophic RT (PHRT): strength RT with 60-s RI (SS) or strength RT with 4-min RI (SL). Prior to randomization, all 22 study participants trained 3 days/week, for 4 weeks, targeting hypertrophy; from week 4 to week 12, SS and SL followed the same periodized strength RT program for 8 weeks, with RI the only difference in their RT prescription. Following PHRT, all study participants experienced significant increases in lean body mass (LBM) and total-body strength, and significant decreases in body fat. These data suggest that high-intensity strength RT with shortened RI induces significantly greater enhancements in body composition, muscular performance, and functional performance, compared to the same RT prescription with extended RI, in older men. Applied professionals may optimize certain RT-induced adaptations by incorporating shortened RI.

  17. Revising the retrieval technique of a long-term stratospheric HNO₃ data set. From a constrained matrix inversion to the optimal estimation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Fiorucci, I.; Muscari, G. [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy); De Zafra, R.L. [State Univ. of New York, Stony Brook, NY (United States). Dept. of Physics and Astronomy

    2011-07-01

    The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O₃, HNO₃, CO and N₂O at polar and mid-latitudes. Its HNO₃ data set shed light on HNO₃ annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5° N, 68.8° W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO₃ data sets from 1993 South Pole observations to date, in order to produce HNO₃ version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100 ± 20% from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO₃ v2 mixing ratio vertical profiles depends on altitude and is estimated at ∼15% or 0.3 ppbv, whichever is larger.
Comparisons of v2 with former (v1) GBMS HNO₃ vertical profiles
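
    The sensitivity diagnostic quoted in the abstract (sums over the averaging kernel matrix) can be made concrete with a small numpy sketch; the matrix below is invented for illustration, not a GBMS result:

```python
import numpy as np

# Sketch of the sensitivity diagnostic: in an optimal estimation retrieval,
# the averaging kernel matrix A maps true-state perturbations into retrieved
# ones, and the per-level sensitivity is the sum of elements along each row
# (the "measurement response").

def measurement_response(A):
    """Row sums of A: ~1 where the measurement drives the retrieval,
    ~0 where the result relaxes to the a priori."""
    return np.asarray(A).sum(axis=1)

A = np.array([[0.7, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.1, 0.3]])
print(measurement_response(A))  # close to [0.9, 1.0, 0.4]
```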

  18. Proximal Probes Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Proximal Probes Facility consists of laboratories for microscopy, spectroscopy, and probing of nanostructured materials and their functional properties. At the...

  19. Probe Techniques. Introductory Remarks

    Energy Technology Data Exchange (ETDEWEB)

    Emeleus, K. G. [School of Physics and Applied Mathematics, Queen's University, Belfast (United Kingdom)

    1968-04-15

    In this brief introduction to the session on probes, the history of their development is first touched on briefly. Reference is then made to the significance of the work to be described by Medicus, for conductivity and recombination calculations, and by Lam and Su, for a wide range of medium- and higher-pressure plasmas. Finally, a number of other probe topics are mentioned, including multiple probes; probes in electronegative plasmas; resonance probes; probes in noisy discharges; probes as oscillation detectors; and the use of probes where space-charge is not negligible. (author)

  20. Influence of probe motion on laser probe temperature in circulating blood.

    Science.gov (United States)

    Hehrlein, C; Splinter, R; Littmann, L; Tuntelder, J R; Tatsis, G P; Svenson, R H

    1991-01-01

    The purpose of this study was to evaluate the effect of probe motion on laser probe temperature in various blood flow conditions. Laser probe temperatures were measured in an in vitro blood circulation model consisting of 3.2 mm-diameter plastic tubes. A 2.0 mm-diameter metal probe attached to a 300-μm optical quartz fiber was coupled to an argon laser. Continuous-wave laser power of 4 watts and 8 watts was delivered to the fiber tip, corresponding to 6.7 ± 0.5 and 13.2 ± 0.7 watts power settings at the laser generator. The laser probe was either moved with constant velocity or kept stationary. A thermocouple inserted in the lateral portion of the probe was used to record probe temperatures. Probe temperature changes were found with the variation of laser power, probe velocity, blood flow, and duration of laser exposure. Probe motion significantly reduced probe temperatures. After 10 seconds of 4 watts laser power, the probe temperature in stagnant blood decreased from 303 ± 18 °C to 113 ± 17 °C (63%) by moving the probe with a velocity of 5 cm/sec. Blood flow rates of 170 ml/min further decreased the probe temperature from 113 ± 17 °C to 50 ± 8 °C (56%). At 8 watts of laser power, a probe temperature reduction from 591 ± 25 °C to 534 ± 36 °C (10%) due to 5 cm/sec probe velocity was noted. Probe temperatures were reduced to 130 ± 30 °C (78%) under the combined influence of 5 cm/sec probe velocity and 170 ml/min blood flow. (ABSTRACT TRUNCATED AT 250 WORDS)

  1. Gamma-ray imaging probes

    International Nuclear Information System (INIS)

    Wild, W.J.

    1988-01-01

    External nuclear medicine diagnostic imaging of early primary and metastatic lung cancer tumors is difficult due to the poor sensitivity and resolution of existing gamma cameras. Nonimaging counting detectors used for internal tumor detection give ambiguous results because distant background variations are difficult to discriminate from neighboring tumor sites. This suggests that an internal imaging nuclear medicine probe, particularly an esophageal probe, may be advantageously used to detect small tumors because of the ability to discriminate against background variations and the capability to get close to sites neighboring the esophagus. The design, theory of operation, preliminary bench tests, characterization of noise behavior and optimization of such an imaging probe is the central theme of this work

  2. Square-wave anodic-stripping voltammetric determination of Cd, Pb and Cu in wine: Set-up and optimization of sample pre-treatment and instrumental parameters

    International Nuclear Information System (INIS)

    Illuminati, Silvia; Annibaldi, Anna; Truzzi, Cristina; Finale, Carolina; Scarponi, Giuseppe

    2013-01-01

    For the first time, square-wave anodic-stripping voltammetry (SWASV) was set up and optimized for the determination of Cd, Pb and Cu in white wine after UV photo-oxidative digestion of the sample. The best procedure for the sample pre-treatment consisted of a 6-h UV irradiation of diluted, acidified wine, with the addition of ultrapure H₂O₂ (three sequential additions during the irradiation). Due to metal concentration differences, separate measurements were carried out for Cd (deposition potential −950 mV vs. Ag/AgCl/3 M KCl, deposition time 15 min) and simultaneously for Pb and Cu (E_d −750 mV, t_d 30 s). The optimum set-up of the main instrumental parameters, evaluated also in terms of the signal-to-noise ratio, was as follows: E_SW 20 mV, f 100 Hz, ΔE_step 8 mV, t_step 100 ms, t_wait 60 ms, t_delay 2 ms, t_meas 3 ms. The electrochemical behaviour was reversible and bielectronic for Cd and Pb, and kinetically controlled and monoelectronic for Cu. Good accuracy was found both when the recovery procedure was used and when the results were compared with data obtained by differential pulse anodic stripping voltammetry. The linearity of the response was verified up to ∼4 μg L⁻¹ for Cd and Pb and ∼15 μg L⁻¹ for Cu. The detection limits for t_d = 5 min in the 10-times-diluted, UV-digested sample were (ng L⁻¹): Cd 7.0, Pb 1.2 and Cu 6.6, well below those of currently applied methods. Application to a Verdicchio dei Castelli di Jesi white wine revealed concentration levels of Cd ∼0.2, Pb ∼10 and Cu ∼30 μg L⁻¹, with repeatabilities (±RSD%) of Cd ±6%, Pb ±5% and Cu ±10%.

  3. Eddy-current probe design

    International Nuclear Information System (INIS)

    Kincaid, T.G.; McCary, R.O.

    1983-01-01

    This paper describes theoretical and experimental work directed toward finding the optimum probe dimensions and operating frequency for eddy current detection of half-penny surface cracks in nonmagnetic conducting materials. The study applies to probes which excite an approximately uniform spatial field over the length of the crack at the surface of the material. In practical terms, this means that the probe is not smaller than the crack length in any of its critical dimensions. The optimization of a simple coil probe is first analyzed in detail. It is shown that signal-to-noise ratio and lift-off discrimination are maximized by a pancake coil with mean radius not greater than the crack length, operated at a frequency which gives a skin depth equal to the crack depth. The results obtained for the simple coil are then used as a basis for discussion of the design of coils with ferrite cores and shields, and for the design of recording head type probes
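
    The design rule above — operate where the skin depth equals the crack depth — fixes the frequency directly via the standard skin-depth formula δ = 1/√(πfμσ). A sketch with example material values (the alloy conductivity is an assumption, not a value from the paper):

```python
import math

# Turning the design rule into a number: skin depth delta = 1/sqrt(pi*f*mu*sigma),
# so setting delta equal to the crack depth d gives f = 1/(pi*mu*sigma*d^2).

MU0 = 4e-7 * math.pi  # H/m; nonmagnetic conductor, so mu ~ mu0

def optimal_frequency(crack_depth_m, conductivity_s_per_m, mu=MU0):
    """Frequency (Hz) at which the skin depth equals the crack depth."""
    return 1.0 / (math.pi * mu * conductivity_s_per_m * crack_depth_m ** 2)

f = optimal_frequency(1e-3, 2.0e7)  # 1 mm deep crack, sigma ~ 2e7 S/m (Al alloy)
print(f"{f:.0f} Hz")  # ~12.7 kHz
```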

  4. Study of probe-sample distance for biomedical spectra measurement

    Directory of Open Access Journals (Sweden)

    Li Lei

    2011-11-01

    Full Text Available Abstract Background Fiber-based optical spectroscopy has been widely used for biomedical applications. However, the effect of probe-sample distance on the collection efficiency has not been well investigated. Method In this paper, we presented a theoretical model to maximize the illumination and collection efficiency in designing fiber optic probes for biomedical spectra measurement. This model was in general applicable to probes with single or multiple fibers at an arbitrary incident angle. In order to demonstrate the theory, a fluorescence spectrometer was used to measure the fluorescence of human finger skin at various probe-sample distances. The fluorescence spectrum and the total fluorescence intensity were recorded. Results The theoretical results show that for single fiber probes, contact measurement always provides the best results, while for multi-fiber probes there is an optimal probe distance. When a 400-μm excitation fiber is used to deliver the light to the skin and another six 400-μm fibers surrounding the excitation fiber are used to collect the fluorescence signal, the experimental results show that human finger skin has very strong fluorescence between 475 nm and 700 nm under 450 nm excitation. The fluorescence intensity is heavily dependent on the probe-sample distance and there is an optimal probe distance. Conclusions We investigated a number of probe-sample configurations and found that contact measurement could be the primary choice for single-fiber probes, but was very inefficient for multi-fiber probes. There was an optimal probe-sample distance for multi-fiber probes. By carefully choosing the probe-sample distance, the collection efficiency could be enhanced by 5-10 times. Our experiments demonstrated that the experimental results of the probe-sample distance dependence of collection efficiency in multi-fiber probes were in general agreement with our theory.

  5. Endurance Enhancement and High Speed Set/Reset of 50 nm Generation HfO2 Based Resistive Random Access Memory Cell by Intelligent Set/Reset Pulse Shape Optimization and Verify Scheme

    Science.gov (United States)

    Higuchi, Kazuhide; Miyaji, Kousuke; Johguchi, Koh; Takeuchi, Ken

    2012-02-01

    This paper proposes a verify-programming method for the resistive random access memory (ReRAM) cell which achieves a 50-times higher endurance and a faster set and reset compared with the conventional method. The proposed verify-programming method uses an incremental pulse width with turnback (IPWWT) for the reset and an incremental voltage with turnback (IVWT) for the set. With the combination of IPWWT reset and IVWT set, the endurance increases from 48×10³ to 2444×10³ cycles. Furthermore, the measured data retention time after 20×10³ set/reset cycles is estimated to be 10 years. Additionally, a filament-based physical model is proposed to explain the set/reset failure mechanism under various set/reset pulse shapes: the reset pulse width and set voltage correspond to the width and length of the conductive filament, respectively. Consequently, since the proposed IPWWT and IVWT recover set and reset failures of ReRAM cells, the endurance is improved.
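The verify scheme itself is a simple loop: pulse, read back, and escalate until the cell verifies, then "turn back" the starting point for the next cycle. A minimal sketch against a mock cell (the class and all timing numbers are invented for illustration, not the actual 50 nm device interface):

```python
class MockCell:
    """Toy stand-in for a ReRAM cell: reset succeeds once the pulse is wide enough."""
    def __init__(self, required_width_ns):
        self.required_width_ns = required_width_ns
        self.high_resistance = False

    def reset_pulse(self, width_ns):
        if width_ns >= self.required_width_ns:
            self.high_resistance = True

    def verify_reset(self):
        return self.high_resistance

def ipwwt_reset(cell, start_ns=10, step_ns=10, max_ns=200, turnback_steps=2):
    """Incremental Pulse Width With Turnback (IPWWT), sketched: widen the reset
    pulse until verify passes, then report a reduced ('turned back') starting
    width for the next cycle so cells are not over-stressed."""
    width = start_ns
    while width <= max_ns:
        cell.reset_pulse(width)
        if cell.verify_reset():
            next_start = max(start_ns, width - turnback_steps * step_ns)
            return width, next_start
        width += step_ns
    raise RuntimeError("reset failed up to maximum pulse width")
```

For a cell needing a 45 ns pulse, the loop verifies at the 50 ns grid point and turns the next cycle's start back to 30 ns; the IVWT set side follows the same pattern with voltage in place of width.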

  6. Four-probe measurements with a three-probe scanning tunneling microscope

    International Nuclear Information System (INIS)

    Salomons, Mark; Martins, Bruno V. C.; Zikovsky, Janik; Wolkow, Robert A.

    2014-01-01

    We present an ultrahigh vacuum (UHV) three-probe scanning tunneling microscope in which each probe is capable of atomic resolution. A UHV JEOL scanning electron microscope aids in the placement of the probes on the sample. The machine also has a field ion microscope to clean, atomically image, and shape the probe tips. The machine uses bare conductive samples and tips with a homebuilt set of pliers for heating and loading. Automated feedback controlled tip-surface contacts allow for electrical stability and reproducibility while also greatly reducing tip and surface damage due to contact formation. The ability to register inter-tip position by imaging of a single surface feature by multiple tips is demonstrated. Four-probe material characterization is achieved by deploying two tips as fixed current probes and the third tip as a movable voltage probe

  7. Four-probe measurements with a three-probe scanning tunneling microscope

    Energy Technology Data Exchange (ETDEWEB)

    Salomons, Mark [National Institute for Nanotechnology, National Research Council of Canada, Edmonton, Alberta T6G 2M9 (Canada); Martins, Bruno V. C.; Zikovsky, Janik; Wolkow, Robert A., E-mail: rwolkow@ualberta.ca [National Institute for Nanotechnology, National Research Council of Canada, Edmonton, Alberta T6G 2M9 (Canada); Department of Physics, University of Alberta, Edmonton, Alberta T6G 2E1 (Canada)

    2014-04-15

    We present an ultrahigh vacuum (UHV) three-probe scanning tunneling microscope in which each probe is capable of atomic resolution. A UHV JEOL scanning electron microscope aids in the placement of the probes on the sample. The machine also has a field ion microscope to clean, atomically image, and shape the probe tips. The machine uses bare conductive samples and tips with a homebuilt set of pliers for heating and loading. Automated feedback controlled tip-surface contacts allow for electrical stability and reproducibility while also greatly reducing tip and surface damage due to contact formation. The ability to register inter-tip position by imaging of a single surface feature by multiple tips is demonstrated. Four-probe material characterization is achieved by deploying two tips as fixed current probes and the third tip as a movable voltage probe.

  8. Four-probe measurements with a three-probe scanning tunneling microscope.

    Science.gov (United States)

    Salomons, Mark; Martins, Bruno V C; Zikovsky, Janik; Wolkow, Robert A

    2014-04-01

    We present an ultrahigh vacuum (UHV) three-probe scanning tunneling microscope in which each probe is capable of atomic resolution. A UHV JEOL scanning electron microscope aids in the placement of the probes on the sample. The machine also has a field ion microscope to clean, atomically image, and shape the probe tips. The machine uses bare conductive samples and tips with a homebuilt set of pliers for heating and loading. Automated feedback controlled tip-surface contacts allow for electrical stability and reproducibility while also greatly reducing tip and surface damage due to contact formation. The ability to register inter-tip position by imaging of a single surface feature by multiple tips is demonstrated. Four-probe material characterization is achieved by deploying two tips as fixed current probes and the third tip as a movable voltage probe.

  9. Single-cell resolution imaging of retinal ganglion cell apoptosis in vivo using a cell-penetrating caspase-activatable peptide probe.

    Directory of Open Access Journals (Sweden)

    Xudong Qiu

    Full Text Available Peptide probes for imaging retinal ganglion cell (RGC) apoptosis consist of a cell-penetrating peptide targeting moiety and a fluorophore-quencher pair flanking an effector caspase consensus sequence. Using ex vivo fluorescence imaging, we previously validated the capacity of these probes to identify apoptotic RGCs in cell culture and in an in vivo rat model of N-methyl-D-aspartate (NMDA)-induced neurotoxicity. Herein, using TcapQ488, a new probe designed and synthesized for compatibility with clinically relevant imaging instruments, and real-time imaging of a live rat RGC degeneration model, we fully characterized time- and dose-dependent probe activation, signal-to-noise ratios, and probe safety profiles in vivo. Adult rats received intravitreal injections of four NMDA concentrations followed by varying TcapQ488 doses. Fluorescence fundus imaging was performed sequentially in vivo using a confocal scanning laser ophthalmoscope, and individual RGCs displaying activated probe were counted and analyzed. Rats also underwent electroretinography following intravitreal injection of probe. In vivo fluorescence fundus imaging revealed distinct single-cell probe activation as an indicator of RGC apoptosis induced by intravitreal NMDA injection, corresponding to the identical cells observed in retinal flat mounts of the same eye. Peak probe activation in vivo was detected 12 hours after probe injection. Detectable fluorescent RGCs increased with increasing NMDA concentration; sensitivity of detection generally increased with increasing TcapQ488 dose until saturating at 0.387 nmol. Electroretinography following intravitreal injections of TcapQ488 showed no significant difference compared with control injections. We optimized the signal-to-noise ratio of a caspase-activatable cell-penetrating peptide probe for quantitative non-invasive detection of RGC apoptosis in vivo.
Full characterization of probe performance in this setting creates an important in

  10. Mobile Game Probes

    DEFF Research Database (Denmark)

    Borup Lynggaard, Aviaja

    2006-01-01

    This paper will examine how probes can be useful for game designers in the preliminary phases of a design process. The work is based upon a case study concerning pervasive mobile phone games, where Mobile Game Probes have emerged from the project. The new probes are aimed towards a specific target group, and the goal is to specify the probes so they will cover the most relevant areas for our project. The Mobile Game Probes generated many interesting results, and new issues occurred, since the probes came to be dynamic and favorable for the process in new ways.

  11. Optimal set values of zone modeling in the simulation of a walking beam type reheating furnace on the steady-state operating regime

    International Nuclear Information System (INIS)

    Yang, Zhi; Luo, Xiaochuan

    2016-01-01

    Highlights: • The adjoint equation is introduced to the PDE optimal control problem. • Lipschitz continuity for the gradient of the cost functional is derived. • The simulation time and iterations reduce by a large margin in the simulations. • The model validation and comparison are made to verify the proposed math model. - Abstract: In this paper, we propose a new method for solving the PDE optimal control problem by introducing the adjoint problem into the optimization model; this is used to obtain reference values for the optimal furnace zone temperatures and the optimal temperature distribution of steel slabs in the reheating furnace on the steady-state operating regime. It is proved that the gradient of the cost functional can be written via the weak solution of this adjoint problem, and Lipschitz continuity of the gradient is then derived. Model validation and comparison between the mathematical model and the experimental results indicated that the present heat transfer model works well for predicting the thermal behavior of a slab in the reheating furnace. Iterations and simulation time showed a significant decline in the simulations of a 20MnSi slab, and numerical simulations for 0.4 m thick slabs showed that the proposed method is well suited to medium and heavy plate plants, leading to better performance in terms of productivity, energy efficiency and other features of reheating furnaces.
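The practical payoff of a Lipschitz-continuous gradient is that gradient descent with the fixed step size 1/L is guaranteed to decrease the cost. A generic sketch on a toy quadratic stand-in for the discretized cost functional (this is the standard convex-optimization result, not the paper's furnace model):

```python
def gradient_descent(grad, x0, lipschitz_L, iters=200):
    """Fixed-step gradient descent; step 1/L is the classical safe choice
    when grad is L-Lipschitz."""
    x = list(x0)
    step = 1.0 / lipschitz_L
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Toy quadratic J(x) = sum(a_i * x_i^2); its gradient 2*a_i*x_i is
# Lipschitz with constant L = 2*max(a).
a = [1.0, 4.0, 9.0]
grad = lambda x: [2 * ai * xi for ai, xi in zip(a, x)]
x_opt = gradient_descent(grad, x0=[5.0, -3.0, 2.0], lipschitz_L=2 * max(a))
```

With the step tied to L, no line search is needed, which is the kind of iteration-count saving the abstract reports.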

  12. Simulation of fruit-set and trophic competition and optimization of yield advantages in six Capsicum cultivars using functional-structural plant modelling

    NARCIS (Netherlands)

    Ma, Y.T.; Wubs, A.M.; Mathieu, A.; Heuvelink, E.; Zhu, J.Y.; Hu, B.G.; Cournede, P.H.; Reffye, de P.

    2011-01-01

    Background and aims - Many indeterminate plants can have wide fluctuations in the pattern of fruit-set and harvest. Fruit-set in these types of plants depends largely on the balance between source (assimilate supply) and sink strength (assimilate demand) within the plant. This study aims to evaluate

  13. An Optimization Study on Listening Experiments to Improve the Comparability of Annoyance Ratings of Noise Samples from Different Experimental Sample Sets.

    Science.gov (United States)

    Di, Guoqing; Lu, Kuanguang; Shi, Xiaofan

    2018-03-08

    Annoyance ratings obtained from listening experiments are widely used in studies on the health effects of environmental noise. In listening experiments, participants usually give the annoyance rating of each noise sample according to its relative annoyance degree among all samples in the experimental sample set if there are no reference sound samples, which leads to poor comparability between experimental results obtained from different experimental sample sets. To solve this problem, this study proposed adding several pink noise samples with certain loudness levels into experimental sample sets as reference sound samples. On this basis, the standard curve between logarithmic mean annoyance and loudness level of pink noise was used to calibrate the experimental results, and the calibration procedures were described in detail. Furthermore, as a case study, six different types of noise sample sets were selected for listening experiments using this method to examine its applicability. Results showed that the differences in the annoyance ratings of each identical noise sample from different experimental sample sets were markedly decreased after calibration. The determination coefficient (R²) of linear fitting functions between psychoacoustic annoyance (PA) and mean annoyance (MA) of noise samples from different experimental sample sets increased obviously after calibration. The case study indicated that the method above is applicable to calibrating annoyance ratings obtained from different types of noise sample sets. After calibration, the comparability of annoyance ratings of noise samples from different experimental sample sets can be distinctly improved.
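The calibration idea can be sketched as follows: each experiment rates the same pink-noise reference samples, a linear map is fitted from those ratings onto the standard-curve annoyance values, and the map is then applied to every sample in that set. All ratings below are fabricated for illustration:

```python
def fit_linear(xs, ys):
    """Closed-form least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def calibrate(ratings, ref_ratings, ref_standard):
    """Map one experiment's ratings onto the common scale defined by the
    pink-noise reference samples (ref_standard: standard-curve annoyance)."""
    a, b = fit_linear(ref_ratings, ref_standard)
    return {name: a * r + b for name, r in ratings.items()}

# Two sample sets rated the same target sound on different internal scales
# (hypothetical data); both share the same standard-curve reference values:
set1 = calibrate({"target": 6.0}, ref_ratings=[2.0, 5.0, 8.0],
                 ref_standard=[3.0, 5.0, 7.0])
set2 = calibrate({"target": 3.0}, ref_ratings=[1.0, 2.5, 4.0],
                 ref_standard=[3.0, 5.0, 7.0])
```

After calibration the two sets place the shared target sound at the same annoyance value, which is the comparability improvement the abstract reports.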

  14. NeuroMEMS: Neural Probe Microtechnologies

    Directory of Open Access Journals (Sweden)

    Sam Musallam

    2008-10-01

    Full Text Available Neural probe technologies have already had a significant positive effect on our understanding of the brain by revealing the functioning of networks of biological neurons. Probes are implanted in different areas of the brain to record and/or stimulate specific sites. Neural probes are currently used in many clinical settings for diagnosis of brain diseases such as seizures, epilepsy, migraine, Alzheimer’s, and dementia. These devices also assist paralyzed patients by allowing them to operate computers or robots using their neural activity. In recent years, probe technologies have been assisted by rapid advancements in microfabrication and microelectronic technologies, enabling highly functional and robust neural probes which are opening new and exciting avenues in the neural sciences and brain-machine interfaces. With the wide variety of probes that have been designed, fabricated, and tested to date, this review aims to provide an overview of the advances and recent progress in the microfabrication techniques of neural probes. In addition, we aim to highlight the challenges faced in developing and implementing ultralong multi-site recording probes that are needed to monitor neural activity from deeper regions of the brain. Finally, we review techniques that can improve the biocompatibility of neural probes to minimize the immune response and encourage neural growth around the electrodes for long-term implantation studies.

  15. An algorithm and program for finding sequence specific oligo-nucleotide probes for species identification

    Directory of Open Access Journals (Sweden)

    Tautz Diethard

    2002-03-01

    Full Text Available Abstract Background The identification of species or species groups with specific oligo-nucleotides as molecular signatures is becoming increasingly popular for bacterial samples. However, it also shows great promise for other small organisms that are taxonomically difficult to track. Results We have devised an algorithm that aims to find the optimal probes for any given set of sequences. The program requires only a crude alignment of these sequences as input and is optimized to handle very large datasets. The algorithm is designed such that the position of mismatches in the probes influences the selection, and it makes provision for single-nucleotide out-loops. Program implementations are available for Linux and Windows.
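A heavily simplified version of such a probe search: slide a fixed-length window over crudely aligned sequences, keep windows conserved within the target group, and require that every non-target carries a mismatch near the probe center, where mismatches are most destabilizing. This is only the skeleton of the idea, not the published algorithm (which also handles out-loops); all sequences are toy examples:

```python
def mismatch_positions(a, b):
    """Positions where two equal-length strings differ."""
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

def find_probes(targets, non_targets, length=8):
    """Return (position, window) pairs that are identical in all target
    sequences and have at least one mismatch in the central third of the
    window against every non-target sequence."""
    probes = []
    n = min(len(s) for s in targets + non_targets)
    lo, hi = length // 3, length - length // 3  # central third of the probe
    for i in range(n - length + 1):
        window = targets[0][i:i + length]
        if any(t[i:i + length] != window for t in targets[1:]):
            continue  # not conserved within the target group
        if all(any(lo <= p < hi for p in
                   mismatch_positions(window, nt[i:i + length]))
               for nt in non_targets):
            probes.append((i, window))
    return probes
```

A window identical to a non-target yields no central mismatch and is rejected, which is how specificity is enforced.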

  16. Probe-diverse ptychography

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, I., E-mail: isaac.russellpeterson@rmit.edu.au [ARC Centre of Excellence for Coherent X-ray Science, the University of Melbourne, School of Physics, Victoria 3010 (Australia); Harder, R. [Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439 (United States); Robinson, I.K. [Research Complex at Harwell, Didcot, Oxfordshire OX11 0DE (United Kingdom); London Centre for Nanotechnology, University College London, London WC1H 0AH (United Kingdom)

    2016-12-15

    We propose an extension of ptychography where the target sample is scanned separately through several probes with distinct amplitude and phase profiles and a diffraction image is recorded for each probe and each sample translation. The resulting probe-diverse dataset is used to iteratively retrieve high-resolution images of the sample and all probes simultaneously. The method is shown to yield significant improvement in the reconstructed sample image compared to the image obtained using the standard single-probe ptychographic phase-retrieval scheme.

  17. PET-Probe: Evaluation of Technical Performance and Clinical Utility of a Handheld High-Energy Gamma Probe in Oncologic Surgery.

    Science.gov (United States)

    Gulec, Seza A; Daghighian, Farhad; Essner, Richard

    2016-12-01

    Positron emission tomography (PET) has become an invaluable part of patient evaluation in surgical oncology. PET is less than optimal for detecting certain lesions, and intraoperative localization of PET-positive lesions can be challenging as a result of difficulties in surgical exposure. We undertook this investigation to assess the utility of a handheld high-energy gamma probe (PET-Probe) for intraoperative identification of 18F-deoxyglucose (FDG)-avid tumors. Forty patients underwent a diagnostic whole-body FDG-PET scan for consideration for surgical exploration and resection. Before surgery, all patients received an intravenous injection of 7 to 10 mCi of FDG. At surgery, the PET-Probe was used to determine absolute counts per second at the known tumor site(s) demonstrated by whole-body PET and at adjacent normal tissue (at least 4 cm away from tumor-bearing sites). Tumor-to-background ratios were calculated. Thirty-two patients (80%) underwent PET-Probe-guided surgery with therapeutic intent in a recurrent or metastatic disease setting. Eight patients underwent surgery for diagnostic exploration. Anatomical locations of the PET-identified lesions were neck and supraclavicular (n = 8), axilla (n = 5), groin and deep iliac (n = 4), trunk and extremity soft tissue (n = 3), abdominal and retroperitoneal (n = 19), and lung (n = 2). The PET-Probe detected all lesions demonstrated by PET scanning, was instrumental in localizing lesions in 15 patients that were not immediately apparent on surgical exploration, and, in selected cases, was useful in localizing FDG-avid disease not seen with conventional PET scanning.

  18. Traversing probe system

    International Nuclear Information System (INIS)

    Mashburn, D.N.; Stevens, R.H.; Woodall, H.C.

    1977-01-01

    This invention comprises a rotatable annular probe-positioner which carries at least one radially disposed sensing probe, such as a Pitot tube having a right-angled tip. The positioner can be coaxially and rotatably mounted within a compressor casing or the like and then actuated to orient the sensing probe as required to make measurements at selected stations in the annulus between the positioner and compressor casing. The positioner can be actuated to (a) selectively move the probe along its own axis, (b) adjust the yaw angle of the right-angled probe tip, and (c) revolve the probe about the axis common to the positioner and casing. A cam plate engages a cam-follower portion of the probe and normally rotates with the positioner. The positioner includes a first-motor-driven ring gear which effects slidable movement of the probe by rotating the positioner at a time when an external pneumatic cylinder is actuated to engage the cam plate and hold it stationary. When the pneumatic cylinder is not actuated, this ring gear can be driven to revolve the positioner and thus the probe to a desired circumferential location about the above-mentioned common axis. A second motor-driven ring gear included in the positioner can be driven to rotate the probe about its axis, thus adjusting the yaw angle of the probe tip. The positioner can be used in highly corrosive atmosphere, such as gaseous uranium hexafluoride. 10 claims, 6 figures

  19. Traversing probe system

    Science.gov (United States)

    Mashburn, Douglas N.; Stevens, Richard H.; Woodall, Harold C.

    1977-01-01

    This invention comprises a rotatable annular probe-positioner which carries at least one radially disposed sensing probe, such as a Pitot tube having a right-angled tip. The positioner can be coaxially and rotatably mounted within a compressor casing or the like and then actuated to orient the sensing probe as required to make measurements at selected stations in the annulus between the positioner and compressor casing. The positioner can be actuated to (a) selectively move the probe along its own axis, (b) adjust the yaw angle of the right-angled probe tip, and (c) revolve the probe about the axis common to the positioner and casing. A cam plate engages a cam-follower portion of the probe and normally rotates with the positioner. The positioner includes a first-motor-driven ring gear which effects slidable movement of the probe by rotating the positioner at a time when an external pneumatic cylinder is actuated to engage the cam plate and hold it stationary. When the pneumatic cylinder is not actuated, this ring gear can be driven to revolve the positioner and thus the probe to a desired circumferential location about the above-mentioned common axis. A second motor-driven ring gear included in the positioner can be driven to rotate the probe about its axis, thus adjusting the yaw angle of the probe tip. The positioner can be used in highly corrosive atmosphere, such as gaseous uranium hexafluoride.

  20. Electrical resistivity probes

    Science.gov (United States)

    Lee, Ki Ha; Becker, Alex; Faybishenko, Boris A.; Solbau, Ray D.

    2003-10-21

    A miniaturized electrical resistivity (ER) probe based on a known current-voltage (I-V) electrode structure, the Wenner array, is designed for local (point) measurement. A pair of voltage measuring electrodes are positioned between a pair of current carrying electrodes. The electrodes are typically about 1 cm long, separated by 1 cm, so the probe is only about 1 inch long. The electrodes are mounted to a rigid tube with electrical wires in the tube and a sand bag may be placed around the electrodes to protect the electrodes. The probes can be positioned in a borehole or on the surface. The electrodes make contact with the surrounding medium. In a dual mode system, individual probes of a plurality of spaced probes can be used to measure local resistance, i.e. point measurements, but the system can select different probes to make interval measurements between probes and between boreholes.
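For a Wenner array over a homogeneous half-space, the measured voltage-to-current ratio converts to apparent resistivity via the standard formula ρ = 2πa·V/I, with a the electrode spacing. A small helper using the 1 cm spacing mentioned above (the voltage and current values are arbitrary examples, not measurements from this probe):

```python
import math

def wenner_resistivity(spacing_m, voltage_v, current_a):
    """Apparent resistivity (ohm-m) for a Wenner four-electrode array on a
    half-space: rho = 2 * pi * a * V / I."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

# 1 cm spacing, 0.5 V measured across the inner pair, 1 mA injected:
rho = wenner_resistivity(spacing_m=0.01, voltage_v=0.5, current_a=0.001)
```

The same formula serves both the local point measurements and, with larger effective spacings, the interval measurements between probes described above.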

  1. Measuring the surface-heating of medical ultrasonic probes

    International Nuclear Information System (INIS)

    Kollmann, Chr; Vacariu, G; Fialka-Moser, V; Bergmann, H

    2004-01-01

    Due to conversion losses, the probe's surface itself heats up, especially when emitting into air. Possible temperature increases in an ensemble of 15 different diagnostic and therapeutic ultrasound probes from 7 manufacturers in the frequency range between 0.05-7.5 MHz have been examined. Surface temperatures were detected by means of a calibrated IR-thermographic camera using a scheme of various power and pulse settings, as well as different imaging modalities as used in clinical routine. Depending on the setup and the output power, the absolute surface temperatures of some of the probes emitting in air can exceed 43 °C within 5-7 min; a maximum surface temperature of 84 °C has been detected. Continuous mode or high pulse repetition frequencies on the therapeutic system side, and small focused Doppler modes on the diagnostic system side, combined with increased emitted acoustic intensities, result in high surface temperatures. In a worst-case scenario, a potential risk of negative skin changes (heat damage) or non-optimal therapeutic effects seems possible if a therapeutic system is used very often and if its emission continues unintentionally. In general, the user should be aware that emission intensities as low as 50 mW cm⁻² can already produce hot surfaces.

  2. Practical aspects of spherical near-field antenna measurements using a high-order probe

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Pivnenko, Sergey; Nielsen, Jeppe Majlund

    2006-01-01

    Two practical aspects related to accurate antenna pattern characterization by probe-corrected spherical near-field antenna measurements with a high-order probe are examined. First, the requirements set by an arbitrary high-order probe on the scanning technique are pointed out. Secondly, a channel balance calibration procedure for a high-order dual-port probe with non-identical ports is presented, and the requirements set by this procedure for the probe are discussed.

  3. Identifying members of the domain Archaea with rRNA-targeted oligonucleotide probes.

    OpenAIRE

    Burggraf, S; Mayer, T; Amann, R; Schadhauser, S; Woese, C R; Stetter, K O

    1994-01-01

    Two 16S rRNA-targeted oligonucleotide probes were designed for the archaeal kingdoms Euryarchaeota and Crenarchaeota. Probe specificities were evaluated by nonradioactive dot blot hybridization against selected reference organisms. The successful application of fluorescent-probe derivatives for whole-cell hybridization required organism-specific optimization of fixation and hybridization conditions to assure probe penetration and morphological integrity of the cells. The probes allowed prelim...

  4. Cost tradeoffs in consequence management at nuclear power plants: A risk based approach to setting optimal long-term interdiction limits for regulatory analyses

    International Nuclear Information System (INIS)

    Mubayi, V.

    1995-05-01

    The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involves costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the ''long-term interdiction limit,'' is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related, and the choice of an interdiction limit is, in effect, a trade-off between these two measures. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit: the health effect costs increase as the limit is relaxed, while the protective action costs decrease. The minimum of the total cost curve can be used to calculate an optimal long-term interdiction limit. The calculation of such an optimal limit is presented for each of five US nuclear power plants which were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission
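The optimization amounts to scanning candidate interdiction limits for the minimum of monetized health cost (which rises as the limit is relaxed) plus protective-action cost (which falls). The cost curves below are invented placeholders for illustration, not NUREG-1150 values:

```python
def total_cost(limit_msv, health_cost_per_msv=2.0e6, protection_scale=5.0e8):
    """Total societal cost vs. long-term interdiction limit (toy curves):
    monetized health cost grows roughly linearly with the allowed dose,
    while protective-action cost falls off as the limit is relaxed."""
    health = health_cost_per_msv * limit_msv
    protection = protection_scale / (1.0 + limit_msv)
    return health + protection

limits = [0.5 * i for i in range(1, 201)]          # candidate limits, 0.5 to 100 mSv
optimal_limit = min(limits, key=total_cost)        # minimum of the total cost curve
```

For these placeholder curves the minimum sits at an interior limit, reflecting the trade-off described above: tightening the limit below the optimum buys little health benefit for large interdiction cost, and relaxing it past the optimum does the reverse.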

  5. Hyperpolarized NMR Probes for Biological Assays

    Directory of Open Access Journals (Sweden)

    Sebastian Meier

    2014-01-01

    Full Text Available During the last decade, the development of nuclear spin polarization enhanced (hyperpolarized molecular probes has opened up new opportunities for studying the inner workings of living cells in real time. The hyperpolarized probes are produced ex situ, introduced into biological systems and detected with high sensitivity and contrast against background signals using high resolution NMR spectroscopy. A variety of natural, derivatized and designed hyperpolarized probes has emerged for diverse biological studies including assays of intracellular reaction progression, pathway kinetics, probe uptake and export, pH, redox state, reactive oxygen species, ion concentrations, drug efficacy or oncogenic signaling. These probes are readily used directly under natural conditions in biofluids and are often directly developed and optimized for cellular assays, thus leaving little doubt about their specificity and utility under biologically relevant conditions. Hyperpolarized molecular probes for biological NMR spectroscopy enable the unbiased detection of complex processes by virtue of the high spectral resolution, structural specificity and quantifiability of NMR signals. Here, we provide a survey of strategies used for the selection, design and use of hyperpolarized NMR probes in biological assays, and describe current limitations and developments.

  6. Use of electromyography to optimize Lokomat® settings for subject-specific gait rehabilitation in post-stroke hemiparetic patients: A proof-of-concept study.

    Science.gov (United States)

    Cherni, Yosra; Begon, Mickael; Chababe, Hicham; Moissenet, Florent

    2017-09-01

    While generic protocols exist for gait rehabilitation using robotic orthoses such as the Lokomat®, several settings (guidance, body-weight support (BWS) and velocity) may be adjusted to individualize patient training. However, no systematic approach has yet emerged. Our objective was to assess the feasibility and effects of a systematic, electromyography-based approach to determining subject-specific settings, with application to strengthening of the gluteus maximus muscle in post-stroke hemiparetic patients. Two male patients (61 and 65 years) with post-stroke hemiparesis performed up to 9 Lokomat® trials with varying guidance and BWS while electromyography of the gluteus maximus was measured. For each subject, the settings that maximized gluteus maximus activity were used in 20 sessions of Lokomat® training. Modified Functional Ambulation Classification (mFAC), the 6-minute walking test (6-MWT), and extensor strength were measured before and after training. The greatest gluteus maximus activity was observed at (Guidance: 70%, BWS: 20%) for Patient 1 and (Guidance: 80%, BWS: 30%) for Patient 2. In both patients, the mFAC score increased from 4 to 7. The additional distance in the 6-MWT increased beyond the minimal clinically important difference (MCID = 34.4 m) reported for post-stroke patients. The isometric strength of the hip extensors increased by 43 and 114%. Defining subject-specific settings for Lokomat® training was feasible and simple to implement. These two case reports suggest a benefit of this approach for muscle strengthening. It remains to demonstrate the superiority of such an approach over a generic protocol in a wider population. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
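Operationally, the subject-specific tuning is an argmax over a small grid of (guidance, BWS) trials scored by EMG activity. A minimal sketch with fabricated EMG values (chosen so the result matches Patient 1's reported setting; the numbers are not from the study):

```python
def best_settings(trials):
    """Pick the (guidance %, BWS %) pair that maximized mean gluteus
    maximus EMG activity across the recorded trials."""
    return max(trials, key=trials.get)

# Hypothetical per-trial normalized mean EMG for one patient:
emg = {
    (50, 30): 0.41, (60, 30): 0.47, (70, 30): 0.52,
    (50, 20): 0.44, (60, 20): 0.55, (70, 20): 0.61,
}
setting = best_settings(emg)
```

The selected pair then defines that patient's settings for the subsequent training sessions.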

  7. Microarray of DNA probes on carboxylate functional beads surface

    Institute of Scientific and Technical Information of China (English)

    黄承志; 李原芳; 黄新华; 范美坤

    2000-01-01

    The microarray of DNA probes with 5′-NH2 and 5′-Tex/3′-NH2 modified termini on 10 μm carboxylate functional beads surface in the presence of 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide (EDC) is characterized in the present paper. It was found that the microarray capacity of DNA probes on the beads surface depends on the pH of the aqueous solution, the concentration of DNA probe and the total surface area of the beads. Under optimal conditions, the minimum distance of a 20-mer single-stranded DNA probe microarrayed on the beads surface is about 14 nm, while that of 20-mer double-stranded DNA probes is about 27 nm. If the probe length increases from 20-mer to 35-mer, its microarray density decreases correspondingly. Mechanism study shows that the binding mode of DNA probes on the beads surface is nearly parallel to the beads surface.

  8. Microarray of DNA probes on carboxylate functional beads surface

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The microarray of DNA probes with 5′-NH2 and 5′-Tex/3′-NH2 modified termini on 10 μm carboxylate functional beads surface in the presence of 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide (EDC) is characterized in the present paper. It was found that the microarray capacity of DNA probes on the beads surface depends on the pH of the aqueous solution, the concentration of DNA probe and the total surface area of the beads. Under optimal conditions, the minimum distance of a 20-mer single-stranded DNA probe microarrayed on the beads surface is about 14 nm, while that of 20-mer double-stranded DNA probes is about 27 nm. If the probe length increases from 20-mer to 35-mer, its microarray density decreases correspondingly. Mechanism study shows that the binding mode of DNA probes on the beads surface is nearly parallel to the beads surface.

  9. A Genetic Algorithm Based Optimization Scheme To Find The Best Set Of Design Parameters To Enhance The Performance Of An Automobile Radiator

    Directory of Open Access Journals (Sweden)

    G.Chaitanya

    2013-12-01

    Full Text Available The present work aims at maximizing the overall heat transfer rate of an automobile radiator using a Genetic Algorithm approach. The design specifications and empirical data pertaining to a rally car radiator obtained from the literature are considered in the present work. The mathematical function describing the objective for the problem is formulated using the radiator core design equations and the heat transfer relations governing the radiator. The overall heat transfer rate obtained from the present optimization technique is found to be 9.48 percent higher than the empirical value reported in the literature. Also, the enhancement in the overall heat transfer rate is achieved with a marginal reduction in the radiator dimensions, indicating a better spacing ratio compared to the existing design.
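The paper's objective is built from radiator core design equations that the abstract does not reproduce; a minimal GA sketch on a stand-in surrogate objective (the surrogate function, variable names, bounds and GA settings are all assumptions for illustration):

```python
import random

random.seed(0)

# Toy surrogate for the heat-transfer objective; peaks at fin_pitch=3, tube_rows=2.
def heat_transfer(x):
    fin_pitch, tube_rows = x
    return 10.0 - (fin_pitch - 3.0) ** 2 - (tube_rows - 2.0) ** 2

BOUNDS = [(0.0, 6.0), (0.0, 5.0)]   # hypothetical design-variable ranges

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    # Uniform crossover: each gene from either parent.
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def mutate(x, rate=0.2):
    # Gaussian perturbation, clipped back into bounds.
    return [min(hi, max(lo, xi + random.gauss(0, 0.3))) if random.random() < rate else xi
            for xi, (lo, hi) in zip(x, BOUNDS)]

def ga(pop_size=40, generations=60):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=heat_transfer, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=heat_transfer)

best = ga()   # should land near (3.0, 2.0)
```

Because the top half of each generation survives unchanged, the best-so-far fitness is monotone nondecreasing, which keeps the sketch stable even with aggressive mutation.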

  10. ProSeeK: a web server for MLPA probe design.

    Science.gov (United States)

    Pantano, Lorena; Armengol, Lluís; Villatoro, Sergi; Estivill, Xavier

    2008-11-28

    The technological evolution of platforms for detecting genome-wide copy number imbalances has allowed the discovery of an unexpected amount of human sequence that is variable in copy number among individuals. This type of human variation can make an important contribution to human diversity and disease susceptibility. Multiplex Ligation-dependent Probe Amplification (MLPA) is a targeted method to assess copy number differences for up to 40 genomic loci in one single experiment. Although specific MLPA assays can be ordered from MRC-Holland (the proprietary company of the MLPA technology), custom designs are also developed in many laboratories worldwide. In our own experience, an important drawback of custom MLPA assays is the time spent during the design of the specific oligonucleotides that are used as probes. Due to the large number of probes included in a single assay, a number of restrictions need to be met in order to maximize specificity and to increase success likelihood. We have developed a web tool for facilitating and optimising custom probe design for MLPA experiments. The algorithm only requires the target sequence in FASTA format and a set of parameters, provided by the user according to each specific MLPA assay, to identify the best probes inside the given region. To our knowledge, this is the first available tool for optimizing custom probe design of MLPA assays. The ease-of-use and speed of the algorithm dramatically reduces the turnaround time of probe design. ProSeeK will become a useful tool for all laboratories that are currently using MLPA in their research projects for CNV studies.

  11. ProSeeK: A web server for MLPA probe design

    Directory of Open Access Journals (Sweden)

    Villatoro Sergi

    2008-11-01

    Full Text Available Abstract Background The technological evolution of platforms for detecting genome-wide copy number imbalances has allowed the discovery of an unexpected amount of human sequence that is variable in copy number among individuals. This type of human variation can make an important contribution to human diversity and disease susceptibility. Multiplex Ligation-dependent Probe Amplification (MLPA) is a targeted method to assess copy number differences for up to 40 genomic loci in one single experiment. Although specific MLPA assays can be ordered from MRC-Holland (the proprietary company of the MLPA technology), custom designs are also developed in many laboratories worldwide. In our own experience, an important drawback of custom MLPA assays is the time spent during the design of the specific oligonucleotides that are used as probes. Due to the large number of probes included in a single assay, a number of restrictions need to be met in order to maximize specificity and to increase success likelihood. Results We have developed a web tool for facilitating and optimising custom probe design for MLPA experiments. The algorithm only requires the target sequence in FASTA format and a set of parameters, provided by the user according to each specific MLPA assay, to identify the best probes inside the given region. Conclusion To our knowledge, this is the first available tool for optimizing custom probe design of MLPA assays. The ease-of-use and speed of the algorithm dramatically reduces the turnaround time of probe design. ProSeeK will become a useful tool for all laboratories that are currently using MLPA in their research projects for CNV studies.
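ProSeeK's actual selection criteria are not spelled out in the abstract; a toy candidate filter in the same spirit, assuming a sliding window over the target with user-supplied Tm and GC ranges (the Wallace-rule Tm is a crude stand-in for real hybridization thermodynamics):

```python
def wallace_tm(seq: str) -> float:
    """Rough melting temperature via the Wallace rule (2 degC per A/T,
    4 degC per G/C) -- a crude stand-in for proper thermodynamics."""
    s = seq.upper()
    return 2.0 * (s.count("A") + s.count("T")) + 4.0 * (s.count("G") + s.count("C"))

def gc_fraction(seq: str) -> float:
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(seq)

def candidate_probes(target, length=24, tm_range=(60.0, 80.0), gc_range=(0.4, 0.6)):
    """Slide a window over the target sequence and keep windows whose Tm
    and GC content fall inside the user-supplied ranges."""
    out = []
    for i in range(len(target) - length + 1):
        window = target[i : i + length]
        if tm_range[0] <= wallace_tm(window) <= tm_range[1] and \
           gc_range[0] <= gc_fraction(window) <= gc_range[1]:
            out.append((i, window))   # (start position, probe sequence)
    return out

# Made-up target sequence for illustration only.
target = "ATGCGTACGTTAGCCGGATCCATGCAGTACGATCGGCTAAGGCTTACG"
hits = candidate_probes(target)
```

A real design additionally has to check specificity against the whole genome and the ligation-site chemistry of MLPA half-probes, neither of which is modeled here.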

  12. Identifying members of the domain Archaea with rRNA-targeted oligonucleotide probes.

    Science.gov (United States)

    Burggraf, S; Mayer, T; Amann, R; Schadhauser, S; Woese, C R; Stetter, K O

    1994-09-01

    Two 16S rRNA-targeted oligonucleotide probes were designed for the archaeal kingdoms Euryarchaeota and Crenarchaeota. Probe specificities were evaluated by nonradioactive dot blot hybridization against selected reference organisms. The successful application of fluorescent-probe derivatives for whole-cell hybridization required organism-specific optimizations of fixation and hybridization conditions to assure probe penetration and morphological integrity of the cells. The probes allowed preliminary grouping of three new hyperthermophilic isolates. Together with other group-specific rRNA-targeted oligonucleotide probes, these probes will facilitate rapid in situ monitoring of the populations present in hydrothermal systems and support cultivation attempts.

  13. Theory of Langmuir probes in anisotropic plasmas

    International Nuclear Information System (INIS)

    Sudit, I.D.; Woods, R.C.

    1994-01-01

    A theory has been developed for electron retardation by Langmuir probes of several geometries in a general anisotropic plasma, with arbitrary probe orientation and valid for any sheath thickness. Electron densities and electron velocity distribution functions (EVDFs) are obtained from the second derivative of probe I-V curves, as in Druyvesteyn's original method, which was developed for isotropic plasmas. Fedorov had extended the latter method, in the context of a thin-sheath approximation, to axisymmetric plasmas, in which the EVDF is expanded in a series of Legendre polynomials. In the present work an expansion in a series of spherical harmonics is employed, and the coordinate transformations are handled using the irreducible representations of the three-dimensional rotation group. It is shown that the Volterra integral equations that must be solved to obtain the expansion coefficients of the EVDF from the second-derivative data are no more complicated in the general case than those for the axisymmetric plasma. Furthermore, in the latter case the results can be shown to be equivalent to Fedorov's thin-sheath expression. For the case of planar probes a formulation based on first derivatives of the I-V curves has been obtained. If data are obtained at enough different probe orientations of a one-sided planar disc probe, any number of spherical harmonic coefficient functions may be obtained by inverting a set of linear equations and the complete EVDF deduced. For a cylindrical probe or a two-sided planar disc probe, the integration of the second derivative of the probe current gives the exact electron density for any arbitrary probe orientation and any degree of plasma anisotropy.
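The isotropic core of the method can be sketched numerically: for a Maxwellian retardation current I(V) = I0 exp(-V/kTe), the Druyvesteyn second derivative d²I/dV² is again exponential, so the electron temperature can be read back from its log-slope (a synthetic isotropic example; the anisotropic spherical-harmonic machinery of the paper is not reproduced):

```python
import numpy as np

kTe = 2.0                        # assumed electron temperature, eV
V = np.linspace(0.1, 15.0, 500)  # retarding potential, volts
I = np.exp(-V / kTe)             # normalized electron retardation current

# Druyvesteyn step: numerical second derivative of the I-V curve.
d2I = np.gradient(np.gradient(I, V), V)

# For a Maxwellian, ln(d2I) is linear in V with slope -1/kTe, so the
# temperature is recoverable from the inverted data (interior points
# only, since np.gradient is less accurate at the edges):
slope = np.polyfit(V[50:450], np.log(d2I[50:450]), 1)[0]
Te_est = -1.0 / slope            # should come back close to 2.0 eV
```

The EVDF itself is proportional to sqrt(V) * d2I in the isotropic case; the paper's contribution is doing the analogous inversion per spherical-harmonic coefficient.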

  14. Transmit-receive eddy current probes

    International Nuclear Information System (INIS)

    Obrutsky, L.S.; Sullivan, S.P.; Cecco, V.S.

    1997-01-01

    In the last two decades, due to increased inspection demands, eddy current instrumentation has advanced from single-frequency, single-output instruments to multifrequency, computer-aided systems. This has significantly increased the scope of eddy current testing, but, unfortunately, it has also increased the cost and complexity of inspections. In addition, this approach has not always improved defect detectability or signal-to-noise ratio. Most eddy current testing applications are still performed with impedance probes, which have well known limitations. However, recent research at AECL has led to improved eddy current inspections through the design and development of transmit-receive (T/R) probes. T/R eddy current probes, with laterally displaced transmit and receive coils, present a number of advantages over impedance probes. They have improved signal-to-noise ratio in the presence of variable lift-off compared to impedance probes. They have strong directional properties, permitting probe optimization for circumferential or axial crack detection, and possess good phase discrimination to surface defects. They can significantly increase the scope of eddy current testing, permitting reliable detection and sizing of cracks in heat exchanger tubing as well as in welded areas of both ferritic and non-ferromagnetic components. This presentation will describe the operating principles of T/R probes with the help of computer-derived normalized voltage diagrams. We will discuss their directional properties and analyze the advantages of using single and multiple T/R probes over impedance probes for specific inspection cases. Current applications to surface and tube testing and some typical inspection results will be described. (author)

  15. Gamma-Ray Imaging Probes.

    Science.gov (United States)

    Wild, Walter James

    1988-12-01

    External nuclear medicine diagnostic imaging of early primary and metastatic lung cancer tumors is difficult due to the poor sensitivity and resolution of existing gamma cameras. Nonimaging counting detectors used for internal tumor detection give ambiguous results because distant background variations are difficult to discriminate from neighboring tumor sites. This suggests that an internal imaging nuclear medicine probe, particularly an esophageal probe, may be advantageously used to detect small tumors because of the ability to discriminate against background variations and the capability to get close to sites neighboring the esophagus. The design, theory of operation, preliminary bench tests, characterization of noise behavior and optimization of such an imaging probe are the central theme of this work. The central concept lies in the representation of the aperture shell by a sequence of binary digits. This, coupled with the mode of operation which is data encoding within an axial slice of space, leads to the fundamental imaging equation in which the coding operation is conveniently described by a circulant matrix operator. The coding/decoding process is a classic coded-aperture problem, and various estimators to achieve decoding are discussed. Some estimators require a priori information about the object (or object class) being imaged; the only unbiased estimator that does not impose this requirement is the simple inverse-matrix operator. The effects of noise on the estimate (or reconstruction) are discussed for general noise models and various codes/decoding operators. The choice of an optimal aperture for detector count times of clinical relevance is examined using a statistical class-separability formalism.
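The coding/decoding step described above can be sketched with a toy circulant operator and the unbiased inverse-matrix estimator (the 5-element binary aperture code and the object vector are hypothetical; odd code length keeps this two-tap circulant invertible):

```python
import numpy as np

def circulant(code):
    """Build the circulant coding matrix whose rows are cyclic shifts of
    the binary aperture code, as in the fundamental imaging equation."""
    n = len(code)
    return np.array([np.roll(code, k) for k in range(n)], dtype=float)

# Hypothetical 5-element binary aperture code (two open elements).
code = np.array([1, 1, 0, 0, 0])
C = circulant(code)

obj = np.array([0.0, 3.0, 0.0, 1.0, 0.0])   # toy object: two "hot" sites
detected = C @ obj                          # coded detector counts
recovered = np.linalg.solve(C, detected)    # unbiased inverse-matrix estimate
```

In the noise-free case the inverse-matrix estimate reproduces the object exactly; with Poisson counting noise it remains unbiased but amplifies variance, which is why the work compares it against prior-based estimators.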

  16. On the control of irrigation through soil moisture measurement using a neutron depth probe in horizontal subsurface measuring circuits

    International Nuclear Information System (INIS)

    Schaecke, B.; Schaecke, E.

    1977-01-01

    An outline is given of the advantages inherent in soil moisture measurement by means of a neutron probe in horizontal subsurface measuring circuits for irrigation control. Preliminary experience with setting up a field calibration curve and with practical measurement is presented. This technique includes the following advantages: almost complete coverage of the upper soil range which is of interest to irrigation control; good measuring density; suitable distribution of measuring points per unit area; possibility of continuous probe passage; optimal repeatability of measurements; exploration of a unit area with only a few measuring circuits; no obstacles to tillage, drilling, intercultivation and harvest operations; and complete conservation of crop and plot, which is not reached with any other soil moisture measurement technique so far available. Making use of the above advantages, the new technique allows automatic irrigation control with only one neutron depth probe. (author)

  17. Probe tests microweld strength

    Science.gov (United States)

    1965-01-01

    Probe is developed to test strength of soldered, brazed or microwelded joints. It consists of a spring which may be adjusted to the desired test pressure by means of a threaded probe head, and an indicator lamp. Device may be used for electronic equipment testing.

  18. Applying genetic algorithms to set the optimal combination of forest fire related variables and model forest fire susceptibility based on data mining models. The case of Dayu County, China.

    Science.gov (United States)

    Hong, Haoyuan; Tsangaratos, Paraskevas; Ilia, Ioanna; Liu, Junzhi; Zhu, A-Xing; Xu, Chong

    2018-07-15

    The main objective of the present study was to utilize Genetic Algorithms (GA) in order to obtain the optimal combination of forest fire related variables and apply data mining methods for constructing a forest fire susceptibility map. In the proposed approach, a Random Forest (RF) and a Support Vector Machine (SVM) were used to produce a forest fire susceptibility map for Dayu County, which is located in the southwest of Jiangxi Province, China. For this purpose, historic forest fires and thirteen forest fire related variables were analyzed, namely: elevation, slope angle, aspect, curvature, land use, soil cover, heat load index, normalized difference vegetation index, mean annual temperature, mean annual wind speed, mean annual rainfall, distance to river network and distance to road network. The Natural Break and the Certainty Factor method were used to classify and weight the thirteen variables, while a multicollinearity analysis was performed to determine the correlation among the variables and decide about their usability. The optimal set of variables determined by the GA limited the number of variables to eight, excluding from the analysis aspect, land use, heat load index, distance to river network and mean annual rainfall. The performance of the forest fire models was evaluated by using the area under the Receiver Operating Characteristic curve (ROC-AUC) based on the validation dataset. Overall, the RF models gave higher AUC values. Also, the results showed that the proposed optimized models outperform the original models. Specifically, the optimized RF model gave the best results (0.8495), followed by the original RF (0.8169), while the optimized SVM gave lower values (0.7456) than the RF, however higher than the original SVM (0.7148) model. The study highlights the significance of feature selection techniques in forest fire susceptibility, whereas data mining methods could be considered as a valid approach for forest fire susceptibility modeling.
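A minimal GA feature-selection sketch in the spirit of the study: individuals are binary masks over the thirteen variables, and fitness stands in for the validation AUC of an RF or SVM model (here it is a made-up score against a hypothetical informative subset, since the real data are not available):

```python
import random

random.seed(1)

FEATURES = ["elevation", "slope", "aspect", "curvature", "land_use", "soil",
            "heat_load", "ndvi", "temperature", "wind", "rainfall",
            "dist_river", "dist_road"]

# Hypothetical "truly informative" indices, used only to give the toy
# fitness something to optimize; in the paper this role is played by
# the model AUC on the validation dataset.
INFORMATIVE = {0, 1, 3, 5, 7, 8, 9, 11, 12}

def fitness(mask):
    chosen = {i for i, bit in enumerate(mask) if bit}
    hits = len(chosen & INFORMATIVE)
    noise = len(chosen - INFORMATIVE)
    return hits - 0.5 * noise          # reward signal, penalize clutter

def evolve(pop_size=30, generations=50, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in FEATURES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(FEATURES))   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
selected = [f for f, bit in zip(FEATURES, best) if bit]
```

Swapping the toy fitness for a cross-validated AUC of an actual classifier turns this into the wrapper-style selection the paper describes.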

  19. Shared probe design and existing microarray reanalysis using PICKY

    Directory of Open Access Journals (Sweden)

    Chou Hui-Hsien

    2010-04-01

    Full Text Available Abstract Background Large genomes contain families of highly similar genes that cannot be individually identified by microarray probes. This limitation is due to thermodynamic restrictions and cannot be resolved by any computational method. Since gene annotations are updated more frequently than microarrays, another common issue facing microarray users is that existing microarrays must be routinely reanalyzed to determine probes that are still useful with respect to the updated annotations. Results PICKY 2.0 can design shared probes for sets of genes that cannot be individually identified using unique probes. PICKY 2.0 uses novel algorithms to track sharable regions among genes and to strictly distinguish them from other highly similar but nontarget regions during thermodynamic comparisons. Therefore, PICKY does not sacrifice the quality of shared probes when choosing them. The latest PICKY 2.1 includes the new capability to reanalyze existing microarray probes against updated gene sets to determine probes that are still valid to use. In addition, more precise nonlinear salt effect estimates and other improvements are added, making PICKY 2.1 more versatile for microarray users. Conclusions Shared probes allow expressed gene family members to be detected; this capability is generally more desirable than not knowing anything about these genes. Shared probes also enable the design of cross-genome microarrays, which facilitate multiple species identification in environmental samples. The new nonlinear salt effect calculation significantly increases the precision of probes at a lower buffer salt concentration, and the probe reanalysis function improves existing microarray result interpretations.

  20. Automated hybrid closed-loop control with a proportional-integral-derivative based system in adolescents and adults with type 1 diabetes: individualizing settings for optimal performance.

    Science.gov (United States)

    Ly, Trang T; Weinzimer, Stuart A; Maahs, David M; Sherr, Jennifer L; Roy, Anirban; Grosman, Benyamin; Cantwell, Martin; Kurtz, Natalie; Carria, Lori; Messer, Laurel; von Eyben, Rie; Buckingham, Bruce A

    2017-08-01

    Automated insulin delivery systems, utilizing a control algorithm to dose insulin based upon subcutaneous continuous glucose sensor values and insulin pump therapy, will soon be available for commercial use. The objective of this study was to determine the preliminary safety and efficacy of initialization parameters with the Medtronic hybrid closed-loop controller by comparing percentage of time in range, 70-180 mg/dL (3.9-10 mmol/L), mean glucose values, as well as percentage of time above and below target range between sensor-augmented pump therapy and hybrid closed-loop, in adults and adolescents with type 1 diabetes. We studied an initial cohort of 9 adults followed by a second cohort of 15 adolescents, using the Medtronic hybrid closed-loop system with the proportional-integral-derivative with insulin feedback (PID-IFB) algorithm. Hybrid closed-loop was tested in supervised hotel-based studies over 4-5 days. The overall mean percentage of time in range (70-180 mg/dL, 3.9-10 mmol/L) during hybrid closed-loop was 71.8% in the adult cohort and 69.8% in the adolescent cohort. The overall percentage of time spent under 70 mg/dL (3.9 mmol/L) was 2.0% in the adult cohort and 2.5% in the adolescent cohort. Mean glucose values were 152 mg/dL (8.4 mmol/L) in the adult cohort and 153 mg/dL (8.5 mmol/L) in the adolescent cohort. Closed-loop control using the Medtronic hybrid closed-loop system enables adaptive, real-time basal rate modulation. Initializing hybrid closed-loop in clinical practice will involve individualizing initiation parameters to optimize overall glucose control. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
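The PID-IFB algorithm itself is proprietary; a generic discrete PID sketch against a glucose target illustrates the control structure being individualized (the gains, target, sampling interval and glucose trace are all illustrative numbers, and the insulin-feedback term and safety constraints of the real system are not modeled):

```python
class PID:
    """Textbook discrete PID loop; NOT the Medtronic PID-IFB algorithm."""

    def __init__(self, kp, ki, kd, target, dt=5.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target          # glucose target, mg/dL
        self.dt = dt                  # minutes between sensor readings
        self.integral = 0.0
        self.prev_error = None

    def update(self, glucose):
        error = glucose - self.target
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        dose = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(0.0, dose)         # insulin delivery cannot be negative

# Illustrative run: glucose drifting down toward the 120 mg/dL target.
pid = PID(kp=0.01, ki=0.0001, kd=0.05, target=120.0)
doses = [pid.update(g) for g in [180.0, 170.0, 160.0, 150.0]]
```

Individualizing initialization parameters, as the study discusses, corresponds here to choosing per-patient gains and limits before the loop is closed.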

  1. Expanding probe repertoire and improving reproducibility in human genomic hybridization

    Science.gov (United States)

    Dorman, Stephanie N.; Shirley, Ben C.; Knoll, Joan H. M.; Rogan, Peter K.

    2013-01-01

    Diagnostic DNA hybridization relies on probes composed of single copy (sc) genomic sequences. Sc sequences in probe design ensure high specificity and avoid cross-hybridization to other regions of the genome, which could lead to ambiguous results that are difficult to interpret. We examine how the distribution and composition of repetitive sequences in the genome affects sc probe performance. A divide and conquer algorithm was implemented to design sc probes. With this approach, sc probes can include divergent repetitive elements, which hybridize to unique genomic targets under higher stringency experimental conditions. Genome-wide custom probe sets were created for fluorescent in situ hybridization (FISH) and microarray genomic hybridization. The scFISH probes were developed for detection of copy number changes within small tumour suppressor genes and oncogenes. The microarrays demonstrated increased reproducibility by eliminating cross-hybridization to repetitive sequences adjacent to probe targets. The genome-wide microarrays exhibited lower median coefficients of variation (17.8%) for two HapMap family trios. The coefficients of variations of commercial probes within 300 nt of a repetitive element were 48.3% higher than the nearest custom probe. Furthermore, the custom microarray called a chromosome 15q11.2q13 deletion more consistently. This method for sc probe design increases probe coverage for FISH and lowers variability in genomic microarrays. PMID:23376933
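The divide-and-conquer step can be sketched as splitting a repeat-masked sequence into candidate single-copy segments (the RepeatMasker lowercase-masking convention is assumed, and the thermodynamic re-checks against divergent repeats described in the paper are omitted):

```python
import re

def single_copy_segments(seq: str, min_len: int = 20):
    """Split a repeat-masked sequence into candidate single-copy segments.
    Uppercase runs are treated as non-repetitive (RepeatMasker convention);
    segments shorter than min_len are discarded as unusable for probes.
    A toy version of the paper's divide-and-conquer step."""
    segments = []
    for m in re.finditer(r"[ACGT]+", seq):   # uppercase = non-repetitive
        if m.end() - m.start() >= min_len:
            segments.append((m.start(), m.group()))
    return segments

# Made-up masked locus: two unique 24-nt blocks flanking a lowercase repeat.
masked = "ACGTACGTACGTACGTACGTGGCC" + "acgt" * 8 + "TTGGCCAATTGGCCAATTGGCCAA"
segs = single_copy_segments(masked)
```

A real pipeline would then re-screen each segment against the genome under the intended hybridization stringency before accepting it as a probe.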

  2. Numerical simulations for quantitative analysis of electrostatic interaction between atomic force microscopy probe and an embedded electrode within a thin dielectric: meshing optimization, sensitivity to potential distribution and impact of cantilever contribution

    Science.gov (United States)

    Azib, M.; Baudoin, F.; Binaud, N.; Villeneuve-Faure, C.; Bugarin, F.; Segonds, S.; Teyssedre, G.

    2018-04-01

    Recent experimental results demonstrated that an electrostatic force distance curve (EFDC) can be used for space charge probing in thin dielectric layers. A main advantage of the method is claimed to be its sensitivity to charge localization, which, however, needs to be substantiated by numerical simulations. In this paper, we have developed a model which permits us to compute an EFDC accurately by using the most sophisticated and accurate geometry for the atomic force microscopy probe. To avoid simplifications and in order to reproduce experimental conditions, the EFDC has been simulated for a system constituted of a polarized electrode embedded in a thin dielectric layer (SiNx). The individual contributions of forces on the tip and on the cantilever have been analyzed separately to account for possible artefacts. The EFDC sensitivity to potential distribution is studied through the change in electrode shape, namely the width and the depth. Finally, the numerical results have been compared with experimental data.

  3. Methodological basis for the optimization of a marine sea-urchin embryo test (SET) for the ecological assessment of coastal water quality.

    Science.gov (United States)

    Saco-Alvarez, Liliana; Durán, Iria; Ignacio Lorenzo, J; Beiras, Ricardo

    2010-05-01

    The sea-urchin embryo test (SET) has been frequently used as a rapid, sensitive, and cost-effective biological tool for marine monitoring worldwide, but the selection of a sensitive, objective, and automatically readable endpoint, a stricter quality control to guarantee optimum handling and biological material, and the identification of confounding factors that interfere with the response have hampered its widespread routine use. Size increase in a minimum of n=30 individuals per replicate, either normal larvae or earlier developmental stages, was preferred to observer-dependent, discontinuous responses as the test endpoint. Control size increase after 48 h incubation at 20 °C must meet an acceptability criterion of 218 μm. In order to avoid false positives, minimums of 32‰ salinity, pH 7 and 2 mg/L oxygen, and a maximum of 40 μg/L NH3 (NOEC) are required in the incubation media. For in situ testing, size increase rates must be corrected on a degree-day basis using 12 °C as the developmental threshold. Copyright 2010 Elsevier Inc. All rights reserved.
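The degree-day correction for in situ testing can be sketched directly from the two figures in the abstract, the 12 °C developmental threshold and the 218 μm / 48 h / 20 °C control criterion (the constant-temperature exposure profile below is made up for illustration):

```python
THRESHOLD_C = 12.0   # developmental threshold from the abstract, degC

def degree_days(temps_c, hours_per_reading=1.0):
    """Accumulate degree-days above the developmental threshold from a
    series of temperature readings (hourly by default)."""
    return sum(max(0.0, t - THRESHOLD_C) * hours_per_reading / 24.0
               for t in temps_c)

def corrected_growth_rate(size_increase_um, temps_c):
    """Size increase per degree-day, comparable across deployments held
    at different field temperatures."""
    return size_increase_um / degree_days(temps_c)

# 48 hourly readings at a constant 20 degC accumulate (20-12)*48/24 = 16
# degree-days, so the 218 um control criterion corresponds to about
# 13.6 um of larval growth per degree-day.
rate = corrected_growth_rate(218.0, [20.0] * 48)
```

Field deployments with variable temperature records would feed their actual hourly series into the same normalization.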

  4. Evaluation results of the optimal estimation based, multi-sensor cloud property data sets derived from AVHRR heritage measurements in the Cloud_cci project.

    Science.gov (United States)

    Stapelberg, S.; Jerg, M.; Stengel, M.; Hollmann, R.

    2014-12-01

    In 2010 the ESA Climate Change Initiative (CCI) Cloud project was started with the objective of generating a long-term coherent data set of cloud properties. The cloud properties considered are cloud mask, cloud top estimates, cloud optical thickness, cloud effective radius and post-processed parameters such as cloud liquid and ice water path. During the first phase of the project 3 years of data spanning 2007 to 2009 have been produced on a global gridded daily and monthly mean basis. Alongside the processing, an extended evaluation study was started in order to gain a first understanding of the quality of the retrieved data. The critical discussion of the evaluation results plays a key role in the further development and improvement of the dataset's quality. The presentation will give a short overview of the evaluation study undertaken in the Cloud_cci project. The focus will be on the evaluation of gridded, monthly mean cloud fraction and cloud top data from the Cloud_cci AVHRR-heritage dataset against CLARA-A1, MODIS-Coll5, PATMOS-X and ISCCP data. Exemplary results will be shown. Strengths and shortcomings of the retrieval scheme as well as possible impacts of averaging approaches on the evaluation will be discussed. An overview of Cloud_cci Phase 2 will be given.

  5. Developments in Scanning Hall Probe Microscopy

    Science.gov (United States)

    Chouinard, Taras; Chu, Ricky; David, Nigel; Broun, David

    2009-05-01

    Low temperature scanning Hall probe microscopy is a sensitive means of imaging magnetic structures with high spatial resolution and magnetic flux sensitivity approaching that of a Superconducting Quantum Interference Device. We have developed a scanning Hall probe microscope with novel features, including highly reliable coarse positioning, in situ optimization of sensor-sample alignment and capacitive transducers for linear, long range positioning measurement. This has been motivated by the need to reposition accurately above fabricated nanostructures such as small superconducting rings. Details of the design and performance will be presented as well as recent progress towards time-resolved measurements with sub-nanosecond resolution.

  6. Nuclear borehole probes - theory and experiments

    International Nuclear Information System (INIS)

    Joergensen, J.L.; Korsbech, U.; Gynther Nielsen, K.; Oelgaard, P.L.

    1985-06-01

    The report gives a summary of the theoretical and experimental work on borehole probes that has been performed since 1971 at The Department of Electrophysics, The Technical University of Denmark. The first part of the report concerns the use of a spectral natural gamma-ray probe (SNG-probe), which is used for measurements of the spectral distribution of the gamma rays of the geological strata around a borehole. In general the spectrum is divided into three parts - the gamma rays from potassium-40, from thorium-232 and daughters, and from uranium-238 and daughters. A set of curves showing the intensities of the gamma radiation from K, Th, and U versus depth is called an SNG-log. If properly calibrated, the SNG-log gives the concentrations of Th, U, and K in the formation surrounding the borehole. First the basis for an interpretation of SNG-logs is discussed. Then follows a description of some SNG-probes designed and built by The Department of Electrophysics, and a discussion of the calibration of SNG-probes. Some examples of SNG-logs are presented, and some general comments on the use of SNG-logs are given. The second part of the report concerns mainly the development of theoretical models for neutron-neutron probes, gamma-gamma probes, and pulsed-neutron probes. The purpose of this work has been to examine how well the models correlate with measured results and - where reasonable agreement is found - to use the models in studies of the factors that affect the probe responses, in the interpretation of experimental results, and in probe design. (author)
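Interpreting a calibrated SNG-log amounts to unmixing the three spectral-window count rates into K, U and Th concentrations via a calibration (sensitivity) matrix; a minimal linear-unmixing sketch (the sensitivity matrix and concentrations are invented for illustration, not taken from the report):

```python
import numpy as np

# Hypothetical 3x3 sensitivity matrix: counts per unit concentration of
# K (%), U (ppm) and Th (ppm) in each spectral window. Real probes are
# calibrated in test pits; these numbers are made up.
A = np.array([
    [12.0, 1.5, 0.9],   # K window  (~1.46 MeV photopeak region)
    [ 0.4, 6.0, 1.1],   # U window  (~1.76 MeV, Bi-214)
    [ 0.2, 0.8, 4.5],   # Th window (~2.62 MeV, Tl-208)
])

true_conc = np.array([2.0, 3.0, 10.0])   # K %, U ppm, Th ppm (toy formation)
counts = A @ true_conc                   # window count rates the probe records

conc = np.linalg.solve(A, counts)        # stripped concentrations per depth
```

Repeating the solve at each logging depth turns the raw window counts into the K/Th/U-versus-depth curves that constitute the SNG-log.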

  7. The REVAMP trial to evaluate HIV resistance testing in sub-Saharan Africa: a case study in clinical trial design in resource limited settings to optimize effectiveness and cost effectiveness estimates.

    Science.gov (United States)

    Siedner, Mark J; Bwana, Mwebesa B; Moosa, Mahomed-Yunus S; Paul, Michelle; Pillay, Selvan; McCluskey, Suzanne; Aturinda, Isaac; Ard, Kevin; Muyindike, Winnie; Moodley, Pravikrishnen; Brijkumar, Jaysingh; Rautenberg, Tamlyn; George, Gavin; Johnson, Brent; Gandhi, Rajesh T; Sunpath, Henry; Marconi, Vincent C

    2017-07-01

    In sub-Saharan Africa, rates of sustained HIV virologic suppression remain below international goals. HIV resistance testing, while common in resource-rich settings, has not gained traction due to concerns about cost and sustainability. We designed a randomized clinical trial to determine the feasibility, effectiveness, and cost-effectiveness of routine HIV resistance testing in sub-Saharan Africa. We describe challenges common to intervention studies in resource-limited settings, and strategies used to address them, including: (1) optimizing generalizability and cost-effectiveness estimates to promote transition from study results to policy; (2) minimizing bias due to patient attrition; and (3) addressing ethical issues related to enrollment of pregnant women. The study randomizes people in Uganda and South Africa with virologic failure on first-line therapy to standard of care virologic monitoring or immediate resistance testing. To strengthen external validity, study procedures are conducted within publicly supported laboratory and clinical facilities using local staff. To optimize cost estimates, we collect primary data on quality of life and medical resource utilization. To minimize losses from observation, we collect locally relevant contact information, including WhatsApp account details, for field-based tracking of missing participants. Finally, pregnant women are followed with an adapted protocol which includes an increased visit frequency to minimize risk to them and their fetuses. REVAMP is a pragmatic randomized clinical trial designed to test the effectiveness and cost-effectiveness of HIV resistance testing versus standard of care in sub-Saharan Africa. We anticipate the results will directly inform HIV policy in sub-Saharan Africa to optimize care for HIV-infected patients.

  8. Hard probes 2006 Asilomar

    CERN Multimedia

    2006-01-01

    "The second international conference on hard and electromagnetic probes of high-energy nuclear collisions was held June 9 to 16, 2006 at the Asilomar Conference grounds in Pacific Grove, California" (photo and 1/2 page)

  9. Neutrons as a probe

    International Nuclear Information System (INIS)

    Iizumi, Masashi

    1993-01-01

    As an introduction to the symposium a brief overview will be given of the features of neutrons as a probe. First it will be pointed out that the utilization of neutrons as a probe for investigating the structural and dynamical properties of condensed matter is a benign gift that eventuated from the release of atomic energy initiated by Enrico Fermi exactly half a century ago. Features of neutrons as a probe are discussed in accordance with the four basic physical properties of neutrons as an elementary particle: (1) neutrons carry no electric charge (the interaction with matter is nuclear), (2) the mass of the neutron is 1 amu, (3) the spin is 1/2 and (4) neutrons have a magnetic dipole moment. An overview will be given of the uniqueness of neutrons as a probe and of the variety in the ways they are used in the wide research area from pure science to industrial applications. (author)

  10. Adjustable Pitot Probe

    Science.gov (United States)

    Ashby, George C., Jr.; Robbins, W. Eugene; Horsley, Lewis A.

    1991-01-01

    Probe readily positionable in core of uniform flow in hypersonic wind tunnel. Formed of pair of mating cylindrical housings: transducer housing and pitot-tube housing. Pitot tube supported by adjustable wedge fairing attached to top of pitot-tube housing with semicircular foot. Probe adjusted both radially and circumferentially. In addition, pressure-sensing transducer cooled internally by water or other cooling fluid passing through annulus of cooling system.

  11. Automated design of genomic Southern blot probes

    Directory of Open Access Journals (Sweden)

    Komiyama Noboru H

    2010-01-01

    Full Text Available Abstract Background Southern blotting is a DNA analysis technique that has found widespread application in molecular biology. It has been used for gene discovery and mapping and has diagnostic and forensic applications, including mutation detection in patient samples and DNA fingerprinting in criminal investigations. Southern blotting has been employed as the definitive method for detecting transgene integration and successful homologous recombination in gene targeting experiments. The technique employs a labeled DNA probe to detect a specific DNA sequence in a complex DNA sample that has been separated by restriction digestion and gel electrophoresis. Critically, for the technique to succeed the probe must be unique to the target locus so as not to cross-hybridize to other endogenous DNA within the sample. Investigators routinely employ a manual approach to probe design. A genome browser is used to extract DNA sequence from the locus of interest, which is searched against the target genome using a BLAST-like tool. Ideally a single perfect match is obtained to the target, with little cross-reactivity caused by homologous DNA sequence present in the genome and/or repetitive and low-complexity elements in the candidate probe. This is a labor-intensive process often requiring several attempts to find a suitable probe for laboratory testing. Results We have written an informatic pipeline to automatically design genomic Southern blot probes that specifically attempts to optimize the resultant probe, employing a brute-force strategy of generating many candidate probes of acceptable length in the user-specified design window, searching all against the target genome, then scoring and ranking the candidates by uniqueness and repetitive DNA element content. Using these in silico measures we can automatically design probes that we predict to perform as well as, or better than, our previous manual designs, while considerably reducing design time. We went on to
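    The brute-force strategy described above can be sketched roughly as follows. This is an illustrative Python sketch, not the authors' pipeline: the window size, step, and the lowercase repeat-masking convention are assumptions, and the genome-wide uniqueness check (the BLAST step) is stubbed out as a comment.

    ```python
    # Hypothetical sketch: slide a window over the design region, score each
    # candidate probe by repeat content (lowercase bases = repeat-masked, as
    # in RepeatMasker-style output), and rank candidates with fewest repeats
    # first. A real pipeline would also BLAST each candidate for uniqueness.

    def candidate_probes(region, probe_len=500, step=50):
        """Generate all candidate probes of acceptable length in the window."""
        for start in range(0, len(region) - probe_len + 1, step):
            yield start, region[start:start + probe_len]

    def repeat_fraction(seq):
        """Fraction of repeat/low-complexity bases (lowercase = masked)."""
        return sum(1 for b in seq if b.islower()) / len(seq)

    def rank_candidates(region, probe_len=500, step=50):
        scored = []
        for start, seq in candidate_probes(region, probe_len, step):
            # Real pipeline: search `seq` against the target genome here and
            # penalize candidates with more than one strong hit.
            scored.append((repeat_fraction(seq), start, seq))
        scored.sort(key=lambda t: t[0])  # fewest repeat bases first
        return scored

    region = "ACGT" * 100 + "acgtacgt" * 25 + "GATTACA" * 100
    best_frac, best_start, _ = rank_candidates(region, probe_len=200)[0]
    print(best_start, round(best_frac, 2))
    ```

    The in silico scores stand in for the "uniqueness and repetitive DNA element content" ranking the abstract describes; any real implementation would plug in an actual alignment tool for the uniqueness term.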

  12. A Characterization of 2-Tree Probe Interval Graphs

    Directory of Open Access Journals (Sweden)

    Brown David E.

    2014-08-01

    Full Text Available A graph is a probe interval graph if its vertices correspond to some set of intervals of the real line and can be partitioned into sets P and N so that vertices are adjacent if and only if their corresponding intervals intersect and at least one belongs to P. We characterize the 2-trees which are probe interval graphs and extend a list of forbidden induced subgraphs for such graphs created by Pržulj and Corneil in [2-tree probe interval graphs have a large obstruction set, Discrete Appl. Math. 150 (2005) 216-231].

  13. Scanning probe lithography for nanoimprinting mould fabrication

    International Nuclear Information System (INIS)

    Luo Gang; Xie Guoyong; Zhang Yongyi; Zhang Guoming; Zhang Yingying; Carlberg, Patrick; Zhu Tao; Liu Zhongfan

    2006-01-01

    We propose a rational fabrication method for nanoimprinting moulds by scanning probe lithography. By wet chemical etching, different kinds of moulds are realized on Si(110) and Si(100) surfaces according to the Si crystalline orientation. The structures have line widths of about 200 nm with a high aspect ratio. By reactive ion etching, moulds with patterns free from the limitation of Si crystalline orientation are also obtained. With closed-loop scan control of a scanning probe microscope, the length of patterned lines is more than 100 μm by integrating several steps of patterning. The fabrication process is optimized in order to produce a mould pattern with a line width of about 10 nm. The structures on the mould are further duplicated into PMMA resists through the nanoimprinting process. The method of combining scanning probe lithography with wet chemical etching or reactive ion etching (RIE) provides a resistless route for the fabrication of nanoimprinting moulds.

  14. Counting SET-free sets

    OpenAIRE

    Harman, Nate

    2016-01-01

    We consider the following counting problem related to the card game SET: How many $k$-element SET-free sets are there in an $n$-dimensional SET deck? Through a series of algebraic reformulations and reinterpretations, we show the answer to this question satisfies two polynomiality conditions.

  15. Organizational Probes:Exploring Playful Interactions in Work Environment

    NARCIS (Netherlands)

    Vyas, Dhaval; Eliens, A.P.W.; Eliëns, A.; van de Watering, M.R.; van der Veer, Gerrit C.; Jorge, J

    2008-01-01

    Playfulness, with non-intrusive elements, can be considered a useful resource for enhancing social awareness and community building within work organizations. Taking inspiration from the cultural probes approach, we developed organizational probes as a set of investigation tools that could provide

  16. Versatile Gaussian probes for squeezing estimation

    Science.gov (United States)

    Rigovacca, Luca; Farace, Alessandro; Souza, Leonardo A. M.; De Pasquale, Antonella; Giovannetti, Vittorio; Adesso, Gerardo

    2017-05-01

    We consider an instance of "black-box" quantum metrology in the Gaussian framework, where we aim to estimate the amount of squeezing applied on an input probe, without previous knowledge on the phase of the applied squeezing. By taking the quantum Fisher information (QFI) as the figure of merit, we evaluate its average and variance with respect to this phase in order to identify probe states that yield good precision for many different squeezing directions. We first consider the case of single-mode Gaussian probes with the same energy, and find that pure squeezed states maximize the average quantum Fisher information (AvQFI) at the cost of a performance that oscillates strongly as the squeezing direction is changed. Although the variance can be brought to zero by correlating the probing system with a reference mode, the maximum AvQFI cannot be increased in the same way. A different scenario opens if one takes into account the effects of photon losses: coherent states represent the optimal single-mode choice when losses exceed a certain threshold and, moreover, correlated probes can now yield larger AvQFI values than all single-mode states, on top of having zero variance.

  17. Model for resonant plasma probe.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin; Johnson, William Arthur; Hebner, Gregory Albert; Jorgenson, Roy E.; Coats, Rebecca Sue

    2007-04-01

    This report constructs simple circuit models for a hairpin shaped resonant plasma probe. Effects of the plasma sheath region surrounding the wires making up the probe are determined. Electromagnetic simulations of the probe are compared to the circuit model results. The perturbing effects of the disc cavity in which the probe operates are also found.

  18. Optimal control

    CERN Document Server

    Aschepkov, Leonid T; Kim, Taekyun; Agarwal, Ravi P

    2016-01-01

    This book is based on lectures from a one-year course at the Far Eastern Federal University (Vladivostok, Russia) as well as on workshops on optimal control offered to students at various mathematical departments at the university level. The main themes of the theory of linear and nonlinear systems are considered, including the basic problem of establishing the necessary and sufficient conditions of optimal processes. In the first part of the course, the theory of linear control systems is constructed on the basis of the separation theorem and the concept of a reachability set. The authors prove the closure of a reachability set in the class of piecewise continuous controls, and the problems of controllability, observability, identification, performance and terminal control are also considered. The second part of the course is devoted to nonlinear control systems. Using the method of variations and the Lagrange multipliers rule of nonlinear problems, the authors prove the Pontryagin maximum principle for prob...

  19. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2016-01-01

    We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters

  20. Convective heat flow probe

    Science.gov (United States)

    Dunn, James C.; Hardee, Harry C.; Striker, Richard P.

    1985-01-01

    A convective heat flow probe device is provided which measures heat flow and fluid flow magnitude in the formation surrounding a borehole. The probe comprises an elongate housing adapted to be lowered down into the borehole; a plurality of heaters extending along the probe for heating the formation surrounding the borehole; a plurality of temperature sensors arranged around the periphery of the probe for measuring the temperature of the surrounding formation after heating thereof by the heater elements. The temperature sensors and heater elements are mounted in a plurality of separate heater pads which are supported by the housing and which are adapted to be radially expanded into firm engagement with the walls of the borehole. The heat supplied by the heater elements and the temperatures measured by the temperature sensors are monitored and used in providing the desired measurements. The outer peripheral surfaces of the heater pads are configured as segments of a cylinder and form a full cylinder when taken together. A plurality of temperature sensors are located on each pad so as to extend along the length and across the width thereof, with a heating element being located in each pad beneath the temperature sensors. An expansion mechanism driven by a clamping motor provides expansion and retraction of the heater pads and expandable packer-type seals are provided along the probe above and below the heater pads.

  1. Magnetic micro-manipulations to probe the local physical properties of porous scaffolds and to confine stem cells.

    Science.gov (United States)

    Robert, Damien; Fayol, Delphine; Le Visage, Catherine; Frasca, Guillaume; Brulé, Séverine; Ménager, Christine; Gazeau, Florence; Letourneur, Didier; Wilhelm, Claire

    2010-03-01

    The in vitro generation of engineered tissue constructs involves the seeding of cells into porous scaffolds. Ongoing challenges are to design scaffolds to meet biochemical and mechanical requirements and to optimize cell seeding in the constructs. In this context, we have developed a simple method based on a magnetic tweezer set-up to manipulate, probe, and position magnetic objects inside a porous scaffold. The magnetic force acting on magnetic objects of various sizes serves as a control parameter to retrieve the local viscosity of the scaffold's internal channels as well as the stiffness of the scaffold's pores. Labeling of human stem cells with iron oxide magnetic nanoparticles makes it possible to perform the same type of measurement with cells as probes and evaluate their own microenvironment. For 18 microm diameter magnetic beads or magnetically labeled stem cells of similar diameter, the viscosity was in both cases equal to 20 mPa s on average. This apparent viscosity was then found to increase with the magnetic probe size. The stiffness probed with 100 microm magnetic beads was found to be in the 50 Pa range, and was lowered by a factor of 5 when probed with cell aggregates. The magnetic forces were also successfully applied to the stem cells to enhance the cell seeding process and impose a well-defined spatial organization into the scaffold. (c) 2009 Elsevier Ltd. All rights reserved.

  2. Adaptive stimulus optimization for sensory systems neuroscience.

    Science.gov (United States)

    DiMattina, Christopher; Zhang, Kechen

    2013-01-01

    In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system identification paradigm where the experimental goal is to estimate and possibly compare sensory processing models. We discuss various theoretical and practical aspects of adaptive firing rate optimization, including optimization with stimulus space constraints, firing rate adaptation, and possible network constraints on the optimal stimulus. We consider the problem of system identification, and show how accurate estimation of non-linear models can be highly dependent on the stimulus set used to probe the network. We suggest that optimizing stimuli for accurate model estimation may make it possible to successfully identify non-linear models which are otherwise intractable, and summarize several recent studies of this type. Finally, we present a two-stage stimulus design procedure which combines the dual goals of model estimation and model comparison and may be especially useful for system identification experiments where the appropriate model is unknown beforehand. We propose that fast, on-line stimulus optimization enabled by increasing computer power can make it practical to move sensory neuroscience away from a descriptive paradigm and toward a new paradigm of real-time model estimation and comparison.
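    As a toy illustration of the classical optimal-stimulus paradigm mentioned above, where a stimulus is adjusted on-line to maximize the measured response, the following sketch hill-climbs against a simulated neuron. The quadratic response model, the preferred-stimulus vector, and all parameters are illustrative assumptions, not taken from the review.

    ```python
    import random

    # Illustrative sketch of adaptive on-line stimulus optimization: propose
    # small perturbations of the current stimulus, keep those that raise the
    # (here, simulated) firing rate. The response model is a made-up quadratic
    # that peaks at an assumed "preferred" stimulus.

    def response(stim, preferred=(0.8, -0.3, 0.5)):
        """Simulated firing rate: highest when stim matches the preferred stimulus."""
        return -sum((s - p) ** 2 for s, p in zip(stim, preferred))

    def adaptive_optimal_stimulus(n_dim=3, n_trials=2000, sigma=0.05, seed=0):
        rng = random.Random(seed)
        stim = [0.0] * n_dim
        best = response(stim)
        for _ in range(n_trials):
            # Stimulus-space constraint: each component is clipped to [-1, 1].
            trial = [max(-1.0, min(1.0, s + rng.gauss(0, sigma))) for s in stim]
            r = response(trial)
            if r > best:              # keep stimuli that raise the measured rate
                stim, best = trial, r
        return stim, best

    stim, best = adaptive_optimal_stimulus()
    print([round(s, 2) for s in stim])
    ```

    In a real experiment the call to `response` would be a trial on the actual neuron, and more sample-efficient search strategies than this hill climb are typically used.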

  3. Verifying optimal depth settings for LFAS

    NARCIS (Netherlands)

    Lam, F.P.A.; Beerens, S.P.; Ainslie, M.A.

    2006-01-01

    Naval operations in coastal waters are challenging the modelling support in several disciplines. An important instrument for undersea defence in the littoral is the LFAS sonar. To adapt to the local acoustic environment, LFAS sonars can adjust their operation depth to increase the coverage of the

  4. Optimizing anesthesia techniques in the ambulatory setting

    NARCIS (Netherlands)

    E. Galvin (Eilish)

    2007-01-01

    Ambulatory surgery refers to the process of admitting patients, administering anesthesia and surgical care, and discharging patients home following an appropriate level of recovery on the same day. The word ambulatory is derived from the Latin word ambulare, which means ''to walk''. This

  5. The Cell Probe Complexity of Succinct Data Structures

    DEFF Research Database (Denmark)

    Gal, Anna; Miltersen, Peter Bro

    2003-01-01

    In the cell probe model with word size 1 (the bit probe model), a static data structure problem is given by a map f : D × Q → A, where D is a set of possible data to be stored, Q is a set of possible queries (for natural problems, we have …), A is a set of possible answers and f(d, q) is the answer to question q about data d. A solution is given by a representation...

  6. Automatic sets and Delone sets

    International Nuclear Information System (INIS)

    Barbe, A; Haeseler, F von

    2004-01-01

    Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples

  7. Prediction of the GC-MS Retention Indices for a Diverse Set of Terpenes as Constituent Components of Camu-camu (Myrciaria dubia (HBK) Mc Vaugh) Volatile Oil, Using Particle Swarm Optimization-Multiple Linear Regression (PSO-MLR)

    Directory of Open Access Journals (Sweden)

    Majid Mohammadhosseini

    2014-05-01

    Full Text Available A reliable quantitative structure retention relationship (QSRR) study has been evaluated to predict the retention indices (RIs) of a broad spectrum of compounds, namely 118 non-linear, cyclic and heterocyclic terpenoids (both saturated and unsaturated), on an HP-5MS fused silica column. A principal component analysis showed that seven compounds lay outside of the main cluster. After elimination of the outliers, the data set was divided into training and test sets involving 80 and 28 compounds. The method was tested by application of the particle swarm optimization (PSO) method to find the most effective molecular descriptors, followed by multiple linear regressions (MLR). The PSO-MLR model was further confirmed through "leave one out cross validation" (LOO-CV) and "leave group out cross validation" (LGO-CV), as well as external validations. The promising statistical figures of merit associated with the proposed model (R²train = 0.936, Q²LOO = 0.928, Q²LGO = 0.921, F = 376.4) confirm its high ability to predict RIs with negligible relative errors of predictions (REPtrain = 4.8%, REPtest = 6.0%).
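    The particle swarm optimization step used above for descriptor selection can be illustrated on a toy minimization problem. This is a minimal generic PSO sketch with conventional swarm parameters (w, c1, c2), not the authors' implementation or their descriptor-selection encoding.

    ```python
    import random

    # Minimal particle swarm optimization sketch: each particle tracks its own
    # best position (pbest) and is attracted both to it and to the swarm-wide
    # best (gbest). Here it minimizes a toy sphere function.

    def pso(f, dim, n_particles=20, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=1):
        rng = random.Random(seed)
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                  # personal best positions
        pbest_val = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best so far
        for _ in range(n_iter):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                val = f(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    best, val = pso(lambda x: sum(xi ** 2 for xi in x), dim=2)
    print(round(val, 6))
    ```

    For descriptor selection as in the abstract, the continuous positions would be mapped to binary include/exclude decisions and f would be a cross-validated MLR error.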

  8. Versatile robotic probe calibration for position tracking in ultrasound imaging

    International Nuclear Information System (INIS)

    Bø, Lars Eirik; Hofstad, Erlend Fagertun; Lindseth, Frank; Hernes, Toril A N

    2015-01-01

    Within the field of ultrasound-guided procedures, there are a number of methods for ultrasound probe calibration. While these methods are usually developed for a specific probe, they are in principle easily adapted to other probes. In practice, however, the adaptation often proves tedious and this is impractical in a research setting, where new probes are tested regularly. Therefore, we developed a method which can be applied to a large variety of probes without adaptation. The method used a robot arm to move a plastic sphere submerged in water through the ultrasound image plane, providing a slow and precise movement. The sphere was then segmented from the recorded ultrasound images using a MATLAB programme and the calibration matrix was computed based on this segmentation in combination with tracking information. The method was tested on three very different probes demonstrating both great versatility and high accuracy. (paper)
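    At its core, a calibration of this kind estimates a rigid transform from corresponding points (sphere centers segmented from the images versus the same positions from tracking). The following 2D closed-form Procrustes sketch illustrates that computation only; it is not the authors' MATLAB code, and the point sets are synthetic.

    ```python
    import math

    # Illustrative sketch: least-squares rigid transform q_i = R p_i + t from
    # 2D point correspondences (closed-form 2D Procrustes). p would hold sphere
    # positions segmented from ultrasound images, q the tracked positions.

    def fit_rigid_2d(p, q):
        """Return rotation angle theta and translation (tx, ty) with q ≈ R p + t."""
        n = len(p)
        pcx, pcy = sum(x for x, _ in p) / n, sum(y for _, y in p) / n
        qcx, qcy = sum(x for x, _ in q) / n, sum(y for _, y in q) / n
        s_cos = s_sin = 0.0
        for (px, py), (qx, qy) in zip(p, q):
            ax, ay = px - pcx, py - pcy      # centred source point
            bx, by = qx - qcx, qy - qcy      # centred target point
            s_cos += ax * bx + ay * by
            s_sin += ax * by - ay * bx
        theta = math.atan2(s_sin, s_cos)
        c, s = math.cos(theta), math.sin(theta)
        tx = qcx - (c * pcx - s * pcy)
        ty = qcy - (s * pcx + c * pcy)
        return theta, (tx, ty)

    # Synthetic check: points rotated by 30 degrees and shifted by (2, -1).
    th0, t0 = math.radians(30), (2.0, -1.0)
    pts = [(0, 0), (1, 0), (0, 1), (2, 3)]
    moved = [(math.cos(th0) * x - math.sin(th0) * y + t0[0],
              math.sin(th0) * x + math.cos(th0) * y + t0[1]) for x, y in pts]
    theta, t = fit_rigid_2d(pts, moved)
    print(round(math.degrees(theta), 3), [round(v, 3) for v in t])
    ```

    A real probe calibration works in 3D (typically via an SVD-based Kabsch solution) and must also handle segmentation noise, but the least-squares structure is the same.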

  10. Recommendation Sets and Choice Queries

    DEFF Research Database (Denmark)

    Viappiani, Paolo Renato; Boutilier, Craig

    2011-01-01

    Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query users about their preferences and offer recommendations based on the system's belief about the user's utility function. We analyze the connection between the problem of generating optimal recommendation sets and the problem of generating optimal choice queries, considering both Bayesian and regret-based elicitation. Our results show that, somewhat surprisingly, under very general circumstances, the optimal recommendation set coincides with the optimal query.

  11. One-Probe Search

    DEFF Research Database (Denmark)

    Östlin, Anna; Pagh, Rasmus

    2002-01-01

    We consider dictionaries that perform lookups by probing a single word of memory, knowing only the size of the data structure. We describe a randomized dictionary where a lookup returns the correct answer with probability 1 - e, and otherwise returns don't know. The lookup procedure uses an expan...

  12. Probing the Solar System

    Science.gov (United States)

    Wilkinson, John

    2013-01-01

    Humans have always had the vision to one day live on other planets. This vision existed even before the first person was put into orbit. Since the early space missions of putting humans into orbit around Earth, many advances have been made in space technology. We have now sent many space probes deep into the Solar system to explore the planets and…

  13. Probing the Solar Interior

    Indian Academy of Sciences (India)

    Probing the Solar Interior: Hearing the Heartbeats of the Sun. Resonance – Journal of Science Education, Volume 3, Issue 3. Author Affiliations: Ashok Ambastha, Joint In-Charge, Udaipur Solar Observatory, Physical Research Laboratory, P.O. Box No. 198, Udaipur 313 001, India.

  14. Flexible position probe assembly

    International Nuclear Information System (INIS)

    Schmitz, J.J.

    1977-01-01

    The combination of a plurality of tubular transducer sections and a flexible supporting member extending through the tubular transducer sections forms a flexible elongated probe of a design suitable for monitoring the level of an element, such as a nuclear magnetically permeable control rod or liquid. 3 claims, 23 figures

  15. Logarithmic axicon characterized by scanning optical probe system.

    Science.gov (United States)

    Cao, Zhaolou; Wang, Keyi; Wu, Qinglin

    2013-05-15

    A scanning optical probe system is proposed to measure a logarithmic axicon (LA) with subwavelength resolution. Multiple plane intensity profiles measured by a fiber probe are interpreted by solving an optimization problem to get the phase retardation function (PRF) of the LA. Experimental results show that this approach can accurately obtain the PRF with which the optical path difference of the generated quasi-nondiffracting beam in the propagation is calculated.

  16. Fabrication of all diamond scanning probes for nanoscale magnetometry

    OpenAIRE

    Appel Patrick; Neu Elke; Ganzhorn Marc; Barfuss Arne; Batzer Marietta; Gratz Micha; Tschoepe Andreas; Maletinsky Patrick

    2016-01-01

    The electronic spin of the nitrogen vacancy (NV) center in diamond forms an atomically sized, highly sensitive sensor for magnetic fields. To harness the full potential of individual NV centers for sensing with high sensitivity and nanoscale spatial resolution, NV centers have to be incorporated into scanning probe structures enabling controlled scanning in close proximity to the sample surface. Here, we present an optimized procedure to fabricate single-crystal, all-diamond scanning probes s...

  17. Pseudolinear functions and optimization

    CERN Document Server

    Mishra, Shashi Kant

    2015-01-01

    Pseudolinear Functions and Optimization is the first book to focus exclusively on pseudolinear functions, a class of generalized convex functions. It discusses the properties, characterizations, and applications of pseudolinear functions in nonlinear optimization problems.The book describes the characterizations of solution sets of various optimization problems. It examines multiobjective pseudolinear, multiobjective fractional pseudolinear, static minmax pseudolinear, and static minmax fractional pseudolinear optimization problems and their results. The authors extend these results to locally

  18. Contribution of simulations to the optimization of quantitative electron probe micro analysis of heterogeneous catalysts; Apport de la simulation dans l'optimisation de l'analyse quantitative par microsonde electronique de catalyseurs heterogenes

    Energy Technology Data Exchange (ETDEWEB)

    Sorbier, L.

    2001-11-01

    Electron Probe Micro Analysis (EPMA) is frequently used to measure the local concentration of active elements in heterogeneous catalysts. However, when classical procedures are used, a significant deficit is observed both in local total concentration and mean total concentrations. A Monte Carlo program simulating measured intensities (characteristic lines and continuous background) has been written using PENELOPE routines. We have included in this program models taking into account the different physical phenomena likely to lead to the observed signal loss (insulating properties, roughness, porosity, energy loss at interfaces). Simulation results have shown that an important roughness (Ra>200 nm) was the only parameter apt to lead to a significant total signal loss. This led us to inquire into another origin to explain the signal loss observed on meso-porous samples. Measurements conducted on a meso-porous alumina confirmed that measuring aluminum, oxygen and carbon leads to a correct total of concentrations. Signal loss is thus explained by the contamination of the sample during its preparation, the components of the embedding resin diffusing into the porosity and reacting with the reactive surface of the catalyst support. In the case of macroporous catalysts, local roughness effect is very important. The simulations have shown the efficiency of the Peak to Background method to correct these local roughness effects. Measurements conducted on reforming and hydro-treating catalysts have led to a correct total concentration and confirmed the contribution of the Peak to Background method to achieve local quantitative measurement. (author)

  20. EDITORIAL: Probing the nanoworld Probing the nanoworld

    Science.gov (United States)

    Miles, Mervyn

    2009-10-01

    In nanotechnology, it is the unique properties arising from nanometre-scale structures that lead not only to their technological importance but also to a better understanding of the underlying science. Over the last twenty years, material properties at the nanoscale have been dominated by the properties of carbon in the form of the C60 molecule, single- and multi-wall carbon nanotubes, nanodiamonds, and recently graphene. During this period, research published in the journal Nanotechnology has revealed the amazing mechanical properties of such materials as well as their remarkable electronic properties with the promise of new devices. Furthermore, nanoparticles, nanotubes, nanorods, and nanowires from metals and dielectrics have been characterized for their electronic, mechanical, optical, chemical and catalytic properties. Scanning probe microscopy (SPM) has become the main characterization technique and atomic force microscopy (AFM) the most frequently used SPM. Over the past twenty years, SPM techniques that were previously experimental in nature have become routine. At the same time, investigations using AFM continue to yield impressive results that demonstrate the great potential of this powerful imaging tool, particularly in close to physiological conditions. In this special issue a collaboration of researchers in Europe report the use of AFM to provide high-resolution topographical images of individual carbon nanotubes immobilized on various biological membranes, including a nuclear membrane for the first time (Lamprecht C et al 2009 Nanotechnology 20 434001). Other SPM developments such as high-speed AFM appear to be making a transition from specialist laboratories to the mainstream, and perhaps the same may be said for non-contact AFM. Looking to the future, characterisation techniques involving SPM and spectroscopy, such as tip-enhanced Raman spectroscopy, could emerge as everyday methods. 
In all these advanced techniques, routinely available probes will

  1. Optimization and Optimal Control

    CERN Document Server

    Chinchuluun, Altannar; Enkhbat, Rentsen; Tseveendorj, Ider

    2010-01-01

    During the last four decades there has been a remarkable development in optimization and optimal control. Due to its wide variety of applications, many scientists and researchers have paid attention to fields of optimization and optimal control. A huge number of new theoretical, algorithmic, and computational results have been observed in the last few years. This book gives the latest advances, and due to the rapid development of these fields, there are no other recent publications on the same topics. Key features: Provides a collection of selected contributions giving a state-of-the-art accou

  2. Dynamic pressure probe response tests for robust measurements in periodic flows close to probe resonating frequency

    Science.gov (United States)

    Ceyhun Şahin, Fatma; Schiffmann, Jürg

    2018-02-01

    A single-hole probe was designed to measure steady and periodic flows with high fluctuation amplitudes and with minimal flow intrusion. Because of its high aspect ratio, estimations showed that the probe resonates at a frequency two orders of magnitude lower than the fast response sensor cut-off frequencies. The high fluctuation amplitudes cause a non-linear behavior of the probe and available models are neither adequate for a quantitative estimation of the resonating frequencies nor for predicting the system damping. Instead, a non-linear data correction procedure based on individual transfer functions defined for each harmonic contribution is introduced for pneumatic probes that allows to extend their operating range beyond the resonating frequencies and linear dynamics. This data correction procedure was assessed on a miniature single-hole probe of 0.35 mm inner diameter which was designed to measure flow speed and direction. For the reliable use of such a probe in periodic flows, its frequency response was reproduced with a siren disk, which allows exciting the probe up to 10 kHz with peak-to-peak amplitudes ranging between 20%-170% of the absolute mean pressure. The effect of the probe interior design on the phase lag and amplitude distortion in periodic flow measurements was investigated on probes with similar inner diameters and different lengths or similar aspect ratios (L/D) and different total interior volumes. The results suggest that while the tube length consistently sets the resonance frequency, the internal total volume affects the non-linear dynamic response in terms of varying gain functions. A detailed analysis of the introduced calibration methodology shows that the goodness of the reconstructed data compared to the reference data is above 75% for fundamental frequencies up to twice the probe resonance frequency. The results clearly suggest that the introduced procedure is adequate to capture non-linear pneumatic probe dynamics and to
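    The per-harmonic correction idea described above can be sketched as follows: decompose the measured periodic signal into its harmonics, divide each harmonic by its own complex transfer function (undoing gain attenuation and phase lag), and re-synthesize. The DFT-based formulation and the example transfer-function value are illustrative assumptions, not the calibrated functions from the study.

    ```python
    import cmath
    import math

    # Illustrative sketch: correct a periodic pressure signal harmonic by
    # harmonic with individually calibrated complex transfer functions.

    def correct_periodic(signal, transfer):
        """signal: N samples over one period; transfer: {harmonic k: complex H_k},
        with 1 <= k < N/2 so the conjugate bin N-k exists."""
        n = len(signal)
        spectrum = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)) / n for k in range(n)]
        for k, h in transfer.items():
            spectrum[k] /= h                   # undo gain and phase of harmonic k
            spectrum[n - k] /= h.conjugate()   # mirror bin, keeps the signal real
        return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                    for k in range(n)).real for t in range(n)]

    # A 1st-harmonic wave attenuated to 0.5 amplitude and lagged 45 degrees by
    # the line; the correction should recover the original waveform.
    n = 64
    true = [math.sin(2 * math.pi * t / n) for t in range(n)]
    h1 = 0.5 * cmath.exp(-1j * math.pi / 4)    # assumed transfer fn of harmonic 1
    measured = [0.5 * math.sin(2 * math.pi * t / n - math.pi / 4)
                for t in range(n)]
    recovered = correct_periodic(measured, {1: h1})
    print(max(abs(a - b) for a, b in zip(true, recovered)) < 1e-9)
    ```

    The nonlinearity discussed in the abstract enters through amplitude-dependent transfer functions; this linear sketch only shows the basic per-harmonic gain-and-phase bookkeeping.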

  3. Measurements of the superconducting fluctuations in optimally doped BaFe2−xNixAs2 under high magnetic fields: probing the 3D-anisotropic Ginzburg–Landau approach

    International Nuclear Information System (INIS)

    Rey, R I; Ramos-Álvarez, A; Carballeira, C; Mosqueira, J; Vidal, F; Salem-Sugui, S Jr.; Alvarenga, A D; Zhang, Rui; Luo, Huiqian

    2014-01-01

    The superconducting fluctuations well inside the normal state of Fe-based superconductors were experimentally studied through the in-plane paraconductivity in several high-quality, optimally doped BaFe2−xNixAs2 crystals. These measurements were performed in magnetic fields with amplitudes up to 14 T and different orientations relative to the c-axis of the crystals (θ = 0°, 53°, and 90°). The results allowed a stringent check of the applicability of a recently proposed Ginzburg–Landau approach for the fluctuating electrical conductivity of three-dimensional (3D) anisotropic materials in the presence of finite applied magnetic fields.
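
    For orientation, the zero-field limit that such Ginzburg–Landau analyses generalize is the standard three-dimensional Aslamazov–Larkin paraconductivity (a textbook result quoted for context, not a formula taken from this paper):

```latex
\Delta\sigma^{\mathrm{AL}}_{ab}
  = \frac{e^{2}}{32\,\hbar\,\xi_{c}(0)}\,\varepsilon^{-1/2},
\qquad
\varepsilon \equiv \ln\!\left(\frac{T}{T_{c}}\right),
```

where ξ_c(0) is the out-of-plane coherence length. The approach tested in the paper extends this picture to finite fields and arbitrary field orientation through the anisotropic effective-mass (3D anisotropic GL) framework.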

  4. MPAI (mass probes aided ionization) method for total analysis of biomolecules by mass spectrometry.

    Science.gov (United States)

    Honda, Aki; Hayashi, Shinichiro; Hifumi, Hiroki; Honma, Yuya; Tanji, Noriyuki; Iwasawa, Naoko; Suzuki, Yoshio; Suzuki, Koji

    2007-01-01

    We have designed and synthesized various mass probes that enable us to effectively ionize a wide range of molecules for detection by mass spectrometry. We call the ionization method using mass probes the "MPAI (mass probes aided ionization)" method. We aim at the sensitive detection of various biological molecules, and at the serial detection of biomolecules on a single mass spectrometer without changing the mechanical settings. Here, we review mass probes for small molecules with various functional groups and mass probes for proteins. Further, we introduce newly developed mass probes for highly sensitive protein detection.

  5. Continuously tunable nucleic acid hybridization probes.

    Science.gov (United States)

    Wu, Lucia R; Wang, Juexiao Sherry; Fang, John Z; Evans, Emily R; Pinto, Alessandro; Pekker, Irena; Boykin, Richard; Ngouenet, Celine; Webster, Philippa J; Beechem, Joseph; Zhang, David Yu

    2015-12-01

    In silico-designed nucleic acid probes and primers often do not achieve favorable specificity and sensitivity tradeoffs on the first try, and iterative empirical sequence-based optimization is needed, particularly in multiplexed assays. We present a novel, on-the-fly method of tuning probe affinity and selectivity by adjusting the stoichiometry of auxiliary species, which allows for independent and decoupled adjustment of the hybridization yield for different probes in multiplexed assays. Using this method, we achieved near-continuous tuning of probe effective free energy. To demonstrate our approach, we enforced uniform capture efficiency of 31 DNA molecules (GC content, 0-100%), maximized the signal difference for 11 pairs of single-nucleotide variants and performed tunable hybrid capture of mRNA from total RNA. Using the Nanostring nCounter platform, we applied stoichiometric tuning to simultaneously adjust yields for a 24-plex assay, and we show multiplexed quantitation of RNA sequences and variants from formalin-fixed, paraffin-embedded samples.
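
    The stoichiometric-tuning idea can be illustrated with a two-state equilibrium sketch (the model, the RT·ln(x) free-energy shift, and all numbers are simplifications for illustration, not the authors' thermodynamic formulation):

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def hybridization_yield(dG_kcal, probe_conc_M, temp_K=310.0):
    """Equilibrium fraction of target bound for T + P <=> TP with the
    probe in excess at concentration probe_conc_M (simple two-state
    model, hypothetical stand-in for the paper's full formulation)."""
    K = math.exp(-dG_kcal / (R * temp_K))       # association constant, 1/M
    return K * probe_conc_M / (1.0 + K * probe_conc_M)

# Tuning: adding an auxiliary competing species at stoichiometric ratio x
# shifts the effective free energy by roughly RT*ln(x), which moves the
# yield continuously (all numbers below are hypothetical).
for x in (0.5, 1.0, 2.0, 4.0):
    dG_eff = -10.0 + R * 310.0 * math.log(x)
    print(x, round(hybridization_yield(dG_eff, 1e-7), 3))
```

Because the yield depends smoothly on the effective free energy, adjusting stoichiometry gives the near-continuous tuning the abstract describes, independently for each probe in a multiplexed assay.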

  6. Modular Rake of Pitot Probes

    Science.gov (United States)

    Dunlap, Timothy A.; Henry, Michael W.; Homyk, Raymond P.

    2004-01-01

    The figure presents selected views of a modular rake of 17 pitot probes for measuring both transient and steady-state pressures in a supersonic wind tunnel. In addition to the pitot tubes visible in the figure, the probe modules contain (1) high-frequency dynamic-pressure transducers connected through wires to remote monitoring circuitry and (2) flow passages that lead to tubes that, in turn, lead to remote steady-state pressure transducers. Prior pitot-probe rakes were fabricated as unitary structures, into which the individual pitot probes were brazed. Repair or replacement of individual probes was difficult, costly, and time-consuming because (1) it was necessary to remove entire rakes in order to unbraze individual malfunctioning probes and (2) the heat of unbrazing a failed probe and of brazing a new probe in place could damage adjacent probes. In contrast, the modules in the present rake are designed to be relatively quickly and easily replaceable with no heating and, in many cases, without need for removal of the entire rake from the wind tunnel. To remove a malfunctioning probe, one first removes a screw-mounted V-cross-section cover that holds the probe and adjacent probes in place. Then one removes a screw-mounted cover plate to gain access to the steady-state pressure tubes and dynamic-pressure wires. Next, one disconnects the tube and wires of the affected probe. Finally, one installs a new probe in the reverse of the aforementioned sequence. The wire connections can be made by soldering, but to facilitate removal and installation, they can be made via miniature plugs and sockets. The connections between the probe flow passages and the tubes leading to the remote pressure sensors can be made by use of any of a variety of readily available flexible tubes that can be easily pulled off and slid back on for removal and installation, respectively.

  7. Heavy ion beam probing

    International Nuclear Information System (INIS)

    Hickok, R.L.

    1980-07-01

    This report consists of the notes distributed to the participants at the IEEE Mini-Course on Modern Plasma Diagnostics held in Madison, Wisconsin in May 1980. It presents an overview of Heavy Ion Beam Probing that briefly describes the principles and discusses the types of measurements that can be made. The problems associated with implementing beam probes are noted, possible variations are described, and estimated costs of present-day systems and the scaling requirements for large plasma devices are presented. The final chapter illustrates typical results that have been obtained on a variety of plasma devices. No detailed calculations are included in the report, but a list of references that provide more detailed information is included.

  8. Gravity Probe B Inspection

    Science.gov (United States)

    2000-01-01

    The space vehicle Gravity Probe B (GP-B) is the relativity experiment developed at Stanford University to test two extraordinary predictions of Albert Einstein's general theory of relativity. The experiment will measure, very precisely, the expected tiny changes in the direction of the spin axes of four gyroscopes contained in an Earth-orbiting satellite at a 400-mile altitude. So free are the gyroscopes from disturbance that they will provide an almost perfect space-time reference system. They will measure how space and time are very slightly warped by the presence of the Earth, and, more profoundly, how the Earth's rotation very slightly drags space-time around with it. These effects, though small for the Earth, have far-reaching implications for the nature of matter and the structure of the Universe. GP-B is among the most thoroughly researched programs ever undertaken by NASA. This is the story of a scientific quest in which physicists and engineers have collaborated closely over many years. Inspired by their quest, they have invented a whole range of technologies that are already enlivening other branches of science and engineering. In this photograph, engineer Gary Reynolds is inspecting the inside of the probe neck during probe thermal repairs. GP-B is scheduled for launch in April 2004 and managed for NASA by the Marshall Space Flight Center. Development of the GP-B is the responsibility of Stanford University along with major subcontractor Lockheed Martin Corporation. (Image credit to Russ Leese, Gravity Probe B, Stanford University)

  9. Probing lipid membrane electrostatics

    Science.gov (United States)

    Yang, Yi

    The electrostatic properties of lipid bilayer membranes play a significant role in many biological processes. Atomic force microscopy (AFM) is highly sensitive to membrane surface potential in electrolyte solutions. With fully characterized probe tips, AFM can perform quantitative electrostatic analysis of lipid membranes. Electrostatic interactions between silicon nitride probes and a supported zwitterionic dioleoylphosphatidylcholine (DOPC) bilayer with a variable fraction of anionic dioleoylphosphatidylserine (DOPS) were measured by AFM. Classical Gouy-Chapman theory was used to model the membrane electrostatics. The nonlinear Poisson-Boltzmann equation was solved numerically with the finite element method to provide the potential distribution around the AFM tips. Theoretical tip-sample electrostatic interactions were calculated as the surface integral of both the Maxwell and osmotic stress tensors over the tip surface. The measured forces were interpreted with the theoretical forces, and the resulting surface charge densities of the membrane surfaces were in quantitative agreement with the Gouy-Chapman-Stern model of membrane charge regulation. It was demonstrated that the AFM can quantitatively detect membrane surface potential at a separation of several screening lengths, and that the AFM probe perturbs the membrane surface potential only via the external field created by the internal membrane dipole moment. The analysis yields a dipole moment of 1.5 Debye per lipid with a dipole potential of +275 mV for supported DOPC membranes. This new ability to quantitatively measure the membrane dipole density in a noninvasive manner will be useful in identifying the biological effects of the dipole potential. Finally, heterogeneous model membranes were studied with fluid electric force microscopy (FEFM). Electrostatic mapping was demonstrated with 50 nm resolution. The capabilities of quantitative electrostatic measurement and lateral charge density mapping make AFM a unique and powerful
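
    The Gouy-Chapman surface potential invoked above follows from the Grahame equation, which can be evaluated directly (a standard electrostatics result; the example charge density below is a hypothetical value, not one measured in the study):

```python
import math

# Physical constants (SI)
e = 1.602e-19      # elementary charge, C
kB = 1.381e-23     # Boltzmann constant, J/K
eps0 = 8.854e-12   # vacuum permittivity, F/m
NA = 6.022e23      # Avogadro's number, 1/mol

def grahame_potential(sigma, c_molar, T=298.0, eps_r=78.5):
    """Surface potential (V) of a charged plane in a 1:1 electrolyte from
    the Grahame equation: sigma = sqrt(8*eps*kB*T*n0) * sinh(e*psi/2kBT).
    sigma in C/m^2, bulk salt concentration c_molar in mol/L."""
    n0 = c_molar * 1000.0 * NA                      # ion number density, 1/m^3
    pref = math.sqrt(8.0 * eps_r * eps0 * kB * T * n0)
    return (2.0 * kB * T / e) * math.asinh(sigma / pref)

# Hypothetical example: ~10 mol% anionic lipid at ~0.7 nm^2 per lipid
sigma = -0.1 * e / 0.7e-18                          # about -0.023 C/m^2
psi0 = grahame_potential(sigma, 0.1)                # in 100 mM 1:1 salt
```

For these illustrative numbers the surface potential comes out around a few tens of millivolts negative, the regime in which the tip-sample force measurements described above are typically interpreted.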

  10. Induced current heating probe

    International Nuclear Information System (INIS)

    Thatcher, G.; Ferguson, B.G.; Winstanley, J.P.

    1984-01-01

    An induced current heating probe is of thimble form and has an outer conducting sheath and a water flooded flux-generating unit formed from a stack of ferrite rings coaxially disposed in the sheath. The energising coil is made of solid wire which connects at one end with a coaxial water current tube and at the other end with the sheath. The stack of ferrite rings may include non-magnetic insulating rings which help to shape the flux. (author)

  11. Far Western: probing membranes.

    Science.gov (United States)

    Einarson, Margret B; Pugacheva, Elena N; Orlinick, Jason R

    2007-08-01

    INTRODUCTION: The far-Western technique described in this protocol is fundamentally similar to Western blotting. In Western blots, an antibody is used to detect a query protein on a membrane. In contrast, in a far-Western blot (also known as an overlay assay) the antibody is replaced by a recombinant GST fusion protein (produced and purified from bacteria), and the assay detects the interaction of this protein with target proteins on a membrane. The membranes are washed and blocked, incubated with probe protein, washed again, and subjected to autoradiography. The GST fusion (probe) proteins are often labeled with (32)P; alternatively, the membrane can be probed with unlabeled GST fusion protein, followed by detection using commercially available GST antibodies. The nonradioactive approach is substantially more expensive (due to the purchase of antibody and detection reagents) than using radioactively labeled proteins. In addition, care must be taken to control for nonspecific interactions with GST alone and a signal resulting from antibody cross-reactivity. In some instances, proteins on the membrane are not able to interact after transfer. This may be due to improper folding, particularly in the case of proteins expressed from a phage expression library. This protocol describes a way to overcome this by washing the membrane in denaturation buffer, which is then serially diluted to permit slow renaturation of the proteins.

  12. NASA's interstellar probe mission

    International Nuclear Information System (INIS)

    Liewer, P.C.; Ayon, J.A.; Wallace, R.A.; Mewaldt, R.A.

    2000-01-01

    NASA's Interstellar Probe will be the first spacecraft designed to explore the nearby interstellar medium and its interaction with our solar system. As envisioned by NASA's Interstellar Probe Science and Technology Definition Team, the spacecraft will be propelled by a solar sail to reach >200 AU in 15 years. Interstellar Probe will investigate how the Sun interacts with its environment and will directly measure the properties and composition of the dust, neutrals and plasma of the local interstellar material which surrounds the solar system. In the mission concept developed in the spring of 1999, a 400-m diameter solar sail accelerates the spacecraft to ∼15 AU/year, roughly 5 times the speed of Voyager 1 and 2. The sail is used to first bring the spacecraft to ∼0.25 AU to increase the radiation pressure before heading out in the interstellar upwind direction. After jettisoning the sail at ∼5 AU, the spacecraft coasts to 200-400 AU, exploring the Kuiper Belt, the boundaries of the heliosphere, and the nearby interstellar medium

  13. A Harmonic Resonance Suppression Strategy for a High-Speed Railway Traction Power Supply System with a SHE-PWM Four-Quadrant Converter Based on Active-Set Secondary Optimization

    Directory of Open Access Journals (Sweden)

    Runze Zhang

    2017-10-01

    Pulse width modulation (PWM) technology is widely used in traction converters for high-speed railways. The harmonic distribution caused by PWM is quite extensive, which increases the possibility of grid–train coupling resonance in the traction power supply system (TPSS). This paper first analyzes the mechanism of resonance: when the characteristic harmonic frequencies of the four-quadrant converter (4QC) current injected into the traction grid match the resonant frequency of the grid, resonance may occur in the system. To suppress it, this paper adopts specific harmonic elimination pulse width modulation (SHE-PWM) combined with a transient direct current control strategy to eliminate the harmonics at the resonant frequency, thereby suppressing grid–train coupling resonance. Because the SHE-PWM problem with multiple switching angles involves complex transcendental equations, good initial values are difficult to provide and the equations are difficult to solve with ordinary iterative algorithms. In this paper, an active-set secondary optimization method is used to solve the equations. The algorithm has the benefits of low dependence on initial values, fast convergence, and high solution accuracy. Finally, the feasibility of the resonance suppression algorithm is verified by means of Matlab simulation.
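
    The flavor of the SHE-PWM switching-angle problem can be seen in a reduced two-angle version (a textbook formulation that fixes the fundamental and nulls only the 5th harmonic; the grid-search initializer merely stands in for the paper's active-set secondary optimization):

```python
import numpy as np

def she_residual(angles, m):
    """Residuals of a two-angle SHE-PWM system (quarter-wave-symmetric
    bipolar waveform): set the fundamental to modulation index m and
    null the 5th harmonic. Illustrative, not the paper's exact system."""
    a1, a2 = angles
    return np.array([
        1 - 2*np.cos(a1) + 2*np.cos(a2) - m*np.pi/4,   # fundamental target
        1 - 2*np.cos(5*a1) + 2*np.cos(5*a2),           # 5th harmonic -> 0
    ])

def solve_she(m, iters=30):
    # Coarse grid search supplies a good initial point, then Newton's
    # method refines the transcendental system.
    grid = np.linspace(0.05, np.pi/2 - 0.05, 200)
    pairs = [(a1, a2) for a1 in grid for a2 in grid if a1 < a2]
    a = np.array(min(pairs, key=lambda p: np.abs(she_residual(p, m)).sum()))
    for _ in range(iters):
        J = np.array([[ 2*np.sin(a[0]),    -2*np.sin(a[1])],
                      [10*np.sin(5*a[0]), -10*np.sin(5*a[1])]])
        a = a - np.linalg.solve(J, she_residual(a, m))
    return a

angles = solve_she(0.8)   # switching angles in radians
```

With more switching angles (and more harmonics to eliminate) the system grows and becomes much more sensitive to the starting point, which is the difficulty the active-set secondary optimization addresses.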

  14. Optimal set of grid size and angular increment for practical dose calculation using the dynamic conformal arc technique: a systematic evaluation of the dosimetric effects in lung stereotactic body radiation therapy

    International Nuclear Information System (INIS)

    Park, Ji-Yeon; Kim, Siyong; Park, Hae-Jin; Lee, Jeong-Woo; Kim, Yeon-Sil; Suh, Tae-Suk

    2014-01-01

    To recommend the optimal plan parameter set of grid size and angular increment for dose calculations in treatment planning for lung stereotactic body radiation therapy (SBRT) using dynamic conformal arc therapy (DCAT) considering both accuracy and computational efficiency. Dose variations with varying grid sizes (2, 3, and 4 mm) and angular increments (2°, 4°, 6°, and 10°) were analyzed in a thorax phantom for 3 spherical target volumes and in 9 patient cases. A 2-mm grid size and 2° angular increment are assumed sufficient to serve as reference values. The dosimetric effect was evaluated using dose–volume histograms, monitor units (MUs), and dose to organs at risk (OARs) for a definite volume corresponding to the dose–volume constraint in lung SBRT. The times required for dose calculations using each parameter set were compared for clinical practicality. Larger grid sizes caused a dose increase to the structures and required higher MUs to achieve the target coverage. The discrete beam arrangements at each angular increment led to over- and under-estimated OARs doses due to the undulating dose distribution. When a 2° angular increment was used in both studies, a 4-mm grid size changed the dose variation by up to 3–4% (50 cGy) for the heart and the spinal cord, while a 3-mm grid size produced a dose difference of <1% (12 cGy) in all tested OARs. When a 3-mm grid size was employed, angular increments of 6° and 10° caused maximum dose variations of 3% (23 cGy) and 10% (61 cGy) in the spinal cord, respectively, while a 4° increment resulted in a dose difference of <1% (8 cGy) in all cases except for that of one patient. The 3-mm grid size and 4° angular increment enabled a 78% savings in computation time without making any critical sacrifices to dose accuracy. A parameter set with a 3-mm grid size and a 4° angular increment is found to be appropriate for predicting patient dose distributions with a dose difference below 1% while reducing the

  15. 3D CMM strain-gauge triggering probe error characteristics modeling using fuzzy logic

    DEFF Research Database (Denmark)

    Achiche, Sofiane; Wozniak, A; Fan, Zhun

    2008-01-01

    The error values of CMMs depend on the probing direction; hence their spatial variation is a key part of the probe inaccuracy. This paper presents genetically-generated fuzzy knowledge bases (FKBs) to model the spatial error characteristics of a CMM module-changing probe. Two automatically generated FKBs based on two optimization paradigms are used for the reconstruction of the direction-dependent probe error w. The angles beta and gamma are used as input variables of the FKBs; they describe the spatial direction of probe triggering. The learning algorithm used to generate the FKBs is a real...

  16. 3D CMM Strain-Gauge Triggering Probe Error Characteristics Modeling

    DEFF Research Database (Denmark)

    Achiche, Sofiane; Wozniak, Adam; Fan, Zhun

    2008-01-01

    The error values of CMMs depend on the probing direction; hence their spatial variation is a key part of the probe inaccuracy. This paper presents genetically-generated fuzzy knowledge bases (FKBs) to model the spatial error characteristics of a CMM module-changing probe. Two automatically generated FKBs based on two optimization paradigms are used for the reconstruction of the direction-dependent probe error w. The angles β and γ are used as input variables of the FKBs; they describe the spatial direction of probe triggering. The learning algorithm used to generate the FKBs is a real/binary-like...
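
    A fuzzy knowledge base of this kind can be sketched minimally as follows (memberships, rule table, and output values are invented for illustration; the paper's FKBs are generated by a genetic learning algorithm from calibration data):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_probe_error(beta, gamma):
    """Tiny Sugeno-style fuzzy knowledge base mapping the triggering
    direction (beta, gamma, degrees) to a probe error w in micrometres.
    All sets, rules and consequents are hypothetical."""
    mb = {'low': tri(beta, -90, 0, 90), 'high': tri(beta, 0, 90, 180)}
    mg = {'low': tri(gamma, -90, 0, 90), 'high': tri(gamma, 0, 90, 180)}
    # Rule table: consequent error (um) for each (beta, gamma) combination
    rules = {('low', 'low'): 0.2, ('low', 'high'): 0.6,
             ('high', 'low'): 0.5, ('high', 'high'): 1.1}
    num = sum(mb[i] * mg[j] * w for (i, j), w in rules.items())
    den = sum(mb[i] * mg[j] for (i, j), _ in rules.items())
    return num / den
```

The genetic algorithm's job in the paper is essentially to choose the membership functions and rule consequents so that this kind of interpolating surface matches the measured direction-dependent error.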

  17. Nine New Fluorescent Probes

    Science.gov (United States)

    Lin, Tsung-I.; Jovanovic, Misa V.; Dowben, Robert M.

    1989-06-01

    Absorption and fluorescence spectroscopic studies are reported here for nine new fluorescent probes recently synthesized in our laboratories: four pyrene derivatives with substituents of (i) 1,3-diacetoxy-6,8-dichlorosulfonyl, (ii) 1,3-dihydroxy-6,8-disodiumsulfonate, (iii) 1,3-disodiumsulfonate, and (iv) 1-ethoxy-3,6,8-trisodiumsulfonate groups, and five [7-julolidino] coumarin derivatives with substituents of (v) 3-carboxylate-4-methyl, (vi) 3-methylcarboxylate, (vii) 3-acetate-4-methyl, (viii) 3-propionate-4-methyl, and (ix) 3-sulfonate-4-methyl groups. Pyrene compounds i and ii and coumarin compounds v and vi exhibit interesting absorbance and fluorescence properties: their absorption maxima are red-shifted relative to the parent compounds into the blue-green region, and the bandwidth broadens considerably. All four blue-absorbing dyes fluoresce intensely in the green region, and the two pyrene compounds emit at such long wavelengths without forming excimers. The fluorescence properties of these compounds are quite environment-sensitive: considerable spectral shifts and fluorescence intensity changes have been observed in the pH range from 3 to 10 and in a wide variety of polar and hydrophobic solvents with vastly different dielectric constants. The high extinction and fluorescence quantum yield of these probes make them ideal fluorescent labeling reagents for proteins, antibodies, nucleic acids, and cellular organelles. The pH- and hydrophobicity-dependent fluorescence changes can be utilized as optical pH and/or hydrophobicity indicators for mapping environmental differences in various cellular components in a single cell. Since all nine probes absorb in the UV but emit at different wavelengths in the visible, these two groups of compounds offer the advantage of using a single monochromatic light source (e.g., a nitrogen laser) to achieve multi-wavelength detection in flow cytometry applications. As a first step to explore potential application in

  18. Wearable probes for service design

    DEFF Research Database (Denmark)

    Mullane, Aaron; Laaksolahti, Jarmo Matti; Svanæs, Dag

    2014-01-01

    Probes are used as a design method in user-centred design to allow end-users to inform design by collecting data from their lives. Probes are potentially useful in service innovation, but current probing methods require users to interrupt their activity and are consequently not ideal for use by service employees in reflecting on the delivery of a service. In this paper, we present the 'wearable probe', a probe concept that captures sensor data without distracting service employees. Data captured by the probe can be used by the service employees to reflect and co-reflect on the service journey, helping to identify opportunities for service evolution and innovation.

  19. Integrated cosmological probes: concordance quantified

    Energy Technology Data Exchange (ETDEWEB)

    Nicola, Andrina; Amara, Adam; Refregier, Alexandre, E-mail: andrina.nicola@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch [Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, CH-8093 Zürich (Switzerland)

    2017-10-01

    Assessing the consistency of parameter constraints derived from different cosmological probes is an important way to test the validity of the underlying cosmological model. In an earlier work [1], we computed constraints on cosmological parameters for ΛCDM from an integrated analysis of CMB temperature anisotropies and CMB lensing from Planck, galaxy clustering and weak lensing from SDSS, weak lensing from DES SV as well as Type Ia supernovae and Hubble parameter measurements. In this work, we extend this analysis and quantify the concordance between the derived constraints and those derived by the Planck Collaboration as well as WMAP9, SPT and ACT. As a measure for consistency, we use the Surprise statistic [2], which is based on the relative entropy. In the framework of a flat ΛCDM cosmological model, we find all data sets to be consistent with one another at a level of less than 1σ. We highlight that the relative entropy is sensitive to inconsistencies in the models that are used in different parts of the analysis. In particular, inconsistent assumptions for the neutrino mass break its invariance on the parameter choice. When consistent model assumptions are used, the data sets considered in this work all agree with each other and ΛCDM, without evidence for tensions.
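
    The relative entropy underlying the Surprise statistic reduces, for Gaussian parameter posteriors, to a closed form that is easy to evaluate (a sketch of the standard Gaussian KL divergence, not the authors' analysis pipeline):

```python
import numpy as np

def gaussian_relative_entropy(mu1, cov1, mu2, cov2):
    """Relative entropy (KL divergence, in nats) D(p1 || p2) between two
    multivariate Gaussian parameter posteriors -- the quantity on which
    consistency measures such as the Surprise statistic are built."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.atleast_2d(cov1), np.atleast_2d(cov2)
    k = mu1.size
    inv2 = np.linalg.inv(cov2)
    diff = mu2 - mu1
    _, ld1 = np.linalg.slogdet(cov1)
    _, ld2 = np.linalg.slogdet(cov2)
    return 0.5 * (np.trace(inv2 @ cov1) + diff @ inv2 @ diff - k + ld2 - ld1)

# Identical constraints carry zero relative entropy; shifting one mean
# (hypothetical two-parameter posteriors) increases it.
cov = np.diag([0.01, 0.04])
d0 = gaussian_relative_entropy([0.3, 0.8], cov, [0.3, 0.8], cov)
```

Because the relative entropy compares full distributions rather than single parameters, it is sensitive to exactly the kind of hidden model inconsistencies (e.g., differing neutrino-mass assumptions) that the abstract highlights.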

  20. Characterizing Water Quenching Systems with a Quench Probe

    Science.gov (United States)

    Ferguson, B. Lynn; Li, Zhichao; Freborg, Andrew M.

    2014-12-01

    Quench probes have been used effectively to characterize the quality of quenchants for many years. For this purpose, a variety of commercial probes, as well as the necessary data acquisition system for determining the time-temperature data for a set of standardized test conditions, are available for purchase. The type of information obtained from such probes provides a good basis for comparing media, characterizing general cooling capabilities, and checking media condition over time. However, these data do not adequately characterize the actual production quenching process in terms of heat transfer behavior in many cases, especially when high temperature gradients are present. Faced with the need to characterize water quenching practices, including conventional and intensive practices, a quench probe was developed. This paper describes that probe, the data collection system, the data gathered for both intensive quenching and conventional water quenching, and the heat transfer coefficients determined for these processes. Process sensitivities are investigated and highlight some intricacies of quenching.
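
    When internal temperature gradients are small, a lumped-capacitance energy balance gives a first estimate of the heat transfer coefficient from a probe cooling curve (illustrative only; the probe dimensions and material values below are hypothetical, and quenching with steep gradients requires an inverse heat-conduction solution as the paper notes):

```python
import numpy as np

def heat_transfer_coefficient(t, T, T_quench, rho, cp, volume, area):
    """Lumped-capacitance estimate of the surface heat transfer
    coefficient h from a cooling curve, via the energy balance
    rho*V*cp*dT/dt = -h*A*(T - T_quench)."""
    T = np.asarray(T, float)
    dTdt = np.gradient(T, np.asarray(t, float))
    return -rho * volume * cp * dTdt / (area * (T - T_quench))

# Synthetic check: an exponential cooling curve generated with a known h
h_true, rho, cp = 2000.0, 7800.0, 500.0     # W/(m^2 K), kg/m^3, J/(kg K)
d = 0.0125                                   # hypothetical 12.5 mm sphere
vol, area = np.pi * d**3 / 6, np.pi * d**2   # sphere volume and surface
tau = rho * cp * vol / (h_true * area)       # lumped time constant
t = np.linspace(0.0, 30.0, 3001)
T = 25.0 + (850.0 - 25.0) * np.exp(-t / tau)
h_est = heat_transfer_coefficient(t, T, 25.0, rho, cp, vol, area)
```

Evaluating h over the whole curve, rather than as a single number, is what lets a probe distinguish the film-boiling, nucleate-boiling, and convection stages of a real water quench.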

  1. Optimization and anti-optimization of structures under uncertainty

    National Research Council Canada - National Science Library

    Elishakoff, Isaac; Ohsaki, Makoto

    2010-01-01

    The necessity of the anti-optimization approach is first demonstrated; the anti-optimization techniques are then applied to static, dynamic and buckling problems, thus covering the broadest possible set of applications.

  2. An ongoing struggle: a mixed-method systematic review of interventions, barriers and facilitators to achieving optimal self-care by children and young people with type 1 diabetes in educational settings.

    Science.gov (United States)

    Edwards, Deborah; Noyes, Jane; Lowes, Lesley; Haf Spencer, Llinos; Gregory, John W

    2014-09-12

    Type 1 diabetes occurs more frequently in younger children who are often pre-school age and enter the education system with diabetes-related support needs that evolve over time. It is important that children are supported to optimally manage their diet, exercise, blood glucose monitoring and insulin regime at school. Young people self-manage at college/university. This theory-informed mixed-method systematic review sought to determine intervention effectiveness and synthesise child/parent/professional views of barriers and facilitators to achieving optimal diabetes self-care and management for children and young people aged 3-25 years in educational settings. Eleven intervention and 55 views studies were included. Meta-analysis was not possible. Study foci broadly matched school diabetes guidance. Intervention studies were limited to specific contexts with mostly high risk of bias. Views studies were mostly moderate quality with common transferable findings. Health plans and school nurse support (various types) were effective. Telemedicine in school was effective for individual case management. Most educational interventions to increase knowledge and confidence of children or school staff had significant short-term effects, but longer follow-up is required. Children, parents and staff said they struggled with many common structural, organisational, educational and attitudinal school barriers. Aspects of school guidance had not been generally implemented (e.g. individual health plans). Children recognized and appreciated school staff who were trained and confident in supporting diabetes management. Research with college/university students was lacking. Campus-based college/university student support significantly improved knowledge, attitudes and diabetes self-care. Self-management was easier for students who juggled diabetes management with student lifestyle, such as adopting strategies to manage alcohol consumption. This novel mixed-method systematic review is the first to

  3. Optimization theory with applications

    CERN Document Server

    Pierre, Donald A

    1987-01-01

    Optimization principles are of undisputed importance in modern design and system operation. They can be used for many purposes: optimal design of systems, optimal operation of systems, determination of performance limitations of systems, or simply the solution of sets of equations. While most books on optimization are limited to essentially one approach, this volume offers a broad spectrum of approaches, with emphasis on basic techniques from both classical and modern work.After an introductory chapter introducing those system concepts that prevail throughout optimization problems of all typ

  4. The solar probe mission

    International Nuclear Information System (INIS)

    Feldman, W.C.; Anderson, J.; Bohlin, J.D.; Burlaga, L.F.; Farquhar, R.; Gloeckler, G.; Goldstein, B.E.; Harvey, J.W.; Holzer, T.E.; Jones, W.V.; Kellogg, P.J.; Krimigis, S.M.; Kundu, M.R.; Lazarus, A.J.; Mellott, M.M.; Parker, E.N.; Rosner, R.; Rottman, G.J.; Slavin, J.A.; Suess, S.T.; Tsurutani, B.T.; Woo, R.T.; Zwickl, R.D.

    1990-01-01

    The Solar Probe will deliver a 133.5 kg science payload into a 4 R_s perihelion solar polar orbit (with the first perihelion passage in 2004) to explore in situ one of the last frontiers in the solar system: the solar corona. This mission is both affordable and technologically feasible. Using a payload of 12 (predominantly particles-and-fields) scientific experiments, it will be possible to answer many long-standing, fundamental problems concerning the structure and dynamics of the outer solar atmosphere, including the acceleration, storage, and transport of energetic particles near the Sun and in the inner heliosphere.

  5. Mobile Probing Kit

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Sørensen, Lene Tolstrup; Sørensen, J.K.

    2007-01-01

    Mobile Probing Kit is a low-tech, low-cost methodology for obtaining inspiration and insights into user needs, requirements and ideas in the early phases of a system's development process. The methodology is developed to identify user needs, requirements and ideas among knowledge workers characterized as being highly nomadic and thus potential users of mobile and ubiquitous technologies. The methodology has been applied in the IST MAGNET Beyond project in order to obtain user needs and requirements in the process of developing pilot services. We report on the initial findings from applying

  6. Probing echoic memory with different voices.

    Science.gov (United States)

    Madden, D J; Bastian, J

    1977-05-01

    Considerable evidence has indicated that some acoustical properties of spoken items are preserved in an "echoic" memory for approximately 2 sec. However, some of this evidence has also shown that changing the voice speaking the stimulus items has a disruptive effect on memory which persists longer than that of other acoustical variables. The present experiment examined the effect of voice changes on response bias as well as on accuracy in a recognition memory task. The task involved judging recognition probes as being present in or absent from sets of dichotically presented digits. Recognition of probes spoken in the same voice as that of the dichotic items was more accurate than recognition of different-voice probes at each of three retention intervals of up to 4 sec. Different-voice probes increased the likelihood of "absent" responses, but only up to a 1.4-sec delay. These shifts in response bias may represent a property of echoic memory which should be investigated further.
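
    Separating recognition accuracy from response bias, as the experiment does, is conventionally handled with equal-variance signal detection theory; a minimal sketch (the hit and false-alarm rates below are invented, not the study's data):

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection measures for a recognition task: d' captures
    sensitivity (accuracy) and criterion c captures response bias --
    the two quantities the experiment tracked separately."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# A voice change that leaves accuracy intact but biases listeners toward
# "absent" shows up as a shift in c with d' roughly unchanged:
same_voice = dprime_and_criterion(0.80, 0.20)   # neutral criterion
diff_voice = dprime_and_criterion(0.70, 0.12)   # similar d', higher c
```

A positive criterion corresponds to the increased likelihood of "absent" responses that the abstract reports for different-voice probes at short delays.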

  7. A complex probe for tokamak plasma edge conditions

    International Nuclear Information System (INIS)

    Castro, R.M. de; Silva, R.P. da; Heller, M.V.A.P.; Caldas, I.L.; Nascimento, I.C.; Degasperi, F.T.

    1995-01-01

    The study of the physical processes that occur in the plasma edge of tokamak machines has recently grown due to evidence that these processes influence those occurring in the center of the plasma column. Experimental studies show the existence of a strong level of fluctuations in the plasma edge, and their results indicate that these fluctuations enhance particle and energy transport and degrade the confinement. In order to investigate these processes in the plasma edge of the TBR-1 Tokamak, a Langmuir probe array, a triple probe and a set of magnetic probes have been designed and constructed. With this set of probes, the mean and fluctuation values of the magnetic field were detected and correlated with the fluctuating parameters obtained with the electrostatic probes.

  8. High spatial resolution Kelvin probe force microscopy with coaxial probes

    International Nuclear Information System (INIS)

    Brown, Keith A; Westervelt, Robert M; Satzinger, Kevin J

    2012-01-01

    Kelvin probe force microscopy (KPFM) is a widely used technique to measure the local contact potential difference (CPD) between an AFM probe and the sample surface via the electrostatic force. The spatial resolution of KPFM is intrinsically limited by the long range of the electrostatic interaction, which includes contributions from the macroscopic cantilever and the conical tip. Here, we present coaxial AFM probes in which the cantilever and cone are shielded by a conducting shell, confining the tip–sample electrostatic interaction to a small region near the end of the tip. We have developed a technique to measure the true CPD despite the presence of the shell electrode. We find that the behavior of these probes agrees with an electrostatic model of the force, and we observe a factor of five improvement in spatial resolution relative to unshielded probes. Our discussion centers on KPFM, but the field confinement offered by these probes may improve any variant of electrostatic force microscopy. (paper)

  9. Effect of laser power and specimen temperature on atom probe analyses of magnesium alloys

    International Nuclear Information System (INIS)

    Oh-ishi, K.; Mendis, C.L.; Ohkubo, T.; Hono, K.

    2011-01-01

The influence of laser power, wavelength, and specimen temperature on laser assisted atom probe analyses of Mg alloys was investigated. Higher laser power and lower specimen temperature led to improved mass and spatial resolutions. Background noise and mass resolutions were degraded with lower laser power and higher specimen temperature. By adjusting the conditions for laser assisted atom probe analyses, atom probe results with atomic layer resolution were obtained from all the Mg alloys investigated so far. Laser assisted atom probe investigations revealed detailed chemical information on Guinier-Preston zones in Mg alloys. -- Research highlights: → We study the performance of UV laser assisted atom probe analysis for Mg alloys. → There is an optimized range of laser power and specimen temperature. → An optimized UV laser enables atom probe data of Mg alloys with high spatial resolution.

  10. Biomolecule recognition using piezoresistive nanomechanical force probes

    Science.gov (United States)

    Tosolini, Giordano; Scarponi, Filippo; Cannistraro, Salvatore; Bausells, Joan

    2013-06-01

Highly sensitive sensors are one of the enabling technologies for biomarker detection in early stage diagnosis of pathologies. We have developed a self-sensing nanomechanical force probe able to detect the unbinding of single pairs of biomolecular partners in nearly physiological conditions. Embedding a piezoresistive transducer into a nanomechanical cantilever enabled high force measurement capability with sub-10 pN resolution. Here, we present the design, microfabrication, optimization, and complete characterization of the sensor. The exceptional electromechanical performance obtained allowed us to detect specific biorecognition events underlying biotin-avidin complex formation, by integrating the sensor in a commercial atomic force microscope.

  11. Design and performance of low-wattage electrical heater probe

    International Nuclear Information System (INIS)

    Biddle, R.; Wetzel, J.R.; Cech, R.

    1997-01-01

A Mound electrical calibration heater (MECH) has been used in several EG&G Mound developed calorimeters as a calibration tool. They are very useful over the wattage range of a few watts to 500 W. At the lower end of the range, a bias develops between the MECH probe and calibrated heat standards. A low-wattage electrical calibration heater (LWECH) probe is being developed by the Safeguards Science and Technology group (NIS-5) of Los Alamos National Laboratory based upon a concept proposed by EG&G Mound personnel. The probe combines electrical resistive heating and laser-light powered heating. The LWECH probe is being developed for use with power settings up to 2 W. The electrical heater will be used at the high end of the range, and laser-light power will be used at the low end of the wattage range. The system consists of two components: the heater probe and a control unit. The probe is inserted into the measuring cavity through an opening in the insulating baffle, and a sleeve is required to adapt to the measuring chamber. The probe is powered and controlled using electronics modules located separately. This paper will report on the design of the LWECH probe, initial tests, and expected performance

  12. Neutral helium beam probe

    Science.gov (United States)

    Karim, Rezwanul

    1999-10-01

This article discusses the development of a code in which a diagnostic neutral helium beam can be used as a probe. The code numerically solves the evolution of the population densities of helium atoms at several different energy levels as the beam propagates through the plasma. The collisional radiative model has been utilized in this numerical calculation. The spatial dependence of the metastable states of the neutral helium atom, as obtained in this numerical analysis, offers a possible diagnostic tool for tokamak plasma. The spatial evolution was tested for several hypothetical plasma conditions. Simulation routines were also run with plasma parameters (density and temperature profiles) similar to a shot in the Princeton beta experiment modified (PBX-M) tokamak and a shot in the Tokamak Fusion Test Reactor. A comparison between the simulation result and the experimentally obtained data (for each of these two shots) is presented. A good correlation in such comparisons for a number of such shots can establish the accuracy and usefulness of this probe. The result can possibly be extended to other plasma machines and to various plasma conditions in those machines.

  13. A support vector machine designed to identify breasts at high risk using multi-probe generated REIS signals: a preliminary assessment

    Science.gov (United States)

    Gur, David; Zheng, Bin; Lederman, Dror; Dhurjaty, Sreeram; Sumkin, Jules; Zuley, Margarita

    2010-02-01

    A new resonance-frequency based electronic impedance spectroscopy (REIS) system with multi-probes, including one central probe and six external probes that are designed to contact the breast skin in a circular form with a radius of 60 millimeters to the central ("nipple") probe, has been assembled and installed in our breast imaging facility. We are conducting a prospective clinical study to test the performance of this REIS system in identifying younger women (detection of a highly suspicious breast lesion and 50 were determined negative during mammography screening. REIS output signal sweeps that we used to compute an initial feature included both amplitude and phase information representing differences between corresponding (matched) EIS signal values acquired from the left and right breasts. A genetic algorithm was applied to reduce the feature set and optimize a support vector machine (SVM) to classify the REIS examinations into "biopsy recommended" and "non-biopsy" recommended groups. Using the leave-one-case-out testing method, the classification performance as measured by the area under the receiver operating characteristic (ROC) curve was 0.816 +/- 0.042. This pilot analysis suggests that the new multi-probe-based REIS system could potentially be used as a risk stratification tool to identify pre-screened young women who are at higher risk of having or developing breast cancer.
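
The evaluation pipeline described above (feature selection followed by a classifier scored with leave-one-case-out ROC AUC) can be sketched on synthetic data. In this hedged stand-in, a simple class-mean linear scorer replaces the SVM and a random search replaces the genetic algorithm; the simulated features, case counts, and class shift are invented, not REIS measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for left-vs-right REIS difference features:
# 60 cases ("biopsy recommended" = 1), 12 features, 4 of them informative.
n, p = 60, 12
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, p))
X[:, :4] += y[:, None] * 1.5

def auc(y_true, score):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula."""
    m = len(score)
    order = np.argsort(score)
    ranks = np.empty(m)
    ranks[order] = np.arange(1, m + 1)
    n_pos = int(y_true.sum())
    n_neg = m - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def loo_scores(X, y, mask):
    """Leave-one-case-out scores from a simple class-mean linear classifier."""
    s = np.empty(len(y))
    for i in range(len(y)):
        tr = np.arange(len(y)) != i
        Xt, yt = X[tr][:, mask], y[tr]
        w = Xt[yt == 1].mean(axis=0) - Xt[yt == 0].mean(axis=0)
        s[i] = X[i, mask] @ w
    return s

# Random-search feature selection: a simplified stand-in for the genetic
# algorithm the abstract uses to prune the feature set.
best_mask = np.ones(p, dtype=bool)
best_auc = auc(y, loo_scores(X, y, best_mask))
for _ in range(30):
    m = rng.random(p) < 0.5
    if m.any():
        a = auc(y, loo_scores(X, y, m))
        if a > best_auc:
            best_mask, best_auc = m, a

print(f"best leave-one-out AUC = {best_auc:.3f} using {int(best_mask.sum())} features")
```

On real data the same loop structure applies; only the scorer and the subset search would be swapped for the SVM and genetic algorithm of the study.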

  14. Development of phased-array ultrasonic testing probe

    International Nuclear Information System (INIS)

    Kawanami, Seiichi; Kurokawa, Masaaki; Taniguchi, Masaru; Tada, Yoshihisa

    2001-01-01

Phased-array ultrasonic testing was developed for nondestructive evaluation of power plants. Phased-array UT scans and focuses an ultrasonic beam to inspect areas that are difficult to reach with conventional UT. We developed a highly sensitive piezoelectric composite, and designed optimized phased-array UT probes. We are applying our phased-array UT to different areas of power plants. (author)

  15. Comparison of two threshold detection criteria methodologies for determination of probe positivity for intraoperative in situ identification of presumed abnormal 18F-FDG-avid tissue sites during radioguided oncologic surgery.

    Science.gov (United States)

    Chapman, Gregg J; Povoski, Stephen P; Hall, Nathan C; Murrey, Douglas A; Lee, Robert; Martin, Edward W

    2014-09-13

    Intraoperative in situ identification of (18)F-FDG-avid tissue sites during radioguided oncologic surgery remains a significant challenge for surgeons. The purpose of our study was to evaluate the 1.5-to-1 ratiometric threshold criteria method versus the three-sigma statistical threshold criteria method for determination of gamma detection probe positivity for intraoperative in situ identification of presumed abnormal (18)F-FDG-avid tissue sites in a manner that was independent of the specific type of gamma detection probe used. From among 52 patients undergoing appropriate in situ evaluation of presumed abnormal (18)F-FDG-avid tissue sites during (18)F-FDG-directed surgery using 6 available gamma detection probe systems, a total of 401 intraoperative gamma detection probe measurement sets of in situ counts per second measurements were cumulatively taken. For the 401 intraoperative gamma detection probe measurement sets, probe positivity was successfully met by the 1.5-to-1 ratiometric threshold criteria method in 150/401 instances (37.4%) and by the three-sigma statistical threshold criteria method in 259/401 instances (64.6%) (P < 0.001). Likewise, the three-sigma statistical threshold criteria method detected true positive results at target-to-background ratios much lower than the 1.5-to-1 target-to-background ratio of the 1.5-to-1 ratiometric threshold criteria method. The three-sigma statistical threshold criteria method was significantly better than the 1.5-to-1 ratiometric threshold criteria method for determination of gamma detection probe positivity for intraoperative in situ detection of presumed abnormal (18)F-FDG-avid tissue sites during radioguided oncologic surgery. 
This finding may be extremely important for reshaping the ongoing and future research and development of gamma detection probe systems that are necessary for optimizing the in situ detection of radioisotopes of higher-energy gamma photon emissions used during radioguided oncologic surgery.
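
The two positivity criteria being compared can be made concrete. A minimal sketch under standard formulations (the background-sampling procedure and all counts-per-second values here are invented for illustration, not taken from the study):

```python
import math

def ratiometric_positive(target_cps, background_cps, ratio=1.5):
    """1.5-to-1 ratiometric criterion: target-to-background ratio >= 1.5."""
    return target_cps >= ratio * background_cps

def three_sigma_positive(target_cps, background_samples):
    """Three-sigma criterion: target exceeds the mean background by at least
    three standard deviations estimated from repeated background counts."""
    n = len(background_samples)
    mean = sum(background_samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in background_samples) / (n - 1))
    return target_cps >= mean + 3.0 * sd

# A low-contrast site: roughly 1.2:1 target-to-background (hypothetical cps).
bg = [100.0, 98.0, 103.0, 99.0, 101.0]
target = 120.0
print(ratiometric_positive(target, sum(bg) / len(bg)))  # False (1.2 < 1.5)
print(three_sigma_positive(target, bg))                 # True (120 > ~106)
```

Because the three-sigma threshold scales with the spread of the background rather than with its mean, it can flag targets well below a 1.5:1 target-to-background ratio when the background is stable, consistent with the abstract's finding.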

  16. The Antarctic Ice Borehole Probe

    Science.gov (United States)

    Behar, A.; Carsey, F.; Lane, A.; Engelhardt, H.

    2000-01-01

The Antarctic Ice Borehole Probe mission is a glaciological investigation, scheduled for November 2000-2001, that will place a probe in a hot-water drilled hole in the West Antarctic ice sheet. The objectives of the probe are to observe ice-bed interactions with a downward looking camera, and ice inclusions and structure, including hypothesized ice accretion, with a side-looking camera.

  17. The Galaxy Evolution Probe

    Science.gov (United States)

    Glenn, Jason; Galaxy Evolution Probe Team

    2018-01-01

The Galaxy Evolution Probe (GEP) is a concept for a far-infrared observatory to survey large regions of sky for star-forming galaxies from z = 0 to beyond z = 3. Our knowledge of galaxy formation is incomplete and requires uniform surveys over a large range of redshifts and environments to accurately describe mass assembly, star formation, supermassive black hole growth, interactions between these processes, and what led to their decline from z ~ 2 to the present day. Infrared observations are sensitive to dusty, star-forming galaxies, which have bright polycyclic aromatic hydrocarbon (PAH) emission features and warm dust continuum in the rest-frame mid infrared and cooler thermal dust emission in the far infrared. Unlike previous far-infrared continuum surveys, the GEP will measure photometric redshifts commensurate with galaxy detections from PAH emission and Si absorption features, without the need for obtaining spectroscopic redshifts of faint counterparts at other wavelengths. The GEP design includes a 2 m diameter telescope actively cooled to 4 K and two instruments: (1) An imager covering 10 to 300 um with 25 spectral bands at resolution R ~ 8 (with lower R at the longest wavelengths) to detect star-forming galaxies and measure their redshifts photometrically. (2) A 23-190 um, R ~ 250 dispersive spectrometer for redshift confirmation and identification of obscured AGN using atomic fine-structure lines. Lines including [Ne V], [O IV], [O III], [O I], and [C II] will probe gas physical conditions, radiation field hardness, and metallicity. Notionally, the GEP will have a two-year mission: galaxy surveys with photometric redshifts in the first year and a second year devoted to follow-up spectroscopy. A comprehensive picture of star formation in galaxies over the last 10 billion years will be assembled from cosmologically relevant volumes, spanning environments from field galaxies and groups, to protoclusters, to dense galaxy clusters. Commissioned by NASA, the

  18. OpenCV-Based Nanomanipulation Information Extraction and the Probe Operation in SEM

    Directory of Open Access Journals (Sweden)

    Dongjie Li

    2015-02-01

For the established telenanomanipulation system, the method of extracting location information and the strategies of probe operation were studied in this paper. First, the machine learning algorithms of OpenCV were used to extract location information from SEM images, so that nanowires and the probe in SEM images can be automatically tracked and the region of interest (ROI) can be marked quickly. Then the locations of the nanowire and probe can be extracted from the ROI. To study the probe operation strategy, the Van der Waals force between the probe and a nanowire was computed, from which the relevant operating parameters can be obtained. With these operating parameters, the nanowire in the 3D virtual environment can be preoperated and an optimal path of the probe can be obtained. The actual probe runs automatically under the telenanomanipulation system's control. Finally, experiments were carried out to verify the above methods, and the results show the designed methods have achieved the expected effect.
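
The first step the abstract describes, automatically locating the probe or nanowire region of interest in an SEM image, can be illustrated with template matching. This is a pure-NumPy sketch of normalized cross-correlation; the synthetic "image", blob-shaped template, and positions are assumptions, and a real pipeline would use OpenCV (e.g. cv2.matchTemplate) on actual SEM frames:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "SEM frame": noise plus a small Gaussian blob standing in for
# the probe tip, placed at row 20, column 12.
img = rng.normal(0.0, 0.1, (64, 64))
gy, gx = np.mgrid[-2:3, -2:3]
blob = np.exp(-(gx**2 + gy**2) / 2.0)
img[20:25, 12:17] += blob

def match_template(image, tmpl):
    """Normalized cross-correlation search, a pure-NumPy stand-in for
    OpenCV's cv2.matchTemplate with method TM_CCOEFF_NORMED."""
    th, tw = tmpl.shape
    t = (tmpl - tmpl.mean()).ravel()
    t /= np.linalg.norm(t)
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            win = image[y:y + th, x:x + tw].ravel()
            win = win - win.mean()
            norm = np.linalg.norm(win)
            score = (win @ t) / norm if norm > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

pos, score = match_template(img, blob)
print(pos, round(score, 3))  # expected near (20, 12)
```

The located corner of the match window is the ROI anchor from which the nanowire and probe coordinates would then be extracted.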

  19. Raman probe. Innovative technology summary report

    International Nuclear Information System (INIS)

    1999-07-01

    The Raman probe is deployed in high-level waste tanks with the cone penetrometer (CPT). These technologies are engineered and optimized to work together. All of the hardware is radiation hardened, designed for and tested in the high-radiation, highly caustic chemical environment of US Department of Energy's (DOE's) waste storage tanks. When deployed in tanks, the system is useful for rapidly assessing the species and concentrations of organic-bearing tank wastes. The CPT was originally developed for geological and groundwater applications, with sensors that measure physical parameters such as soil moisture, temperature, and pH. When deployed, it is hydraulically forced directly into the ground rather than using boring techniques utilized by rotary drilling systems. There is a separate Innovative Technology Summary Report for the CPT, so this report will focus on the changes made specifically to support the Raman probe. The most significant changes involve adapting the Raman probe for in-tank and subsurface field use and developing meaningful real-time data analysis. Testing of the complete LLNL system was conducted in a hot cell in the 222-S Laboratory at the Hanford site in summer 1997. Both instruments were tested in situ on solvent-contaminated soils (TCE and PCE) at the Savannah River Site in February and June 1998. This report describes the technology, its performance, its uses, cost, regulatory and policy issues, and lessons learned

  20. Developments in quantitative electron probe microanalysis

    International Nuclear Information System (INIS)

    Tixier, R.

    1977-01-01

A study of the range of validity of the formulae for corrections used with massive specimen analysis is made. The method used is original; we have shown that it was possible to use a property of invariability of corrected intensity ratios for standards. This invariance property provides a test for the self consistency of the theory. The theoretical and experimental conditions required for quantitative electron probe microanalysis of thin transmission electron microscope specimens are examined. The correction formulae for atomic number, absorption and fluorescence effects are calculated. Several examples of experimental results are given, relative to the quantitative analysis of intermetallic precipitates and carbides in steels. Advances in applications of electron probe instruments related to the use of computers and the present development of fully automated instruments are reviewed. The necessary statistics for measurements of X-ray count data are studied. Estimation procedures and tests are developed. These methods are used to perform a statistical check of electron probe microanalysis measurements and to reject rogue values. An estimator of the confidence interval of the apparent concentration is derived. Formulae were also obtained to optimize the counting time in order to obtain the best precision in a minimum amount of time
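
The closing point, optimizing counting time for the best precision in the least time, has a classic closed form for Poisson counting: total time is divided between peak and background in proportion to the square roots of their count rates. A sketch under that standard assumption (the count rates and total time are illustrative):

```python
import math

def optimal_time_split(rate_peak, rate_bg, total_time):
    """Split total counting time so the variance of the net rate
    (peak - background) is minimal: t_peak / t_bg = sqrt(rate_peak / rate_bg)."""
    ratio = math.sqrt(rate_peak / rate_bg)
    t_bg = total_time / (1.0 + ratio)
    return total_time - t_bg, t_bg

def net_rate_sigma(rate_peak, rate_bg, t_peak, t_bg):
    """Standard deviation of the net count rate under Poisson statistics."""
    return math.sqrt(rate_peak / t_peak + rate_bg / t_bg)

# Illustrative rates: 900 cps on the peak, 100 cps background, 100 s in total.
t_p, t_b = optimal_time_split(900.0, 100.0, 100.0)
print(t_p, t_b)                                   # 75.0 25.0
print(net_rate_sigma(900.0, 100.0, t_p, t_b))     # 4.0 cps
print(net_rate_sigma(900.0, 100.0, 50.0, 50.0))   # ~4.47 cps for a 50/50 split
```

With these numbers the optimal 75/25 split lowers the net-rate uncertainty from about 4.47 to 4.0 cps for the same total measurement time.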

  1. REFLECTANCE PULSE OXIMETRY AT THE FOREHEAD IMPROVES BY PRESSURE ON THE PROBE

    NARCIS (Netherlands)

    DASSEL, ACM; GRAAFF, R; SIKKEMA, M; ZIJLSTRA, WG; AARNOUDSE, JG

    In this study, we investigated the possibility of improving reflectance (back-scatter) pulse oximetry measurements by pressure applied to the probe. Optimal signal detection, with the probe applied to an easily accessible location, is important to prevent erroneous oxygen saturation readouts. At the

  2. A probe station for testing silicon sensors

    CERN Multimedia

    Ulysse, Fichet

    2017-01-01

A probe station for testing silicon sensors. The probe station is located inside a dark box that keeps light away during measurements. The set-up is located in the DSF (Department Silicon Facility). The golden plate is the "chuck" on which the sensor is usually placed. With the help of "manipulators", thin needles can be precisely positioned to contact the sensor surface. Using these needles and the golden chuck, a high voltage can be applied to the sensor to test its behaviour under high voltage. We will use the silicon sensors tested here to build prototypes of a highly granular sandwich calorimeter, the CMS HGC (High Granularity Calorimeter) upgrade for the High-Luminosity LHC.

  3. Probing the Terrain

    DEFF Research Database (Denmark)

    Johannessen, Runa

    2016-01-01

Whether manifest in built structures or invisible infrastructures, architectures of control in the occupied Palestinian West Bank are structurally defined by endemic uncertainty. Shifting lines and frontiers are recorded on the terrain, creating elastic zones of uncertainty necessitating navigatio... ...to the territory through its lines and laws, and how the very structure of the occupation has changed over the years, I seek to make visible the ways in which architectures of uncertainty compensate for the fleeting terrain that HH is probing.

  4. Heat transfer probe

    Science.gov (United States)

    Frank, Jeffrey I.; Rosengart, Axel J.; Kasza, Ken; Yu, Wenhua; Chien, Tai-Hsin; Franklin, Jeff

    2006-10-10

Apparatuses, systems, methods, and computer code for, among other things, monitoring the health of samples such as the brain while providing local cooling or heating. A representative device is a heat transfer probe, which includes an inner channel, a tip, a concentric outer channel, a first temperature sensor, and a second temperature sensor. The inner channel is configured to transport working fluid from an inner inlet to an inner outlet. The tip is configured to receive at least a portion of the working fluid from the inner outlet. The concentric outer channel is configured to transport the working fluid from the inner outlet to an outer outlet. The first temperature sensor is coupled to the tip, and the second temperature sensor is spaced apart from the first temperature sensor.

  5. Solar Probe Plus

    Science.gov (United States)

    Szabo, Adam

    2011-01-01

The NASA Solar Probe Plus mission is planned to be launched in 2018 to study the upper solar corona with both in-situ and remote sensing instrumentation. The mission will utilize 6 Venus gravity assist maneuvers to gradually lower its perihelion to 9.5 Rs, below the expected Alfvén point, to study the sub-Alfvénic solar wind that still at least partially co-rotates with the Sun. The detailed science objectives of this mission will be discussed. SPP will have a strong synergy with the ESA/NASA Solar Orbiter mission to be launched a year ahead. Both missions will focus on the inner heliosphere and will have complementary instrumentation. Strategies to exploit this synergy will also be presented.

  6. Cosmological Probes for Supersymmetry

    Directory of Open Access Journals (Sweden)

    Maxim Khlopov

    2015-05-01

The multi-parameter character of supersymmetric dark-matter models implies the combination of their experimental studies with astrophysical and cosmological probes. The physics of the early Universe provides nontrivial effects of non-equilibrium particles and primordial cosmological structures. Primordial black holes (PBHs) are a profound signature of such structures that may arise as a cosmological consequence of supersymmetric (SUSY) models. SUSY-based mechanisms of baryosynthesis can lead to the possibility of antimatter domains in a baryon asymmetric Universe. In the context of cosmoparticle physics, which studies the fundamental relationship of the micro- and macro-worlds, the development of SUSY illustrates the main principles of this approach, as the physical basis of the modern cosmology provides cross-disciplinary tests in physical and astronomical studies.

  7. Trapping and Probing Antihydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Wurtele, Jonathan [UC Berkeley and LBNL

    2013-03-27

Precision spectroscopy of antihydrogen is a promising path to sensitive tests of CPT symmetry. The most direct route to achieve this goal is to create and probe antihydrogen in a magnetic minimum trap. Antihydrogen has been synthesized and trapped for 1000 s at CERN by the ALPHA Collaboration. Some of the challenges associated with achieving these milestones will be discussed, including mixing cryogenic positron and antiproton plasmas to synthesize antihydrogen with kinetic energy less than the trap potential of 0.5 K. Recent experiments in which hyperfine transitions were resonantly induced with microwaves will be presented. The opportunity for gravitational measurements in traps, based on detailed studies of antihydrogen dynamics, will be described. The talk will conclude with a discussion of future antihydrogen research that will use a new experimental apparatus, ALPHA-I.

  8. Traversing incore probe device

    International Nuclear Information System (INIS)

    Yoshioka, Michiko.

    1985-01-01

Purpose: To measure the neutron flux distribution in the reactor core at high accuracy at all times. Constitution: A nuclear fission ionizing chamber type detector is disposed at the end of a cable for sending a detection signal of a traversing incore probe device; a gamma-ray ionizing chamber type detector is connected adjacent to it, and a selection circuit for selecting both detection signals and inputting them to a display device is provided. Compensation for the neutron monitors is conducted by the gamma-ray ionizing chamber type detector during normal operation, in which control rods are not driven, and positioning is carried out by the nuclear fission ionizing chamber type detector. Furthermore, both the compensation for the neutron detector and the positioning are carried out by the nuclear fission ionizing chamber type detector upon start-up, where the control rods are driven. (Sekiya, K.)

  9. Sparse sampling and reconstruction for electron and scanning probe microscope imaging

    Science.gov (United States)

    Anderson, Hyrum; Helms, Jovana; Wheeler, Jason W.; Larson, Kurt W.; Rohrer, Brandon R.

    2015-07-28

    Systems and methods for conducting electron or scanning probe microscopy are provided herein. In a general embodiment, the systems and methods for conducting electron or scanning probe microscopy with an undersampled data set include: driving an electron beam or probe to scan across a sample and visit a subset of pixel locations of the sample that are randomly or pseudo-randomly designated; determining actual pixel locations on the sample that are visited by the electron beam or probe; and processing data collected by detectors from the visits of the electron beam or probe at the actual pixel locations and recovering a reconstructed image of the sample.
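
The claimed workflow, visit a random subset of pixel locations, record which pixels were actually measured, then reconstruct the image, can be mimicked end-to-end on a toy field. Here a simple neighbour-averaging (harmonic) fill stands in for the patent's reconstruction algorithm, and the smooth synthetic "sample" and 25% sampling rate are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth "sample": a smooth 2-D intensity field standing in for an image.
h = w = 32
yy, xx = np.mgrid[0:h, 0:w]
truth = np.sin(xx / 6.0) + np.cos(yy / 8.0)

# Visit only a random ~25% subset of pixel locations, as in the claim.
mask = rng.random((h, w)) < 0.25

# Reconstruction stand-in: iteratively replace unmeasured pixels with the
# mean of their 4-neighbours while pinning measured pixels (harmonic fill).
recon = np.where(mask, truth, truth[mask].mean())
for _ in range(200):
    padded = np.pad(recon, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    recon = np.where(mask, truth, neighbours)

rmse = float(np.sqrt(np.mean((recon - truth) ** 2)))
print(f"reconstruction RMSE from ~25% of pixels: {rmse:.3f}")
```

A smooth field reconstructs well from a quarter of the pixels; sharper features would call for the compressed-sensing style recovery the patent actually targets.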

  10. Science Planning for the Solar Probe Plus NASA Mission

    Science.gov (United States)

    Kusterer, M. B.; Fox, N. J.; Turner, F. S.; Vandegriff, J. D.

    2015-12-01

With a planned launch in 2018, there are a number of challenges for the Science Planning Team (SPT) of the Solar Probe Plus mission. The geometry of the celestial bodies and the spacecraft during some of the Solar Probe Plus mission orbits causes limited uplink and downlink opportunities. The payload teams must manage the volume of data that they write to the spacecraft solid-state recorders (SSR) for their individual instruments for downlink to the ground. The aim is to write the instrument data to the spacecraft SSR before a set of downlink opportunities large enough to get the data to the ground, and before the start of another data collection cycle. The SPT also intends to coordinate observations with other spacecraft and ground-based systems. To add further complexity, two of the spacecraft payloads have the capability to write large volumes of data to their internal payload SSRs while sending a smaller "survey" portion of the data to the spacecraft SSR for downlink. The instrument scientists would then view the survey data on the ground, determine the most interesting data on their payload SSR, and send commands to transfer that data from the payload SSR to the spacecraft SSR for downlink. The timing required for downlink and analysis of the survey data, identifying uplink opportunities for commanding data transfers, and downlink opportunities big enough for the selected data within the data collection period is critical. To solve these challenges, the Solar Probe Plus Science Working Group has designed an orbit-type-optimized, file-priority downlink scheme to downlink high-priority survey data quickly. This file-priority scheme would maximize the reaction time that the payload teams have to perform the survey-and-selected-data method on orbits where downlink and uplink availability supports it. An interactive display and analysis science planning tool is being designed for the SPT to use as an aid to planning. The
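
The file-priority downlink idea can be caricatured as a greedy, priority-ordered fill of each downlink opportunity. This sketch is illustrative only; the file names, sizes, priorities, and pass capacities are invented, and the real SPP scheme is orbit-type dependent:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class DataFile:
    priority: int                    # lower value = downlinked first
    name: str = field(compare=False)
    size_mb: float = field(compare=False)

def plan_downlinks(files, passes_mb):
    """Greedy sketch: fill each downlink opportunity (capacity in MB) with
    the highest-priority files that still fit, in priority order."""
    heap = list(files)
    heapq.heapify(heap)
    plan = []
    for capacity in passes_mb:
        sent, left = [], capacity
        while heap and heap[0].size_mb <= left:
            f = heapq.heappop(heap)
            sent.append(f.name)
            left -= f.size_mb
        plan.append(sent)
    return plan, [f.name for f in heap]  # per-pass plan, files still waiting

files = [DataFile(0, "survey_A", 40), DataFile(1, "survey_B", 30),
         DataFile(3, "selected_A", 120), DataFile(2, "housekeeping", 10)]
plan, waiting = plan_downlinks(files, passes_mb=[80, 100])
print(plan)     # small survey/housekeeping files fit the first pass
print(waiting)  # the large selected-data file waits for a bigger pass
```

Giving survey files the lowest priority values gets them to the ground first, which is exactly what buys the payload teams time to choose and command the "selected" data transfers.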

  11. Optimal selection of TLD chips

    International Nuclear Information System (INIS)

    Phung, P.; Nicoll, J.J.; Edmonds, P.; Paris, M.; Thompson, C.

    1996-01-01

    Large sets of TLD chips are often used to measure beam dose characteristics in radiotherapy. A sorting method is presented to allow optimal selection of chips from a chosen set. This method considers the variation

  12. Optimization of the imaging response of scanning microwave microscopy measurements

    Energy Technology Data Exchange (ETDEWEB)

    Sardi, G. M.; Lucibello, A.; Proietti, E.; Marcelli, R., E-mail: romolo.marcelli@imm.cnr.it [National Research Council, Institute for Microelectronics and Microsystems, Via del Fosso del Cavaliere 100, 00133 Rome (Italy); Kasper, M.; Gramse, G. [Biophysics Institute, Johannes Kepler University, Gruberstrasse 40, 4020 Linz (Austria); Kienberger, F. [Keysight Technologies Austria GmbH, Gruberstrasse 40, 4020 Linz (Austria)

    2015-07-20

In this work, we present the analytical modeling and preliminary experimental results for the choice of the optimal frequencies when performing amplitude and phase measurements with a scanning microwave microscope. In particular, the analysis is related to the reflection mode operation of the instrument, i.e., the acquisition of the complex reflection coefficient data, usually referred to as S{sub 11}. The studied configuration is composed of an atomic force microscope with a microwave-matched nanometric cantilever probe tip, connected by a λ/2 coaxial cable resonator to a vector network analyzer. The set-up is provided by Keysight Technologies. As a peculiar result, the optimal frequencies, where the maximum sensitivity is achieved, are different for the amplitude and for the phase signals. The analysis is focused on measurements of dielectric samples, like semiconductor devices, textile pieces, and biological specimens.
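
The headline result, that amplitude and phase sensitivity peak at different frequencies, can be reproduced with a lumped-element toy model. Here a series-RLC circuit stands in for the λ/2 cable resonator and tip; all component values and the size of the tip-capacitance perturbation are assumptions, not the actual instrument parameters:

```python
import numpy as np

Z0 = 50.0                       # VNA reference impedance
R, L, C = 2.0, 10e-9, 1e-12     # hypothetical lumped resonator values
dC = 1e-18                      # tiny tip-capacitance change (aF scale)

f = np.linspace(0.5e9, 3e9, 20001)
w = 2 * np.pi * f

def s11(cap):
    """Reflection coefficient of a series-RLC stand-in for the resonant probe."""
    Z = R + 1j * w * L + 1 / (1j * w * cap)
    return (Z - Z0) / (Z + Z0)

a, b = s11(C), s11(C + dC)
d_amp = np.abs(np.abs(b) - np.abs(a))        # amplitude response to dC
d_phase = np.abs(np.angle(b * np.conj(a)))   # wrap-safe phase response to dC

f_amp = f[np.argmax(d_amp)]
f_phase = f[np.argmax(d_phase)]
print(f"max amplitude sensitivity near {f_amp / 1e9:.3f} GHz")
print(f"max phase sensitivity near {f_phase / 1e9:.3f} GHz")
```

For these values the resonance sits near 1.6 GHz; the amplitude extremum lands on the slope of the resonance dip while the phase extremum sits close to its centre, which is why the two optimal operating frequencies differ.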

  13. Nanobits: customizable scanning probe tips

    DEFF Research Database (Denmark)

    Kumar, Rajendra; Shaik, Hassan Uddin; Sardan Sukas, Özlem

    2009-01-01

    We present here a proof-of-principle study of scanning probe tips defined by planar nanolithography and integrated with AFM probes using nanomanipulation. The so-called 'nanobits' are 2-4 mu m long and 120-150 nm thin flakes of Si3N4 or SiO2, fabricated by electron beam lithography and standard s...

  14. Gene probes: principles and protocols

    National Research Council Canada - National Science Library

    Aquino de Muro, Marilena; Rapley, Ralph

    2002-01-01

    ... of labeled DNA has allowed genes to be mapped to single chromosomes and in many cases to a single chromosome band, promoting significant advance in human genome mapping. Gene Probes: Principles and Protocols presents the principles for gene probe design, labeling, detection, target format, and hybridization conditions together with detailed protocols, accom...

  15. Non-inductive current probe

    DEFF Research Database (Denmark)

    Bak, Christen Kjeldahl

    1977-01-01

The current probe described is a low-cost shunt resistor for monitoring current pulses in, e.g., pulsed lasers. Rise time is...

  16. Optical probes in biology

    CERN Document Server

    Zhang, Jin; Schultz, Carsten

    2015-01-01

Introduction and Basics: Engineering of Optimized Fluorescent Proteins: An Overview from a Cyan and FRET Perspective, by Lindsay Haarbosch, Joachim Goedhart, Mark A. Hink, Laura van Weeren, Daphne S. Bindels, and Theodorus W.J. Gadella; Fluorescent Imaging Techniques: FRET and Complementary Methods, by Stefan Terjung and Yury Belyaev; Tracking: Sensors for Tracking Biomolecules: Protein-Based Calcium Sensors, by Thomas Thestrup and Oliver Griesbeck; Monitoring Membrane Lipids with Protein Domains Expressed in Living Cells, by Peter Varnai

  17. Water cooled static pressure probe

    Science.gov (United States)

    Lagen, Nicholas T. (Inventor); Eves, John W. (Inventor); Reece, Garland D. (Inventor); Geissinger, Steve L. (Inventor)

    1991-01-01

    An improved static pressure probe containing a water cooling mechanism is disclosed. This probe has a hollow interior containing a central coolant tube and multiple individual pressure measurement tubes connected to holes placed on the exterior. Coolant from the central tube symmetrically immerses the interior of the probe, allowing it to sustain high temperature (in the region of 2500 F) supersonic jet flow indefinitely, while still recording accurate pressure data. The coolant exits the probe body by way of a reservoir attached to the aft of the probe. The pressure measurement tubes are joined to a single, larger manifold in the reservoir. This manifold is attached to a pressure transducer that records the average static pressure.

  18. Gravity Probe B Encapsulated

    Science.gov (United States)

    2004-01-01

    In this photo, the Gravity Probe B (GP-B) space vehicle is being encapsulated atop the Delta II launch vehicle. The GP-B is the relativity experiment developed at Stanford University to test two extraordinary predictions of Albert Einstein's general theory of relativity. The experiment will measure, very precisely, the expected tiny changes in the direction of the spin axes of four gyroscopes contained in an Earth-orbiting satellite at a 400-mile altitude. So free are the gyroscopes from disturbance that they will provide an almost perfect space-time reference system. They will measure how space and time are very slightly warped by the presence of the Earth, and, more profoundly, how the Earth's rotation very slightly drags space-time around with it. These effects, though small for the Earth, have far-reaching implications for the nature of matter and the structure of the Universe. GP-B is among the most thoroughly researched programs ever undertaken by NASA. This is the story of a scientific quest in which physicists and engineers have collaborated closely over many years. Inspired by their quest, they have invented a whole range of technologies that are already enlivening other branches of science and engineering. Launched April 20, 2004 , the GP-B program was managed for NASA by the Marshall Space Flight Center. Development of the GP-B is the responsibility of Stanford University along with major subcontractor Lockheed Martin Corporation. (Image credit to Russ Underwood, Lockheed Martin Corporation).

  19. Steerable Doppler transducer probes

    International Nuclear Information System (INIS)

    Fidel, H.F.; Greenwood, D.L.

    1986-01-01

    An ultrasonic diagnostic probe is described which is capable of performing ultrasonic imaging and Doppler measurement consisting of: a hollow case having an acoustic window which passes ultrasonic energy and including chamber means for containing fluid located within the hollow case and adjacent to a portion of the acoustic window; imaging transducer means, located in the hollow case and outside the fluid chamber means, and oriented to direct ultrasonic energy through the acoustic window toward an area which is to be imaged; Doppler transducer means, located in the hollow case within the fluid chamber means, and movably oriented to direct Doppler signals through the acoustic window toward the imaged area; means located within the fluid chamber means and externally controlled for controllably moving the Doppler transducer means to select one of a plurality of axes in the imaged area along which the Doppler signals are to be directed; and means, located external to the fluid chamber means and responsive to the means for moving, for providing an indication signal for identifying the selected axis

  20. Probe branes thermalization in external electric and magnetic fields

    International Nuclear Information System (INIS)

    Ali-Akbari, M.; Ebrahim, H.; Rezaei, Z.

    2014-01-01

    We study thermalization on rotating probe branes in the AdS_5 × S^5 background in the presence of constant external electric and magnetic fields. In the AdS/CFT framework this corresponds to thermalization in the flavour sector in field theory. The horizon appears on the worldvolume of the probe brane due to its rotation in one of the sphere directions. For both electric and magnetic fields the behaviour of the temperature is independent of the probe brane dimension. We also study the open string metric and the fluctuations of the probe brane in such a set-up. We show that the temperatures obtained from the open string metric and observed by the fluctuations are larger than the one calculated from the induced metric

  1. Measuring reactive oxygen and nitrogen species with fluorescent probes: challenges and limitations

    Science.gov (United States)

    Kalyanaraman, Balaraman; Darley-Usmar, Victor; Davies, Kelvin J.A.; Dennery, Phyllis A.; Forman, Henry Jay; Grisham, Matthew B.; Mann, Giovanni E.; Moore, Kevin; Roberts, L. Jackson; Ischiropoulos, Harry

    2013-01-01

    The purpose of this position paper is to present a critical analysis of the challenges and limitations of the most widely used fluorescent probes for detecting and measuring reactive oxygen and nitrogen species. Where feasible, we have made recommendations for the use of alternate probes and appropriate analytical techniques that measure the specific products formed from the reactions between fluorescent probes and reactive oxygen and nitrogen species. We have proposed guidelines that will help present and future researchers with regard to the optimal use of selected fluorescent probes and interpretation of results. PMID:22027063

  2. Use of oligodeoxynucleotide signature probes for identification of physiological groups of methylotrophic bacteria

    International Nuclear Information System (INIS)

    Tsien, H.C.; Bratina, B.J.; Tsuji, K.; Hanson, R.S.

    1990-01-01

    Oligodeoxynucleotide sequences that uniquely complemented 16S rRNAs of each group of methylotrophs were synthesized and used as hybridization probes for the identification of methylotrophic bacteria possessing the serine and ribulose monophosphate (RuMP) pathways for formaldehyde fixation. The specificity of the probes was determined by hybridizing radiolabeled probes with slot-blotted RNAs of methylotrophs and other eubacteria followed by autoradiography. The washing temperature was determined experimentally to be 50 and 52 degrees C for 9-α (serine pathway) and 10-γ (RuMP pathway) probes, respectively. RNAs isolated from serine pathway methylotrophs bound to probe 9-α, and RNAs from RuMP pathway methylotrophs bound to probe 10-γ. Nonmethylotrophic eubacterial RNAs did not bind to either probe. The probes were also labeled with fluorescent dyes. Cells fixed to microscope slides were hybridized with these probes, washed, and examined in a fluorescence microscope equipped with appropriate filter sets. Cells of methylotrophic bacteria possessing the serine or RuMP pathway specifically bind probes designed for each group. Samples with a mixture of cells of type I and II methanotrophs were detected and differentiated with single probes or mixed probes labeled with different fluorescent dyes, which enabled the detection of both types of cells in the same microscopic field
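The washing temperatures above were determined experimentally; for short oligo probes like these, a common first-pass estimate of the dissociation temperature is the Wallace rule. A minimal sketch (the 18-mer below is hypothetical, not one of the paper's 9-α or 10-γ probes):

```python
def wallace_td(seq):
    """Wallace-rule dissociation temperature (deg C) for a short oligo:
    Td = 2*(A+T) + 4*(G+C). Only a rough screen for probes under ~20 nt;
    empirical optimization, as in the study above, remains necessary."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(wallace_td("GCATTCGGAACTGATTCC"))  # hypothetical probe -> 54
```

In practice a wash temperature a few degrees below the estimated Td would then be refined experimentally, as the authors did.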

  3. A Common Probe Design for Multiple Planetary Destinations

    Science.gov (United States)

    Hwang, H. H.; Allen, G. A., Jr.; Alunni, A. I.; Amato, M. J.; Atkinson, D. H.; Bienstock, B. J.; Cruz, J. R.; Dillman, R. A.; Cianciolo, A. D.; Elliott, J. O.; hide

    2018-01-01

    Atmospheric probes have been successfully flown to planets and moons in the solar system to conduct in situ measurements. They include the Pioneer Venus multi-probes, the Galileo Jupiter probe, and the Huygens probe. Probe mission concepts to five destinations, including Venus, Jupiter, Saturn, Uranus, and Neptune, have all utilized similar-shaped aeroshells and concepts of operations, namely a 45-degree sphere-cone shape with high-density heatshield material and a parachute system for extracting the descent vehicle from the aeroshell. Each concept designed its probe to meet specific mission requirements and to optimize mass, volume, and cost. At the 2017 International Planetary Probe Workshop (IPPW), NASA Headquarters postulated that a common aeroshell design could be used successfully for multiple destinations and missions. This "common probe" design could even be assembled in multiple copies, properly stored, and made available for future NASA missions, potentially realizing savings in cost and schedule and reducing the risk of losing technologies and skills difficult to sustain over decades. Thus the NASA Planetary Science Division funded a study to investigate whether a common probe design could meet most, if not all, mission needs to the five planetary destinations with extreme entry environments. The Common Probe study involved four NASA Centers and addressed these issues, including constraints and inefficiencies that occur in specifying a common design. Study methodology: First, a notional payload of instruments for each destination was defined based on priority measurements from the Planetary Science Decadal Survey. Steep and shallow entry flight path angles (EFPA) were defined for each planet based on qualification and operational g-load limits for current, state-of-the-art instruments. Interplanetary trajectories were then identified for a bounding range of EFPA. Next, 3-degrees-of-freedom simulations for entry trajectories were run using the entry state...

  4. Ionization probes of molecular structure and chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, P.M. [State Univ. of New York, Stony Brook (United States)

    1993-12-01

    Various photoionization processes provide very sensitive probes for the detection and understanding of the spectra of molecules relevant to combustion processes. The detection of ionization can be selective by using resonant multiphoton ionization or by exploiting the fact that different molecules have different sets of ionization potentials. Therefore, the structure and dynamics of individual molecules can be studied even in a mixed sample. The authors are continuing to develop methods for the selective spectroscopic detection of molecules by ionization, and to use these methods for the study of some molecules of combustion interest.

  5. Solar Probe ANalyzer for Ions - Laboratory Performance

    Science.gov (United States)

    Livi, R.; Larson, D. E.; Kasper, J. C.; Korreck, K. E.; Whittlesey, P. L.

    2017-12-01

    The Parker Solar Probe (PSP) mission is a heliospheric satellite that will orbit the Sun closer than any prior mission to date, with an initial perihelion of 35 solar radii (RS) that is eventually lowered to about 10 RS. PSP includes the Solar Wind Electrons Alphas and Protons (SWEAP) instrument suite, which in turn consists of four instruments: the Solar Probe Cup (SPC) and three Solar Probe ANalyzers (SPAN) for ions and electrons. Together, this suite will take local measurements of particles and electromagnetic fields within the Sun's corona. SPAN-Ai has completed flight calibration and spacecraft integration and is set to be launched in July of 2018. The main mode of operation consists of an electrostatic analyzer (ESA) at its aperture followed by a Time-of-Flight section to measure the energy and mass per charge (m/q) of the ambient ions. SPAN-Ai's main objective is to measure solar wind ions within an energy range of 5 eV - 20 keV, a mass/q between 1-60 [amu/q], and a field of view of 240° x 120°. Here we will show flight calibration results and performance.
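The ESA-plus-TOF principle described above reduces to simple kinematics: the ESA admits ions of a known energy per charge, the flight time over the TOF path gives the speed, and m/q follows. A minimal sketch; the flight length below is an assumption for illustration, not the real SPAN-Ai geometry:

```python
# Kinematic sketch of an ESA + time-of-flight ion measurement: the ESA admits
# ions of known energy per charge E/q (volts), the TOF section of assumed
# length L measures travel time t, and m/q follows from E/q = (m/q) v^2 / 2.
E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def mass_per_charge_amu(e_per_q_volts, tof_s, flight_len_m):
    v = flight_len_m / tof_s               # ion speed from time of flight
    mq_si = 2.0 * e_per_q_volts / v ** 2   # kg per coulomb
    return mq_si * E_CHARGE / AMU          # amu per elementary charge

# e.g. a 1 keV/q proton over a hypothetical 2 cm flight path crosses in ~46 ns
```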

  6. Workshop on Computational Optimization

    CERN Document Server

    2015-01-01

    Our everyday life is unthinkable without optimization. We try to minimize our effort and to maximize the achieved profit. Many real-world and industrial problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks. This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2013. It presents recent advances in computational optimization. The volume includes important real-life problems like parameter settings for controlling processes in a bioreactor, resource-constrained project scheduling, problems arising in transport services, error-correcting codes, optimal system performance and energy consumption, and so on. It shows how to develop algorithms for them based on new metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming and others.

  7. Efficient oligonucleotide probe selection for pan-genomic tiling arrays

    Directory of Open Access Journals (Sweden)

    Zhang Wei

    2009-09-01

    Full Text Available Abstract Background Array comparative genomic hybridization is a fast and cost-effective method for detecting, genotyping, and comparing the genomic sequence of unknown bacterial isolates. This method, as with all microarray applications, requires adequate coverage of probes targeting the regions of interest. An unbiased tiling of probes across the entire length of the genome is the most flexible design approach. However, such a whole-genome tiling requires that the genome sequence is known in advance. For the accurate analysis of uncharacterized bacteria, an array must query a fully representative set of sequences from the species' pan-genome. Prior microarrays have included only a single strain per array or the conserved sequences of gene families. These arrays omit potentially important genes and sequence variants from the pan-genome. Results This paper presents a new probe selection algorithm (PanArray that can tile multiple whole genomes using a minimal number of probes. Unlike arrays built on clustered gene families, PanArray uses an unbiased, probe-centric approach that does not rely on annotations, gene clustering, or multi-alignments. Instead, probes are evenly tiled across all sequences of the pan-genome at a consistent level of coverage. To minimize the required number of probes, probes conserved across multiple strains in the pan-genome are selected first, and additional probes are used only where necessary to span polymorphic regions of the genome. The viability of the algorithm is demonstrated by array designs for seven different bacterial pan-genomes and, in particular, the design of a 385,000 probe array that fully tiles the genomes of 20 different Listeria monocytogenes strains with overlapping probes at greater than twofold coverage. Conclusion PanArray is an oligonucleotide probe selection algorithm for tiling multiple genome sequences using a minimal number of probes. 
It is capable of fully tiling all genomes of a species on...
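The greedy core of a PanArray-style selection (prefer probes shared across many genomes first, then patch the polymorphic remainder) can be sketched as a set cover over fixed-length windows. This is a toy simplification under stated assumptions; the real algorithm tiles with overlapping probes and minimality guarantees:

```python
def greedy_pan_probes(genomes, k):
    """Toy sketch of PanArray-style pan-genome probe selection: repeatedly
    pick the k-mer window that covers the most still-uncovered genomes,
    so probes conserved across strains are chosen first and polymorphic
    regions are patched afterwards. Uses non-overlapping exact windows
    (any short tail under k bases is ignored for brevity)."""
    uncovered = [
        {g[i:i + k] for i in range(0, len(g) - k + 1, k)} for g in genomes
    ]
    probes = []
    while any(uncovered):
        candidates = set().union(*uncovered)
        # score each candidate by how many genomes it still helps cover
        best = max(candidates, key=lambda p: sum(p in u for u in uncovered))
        probes.append(best)
        for u in uncovered:
            u.discard(best)
    return probes
```

On two toy "strains" sharing a conserved prefix, the shared window is selected before the strain-specific one, mirroring the conserved-first strategy described above.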

  8. Optimization strategies for ultrasound volume registration

    International Nuclear Information System (INIS)

    Ijaz, Umer Zeeshan; Prager, Richard W; Gee, Andrew H; Treece, Graham M

    2010-01-01

    This paper considers registration of 3D ultrasound volumes acquired in multiple views for display in a single image volume. One way to acquire 3D data is to use a mechanically swept 3D probe. However, the usefulness of these probes is restricted by their limited field of view. This problem can be overcome by attaching a six-degree-of-freedom (DOF) position sensor to the probe, and displaying the information from multiple sweeps in their proper positions. However, an external six-DOF position sensor can be an inconvenience in a clinical setting. The objective of this paper is to propose a hybrid strategy that replaces the sensor with a combination of three-DOF image registration and an unobtrusive inertial sensor for measuring orientation. We examine a range of optimization algorithms and similarity measures for registration and compare them in in vitro and in vivo experiments. We register based on multiple reslice images rather than a whole voxel array. In this paper, we use a large number of reslices for improved reliability at the expense of computational speed. We have found that the Levenberg–Marquardt method is very fast but is not guaranteed to give the correct solution all the time. We conclude that normalized mutual information used in the Nelder–Mead simplex algorithm is potentially suitable for the registration task with an average execution time of around 5 min, in the majority of cases, with two restarts in a C++ implementation on a 3.0 GHz Intel Core 2 Duo CPU machine
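The registration loop the authors describe, a similarity measure driven by a derivative-free optimizer, can be illustrated on a 1-D toy problem: normalized mutual information maximized with Nelder-Mead to recover a known shift. The paper's actual task is 3-DOF registration of reslice images; this sketch only shows the mechanics:

```python
import numpy as np
from scipy.optimize import minimize

def nmi(a, b, bins=32):
    """Normalized mutual information, (H(A) + H(B)) / H(A, B), computed
    from a joint histogram; the similarity measure the study favoured."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (hx + hy) / hxy

# 1-D toy stand-in for reslice registration: recover a known shift of 1.3
x = np.linspace(0.0, 6.0 * np.pi, 400)
fixed = np.sin(x) + 0.1 * np.sin(5.0 * x)
moving = np.interp(x - 1.3, x, fixed)        # moving(x) = fixed(x - 1.3)

def cost(t):
    # resample the moving signal at a trial shift and score it against fixed
    return -nmi(fixed, np.interp(x + t[0], x, moving))

res = minimize(cost, x0=[0.0], method="Nelder-Mead")
```

As the abstract notes for the real task, simplex-based optimizers are robust but slow, and local optima remain possible, which is why the authors use restarts.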

  9. STM-SQUID probe microscope

    International Nuclear Information System (INIS)

    Hayashi, Tadayuki; Tachiki, Minoru; Itozaki, Hideo

    2007-01-01

    We have developed a STM-SQUID probe microscope. A high-Tc SQUID probe microscope was combined with a scanning tunneling microscope for investigation of samples at room temperature in air. A high-permeability probe needle was used as a magnetic flux guide to improve the spatial resolution. The probe, with a tip radius of less than 100 nm, was prepared by microelectropolishing. The probe was also used as a scanning tunneling microscope tip. Topography of the sample surface could be measured by the scanning tunneling microscope with high spatial resolution prior to observation by SQUID microscopy. The SQUID probe microscope image could be observed while keeping the distance from the sample surface to the probe tip constant. We observed topographic and magnetic images of a Ni fine pattern and also of a magnetically recorded hard disk. Furthermore, we have investigated a sample vibration method for the static magnetic field emanating from a sample, with the aim of achieving a higher signal-to-noise (S/N) ratio

  10. The AMEMIYA probe. Theoretical background

    International Nuclear Information System (INIS)

    Belitz, Hans Joahim; Althausen, Bernhard; Uehara, Kazuya; Amemiya, Hiroshi

    2010-01-01

    The present probe was developed in order to measure the temperature T_i of positive ions in the scrape-off layer (SOL) of a tokamak, where T_i is usually larger than the electron temperature T_e, so that the presheath in front of the probe need not be considered and the ions reach the probe with the thermal velocity. The axis of the cylindrical probe is placed parallel to the magnetic field. The important parameters are L/a, the ratio of the length to the radius of the cylindrical probe, and κ, the ratio of the probe radius to (π/4)^(1/2) times the mean ion Larmor radius. The ion current densities to the side and the end surfaces are expressed by a double integral, which yields an analytical formula with respect to the value of κ. If two electrodes with different lengths are placed parallel to the magnetic field, the difference of current densities can be reduced to κ and hence to T_i. Some examples of the application of the probe to tokamaks, JFT-2M and TEXTOR, are demonstrated. (author)

  11. Frequency Optimization for Enhancement of Surface Defect Classification Using the Eddy Current Technique

    Science.gov (United States)

    Fan, Mengbao; Wang, Qi; Cao, Binghua; Ye, Bo; Sunny, Ali Imam; Tian, Guiyun

    2016-01-01

    Eddy current testing is quite a popular non-contact and cost-effective method for nondestructive evaluation of product quality and structural integrity. Excitation frequency is one of the key performance factors for defect characterization. In the literature, there are many interesting papers dealing with wide spectral content and optimal frequency in terms of detection sensitivity. However, research activity on frequency optimization with respect to characterization performances is lacking. In this paper, an investigation into optimum excitation frequency has been conducted to enhance surface defect classification performance. The influences of excitation frequency for a group of defects were revealed in terms of detection sensitivity, contrast between defect features, and classification accuracy using kernel principal component analysis (KPCA) and a support vector machine (SVM). It is observed that probe signals are the most sensitive on the whole for a group of defects when excitation frequency is set near the frequency at which maximum probe signals are retrieved for the largest defect. After the use of KPCA, the margins between the defect features are optimum from the perspective of the SVM, which adopts optimal hyperplanes for structure risk minimization. As a result, the best classification accuracy is obtained. The main contribution is that the influences of excitation frequency on defect characterization are interpreted, and experiment-based procedures are proposed to determine the optimal excitation frequency for a group of defects rather than a single defect with respect to optimal characterization performances. PMID:27164112
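The KPCA step described above can be sketched as a plain eigendecomposition of the doubly centered RBF kernel matrix. For brevity a nearest-centroid rule stands in for the paper's SVM, and the "defect features" are synthetic blobs, so only the projection mechanics match the study:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """RBF-kernel PCA: eigendecompose the doubly centered kernel matrix and
    return each training sample's coordinates on the leading components."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    w, V = np.linalg.eigh(J @ K @ J)           # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# synthetic stand-ins for two defect classes; a nearest-centroid rule
# replaces the paper's SVM to keep the sketch dependency-free
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(6.0, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
Z = kernel_pca(X, n_components=2, gamma=0.5)
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

The paper's point carries over: with well-chosen excitation (here, well-separated features), the kernel-space margins are wide and classification is easy.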

  12. Analyzing Planck and low redshift data sets with advanced statistical methods

    Science.gov (United States)

    Eifler, Tim

    The recent ESA/NASA Planck mission has provided a key data set to constrain cosmology that is most sensitive to physics of the early Universe, such as inflation and primordial non-Gaussianity (Planck 2015 results XIII). In combination with cosmological probes of the Large-Scale Structure (LSS), the Planck data set is a powerful source of information to investigate late-time phenomena (Planck 2015 results XIV), e.g. the accelerated expansion of the Universe, the impact of baryonic physics on the growth of structure, and the alignment of galaxies in their dark matter halos. It is the main objective of this proposal to re-analyze the archival Planck data, 1) with different, more recently developed statistical methods for cosmological parameter inference, and 2) to combine Planck and ground-based observations in an innovative way. We will make the corresponding analysis framework publicly available and believe that it will set a new standard for future CMB-LSS analyses. Advanced statistical methods, such as the Gibbs sampler (Jewell et al 2004, Wandelt et al 2004), have been critical in the analysis of Planck data. More recently, Approximate Bayesian Computation (ABC, see Weyant et al 2012, Akeret et al 2015, Ishida et al 2015, for cosmological applications) has matured into an interesting tool in cosmological likelihood analyses. It circumvents several assumptions that enter the standard Planck (and most LSS) likelihood analyses, most importantly, the assumption that the functional form of the likelihood of the CMB observables is a multivariate Gaussian. Beyond applying new statistical methods to Planck data in order to cross-check and validate existing constraints, we plan to combine Planck and DES data in a new and innovative way and run multi-probe likelihood analyses of CMB and LSS observables. The complexity of multi-probe likelihood analyses scales non-linearly with the level of correlations amongst the individual probes that are included. For the multi-probe...

  13. Measurements of plasma density fluctuations and electric wave fields using spherical electrostatic probes

    International Nuclear Information System (INIS)

    Eriksson, A.I.; Bostroem, R.

    1995-04-01

    Spherical electrostatic probes are in wide use for the measurements of electric fields and plasma density. This report concentrates on the measurements of fluctuations of these quantities rather than background values. Potential problems with the technique include the influence of density fluctuations on electric field measurements and vice versa, effects of varying satellite potential, and non-linear rectification in the probe and satellite sheaths. To study the actual importance of these and other possible effects, we simulate the response of the probe-satellite system to various wave phenomena in the plasma by applying approximate analytical as well as numerical methods. We use a set of non-linear probe equations, based on probe characteristics experimentally obtained in space, and therefore essentially independent of any specific probe theory. This approach is very useful since the probe theory for magnetized plasmas is incomplete. 47 refs

  14. Integrated microfluidic probe station.

    Science.gov (United States)

    Perrault, C M; Qasaimeh, M A; Brastaviceanu, T; Anderson, K; Kabakibo, Y; Juncker, D

    2010-11-01

    The microfluidic probe (MFP) consists of a flat, blunt tip with two apertures for the injection and reaspiration of a microjet into a solution, thus hydrodynamically confining the microjet, and is operated atop an inverted microscope that enables live imaging. By scanning across a surface, the microjet can be used for surface processing with the capability of both depositing and removing material; as it operates under immersed conditions, sensitive biological materials and living cells can be processed. During scanning, the MFP is kept immobile and centered over the objective of the inverted microscope, a few micrometers above a substrate that is displaced by moving the microscope stage and that is flushed continuously with the microjet. For consistent and reproducible surface processing, the gap between the MFP and the substrate, the MFP's alignment, the scanning speed, the injection and aspiration flow rates, and the image capture all need to be controlled and synchronized. Here, we present an automated MFP station that integrates all of these functionalities and automates the key operational parameters. A custom software program is used to control an independent motorized Z stage for adjusting the gap, a motorized microscope stage for scanning the substrate, up to 16 syringe pumps for injecting and aspirating fluids, and an inverted fluorescence microscope equipped with a charge-coupled device camera. The parallelism between the MFP and the substrate is adjusted using a manual goniometer at the beginning of the experiment. The alignment of the injection and aspiration apertures along the scanning axis is performed using a newly designed MFP screw holder. We illustrate the integrated MFP station by the programmed, automated patterning of fluorescently labeled biotin on a streptavidin-coated surface.

  15. Gravity Probe B Assembled

    Science.gov (United States)

    2000-01-01

    In this photo, the Gravity Probe B (GP-B) space vehicle is being assembled at the Sunnyvale, California location of the Lockheed Martin Corporation. The GP-B is the relativity experiment developed at Stanford University to test two extraordinary predictions of Albert Einstein's general theory of relativity. The experiment will measure, very precisely, the expected tiny changes in the direction of the spin axes of four gyroscopes contained in an Earth-orbiting satellite at a 400-mile altitude. So free are the gyroscopes from disturbance that they will provide an almost perfect space-time reference system. They will measure how space and time are very slightly warped by the presence of the Earth, and, more profoundly, how the Earth's rotation very slightly drags space-time around with it. These effects, though small for the Earth, have far-reaching implications for the nature of matter and the structure of the Universe. GP-B is among the most thoroughly researched programs ever undertaken by NASA. This is the story of a scientific quest in which physicists and engineers have collaborated closely over many years. Inspired by their quest, they have invented a whole range of technologies that are already enlivening other branches of science and engineering. Launched April 20, 2004 , the GP-B program was managed for NASA by the Marshall Space Flight Center. Development of the GP-B is the responsibility of Stanford University along with major subcontractor Lockheed Martin Corporation. (Image credit to Russ Underwood, Lockheed Martin Corporation).

  16. Short recovery time NMR probe

    International Nuclear Information System (INIS)

    Ramia, M.E.; Martin, C.A.; Jeandrevin, S.

    2011-01-01

    An NMR probe for low frequency and short recovery time is presented in this work. The probe contains the tuning circuit, diode expanders, and quarter-wavelength networks to protect the receiver from both the amplifier noise and the coil ringing following the transmitter power pulse. It also possesses a coil damper which is activated by non-active components. The probe performance shows a recovery time of about 15 μs, a noticeable reduction of the Q factor, and an increase of the signal-to-noise ratio of about 68% during reception at a working frequency of 2 MHz. (author)

  17. Simplified Real-Time Multiplex Detection of Loop-Mediated Isothermal Amplification Using Novel Mediator Displacement Probes with Universal Reporters.

    Science.gov (United States)

    Becherer, Lisa; Bakheit, Mohammed; Frischmann, Sieghard; Stinco, Silvina; Borst, Nadine; Zengerle, Roland; von Stetten, Felix

    2018-04-03

    A variety of real-time detection techniques for loop-mediated isothermal amplification (LAMP) based on the change in fluorescence intensity during DNA amplification enable simultaneous detection of multiple targets. However, these techniques depend on fluorogenic probes containing target-specific sequences. That complicates adaptation to different targets, leading to time-consuming assay optimization. Here, we present the first universal real-time detection technique for multiplex LAMP. The novel approach allows simple assay design and is easy to implement for various targets. The innovation features a mediator displacement probe and a universal reporter. During amplification of target DNA the mediator is displaced from the mediator displacement probe. Then it hybridizes to the reporter, generating a fluorescence signal. The novel mediator displacement (MD) detection was validated against state-of-the-art molecular beacon (MB) detection by means of an HIV-1 RT-LAMP: MD surpassed MB detection by accelerated probe design (MD: 10 min, MB: 3-4 h), shorter times to positive (MD 4.1 ± 0.1 min shorter than MB, n = 36), improved signal-to-noise fluorescence ratio (MD: 5.9 ± 0.4, MB: 2.7 ± 0.4; n = 15), and showed equally good or better analytical performance parameters. The usability of one universal mediator-reporter set in different multiplex assays was successfully demonstrated for a biplex RT-LAMP of HIV-1 and HTLV-1 and a biplex LAMP of Haemophilus ducreyi and Treponema pallidum, both showing good correlation between target concentration and time to positive. Due to its simple implementation, it is suggested to extend the use of the universal mediator-reporter sets to the detection of various other diagnostic panels.
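The "time to positive" figures quoted above come from thresholding real-time fluorescence curves. One common convention (a fixed fraction of the total rise above baseline, not necessarily the exact rule used by the study's instrument) can be sketched as:

```python
import numpy as np

def time_to_positive(t, fluor, frac=0.1, min_rise=0.05):
    """Threshold time of a real-time amplification curve: when the signal
    first reaches baseline + frac * (total rise). One common convention;
    the instrument in the study may define its positivity call differently."""
    base, top = float(fluor[0]), float(fluor.max())
    if top - base < min_rise:
        return None                          # flat trace: no amplification
    thresh = base + frac * (top - base)
    i = int(np.argmax(fluor >= thresh))      # first sample at/above threshold
    if i == 0:
        return float(t[0])
    f0, f1 = float(fluor[i - 1]), float(fluor[i])
    # linear interpolation between the two bracketing samples
    return float(t[i - 1] + (thresh - f0) / (f1 - f0) * (t[i] - t[i - 1]))

# synthetic LAMP-like sigmoid switching on around 12 minutes
t_min = np.linspace(0.0, 30.0, 301)
signal = 1.0 / (1.0 + np.exp(-(t_min - 12.0) / 0.8))
```

Comparing such crossing times between probe chemistries is how differences like the reported 4.1 min advantage are quantified.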

  18. Ultraspecific probes for high throughput HLA typing

    Directory of Open Access Journals (Sweden)

    Eggers Rick

    2009-02-01

    Full Text Available Abstract Background The variations within an individual's HLA (Human Leukocyte Antigen) genes have been linked to many immunological events, e.g. susceptibility to disease, response to vaccines, and the success of blood, tissue, and organ transplants. Although the microarray format has the potential to achieve high-resolution typing, this has yet to be attained due to inefficiencies of current probe design strategies. Results We present a novel three-step approach for the design of high-throughput microarray assays for HLA typing. This approach first selects sequences containing the SNPs present in all alleles of the locus of interest, and next calculates the number of base changes necessary to convert a candidate probe sequence to the closest subsequence within the set of sequences likely to be present in the sample, including the remainder of the human genome, in order to identify those candidate probes which are "ultraspecific" for the allele of interest. Due to the high specificity of these sequences, it is possible that preliminary steps such as PCR amplification are no longer necessary. Lastly, the minimum number of these ultraspecific probes is selected such that the highest-resolution typing can be achieved for the minimal cost of production. As an example, an array was designed and in silico results were obtained for typing of the HLA-B locus. Conclusion The assay presented here provides a higher resolution than has previously been developed and includes more alleles than previously considered. Based upon the in silico and preliminary experimental results, we believe that the proposed approach can be readily applied to any highly polymorphic gene system.
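The specificity score at the heart of this design, the number of base changes needed to turn a candidate probe into the closest off-target subsequence, can be sketched as a sliding-window Hamming distance. This ignores insertions and deletions, which a full design pipeline would also have to consider:

```python
def min_mismatches(probe, background):
    """Fewest base substitutions converting `probe` into the closest
    same-length substring of `background`; a large minimum means no
    off-target sequence is anywhere close, i.e. the probe is a candidate
    for being 'ultraspecific' in the sense of the abstract above."""
    k = len(probe)
    best = k
    for i in range(len(background) - k + 1):
        d = sum(a != b for a, b in zip(probe, background[i:i + k]))
        if d < best:
            best = d
            if best == 0:
                break      # exact off-target match found; cannot do worse
    return best
```

A real pipeline would run this against every sequence expected in the sample, including the rest of the human genome, and keep only probes whose minimum mismatch count stays high.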

  19. Aligned ion implantation using scanning probes

    Energy Technology Data Exchange (ETDEWEB)

    Persaud, A

    2006-12-12

    A new technique for precision ion implantation has been developed. A scanning probe has been equipped with a small aperture and incorporated into an ion beamline, so that ions can be implanted through the aperture into a sample. By using a scanning probe, the target can be imaged in a non-destructive way prior to implantation, and the probe together with the aperture can be placed at the desired location with nanometer precision. In this work, first results of a scanning probe integrated into an ion beamline are presented. A placement resolution of about 120 nm is reported. The final placement accuracy is determined by the size of the aperture hole and by the straggle of the implanted ion inside the target material. The limits of this technology are expected to be set by the latter, which is of the order of 10 nm for low energy ions. This research has been carried out in the context of a larger program concerned with the development of quantum computer test structures. For that, the placement accuracy needs to be increased and a detector for single ion detection has to be integrated into the setup. Both issues are discussed in this thesis. To achieve single ion detection, highly charged ions are used for the implantation, since in addition to their kinetic energy they also deposit their potential energy in the target material, thereby making detection easier. A special ion source for producing these highly charged ions was used, and their creation and interactions with solids are discussed in detail. (orig.)

  20. Aligned ion implantation using scanning probes

    International Nuclear Information System (INIS)

    Persaud, A.

    2006-01-01

    A new technique for precision ion implantation has been developed. A scanning probe has been equipped with a small aperture and incorporated into an ion beamline, so that ions can be implanted through the aperture into a sample. By using a scanning probe, the target can be imaged in a non-destructive way prior to implantation, and the probe together with the aperture can be placed at the desired location with nanometer precision. In this work, first results of a scanning probe integrated into an ion beamline are presented. A placement resolution of about 120 nm is reported. The final placement accuracy is determined by the size of the aperture hole and by the straggle of the implanted ion inside the target material. The limits of this technology are expected to be set by the latter, which is of the order of 10 nm for low energy ions. This research has been carried out in the context of a larger program concerned with the development of quantum computer test structures. For that, the placement accuracy needs to be increased and a detector for single ion detection has to be integrated into the setup. Both issues are discussed in this thesis. To achieve single ion detection, highly charged ions are used for the implantation, since in addition to their kinetic energy they also deposit their potential energy in the target material, thereby making detection easier. A special ion source for producing these highly charged ions was used, and their creation and interactions with solids are discussed in detail. (orig.)

  1. Observational probes of cosmic acceleration

    International Nuclear Information System (INIS)

    Weinberg, David H.; Mortonson, Michael J.; Eisenstein, Daniel J.; Hirata, Christopher; Riess, Adam G.; Rozo, Eduardo

    2013-01-01

    The accelerating expansion of the universe is the most surprising cosmological discovery in many decades, implying that the universe is dominated by some form of “dark energy” with exotic physical properties, or that Einstein’s theory of gravity breaks down on cosmological scales. The profound implications of cosmic acceleration have inspired ambitious efforts to understand its origin, with experiments that aim to measure the history of expansion and growth of structure with percent-level precision or higher. We review in detail the four most well established methods for making such measurements: Type Ia supernovae, baryon acoustic oscillations (BAO), weak gravitational lensing, and the abundance of galaxy clusters. We pay particular attention to the systematic uncertainties in these techniques and to strategies for controlling them at the level needed to exploit “Stage IV” dark energy facilities such as BigBOSS, LSST, Euclid, and WFIRST. We briefly review a number of other approaches including redshift-space distortions, the Alcock–Paczynski effect, and direct measurements of the Hubble constant H₀. We present extensive forecasts for constraints on the dark energy equation of state and parameterized deviations from General Relativity, achievable with Stage III and Stage IV experimental programs that incorporate supernovae, BAO, weak lensing, and cosmic microwave background data. We also show the level of precision required for clusters or other methods to provide constraints competitive with those of these fiducial programs. We emphasize the value of a balanced program that employs several of the most powerful methods in combination, both to cross-check systematic uncertainties and to take advantage of complementary information. Surveys to probe cosmic acceleration produce data sets that support a wide range of scientific investigations, and they continue the longstanding astronomical tradition of mapping the universe in ever greater detail over ever

  2. Observational probes of cosmic acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Weinberg, David H., E-mail: dhw@astronomy.ohio-state.edu [Department of Astronomy, Ohio State University, Columbus, OH (United States); Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH (United States); Mortonson, Michael J. [Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH (United States); Eisenstein, Daniel J. [Steward Observatory, University of Arizona, Tucson, AZ (United States); Harvard College Observatory, Cambridge, MA (United States); Hirata, Christopher [California Institute of Technology, Pasadena, CA (United States); Riess, Adam G. [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD (United States); Rozo, Eduardo [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL (United States)

    2013-09-10

    The accelerating expansion of the universe is the most surprising cosmological discovery in many decades, implying that the universe is dominated by some form of “dark energy” with exotic physical properties, or that Einstein’s theory of gravity breaks down on cosmological scales. The profound implications of cosmic acceleration have inspired ambitious efforts to understand its origin, with experiments that aim to measure the history of expansion and growth of structure with percent-level precision or higher. We review in detail the four most well established methods for making such measurements: Type Ia supernovae, baryon acoustic oscillations (BAO), weak gravitational lensing, and the abundance of galaxy clusters. We pay particular attention to the systematic uncertainties in these techniques and to strategies for controlling them at the level needed to exploit “Stage IV” dark energy facilities such as BigBOSS, LSST, Euclid, and WFIRST. We briefly review a number of other approaches including redshift-space distortions, the Alcock–Paczynski effect, and direct measurements of the Hubble constant H₀. We present extensive forecasts for constraints on the dark energy equation of state and parameterized deviations from General Relativity, achievable with Stage III and Stage IV experimental programs that incorporate supernovae, BAO, weak lensing, and cosmic microwave background data. We also show the level of precision required for clusters or other methods to provide constraints competitive with those of these fiducial programs. We emphasize the value of a balanced program that employs several of the most powerful methods in combination, both to cross-check systematic uncertainties and to take advantage of complementary information. Surveys to probe cosmic acceleration produce data sets that support a wide range of scientific investigations, and they continue the longstanding astronomical tradition of mapping the universe in ever greater detail over

  3. Optimization and optimal control in automotive systems

    CERN Document Server

    Kolmanovsky, Ilya; Steinbuch, Maarten; Re, Luigi

    2014-01-01

    This book demonstrates the use of the optimization techniques that are becoming essential to meet the increasing stringency and variety of requirements for automotive systems. It shows the reader how to move away from earlier  approaches, based on some degree of heuristics, to the use of  more and more common systematic methods. Even systematic methods can be developed and applied in a large number of forms so the text collects contributions from across the theory, methods and real-world automotive applications of optimization. Greater fuel economy, significant reductions in permissible emissions, new drivability requirements and the generally increasing complexity of automotive systems are among the criteria that the contributing authors set themselves to meet. In many cases multiple and often conflicting requirements give rise to multi-objective constrained optimization problems which are also considered. Some of these problems fall into the domain of the traditional multi-disciplinary optimization applie...

  4. Measurement of locus copy number by hybridisation with amplifiable probes

    Science.gov (United States)

    Armour, John A. L.; Sismani, Carolina; Patsalis, Philippos C.; Cross, Gareth

    2000-01-01

    Despite its fundamental importance in genome analysis, it is only recently that systematic approaches have been developed to assess copy number at specific genetic loci, or to examine genomic DNA for submicroscopic deletions of unknown location. In this report we show that short probes can be recovered and amplified quantitatively following hybridisation to genomic DNA. This simple observation forms the basis of a new approach to determining locus copy number in complex genomes. The power and specificity of multiplex amplifiable probe hybridisation is demonstrated by the simultaneous assessment of copy number at a set of 40 human loci, including detection of deletions causing Duchenne muscular dystrophy and Prader–Willi/Angelman syndromes. Assembly of other probe sets will allow novel, technically simple approaches to a wide variety of genetic analyses, including the potential for extension to high resolution genome-wide screens for deletions and amplifications. PMID:10606661

  5. Measurement of locus copy number by hybridisation with amplifiable probes.

    Science.gov (United States)

    Armour, J A; Sismani, C; Patsalis, P C; Cross, G

    2000-01-15

    Despite its fundamental importance in genome analysis, it is only recently that systematic approaches have been developed to assess copy number at specific genetic loci, or to examine genomic DNA for submicroscopic deletions of unknown location. In this report we show that short probes can be recovered and amplified quantitatively following hybridisation to genomic DNA. This simple observation forms the basis of a new approach to determining locus copy number in complex genomes. The power and specificity of multiplex amplifiable probe hybridisation is demonstrated by the simultaneous assessment of copy number at a set of 40 human loci, including detection of deletions causing Duchenne muscular dystrophy and Prader-Willi/Angelman syndromes. Assembly of other probe sets will allow novel, technically simple approaches to a wide variety of genetic analyses, including the potential for extension to high resolution genome-wide screens for deletions and amplifications.
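In multiplex amplifiable probe hybridisation, the recovered probes are amplified and their signal intensities compared against known two-copy references to call deletions or duplications. A minimal sketch of that ratio calculation, with hypothetical probe names and intensities:

```python
def copy_number_estimates(sample, controls, normal_copies=2):
    """Estimate locus copy number from probe signal intensities.
    `sample` and `controls` map probe name -> intensity; the controls
    represent a known two-copy reference.  All values are hypothetical."""
    def normalise(profile):
        # Normalise to total signal to remove overall loading differences.
        total = sum(profile.values())
        return {k: v / total for k, v in profile.items()}
    s = normalise(sample)
    c = normalise(controls)
    return {k: round(normal_copies * s[k] / c[k]) for k in s}

# Hypothetical probes: a halved signal at one locus suggests a deletion.
sample   = {"DMD_ex45": 50,  "ref1": 100, "ref2": 100}
controls = {"DMD_ex45": 100, "ref1": 100, "ref2": 100}
print(copy_number_estimates(sample, controls))
```

Real MAPH analyses add replicate measurements and confidence thresholds before calling a deletion; this sketch only shows the normalisation-and-ratio idea.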

  6. A General Protocol for Temperature Calibration of MAS NMR Probes at Arbitrary Spinning Speeds

    Science.gov (United States)

    Guan, Xudong; Stark, Ruth E.

    2010-01-01

    A protocol using 207Pb NMR of solid lead nitrate was developed to determine the temperature of magic-angle spinning (MAS) NMR probes over a range of nominal set temperatures and spinning speeds. Using BioMAS and fastMAS probes with typical sample spinning rates of 8 and 35 kHz, respectively, empirical equations were devised to predict the respective sample temperatures. These procedures provide a straightforward recipe for temperature calibration of any MAS probe. PMID:21036557
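Calibrations of this kind typically model frictional heating as growing roughly with the square of the spinning rate, with probe-specific coefficients determined from the 207Pb shift of lead nitrate (≈0.753 ppm/K). A sketch of such an empirical equation, with hypothetical coefficients rather than the paper's fitted values:

```python
def sample_temperature(t_set_c, spin_khz, a=0.0, b=0.02):
    """Empirical MAS sample-temperature model: set temperature plus a
    constant offset `a` (degC) and frictional heating `b` (degC/kHz^2)
    that grows with the square of the spinning rate.  The coefficients
    here are hypothetical; each probe must be calibrated individually,
    e.g. against the 207Pb shift of solid lead nitrate."""
    return t_set_c + a + b * spin_khz ** 2

# At a nominal 25 degC set point, spinning at 35 kHz heats the sample
# substantially under these illustrative coefficients.
print(sample_temperature(25.0, 35.0))
```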

  7. New control system: the commands of the settings equipments

    International Nuclear Information System (INIS)

    David, L.; Maugeais, C.

    1992-01-01

    The equipment allowing the setting of the GANIL beam (motors, probes, supplies...) may be operated in three distinct ways: by dial (pseudo-potentiometer), by menu, or by slider. These processes are described. (A.B.)

  8. Lepton probes in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Arvieux, J. [Laboratoire National Saturne, Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France)

    1994-12-31

    Facilities that use the lepton probe to learn about nuclear physics are overviewed. The lepton accelerating methods of some existing facilities are considered. The ELFE project is discussed in detail. (K.A.). 43 refs., 15 figs., 4 tabs.

  9. Probing of flowing electron plasmas

    International Nuclear Information System (INIS)

    Himura, H.; Nakashima, C.; Saito, H.; Yoshida, Z.

    2001-01-01

    Probing of streaming electron plasmas with finite temperature is studied. For the first time, a current-voltage characteristic of an electric probe is measured in electron plasmas. Due to the fast flow of the electron plasmas, the characteristic curve spreads out significantly and exhibits a long tail. This feature can be explained by calculating the currents collected by the probe. In flowing electron plasmas, the distribution function observed in the laboratory frame is non-Maxwellian even if the plasmas come to a state of thermal equilibrium. Another significant feature of the characteristic is that it determines a floating potential where the current equals zero, despite there being very few ions in the electron plasma. A high impedance probe, which is popularly used to determine the space potential of electron plasmas, outputs this potential. The method is applicable only for plasmas with density much smaller than the Brillouin limit

  10. Monitoring probe for groundwater flow

    Science.gov (United States)

    Looney, B.B.; Ballard, S.

    1994-08-23

    A monitoring probe for detecting groundwater migration is disclosed. The monitor features a cylinder made of a permeable membrane carrying an array of electrical conductivity sensors on its outer surface. The cylinder is filled with a fluid that has a conductivity different than the groundwater. The probe is placed in the ground at an area of interest to be monitored. The fluid, typically saltwater, diffuses through the permeable membrane into the groundwater. The flow of groundwater passing around the permeable membrane walls of the cylinder carries the conductive fluid in the same general direction and distorts the conductivity field measured by the sensors. The degree of distortion from top to bottom and around the probe is precisely related to the vertical and horizontal flow rates, respectively. The electrical conductivities measured by the sensors about the outer surface of the probe are analyzed to determine the rate and direction of the groundwater flow. 4 figs.

  11. Pneumatic probe with laser interferometer

    International Nuclear Information System (INIS)

    Wilkens, P.H.

    1978-01-01

    Improvements to upgrade the accuracy of Rotacon probes are described: a complete redesign of the probe incorporates a Michelson interferometer in place of the existing long-range capacitance transducer. This has resulted in a compact and interchangeable probe cartridge with a 3 μin. resolution and accuracy; the cartridge can be installed and replaced in the Rotacon gauge with a minimum of realignment, which should reduce our dependence on operator skill. In addition, the stylus contact force can be reduced to 750 mg for the contacting types, but an alternative feature, which we are still developing, will use a gas jet cushion in place of the stylus to provide a noncontacting version of the same basic probe cartridge. This device is very sensitive to external vibration effects because it is virtually frictionless

  12. Lepton probes in nuclear physics

    International Nuclear Information System (INIS)

    Arvieux, J.

    1994-01-01

    Facilities that use the lepton probe to learn about nuclear physics are overviewed. The lepton accelerating methods of some existing facilities are considered. The ELFE project is discussed in detail. (K.A.). 43 refs., 15 figs., 4 tabs

  13. DNA probe for Lactobacillus delbrueckii

    Energy Technology Data Exchange (ETDEWEB)

    Delley, M.; Mollet, B.; Hottinger, H. (Nestle Research Centre, Lausanne (Switzerland))

    1990-06-01

    From a genomic DNA library of Lactobacillus delbrueckii subsp. bulgaricus, a clone was isolated which complements a leucine auxotrophy of an Escherichia coli strain (GE891). Subsequent analysis of the clone indicated that it could serve as a specific DNA probe. Dot-blot hybridizations with over 40 different Lactobacillus strains showed that this clone specifically recognized L. delbrueckii subsp. delbrueckii, bulgaricus, and lactis. The sensitivity of the method was tested by using an α-³²P-labeled probe.

  14. DNA probe for Lactobacillus delbrueckii

    International Nuclear Information System (INIS)

    Delley, M.; Mollet, B.; Hottinger, H.

    1990-01-01

    From a genomic DNA library of Lactobacillus delbrueckii subsp. bulgaricus, a clone was isolated which complements a leucine auxotrophy of an Escherichia coli strain (GE891). Subsequent analysis of the clone indicated that it could serve as a specific DNA probe. Dot-blot hybridizations with over 40 different Lactobacillus strains showed that this clone specifically recognized L. delbrueckii subsp. delbrueckii, bulgaricus, and lactis. The sensitivity of the method was tested by using an α-³²P-labeled probe.

  15. Introduction to optimal control theory

    International Nuclear Information System (INIS)

    Agrachev, A.A.

    2002-01-01

    These are lecture notes of the introductory course in Optimal Control theory treated from the geometric point of view. Optimal Control Problem is reduced to the study of controls (and corresponding trajectories) leading to the boundary of attainable sets. We discuss Pontryagin Maximum Principle, basic existence results, and apply these tools to concrete simple optimal control problems. Special sections are devoted to the general theory of linear time-optimal problems and linear-quadratic problems. (author)
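For reference, the Pontryagin Maximum Principle the notes build toward can be stated in a standard fixed-time, free-endpoint form (a common textbook formulation, not necessarily the exact statement used in the notes):

```latex
\[
  \dot{x} = f(x,u), \qquad
  J[u] = \int_0^T L\bigl(x(t),u(t)\bigr)\,dt \;\to\; \min,
\]
with Hamiltonian $H(x,p,u) = \langle p, f(x,u)\rangle - L(x,u)$.
Along an optimal pair $(x^*,u^*)$ there exists an adjoint trajectory $p(t)$ satisfying
\[
  \dot{p} = -\frac{\partial H}{\partial x}\bigl(x^*(t),p(t),u^*(t)\bigr),
  \qquad
  H\bigl(x^*(t),p(t),u^*(t)\bigr) = \max_{u}\, H\bigl(x^*(t),p(t),u\bigr)
  \quad \text{for a.e. } t.
\]
```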

  16. High quality-factor quartz tuning fork glass probe used in tapping mode atomic force microscopy for surface profile measurement

    Science.gov (United States)

    Chen, Yuan-Liu; Xu, Yanhao; Shimizu, Yuki; Matsukuma, Hiraku; Gao, Wei

    2018-06-01

    This paper presents a high quality-factor (Q-factor) quartz tuning fork (QTF) with a glass probe attached, used in frequency modulation tapping mode atomic force microscopy (AFM) for the surface profile metrology of micro and nanostructures. Unlike conventionally used QTFs, which have tungsten or platinum probes for tapping mode AFM, and suffer from a low Q-factor influenced by the relatively large mass of the probe, the glass probe, which has a lower density, increases the Q-factor of the QTF probe unit allowing it to obtain better measurement sensitivity. In addition, the process of attaching the probe to the QTF with epoxy resin, which is necessary for tapping mode AFM, is also optimized to further improve the Q-factor of the QTF glass probe. The Q-factor of the optimized QTF glass probe unit is demonstrated to be very close to that of a bare QTF without a probe attached. To verify the effectiveness and the advantages of the optimized QTF glass probe unit, the probe unit is integrated into a home-built tapping mode AFM for conducting surface profile measurements of micro and nanostructures. A blazed grating with fine tool marks of 100 nm, a microprism sheet with a vertical amplitude of 25 µm and a Fresnel lens with a steep slope of 90 degrees are used as measurement specimens. From the measurement results, it is demonstrated that the optimized QTF glass probe unit can achieve higher sensitivity as well as better stability than conventional probes in the measurement of micro and nanostructures.

  17. Optimal dynamic detection of explosives

    Energy Technology Data Exchange (ETDEWEB)

    Moore, David Steven [Los Alamos National Laboratory; Mcgrane, Shawn D [Los Alamos National Laboratory; Greenfield, Margo T [Los Alamos National Laboratory; Scharff, R J [Los Alamos National Laboratory; Rabitz, Herschel A [PRINCETON UNIV; Roslund, J [PRINCETON UNIV

    2009-01-01

    The detection of explosives is a notoriously difficult problem, especially at stand-off distances, due to their (generally) low vapor pressure, environmental and matrix interferences, and packaging. We are exploring optimal dynamic detection to exploit the best capabilities of recent advances in laser technology and recent discoveries in optimal shaping of laser pulses for control of molecular processes to significantly enhance the standoff detection of explosives. The core of the ODD-Ex technique is the introduction of optimally shaped laser pulses to simultaneously enhance sensitivity of explosives signatures while reducing the influence of noise and the signals from background interferents in the field (increase selectivity). These goals are being addressed by operating in an optimal nonlinear fashion, typically with a single shaped laser pulse inherently containing within it coherently locked control and probe sub-pulses. With sufficient bandwidth, the technique is capable of intrinsically providing orthogonal broad spectral information for data fusion, all from a single optimal pulse.

  18. Design - manufacturing and characterization of specific ultrasonic probes

    International Nuclear Information System (INIS)

    Petit, J.

    1985-01-01

    Optimization of ultrasonic examinations essentially requires precise determination of the parameters used in the manufacturing of probes and verification of the characteristics of the beams used. The system presented permits automatic determination of beam dimensions under conditions fully representative of those of their use. In the field of ultrasonic examinations, a good estimate or knowledge of sound beams is of great help in solving difficult examination problems. FRAMATOME's Centre d'Etude et de Recherche en Essais Non Destructifs (CEREND; Study and Research Center in Non-Destructive Testing) has developed and elaborated various techniques in order to improve ultrasonic examinations with specific probes. These techniques concern the design, manufacturing and characterization of these probes.

  19. A Miniature Probe for Ultrasonic Penetration of a Single Cell

    Directory of Open Access Journals (Sweden)

    Mingfei Xiao

    2009-05-01

    Although ultrasound cavitation must be avoided for safe diagnostic applications, the ability of ultrasound to disrupt cell membranes has taken on increasing significance as a method to facilitate drug and gene delivery. A new ultrasonic resonance driving method is introduced to penetrate rigid-wall plant cells or oocytes with springy cell membranes. With a suitable design, ultrasound can gather energy and increase the amplitude factor. Ultrasonic penetration enables exogenous materials to enter cells without damaging them by utilizing instant acceleration. This paper seeks to develop a miniature ultrasonic probe experiment system for cell penetration. A miniature ultrasonic probe is designed and optimized using the Precise Four-Terminal Network Method and the Finite Element Method (FEM), and an ultrasonic generator to drive the probe is designed. The system was able to successfully puncture a single fish cell.

  20. Optimization method development of the core characteristics of a fast reactor in order to explore possible high performance solutions (a solution being a consistent set of fuel, core, system and safety)

    International Nuclear Information System (INIS)

    Ingremeau, J.-J.X.

    2011-01-01

    In the study of any new nuclear reactor, the design of the core is an important step. However, designing and optimising a reactor core is quite complex, as it involves neutronics, thermal-hydraulics and fuel thermomechanics, and usually the design of such a system is achieved through an iterative process involving several different disciplines. In order to solve such a multi-disciplinary system quickly, while observing the appropriate constraints, a new approach has been developed to optimise both the core performance (in-cycle Pu inventory, fuel burn-up, etc.) and the core safety characteristics (safety estimators) of a Fast Neutron Reactor. This new approach, called FARM (Fast Reactor Methodology), uses analytical models and interpolations (meta-models) from CEA reference codes for neutronics, thermal-hydraulics and fuel behaviour, which are coupled to automatically design a core based on several optimization variables. This global core model is then linked to a genetic algorithm and used to explore and optimise new core designs with improved performance. Consideration has also been given to which parameters can best be used to define the core performance and how safety can be taken into account. This new approach has been used to optimize the design of three concepts of Gas cooled Fast Reactor (GFR). For the first one, using a SiC/SiCf-cladded carbide-fuelled helium-bonded pin, the results demonstrate that the CEA reference core obtained with the traditional iterative method was an optimal core, but among many other possibilities (that is to say, on the Pareto front). The optimization also found several other cores which exhibit some improved features at the expense of other safety or performance estimators. An evolution of this concept using a 'buffer', a new technology being developed at CEA, has hence been introduced in FARM. The FARM optimisation produced several core designs using this technology, and estimated their performance. The results obtained show that
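A multi-objective optimisation of this kind returns not a single core but the set of non-dominated designs, the Pareto front the abstract refers to. A minimal sketch of non-dominated filtering over hypothetical core designs, each scored on two objectives to be maximised:

```python
def pareto_front(designs):
    """Return the names of non-dominated designs, maximising every
    objective.  Each design is a (name, objectives-tuple) pair; a design
    is dominated if some other design is at least as good on every
    objective and strictly better on at least one.  Data are hypothetical."""
    front = []
    for name, obj in designs:
        dominated = any(
            all(o2 >= o1 for o1, o2 in zip(obj, other)) and
            any(o2 > o1 for o1, o2 in zip(obj, other))
            for _, other in designs)
        if not dominated:
            front.append(name)
    return front

# Hypothetical cores scored on (performance, safety-margin): D is
# dominated by B, so the front keeps the three trade-off designs.
cores = [("A", (5, 1)), ("B", (3, 3)), ("C", (1, 5)), ("D", (2, 2))]
print(pareto_front(cores))  # ['A', 'B', 'C']
```

A genetic algorithm such as FARM's repeatedly applies a filter like this to its population while generating new candidate designs from the meta-models.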

  1. Acoustics of the piezo-electric pressure probe

    Science.gov (United States)

    Dutt, G. S.

    1974-01-01

    Acoustical properties of a piezoelectric device are reported for measuring the pressure in the plasma flow from an MPD arc. A description and analysis of the acoustical behavior in a piezoelectric probe is presented for impedance matching and damping. The experimental results are presented in a set of oscillographic records.

  2. Improved analysis techniques for cylindrical and spherical double probes

    Energy Technology Data Exchange (ETDEWEB)

    Beal, Brian; Brown, Daniel; Bromaghim, Daron [Air Force Research Laboratory, 1 Ara Rd., Edwards Air Force Base, California 93524 (United States); Johnson, Lee [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., Pasadena, California 91109 (United States); Blakely, Joseph [ERC Inc., 1 Ara Rd., Edwards Air Force Base, California 93524 (United States)

    2012-07-15

    A versatile double Langmuir probe technique has been developed by incorporating analytical fits to Laframboise's numerical results for ion current collection by biased electrodes of various sizes relative to the local electron Debye length. Application of these fits to the double probe circuit has produced a set of coupled equations that express the potential of each electrode relative to the plasma potential as well as the resulting probe current as a function of applied probe voltage. These equations can be readily solved via standard numerical techniques in order to determine electron temperature and plasma density from probe current and voltage measurements. Because this method self-consistently accounts for the effects of sheath expansion, it can be readily applied to plasmas with a wide range of densities and low ion temperature (Ti/Te ≪ 1) without requiring probe dimensions to be asymptotically large or small with respect to the electron Debye length. The presented approach has been successfully applied to experimental measurements obtained in the plume of a low-power Hall thruster, which produced a quasineutral, flowing xenon plasma during operation at 200 W on xenon. The measured plasma densities and electron temperatures were in the range of 1×10¹²–1×10¹⁷ m⁻³ and 0.5–5.0 eV, respectively. The estimated measurement uncertainty is +6%/−34% in density and ±30% in electron temperature.
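The paper's method solves the coupled Laframboise-fit equations numerically; the classical symmetric double-probe estimate that those fits refine can be sketched as follows (hypothetical input values, and it deliberately omits the sheath-expansion correction that is the paper's contribution):

```python
import math

E = 1.602e-19  # elementary charge [C]

def double_probe_te_ne(i_sat, slope_at_origin, probe_area, ion_mass):
    """Classical symmetric double-probe estimates.
    Electron temperature [eV] from the slope of the I-V curve at the
    origin, dI/dV|0 = I_sat * e / (2 k Te); plasma density [m^-3] from
    the Bohm ion saturation current, I_sat = 0.61 n e A sqrt(e Te / M).
    Inputs: i_sat [A], slope_at_origin [A/V], probe_area [m^2],
    ion_mass [kg]."""
    te_ev = i_sat / (2.0 * slope_at_origin)
    bohm_speed = math.sqrt(E * te_ev / ion_mass)      # [m/s]
    n = i_sat / (0.61 * E * probe_area * bohm_speed)  # [m^-3]
    return te_ev, n

# Hypothetical xenon-plume measurement: 1 uA saturation current,
# 0.25 uA/V slope, 1 mm^2 collection area, xenon ion mass.
te, n = double_probe_te_ne(1e-6, 2.5e-7, 1e-6, 2.18e-25)
print(te)  # 2.0 eV
```

The resulting density lands inside the 10¹²–10¹⁷ m⁻³ window the paper reports, illustrating why the simple estimate is a reasonable starting point before applying the sheath-expansion fits.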

  3. Electric probe data analysis for glow discharge diagnostics

    International Nuclear Information System (INIS)

    Cain, B.L.

    1987-01-01

    This report summarizes the development and application of digital computations for the analysis of data from an electric probe used for glow discharge diagnostics. The essential physics of the probe/discharge interaction is presented, along with formulations from modern electric probe theory. These results are then digitally implemented by a set of computer programs which both calculate discharge properties of electron temperature and density, and aid in the interpretation of these property estimates. The method of analysis, and the theories selected for implementation, are valid only for low pressure, collisionless sheath, and quiescent discharges where the single electric probe has a much smaller area than the discharge reference electrode. However, certain algorithms are included which, in some cases, can extend the analysis into intermediate pressure regimes. The digital programs' functional capabilities are demonstrated by the analysis of experimental probe data, collected using a laboratory glow discharge. Typical sources of error inherent in the electric probe method are discussed, along with an analysis of error induced by the computational methods of the programs. 27 refs., 49 figs., 20 tabs

  4. Electromagnetic microscope compared with a conventional pulsed eddy-current probe

    Science.gov (United States)

    Podney, Walter N.

    1998-03-01

    A superconductive probe can presently detect a crack at a rivet hole that is two to three times smaller than the smallest crack detectable by a conventional probe. As the technology matures and noise resolution approaches a limit set by SQUIDs, approximately 1 fH, it will enable detection of submillimeter cracks down to approximately 15 mm.

  5. Using the lambda function to evaluate probe measurements of charged dielectric surfaces

    DEFF Research Database (Denmark)

    Rerup, T. O.; Crichton, George C; McAllister, Iain Wilson

    1996-01-01

    The use of Pedersen's λ function to evaluate electrostatic probe measurements of charged dielectric surfaces is demonstrated. With a knowledge of the probe λ function, the procedure by which this function is employed is developed, and thereafter applied to a set of experimental measurements avail...

  6. Probe suppression in conformal phased array

    CERN Document Server

    Singh, Hema; Neethu, P S

    2017-01-01

    This book considers a cylindrical phased array with microstrip patch antenna elements and half-wavelength dipole antenna elements. The effects of the platform and of mutual coupling are included in the analysis. The non-planar geometry is tackled by using Euler's transformation towards the calculation of the array manifold. Results are presented for both conducting and dielectric cylinders. The optimal weights obtained are used to generate an adapted pattern according to a given signal scenario. It is shown that the array, along with the adaptive algorithm, is able to cater to an arbitrary signal environment even when the platform effect and mutual coupling are taken into account. This book provides a step-by-step approach for analyzing probe suppression in non-planar geometry. Its detailed illustrations and analysis will make it a useful text for graduate and research students, scientists and engineers working in the area of phased arrays, low-observables and stealth technology.

  7. Workshop on Computational Optimization

    CERN Document Server

    2016-01-01

This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2014, held in Warsaw, Poland, September 7-10, 2014. The book presents recent advances in computational optimization. The volume includes important real problems such as parameter settings for controlling processes in bioreactors and other plants, resource-constrained project scheduling, infection distribution, molecular distance geometry, quantum computing, real-time management and optimal control, bin packing, medical image processing, and localization of abrupt atmospheric contamination sources. It shows how to develop algorithms for these problems based on new metaheuristic methods such as evolutionary computation, ant colony optimization, constraint programming and others. This research demonstrates how some real-world problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks.

  8. Encyclopedia of optimization

    CERN Document Server

    Pardalos, Panos

    2001-01-01

    Optimization problems are widespread in the mathematical modeling of real world systems and their applications arise in all branches of science, applied science and engineering. The goal of the Encyclopedia of Optimization is to introduce the reader to a complete set of topics in order to show the spectrum of recent research activities and the richness of ideas in the development of theories, algorithms and the applications of optimization. It is directed to a diverse audience of students, scientists, engineers, decision makers and problem solvers in academia, business, industry, and government.

  9. IVVS probe mechanical concept design

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, Paolo, E-mail: paolo.rossi@enea.it; Neri, Carlo; De Collibus, Mario Ferri; Mugnaini, Giampiero; Pollastrone, Fabio; Crescenzi, Fabio

    2015-10-15

Highlights: • ENEA designed, developed and tested a laser based In Vessel Viewing System (IVVS). • IVVS mechanical design has been revised from 2011 to 2013 to meet ITER requirements. • Main improvements are piezoceramic actuators and a step focus system. • Successful qualification activities validated the concept design for ITER environment. - Abstract: ENEA has been deeply involved in the design, development and testing of a laser based In Vessel Viewing System (IVVS) required for the inspection of ITER plasma-facing components. The IVVS probe shall be deployed into the vacuum vessel, providing high resolution images and metrology measurements to detect damage and possible erosion. ENEA has already designed and manufactured an IVVS probe prototype based on a rad-hard concept and driven by commercial micro-step motors, which demonstrated satisfactory viewing and metrology performance under room conditions. The probe sends a laser beam through a reflective rotating prism. By rotating the axes of the prism, the probe can scan all points in the environment except those in a shadow cone, and the backscattered light signal is then processed to measure the intensity level (viewing) and the distance from the probe (metrology). In recent years, in order to meet all the ITER environmental conditions, such as high vacuum, a gamma radiation lifetime dose up to 5 MGy, a cumulative neutron fluence of about 2.3 × 10¹⁷ n/cm², a temperature of 120 °C and a magnetic field of 8 T, the probe mechanical design was significantly revised, introducing a new actuating system based on piezoceramic actuators and improved with a new step focus system. The optical and mechanical schemes have then been modified and refined to also meet the geometrical constraints. The paper describes the mechanical concept design solutions adopted in order to fulfill the IVVS probe's functional performance requirements, considering the ITER working environment and geometrical constraints.

  10. Optimization of source and detector configurations based on Cramer-Rao lower bound analysis

    Science.gov (United States)

    Chen, Ling; Chen, Nanguang

    2011-03-01

Optimization of source and detector (SD) arrangements in a diffuse optical tomography system helps improve the measurements' sensitivity to localized changes in the imaging domain and enhances noise resistance. We introduce a rigorous and computationally efficient methodology, adapted to the diffuse optics field, for optimizing SD arrangements. Our method is based on Cramer-Rao lower bound analysis, which combines the diffusion forward model with a noise model. It can be used to assess the performance of SD arrangements through quantitative estimates of the lower bounds on the standard deviations of the reconstructed perturbation depths and values. More importantly, it provides these estimates directly, without solving the inverse problem. Simulations are conducted in the reflection geometry to validate the effectiveness of the method in selecting optimized SD sets, with a fixed number of sources and detectors, from an SD group on a planar probe surface. The impacts of different noise levels and target perturbation depths are considered in the simulations. It is demonstrated that the SD sets selected by this method yield better reconstructed images. The methodology can be adapted to other probe surfaces and other imaging geometries.
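The workflow described above, scoring SD arrangements via Cramer-Rao lower bounds from a forward-model Jacobian and a noise model, without solving the inverse problem, can be sketched in a few lines. The Jacobians and noise level below are illustrative stand-ins, not an actual diffusion model:

```python
import numpy as np

def crlb_std(J, noise_cov):
    """Cramer-Rao lower bounds on parameter standard deviations.

    J         : (m, p) Jacobian of the forward model w.r.t. the parameters
    noise_cov : (m, m) measurement-noise covariance
    Returns the square roots of the diagonal of the inverse Fisher matrix.
    """
    fisher = J.T @ np.linalg.inv(noise_cov) @ J
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# Toy comparison of two SD arrangements: the one whose Jacobian is better
# conditioned yields tighter (smaller) lower bounds, so it is preferred.
rng = np.random.default_rng(0)
J_good = rng.standard_normal((8, 2))                       # 8 SD pairs, 2 parameters
J_bad = J_good.copy()
J_bad[:, 1] = J_bad[:, 0] + 0.01 * rng.standard_normal(8)  # nearly redundant columns
noise = 0.01 * np.eye(8)
print(crlb_std(J_good, noise))   # tight bounds
print(crlb_std(J_bad, noise))    # much looser bounds
```

Ranking candidate SD sets by these bounds is cheap because only the forward model is linearized; no image reconstruction is needed.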

  11. Mapping Rotational Wavepacket Dynamics with Chirped Probe Pulses

    Science.gov (United States)

    Romanov, Dmitri; Odhner, Johanan; Levis, Robert

    2014-05-01

We develop an analytical model of strong-field pump-probe polarization spectroscopy of rotational transients in molecular gases for the case in which the probe pulse is considerably chirped: the frequency modulation over the pulse duration is comparable to the carrier frequency. In this scenario, a femtosecond pump laser pulse prepares a rotational wavepacket in a gas-phase sample at room temperature. The rotational revivals of the wavepacket are then mapped onto a chirped broadband probe pulse derived from a laser filament. Because the slowly varying envelope approximation is inapplicable, an alternative approach is proposed that incorporates the substantial chirp and the related temporal dispersion of refractive indices. Analytical expressions are obtained for the probe signal modulation over the interaction region and for the resulting heterodyned transient birefringence spectra. The dependence of the outputs on the probe pulse parameters reveals the trade-offs and the ways to optimize temporal-spectral imaging. The results are in good agreement with experiments on snapshot imaging of rotational revival patterns in nitrogen gas. We gratefully acknowledge financial support through AFOSR MURI Grant No. FA9550-10-1-0561.

  12. Nanomaterials and MRI molecular probe

    International Nuclear Information System (INIS)

    Inubushi, Toshiro

    2008-01-01

This paper presents the current state and future prospects of contrast-enhancing MRI probes that enable imaging of specific cells and molecules, mainly from the perspective of cell trafficking. Although MRI requires such probes for specific imaging, it has the advantage that anatomical images are simultaneously available, even during surgical operations and without radiation exposure, in contrast to X-ray CT, transillumination and positron emission tomography (PET). In the development of novel MRI molecular probes, a recent topic is cell-trafficking biology, in which cells involved in transplantation and immunological therapy can be traced. Although superparamagnetic iron oxide (SPIO) has been used as a commercially available contrast agent, this nanoparticle has problems such as difficulty penetrating cells and cytotoxicity. To address these, the authors have developed SPIO nanoparticles covered with a silica shell, which can be chemically modified, e.g., by binding fluorescent pigments, to allow bimodal MR molecular imaging. For penetration of the particles into cells, the envelope of Sendai virus is used. PET-CT has become more popular in recent years; however, MRI is superior to CT for imaging soft tissues, and the development of PET-MRI is actively under way, aiming at multimodal imaging. At present, molecular probes for MRI are far fewer than those for PET, and cooperative efforts to develop such probes are required across the medical, technological and pharmaceutical fields. (R.T.)

  13. Thermal motion of a holographically trapped SPM-like probe

    International Nuclear Information System (INIS)

    Simpson, Stephen H; Hanna, Simon

    2009-01-01

By holding a complex object in multiple optical traps, it may be harmonically bound with respect to both its position and its orientation. In this way a small probe, or nanotool, can be manipulated in three dimensions and used to measure and apply directed forces, in the manner of a scanning probe microscope. In this paper we evaluate the thermal motion of such a probe held in holographic optical tweezers, by solving the Langevin equation for the general case of a set of spherical vertices linked by cylindrical rods. The concept of a corner frequency, familiar from the case of an optically trapped sphere, is appropriately extended to a set of characteristic frequencies given by the eigenvalues of the product of the stiffness matrix and the inverse hydrodynamic resistance matrix of the tool. These eigenvalues may alternatively be interpreted as the inverses of a set of characteristic relaxation times of the system. The approach is illustrated by reference to a hypothetical tool consisting of a triangular arrangement of spheres with a lateral probe. The characteristic frequencies and theoretical resolution of the device are derived; variations of these quantities with tool size and orientation and with the optical power distribution are also considered.
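The characteristic frequencies described above, the eigenvalues of the product of the stiffness matrix and the inverse hydrodynamic resistance matrix, can be computed directly. The matrices below are arbitrary illustrative values, not those of the actual nanotool:

```python
import numpy as np

# Illustrative stiffness matrix K (N/m) and hydrodynamic resistance
# matrix R (N s/m) for a two-degree-of-freedom trapped object; the
# numerical values are placeholders chosen only for demonstration.
K = np.array([[2.0e-6, 0.2e-6],
              [0.2e-6, 1.0e-6]])
R = np.array([[1.0e-8, 0.1e-8],
              [0.1e-8, 2.0e-8]])

# Characteristic angular frequencies are the eigenvalues of K R^{-1};
# their inverses are the characteristic relaxation times.
omega = np.linalg.eigvals(K @ np.linalg.inv(R))
tau = 1.0 / omega
print("characteristic frequencies (rad/s):", np.sort(omega))
print("relaxation times (s):", np.sort(tau))
```

Because both matrices are symmetric positive definite, the eigenvalues of their product are real and positive, so each one behaves like an ordinary corner frequency along one eigenmode.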

  14. Scanning vector Hall probe microscopy

    International Nuclear Information System (INIS)

    Cambel, V.; Gregusova, D.; Fedor, J.; Kudela, R.; Bending, S.J.

    2004-01-01

We have developed a scanning vector Hall probe microscope for mapping the magnetic field vector over magnetic samples. The microscope is based on a micromachined Hall sensor and a cryostat with a scanning system. The vector Hall sensor's active area is ∼5×5 μm². It is realized by patterning three Hall probes on the tilted faces of GaAs pyramids. Data from these 'tilted' Hall probes are used to reconstruct the full magnetic field vector. The scanning area of the microscope is 5×5 mm², with a spatial resolution of 2.5 μm and a field resolution of ∼1 μT Hz⁻¹/² at temperatures of 10-300 K
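Reconstructing the full field vector from the three tilted Hall probes amounts to inverting a small geometry matrix, since each probe measures the field component along its face normal. The tilt angle and field values below are illustrative assumptions, not the sensor's actual geometry:

```python
import numpy as np

# Assumed geometry: three Hall probes on pyramid faces tilted by theta
# from the base plane, spaced 120 degrees apart in azimuth.
theta = np.deg2rad(35.0)
normals = np.array([
    [np.sin(theta) * np.cos(a), np.sin(theta) * np.sin(a), np.cos(theta)]
    for a in np.deg2rad([0.0, 120.0, 240.0])
])

B_true = np.array([3.0e-6, -1.0e-6, 5.0e-6])   # field vector, tesla
readings = normals @ B_true                     # the three probe signals

# Invert the 3x3 geometry matrix to recover the full field vector.
B_rec = np.linalg.solve(normals, readings)
print(B_rec)  # recovers B_true
```

Any tilt with non-degenerate normals works; the matrix only becomes singular if the three face normals fail to span three dimensions.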

  15. Spaser as a biological probe

    Science.gov (United States)

    Galanzha, Ekaterina I.; Weingold, Robert; Nedosekin, Dmitry A.; Sarimollaoglu, Mustafa; Nolan, Jacqueline; Harrington, Walter; Kuchyanov, Alexander S.; Parkhomenko, Roman G.; Watanabe, Fumiya; Nima, Zeid; Biris, Alexandru S.; Plekhanov, Alexander I.; Stockman, Mark I.; Zharov, Vladimir P.

    2017-06-01

    Understanding cell biology greatly benefits from the development of advanced diagnostic probes. Here we introduce a 22-nm spaser (plasmonic nanolaser) with the ability to serve as a super-bright, water-soluble, biocompatible probe capable of generating stimulated emission directly inside living cells and animal tissues. We have demonstrated a lasing regime associated with the formation of a dynamic vapour nanobubble around the spaser that leads to giant spasing with emission intensity and spectral width >100 times brighter and 30-fold narrower, respectively, than for quantum dots. The absorption losses in the spaser enhance its multifunctionality, allowing for nanobubble-amplified photothermal and photoacoustic imaging and therapy. Furthermore, the silica spaser surface has been covalently functionalized with folic acid for molecular targeting of cancer cells. All these properties make a nanobubble spaser a promising multimodal, super-contrast, ultrafast cellular probe with a single-pulse nanosecond excitation for a variety of in vitro and in vivo biomedical applications.

  16. Experimental studies with a stimulated Raman backscatter probe beam in laser-irradiated plasmas

    International Nuclear Information System (INIS)

    Jiang, Z.M.; Meng, S.X.; Xu, Z.Z.

    1986-01-01

This paper reports on the optical diagnostic experiments performed with a stimulated Raman backscatter probe beam set up recently on the six-beam Nd:glass laser facility for laser fusion research at the Shanghai Institute of Optics and Fine Mechanics

  17. DNA Probe for Lactobacillus delbrueckii

    Science.gov (United States)

    Delley, Michèle; Mollet, Beat; Hottinger, Herbert

    1990-01-01

From a genomic DNA library of Lactobacillus delbrueckii subsp. bulgaricus, a clone was isolated which complements a leucine auxotrophy of an Escherichia coli strain (GE891). Subsequent analysis of the clone indicated that it could serve as a specific DNA probe. Dot-blot hybridizations with over 40 different Lactobacillus strains showed that this clone specifically recognizes L. delbrueckii subsp. delbrueckii, bulgaricus, and lactis. The sensitivity of the method was tested by using an α-³²P-labeled DNA probe. PMID:16348233

  18. Radical probing of spliceosome assembly.

    Science.gov (United States)

    Grewal, Charnpal S; Kent, Oliver A; MacMillan, Andrew M

    2017-08-01

    Here we describe the synthesis and use of a directed hydroxyl radical probe, tethered to a pre-mRNA substrate, to map the structure of this substrate during the spliceosome assembly process. These studies indicate an early organization and proximation of conserved pre-mRNA sequences during spliceosome assembly. This methodology may be adapted to the synthesis of a wide variety of modified RNAs for use as probes of RNA structure and RNA-protein interaction. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Architectural Probes of the Infraordinary

    DEFF Research Database (Denmark)

    Lunde Nielsen, Espen

    2017-01-01

    of the city plays a vital role for the social coexistence of and the correlation between its inhabitants. In an era of explosive growth of our cities, it is crucial to critically examine the everyday social dimension, if our cities are to be liveable in the future. To enquire into the everyday topography...... approaches for probing into and interrogating the infraordinary: frameworks of perception and situated probes. Both are deployed in order to get at distance of the familiar and by-pass the usual hierarchies of perception to gain new knowledge. These critical spatial practices span an interdisciplinary...

  20. Detecting device of atomic probe

    International Nuclear Information System (INIS)

    Nikonenkov, N.V.

    1979-01-01

The operation of an atomic-probe recording device is discussed in detail and its flowsheet is given. The basic elements of the atomic-probe recording device, intended for microanalysis of metals and alloys at the atomic level, are a storage oscillograph with a raster-sweep unit, a two-channel timer using frequency meters, a digital printer, and a control unit. The digital printer records information supplied by four digital devices (two frequency meters and two digital voltmeters) in a four-digit binary-decimal code. The described device provides simultaneous recording of the two ions produced per evaporation event

  1. Probing nuclear matter with dileptons

    International Nuclear Information System (INIS)

    Schroeder, L.S.

    1986-06-01

    Dileptons are shown to be of interest in helping probe extreme conditions of temperature and density in nuclear matter. The current state of experimental knowledge about dileptons is briefly described, and their use in upcoming experiments with light ions at CERN SPS are reviewed, including possible signatures of quark matter formation. Use of dileptons in an upcoming experiment with a new spectrometer at Berkeley is also discussed. This experiment will probe the nuclear matter equation of state at high temperature and density. 16 refs., 8 figs

  2. Radioactive Probes on Ferromagnetic Surfaces

    CERN Multimedia

    2002-01-01

    On the (broad) basis of our studies of nonmagnetic radioactive probe atoms on magnetic surfaces and at interfaces, we propose to investigate the magnetic interaction of magnetic probe atoms with their immediate environment, in particular of rare earth (RE) elements positioned on and in ferromagnetic surfaces. The preparation and analysis of the structural properties of such samples will be performed in the UHV chamber HYDRA at the HMI/Berlin. For the investigations of the magnetic properties of RE atoms on surfaces Perturbed Angular Correlation (PAC) measurements and Mössbauer Spectroscopy (MS) in the UHV chamber ASPIC (Apparatus for Surface Physics and Interfaces at CERN) are proposed.

  3. Automatic emissive probe apparatus for accurate plasma and vacuum space potential measurements

    Science.gov (United States)

    Jianquan, LI; Wenqi, LU; Jun, XU; Fei, GAO; Younian, WANG

    2018-02-01

We have developed an automatic emissive probe apparatus, based on the improved inflection point method of the emissive probe, for accurate measurements of both plasma potential and vacuum space potential. The apparatus consists of a computer-controlled data acquisition card, a working circuit composed of a biasing unit and a heating unit, and an emissive probe. Once the probe bias scan, the probe heating current and the fitting range are set, the apparatus automatically executes the improved inflection point method and reports the measured result. The validity of the automatic emissive probe apparatus is demonstrated in a test measurement of the vacuum potential distribution between two parallel plates, showing an excellent accuracy of 0.1 V. Plasma potential was also measured, demonstrating the efficiency and convenience of the apparatus for space potential measurements.
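The core numerical step of the inflection point method, locating the inflection point of the probe's I-V characteristic, can be sketched on a synthetic curve. The tanh shape and the 4 V space potential below are illustrative, not data from the paper:

```python
import numpy as np

# Synthetic emissive-probe I-V characteristic: a smooth step whose
# inflection point sits at a known space potential V_s = 4.0 V.
V = np.linspace(-10.0, 20.0, 3001)
V_s = 4.0
I = np.tanh((V - V_s) / 2.0)

# The inflection point is where dI/dV is maximal, i.e. where
# d2I/dV2 crosses zero.
dIdV = np.gradient(I, V)
V_inflection = V[np.argmax(dIdV)]
print(V_inflection)  # close to 4.0 V
```

In the actual method this step is repeated at several heating currents and the inflection voltages are extrapolated to zero emission, which the automated apparatus handles once the scan parameters are set.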

  4. A practical model for pressure probe system response estimation (with review of existing models)

    Science.gov (United States)

    Hall, B. F.; Povey, T.

    2018-04-01

    The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
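As a point of comparison for such models, a crude first estimate of a pneumatic line's bandwidth limit is the quarter-wave resonance of the tube. This is a textbook acoustics estimate that ignores the transducer volume and viscous damping; it is not the model presented in the paper:

```python
import math

def quarter_wave_frequency(line_length_m, temp_c=20.0):
    """Crude quarter-wave estimate of the first resonance of a pneumatic
    line terminated by a transducer (closed end). Ignores the transducer
    cavity volume and viscous damping, so it only bounds the bandwidth."""
    # Speed of sound in air as a function of temperature, m/s.
    c = 331.3 * math.sqrt(1.0 + temp_c / 273.15)
    return c / (4.0 * line_length_m)

# A 0.5 m line resonates near 170 Hz; the usable flat bandwidth of the
# probe system is typically well below this first resonance.
print(round(quarter_wave_frequency(0.5)), "Hz")
```

The estimate makes the practical trade-off explicit: halving the line length doubles the resonance, which is why short lines are preferred for traversing probes that must resolve steep spatial gradients.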

  5. Optimization of externalities using DTM measures: a Pareto optimal multi objective optimization using the evolutionary algorithm SPEA2+

    NARCIS (Netherlands)

    Wismans, Luc Johannes Josephus; van Berkum, Eric C.; Bliemer, Michiel; Allkim, T.P.; van Arem, Bart

    2010-01-01

    Multi objective optimization of externalities of traffic is performed solving a network design problem in which Dynamic Traffic Management measures are used. The resulting Pareto optimal set is determined by employing the SPEA2+ evolutionary algorithm.
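The Pareto-optimal set mentioned above is the set of non-dominated solutions. A minimal dominance filter, assuming all objectives are minimized (the externality values below are invented for illustration), can be sketched as:

```python
def pareto_front(points):
    """Return the non-dominated points, with every objective minimized.

    A point p is dominated if some q is no worse in every objective
    and strictly better in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and
            any(q[i] < p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Toy externality trade-off: (emissions, travel time) per DTM setting.
solutions = [(3.0, 9.0), (4.0, 6.0), (6.0, 4.0), (5.0, 7.0), (8.0, 8.0)]
print(pareto_front(solutions))  # the three non-dominated trade-off points
```

Evolutionary algorithms such as SPEA2+ maintain an archive of exactly such non-dominated solutions while searching, rather than collapsing the objectives into a single weighted sum.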

  6. Rough set classification based on quantum logic

    Science.gov (United States)

    Hassan, Yasser F.

    2017-11-01

By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be combined with quantum logic for classification and recognition systems. We propose a new definition of rough set theory in terms of quantum logic. Rough approximations are essential elements of rough set theory; the quantum rough set model for set-valued data constructs set approximations directly, based on a kind of quantum similarity relation presented here. Theoretical analyses demonstrate that the new quantum rough set model yields a new type of decision rule with less redundancy, which can be used to produce accurate classifications using the principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt to define rough sets in a quantum representation rather than in terms of logic or sets. Experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets at finding optimal classifications.
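For context, the classical lower and upper approximations that the quantum model above generalizes can be sketched as follows; the partition and target set are toy values:

```python
def rough_approximations(equiv_classes, target):
    """Classical rough-set approximations of `target`.

    equiv_classes : partition of the universe into indiscernibility classes
    target        : the set of objects to approximate
    """
    lower, upper = set(), set()
    for cls in equiv_classes:
        if cls <= target:    # class lies entirely inside the target set
            lower |= cls
        if cls & target:     # class overlaps the target set at all
            upper |= cls
    return lower, upper

# Toy decision table: objects grouped by identical attribute values.
classes = [{1, 2}, {3, 4}, {5}]
target = {1, 2, 3}           # objects carrying the decision label
lo, up = rough_approximations(classes, target)
print(lo)  # {1, 2}: certainly in the class
print(up)  # {1, 2, 3, 4}: possibly in the class
```

Objects in the upper but not the lower approximation form the boundary region, which is where decision rules become uncertain; the quantum model replaces the crisp equivalence relation with a similarity relation.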

  7. Fabrication of tungsten probe for hard tapping operation in atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Han, Guebum, E-mail: hanguebum@live.co.kr [Department of Physics and Optical Engineering, Rose-Hulman Institute of Technology, 5500 Wabash Avenue, Terre Haute, Indiana 47803 (United States); Department of Mechanical Design and Robot Engineering, Seoul National University of Science and Technology, 232 Gongneung-ro, Nowon-gu, Seoul 139-743 (Korea, Republic of); Ahn, Hyo-Sok, E-mail: hsahn@seoultech.ac.kr [Manufacturing Systems and Design Engineering Programme, Seoul National University of Science & Technology, 232 Gongneung-ro, Nowon-gu, Seoul 139-743 (Korea, Republic of)

    2016-02-15

    We propose a method of producing a tungsten probe with high stiffness for atomic force microscopy (AFM) in order to acquire enhanced phase contrast images and efficiently perform lithography. A tungsten probe with a tip radius between 20 nm and 50 nm was fabricated using electrochemical etching optimized by applying pulse waves at different voltages. The spring constant of the tungsten probe was determined by finite element analysis (FEA), and its applicability as an AFM probe was evaluated by obtaining topography and phase contrast images of a Si wafer sample partly coated with Au. Enhanced hard tapping performance of the tungsten probe compared with a commercial Si probe was confirmed by conducting hard tapping tests at five different oscillation amplitudes on single layer graphene grown by chemical vapor deposition (CVD). To analyze the damaged graphene sample, the test areas were investigated using tip-enhanced Raman spectroscopy (TERS). The test results demonstrate that the tungsten probe with high stiffness was capable of inducing sufficient elastic and plastic deformation to enable obtaining enhanced phase contrast images and performing lithography, respectively. - Highlights: • We propose a method of producing highly stiff tungsten probes for hard tapping AFM. • Spring constant of tungsten probe is determined by finite element method. • Enhanced hard tapping performance is confirmed. • Tip-enhanced Raman spectroscopy is used to identify damage to graphene.

  8. Fabrication of tungsten probe for hard tapping operation in atomic force microscopy

    International Nuclear Information System (INIS)

    Han, Guebum; Ahn, Hyo-Sok

    2016-01-01

    We propose a method of producing a tungsten probe with high stiffness for atomic force microscopy (AFM) in order to acquire enhanced phase contrast images and efficiently perform lithography. A tungsten probe with a tip radius between 20 nm and 50 nm was fabricated using electrochemical etching optimized by applying pulse waves at different voltages. The spring constant of the tungsten probe was determined by finite element analysis (FEA), and its applicability as an AFM probe was evaluated by obtaining topography and phase contrast images of a Si wafer sample partly coated with Au. Enhanced hard tapping performance of the tungsten probe compared with a commercial Si probe was confirmed by conducting hard tapping tests at five different oscillation amplitudes on single layer graphene grown by chemical vapor deposition (CVD). To analyze the damaged graphene sample, the test areas were investigated using tip-enhanced Raman spectroscopy (TERS). The test results demonstrate that the tungsten probe with high stiffness was capable of inducing sufficient elastic and plastic deformation to enable obtaining enhanced phase contrast images and performing lithography, respectively. - Highlights: • We propose a method of producing highly stiff tungsten probes for hard tapping AFM. • Spring constant of tungsten probe is determined by finite element method. • Enhanced hard tapping performance is confirmed. • Tip-enhanced Raman spectroscopy is used to identify damage to graphene.

  9. Colorimetric DNA detection of transgenic plants using gold nanoparticles functionalized with L-shaped DNA probes

    Science.gov (United States)

    Nourisaeid, Elham; Mousavi, Amir; Arpanaei, Ayyoob

    2016-01-01

    In this study, a DNA colorimetric detection system based on gold nanoparticles functionalized with L-shaped DNA probes was prepared and evaluated. We investigated the hybridization efficiency of the L-shaped probes and studied the effect of nanoparticle size and the L-shaped DNA probe length on the performance of the as-prepared system. Probes were attached to the surface of gold nanoparticles using an adenine sequence. An optimal sequence of 35S rRNA gene promoter from the cauliflower mosaic virus, which is frequently used in the development of transgenic plants, and the two complementary ends of this gene were employed as model target strands and probe molecules, respectively. The spectrophotometric properties of the as-prepared systems indicated that the large NPs show better changes in the absorption spectrum and consequently present a better performance. The results of this study revealed that the probe/Au-NPs prepared using a vertical spacer containing 5 thymine oligonucleotides exhibited a stronger spectrophotometric response in comparison to that of larger probes. These results in general indicate the suitable performance of the L-shaped DNA probe-functionalized Au-NPs, and in particular emphasize the important role of the gold nanoparticle size and length of the DNA probes in enhancing the performance of such a system.

  10. Optimization of the working distance of an ion microprobe-forming system

    International Nuclear Information System (INIS)

    Melnik, K.I.; Magilin, D.V.; Ponomarev, A.G.

    2009-01-01

A high-resolution ion microprobe necessitates a small working distance (the distance from the final quadrupole lens of a probe-forming system to the specimen) in order to produce a large demagnification. At the same time, a small working distance is the source of a number of practical difficulties. We present an approach for determining a working distance that provides the best spatial resolution while taking the main practical limitations into account, using the probe-forming system's acceptance as the criterion of optimality. The calculations reveal the existence of an optimal working distance in a set of common probe-forming systems, but it can be achieved only by changing the design of the final quadrupole lens. We propose a possible conical lens design that solves the problem of detector placement and yields a short-focus system. Three-dimensional calculations of the magnetic field within this lens predict a good-quality field structure.

  11. Alternative technique to neutron probe calibration in situ

    International Nuclear Information System (INIS)

    Encarnacao, F.; Carneiro, C.; Dall'Olio, A.

    1990-01-01

An alternative technique for in situ neutron probe calibration was applied to a Podzolic soil. Under field conditions, the neutron probe calibration was performed using a special arrangement that prevented the lateral movement of water around the probe's access tube. During the experiments, successive amounts of water were uniformly infiltrated through the soil profile. Two plots were set up to study the effect of plot dimensions on the slope of the calibration curve. The results showed that the amounts of water transferred to the soil profile were significantly correlated with the integrals of count ratio along the soil profile in both plots. In consequence, the slope of the calibration curve under field conditions was determined. (author)
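The calibration step, fitting a straight line relating the neutron count ratio to the water content, is an ordinary linear least-squares fit; a minimal sketch with synthetic data (the numbers below are invented, not the paper's measurements):

```python
import numpy as np

# Synthetic calibration data: integrated count ratio along the profile
# versus cumulative water infiltrated (mm). Values are illustrative only.
count_ratio = np.array([0.40, 0.55, 0.68, 0.82, 0.95])
water_mm = np.array([20.0, 50.0, 75.0, 105.0, 130.0])

# Fit water = slope * count_ratio + intercept by least squares; the
# slope is the field calibration coefficient of the probe.
slope, intercept = np.polyfit(count_ratio, water_mm, 1)
r = np.corrcoef(count_ratio, water_mm)[0, 1]
print(f"slope = {slope:.1f} mm per unit count ratio, r = {r:.3f}")
```

A high correlation coefficient is what justifies the in situ approach: if the water added correlates tightly with the count-ratio integrals, the fitted slope serves as the field calibration curve.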

  12. Persistence of microbial contamination on transvaginal ultrasound probes despite low-level disinfection procedure.

    Directory of Open Access Journals (Sweden)

    Fatima M'Zali

AIM OF THE STUDY: In many countries, Low Level Disinfection (LLD) of covered transvaginal ultrasound probes is recommended between patient examinations. The aim of this study was to evaluate the antimicrobial efficacy of LLD under routine conditions on a range of microorganisms. MATERIALS AND METHODS: Samples were taken over a six-month period in a private French radiology center. 300 specimens derived from endovaginal ultrasound probes were analyzed after disinfection of the probe with wipes impregnated with a quaternary ammonium compound and chlorhexidine. Human papillomavirus (HPV) was sought in the first set of 100 samples, Chlamydia trachomatis and mycoplasmas in the second set of 100, and bacteria and fungi in the third set of 100. HPV, C. trachomatis and mycoplasmas were detected by PCR amplification. PCR-positive samples were subjected to a nuclease treatment before an additional PCR assay to assess likely viable microorganisms. Bacteria and fungi were investigated by conventional methods. RESULTS: A substantial persistence of microorganisms was observed on the disinfected probes: HPV DNA was found on 13% of the samples, 7% in nuclease-resistant form. C. trachomatis DNA was detected on 20% of the probes by primary PCR but on only 2% after nuclease treatment, while mycoplasma DNA was amplified in 8% and 4%, respectively. Commensal and/or environmental bacterial flora was present on 86% of the probes, occasionally in mixed culture, at various levels (10 to >3000 CFU/probe); Staphylococcus aureus was cultured from 4% of the probes (10-560 CFU/probe). No fungi were isolated. CONCLUSION: Our findings raise concerns about the efficacy of impregnated towels as the sole means of disinfecting ultrasound probes.
Although the ultrasound probes are used with disposable covers, our results highlight the potential risk of cross contamination between patients during ultrasound examination and emphasize the need for reviewing

  13. Slope constrained Topology Optimization

    DEFF Research Database (Denmark)

    Petersson, J.; Sigmund, Ole

    1998-01-01

    The problem of minimum compliance topology optimization of an elastic continuum is considered. A general continuous density-energy relation is assumed, including variable thickness sheet models and artificial power laws. To ensure existence of solutions, the design set is restricted by enforcing...

  14. Novel Probes of Gravity and Dark Energy

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Bhuvnesh; et al.

    2013-09-20

    The discovery of cosmic acceleration has stimulated theorists to consider dark energy or modifications to Einstein's General Relativity as possible explanations. The last decade has seen advances in theories that go beyond smooth dark energy -- modified gravity and interactions of dark energy. While the theoretical terrain is being actively explored, the generic presence of fifth forces and dark sector couplings suggests a set of distinct observational signatures. This report focuses on observations that differ from the conventional probes that map the expansion history or large-scale structure. Examples of such novel probes are: detection of scalar fields via lab experiments, tests of modified gravity using stars and galaxies in the nearby universe, comparison of lensing and dynamical masses of galaxies and clusters, and the measurements of fundamental constants at high redshift. The observational expertise involved is very broad as it spans laboratory experiments, high resolution astronomical imaging and spectroscopy and radio observations. In the coming decade, searches for these effects have the potential for discovering fundamental new physics. We discuss how the searches can be carried out using experiments that are already under way or with modest adaptations of existing telescopes or planned experiments. The accompanying paper on the Growth of Cosmic Structure describes complementary tests of gravity with observations of large-scale structure.

  15. First course in optimization

    CERN Document Server

    Byrne, Charles L

    2014-01-01

Optimization without Calculus Chapter Summary The Arithmetic Mean-Geometric Mean Inequality An Application of the AGM Inequality: the Number e Extending the AGM Inequality Optimization Using the AGM Inequality The Holder and Minkowski Inequalities Cauchy's Inequality Optimizing using Cauchy's Inequality An Inner Product for Square Matrices Discrete Allocation Problems Geometric Programming Chapter Summary An Example of a GP Problem Posynomials and the GP Problem The Dual GP Problem Solving the GP Problem Solving the DGP Problem Constrained Geometric Programming Basic Analysis Chapter Summary Minima and Infima Limits Completeness Continuity Limsup and Liminf Another View Semi-Continuity Convex Sets Chapter Summary The Geometry of Real Euclidean Space A Bit of Topology Convex Sets in RJ More on Projections Linear and Affine Operators on RJ The Fundamental Theorems Block-Matrix Notation Theorems of the Alternative Another Proof of Farkas' Lemma Gordan's Theorem Revisited Vector Spaces and Matrices Chapter Summary...

  16. Characterization of near-field optical probes

    DEFF Research Database (Denmark)

    Vohnsen, Brian; Bozhevolnyi, Sergey I.

    1999-01-01

Radiation and collection characteristics of four different near-field optical-fiber probes, namely, three uncoated probes and an aluminium-coated small-aperture probe, are investigated and compared. Their radiation properties are characterized by observation of light-induced topography changes...... in a photo-sensitive film illuminated with the probes, and it is confirmed that the radiated optical field is unambiguously confined only for the coated probe. Near-field optical imaging of a standing evanescent-wave pattern is used to compare the detection characteristics of the probes, and it is concluded...... that, for the imaging of optical-field intensity distributions containing predominantly evanescent-wave components, a sharp uncoated tip is the probe of choice. Complementary results obtained with optical phase-conjugation experiments with the uncoated probes are discussed in relation to the probe...

  17. Nonlinear optimization

    CERN Document Server

    Ruszczynski, Andrzej

    2011-01-01

    Optimization is one of the most important areas of modern applied mathematics, with applications in fields from engineering and economics to finance, statistics, management science, and medicine. While many books have addressed its various aspects, Nonlinear Optimization is the first comprehensive treatment that will allow graduate students and researchers to understand its modern ideas, principles, and methods within a reasonable time, but without sacrificing mathematical precision. Andrzej Ruszczynski, a leading expert in the optimization of nonlinear stochastic systems, integrates the theory and the methods of nonlinear optimization in a unified, clear, and mathematically rigorous fashion, with detailed and easy-to-follow proofs illustrated by numerous examples and figures. The book covers convex analysis, the theory of optimality conditions, duality theory, and numerical methods for solving unconstrained and constrained optimization problems. It addresses not only classical material but also modern top...

  18. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
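As a concrete illustration of the simplest uncertainty set discussed in this abstract, the robust counterpart of a single linear constraint a·x ≤ b under interval (box) uncertainty replaces each coefficient by its worst case. A minimal numeric sketch, with made-up coefficient values:

```python
import itertools

import numpy as np

# Interval (box) uncertainty: each coefficient a_j lies in
# [a_nom_j - delta_j, a_nom_j + delta_j]. The robust counterpart of
# a @ x <= b is the closed form a_nom @ x + delta @ |x| <= b.
a_nom = np.array([2.0, -1.0, 3.0])   # nominal coefficients (illustrative)
delta = np.array([0.5, 0.2, 1.0])    # half-widths of the intervals
x = np.array([1.0, -2.0, 0.5])       # a candidate solution

robust_lhs = a_nom @ x + delta @ np.abs(x)

# Sanity check: brute-force the worst case over the 2**3 box vertices.
worst = max((a_nom + np.array(s) * delta) @ x
            for s in itertools.product([-1.0, 1.0], repeat=3))
assert np.isclose(robust_lhs, worst)
```

Tighter sets such as the ellipsoidal and combined interval-ellipsoidal-polyhedral sets studied in the paper trade this worst-case absolute-value term for less conservative norm terms.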

  19. Nuclear physics with electroweak probes

    International Nuclear Information System (INIS)

    Benhar, Omar

    2009-01-01

    In recent years, the Italian theoretical Nuclear Physics community has played a leading role in the development of a unified approach, allowing for a consistent and fully quantitative description of the nuclear response to electromagnetic and weak probes. In this paper I review the main achievements in both fields, point out some of the open problems, and outline the most promising prospects.

  20. Resolution analysis by random probing

    NARCIS (Netherlands)

    Fichtner, Andreas; van Leeuwen, T.

    2015-01-01

    We develop and apply methods for resolution analysis in tomography, based on stochastic probing of the Hessian or resolution operators. Key properties of our methods are (i) low algorithmic complexity and easy implementation, (ii) applicability to any tomographic technique, including full‐waveform
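The stochastic-probing idea can be sketched with a Hutchinson-style estimator of the diagonal of a Hessian-like operator, using only operator-vector products (no explicit matrix inverse or SVD). The random stand-in matrix below is an assumption for illustration, not the tomographic Hessian of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
H = A @ A.T  # symmetric PSD stand-in for a tomographic Hessian

# Hutchinson-style probing: diag(H) ~ mean over probes of z * (H @ z),
# with z a random Rademacher (+/-1) vector. Each term needs only one
# Hessian-vector product, so H never has to be formed or inverted.
num_probes = 2000
est = np.zeros(n)
for _ in range(num_probes):
    z = rng.choice([-1.0, 1.0], size=n)
    est += z * (H @ z)
est /= num_probes

rel_err = np.linalg.norm(est - np.diag(H)) / np.linalg.norm(np.diag(H))
```

In tomography the same recipe applies with H replaced by the Hessian- or resolution-operator action computed via adjoint (e.g. full-waveform) simulations, which is what keeps the algorithmic complexity low.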