WorldWideScience

Sample records for highly-sensitive regular analyses

  1. Hospital Standardized Mortality Ratios: Sensitivity Analyses on the Impact of Coding

    Science.gov (United States)

    Bottle, Alex; Jarman, Brian; Aylin, Paul

    2011-01-01

    Introduction Hospital standardized mortality ratios (HSMRs) are derived from administrative databases and cover 80 percent of in-hospital deaths with adjustment for available case mix variables. They have been criticized for being sensitive to issues such as clinical coding but on the basis of limited quantitative evidence. Methods In a set of sensitivity analyses, we compared regular HSMRs with HSMRs resulting from a variety of changes, such as a patient-based measure, not adjusting for comorbidity, not adjusting for palliative care, excluding unplanned zero-day stays ending in live discharge, and using more or fewer diagnoses. Results Overall, regular and variant HSMRs were highly correlated (ρ > 0.8), but differences of up to 10 points were common. Two hospitals were particularly affected when palliative care was excluded from the risk models. Excluding unplanned stays ending in same-day live discharge had the least impact despite their high frequency. The largest impacts were seen when capturing postdischarge deaths and using just five high-mortality diagnosis groups. Conclusions HSMRs in most hospitals changed by only small amounts from the various adjustment methods tried here, though small-to-medium changes were not uncommon. However, the position relative to funnel plot control limits could move in a significant minority even with modest changes in the HSMR. PMID:21790587
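
    As a numeric illustration of the quantities above (not the authors' code), the Python sketch below computes an HSMR and approximate funnel-plot control limits for a hypothetical hospital; the Poisson-based 1.96/sqrt(E) approximation for the limits is an assumption of this sketch, not the paper's exact method.

```python
import math

def hsmr(observed_deaths: int, expected_deaths: float) -> float:
    """Hospital standardized mortality ratio, scaled so that 100 = as expected."""
    return 100.0 * observed_deaths / expected_deaths

def funnel_limits(expected_deaths: float, z: float = 1.96):
    # Approximate 95% control limits around 100 under Poisson variation:
    # the standard deviation of O/E is roughly 1/sqrt(E) for large E.
    half_width = 100.0 * z / math.sqrt(expected_deaths)
    return 100.0 - half_width, 100.0 + half_width

ratio = hsmr(observed_deaths=230, expected_deaths=200.0)  # hypothetical hospital
lo, hi = funnel_limits(200.0)
print(f"HSMR = {ratio:.1f}, 95% funnel limits = ({lo:.1f}, {hi:.1f})")
# A coding change that shifts E by a few percent can move a hospital across
# these limits even when the HSMR itself changes only modestly.
```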

  2. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
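
    The following minimal Python sketch contrasts the two approaches on a toy three-input model; finite differences stand in for adjoint-computed partial derivatives, and the model and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in for a complex physical/chemical performance model.
    return x[0] ** 2 + 3.0 * x[1] + np.exp(0.1 * x[2])

mean = np.array([1.0, 2.0, 3.0])
var = np.array([0.04, 0.09, 0.25])

# Deterministic approach: first-order Taylor expansion about the mean, with
# partial derivatives approximated here by finite differences (an adjoint
# code would supply them directly).
eps = 1e-6
grad = np.array([
    (model(mean + eps * np.eye(3)[i]) - model(mean)) / eps for i in range(3)
])
taylor_mean = model(mean)
taylor_var = np.sum(grad ** 2 * var)

# Statistical approach: simulation (Monte Carlo) through the model.
samples = rng.normal(mean, np.sqrt(var), size=(100_000, 3))
outputs = np.apply_along_axis(model, 1, samples)

print(f"Taylor:      mean={taylor_mean:.3f}  var={taylor_var:.3f}")
print(f"Monte Carlo: mean={outputs.mean():.3f}  var={outputs.var():.3f}")
```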

  3. Safety and sensitivity analyses of a generic geologic disposal system for high-level radioactive waste

    International Nuclear Information System (INIS)

    Kimura, Hideo; Takahashi, Tomoyuki; Shima, Shigeki; Matsuzuru, Hideo

    1994-11-01

    This report describes safety and sensitivity analyses of a generic geologic disposal system for HLW, using the GSRW code and an automated sensitivity analysis methodology based on differential algebra. The exposure scenario considered here is based on a normal evolution scenario which excludes events attributable to probabilistic alterations in the environment. The results of sensitivity analyses indicate that parameters related to a homogeneous rock surrounding a disposal facility have higher sensitivities to the output analyzed here than those of a fractured zone and engineered barriers. The sensitivity analysis methodology provides technical information which might serve as a basis for optimizing the design of the disposal facility. Safety analyses were performed on the reference disposal system, which involves HLW in amounts corresponding to 16,000 MTU of spent fuel. The individual dose equivalent due to the exposure pathway of ingesting drinking water was calculated using both conservative and realistic values of geochemical parameters. In both cases, the committed dose equivalent evaluated here is of the order of 10^-7 Sv, and thus geologic disposal of HLW may be feasible if the disposal conditions assumed here remain unchanged throughout the periods assessed. (author)

  4. Sensitivity in risk analyses with uncertain numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

    Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a 'pinching' strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
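
    A minimal sketch of the 'pinching' idea in Python, assuming a toy model that is monotone in each interval-valued (epistemic) input; a real Dempster-Shafer or probability bounds analysis would use dedicated structures, so this fragment only illustrates how pinching quantifies potential uncertainty reduction.

```python
import itertools

def interval_response(load, strength):
    # Toy "reliability margin" model: strength minus load. With interval
    # inputs, the output range is found by checking interval corners
    # (valid here because the model is monotone in each input).
    corners = itertools.product(load, strength)
    values = [s - l for l, s in corners]
    return min(values), max(values)

load = (4.0, 6.0)        # epistemically uncertain input (interval)
strength = (5.0, 9.0)    # epistemically uncertain input (interval)

base_lo, base_hi = interval_response(load, strength)
base_width = base_hi - base_lo

# "Pinch" the load to a point value and measure how much output
# uncertainty would disappear if its epistemic uncertainty were removed.
pinched_lo, pinched_hi = interval_response((5.0, 5.0), strength)
reduction = 1.0 - (pinched_hi - pinched_lo) / base_width
print(f"uncertainty reduction from pinching load: {reduction:.0%}")
```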

  5. Substructural Regularization With Data-Sensitive Granularity for Sequence Transfer Learning.

    Science.gov (United States)

    Sun, Shichang; Liu, Hongbo; Meng, Jiana; Chen, C L Philip; Yang, Yu

    2018-06-01

    Sequence transfer learning is of interest in both academia and industry with the emergence of numerous new text domains from Twitter and other social media tools. In this paper, we put forward a data-sensitive granularity for transfer learning and then propose a novel substructural regularization transfer learning model (STLM) that preserves target domain features at substructural granularity in light of the labeled data set size. Our model is underpinned by hidden Markov models and regularization theory, where the substructural representation can be integrated as a penalty after measuring the dissimilarity of substructures between the target domain and STLM with relative entropy. STLM can achieve the competing goals of preserving the target domain substructure and utilizing the observations from both the target and source domains simultaneously. The estimation of STLM is very efficient since an analytical solution can be derived as a necessary and sufficient condition. The relative usability of substructures to act as regularization parameters and the time complexity of STLM are also analyzed and discussed. Comprehensive experiments of part-of-speech tagging with both Brown and Twitter corpora fully justify that our model can make improvements on all the combinations of source and target domains.

  6. Subcortical processing of speech regularities underlies reading and music aptitude in children

    Science.gov (United States)

    2011-01-01

    Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech, as well as to auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input.

  7. Subcortical processing of speech regularities underlies reading and music aptitude in children.

    Science.gov (United States)

    Strait, Dana L; Hornickel, Jane; Kraus, Nina

    2011-10-17

    Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech, as well as to auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input. Definition of common biological underpinnings…

  8. Subcortical processing of speech regularities underlies reading and music aptitude in children

    Directory of Open Access Journals (Sweden)

    Strait Dana L

    2011-10-01

    Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech, as well as to auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to…

  9. Uncertainty and Sensitivity Analyses Plan

    International Nuclear Information System (INIS)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models, where the values of many parameters are unknown. This plan provides thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  10. Accelerated safety analyses - structural analyses Phase I - structural sensitivity evaluation of single- and double-shell waste storage tanks

    International Nuclear Information System (INIS)

    Becker, D.L.

    1994-11-01

    Accelerated Safety Analyses - Phase I (ASA-Phase I) have been conducted to assess the appropriateness of existing tank farm operational controls and/or limits as now stipulated in the Operational Safety Requirements (OSRs) and Operating Specification Documents, and to establish a technical basis for the waste tank operating safety envelope. Structural sensitivity analyses were performed to assess the response of the different waste tank configurations to variations in loading conditions, uncertainties in loading parameters, and uncertainties in material characteristics. Extensive documentation of the sensitivity analyses conducted and the results obtained is provided in the detailed ASA-Phase I report, Structural Sensitivity Evaluation of Single- and Double-Shell Waste Tanks for Accelerated Safety Analysis - Phase I. This document provides a summary of the accelerated safety analyses sensitivity evaluations and the resulting findings.

  11. How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?

    Science.gov (United States)

    Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J

    2004-01-01

    There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. The aim was to investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature, in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost…

  12. How the definition of acceptable antigens and epitope analysis can facilitate transplantation of highly sensitized patients with excellent long-term graft survival.

    Science.gov (United States)

    Heidt, Sebastiaan; Haasnoot, Geert W; Claas, Frans H J

    2018-05-24

    Highly sensitized patients awaiting a renal transplant have a low chance of receiving an organ offer. Defining acceptable antigens and using this information for allocation purposes can vastly enhance transplantation of this subgroup of patients, which is the essence of the Eurotransplant Acceptable Mismatch program. Acceptable antigens can be determined by extensive laboratory testing, as well as on the basis of human leukocyte antigen (HLA) epitope analyses. Within the Acceptable Mismatch program, there is no effect of HLA mismatches on long-term graft survival. Furthermore, patients transplanted through the Acceptable Mismatch program have similar long-term graft survival to nonsensitized patients transplanted through regular allocation. Although HLA epitope analysis is already being used for defining acceptable HLA antigens for highly sensitized patients in the Acceptable Mismatch program, increasing knowledge of HLA antibody–epitope interactions will pave the way toward the definition of acceptable epitopes for highly sensitized patients in the future. Allocation based on acceptable antigens can facilitate transplantation of highly sensitized patients with excellent long-term graft survival.

  13. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
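
    As a hedged illustration of the first strategy, the sketch below solves the l1-regularized least-squares problem with a basic iterative soft-thresholding (ISTA) solver and selects the parameter where the normalized residual and solution norms are jointly small; the random matrix and sparse "damage" vector are assumptions for illustration, not the paper's beam or frame models.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - b)      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 100))
x_true = np.zeros(100); x_true[[7, 42]] = [1.5, -2.0]   # sparse "damage"
b = A @ x_true + 0.01 * rng.normal(size=50)

lams = np.logspace(-3, 1, 30)
res, sol = [], []
for lam in lams:
    x = ista(A, b, lam)
    res.append(np.linalg.norm(A @ x - b))     # residual (data fidelity) norm
    sol.append(np.linalg.norm(x, 1))          # solution size

# Pick lambda where both normalized norms are small: the L-curve corner.
res, sol = np.array(res), np.array(sol)
score = (res / res.max()) ** 2 + (sol / sol.max()) ** 2
print(f"selected regularization parameter: {lams[np.argmin(score)]:.4f}")
```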

  14. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information that may be useful in determining the distribution of Tc-99 in the various SDUs over time, and in determining flow balances for the SDUs, was extracted from the analyses.

  15. A geometric buckling expression for regular polygons: II. Analyses based on the multiple reciprocity boundary element method

    International Nuclear Information System (INIS)

    Itagaki, Masafumi; Miyoshi, Yoshinori; Hirose, Hideyuki

    1993-01-01

    A procedure is presented for the determination of geometric buckling for regular polygons. A new computation technique, the multiple reciprocity boundary element method (MRBEM), has been applied to solve the one-group neutron diffusion equation. The main difficulty in applying the ordinary boundary element method (BEM) to neutron diffusion problems has been the need to compute a domain integral, resulting from the fission source. The MRBEM has been developed for transforming this type of domain integral into an equivalent boundary integral. The basic idea of the MRBEM is to apply repeatedly the reciprocity theorem (Green's second formula) using a sequence of higher order fundamental solutions. The MRBEM requires discretization of the boundary only rather than of the domain. This advantage is useful for extensive survey analyses of buckling for complex geometries. The results of survey analyses have indicated that the general form of geometric buckling is B_g^2 = (a_n/R_c)^2, where R_c represents the radius of the circumscribed circle of the regular polygon under consideration. The geometric constant a_n depends on the type of regular polygon, taking the value of π for a square and 2.405 for a circle, an extreme case that has an infinite number of sides. Values of a_n for a triangle, pentagon, hexagon, and octagon have been calculated as 4.190, 2.281, 2.675, and 2.547, respectively.
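
    A small numeric illustration of the reported form, with the geometric constants a_n taken directly from the abstract above; the R_c value is an arbitrary example.

```python
import math

# Geometric constants a_n quoted in the abstract, indexed by polygon type;
# R_c is the radius of the circumscribed circle.
A_N = {"triangle": 4.190, "square": math.pi, "pentagon": 2.281,
       "hexagon": 2.675, "octagon": 2.547, "circle": 2.405}

def geometric_buckling(shape: str, r_circumscribed: float) -> float:
    """B_g^2 = (a_n / R_c)^2 for a regular polygon."""
    return (A_N[shape] / r_circumscribed) ** 2

print(f"square, R_c = 50 cm: B_g^2 = {geometric_buckling('square', 50.0):.6f} cm^-2")
```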

  16. The Effect of Ocular Surface Regularity on Contrast Sensitivity and Straylight in Dry Eye

    OpenAIRE

    Koh, Shizuka; Maeda, Naoyuki; Ikeda, Chikako; Asonuma, Sanae; Ogawa, Mai; Hiraoka, Takahiro; Oshika, Tetsuro; Nishida, Kohji

    2017-01-01

    Purpose: To investigate the association between visual function and ocular surface regularity in dry eye. Methods: We enrolled 52 eyes of 52 dry eye patients (34 dry eyes with superficial punctate keratopathy [SPK] in the central corneal region [central SPK] and 18 dry eyes without central SPK) and 20 eyes of 20 normal control subjects. All eyes had a best-corrected distance visual acuity better than 20/20. We measured two indices of contrast sensitivity function under photopic conditions: con...

  17. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    Science.gov (United States)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory, which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed, and regularizations such as damping and smoothing are often applied to analyze the trade-off between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional trade-off analysis using surface wave dispersion measurements from global as well as regional studies.
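
    The sketch below illustrates Tikhonov regularization in the SVD framework described above; the random matrix stands in for a real sensitivity matrix, and picking lambda against a known training model is a crude stand-in for the empirical Bayes risk minimization.

```python
import numpy as np

def tikhonov_svd(G, d, lam):
    """Tikhonov solution of min ||Gm - d||^2 + lam^2 ||m||^2 via SVD,
    returning the model estimate and the model resolution matrix."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    f = s ** 2 / (s ** 2 + lam ** 2)          # Tikhonov filter factors
    m = Vt.T @ (f / s * (U.T @ d))
    R = Vt.T @ np.diag(f) @ Vt                # resolution matrix
    return m, R

rng = np.random.default_rng(2)
G = rng.normal(size=(80, 60))                 # stand-in sensitivity matrix
m_true = rng.normal(size=60)                  # synthetic training model
d = G @ m_true + 0.05 * rng.normal(size=80)

# Crude stand-in for the Bayes-risk minimization: choose the lambda that
# minimizes model error on the theoretical training dataset.
lams = np.logspace(-2, 2, 40)
errs = [np.linalg.norm(tikhonov_svd(G, d, lam)[0] - m_true) for lam in lams]
print(f"best lambda on training model: {lams[int(np.argmin(errs))]:.3f}")
```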

  18. Uncertainty and sensitivity analyses for age-dependent unavailability model integrating test and maintenance

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Highlights: (1) Application of an analytical unavailability model integrating test and maintenance (T&M), ageing, and test strategy. (2) Ageing data uncertainty propagation at system level assessed via Monte Carlo simulation. (3) The uncertainty impact grows with the extension of the surveillance test interval. (4) Calculated system unavailability depends on two different sensitivity-study ageing databases. (5) System unavailability sensitivity insights regarding specific groups of basic events (BEs) as test intervals extend. Abstract: Interest in operational lifetime extension of existing nuclear power plants is growing. Consequently, plant life management programs, which consider the ageing of safety components, are being developed and employed. Ageing represents a gradual degradation of the physical properties and functional performance of different components, implying their reduced availability. Analyses made in support of nuclear power plant lifetime extension are based upon component ageing management programs. On the other hand, the large uncertainties of the ageing parameters, as well as the uncertainties associated with most reliability data collections, are widely acknowledged. This paper addresses uncertainty and sensitivity analyses conducted using a previously developed age-dependent unavailability model, integrating the effects of test and maintenance activities, for a selected stand-by safety system in a nuclear power plant. The most important problem is the lack of data concerning the effects of ageing, as well as the relatively high uncertainty associated with these data, which would be needed for more detailed modelling of ageing. A standard Monte Carlo simulation was coded for this paper and used to assess the propagation of component ageing parameter uncertainty to the system level. The results of the uncertainty analysis indicate the extent to which the uncertainty of the selected…

  19. msgbsR: An R package for analysing methylation-sensitive restriction enzyme sequencing data.

    Science.gov (United States)

    Mayne, Benjamin T; Leemaqz, Shalem Y; Buckberry, Sam; Rodriguez Lopez, Carlos M; Roberts, Claire T; Bianco-Miotto, Tina; Breen, James

    2018-02-01

    Genotyping-by-sequencing (GBS) or restriction-site associated DNA marker sequencing (RAD-seq) is a practical and cost-effective method for analysing large genomes from high diversity species. This method of sequencing, coupled with methylation-sensitive enzymes (often referred to as methylation-sensitive restriction enzyme sequencing or MRE-seq), is an effective tool to study DNA methylation in parts of the genome that are inaccessible in other sequencing techniques or are not annotated in microarray technologies. Current software tools do not fulfil all methylation-sensitive restriction sequencing assays for determining differences in DNA methylation between samples. To fill this computational need, we present msgbsR, an R package that contains tools for the analysis of methylation-sensitive restriction enzyme sequencing experiments. msgbsR can be used to identify and quantify read counts at methylated sites directly from alignment files (BAM files) and enables verification of restriction enzyme cut sites with the correct recognition sequence of the individual enzyme. In addition, msgbsR assesses DNA methylation based on read coverage, similar to RNA sequencing experiments, rather than methylation proportion, and is a useful tool in analysing differential methylation in large populations. The package is fully documented and available freely online as a Bioconductor package (https://bioconductor.org/packages/release/bioc/html/msgbsR.html).

  20. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  1. Balancing data sharing requirements for analyses with data sensitivity

    Science.gov (United States)

    Jarnevich, C.S.; Graham, J.J.; Newman, G.J.; Crall, A.W.; Stohlgren, T.J.

    2007-01-01

    Data sensitivity can pose a formidable barrier to data sharing. Knowledge of species' current distributions from data sharing is critical for the creation of watch lists and an early warning/rapid response system, and for model generation for the spread of invasive species. We have created an on-line system to synthesize disparate datasets of non-native species locations that includes a mechanism to account for data sensitivity. Data contributors are able to mark their data as sensitive. These data are then 'fuzzed' to quarter-quadrangle grid cells in mapping applications and downloaded files, but the actual locations are available for analyses. We propose that this system overcomes the hurdles to data sharing posed by sensitive data. © 2006 Springer Science+Business Media B.V.

  2. Sensitivity of surface meteorological analyses to observation networks

    Science.gov (United States)

    Tyndall, Daniel Paul

    A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large-domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform-independent application (i.e., it can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight on the influence of observations from heterogeneous observing networks, as well as serving as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.
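
    A toy illustration of the underlying computation, assuming the optimal-interpolation form of a variational analysis on a one-dimensional grid; ranking observations by the transposed gain applied to the analysis increment is a simplified stand-in for the thesis's adjoint-based sensitivity and impact metrics.

```python
import numpy as np

rng = np.random.default_rng(9)

n_grid, n_obs = 400, 25                      # many grid points, few observations
H = np.zeros((n_obs, n_grid))                # observation operator (point obs)
H[np.arange(n_obs), rng.choice(n_grid, n_obs, replace=False)] = 1.0

# Simple homogeneous background error covariance and diagonal obs errors.
x_grid = np.arange(n_grid)
B = np.exp(-np.abs(x_grid[:, None] - x_grid[None, :]) / 20.0)
R = 0.25 * np.eye(n_obs)

xb = np.zeros(n_grid)                        # background field
y = rng.normal(1.0, 0.5, size=n_obs)         # observations

# Optimal-interpolation form of the variational analysis.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)

# Per-observation influence: adjoint (transpose) of the gain applied to
# the analysis increment, then ranked as a percentile-style ordering.
impact = np.abs(K.T @ (xa - xb))
rank = np.argsort(impact)[::-1]
print("most influential observations:", rank[:5])
```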

  3. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data: parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters; (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
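
    A simplified sketch of the joint shrinkage idea: each variable's sample variance is shrunk toward a single pooled value before forming a t-like statistic. The paper's clustering-based local pooling is more elaborate; the prior weight and data here are illustrative assumptions.

```python
import numpy as np

def regularized_t(x, y, prior_df=4.0):
    """Moderated two-sample t-like statistic per variable: each sample
    variance is shrunk toward the pooled (across-variable) variance.
    A crude stand-in for the paper's cluster-based local pooling."""
    nx, ny = x.shape[0], y.shape[0]
    sx2 = x.var(axis=0, ddof=1)
    sy2 = y.var(axis=0, ddof=1)
    s2 = ((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)
    s2_pooled = s2.mean()                    # information shared across variables
    df = nx + ny - 2
    s2_shrunk = (prior_df * s2_pooled + df * s2) / (prior_df + df)
    se = np.sqrt(s2_shrunk * (1 / nx + 1 / ny))
    return (x.mean(axis=0) - y.mean(axis=0)) / se

rng = np.random.default_rng(3)
p, nx, ny = 5000, 4, 4                       # many variables, tiny samples
x = rng.normal(0.0, 1.0, size=(nx, p))
y = rng.normal(0.0, 1.0, size=(ny, p))
t = regularized_t(x, y)
print(f"max |moderated t| under the null: {np.abs(t).max():.2f}")
```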

  4. Application of the Tikhonov regularization method to wind retrieval from scatterometer data I. Sensitivity analysis and simulation experiments

    International Nuclear Information System (INIS)

    Zhong Jian; Huang Si-Xun; Du Hua-Dong; Zhang Liang

    2011-01-01

    The scatterometer is an instrument which provides all-day, large-scale wind field information, and its application, especially to wind retrieval, has always attracted meteorologists. Various factors cause large direction errors, so it is important to find out where the error mainly comes from: the background field, the normalized radar cross-section (NRCS), or the wind retrieval method? First, based on SDP2.0, the simulated ‘true’ NRCS is calculated from the simulated ‘true’ wind through the geophysical model function NSCAT2. The simulated background field is configured by adding noise to the simulated ‘true’ wind under a non-divergence constraint. Likewise, the simulated ‘measured’ NRCS is formed by adding noise to the simulated ‘true’ NRCS. Then sensitivity experiments are performed, and the new regularization method is used to improve ambiguity removal in simulation experiments. The results show that the accuracy of wind retrieval is more sensitive to noise in the background than in the measured NRCS; compared with the two-dimensional variational (2DVAR) ambiguity removal method, the accuracy of wind retrieval can be improved with the new Tikhonov regularization method by choosing an appropriate regularization parameter, especially in the case of large background error. This work will provide important information and a new method for wind retrieval with real data.

  5. Sensitivity analyses of biodiesel thermo-physical properties under diesel engine conditions

    DEFF Research Database (Denmark)

    Cheng, Xinwei; Ng, Hoon Kiat; Gan, Suyin

    2016-01-01

    This reported work investigates the sensitivities of spray and soot developments to changes in thermo-physical properties for coconut and soybean methyl esters, using two-dimensional computational fluid dynamics fuel spray modelling. The test fuels were chosen for their contrasting saturation-unsaturation compositions. The sensitivity analyses for non-reacting and reacting sprays were carried out against a total of 12 thermo-physical properties, at an ambient temperature of 900 K and a density of 22.8 kg/m³. For the sensitivity analyses, all the thermo-physical properties were set as the baseline case and each property was individually replaced by that of diesel. The significance of each thermo-physical property was determined based on the deviations found in predictions such as liquid penetration, ignition delay period and peak soot concentration when compared to those of the baseline…

  6. More on zeta-function regularization of high-temperature expansions

    International Nuclear Information System (INIS)

    Actor, A.

    1987-01-01

    A recent paper using the Riemann ζ-function to regularize the (divergent) coefficients occurring in the high-temperature expansions of one-loop thermodynamic potentials is extended. This method proves to be a powerful tool for converting Dirichlet-type series Σ_m a_m(x_i)/m^s into power series in the dimensionless parameters x_i. The coefficients occurring in the power series are (proportional to) ζ-functions evaluated away from their poles - this is where the regularization occurs. High-temperature expansions are just one example of this highly nontrivial rearrangement of Dirichlet series into power series form. We discuss in considerable detail series in which a_m(x_i) is a product of trigonometric, algebraic and Bessel function factors. The ζ-function method is carefully explained, and a large number of new formulae are provided. The means to generalize these formulae are also provided. Previous results on thermodynamic potentials are generalized to include a nonzero constant term in the gauge potential (time component) which can be used to probe the electric sector of temperature gauge theories. (author)

  7. A multiresolution method for solving the Poisson equation using high order regularization

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Walther, Jens Honore

    2016-01-01

    We present a novel high order multiresolution Poisson solver based on regularized Green's function solutions to obtain exact free-space boundary conditions while using fast Fourier transforms for computational efficiency. Multiresolution is achieved through local refinement patches and regularized Green's functions corresponding to the difference in the spatial resolution between the patches. The full solution is obtained by utilizing the linearity of the Poisson equation, enabling super-position of solutions. We show that the multiresolution Poisson solver produces convergence rates…

  8. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel, but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling, and their usefulness in this context is demonstrated.
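
    The fragment below mirrors two of the listed tasks (stratified sampling of parameter space and rank-correlation sensitivity indices) on a toy SIR-type response; SaSAT itself is a Matlab toolbox, so this Python sketch and its parameter ranges are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def latin_hypercube(n_samples, n_params, rng):
    """Stratified LHS on [0, 1]^d: one sample per equal-probability stratum."""
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        rng.shuffle(u[:, j])                 # decorrelate the columns
    return u

def epidemic_peak(beta, gamma):
    # Toy SIR peak-prevalence formula (illustrative response, not from SaSAT).
    r0 = beta / gamma
    return np.where(r0 > 1, 1 - 1 / r0 - np.log(r0) / r0, 0.0)

rng = np.random.default_rng(4)
u = latin_hypercube(500, 2, rng)
beta = 0.1 + 0.9 * u[:, 0]                   # transmission rate range (assumed)
gamma = 0.05 + 0.45 * u[:, 1]                # recovery rate range (assumed)
peak = epidemic_peak(beta, gamma)

# Rank correlations as simple sensitivity indices.
for name, param in [("beta", beta), ("gamma", gamma)]:
    rho, _ = stats.spearmanr(param, peak)
    print(f"Spearman rho({name}, peak) = {rho:+.2f}")
```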

  9. Sensitivity and uncertainty analyses of the HCLL mock-up experiment

    International Nuclear Information System (INIS)

    Leichtle, D.; Fischer, U.; Kodeli, I.; Perel, R.L.; Klix, A.; Batistoni, P.; Villari, R.

    2010-01-01

    Within the European Fusion Technology Programme, dedicated computational methods, tools and data have been developed and validated for sensitivity and uncertainty analyses of fusion neutronics experiments. The present paper is devoted to this kind of analysis for the recent neutronics experiment on a mock-up of the Helium-Cooled Lithium Lead Test Blanket Module for ITER at the Frascati neutron generator. The analyses comprise both probabilistic and deterministic methodologies for assessing the uncertainties of nuclear responses due to nuclear data uncertainties, and their sensitivities to the involved reaction cross-section data. We have used the MCNP and MCSEN codes in the Monte Carlo approach, and DORT and SUSD3D in the deterministic approach, for transport and sensitivity calculations, respectively. In both cases the JEFF-3.1 and FENDL-2.1 libraries were used for the transport data, and mainly the ENDF/B-VI.8 and SCALE6.0 libraries for the relevant covariance data. With a few exceptions, the two methodological approaches were shown to provide consistent results. A total nuclear data related uncertainty in the range of 1-2% (1σ confidence level) was assessed for the tritium production in the HCLL mock-up experiment.

  10. Pre-waste-emplacement ground-water travel time sensitivity and uncertainty analyses for Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Kaplan, P.G.

    1993-01-01

    Yucca Mountain, Nevada is a potential site for a high-level radioactive-waste repository. Uncertainty and sensitivity analyses were performed to estimate the critical factors in the performance of the site with respect to a criterion expressed in terms of pre-waste-emplacement ground-water travel time. The degree to which the analytical model fails to meet the criterion is sensitive to the estimate of fracture porosity in the upper welded unit of the problem domain. Fracture porosity is derived from a number of more fundamental measurements, including fracture frequency, fracture orientation, and the moisture-retention characteristic inferred for the fracture domain.

  11. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of a regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
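
    A generic sketch of learning under the regularized MCC idea, assuming a linear predictor trained by simple gradient ascent on Gaussian-kernel correntropy with an L2 penalty; this is not the authors' alternating optimization algorithm.

```python
import numpy as np

def train_mcc_classifier(X, y, lam=0.1, sigma=1.0, lr=0.5, n_iter=300):
    """Linear predictor trained by maximizing Gaussian-kernel correntropy
    between predictions and labels, minus an L2 penalty on the weights."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        e = X @ w - y                                   # prediction errors
        k = np.exp(-e ** 2 / (2 * sigma ** 2))          # correntropy kernel
        # Gradient of mean correntropy minus lam * ||w||^2 / 2; the kernel
        # weight k downplays samples with large (outlying) errors.
        grad = -(k * e / sigma ** 2) @ X / len(y) - lam * w
        w += lr * grad
    return w

rng = np.random.default_rng(5)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(200, 3))
y = np.sign(X @ w_true)
y[:20] = -y[:20]                                        # noisy, outlying labels
w = train_mcc_classifier(X, y)
acc = np.mean(np.sign(X @ w) == np.sign(X @ w_true))
print(f"agreement with clean labels: {acc:.2%}")
```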

  13. Toward robust high resolution fluorescence tomography: a hybrid row-action edge preserving regularization

    Science.gov (United States)

    Behrooz, Ali; Zhou, Hao-Min; Eftekhar, Ali A.; Adibi, Ali

    2011-02-01

    Depth-resolved localization and quantification of fluorescence distribution in tissue, called Fluorescence Molecular Tomography (FMT), is highly ill-conditioned, as depth information must be extracted from a limited number of surface measurements. Inverse solvers resort to regularization algorithms that penalize the Euclidean norm of the solution to overcome ill-posedness. While these regularization algorithms offer good accuracy, their smoothing effects result in continuous distributions which lack the high-frequency edge-type features of the actual fluorescence distribution and hence limit the resolution offered by FMT. We propose an algorithm that penalizes the total variation (TV) norm of the solution to preserve sharp transitions and high-frequency components in the reconstructed fluorescence map while overcoming ill-posedness. The hybrid algorithm is composed of two levels: 1) an Algebraic Reconstruction Technique (ART), performed on FMT data for fast recovery of a smooth solution that serves as an initial guess for the iterative TV regularization; 2) a time-marching TV regularization algorithm, inspired by the Rudin-Osher-Fatemi TV image restoration, performed on the initial guess to further enhance the resolution and accuracy of the reconstruction. The performance of the proposed method in resolving fluorescent tubes inserted in a liquid tissue phantom imaged by a non-contact CW trans-illumination FMT system is studied and compared to conventional regularization schemes. It is observed that the proposed method performs better in resolving fluorescence inclusions at higher depths.
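
    A one-dimensional sketch of the two-level hybrid, assuming a random stand-in forward model: a few Kaczmarz (ART) sweeps give a smooth initial guess, and a crude subgradient step on the TV functional stands in for the Rudin-Osher-Fatemi time-marching regularization.

```python
import numpy as np

def art_sweep(A, b, x, relax=0.2):
    """One Kaczmarz (ART) sweep over all measurement rows."""
    for i in range(A.shape[0]):
        a = A[i]
        x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

def tv_smooth(x, weight=0.05, n_iter=50):
    """Crude 1D total-variation smoothing by subgradient descent,
    standing in for the Rudin-Osher-Fatemi time-marching step."""
    u = x.copy()
    for _ in range(n_iter):
        grad_fidelity = u - x
        grad_tv = np.sign(np.diff(u, prepend=u[0])) - np.sign(np.diff(u, append=u[-1]))
        u -= 0.1 * (grad_fidelity + weight * grad_tv)
    return u

rng = np.random.default_rng(6)
x_true = np.zeros(100); x_true[30:40] = 1.0            # sharp fluorescent inclusion
A = rng.random((60, 100))                              # stand-in forward model
b = A @ x_true

x = np.zeros(100)
for _ in range(20):
    x = art_sweep(A, b, x)                             # level 1: fast smooth guess
x = tv_smooth(x)                                       # level 2: edge-preserving step
print(f"edge contrast after TV step: {x[30:40].mean() - x[:20].mean():.2f}")
```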

  14. From recreational to regular drug use

    DEFF Research Database (Denmark)

    Järvinen, Margaretha; Ravn, Signe

    2011-01-01

    This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms…

  15. Scenario sensitivity analyses performed on the PRESTO-EPA LLW risk assessment models

    International Nuclear Information System (INIS)

    Bandrowski, M.S.

    1988-01-01

    The US Environmental Protection Agency (EPA) is currently developing standards for the land disposal of low-level radioactive waste. As part of the standard development, EPA has performed risk assessments using the PRESTO-EPA codes. A program of sensitivity analysis was conducted on the PRESTO-EPA codes, consisting of single parameter sensitivity analysis and scenario sensitivity analysis. The results of the single parameter sensitivity analysis were discussed at the 1987 DOE LLW Management Conference. Specific scenario sensitivity analyses have been completed and evaluated. Scenario assumptions that were analyzed include: site location, disposal method, form of waste, waste volume, analysis time horizon, critical radionuclides, use of buffer zones, and global health effects

  16. Probabilistic and Nonprobabilistic Sensitivity Analyses of Uncertain Parameters

    Directory of Open Access Journals (Sweden)

    Sheng-En Fang

    2014-01-01

    Parameter sensitivity analyses have been widely applied to industrial problems for evaluating parameter significance, effects on responses, uncertainty influence, and so forth. In the interest of simple implementation and computational efficiency, this study has developed two sensitivity analysis methods corresponding to the situations with or without sufficient probability information. The probabilistic method is established with the aid of the stochastic response surface, and the mathematical derivation proves that the coefficients of the first-order items embody the parameter main effects on the response. Simultaneously, a nonprobabilistic interval-analysis-based method is brought forward for the circumstance when the parameter probability distributions are unknown. The two methods have been verified against a numerical beam example, with their accuracy compared to that of a traditional variance-based method. The analysis results demonstrate the reliability and accuracy of the developed methods, and their suitability for different situations is also discussed.
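
    The sketch below illustrates the core probabilistic step under simplifying assumptions: a first-order response surface is fitted to samples of a toy cantilever-deflection model in standardized parameters, and its linear coefficients are read as parameter main effects.

```python
import numpy as np

rng = np.random.default_rng(7)

def beam_deflection(E, L, I):
    # Toy response: cantilever tip deflection under unit end load, L^3 / (3 E I).
    return L ** 3 / (3.0 * E * I)

# Sample standardized parameters and fit a first-order response surface;
# the first-order coefficients are then read as parameter main effects.
n = 200
Z = rng.normal(size=(n, 3))                  # standardized E, L, I
E = 210e9 * (1 + 0.05 * Z[:, 0])             # assumed nominal values and spreads
L = 2.0 * (1 + 0.02 * Z[:, 1])
I = 8e-6 * (1 + 0.10 * Z[:, 2])
y = beam_deflection(E, L, I)

X = np.column_stack([np.ones(n), Z])         # intercept + linear terms
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(["E", "L", "I"], coef[1:]):
    print(f"main effect of {name}: {c:+.3e}")
```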

  17. Comparison of Ultrasound-Assisted and Regular Leaching of Vanadium and Chromium from Roasted High Chromium Vanadium Slag

    Science.gov (United States)

    Wen, Jing; Jiang, Tao; Gao, Huiyang; Liu, Yajing; Zheng, Xiaole; Xue, Xiangxin

    2018-02-01

    Ultrasound-assisted leaching (UAL) was used for vanadium and chromium leaching from roasted material obtained by the calcification roasting of high-chromium-vanadium slag. UAL was compared with regular leaching. The effect of the leaching time and temperature, acid concentration, and liquid-solid ratio on the vanadium and chromium leaching behaviors was investigated. The UAL mechanism was determined from particle-size-distribution and microstructure analyses. UAL decreased the reaction time and leaching temperature significantly. Furthermore, 96.67% vanadium and less than 1% chromium were leached at 60°C for 60 min with 20% H2SO4 at a liquid-solid ratio of 8, which was higher than the maximum vanadium leaching rate of 90.89% obtained using regular leaching at 80°C for 120 min. Ultrasonic waves broke and dispersed the solid sample because of ultrasonic cavitation, which increased the contact area of the roasted sample and the leaching medium, the solid-liquid mass transfer, and the vanadium leaching rate.

  18. Analysing the physics learning environment of visually impaired students in high schools

    NARCIS (Netherlands)

    Toenders, F.G.C.; de Putter - Smits, L.G.A.; Sanders, W.T.M.; den Brok, P.J.

    2017-01-01

    Although visually impaired students attend regular high schools, their enrolment in advanced science classes is dramatically low. In our research we evaluated the physics learning environment of a blind high school student in a regular Dutch high school. For visually impaired students to grasp…

  19. High performance liquid chromatography in pharmaceutical analyses

    Directory of Open Access Journals (Sweden)

    Branko Nikolin

    2004-05-01

    In testing the pre-sale procedure, the marketing of drugs and their control over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method to gas chromatography; however, today it has nearly completely replaced gas chromatography in pharmaceutical analysis. The application of the liquid mobile phase, with the possibility of changing its polarity during chromatography and all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage in the separation process in comparison to other methods. The greater choice of stationary phases is a further factor enabling good separation. The separation line is connected to specific and sensitive detector systems (spectrofluorimeter, diode detector, electrochemical detector) as well as hyphenated systems such as HPLC-MS and HPLC-NMR; these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug and provide quantitative results, and also to monitor the progress of the therapy of a disease. The measurement presented in Fig. 1 is a chromatogram obtained for the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigation before drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but most common uses of high performance liquid chromatography. Blood, plasma or…

  20. High performance liquid chromatography in pharmaceutical analyses.

    Science.gov (United States)

    Nikolin, Branko; Imamović, Belma; Medanhodzić-Vuk, Saira; Sober, Miroslav

    2004-05-01

    In testing the pre-sale procedure, the marketing of drugs and their control over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method to gas chromatography; however, today it has nearly completely replaced gas chromatography in pharmaceutical analysis. The application of the liquid mobile phase, with the possibility of changing its polarity during chromatography and all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage in the separation process in comparison to other methods. The greater choice of stationary phases is a further factor enabling good separation. The separation line is connected to specific and sensitive detector systems (spectrofluorimeter, diode detector, electrochemical detector) as well as hyphenated systems such as HPLC-MS and HPLC-NMR; these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug and provide quantitative results, and also to monitor the progress of the therapy of a disease. The measurement presented in Fig. 1 is a chromatogram obtained for the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigation before drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but most common uses of high performance liquid chromatography. Blood, plasma or serum contains numerous endogenous…

  1. Likelihood ratio decisions in memory: three implied regularities.

    Science.gov (United States)

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
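
    The decision rule at the heart of these analyses is easy to make concrete. The sketch below is a toy unequal-variance Gaussian signal detection model with invented parameters (not the authors' code); it shows how a criterion fixed on the likelihood ratio produces the mirror effect:

    import numpy as np
    from scipy.stats import norm

    # Toy unequal-variance Gaussian model of recognition memory.  "New" items are
    # N(0, 1); "old" items are N(mu, sigma).  The observer says "old" whenever the
    # likelihood ratio f_old(x)/f_new(x) exceeds 1, i.e. log-likelihood ratio >= 0.
    def rates(mu, sigma=1.25):
        xs = np.linspace(-2.0, 6.0, 20001)
        loglr = norm.logpdf(xs, mu, sigma) - norm.logpdf(xs, 0.0, 1.0)
        crit = xs[np.argmax(loglr >= 0.0)]        # strength where log LR crosses 0
        hit = 1.0 - norm.cdf(crit, mu, sigma)     # P("old" | old item)
        fa = 1.0 - norm.cdf(crit, 0.0, 1.0)       # P("old" | new item)
        return crit, hit, fa

    for mu in (1.0, 2.0):                         # weak vs. strong study condition
        crit, hit, fa = rates(mu)
        print(f"mu={mu:.1f}  criterion={crit:.2f}  HR={hit:.3f}  FAR={fa:.3f}")
    # The stronger condition yields a higher hit rate *and* a lower false-alarm
    # rate -- the mirror effect that a likelihood ratio decision axis implies.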

  2. High-throughput, Highly Sensitive Analyses of Bacterial Morphogenesis Using Ultra Performance Liquid Chromatography*

    Science.gov (United States)

    Desmarais, Samantha M.; Tropini, Carolina; Miguel, Amanda; Cava, Felipe; Monds, Russell D.; de Pedro, Miguel A.; Huang, Kerwyn Casey

    2015-01-01

    The bacterial cell wall is a network of glycan strands cross-linked by short peptides (peptidoglycan); it is responsible for the mechanical integrity of the cell and shape determination. Liquid chromatography can be used to measure the abundance of the muropeptide subunits composing the cell wall. Characteristics such as the degree of cross-linking and average glycan strand length are known to vary across species. However, a systematic comparison among strains of a given species has yet to be undertaken, making it difficult to assess the origins of variability in peptidoglycan composition. We present a protocol for muropeptide analysis using ultra performance liquid chromatography (UPLC) and demonstrate that UPLC achieves resolution comparable with that of HPLC while requiring orders of magnitude less injection volume and a fraction of the elution time. We also developed a software platform to automate the identification and quantification of chromatographic peaks, which we demonstrate has improved accuracy relative to other software. This combined experimental and computational methodology revealed that peptidoglycan composition was approximately maintained across strains from three Gram-negative species despite taxonomical and morphological differences. Peptidoglycan composition and density were maintained after we systematically altered cell size in Escherichia coli using the antibiotic A22, indicating that cell shape is largely decoupled from the biochemistry of peptidoglycan synthesis. High-throughput, sensitive UPLC combined with our automated software for chromatographic analysis will accelerate the discovery of peptidoglycan composition and the molecular mechanisms of cell wall structure determination. PMID:26468288
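
    The automated peak identification and quantification step can be illustrated with generic signal-processing primitives. The following sketch runs on a synthetic chromatogram and is a stand-in for the authors' software platform, not a reimplementation of it:

    import numpy as np
    from scipy.signal import find_peaks
    from scipy.integrate import trapezoid

    # Synthetic chromatogram: three Gaussian "muropeptide" peaks on a noisy baseline.
    t = np.linspace(0.0, 10.0, 4000)                     # retention time (min)
    def gauss(center, height, width):
        return height * np.exp(-0.5 * ((t - center) / width) ** 2)
    signal = gauss(2.0, 1.0, 0.05) + gauss(4.5, 0.6, 0.07) + gauss(7.2, 0.3, 0.06)
    signal += np.random.default_rng(0).normal(0.0, 0.004, t.size)  # detector noise

    # Identify peaks by prominence, then quantify each one by its integrated area.
    idx, props = find_peaks(signal, prominence=0.05, width=5)
    for i, lo, hi in zip(idx, props["left_bases"], props["right_bases"]):
        area = trapezoid(signal[lo:hi], t[lo:hi])
        print(f"peak at {t[i]:.2f} min, area {area:.4f}")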

  3. Interactions of Chemistry Teachers with Gifted Students in a Regular High-School Chemistry Classroom

    Science.gov (United States)

    Benny, Naama; Blonder, Ron

    2018-01-01

    Regular high-school chemistry teachers view gifted students as one of several types of students in a regular (mixed-ability) classroom. Gifted students have a range of unique abilities that characterize their learning process: mostly they differ in three key learning aspects: their faster learning pace, increased depth of understanding, and…

  4. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    International Nuclear Information System (INIS)

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-01-01

    Sampling based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of uncertainty is minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible as compared to the impact from sampling the epistemic uncertainties. Obviously, this process may cause high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, while reducing computing time by factors of the order of 100. (authors)
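
    One way to read the two-series idea in miniature: run each epistemic sample twice with independent Monte Carlo seeds, and use the covariance of the paired outputs to isolate the epistemic variance, since the aleatoric noise is uncorrelated between the two series. The toy problem below illustrates the principle only; it is not the XSUSA/KENO-Va workflow, and all numbers are invented:

    import numpy as np

    rng = np.random.default_rng(42)
    n_samples = 1000          # epistemic samples (e.g. sampled nuclear data sets)
    sigma_mc = 0.10           # aleatoric std. dev. from using few MC histories

    # Hypothetical response: k-eff depends linearly on an uncertain parameter u,
    # giving a true epistemic standard deviation of 0.05.
    u = rng.normal(0.0, 1.0, n_samples)
    true_response = 1.0 + 0.05 * u

    # Two series with identical epistemic samples but independent Monte Carlo
    # noise (in practice: the same input samples run with different random seeds).
    y1 = true_response + rng.normal(0.0, sigma_mc, n_samples)
    y2 = true_response + rng.normal(0.0, sigma_mc, n_samples)

    # The aleatoric parts of y1 and y2 are uncorrelated, so Cov(y1, y2) estimates
    # the epistemic variance alone, without needing many histories per sample.
    epistemic_std = np.sqrt(max(np.cov(y1, y2)[0, 1], 0.0))
    total_std = np.sqrt(0.5 * (np.var(y1, ddof=1) + np.var(y2, ddof=1)))
    print(f"epistemic std estimate: {epistemic_std:.4f} (true 0.05)")
    print(f"total std per series:   {total_std:.4f}")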

  5. Regularity and chaos in Vlasov evolution of nuclear matter

    Energy Technology Data Exchange (ETDEWEB)

    Jacquot, B.; Guarnera, A.; Chomaz, Ph.; Colonna, M.

    1995-12-31

    A careful analysis of the mean-field dynamics inside the spinodal instability region is performed. It is shown that, contrary to some recently published results, the mean-field evolution appears mostly regular over a long time scale, while some disorder is observed only very late, when fragments are already formed. This onset of chaos can be related to the fragment interaction, which induces some coalescence effects. Moreover, it is shown that the time scales over which chaos starts to develop are very sensitive to the range of the considered force. All the presented results support the various analyses of spinodal instabilities obtained using stochastic mean-field approaches. (author). 16 refs. Submitted to Physical Review C (US).

  6. Sensitivity Analyses for Cross-Coupled Parameters in Automotive Powertrain Optimization

    Directory of Open Access Journals (Sweden)

    Pongpun Othaganont

    2014-06-01

    Full Text Available When vehicle manufacturers are developing new hybrid and electric vehicles, modeling and simulation are frequently used to predict the performance of the new vehicles from an early stage in the product lifecycle. Typically, models are used to predict the range, performance and energy consumption of their future planned production vehicle; they also allow the designer to optimize a vehicle’s configuration. Another use for the models is in performing sensitivity analysis, which helps us understand which parameters have the most influence on model predictions and real-world behaviors. There are various techniques for sensitivity analysis, some are numerical, but the greatest insights are obtained analytically with sensitivity defined in terms of partial derivatives. Existing methods in the literature give us a useful, quantified measure of parameter sensitivity, a first-order effect, but they do not consider second-order effects. Second-order effects could give us additional insights: for example, a first order analysis might tell us that a limiting factor is the efficiency of the vehicle’s prime-mover; our new second order analysis will tell us how quickly the efficiency of the powertrain will become of greater significance. In this paper, we develop a method based on formal optimization mathematics for rapid second-order sensitivity analyses and illustrate these through a case study on a C-segment electric vehicle.
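
    First- and second-order (cross-coupled) sensitivities can be approximated with central differences when analytic derivatives are unavailable. The toy range model and parameter values below are invented for illustration and are not taken from the paper:

    import numpy as np

    # Hypothetical EV range model: range = battery energy * efficiency / demand.
    def vehicle_range(params):
        e_batt, eta, demand = params        # kWh, -, kWh/km
        return e_batt * eta / demand

    p0 = np.array([50.0, 0.90, 0.15])       # nominal parameter vector
    h = 1e-4 * p0                           # per-parameter step sizes

    def first_order(f, p, i):
        dp = np.zeros_like(p); dp[i] = h[i]
        return (f(p + dp) - f(p - dp)) / (2 * h[i])

    # Central-difference mixed partial; the formula also reduces to the usual
    # second-derivative stencil (with step 2h) when i == j.
    def second_order(f, p, i, j):
        dpi = np.zeros_like(p); dpi[i] = h[i]
        dpj = np.zeros_like(p); dpj[j] = h[j]
        return (f(p + dpi + dpj) - f(p + dpi - dpj)
                - f(p - dpi + dpj) + f(p - dpi - dpj)) / (4 * h[i] * h[j])

    names = ["battery", "efficiency", "demand"]
    for i, n in enumerate(names):
        print(f"d(range)/d({n}) = {first_order(vehicle_range, p0, i):+.2f}")
    # A second-order, cross-coupled term: battery size vs. powertrain efficiency.
    print(f"d2(range)/d(battery)d(efficiency) = "
          f"{second_order(vehicle_range, p0, 0, 1):+.4f}")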

  7. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

    Full Text Available This paper presents the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The modeling and unfolding of the Platonic and Archimedean polyhedra are done using the 3dsMAX program. This paper is intended as an example of descriptive geometry applications.

  8. Regular-soda intake independent of weight status is associated with asthma among US high school students.

    Science.gov (United States)

    Park, Sohyun; Blanck, Heidi M; Sherry, Bettylou; Jones, Sherry Everett; Pan, Liping

    2013-01-01

    Limited research shows an inconclusive association between soda intake and asthma, potentially attributable to certain preservatives in sodas. This cross-sectional study examined the association between regular (nondiet)-soda intake and current asthma among a nationally representative sample of high school students. Analysis was based on the 2009 national Youth Risk Behavior Survey and included 15,960 students (grades 9 through 12) with data for both regular-soda intake and current asthma status. The outcome measure was current asthma (ie, told by doctor/nurse that they had asthma and still have asthma). The main exposure variable was regular-soda intake (ie, drank a can/bottle/glass of soda during the 7 days before the survey). Multivariable logistic regression was used to estimate the adjusted odds ratios for regular-soda intake with current asthma after controlling for age, sex, race/ethnicity, weight status, and current cigarette use. Overall, 10.8% of students had current asthma. In addition, 9.7% of students who did not drink regular soda had current asthma, and 14.7% of students who drank regular soda three or more times per day had current asthma. Compared with those who did not drink regular soda, odds of having current asthma were higher among students who drank regular soda two times per day (adjusted odds ratio=1.28; 95% CI 1.02 to 1.62) and three or more times per day (adjusted odds ratio=1.64; 95% CI 1.25 to 2.16). The association between high regular-soda intake and current asthma suggests efforts to reduce regular-soda intake among youth might have benefits beyond improving diet quality. However, this association needs additional research, such as a longitudinal examination. Published by Elsevier Inc.
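
    Adjusted odds ratios of this kind come from exponentiating the coefficients of a multivariable logistic regression. As a generic illustration on synthetic data (not the YRBS data set, and with an invented effect size):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5000
    soda = rng.integers(0, 4, n)            # regular-soda intake, times/day (0-3)
    smoker = rng.integers(0, 2, n)          # current cigarette use (0/1)
    # Synthetic outcome with a true log-odds slope of 0.2 per daily soda serving.
    logit = -2.2 + 0.2 * soda + 0.4 * smoker
    asthma = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = sm.add_constant(np.column_stack([soda, smoker]))
    fit = sm.Logit(asthma.astype(float), X).fit(disp=False)
    odds_ratios = np.exp(fit.params)        # exponentiated coefficients = ORs
    ci = np.exp(fit.conf_int())             # 95% confidence intervals
    print("adjusted OR per extra daily soda: "
          f"{odds_ratios[1]:.2f} (95% CI {ci[1, 0]:.2f} to {ci[1, 1]:.2f})")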

  9. Sensitivity and uncertainty analyses applied to criticality safety validation. Volume 2

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Hopper, C.M.; Parks, C.V.

    1999-01-01

    This report presents the application of sensitivity and uncertainty (S/U) analysis methodologies developed in Volume 1 to the code/data validation tasks of a criticality safety computational study. Sensitivity and uncertainty analysis methods were first developed for application to fast reactor studies in the 1970s. This work has revitalized and updated the existing S/U computational capabilities such that they can be used as prototypic modules of the SCALE code system, which contains criticality analysis tools currently in use by criticality safety practitioners. After complete development, simplified tools are expected to be released for general use. The methods for application of S/U and generalized linear-least-square methodology (GLLSM) tools to the criticality safety validation procedures were described in Volume 1 of this report. Volume 2 of this report presents the application of these procedures to the validation of criticality safety analyses supporting uranium operations where enrichments are greater than 5 wt %. Specifically, the traditional k_eff trending analyses are compared with newly developed k_eff trending procedures, utilizing the D and c_k coefficients described in Volume 1. These newly developed procedures are applied to a family of postulated systems involving U(11)O2 fuel, with H/X values ranging from 0 to 1,000. These analyses produced a series of guidance and recommendations for the general usage of these various techniques. Recommendations for future work are also detailed.

  10. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
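
    The alternating scheme described here, solving for ranking scores with the graph weights fixed and then re-weighting the graphs, can be sketched generically. The following is a schematic reconstruction from the abstract with simplified update rules (an entropic re-weighting), not the published MultiG-Rank code:

    import numpy as np

    def laplacian(W):
        return np.diag(W.sum(axis=1)) - W

    def multi_graph_rank(Ws, y, n_iter=20, temp=1.0):
        # Alternate between (a) ranking scores f that fit the query vector y
        # while being smooth over a weighted combination of graphs, and
        # (b) graph weights favoring graphs on which f is smooth.
        Ls = [laplacian(W) for W in Ws]
        mu = np.full(len(Ls), 1.0 / len(Ls))          # uniform initial weights
        n = y.size
        f = y.copy()
        for _ in range(n_iter):
            L = sum(m * Lk for m, Lk in zip(mu, Ls))
            f = np.linalg.solve(np.eye(n) + L, y)     # smooth + fit the query
            r = np.array([f @ Lk @ f for Lk in Ls])   # roughness on each graph
            mu = np.exp(-r / temp)                    # entropic weight update
            mu /= mu.sum()
        return f, mu

    # Tiny example: two 5-node graphs (a chain and a star) and one query node.
    chain = np.diag(np.ones(4), 1); chain += chain.T
    star = np.zeros((5, 5)); star[0, 1:] = 1; star += star.T
    y = np.zeros(5); y[0] = 1.0                       # rank relative to node 0
    scores, weights = multi_graph_rank([chain, star], y)
    print("scores:", np.round(scores, 3), "graph weights:", np.round(weights, 3))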

  11. Regular Topographic Patterning of Karst Depressions Suggests Landscape Self-Organization

    Science.gov (United States)

    Quintero, C.; Cohen, M. J.

    2017-12-01

    Thousands of wetland depressions that are commonly host to cypress domes dot the sub-tropical limestone landscape of South Florida. The origin of these depression features has been the topic of debate. Here we build upon the work of previous surveyors of this landscape to analyze the morphology and spatial distribution of depressions on the Big Cypress landscape. We took advantage of the emergence and availability of high resolution Light Detection and Ranging (LiDAR) technology and ArcMap GIS software to analyze the structure and regularity of landscape features with methods unavailable to past surveyors. Six 2.25 km² LiDAR plots within the preserve were selected for remote analysis and one depression feature within each plot was selected for more intensive sediment and water depth surveying. Depression features on the Big Cypress landscape were found to show strong evidence of regular spatial patterning. Periodicity, a feature of regularly patterned landscapes, is apparent in both Variograms and Radial Spectrum Analyses. Size class distributions of the identified features indicate constrained feature sizes while Average Nearest Neighbor analyses support the inference of dispersed features with non-random spacing. The presence of regular patterning on this landscape strongly implies biotic reinforcement of spatial structure by way of the scale-dependent feedback. In characterizing the structure of this wetland landscape we add to the growing body of work dedicated to documenting how water, life and geology may interact to shape the natural landscapes we see today.

  12. Sensitivity analyses of fast reactor systems including thorium and uranium

    International Nuclear Information System (INIS)

    Marable, J.H.; Weisbin, C.R.

    1978-01-01

    The Cross Section Evaluation Working Group (CSEWG) has, in conjunction with the development of the fifth version of ENDF/B, assembled new evaluations for 232Th and 233U. It is the purpose of this paper to describe briefly some of the more important features of these evaluations relative to ENDF/B-4, to project the change in reactor performance based upon the newer evaluated files and sensitivity coefficients for interesting design problems, and to indicate preliminary results from ongoing uncertainty analyses.

  13. A wavelet-based regularized reconstruction algorithm for SENSE parallel MRI with applications to neuroimaging

    International Nuclear Information System (INIS)

    Chaari, L.; Pesquet, J.Ch.; Chaari, L.; Ciuciu, Ph.; Benazza-Benyahia, A.

    2011-01-01

    To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques with multiple-coil acquisition have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full-FOV image has to be reconstructed from the acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used Sensitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria including more general penalties than a classical ℓ1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5 T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors. (authors)
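
    The flavor of sparsity-promoting wavelet regularization can be conveyed by a bare-bones iterative soft-thresholding (ISTA) loop on a simplified single-coil, undersampled-Fourier problem. The actual method handles multi-coil SENSE data, more general non-differentiable penalties and convex constraints; everything below (phantom, sampling mask, parameter values) is invented for illustration:

    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    n = 64
    x_true = np.zeros((n, n)); x_true[16:48, 16:48] = 1.0  # piecewise-constant phantom

    mask = rng.random((n, n)) < 0.4                  # random k-space undersampling
    y = mask * np.fft.fft2(x_true) / n               # measured k-space data

    def A(x):  return mask * np.fft.fft2(x) / n      # forward operator (unit norm)
    def At(k): return np.real(np.fft.ifft2(mask * k)) * n   # its adjoint

    def soft(c, t):                                  # soft-thresholding (prox of l1)
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    lam, step = 0.005, 1.0                           # step <= 1/||A||^2 = 1 here
    x = np.zeros((n, n))
    for _ in range(100):                             # ISTA iterations
        x = x - step * At(A(x) - y)                  # gradient step on data fit
        coeffs = pywt.wavedec2(x, "db4", level=3)    # analysis: wavelet transform
        coeffs = [coeffs[0]] + [tuple(soft(d, lam * step) for d in lvl)
                                for lvl in coeffs[1:]]
        x = pywt.waverec2(coeffs, "db4")             # synthesis back to image
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))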

  14. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    Science.gov (United States)

    Seligmann, Hervé

    2016-01-01

    Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcription detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in the results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1,000). Among 186 peptides with > 8 residues in each of the regular and swinger parts, the regular parts of eleven chimeric peptides correspond to six of the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. The present results strengthen the hypothesis that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.

  15. Analysing the physics learning environment of visually impaired students in high schools

    Science.gov (United States)

    Toenders, Frank G. C.; de Putter-Smits, Lesley G. A.; Sanders, Wendy T. M.; den Brok, Perry

    2017-07-01

    Although visually impaired students attend regular high school, their enrolment in advanced science classes is dramatically low. In our research we evaluated the physics learning environment of a blind high school student in a regular Dutch high school. For visually impaired students to grasp physics concepts, time and additional materials to support the learning process are key. Time for teachers to develop teaching methods for such students is scarce. Suggestions for changes to the learning environment and of materials used are given.

  16. A Highly Accurate Regular Domain Collocation Method for Solving Potential Problems in the Irregular Doubly Connected Domains

    Directory of Open Access Journals (Sweden)

    Zhao-Qing Wang

    2014-01-01

    Full Text Available By embedding the irregular doubly connected domain into an annular regular region, the unknown functions can be approximated by barycentric Lagrange interpolation in the regular region. A highly accurate regular domain collocation method is proposed for solving potential problems on the irregular doubly connected domain in the polar coordinate system. The formulations of the regular domain collocation method are constructed by using the barycentric Lagrange interpolation collocation method on the regular domain in the polar coordinate system. The boundary conditions are discretized by barycentric Lagrange interpolation within the regular domain, and an additional method is used to impose them. The least square method can be used to solve the overconstrained equations. The function values of points in the irregular doubly connected domain can be calculated by barycentric Lagrange interpolation within the regular domain. Some numerical examples demonstrate the effectiveness and accuracy of the presented method.
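
    Barycentric Lagrange interpolation, the building block of the collocation scheme, is compact to implement. The 1-D illustration below at Chebyshev-Lobatto nodes is generic numerical-analysis material, not the paper's 2-D polar-coordinate formulation:

    import numpy as np

    def bary_weights(x):
        # w_j = 1 / prod_{k != j} (x_j - x_k); a common constant factor cancels
        # in the barycentric formula, so the overall sign convention is harmless.
        return np.array([1.0 / np.prod(x[j] - np.delete(x, j))
                         for j in range(x.size)])

    def bary_interp(x, fx, w, t):
        # Evaluate the barycentric Lagrange interpolant at the points t.
        t = np.atleast_1d(t).astype(float)
        num = np.zeros_like(t); den = np.zeros_like(t)
        exact = np.full(t.shape, -1)
        for j in range(x.size):
            d = t - x[j]
            hit = d == 0.0
            exact[hit] = j                  # t coincides with node x_j
            d[hit] = 1.0                    # avoid division by zero
            num += w[j] * fx[j] / d
            den += w[j] / d
        out = num / den
        out[exact >= 0] = fx[exact[exact >= 0]]
        return out

    # Chebyshev-Lobatto points give well-conditioned high-order interpolation.
    n = 24
    x = np.cos(np.pi * np.arange(n + 1) / n)    # nodes on [-1, 1]
    f = lambda s: 1.0 / (1.0 + 16.0 * s**2)     # Runge-type test function
    w = bary_weights(x)
    t = np.linspace(-1.0, 1.0, 1001)
    err = np.max(np.abs(bary_interp(x, f(x), w, t) - f(t)))
    print(f"max interpolation error with {n + 1} nodes: {err:.2e}")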

  17. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  18. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-01-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  19. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  20. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  1. Empirical laws, regularity and necessity

    NARCIS (Netherlands)

    Koningsveld, H.

    1973-01-01

    In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject.

    I am referring especially to two well-known views, viz. the regularity and

  2. Regular self-microstructuring on CR39 using high UV laser dose

    International Nuclear Information System (INIS)

    Parvin, P.; Refahizadeh, M.; Mortazavi, S.Z.; Silakhori, K.; Mahdiloo, A.; Aghaii, P.

    2014-01-01

    UV-laser-induced replicas in the form of self-lining microstructures are created by high-dose (high-fluence) ArF laser irradiation of CR39. Microstructures in the form of self-induced contours, appearing as concentric circles, emerge when the laser fluence is well above the ablation threshold. As the number of shots is increased to achieve higher UV doses, this leads to regular periodic parallel lines, i.e. circles with large radii, with a spatial separation of 100–200 nm and a line width of 300–600 nm. The surface wettability is also investigated after laser texturing, showing that notable hydrophilicity takes place at high doses.

  3. Sobol method application in dimensional sensitivity analyses of different AFM cantilevers for biological particles

    Science.gov (United States)

    Korayem, M. H.; Taheri, M.; Ghahnaviyeh, S. D.

    2015-08-01

    Due to the more delicate nature of biological micro/nanoparticles, it is necessary to compute the critical force of manipulation. The modeling and simulation of reactions and nanomanipulator dynamics in a precise manipulation process require exact modeling of cantilever stiffness, especially the stiffness of dagger cantilevers, because the previous model is not useful for this investigation. The stiffness values for V-shaped cantilevers can be obtained through several methods, one of which is the PBA method. In another approach, the cantilever is divided into two sections: a triangular head section and two slanted rectangular beams. Deformations along different directions are then computed and used to obtain the stiffness values in different directions. The stiffness formulations of the dagger cantilever are needed for these sensitivity analyses, so the formulations were derived first and the sensitivity analyses were then carried out. In examining the stiffness of the dagger-shaped cantilever, the micro-beam was divided into a triangular and a rectangular section, and by computing the displacements along different directions and using the existing relations, the stiffness values for the dagger cantilever were obtained. In this paper, after investigating the stiffness of common types of cantilevers, Sobol sensitivity analyses of the effects of various geometric parameters on the stiffness of these types of cantilevers have been carried out. Also, the effects of different cantilevers on the dynamic behavior of nanoparticles have been studied, and the dagger-shaped cantilever has been deemed more suitable for the manipulation of biological particles.
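
    First-order Sobol indices can be estimated with the classic pick-and-freeze sampling scheme. In the sketch below, the stiffness function is a textbook rectangular-cantilever stand-in (k = 3EI/L³ with I = wt³/12), not the dagger-cantilever formulations derived in the paper, and the parameter ranges are invented:

    import numpy as np

    def model(X):
        # Stand-in "stiffness" model: k = E * w * t**3 / (4 * L**3).
        E, w, t, L = X.T
        return E * w * t**3 / (4.0 * L**3)

    rng = np.random.default_rng(7)
    n, d = 100_000, 4
    lo = np.array([60e9, 20e-6, 0.5e-6, 100e-6])   # parameter lower bounds
    hi = np.array([80e9, 40e-6, 1.0e-6, 200e-6])   # parameter upper bounds

    A = lo + (hi - lo) * rng.random((n, d))        # two independent sample blocks
    B = lo + (hi - lo) * rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))

    for i, name in enumerate(["E", "width", "thickness", "length"]):
        ABi = A.copy(); ABi[:, i] = B[:, i]        # "freeze" column i from B
        yABi = model(ABi)
        S1 = np.mean(yB * (yABi - yA)) / var       # first-order index (Saltelli 2010)
        print(f"S1[{name}] = {S1:.3f}")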

  4. High Sensitivity, Wearable, Piezoresistive Pressure Sensors Based on Irregular Microhump Structures and Its Applications in Body Motion Sensing.

    Science.gov (United States)

    Wang, Zongrong; Wang, Shan; Zeng, Jifang; Ren, Xiaochen; Chee, Adrian J Y; Yiu, Billy Y S; Chung, Wai Choi; Yang, Yong; Yu, Alfred C H; Roberts, Robert C; Tsang, Anderson C O; Chow, Kwok Wing; Chan, Paddy K L

    2016-07-01

    A pressure sensor based on irregular microhump patterns has been proposed and developed. The devices show high sensitivity and a broad operating pressure regime compared with regular-micropattern devices. Finite element analysis (FEA) is utilized to confirm the sensing mechanism and predict the performance of the pressure sensor based on the microhump structures. Silicon carbide sandpaper is employed as the mold to develop polydimethylsiloxane (PDMS) microhump patterns with various sizes. The active layer of the piezoresistive pressure sensor is developed by spin coating PSS on top of the patterned PDMS. The devices show an averaged sensitivity as high as 851 kPa⁻¹, a broad operating pressure range (20 kPa), low operating power (100 nW), and fast response speed (6.7 kHz). Owing to their flexible properties, the devices are applied to human body motion sensing and radial artery pulse monitoring. These flexible high-sensitivity devices show great potential in the next generation of smart sensors for robotics, real-time health monitoring, and biomedical applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Uncertainty and sensitivity analyses of the complete program system UFOMOD and of selected submodels

    International Nuclear Information System (INIS)

    Fischer, F.; Ehrhardt, J.; Hasemann, I.

    1990-09-01

    Uncertainty and sensitivity studies with the program system UFOMOD have been performed for several years on a submodel basis to gain deeper insight into the propagation of parameter uncertainties through the different modules and to quantify their contribution to the confidence bands of the intermediate and final results of an accident consequence assessment. In a series of investigations with the atmospheric dispersion module, the models describing early protective actions, the models calculating short-term organ doses and the health effects model of the near range subsystem NE of UFOMOD, a great deal of experience has been gained with methods and evaluation techniques for uncertainty and sensitivity analyses. In particular, the influence of different sampling techniques and sample sizes, parameter distributions and correlations on the results could be quantified, and the usefulness of sensitivity measures for the interpretation of results could be demonstrated. In each submodel investigation, the (5%, 95%)-confidence bounds of the complementary cumulative frequency distributions (CCFDs) of various consequence types (activity concentrations of I-131 and Cs-137, individual acute organ doses, individual risks of nonstochastic health effects, and the number of early deaths) were calculated. The corresponding sensitivity analyses for each of these endpoints led to a list of parameters contributing significantly to the variation of mean values and 99%-fractiles. The most important parameters were extracted and combined for the final overall analysis. (orig.)

  6. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal

  7. High order depletion sensitivity analysis

    International Nuclear Information System (INIS)

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel due to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which all orders of perturbations were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-up of 25% and 65%, respectively, were considered for sensitivity analysis. The results of the calculations show that, in the case of EK-10 fuel (low burn-up), the first order sensitivity was found to be enough to achieve an accuracy of 1%, while in the case of MTR-20 (high burn-up) the fifth order was needed to provide 3% accuracy. A computer code, SENS, was developed to provide the required calculations.

  8. High-intensity interval training improves insulin sensitivity in older individuals

    DEFF Research Database (Denmark)

    Søgaard, D; Lund, M T; Scheuer, C M

    2017-01-01

    AIM: Metabolic health may deteriorate with age as a result of altered body composition and decreased physical activity. Endurance exercise is known to counter these changes, delaying or even preventing the onset of metabolic diseases. High-intensity interval training (HIIT) is a time-efficient alternative to regular endurance exercise, and the aim of this study was to investigate the metabolic benefit of HIIT in older subjects. METHODS: Twenty-two sedentary male (n = 11) and female (n = 11) subjects aged 63 ± 1 years performed HIIT training three times/week for 6 weeks on a bicycle ergometer. Each HIIT session consisted of five 1-minute intervals interspersed with 1½-minute rests. Prior to the first and after the last HIIT session, whole-body insulin sensitivity, measured by a hyperinsulinaemic-euglycaemic clamp, plasma lipid levels, HbA1c, glycaemic parameters, body composition and maximal oxygen

  9. Recursive regularization step for high-order lattice Boltzmann methods

    Science.gov (United States)

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

    A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step makes it possible to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with Reynolds numbers ranging from 10⁴ to 10⁶, and where a thorough analysis of the case at Re = 3 × 10⁴ is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase in the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.

  10. A high-sensitivity neutron counter and waste-drum counting with the high-sensitivity neutron instrument

    International Nuclear Information System (INIS)

    Hankins, D.E.; Thorngate, J.H.

    1993-04-01

    At Lawrence Livermore National Laboratory (LLNL), a highly sensitive neutron counter was developed that can detect and accurately measure the neutrons from small quantities of plutonium or from other low-level neutron sources. This neutron counter was originally designed to survey waste containers leaving the Plutonium Facility. However, it has proven to be useful in other research applications requiring a high-sensitivity neutron instrument

  11. Regularization design for high-quality cone-beam CT of intracranial hemorrhage using statistical reconstruction

    Science.gov (United States)

    Dang, H.; Stayman, J. W.; Xu, J.; Sisniega, A.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.

    2016-03-01

    Intracranial hemorrhage (ICH) is associated with pathologies such as hemorrhagic stroke and traumatic brain injury. Multi-detector CT is the current front-line imaging modality for detecting ICH (fresh blood contrast 40-80 HU, down to 1 mm). Flat-panel detector (FPD) cone-beam CT (CBCT) offers a potential alternative with a smaller scanner footprint, greater portability, and lower cost potentially well suited to deployment at the point of care outside standard diagnostic radiology and emergency room settings. Previous studies have suggested reliable detection of ICH down to 3 mm in CBCT using high-fidelity artifact correction and penalized weighted least-squares (PWLS) image reconstruction with a post-artifact-correction noise model. However, ICH reconstructed by traditional image regularization exhibits nonuniform spatial resolution and noise due to interaction between the statistical weights and regularization, which potentially degrades the detectability of ICH. In this work, we propose three regularization methods designed to overcome these challenges. The first two compute spatially varying certainty for uniform spatial resolution and noise, respectively. The third computes spatially varying regularization strength to achieve uniform "detectability," combining both spatial resolution and noise in a manner analogous to a delta-function detection task. Experiments were conducted on a CBCT test-bench, and image quality was evaluated for simulated ICH in different regions of an anthropomorphic head. The first two methods improved the uniformity in spatial resolution and noise compared to traditional regularization. The third exhibited the highest uniformity in detectability among all methods and best overall image quality. The proposed regularization provides a valuable means to achieve uniform image quality in CBCT of ICH and is being incorporated in a CBCT prototype for ICH imaging.

  12. Uncertainty and sensitivity analyses of ballast life-cycle cost and payback period

    OpenAIRE

    Mcmahon, James E.

    2000-01-01

    The paper introduces an innovative methodology for evaluating the relative significance of energy-efficient technologies applied to fluorescent lamp ballasts. The method involves replacing the point estimates of life cycle cost of the ballasts with uncertainty distributions reflecting the whole spectrum of possible costs, and the assessed probability associated with each value. The results of uncertainty and sensitivity analyses will help analysts reduce effort in data collection and carry on a...

  13. High-Sensitivity GaN Microchemical Sensors

    Science.gov (United States)

    Son, Kyung-ah; Yang, Baohua; Liao, Anna; Moon, Jeongsun; Prokopuk, Nicholas

    2009-01-01

    Systematic studies have been performed on the sensitivity of GaN HEMT (high electron mobility transistor) sensors using various gate electrode designs and operational parameters. The results here show that a higher sensitivity can be achieved with a larger W/L ratio (W = gate width, L = gate length) at a given D (D = source-drain distance), and multi-finger gate electrodes offer a higher sensitivity than a one-finger gate electrode. In terms of operating conditions, sensor sensitivity is strongly dependent on transconductance of the sensor. The highest sensitivity can be achieved at the gate voltage where the slope of the transconductance curve is the largest. This work provides critical information about how the gate electrode of a GaN HEMT, which has been identified as the most sensitive among GaN microsensors, needs to be designed, and what operation parameters should be used for high sensitivity detection.

  14. High-resolution seismic data regularization and wavefield separation

    Science.gov (United States)

    Cao, Aimin; Stump, Brian; DeShon, Heather

    2018-04-01

    We present a new algorithm, non-equispaced fast antileakage Fourier transform (NFALFT), for irregularly sampled seismic data regularization. Synthetic tests from 1-D to 5-D show that the algorithm may efficiently remove leaked energy in the frequency wavenumber domain, and its corresponding regularization process is accurate and fast. Taking advantage of the NFALFT algorithm, we suggest a new method (wavefield separation) for the detection of the Earth's inner core shear wave with irregularly distributed seismic arrays or networks. All interfering seismic phases that propagate along the minor arc are removed from the time window around the PKJKP arrival. The NFALFT algorithm is developed for seismic data, but may also be used for other irregularly sampled temporal or spatial data processing.

  15. An UPLC-MS/MS method for highly sensitive high-throughput analysis of phytohormones in plant tissues

    Directory of Open Access Journals (Sweden)

    Balcke Gerd Ulrich

    2012-11-01

    Full Text Available Abstract Background Phytohormones are the key metabolites participating in the regulation of multiple functions of the plant organism. Among them, jasmonates, as well as abscisic and salicylic acids, are responsible for triggering and modulating plant reactions targeted against pathogens and herbivores, as well as resistance to abiotic stress (drought, UV-irradiation and mechanical wounding). These factors induce dramatic changes in phytohormone biosynthesis and transport, leading to rapid local and systemic stress responses. Understanding the underlying mechanisms is of principal interest for scientists working in various areas of plant biology. However, highly sensitive, precise and high-throughput methods for the quantification of these phytohormones in small samples of plant tissues are still missing. Results Here we present an LC-MS/MS method for fast and highly sensitive determination of jasmonates, abscisic and salicylic acids. A single-step sample preparation procedure based on mixed-mode solid phase extraction was efficiently combined with essential improvements in mobile phase composition, yielding higher efficiency of chromatographic separation and MS sensitivity. This strategy resulted in a dramatic increase in overall sensitivity, allowing successful determination of phytohormones in small (less than 50 mg of fresh weight) tissue samples. The method was completely validated in terms of analyte recovery, sensitivity, linearity and precision. Additionally, it was cross-validated with a well-established GC-MS-based procedure and its applicability to a variety of plant species and organs was verified. Conclusion The method can be applied for the analyses of target phytohormones in small tissue samples obtained from any plant species and/or plant part, relying on any commercially available (even less sensitive) tandem mass spectrometry instrumentation.

  16. High-Sensitivity Spectrophotometry.

    Science.gov (United States)

    Harris, T. D.

    1982-01-01

    Selected high-sensitivity spectrophotometric methods are examined, and comparisons are made of their relative strengths and weaknesses and the circumstances for which each can best be applied. Methods include long path cells, noise reduction, laser intracavity absorption, thermocouple calorimetry, photoacoustic methods, and thermo-optical methods.…

  17. Sensitivity and uncertainty analyses of unsaturated flow travel time in the CHnz unit of Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Nichols, W.E.; Freshley, M.D.

    1991-10-01

    This report documents the results of sensitivity and uncertainty analyses conducted to improve understanding of unsaturated zone ground-water travel time distribution at Yucca Mountain, Nevada. The US Department of Energy (DOE) is currently performing detailed studies at Yucca Mountain to determine its suitability as a host for a geologic repository for the containment of high-level nuclear wastes. As part of these studies, DOE is conducting a series of Performance Assessment Calculational Exercises, referred to as the PACE problems. The work documented in this report represents a part of the PACE-90 problems that addresses the effects of natural barriers of the site that will stop or impede the long-term movement of radionuclides from the potential repository to the accessible environment. In particular, analyses described in this report were designed to investigate the sensitivity of the ground-water travel time distribution to different input parameters and the impact of uncertainty associated with those input parameters. Five input parameters were investigated in this study: recharge rate, saturated hydraulic conductivity, matrix porosity, and two curve-fitting parameters used for the van Genuchten relations to quantify the unsaturated moisture-retention and hydraulic characteristics of the matrix. 23 refs., 20 figs., 10 tabs

  18. Assessment of risk of fracture in thin-walled fiber reinforced and regular High Performance Concretes sandwich elements

    DEFF Research Database (Denmark)

    Hodicky, Kamil; Hulin, Thomas; Schmidt, Jacob Wittrup

    2013-01-01

    High Performance Concrete Sandwich Elements (HPCSE) are an interesting option for future low- or plus-energy building construction. Recent research and development work, however, indicates that such elements are prone to structural cracking due to the combined effect of shrinkage and high temperature load. Due to structural restraints, autogenous shrinkage may lead to high self-induced stresses, and it therefore plays an important role in the design of HPCSE. The present paper assesses the risk of fracture due to autogenous-shrinkage-induced stresses in three fiber reinforced and regular High Performance Concretes. Finally the paper describes the modeling work with HPCSE predicting structural cracking provoked by autogenous shrinkage. It was observed that the risk of cracking due to autogenous shrinkage rises rapidly after 3 days in the case of regular HPC and after 7 days in the case of fiber reinforced HPC.

  19. Analytic regularization of the Yukawa model at finite temperature

    International Nuclear Information System (INIS)

    Malbouisson, A.P.C.; Svaiter, N.F.; Svaiter, B.F.

    1996-07-01

    The one-loop fermionic contribution to the scalar effective potential in the temperature-dependent Yukawa model is analysed. In order to regularize the model, a mix of dimensional and analytic regularization procedures is used. A general expression for the fermionic contribution in arbitrary spacetime dimension is found. It is also found that in D = 3 this contribution is finite. (author). 19 refs.

  20. Sensitivity analyses on in-vessel hydrogen generation for KNGR

    International Nuclear Information System (INIS)

    Kim, See Darl; Park, S.Y.; Park, S.H.; Park, J.H.

    2001-03-01

    Sensitivity analyses for in-vessel hydrogen generation, using the MELCOR program, are described in this report for the Korean Next Generation Reactor. The typical accident sequences of a station blackout and a large LOCA scenario are selected. A lower head failure model, a Zircaloy oxidation reaction model and a B4C reaction model are considered as the sensitivity parameters. For the base case, 1273.15 K for the failure temperature of the penetrations or the lower head, the Urbanic-Heidrich correlation for the Zircaloy oxidation reaction model, and the B4C reaction model are used. Case 1 used 1650 K as the failure temperature for the penetrations, and Case 2 considered creep rupture instead of penetration failure. Case 3 used the MATPRO-EG&G correlation for the Zircaloy oxidation reaction model, and Case 4 turned off the B4C reaction model. The results of the studies are summarized below: (1) When the penetration failure temperature is higher, or the creep rupture failure model is considered, the amount of hydrogen increases for both sequences. (2) When the MATPRO-EG&G correlation for the Zircaloy oxidation reaction is considered, the amount of hydrogen is less than with the Urbanic-Heidrich correlation (base case) for both scenarios. (3) When the B4C reaction model is turned off, the amount of hydrogen decreases for both sequences.

  1. High-resolution linkage analyses to identify genes that influence Varroa sensitive hygiene behavior in honey bees.

    Science.gov (United States)

    Tsuruda, Jennifer M; Harris, Jeffrey W; Bourgeois, Lanie; Danka, Robert G; Hunt, Greg J

    2012-01-01

    Varroa mites (V. destructor) are a major threat to honey bees (Apis mellifera) and beekeeping worldwide and likely lead to colony decline if colonies are not treated. Most treatments involve chemical control of the mites; however, Varroa has evolved resistance to many of these miticides, leaving beekeepers with a limited number of alternatives. A non-chemical control method is highly desirable for numerous reasons including lack of chemical residues and decreased likelihood of resistance. Varroa sensitive hygiene behavior is one of two behaviors identified that are most important for controlling the growth of Varroa populations in bee hives. To identify genes influencing this trait, a study was conducted to map quantitative trait loci (QTL). Individual workers of a backcross family were observed and evaluated for their VSH behavior in a mite-infested observation hive. Bees that uncapped or removed pupae were identified. The genotypes for 1,340 informative single nucleotide polymorphisms were used to construct a high-resolution genetic map and interval mapping was used to analyze the association of the genotypes with the performance of Varroa sensitive hygiene. We identified one major QTL on chromosome 9 (LOD score = 3.21) and a suggestive QTL on chromosome 1 (LOD = 1.95). The QTL confidence interval on chromosome 9 contains the gene 'no receptor potential A' and a dopamine receptor. 'No receptor potential A' is involved in vision and olfaction in Drosophila, and dopamine signaling has been previously shown to be required for aversive olfactory learning in honey bees, which is probably necessary for identifying mites within brood cells. Further studies on these candidate genes may allow for breeding bees with this trait using marker-assisted selection.

  2. High sensitivity optical molecular imaging system

    Science.gov (United States)

    An, Yu; Yuan, Gao; Huang, Chao; Jiang, Shixin; Zhang, Peng; Wang, Kun; Tian, Jie

    2018-02-01

    Optical Molecular Imaging (OMI) has the advantages of high sensitivity, low cost and ease of use. By labeling the regions of interest with fluorescent or bioluminescent probes, OMI can noninvasively obtain the distribution of the probes in vivo, which plays a key role in cancer research, pharmacokinetics and other biological studies. In preclinical and clinical applications, imaging depth, resolution and sensitivity are the key factors for researchers using OMI. In this paper, we report a high-sensitivity optical molecular imaging system developed by our group, which improves the imaging depth in phantoms to nearly 5 cm, with high resolution at 2 cm depth and high image sensitivity. To validate the performance of the system, specially designed phantom experiments and a weak-light detection experiment were implemented. The results show that, combined with a high-performance electron-multiplying charge-coupled device (EMCCD) camera, a precisely designed light path system and highly efficient image techniques, our OMI system can simultaneously collect the light signals generated by fluorescence molecular imaging, bioluminescence imaging, Cherenkov luminescence and other optical imaging modalities, and observe the internal distribution of light-emitting agents quickly and accurately.

  3. Optimizing human activity patterns using global sensitivity analysis.

    Science.gov (United States)

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
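
    For readers unfamiliar with the statistic, the following is a minimal sample-entropy implementation of the kind used above to quantify schedule regularity; lower SampEn means a more regular series. The template length m = 2 and tolerance r = 0.2·std are common defaults, not necessarily the paper's settings.

```python
import numpy as np

def sampen(x, m=2, r=None):
    """Sample entropy of a 1-D series; lower values = more regular."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count_matches(k):
        # all overlapping template vectors of length k
        templates = np.array([x[i:i + k] for i in range(len(x) - k)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance between template i and all later templates
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count
    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.tile([1.0, 2.0, 3.0], 100)            # perfectly periodic schedule
noisy = regular + rng.normal(0, 0.5, regular.size) # same schedule plus jitter
print(sampen(regular), sampen(noisy))              # regular << noisy
```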

  4. Sensitivity and uncertainty analyses in aging risk-based prioritizations

    International Nuclear Information System (INIS)

    Hassan, M.; Uryas'ev, S.; Vesely, W.E.

    1993-01-01

    Aging risk evaluations of nuclear power plants using Probabilistic Risk Analyses (PRAs) involve assessments of the impact of aging structures, systems, and components (SSCs) on plant core damage frequency (CDF). These assessments can be used to prioritize the contributors to aging risk, reflecting the relative risk potential of the SSCs. Aging prioritizations are important for identifying the SSCs contributing most to plant risk and can provide a systematic basis on which aging risk control and management strategies for a plant can be developed. However, these prioritizations are subject to variability arising from uncertainties in data and from various modeling assumptions. The objective of this paper is to present an evaluation of the sensitivity of aging prioritizations of active components to uncertainties in aging risk quantifications. Approaches for robust prioritization of SSCs, which are less susceptible to these uncertainties, are also presented.

  5. Low social rhythm regularity predicts first onset of bipolar spectrum disorders among at-risk individuals with reward hypersensitivity.

    Science.gov (United States)

    Alloy, Lauren B; Boland, Elaine M; Ng, Tommy H; Whitehouse, Wayne G; Abramson, Lyn Y

    2015-11-01

    The social zeitgeber model (Ehlers, Frank, & Kupfer, 1988) suggests that irregular daily schedules or social rhythms provide vulnerability to bipolar spectrum disorders. This study tested whether social rhythm regularity prospectively predicted first lifetime onset of bipolar spectrum disorders in adolescents already at risk for bipolar disorder based on exhibiting reward hypersensitivity. Adolescents (ages 14-19 years) previously screened to have high (n = 138) or moderate (n = 95) reward sensitivity, but no lifetime history of bipolar spectrum disorder, completed measures of depressive and manic symptoms, family history of bipolar disorder, and the Social Rhythm Metric. They were followed prospectively with semistructured diagnostic interviews every 6 months for an average of 31.7 (SD = 20.1) months. Hierarchical logistic regression indicated that low social rhythm regularity at baseline predicted greater likelihood of first onset of bipolar spectrum disorder over follow-up among high-reward-sensitivity adolescents but not moderate-reward-sensitivity adolescents, controlling for follow-up time, gender, age, family history of bipolar disorder, and initial manic and depressive symptoms (β = -.150, Wald = 4.365, p = .037, odds ratio = .861, 95% confidence interval [.748, .991]). Consistent with the social zeitgeber theory, low social rhythm regularity provides vulnerability to first onset of bipolar spectrum disorder among at-risk adolescents. It may be possible to identify adolescents at risk for developing a bipolar spectrum disorder based on exhibiting both reward hypersensitivity and social rhythm irregularity before onset occurs. (c) 2015 APA, all rights reserved.

  6. Cross-section sensitivity analyses for a Tokamak Experimental Power Reactor

    International Nuclear Information System (INIS)

    Simmons, E.L.; Gerstl, S.A.W.; Dudziak, D.J.

    1977-09-01

    The objectives of this report were (1) to determine the sensitivity of neutronic responses in the preliminary design of the Tokamak Experimental Power Reactor by Argonne National Laboratory, and (2) to develop the use of a coupled neutron-gamma cross-section set in cross-section sensitivity analysis. Response functions such as neutron plus gamma kerma, Mylar dose, copper transmutation, copper dpa, and activation of the toroidal field coil dewar were investigated. Calculations revealed that the responses were most sensitive to the high-energy group cross sections of iron in the innermost regions containing stainless steel. For example, both the neutron heating of the toroidal field coil and the activation of the toroidal field coil dewar show an integral sensitivity of about -5 with respect to the iron total cross sections. Major contributors are the scattering cross sections of iron, with -2.7 and -4.4 for neutron heating and activation, respectively. The effects of changes in gamma cross sections were generally an order of magnitude lower.

  7. High-resolution linkage analyses to identify genes that influence Varroa sensitive hygiene behavior in honey bees.

    Directory of Open Access Journals (Sweden)

    Jennifer M Tsuruda

    Full Text Available Varroa mites (V. destructor) are a major threat to honey bees (Apis mellifera) and beekeeping worldwide and likely lead to colony decline if colonies are not treated. Most treatments involve chemical control of the mites; however, Varroa has evolved resistance to many of these miticides, leaving beekeepers with a limited number of alternatives. A non-chemical control method is highly desirable for numerous reasons including lack of chemical residues and decreased likelihood of resistance. Varroa sensitive hygiene behavior is one of two behaviors identified that are most important for controlling the growth of Varroa populations in bee hives. To identify genes influencing this trait, a study was conducted to map quantitative trait loci (QTL). Individual workers of a backcross family were observed and evaluated for their VSH behavior in a mite-infested observation hive. Bees that uncapped or removed pupae were identified. The genotypes for 1,340 informative single nucleotide polymorphisms were used to construct a high-resolution genetic map, and interval mapping was used to analyze the association of the genotypes with the performance of Varroa sensitive hygiene. We identified one major QTL on chromosome 9 (LOD score = 3.21) and a suggestive QTL on chromosome 1 (LOD = 1.95). The QTL confidence interval on chromosome 9 contains the gene 'no receptor potential A' and a dopamine receptor. 'No receptor potential A' is involved in vision and olfaction in Drosophila, and dopamine signaling has previously been shown to be required for aversive olfactory learning in honey bees, which is probably necessary for identifying mites within brood cells. Further studies on these candidate genes may allow for breeding bees with this trait using marker-assisted selection.

  8. Differential responsiveness to caffeine and perceived effects of caffeine in moderate and high regular caffeine consumers.

    Science.gov (United States)

    Attwood, A S; Higgs, S; Terry, P

    2007-03-01

    Individual differences in responsiveness to caffeine occur even within a caffeine-consuming population, but the factors that mediate differential responsiveness remain unclear. The aims were to compare caffeine's effects on performance and mood in a group of high vs moderate consumers of caffeine, and to examine the potential role of subjective awareness of the effects of caffeine in mediating any differential responsiveness. Two groups of regular caffeine consumers (200 mg/day) attended two sessions at which mood and cognitive functions were measured before and 30 min after consumption of 400 mg of caffeine or placebo in a capsule. Cognitive tests included visual information processing, match-to-sample visual search (MTS) and simple and choice reaction times. Post-session questionnaires asked participants to describe any perceived effect of capsule consumption. High consumers, but not moderate consumers, demonstrated significantly faster simple and choice reaction times after caffeine relative to placebo. These effects were not attributable to obvious group differences in withdrawal or tolerance because there were no group differences in baseline mood or in reports of negative affect after caffeine. Instead, the high consumers were more likely to report experiencing positive effects of caffeine, whereas the moderate consumers were more likely to report no effect. The sensitivity of caffeine consumers to the mood- and performance-enhancing effects of caffeine is related to their levels of habitual intake. High caffeine consumers are more likely than moderate consumers to perceive broadly positive effects of caffeine, and this may contribute to their levels of use.

  9. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES: their success as regularization methods is highly problem dependent.

  10. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES - their success as regularization methods is highly problem dependent.
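
    The two records above treat the iteration count as the regularization parameter. A minimal numerical illustration of this, on a generic symmetric ill-conditioned test problem rather than one of the papers' examples, is to run SciPy's MINRES with increasing maxiter and watch the error first fall and then typically grow again as noise is amplified (semiconvergence):

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(1)
n = 200
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 0.9 ** np.arange(n)              # rapidly decaying spectrum: ill-posed
A = (U * s) @ U.T                    # symmetric positive definite test matrix
x_true = U[:, :5] @ np.ones(5)       # "smooth" solution in the dominant modes
b = A @ x_true + 1e-6 * rng.normal(size=n)   # noisy right-hand side

for k in (5, 20, 200):               # iteration count = regularization knob
    xk, _ = minres(A, b, maxiter=k)
    print(f"maxiter={k:3d}  error={np.linalg.norm(xk - x_true):.3e}")
```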

  11. Analytic stochastic regularization and gauge invariance

    International Nuclear Information System (INIS)

    Abdalla, E.; Gomes, M.; Lima-Santos, A.

    1986-05-01

    A proof that analytic stochastic regularization breaks gauge invariance is presented. This is done by an explicit one-loop calculation of the vacuum polarization tensor in scalar electrodynamics, which turns out not to be transverse. The counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization are also analysed. (Author) [pt

  12. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc

  13. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2013-09-22

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity model. Both synthetic and field results show that: (1) LSM with a reflectivity model common to all the plane-wave gathers provides the best image when the migration velocity model is accurate, but it is more sensitive to velocity errors; (2) the regularized plane-wave LSM is more robust in the presence of velocity errors; and (3) LSM achieves both computational and I/O savings by plane-wave encoding compared to shot-domain LSM for the models tested.

  14. Discharge regularity in the turtle posterior crista: comparisons between experiment and theory.

    Science.gov (United States)

    Goldberg, Jay M; Holt, Joseph C

    2013-12-01

    Intra-axonal recordings were made from bouton fibers near their termination in the turtle posterior crista. Spike discharge, miniature excitatory postsynaptic potentials (mEPSPs), and afterhyperpolarizations (AHPs) were monitored during resting activity in both regularly and irregularly discharging units. Quantal size (qsize) and quantal rate (qrate) were estimated by shot-noise theory. Theoretically, the ratio, σV/(dμV/dt), between synaptic noise (σV) and the slope of the mean voltage trajectory (dμV/dt) near threshold crossing should determine discharge regularity. AHPs are deeper and more prolonged in regular units; as a result, dμV/dt is larger, the more regular the discharge. The qsize is larger and qrate smaller in irregular units; these oppositely directed trends lead to little variation in σV with discharge regularity. Of the two variables, dμV/dt is much more influential than the nearly constant σV in determining regularity. Sinusoidal canal-duct indentations at 0.3 Hz led to modulations in spike discharge and synaptic voltage. Gain, the ratio between the amplitudes of the two modulations, and the phase leads (relative to indentation) of both modulations are larger in irregular units. Gain variations parallel the sensitivity of the postsynaptic spike encoder, the set of conductances that converts synaptic input into spike discharge. Phase variations reflect both synaptic inputs to the encoder and postsynaptic processes. Experimental data were interpreted using a stochastic integrate-and-fire model. Advantages of an irregular discharge include an enhanced encoder gain and the prevention of nonlinear phase locking. Regular and irregular units are more efficient in the encoding of low- and high-frequency head rotations, respectively.
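
    A toy version of the stochastic integrate-and-fire encoder invoked above can make the regularity argument concrete: the same synaptic noise produces a regular or an irregular spike train depending on the depth and decay of the afterhyperpolarization. All parameter values below are illustrative, not fitted to the turtle data.

```python
import numpy as np

def isi_cv(ahp_amp, ahp_tau, seed=0):
    """Noisy integrate-and-fire afferent; returns the coefficient of
    variation (CV) of the interspike intervals. Voltage is normalized so
    that threshold = 1; all parameters are illustrative."""
    rng = np.random.default_rng(seed)
    dt, tau_m, drive, sigma = 1e-4, 0.02, 1.05, 0.15
    v, t_last, spikes = 0.0, 0.0, []
    for i in range(300_000):                     # 30 s of simulated time
        t = i * dt
        ahp = -ahp_amp * np.exp(-(t - t_last) / ahp_tau)
        v += dt / tau_m * (drive - v + ahp) \
             + sigma * np.sqrt(dt / tau_m) * rng.normal()
        if v >= 1.0:                             # threshold crossing -> spike
            spikes.append(t)
            v, t_last = 0.0, t
    isi = np.diff(spikes)
    return isi.std() / isi.mean()

# A deep, slow AHP imposes a long deterministic recovery -> lower CV (regular);
# a shallow, fast AHP leaves firing noise-dominated -> higher CV (irregular).
print("regular-like unit:   CV =", round(isi_cv(ahp_amp=2.0, ahp_tau=0.050), 2))
print("irregular-like unit: CV =", round(isi_cv(ahp_amp=0.2, ahp_tau=0.005), 2))
```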

  15. Regularities, Natural Patterns and Laws of Nature

    Directory of Open Access Journals (Sweden)

    Stathis Psillos

    2014-02-01

    Full Text Available  The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology.  Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.

  16. Wedge Splitting Test on Fracture Behaviour of Fiber Reinforced and Regular High Performance Concretes

    DEFF Research Database (Denmark)

    Hodicky, Kamil; Hulin, Thomas; Schmidt, Jacob Wittrup

    2013-01-01

    The fracture behaviour of three fiber reinforced and regular High Performance Concretes (HPC) is presented in this paper. Two mixes are based on optimization of HPC whereas the third mix was a commercial mix developed by CONTEC ApS (Denmark). The wedge splitting test setup with 48 cubical specimens...

  17. Experimental reconstruction of a highly reflecting fiber Bragg grating by using spectral regularization and inverse scattering.

    Science.gov (United States)

    Rosenthal, Amir; Horowitz, Moshe; Kieckbusch, Sven; Brinkmeyer, Ernst

    2007-10-01

    We demonstrate experimentally, for the first time to our knowledge, a reconstruction of a highly reflecting fiber Bragg grating from its complex reflection spectrum by using a regularization algorithm. The regularization method is based on correcting the measured reflection spectrum at the Bragg zone frequencies and enables the reconstruction of the grating profile using the integral-layer-peeling algorithm. A grating with an approximately uniform profile and with a maximum reflectivity of 99.98% was accurately reconstructed by measuring only its complex reflection spectrum.

  18. Synthesis of Trigeneration Systems: Sensitivity Analyses and Resilience

    Directory of Open Access Journals (Sweden)

    Monica Carvalho

    2013-01-01

    Full Text Available This paper presents sensitivity and resilience analyses for a trigeneration system designed for a hospital. The following information is utilized to formulate an integer linear programming model: (1) energy service demands of the hospital, (2) technical and economic characteristics of the potential technologies for installation, (3) prices of the available utilities interchanged, and (4) financial parameters of the project. The solution of the model, minimizing the annual total cost, provides the optimal configuration of the system (technologies installed and number of pieces of equipment) and the optimal operation mode (operational load of equipment, interchange of utilities with the environment, convenience of wasting cogenerated heat, etc.) at each temporal interval defining the demand. The broad range of technical, economic, and institutional uncertainties throughout the life cycle of energy supply systems for buildings makes it necessary to delve more deeply into the fundamental properties of resilient systems: feasibility, flexibility and robustness. The resilience of the obtained solution is tested by varying, within reasonable limits, selected parameters: energy demand, amortization and maintenance factor, natural gas price, self-consumption of electricity, and time-of-delivery feed-in tariffs.

  19. ℓ1/2-norm regularized nonnegative low-rank and sparse affinity graph for remote sensing image segmentation

    Science.gov (United States)

    Tian, Shu; Zhang, Ye; Yan, Yiming; Su, Nan

    2016-10-01

    Segmentation of real-world remote sensing images is a challenge due to the complex texture information with high heterogeneity. Thus, graph-based image segmentation methods have been attracting great attention in the field of remote sensing. However, most of the traditional graph-based approaches fail to capture the intrinsic structure of the feature space and are sensitive to noise. An ℓ1/2-norm regularization-based graph segmentation method is proposed to segment remote sensing images. First, we use the occlusion of the random texture model (ORTM) to extract the local histogram features. Then, an ℓ1/2-norm regularized low-rank and sparse representation (LNNLRS) is implemented to construct an ℓ1/2-regularized nonnegative low-rank and sparse graph (LNNLRS-graph), by the union of feature subspaces. Moreover, the LNNLRS-graph has a high ability to discriminate the manifold intrinsic structure of highly homogeneous texture information. Meanwhile, the LNNLRS representation takes advantage of the low-rank and sparse characteristics to remove noise and corrupted data. Last, we introduce the LNNLRS-graph into graph-regularized nonnegative matrix factorization to enhance the segmentation accuracy. The experimental results using remote sensing images show that when compared to five state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.

  20. Highly Sensitive Optical Receivers

    CERN Document Server

    Schneider, Kerstin

    2006-01-01

    Highly Sensitive Optical Receivers primarily treats the circuit design of optical receivers with external photodiodes. Continuous-mode and burst-mode receivers are compared. The monograph first summarizes the basics of III/V photodetectors, transistor and noise models, bit-error rate, sensitivity and analog circuit design, thus enabling readers to understand the circuits described in the main part of the book. In order to cover the topic comprehensively, detailed descriptions of receivers for optical data communication in general and, in particular, optical burst-mode receivers in deep-sub-µm CMOS are presented. Numerous detailed and elaborate illustrations facilitate better understanding.

  1. Sensitivity analyses of factors influencing CMAQ performance for fine particulate nitrate.

    Science.gov (United States)

    Shimadera, Hikari; Hayami, Hiroshi; Chatani, Satoru; Morino, Yu; Mori, Yasuaki; Morikawa, Tazuko; Yamaji, Kazuyo; Ohara, Toshimasa

    2014-04-01

    Improvement of air quality models is required so that they can be utilized to design effective control strategies for fine particulate matter (PM2.5). The Community Multiscale Air Quality modeling system was applied to the Greater Tokyo Area of Japan in winter 2010 and summer 2011. The model results were compared with observed concentrations of PM2.5 sulfate (SO4(2-)), nitrate (NO3(-)) and ammonium, and gaseous nitric acid (HNO3) and ammonia (NH3). The model approximately reproduced PM2.5 SO4(2-) concentration, but clearly overestimated PM2.5 NO3(-) concentration, which was attributed to overestimation of production of ammonium nitrate (NH4NO3). This study conducted sensitivity analyses of factors associated with the model performance for PM2.5 NO3(-) concentration, including temperature and relative humidity, emission of nitrogen oxides, seasonal variation of NH3 emission, HNO3 and NH3 dry deposition velocities, and heterogeneous reaction probability of dinitrogen pentoxide. Change in NH3 emission directly affected NH3 concentration, and substantially affected NH4NO3 concentration. Higher dry deposition velocities of HNO3 and NH3 led to substantial reductions of concentrations of the gaseous species and NH4NO3. Because uncertainties in NH3 emission and dry deposition processes are probably large, these processes may be key factors for improvement of the model performance for PM2.5 NO3(-). The Community Multiscale Air Quality modeling system clearly overestimated the concentration of fine particulate nitrate in the Greater Tokyo Area of Japan, which was attributed to overestimation of production of ammonium nitrate. Sensitivity analyses were conducted for factors associated with the model performance for nitrate. Ammonia emission and dry deposition of nitric acid and ammonia may be key factors for improvement of the model performance.

  2. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
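
    The essence of the estimator can be sketched in a few lines: bound the condition number of the sample covariance by shrinking its eigenvalues into an interval whose width is the target condition number. The rule for choosing the truncation level tau below is a simple heuristic stand-in for the paper's maximum-likelihood solution.

```python
import numpy as np

def condreg(S, kappa_max=30.0):
    """Well-conditioned covariance estimate: eigenvalues of the sample
    covariance S are floored so that cond(estimate) <= kappa_max. The
    floor tau is a heuristic here, not the paper's maximum-likelihood tau."""
    w, V = np.linalg.eigh(S)
    tau = w.max() / kappa_max
    return (V * np.maximum(w, tau)) @ V.T

rng = np.random.default_rng(0)
p, n = 50, 40                        # "large p, small n": S is singular
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)
Sigma_hat = condreg(S)
print(f"cond(S) = {np.linalg.cond(S):.2e}, "
      f"cond(estimate) = {np.linalg.cond(Sigma_hat):.1f}")   # <= 30
```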

  3. Analytic stochastic regularization and gauge theories

    International Nuclear Information System (INIS)

    Abdalla, E.; Gomes, M.; Lima-Santos, A.

    1987-04-01

    We prove that analytic stochastic regularization breaks gauge invariance. This is done by an explicit one-loop calculation of the two-, three- and four-point vertex functions of the gluon field in scalar chromodynamics, which turn out not to be gauge invariant. We analyse the counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization. (author) [pt

  4. Uncertainty and sensitivity analyses of ballast life-cycle cost and payback period

    Energy Technology Data Exchange (ETDEWEB)

    McMahon, James E.; Liu, Xiaomin; Turiel, Ike; Hakim, Sajid; Fisher, Diane

    2000-06-01

    The paper introduces an innovative methodology for evaluating the relative significance of energy-efficient technologies applied to fluorescent lamp ballasts. The method involves replacing the point estimates of life-cycle cost of the ballasts with uncertainty distributions reflecting the whole spectrum of possible costs and the assessed probability associated with each value. The results of the uncertainty and sensitivity analyses help analysts reduce the effort spent on data collection and carry out the analysis more efficiently. These methods also enable policy makers to gain an insightful understanding of which efficient technology alternatives benefit or cost what fraction of consumers, given the explicit assumptions of the analysis.
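
    The replacement of point estimates by distributions amounts to a Monte Carlo propagation, as in the hedged sketch below; every price, lifetime and distribution shown is hypothetical, chosen only to show the mechanics of producing a payback-period distribution rather than a single number.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
price      = rng.triangular(15, 20, 30, n)   # ballast purchase premium, $ (assumed)
power_save = rng.normal(25.0, 5.0, n)        # W saved vs. baseline (assumed)
hours      = rng.uniform(2500, 4000, n)      # operating hours per year (assumed)
elec_price = rng.normal(0.10, 0.02, n)       # $/kWh (assumed)
years      = 10

annual_saving = power_save / 1000 * hours * elec_price   # $/yr
lcc_delta = price - annual_saving * years                # < 0 => efficient wins
payback = price / annual_saving                          # simple payback, yr

print(f"P(payback < 3 yr) = {(payback < 3).mean():.2f}")
print(f"median payback    = {np.median(payback):.1f} yr")
```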

  5. Robust artificial neural network for reliability and sensitivity analyses of complex non-linear systems.

    Science.gov (United States)

    Oparaji, Uchenna; Sheu, Rong-Jiun; Bankhead, Mark; Austin, Jonathan; Patelli, Edoardo

    2017-12-01

    Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden required for uncertainty quantification, reliability and sensitivity analyses. An ANN with a selected architecture is trained with the back-propagation algorithm from a few data points representative of the input/output relationship of the underlying model of interest. However, differently performing ANNs might be obtained from the same training data as a result of the random initialization of the weight parameters in each network, leading to uncertainty in selecting the best-performing ANN. On the other hand, using cross-validation to select the ANN with the highest R² value can lead to bias in the prediction, because R² cannot determine whether the prediction made by an ANN is biased. Additionally, R² does not indicate whether a model is adequate, as it is possible to have a low R² for a good model and a high R² for a bad model. Hence, in this paper, we propose an approach to improve the robustness of predictions made by ANNs. The approach is based on a systematic combination of identically trained ANNs, coupling the Bayesian framework and model averaging. Additionally, the uncertainties of the robust prediction derived from the approach are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform reliability and sensitivity analyses on a process simulation model of a UK nuclear effluent treatment plant developed by the National Nuclear Laboratory (NNL), treated in this study as a black box, employing a set of training data as a test case. This model has been extensively validated against plant and experimental data and used to support the UK effluent discharge strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
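
    The combination step, stripped of the Bayesian weighting, can be illustrated as follows: train several identically configured networks that differ only in their random weight initialization, average their predictions, and read an empirical confidence band from the ensemble spread. The data and network settings below are synthetic assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)

# ten identical architectures, differing only in weight initialization
nets = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=seed).fit(X, y) for seed in range(10)]

X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
preds = np.stack([net.predict(X_test) for net in nets])   # shape (10, 50)
mean = preds.mean(axis=0)                                 # ensemble prediction
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)        # empirical 95% band
print(mean[:3], (hi - lo)[:3])
```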

  6. Three regularities of recognition memory: the role of bias.

    Science.gov (United States)

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.

  7. Regularization of DT-MRI Using 3D Median Filtering Methods

    Directory of Open Access Journals (Sweden)

    Soondong Kwon

    2014-01-01

    Full Text Available DT-MRI (diffusion tensor magnetic resonance imaging) tractography is a method to determine the architecture of axonal fibers in the central nervous system by computing the direction of the principal eigenvectors obtained from the tensor matrix, which is different from conventional isotropic MRI. Tractography based on DT-MRI is known to require many computations and is highly sensitive to noise. Hence, adequate regularization methods, such as image processing techniques, are in demand. Among many regularization methods we are interested in the median filtering method. In this paper, we extended two-dimensional median filters already developed to three-dimensional median filters. We compared four median filtering methods: the two-dimensional simple median method (SM2D), the two-dimensional successive Fermat method (SF2D), the three-dimensional simple median method (SM3D), and the three-dimensional successive Fermat method (SF3D). Three kinds of synthetic data with different altitude angles from axial slices and one kind of human data from an MR scanner were considered for numerical implementation by the four filtering methods.
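
    A minimal version of the SM3D idea is shown below, applied to a synthetic tensor volume with each of the six independent tensor components filtered separately; the component-wise treatment is an assumption of this sketch, and the successive Fermat variants are more elaborate than a plain median pass.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
# synthetic DT-MRI volume: 32^3 voxels, 6 unique tensor components per voxel
tensor = rng.normal(size=(32, 32, 32, 6))

smoothed = np.empty_like(tensor)
for c in range(6):
    # 3x3x3 neighbourhood median over the spatial axes, one component at a time
    smoothed[..., c] = median_filter(tensor[..., c], size=3)

print(tensor.std(), smoothed.std())   # the filtered volume has less noise
```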

  8. Output regularization of SVM seizure predictors: Kalman Filter versus the "Firing Power" method.

    Science.gov (United States)

    Teixeira, Cesar; Direito, Bruno; Bandarabadi, Mojtaba; Dourado, António

    2012-01-01

    Two methods for output regularization of support vector machine (SVM) classifiers were applied for seizure prediction in 10 patients with long-term annotated data. The outputs of the classifiers were regularized by two methods: one based on the Kalman Filter (KF) and the other based on a measure called the "Firing Power" (FP). The FP quantifies the rate of classification into the preictal class over a past time window. In order to enable the application of the KF, the classification problem was subdivided into two two-class problems, and the real-valued output of the SVMs was considered. The results indicate that the FP method raises fewer false alarms than the KF approach. The KF approach presents a higher sensitivity, but its high number of false alarms renders it of negligible applicability in some situations.
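
    The Firing Power measure itself is simple to state in code: a moving average of the binary classifier output, compared against an alarm threshold. The window length, threshold and simulated label stream below are illustrative choices, not the study's settings.

```python
import numpy as np

def firing_power(labels, window):
    """labels: 0/1 per epoch (1 = classified preictal). Returns FP in [0, 1]."""
    kernel = np.ones(window) / window
    return np.convolve(labels, kernel, mode="valid")   # moving average

rng = np.random.default_rng(1)
labels = (rng.random(600) < 0.1).astype(float)   # mostly interictal epochs
labels[400:500] = (rng.random(100) < 0.8)        # a burst of preictal outputs

fp = firing_power(labels, window=30)
alarms = np.flatnonzero(fp >= 0.5)               # threshold crossing -> alarm
print("first alarm at epoch:", alarms[0] + 30 if alarms.size else None)
```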

  9. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  10. Dose domain regularization of MLC leaf patterns for highly complex IMRT plans

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Dan; Yu, Victoria Y.; Ruan, Dan; Cao, Minsong; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States); O’Connor, Daniel [Department of Mathematics, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2015-04-15

    Purpose: The advent of automated beam orientation and fluence optimization enables more complex intensity modulated radiation therapy (IMRT) planning using an increasing number of fields to exploit the expanded solution space. This has created a challenge in converting complex fluences to robust multileaf collimator (MLC) segments for delivery. A novel method to regularize the fluence map and simplify MLC segments is introduced to maximize delivery efficiency, accuracy, and plan quality. Methods: In this work, we implemented a novel approach to regularize optimized fluences in the dose domain. The treatment planning problem was formulated in an optimization framework to minimize the segmentation-induced dose distribution degradation subject to a total variation regularization to encourage piecewise smoothness in fluence maps. The optimization problem was solved using a first-order primal-dual algorithm known as the Chambolle-Pock algorithm. Plans for 2 GBM, 2 head and neck, and 2 lung patients were created using 20 automatically selected and optimized noncoplanar beams. The fluence was first regularized using Chambolle-Pock and then stratified into equal steps, and the MLC segments were calculated using a previously described level reducing method. Isolated apertures with sizes smaller than preset thresholds of 1–3 bixels, which are square units of an IMRT fluence map from MLC discretization, were removed from the MLC segments. Performance of the dose domain regularized (DDR) fluences was compared to direct stratification and direct MLC segmentation (DMS) of the fluences using level reduction without dose domain fluence regularization. Results: For all six cases, the DDR method increased the average planning target volume dose homogeneity (D95/D5) from 0.814 to 0.878 while maintaining equivalent dose to organs at risk (OARs). Regularized fluences were more robust to MLC sequencing, particularly to the stratification and small aperture removal. The maximum and
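
    The Chambolle-Pock algorithm named above is a general first-order primal-dual method; the sketch below applies it to the textbook total-variation denoising problem rather than to the paper's dose-domain fluence objective, with illustrative values for the regularization weight, step sizes and iteration count.

```python
import numpy as np

def grad(u):                       # forward-difference gradient, shape (2, H, W)
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return np.stack([gx, gy])

def div(p):                        # negative adjoint of grad
    px, py = p
    dx = np.zeros_like(px); dx[0] = px[0]
    dx[1:-1] = px[1:-1] - px[:-2]; dx[-1] = -px[-2]
    dy = np.zeros_like(py); dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(f, lam=0.1, n_iter=200):
    """Chambolle-Pock for min_x 0.5*||x - f||^2 + lam * TV(x)."""
    tau = sigma = 1.0 / np.sqrt(8.0)        # tau*sigma*||grad||^2 <= 1
    x = f.copy(); x_bar = f.copy(); p = np.zeros((2,) + f.shape)
    for _ in range(n_iter):
        p = p + sigma * grad(x_bar)
        norm = np.maximum(1.0, np.sqrt((p ** 2).sum(axis=0)) / lam)
        p = p / norm                        # project dual onto {|p| <= lam}
        x_new = (x + tau * div(p) + tau * f) / (1.0 + tau)  # prox of data term
        x_bar = 2 * x_new - x               # over-relaxation, theta = 1
        x = x_new
    return x

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0      # piecewise constant
noisy = clean + 0.3 * rng.normal(size=clean.shape)
print(np.abs(noisy - clean).mean(), np.abs(tv_denoise(noisy) - clean).mean())
```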

  11. The relationship between rs3779084 in the dopa decarboxylase (DDC) gene and alcohol consumption is mediated by drinking motives in regular smokers.

    Science.gov (United States)

    Kristjansson, Sean D; Agrawal, Arpana; Lessov-Schlaggar, Christina N; Madden, Pamela A F; Cooper, M Lynne; Bucholz, Kathleen K; Sher, Kenneth J; Lynskey, Michael T; Heath, Andrew C

    2012-01-01

    Motivational models of alcohol use propose that the motivation to consume alcohol is the final common pathway to its use. Both alcohol consumption and drinking motives are influenced by latent genetic factors that partially overlap. This study investigated whether drinking motives mediate the associations between alcohol consumption and 2 single-nucleotide polymorphisms (SNPs) from genes involved in serotonin (TPH2; rs1386496) and dopamine synthesis (DDC; rs3779084). Based on earlier work showing that enhancement and coping motives were heritable in regular smokers but not in nonregular smokers, we hypothesized these motives would mediate the relationships between alcohol consumption and these SNPs in regular smokers. Drinking motives data were available from 830 young adult female twins (n = 344 regular smokers and n = 486 never/nonregular smokers). We used confirmatory factor analyses to model enhancement, coping, and alcohol consumption factors and to conduct mediation analyses in the regular smoker and never/nonregular smoker groups. Our hypothesis was partially supported. The relationship between alcohol consumption and rs1386496 was not mediated by drinking motives in either group. However, in the regular smokers, the relationship between alcohol consumption and rs3779084 was mediated by enhancement and coping motives. Carriers of the rs3779084 minor allele who were regular smokers reported more motivation to consume alcohol. Given this pattern of results was absent in the never/nonregular smokers, our results are consistent with a gene × smoking status interaction. In regular smokers, variability at the locus marked by rs3779084 in the DDC gene appears to index biologically based individual differences in the motivation to consume alcohol to attain or improve a positive affective state or to relieve a negative one. These results could be because of increased sensitivity to the reinforcing effects of alcohol among minor allele carriers who smoke, which might

  12. Dental plaque pH variation with regular soft drink, diet soft drink and high energy drink: an in vivo study.

    Science.gov (United States)

    Jawale, Bhushan Arun; Bendgude, Vikas; Mahuli, Amit V; Dave, Bhavana; Kulkarni, Harshal; Mittal, Simpy

    2012-03-01

    A high incidence of dental caries and dental erosion associated with frequent consumption of soft drinks has been reported. The purpose of this study was to evaluate the pH response of dental plaque to a regular, a diet and a high energy drink. Twenty subjects were recruited for this study. All subjects were between the ages of 20 and 25 and had at least four restored tooth surfaces present. The subjects were asked to refrain from brushing for 48 hours prior to the study. At baseline, plaque pH was measured at four separate locations using the harvesting method. Subjects were asked to swish with 15 ml of the respective soft drink for 1 minute. Plaque pH was measured at the four designated tooth sites 5, 10 and 20 minutes later. Subjects then repeated the experiment with the other two soft drinks. pH was lowest for the regular soft drink (2.65 ± 0.026), followed by the high energy drink (3.39 ± 0.026) and the diet soft drink (3.78 ± 0.006). The maximum drop in plaque pH was seen with the regular soft drink, followed by the high energy drink and the diet soft drink. The regular soft drink therefore possesses a greater acid challenge potential on enamel than the diet and high energy soft drinks. However, in this clinical trial, the pH associated with the soft drinks did not reach the critical pH expected for enamel demineralization and dissolution.

  13. SPES3 Facility RELAP5 Sensitivity Analyses on the Containment System for Design Review

    International Nuclear Information System (INIS)

    Achilli, A.; Congiu, C.; Ferri, R.; Bianchi, F.; Meloni, P.; Grgic, D.; Dzodzo, M.

    2012-01-01

    An Italian MSE R and D programme on Nuclear Fission is funding, through ENEA, the design and testing of the SPES3 facility at SIET, for IRIS reactor simulation. IRIS is a modular, medium size, advanced, integral PWR, developed by an international consortium of utilities, industries, research centres and universities. SPES3 simulates the primary, secondary and containment systems of IRIS, with 1:100 volume scale, full elevation and prototypical thermal-hydraulic conditions. The RELAP5 code was extensively used in support of the design of the facility to identify criticalities and weak points in the reactor simulation. FER, at Zagreb University, performed the IRIS reactor analyses with the RELAP5 and GOTHIC coupled codes. The comparison between IRIS and SPES3 simulation results led to a simulation-design feedback process with step-by-step modifications of the facility design, up to the final configuration. For this, a series of sensitivity cases was run to investigate specific aspects affecting the trends of the main plant parameters, such as the containment pressure and the power removed by the EHRS, in order to limit fuel clad temperature excursions during accidental transients. This paper summarizes the sensitivity analyses on the containment system that allowed the SPES3 facility design to be reviewed and confirmed its capability to appropriately simulate the IRIS plant.

  14. SPES3 Facility RELAP5 Sensitivity Analyses on the Containment System for Design Review

    Directory of Open Access Journals (Sweden)

    Andrea Achilli

    2012-01-01

    Full Text Available An Italian MSE R&D programme on Nuclear Fission is funding, through ENEA, the design and testing of the SPES3 facility at SIET, for IRIS reactor simulation. IRIS is a modular, medium size, advanced, integral PWR, developed by an international consortium of utilities, industries, research centres and universities. SPES3 simulates the primary, secondary and containment systems of IRIS, with 1:100 volume scale, full elevation and prototypical thermal-hydraulic conditions. The RELAP5 code was extensively used in support of the design of the facility to identify criticalities and weak points in the reactor simulation. FER, at Zagreb University, performed the IRIS reactor analyses with the RELAP5 and GOTHIC coupled codes. The comparison between IRIS and SPES3 simulation results led to a simulation-design feedback process with step-by-step modifications of the facility design, up to the final configuration. For this, a series of sensitivity cases was run to investigate specific aspects affecting the trends of the main plant parameters, such as the containment pressure and the power removed by the EHRS, in order to limit fuel clad temperature excursions during accidental transients. This paper summarizes the sensitivity analyses on the containment system that allowed the SPES3 facility design to be reviewed and confirmed its capability to appropriately simulate the IRIS plant.

  15. Sensitivity of MENA Tropical Rainbelt to Dust Shortwave Absorption: A High Resolution AGCM Experiment

    KAUST Repository

    Bangalath, Hamza Kunhu; Stenchikov, Georgiy L.

    2016-01-01

    Shortwave absorption is one of the most important, but the most uncertain, components of direct radiative effect by mineral dust. It has a broad range of estimates from different observational and modeling studies and there is no consensus on the strength of absorption. To elucidate the sensitivity of the Middle East and North Africa (MENA) tropical summer rainbelt to a plausible range of uncertainty in dust shortwave absorption, AMIP-style global high resolution (25 km) simulations are conducted with and without dust, using the High-Resolution Atmospheric Model (HiRAM). Simulations with dust comprise three different cases by assuming dust as a very efficient, standard and inefficient absorber. Inter-comparison of these simulations shows that the response of the MENA tropical rainbelt is extremely sensitive to the strength of shortwave absorption. Further analyses reveal that the sensitivity of the rainbelt stems from the sensitivity of the multi-scale circulations that define the rainbelt. The maximum response and sensitivity are predicted over the northern edge of the rainbelt, geographically over Sahel. The sensitivity of the responses over the Sahel, especially that of precipitation, is comparable to the mean state. Locally, the response in precipitation reaches up to 50% of the mean, while dust is assumed to be a very efficient absorber. Taking into account that Sahel has a very high climate variability and is extremely vulnerable to changes in precipitation, the present study suggests the importance of reducing uncertainty in dust shortwave absorption for a better simulation and interpretation of the Sahel climate.

  16. Sensitivity of MENA Tropical Rainbelt to Dust Shortwave Absorption: A High Resolution AGCM Experiment

    KAUST Repository

    Bangalath, Hamza Kunhu

    2016-06-13

    Shortwave absorption is one of the most important, but the most uncertain, components of direct radiative effect by mineral dust. It has a broad range of estimates from different observational and modeling studies and there is no consensus on the strength of absorption. To elucidate the sensitivity of the Middle East and North Africa (MENA) tropical summer rainbelt to a plausible range of uncertainty in dust shortwave absorption, AMIP-style global high resolution (25 km) simulations are conducted with and without dust, using the High-Resolution Atmospheric Model (HiRAM). Simulations with dust comprise three different cases by assuming dust as a very efficient, standard and inefficient absorber. Inter-comparison of these simulations shows that the response of the MENA tropical rainbelt is extremely sensitive to the strength of shortwave absorption. Further analyses reveal that the sensitivity of the rainbelt stems from the sensitivity of the multi-scale circulations that define the rainbelt. The maximum response and sensitivity are predicted over the northern edge of the rainbelt, geographically over Sahel. The sensitivity of the responses over the Sahel, especially that of precipitation, is comparable to the mean state. Locally, the response in precipitation reaches up to 50% of the mean, while dust is assumed to be a very efficient absorber. Taking into account that Sahel has a very high climate variability and is extremely vulnerable to changes in precipitation, the present study suggests the importance of reducing uncertainty in dust shortwave absorption for a better simulation and interpretation of the Sahel climate.

  17. Regularizations of two-fold bifurcations in planar piecewise smooth systems using blowup

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall; Hogan, S. J.

    2015-01-01

    … type of limit cycle that does not appear to be present in the original PWS system. For both types of limit cycle, we show that the criticality of the Hopf bifurcation that gives rise to periodic orbits is strongly dependent on the precise form of the regularization. Finally, we analyse the limit cycles as locally unique families of periodic orbits of the regularization and connect them, when possible, to limit cycles of the PWS system. We illustrate our analysis with numerical simulations and show how the regularized system can undergo a canard explosion phenomenon.

  18. High-intensity interval training improves insulin sensitivity in older individuals.

    Science.gov (United States)

    Søgaard, D; Lund, M T; Scheuer, C M; Dehlbaek, M S; Dideriksen, S G; Abildskov, C V; Christensen, K K; Dohlmann, T L; Larsen, S; Vigelsø, A H; Dela, F; Helge, J W

    2018-04-01

    Metabolic health may deteriorate with age as a result of altered body composition and decreased physical activity. Endurance exercise is known to counter these changes, delaying or even preventing the onset of metabolic diseases. High-intensity interval training (HIIT) is a time-efficient alternative to regular endurance exercise, and the aim of this study was to investigate the metabolic benefit of HIIT in older subjects. Twenty-two sedentary male (n = 11) and female (n = 11) subjects aged 63 ± 1 years performed HIIT three times/week for 6 weeks on a bicycle ergometer. Each HIIT session consisted of five 1-minute intervals interspersed with 1½-minute rests. Prior to the first and after the last HIIT session, whole-body insulin sensitivity, measured by a hyperinsulinaemic-euglycaemic clamp, plasma lipid levels, HbA1c, glycaemic parameters, body composition and maximal oxygen uptake were assessed. Muscle biopsies were obtained, from which the content of glycogen and of proteins involved in muscle glucose handling was determined. Insulin sensitivity (P = .011) and maximal oxygen uptake increased, and body fat (P < .05) decreased, after 6 weeks of HIIT. HbA1c decreased only in males (P = .001). Muscle glycogen content increased in both genders (P = .001) and, in line with this, GLUT4 (P < .05), glycogen synthase (P = .001) and hexokinase II (P < .05) content all increased. Six weeks of HIIT significantly improves metabolic health in older males and females by reducing age-related risk factors for cardiometabolic disease. © 2017 Scandinavian Physiological Society. Published by John Wiley & Sons Ltd.

  19. Uncertainty and sensitivity analyses of energy and visual performances of office building with external venetian blind shading in hot-dry climate

    International Nuclear Information System (INIS)

    Singh, Ramkishore; Lazarus, I.J.; Kishore, V.V.N.

    2016-01-01

    Highlights: • Various alternatives of glazing and venetian blind were simulated for office space. • Daylighting and energy performances were assessed for each alternative. • Large uncertainties were estimated in the energy consumptions and UDI values. • Glazing design parameters were prioritised by performing sensitivity analysis. • WWR, glazing type, blind orientation and slat angle were identified as top priorities. - Abstract: Fenestration has become an integral part of buildings and has a significant impact on energy and indoor visual performance. Inappropriate design of the fenestration component may lead to low energy efficiency and visual discomfort as a result of high solar and thermal heat gains, excessive daylight and direct sunlight. The external venetian blind has been identified as one of the effective shading devices for controlling heat gains and daylight through fenestration. This study explores uncertainty and sensitivity analyses to identify and prioritize the most influential parameters for designing glazed components that include external shading devices for office buildings. The study was performed for the hot-dry climate of Jodhpur (latitude 26°18′N, longitude 73°01′E) using EnergyPlus, a whole-building energy simulation tool providing a large number of inputs, for eight façade orientations. A total of 150 and 845 data points per orientation for the input variables were generated using Latin hypercube sampling and the extended FAST method for the uncertainty and sensitivity analyses, respectively. Results indicated a large uncertainty in the lighting, HVAC and source energy consumptions and in the useful daylight illuminance (UDI). The estimated coefficients of variation were highest (up to 106%) for UDI, followed by lighting energy (up to 45%) and HVAC energy use (around 33%). The sensitivity analysis identified window-to-wall ratio, glazing type, blind type (orientation of slats) and slat angle as highly influencing factors for energy and
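
    The sampling step behind such an analysis can be sketched with SciPy's quasi-Monte Carlo module: draw a space-filling Latin hypercube over a few façade parameters and evaluate the building-energy model at each sample. The parameter ranges and the stand-in model below are assumptions for illustration, not the paper's EnergyPlus setup.

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=150)                      # 150 samples in [0, 1)^3

# scale to: window-to-wall ratio, glazing U-value [W/m2K], slat angle [deg]
lower, upper = [0.1, 0.8, 0.0], [0.9, 5.8, 90.0]
X = qmc.scale(unit, lower, upper)

def energy_model(wwr, u_value, slat):             # stand-in for EnergyPlus
    return 80 + 60 * wwr * u_value / 5.8 - 0.1 * slat * wwr

E = np.array([energy_model(*x) for x in X])       # kWh/m2/yr, hypothetical
print(E.mean(), E.std() / E.mean())               # mean and coeff. of variation
```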

  20. Accelerator mass spectrometry analyses of environmental radionuclides: sensitivity, precision and standardisation

    Science.gov (United States)

    Hotchkis; Fink; Tuniz; Vogt

    2000-07-01

    Accelerator Mass Spectrometry (AMS) is the analytical technique of choice for the detection of long-lived radionuclides which cannot be practically analysed with decay counting or conventional mass spectrometry. AMS allows an isotopic sensitivity as low as one part in 10^15 for 14C (5.73 ka), 10Be (1.6 Ma), 26Al (720 ka), 36Cl (301 ka), 41Ca (104 ka), 129I (16 Ma) and other long-lived radionuclides occurring in nature at ultra-trace levels. These radionuclides can be used as tracers and chronometers in many disciplines: geology, archaeology, astrophysics, biomedicine and materials science. Low-level decay counting techniques have been developed in the last 40-50 years to detect the concentration of cosmogenic, radiogenic and anthropogenic radionuclides in a variety of specimens. Radioactivity measurements for long-lived radionuclides are made difficult by low counting rates and, in some cases, the need for complicated radiochemistry procedures and efficient detectors of soft beta particles and low-energy x-rays. The sensitivity of AMS is unaffected by the half-life of the isotope being measured, since the atoms, not the radiations that result from their decay, are counted directly. Hence, the efficiency of AMS in the detection of long-lived radionuclides is 10^6-10^9 times higher than decay counting, and the size of the sample required for analysis is reduced accordingly. For example, 14C is being analysed in samples containing as little as 20 microg of carbon. There is also a world-wide effort to use AMS for the analysis of rare nuclides of heavy mass, such as actinides, with important applications in safeguards and nuclear waste disposal. Finally, AMS microprobes are being developed for the in-situ analysis of stable isotopes in geological samples, semiconductors and other materials. Unfortunately, the use of AMS is limited by the expensive accelerator technology required, but there are several attempts to develop compact AMS spectrometers at low (advances in AMS

  1. Development of a Tandem Repeat-Based Polymerase Chain Displacement Reaction Method for Highly Sensitive Detection of 'Candidatus Liberibacter asiaticus'.

    Science.gov (United States)

    Lou, Binghai; Song, Yaqin; RoyChowdhury, Moytri; Deng, Chongling; Niu, Ying; Fan, Qijun; Tang, Yan; Zhou, Changyong

    2018-02-01

    Huanglongbing (HLB) is one of the most destructive diseases in citrus production worldwide. Early detection of HLB pathogens can facilitate timely removal of infected citrus trees in the field. However, the low titer and uneven distribution of HLB pathogens in host plants make reliable detection challenging. Therefore, the development of effective detection methods with high sensitivity is imperative. This study reports the development of a novel method, tandem repeat-based polymerase chain displacement reaction (TR-PCDR), for the detection of 'Candidatus Liberibacter asiaticus', a widely distributed HLB-associated bacterium. A uniquely designed primer set (TR2-PCDR-F/TR2-PCDR-1R) and a thermostable Taq DNA polymerase mutant with strand-displacement activity were used for TR-PCDR amplification. Performed in a regular thermal cycler, TR-PCDR could produce more than two amplicons after each amplification cycle. The detection limit of the developed TR-PCDR was 10 copies of the target DNA fragment, a sensitivity 100× higher than that of conventional PCR and similar to that of real-time PCR. Data from the detection of 'Ca. L. asiaticus' in field samples using the above three methods also showed similar results. No false-positive TR-PCDR amplification was observed from healthy citrus samples or water controls. These results illustrate that the developed TR-PCDR method can be applied to the reliable, highly sensitive and cost-effective detection of 'Ca. L. asiaticus'.

  2. Methylation-Sensitive High Resolution Melting (MS-HRM).

    Science.gov (United States)

    Hussmann, Dianna; Hansen, Lise Lotte

    2018-01-01

    Methylation-Sensitive High Resolution Melting (MS-HRM) is an in-tube, PCR-based method to detect methylation levels at specific loci of interest. A unique primer design facilitates a high sensitivity of the assays enabling detection of down to 0.1-1% methylated alleles in an unmethylated background. Primers for MS-HRM assays are designed to be complementary to the methylated allele, and a specific annealing temperature enables these primers to anneal both to the methylated and the unmethylated alleles thereby increasing the sensitivity of the assays. Bisulfite treatment of the DNA prior to performing MS-HRM ensures a different base composition between methylated and unmethylated DNA, which is used to separate the resulting amplicons by high resolution melting. The high sensitivity of MS-HRM has proven useful for detecting cancer biomarkers in a noninvasive manner in urine from bladder cancer patients, in stool from colorectal cancer patients, and in buccal mucosa from breast cancer patients. MS-HRM is a fast method to diagnose imprinted diseases and to clinically validate results from whole-epigenome studies. The ability to detect few copies of methylated DNA makes MS-HRM a key player in the quest for establishing links between environmental exposure, epigenetic changes, and disease.

  3. From regular text to artistic writing and artworks: Fourier statistics of images with low and high aesthetic appeal

    Directory of Open Access Journals (Sweden)

    Tamara eMelmer

    2013-04-01

    Full Text Available The spatial characteristics of letters and their influence on readability and letter identification have been intensely studied during the last decades. There have been few studies, however, on statistical image properties that reflect more global aspects of text, for example properties that may relate to its aesthetic appeal. It has been shown that natural scenes and a large variety of visual artworks possess a scale-invariant Fourier power spectrum that falls off linearly with increasing frequency in log-log plots. We asked whether images of text share this property. As expected, the Fourier spectrum of images of regular typed or handwritten text is highly anisotropic, i.e. the spectral image properties in vertical, horizontal and oblique orientations differ. Moreover, the spatial frequency spectra of text images are not scale invariant in any direction. The decline is shallower in the low-frequency part of the spectrum for text than for aesthetic artworks, whereas, in the high-frequency part, it is steeper. These results indicate that, in general, images of regular text contain less global structure (low spatial frequencies) relative to fine detail (high spatial frequencies) than images of aesthetic artworks. Moreover, we studied images of text with artistic claim (ornate print and calligraphy) and ornamental art. For some measures, these images assume average values intermediate between regular text and aesthetic artworks. Finally, to answer the question of whether the statistical properties measured by us are universal amongst humans or are subject to intercultural differences, we compared images from three different cultural backgrounds (Western, East Asian and Arabic). Results for different categories (regular text, aesthetic writing, ornamental art and fine art) were similar across cultures.

  4. From regular text to artistic writing and artworks: Fourier statistics of images with low and high aesthetic appeal

    Science.gov (United States)

    Melmer, Tamara; Amirshahi, Seyed A.; Koch, Michael; Denzler, Joachim; Redies, Christoph

    2013-01-01

    The spatial characteristics of letters and their influence on readability and letter identification have been intensely studied during the last decades. There have been few studies, however, on statistical image properties that reflect more global aspects of text, for example properties that may relate to its aesthetic appeal. It has been shown that natural scenes and a large variety of visual artworks possess a scale-invariant Fourier power spectrum that falls off linearly with increasing frequency in log-log plots. We asked whether images of text share this property. As expected, the Fourier spectrum of images of regular typed or handwritten text is highly anisotropic, i.e., the spectral image properties in vertical, horizontal, and oblique orientations differ. Moreover, the spatial frequency spectra of text images are not scale-invariant in any direction. The decline is shallower in the low-frequency part of the spectrum for text than for aesthetic artworks, whereas, in the high-frequency part, it is steeper. These results indicate that, in general, images of regular text contain less global structure (low spatial frequencies) relative to fine detail (high spatial frequencies) than images of aesthetic artworks. Moreover, we studied images of text with artistic claim (ornate print and calligraphy) and ornamental art. For some measures, these images assume average values intermediate between regular text and aesthetic artworks. Finally, to answer the question of whether the statistical properties measured by us are universal amongst humans or are subject to intercultural differences, we compared images from three different cultural backgrounds (Western, East Asian, and Arabic). Results for different categories (regular text, aesthetic writing, ornamental art, and fine art) were similar across cultures. PMID:23554592
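
    For readers who want to reproduce the key statistic, a minimal sketch (not the authors' code) of estimating the log-log slope of the radially averaged Fourier power spectrum of a grayscale image might look like this:

```python
# Estimate the log-log slope of the radially averaged Fourier power
# spectrum of a 2D grayscale image. Scale-invariant images (natural
# scenes, many artworks) give a roughly linear fall-off in log-log space.
import numpy as np

def radial_power_slope(img):
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    counts = np.bincount(r.ravel())
    # Mean power in each integer frequency ring
    radial = np.bincount(r.ravel(), power.ravel()) / np.maximum(counts, 1)
    freqs = np.arange(1, min(h, w) // 2)   # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    return slope

rng = np.random.default_rng(0)
print(radial_power_slope(rng.random((256, 256))))  # white noise: slope near 0
```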

  5. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
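
    As a rough sketch of the reconstruction approach described above, the following applies the alternating direction method of multipliers (ADMM) to an l1-regularized least-squares problem. A placeholder orthonormal transform (here the identity) stands in for the curvelet frame, which requires a dedicated library, and the toy system is a small dense matrix rather than a real CT geometry:

```python
# ADMM for  min_x 0.5*||A x - b||^2 + lam*||W x||_1 ,
# with W orthonormal (W^T W = I), so the x-update is a fixed linear solve.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_sparse_recon(A, b, W, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    lhs = A.T @ A + rho * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(lhs, A.T @ b + rho * W.T @ (z - u))
        z = soft_threshold(W @ x + u, lam / rho)   # proximal step on ||.||_1
        u = u + W @ x - z                          # dual variable update
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))                     # underdetermined toy system
x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = admm_sparse_recon(A, b, np.eye(100), lam=0.05)
print(np.round(x_hat[[5, 37, 80]], 2))  # approximately recovers the spikes
```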

  6. Sensitivity Study of Poisson's Ratio Used in Soil Structure Interaction (SSI) Analyses

    International Nuclear Information System (INIS)

    Han, Seung-ju; You, Dong-Hyun; Jang, Jung-bum; Yun, Kwan-hee

    2016-01-01

    The preliminary review for Design Certification (DC) of APR1400 was accepted by the NRC on March 4, 2015. After the acceptance of the application for standard DC of APR1400, KHNP has responded to the Requests for Additional Information (RAIs) raised by the NRC to support a full design certification review. Design certification is achieved through the NRC's rulemaking process and is founded on the staff's review of the application, which addresses the various safety issues associated with the proposed nuclear power plant design, independent of a specific site. The USNRC issued an RAI pertaining to Design Control Document (DCD) Ch. 3.7, 'Seismic Design': DCD Tables 3.7A-1 and 3.7A-2 show Poisson's ratios in the S1 and S2 soil profiles used for SSI analysis as great as 0.47 and 0.48, respectively. Based on staff experience, use of Poisson's ratios approaching these values may result in numerical instability of the SSI analysis results. A sensitivity study was performed using the ACS SASSI NI model of APR1400 with the S1 and S2 soil profiles to demonstrate that the Poisson's ratio values used in the SSI analyses of the S1 and S2 soil profile cases do not produce numerical instabilities in the SSI analysis results. No abrupt changes or spurious peaks, which tend to indicate the existence of numerical sensitivities in the SASSI solutions, appear in the computed transfer functions of the original SSI analyses that have maximum dynamic Poisson's ratio values of 0.47 and 0.48, nor in the re-computed transfer functions that have maximum dynamic Poisson's ratio values limited to 0.42 and 0.45.

  7. Sensitivity Analyses of Alternative Methods for Disposition of High-Level Salt Waste: A Position Statement

    International Nuclear Information System (INIS)

    Harris, S.P.; Tuckfield, R.C.

    1998-01-01

    This position paper provides the approach and detail pertaining to a sensitivity analysis of the Phase II evaluation criteria weights and utility function values on the total utility scores for each Initial List alternative, due to uncertainty and bias in engineering judgment.

  8. High blood pressure and visual sensitivity

    Science.gov (United States)

    Eisner, Alvin; Samples, John R.

    2003-09-01

    The study had two main purposes: (1) to determine whether the foveal visual sensitivities of people treated for high blood pressure (vascular hypertension) differ from the sensitivities of people who have not been diagnosed with high blood pressure and (2) to understand how visual adaptation is related to standard measures of systemic cardiovascular function. Two groups of middle-aged subjects, hypertensive and normotensive, were examined with a series of test/background stimulus combinations. All subjects met rigorous inclusion criteria for excellent ocular health. Although the visual sensitivities of the two subject groups overlapped extensively, the age-related rate of sensitivity loss was, for some measures, greater for the hypertensive subjects, possibly because of adaptation differences between the two groups. Overall, the degree of steady-state sensitivity loss resulting from an increase of background illuminance (for 580-nm backgrounds) was slightly less for the hypertensive subjects. Among normotensive subjects, the ability of a bright (3.8-log-td), long-wavelength (640-nm) adapting background to selectively suppress the flicker response of long-wavelength-sensitive (LWS) cones was related inversely to the ratio of mean arterial blood pressure to heart rate. The degree of selective suppression was also related to heart rate alone, and there was evidence that short-term changes of cardiovascular response were important. The results suggest that (1) vascular hypertension, or possibly its treatment, subtly affects visual function even in the absence of eye disease and (2) changes in blood flow affect retinal light-adaptation processes involved in the selective suppression of the flicker response from LWS cones caused by bright, long-wavelength backgrounds.

  9. Highly sensitive high resolution Raman spectroscopy using resonant ionization methods

    International Nuclear Information System (INIS)

    Owyoung, A.; Esherick, P.

    1984-05-01

    In recent years, the introduction of stimulated Raman methods has offered orders-of-magnitude improvement in spectral resolving power for gas-phase Raman studies. Nevertheless, the inherent weakness of the Raman process suggests the need for significantly more sensitive techniques in Raman spectroscopy. In this paper we describe a new approach to this problem. Our new technique, which we call ionization-detected stimulated Raman spectroscopy (IDSRS), combines high-resolution SRS with highly sensitive resonant laser ionization to achieve an increase in sensitivity of over three orders of magnitude. The excitation/detection process involves three sequential steps: (1) population of a vibrationally excited state via stimulated Raman pumping; (2) selective ionization of the vibrationally excited molecule with a tunable uv source; and (3) collection of the ionized species at biased electrodes, where they are detected as current in an external circuit.

  10. Demonstrating the efficiency of the EFPC criterion by means of Sensitivity analyses

    International Nuclear Information System (INIS)

    Munier, Raymond

    2007-04-01

    Within the framework of a project to characterise large fractures, a modelling effort was initiated to evaluate the use of a pair of full perimeter criteria, FPC and EFPC, for detecting fractures that could jeopardize the integrity of the canisters in the case of a large nearby earthquake. Though some sensitivity studies were performed in the method study, these mainly targeted aspects of the Monte Carlo simulations; the impact of uncertainties in the DFN model upon the efficiency of the FPI criteria was left unattended. The main purpose of this report is, therefore, to explore the impact of DFN variability upon the efficiency of the FPI criteria. The outcome of the present report may thus be regarded as complementary to the analyses presented in SKB-R-06-54. To appreciate the details of the present report, the reader should be acquainted with the simulation procedure described in the earlier report. The most important conclusion of this study is that the efficiency of the EFPC is high for all tested model variants. That is, compared to blind deposition, the EFPC is a very powerful tool to identify unsuitable deposition holes, and it is essentially insensitive to variations in the DFN model. If information from adjacent tunnels is used in addition to EFPC, then the probability of detecting a critical deposition hole is almost 100%.

  11. Stochastic methods for the quantification of sensitivities and uncertainties in criticality analyses; Stochastische Methoden zur Quantifizierung von Sensitivitaeten und Unsicherheiten in Kritikalitaetsanalysen

    Energy Technology Data Exchange (ETDEWEB)

    Behler, Matthias; Bock, Matthias; Stuke, Maik; Wagner, Markus

    2014-06-15

    This work describes statistical analyses based on Monte Carlo sampling methods for criticality safety analyses. The methods analyse a large number of calculations of a given problem with statistically varied model parameters to determine uncertainties and sensitivities of the computed results. The GRS development SUnCISTT (Sensitivities and Uncertainties in Criticality Inventory and Source Term Tool) is a modular, easily extensible abstract interface program designed to perform such Monte Carlo sampling-based uncertainty and sensitivity analyses in the field of criticality safety. It couples different criticality and depletion codes commonly used in nuclear criticality safety assessments to the well-established GRS tool SUSA for sensitivity and uncertainty analyses. For uncertainty analyses of criticality calculations, SUnCISTT couples various SCALE sequences developed at Oak Ridge National Laboratory, as well as the general Monte Carlo N-particle transport code MCNP from Los Alamos National Laboratory, to SUSA. The impact of manufacturing tolerances of a fuel assembly configuration on the neutron multiplication factor is shown for the various sequences. Uncertainties in nuclear inventories, dose rates, or decay heat can be investigated via the coupling of the GRS depletion system OREST to SUSA; some results for a simplified irradiated pressurized water reactor (PWR) UO{sub 2} fuel assembly are shown. SUnCISTT also combines the two aforementioned modules for burnup-credit criticality analysis of spent nuclear fuel, to ensure an uncertainty and sensitivity analysis using the variations of manufacturing tolerances in the burn-up code and the criticality code simultaneously. Calculations and results for a storage cask loaded with typical irradiated PWR UO{sub 2} fuel are shown, including Monte Carlo sampled axial burn-up profiles. The application of SUnCISTT in the field of code validation, specifically how it is applied to compare a simulation model to available benchmark experiments, is also described.
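
    A minimal sketch of the Monte Carlo sampling workflow such a tool automates (the tolerance distributions and the stand-in model below are hypothetical placeholders, not real fuel data or a criticality code): sample the varied parameters, run the model for each sample, then summarize the output uncertainty and rank input sensitivities, e.g. by Spearman rank correlation.

```python
# Monte Carlo sampling-based uncertainty and sensitivity analysis sketch.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 500
# Hypothetical manufacturing-tolerance distributions (placeholders)
samples = {
    "enrichment_pct": rng.normal(4.0, 0.05, n),
    "pellet_diam_mm": rng.normal(8.2, 0.01, n),
    "clad_thick_mm":  rng.uniform(0.55, 0.65, n),
}

def model(e, d, c):
    # Stand-in for a k_eff calculation by a transport/depletion code
    return 0.9 + 0.02 * e + 0.005 * d - 0.01 * c

k = model(samples["enrichment_pct"], samples["pellet_diam_mm"],
          samples["clad_thick_mm"])
print(f"k_eff: mean={k.mean():.5f}, std={k.std():.5f}")   # uncertainty
for name, vals in samples.items():                        # sensitivity ranking
    rho, _ = spearmanr(vals, k)
    print(f"{name}: Spearman rho = {rho:+.2f}")
```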

  12. Real time QRS complex detection using DFA and regular grammar.

    Science.gov (United States)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed Hedi

    2017-02-28

    The detection of the sequence of Q, R, and S peaks (the QRS complex) is a crucial procedure in electrocardiogram (ECG) processing and analysis. We propose a novel approach for QRS complex detection based on deterministic finite automata with the addition of some constraints. This paper confirms that regular grammar is useful for extracting QRS complexes and interpreting normalized ECG signals. A QRS is assimilated to a pair of adjacent peaks which meet certain criteria of standard deviation and duration. The proposed method was applied to several kinds of ECG signals drawn from the standard MIT-BIH arrhythmia database; a total of 48 signals were used. For an input signal, several parameters were determined, such as QRS durations, RR distances, and the peaks' amplitudes. The σRR and σQRS parameters were added to quantify the regularity of RR distances and QRS durations, respectively. The sensitivity rate of the suggested method was 99.74% and the specificity rate was 99.86%. Moreover, the variation of the sensitivity and specificity rates with the signal-to-noise ratio was evaluated. Regular grammar with the addition of some constraints, together with deterministic automata, proved functional for ECG signal diagnosis. Compared to statistical methods, the use of grammar provides satisfactory and competitive results, with indices that are comparable to or even better than those cited in the literature.
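
    The automaton idea can be sketched as follows (a simplified illustration, not the authors' implementation; peak detection itself and the duration threshold are placeholder assumptions):

```python
# Two-state deterministic finite automaton that accepts a QRS-like pattern:
# a pair of adjacent peaks whose separation satisfies a duration constraint.

def qrs_pairs(peaks, fs=360, max_qrs_s=0.12):
    """peaks: sample indices of detected extrema; fs: sampling rate (Hz)."""
    state, first = "WAIT_FIRST", None
    detections = []
    for p in peaks:
        if state == "WAIT_FIRST":
            first, state = p, "WAIT_SECOND"
        elif state == "WAIT_SECOND":
            if (p - first) / fs <= max_qrs_s:   # adjacent peaks close enough
                detections.append((first, p))   # accept: emit a QRS pair
                state, first = "WAIT_FIRST", None
            else:
                first = p                       # too far apart: restart here
    return detections

# Toy peak stream at 360 Hz: two valid pairs and one isolated peak
print(qrs_pairs([100, 130, 1000, 2000, 2025]))  # [(100, 130), (2000, 2025)]
```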

  13. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN' [Brouwer, A.E., Cohen, A.M., Neumaier, A., Distance-Regular Graphs, Springer, 1989].

  14. Operationalization of the Russian Version of Highly Sensitive Person Scale

    Directory of Open Access Journals (Sweden)

    Регина Вячеславовна Ершова

    2018-12-01

    Full Text Available The aim of the present study was to operationalize a Russian version of the Highly Sensitive Person Scale (HSPS). The empirical data were collected in two ways: actively, through oral advertising and inviting those who wished to take part in the study (snowball technique), and passively (placement of ads about taking part in the research on the social networks VKontakte and Facebook). As a result, 350 university students (117 men, 233 women; average age 18.2 (±1.7)) applied to a research laboratory and filled out the HSPS questionnaire, and another 510 respondents (380 women, 130 men; average age 22.6 (±7.9)) filled out the HSPS online. The results of the study did not confirm the one-dimensional model of the construct proposed by Aron & Aron (1997), nor the three-factor solution most commonly used in English-language studies. The hierarchical cluster and confirmatory analyses used in the operationalization procedure allowed us to conclude that the variance of the Russian version of the HSPS is best described by a two-factor model comprising two separate subscales: Ease of Excitation (EOE) and Low Threshold of Sensitivity (LTS). Sensory processing sensitivity may be defined as an increased susceptibility to external and internal stimuli, realized through negative emotional responses and deep susceptibility (distress) to excessive stimulation.

  15. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a

  16. Epidermis Microstructure Inspired Graphene Pressure Sensor with Random Distributed Spinosum for High Sensitivity and Large Linearity.

    Science.gov (United States)

    Pang, Yu; Zhang, Kunning; Yang, Zhen; Jiang, Song; Ju, Zhenyi; Li, Yuxing; Wang, Xuefeng; Wang, Danyang; Jian, Muqiang; Zhang, Yingying; Liang, Renrong; Tian, He; Yang, Yi; Ren, Tian-Ling

    2018-03-27

    Recently, wearable pressure sensors have attracted tremendous attention because of their potential applications in monitoring physiological signals for human healthcare. Sensitivity and linearity are the two most essential parameters for pressure sensors. Although various designed micro/nanostructure morphologies have been introduced, the trade-off between sensitivity and linearity has not been well balanced. Human skin, which contains force receptors in a reticular layer, has a high sensitivity even for large external stimuli. Herein, inspired by the skin epidermis with high-performance force sensing, we have proposed a special surface morphology with spinosum microstructure of random distribution via the combination of an abrasive paper template and reduced graphene oxide. The sensitivity of the graphene pressure sensor with random distribution spinosum (RDS) microstructure is as high as 25.1 kPa^-1 in a wide linearity range of 0-2.6 kPa. Our pressure sensor exhibits superior comprehensive properties compared with previous surface-modified pressure sensors. According to simulation and mechanism analyses, the spinosum microstructure and random distribution contribute to the high sensitivity and large linearity range, respectively. In addition, the pressure sensor shows promising potential in detecting human physiological signals, such as heartbeat, respiration, phonation, and human motions of a pushup, arm bending, and walking. The wearable pressure sensor array was further used to detect gait states of supination, neutral, and pronation. The RDS microstructure provides an alternative strategy to improve the performance of pressure sensors and extend their potential applications in monitoring human activities.

  17. Analysis of regularized inversion of data corrupted by white Gaussian noise

    International Nuclear Information System (INIS)

    Kekkonen, Hanne; Lassas, Matti; Siltanen, Samuli

    2014-01-01

    Tikhonov regularization is studied in the case of a linear pseudodifferential operator as the forward map and additive white Gaussian noise as the measurement error. The measurement model for an unknown function u(x) is $m(x) = Au(x) + \delta\,\varepsilon(x)$, where $\delta > 0$ is the noise magnitude. If $\varepsilon$ were an $L^2$-function, Tikhonov regularization would give the estimate $T_\alpha(m) = \arg\min_{u \in H^r} \left\{ \|Au - m\|_{L^2}^2 + \alpha \|u\|_{H^r}^2 \right\}$ for $u$, where $\alpha = \alpha(\delta)$ is the regularization parameter. Here penalization of the Sobolev norm $\|u\|_{H^r}$ covers the cases of standard Tikhonov regularization ($r = 0$) and first-derivative penalty ($r = 1$). Realizations of white Gaussian noise are almost never in $L^2$, but do belong to $H^s$ with probability one if $s < 0$ is small enough. A modification of Tikhonov regularization theory is presented, covering the case of white Gaussian measurement noise. Furthermore, the convergence of regularized reconstructions to the correct solution as $\delta \to 0$ is proven in appropriate function spaces using microlocal analysis. The convergence of the related finite-dimensional problems to the infinite-dimensional problem is also analysed. (paper)
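
    A finite-dimensional illustration of the estimator above (assuming a discretized operator A and r = 0, so the penalty reduces to the standard L^2 norm), solved via the normal equations $(A^T A + \alpha I)u = A^T m$:

```python
# Tikhonov-regularized solution of an ill-conditioned linear system with
# additive white Gaussian measurement noise.
import numpy as np

def tikhonov(A, m, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ m)

rng = np.random.default_rng(0)
n = 50
A = np.tril(np.ones((n, n))) / n              # ill-conditioned integration map
u_true = np.sin(np.linspace(0, np.pi, n))
delta = 1e-3                                  # noise magnitude
m = A @ u_true + delta * rng.normal(size=n)   # white-noise measurement model
for alpha in (1e-8, 1e-4, 1e-1):
    err = np.linalg.norm(tikhonov(A, m, alpha) - u_true) / np.linalg.norm(u_true)
    print(f"alpha={alpha:.0e}: relative error {err:.3f}")  # too small: noisy,
                                                           # too large: oversmooth
```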

  18. Structural characterization of the packings of granular regular polygons.

    Science.gov (United States)

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.

  19. Retinal sensitivity and choroidal thickness in high myopia.

    Science.gov (United States)

    Zaben, Ahmad; Zapata, Miguel Á; Garcia-Arumi, Jose

    2015-03-01

    To estimate the association between choroidal thickness in the macular area and retinal sensitivity in eyes with high myopia. This was a cross-sectional study of patients with high myopia, all of whom had their retinal sensitivity measured with macular integrity assessment microperimetry. The choroidal thicknesses in the macular area were then measured by optical coherence tomography, and statistical correlations between their functionality and the anatomical structure, as assessed by both types of measurements, were analyzed. Ninety-six eyes from 77 patients with high myopia were studied. The patients had a mean age ± standard deviation of 38.9 ± 13.2 years, with spherical equivalent values ranging from -6.00 diopters to -20.00 diopters (-8.74 ± 2.73 diopters). The mean central choroidal thickness was 159.00 ± 50.57 µm. The mean choroidal thickness was directly correlated with sensitivity (r = 0.306; P = 0.004) and visual acuity but inversely correlated with the spherical equivalent values and patient age. The mean sensitivity was not significantly correlated with the macular foveal thickness (r = -0.174; P = 0.101) or with the overall macular thickness (r = 0.103; P = 0.334); furthermore, the mean sensitivity was significantly correlated with visual acuity (r = 0.431; P < 0.001) and the spherical equivalent values (r = -0.306; P = 0.003). Retinal sensitivity in highly myopic eyes is directly correlated with choroidal thickness and does not seem to be associated with retinal thickness. Thus, in patients with high myopia, accurate measurements of choroidal thickness may provide more accurate information about this pathologic condition, because choroidal thickness correlates to a greater degree with the functional parameters, patient age, and spherical equivalent values.

  20. High-Sensitivity C-Reactive Protein as a Predictor of Cardiovascular Events after ST-Elevation Myocardial Infarction

    Energy Technology Data Exchange (ETDEWEB)

    Ribeiro, Daniel Rios Pinto; Ramos, Adriane Monserrat; Vieira, Pedro Lima; Menti, Eduardo; Bordin, Odemir Luiz Jr.; Souza, Priscilla Azambuja Lopes de; Quadros, Alexandre Schaan de; Portal, Vera Lúcia, E-mail: veraportal.pesquisa@gmail.com [Programa de Pós-Graduação em Ciências da Saúde: Cardiologia - Instituto de Cardiologia/Fundação Universitária de Cardiologia, Porto Alegre, RS (Brazil)

    2014-07-15

    The association between high-sensitivity C-reactive protein and recurrent major adverse cardiovascular events (MACE) in patients with ST-elevation myocardial infarction who undergo primary percutaneous coronary intervention remains controversial. To investigate the potential association between high-sensitivity C-reactive protein and an increased risk of MACE such as death, heart failure, reinfarction, and new revascularization in patients with ST-elevation myocardial infarction treated with primary percutaneous coronary intervention. This prospective cohort study included 300 individuals aged >18 years who were diagnosed with ST-elevation myocardial infarction and underwent primary percutaneous coronary intervention at a tertiary health center. An instrument evaluating clinical variables and the Thrombolysis in Myocardial Infarction (TIMI) and Global Registry of Acute Coronary Events (GRACE) risk scores was used. High-sensitivity C-reactive protein was determined by nephelometry. The patients were followed up during hospitalization and for up to 30 days after infarction for the occurrence of MACE. Student's t, Mann-Whitney, chi-square, and logistic regression tests were used for statistical analyses. P values of ≤0.05 were considered statistically significant. The mean age was 59.76 years, and 69.3% of patients were male. No statistically significant association was observed between high-sensitivity C-reactive protein and recurrent MACE (p = 0.11). However, high-sensitivity C-reactive protein was independently associated with 30-day mortality when adjusted for the TIMI [odds ratio (OR), 1.27; 95% confidence interval (CI), 1.07-1.51; p = 0.005] and GRACE (OR, 1.26; 95% CI, 1.06-1.49; p = 0.007) risk scores. Although high-sensitivity C-reactive protein was not predictive of combined major cardiovascular events within 30 days after ST-elevation myocardial infarction in patients who underwent primary angioplasty and stent implantation, it was an independent predictor of 30-day mortality.

  1. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cogen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  2. Linear deflectometry - Regularization and experimental design [Lineare Deflektometrie - Regularisierung und experimentelles Design]

    KAUST Repository

    Balzer, Jonathan

    2011-01-01

    Specular surfaces can be measured with deflectometric methods. The solutions form a one-parameter family whose properties are discussed in this paper. We show in theory and experiment that the shape sensitivity of solutions decreases with growing distance from the optical center of the imaging component of the sensor system, and we propose a novel regularization strategy. Recommendations for the construction of a measurement setup aim to benefit this strategy as well as the contrarian standard approach of regularization by specular stereo. © Oldenbourg Wissenschaftsverlag.

  3. Prospective associations of social self-control with drug use among youth from regular and alternative high schools

    Directory of Open Access Journals (Sweden)

    Sun Ping

    2007-07-01

    Full Text Available Abstract Background This study examined the one-year prospective associations between adolescent social self-control and drug outcomes (cigarette use, alcohol use, marijuana use, hard drug use, and problem drug use) among adolescents from regular and continuation high schools. In our previous cross-sectional study, poor social self-control was found to be associated with higher drug use, controlling for 12 personality disorder categories. In this study, we attempted to find out (a) whether lack of social self-control predicted drug use one year later, and (b) whether drug use at baseline predicted social self-control one year later. Methods We surveyed 2081 older adolescents from 9 regular (N = 1529) and 9 continuation (alternative; N = 552) high schools in the Los Angeles area. Data were collected at two time points separated by approximately 1 year. Results Past-30-day cigarette smoking, marijuana use, hard drug use, and problem drug use at baseline were found to predict lower social self-control at follow-up, controlling for baseline social self-control and demographic variables. The effect of problem drug use as a one-year predictor of social self-control was found to be moderated by school type (regular or continuation high school), such that the relationship was significant for continuation high school students only. Conversely, social self-control was found to predict past-30-day alcohol use, marijuana use, and problem drug use, controlling for baseline drug use and demographic variables. For alcohol use, marijuana use, and problem drug use outcomes, school type was not found to moderate the effects of social self-control, though an interaction effect was found regarding cigarette smoking. Social self-control was a significant predictor of cigarette use only at regular high schools. Conclusion The results indicate that social self-control and drug use share a reciprocal relationship. Lack of social self-control in adolescents seems to predict subsequent drug use, and drug use, in turn, appears to diminish social self-control.

  4. Laser-engraved carbon nanotube paper for instilling high sensitivity, high stretchability, and high linearity in strain sensors

    KAUST Repository

    Xin, Yangyang

    2017-06-29

    There is an increasing demand for strain sensors with high sensitivity and high stretchability for new applications such as robotics or wearable electronics. However, for the available technologies, the sensitivity of the sensors varies widely. These sensors are also highly nonlinear, making reliable measurement challenging. Here we introduce a new family of sensors composed of a laser-engraved carbon nanotube paper embedded in an elastomer. A roll-to-roll pressing of these sensors activates a pre-defined fragmentation process, which results in a well-controlled, fragmented microstructure. Such sensors are reproducible and durable and can attain ultrahigh sensitivity and high stretchability (with a gauge factor of over 4.2 × 10^4 at 150% strain). Moreover, they can attain high linearity from 0% to 15% and from 22% to 150% strain. They are good candidates for stretchable electronic applications that require high sensitivity and linearity at large strains.

  5. Regular graph construction for semi-supervised learning

    International Nuclear Information System (INIS)

    Vega-Oliveros, Didier A; Berton, Lilian; Eberle, Andre Mantini; Lopes, Alneu de Andrade; Zhao, Liang

    2014-01-01

    Semi-supervised learning (SSL) stands out for using a small amount of labeled points for data clustering and classification. In this scenario, graph-based methods allow the analysis of local and global characteristics of the available data by identifying classes or groups regardless of data distribution and by representing submanifolds in Euclidean space. Most methods used in the literature for SSL classification pay little attention to graph construction. However, regular graphs can obtain better classification accuracy than traditional methods such as k-nearest neighbor (kNN), since kNN favors the generation of hubs and is not appropriate for high-dimensional data. Nevertheless, methods commonly used for generating regular graphs have high computational cost. We tackle this problem by introducing an alternative method for the generation of regular graphs with better runtime performance than the methods usually found in the area. Our technique is based on the preferential selection of vertices according to some topological measures, like closeness, generating at the end of the process a regular graph. Experiments using the global and local consistency method for label propagation show that our method provides a classification rate better than or equal to that of kNN.
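
    The label-propagation step used in the experiments (the global and local consistency method) can be sketched as follows; for brevity the sketch builds a plain symmetric kNN affinity graph rather than the regular graph produced by the proposed method:

```python
# Local and global consistency label propagation (Zhou et al. style):
# F = (I - alpha*S)^{-1} Y with S the symmetrically normalized affinity.
import numpy as np

def label_propagation(X, y, k=5, alpha=0.99):
    """y: -1 for unlabeled points, else class index."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):                        # symmetric kNN affinity graph
        for j in np.argsort(d2[i])[1:k + 1]:  # skip self at index 0
            W[i, j] = W[j, i] = np.exp(-d2[i, j])
    D = np.diag(W.sum(1) ** -0.5)
    S = D @ W @ D                             # normalized graph operator
    classes = np.unique(y[y >= 0])
    Y = np.zeros((n, len(classes)))
    for c, cls in enumerate(classes):
        Y[y == cls, c] = 1.0
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)  # closed-form propagation
    return classes[F.argmax(1)]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = -np.ones(40, dtype=int); y[0], y[20] = 0, 1    # one label per cluster
print(label_propagation(X, y))                     # labels spread to clusters
```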

  6. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.

  7. Synchronisation phenomenon in three blades rotor driven by regular or chaotic oscillations

    Directory of Open Access Journals (Sweden)

    Szmit Zofia

    2018-01-01

    Full Text Available The goal of the paper is to analyse the influence of different types of excitation on the synchronisation phenomenon in the case of a rotating system composed of a rigid hub and three flexible composite beams. In the model it is assumed that two blades, due to structural differences, are de-tuned. Numerical calculations are divided into two parts: first the rotating system is excited by a torque given by a regular harmonic function, then in the second part the torque is produced by a chaotic Duffing oscillator. The synchronisation phenomenon between the beams is analysed for both regular and chaotic motions. Partial differential equations of motion are solved numerically, and resonance curves, time series and Poincaré maps are presented for selected excitation torques.
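
    A chaotic driving torque of the kind described can be generated by integrating a Duffing oscillator; the sketch below uses a standard chaotic parameter set (an assumption, not the paper's values) and a hand-rolled RK4 step:

```python
# Duffing oscillator  x'' + delta*x' + alpha*x + beta*x^3 = gamma*cos(omega*t),
# integrated with fourth-order Runge-Kutta; x(t) serves as a chaotic signal.
import numpy as np

def duffing_series(t_end=200.0, dt=0.01, delta=0.3, alpha=-1.0, beta=1.0,
                   gamma=0.5, omega=1.2):
    def rhs(t, s):
        x, v = s
        return np.array([v, -delta * v - alpha * x - beta * x**3
                         + gamma * np.cos(omega * t)])
    steps = int(t_end / dt)
    s, t = np.array([0.1, 0.0]), 0.0
    out = np.empty(steps)
    for i in range(steps):
        k1 = rhs(t, s)
        k2 = rhs(t + dt / 2, s + dt / 2 * k1)
        k3 = rhs(t + dt / 2, s + dt / 2 * k2)
        k4 = rhs(t + dt, s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        out[i] = s[0]
    return out  # use x(t), suitably scaled, as the driving torque signal

torque = duffing_series()
print(torque[-5:])  # aperiodic, bounded samples of the chaotic forcing
```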

  8. Highly sensitive detection using microring resonator and nanopores

    Science.gov (United States)

    Bougot-Robin, K.; Hoste, J. W.; Le Thomas, N.; Bienstman, P.; Edel, J. B.

    2016-04-01

    One of the most significant challenges facing physical and biological scientists is the accurate detection and identification of single molecules in free-solution environments. The ability to perform such sensitive and selective measurements opens new avenues for a large number of applications in biological, medical and chemical analysis, where small sample volumes and low analyte concentrations are the norm. Access to information at the single- or few-molecule scale is rendered possible by a fine combination of recent advances in technologies. We propose a novel detection method that combines highly sensitive label-free resonant sensing, obtained with high-Q microcavities, and position control in nanoscale pores (nanopores). In addition to being label-free and highly sensitive, our technique is immobilization-free and does not rely on surface biochemistry to bind probes on a chip. This is a significant advantage, both in terms of biological uncertainties and in requiring fewer biological preparation steps. Through the combination of high-Q photonic structures with translocation through a nanopore, either at the end of a pipette or through a solid-state membrane, we believe significant advances can be achieved in the field of biosensing. Silicon microrings are highly advantageous in terms of sensitivity, multiplexing, and microfabrication, and are chosen for this study. In terms of nanopores, we consider a nanopore at the end of a nanopipette, with the pore being approached from the pipette with nanoprecise mechanical control. Alternatively, solid-state nanopores can be fabricated through a membrane supporting the ring. Both configurations are discussed in this paper, in terms of implementation and sensitivity.

  9. Reproduction of the Yucca Mountain Project TSPA-LA Uncertainty and Sensitivity Analyses and Preliminary Upgrade of Models

    Energy Technology Data Exchange (ETDEWEB)

    Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis; Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis

    2016-09-01

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim, with Versions 9.60.300, 10.5 and 11.1.6, was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  10. Wedge Splitting Test and Inverse Analysis on Fracture Behaviour of Fiber Reinforced and Regular High Performance Concretes

    DEFF Research Database (Denmark)

    Hodicky, Kamil; Hulin, Thomas; Schmidt, Jacob Wittrup

    2014-01-01

    The fracture behaviour of three fiber-reinforced and regular HPC (high-performance concrete) mixes is presented in this paper. Two mixes are based on optimization of HPC, whereas the third is a commercial mix developed by CONTEC ApS (Denmark). The wedge splitting test setup with 48 cubical specimens...

  11. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan bidiagonalization, for solving such problems with a general linear regularization operator.

  12. A regularization method for extrapolation of solar potential magnetic fields

    Science.gov (United States)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. We show that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution can be obtained, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.

  13. Development of High Temperature/High Sensitivity Novel Chemical Resistive Sensor

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Chunrui [Univ. of Texas, San Antonio, TX (United States); Enriquez, Erik [Univ. of Texas, San Antonio, TX (United States); Wang, Haibing [Univ. of Texas, San Antonio, TX (United States); Xu, Xing [Univ. of Texas, San Antonio, TX (United States); Bao, Shangyong [Univ. of Texas, San Antonio, TX (United States); Collins, Gregory [Univ. of Texas, San Antonio, TX (United States)

    2013-08-13

    The research has focused on the design, fabrication, and development of high-temperature/high-sensitivity novel multifunctional chemical sensors for the selective detection of fossil energy gases used in power and fuel systems. By systematically studying the physical properties of LnBaCo2O5+d (LBCO) [Ln = Pr or La] thin films, a new-concept chemical sensor based on high-temperature chemical-resistance change has been developed for application in the next generation of highly efficient and near-zero-emission power generation technologies. We also discovered superfast chemical dynamic behavior and ultrafast surface-exchange kinetics in the highly epitaxial LBCO thin films. Furthermore, our research indicates that hydrogen can diffuse extremely rapidly through the ordered oxygen-vacancy structures in the highly epitaxial LBCO thin films, which suggests that LBCO thin films are excellent candidates not only for the fabrication of high-temperature ultra-sensitive chemical sensors and control systems for power and fuel monitoring systems, but also for low-temperature solid oxide fuel cell anode and cathode materials.

  14. High prevalence of lipid transfer protein sensitization in apple allergic patients with systemic symptoms.

    Directory of Open Access Journals (Sweden)

    Francisca Gomez

    Full Text Available Apple allergy manifests as two main groups of clinical entities reflecting different patterns of allergen sensitization: oral allergy syndrome (OAS) and generalized symptoms (GS). We analysed the sensitization profile to a wide panel of different components of food allergens (rMal d 1, Mal d 2, rMal d 3, rMal d 4, rPru p 3, rBet v 1 and Pho d 2) for a population of Mediterranean patients with OAS and GS to apple. Patients (N = 81) with a history of apple allergy that could be confirmed by positive prick-prick test and/or double-blind placebo-controlled food challenge (DBPCFC) were included. Skin prick test (SPT) and ELISA were performed using a panel of inhalant, fruit and nut allergens. ELISA and ELISA inhibition studies were performed in order to analyse the sensitization patterns. Thirty-five cases (43.2%) had OAS and 46 (56.8%) GS. SPT showed a significantly higher number of positive results with peach, cherry and hazelnut in those with GS. ELISA showed a significantly high percentage of positive cases to rMal d 3, rMal d 4, rPru p 3 and Pho d 2 in patients with OAS and GS compared to controls, and to rBet v 1 in patients with OAS vs controls and between OAS and GS patients. Three different patterns of recognition were detected: positive to LTP (rMal d 3 or rPru p 3), positive to profilin (rMal d 4 and Pho d 2), or positive to both. There were also patients with rMal d 1 recognition who showed cross-reactivity to rBet v 1. In an apple allergy population with a high incidence of pollinosis, different patterns of sensitization may occur. LTP is most often involved in those with GS. Profilin, though more prevalent in patients with OAS, has been shown to sensitise patients with both types of symptoms.

  15. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp

  16. Network-based regularization for high dimensional SNP data in the case-control study of Type 2 diabetes.

    Science.gov (United States)

    Ren, Jie; He, Tao; Li, Ye; Liu, Sai; Du, Yinhao; Jiang, Yu; Wu, Cen

    2017-05-16

    Over the past decades, the prevalence of type 2 diabetes mellitus (T2D) has been steadily increasing around the world. Despite large efforts devoted to better understanding the genetic basis of the disease, the identified susceptibility loci can only account for a small portion of T2D heritability. Some of the existing approaches proposed for the high-dimensional genetic data from T2D case-control studies are limited by analyzing only a small number of SNPs at a time from a large pool, by ignoring the correlations among SNPs, and by adopting inefficient selection techniques. We propose a network-constrained regularization method to select important SNPs by taking linkage disequilibrium into account. To accommodate the case-control study, an iteratively reweighted least squares (IRLS) algorithm has been developed within the coordinate descent framework, where optimization of the regularized logistic loss function is performed with respect to one parameter at a time, cycling iteratively through all the parameters until convergence. In this article, a novel approach is thus developed to identify important SNPs more effectively by incorporating the interconnections among them in the regularized selection. Both the simulation study and the analysis of the Nurses' Health Study, a case-control study of type 2 diabetes with high-dimensional SNP measurements, demonstrate the advantage of the network-based approach over competing alternatives.
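
    The core idea can be sketched as penalized logistic regression with a graph-Laplacian (network) penalty fitted by IRLS. For brevity the sketch takes full Newton-type IRLS steps rather than the paper's one-parameter-at-a-time coordinate descent, and the three-SNP chain network below is a made-up stand-in for real linkage-disequilibrium structure:

```python
# Logistic regression with a network penalty lam * beta^T L beta, fitted by
# iteratively reweighted least squares (IRLS).
import numpy as np

def network_logistic(X, y, L, lam=1.0, iters=50):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))                # predicted probabilities
        w = mu * (1.0 - mu)                            # IRLS weights
        z = eta + (y - mu) / np.maximum(w, 1e-8)       # working response
        H = X.T @ (w[:, None] * X) + lam * L           # penalized normal matrix
        beta = np.linalg.solve(H, X.T @ (w * z))
    return beta

rng = np.random.default_rng(7)
n, p = 300, 3
X = rng.integers(0, 3, size=(n, p)).astype(float)      # SNP minor-allele counts
y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] - X[:, 1])))).astype(float)
# Laplacian of a hypothetical chain network SNP1 - SNP2 - SNP3
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
print(np.round(network_logistic(X, y, L, lam=0.5), 2))
```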

  17. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Science.gov (United States)

    Hall, Carlton Raden

    A major objective of remote sensing is the determination of biochemical and biophysical characteristics of plant canopies utilizing high-spectral-resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(λ) (m^-1), diffuse backscatter b(λ) (m^-1), beam attenuation α(λ) (m^-1), and beam-to-diffuse conversion c(λ) (m^-1) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled at Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index LVCI, was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle-adjusted leaf

  18. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 5, Uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.

  19. Sensitivity analyses of the peach bottom turbine trip 2 experiment

    International Nuclear Information System (INIS)

    Bousbia Salah, A.; D'Auria, F.

    2003-01-01

    In the light of sustained development in computer technology, the possibilities for code calculations in predicting more realistic transient scenarios in nuclear power plants have been enlarged substantially. It has therefore become feasible to perform 'best-estimate' simulations through the incorporation of three-dimensional modeling of the reactor core into system codes. This method is particularly suited to complex transients that involve strong feedback effects between thermal-hydraulics and kinetics, as well as to transients involving local asymmetric effects. The Peach Bottom turbine trip test is characterized by a prompt core power excursion followed by a self-limiting power behavior. To emphasize and understand the feedback mechanisms involved during this transient, a series of sensitivity analyses was carried out. This allows the characterization of discrepancies between measured and calculated trends and an assessment of the impact of the thermal-hydraulic and kinetic responses of the models used. On the whole, the data comparison revealed a close dependency of the power excursion on the core feedback mechanisms. Thus, for a better best-estimate simulation of the transient, both the thermal-hydraulic and the kinetic models should be made more accurate. (author)

  20. Hypothalamic Leptin Gene Therapy Reduces Bone Marrow Adiposity in ob/ob Mice Fed Regular and High Fat Diets

    Directory of Open Access Journals (Sweden)

    Laurence B Lindenmaier

    2016-08-01

    Low bone mass is often associated with increased bone marrow adiposity. Since osteoblasts and adipocytes are derived from the same mesenchymal stem cell progenitor, adipocyte formation may increase at the expense of osteoblast formation. Leptin is an adipocyte-derived hormone known to regulate energy and bone metabolism. Genetic (e.g., leptin deficiency) and high fat diet-induced (e.g., leptin resistance) obesity are associated with increased marrow adipose tissue (MAT) and reduced bone formation. Short-duration studies suggest that leptin treatment reduces MAT and increases bone formation in leptin-deficient ob/ob mice fed a regular diet. Here, we determined the long-duration impact of increased hypothalamic leptin on marrow adipocytes and osteoblasts in ob/ob mice using recombinant adeno-associated virus (rAAV) gene therapy. In a first study, eight- to ten-week-old male ob/ob mice were randomized into 4 groups: (1) untreated, (2) rAAV-Lep, (3) rAAV-green fluorescent protein (rAAV-GFP), or (4) pair-fed to rAAV-Lep. For vector administration, mice were placed in a Kopf stereotaxic apparatus and injected intracerebroventricularly with either rAAV-Lep or rAAV-GFP (9 × 10⁷ particles in 1.5 µl). The mice were maintained for 30 weeks following vector administration. In a second study, the impact of increased hypothalamic leptin levels on MAT was determined in mice fed high fat diets. Eight- to ten-week-old male ob/ob mice were randomized into 2 groups and treated with either rAAV-Lep or rAAV-GFP. At 7 weeks post-vector administration, half the mice in each group were switched to a high fat diet for 8 weeks. Wild type (WT) controls included age-matched mice fed regular or high fat diet. Hypothalamic leptin gene therapy increased osteoblast perimeter and osteoclast perimeter with minor change in cancellous bone architecture. The gene therapy decreased MAT levels in ob/ob mice fed regular or high fat diet to values similar to WT mice fed regular diet.

  1. High Sensitivity TSS Prediction: Estimates of Locations Where TSS Cannot Occur

    KAUST Repository

    Schaefer, Ulf

    2013-10-10

    Background: Although transcription in mammalian genomes can initiate from various genomic positions (e.g., 3′UTR, coding exons, etc.), most locations on genomes are not prone to transcription initiation. It is of practical and theoretical interest to be able to estimate such collections of non-TSS locations (NTLs). The identification of large portions of NTLs can contribute to better focusing the search for TSS locations and thus contribute to promoter and gene finding. It can help in the assessment of 5′ completeness of expressed sequences, contribute to more successful experimental designs, as well as more accurate gene annotation. Methodology: Using comprehensive collections of Cap Analysis of Gene Expression (CAGE) and other transcript data from mouse and human genomes, we developed a methodology that allows us, by performing computational TSS prediction with very high sensitivity, to annotate, with a high accuracy in a strand specific manner, locations of mammalian genomes that are highly unlikely to harbor transcription start sites (TSSs). The properties of the immediate genomic neighborhood of 98,682 accurately determined mouse and 113,814 human TSSs are used to determine features that distinguish genomic transcription initiation locations from those that are not likely to initiate transcription. In our algorithm we utilize various constraining properties of features identified in the upstream and downstream regions around TSSs, as well as statistical analyses of these surrounding regions. Conclusions: Our analysis of human chromosomes 4, 21 and 22 estimates ~46%, ~41% and ~27% of these chromosomes, respectively, as being NTLs. This suggests that on average more than 40% of the human genome can be expected to be highly unlikely to initiate transcription. Our method represents the first one that utilizes high-sensitivity TSS prediction to identify, with high accuracy, large portions of mammalian genomes as NTLs.

  2. Sensitivity of the direct stop pair production analyses in phenomenological MSSM simplified models with the ATLAS detectors

    CERN Document Server

    Snyder, Ian Michael; The ATLAS collaboration

    2018-01-01

    The sensitivity of searches for the direct pair production of stops has often been evaluated in simple SUSY scenarios, where only a limited set of supersymmetric particles take part in the stop decay. In this talk, interpretations of the analyses requiring zero, one or two leptons in the final state in terms of simple but well-motivated MSSM scenarios will be discussed.

  3. Regular periodical public disclosure obligations of public companies

    Directory of Open Access Journals (Sweden)

    Marjanski Vladimir

    2011-01-01

    Public companies in the capacity of capital market participants have the obligation to inform the public on their legal and financial status, their general business operations, as well as on the issuance of securities and other financial instruments. Such obligations may be divided into two groups: the first group consists of regular periodical public disclosures, such as the publication of financial reports (annual, semi-annual and quarterly) and the management's reports on the public company's business operations. The second group comprises the obligation of occasional (ad hoc) public disclosure. The thesis analyses the obligation of public companies to inform the public in the course of their regular reporting. The new Capital Market Law, based on two EU Directives (the Transparency Directive and the Directive on Public Disclosure of Inside Information and the Definition of Market Manipulation), regulates such obligation of public companies in substantially more detail than the prior Law on the Market of Securities and Other Financial Instruments (hereinafter: ZTHV). Due to the above, the ZTHV's provisions are compared to the new solutions within the domain of regular periodical disclosure of the Capital Market Law.

  4. The persistence of the attentional bias to regularities in a changing environment.

    Science.gov (United States)

    Yu, Ru Qi; Zhao, Jiaying

    2015-10-01

    The environment is often stable, but some aspects may change over time. The challenge for the visual system is to discover and flexibly adapt to the changes. We examined how attention is shifted in the presence of changes in the underlying structure of the environment. In six experiments, observers viewed four simultaneous streams of objects while performing a visual search task. In the first half of each experiment, the stream in the structured location contained regularities, the shapes in the random location were randomized, and gray squares appeared in two neutral locations. In the second half, the stream in the structured or the random location could change. In the first half of all experiments, visual search was facilitated in the structured location, suggesting that attention was consistently biased toward regularities. In the second half, this bias persisted in the structured location when no change occurred (Experiment 1), when the regularities were removed (Experiment 2), or when new regularities embedded in the original or novel stimuli emerged in the previously random location (Experiments 3 and 6). However, visual search was numerically but no longer reliably faster in the structured location when the initial regularities were removed and new regularities were introduced in the previously random location (Experiment 4), or when novel random stimuli appeared in the random location (Experiment 5). This suggests that the attentional bias was weakened. Overall, the results demonstrate that the attentional bias to regularities was persistent but also sensitive to changes in the environment.

  5. High-field modulated ion-selective field-effect-transistor (FET) sensors with sensitivity higher than the ideal Nernst sensitivity.

    Science.gov (United States)

    Chen, Yi-Ting; Sarangadharan, Indu; Sukesan, Revathi; Hseih, Ching-Yen; Lee, Geng-Yen; Chyi, Jen-Inn; Wang, Yu-Lin

    2018-05-29

    A lead ion-selective membrane (Pb-ISM) coated AlGaN/GaN high electron mobility transistor (HEMT) was used to demonstrate a whole new methodology for ion-selective FET sensors, which can create ultra-high sensitivity (-36 mV/log [Pb²⁺]) surpassing the limit of ideal sensitivity (-29.58 mV/log [Pb²⁺]) in a typical Nernst equation for the lead ion. The largely improved sensitivity has tremendously reduced the detection limit (10⁻¹⁰ M) by several orders of magnitude of lead ion concentration compared to a typical ion-selective electrode (ISE) (10⁻⁷ M). The high sensitivity was obtained by creating a strong field between the gate electrode and the HEMT channel. A systematic investigation was carried out by measuring different designs of the sensor and gate biases, indicating that the ultra-high sensitivity and ultra-low detection limit are obtained only in a sufficiently strong field. A theoretical study of the sensitivity consistently agrees with the experimental findings and predicts the maximum and minimum sensitivity. The detection limit of our sensor is comparable to that of inductively coupled plasma mass spectrometry (ICP-MS), which also has a detection limit near 10⁻¹⁰ M.
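
    For reference, the 'ideal Nernst sensitivity' quoted above follows from the Nernstian slope 2.303·RT/zF. A minimal check (standard physical constants, 25 °C assumed) reproduces the -29.58 mV/decade figure for the divalent Pb²⁺ ion:

```python
# Minimal check of the ideal Nernst sensitivity quoted in the abstract.
import math

R = 8.314462618   # J/(mol·K), gas constant
F = 96485.33212   # C/mol, Faraday constant
T = 298.15        # K (25 deg C)

def nernst_slope_mV_per_decade(z):
    """Magnitude of the ideal Nernstian slope, in mV per decade of activity."""
    return 1000.0 * math.log(10) * R * T / (z * F)

print(nernst_slope_mV_per_decade(2))  # ~29.58 mV/decade for Pb2+ (z = 2)
print(nernst_slope_mV_per_decade(1))  # ~59.16 mV/decade for monovalent ions
```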

  6. Examining the Moderating Effect of Depressive Symptoms on the Relation Between Exercise and Self-Efficacy During the Initiation of Regular Exercise

    Science.gov (United States)

    Kangas, Julie L.; Baldwin, Austin S.; Rosenfield, David; Smits, Jasper A. J.; Rethorst, Chad D.

    2016-01-01

    Objective: People with depressive symptoms typically report lower levels of exercise self-efficacy and are more likely to discontinue regular exercise than others, but it is unclear how depressive symptoms affect people's exercise self-efficacy. Among potential sources of self-efficacy, engaging in the relevant behavior is the strongest (Bandura, 1997). Thus, we sought to clarify how depressive symptoms affect the same-day relation between engaging in exercise and self-efficacy during the initiation of regular exercise. Methods: Participants (N=116) were physically inactive adults (35% reported clinically significant depressive symptoms at baseline) who initiated regular exercise and completed daily assessments of exercise minutes and self-efficacy for four weeks. We tested whether (a) self-efficacy differed on days when exercise did and did not occur, and (b) the difference was moderated by depressive symptoms. Mixed linear models were used to examine these relations. Results: A significant interaction between exercise occurrence and depressive symptoms indicated that self-efficacy was lower on days when no exercise occurred, and that this difference was larger for people with high depressive symptoms. People with high depressive symptoms had lower self-efficacy than those with low depressive symptoms on days when no exercise occurred (p=.03), but self-efficacy did not differ on days when exercise occurred (p=.34). Conclusions: During the critical period of initiating regular exercise, daily self-efficacy for people with high depressive symptoms is more sensitive to whether they exercised than for people with low depressive symptoms. This may partially explain why people with depression tend to have difficulty maintaining regular exercise. PMID:25110850

  7. Silicon nanowire structures as high-sensitive pH-sensors

    International Nuclear Information System (INIS)

    Belostotskaya, S O; Chuyko, O V; Kuznetsov, A E; Kuznetsov, E V; Rybachek, E N

    2012-01-01

    Sensitive elements for pH sensors created on silicon nanostructures were investigated. The silicon nanostructures were used as ion-sensitive field effect transistors (ISFETs) for the measurement of solution pH. They were fabricated by a 'top-down' approach and studied as pH-sensitive elements. Nanowires showed the higher sensitivity: a sensitive element made of a 'one-dimensional' silicon nanostructure has greater pH sensitivity than a 'two-dimensional' structure. An integrated element formed from two p- and n-type nanowire ISFETs (an 'inverter') can be used as a high-sensitivity sensor for local relative changes of [H+] concentration in a very small volume.

  8. Development of a highly sensitive lithium fluoride thermoluminescence dosimeter

    International Nuclear Information System (INIS)

    Moraes da Silva, Teresinha de; Campos, Leticia Lucente

    1995-01-01

    In recent times, LiF:Mg,Cu,P thermoluminescent phosphor has been increasingly used for radiation monitoring due to its high sensitivity and ease of preparation. The Dosimetric Materials Production Laboratory of IPEN (Nuclear Energy Institute) has developed a simple method to obtain high sensitivity LiF. The preparation method is described. (author). 4 refs., 1 fig., 1 tab

  9. Phase sensitive diffraction sensor for high sensitivity refractive index measurement

    Science.gov (United States)

    Kumawat, Nityanand; Varma, Manoj; Kumar, Sunil

    2018-02-01

    In this study a diffraction-based sensor has been developed for biomolecular sensing applications and for performing assays in real time. A diffraction grating fabricated on a glass substrate produced diffraction patterns both in transmission and reflection when illuminated by a laser diode. We used the zeroth order I(0,0) as reference and the first order I(0,1) as signal channel and conducted ratiometric measurements that reduced noise by more than 50 times. The ratiometric approach resulted in very simple instrumentation with very high sensitivity. In the past, we have shown refractive index measurements both for bulk and surface adsorption using the diffractive self-referencing approach. In the current work we extend the same concept to higher diffraction orders. We considered orders I(0,1) and I(1,1) and performed ratiometric measurements I(0,1)/I(1,1) to eliminate the common-mode fluctuations. Since orders I(0,1) and I(1,1) behaved opposite to each other, the resulting ratio signal amplitude increased more than twofold compared to our previous results. As a proof of concept we used different salt concentrations in DI water. The increased signal amplitude and an improved fluid injection system resulted in a more than 4-fold improvement in detection limit, giving a limit of detection of 1.3×10⁻⁷ refractive index units (RIU) compared to our previous results. The improved refractive index sensitivity will help significantly in high-sensitivity label-free biosensing applications in a very cost-effective and simple experimental set-up.
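
    The ratiometric trick works because source-power fluctuations multiply both diffraction orders equally and therefore cancel in their ratio (in the paper the two orders additionally move in opposite directions, doubling the ratio's swing). A toy numerical sketch with synthetic data and hypothetical noise levels, not the authors' processing code, where for simplicity the reference order carries only the drift:

```python
# Toy illustration of ratiometric common-mode rejection.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 2000)

laser_drift = 1.0 + 0.05 * np.sin(2 * np.pi * 0.3 * t)  # common-mode power fluctuation
analyte = 1.0 + 0.001 * (t > 5.0)                       # tiny refractive-index step at t = 5

I_ref = laser_drift + rng.normal(0, 1e-4, t.size)            # reference order, e.g. I(1,1)
I_sig = analyte * laser_drift + rng.normal(0, 1e-4, t.size)  # signal order, e.g. I(0,1)

# In the raw channel the 0.1% step is buried under the 5% drift;
# in the ratio the drift cancels and the step stands out.
ratio = I_sig / I_ref
print("drift amplitude in raw channel:", np.ptp(laser_drift))
print("step recovered from the ratio :", ratio[t > 5].mean() - ratio[t <= 5].mean())
```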

  10. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  11. Heterogeneous catalysis in highly sensitive microreactors

    DEFF Research Database (Denmark)

    Olsen, Jakob Lind

    This thesis presents a highly sensitive silicon microreactor and examples of its use in studying catalysis. The experimental setup built for gas handling and temperature control for the microreactor is described, as is the implementation of LabVIEW interfacing for all the experimental parts.

  12. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression.
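
    The fixed-point rule E* = 1 + E × E* cited above is exactly the unfolding used by derivative-based regular expression matchers. The following is an illustrative Brzozowski-derivative membership test, a hypothetical sketch rather than the paper's axiomatization or its coercion semantics:

```python
# Sketch: Brzozowski-derivative matching; the star case mirrors E* = 1 + E*E*.
from dataclasses import dataclass

class Re: pass

@dataclass(frozen=True)
class Empty(Re): pass            # the empty set (0)

@dataclass(frozen=True)
class Eps(Re): pass              # the empty string (1)

@dataclass(frozen=True)
class Chr(Re):
    c: str

@dataclass(frozen=True)
class Alt(Re):                   # alternation
    l: Re
    r: Re

@dataclass(frozen=True)
class Cat(Re):                   # concatenation
    l: Re
    r: Re

@dataclass(frozen=True)
class Star(Re):                  # Kleene star
    e: Re

def nullable(e: Re) -> bool:
    """Does e accept the empty string?"""
    if isinstance(e, (Eps, Star)): return True
    if isinstance(e, Alt): return nullable(e.l) or nullable(e.r)
    if isinstance(e, Cat): return nullable(e.l) and nullable(e.r)
    return False  # Empty, Chr

def deriv(e: Re, a: str) -> Re:
    """Brzozowski derivative of e with respect to character a."""
    if isinstance(e, Chr): return Eps() if e.c == a else Empty()
    if isinstance(e, Alt): return Alt(deriv(e.l, a), deriv(e.r, a))
    if isinstance(e, Cat):
        d = Cat(deriv(e.l, a), e.r)
        return Alt(d, deriv(e.r, a)) if nullable(e.l) else d
    if isinstance(e, Star):
        # unfolding E* = 1 + E*E* gives the derivative rule for star
        return Cat(deriv(e.e, a), e)
    return Empty()  # Empty, Eps

def matches(e: Re, s: str) -> bool:
    for a in s:
        e = deriv(e, a)
    return nullable(e)

# (ab)* accepts "abab" but not "aba"
ab_star = Star(Cat(Chr("a"), Chr("b")))
print(matches(ab_star, "abab"), matches(ab_star, "aba"))  # True False
```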

  13. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) first do all algebra exactly as in D = 4; (2) then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed

  14. Acrylamide levels in Finnish foodstuffs analysed with liquid chromatography tandem mass spectrometry.

    Science.gov (United States)

    Eerola, Susanna; Hollebekkers, Koen; Hallikainen, Anja; Peltonen, Kimmo

    2007-02-01

    A sample clean-up and HPLC with tandem mass spectrometric detection (LC-MS/MS) method was validated for the routine analysis of acrylamide in various foodstuffs. The method proved to be reliable, and the detection limit for routine monitoring was sensitive enough for foods and drinks (38 microg/kg for foods and 5 microg/L for drinks). The RSDs for repeatability and day-to-day variation were below 15% in all food matrices. Two hundred and one samples, covering more than 30 different types of food manufactured and prepared in various ways, were analysed. The main types of food analysed were potato- and cereal-based foods, processed foods (pizza, minced beef meat, meat balls, chicken nuggets, potato-ham casserole and fried bacon) and coffee. Acrylamide was detected at levels ranging from nondetectable to 1480 microg/kg in solid food, with crisp bread exhibiting the highest levels. In drinks, the highest value (29 microg/L) was found in regular coffee drinks.

  15. Total variation regularization in measurement and image space for PET reconstruction

    KAUST Repository

    Burger, M

    2014-09-18

    The aim of this paper is to test and analyse a novel technique for image reconstruction in positron emission tomography, which is based on (total variation) regularization on both the image space and the projection space. We formulate our variational problem considering both total variation penalty terms on the image and on an idealized sinogram to be reconstructed from a given Poisson distributed noisy sinogram. We prove existence, uniqueness and stability results for the proposed model and provide some analytical insight into the structures favoured by joint regularization. For the numerical solution of the corresponding discretized problem we employ the split Bregman algorithm and extensively test the approach in comparison to standard total variation regularization on the image. The numerical results show that an additional penalty on the sinogram performs better on reconstructing images with thin structures.
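
    For readers who want to experiment with the image-side penalty alone, total variation denoising is available off the shelf. A minimal sketch follows, assuming scikit-image is installed; the paper's joint image/sinogram model and its split Bregman solver are not reproduced here:

```python
# Illustrative TV denoising on the image side only (not the paper's method).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[32:96, 60:68] = 1.0                      # a thin bright structure
noisy = rng.poisson(50 * img + 5) / 50.0     # Poisson-noised observation

# weight plays the role of the regularization parameter: larger values
# favour flatter, more strongly TV-regularized reconstructions.
denoised = denoise_tv_chambolle(noisy, weight=0.1)
print(float(np.abs(denoised - img).mean()) < float(np.abs(noisy - img).mean()))
```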

  16. The selectively bred high alcohol sensitivity (HAS) and low alcohol sensitivity (LAS) rats differ in sensitivity to nicotine.

    Science.gov (United States)

    de Fiebre, NancyEllen C; Dawson, Ralph; de Fiebre, Christopher M

    2002-06-01

    Studies in rodents selectively bred to differ in alcohol sensitivity have suggested that nicotine and ethanol sensitivities may cosegregate during selective breeding. This suggests that ethanol and nicotine sensitivities may in part be genetically correlated. Male and female high alcohol sensitivity (HAS), control alcohol sensitivity, and low alcohol sensitivity (LAS) rats were tested for nicotine-induced alterations in locomotor activity, body temperature, and seizure activity. Plasma and brain levels of nicotine and its primary metabolite, cotinine, were measured in these animals, as was the binding of [3H]cytisine, [3H]epibatidine, and [125I]alpha-bungarotoxin in eight brain regions. Both replicate HAS lines were more sensitive to nicotine-induced locomotor activity depression than the replicate LAS lines. No consistent HAS/LAS differences were seen on other measures of nicotine sensitivity; however, females were more susceptible to nicotine-induced seizures than males. No HAS/LAS differences in nicotine or cotinine levels were seen, nor were differences seen in the binding of nicotinic ligands. Females had higher levels of plasma cotinine and brain nicotine than males but had lower brain cotinine levels than males. Sensitivity to a specific action of nicotine cosegregates during selective breeding for differential sensitivity to a specific action of ethanol. The differential sensitivity of the HAS/LAS rats is due to differences in central nervous system sensitivity and not to pharmacokinetic differences. The differential central nervous system sensitivity cannot be explained by differences in the numbers of nicotinic receptors labeled in ligand-binding experiments. The apparent genetic correlation between ethanol and nicotine sensitivities suggests that common genes modulate, in part, the actions of both ethanol and nicotine and may explain the frequent coabuse of these agents.

  17. Detecting violations of temporal regularities in waking and sleeping two-month-old infants

    NARCIS (Netherlands)

    Otte, R.A.; Winkler, I.; Braeken, M.A.K.A.; Stekelenburg, J.J.; van der Stelt, O.; Van den Bergh, B.R.H.

    2013-01-01

    Correctly processing rapid sequences of sounds is essential for developmental milestones, such as language acquisition. We investigated the sensitivity of two-month-old infants to violations of a temporal regularity, by recording event-related brain potentials (ERPs) in an auditory oddball paradigm

  18. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization, by external variables.

  19. Regularization scheme dependence of virtual corrections to DY and DIS

    International Nuclear Information System (INIS)

    Khalafi, F.; Landshoff, P.V.

    1981-01-01

    One loop virtual corrections to the quark photon vertex are calculated under various assumptions and their sensitivity to the manner in which infra-red and mass singularities are regularized is studied. A method based on the use of Mellin-transforms in the Feynman parametric space is developed and shown to be convenient in calculating virtual diagrams beyond the leading logarithm in perturbative QCD. (orig.)

  20. High-resolution numerical modeling of mesoscale island wakes and sensitivity to static topographic relief data

    Directory of Open Access Journals (Sweden)

    C. G. Nunalee

    2015-08-01

    Full Text Available Recent decades have witnessed a drastic increase in the fidelity of numerical weather prediction (NWP modeling. Currently, both research-grade and operational NWP models regularly perform simulations with horizontal grid spacings as fine as 1 km. This migration towards higher resolution potentially improves NWP model solutions by increasing the resolvability of mesoscale processes and reducing dependency on empirical physics parameterizations. However, at the same time, the accuracy of high-resolution simulations, particularly in the atmospheric boundary layer (ABL, is also sensitive to orographic forcing which can have significant variability on the same spatial scale as, or smaller than, NWP model grids. Despite this sensitivity, many high-resolution atmospheric simulations do not consider uncertainty with respect to selection of static terrain height data set. In this paper, we use the Weather Research and Forecasting (WRF model to simulate realistic cases of lower tropospheric flow over and downstream of mountainous islands using the default global 30 s United States Geographic Survey terrain height data set (GTOPO30, the Shuttle Radar Topography Mission (SRTM, and the Global Multi-resolution Terrain Elevation Data set (GMTED2010 terrain height data sets. While the differences between the SRTM-based and GMTED2010-based simulations are extremely small, the GTOPO30-based simulations differ significantly. Our results demonstrate cases where the differences between the source terrain data sets are significant enough to produce entirely different orographic wake mechanics, such as vortex shedding vs. no vortex shedding. These results are also compared to MODIS visible satellite imagery and ASCAT near-surface wind retrievals. Collectively, these results highlight the importance of utilizing accurate static orographic boundary conditions when running high-resolution mesoscale models.

  1. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.

  2. A DNA microarray-based methylation-sensitive (MS)-AFLP hybridization method for genetic and epigenetic analyses.

    Science.gov (United States)

    Yamamoto, F; Yamamoto, M

    2004-07-01

    We previously developed a PCR-based DNA fingerprinting technique named the Methylation Sensitive (MS)-AFLP method, which permits comparative genome-wide scanning of methylation status with a manageable number of fingerprinting experiments. The technique uses the methylation sensitive restriction enzyme NotI in the context of the existing Amplified Fragment Length Polymorphism (AFLP) method. Here we report the successful conversion of this gel electrophoresis-based DNA fingerprinting technique into a DNA microarray hybridization technique (DNA Microarray MS-AFLP). By performing a total of 30 (15 x 2 reciprocal labeling) DNA Microarray MS-AFLP hybridization experiments on genomic DNA from two breast and three prostate cancer cell lines in all pairwise combinations, and Southern hybridization experiments using more than 100 different probes, we have demonstrated that the DNA Microarray MS-AFLP is a reliable method for genetic and epigenetic analyses. No statistically significant differences were observed in the number of differences between the breast-prostate hybridization experiments and the breast-breast or prostate-prostate comparisons.

  3. Effect of Regular Exercise on the Histochemical Changes of d-Galactose-Induced Oxidative Renal Injury in High-Fat Diet-Fed Rats

    International Nuclear Information System (INIS)

    Park, Sok; Kim, Chan-Sik; Lee, Jin; Suk Kim, Jung; Kim, Junghyun

    2013-01-01

    Renal lipid accumulation is a feature of slowly developing chronic kidney disease and is associated with increased oxidative stress. The impact of exercise on obesity- and oxidative stress-related renal disease is not well understood. The purpose of this study was to investigate whether a high-fat diet (HFD) would accelerate the d-galactose-induced aging process in the rat kidney and to examine the preventive effect of regular exercise on obesity- and oxidative stress-related renal disease. Oxidative stress was induced by administration of d-galactose (100 mg/kg injected intraperitoneally) for 9 weeks, and d-galactose-treated rats were also fed a high-fat diet (60% kcal as fat) for 9 weeks to induce obesity. We investigated the efficacy of regular exercise in reducing renal injury by analyzing Nε-carboxymethyllysine (CML), 8-hydroxy-2'-deoxyguanosine (8-OHdG) and apoptosis. When rats were fed a HFD for 9 weeks in addition to d-galactose treatment, increased CML accumulation, oxidative DNA damage and renal podocyte loss were observed in renal glomerular cells and tubular epithelial cells. However, regular exercise reversed all of these renal changes in HFD plus d-galactose-treated rats. Our data suggest that a long-term HFD may accelerate the deposition of lipoxidation adducts and oxidative renal injury in d-galactose-treated rats. Regular exercise protects against obesity- and oxidative stress-related renal injury by inhibiting this lipoxidation burden

  4. A high sensitivity nanomaterial based SAW humidity sensor

    Energy Technology Data Exchange (ETDEWEB)

    Wu, T-T; Chou, T-H [Institute of Applied Mechanics, National Taiwan University, Taipei 106, Taiwan (China); Chen, Y-Y [Department of Mechanical Engineering, Tatung University, Taipei 104, Taiwan (China)], E-mail: wutt@ndt.iam.ntu.edu.tw

    2008-04-21

    In this paper, a highly sensitive humidity sensor is reported. The humidity sensor is configured as a 128°YX-LiNbO₃ based surface acoustic wave (SAW) resonator whose operating frequency is 145 MHz. A dual delay line configuration is realized to eliminate external temperature fluctuations. Moreover, since nanostructured materials possess a high surface-to-volume ratio, large penetration depth and fast charge diffusion rate, camphor sulfonic acid doped polyaniline (PANI) nanofibres are synthesized by the interfacial polymerization method and further deposited on the SAW resonator as a selective coating to enhance sensitivity. The humidity sensor is used to measure various relative humidities in the range 5-90% at room temperature. Results show that the PANI nanofibre based SAW humidity sensor exhibits excellent sensitivity and short-term repeatability.

  5. Beneficial metabolic effects of regular meal frequency on dietary thermogenesis, insulin sensitivity, and fasting lipid profiles in healthy obese women.

    Science.gov (United States)

    Farshchi, Hamid R; Taylor, Moira A; Macdonald, Ian A

    2005-01-01

    Although a regular meal pattern is recommended for obese people, its effects on energy metabolism have not been examined. We investigated whether a regular meal frequency affects energy intake (EI), energy expenditure, or circulating insulin, glucose, and lipid concentrations in healthy obese women. Ten women [mean ± SD body mass index (in kg/m²): 37.1 ± 4.8] participated in a randomized crossover trial. In phase 1 (14 d), the subjects consumed their normal diet on 6 occasions/d (regular meal pattern) or followed a variable meal frequency (3-9 meals/d, irregular meal pattern). In phase 2 (14 d), the subjects followed the alternative pattern. At the start and end of each phase, a test meal was fed, and blood glucose, lipid, and insulin concentrations were determined before and for 3 h after (glucose and insulin only) the test meal. Subjects recorded their food intake on 3 d during each phase. The thermogenic response to the test meal was ascertained by indirect calorimetry. Regular eating was associated with lower EI and greater postprandial thermogenesis than the irregular meal pattern; peak insulin concentrations and the area under the curve of insulin responses to the test meal were likewise lower after the regular than after the irregular meal pattern.

  6. Regularization dependence on phase diagram in Nambu–Jona-Lasinio model

    International Nuclear Information System (INIS)

    Kohyama, H.; Kimura, D.; Inagaki, T.

    2015-01-01

    We study the regularization dependence of meson properties and the phase diagram of quark matter by using the two flavor Nambu–Jona-Lasinio model. The model also has a parameter dependence within each regularization, so we explicitly give the model parameters for several sets of input observables and then investigate their effect on the phase diagram. We find that the location, and even the existence, of the critical end point depends strongly on the regularization method and the model parameters. Regularization and parameters must therefore be considered carefully when one investigates the QCD critical end point in effective model studies

  7. Highly sensitive microcalorimeters for radiation research

    International Nuclear Information System (INIS)

    Avaev, V.N.; Demchuk, B.N.; Ioffe, L.A.; Efimov, E.P.

    1984-01-01

    Calorimetry is used in research at various types of nuclear-physics installations to obtain information on the quantitative and qualitative composition of ionizing radiation in a reactor core and in the surrounding layers of the biological shield. In this paper, the authors examine the characteristics of highly sensitive microcalorimeters with modular semiconductor heat pickups designed for operation in reactor channels. The microcalorimeters have a thin-walled aluminum housing on whose inner surface modular heat pickups are placed radially. The temperature dependence of the sensitivity of the microcalorimeters was measured, as was the sensitivity of a PMK-2 microcalorimeter assembly as a function of integrated neutron flux for three energy intervals and of the absorbed gamma energy. In order to study specimens with different shapes and sizes, microcalorimeters with chambers in the form of cylinders and a parallelepiped were built and tested

  8. A novel approach of ensuring layout regularity correct by construction in advanced technologies

    Science.gov (United States)

    Ahmed, Shafquat Jahan; Vaderiya, Yagnesh; Gupta, Radhika; Parthasarathy, Chittoor; Marin, Jean-Claude; Robert, Frederic

    2017-03-01

    In advanced technology nodes, layout regularity has become a mandatory prerequisite for creating robust designs that are less sensitive to variations in the manufacturing process, in order to improve yield and minimize electrical variability. In this paper we describe a method for designing regular full custom layouts based on design and process co-optimization. The method includes various design rule checks that can be used on-the-fly during leaf-cell layout development. We extract a Layout Regularity Index (LRI) from the layouts based on the jogs, alignments and pitches used in the design for any given metal layer. The Regularity Index of a layout is a direct indicator of manufacturing yield and is used to compare the relative health of different layout blocks in terms of process friendliness. The method has been deployed for the 28nm and 40nm technology nodes for Memory IP and is being extended to other IPs (IO, standard-cell). We have quantified the gain of layout regularity with the deployed method on printability and electrical characteristics by process-variation (PV) band simulation analysis and have achieved up to 5 nm reduction in PV band.

  9. High-sensitivity cardiac troponin I assay to screen for acute rejection in patients with heart transplant.

    Science.gov (United States)

    Patel, Parag C; Hill, Douglas A; Ayers, Colby R; Lavingia, Bhavna; Kaiser, Patricia; Dyer, Adrian K; Barnes, Aliessa P; Thibodeau, Jennifer T; Mishkin, Joseph D; Mammen, Pradeep P A; Markham, David W; Stastny, Peter; Ring, W Steves; de Lemos, James A; Drazner, Mark H

    2014-05-01

    A noninvasive biomarker that could accurately diagnose acute rejection (AR) in heart transplant recipients could obviate the need for surveillance endomyocardial biopsies. We assessed the performance metrics of a novel high-sensitivity cardiac troponin I (cTnI) assay for this purpose. Stored serum samples were retrospectively matched to endomyocardial biopsies in 98 cardiac transplant recipients who survived ≥3 months after transplant. AR was defined as International Society for Heart and Lung Transplantation grade 2R or higher cellular rejection, acellular rejection, or allograft dysfunction of uncertain pathogenesis, leading to treatment for presumed rejection. cTnI was measured with a high-sensitivity assay (Abbott Diagnostics, Abbott Park, IL). Cross-sectional analyses determined the association of cTnI concentrations with rejection and International Society for Heart and Lung Transplantation grade and the performance metrics of cTnI for the detection of AR. Among 98 subjects, 37% had ≥1 rejection episode. cTnI was measured in 418 serum samples, including 35 paired to a rejection episode. cTnI concentrations were significantly higher in rejection versus nonrejection samples (median, 57.1 versus 10.2 ng/L; P<0.0001) and increased in a graded manner with higher biopsy scores (P(trend)<0.0001). The c-statistic to discriminate AR was 0.82 (95% confidence interval, 0.76-0.88). Using a cut point of 15 ng/L, sensitivity was 94%, specificity 60%, positive predictive value 18%, and negative predictive value 99%. A high-sensitivity cTnI assay seems useful to rule out AR in cardiac transplant recipients. If validated in prospective studies, serial monitoring with a high-sensitivity cTnI assay may offer a low-cost noninvasive approach for rejection surveillance.
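
    The reported predictive values are easy to verify from the quoted sensitivity and specificity, assuming the per-sample rejection prevalence implied by the abstract (35 rejection-paired samples out of 418):

```python
# Consistency check of the reported performance metrics.
se, sp = 0.94, 0.60      # sensitivity and specificity from the abstract
prev = 35 / 418          # assumed per-sample rejection prevalence

ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # ~18% and ~99%, as reported
```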

  10. 'Regular' and 'emergency' repair

    International Nuclear Information System (INIS)

    Luchnik, N.V.

    1975-01-01

    Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)

  11. Sensitive parameters' optimization of the permanent magnet supporting mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yongguang; Gao, Xiaohui; Wang, Yixuan; Yang, Xiaowei [Beihang University, Beijing (China)

    2014-07-15

    The fast development of ultra-high speed vertical rotors promotes the study and exploration of supporting mechanisms. How to increase speed and overcome vibration as the rotors pass through the low-order critical frequencies has become a focus of research. This paper introduces a kind of permanent magnet (PM) supporting mechanism and describes an optimization method for its sensitive parameters, which enables the vertical rotor system to reach 80000 r/min smoothly. First we identify the sensitive parameters by analyzing the rotor's behavior in the process of reaching high speed; then we study these sensitive parameters and summarize the regularities by combining experiment with the finite element method (FEM); finally, we arrive at the optimization method for these parameters. This not only gives a stable speed-raising effect and greatly shortens the debugging time, but also promotes the wider application of the PM supporting mechanism in ultra-high speed vertical rotors.

  12. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information.

  13. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  14. Effect of von Karman Vortex Shedding on Regular and Open-slit V-gutter Stabilized Turbulent Premixed Flames

    Science.gov (United States)

    2012-04-01

    Both flame lengths shrink and large-scale disruptions occur downstream, with vortex shedding carrying reaction zones. The flame structure changes dramatically for both the regular and the open-slit V-gutter, and the shedding reduces the flame length. Qualitatively, however, the open-slit V-gutter appears to be more sensitive than the regular V-gutter.

  15. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  16. The patterning of retinal horizontal cells: normalizing the regularity index enhances the detection of genomic linkage

    Directory of Open Access Journals (Sweden)

    Patrick W. Keeley

    2014-10-01

    Retinal neurons are often arranged as non-random distributions called mosaics, as their somata minimize proximity to neighboring cells of the same type. The horizontal cells serve as an example of such a mosaic, but little is known about the developmental mechanisms that underlie their patterning. To identify genes involved in this process, we have used three different spatial statistics to assess the patterning of the horizontal cell mosaic across a panel of genetically distinct recombinant inbred strains. To avoid the confounding effect of cell density, which varies two-fold across these different strains, we computed the real/random regularity ratio, expressing the regularity of a mosaic relative to a randomly distributed simulation of similarly sized cells. To test whether this latter statistic better reflects the variation in biological processes that contribute to horizontal cell spacing, we subsequently compared the genetic linkage for each of these two traits, the regularity index and the real/random regularity ratio, each computed from the distribution of nearest neighbor (NN) distances and from the Voronoi domain (VD) areas. Finally, we compared each of these analyses with another index of patterning, the packing factor. Variation in the regularity indexes, as well as their real/random regularity ratios, and the packing factor, mapped quantitative trait loci (QTL) to the distal ends of Chromosomes 1 and 14. For the NN and VD analyses, we found that the degree of linkage was greater when using the real/random regularity ratio rather than the respective regularity index. Using informatic resources, we narrow the list of prospective genes positioned at these two intervals to a small collection of six genes that warrant further investigation to determine their potential role in shaping the patterning of the horizontal cell mosaic.
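
    As an illustration of the two statistics being compared, the sketch below uses the conventional definition of the nearest-neighbor regularity index (mean divided by standard deviation of NN distances) and a uniform random field as a simplified stand-in for the density-matched simulations; a jittered lattice stands in for a real mosaic:

```python
# Sketch: NN regularity index and real/random regularity ratio.
import numpy as np
from scipy.spatial import cKDTree

def nn_regularity_index(points: np.ndarray) -> float:
    d, _ = cKDTree(points).query(points, k=2)  # k=2: first hit is the point itself
    nn = d[:, 1]
    return nn.mean() / nn.std()

rng = np.random.default_rng(0)

# Jittered lattice as a stand-in for a regular soma mosaic (400 cells).
grid = np.stack(np.meshgrid(np.arange(20), np.arange(20)), -1).reshape(-1, 2) * 50.0
mosaic = grid + rng.normal(0.0, 5.0, grid.shape)

# Density-matched random simulation over the same field.
random_sim = rng.uniform(0.0, 1000.0, size=mosaic.shape)

ri_real = nn_regularity_index(mosaic)
ri_rand = nn_regularity_index(random_sim)
print("regularity index:", ri_real, "| real/random ratio:", ri_real / ri_rand)
```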

  17. Sensitivity studies for 3-D rod ejection analyses on axial power shape

    Energy Technology Data Exchange (ETDEWEB)

    Park, Min-Ho; Park, Jin-Woo; Park, Guen-Tae; Ryu, Seok-Hee; Um, Kil-Sup; Lee, Jae-Il [KEPCO NF, Daejeon (Korea, Republic of)

    2015-10-15

    The current safety analysis methodology, which uses the point kinetics model combined with numerous conservative assumptions, results in unrealistic predictions of the transient behavior and wastes huge margins in the safety analyses, while the safety regulation criteria for the reactivity-initiated accident are becoming stricter. To deal with this, KNF is developing a 3-D rod ejection analysis methodology using the multi-dimensional code coupling system CHASER. The CHASER system couples the three-dimensional core neutron kinetics code ASTRA, the sub-channel analysis code THALES, and the fuel performance analysis code FROST using a message passing interface (MPI). A sensitivity study for 3-D rod ejection analysis on the axial power shape (APS) was carried out to survey the tendency of safety parameters with power distribution and to build up a realistic safety analysis methodology while maintaining conservatism. The currently developed 3-D rod ejection analysis methodology using the multi-dimensional core transient analysis code system, CHASER, was shown to reasonably reflect the conservative assumptions by tuning up kinetic parameters.

  18. The Impact of Computerization on Regular Employment (Japanese)

    OpenAIRE

    SUNADA Mitsuru; HIGUCHI Yoshio; ABE Masahiro

    2004-01-01

    This paper uses micro data from the Basic Survey of Japanese Business Structure and Activity to analyze the effects of companies' introduction of information and telecommunications technology on employment structures, especially regular versus non-regular employment. Firstly, examination of trends in the ratio of part-time workers recorded in the Basic Survey shows that part-time worker ratios in manufacturing firms are rising slightly, but that companies with a high proportion of part-timers...

  19. Information-theoretic semi-supervised metric learning via entropy regularization.

    Science.gov (United States)

    Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi

    2014-08-01

    We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.

  20. POWER CYCLE AND STRESS ANALYSES FOR HIGH TEMPERATURE GAS-COOLED REACTOR

    International Nuclear Information System (INIS)

    Oh, Chang H; Davis, Cliff; Hawkes, Brian D; Sherman, Steven R

    2007-01-01

    The Department of Energy and the Idaho National Laboratory are developing a Next Generation Nuclear Plant (NGNP) to serve as a demonstration of state-of-the-art nuclear technology. The purpose of the demonstration is twofold: (1) efficient low-cost energy generation and (2) hydrogen production. Although a next generation plant could be developed as a single-purpose facility, early designs are expected to be dual-purpose. While hydrogen production and advanced energy cycles are still in their early stages of development, research towards coupling a high temperature reactor, electrical generation and hydrogen production is under way. Many aspects of the NGNP must be researched and developed in order to make recommendations on the final design of the plant. Parameters such as working conditions, cycle components, working fluids, and power conversion unit configurations must be understood. Three configurations of the power conversion unit were examined in this study: a three-shaft design with three turbines and four compressors, a combined cycle with a Brayton top cycle and a Rankine bottoming cycle, and a reheated cycle with three stages of reheat. An intermediate heat transport loop for transporting process heat to a High Temperature Steam Electrolysis (HTSE) hydrogen production plant was used. Helium, CO2, and an 80% nitrogen, 20% helium mixture (by weight) were studied to determine the best working fluid in terms of cycle efficiency and development cost. In each of these configurations the relative component sizes were estimated for the different working fluids. The relative size of the turbomachinery was measured by comparing the power input/output of the component. For heat exchangers the volume was computed and compared. Parametric studies away from the baseline values of the three-shaft and combined cycles were performed to determine the effect of varying conditions in the cycle. This gives some insight into the sensitivity of these cycles to

  1. Long-term gas and brine migration at the Waste Isolation Pilot Plant: Preliminary sensitivity analyses for post-closure 40 CFR 268 (RCRA), May 1992

    International Nuclear Information System (INIS)

    1992-12-01

    This report describes preliminary probabilistic sensitivity analyses of long term gas and brine migration at the Waste Isolation Pilot Plant (WIPP). Because gas and brine are potential transport media for organic compounds and heavy metals, understanding two-phase flow in the repository and the surrounding Salado Formation is essential to evaluating long-term compliance with 40 CFR 268.6, which is the portion of the Land Disposal Restrictions of the Hazardous and Solid Waste Amendments to the Resource Conservation and Recovery Act that states the conditions for disposal of specified hazardous wastes. Calculations described here are designed to provide guidance to the WIPP Project by identifying important parameters and helping to recognize processes not yet modeled that may affect compliance. Based on these analyses, performance is sensitive to shaft-seal permeabilities, parameters affecting gas generation, and the conceptual model used for the disturbed rock zone surrounding the excavation. Brine migration is less likely to affect compliance with 40 CFR 268.6 than gas migration. However, results are preliminary, and additional iterations of uncertainty and sensitivity analyses will be required to provide the confidence needed for a defensible compliance evaluation. Specifically, subsequent analyses will explicitly include effects of salt creep and, when conceptual and computational models are available, pressure-dependent fracturing of anhydrite marker beds

  2. Low Power and High Sensitivity MOSFET-Based Pressure Sensor

    International Nuclear Information System (INIS)

    Zhang Zhao-Hua; Ren Tian-Ling; Zhang Yan-Hong; Han Rui-Rui; Liu Li-Tian

    2012-01-01

    Based on the stress-sensitive behavior of the metal-oxide-semiconductor field effect transistor (MOSFET), a low power MOSFET pressure sensor is proposed. Compared with the traditional piezoresistive pressure sensor, the present pressure sensor displays high performance in sensitivity and power consumption. The sensitivity of the MOSFET sensor is raised by 87%, while the power consumption is decreased by 20%. (cross-disciplinary physics and related areas of science and technology)

  3. Reducing errors in the GRACE gravity solutions using regularization

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem onto a problem about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions.
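
    On a small dense problem, the Tikhonov/L-curve machinery can be sketched directly; note that at GRACE scale such a dense sweep is exactly what is prohibitive, which is why the study projects the problem via Lanczos bidiagonalization instead. All sizes and noise levels below are illustrative:

```python
# Illustrative Tikhonov regularization with an L-curve corner pick.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50)) @ np.diag(1.0 / np.arange(1, 51) ** 2)  # ill-conditioned
x_true = rng.normal(size=50)
b = A @ x_true + rng.normal(scale=1e-3, size=100)

lambdas = np.logspace(-8, 1, 40)
res_norm, sol_norm = [], []
for lam in lambdas:
    # Tikhonov solution: x = argmin ||Ax - b||^2 + lam^2 ||x||^2
    x = np.linalg.solve(A.T @ A + lam**2 * np.eye(50), A.T @ b)
    res_norm.append(np.linalg.norm(A @ x - b))
    sol_norm.append(np.linalg.norm(x))

# L-curve corner: point of maximum curvature in (log ||Ax-b||, log ||x||).
u, v = np.log(res_norm), np.log(sol_norm)
du, dv = np.gradient(u), np.gradient(v)
ddu, ddv = np.gradient(du), np.gradient(dv)
curvature = (du * ddv - dv * ddu) / (du**2 + dv**2) ** 1.5
print("chosen lambda:", lambdas[np.argmax(np.abs(curvature))])
```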

  4. Single photon detector with high polarization sensitivity.

    Science.gov (United States)

    Guo, Qi; Li, Hao; You, LiXing; Zhang, WeiJun; Zhang, Lu; Wang, Zhen; Xie, XiaoMing; Qi, Ming

    2015-04-15

    Polarization is one of the key parameters of light. Most optical detectors are intensity detectors that are insensitive to the polarization of light. A superconducting nanowire single photon detector (SNSPD) is naturally sensitive to polarization due to its nanowire structure. Previous studies focused on producing a polarization-insensitive SNSPD. In this study, by adjusting the width and pitch of the nanowire, we systematically investigate the preparation of an SNSPD with high polarization sensitivity. Subsequently, an SNSPD with a system detection efficiency of 12% and a polarization extinction ratio of 22 was successfully prepared.

  5. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate the colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing, partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary effects method
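
    A screening of this kind can be prototyped with the SALib package. The sketch below uses a toy response and placeholder parameter names and ranges; the study's actual HYDRUS parameters and bounds are not reproduced:

```python
# Hedged sketch of a Morris elementary-effects screening with SALib.
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {
    "num_vars": 3,
    "names": ["attachment_rate", "detachment_rate", "straining_coeff"],  # placeholders
    "bounds": [[0.0, 1.0], [0.0, 0.1], [0.0, 5.0]],                      # hypothetical
}

X = morris_sample.sample(problem, N=100, num_levels=4)

def toy_model(row):  # stand-in for a HYDRUS simulation run
    ka, kd, s = row
    return ka / (kd + 1e-6) + 0.1 * s

Y = np.apply_along_axis(toy_model, 1, X)

# mu* ranks parameters by the mean absolute elementary effect.
Si = morris_analyze.analyze(problem, X, Y, num_levels=4)
for name, mu_star in zip(problem["names"], Si["mu_star"]):
    print(f"{name:16s} mu* = {mu_star:.3g}")
```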

  6. NK sensitivity of neuroblastoma cells determined by a highly sensitive coupled luminescent method

    International Nuclear Information System (INIS)

    Ogbomo, Henry; Hahn, Anke; Geiler, Janina; Michaelis, Martin; Doerr, Hans Wilhelm; Cinatl, Jindrich

    2006-01-01

    The measurement of natural killer (NK) cell cytotoxicity against tumor or virus-infected cells, especially in cases with small blood samples, requires highly sensitive methods. Here, a coupled luminescent method (CLM) based on glyceraldehyde-3-phosphate dehydrogenase release from injured target cells was used to evaluate the cytotoxicity of interleukin-2 activated NK cells against neuroblastoma cell lines. In contrast to most other methods, CLM does not require the pretreatment of target cells with labeling substances, which could be toxic or radioactive. Effective killing of tumor cells was achieved at low effector/target ratios ranging from 0.5:1 to 4:1. CLM provides a highly sensitive, safe, and fast procedure for measuring NK cell activity with small blood samples such as those obtained from pediatric patients

  7. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are powerful tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as that derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
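    A recurrence plot is simply a thresholded distance matrix of the state trajectory, and the simplest recurrence-based measure is the recurrence rate. A minimal sketch for a scalar series follows (no time-delay embedding and a plain absolute-value distance; both are simplifying assumptions, not the letter's exact setup).

```python
# Minimal recurrence plot: threshold the pairwise distance matrix of the
# series, then read off the recurrence rate.
import numpy as np

def recurrence_plot(x, eps):
    dist = np.abs(x[:, None] - x[None, :])
    return (dist < eps).astype(int)

t = np.linspace(0.0, 8.0 * np.pi, 400)
x = np.sin(t)                          # periodic example series
R = recurrence_plot(x, eps=0.1)
print(f"recurrence rate: {R.mean():.3f}")
```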

  8. Influence of regular reporting on local Pseudomonas aeruginosa and Acinetobacter spp. sensitivity to antibiotics on consumption of antibiotics and resistance patterns.

    Science.gov (United States)

    Djordjevic, Z M; Folic, M M; Jankovic, S M

    2017-10-01

    Regular surveillance of antimicrobial resistance is an important component of multifaceted interventions directed at the problem with resistance of bacteria causing healthcare-associated infections (HAIs) in intensive care units (ICUs). Our aim was to analyse antimicrobial consumption and resistance among isolates of Pseudomonas aeruginosa and Acinetobacter spp. causing HAIs, before and after the introduction of mandatory reporting of resistance patterns to prescribers. A retrospective observational study was conducted between January 2011 and December 2015, at an interdisciplinary ICU of the Clinical Centre Kragujevac, Serbia. The intervention consisted of continuous resistance monitoring of all bacterial isolates from ICU patients and biannual reporting of results per isolate to prescribers across the hospital. Both utilization of antibiotics and density of resistant isolates of P. aeruginosa and Acinetobacter spp. were followed within the ICU. Resistance densities of P. aeruginosa to all tested antimicrobials were lower in 2015, in comparison with 2011. Although isolates of Acinetobacter spp. had lower resistance density in 2015 than in 2011 to the majority of investigated antibiotics, a statistically significant decrease was noted only for piperacillin/tazobactam. Statistically significant decreasing trends of consumption were recorded for third-generation cephalosporins, aminoglycosides and fluoroquinolones, whereas for the piperacillin/tazobactam, ampicillin/sulbactam and carbapenems, utilization trends were decreasing, but without statistical significance. In the same period, increasing trends of consumption were observed for tigecycline and colistin. Regular monitoring of resistance of bacterial isolates in ICUs and reporting of summary results to prescribers may lead to a significant decrease in utilization of some antibiotics and slow restoration of P. aeruginosa and Acinetobacter spp. susceptibility. © 2017 John Wiley & Sons Ltd.

  9. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 4: Uncertainty and sensitivity analyses for 40 CFR 191, Subpart B

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to the EPA's Environmental Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Additional information about the 1992 PA is provided in other volumes. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions, the choice of parameters selected for sampling, and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect compliance with 40 CFR 191B are: drilling intensity, intrusion borehole permeability, halite and anhydrite permeabilities, radionuclide solubilities and distribution coefficients, fracture spacing in the Culebra Dolomite Member of the Rustler Formation, porosity of the Culebra, and spatial variability of Culebra transmissivity. Performance with respect to 40 CFR 191B is insensitive to uncertainty in other parameters; however, additional data are needed to confirm that reality lies within the assigned distributions.

  10. Associations between hypo-HDL cholesterolemia and cardiometabolic risk factors in middle-aged men and women: Independence of habitual alcohol drinking, smoking and regular exercise.

    Science.gov (United States)

    Wakabayashi, Ichiro; Daimon, Takashi

    Hypo-HDL cholesterolemia is a potent cardiovascular risk factor, and HDL cholesterol level is influenced by lifestyles including alcohol drinking, smoking and regular exercise. The aim of this study was to clarify the relationships between hypo-HDL cholesterolemia and cardiovascular risk factors and to determine whether or not these relationships depend on the above-mentioned lifestyles. The subjects were 3456 men and 2510 women (35-60 years of age) showing low HDL cholesterol levels. Subjects without histories of alcohol drinking, smoking and regular exercise (men, n=333; women, n=1410) and their age-matched control subjects were also analysed. Both in men and in women, in the overall subjects and in the subjects without histories of alcohol drinking, smoking and regular exercise, odds ratios of subjects with hypo-HDL cholesterolemia vs. subjects with normo-HDL cholesterolemia for high body mass index, high waist-to-height ratio, high triglycerides, high lipid accumulation product and multiple risk factors (three or more out of obesity, hypertension, dyslipidaemia and diabetes) were significantly higher than the reference level of 1.00. These associations in the overall subjects were found when the above habits were adjusted for. Hypo-HDL cholesterolemic men and women have adverse cardiovascular profiles, such as obesity, hypertriglyceridemia and multiple risk factors, independently of age, alcohol drinking, smoking and regular exercise. Copyright © 2016 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.

  11. Performance of terahertz metamaterials as high-sensitivity sensor

    Science.gov (United States)

    He, Yanan; Zhang, Bo; Shen, Jingling

    2017-09-01

    A high-sensitivity sensor based on the resonant transmission characteristics of terahertz (THz) metamaterials was investigated, with the proposal and fabrication of rectangular bar arrays of THz metamaterials exhibiting a period of 180 μm on a 25 μm thick flexible polyimide. Varying the size of the metamaterial structure revealed that the length of the rectangular unit modulated the resonant frequency, which was verified by both experiment and simulation. The sensing characteristics were tested by simulation and experiment while varying the medium surrounding the sample. Changing the surrounding medium from air to alcohol or oil produced resonant frequency redshifts of 80 GHz or 150 GHz, respectively, which indicates that the sensor possessed a high sensitivity of 667 GHz per refractive index unit (RIU). Finally, the influence of the sample substrate thickness on the sensor sensitivity was investigated by simulation. These results may serve as a reference for future sensor design.
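    The quoted figure follows from the usual definition of refractive-index sensitivity, S = Δf/Δn. The helper below is a hedged illustration; the paired shift and index change are hypothetical numbers chosen only to reproduce a value of the reported magnitude.

```python
def sensitivity_ghz_per_riu(delta_f_ghz: float, delta_n: float) -> float:
    """Refractive-index sensitivity S = Δf/Δn of a resonant sensor."""
    return delta_f_ghz / delta_n

# Hypothetical pairing: a 100 GHz redshift for a 0.15 change in index
print(sensitivity_ghz_per_riu(100.0, 0.15))   # ~667 GHz/RIU
```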

  12. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed

  13. Effective field theory dimensional regularization

    Science.gov (United States)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.

  14. A High-Sensitivity Current Sensor Utilizing CrNi Wire and Microfiber Coils

    Directory of Open Access Journals (Sweden)

    Xiaodong Xie

    2014-05-01

    We obtain an extremely high current sensitivity by wrapping a section of microfiber on a thin-diameter chromium-nickel wire. The detected current sensitivity is as high as 220.65 nm/A² for a structure length of only 35 μm. Such sensitivity is two orders of magnitude higher than the counterparts reported in the literature. Analysis shows that a higher resistivity and/or a thinner diameter of the metal wire may produce higher sensitivity. The effects of varying the structure parameters on sensitivity are discussed. The presented structure has potential for low-current sensing or highly electrically-tunable filtering applications.
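    The nm/A² unit reflects the quadratic dependence of the resonance shift on current expected from resistive (Joule) heating, so the sensitivity is the slope of shift versus I². The calibration data below are synthetic, generated around the reported coefficient purely for illustration.

```python
# Synthetic calibration: resonance shift is quadratic in current for Joule
# heating, so the sensitivity is recovered as the slope of shift vs I^2.
import numpy as np

rng = np.random.default_rng(1)
current = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])        # amps
shift_nm = 220.65 * current**2 + rng.normal(0.0, 1e-3, current.size)

k = np.polyfit(current**2, shift_nm, 1)[0]    # slope of shift vs I^2
print(f"current sensitivity: {k:.2f} nm/A^2")
```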

  15. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

    Two new networks are introduced that exhibit small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂ N), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. This suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  16. Nanowire-templated microelectrodes for high-sensitivity pH detection

    DEFF Research Database (Denmark)

    Antohe, V.A.; Radu, Adrian; Mátéfi-Tempfli, Mária

    2009-01-01

    A highly sensitive pH capacitive sensor has been designed by confined growth of vertically aligned nanowire arrays on interdigitated microelectrodes. The active surface of the device has been functionalized with an electrochemical pH transducer (polyaniline). We easily tune the device features... by combining lithographic techniques with electrochemical synthesis. The reported electrical LC resonance measurements show considerable sensitivity enhancement compared to conventional capacitive pH sensors realized with microfabricated interdigitated electrodes. The sensitivity can be easily improved...

  17. 75 FR 76006 - Regular Meeting

    Science.gov (United States)

    2010-12-07

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...

  18. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...

  19. Aluminum nanocantilevers for high sensitivity mass sensors

    DEFF Research Database (Denmark)

    Davis, Zachary James; Boisen, Anja

    2005-01-01

    We have fabricated Al nanocantilevers using a simple, one mask contact UV lithography technique with lateral and vertical dimensions under 500 and 100 nm, respectively. These devices are demonstrated as highly sensitive mass sensors by measuring their dynamic properties. Furthermore, it is shown ...

  20. The Pajarito Monitor: a high-sensitivity monitoring system for highly enriched uranium

    International Nuclear Information System (INIS)

    Fehlau, P.E.; Coop, K.; Garcia, C.; Martinez, J.

    1984-01-01

    The Pajarito Monitor for Special Nuclear Material is a high-sensitivity gamma-ray monitoring system for detecting small quantities of highly enriched uranium transported by pedestrians or motor vehicles. The monitor consists of two components: a walk-through personnel monitor and a vehicle monitor. The personnel monitor has a plastic-scintillator detector portal, a microwave occupancy monitor, and a microprocessor control unit that measures the radiation intensity during background and monitoring periods to detect transient diversion signals. The vehicle monitor examines stationary motor vehicles while the vehicle's occupants pass through the personnel portal to exchange their badges. The vehicle monitor has four groups of large plastic scintillators that scan the vehicle from above and below. Its microprocessor control unit measures separate radiation intensities in each detector group. Vehicle occupancy is sensed by a highway traffic detection system. Each monitor's controller is responsible for detecting diversion as well as serving as a calibration and trouble-shooting aid. Diversion signals are detected by a sequential probability ratio hypothesis test that minimizes the monitoring time in the vehicle monitor and adapts itself well to variations in individual passage speed in the personnel monitor. Designed to be highly sensitive to diverted enriched uranium, the monitoring system also exhibits exceptional sensitivity for plutonium
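    The decision rule named above, a sequential probability ratio test (SPRT), accumulates a log-likelihood ratio over successive count intervals and stops as soon as either threshold is crossed, which is what minimizes monitoring time. The sketch below shows a generic Poisson SPRT; the count rates and error levels are hypothetical, not the monitor's calibrated values.

```python
# Generic sequential probability ratio test on Poisson count data.
import numpy as np
from scipy.stats import poisson

bkg_rate, src_rate = 10.0, 14.0      # counts per interval, hypothetical
alpha, beta = 0.01, 0.01             # false-alarm and miss probabilities
upper = np.log((1.0 - beta) / alpha)
lower = np.log(beta / (1.0 - alpha))

rng = np.random.default_rng(5)
llr = 0.0
for n in rng.poisson(src_rate, 100):             # stream of measured counts
    llr += poisson.logpmf(n, src_rate) - poisson.logpmf(n, bkg_rate)
    if llr >= upper:
        print("alarm: source present")
        break
    if llr <= lower:
        print("clear: background only")
        break
```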

  1. An Underwater Acoustic Vector Sensor with High Sensitivity and Broad Band

    Directory of Open Access Journals (Sweden)

    Hu Zhang

    2014-05-01

    Recently, acoustic vector sensors that use accelerometers as sensing elements have been widely used in underwater acoustic engineering, but their sensitivity in the low-frequency band is usually below -220 dB. In this paper, using an optimized piezoelectric trilaminar low-frequency sensing element, we designed a high-sensitivity, internally placed ICP piezoelectric accelerometer as the sensing element. Through structure optimization, we produced a high-sensitivity, broadband, small-scale vector sensor. The working band is 10-2000 Hz, the sound pressure sensitivity is -185 dB (at 100 Hz), the outer diameter is 42 mm, and the length is 80 mm.

  2. Low-Complexity Regularization Algorithms for Image Deblurring

    KAUST Repository

    Alanazi, Abdulrahman

    2016-11-01

    Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low- and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, we developed algorithms that also work
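    The RLS building block referred to above is, in its simplest circular-convolution form, a Tikhonov-regularized inverse filter in the Fourier domain. The sketch below shows that generic textbook form, not the thesis's parameter-selection or bootstrap machinery; lam plays the role of the regularization parameter.

```python
# Generic Tikhonov-regularized least-squares (RLS) deconvolution under a
# circular-convolution assumption.
import numpy as np

def rls_deblur(blurred, psf, lam):
    H = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)   # (H*H + lam I)^-1 H* y
    return np.real(np.fft.ifft2(X))

# Usage: blur a random test image with a small Gaussian PSF, then restore it
rng = np.random.default_rng(0)
img = rng.random((128, 128))
u, v = np.meshgrid(np.arange(-3, 4), np.arange(-3, 4))
psf = np.exp(-(u ** 2 + v ** 2) / 2.0)
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = rls_deblur(blurred, psf, lam=1e-3)
```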

  3. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  4. Graded effects of regularity in language revealed by N400 indices of morphological priming.

    Science.gov (United States)

    Kielar, Aneta; Joanisse, Marc F

    2010-07-01

    Differential electrophysiological effects for regular and irregular linguistic forms have been used to support the theory that grammatical rules are encoded using a dedicated cognitive mechanism. The alternative hypothesis is that language systematicities are encoded probabilistically in a way that does not categorically distinguish rule-like and irregular forms. In the present study, this matter was investigated more closely by focusing specifically on whether the regular-irregular distinction in English past tenses is categorical or graded. We compared the ERP priming effects of regulars (baked-bake), vowel-change irregulars (sang-sing), and "suffixed" irregulars that display a partial regularity (e.g., slept-sleep), as well as forms that are related strictly along formal or semantic dimensions. Participants performed a visual lexical decision task with either a visual (Experiment 1) or an auditory (Experiment 2) prime. Stronger N400 priming effects were observed for regular than vowel-change irregular verbs, whereas suffixed irregulars tended to group with regular verbs. Subsequent analyses decomposed early versus late-going N400 priming, and suggested that differences among forms can be attributed to the orthographic similarity of prime and target. Effects of morphological relatedness were observed in the later-going time period; however, we failed to observe true regular-irregular dissociations in either experiment. The results indicate that morphological effects emerge from the interaction of orthographic, phonological, and semantic overlap between words.

  5. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  6. Geometric continuum regularization of quantum field theory

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1989-01-01

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs

  7. Intrinsic motivation factors based on the self-determinant theory for regular breast cancer screening.

    Science.gov (United States)

    Jung, Su Mi; Jo, Heui-Sug

    2014-01-01

    The purpose of this study was to identify factors of intrinsic motivation that affect regular breast cancer screening and contribute to the development of a program of strategies to improve effective breast cancer screening. Subjects were women aged over 40 and under 69 years residing in Gangwon Province, South Korea. For the investigation, the Intrinsic Motivation Inventory (IMI) was adapted to the cancer screening situation and used to survey 905 inhabitants. Multinomial logistic regression analyses were conducted for regular breast cancer screening (RS), one-time breast cancer screening (OS) and non-breast cancer screening (NS). For statistical analysis, IBM SPSS 20.0 was utilized. The determinant factors between RS and NS were "perceived effort and choice" and "stress and strain", internal motivations related to regular breast cancer screening. The determinant factors between RS and OS were "age" and "perceived effort and choice", an internal motivation related to cancer screening. To increase regular screening, strategies that address individual perceived effort and choice are recommended.
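    A hedged sketch of the reported analysis type, multinomial logistic regression of screening status on motivation factors, is given below using scikit-learn rather than SPSS; all data are random placeholders, not the survey responses.

```python
# Multinomial logistic regression of screening status (RS / OS / NS) on two
# IMI-style factors; predictors and outcomes below are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(905, 2))            # e.g. "effort/choice", "stress/strain"
y = rng.choice(["RS", "OS", "NS"], size=905)

model = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)
for cls, coefs in zip(model.classes_, np.exp(model.coef_)):
    print(cls, coefs.round(2))           # per-class odds ratios per factor
```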

  8. Contraction of high eccentricity satellite orbits using uniformly regular KS canonical elements with oblate diurnally varying atmosphere.

    Science.gov (United States)

    Raj, Xavier James

    2016-07-01

    Accurate orbit prediction of an artificial satellite under the influence of air drag is one of the most difficult and intractable problems in orbital dynamics. The orbital decay of these satellites is mainly controlled by atmospheric drag effects. The effects of the atmosphere are difficult to determine, since the atmospheric density undergoes large fluctuations. The classical Newtonian equations of motion, which are nonlinear, are not suitable for long-term integration. Many transformations have emerged in the literature to stabilize the equations of motion, either to reduce the accumulation of local numerical errors or to allow the use of larger integration step sizes, or both, in the transformed space. One such transformation is the KS transformation of Kustaanheimo and Stiefel, who regularized the nonlinear Kepler equations of motion and reduced them to the linear differential equations of a harmonic oscillator of constant frequency. The method of KS total energy element equations has been found to be a very powerful method for obtaining numerical as well as analytical solutions with respect to any type of perturbing force, as the equations are less sensitive to round-off and truncation errors. The uniformly regular KS canonical equations are a particular canonical form of the KS differential equations, in which all ten KS canonical elements αi and βi are constant for unperturbed motion. These equations permit the uniform formulation of the basic laws of elliptic, parabolic and hyperbolic motion. Using these equations, an analytical solution was developed for short-term orbit predictions with respect to Earth's zonal harmonic terms J2, J3, J4. Further, these equations were utilized to include the canonical forces, and analytical theories with air drag were developed for low-eccentricity (e < 0.2) orbits by assuming the atmosphere to be oblate only. In this paper a new non-singular analytical theory is developed for the motion of high eccentricity satellite
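    For reference, the KS transformation itself maps four regularized coordinates u to Cartesian position through fixed quadratic forms. One common sign convention is sketched below; this is the bare coordinate map only, not the paper's drag theory.

```python
# KS coordinate map: u = (u1, u2, u3, u4) gives the Cartesian position, and
# the radius appears algebraically as the sum of squares of the u's.
import numpy as np

def ks_to_cartesian(u):
    u1, u2, u3, u4 = u
    x = u1**2 - u2**2 - u3**2 + u4**2
    y = 2.0 * (u1 * u2 - u3 * u4)
    z = 2.0 * (u1 * u3 + u2 * u4)
    r = u1**2 + u2**2 + u3**2 + u4**2
    return np.array([x, y, z]), r

pos, r = ks_to_cartesian(np.array([1.0, 0.2, 0.1, 0.0]))
assert np.isclose(np.linalg.norm(pos), r)   # |x| equals u·u identically
```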

  9. Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation

    International Nuclear Information System (INIS)

    Bardsley, Johnathan M; Goldes, John

    2009-01-01

    In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness
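    One statistically motivated selection rule (a Poisson analogue of the discrepancy principle) chooses the parameter so that the Kullback-Leibler data-fit discrepancy matches its expected level, roughly the number of data points. The toy sketch below illustrates that idea on a 1-D deblurring problem; it is a generic illustration, not the authors' three estimators.

```python
# Toy regularized Poisson maximum likelihood on a 1-D blurred signal, with a
# discrepancy-style check of candidate regularization parameters.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 64
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)   # blur matrix
x_true = np.maximum(0.0, np.sin(idx / 6.0)) * 50.0
y = rng.poisson(A @ x_true)                                     # Poisson data

def objective(x, lam):
    z = A @ x + 1e-9
    return np.sum(z - y * np.log(z)) + 0.5 * lam * np.sum(x ** 2)

for lam in [1e-4, 1e-2, 1e0]:
    res = minimize(objective, np.ones(n), args=(lam,),
                   bounds=[(0.0, None)] * n, method="L-BFGS-B")
    z = A @ res.x + 1e-9
    kl = 2.0 * np.sum(y * np.log((y + 1e-9) / z) - y + z)
    print(f"lambda={lam:.0e}  KL discrepancy={kl:.1f}  (target ~ {n})")
```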

  10. Regularity increases middle latency evoked and late induced beta brain response following proprioceptive stimulation

    DEFF Research Database (Denmark)

    Arnfred, Sidse M.; Hansen, Lars Kai; Parnas, Josef

    2008-01-01

    ... as an indication of increased readiness. This is achieved through detailed analysis of both evoked and induced responses in the time-frequency domain. Electroencephalography in a 64-channel montage was recorded in fourteen healthy subjects. Two paradigms were explored: a regular alternation between hand... After initial exploration of the AvVVT and Induced collapsed files of all subjects using two-way factor analyses (Non-Negative Matrix Factorization), further data decomposition was performed in restricted windows of interest (WOI). Main effects of side of stimulation, onset or offset, regularity

  11. High-intensity xenon plasma discharge lamp for bulk-sensitive high-resolution photoemission spectroscopy.

    Science.gov (United States)

    Souma, S; Sato, T; Takahashi, T; Baltzer, P

    2007-12-01

    We have developed a highly brilliant xenon (Xe) discharge lamp operated by microwave-induced electron cyclotron resonance (ECR) for ultrahigh-resolution bulk-sensitive photoemission spectroscopy (PES). We observed at least eight strong radiation lines from neutral or singly ionized Xe atoms in the energy region of 8.4-10.7 eV. The photon flux of the strongest Xe I resonance line at 8.437 eV is comparable to that of the He Iα line (21.218 eV) from the He-ECR discharge lamp. Stable operation for more than 300 h is achieved by efficient air-cooling of a ceramic tube in the resonance cavity. The high bulk sensitivity and high-energy resolution of PES using the Xe lines are demonstrated for some typical materials.

  12. Systematic comparative and sensitivity analyses of additive and outranking techniques for supporting impact significance assessments

    International Nuclear Information System (INIS)

    Cloquell-Ballester, Vicente-Agustin; Monterde-Diaz, Rafael; Cloquell-Ballester, Victor-Andres; Santamarina-Siurana, Maria-Cristina

    2007-01-01

    Assessing the significance of environmental impacts is one of the most important and altogether difficult processes of Environmental Impact Assessment. This is largely due to the multicriteria nature of the problem. To date, decision techniques used in the process suffer from two drawbacks, namely the problem of compensation and the problem of identification of the 'exact boundary' between sub-ranges. This article discusses these issues and proposes a methodology for determining the significance of environmental impacts based on comparative and sensitivity analyses using the Electre TRI technique. An application of the methodology for the environmental assessment of a Power Plant project within the Valencian Region (Spain) is presented, and its performance evaluated. It is concluded that contrary to other techniques, Electre TRI automatically identifies those cases where allocation of significance categories is most difficult and, when combined with sensitivity analysis, offers greatest robustness in the face of variation in weights of the significance attributes. Likewise, this research demonstrates the efficacy of systematic comparison between Electre TRI and sum-based techniques in the solution of assignment problems. The proposed methodology can therefore be regarded as a successful aid to the decision-maker, who will ultimately take the final decision

  13. Spiking Regularity and Coherence in Complex Hodgkin–Huxley Neuron Networks

    International Nuclear Information System (INIS)

    Zhi-Qiang, Sun; Ping, Xie; Wei, Li; Peng-Ye, Wang

    2010-01-01

    We study the effects of the strength of coupling between neurons on the spiking regularity and coherence in a complex network with randomly connected Hodgkin–Huxley neurons driven by colored noise. It is found that for a given topology realization and colored noise correlation time, there exists an optimal strength of coupling at which the spiking regularity of the network reaches the best level. Moreover, when the temporal regularity reaches the best level, the spatial coherence of the system has already increased to a relatively high level. In addition, for a given number of neurons and noise correlation time, the values of average regularity and spatial coherence at the optimal strength of coupling are nearly independent of the topology realization. Furthermore, there exists an optimal value of colored noise correlation time at which the spiking regularity can reach its best level. These results may be helpful for understanding real neuronal systems. (cross-disciplinary physics and related areas of science and technology)
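    Spiking regularity in such studies is commonly quantified through the coefficient of variation of interspike intervals. A minimal sketch of that measure, applied to two hypothetical spike trains, follows.

```python
# Regularity of a spike train via interspike intervals (ISIs): the inverse
# coefficient of variation, mean(ISI)/std(ISI), grows as firing becomes
# more regular. Both spike trains below are hypothetical.
import numpy as np

def regularity(spike_times):
    isi = np.diff(spike_times)
    return isi.mean() / isi.std()

rng = np.random.default_rng(7)
jittered_periodic = np.cumsum(1.0 + 0.05 * rng.standard_normal(500))
poisson_like = np.cumsum(rng.exponential(1.0, 500))
print(regularity(jittered_periodic))   # large: nearly periodic firing
print(regularity(poisson_like))        # ~1: irregular, Poisson-like firing
```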

  14. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  15. Regular examinations for toxic maculopathy in long-term chloroquine or hydroxychloroquine users.

    Science.gov (United States)

    Nika, Melisa; Blachley, Taylor S; Edwards, Paul; Lee, Paul P; Stein, Joshua D

    2014-10-01

    According to evidence-based, expert recommendations, long-term users of chloroquine or hydroxychloroquine sulfate should undergo regular visits to eye care providers and diagnostic testing to check for maculopathy. To determine whether patients with rheumatoid arthritis (RA) or systemic lupus erythematosus (SLE) taking chloroquine or hydroxychloroquine are regularly visiting eye care providers and being screened for maculopathy. Patients with RA or SLE who were continuously enrolled in a particular managed care network for at least 5 years between January 1, 2001, and December 31, 2011, were studied. Patients' amount of chloroquine or hydroxychloroquine use in the 5 years since the initial RA or SLE diagnosis was calculated, along with their number of eye care visits and diagnostic tests for maculopathy. Those at high risk for maculopathy were identified. Logistic regression was performed to assess potential factors associated with regular eye care visits (annual visits in ≥3 of 5 years) among chloroquine or hydroxychloroquine users, including those at highest risk for maculopathy. Among chloroquine or hydroxychloroquine users and those at high risk for toxic maculopathy, the proportions with regular eye care visits and diagnostic testing, as well as the likelihood of regular eye care visits. Among 18 051 beneficiaries with RA or SLE, 6339 (35.1%) had at least 1 record of chloroquine or hydroxychloroquine use, and 1409 (7.8%) had used chloroquine or hydroxychloroquine for at least 4 years. Among those at high risk for maculopathy, 27.9% lacked regular eye care visits, 6.1% had no visits to eye care providers, and 34.5% had no diagnostic testing for maculopathy during the 5-year period. Among high-risk patients, each additional month of chloroquine or hydroxychloroquine use was associated with a 2.0% increased likelihood of regular eye care (adjusted odds ratio, 1.02; 95% CI, 1.01-1.03). High-risk patients whose SLE or RA was managed by rheumatologists had a 77

  16. High-speed high-sensitivity infrared spectroscopy using mid-infrared swept lasers (Conference Presentation)

    Science.gov (United States)

    Childs, David T. D.; Groom, Kristian M.; Hogg, Richard A.; Revin, Dmitry G.; Cockburn, John W.; Rehman, Ihtesham U.; Matcher, Stephen J.

    2016-03-01

    Infrared spectroscopy is a highly attractive read-out technology for compositional analysis of biomedical specimens because of its unique combination of high molecular sensitivity without the need for exogenous labels. Traditional techniques such as FTIR and Raman have suffered from comparatively low speed and sensitivity; however, recent innovations are challenging this situation. Direct mid-IR spectroscopy is being speeded up by innovations such as MEMS-based FTIR instruments with very high mirror speeds and supercontinuum sources producing very high sample irradiation levels. Here we explore another possible method: external cavity quantum cascade lasers (EC-QCLs) with high cavity tuning speeds (mid-IR swept lasers). Swept lasers have been heavily developed in the near-infrared, where they are used for non-destructive low-coherence imaging (OCT). We adapt these concepts in two ways. First, by combining mid-IR quantum cascade gain chips with external cavity designs adapted from OCT, we achieve spectral acquisition rates approaching 1 kHz and demonstrate potential to reach 100 kHz. Second, we show that mid-IR swept lasers share a fundamental sensitivity advantage with near-IR OCT swept lasers. This makes them potentially able to achieve the same spectral SNR as an FTIR instrument in a time ×N shorter (N being the number of spectral points) under otherwise matched conditions. This effect is demonstrated using measurements of a PDMS sample. The combination of potentially very high spectral acquisition rates, fundamental SNR advantage and the use of low-cost detector systems could make mid-IR swept lasers a powerful technology for high-throughput biomedical spectroscopy.

  17. Regularities of radiation heredity

    International Nuclear Information System (INIS)

    Skakov, M.K.; Melikhov, V.D.

    2001-01-01

    Regularities of radiation heredity in metals and alloys are analyzed. It is concluded that irradiation produces thermodynamically irreversible changes in the structure of materials. Possible ways are proposed by which radiation effects are inherited through high-temperature transformations in the materials. The phenomenon of radiation heredity may be turned to practical use to control the structure of liquid metal and, correspondingly, of the ingot, via preliminary radiation treatment of the charge. Concentration microheterogeneities in the material defect structure induced by preliminary irradiation represent the genetic factor of radiation heredity

  18. Stochastic dynamic modeling of regular and slow earthquakes

    Science.gov (United States)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even though we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales we need to consider stochastic interactions between slip and stress in a dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be considered as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve a mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations in the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, external force, and friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce the complicated rupture process of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interaction at S-wave velocity is analogous to the kinetic theory of gases: thermal

  19. REGULARITIES OF THE INFLUENCE OF ORGANIZATIONAL AND TECHNOLOGICAL FACTORS ON THE DURATION OF CONSTRUCTION OF HIGH-RISE MULTIFUNCTIONAL COMPLEXES

    Directory of Open Access Journals (Sweden)

    ZAIATS Yi. I.

    2015-10-01

    Problem statement. The technical and economic indexes of projects for the construction of high-rise multifunctional complexes, namely the duration of construction works and the cost of building products, depend on the technology of construction works and the method of construction organization, whose choice is in turn influenced by architectural, design, structural and engineering decisions. Purpose. To reveal the regularities of the influence of organizational and technological factors on the duration of construction of high-rise multifunctional complexes in the conditions of dense city building. Conclusion. The revealed regularities of the influence of organizational and technological factors (the height; the complexity factor of the design and estimate documentation; the complexity factor of construction works; the complexity factor of control of the investment and construction project; the economy factor; the comfort factor; the technology factor of the projected solutions) on the duration of construction of high-rise multifunctional complexes (depending on their height: from 73.5 m to 100 m inclusive; from 100 m to 200 m inclusive) allow us to quantitatively assess their influence and can be used in developing a methodology for substantiating the expediency and effectiveness of realizing high-rise construction projects in conditions of compacted urban development, based on consideration of the influence of organizational and technological aspects.

  20. Development of high sensitivity and high speed large size blank inspection system LBIS

    Science.gov (United States)

    Ohara, Shinobu; Yoshida, Akinori; Hirai, Mitsuo; Kato, Takenori; Moriizumi, Koichi; Kusunose, Haruhiko

    2017-07-01

    The production of high-resolution flat panel displays (FPDs) for mobile phones today requires the use of high-quality large-size photomasks (LSPMs). Organic light emitting diode (OLED) displays use several transistors on each pixel for precise current control and, as such, the mask patterns for OLED displays are denser and finer than the patterns for previous-generation displays throughout the entire mask surface. It is therefore strongly demanded that mask patterns be produced with high fidelity and free of defects. To enable the production of a high quality LSPM in a short lead time, manufacturers need a high-sensitivity, high-speed mask blank inspection system that meets the requirements of advanced LSPMs. Lasertec has developed a large-size blank inspection system called LBIS, which achieves high sensitivity based on a laser-scattering technique. LBIS employs a high power laser as its inspection light source. LBIS's delivery optics, including a scanner and F-Theta scan lens, focus the light from the source linearly on the surface of the blank. Its specially-designed optics collect the light scattered by particles and defects generated during the manufacturing process, such as scratches, on the surface and guide it to photomultiplier tubes (PMTs) with high efficiency. Multiple PMTs are used on LBIS for the stable detection of scattered light, which may be distributed at various angles due to irregular shapes of defects. LBIS captures 0.3 μm PSL (polystyrene latex) spheres at a detection rate of over 99.5% with uniform sensitivity. Its inspection time is 20 minutes for a G8 blank and 35 minutes for G10. The differential interference contrast (DIC) microscope on the inspection head of LBIS captures high-contrast review images after inspection. The images are classified automatically.

  1. High-resolution, high-sensitivity NMR of nano-litre anisotropic samples by coil spinning

    Energy Technology Data Exchange (ETDEWEB)

    Sakellariou, D [CEA Saclay, DSM, DRECAM, SCM, Lab Struct and Dynam Resonance Magnet, CNRS URA 331, F-91191 Gif Sur Yvette, (France); Le Goff, G; Jacquinot, J F [CEA Saclay, DSM, DRECAM, SPEC: Serv Phys Etat Condense, CNRS URA 2464, F-91191 Gif Sur Yvette, (France)

    2007-07-01

    Nuclear magnetic resonance (NMR) can probe the local structure and dynamic properties of liquids and solids, making it one of the most powerful and versatile analytical methods available today. However, its intrinsically low sensitivity precludes NMR analysis of very small samples - as frequently used when studying isotopically labelled biological molecules or advanced materials, or as preferred when conducting high-throughput screening of biological samples or 'lab-on-a-chip' studies. The sensitivity of NMR has been improved by using static micro-coils, alternative detection schemes and pre-polarization approaches. But these strategies cannot be easily used in NMR experiments involving the fast sample spinning essential for obtaining well-resolved spectra from non-liquid samples. Here we demonstrate that inductive coupling allows wireless transmission of radio-frequency pulses and the reception of NMR signals under fast spinning of both detector coil and sample. This enables NMR measurements characterized by an optimal filling factor, very high radio-frequency field amplitudes and enhanced sensitivity that increases with decreasing sample volume. Signals obtained for nano-litre-sized samples of organic powders and biological tissue increase by almost one order of magnitude (or, equivalently, are acquired two orders of magnitude faster), compared to standard NMR measurements. Our approach also offers optimal sensitivity when studying samples that need to be confined inside multiple safety barriers, such as radioactive materials. In principle, the co-rotation of a micrometer-sized detector coil with the sample and the use of inductive coupling (techniques that are at the heart of our method) should enable highly sensitive NMR measurements on any mass-limited sample that requires fast mechanical rotation to obtain well-resolved spectra. The method is easy to implement on a commercial NMR set-up and exhibits improved performance with miniaturization, and we

  2. Multimode fiber tip Fabry-Perot cavity for highly sensitive pressure measurement.

    Science.gov (United States)

    Chen, W P; Wang, D N; Xu, Ben; Zhao, C L; Chen, H F

    2017-03-23

    We demonstrate an optical Fabry-Perot interferometer fiber tip sensor based on an etched end of multimode fiber filled with ultraviolet adhesive. The fiber device is miniature (with diameter of less than 60 μm), robust and low cost, in a convenient reflection mode of operation, and has a very high gas pressure sensitivity of -40.94 nm/MPa, a large temperature sensitivity of 213 pm/°C within the range from 55 to 85 °C, and a relatively low temperature cross-sensitivity of 5.2 kPa/°C. This device has a high potential in monitoring environment of high pressure.
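    The quoted cross-sensitivity is consistent with the two quoted responses: dividing the temperature response by the pressure response converts a temperature drift into an equivalent pressure error, as the short check below shows.

```python
# Consistency check of the quoted figures: temperature cross-sensitivity in
# pressure units equals the temperature response over the pressure response.
temp_response_nm_per_C = 0.213          # 213 pm/°C
pressure_response_nm_per_MPa = 40.94    # magnitude of -40.94 nm/MPa
cross_kPa_per_C = temp_response_nm_per_C / pressure_response_nm_per_MPa * 1000.0
print(f"{cross_kPa_per_C:.1f} kPa/°C")  # ≈ 5.2, matching the abstract
```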

  3. A sensitive nonenzymatic hydrogen peroxide sensor based on ...

    Indian Academy of Sciences (India)

    For example, H2O2 is useful for food production, sterilization, clinical applications and environmental analyses [1-4]. Further, ... and showed a fast response and high sensitivity [9]. Gu et al [10] have synthesized Cu–Ni(OH)2 nanocomposites and applied them as a fast and sensitive H2O2 sensor material. Ag nanoparticles were ...

  4. The relationship between lifestyle regularity and subjective sleep quality

    Science.gov (United States)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

    In previous work we have developed a diary instrument, the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity, and a questionnaire instrument, the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant negative correlation (rho = -0.4): subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.
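    The reported association is a rank correlation between a two-week SRM score and the PSQI. A sketch of that test with scipy follows; the two arrays are hypothetical stand-ins for the 100-subject sample, constructed only so that the direction of the effect matches the text.

```python
# Spearman rank correlation between SRM scores (lifestyle regularity) and
# PSQI scores (higher = worse sleep), on hypothetical stand-in data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
srm = rng.uniform(1.0, 7.0, 100)                 # regularity scores
psqi = 10.0 - srm + rng.normal(0.0, 1.5, 100)    # more regular -> fewer problems
rho, p = spearmanr(srm, psqi)
print(f"rho = {rho:.2f}, p = {p:.1e}")           # negative rho, as in the study
```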

  5. High mass resolution time of flight mass spectrometer for measuring products in heterogeneous catalysis in highly sensitive microreactors

    DEFF Research Database (Denmark)

    Andersen, Thomas; Jensen, Robert; Christensen, M. K.

    2012-01-01

    We demonstrate a combined microreactor and time of flight system for testing and characterization of heterogeneous catalysts with high resolution mass spectrometry and high sensitivity. Catalyst testing is performed in silicon-based microreactors which have high sensitivity and fast thermal...

  6. Modelling the Cost Performance of a Given Logistics Network Operating Under Regular and Irregular Conditions

    NARCIS (Netherlands)

    Janic, M.

    2009-01-01

    This paper develops an analytical model for the assessment of the cost performance of a given logistics network operating under regular and irregular (disruptive) conditions. In addition, the paper aims to carry out a sensitivity analysis of this cost with respect to changes of the most influencing

  7. Achieving sensitive, high-resolution laser spectroscopy at CRIS

    Energy Technology Data Exchange (ETDEWEB)

    Groote, R. P. de [Instituut voor Kern- en Stralingsfysica, KU Leuven (Belgium); Lynch, K. M., E-mail: kara.marie.lynch@cern.ch [EP Department, CERN, ISOLDE (Switzerland); Wilkins, S. G. [The University of Manchester, School of Physics and Astronomy (United Kingdom); Collaboration: the CRIS collaboration

    2017-11-15

    The Collinear Resonance Ionization Spectroscopy (CRIS) experiment, located at the ISOLDE facility, has recently performed high-resolution laser spectroscopy, with linewidths down to 20 MHz. In this article, we present the modifications to the beam line and the newly-installed laser systems that have made sensitive, high-resolution measurements possible. Highlights of recent experimental campaigns are presented.

  8. Regularities of Multifractal Measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in ℝ^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  9. Wavelet domain image restoration with adaptive edge-preserving regularization.

    Science.gov (United States)

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet-based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
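    The core wavelet-domain operation underlying such schemes is shrinkage of detail coefficients. The sketch below shows a generic soft-thresholding step with PyWavelets; the single global threshold is a simplification, whereas the paper adapts the degree of regularization to local image structure.

```python
# Generic wavelet-domain shrinkage: soft-threshold the detail coefficients
# at every scale, keep the coarse approximation, and reconstruct.
import numpy as np
import pywt

def wavelet_shrink(img, wavelet="db2", level=3, thresh=0.1):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]                                  # keep coarse scale
    for detail in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, thresh, mode="soft") for d in detail))
    return pywt.waverec2(out, wavelet)

noisy = np.random.default_rng(0).normal(0.0, 0.1, (64, 64))
restored = wavelet_shrink(noisy)
```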

  10. Rapid analysis of heterogeneously methylated DNA using digital methylation-sensitive high resolution melting: application to the CDKN2B (p15) gene

    DEFF Research Database (Denmark)

    Candiloro, Ida Lm; Mikeska, Thomas; Hokland, Peter

    2008-01-01

    ABSTRACT: BACKGROUND: Methylation-sensitive high resolution melting (MS-HRM) methodology is able to recognise heterogeneously methylated sequences by their characteristic melting profiles. To further analyse heterogeneously methylated sequences, we adopted a digital approach to MS-HRM (dMS-HRM) that involves the amplification of single templates after limiting dilution to quantify and to determine the degree of methylation. We used this approach to study methylation of the CDKN2B (p15) cell cycle progression inhibitor gene, which is inactivated by DNA methylation in haematological malignancies... the methylated alleles and assess the degree of methylation. Direct sequencing of selected dMS-HRM products was used to determine the exact DNA methylation pattern and confirmed the degree of methylation estimated by dMS-HRM. CONCLUSION: dMS-HRM is a powerful technique for the analysis of methylation in CDKN2B
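    The limiting-dilution step relies on Poisson statistics: diluting until most positive reactions contain exactly one template. The short calculation below, which assumes ideal Poisson seeding, shows the single-template fraction among positive wells for a few mean template numbers m.

```python
# Under ideal Poisson seeding with mean m templates per reaction, the chance
# that a positive reaction holds exactly one template is m e^-m / (1 - e^-m).
import numpy as np

for m in [0.1, 0.3, 1.0, 2.0]:
    p_positive = 1.0 - np.exp(-m)
    p_single = m * np.exp(-m) / p_positive
    print(f"m={m:.1f}  positive: {p_positive:.2f}  "
          f"single-template among positives: {p_single:.2f}")
```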

  11. Highly efficient and stable cyclometalated ruthenium(II) complexes as sensitizers for dye-sensitized solar cells

    International Nuclear Information System (INIS)

    Huang, Jian-Feng; Liu, Jun-Min; Su, Pei-Yang; Chen, Yi-Fan; Shen, Yong; Xiao, Li-Min; Kuang, Dai-Bin; Su, Cheng-Yong

    2015-01-01

    Highlights: • Four novel thiocyanate-free cyclometalated ruthenium sensitizers were conveniently synthesized. • The D-CF3-sensitized DSSCs show higher efficiency compared to N719-based cells. • The DSSCs based on D-CF3 and D-bisCF3 sensitizers exhibit excellent long-term stability. • Diverse cyclometalated Ru complexes can be developed as high-performance sensitizers for use in DSSCs. - Abstract: Four novel thiocyanate-free cyclometalated Ru(II) complexes, D-bisCF3, D-CF3, D-OMe, and D-DPA, bearing two 4,4′-dicarboxylic acid-2,2′-bipyridine ligands together with a functionalized phenylpyridine ancillary ligand, have been designed and synthesized. The effect of the different substituents (R = bisCF3, CF3, OMe, and DPA) on the ancillary C^N ligand on the photophysical properties and photovoltaic performance is investigated. Under standard global AM 1.5 solar conditions, the device based on the D-CF3 sensitizer gives a higher conversion efficiency, of 8.74%, than those based on D-bisCF3, D-OMe, and D-DPA, which can be ascribed to its broad range of visible light absorption, appropriate localization of the frontier orbitals, weak hydrogen bonds between -CF3 and -OH groups at the TiO2 surface, moderate dye loading on TiO2, and high charge collection efficiency. Moreover, the D-bisCF3 and D-CF3 based DSSCs exhibit good stability under 100 mW cm⁻² light soaking at 60 °C for 400 h

  12. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...

  13. Sensitivity analyses of seismic behavior of spent fuel dry cask storage systems

    International Nuclear Information System (INIS)

    Luk, V.K.; Spencer, B.W.; Shaukat, S.K.; Lam, I.P.; Dameron, R.A.

    2003-01-01

    Sandia National Laboratories is conducting a research project to develop a comprehensive methodology for evaluating the seismic behavior of spent fuel dry cask storage systems (DCSS) for the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission (NRC). A typical Independent Spent Fuel Storage Installation (ISFSI) consists of arrays of free-standing storage casks resting on concrete pads. In the safety review process of these cask systems, their seismically induced horizontal displacements and angular rotations must be quantified to determine whether casks will overturn or neighboring casks will collide during a seismic event. The ABAQUS/Explicit code is used to analyze three-dimensional coupled finite element models consisting of three submodels, which are a cylindrical cask or a rectangular module, a flexible concrete pad, and an underlying soil foundation. The coupled model includes two sets of contact surfaces between the submodels with prescribed coefficients of friction. The seismic event is described by one vertical and two horizontal components of statistically independent seismic acceleration time histories. A deconvolution procedure is used to adjust the amplitudes and frequency contents of these three-component reference surface motions before applying them simultaneously at the soil foundation base. The research project focused on examining the dynamic and nonlinear seismic behavior of the coupled model of free-standing DCSS including soil-structure interaction effects. This paper presents a subset of analysis results for a series of parametric analyses. Input variables in the parametric analyses include: designs of the cask/module, time histories of the seismic accelerations, coefficients of friction at the cask/pad interface, and material properties of the soil foundation. In subsequent research, the analysis results will be compiled and presented in nomograms to highlight the sensitivity of seismic response of DCSS to
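
    The parametric study described above amounts to enumerating combinations of cask design, input time history, interface friction coefficient, and soil properties. A minimal sketch of such an enumeration in Python follows; the parameter names and values are hypothetical placeholders, not those of the SNL analyses.

```python
from itertools import product

# Hypothetical parameter levels for a parametric seismic study of free-standing casks.
cask_designs   = ["cylindrical_cask", "rectangular_module"]
time_histories = ["TH1", "TH2", "TH3"]       # 3-component acceleration sets
friction_mu    = [0.2, 0.35, 0.5, 0.8]       # cask/pad interface friction
soil_profiles  = ["soft", "medium", "stiff"]

cases = list(product(cask_designs, time_histories, friction_mu, soil_profiles))
print(f"{len(cases)} analysis cases")        # 2 * 3 * 4 * 3 = 72
for i, (design, th, mu, soil) in enumerate(cases[:3], 1):
    print(i, design, th, mu, soil)
```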

  14. LOD score exclusion analyses for candidate genes using random population samples.

    Science.gov (United States)

    Deng, H W; Li, J; Recker, R R

    2001-05-01

    While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes with random population samples. We develop a LOD score approach for exclusion analyses of candidate genes with random population samples. Under this approach, specific genetic effects and inheritance models at candidate genes can be analysed and, if a LOD score is ≤ −2.0, the locus can be excluded from having an effect larger than that specified. Computer simulations show that, with sample sizes often employed in association studies, this approach has high power to exclude a gene from having moderate genetic effects. In contrast to regular association analyses, population admixture will not affect the robustness of our analyses; in fact, it renders our analyses more conservative and thus any significant exclusion result is robust. Our exclusion analysis complements association analysis for candidate genes in random population samples and is parallel to the exclusion mapping analyses that may be conducted in linkage analyses with pedigrees or relative pairs. The usefulness of the approach is demonstrated by an application to test the importance of vitamin D receptor and estrogen receptor genes underlying the differential risk to osteoporotic fractures.
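
    The exclusion criterion is a likelihood-ratio statistic on a log10 scale. A minimal sketch of the decision rule follows, assuming the log-likelihoods of the specified genetic model and of the null model have already been obtained by fitting; the numbers are hypothetical.

```python
import numpy as np

def lod_score(loglik_model, loglik_null):
    """LOD = log10 likelihood ratio of the specified genetic model vs. the null."""
    return (loglik_model - loglik_null) / np.log(10)

# Hypothetical log-likelihoods from a candidate-gene fit to a random population sample.
ll_specified = -1210.4   # model with the specified genetic effect and inheritance model
ll_null      = -1204.9   # no-effect (null) model

lod = lod_score(ll_specified, ll_null)
if lod <= -2.0:
    print(f"LOD = {lod:.2f}: an effect of the specified size can be excluded")
else:
    print(f"LOD = {lod:.2f}: no exclusion at the -2 criterion")
```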

  15. Formation factor of regular porous pattern in poly-α-methylstyrene film

    International Nuclear Information System (INIS)

    Yang Ruizhuang; Xu Jiajing; Gao Cong; Ma Shuang; Chen Sufen; Luo Xuan; Fang Yu; Li Bo

    2015-01-01

    Regular poly-α-methylstyrene (PAMS) porous film with micron-sized cells was prepared by casting the solution under high-humidity conditions. In this paper, the effects of the molecular weight of PAMS, PAMS concentration, humidity, temperature, volatile solvents and the thickness of the solution layer on the formation of the regular porous pattern in PAMS film are discussed. The results show that these factors significantly affect the pore size and the pore distribution. Capillary forces and Benard-Marangoni convection are the main driving forces that move the water droplets and arrange the pores regularly. (authors)

  16. Fourier Transform Mass Spectrometry: The Transformation of Modern Environmental Analyses

    Science.gov (United States)

    Lim, Lucy; Yan, Fangzhi; Bach, Stephen; Pihakari, Katianna; Klein, David

    2016-01-01

    Unknown compounds in environmental samples are difficult to identify using standard mass spectrometric methods. Fourier transform mass spectrometry (FTMS) has revolutionized how environmental analyses are performed. With its unsurpassed mass accuracy, high resolution and sensitivity, researchers now have a tool for difficult and complex environmental analyses. Two features of FTMS are responsible for changing the face of how complex analyses are accomplished. First is the ability to quickly and with high mass accuracy determine the presence of unknown chemical residues in samples. For years, the field has been limited by mass spectrometric methods that required knowing in advance which compounds were of interest. Secondly, by utilizing the high resolution capabilities coupled with the low detection limits of FTMS, analysts also could dilute the sample sufficiently to minimize the ionization changes from varied matrices. PMID:26784175

  17. Are inflationary predictions sensitive to very high energy physics?

    International Nuclear Information System (INIS)

    Burgess, C.P.; Lemieux, F.; Holman, R.; Cline, J.M.

    2003-01-01

    It has been proposed that the successful inflationary description of density perturbations on cosmological scales is sensitive to the details of physics at extremely high (trans-Planckian) energies. We test this proposal by examining how inflationary predictions depend on higher-energy scales within a simple model where the higher-energy physics is well understood. We find the best of all possible worlds: inflationary predictions are robust against the vast majority of high-energy effects, but can be sensitive to some effects in certain circumstances, in a way which does not violate ordinary notions of decoupling. This implies both that the comparison of inflationary predictions with CMB data is meaningful, and that it is also worth searching for small deviations from the standard results in the hopes of learning about very high energies. (author)

  18. Regularity and predictability of human mobility in personal space.

    Directory of Open Access Journals (Sweden)

    Daniel Austin

    Full Text Available Fundamental laws governing human mobility have many important applications such as forecasting and controlling epidemics or optimizing transportation systems. These mobility patterns, studied in the context of out-of-home activity during travel or social interactions, with observations recorded from cell phone use or diffusion of money, suggest that in extra-personal space humans follow a high degree of temporal and spatial regularity - most often in the form of time-independent universal scaling laws. Here we show that mobility patterns of older individuals in their home also show a high degree of predictability and regularity, although in a different way than has been reported for out-of-home mobility. Studying a data set of almost 15 million observations from 19 adults spanning up to 5 years of unobtrusive longitudinal home activity monitoring, we find that in-home mobility is not well represented by a universal scaling law, but that significant structure (predictability and regularity) is uncovered when explicitly accounting for contextual data in a model of in-home mobility. These results suggest that human mobility in personal space is highly stereotyped, and that monitoring discontinuities in routine room-level mobility patterns may provide an opportunity to predict individual human health and functional status or detect adverse events and trends.

  19. Prevalence and Correlates of Having a Regular Physician among Women Presenting for Induced Abortion.

    Science.gov (United States)

    Chor, Julie; Hebert, Luciana E; Hasselbacher, Lee A; Whitaker, Amy K

    2016-01-01

    To determine the prevalence and correlates of having a regular physician among women presenting for induced abortion. We conducted a retrospective review of women presenting to an urban, university-based family planning clinic for abortion between January 2008 and September 2011. We conducted bivariate analyses, comparing women with and without a regular physician, and multivariable regression modeling, to identify factors associated with not having a regular physician. Of 834 women, 521 (62.5%) had a regular physician and 313 (37.5%) did not. Women with a prior pregnancy, live birth, or spontaneous abortion were more likely than women without these experiences to have a regular physician. Women with a prior induced abortion were not more likely than women who had never had a prior induced abortion to have a regular physician. Compared with women younger than 18 years, women aged 18 to 26 years were less likely to have a physician (adjusted odds ratio [aOR], 0.25; 95% confidence interval [CI], 0.10-0.62). Women with a prior live birth had increased odds of having a regular physician compared with women without a prior pregnancy (aOR, 1.89; 95% CI, 1.13-3.16). Women without medical/fetal indications and who had not been victims of sexual assault (self-indicated) were less likely to report having a regular physician compared with women with medical/fetal indications (aOR, 0.55; 95% CI, 0.17-0.82). The abortion visit is a point of contact with a large number of women without a regular physician and therefore provides an opportunity to integrate women into health care. Copyright © 2016 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.

  20. Rotating Hayward’s regular black hole as particle accelerator

    International Nuclear Information System (INIS)

    Amir, Muhammed; Ghosh, Sushant G.

    2015-01-01

    Recently, Bañados, Silk and West (BSW) demonstrated that the extremal Kerr black hole can act as a particle accelerator with arbitrarily high center-of-mass energy (E_CM) when the collision takes place near the horizon. The rotating Hayward's regular black hole, apart from mass (M) and angular momentum (a), has a new parameter g (g>0 is a constant) that provides a deviation from the Kerr black hole. We demonstrate that for each g, with M=1, there exist a critical a_E and r_H^E which correspond to a regular extremal black hole with degenerate horizons, and a_E decreases whereas r_H^E increases with increasing g. The case a < a_E describes a regular non-extremal black hole with outer and inner horizons. We apply the BSW process to the rotating Hayward's regular black hole for different g and demonstrate numerically that the E_CM diverges in the vicinity of the horizon for the extremal cases, thereby suggesting that a rotating regular black hole can also act as a particle accelerator and thus in turn provide a suitable framework for Planck-scale physics. For the non-extremal case, there always exists a finite upper bound for the E_CM, which increases with the deviation parameter g.
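
    For reference, the centre-of-mass energy used in BSW-type analyses for two colliding particles of equal rest mass m and four-velocities u1, u2 is the standard general-relativistic relation (not specific to the Hayward metric):

```latex
E_{\mathrm{CM}} = \sqrt{2}\, m \sqrt{1 - g_{\mu\nu}\, u_{1}^{\mu} u_{2}^{\nu}}
```

    The near-horizon divergence in the extremal case arises when one of the particles carries the critical angular momentum.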

  1. Desensitization protocol in highly HLA-sensitized and ABO-incompatible high titer kidney transplantation.

    Science.gov (United States)

    Uchida, J; Machida, Y; Iwai, T; Naganuma, T; Kitamoto, K; Iguchi, T; Maeda, S; Kamada, Y; Kuwabara, N; Kim, T; Nakatani, T

    2010-12-01

    A positive crossmatch indicates the presence of donor-specific alloantibodies and is associated with a graft loss rate of >80%; anti-ABO blood group antibodies develop in response to exposure to foreign blood groups, resulting in immediate graft loss. However, a desensitization protocol for highly HLA-sensitized and ABO-incompatible high-titer kidney transplantation has not yet been established. We treated 6 patients with high (≥1:512) anti-A/B antibody titers and 2 highly HLA-sensitized patients. Our immunosuppression protocol was initiated 1 month before surgery and included mycophenolate mofetil (1 g/d) and/or low-dose steroid (methylprednisolone 8 mg/d). Two doses of the anti-CD20 antibody rituximab (150 mg/m²) were administered 2 weeks before and on the day of transplantation. We performed antibody removal with 6-12 sessions of plasmapheresis (plasma exchange or double-filtration plasmapheresis) before transplantation. Splenectomy was also performed on the day of transplantation. Postoperative immunosuppression followed the same regimen as ABO-compatible cases, in which calcineurin inhibitors were initiated 3 days before transplantation, combined with 2 doses of basiliximab. Of the 8 patients, 7 subsequently underwent successful living-donor kidney transplantation. Follow-up of our recipients showed that the patient and graft survival rates were 100%. Acute cellular rejection and antibody-mediated rejection episodes occurred in 1 of the 7 recipients. These findings suggest that our immunosuppression regimen consisting of rituximab infusions, splenectomy, plasmapheresis, and pharmacologic immunosuppression may prove to be effective as a desensitization protocol for highly HLA-sensitized and ABO-incompatible high-titer kidney transplantation. Copyright © 2010 Elsevier Inc. All rights reserved.

  2. Preparation and characterization of AuNPs/CNTs-ErGO electrochemical sensors for highly sensitive detection of hydrazine.

    Science.gov (United States)

    Zhao, Zhenting; Sun, Yongjiao; Li, Pengwei; Zhang, Wendong; Lian, Kun; Hu, Jie; Chen, Yong

    2016-09-01

    A highly sensitive electrochemical sensor for hydrazine has been fabricated by Au nanoparticle (AuNPs) coating of a carbon nanotubes-electrochemically reduced graphene oxide composite film (CNTs-ErGO) on a glassy carbon electrode (GCE). Cyclic voltammetry and potential amperometry have been used to investigate the electrochemical properties of the fabricated sensors for hydrazine detection. The performance of the sensors was optimized by varying the CNTs to ErGO ratio and the quantity of Au nanoparticles. The results show that under optimal conditions, a sensitivity of 9.73 μA μM−1 cm−2, a short response time of 3 s, and a low detection limit of 0.065 μM could be achieved, with a linear concentration response range from 0.3 μM to 319 μM. The enhanced electrochemical performance could be attributed to the synergistic effect between the AuNPs and the CNTs-ErGO film and the outstanding catalytic effect of the Au nanoparticles. Finally, the sensor was successfully used to analyse tap water, showing high potential for practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Are Long-Term Chloroquine or Hydroxychloroquine Users Being Checked Regularly for Toxic Maculopathy?

    Science.gov (United States)

    Nika, Melisa; Blachley, Taylor S.; Edwards, Paul; Lee, Paul P.; Stein, Joshua D.

    2014-01-01

    Importance According to evidence-based, expert recommendations, long-term users of chloroquine (CQ) or hydroxychloroquine (HCQ) should undergo regular visits to eye-care providers and diagnostic testing to check for maculopathy. Objective To determine whether patients with rheumatoid arthritis (RA) or systemic lupus erythematosus (SLE) taking CQ or HCQ are regularly visiting eye-care providers and being screened for maculopathy. Setting, Design and Participants Patients with RA or SLE who were continuously enrolled in a particular managed-care network for ≥5 years during 2001-2011 were studied. Patients' amount of CQ/HCQ use in the 5 years since initial RA/SLE diagnosis was calculated, along with their number of eye-care visits and diagnostic tests for maculopathy. Those at high risk for maculopathy were identified. Visits to eye providers and diagnostic testing for maculopathy were assessed for each enrollee over the study period. Logistic regression was performed to assess potential factors associated with regular eye-care-provider visits (≥3 in 5 years) among CQ/HCQ users, including those at greatest risk for maculopathy. Main Outcome Measures Among CQ/HCQ users and those at high risk for toxic maculopathy, the proportions with regular eye-care visits and diagnostic testing, and the likelihood of regular eye-care visits (odds ratios [ORs] with 95% confidence intervals [CI]). Results Among 18,051 beneficiaries with RA or SLE, 6,339 (35.1%) had ≥1 record of HCQ/CQ use and 1,409 (7.8%) used HCQ/CQ for ≥4 years. Among those at high risk for maculopathy, 27.9% lacked regular eye-provider visits, 6.1% had no visits to eye providers, and 34.5% had no diagnostic testing for maculopathy during the 5-year period. Among high-risk patients, each additional month of HCQ/CQ use was associated with a 2.0%-increased likelihood of regular eye care (adjusted OR=1.02, CI=1.01-1.03). High-risk patients whose SLE/RA were managed by rheumatologists had a 77%-increased

  4. A Multisurface Interpersonal Circumplex Assessment of Rejection Sensitivity.

    Science.gov (United States)

    Cain, Nicole M; De Panfilis, Chiara; Meehan, Kevin B; Clarkin, John F

    2017-01-01

    Individuals high in rejection sensitivity (RS) are at risk for experiencing high levels of interpersonal distress, yet little is known about the interpersonal profiles associated with RS. This investigation examined the interpersonal problems, sensitivities, and values associated with RS in 2 samples: 763 multicultural undergraduate students (Study 1) and 365 community adults (Study 2). In Study 1, high anxious RS was associated with socially avoidant interpersonal problems, whereas low anxious RS was associated with vindictive interpersonal problems. In Study 2, we assessed both anxious and angry expectations of rejection. Circumplex profile analyses showed that the high anxious RS group reported socially avoidant interpersonal problems, sensitivities to remoteness in others, and valuing connections with others, whereas the high angry RS group reported vindictive interpersonal problems, sensitivities to submissiveness in others, and valuing detached interpersonal behavior. Low anxious RS was related to domineering interpersonal problems, sensitivity to attention-seeking behavior, and valuing detached interpersonal behavior, whereas low angry RS was related to submissive interpersonal problems, sensitivity to attention-seeking behavior, and valuing receiving approval from others. Overall, results suggest that there are distinct interpersonal profiles associated with varying levels and types of RS.

  5. Development and Verification of Tritium Analyses Code for a Very High Temperature Reactor

    International Nuclear Information System (INIS)

    Oh, Chang H.; Kim, Eung S.

    2009-01-01

    A tritium permeation analyses code (TPAC) has been developed by Idaho National Laboratory for the purpose of analyzing tritium distributions in VHTR systems, including integrated hydrogen production systems. The MATLAB SIMULINK software package was used for development of the code. The TPAC is based on the mass balance equations of tritium-containing species and various forms of hydrogen (i.e., HT, H2, HTO, HTSO4, and TI), coupled with a variety of tritium source, sink, and permeation models. In the TPAC, ternary fission and neutron reactions with 6Li, 7Li, 10B, and 3He were taken into consideration as tritium sources. Purification and leakage models were implemented as the main tritium sinks. Permeation of HT and H2 through pipes, vessels, and heat exchangers was considered as the main tritium transport path. In addition, electrolyzer and isotope exchange models were developed for analyzing hydrogen production systems, including both high-temperature electrolysis and the sulfur-iodine process. The TPAC has unlimited flexibility for system configurations and provides easy drag-and-drop model building through a graphical user interface. Verification of the code has been performed by comparisons with analytical solutions and experimental data based on the Peach Bottom reactor design. Preliminary results calculated with a former tritium analyses code, THYTAN, which was developed in Japan and adopted by the Japan Atomic Energy Agency, were also compared with the TPAC solutions. This report contains descriptions of the basic tritium pathways, theory, a simple user guide, verifications, sensitivity studies, sample cases, and code tutorials. Tritium behaviors in a very high temperature reactor/high temperature steam electrolysis system have been analyzed by the TPAC based on the reference indirect parallel configuration proposed by Oh et al. (2007). This analysis showed that only 0.4% of tritium released from the core is transferred to the product hydrogen.
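
    The core of such a code is a set of coupled species mass balances. The sketch below reduces this to a single box with source, radioactive decay, purification, leakage, and permeation treated as first-order terms; all rate constants and the source strength are hypothetical, while the TPAC itself couples many such balances across components.

```python
import numpy as np

T_HALF = 12.32 * 365.25 * 24 * 3600     # tritium half-life [s]
LAM_DECAY = np.log(2) / T_HALF          # radioactive decay constant [1/s]

def simulate(t_end=3.15e7, dt=3.6e3, S=1e-9,
             k_purif=2e-8, k_leak=5e-10, k_perm=1e-8):
    """Explicit-Euler integration of dN/dt = S - (decay + purification +
    leakage + permeation) * N for a tritium inventory N [mol] in one volume."""
    N = 0.0
    for _ in range(int(t_end / dt)):
        N += dt * (S - (LAM_DECAY + k_purif + k_leak + k_perm) * N)
    return N

print(f"inventory after one year: {simulate():.3e} mol")
```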

  6. Depleted Nanocrystal-Oxide Heterojunctions for High-Sensitivity Infrared Detection

    Science.gov (United States)

    2015-08-28

    Final progress report; approved for public release, distribution unlimited.

  7. Regular meal frequency creates more appropriate insulin sensitivity and lipid profiles compared with irregular meal frequency in healthy lean women.

    Science.gov (United States)

    Farshchi, H R; Taylor, M A; Macdonald, I A

    2004-07-01

    To investigate the impact of irregular meal frequency on circulating lipids, insulin, glucose and uric acid concentrations, which are known cardiovascular risk factors. A randomised crossover dietary intervention study. Nottingham, UK--Healthy free-living women. A total of nine lean healthy women aged 18-42 y recruited via advertisement. A randomised crossover trial with two phases of 14 days each. In Phase 1, subjects consumed their normal diet on either 6 occasions per day (regular) or by following a variable meal frequency (3-9 meals/day, irregular). In Phase 2, subjects followed the alternative meal pattern to that followed in Phase 1, after a 2-week (wash-out) period. Subjects were asked to come to the laboratory after an overnight fast at the start and end of each phase. Blood samples were taken for measurement of circulating glucose, lipids, insulin and uric acid concentrations before and for 3 h after consumption of a high-carbohydrate test meal. Fasting glucose and insulin values were not affected by meal frequency, but peak insulin and AUC of insulin responses to the test meal were higher after the irregular compared to the regular eating patterns. The irregular meal frequency was also associated with higher fasting total and LDL cholesterol. Thus, irregular meal frequency appears to produce a degree of insulin resistance and higher fasting lipid profiles, which may indicate a deleterious effect on these cardiovascular risk factors. Funding: The Ministry of Health and Medical Education, IR Iran.

  8. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman

    2017-11-02

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.
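
    As a baseline for the regularized linear least-squares formulation described above, the sketch below solves a Tikhonov-type problem on a toy 1-D blur. The paper's method additionally injects a norm-bounded perturbation into the model matrix and bootstraps the regularization parameter; this sketch shows only the plain regularized solve, with hypothetical sizes and noise level.

```python
import numpy as np

def tikhonov_deblur(A, y, lam):
    """Regularized least squares: x = argmin ||A x - y||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Toy 1-D blur: a 3-tap moving-average operator applied to a block signal.
n = 100
A = sum(np.eye(n, k=k) for k in (-1, 0, 1)) / 3.0
rng = np.random.default_rng(1)
x_true = np.zeros(n)
x_true[40:60] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=n)

x_hat = tikhonov_deblur(A, y, lam=1e-2)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```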

  9. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman; Ballal, Tarig; Masood, Mudassir; Al-Naffouri, Tareq Y.

    2017-01-01

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.

  10. Impulsivity in spontaneously hypertensive rats: Within-subjects comparison of sensitivity to delay and to amount of reinforcement.

    Science.gov (United States)

    Orduña, Vladimir; Mercado, Eduardo

    2017-06-15

    Previous research has shown that spontaneously hypertensive rats (SHR) display higher levels of impulsive choice behavior, which is accompanied by a higher sensitivity to the delay of reinforcement, and by a normal sensitivity to the amount of reinforcement. Because those results were based on three different samples of subjects, in the present report we evaluated these three processes in the same individuals. SHR and WIS rats were exposed to concurrent-chains schedules in which the terminal links were manipulated to assess impulsivity, sensitivity to delay, and sensitivity to amount. For exploring impulsivity, a terminal link was associated with a small reinforcer (1 pellet) delivered after a short delay (2s) while the other terminal link was associated with a larger reinforcer (4 pellets) delivered after a longer delay (28s). For assessing sensitivity to delay, both alternatives delivered the same amount of reinforcement (1 pellet) and the only difference between them was in the delay before reinforcement delivery (2s vs 28s). For assessing sensitivity to amount, both alternatives were associated with the same delay (15s), but the alternatives differed in the amount of reinforcement (1 vs 4 pellets). In addition to replicating previously observed effects within-subjects, we were interested in analyzing different aspects of the regularity of rats' actions in the choice task. The results confirmed that previous findings were not a consequence of between-group differences: SHR were more impulsive and more sensitive to delay, while their sensitivity to amount was normal. Analyses of response regularity indicated that SHR subjects were more periodic in their responses to levers and in their feeder entries, had a higher number of short-duration bouts of responding, and made a substantially higher number of switches between the alternatives. We discuss the potential implications of these findings for the possible behavioral mechanisms driving the increased sensitivity

  11. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    Science.gov (United States)

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, the L1 regularization can protect the high-frequency information like edges while effectively reduce the image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with restarted strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
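
    A minimal sketch of a restarted nonlinear conjugate gradient on a smoothed-L1 objective is shown below; the smoothing, backtracking line search, and restart period are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ncg_l1(A, y, lam, eps=1e-6, iters=200, restart=20):
    """Fletcher-Reeves NCG on 0.5||Ax-y||^2 + lam * sum(sqrt(x^2 + eps)),
    restarted periodically and whenever the search direction is not descent."""
    def obj(x):
        r = A @ x - y
        return 0.5 * r @ r + lam * np.sum(np.sqrt(x**2 + eps))
    def grad(x):
        return A.T @ (A @ x - y) + lam * x / np.sqrt(x**2 + eps)
    x = np.zeros(A.shape[1])
    g = grad(x)
    d = -g
    for k in range(iters):
        t, f0 = 1.0, obj(x)                       # backtracking line search
        while obj(x + t * d) > f0 + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)          # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
        if (k + 1) % restart == 0 or g @ d >= 0:  # restart strategy
            d = -g
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(60, 120))
x_true = np.zeros(120)
x_true[[5, 40, 90]] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=60)
x = ncg_l1(A, y, lam=0.1)
print("largest entries at:", np.argsort(-np.abs(x))[:3])
```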

  12. A fast and highly sensitive blood culture PCR method for clinical detection of Salmonella enterica serovar Typhi

    Directory of Open Access Journals (Sweden)

    Zhou Liqing

    2010-04-01

    Full Text Available Abstract Background Salmonella Typhi causes an estimated 21 million new cases of typhoid fever and 216,000 deaths every year. Blood culture is currently the gold standard for diagnosis of typhoid fever, but it is time-consuming and takes several days for isolation and identification of causative organisms; it is then too late to initiate proper antibiotic therapy. Serological tests have very low sensitivity and specificity, and no practical value in endemic areas. As early diagnosis of the disease and prompt treatment are essential for optimal management, especially in children, a rapid sensitive detection method for typhoid fever is urgently needed. Although PCR is sensitive and rapid, initial research indicated similar sensitivity to blood culture and lower specificity. We developed a fast and highly sensitive blood culture PCR method for detection of Salmonella Typhi, allowing same-day initiation of treatment after accurate diagnosis of typhoid. Methods An ox bile tryptone soy broth was optimized for blood culture, which allows the complete lysis of blood cells to release intracellular bacteria without inhibiting the growth of Salmonella Typhi. Using the optimised broth, Salmonella Typhi bacteria in artificial blood samples were enriched in blood culture and then detected by a PCR targeting the fliC-d gene of Salmonella Typhi. Results Tests demonstrated that 2.4% ox bile in blood culture not only lyses blood cells completely within 1.5 hours so that the intracellular bacteria can be released, but also has no inhibiting effect on the growth of Salmonella Typhi. Three-hour enrichment of Salmonella Typhi in tryptone soy broth containing 2.4% ox bile could increase the bacterial number from 0.75 CFU per millilitre of blood (similar to clinical typhoid samples) to a level that regular PCR can detect. The whole blood culture PCR assay takes less than 8 hours to complete, rather than several days for conventional blood culture

  13. Design of highly sensitive multichannel bimetallic photonic crystal fiber biosensor

    Science.gov (United States)

    Hameed, Mohamed Farhat O.; Alrayk, Yassmin K. A.; Shaalan, Abdelhamid A.; El Deeb, Walid S.; Obayya, Salah S. A.

    2016-10-01

    A design of a highly sensitive multichannel biosensor based on photonic crystal fiber is proposed and analyzed. The suggested design has a silver layer as a plasmonic material coated by a gold layer to protect the silver from oxidation. The reported sensor is based on detection using the quasi transverse electric (TE) and quasi transverse magnetic (TM) modes, which offers the possibility of multichannel/multianalyte sensing. The numerical results are obtained using a finite element method with perfectly matched layer boundary conditions. The sensor's geometrical parameters are optimized to achieve high sensitivity for the two polarized modes. High refractive-index sensitivity of about 4750 nm/RIU (refractive index unit) and 4300 nm/RIU, with corresponding resolutions of 2.1×10⁻⁵ RIU and 2.33×10⁻⁵ RIU, can be obtained for the quasi TM and quasi TE modes of the proposed sensor, respectively. Further, the reported design can be used as a self-calibration biosensor within an unknown analyte refractive index ranging from 1.33 to 1.35 with high linearity and high accuracy. Moreover, the suggested biosensor has advantages in terms of compactness and better integration of the microfluidics setup, waveguide, and metallic layers into a single structure.
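
    The quoted resolutions follow from the sensitivities under the common assumption of a 0.1 nm wavelength-interrogation resolution:

```latex
R = \frac{\Delta\lambda_{\min}}{S}
  = \frac{0.1\ \mathrm{nm}}{4750\ \mathrm{nm/RIU}} \approx 2.1 \times 10^{-5}\ \mathrm{RIU},
\qquad
  \frac{0.1\ \mathrm{nm}}{4300\ \mathrm{nm/RIU}} \approx 2.33 \times 10^{-5}\ \mathrm{RIU}.
```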

  14. Evaluation of bentonite alteration due to interactions with iron. Sensitivity analyses to identify the important factors for the bentonite alteration

    International Nuclear Information System (INIS)

    Sasamoto, Hiroshi; Wilson, James; Sato, Tsutomu

    2013-01-01

    Performance assessment of geological disposal systems for high-level radioactive waste requires a consideration of long-term systems behaviour. It is possible that the alteration of swelling clay present in bentonite buffers might have an impact on buffer functions. In the present study, iron (as a candidate overpack material)-bentonite (I-B) interactions were evaluated as the main buffer alteration scenario. Existing knowledge on alteration of bentonite during I-B interactions was first reviewed, then the evaluation methodology was developed considering modeling techniques previously used overseas. A conceptual model for smectite alteration during I-B interactions was produced. The following reactions and processes were selected: 1) release of Fe2+ due to overpack corrosion; 2) diffusion of Fe2+ in compacted bentonite; 3) sorption of Fe2+ on smectite edges and ion exchange in interlayers; 4) dissolution of primary phases and formation of alteration products. Sensitivity analyses were performed to identify the most important factors for the alteration of bentonite by I-B interactions. (author)

  15. Serine Protease Zymography: Low-Cost, Rapid, and Highly Sensitive RAMA Casein Zymography.

    Science.gov (United States)

    Yasumitsu, Hidetaro

    2017-01-01

    To detect serine protease activity by zymography, casein and CBB stain have been used as the substrate and the detection procedure, respectively. Conventional casein zymography uses a substrate concentration of 1 mg/mL and conventional CBB staining. Although ordinary casein zymography provides reproducible results, it has several disadvantages, including long processing times and relatively low sensitivity. The improved protocol, RAMA casein zymography, is rapid and highly sensitive: it completes the detection process within 1 h after incubation and increases the sensitivity at least tenfold. In addition to serine proteases, the method also detects metalloprotease 7 (MMP7, Matrilysin) with high sensitivity.

  16. Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms

    Science.gov (United States)

    Mohan, K. Aditya

    2017-10-01

    4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.

  17. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

    Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, the phonetic radicals of many characters have the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions, resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement in Chinese) were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject

  18. Fourier Transform Mass Spectrometry: The Transformation of Modern Environmental Analyses

    Directory of Open Access Journals (Sweden)

    Lucy Lim

    2016-01-01

    Full Text Available Unknown compounds in environmental samples are difficult to identify using standard mass spectrometric methods. Fourier transform mass spectrometry (FTMS) has revolutionized how environmental analyses are performed. With its unsurpassed mass accuracy, high resolution and sensitivity, researchers now have a tool for difficult and complex environmental analyses. Two features of FTMS are responsible for changing the face of how complex analyses are accomplished. First is the ability to quickly and with high mass accuracy determine the presence of unknown chemical residues in samples. For years, the field has been limited by mass spectrometric methods that required knowing in advance which compounds were of interest. Secondly, by utilizing the high resolution capabilities coupled with the low detection limits of FTMS, analysts also could dilute the sample sufficiently to minimize the ionization changes from varied matrices.

  19. Functional dissociation between regularity encoding and deviance detection along the auditory hierarchy.

    Science.gov (United States)

    Aghamolaei, Maryam; Zarnowiec, Katarzyna; Grimm, Sabine; Escera, Carles

    2016-02-01

    Auditory deviance detection based on regularity encoding appears as one of the basic functional properties of the auditory system. It has traditionally been assessed with the mismatch negativity (MMN) long-latency component of the auditory evoked potential (AEP). Recent studies have found earlier correlates of deviance detection based on regularity encoding. They occur in humans in the first 50 ms after sound onset, at the level of the middle-latency response of the AEP, and parallel findings of stimulus-specific adaptation observed in animal studies. However, the functional relationship between these different levels of regularity encoding and deviance detection along the auditory hierarchy has not yet been clarified. Here we addressed this issue by examining deviant-related responses at different levels of the auditory hierarchy to stimulus changes varying in their degree of deviation regarding the spatial location of a repeated standard stimulus. Auditory stimuli were presented randomly from five loudspeakers at azimuthal angles of 0°, 12°, 24°, 36° and 48° during oddball and reversed-oddball conditions. Middle-latency responses and MMN were measured. Our results revealed that middle-latency responses were sensitive to deviance but not the degree of deviation, whereas the MMN amplitude increased as a function of deviance magnitude. These findings indicated that acoustic regularity can be encoded at the level of the middle-latency response but that it takes a higher step in the auditory hierarchy for deviance magnitude to be encoded, thus providing a functional dissociation between regularity encoding and deviance detection along the auditory hierarchy. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Full Text Available Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions including certain dimensions of executive functions (planning, inhibition, shifting, and binding), short-term memory, and retrospective episodic memory to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recall of regular activities only involved planning for both intermediate and older adults, while recall of irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical

  1. J-regular rings with injectivities

    OpenAIRE

    Shen, Liang

    2010-01-01

    A ring $R$ is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.

  2. Regular breakfast consumption is associated with increased IQ in kindergarten children.

    Science.gov (United States)

    Liu, Jianghong; Hwang, Wei-Ting; Dickerman, Barbra; Compher, Charlene

    2013-04-01

    Studies have documented a positive relationship between regular breakfast consumption and cognitive outcomes in youth. However, most of these studies have emphasized specific measures of cognition rather than cognitive performance as a broad construct (e.g., IQ test scores) and have been limited to Western samples of school-age children and adolescents. This study aims to extend the literature on breakfast consumption and cognition by examining these constructs in a sample of Chinese kindergarten-age children. This cross-sectional study consisted of a sample of 1269 children (697 boys and 572 girls) aged 6 years from the Chinese city of Jintan. Cognition was assessed with the Chinese version of the Wechsler preschool and primary scale of intelligence-revised. Breakfast habits were assessed through parental questionnaire. Analyses of variance and linear regression models were used to analyze the association between breakfast habits and IQ. Socioeconomic and parental psychosocial variables related to intelligence were controlled for. Findings showed that children who regularly have breakfast on a near-daily basis had significantly higher full scale, verbal, and performance IQ test scores (all p < 0.05) than children who did not regularly eat breakfast. This relationship persisted for VIQ (verbal IQ) and FIQ (full IQ) even after adjusting for gender, current living location, parental education, parental occupation, and primary child caregiver. Findings may reflect nutritional as well as social benefits of regular breakfast consumption on cognition, and regular breakfast consumption should be encouraged among young children. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Aluminum nano-cantilevers for high sensitivity mass sensors

    DEFF Research Database (Denmark)

    Davis, Zachary James; Boisen, Anja

    2005-01-01

    We have fabricated Al nano-cantilevers using a very simple one mask contact UV lithography technique with lateral dimensions under 500 nm and vertical dimensions of approximately 100 nm. These devices are demonstrated as highly sensitive mass sensors by measuring their dynamic properties. Further...

  4. Regularized forecasting of chaotic dynamical systems

    International Nuclear Information System (INIS)

    Bollt, Erik M.

    2017-01-01

    While local models of dynamical systems have been highly successful in terms of using extensive data sets observing even a chaotic dynamical system to produce useful forecasts, there is a typical problem as follows. Specifically, with the k-nearest neighbors (kNN) method, local observations occur due to recurrences in a chaotic system, and this allows for local models to be built by regression to low-dimensional polynomial approximations of the underlying system, estimating a Taylor series. This has been a popular approach, particularly in the context of scalar data observations represented by time-delay embedding methods. However, such local models can generally allow for spatial discontinuities of forecasts when considered globally, meaning jumps in predictions, because the collected near neighbors vary from point to point. The source of these discontinuities is generally that the set of near neighbors varies discontinuously with respect to the position of the sample point, and so therefore does the model built from the near neighbors. It is possible to utilize local information inferred from near neighbors as usual but at the same time to impose a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. In so doing, we proceed to show how this perspective allows us to impose prior presumed regularity into the model by invoking Tikhonov regularization theory, since this classic perspective of optimization in ill-posed problems naturally balances fitting an objective with some prior assumed form of the result, such as continuity or derivative regularity. This all reduces to matrix manipulations, which we demonstrate on a simple data set, with the implication that it may find much broader context.
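
    The local-modeling baseline that this global regularized view extends can be sketched as ridge-regularized (Tikhonov) local linear regression on the k nearest neighbours of the current delay vector. A minimal Python illustration on the logistic map follows; the embedding length, k, and regularization weight are hypothetical choices.

```python
import numpy as np

def knn_ridge_forecast(series, m=3, k=15, lam=1e-2):
    """One-step forecast from a ridge-regularized local linear model
    fitted to the k nearest neighbours of the latest delay vector."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + m] for i in range(len(series) - m)])
    Y = series[m:]                              # successor of each delay vector
    q = series[-m:]                             # most recent delay vector
    idx = np.argsort(np.linalg.norm(X - q, axis=1))[:k]
    Xk = np.hstack([X[idx], np.ones((k, 1))])   # affine local model
    w = np.linalg.solve(Xk.T @ Xk + lam * np.eye(m + 1), Xk.T @ Y[idx])
    return np.append(q, 1.0) @ w

x = [0.3]                                       # logistic map as a toy chaotic series
for _ in range(500):
    x.append(3.9 * x[-1] * (1.0 - x[-1]))
print("forecast:", knn_ridge_forecast(x), "truth:", 3.9 * x[-1] * (1.0 - x[-1]))
```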

  5. Prototype of high resolution PET using resistive electrode position sensitive CdTe detectors

    International Nuclear Information System (INIS)

    Kikuchi, Yohei; Ishii, Keizo; Matsuyama, Shigeo; Yamazaki, Hiromichi

    2008-01-01

    Downsizing detector elements makes it possible to greatly improve the spatial resolution of positron emission tomography (PET) cameras. From this point of view, semiconductor detectors are preferable. To obtain high resolution, pixel-type or multi-strip semiconductor detectors can be used. However, in this case there is a low packing-ratio problem, because the dead area between detector arrays cannot be neglected. Here, we propose the use of position sensitive semiconductor detectors with resistive electrodes. The CdTe detector is promising as a detector for PET cameras because of its high sensitivity. In this paper, we report the development of a prototype high resolution PET using resistive electrode position sensitive CdTe detectors. We made 1-dimensional position sensitive CdTe detectors experimentally by changing the electrode thickness. We obtained 750 Å as an appropriate electrode thickness for position sensitive detectors, and evaluated the performance of the detector using a collimated 241Am source. A good position resolution of 1.2 mm full width at half maximum (FWHM) was obtained. On the basis of this fundamental development of resistive electrode position sensitive detectors, we constructed a prototype high resolution PET which was a dual-head type and consisted of thirty-two 1-dimensional position sensitive detectors. In conclusion, we obtained high resolutions of 0.75 mm (FWHM) transaxially and 1.5 mm (FWHM) axially. (author)

  6. Low Cost, Low Power, High Sensitivity Magnetometer

    Science.gov (United States)

    2008-12-01

    Fragmentary report text: vector magnetometers include fluxgate, coil-based, and magnetoresistance sensors, and are used to measure small magnetic signals such as those from the brain. The report compares a MEMS flux-concentrator magnetometer with the Brown fluxgate magnetometer currently used in Army multimodal sensor systems (A.S. Edelstein, James E. Burnette, Greg A. Fischer, et al., 2008).

  7. Highly sensitive assay for tyrosine hydroxylase activity by high-performance liquid chromatography.

    Science.gov (United States)

    Nagatsu, T; Oka, K; Kato, T

    1979-07-21

    A highly sensitive assay for tyrosine hydroxylase (TH) activity by high-performance liquid chromatography (HPLC) with amperometric detection was devised based on the rapid isolation of enzymatically formed DOPA by a double-column procedure, the columns fitted together sequentially (the top column of Amberlite CG-50 and the bottom column of aluminium oxide). DOPA was adsorbed on the second aluminium oxide column, then eluted with 0.5 M hydrochloric acid, and assayed by HPLC with amperometric detection. D-Tyrosine was used for the control. alpha-Methyldopa was added to the incubation mixture as an internal standard after incubation. This assay was more sensitive than radioassays and 5 pmol of DOPA formed enzymatically could be measured in the presence of saturating concentrations of tyrosine and 6-methyltetrahydropterin. The TH activity in 2 mg of human putamen could be easily measured, and this method was found to be particularly suitable for the assay of TH activity in a small number of nuclei from animal and human brain.

  8. Sensitive high performance liquid chromatographic method for the ...

    African Journals Online (AJOL)

    A new simple, sensitive, cost-effective and reproducible high performance liquid chromatographic (HPLC) method for the determination of proguanil (PG) and its metabolites, cycloguanil (CG) and 4-chlorophenylbiguanide (4-CPB) in urine and plasma is described. The extraction procedure is a simple three-step process ...

  9. Returning Special Education Students to Regular Classrooms: Externalities on Peers’ Outcomes

    DEFF Research Database (Denmark)

    Rangvid, Beatrice Schindler

    Policy reforms to boost full inclusion and conventional return flows send students with special educational needs (SEN) from segregated settings to regular classrooms. Using full population micro data from Denmark, I investigate whether becoming exposed to returning SEN students affects peers' outcomes. In reform years, exposure has a negative effect on test score gains of moderate size (−0.036 SD), while no significant effect is found in non-reform years. The results are robust to sensitivity checks. The negative exposure effect is significant only for boys, but does not differ by parental education or grade-level…

  10. Identification of moving vehicle forces on bridge structures via moving average Tikhonov regularization

    Science.gov (United States)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin

    2017-08-01

    Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Lots of regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter into and exit a bridge deck, due to low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining with the moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed as a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of DFS-SAV is quantified and introduced for improving the penalty function (||x||_2^2) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed for assessing the accuracy and the feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with a strong robustness. Some related issues, such as selection of moving window length, effect of different penalty functions, and effect of different car speeds, are discussed as well.
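
    The moving-average idea can be illustrated as a closed-form Tikhonov solve in which the penalty acts on the deviation of the identified force history from its own moving average. This is a sketch of the concept rather than the authors' exact formulation; the window length, regularization weight, and toy system matrix are hypothetical.

```python
import numpy as np

def ma_tikhonov(H, b, lam, win=5):
    """Solve min ||H f - b||^2 + lam ||(I - M) f||^2 in closed form,
    where M is a moving-average operator over `win` samples."""
    n = H.shape[1]
    M = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - win // 2), min(n, i + win // 2 + 1)
        M[i, lo:hi] = 1.0 / (hi - lo)
    L = np.eye(n) - M
    return np.linalg.solve(H.T @ H + lam * (L.T @ L), H.T @ b)

rng = np.random.default_rng(3)
H = rng.normal(size=(80, 60))                 # toy "bridge response" matrix
f_true = np.sin(np.linspace(0.0, np.pi, 60))  # smooth moving-force history
b = H @ f_true + 0.05 * rng.normal(size=80)
f_hat = ma_tikhonov(H, b, lam=10.0)
print("relative error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```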

  11. Review of high-sensitivity Radon studies

    Science.gov (United States)

    Wojcik, M.; Zuzel, G.; Simgen, H.

    2017-10-01

    A challenge in many present cutting-edge particle physics experiments is the stringent requirements in terms of radioactive background. In particular, the prevention of Radon, a radioactive noble gas which enters from ambient air and is also released by emanation from the omnipresent progenitor Radium. In this paper we review various high-sensitivity Radon detection techniques and approaches, applied in the experiments looking for rare nuclear processes happening at low energies. They make it possible to identify, quantitatively measure and finally suppress the numerous sources of Radon in the detectors' components and plants.

  12. On accuracy problems for semi-analytical sensitivity analyses

    DEFF Research Database (Denmark)

    Pedersen, P.; Cheng, G.; Rasmussen, John

    1989-01-01

    The semi-analytical method of sensitivity analysis combines ease of implementation with computational efficiency. A major drawback to this method, however, is that severe accuracy problems have recently been reported. A complete error analysis for a beam problem with changing length is carried out… pseudo loads are introduced in order to obtain general load equilibrium with rigid body motions. Such a method would be readily applicable for any element type, whether analytical expressions for the element stiffnesses are available or not. This topic is postponed for a future study….

  13. Recent trends in high spin sensitivity magnetic resonance

    Science.gov (United States)

    Blank, Aharon; Twig, Ygal; Ishay, Yakir

    2017-07-01

    ...new ideas, show how these limiting factors can be mitigated to significantly improve the sensitivity of induction detection. Finally, we outline some directions for possible applications of high-sensitivity induction detection in the field of electron spin resonance.

  14. A wide-bandwidth and high-sensitivity robust microgyroscope

    International Nuclear Information System (INIS)

    Sahin, Korhan; Sahin, Emre; Akin, Tayfun; Alper, Said Emre

    2009-01-01

    This paper reports a microgyroscope design concept that uses a 2 degrees of freedom (DoF) sense mode to achieve a wide bandwidth without sacrificing mechanical and electronic sensitivity, and to obtain robust operation against variations in ambient conditions. The design concept is demonstrated with a tuning fork microgyroscope fabricated with an in-house silicon-on-glass micromachining process. When the fabricated gyroscope is operated with a relatively wide bandwidth of 1 kHz, measurements show a relatively high raw mechanical sensitivity of 131 µV/(°/s). The variation in the amplified mechanical sensitivity (scale factor) of the gyroscope is measured to be less than 0.38% for large ambient pressure variations, such as from 40 to 500 mTorr. The bias instability and angle random walk of the gyroscope are measured to be 131 °/h and 1.15 °/√h, respectively.

  15. Compton imaging with a highly-segmented, position-sensitive HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Steinbach, T.; Hirsch, R.; Reiter, P.; Birkenbach, B.; Bruyneel, B.; Eberth, J.; Hess, H.; Lewandowski, L. [Universitaet zu Koeln, Institut fuer Kernphysik, Koeln (Germany); Gernhaeuser, R.; Maier, L.; Schlarb, M.; Weiler, B.; Winkel, M. [Technische Universitaet Muenchen, Physik Department, Garching (Germany)

    2017-02-15

    A Compton camera based on a highly-segmented high-purity germanium (HPGe) detector and a double-sided silicon-strip detector (DSSD) was developed, tested, and put into operation; the origin of γ radiation was determined successfully. The Compton camera is operated in two different modes. Coincidences from Compton-scattered γ-ray events between the DSSD and the HPGe detector allow for the best angular resolution, while the high-efficiency mode takes advantage of the position sensitivity of the highly-segmented HPGe detector. In this mode the setup is sensitive to the whole 4π solid angle. The interaction-point positions in the 36-fold segmented large-volume HPGe detector are determined by pulse-shape analysis (PSA) of all HPGe detector signals. Imaging algorithms were developed for each mode and successfully implemented. The angular resolution depends sensitively on parameters such as geometry, selected multiplicity and interaction-point distances. Best results were obtained by taking into account the crosstalk properties, the time alignment of the signals and the distance metric for the PSA in both operation modes. An angular resolution between 13.8° and 19.1°, depending on the minimal interaction-point distance, was achieved in the high-efficiency mode at an energy of 1275 keV. In the coincidence mode, an improved angular resolution of 4.6° was determined for the same γ-ray energy. (orig.)

  16. Salt-body Inversion with Minimum Gradient Support and Sobolev Space Norm Regularizations

    KAUST Repository

    Kazei, Vladimir

    2017-05-26

    Full-waveform inversion (FWI) is a technique which solves the ill-posed seismic inverse problem of fitting modelled data to measurements from the field. FWI is capable of providing high-resolution estimates of the model and of handling wave propagation of arbitrary complexity (visco-elastic, anisotropic); yet it often fails to retrieve high-contrast geological structures, such as salt. One reason for this failure is that the updates at early iterations are too smooth to capture the sharp edges of the salt boundary. We compare several regularization approaches which promote sharpness of the edges. Minimum gradient support (MGS) regularization focuses the inversion on blocky models, even more than total variation (TV) does. However, both approaches try to invert undesirable high wavenumbers in the model too early for a model of complex structure. Therefore, we apply the Sobolev space norm as a regularizing term in order to maintain a balance between sharp and smooth updates in FWI. We demonstrate the application of these regularizations on a Marmousi model enriched with a salt body. The model turns out to be too complex in some parts to retrieve its full velocity distribution, yet the salt shape and contrast are retrieved.
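
    As a rough illustration of how the candidate penalties differ, the sketch below evaluates total variation and minimum-gradient-support functionals on a toy velocity model with a salt-like block; the Sobolev-norm term and the FWI machinery itself are omitted, and all function names and constants are ours rather than the paper's.

    ```python
    import numpy as np

    def gradients(m):
        # Forward differences in the two model directions, edge-replicated.
        gx = np.diff(m, axis=0, append=m[-1:, :])
        gz = np.diff(m, axis=1, append=m[:, -1:])
        return gx, gz

    def tv_penalty(m, eps=1e-8):
        # Total variation: l1 norm of the gradient magnitude.
        gx, gz = gradients(m)
        return np.sum(np.sqrt(gx**2 + gz**2 + eps))

    def mgs_penalty(m, beta=1e-2):
        # Minimum gradient support: counts "edges" almost regardless of their
        # magnitude, pushing the inversion toward blocky models.
        gx, gz = gradients(m)
        g2 = gx**2 + gz**2
        return np.sum(g2 / (g2 + beta**2))

    # Toy velocity model (m/s) with a high-contrast salt-like block.
    m = np.full((100, 100), 2000.0)
    m[40:70, 30:80] = 4500.0
    print(tv_penalty(m), mgs_penalty(m))
    ```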

  17. Novel charge sensitive preamplifier without high-value feedback resistor

    International Nuclear Information System (INIS)

    Xi Deming

    1992-01-01

    A novel charge-sensitive preamplifier is introduced. The method of removing the high-value feedback resistor is explained, and the circuit design and analysis are described. A practical circuit and its measured performance are provided.

  18. Sparse regularization for force identification using dictionaries

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary of basis functions is determined. Second, a sparse convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments involving the identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries (Db6 wavelets, Sym4 wavelets and cubic B-spline functions) can also accurately identify both single and double impact forces from highly noisy responses within a sparse representation framework. The discrete cosine functions can likewise successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
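
    The l1 minimization at the heart of this approach can be pictured with ISTA, a simpler relative of SpaRSA from the same family of shrinkage-thresholding solvers. The sketch below recovers a sparse coefficient vector from a random dictionary; the dictionary, data, and parameter values are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam, n_iter=500):
        """Minimize 0.5*||A w - y||^2 + lam*||w||_1 by iterative
        shrinkage-thresholding; w holds the basis-function coefficients."""
        L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
        w = np.zeros(A.shape[1])
        for _ in range(n_iter):
            w = soft_threshold(w - (A.T @ (A @ w - y)) / L, lam / L)
        return w

    # Toy impact identification: two nonzero coefficients in a random dictionary.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(300, 120))
    w_true = np.zeros(120)
    w_true[[10, 55]] = [5.0, -3.0]  # two "impacts"
    y = A @ w_true + 0.01 * rng.normal(size=300)
    w_est = ista(A, y, lam=0.5)
    print(np.nonzero(np.abs(w_est) > 0.1)[0])  # indices of recovered impacts
    ```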

  19. BH3105 type neutron dose equivalent meter of high sensitivity

    International Nuclear Information System (INIS)

    Ji Changsong; Zhang Enshan; Yang Jianfeng; Zhang Hong; Huang Jiling

    1995-10-01

    It is noted that designing a neutron dose meter of high sensitivity is almost impossible within the traditional design framework, the 'absorption net principle'. Based on a newly proposed principle for achieving neutron dose equi-biological-effect adjustment, the 'absorption stick principle', a brand-new neutron dose-equivalent meter with high neutron sensitivity, BH3105, has been developed. Its sensitivity reaches 10 cps/(µSv·h⁻¹), which is 18-40 times higher than that of comparable foreign products and 10⁴ times higher than that of the domestic FJ342 neutron rem-meter. BH3105 has a measurement range from 0.1 µSv/h to 1 Sv/h, which is one to two orders of magnitude wider than that of the other instruments. It also has good properties with respect to gamma resistance, energy response, orientation, etc. (6 tabs., 5 figs.)

  20. Monte carlo calculation of energy-dependent response of high-sensitive neutron monitor, HISENS

    International Nuclear Information System (INIS)

    Imanaka, Tetsuji; Ebisawa, Tohru; Kobayashi, Keiji; Koide, Hiroaki; Seo, Takeshi; Kawano, Shinji

    1988-01-01

    A highly sensitive neutron monitor system, HISENS, has been developed to measure leakage neutrons from nuclear facilities. The counter system of HISENS contains a detector bank which consists of ten cylindrical proportional counters filled with ³He gas at 10 atm and a paraffin moderator mounted in an aluminum case. The detector bank is 56 cm high, 66 cm wide and 10 cm thick. A calibration experiment using an ²⁴¹Am-Be neutron source revealed that the sensitivity of HISENS is about 2000 times that of a typical commercial rem-counter. Since HISENS is designed to have high sensitivity over a wide range of neutron energies, the shape of its energy-dependent response curve cannot be matched to that of the dose equivalent conversion factor. To estimate dose equivalent values from HISENS neutron counts, it is necessary to know the energy and angular characteristics of both HISENS and the neutron field. The area of one side of the detector bank is 3700 cm² and the detection efficiency in the constant region of the response curve is about 30%. Thus, the sensitivity of HISENS in this energy range is 740 cps/(n·cm⁻²·s⁻¹). This value indicates the extremely high sensitivity of HISENS compared with existing highly sensitive neutron monitors. (Nogami, K.)

  1. CONSTRUCTION OF A DIFFERENTIAL ISOTHERMAL CALORIMETER OF HIGH SENSITIVITY AND LOW COST.

    OpenAIRE

    Trinca, RB; Perles, CE; Volpe, PLO

    2009-01-01

    The high cost of high-sensitivity commercial calorimeters may represent an obstacle for many calorimetric research groups. This work describes the construction and calibration of a batch differential heat-conduction calorimeter with sample cell volumes of about 400 µL. The calorimeter was built using two small high-sensitivity square Peltier thermoelectric sensors, and the total cost was estimated to be about...

  2. Higher derivative regularization and chiral anomaly

    International Nuclear Information System (INIS)

    Nagahama, Yoshinori.

    1985-02-01

    A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)

  3. Mathematical Modeling the Geometric Regularity in Proteus Mirabilis Colonies

    Science.gov (United States)

    Zhang, Bin; Jiang, Yi; Minsu Kim Collaboration

    A Proteus mirabilis colony exhibits striking spatiotemporal regularity: concentric ring patterns of alternating high and low bacterial density in space, and temporal periodicity in the repeated cycle of growth and swarming. We present a simple mathematical model to explain the spatiotemporal regularity of P. mirabilis colonies. We study a one-dimensional system. Using a reaction-diffusion model with thresholds in cell density and nutrient concentration, we recreate periodic growth and spread patterns, suggesting that nutrient constraints and cell density regulation may be sufficient to explain the spatiotemporal periodicity of P. mirabilis colonies. We further verify this result using a cell-based model.
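
    A minimal 1-D sketch of the thresholded reaction-diffusion idea follows: cells spread only where both cell density and nutrient exceed their thresholds, which already yields alternating swarm and consolidation phases. All parameter values are invented for illustration and are not taken from the paper.

    ```python
    import numpy as np

    nx, dt, dx = 400, 0.01, 1.0
    D, growth, uptake = 1.0, 0.5, 0.4   # diffusion, growth, nutrient uptake
    b_th, n_th = 0.3, 0.2               # motility thresholds

    b = np.zeros(nx); b[:5] = 1.0       # inoculum at the left edge
    n = np.ones(nx)                     # uniform initial nutrient

    for _ in range(12000):
        lap = np.roll(b, 1) - 2 * b + np.roll(b, -1)  # periodic boundaries
        swarm = (b > b_th) & (n > n_th)               # motility gate
        b = b + dt * (D * swarm * lap / dx**2 + growth * b * n)
        n = np.clip(n - dt * uptake * b * n, 0.0, None)

    print(np.round(b[::40], 2))  # coarse final density profile
    ```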

  4. EIT image reconstruction with four dimensional regularization.

    Science.gov (United States)

    Dai, Tao; Soleimani, Manuchehr; Adler, Andy

    2008-09-01

    Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on the body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although the data are actually highly correlated, especially in high-speed EIT systems. This paper proposes a 4-D EIT image reconstruction method for functional EIT. The new approach directly uses prior models of the temporal correlations among images and the 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector, concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on the temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance in comparison to simpler image models.

  5. A high sensitivity process variation sensor utilizing sub-threshold operation

    OpenAIRE

    Meterelliyoz, Mesut; Song, Peilin; Stellari, Franco; Kulkarni, Jaydeep P.; Roy, Kaushik

    2008-01-01

    In this paper, we propose a novel low-power, bias-free, high-sensitivity process variation sensor for monitoring random variations in the threshold voltage. The proposed sensor design utilizes the exponential current-voltage relationship of sub-threshold operation thereby improving the sensitivity by 2.3X compared to the above-threshold operation. A test-chip containing 128 PMOS and 128 NMOS devices has been fabri...

  6. 75 FR 53966 - Regular Meeting

    Science.gov (United States)

    2010-09-02

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  7. Total variation regularization for fMRI-based prediction of behavior

    Science.gov (United States)

    Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand

    2011-01-01

    While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional MRI (fMRI) data, which provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, and distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioural variables from brain activation patterns, better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the ℓ1 norm of the image gradient, a.k.a. its Total Variation (TV), as the regularizer. We apply this method to fMRI data for the first time, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification. PMID:21317080
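
    A toy version of TV-regularized decoding is sketched below: least-squares prediction of a behavioural variable with a total-variation penalty on the 2-D weight map, fitted by plain (sub)gradient descent rather than the optimization scheme used in the paper. Shapes, step sizes, and the synthetic data are assumptions.

    ```python
    import numpy as np

    def tv_and_grad(w2d, eps=1e-8):
        """Smoothed isotropic TV of a 2-D weight map, and its gradient."""
        gx = np.diff(w2d, axis=0, append=w2d[-1:, :])
        gy = np.diff(w2d, axis=1, append=w2d[:, -1:])
        mag = np.sqrt(gx**2 + gy**2 + eps)
        px, py = gx / mag, gy / mag
        # adjoint of the forward difference: a (negative) divergence
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        return mag.sum(), -div

    def tv_decode(X, y, shape, lam=1.0, lr=1e-4, n_iter=3000):
        """Least-squares decoding with a TV penalty on the weight image."""
        w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            _, g_tv = tv_and_grad(w.reshape(shape))
            w -= lr * (X.T @ (X @ w - y) + lam * g_tv.ravel())
        return w.reshape(shape)

    rng = np.random.default_rng(2)
    shape = (12, 12)
    w_true = np.zeros(shape); w_true[3:7, 4:9] = 1.0  # one "active" blob
    X = rng.normal(size=(150, w_true.size))
    y = X @ w_true.ravel() + 0.1 * rng.normal(size=150)
    print(np.round(tv_decode(X, y, shape), 1))
    ```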

  8. Work and family life of childrearing women workers in Japan: comparison of non-regular employees with short working hours, non-regular employees with long working hours, and regular employees.

    Science.gov (United States)

    Seto, Masako; Morimoto, Kanehisa; Maruyama, Soichiro

    2006-05-01

    This study assessed the working and family life characteristics, and the degree of domestic and work strain, of female workers with different employment statuses and weekly working hours who are rearing children. Participants were mothers of preschoolers in a large Japanese city. We classified the women into three groups according to the hours they worked and their employment conditions: non-regular employees working less than 30 h a week (n=136); non-regular employees working 30 h or more per week (n=141); and regular employees working 30 h or more a week (n=184). We compared the groups on the subjective value of work, financial difficulties, childcare and housework burdens, psychological effects, and strains such as work and family strain, work-family conflict, and work dissatisfaction. Regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than non-regular employees. Non-regular employees were more likely to be facing financial difficulties. In particular, non-regular employees working longer hours tended to encounter socioeconomic difficulties and often lacked support from family and friends. Female workers with children may have different social backgrounds and different stressors according to their working hours and work status.

  9. Sensitivity to apomorphine-induced yawning and hypothermia in rats eating standard or high-fat chow.

    Science.gov (United States)

    Baladi, Michelle G; Thomas, Yvonne M; France, Charles P

    2012-07-01

    Feeding conditions modify sensitivity to indirect- and direct-acting dopamine receptor agonists as well as the development of sensitization to these drugs. This study examined whether feeding condition affects acute sensitivity to apomorphine-induced yawning or changes in sensitivity that occur over repeated drug administration. Quinpirole-induced yawning was also evaluated to see whether sensitization to apomorphine confers cross-sensitization to quinpirole. Drug-induced yawning was measured in different groups of male Sprague Dawley rats (n = 6/group) eating high (34.3%) fat or standard (5.7% fat) chow. Five weeks of eating high-fat chow rendered otherwise drug-naïve rats more sensitive to apomorphine- (0.01-1.0 mg/kg, i.p.) and quinpirole- (0.0032-0.32 mg/kg, i.p.) induced yawning, compared with rats eating standard chow. In other rats, tested weekly with apomorphine, sensitivity to apomorphine-induced yawning increased (sensitization) similarly in rats with free access to standard or high-fat chow; conditioning to the testing environment appeared to contribute to increased yawning in both groups of rats. Food restriction decreased sensitivity to apomorphine-induced yawning across five weekly tests. Rats with free access to standard or high-fat chow and sensitized to apomorphine were cross-sensitized to quinpirole-induced yawning. The hypothermic effects of apomorphine and quinpirole were not different regardless of drug history or feeding condition. Eating high-fat chow or restricting access to food alters sensitivity to direct-acting dopamine receptor agonists (apomorphine, quinpirole), although the relative contribution of drug history and dietary conditions to sensitivity changes appears to vary among agonists.

  10. Improving detection sensitivity for partial discharge monitoring of high voltage equipment

    Science.gov (United States)

    Hao, L.; Lewin, P. L.; Swingler, S. G.

    2008-05-01

    Partial discharge (PD) measurements are an important technique for assessing the health of power apparatus. Previous published research by the authors has shown that an electro-optic system can be used for PD measurement of oil-filled power transformers. A PD signal generated within an oil-filled power transformer may reach a winding and then travel along the winding to the bushing core bar. The bushing, acting like a capacitor, can transfer the high frequency components of the partial discharge signal to its earthed tap point. Therefore, an effective PD current measurement can be implemented at the bushing tap by using a radio frequency current transducer around the bushing-tap earth connection. In addition, the use of an optical transmission technique not only improves the electrical noise immunity and provides the possibility of remote measurement but also realizes electrical isolation and enhances safety for operators. However, the bushing core bar can act as an aerial and in addition noise induced by the electro-optic modulation system may influence overall measurement sensitivity. This paper reports on a machine learning technique, namely the use of a support vector machine (SVM), to improve the detection sensitivity of the system. Comparison between the signal extraction performances of a passive hardware filter and the SVM technique has been assessed. The results obtained from the laboratory-based experiment have been analysed and indicate that the SVM approach provides better performance than the passive hardware filter and it can reliably detect discharge signals with apparent charge greater than 30 pC.
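
    The classification step can be pictured with a generic scikit-learn SVM trained on synthetic two-class features; the paper's actual feature extraction from bushing-tap current waveforms is not specified in enough detail here to reproduce, so the features below are hypothetical placeholders (e.g., peak amplitude, energy, dominant frequency of a measurement window).

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical training data: rows are feature vectors from current-signal
    # windows; label 1 marks windows containing a genuine PD pulse.
    rng = np.random.default_rng(3)
    noise_windows = rng.normal(0.0, 1.0, size=(200, 3))
    pd_windows = rng.normal(2.5, 1.0, size=(200, 3))  # PD features sit apart
    X = np.vstack([noise_windows, pd_windows])
    y = np.array([0] * 200 + [1] * 200)

    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
    print(clf.score(X, y))  # training accuracy on the synthetic set
    ```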

  11. Improving detection sensitivity for partial discharge monitoring of high voltage equipment

    International Nuclear Information System (INIS)

    Hao, L; Lewin, P L; Swingler, S G

    2008-01-01

    Partial discharge (PD) measurements are an important technique for assessing the health of power apparatus. Previous published research by the authors has shown that an electro-optic system can be used for PD measurement of oil-filled power transformers. A PD signal generated within an oil-filled power transformer may reach a winding and then travel along the winding to the bushing core bar. The bushing, acting like a capacitor, can transfer the high frequency components of the partial discharge signal to its earthed tap point. Therefore, an effective PD current measurement can be implemented at the bushing tap by using a radio frequency current transducer around the bushing-tap earth connection. In addition, the use of an optical transmission technique not only improves the electrical noise immunity and provides the possibility of remote measurement but also realizes electrical isolation and enhances safety for operators. However, the bushing core bar can act as an aerial and in addition noise induced by the electro-optic modulation system may influence overall measurement sensitivity. This paper reports on a machine learning technique, namely the use of a support vector machine (SVM), to improve the detection sensitivity of the system. Comparison between the signal extraction performances of a passive hardware filter and the SVM technique has been assessed. The results obtained from the laboratory-based experiment have been analysed and indicate that the SVM approach provides better performance than the passive hardware filter and it can reliably detect discharge signals with apparent charge greater than 30 pC

  12. RES: Regularized Stochastic BFGS Algorithm

    Science.gov (United States)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
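
    A loose sketch of the idea, under our own simplifications: descend along a regularized curvature estimate (B + delta*I)^(-1) g built from stochastic gradient differences, with a damped BFGS update to keep B positive definite. This is not the paper's exact algorithm, and all constants are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.normal(size=(60, 20))
    Q = A.T @ A / 60 + 0.1 * np.eye(20)  # strongly convex quadratic objective
    w_star = rng.normal(size=20)

    def stoch_grad(w):
        # Gradient of 0.5 (w - w*)' Q (w - w*), plus sampling noise.
        return Q @ (w - w_star) + 0.05 * rng.normal(size=20)

    def res_sketch(w, n_iter=2000, delta=0.1, damp=1e-3):
        n = w.size
        B = np.eye(n)
        g = stoch_grad(w)
        for t in range(1, n_iter + 1):
            d = np.linalg.solve(B + delta * np.eye(n), g)  # regularized step
            s = -(0.5 / t) * d                             # diminishing step
            w = w + s
            g_new = stoch_grad(w)
            yv = g_new - g
            sy = s @ yv
            if sy > damp:                                  # damped BFGS update
                Bs = B @ s
                B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(yv, yv) / sy
            g = g_new
        return w

    w_hat = res_sketch(np.zeros(20))
    print(float(np.linalg.norm(w_hat - w_star)))  # distance to the optimum
    ```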

  13. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least-squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method over regularization methods. For the presented test case, the incremental projection method uses seven times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
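
    The contrast with penalty-based regularization can be illustrated by projected gradient descent: each iterate is mapped back onto a convex set of admissible solutions instead of adding a regularization term to the objective. A Euclidean norm ball stands in for the paper's subspace of regularized candidates; all names and constants are ours.

    ```python
    import numpy as np

    def project_ball(x, tau):
        """Euclidean projection onto the convex set {x : ||x|| <= tau}."""
        nrm = np.linalg.norm(x)
        return x if nrm <= tau else x * (tau / nrm)

    def projected_gradient(A, y, tau, n_iter=500):
        lr = 1.0 / np.linalg.norm(A, 2) ** 2  # step from the Lipschitz bound
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            # gradient step on the data term, then projection in place of a
            # regularization term
            x = project_ball(x - lr * (A.T @ (A @ x - y)), tau)
        return x

    rng = np.random.default_rng(5)
    A = rng.normal(size=(80, 40))
    x_true = rng.normal(size=40)
    y = A @ x_true + 0.05 * rng.normal(size=80)
    x_hat = projected_gradient(A, y, tau=np.linalg.norm(x_true))
    print(float(np.linalg.norm(x_hat - x_true)))
    ```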

  14. Geometric regularizations and dual conifold transitions

    International Nuclear Information System (INIS)

    Landsteiner, Karl; Lazaroiu, Calin I.

    2003-01-01

    We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)

  15. Eating high-fat chow enhances sensitization to the effects of methamphetamine on locomotion in rats.

    Science.gov (United States)

    McGuire, Blaine A; Baladi, Michelle G; France, Charles P

    2011-05-11

    Eating high-fat chow can modify the effects of drugs acting directly or indirectly on dopamine systems, and repeated intermittent drug administration can markedly increase sensitivity (i.e., sensitization) to the behavioral effects of indirect-acting dopamine receptor agonists (e.g., methamphetamine). This study examined whether eating high-fat chow alters the sensitivity of male Sprague Dawley rats to the locomotor stimulating effects of acute or repeated administration of methamphetamine. The acute effects of methamphetamine on locomotion were not different between rats (n=6/group) eating high-fat or standard chow for 1 or 4 weeks. Sensitivity to the effects of methamphetamine (0.1-10 mg/kg, i.p.) increased progressively across 4 once-per-week tests; this sensitization developed more rapidly and to a greater extent in rats eating high-fat chow as compared with rats eating standard chow. Thus, while eating high-fat chow does not appear to alter sensitivity of rats to acutely administered methamphetamine, it significantly increases the sensitization that develops to repeated intermittent administration of methamphetamine. These data suggest that eating certain foods influences the development of sensitization to drugs acting on dopamine systems. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Gender and age inequalities in regular sports participation: a cross-national study of 25 European countries.

    Science.gov (United States)

    Van Tuyckom, Charlotte; Scheerder, Jeroen; Bracke, Piet

    2010-08-01

    This article provides a unique opportunity to compare gender inequalities in sports participation across Europe, and the extent to which they vary by age, using large cross-sections of the population. The Eurobarometer Survey 62.0 (carried out in 2004 at the request of the European Commission and covering the adult population of 25 European member states, N = 23,909) was used to analyse differences in regular sports participation by gender and age in the different countries. For the majority of countries, the rate of regular sporting activity was less than 40%. Additionally, binary logistic regression analyses identified significant gender differences in sports participation in 12 countries. In Belgium, France, Greece, Latvia, Lithuania, Slovakia, Spain, and the UK, men were more likely to report being regularly active in sports than women, whereas in Denmark, Finland, Sweden, and the Netherlands the opposite was true. Moreover, the extent to which these gender inequalities differ by age varies considerably across countries. The results imply that: (i) in some European countries more effort must be made to promote the original goals of the Sport for All Charter; and (ii) achieving more female participation in sports will require different policy responses in the diverse European member states.

  17. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.

  18. High sensitivity amplifier/discriminator for PWC's

    International Nuclear Information System (INIS)

    Hansen, S.

    1983-01-01

    The facility support group at Fermilab is designing and building a general-purpose beam chamber for use in several locations at the laboratory. This PWC has 128 wires per plane spaced 1 mm apart. An initial production of 25 signal planes is anticipated. In proportional chambers, the size of the signal depends exponentially on the charge stored per unit length along the anode wire. As the wire spacing decreases, the capacitance per unit length decreases, thereby requiring increased applied voltage to restore the necessary charge per unit length. In practical terms, this phenomenon is responsible for the difficulty of constructing chambers with less than 2 mm wire spacing. Chambers with 1 mm spacing, therefore, are frequently operated very near their breakdown point, and/or a high-gain gas containing organic compounds, such as magic gas, is used. This argon/iso-butane mixture has three drawbacks: it is explosive when exposed to air, it leaves a residue on the wires after extended use, and it is costly. An amplifier with higher sensitivity would reduce the problems associated with operating chambers with small wire spacings and allow them to be run a safe margin below their breakdown voltage even with an inorganic gas mixture such as argon/CO2, thus eliminating the need to use magic gas. Described here is a low-cost amplifier with a usable threshold of less than 0.5 μA. Data on the performance of this amplifier/discriminator in operation on a prototype beam chamber are given. These data show the advantages of the high sensitivity of this design.

  19. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...

  20. A highly sensitive and specific assay for vertebrate collagenase

    International Nuclear Information System (INIS)

    Sodek, J.; Hurum, S.; Feng, J.

    1981-01-01

    A highly sensitive and specific assay for vertebrate collagenase has been developed using a [¹⁴C]-labeled collagen substrate and a combination of SDS-PAGE (sodium dodecyl sulfate-polyacrylamide gel electrophoresis) and fluorography to identify and quantitate the digestion products. The assay was sufficiently sensitive to permit the detection and quantitation of collagenase activity in 0.1 μl of gingival sulcal fluid, and in samples of cell culture medium without prior concentration. The assay has also been used to detect the presence of inhibitors of collagenolytic enzymes in various cell culture fluids. (author)

  1. Unsupervised seismic facies analysis with spatial constraints using regularized fuzzy c-means

    Science.gov (United States)

    Song, Chengyun; Liu, Zhining; Cai, Hanpeng; Wang, Yaojun; Li, Xingming; Hu, Guangmin

    2017-12-01

    Seismic facies analysis techniques combine classification algorithms and seismic attributes to generate a map that describes the main reservoir heterogeneities. However, most current classification algorithms treat the seismic attributes as isolated data points regardless of their spatial locations, and the resulting map is generally sensitive to noise. In this paper, a regularized fuzzy c-means (RegFCM) algorithm is used for unsupervised seismic facies analysis. Owing to the regularized term of the RegFCM algorithm, data whose adjacent locations belong to the same class play a more important role in the iterative process than other data. Therefore, the method can reduce the effect of seismic data noise in discontinuous regions. Synthetic data with different signal-to-noise ratios are used to demonstrate the noise tolerance of the RegFCM algorithm. Meanwhile, the fuzzy factor, the neighbourhood window size and the regularization weight are tested over various values, to provide a reference for how to set these parameters. The new approach is also applied to a real seismic data set from the F3 block of the Netherlands. The results show improved spatial continuity, with clear facies boundaries and channel morphology, which reveals that the method is an effective seismic facies analysis tool.
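
    A crude stand-in for the method is sketched below: vanilla fuzzy c-means with memberships blended toward their spatial neighbours after each update, approximating the effect of the regularized term. This is not the paper's exact update rule; the cluster count, fuzzifier, and blending weight are assumptions.

    ```python
    import numpy as np

    def fcm_spatial(X, k=3, m=2.0, alpha=0.5, n_iter=50, seed=0):
        """Fuzzy c-means with crude spatial smoothing of the memberships.
        Samples are assumed ordered along a spatial (trace) axis."""
        rng = np.random.default_rng(seed)
        U = rng.dirichlet(np.ones(k), size=len(X))   # fuzzy memberships
        for _ in range(n_iter):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
            U = 1.0 / d ** (2 / (m - 1))
            U /= U.sum(axis=1, keepdims=True)        # standard FCM update
            # neighbourhood blending: spatially adjacent samples pull each
            # other toward the same facies
            nbr = (np.roll(U, 1, axis=0) + np.roll(U, -1, axis=0)) / 2
            U = (1 - alpha) * U + alpha * nbr
            U /= U.sum(axis=1, keepdims=True)
        return U.argmax(axis=1), centers

    # Toy "seismic attribute" samples: three facies in spatial order.
    rng = np.random.default_rng(6)
    X = np.vstack([rng.normal(mu, 0.3, size=(50, 2)) for mu in (0, 2, 4)])
    labels, _ = fcm_spatial(X)
    print(labels)
    ```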

  2. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  3. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  4. Structural analysis and biological activity of a highly regular glycosaminoglycan from Achatina fulica.

    Science.gov (United States)

    Liu, Jie; Zhou, Lutan; He, Zhicheng; Gao, Na; Shang, Feineng; Xu, Jianping; Li, Zi; Yang, Zengming; Wu, Mingyi; Zhao, Jinhua

    2018-02-01

    Edible snails have been widely used as a health food and medicine in many countries. A unique glycosaminoglycan (AF-GAG) was purified from Achatina fulica. Its structure was analyzed and characterized by chemical and instrumental methods, such as Fourier transform infrared spectroscopy, monosaccharide composition analysis, and 1D/2D nuclear magnetic resonance spectroscopy. Chemical composition analysis indicated that AF-GAG is composed of iduronic acid (IdoA) and N-acetyl-glucosamine (GlcNAc) and that its average molecular weight is 118 kDa. Structural analysis clarified that the uronic acid unit in the glycosaminoglycan (GAG) is fully epimerized and that the sequence of AF-GAG is →4)-α-GlcNAc (1→4)-α-IdoA2S (1→. Although its structure, with a uniform repeating disaccharide, is similar to those of heparin and heparan sulfate, this GAG is structurally highly regular and homogeneous. Anticoagulant activity assays indicated that AF-GAG exhibits no anticoagulant activity, but given its structural characteristics, other bioactivities such as heparanase inhibition may be worthy of further study. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Regularity of mitosis in different varieties of winter bread wheat under the action of herbicides

    Directory of Open Access Journals (Sweden)

    Tatyana Eugenivna KOPYTCHUK

    2012-05-01

    Full Text Available The influence of the most widespread herbicides used on winter wheat in Ukraine was studied by the anaphase test. Treatment with herbicides reduced seed germination and disturbed the regularity of mitosis in all varieties of wheat. The mitotic disturbances included chromosomal aberrations and dysfunctions of the cell cytoskeleton that occurred after herbicide treatment. Varietal differences in sensitivity to herbicides were found among the investigated wheats. The variety most resistant to herbicides was Fantasya Odesskaya, and the most sensitive was Nikoniya, while the herbicide most harmful to wheat was Napalm.

  6. Resting serum concentration of high-sensitivity C-reactive protein ...

    African Journals Online (AJOL)

    Resting serum concentration of high-sensitivity C-reactive protein (hs-CRP) in sportsmen and untrained male adults. F.A. Niyi-Odumosu, O. A. Bello, S.A. Biliaminu, B.V. Owoyele, T.O. Abu, O.L. Dominic ...

  7. Tessellating the Sphere with Regular Polygons

    Science.gov (United States)

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.

  8. Analyse du bruit dans un préamplificateur de charge en... [Noise analysis in a charge preamplifier in...]

    African Journals Online (AJOL)

    PR BOKO

    afriquescience.info. Daniel TEKO et al. Noise analysis in a preamplifier of ...... Nuclear Sciences and Techniques, Vol. 21 (5) (2010) 312-315. [5] M. Weng, et al. A high-speed low-noise CMOS 16-channel charge-sensitive preamplifier ASIC for.

  9. ZnO nanorod biosensor for highly sensitive detection of specific protein binding

    International Nuclear Information System (INIS)

    Kim, Jin Suk; Park, Won Il; Lee, Chul Ho; Yi, Gyu Chul

    2006-01-01

    We report on the fabrication of electrical biosensors based on functionalized ZnO nanorod surfaces with biotin for highly sensitive detection of biological molecules. Due to the clean interface and easy surface modification, the ZnO nanorod sensors can easily detect streptavidin binding down to a concentration of 25 nM, which is more sensitive than previously reported one-dimensional (1D) nanostructure electrical biosensors. In addition, the unique device structure with a micrometer-scale hole at the center of the ZnO nanorod's conducting channel reduces the leakage current from the aqueous solution, hence enhancing device sensitivity. Moreover, ZnO nanorod field-effect-transistor (FET) sensors may open up opportunities to create many other oxide nanorod electrical sensors for highly sensitive and selective real-time detection of a wide variety of biomolecules.

  10. High Throughput Measurement of Locomotor Sensitization to Volatilized Cocaine in Drosophila melanogaster.

    Science.gov (United States)

    Filošević, Ana; Al-Samarai, Sabina; Andretić Waldowski, Rozi

    2018-01-01

    Drosophila melanogaster can be used to identify genes with novel functional roles in the neuronal plasticity induced by repeated consumption of addictive drugs. Behavioral sensitization is a relatively simple behavioral output of the plastic changes that occur in the brain after repeated exposure to drugs of abuse. The development of screening procedures for genes that control behavioral sensitization has stalled due to a lack of high-throughput behavioral tests that can be used in genetically tractable organisms such as Drosophila. We have developed a new behavioral test, FlyBong, which combines delivery of volatilized cocaine (vCOC) to individually housed flies with objective quantification of their locomotor activity. There are two main advantages of FlyBong: it is high-throughput, and it allows for comparisons of the locomotor activity of individual flies before and after single or multiple exposures. At the population level, exposure to vCOC leads to a transient and concentration-dependent increase in locomotor activity, representing sensitivity to an acute dose. A second exposure leads to a further increase in locomotion, representing locomotor sensitization. We validate FlyBong by showing that locomotor sensitization at either the population or individual level is absent in mutants for the circadian genes period (per), Clock (Clk), and cycle (cyc). The locomotor sensitization that is present in timeless (tim) and pigment dispersing factor (pdf) mutant flies is in large part not cocaine specific, but derives from increased sensitivity to warm air. Circadian genes are not only an integral part of the neural mechanism required for the development of locomotor sensitization; they also modulate the intensity of locomotor sensitization as a function of the time of day. The motor-activating effects of cocaine are sexually dimorphic and require a functional dopaminergic transporter. FlyBong is a new and improved method for inducing and measuring locomotor sensitization.

  11. High Throughput Measurement of Locomotor Sensitization to Volatilized Cocaine in Drosophila melanogaster

    Directory of Open Access Journals (Sweden)

    Ana Filošević

    2018-02-01

    Full Text Available Drosophila melanogaster can be used to identify genes with novel functional roles in neuronal plasticity induced by repeated consumption of addictive drugs. Behavioral sensitization is a relatively simple behavioral output of plastic changes that occur in the brain after repeated exposures to drugs of abuse. The development of screening procedures for genes that control behavioral sensitization has stalled due to a lack of high-throughput behavioral tests that can be used in genetically tractable organisms, such as Drosophila. We have developed a new behavioral test, FlyBong, which combines delivery of volatilized cocaine (vCOC) to individually housed flies with objective quantification of their locomotor activity. There are two main advantages of FlyBong: it is high-throughput and it allows for comparisons of locomotor activity of individual flies before and after single or multiple exposures. At the population level, exposure to vCOC leads to transient and concentration-dependent increase in locomotor activity, representing sensitivity to an acute dose. A second exposure leads to further increase in locomotion, representing locomotor sensitization. We validate FlyBong by showing that locomotor sensitization at either the population or individual level is absent in the mutants for circadian genes period (per), Clock (Clk), and cycle (cyc). The locomotor sensitization that is present in timeless (tim) and pigment dispersing factor (pdf) mutant flies is in large part not cocaine specific, but derived from increased sensitivity to warm air. Circadian genes are not only integral part of the neural mechanism that is required for development of locomotor sensitization, but in addition, they modulate the intensity of locomotor sensitization as a function of the time of day. Motor-activating effects of cocaine are sexually dimorphic and require a functional dopaminergic transporter. FlyBong is a new and improved method for inducing and measuring locomotor...

  12. Asymptotic performance of regularized quadratic discriminant analysis based classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-12-13

    This paper carries out a large-dimensional analysis of the standard regularized quadratic discriminant analysis (QDA) classifier designed on the assumption that data arise from a Gaussian mixture model. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that depends only on the covariances and means associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized QDA and can be used to determine the optimal regularization parameter that minimizes the misclassification error probability. Despite being valid only for Gaussian data, our theoretical findings are shown to yield high accuracy in predicting the performance achieved with real data sets drawn from popular databases, thereby making an interesting connection between theory and practice.
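
    One common form of the regularization analysed here shrinks each class covariance toward the identity before forming the quadratic discriminant. The sketch below implements that variant on synthetic Gaussian data; the shrinkage form, the parameter gamma, and all constants are our assumptions rather than the paper's exact estimator.

    ```python
    import numpy as np

    def fit_rqda(X, y, gamma=0.1):
        """Regularized QDA: Sigma_c <- (1 - gamma) * Sigma_c + gamma * I."""
        params = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            S = (1 - gamma) * np.cov(Xc, rowvar=False) + gamma * np.eye(X.shape[1])
            params[c] = (mu, np.linalg.inv(S), np.linalg.slogdet(S)[1],
                         np.log(len(Xc) / len(X)))
        return params

    def predict_rqda(params, X):
        scores = []
        for mu, Sinv, logdet, logpi in params.values():
            d = X - mu
            # quadratic discriminant score, up to an additive constant
            scores.append(-0.5 * np.einsum("ij,jk,ik->i", d, Sinv, d)
                          - 0.5 * logdet + logpi)
        return np.array(list(params))[np.argmax(scores, axis=0)]

    rng = np.random.default_rng(7)
    X = np.vstack([rng.normal(0, 1, size=(100, 5)), rng.normal(1, 2, size=(100, 5))])
    y = np.array([0] * 100 + [1] * 100)
    model = fit_rqda(X, y, gamma=0.2)
    print((predict_rqda(model, X) == y).mean())  # training accuracy
    ```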

  13. The Effect of Integration Policies on the Time until Regular Employment of Newly Arrived Immigrants:

    DEFF Research Database (Denmark)

    Clausen, Jens; Heinesen, Eskil; Hummelgaard, Hans

    We analyse the effect of active labour-market programmes on the hazard rate into regular employment for newly arrived immigrants using the timing-of-events duration model. We take account of language course participation and progression in destination-country language skills. We use rich...... administrative data from Denmark. We find substantial lock-in effects of participation in active labour-market programmes. Post-programme effects on the hazard rate into regular employment are significantly positive for wage subsidy programmes, but not for other types of programmes. For language course...... participants, improvement in language proficiency has significant and substantial positive effects on the hazard rate into employment....

  14. Accretion onto some well-known regular black holes

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2016-01-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  15. Accretion onto some well-known regular black holes

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)

    2016-03-15

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  16. Accretion onto some well-known regular black holes

    Science.gov (United States)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  17. The Effect of a Diet Moderately High in Protein and Fiber on Insulin Sensitivity Measured Using the Dynamic Insulin Sensitivity and Secretion Test (DISST)

    Directory of Open Access Journals (Sweden)

    Lisa Te Morenga

    2017-11-01

    Full Text Available Evidence shows that weight loss improves insulin sensitivity, but few studies have examined the effect of macronutrient composition, independently of weight loss, on direct measures of insulin sensitivity. We randomised 89 overweight or obese women to either a standard diet (StdD), intended to be low in fat and relatively high in carbohydrate (n = 42), or to a relatively high-protein (up to 30% of energy), relatively high-fibre (>30 g/day) diet (HPHFib) (n = 47) for 10 weeks. Advice regarding strict adherence to energy intake goals was not given. Insulin sensitivity and secretion were assessed by a novel method, the Dynamic Insulin Sensitivity and Secretion Test (DISST). Although there were significant improvements in body composition and most cardiometabolic risk factors on HPHFib, insulin sensitivity was reduced by 19.3% (95% CI: 31.8%, 4.5%; p = 0.013) in comparison with StdD. We conclude that the reduction in insulin sensitivity after a diet relatively high in both protein and fibre, despite cardiometabolic improvements, suggests that insulin sensitivity may reflect metabolic adaptations to dietary composition for the maintenance of glucose homeostasis, rather than impaired metabolism.

  18. Development and initial evaluation of a spectral microdensitometer for analysing radiochromic films

    International Nuclear Information System (INIS)

    Lee, K Y; Fung, K L; Kwok, C S

    2004-01-01

    Radiation dose deposited on a radiochromic film is considered as a dose image. A precise image extraction system with commensurate capabilities is required to measure the transmittance of the image and translate it to radiation dose. This paper describes the development of a spectral microdensitometer which has been designed to achieve this goal under the conditions of (a) the linearity and sensitivity of the dose response curve of the radiochromic film being highly dependent on the wavelength of the analysing light, and (b) the inherent high spatial resolution of the film. The microdensitometer consists of a monochromator which provides an analysing light of variable wavelength, a film tray on a high-precision scanning stage, a transmission microscope coupled to a thermoelectrically cooled CCD camera, a microcomputer and corresponding interfaces. The measurement of the transmittance of the radiochromic film is made at the two absorption peaks with maximum sensitivities. The high spatial resolution of the instrument, of the order of micrometres, is achieved through the use of the microscope combined with a measure-and-step technique to cover the whole film. The performance of the instrument in regard to the positional accuracy, system reproducibility and dual-peak film calibration was evaluated. The results show that the instrument fulfils the design objective of providing a precise image extraction system for radiochromic films with micrometre spatial resolution and sensitive dose response

  19. Development and initial evaluation of a spectral microdensitometer for analysing radiochromic films

    Energy Technology Data Exchange (ETDEWEB)

    Lee, K Y [Department of Optometry and Radiography, Hong Kong Polytechnic University, Hong Kong (China); Fung, K L [Department of Optometry and Radiography, Hong Kong Polytechnic University, Hong Kong (China); Kwok, C S [Department of Radioimmunotherapy, City of Hope National Medical Centre, Duarte, CA 91010 (United States)

    2004-11-21

    Radiation dose deposited on a radiochromic film is considered as a dose image. A precise image extraction system with commensurate capabilities is required to measure the transmittance of the image and translate it to radiation dose. This paper describes the development of a spectral microdensitometer which has been designed to achieve this goal under the conditions of (a) the linearity and sensitivity of the dose response curve of the radiochromic film being highly dependent on the wavelength of the analysing light, and (b) the inherent high spatial resolution of the film. The microdensitometer consists of a monochromator which provides an analysing light of variable wavelength, a film tray on a high-precision scanning stage, a transmission microscope coupled to a thermoelectrically cooled CCD camera, a microcomputer and corresponding interfaces. The measurement of the transmittance of the radiochromic film is made at the two absorption peaks with maximum sensitivities. The high spatial resolution of the instrument, of the order of micrometres, is achieved through the use of the microscope combined with a measure-and-step technique to cover the whole film. The performance of the instrument in regard to the positional accuracy, system reproducibility and dual-peak film calibration was evaluated. The results show that the instrument fulfils the design objective of providing a precise image extraction system for radiochromic films with micrometre spatial resolution and sensitive dose response.

  20. High Sensitivity TSS Prediction: Estimates of Locations Where TSS Cannot Occur

    KAUST Repository

    Schaefer, Ulf; Kodzius, Rimantas; Kai, Chikatoshi; Kawai, Jun; Carninci, Piero; Hayashizaki, Yoshihide; Bajic, Vladimir B.

    2013-01-01

    from mouse and human genomes, we developed a methodology that allows us, by performing computational TSS prediction with very high sensitivity, to annotate, with a high accuracy in a strand specific manner, locations of mammalian genomes that are highly

  1. Routes to chaos in continuous mechanical systems: Part 2. Modelling transitions from regular to chaotic dynamics

    International Nuclear Information System (INIS)

    Krysko, A.V.; Awrejcewicz, J.; Papkova, I.V.; Krysko, V.A.

    2012-01-01

    In the second part of the paper, both classical and novel scenarios of transition from regular to chaotic dynamics of dissipative continuous mechanical systems are studied. A detailed analysis allowed us to detect the already known classical scenarios of transition from periodic to chaotic dynamics, in particular the Feigenbaum scenario. The Feigenbaum constant was computed for all continuous mechanical objects studied in the first part of the paper. In addition, we illustrate and discuss different and novel scenarios of transition of the analysed systems from regular to chaotic dynamics, and we show that the type of scenario depends essentially on the excitation parameters.
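
    For reference, the Feigenbaum constant quoted in such analyses is estimated from the parameter values a_n at which successive period doublings occur (notation ours):

        \delta = \lim_{n \to \infty} \frac{a_n - a_{n-1}}{a_{n+1} - a_n} \approx 4.669201\ldots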

  2. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Science.gov (United States)

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
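
    The output variance decomposition with quasi-random sampling mentioned here is commonly implemented with a Saltelli-type estimator of first-order Sobol' indices. A self-contained Python sketch, not the authors' code; the model signature (unit-hypercube samples in, scalar outputs out), the sample size and all names are assumptions:

        import numpy as np
        from scipy.stats import qmc

        def first_order_sobol(model, n_inputs, n=4096, seed=0):
            # Estimate S_i = V_i / V(Y) from two quasi-random sample blocks A, B
            # and "pick-and-freeze" blocks AB_i (Saltelli 2010 estimator).
            base = qmc.Sobol(d=2 * n_inputs, seed=seed).random(n)
            A, B = base[:, :n_inputs], base[:, n_inputs:]
            yA, yB = model(A), model(B)
            total_var = np.var(np.concatenate([yA, yB]), ddof=1)
            S = np.empty(n_inputs)
            for i in range(n_inputs):
                ABi = A.copy()
                ABi[:, i] = B[:, i]            # freeze all inputs except input i
                S[i] = np.mean(yB * (model(ABi) - yA)) / total_var
            return S

        # Example with a toy stand-in for the ABM:
        toy = lambda X: X[:, 0] + 0.1 * X[:, 1] ** 2
        print(first_order_sobol(toy, n_inputs=3))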

  3. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Directory of Open Access Journals (Sweden)

    Arika Ligmann-Zielinska

    Full Text Available Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  4. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  5. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces
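
    The property at the heart of this survey can be stated compactly (notation ours): a set-valued map F from X to Y is metrically regular at \bar{x} for \bar{y} \in F(\bar{x}) with modulus \kappa if, for all (x, y) near (\bar{x}, \bar{y}),

        d\bigl(x, F^{-1}(y)\bigr) \le \kappa\, d\bigl(y, F(x)\bigr),

    which recovers the Lyusternik and Graves theorems when F is a smooth single-valued map with surjective derivative.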

  6. Comparison of Subcutaneous Regular Insulin and Lispro Insulin in Diabetics Receiving Continuous Nutrition

    Science.gov (United States)

    Stull, Mamie C.; Strilka, Richard J.; Clemens, Michael S.; Armen, Scott B.

    2015-01-01

    Background: Optimal management of non–critically ill patients with diabetes maintained on continuous enteral nutrition (CEN) is poorly defined. Subcutaneous (SQ) lispro and SQ regular insulin were compared in a simulated type 1 and type 2 diabetic patient receiving CEN. Method: A glucose-insulin feedback mathematical model was employed to simulate type 1 and type 2 diabetic patients on CEN. Each patient received 25 SQ injections of regular insulin or insulin lispro, ranging from 0-6 U. Primary endpoints were the change in mean glucose concentration (MGC) and change in glucose variability (GV); hypoglycemic episodes were also reported. The model was first validated against patient data. Results: Both SQ insulin preparations linearly decreased MGC; however, SQ regular insulin decreased GV whereas SQ lispro tended to increase GV. Hourly glucose concentration measurements were needed to capture the increase in GV. In the type 2 diabetic patient, “rebound hyperglycemia” occurred after SQ lispro was rapidly metabolized. Although neither SQ insulin preparation caused hypoglycemia, SQ lispro significantly lowered MGC compared to SQ regular insulin. Thus, it may be more likely to cause hypoglycemia. Analyses of the detailed glucose concentration versus time data suggest that the inferior performance of lispro resulted from its shorter duration of action. Finally, the effects of both insulin preparations persisted beyond their duration of action in the type 2 diabetic patient. Conclusions: Subcutaneous regular insulin may be the short-acting insulin preparation of choice for this subset of diabetic patients. A clinical trial is required before a definitive recommendation can be made. PMID:26134836

  7. Highly Sensitive Reentrant Cavity-Microstrip Patch Antenna Integrated Wireless Passive Pressure Sensor for High Temperature Applications

    Directory of Open Access Journals (Sweden)

    Fei Lu

    2017-01-01

    Full Text Available A novel reentrant cavity-microstrip patch antenna integrated wireless passive pressure sensor was proposed in this paper for high temperature applications. The reentrant cavity was analyzed from the aspects of a distributed model and an equivalent lumped circuit model, on the basis of which an optimal sensor structure integrated with a rectangular microstrip patch antenna was proposed to better transmit/receive wireless signals. In this paper, the proposed sensor was fabricated with high temperature resistant alumina ceramic and silver metalization with weld sealing, and it was measured in a hermetic metal tank with nitrogen pressure loading. It was verified that the sensor was highly sensitive, keeping stable performance up to 300 kPa with an average sensitivity of 981.8 kHz/kPa at a temperature of 25°C, while, for high temperature measurement, the sensor can operate properly under pressure of 60–120 kPa in the temperature range of 25–300°C with a maximum pressure sensitivity of 179.2 kHz/kPa. In practical application, the proposed sensor is used with a table-lookup method, giving a maximum error of 5.78%.

  8. Sensitivity and specificity considerations for fMRI encoding, decoding, and mapping of auditory cortex at ultra-high field.

    Science.gov (United States)

    Moerel, Michelle; De Martino, Federico; Kemper, Valentin G; Schmitter, Sebastian; Vu, An T; Uğurbil, Kâmil; Formisano, Elia; Yacoub, Essa

    2018-01-01

    Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2*-weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2-weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2*-weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2*-weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity. However, submillimeter tonotopy maps revealed biases in assigned frequency

  9. Temporal regularity of the environment drives time perception

    OpenAIRE

    van Rijn, H; Rhodes, D; Di Luca, M

    2016-01-01

    It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...

  10. Polarization-sensitive and broadband germanium sulfide photodetectors with excellent high-temperature performance.

    Science.gov (United States)

    Tan, Dezhi; Zhang, Wenjin; Wang, Xiaofan; Koirala, Sandhaya; Miyauchi, Yuhei; Matsuda, Kazunari

    2017-08-31

    Layered materials, such as graphene, transition metal dichalcogenides and black phosphorene, have been established rapidly as intriguing building blocks for optoelectronic devices. Here, we introduce highly polarization sensitive, broadband, and high-temperature-operation photodetectors based on multilayer germanium sulfide (GeS). The GeS photodetector shows a high photoresponsivity of about 6.8 × 10^3 A W^-1, an extremely high specific detectivity of 5.6 × 10^14 Jones, and broad spectral response in the wavelength range of 300-800 nm. More importantly, the GeS photodetector has high polarization sensitivity to incident linearly polarized light, which provides another degree of freedom for photodetectors. Tremendously enhanced photoresponsivity is observed with a temperature increase, and high responsivity is achievable at least up to 423 K. The establishment of larger photoinduced reduction of the Schottky barrier height will be significant for the investigation of the photoresponse mechanism of 2D layered material-based photodetectors. These attributes of high photocurrent generation in a wide temperature range, broad spectral response, and polarization sensitivity coupled with environmental stability indicate that the proposed GeS photodetector is very suitable for optoelectronic applications.
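
    For context, the two figures of merit quoted above have the following standard definitions (general textbook definitions, not taken from this paper), with I_ph the photocurrent, P_in the incident optical power, A the detector area, \Delta f the bandwidth and i_n the noise current:

        R = \frac{I_{\mathrm{ph}}}{P_{\mathrm{in}}}, \qquad D^{*} = \frac{R \sqrt{A\, \Delta f}}{i_{n}}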

  11. The problem of oxidation state stabilisation and some regularities of a Periodic system of the elements

    International Nuclear Information System (INIS)

    Kiselev, Yurii M; Tretyakov, Yuri D

    1999-01-01

    The general principles of the concept of oxidation state stabilisation are formulated. Problems associated with the preparation and provision of the highest valent forms of transition elements are considered. The empirical data concerning the synthesis of new compounds of rare-earth elements and d elements in unusually high oxidation states are analysed. The possibility of the occurrence of the oxidation states +9 and +10 for some elements (for example, for iridium and platinum in tetraoxo ions) is discussed. Approaches to the realisation of these states are outlined and it is demonstrated that solid phases or matrices containing alkali metal cations are the most promising systems for the stabilisation of these high oxidation states. Selected thermodynamic features typical of metal halides and oxides and the regularities of the changes in the extreme oxidation states of d elements are considered. The bibliography includes 266 references.

  12. Development of High Sensitivity Nuclear Emulsion and Fine Grained Emulsion

    Science.gov (United States)

    Kawahara, H.; Asada, T.; Naka, T.; Naganawa, N.; Kuwabara, K.; Nakamura, M.

    2014-08-01

    Nuclear emulsion is a particle detector having high spatial resolution and angular resolution. It became useful for large statistics experiments thanks to the development of automatic scanning systems. In 2010, a facility for emulsion production was introduced and R&D of nuclear emulsion began at Nagoya University. In this paper, we present results of the development of a high sensitivity emulsion and a fine grained emulsion for dark matter search experiments. Improvement of sensitivity is achieved by raising the density of silver halide crystals and doping well-adjusted amounts of chemicals. Production of fine grained emulsion was difficult because of unexpected crystal condensation. By mixing polyvinyl alcohol (PVA) into gelatin as a binder, we succeeded in making a stable fine grained emulsion.

  13. The uniqueness of the regularization procedure

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1981-01-01

    On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)

  14. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly, Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
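
    The O-U claim is easy to probe numerically. A minimal Euler-Maruyama sketch of two diffusively coupled O-U processes; all parameter values are arbitrary illustrative choices, not the paper's:

        import numpy as np

        rng = np.random.default_rng(1)
        dt, n, tau, g = 1e-3, 200_000, 1.0, 2.0   # step, no. of steps, time constant, coupling
        sx, sy = 0.1, 0.5                          # x is the quieter unit, y the noisier one
        x, y = np.zeros(n), np.zeros(n)
        for k in range(n - 1):
            dWx, dWy = rng.normal(0.0, np.sqrt(dt), 2)
            x[k + 1] = x[k] + (-x[k] / tau + g * (y[k] - x[k])) * dt + sx * dWx
            y[k + 1] = y[k] + (-y[k] / tau + g * (x[k] - y[k])) * dt + sy * dWy

        # Compare the coupled variance of x with its uncoupled stationary
        # variance sx**2 * tau / 2 to see how coupling reshapes variability.
        print(x.var(), sx**2 * tau / 2)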

  15. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
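
    For a fixed regularization parameter, the general-form Tikhonov problem these methods target, min_x ||Ax - b||^2 + lam^2 ||Lx||^2, can be solved by stacking the two terms into one least-squares system. A minimal Python sketch (names ours; the learned-parameter machinery of the paper is not reproduced):

        import numpy as np

        def tikhonov_general(A, b, L, lam):
            # Solve min_x ||A x - b||^2 + lam^2 ||L x||^2 via the
            # equivalent stacked system [A; lam*L] x ~= [b; 0].
            K = np.vstack([A, lam * L])
            rhs = np.concatenate([b, np.zeros(L.shape[0])])
            x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
            return x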

  16. Stream Processing Using Grammars and Regular Expressions

    DEFF Research Database (Denmark)

    Rasmussen, Ulrik Terp

    disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs ... as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present ... Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...

  17. Least square regularized regression in sum space.

    Science.gov (United States)

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency component of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of basic RKHSs. For sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
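
    The construction relies on the fact that the reproducing kernel of a sum space of RKHSs is the sum of the component kernels, so a small-scale and a large-scale Gaussian kernel can be combined inside ordinary kernel ridge regression. A Python sketch under that reading; the scales, regularization strength and names are illustrative assumptions:

        import numpy as np

        def gauss_kernel(X, Z, s):
            # Gaussian kernel matrix between rows of X and Z, width s.
            d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * s ** 2))

        def fit_sum_space(X, y, scales=(0.1, 2.0), lam=1e-2):
            # Kernel of the sum space = sum of the individual kernels.
            K = sum(gauss_kernel(X, X, s) for s in scales)
            return np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

        def predict(X_train, alpha, X_new, scales=(0.1, 2.0)):
            return sum(gauss_kernel(X_new, X_train, s) for s in scales) @ alpha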

  18. 5 CFR 551.421 - Regular working hours.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...

  19. Regular extensions of some classes of grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular

  20. High sensitivity troponin and valvular heart disease.

    Science.gov (United States)

    McCarthy, Cian P; Donnellan, Eoin; Phelan, Dermot; Griffin, Brian P; Enriquez-Sarano, Maurice; McEvoy, John W

    2017-07-01

    Blood-based biomarkers have been extensively studied in a range of cardiovascular diseases and have established utility in routine clinical care, most notably in the diagnosis of acute coronary syndrome (e.g., troponin) and the management of heart failure (e.g., brain-natriuretic peptide). The role of biomarkers is less well established in the management of valvular heart disease (VHD), in which the optimal timing of surgical intervention is often challenging. One promising biomarker that has been the subject of a number of recent VHD research studies is high sensitivity troponin (hs-cTn). Novel high-sensitivity assays can detect subclinical myocardial damage in asymptomatic individuals. Thus, hs-cTn may have utility in the assessment of asymptomatic patients with severe VHD who do not have a clear traditional indication for surgical intervention. In this state-of-the-art review, we examine the current evidence for hs-cTn as a potential biomarker in the most commonly encountered VHD conditions, aortic stenosis and mitral regurgitation. This review provides a synopsis of early evidence indicating that hs-cTn has promise as a biomarker in VHD. However, the impact of its measurement on clinical practice and VHD outcomes needs to be further assessed in prospective studies before routine clinical use becomes a reality. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Organic dye for highly efficient solid-state dye-sensitized solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt-Mende, L.; Bach, U.; Humphry-Baker, R.; Ito, S.; Graetzel, M. [Institut des Sciences et Ingenierie Chimiques (ISIC), Laboratoire de Photonique et Interfaces (LPI), Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Horiuchi, T.; Miura, H. [Technology Research Laboratory, Corporate Research Center, Mitsubishi Paper Mills Limited, 46, Wadai, Tsukuba City, Ibaraki 300-4247 (Japan); Uchida, S. [Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University, 1-1 Katahira 2-chome, Aoba-ku, Sendai 980-8577 (Japan)

    2005-04-04

    The feasibility of solid-state dye-sensitized solar cells as a low-cost alternative to amorphous silicon cells is demonstrated. Such a cell with a record efficiency of over 4 % under simulated sunlight is reported, made possible by using a new organic metal-free indoline dye as the sensitizer with high absorption coefficient. (Abstract Copyright [2005], Wiley Periodicals, Inc.)

  2. Regular non-twisting S-branes

    International Nuclear Information System (INIS)

    Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.

    2004-01-01

    We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived, which include a charged S-brane and an additional dilatonic field. (author)

  3. Differential sensitivity of long-sleep and short-sleep mice to high doses of cocaine.

    Science.gov (United States)

    de Fiebre, C M; Ruth, J A; Collins, A C

    1989-12-01

    The cocaine sensitivity of male and female long-sleep (LS) and short-sleep (SS) mice, which have been selectively bred for differential ethanol-induced "sleep-time," was examined in a battery of behavioral and physiological tests. Differences between these two mouse lines were subtle and were seen primarily at high doses. At high doses, SS mice were more sensitive than LS mice, particularly to cocaine-induced hypothermia; however, significant hypothermia was not seen except at doses which were very near to the seizure threshold. During a 60-min test of locomotor activity, LS mice showed greater stimulation of Y-maze activity by 20 mg/kg cocaine than SS mice. Consistent with the finding of subtle differences in sensitivity to low doses of cocaine, LS and SS mice did not differ in sensitivity to cocaine inhibition of synaptosomal uptake of [3H]-dopamine, [3H]-norepinephrine or [3H]-5-hydroxytryptamine. However, consistent with the finding of differential sensitivity to high doses of cocaine, SS mice were more sensitive to the seizure-producing effects of cocaine and of lidocaine, a local anesthetic. It is hypothesized that the differential sensitivity of these mouse lines to high doses of cocaine is due to differential sensitivity to cocaine's actions on systems that regulate local anesthetic effects. Selective breeding for differential duration of alcohol-induced "sleep-time" may have resulted in differential ion channel structure or function in these mice.

  4. The Svalbard intertidal zone: a concept for the use of GIS in applied oil sensitivity, vulnerability and impact analyses

    International Nuclear Information System (INIS)

    Moe, K.A.; Skeie, G.M.; Brude, O.W.; Loevas, S.M.; Nedreboes, M.; Weslawski, J.M.

    2000-01-01

    Historical oil spills have shown that environmental damage on the seashore can be measured by acute mortality of single species and destabilisation of the communities. The biota, however, has the potential to recover over some period of time. Applied to the understanding of the fate of oil and of population and community dynamics, the impact can be described as a function of the following two factors: the immediate extent and the duration of damage. A simple and robust mathematical model is developed to describe this process in the Svalbard intertidal. Based on the integral of key biological and physical factors, i.e., community specific sensitivity, oil accumulation and retention capacity of the substrate, ice-cover and wave exposure, the model is implemented by a Geographical Information System (GIS) for characterisation of the habitat's sensitivity and vulnerability. Geomorphologic maps and georeferenced biological data are used as input. Digital maps of the intertidal zone are compiled, indicating the shoreline sensitivity and vulnerability in terms of coastal segments and grid aggregations. Selected results have been used in the national assessment programme of oil development in the Barents Sea for priorities in environmental impact assessments and risk analyses as well as oil spill contingency planning. (Author)

  5. Characterization of a high resolution and high sensitivity pre-clinical PET scanner with 3D event reconstruction

    CERN Document Server

    Rissi, M; Bolle, E; Dorholt, O; Hines, K E; Rohne, O; Skretting, A; Stapnes, S; Volgyes, D

    2012-01-01

    COMPET is a preclinical PET scanner aiming towards a high sensitivity, a high resolution and MRI compatibility by implementing a novel detector geometry. In this approach, long scintillating LYSO crystals are used to absorb the gamma-rays. To determine the point of interaction (POI) between gamma-ray and crystal, the light exiting the crystals on one of the long sides is collected with wavelength shifters (WLS) perpendicularly arranged to the crystals. This concept has two main advantages: (1) The parallax error is reduced to a minimum and is equal for the whole field of view (FOV). (2) The POI and its energy deposit are known in all three dimensions with a high resolution, allowing for the reconstruction of Compton scattered gamma-rays. Point (1) leads to a uniform point source resolution (PSR) distribution over the whole FOV, and also allows the detector to be placed close to the object being imaged. Both points (1) and (2) lead to an increased sensitivity and allow for both high resolution and sensitivity at the...

  6. Near-Regular Structure Discovery Using Linear Programming

    KAUST Repository

    Huang, Qixing

    2014-06-02

    Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.

  7. The Relationship between Ethical Sensitivity, High Ability and Gender in Higher Education Students

    Science.gov (United States)

    Schutte, Ingrid; Wolfensberger, Marca; Tirri, Kirsi

    2014-01-01

    This study examined the ethical sensitivity of high-ability undergraduate students (n=731) in the Netherlands who completed the 28-item Ethical Sensitivity Scale Questionnaire (ESSQ) developed by Tirri & Nokelainen (2007; 2011). The ESSQ is based on Narvaez' (2001) operationalization of ethical sensitivity in seven dimensions. The following…

  8. Regular Expression Matching and Operational Semantics

    Directory of Open Access Journals (Sweden)

    Asiri Rathnayake

    2011-08-01

    Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
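
    Thompson's lockstep construction mentioned at the end advances every NFA state in parallel, one input symbol at a time, so matching is linear in the input. A compact Python sketch; the regular expression is given as an already-parsed AST of tuples (our own encoding, not the paper's machinery) to keep the example short:

        import itertools

        counter = itertools.count()

        def compile_nfa(ast, trans, eps):
            # Thompson construction; returns (start, accept) and fills the character
            # transitions trans[(state, char)] and epsilon moves eps[state].
            s, a = next(counter), next(counter)
            kind = ast[0]
            if kind == 'lit':
                trans.setdefault((s, ast[1]), set()).add(a)
            elif kind in ('cat', 'alt'):
                s1, a1 = compile_nfa(ast[1], trans, eps)
                s2, a2 = compile_nfa(ast[2], trans, eps)
                if kind == 'cat':
                    eps.setdefault(s, set()).add(s1)
                    eps.setdefault(a1, set()).add(s2)
                    eps.setdefault(a2, set()).add(a)
                else:
                    eps.setdefault(s, set()).update({s1, s2})
                    eps.setdefault(a1, set()).add(a)
                    eps.setdefault(a2, set()).add(a)
            elif kind == 'star':
                s1, a1 = compile_nfa(ast[1], trans, eps)
                eps.setdefault(s, set()).update({s1, a})
                eps.setdefault(a1, set()).update({s1, a})
            return s, a

        def closure(states, eps):
            # Epsilon closure of a set of states.
            stack, seen = list(states), set(states)
            while stack:
                for q in eps.get(stack.pop(), ()):
                    if q not in seen:
                        seen.add(q)
                        stack.append(q)
            return seen

        def match(ast, text):
            trans, eps = {}, {}
            start, accept = compile_nfa(ast, trans, eps)
            current = closure({start}, eps)
            for c in text:                      # all live states step in lockstep
                nxt = set()
                for q in current:
                    nxt |= trans.get((q, c), set())
                current = closure(nxt, eps)
            return accept in current

        # (a|b)*abb
        ast = ('cat', ('star', ('alt', ('lit', 'a'), ('lit', 'b'))),
               ('cat', ('lit', 'a'), ('cat', ('lit', 'b'), ('lit', 'b'))))
        print(match(ast, "abaabb"))   # True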

  9. Implicit learning out of the lab: the case of orthographic regularities.

    Science.gov (United States)

    Pacton, S; Perruchet, P; Fayol, M; Cleeremans, A

    2001-09-01

    Children's (Grades 1 to 5) implicit learning of French orthographic regularities was investigated through nonword judgment (Experiments 1 and 2) and completion (Experiments 3a and 3b) tasks. Children were increasingly sensitive to (a) the frequency of double consonants (Experiments 1, 2, and 3a), (b) the fact that vowels can never be doubled (Experiment 2), and (c) the legal position of double consonants (Experiments 2 and 3b). The latter effect transferred to never doubled consonants but with a decrement in performance. Moreover, this decrement persisted without any trend toward fading, even after the massive amounts of experience provided by years of practice. This result runs against the idea that transfer to novel material is indicative of abstract rule-based knowledge and suggests instead the action of mechanisms sensitive to the statistical properties of the material. A connectionist model is proposed as an instantiation of such mechanisms.

  10. Tetravalent one-regular graphs of order 4p²

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

    A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.

  11. Major earthquakes occur regularly on an isolated plate boundary fault.

    Science.gov (United States)

    Berryman, Kelvin R; Cochran, Ursula A; Clark, Kate J; Biasi, Glenn P; Langridge, Robert M; Villamor, Pilar

    2012-06-29

    The scarcity of long geological records of major earthquakes, on different types of faults, makes testing hypotheses of regular versus random or clustered earthquake recurrence behavior difficult. We provide a fault-proximal major earthquake record spanning 8000 years on the strike-slip Alpine Fault in New Zealand. Cyclic stratigraphy at Hokuri Creek suggests that the fault ruptured to the surface 24 times, and event ages yield a 0.33 coefficient of variation in recurrence interval. We associate this near-regular earthquake recurrence with a geometrically simple strike-slip fault, with high slip rate, accommodating a high proportion of plate boundary motion that works in isolation from other faults. We propose that it is valid to apply time-dependent earthquake recurrence models for seismic hazard estimation to similar faults worldwide.
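
    The statistic behind "near-regular" here is the coefficient of variation of the inter-event times T (notation ours); the reported 0.33 sits between the 0 of a strictly periodic sequence and the 1 of a memoryless Poisson process:

        \mathrm{CV} = \frac{\sigma_T}{\mu_T}, \qquad \mathrm{CV}_{\text{periodic}} = 0, \quad \mathrm{CV}_{\text{Poisson}} = 1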

  12. Study and realization of a beam analyser of high intensity (10^6–10^10)

    International Nuclear Information System (INIS)

    Perret-Gallix, D.

    1975-01-01

    A beam analyser working under high beam intensity in the range of 10^6 to 10^10 particles per burst and giving the position, profile and intensity of this beam is studied. The reasons for this study, the principle of measurement, the construction of the hardware and the different tests carried out on the chamber in order to evaluate its main features are described. The analyser is a multi-cellular ionisation chamber or stripe chamber; each cell, made of a copper stripe (0.25 mm wide) inserted between two high-voltage planes (500 V), forms a small independent ionisation chamber. This system, working under the on-line control of a mini-computer, allows the instantaneous position and profile of the beam to be associated with each event or event group [fr

  13. Multilinear Graph Embedding: Representation and Regularization for Images.

    Science.gov (United States)

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  14. Development of the "Highly Sensitive Dog" questionnaire to evaluate the personality dimension "Sensory Processing Sensitivity" in dogs.

    Directory of Open Access Journals (Sweden)

    Maya Braem

    Full Text Available In humans, the personality dimension 'sensory processing sensitivity (SPS)', also referred to as "high sensitivity", involves deeper processing of sensory information, which can be associated with physiological and behavioral overarousal. However, it has not been studied up to now whether this dimension also exists in other species. SPS can influence how people perceive the environment and how this affects them, thus a similar dimension in animals would be highly relevant with respect to animal welfare. We therefore explored whether SPS translates to dogs, one of the primary model species in personality research. A 32-item questionnaire to assess the "highly sensitive dog score" (HSD-s) was developed based on the "highly sensitive person" (HSP) questionnaire. A large-scale, international online survey was conducted, including the HSD questionnaire, as well as questions on fearfulness, neuroticism, "demographic" (e.g. dog sex, age, weight; age at adoption, etc.) and "human" factors (e.g. owner age, sex, profession, communication style, etc.), and the HSP questionnaire. Data were analyzed using linear mixed effect models with forward stepwise selection to test prediction of HSD-s by the above-mentioned factors, with country of residence and dog breed treated as random effects. A total of 3647 questionnaires were fully completed. HSD-, fearfulness, neuroticism and HSP-scores showed good internal consistencies, and HSD-s only moderately correlated with fearfulness and neuroticism scores, paralleling previous findings in humans. Intra- (N = 447) and inter-rater (N = 120) reliabilities were good. Demographic and human factors, including HSP score, explained only a small amount of the variance of HSD-s. A PCA identified three subtraits of SPS, comparable to human findings. Overall, the measured personality dimension in dogs showed good internal consistency, partial independence from fearfulness and neuroticism, and good intra- and inter-rater reliability.

  15. Sensitivity analyses for simulating pesticide impacts on honey bee colonies

    Science.gov (United States)

    We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop + Pesticide model. Simulations are performed of hive population trajectories with and without pesti...

  16. Regularization and error assignment to unfolded distributions

    CERN Document Server

    Zech, Gunter

    2011-01-01

    The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.

  17. Directional Total Generalized Variation Regularization for Impulse Noise Removal

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas; Dong, Yiqiu

    2017-01-01

    this regularizer for directional images is highly advantageous. In order to estimate directions in impulse noise corrupted images, which is much more challenging compared to Gaussian noise corrupted images, we introduce a new Fourier transform-based method. Numerical experiments show that this method is more...

  18. Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event

    Directory of Open Access Journals (Sweden)

    Gerhard Strydom

    2013-01-01

    Full Text Available The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
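
    The 95%/95% tolerance intervals referred to here are conventionally sized with Wilks' formula; for a one-sided interval at coverage \beta and confidence \gamma the minimum number of code runs n satisfies (a standard result, not specific to this study):

        1 - \beta^{n} \ge \gamma \quad\Longrightarrow\quad n \ge \frac{\ln(1 - \gamma)}{\ln \beta}, \qquad \beta = \gamma = 0.95 \;\Rightarrow\; n = 59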

  19. Towards highly sensitive strain sensing based on nanostructured materials

    International Nuclear Information System (INIS)

    Dao, Dzung Viet; Nakamura, Koichi; Sugiyama, Susumu; Bui, Tung Thanh; Dau, Van Thanh; Yamada, Takeo; Hata, Kenji

    2010-01-01

    This paper presents our recent theoretical and experimental study of piezo-effects in nanostructured materials for highly sensitive, high resolution mechanical sensors. The piezo-effects presented here include the piezoresistive effect in a silicon nanowire (SiNW) and a single wall carbon nanotube (SWCNT) thin film, as well as the piezo-optic effect in a Si photonic crystal (PhC) nanocavity. Firstly, the electronic energy band structure of the silicon nanostructure is discussed and simulated by using the First-Principles Calculations method. The result showed a remarkably different energy band structure compared with that of bulk silicon. This difference in the electronic state will result in different physical, chemical, and therefore, sensing properties of silicon nanostructures. The piezoresistive effects of SiNW and SWCNT thin film were investigated experimentally. We found that, when the width of (110) p-type SiNW decreases from 500 to 35 nm, the piezoresistive effect increases by more than 60%. The longitudinal piezoresistive coefficient of SWCNT thin film was measured to be twice that of bulk p-type silicon. Finally, theoretical investigations of the piezo-optic effect in a PhC nanocavity based on the Finite Difference Time Domain (FDTD) method showed extremely high resolution strain sensing. These nanostructures were fabricated based on top-down nanofabrication technology. The achievements of this work are significant for highly sensitive, high resolution and miniaturized mechanical sensors.

  20. An inter-laboratory comparison of PNH clone detection by high-sensitivity flow cytometry in a Russian cohort.

    Science.gov (United States)

    Sipol, Alexandra A; Babenko, Elena V; Borisov, Vyacheslav I; Naumova, Elena V; Boyakova, Elena V; Yakunin, Dimitry I; Glazanova, Tatyana V; Chubukina, Zhanna V; Pronkina, Natalya V; Popov, Alexander M; Saveliev, Leonid I; Lugovskaya, Svetlana A; Lisukov, Igor A; Kulagin, Alexander D; Illingworth, Andrea J

    2015-01-01

    Paroxysmal nocturnal hemoglobinuria (PNH) is an acquired clonal stem cell disorder characterized by partial or absolute deficiency of glycophosphatidyl-inositol (GPI) anchor-linked surface proteins on blood cells. A lack of precise diagnostic standards for flow cytometry has hampered useful comparisons of data between laboratories. We report data from the first study evaluating the reproducibility of high-sensitivity flow cytometry for PNH in Russia. PNH clone sizes were determined at diagnosis in PNH patients at a central laboratory and compared with follow-up measurements in six laboratories across the country. Analyses in each laboratory were performed according to recommendations from the International Clinical Cytometry Society (ICCS) and the more recent 'practical guidelines'. Follow-up measurements were compared with each other and with the values determined at diagnosis. PNH clone size measurements were determined in seven diagnosed PNH patients (five females, two males: mean age 37 years); five had a history of aplastic anemia and three (one with and two without aplastic anemia) had severe hemolytic PNH and elevated plasma lactate dehydrogenase. PNH clone sizes at diagnosis were low in patients with less severe clinical symptoms (0.41-9.7% of granulocytes) and high in patients with severe symptoms (58-99%). There were only minimal differences in the follow-up clone size measurement for each patient between the six laboratories, particularly in those with high values at diagnosis. The ICCS-recommended high-sensitivity flow cytometry protocol was effective for detecting major and minor PNH clones in Russian PNH patients, and showed high reproducibility between laboratories.

  1. High-sensitivity bend angle measurements using optical fiber gratings.

    Science.gov (United States)

    Rauf, Abdul; Zhao, Jianlin; Jiang, Biqiang

    2013-07-20

    We present a high-sensitivity and more flexible bend measurement method, which is based on the coupling of the core mode to the cladding modes at the bending region in concatenation with an optical fiber grating serving as a band reflector. The characteristics of a bend sensing arm composed of a bending region and an optical fiber grating are examined for different configurations including single fiber Bragg grating (FBG), chirped FBG (CFBG), and double FBGs. The bend loss curves for coated, stripped, and etched sections of fiber in the bending region with FBG, CFBG, and double FBG are obtained experimentally. The effect of separation between the bending region and the optical fiber grating on loss is measured. The loss responses for single FBG and CFBG configurations are compared to discover the effectiveness for practical applications. It is demonstrated that the sensitivity of the double FBG scheme is twice that of the single FBG and CFBG configurations, and hence it acts as a sensitivity multiplier. The bend loss response for different fiber diameters, obtained through etching in 40% hydrofluoric acid, is measured in the double FBG scheme, resulting in a significant increase in sensitivity and a reduction of the dead-zone.

  2. High Temperature and High Sensitive NOx Gas Sensor with Hetero-Junction Structure using Laser Ablation Method

    Science.gov (United States)

    Gao, Wei; Shi, Liqin; Hasegawa, Yuki; Katsube, Teruaki

    In order to develop a high temperature (200°C–400°C) and highly sensitive NOx gas sensor, we developed a new structure of SiC-based hetero-junction devices, Pt/SnO2/SiC/Ni, Pt/In2O3/SiC/Ni and Pt/WO3/SiC/Ni, using a laser ablation method for the preparation of both the metal (Pt) electrode and the metal-oxide film. It was found that the Pt/In2O3/SiC/Ni sensor shows higher sensitivity to NO2 gas compared with the Pt/SnO2/SiC/Ni and Pt/WO3/SiC/Ni sensors, whereas the Pt/WO3/SiC/Ni sensor had better sensitivity to NO gas. These results suggest that selective detection of NO and NO2 gases may be obtained by choosing different metal oxide films.

  3. Development of High Sensitivity Nuclear Emulsion and Fine Grained Emulsion

    International Nuclear Information System (INIS)

    Kawahara, H.; Asada, T.; Naka, T.; Naganawa, N.; Kuwabara, K.; Nakamura, M.

    2014-01-01

    Nuclear emulsion is a particle detector having high spatial resolution and angular resolution. It became useful for large statistics experiments thanks to the development of automatic scanning systems. In 2010, a facility for emulsion production was introduced and R and D of nuclear emulsion began at Nagoya University. In this paper, we present results of the development of a high sensitivity emulsion and a fine grained emulsion for dark matter search experiments. Improvement of sensitivity is achieved by raising the density of silver halide crystals and doping well-adjusted amounts of chemicals. Production of fine grained emulsion was difficult because of unexpected crystal condensation. By mixing polyvinyl alcohol (PVA) into gelatin as a binder, we succeeded in making a stable fine grained emulsion.

  4. High pressure-sensitive gene expression in Lactobacillus sanfranciscensis

    Directory of Open Access Journals (Sweden)

    R.F. Vogel

    2005-08-01

    Full Text Available Lactobacillus sanfranciscensis is a Gram-positive lactic acid bacterium used in food biotechnology. It is necessary to investigate many aspects of a model organism to elucidate mechanisms of stress response, to facilitate preparation, application and performance in food fermentation, to understand mechanisms of inactivation, and to identify novel tools for high pressure biotechnology. To investigate the mechanisms of the complex bacterial response to high pressure we have analyzed changes in the proteome and transcriptome by 2-D electrophoresis, and by microarrays and real time PCR, respectively. More than 16 proteins were found to be differentially expressed upon high pressure stress and were compared to those sensitive to other stresses. Except for one apparently high pressure-specific stress protein, no pressure-specific stress proteins were found, and the proteome response to pressure was found to differ from that induced by other stresses. Selected pressure-sensitive proteins were partially sequenced and their genes were identified by reverse genetics. In a transcriptome analysis of a redundancy cleared shot gun library, about 7% of the genes investigated were found to be affected. Most of them appeared to be up-regulated 2- to 4-fold and these results were confirmed by real time PCR. Gene induction was shown for some genes up-regulated at the proteome level (clpL/groEL/rbsK), while the response of others to high hydrostatic pressure at the transcriptome level seemed to differ from that observed at the proteome level. The up-regulation of selected genes supports the view that the cell tries to compensate for pressure-induced impairment of translation and membrane transport.

  5. New approach to 3-D, high sensitivity, high mass resolution space plasma composition measurements

    International Nuclear Information System (INIS)

    McComas, D.J.; Nordholt, J.E.

    1990-01-01

    This paper describes a new type of 3-D space plasma composition analyzer. The design combines high sensitivity, high mass resolution measurements with somewhat lower mass resolution but even higher sensitivity measurements in a single compact and robust design. While the lower resolution plasma measurements are achieved using conventional straight-through time-of-flight mass spectrometry, the high mass resolution measurements are made by timing ions reflected in a linear electric field (LEF), where the restoring force that an ion experiences is proportional to the depth it travels into the LEF region. Consequently, the ion's equation of motion in that dimension is that of a simple harmonic oscillator and its travel time is simply proportional to the square root of the ion's mass/charge (m/q). While in an ideal LEF, the m/q resolution can be arbitrarily high, in a real device the resolution is limited by the field linearity which can be achieved. In this paper we describe how a nearly linear field can be produced and discuss how the design can be optimized for various different plasma regimes and spacecraft configurations
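
    The timing principle follows directly from the harmonic-oscillator equation of motion. With a restoring field proportional to depth, E_z = -kz (notation ours), an ion of mass m and charge q obeys

        m\ddot{z} = -qkz \quad\Rightarrow\quad \omega = \sqrt{\frac{qk}{m}}, \qquad t_{\text{flight}} = \frac{\pi}{\omega} = \pi \sqrt{\frac{m}{qk}} \propto \sqrt{m/q},

    so the reflected travel time (half an oscillation period) depends only on m/q and the field constant.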

  6. A CMOS In-Pixel CTIA High Sensitivity Fluorescence Imager.

    Science.gov (United States)

    Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert

    2011-10-01

    Traditionally, charge-coupled device (CCD) based image sensors have held sway over the field of biomedical imaging. Complementary metal oxide semiconductor (CMOS) based imagers so far lack sensitivity, leading to poor low-light imaging. Certain applications, including our work on animal-mountable systems for imaging in awake and unrestrained rodents, require the high sensitivity and image quality of CCDs and the low power consumption, flexibility and compactness of CMOS imagers. We present a 132×124 high-sensitivity imager array with a 20.1 μm pixel pitch fabricated in a standard 0.5 μm CMOS process. The chip incorporates n-well/p-sub photodiodes, capacitive transimpedance amplifier (CTIA) based in-pixel amplification, pixel scanners and delta differencing circuits. The 5-transistor all-nMOS pixel interfaces with peripheral pMOS transistors for column-parallel CTIA. At 70 fps, the array has a minimum detectable signal of 4 nW/cm² at a wavelength of 450 nm while consuming 718 μA from a 3.3 V supply. Peak signal-to-noise ratio (SNR) was 44 dB at an incident intensity of 1 μW/cm². Implementing 4×4 binning allowed the frame rate to be increased to 675 fps. Alternatively, sensitivity could be increased to detect about 0.8 nW/cm² while maintaining 70 fps. The chip was used to image single-cell fluorescence at 28 fps with an average SNR of 32 dB. For comparison, a cooled CCD camera imaged the same cell at 20 fps with an average SNR of 33.2 dB under the same illumination while consuming over a watt.

  7. Studies of Superfluid 3He Confined to a Regular Submicron Slab Geometry, Using SQUID NMR

    International Nuclear Information System (INIS)

    Casey, Andrew; Corcoles, Antonio; Lusher, Chris; Cowan, Brian; Saunders, John

    2006-01-01

    The effect on the superfluid ground state of confining p-wave superfluid 3He in regular geometries of characteristic size comparable to the diameter of the Cooper pair remains relatively unexplored, in part because of the demands placed by experiments on the sensitivity of the measuring technique. In this paper we report preliminary experiments aimed at the study of 3He confined to a slab geometry. The NMR response of a series of superfluid samples has been investigated using a SQUID NMR amplifier. The sensitivity of this NMR spectrometer enables samples of order 10¹⁷ spins, with low filling factor, to be studied with good resolution.

  8. High order effects in cross section sensitivity analysis

    International Nuclear Information System (INIS)

    Greenspan, E.; Karni, Y.; Gilai, D.

    1978-01-01

    Two types of high-order effects associated with perturbations in the flux shape are considered: Spectral Fine Structure Effects (SFSE) and non-linearity between changes in performance parameters and data uncertainties. SFSE are investigated in Part I using a simple single-resonance model. Results obtained for each of the resolved and for representative unresolved resonances of 238U in a ZPR-6/7-like environment indicate that SFSE can contribute significantly to the sensitivity of group constants to resonance parameters. Methods to account for SFSE, both for the propagation of uncertainties and for the adjustment of nuclear data, are discussed. A Second Order Sensitivity Theory (SOST) is presented, and its accuracy relative to that of the first-order sensitivity theory and of the direct substitution method is investigated in Part II. The investigation is done for the non-linear problem of the effect of changes in the 297 keV sodium minimum cross section on the transport of neutrons in a deep-penetration problem. It is found that the SOST provides satisfactory accuracy for cross-section uncertainty analysis. For the same degree of accuracy, the SOST can be significantly more efficient than the direct substitution method.
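
    The difference between first- and second-order sensitivity theory can be made concrete with a toy deep-penetration model. The sketch below (an illustration only, not the authors' formulation) propagates a 10% cross-section perturbation through the attenuation law R(σ) = exp(−σx) and compares both orders against direct substitution:

        import numpy as np

        x = 10.0                      # hypothetical penetration depth
        R = lambda s: np.exp(-s * x)  # toy performance parameter

        s0, ds = 0.5, 0.05            # nominal cross section and a 10% perturbation
        S = -x * R(s0)                # first-order sensitivity dR/ds
        H = x ** 2 * R(s0)            # second-order term d2R/ds2

        exact = R(s0 + ds) - R(s0)             # direct substitution
        first = S * ds                          # first-order sensitivity theory
        second = S * ds + 0.5 * H * ds ** 2     # second-order sensitivity theory
        print(f"exact {exact:.5f}, 1st order {first:.5f}, 2nd order {second:.5f}")

    For deep penetration (large x) the curvature term grows quickly, which is why the second-order correction matters in problems like the sodium-minimum example above.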

  9. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem, and its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms based on higher-order differential operators were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing on regular grids. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. (Graphical abstract: reconstructed conductivity changes along selected left and right vertical lines are plotted for the ground truth (GT), the TV method, the TGV method, and, for comparison, the GREIT algorithm.)
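
    Full TGV couples first- and second-order terms through an auxiliary field; as a simplified one-dimensional illustration of why a higher-order penalty avoids staircasing, the sketch below (assumed parameters, a smoothed absolute value, and plain gradient descent rather than a proper TGV solver) denoises a piecewise-linear signal with a first-order and a second-order total-variation penalty:

        import numpy as np

        def denoise(y, order, lam=0.1, eps=1e-3, step=0.2, iters=5000):
            # Gradient descent on 0.5*||x - y||^2 + lam * sum(phi(D^order x)),
            # where phi(t) = sqrt(t^2 + eps^2) is a smoothed absolute value.
            x = y.copy()
            for _ in range(iters):
                d = np.diff(x, n=order)
                w = lam * d / np.sqrt(d ** 2 + eps ** 2)   # phi'(d)
                g = x - y                                   # data-fidelity gradient
                if order == 1:                              # add D^T w (first difference)
                    g[:-1] -= w; g[1:] += w
                else:                                       # D^T w for second difference
                    g[:-2] += w; g[1:-1] -= 2 * w; g[2:] += w
                x -= step * g
            return x

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 200)
        clean = np.where(t < 0.5, t, 1.0 - t)       # piecewise-linear "tent" signal
        noisy = clean + 0.05 * rng.standard_normal(t.size)
        x_tv = denoise(noisy, order=1)   # first-order TV: tends to staircase on ramps
        x_ho = denoise(noisy, order=2)   # higher-order penalty: stays piecewise linear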

  10. Application of Turchin's method of statistical regularization

    Science.gov (United States)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one often needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.
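
    In Turchin's approach the unknown signal is treated as a random function with a smoothness prior, so that for Gaussian noise the estimate reduces to a posterior mean. A minimal sketch of this idea (with an assumed Gaussian apparatus function and a second-derivative smoothness prior, both hypothetical):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100
        t = np.linspace(0.0, 1.0, n)
        signal = np.exp(-0.5 * ((t - 0.4) / 0.06) ** 2)   # "true" signal

        # Apparatus function: Gaussian blur, as a convolution matrix K
        K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.05) ** 2)
        K /= K.sum(axis=1, keepdims=True)
        data = K @ signal + 0.01 * rng.standard_normal(n)

        # Smoothness prior on the second derivative: Omega = D2^T D2
        D2 = np.diff(np.eye(n), n=2, axis=0)
        Omega = D2.T @ D2

        sigma2, alpha = 0.01 ** 2, 1e-4   # noise variance and prior strength (assumed)
        # Posterior mean of the statistically regularized solution
        A = K.T @ K / sigma2 + alpha * Omega
        x_hat = np.linalg.solve(A, K.T @ data / sigma2)

    In the full method the prior strength alpha is itself inferred from the data rather than fixed by hand; the fixed value here is only for illustration.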

  11. High-Sensitivity Temperature-Independent Silicon Photonic Microfluidic Biosensors

    Science.gov (United States)

    Kim, Kangbaek

    Optical biosensors that can precisely quantify the presence of specific molecular species in real time without the need for labeling have seen increased use in the drug discovery industry and molecular biology in general. Of the many possible optical biosensors, the TM-mode Si biosensor is shown to be very attractive in sensing applications because of the large field amplitude on the surface and cost-effective CMOS VLSI fabrication. Noise is the most fundamental factor that limits the performance of sensors in the development of high-sensitivity biosensors, and noise reduction techniques require precise study and analysis. One such example stems from thermal fluctuations. Generally, SOI biosensors are vulnerable to ambient temperature fluctuations because of the large thermo-optic coefficient of silicon (≈2×10⁻⁴ RIU/K), typically requiring another reference ring and readout sequence to compensate temperature-induced noise. To address this problem, we designed sensors with a novel TM-mode shallow-ridge waveguide that provides large surface amplitude for both bulk and surface sensing. With proper design, this also provides large optical confinement in the aqueous cladding, which renders the device athermal using the negative thermo-optic coefficient of water (≈−1×10⁻⁴ RIU/K), demonstrating cancellation of thermo-optic effects for aqueous-solution operation near 300 K. Additional limitations resulting from mechanical actuator fluctuations, stability of tunable lasers, and the large 1/f noise of lasers and sensor electronics can limit biosensor performance. Here we also present a simple harmonic feedback readout technique that obviates the need for spectrometers and tunable lasers. This feedback technique reduces the impact of 1/f noise to enable high sensitivity, and a DSP lock-in with a 256 kHz sampling rate can provide microsecond-time-scale monitoring for fast transitions in biomolecular concentration, with potential for small volume and low cost. In this dissertation, a novel

  12. Regularization based on steering parameterized Gaussian filters and a Bhattacharyya distance functional

    Science.gov (United States)

    Lopes, Emerson P.

    2001-08-01

    Template regularization embeds the problem of class separability. From a machine vision perspective, this problem is critical when a textural classification procedure is applied to non-stationary pattern mosaic images. Such applications often show low accuracy because the classifiers are disturbed by exogenous or endogenous perturbations of signal regularity. Natural scene imaging, where images present a certain degree of homogeneity in texture element size or shape (primitives), shows a variety of behaviors, especially in the preferential spatial directionality. The space-time image pattern characterization can only be solved if classification procedures are designed with the most robust tools, within a parallel and hardware perspective. The results compared in this paper are obtained using a framework based on a multi-resolution, frame and hypothesis approach. Two strategies for applying the bank of Gabor filters are considered: an adaptive strategy using the KL transform, and a fixed-configuration strategy. The regularization under discussion is accomplished in the pyramid-building stage. The filters are steered Gaussians controlled by free parameters, which are adjusted by a feedback process driven by hints obtained from interaction functionals over sequences of frames, post-processed during training and including the classification of training-set samples as examples. Besides these adjustments, there is continuous input-data-sensitive adaptiveness. The experimental assessments focus on two basic issues: the Bhattacharyya distance as a pattern characterization feature, and the combination of the KL transform for feature selection and adaptive criterion with the regularization of the pattern Bhattacharyya distance functional (BDF) behavior, using BDF state separability and symmetry as the main indicators of an optimum framework parameter configuration.
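
    For reference, the Bhattacharyya distance underlying the BDF can be computed from two normalized feature histograms as D_B = −ln Σ_i √(p_i q_i). A small sketch with hypothetical Gabor-response histograms:

        import numpy as np

        def bhattacharyya_distance(p, q, eps=1e-12):
            # D_B = -ln sum_i sqrt(p_i * q_i) for normalized histograms.
            p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
            p = p / p.sum(); q = q / q.sum()
            return -np.log(np.sqrt(p * q).sum() + eps)

        # Hypothetical Gabor-response histograms of two texture classes
        h1 = np.array([5.0, 20.0, 40.0, 20.0, 5.0])
        h2 = np.array([25.0, 25.0, 20.0, 15.0, 15.0])
        print(bhattacharyya_distance(h1, h2))   # larger value => better separability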

  13. A highly sensitive CMOS digital Hall sensor for low magnetic field applications.

    Science.gov (United States)

    Xu, Yue; Pan, Hong-Bin; He, Shu-Zhuan; Li, Li

    2012-01-01

    Integrated CMOS Hall sensors have been widely used to measure magnetic fields. However, they are difficult to use in low-magnetic-field environments due to their low sensitivity and large offset. This paper describes a highly sensitive digital Hall sensor fabricated in 0.18 μm high-voltage CMOS technology for low-field applications. The sensor consists of a switched cross-shaped Hall plate and a novel signal conditioner. It effectively eliminates offset and low-frequency 1/f noise by applying a dynamic quadrature offset cancellation technique. The measured results show the optimal Hall plate achieves a high current-related sensitivity of about 310 V/(A·T). The whole sensor is able to measure magnetic fields as small as ±2 mT and output a digital Hall signal over a wide temperature range from −40 °C to 120 °C.

  14. Hidden regularity for a strongly nonlinear wave equation

    International Nuclear Information System (INIS)

    Rivera, J.E.M.

    1988-08-01

    The nonlinear wave equation u'' − Δu + f(u) = v in Q = Ω × ]0,T[; u(0) = u₀, u'(0) = u₁ in Ω; u(x,t) = 0 on Σ = Γ × ]0,T[, where f is a continuous function satisfying lim sup_{|s|→+∞} f(s)/s > −∞ and Ω is a bounded domain of Rⁿ with smooth boundary Γ, is analysed. It is shown that there exists a solution of this nonlinear wave equation satisfying the hidden regularity condition ∂u/∂η ∈ L²(Σ). Moreover, there exists a constant C > 0 such that |∂u/∂η| ≤ C{E(0) + |v|²_Q}. (author)

  15. On the regularized fermionic projector of the vacuum

    Science.gov (United States)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  16. On the regularized fermionic projector of the vacuum

    International Nuclear Information System (INIS)

    Finster, Felix

    2008-01-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed

  17. A Large Dimensional Analysis of Regularized Discriminant Analysis Classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-11-01

    This article carries out a large dimensional analysis of standard regularized discriminant analysis classifiers designed on the assumption that data arise from a Gaussian mixture model with different means and covariances. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under mild assumptions, we show that the asymptotic classification error approaches a deterministic quantity that depends only on the means and covariances associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized discriminant analysis in practical, large but finite, dimensions, and can be used to determine and pre-estimate the optimal regularization parameter that minimizes the misclassification error probability. Despite being theoretically valid only for Gaussian data, our findings are shown to yield high accuracy in predicting the performance achieved with real data sets drawn from the popular USPS database, thereby making an interesting connection between theory and practice.
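
    In practice, the regularization parameter of such a classifier can be scanned directly. A minimal sketch using scikit-learn's shrinkage-regularized LDA on synthetic Gaussian data (illustrative of the large-dimensional regime studied, not the authors' code):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        p, n = 100, 60                       # many features, few samples per class
        X = np.vstack([rng.standard_normal((n, p)),
                       rng.standard_normal((n, p)) + 0.3])
        y = np.array([0] * n + [1] * n)

        # Scan the shrinkage (regularization) parameter; the RMT analysis
        # predicts an optimal interior value in the large-p, large-n regime.
        for gamma in [0.0, 0.25, 0.5, 0.75, 1.0]:
            clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=gamma)
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(f"shrinkage = {gamma:.2f}, CV accuracy = {acc:.3f}")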

  18. From Molecular Design to Co-sensitization; High performance indole based photosensitizers for dye-sensitized solar cells

    International Nuclear Information System (INIS)

    Babu, Dickson D.; Su, Rui; El-Shafei, Ahmed; Adhikari, Airody Vasudeva

    2016-01-01

    displays promising photovoltaic results and exhibited an enhanced efficiency of 8.06%. Further, the good agreement between the calculated and experimental results showcases the precision of the energy functional and basis set utilized in this study. All these findings provide deeper insight into, and a better understanding of, the intricacies involved in the design of superior co-sensitizers for the development of highly efficient DSSCs.

  19. Towards high resolution polarisation analysis using double polarisation and ellipsoidal analysers

    CERN Document Server

    Martin-Y-Marero, D

    2002-01-01

    Classical polarisation analysis methods lack the combination of high resolution and high count rate necessary to cope with the demands of modern condensed-matter experiments. In this work, we present a method to achieve high-resolution polarisation analysis based on a double polarisation system. By coupling this method with an ellipsoidal wavelength analyser, a high count rate can be achieved whilst delivering a resolution of around 10 μeV. This method is ideally suited to pulsed sources, although it can be adapted to continuous sources as well. (orig.)

  20. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  1. High degree gravitational sensitivity from Mars orbiters for the GMM-1 gravity model

    Science.gov (United States)

    Lerch, F. J.; Smith, D. E.; Chan, J. C.; Patel, G. B.; Chinn, D. S.

    1994-01-01

    Orbital sensitivity of the gravity field for high-degree terms (greater than 30) is analyzed on satellites employed in the Goddard Mars Model GMM-1, complete in spherical harmonics through degree and order 50. The model is obtained from S-band Doppler data on the Mariner 9 (M9), Viking Orbiter 1 (VO1), and Viking Orbiter 2 (VO2) spacecraft, which were tracked by the NASA Deep Space Network on seven different highly eccentric orbits. The main sensitivity of the high-degree terms is obtained from the VO1 and VO2 low orbits (300 km periapsis altitude), where significant spectral sensitivity is seen for all degrees out through degree 50. The velocity perturbations show a dominant effect at periapsis and significant effects out beyond the semi-latus rectum, covering over 180 degrees of the orbital groundtrack for the low-altitude orbits. Because of the wide band of periapsis motion, covering nearly 180 degrees in argument of periapsis and ±39 degrees in latitude, the VO1 300 km periapsis altitude orbit with an inclination of 39 degrees gave the dominant sensitivity in the GMM-1 solution for the high-degree terms. Although the VO2 low-periapsis orbit has a smaller band of periapsis mapping coverage, it strongly complements the VO1 orbit sensitivity in the GMM-1 solution, with Doppler tracking coverage over a different inclination of 80 degrees.

  2. Photon Counting System for High-Sensitivity Detection of Bioluminescence at Optical Fiber End.

    Science.gov (United States)

    Iinuma, Masataka; Kadoya, Yutaka; Kuroda, Akio

    2016-01-01

    The technique of photon counting is widely used in various fields and is also applicable to high-sensitivity detection of luminescence. Thanks to the recent development of single-photon detectors with avalanche photodiodes (APDs), a photon counting system with an optical fiber has become a powerful tool for detecting bioluminescence at an optical fiber end, because it allows us to fully exploit the compactness, simple operation, and high quantum efficiency of APD detectors. This optical fiber-based system can also improve the sensitivity of local detection of adenosine triphosphate (ATP) through high-sensitivity detection of the bioluminescence. In this chapter, we introduce the basic concept of the optical fiber-based system and explain how to construct and use it.

  3. Regularized inversion of controlled source and earthquake data

    International Nuclear Information System (INIS)

    Ramachandran, Kumar

    2012-01-01

    Estimation of the seismic velocity structure of the Earth's crust and upper mantle from travel-time data has advanced greatly in recent years. Forward modelling trial-and-error methods have been superseded by tomographic methods which allow more objective analysis of large two-dimensional and three-dimensional refraction and/or reflection data sets. The fundamental purpose of travel-time tomography is to determine the velocity structure of a medium by analysing the time it takes for a wave generated at a source point within the medium to arrive at a distribution of receiver points. Tomographic inversion of first-arrival travel-time data is a nonlinear problem since both the velocity of the medium and ray paths in the medium are unknown. The solution for such a problem is typically obtained by repeated application of linearized inversion. Regularization of the nonlinear problem reduces the ill posedness inherent in the tomographic inversion due to the under-determined nature of the problem and the inconsistencies in the observed data. This paper discusses the theory of regularized inversion for joint inversion of controlled source and earthquake data, and results from synthetic data testing and application to real data. The results obtained from tomographic inversion of synthetic data and real data from the northern Cascadia subduction zone show that the velocity model and hypocentral parameters can be efficiently estimated using this approach. (paper)
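
    The repeated linearized inversion described above amounts to a damped Gauss-Newton loop with a roughness penalty. A schematic sketch, with a toy nonlinear forward model standing in for ray-based travel-time computation (all operators and parameter values hypothetical):

        import numpy as np

        def regularized_linearized_inversion(g, jac, d_obs, m0, lam=0.1, n_iter=10):
            # Each iteration solves the Tikhonov-damped normal equations
            # (J^T J + lam^2 L^T L) dm = J^T (d_obs - g(m)),
            # where L is a first-difference roughness operator on the model m.
            m = m0.copy()
            L = np.diff(np.eye(m.size), axis=0)
            for _ in range(n_iter):
                r = d_obs - g(m)
                J = jac(m)
                A = J.T @ J + lam ** 2 * (L.T @ L)
                m = m + np.linalg.solve(A, J.T @ r)
            return m

        rng = np.random.default_rng(2)
        G = rng.random((40, 20))                 # hypothetical ray path lengths
        g = lambda v: G @ (1.0 / v)              # travel times from velocities
        jac = lambda v: -G / v[None, :] ** 2     # Jacobian of g
        v_true = 2.0 + 0.5 * np.sin(np.linspace(0.0, 3.0, 20))
        d_obs = g(v_true) + 0.001 * rng.standard_normal(40)
        v_est = regularized_linearized_inversion(g, jac, d_obs, m0=2.0 * np.ones(20))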

  4. CALDER: High-sensitivity cryogenic light detectors

    International Nuclear Information System (INIS)

    Casali, N.; Bellini, F.; Cardani, L.

    2017-01-01

    Current bolometric experiments searching for rare processes such as neutrinoless double-beta decay or dark matter interactions demand cryogenic light detectors with high sensitivity, large active area, and excellent scalability and radio-purity, in order to reduce their background budget. The CALDER project aims to develop such light detectors by implementing phonon-mediated Kinetic Inductance Detectors (KIDs). The goal of the project is the realization of a 5 × 5 cm² light detector working between 10 and 100 mK with a baseline resolution RMS below 20 eV. In this work, the characteristics and performance of the prototype detectors developed in the first project phase are shown.

  5. Spatially-Variant Tikhonov Regularization for Double-Difference Waveform Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Huang, Lianjie [Los Alamos National Laboratory; Zhang, Zhigang [Los Alamos National Laboratory

    2011-01-01

    Double-difference waveform inversion is a potential tool for quantitative monitoring of geologic carbon storage. It jointly inverts time-lapse seismic data for changes in reservoir geophysical properties. Because of the ill-posedness of waveform inversion, it is a great challenge to obtain reservoir changes accurately and efficiently, particularly when using time-lapse seismic reflection data. Regularization techniques can be used to address this ill-posedness. The regularization parameter controls the smoothness of the inversion results. A constant regularization parameter is normally used in waveform inversion, and an optimal regularization parameter has to be selected. The resulting inversion is then a trade-off among regions with different smoothness or noise levels; the images are therefore over-regularized in some regions while under-regularized in others. In this paper, we employ a spatially-variant parameter in the Tikhonov regularization scheme used in double-difference waveform tomography to improve the inversion accuracy and robustness. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter and those produced without any regularization. We observe that, with a spatially-variant regularization scheme, the target regions are well reconstructed while noise is reduced in the other regions. We show that the spatially-variant regularization scheme provides the flexibility to regularize local regions based on a priori information without increasing the computational cost or memory requirements.
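
    The essential change from standard Tikhonov regularization is that the scalar parameter becomes a spatial field. A minimal sketch, with an assumed linearized forward operator, that damps a quiet background strongly while leaving the expected target region lightly regularized:

        import numpy as np

        def spatially_variant_tikhonov(A, b, lam_field):
            # Solve min ||A x - b||^2 + ||W x||^2 with W = diag(lam_field),
            # i.e. a per-cell regularization weight instead of one constant.
            W = np.diag(lam_field)
            return np.linalg.solve(A.T @ A + W.T @ W, A.T @ b)

        rng = np.random.default_rng(0)
        A = rng.standard_normal((80, 50))           # hypothetical linearized operator
        x_true = np.zeros(50); x_true[20:30] = 1.0  # changes confined to a target zone
        b = A @ x_true + 0.05 * rng.standard_normal(80)

        lam = np.full(50, 5.0)   # strong damping in the quiet background ...
        lam[20:30] = 0.5         # ... weak damping where changes are expected a priori
        x_hat = spatially_variant_tikhonov(A, b, lam)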

  6. Psychosocial functioning among regular cannabis users with and without cannabis use disorder.

    Science.gov (United States)

    Foster, Katherine T; Arterberry, Brooke J; Iacono, William G; McGue, Matt; Hicks, Brian M

    2017-11-27

    In the United States, cannabis accessibility has continued to rise as the perception of its harmfulness has decreased. Only about 30% of regular cannabis users develop cannabis use disorder (CUD), but it is unclear if individuals who use cannabis regularly without ever developing CUD experience notable psychosocial impairment across the lifespan. Therefore, psychosocial functioning was compared across regular cannabis users with or without CUD and a non-user control group during adolescence (age 17; early risk) and young adulthood (ages 18-25; peak CUD prevalence). Weekly cannabis users with CUD (n = 311), weekly users without CUD (n = 111), and non-users (n = 996) were identified in the Minnesota Twin Family Study. Groups were compared on alcohol and illicit drug use, psychiatric problems, personality, and social functioning at age 17 and from ages 18 to 25. Self-reported cannabis use and problem use were independently verified using co-twin informant report. In both adolescence and young adulthood, non-CUD users reported significantly higher levels of substance use problems and externalizing behaviors than non-users, but lower levels than CUD users. High agreement between self- and co-twin informant reports confirmed the validity of self-reported cannabis use problems. Even in the absence of CUD, regular cannabis use was associated with psychosocial impairment in adolescence and young adulthood. However, regular users with CUD endorsed especially high psychiatric comorbidity and psychosocial impairment. The need for early prevention and intervention - regardless of CUD status - was highlighted by the presence of these patterns in adolescence.

  7. On the K-term and dispersion ratios of semi-regular variables

    International Nuclear Information System (INIS)

    Aslan, Z.

    1981-01-01

    Optical velocities of semi-regular (SR) and irregular (Lb) variables are analysed for a K-term. There is evidence for a dependence upon stellar period. Absorption lines in shorter-period non-emission SR variables are blue-shifted relative to the centre-of-mass velocity by about 6 ± 3 km s⁻¹. Emission-line SR variables give a non-negative absorption K-term, and Lb variables give no K-terms other than zero. Comparison is made with the K-terms implied by the OH velocity pattern in long-period variables. Dispersion ratios are also calculated. (author)

  8. Micropatterned comet assay enables high throughput and sensitive DNA damage quantification.

    Science.gov (United States)

    Ge, Jing; Chow, Danielle N; Fessler, Jessica L; Weingeist, David M; Wood, David K; Engelward, Bevin P

    2015-01-01

    The single cell gel electrophoresis assay, also known as the comet assay, is a versatile method for measuring many classes of DNA damage, including base damage, abasic sites, single strand breaks and double strand breaks. However, limited throughput and difficulties with reproducibility have limited its utility, particularly for clinical and epidemiological studies. To address these limitations, we created a microarray comet assay. The use of a micrometer scale array of cells increases the number of analysable comets per square centimetre and enables automated imaging and analysis. In addition, the platform is compatible with standard 24- and 96-well plate formats. Here, we have assessed the consistency and sensitivity of the microarray comet assay. We showed that the linear detection range for H2O2-induced DNA damage in human lymphoblastoid cells is between 30 and 100 μM, and that within this range, inter-sample coefficient of variance was between 5 and 10%. Importantly, only 20 comets were required to detect a statistically significant induction of DNA damage for doses within the linear range. We also evaluated sample-to-sample and experiment-to-experiment variation and found that for both conditions, the coefficient of variation was lower than what has been reported for the traditional comet assay. Finally, we also show that the assay can be performed using a 4× objective (rather than the standard 10× objective for the traditional assay). This adjustment combined with the microarray format makes it possible to capture more than 50 analysable comets in a single image, which can then be automatically analysed using in-house software. Overall, throughput is increased more than 100-fold compared to the traditional assay. Together, the results presented here demonstrate key advances in comet assay technology that improve the throughput, sensitivity, and robustness, thus enabling larger scale clinical and epidemiological studies.

  9. Polymer-Particle Pressure-Sensitive Paint with High Photostability

    Directory of Open Access Journals (Sweden)

    Yu Matsuda

    2016-04-01

    We propose a novel fast-responding and paintable pressure-sensitive paint (PSP) based on polymer particles, i.e., polymer-particle (pp-)PSP. As a fast-responding PSP, polymer-ceramic (PC-)PSP is widely studied. Since PC-PSP generally contains titanium(IV) oxide (TiO2) particles, a large reduction in the luminescent intensity occurs due to the photocatalytic action of TiO2. We propose the use of polymer particles instead of TiO2 particles to prevent this reduction in the luminescent intensity. Here, we fabricate pp-PSP based on polystyrene particles with a diameter of 1 μm, and investigate the pressure and temperature sensitivities, the response time, and the photostability. The performance of pp-PSP is compared with that of PC-PSP, indicating high photostability with the other characteristics comparable to PC-PSP.

  10. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
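
    The core ingredient, a graph-Laplacian (manifold) term added to a ridge-type regression, can be sketched compactly. The toy example below is not the tracker itself (affinities, labels and parameters are all assumed); it only shows how the extra term encourages neighboring samples to receive similar responses:

        import numpy as np

        def manifold_regularized_ridge(X, y, W, lam1=1e-2, lam2=1e-2):
            # Minimize ||X w - y||^2 + lam1 ||w||^2 + lam2 (X w)^T L (X w),
            # where L = D - W is the Laplacian of a sample-affinity graph W.
            L = np.diag(W.sum(axis=1)) - W
            n, p = X.shape
            A = X.T @ X + lam1 * np.eye(p) + lam2 * X.T @ L @ X
            return np.linalg.solve(A, X.T @ y)

        rng = np.random.default_rng(4)
        X = rng.standard_normal((30, 10))          # base samples (rows)
        y = (X[:, 0] > 0).astype(float)            # labels of the base samples
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / d2.mean())                # Gaussian-kernel affinities
        w = manifold_regularized_ridge(X, y, W)

    In the semisupervised setting of the paper, unlabeled samples enter only through the Laplacian term; the simplification here labels every sample for brevity.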

  11. Unconscious analyses of visual scenes based on feature conjunctions.

    Science.gov (United States)

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain.

  12. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Vol. 72, No. 1 (2010), pp. 439-448. ISSN 0362-546X. R&D Projects: GA AV ČR KJB100190701. Institutional research plan: CEZ:AV0Z10190503. Keywords: regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties. Subject RIV: BA - General Mathematics. Impact factor: 1.279, year: 2010. http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  13. New regular black hole solutions

    International Nuclear Information System (INIS)

    Lemos, Jose P. S.; Zanchin, Vilson T.

    2011-01-01

    In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, whose exterior region is Reissner-Nordstroem, and with a charged thin layer in between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.

  14. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...

  15. On geodesics in low regularity

    Science.gov (United States)

    Sämann, Clemens; Steinbauer, Roland

    2018-02-01

    We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.

  16. Smoking status of parents, siblings and friends: Predictors of regular smoking? Findings from a longitudinal twin-family study

    NARCIS (Netherlands)

    Vink, J.M.; Willemsen, G.; Engels, R.C.M.E.; Boomsma, D.I.

    2003-01-01

    The relationship between regular smoking behavior and the smoking behavior of parents, siblings and friends was investigated using data from the Netherlands Twin Register. Cross-sectional analyses of data of 3906 twins showed significant associations between smoking behavior of the participant and

  17. Safety and deterministic failure analyses in high-beta D-D tokamak reactors

    International Nuclear Information System (INIS)

    Selcow, E.C.

    1984-01-01

    Safety and deterministic failure analyses were performed to compare major component failure characteristics for different high-beta D-D tokamak reactors. The primary focus was on evaluating damage to the reactor facility. The analyses also considered potential hazards to the general public and operational personnel. Parametric designs of high-beta D-D tokamak reactors were developed, using WILDCAT as the reference. The size and toroidal field strength were reduced, and the fusion power increased, in an independent manner. These changes were expected to improve the economics of D-D tokamaks. Issues examined using these designs were radiation-induced failures, radiation safety, first-wall failure from plasma disruptions, and toroidal-field magnet coil failure.

  18. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.

  19. The LPM effect in sequential bremsstrahlung: dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, Peter; Chang, Han-Chih [Department of Physics, University of Virginia,382 McCormick Road, Charlottesville, VA 22894-4714 (United States); Iqbal, Shahin [National Centre for Physics,Quaid-i-Azam University Campus, Islamabad, 45320 (Pakistan)

    2016-10-19

    The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.

  20. Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis

    Science.gov (United States)

    Sakata, Ayaka; Xu, Yingying

    2018-03-01

    We analyse a linear regression problem with nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm considering nonconvex regularization, namely SCAD-AMP, and analytically show that the stability condition corresponds to the de Almeida-Thouless condition in the spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric (RS) solution. Numerical experiments confirm that for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between the replica symmetric and replica symmetry breaking (RSB) regions is found in the parameter space of SCAD. The appearance of an RS region for a nonconvex penalty is a significant advantage, indicating a region where the optimization landscape is smooth. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of ℓ1-based regularization.
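
    The nonconvexity of SCAD enters AMP through its thresholding (proximal) function, which, unlike the soft threshold of ℓ1, leaves large coefficients unshrunk. A sketch of the standard closed form (the Fan-Li rule, with the conventional choice a = 3.7):

        import numpy as np

        def scad_threshold(z, lam, a=3.7):
            # Elementwise SCAD thresholding:
            #   |z| <= 2*lam        : soft threshold, sign(z) * max(|z| - lam, 0)
            #   2*lam < |z| <= a*lam: ((a - 1) * z - sign(z) * a * lam) / (a - 2)
            #   |z| > a*lam         : z (large coefficients are not shrunk)
            z = np.asarray(z, dtype=float)
            out = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
            mid = (np.abs(z) > 2 * lam) & (np.abs(z) <= a * lam)
            out[mid] = ((a - 1) * z[mid] - np.sign(z[mid]) * a * lam) / (a - 2)
            big = np.abs(z) > a * lam
            out[big] = z[big]
            return out

        print(scad_threshold(np.linspace(-4.0, 4.0, 9), lam=1.0))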

  1. Phenotypic and genetic analyses of the varroa sensitive hygienic trait in Russian honey bee (Hymenoptera: Apidae) colonies.

    Science.gov (United States)

    Kirrane, Maria J; de Guzman, Lilia I; Holloway, Beth; Frake, Amanda M; Rinderer, Thomas E; Whelan, Pádraig M

    2014-01-01

    Varroa destructor continues to threaten colonies of European honey bees. General hygiene, and more specific Varroa Sensitive Hygiene (VSH), provide resistance towards the Varroa mite in a number of stocks. In this study, 32 Russian (RHB) and 14 Italian honey bee colonies were assessed for the VSH trait using two different assays. Firstly, colonies were assessed using the standard VSH behavioural assay of the change in infestation of a highly infested donor comb after a one-week exposure. Secondly, the same colonies were assessed using an "actual brood removal assay" that measured the removal of brood in a section created within the donor combs as a potential alternative measure of hygiene towards Varroa-infested brood. All colonies were then analysed for the recently discovered VSH quantitative trait locus (QTL) to determine whether the genetic mechanisms were similar across different stocks. Based on the two assays, RHB colonies were consistently more hygienic toward Varroa-infested brood than Italian honey bee colonies. The actual number of brood cells removed in the defined section was negatively correlated with the Varroa infestations of the colonies (r² = 0.25). Only two (percentages of brood removed and reproductive foundress Varroa) out of nine phenotypic parameters showed significant associations with genotype distributions. However, the allele associated with each parameter was the opposite of that determined by VSH mapping. In this study, RHB colonies showed high levels of hygienic behaviour towards Varroa-infested brood. The genetic mechanisms are similar to those of the VSH stock, though the opposite allele associates in RHB, indicating a stable recombination event before the selection of the VSH stock. The measurement of brood removal is a simple, reliable alternative method of measuring hygienic behaviour towards Varroa mites, at least in RHB stock.

  2. Phenotypic and genetic analyses of the varroa sensitive hygienic trait in Russian honey bee (Hymenoptera: Apidae) colonies.

    Directory of Open Access Journals (Sweden)

    Maria J Kirrane

    Varroa destructor continues to threaten colonies of European honey bees. General hygiene, and more specific Varroa Sensitive Hygiene (VSH), provide resistance towards the Varroa mite in a number of stocks. In this study, 32 Russian (RHB) and 14 Italian honey bee colonies were assessed for the VSH trait using two different assays. Firstly, colonies were assessed using the standard VSH behavioural assay of the change in infestation of a highly infested donor comb after a one-week exposure. Secondly, the same colonies were assessed using an "actual brood removal assay" that measured the removal of brood in a section created within the donor combs as a potential alternative measure of hygiene towards Varroa-infested brood. All colonies were then analysed for the recently discovered VSH quantitative trait locus (QTL) to determine whether the genetic mechanisms were similar across different stocks. Based on the two assays, RHB colonies were consistently more hygienic toward Varroa-infested brood than Italian honey bee colonies. The actual number of brood cells removed in the defined section was negatively correlated with the Varroa infestations of the colonies (r² = 0.25). Only two (percentages of brood removed and reproductive foundress Varroa) out of nine phenotypic parameters showed significant associations with genotype distributions. However, the allele associated with each parameter was the opposite of that determined by VSH mapping. In this study, RHB colonies showed high levels of hygienic behaviour towards Varroa-infested brood. The genetic mechanisms are similar to those of the VSH stock, though the opposite allele associates in RHB, indicating a stable recombination event before the selection of the VSH stock. The measurement of brood removal is a simple, reliable alternative method of measuring hygienic behaviour towards Varroa mites, at least in RHB stock.

  3. Instruction manual for ORNL tandem high abundance sensitivity mass spectrometer

    International Nuclear Information System (INIS)

    Smith, D.H.; McKown, H.S.; Chrisite, W.H.; Walker, R.L.; Carter, J.A.

    1976-06-01

    This manual describes the physical characteristics of the tandem mass spectrometer built by Oak Ridge National Laboratory for the International Atomic Energy Agency. Specific requirements met include the ability to run small samples, high abundance sensitivity, good precision and accuracy, and adequate sample throughput. The instrument is capable of running uranium samples as small as 10⁻¹² g and has an abundance sensitivity in excess of 10⁶. Precision and accuracy are enhanced by a special sweep control circuit. Sample throughput is 6 to 12 samples per day. Operating instructions are also given.

  4. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim

    2017-01-01

    This article addresses improvements in the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well acknowledged that the estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force, by construction, the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high-dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noise while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. The provided simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.

  5. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla

    2017-10-25

    This article addresses improvements in the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well acknowledged that the estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force, by construction, the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high-dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noise while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. The provided simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.
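
    A common form of the RTE is a shrinkage fixed-point iteration. The sketch below (one standard variant, with assumed data and a trace normalization) illustrates how ρ keeps the scatter estimate well conditioned even when the number of samples is comparable to the dimension:

        import numpy as np

        def regularized_tyler(X, rho, n_iter=50):
            # X: p x n data matrix; rho in (0, 1] shrinks toward the identity.
            p, n = X.shape
            C = np.eye(p)
            for _ in range(n_iter):
                Ci = np.linalg.inv(C)
                # q_i = x_i^H C^{-1} x_i for every snapshot x_i
                q = np.einsum('ij,jk,ki->i', X.T.conj(), Ci, X).real
                S = (p / n) * (X / q) @ X.T.conj()       # weighted sample scatter
                C = (1.0 - rho) * S + rho * np.eye(p)    # shrinkage step
                C = p * C / np.trace(C).real             # trace normalization
            return C

        rng = np.random.default_rng(5)
        X = rng.standard_normal((10, 15))   # p = 10 features, n = 15 snapshots
        C = regularized_tyler(X, rho=0.3)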

  6. Spectral and Concentration Sensitivity of Multijunction Solar Cells at High Temperature: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Friedman, Daniel J.; Steiner, Myles A.; Perl, Emmett E.; Simon, John

    2017-06-14

    We model the performance of two-junction solar cells at very high temperatures of ~400 degrees C and beyond for applications such as hybrid PV/solar-thermal power production, and identify areas in which the design and performance characteristics behave significantly differently than at more conventional near-room-temperature operating conditions. We show that high-temperature operation reduces the sensitivity of the cell efficiency to spectral content, but increases the sensitivity to concentration, both of which have implications for energy yield in terrestrial PV applications. For other high-temperature applications such as near-sun space missions, our findings indicate that concentration may be a useful tool to enhance cell efficiency.

  7. REGULAR PATTERN MINING (WITH JITTER) ON WEIGHTED-DIRECTED DYNAMIC GRAPHS

    Directory of Open Access Journals (Sweden)

    A. GUPTA

    2017-02-01

    Real-world graphs are mostly dynamic in nature, exhibiting time-varying behaviour in the structure of the graph, the weights on the edges, and the direction of the edges. Mining regular patterns in the occurrence of edge parameters gives insight into consumer trends over time in e-commerce co-purchasing networks. But such patterns need not be precise, as in the case when some product goes out of stock or a group of customers becomes unavailable for a short period of time. Ignoring them may lead to loss of useful information, and thus taking jitter into account becomes vital. To the best of our knowledge, no work has yet been reported to extract regular patterns considering a jitter of length greater than unity. In this article, we propose a novel method to find quasi-regular patterns on the weight and direction sequences of such graphs. The method involves analysing the dynamic network considering the inconsistencies in the occurrence of edges. It utilizes the relation between the occurrence sequence and the corresponding weight and direction sequences to speed up this process. Further, these patterns are used to determine the most central nodes (such as the most profit-yielding products). To accomplish this we introduce the concepts of dynamic closeness centrality and dynamic betweenness centrality. Experiments on the Enron e-mail dataset and a synthetic dynamic network show that the presented approach is efficient, so it can be used to find patterns in large-scale networks consisting of many timestamps.

  8. Increased sensitivity in thick-target particle induced X-ray emission analyses using dry ashing for preconcentration

    International Nuclear Information System (INIS)

    Lill, J.-O.; Harju, L.; Saarela, K.-E.; Lindroos, A.; Heselius, S.-J.

    1999-01-01

    The sensitivity in thick-target particle induced X-ray emission (PIXE) analyses of biological materials can be enhanced by dry ashing. The gain depends mainly on the mass-reduction factor and the composition of the residual ash. The enhancement factor was 7 for the certified reference material Pine Needles, and the limits of detection (LODs) were below 0.2 μg/g for Zn, Cu, Rb and Sr. When ashing biological materials with low ash contents, such as wood of pine or spruce (0.3% of dry weight) and honey (0.1% of wet weight), the gain was far greater. The LODs for these materials were 30 ng/g for wood and below 10 ng/g for honey. In addition, the ashed samples were more homogeneous and more resistant to changes during the irradiation than the original biological samples.

  9. Graph Regularized Auto-Encoders for Image Representation.

    Science.gov (United States)

    Yiyi Liao; Yue Wang; Yong Liu

    2017-06-01

    Image representation has been intensively explored in the domain of computer vision for its significant influence on related tasks such as image clustering and classification. It is valuable to learn a low-dimensional representation of an image which preserves its inherent information from the original image space. From the perspective of manifold learning, this is implemented with the local invariant idea to capture the intrinsic low-dimensional manifold embedded in the high-dimensional input space. Inspired by the recent successes of deep architectures, we propose a local invariant deep nonlinear mapping algorithm, called graph regularized auto-encoder (GAE). With the graph regularization, the proposed method preserves the local connectivity from the original image space to the representation space, while the stacked auto-encoders provide an explicit encoding model for fast inference and powerful expressive capacity for complex modeling. Theoretical analysis shows that the graph regularizer penalizes the weighted Frobenius norm of the Jacobian matrix of the encoder mapping, where the weight matrix captures the local property in the input space. Furthermore, the underlying effects on the hidden representation space are revealed, providing insightful explanation of the advantage of the proposed method. Finally, the experimental results on both clustering and classification tasks demonstrate the effectiveness of our GAE as well as the correctness of the proposed theoretical analysis, and also suggest that GAE is a superior solution to current deep representation learning techniques compared with variant auto-encoders and existing local invariant methods.
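
    The graph regularizer itself is the trace form tr(H^T L H), which equals a weighted sum of squared distances between the hidden codes of connected samples. A minimal numpy sketch verifying this identity on hypothetical data:

        import numpy as np

        def graph_regularizer(H, W):
            # tr(H^T L H) with L = D - W equals 0.5 * sum_ij W_ij ||h_i - h_j||^2,
            # so minimizing it pulls codes of strongly connected samples together.
            L = np.diag(W.sum(axis=1)) - W
            return np.trace(H.T @ L @ H)

        rng = np.random.default_rng(6)
        H = rng.standard_normal((5, 2))              # hidden codes of 5 samples
        W = (rng.random((5, 5)) > 0.5).astype(float)
        W = 0.5 * (W + W.T)                          # symmetric affinity matrix
        pairwise = 0.5 * sum(W[i, j] * np.sum((H[i] - H[j]) ** 2)
                             for i in range(5) for j in range(5))
        assert np.isclose(graph_regularizer(H, W), pairwise)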

  10. New and highly sensitive assay for L-5-hydroxytryptophan decarboxylase activity by high-performance liquid chromatography-voltammetry.

    Science.gov (United States)

    Rahman, M K; Nagatsu, T; Kato, T

    1980-12-12

    This paper describes a new, inexpensive and highly sensitive assay for aromatic L-amino acid decarboxylase (AADC) activity, using L-5-hydroxytryptophan (L-5-HTP) as substrate, in rat and human brains and serum by high-performance liquid chromatography (HPLC) with voltammetric detection. L-5-HTP was used as the substrate and D-5-HTP for the blank. After isolating the serotonin (5-HT) formed enzymatically from L-5-HTP on a small Amberlite CG-50 column, the 5-HT was eluted with hydrochloric acid and assayed by HPLC with a voltammetric detector. N-Methyldopamine was added to each incubation mixture as an internal standard. This method is sensitive enough to measure 100 fmol to 140 pmol or more of the 5-HT formed by the enzyme. An advantage of this method is that the enzyme can be incubated for a longer time (up to 150 min) than in AADC assays using L-DOPA as substrate, resulting in very high sensitivity. Using this new method, AADC activity was detected in rat serum.

  11. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

    Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparsity nature of the target distribution. However, in addition to sparsity, spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performed better than the comparative ℓ1-minimization algorithm in both spatial aggregation and location accuracy.

  12. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

    Sparsity inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper we investigate the usage of non-convex regularizations for this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly than those above, therefore imposing strong sparsity and...
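
    The truncated abstract does not show the exact functional form, so the sketch below uses one plausible piecewise-linear reading of the description: weights below the threshold theta carry a strong slope, while weights above it keep only a small "leaky" slope. This form is an assumption for illustration, not the paper's definition.

        import numpy as np

        def leaky_capped_norm(w, theta=0.1, lam_below=1.0, lam_above=0.05):
            # Assumed penalty: strong slope lam_below for |w| < theta,
            # small residual ("leaky") slope lam_above beyond the threshold.
            a = np.abs(w)
            return np.sum(lam_below * np.minimum(a, theta)
                          + lam_above * np.maximum(a - theta, 0.0))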

  13. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

    In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and prior distributions. We present three examples: two simulations, and an application in fMRI neuroimaging.
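
    For a Gaussian prior the stated relation has a simple fixed point. With prior w ~ N(0, I/alpha), regularization function R(w) = 0.5*||w||^2 and a linear-Gaussian likelihood of known noise precision beta, E_prior[R] = d/(2*alpha) and E_posterior[R] = 0.5*(||m||^2 + tr S), so equality gives alpha = d / (||m||^2 + tr S). A minimal sketch under those assumptions; beta and the iteration scheme are illustrative:

        import numpy as np

        def optimal_alpha(X, y, beta=25.0, alpha=1.0, n_iter=50):
            """Iterate alpha until E_post[||w||^2] = E_prior[||w||^2] = d/alpha."""
            d = X.shape[1]
            for _ in range(n_iter):
                S = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)  # posterior cov
                m = beta * S @ (X.T @ y)                               # posterior mean
                alpha = d / (m @ m + np.trace(S))                      # fixed point
            return alpha, m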

  14. High-speed tapping-mode atomic force microscopy using a Q-controlled regular cantilever acting as the actuator: Proof-of-principle experiments

    Energy Technology Data Exchange (ETDEWEB)

    Balantekin, M., E-mail: mujdatbalantekin@iyte.edu.tr [Electrical and Electronics Engineering, İzmir Institute of Technology, Urla, İzmir 35430 (Turkey); Satır, S.; Torello, D.; Değertekin, F. L. [Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0405 (United States)

    2014-12-15

    We present proof-of-principle experiments of a high-speed actuation method to be used in tapping-mode atomic force microscopes (AFM). In this method, we do not employ a piezotube actuator to move the tip or the sample, as in conventional AFM systems; instead, we utilize a Q-controlled eigenmode of the cantilever to perform the fast actuation. We show that the actuation speed can be increased even with a regular cantilever.

  15. High sensitivity probe absorption technique for time-of-flight ...

    Indian Academy of Sciences (India)

    We report on a phase-sensitive probe absorption technique with high sensitivity, capable of detecting a few hundred ultra-cold atoms in flight within an observation time of a few milliseconds. The large signal-to-noise ratio achieved is sufficient for reliable measurements on low-intensity beams of cold atoms.

  16. Nitrogen detected TROSY at high field yields high resolution and sensitivity for protein NMR

    International Nuclear Information System (INIS)

    Takeuchi, Koh; Arthanari, Haribabu; Shimada, Ichio; Wagner, Gerhard

    2015-01-01

    Detection of 15N in multidimensional NMR experiments of proteins has sparsely been utilized because of the low gyromagnetic ratio (γ) of nitrogen and the presumed low sensitivity of such experiments. Here we show that selecting the TROSY components of proton-attached 15N nuclei (TROSY 15NH) yields high quality spectra in high field magnets (>600 MHz) by taking advantage of the slow 15N transverse relaxation and compensating for the inherently low 15N sensitivity. The 15N TROSY transverse relaxation rates increase modestly with molecular weight but the TROSY gain in peak heights depends strongly on the magnetic field strength. Theoretical simulations predict that the narrowest line width for the TROSY 15NH component can be obtained at 900 MHz, but sensitivity reaches its maximum around 1.2 GHz. Based on these considerations, a 15N-detected 2D 1H–15N TROSY-HSQC (15N-detected TROSY-HSQC) experiment was developed and high-quality 2D spectra were recorded at 800 MHz in 2 h for 1 mM maltose-binding protein at 278 K (τc ∼ 40 ns). Unlike for 1H-detected TROSY, deuteration is not mandatory to benefit 15N-detected TROSY due to reduced dipolar broadening, which facilitates studies of proteins that cannot be deuterated, especially in cases where production requires eukaryotic expression systems. The option of recording 15N TROSY of proteins expressed in H2O media also alleviates the problem of incomplete amide proton back exchange, which often hampers the detection of amide groups in the core of large molecular weight proteins that are expressed in D2O culture media and cannot be refolded for amide back exchange. These results illustrate the potential of 15NH-detected TROSY experiments as a means to exploit the high resolution offered by high field magnets near and above 1 GHz.

  17. A highly sensitive RF-to-DC power converter with an extended dynamic range

    KAUST Repository

    Almansouri, Abdullah Saud Mohammed

    2017-10-24

    This paper proposes a highly sensitive RF-to-DC power converter with an extended dynamic range, designed to operate in the 433 MHz medical band and simulated in a 0.18 μm CMOS technology. Compared with the conventional fully cross-coupled rectifier, the proposed design offers 3.2× the dynamic range. It is also highly sensitive, requiring −18 dBm of input power to produce a 1 V output voltage when operating with a 100 kΩ load. Furthermore, the proposed design offers an open-circuit sensitivity of −23.4 dBm and a peak power conversion efficiency of 67%.
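
    For reference, the quoted dBm figures convert to absolute power as P = 1 mW × 10^(dBm/10); a two-line check of the numbers above:

        # -18 dBm and -23.4 dBm expressed in microwatts.
        for dbm in (-18.0, -23.4):
            print(f"{dbm} dBm = {1e-3 * 10 ** (dbm / 10) * 1e6:.2f} uW")
        # prints ~15.85 uW and ~4.57 uW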

  18. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    Science.gov (United States)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted by integrating the sensitivity components from each discipline of the coupled system. Numerical results verify the accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. The complex-variable implementation of sensitivity analysis in DYMORE and the coupled FUN3D/DYMORE system is verified by comparison with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
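
    The complex-variable approach mentioned above is the classic complex-step derivative: perturb an input along the imaginary axis and read the derivative from the imaginary part of the output, which avoids the subtractive cancellation of finite differences. A generic sketch, not the FUN3D/DYMORE code:

        import numpy as np

        def complex_step_derivative(f, x, h=1e-30):
            # d f / d x  ~  Im[f(x + i*h)] / h, exact to machine precision
            # because no difference of nearly equal numbers is formed.
            return np.imag(f(x + 1j * h)) / h

        # Check on f(x) = x*sin(x); the analytic derivative is sin(x) + x*cos(x).
        f = lambda x: x * np.sin(x)
        print(complex_step_derivative(f, 1.3), np.sin(1.3) + 1.3 * np.cos(1.3))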

  19. Exclusion of children with intellectual disabilities from regular ...

    African Journals Online (AJOL)

    This study investigated why teachers exclude children with intellectual disability (ID) from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data. Results revealed that 57.4% of regular teachers could not cope with children with ID ...

  20. On infinite regular and chiral maps

    OpenAIRE

    Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán

    2015-01-01

    We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.

  1. 29 CFR 779.18 - Regular rate.

    Science.gov (United States)

    2010-07-01

    ... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  2. A highly sensitive monoclonal antibody based biosensor for quantifying 3–5 ring polycyclic aromatic hydrocarbons (PAHs) in aqueous environmental samples

    Directory of Open Access Journals (Sweden)

    Xin Li

    2016-03-01

    Immunoassays based on monoclonal antibodies (mAbs) are highly sensitive for the detection of polycyclic aromatic hydrocarbons (PAHs) and can be employed to determine concentrations in near real-time. A sensitive generic mAb against PAHs, named 2G8, was developed by a three-step screening procedure. It exhibited nearly uniformly high sensitivity against 3-ring to 5-ring unsubstituted PAHs and their common environmental methylated PAHs, with IC50 values between 1.68 and 31 μg/L (ppb). 2G8 has been successfully applied on the KinExA Inline Biosensor system for quantifying 3–5 ring PAHs in aqueous environmental samples. PAHs were detected at concentrations as low as 0.2 μg/L. Furthermore, the analyses only required 10 min per sample. To evaluate the accuracy of the 2G8-based biosensor, the total PAH concentrations in a series of environmental samples analyzed by biosensor and GC–MS were compared. In most cases, the results yielded a good correlation between methods. This indicates that a biosensor based on the generic antibody 2G8 holds significant promise as a low-cost, rapid method for PAH determination in aqueous samples. Keywords: Monoclonal antibody, PAH, Pore water, Biosensor, Pyrene
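
    Calibration curves behind IC50 values of this kind are commonly fitted with a four-parameter logistic (4PL) model. The sketch below, with made-up readings, is a generic illustration and not the KinExA analysis software:

        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(c, top, bottom, ic50, slope):
            # Competitive-binding curve: signal falls from `top` to `bottom`
            # as analyte concentration c passes the midpoint ic50.
            return bottom + (top - bottom) / (1.0 + (c / ic50) ** slope)

        conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])     # ug/L (toy)
        signal = np.array([0.98, 0.95, 0.80, 0.55, 0.30, 0.15, 0.08])
        popt, _ = curve_fit(four_pl, conc, signal, p0=[1.0, 0.05, 3.0, 1.0])
        print("fitted IC50 (ug/L):", popt[2])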

  3. A regularization method for solving the Poisson equation for mixed unbounded-periodic domains

    DEFF Research Database (Denmark)

    Spietz, Henrik Juul; Mølholm Hejlesen, Mads; Walther, Jens Honoré

    2018-01-01

    The regularized unbounded-periodic Green's functions can be implemented in an FFT-based Poisson solver to obtain a convergence rate corresponding to the regularization order of the Green's function. The high order is achieved without any additional computational cost over the conventional FFT-based Poisson solver and enables the calculation of the derivative of the solution to the same high order by direct spectral differentiation. We illustrate an application of the FFT-based Poisson solver by using it with a vortex particle mesh method for the approximation of incompressible flow for a problem with a single periodic...
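
    In the fully periodic limit the FFT-based Poisson solve reduces to a division by |k|^2 in spectral space. The sketch below shows that baseline only, under the assumption of a zero-mean source; the paper's regularized Green's functions for the mixed unbounded-periodic case are not reproduced here.

        import numpy as np

        def poisson_periodic_2d(rho, Lx=1.0, Ly=1.0):
            """Solve -laplacian(phi) = rho on a periodic box; rho must have zero mean."""
            ny, nx = rho.shape
            kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
            ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
            KX, KY = np.meshgrid(kx, ky)
            k2 = KX ** 2 + KY ** 2
            k2[0, 0] = 1.0                     # avoid dividing the zero mode
            phi_hat = np.fft.fft2(rho) / k2
            phi_hat[0, 0] = 0.0                # fix the arbitrary constant
            return np.real(np.fft.ifft2(phi_hat))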

  4. Continuum regularized Yang-Mills theory

    International Nuclear Information System (INIS)

    Sadun, L.A.

    1987-01-01

    Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.

  5. Real-time, single-step bioassay using nanoplasmonic resonator with ultra-high sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiang; Ellman, Jonathan A; Chen, Fanqing Frank; Su, Kai-Hang; Wei, Qi-Huo; Sun, Cheng

    2014-04-01

    A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.

  6. Transitioning high sensitivity cardiac troponin I (hs-cTnI) into routine diagnostic use: More than just a sensitivity issue

    LENUS (Irish Health Repository)

    Lee, Graham R

    2016-04-01

    High sensitivity cardiac troponin T and I (hs-cTnT and hs-cTnI) assays show analytical, diagnostic and prognostic improvements over contemporary sensitive cTn assays. However, given the importance of troponin in the diagnosis of myocardial infarction, implementing this test requires rigorous analytical and clinical verification across the total testing pathway; such verification was the aim of this study.

  7. Performance of high-resolution position-sensitive detectors developed for storage-ring decay experiments

    International Nuclear Information System (INIS)

    Yamaguchi, T.; Suzaki, F.; Izumikawa, T.; Miyazawa, S.; Morimoto, K.; Suzuki, T.; Tokanai, F.; Furuki, H.; Ichihashi, N.; Ichikawa, C.; Kitagawa, A.; Kuboki, T.; Momota, S.; Nagae, D.; Nagashima, M.; Nakamura, Y.; Nishikiori, R.; Niwa, T.; Ohtsubo, T.; Ozawa, A.

    2013-01-01

    Highlights: • Position-sensitive detectors were developed for storage-ring decay spectroscopy. • Fiber scintillation and silicon strip detectors were tested with heavy ion beams. • A new fiber scintillation detector showed an excellent position resolution. • Position and energy detection by silicon strip detectors enable full identification. -- Abstract: As next-generation spectroscopic tools, heavy-ion cooler storage rings will provide a unique platform for experiments with highly charged RI beams. Decay spectroscopy of highly charged rare isotopes provides important information relevant to stellar conditions, such as the s- and r-process nucleosynthesis. In-ring decay products of highly charged RI will be momentum-analyzed and will reach a position-sensitive detector set-up located outside the storage orbit. To realize such in-ring decay experiments, we have developed and tested two types of high-resolution position-sensitive detectors: silicon strips and scintillating fibers. The beam test experiments yielded excellent position resolutions for both detectors, which will be available for future storage-ring experiments.

  8. High hunger state increases olfactory sensitivity to neutral but not food odors.

    Science.gov (United States)

    Stafford, Lorenzo D; Welbeck, Kimberley

    2011-01-01

    Understanding how hunger state relates to olfactory sensitivity has become more urgent due to its possible role in obesity. In 2 studies (within-subjects: n = 24, between-subjects: n = 40), participants were provided with lunch before (satiated state) or after (nonsatiated state) testing and completed a standardized olfactory threshold test with a neutral odor (Experiments 1 and 2) and a discrimination test with a food odor (Experiment 2). Experiment 1 revealed that olfactory sensitivity was greater in the nonsatiated versus satiated state, with additionally increased sensitivity for the low body mass index (BMI) group compared with the high BMI group. Experiment 2 replicated this effect for neutral odors, but in the case of food odors, those in a satiated state had greater acuity. Additionally, whereas the high BMI group had higher acuity to food odors in the satiated versus nonsatiated state, no such differences were found for the low BMI group. The research here is the first to demonstrate how olfactory acuity changes as a function of hunger state and relatedness of odor to food, and that BMI can predict differences in olfactory sensitivity.

  9. Effects of attitude, social influence, and self-efficacy model factors on regular mammography performance in life-transition aged women in Korea.

    Science.gov (United States)

    Lee, Chang Hyun; Kim, Young Im

    2015-01-01

    This study analyzed predictors of regular mammography performance in Korea. In addition, we determined factors affecting regular mammography performance in life-transition aged women by applying an attitude, social influence, and self-efficacy (ASE) model. Data were collected from women aged over 40 years residing in province J in Korea. The 178 enrolled subjects provided informed voluntary consent prior to completing a structured questionnaire. The overall regular mammography performance rate of the subjects was 41.6%. Older age, city residency, high income and part-time employment were associated with high regular mammography performance. Among women who had undergone more breast self-examinations (BSE) or more doctors' physical examinations (PE), there were higher regular mammography performance rates. All three ASE model factors were significantly associated with regular mammography performance. Women with a high level of positive ASE values had a significantly high regular mammography performance rate. Within the ASE model, self-efficacy and social influence were particularly important. Logistic regression analysis explained 34.7% of regular mammography performance, and PE experience (β=4.645, p=.003), part-time job (β=4.010, p=.050), self-efficacy (β=1.820, p=.026) and social influence (β=1.509, p=.038) were significant factors. Promotional strategies that could improve self-efficacy, reinforce social influence and reduce geographical, time and financial barriers are needed to increase the regular mammography performance rate in life-transition aged women.

  10. Highly Sensitive and Selective Gas Sensor Using Hydrophilic and Hydrophobic Graphenes

    Science.gov (United States)

    Some, Surajit; Xu, Yang; Kim, Youngmin; Yoon, Yeoheung; Qin, Hongyi; Kulkarni, Atul; Kim, Taesung; Lee, Hyoyoung

    2013-01-01

    New hydrophilic 2D graphene oxide (GO) nanosheets with various oxygen functional groups were employed to maintain high sensitivity in highly unfavorable environments (extremely high humidity, strong acidic or basic). Novel one-headed polymer optical fiber sensor arrays using hydrophilic GO and hydrophobic reduced graphene oxide (rGO) were carefully designed, leading to the selective sensing of volatile organic gases for the first time. The two physically different surfaces of GO and rGO could provide the sensing ability to distinguish between tetrahydrofuran (THF) and dichloromethane (MC), respectively, which is the most challenging issue in the area of gas sensors. The eco-friendly physical properties of GO allowed for faster sensing and higher sensitivity when compared to previous results for rGO even under extreme environments of over 90% humidity, making it the best choice for an environmentally friendly gas sensor. PMID:23736838

  11. Applications of molecules as high-resolution, high-sensitivity threshold electron detectors

    International Nuclear Information System (INIS)

    Chutjian, A.

    1991-01-01

    The goal of the work under the contract entitled "Applications of Molecules as High-Resolution, High-Sensitivity Threshold Electron Detectors" (DoE IAA No. DE-AI01-83ER13093 Mod. A006) was to explore the electron attachment properties of a variety of molecules at electron energies not accessible by other experimental techniques. As a result of this work, not only was a large body of basic data measured on attachment cross sections and rate constants, but extensive theoretical calculations were also carried out to verify the underlying phenomenon of s-wave attachment. Important outgrowths of this work were also realized in other areas of research. The basic data have applications in fields such as combustion, soot reduction, rocket-exhaust modification, threshold photoelectron spectroscopy, and trace species detection.

  12. Contributions to sensitivity analysis and generalized discriminant analysis; Contributions a l'analyse de sensibilite et a l'analyse discriminante generalisee

    Energy Technology Data Exchange (ETDEWEB)

    Jacques, J

    2005-12-15

    Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how its output variables react to variations in its inputs. Variance-based methods quantify the share of the variance of the model response that is due to each input variable and to each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Since classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods in a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)
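
    The variance-based indices discussed above can be estimated with a simple pick-freeze Monte Carlo scheme. A sketch for the first-order index S_i, assuming independent U(0,1) inputs; correlated inputs, the thesis's second topic, require the group-wise treatment described above.

        import numpy as np

        def first_order_sobol(model, d, i, n=100_000, seed=0):
            """Pick-freeze estimate of S_i = Var(E[Y|X_i]) / Var(Y)."""
            rng = np.random.default_rng(seed)
            A = rng.random((n, d))
            B = rng.random((n, d))
            B[:, i] = A[:, i]                 # freeze input i, resample the rest
            yA, yB = model(A), model(B)
            return np.cov(yA, yB)[0, 1] / np.var(yA)

        # Additive test: Y = X1 + 2*X2 gives S_1 = 0.2 and S_2 = 0.8.
        model = lambda X: X[:, 0] + 2.0 * X[:, 1]
        print(first_order_sobol(model, d=2, i=1))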

  13. Sensitivity and fidelity of DNA microarray improved with integration of Amplified Differential Gene Expression (ADGE)

    Directory of Open Access Journals (Sweden)

    Ile Kristina E

    2003-07-01

    Background: The ADGE technique is a method designed to magnify the ratios of gene expression before detection. It improves the detection sensitivity to small changes in gene expression and requires a small amount of starting material. However, the throughput of ADGE is low. We integrated ADGE with DNA microarray (ADGE microarray) and compared it with regular microarray. Results: When ADGE was integrated with DNA microarray, a quantitative relationship of a power function between detected and input ratios was found. Because of ratio magnification, ADGE microarray was better able to detect small changes in gene expression in a drug-resistant model cell line system. The PCR amplification of templates and efficient labeling reduced the requirement for starting material to as little as 125 ng of total RNA for one slide hybridization and enhanced the signal intensity. Integration of ratio magnification, template amplification and efficient labeling in ADGE microarray reduced artifacts in microarray data and improved detection fidelity. The results of ADGE microarray were less variable and more reproducible than those of regular microarray. A gene expression profile generated with ADGE microarray characterized the drug-resistant phenotype, particularly with reference to glutathione, proliferation and kinase pathways. Conclusion: ADGE microarray magnified the ratios of differential gene expression in a power function, improved the detection sensitivity and fidelity and reduced the requirement for starting material while maintaining high throughput. ADGE microarray generated a more informative expression pattern than regular microarray.
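
    The power-function relationship reported above implies that the magnification can be inverted by a linear fit in log-log space. A hedged sketch with invented calibration numbers; the actual ADGE calibration values are not reproduced here.

        import numpy as np

        # Assumed relation: detected = a * input_ratio**k, with k > 1 magnifying.
        input_ratio = np.array([1.0, 2.0, 4.0, 8.0])   # illustrative spike-ins
        detected = np.array([1.0, 4.1, 15.8, 63.0])    # illustrative readout
        k, log_a = np.polyfit(np.log(input_ratio), np.log(detected), 1)
        recovered = (detected / np.exp(log_a)) ** (1.0 / k)  # undo magnification
        print(k, recovered)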

  14. Flexible Semitransparent Energy Harvester with High Pressure Sensitivity and Power Density Based on Laterally Aligned PZT Single-Crystal Nanowires.

    Science.gov (United States)

    Zhao, Quan-Liang; He, Guang-Ping; Di, Jie-Jian; Song, Wei-Li; Hou, Zhi-Ling; Tan, Pei-Pei; Wang, Da-Wei; Cao, Mao-Sheng

    2017-07-26

    A flexible semitransparent energy harvester is assembled based on laterally aligned Pb(Zr0.52Ti0.48)O3 (PZT) single-crystal nanowires (NWs). The harvester presents a maximum open-circuit voltage of 10 V and a stable area power density of 0.27 μW/cm2. A high pressure sensitivity of 0.14 V/kPa is obtained in dynamic pressure sensing, much larger than the values reported for other energy harvesters based on piezoelectric single-crystal NWs. Furthermore, theoretical and finite element analyses confirm that the piezoelectric voltage constant g33 of the PZT NWs is competitive with lead-based bulk single crystals and ceramics, and that the enhanced pressure sensitivity and power density are substantially linked to the flexible structure with laterally aligned PZT NWs. The energy harvester in this work holds great potential for flexible and transparent sensing and self-powered systems.

  15. Factors associated with regular dental visits among hemodialysis patients

    Science.gov (United States)

    Yoshioka, Masami; Shirayama, Yasuhiko; Imoto, Issei; Hinode, Daisuke; Yanagisawa, Shizuko; Takeuchi, Yuko; Bando, Takashi; Yokota, Narushi

    2016-01-01

    AIM: To investigate awareness and attitudes about preventive dental visits among dialysis patients and to clarify the barriers to visiting the dentist. METHODS: Subjects included 141 dentate outpatients receiving hemodialysis treatment at two facilities, one with a dental department and the other without. We used a structured questionnaire to interview participants about their awareness of oral health management issues for dialysis patients, perceived oral symptoms and attitudes about dental visits. Bivariate analysis using the χ2 test was conducted to determine associations between study variables and regular dental check-ups. Binomial logistic regression analysis was used to determine factors associated with regular dental check-ups. RESULTS: There were no significant differences in patient demographics between the two participating facilities, including attitudes about dental visits; therefore, we included all patients in the following analyses. Few patients (4.3%) had been referred to a dentist by a medical doctor or nurse. Although 80.9% of subjects had a primary dentist, only 34.0% received regular dental check-ups. The most common reasons cited for not seeking dental care were that visits are burdensome and a lack of perceived need. Patients with gum swelling or bleeding were much more likely to be in the group not receiving routine dental check-ups (χ2 test, P < 0.01). Logistic regression analysis demonstrated that receiving dental check-ups was associated with awareness that oral health management is more important for dialysis patients than for others and with having a primary dentist (P < 0.05). CONCLUSION: Dialysis patients should be educated about the importance of preventive dental care. Medical providers are expected to participate in promoting dental visits among dialysis patients.

  16. Highly sensitive electrochemical determination of 1-naphthol based on high-index facet SnO2 modified electrode

    International Nuclear Information System (INIS)

    Huang Xiaofeng; Zhao Guohua; Liu Meichuan; Li Fengting; Qiao Junlian; Zhao Sichen

    2012-01-01

    Highlights: ► High-index faceted SnO2 is employed in electrochemical analysis for the first time. ► High-index faceted SnO2 has excellent electrochemical activity toward 1-naphthol. ► Highly sensitive determination of 1-naphthol is realized on high-index faceted SnO2. ► The detection limit of 1-naphthol is as low as 5 nM on high-index faceted SnO2. ► Electro-oxidation kinetics for 1-naphthol on the novel electrode are discussed. - Abstract: SnO2 nanooctahedra with {221} high-index facets (HIF) were synthesized by a simple hydrothermal method and employed for the first time in the sensitive electrochemical sensing of a typical organic pollutant, 1-naphthol (1-NAP). The constructed HIF SnO2 modified glassy carbon electrode (HIF SnO2/GCE) possessed the advantages of a large effective electrode area, a high electron transfer rate, and a low charge transfer resistance. These improved electrochemical properties allowed high electrocatalytic performance, many effective active sites and a high adsorption capacity for 1-NAP on HIF SnO2/GCE. Cyclic voltammetry (CV) results showed that the electrochemical oxidation of 1-NAP obeyed a two-electron transfer process and that the electrode reaction was under diffusion control on HIF SnO2/GCE. Using differential pulse voltammetry (DPV), electrochemical detection of 1-NAP was conducted on HIF SnO2/GCE with a limit of detection as low as 5 nM, which is relatively low compared with the literature. The electrode also showed good stability in comparison with reported values. Satisfactory results were obtained, with average recoveries in the range of 99.7–103.6% in real water sample detection. A promising device for the electrochemical detection of 1-NAP with high sensitivity has therefore been provided.

  17. Mutual interaction between high and low stereo-regularity components for crystallization and melting behaviors of polypropylene blend fibers

    Science.gov (United States)

    Kawai, Kouya; Kohri, Youhei; Takarada, Wataru; Takebe, Tomoaki; Kanai, Toshitaka; Kikutani, Takeshi

    2016-03-01

    Crystallization and melting behaviors of blend fibers of two types of polypropylene (PP), i.e. high stereo-regularity/high molecular weight PP (HPP) and low stereo-regularity/low molecular weight PP (LPP), were investigated. Blend fibers with various HPP/LPP compositions were prepared through the melt spinning process. Differential scanning calorimetry (DSC), temperature-modulated DSC (TMDSC) and wide-angle X-ray diffraction (WAXD) analyses were applied to clarify the crystallization and melting behaviors of the individual components. In the DSC measurement of blend fibers with high LPP composition, continuous endothermic heat was detected between the melting peak of LPP at around 40 °C and that of HPP at around 160 °C. This endothermic heat was more distinct for blend fibers with higher LPP composition, indicating that the melting of LPP during heating was hindered by the presence of HPP crystals. On the other hand, heat of crystallization was detected at around 90 °C for blend fibers with LPP contents of 30 to 70 wt%, indicating that crystallization of the HPP component was taking place during the heating of as-spun blend fibers in the DSC measurement. Through the TMDSC analysis, reorganization of the crystalline structure through simultaneous melting and re-crystallization was detected for the HPP and blend fibers, whereas re-crystallization was not detected during the melting of LPP fibers. In the WAXD analysis during heating, the amount of α-form crystal was almost constant up to melting in single-component HPP fibers, whereas in blend fibers there was a distinct increase in the intensity of crystalline reflections from around 100 °C, right after the melting of LPP. These results suggest that the crystallization of HPP in the spinning process, as well as during the conditioning process after spinning, was hindered by the presence of LPP.

  20. Regular scattering patterns from near-cloaking devices and their implications for invisibility cloaking

    International Nuclear Information System (INIS)

    Kocyigit, Ilker; Liu, Hongyu; Sun, Hongpeng

    2013-01-01

    In this paper, we consider invisibility cloaking via the transformation optics approach through a 'blow-up' construction. An ideal cloak makes use of singular cloaking materials. 'Blow-up-a-small-region' and 'truncation-of-singularity' constructions are introduced to avoid the singular structure, but they yield only near-cloaks. The literature has developed various mechanisms to achieve high-accuracy approximate near-cloaking devices and, from a practical viewpoint, to nearly cloak arbitrary contents. We study the problem from a different viewpoint. It is shown that for those regularized cloaking devices, the corresponding scattered wave fields due to an incident plane wave have regular patterns. The regular patterns are both a curse and a blessing. On the one hand, the regular wave pattern betrays the location of a cloaking device, an intrinsic defect of the 'blow-up' construction; this is particularly the case for the construction employing a high-loss layer lining. Indeed, our numerical experiments show robust reconstructions of the location, even when implementing phaseless cross-section data. The construction employing a high-density layer lining shows a more promising feature. On the other hand, it is shown that one can introduce an internal point source to produce a canceling scattering pattern and thereby achieve a near-cloak of an arbitrary order of accuracy. (paper)

  1. Application of adjoint sensitivity theory to performance assessment of hydrogeologic concerns

    International Nuclear Information System (INIS)

    Metcalfe, D.E.; Harper, W.V.

    1986-01-01

    Sensitivity and uncertainty analyses are important components of performance assessment activities for potential high-level radioactive waste repositories. The application of the adjoint sensitivity technique is demonstrated for the Leadville Limestone in the Paradox Basin, Utah. The adjoint technique is used sequentially to first assist in the calibration of the regional conceptual ground-water flow model to measured potentiometric data. Second, it is used to evaluate the sensitivities of the calculated pressures used to define local scale boundary conditions to regional parameters and boundary conditions

  2. Regularity effect in prospective memory during aging

    OpenAIRE

    Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique

    2016-01-01

    Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...

  3. High Sensitivity Indium Phosphide Based Avalanche Photodiode Focal Plane Arrays, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — nLight has demonstrated highly-uniform APD arrays based on the highly sensitive InGaAs/InP material system. These results provide great promise for achieving the...

  4. Eating high fat chow increases the sensitivity of rats to 8-OH-DPAT-induced lower lip retraction.

    Science.gov (United States)

    Li, Jun-Xu; Ju, Shutian; Baladi, Michelle G; Koek, Wouter; France, Charles P

    2011-12-01

    Eating high fat food can alter sensitivity to drugs acting on dopamine systems; this study examined whether eating high fat food alters sensitivity to a drug acting on serotonin (5-HT) systems. Sensitivity to (+)-8-hydroxy-2-(dipropylamino) tetralin hydrobromide (8-OH-DPAT; 5-HT1A receptor agonist)-induced lower lip retraction was examined in separate groups (n=8-9) of rats with free access to standard (5.7% fat) or high fat (34.3% fat) chow; sensitivity to quinpirole (dopamine D3/D2 receptor agonist)-induced yawning was also examined. Rats eating high fat chow gained more body weight than rats eating standard chow and, after 6 weeks of eating high fat chow, they were more sensitive to 8-OH-DPAT (0.01-0.1 mg/kg)-induced lower lip retraction and quinpirole (0.0032-0.32 mg/kg)-induced yawning. These changes were not reversed when rats that previously ate high fat chow were switched to eating standard chow and sensitivity to 8-OH-DPAT and quinpirole increased when rats that previously ate standard chow ate high fat chow. These data extend previous results showing changes in sensitivity to drugs acting on dopamine systems in animals eating high fat chow to a drug acting at 5-HT1A receptors and they provide support for the notion that eating certain foods impacts sensitivity to drugs acting on monoamine systems.

  5. 20 CFR 226.14 - Employee regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  6. Mass Spectrometry-based Assay for High Throughput and High Sensitivity Biomarker Verification

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Xuejiang; Tang, Keqi

    2017-06-14

    Searching for disease-specific biomarkers has become a major undertaking in the biomedical research field, as the effective diagnosis, prognosis and treatment of many complex human diseases are largely determined by the availability and the quality of biomarkers. A successful biomarker, as an indicator of a specific biological or pathological process, is usually selected from a large group of candidates by a strict verification and validation process. To be clinically useful, validated biomarkers must be detectable and quantifiable by the selected testing techniques in their related tissues or body fluids. Due to its easy accessibility, protein biomarkers would ideally be identified in blood plasma or serum. However, most disease-related protein biomarkers in blood exist at very low concentrations (<1 ng/mL) and are "masked" by many non-significant species at concentrations orders of magnitude higher. The extreme requirements on measurement sensitivity, dynamic range and specificity make method development extremely challenging. Current clinical protein biomarker measurement relies primarily on antibody-based immunoassays, such as ELISA. Although the technique is sensitive and highly specific, the development of a high-quality protein antibody is both expensive and time consuming. The limited capability of assay multiplexing also makes the measurement an extremely low-throughput one, rendering it impractical when hundreds to thousands of potential biomarkers need to be quantitatively measured across multiple samples. Mass spectrometry (MS)-based assays have recently been shown to be a viable alternative for high-throughput, quantitative candidate protein biomarker verification. Among them, the triple quadrupole MS based assay is the most promising one. When coupled with liquid chromatography (LC) separation and an electrospray ionization (ESI) source, a triple quadrupole mass spectrometer operating in a special selected reaction monitoring (SRM) mode...

  7. High-Sensitivity Measurement of Density by Magnetic Levitation.

    Science.gov (United States)

    Nemiroski, Alex; Kumar, A A; Soh, Siowling; Harburg, Daniel V; Yu, Hai-Dong; Whitesides, George M

    2016-03-01

    This paper presents methods that use Magnetic Levitation (MagLev) to measure very small differences in density of solid diamagnetic objects suspended in a paramagnetic medium. Previous work in this field has shown that, while it is a convenient method, standard MagLev (i.e., where the direction of magnetization and gravitational force are parallel) cannot resolve small differences in density for millimeter-scale objects because (i) objects close in density prevent each other from reaching an equilibrium height due to hard contact and excluded volume, and (ii) using weaker magnets or reducing the magnetic susceptibility of the medium destabilizes the magnetic trap. The present work investigates the use of weak magnetic gradients parallel to the faces of the magnets as a means of increasing the sensitivity of MagLev without destabilization. Configuring the MagLev device in a rotated state (i.e., where the direction of magnetization and gravitational force are perpendicular) relative to the standard configuration enables simple measurements along the axes with the highest sensitivity to changes in density. Manipulating the distance of separation between the magnets or the lengths of the magnets (along the axis of measurement) enables the sensitivity to be tuned. These modifications enable an improvement in resolution of up to 100-fold over the standard configuration, and measurements with resolution down to 10⁻⁶ g/cm³. Three examples of characterizing the small differences in density among samples of materials having ostensibly indistinguishable densities (Nylon spheres, PMMA spheres, and drug spheres) demonstrate the applicability of rotated MagLev to measuring the density of small (0.1-1 mm) objects with high sensitivity. This capability will be useful in materials science, separations, and quality control of manufactured objects.
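
    Over the working range of a MagLev device the levitation height is, to a good approximation, linear in sample density, so density standards give a simple least-squares calibration. A generic sketch with hypothetical bead data, not values from this paper:

        import numpy as np

        # Hypothetical density standards (g/cm^3) and measured heights (mm).
        rho_std = np.array([1.0200, 1.0210, 1.0220, 1.0230])
        height = np.array([7.90, 5.95, 4.05, 2.10])
        slope, intercept = np.polyfit(height, rho_std, 1)  # rho ~ slope*h + b

        def density_from_height(h_mm):
            """Map a measured levitation height to a density estimate."""
            return slope * h_mm + intercept

        print(density_from_height(5.00))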

  8. Regular use of dental care services by adults: patterns of utilization and types of services

    Directory of Open Access Journals (Sweden)

    Maria Beatriz J. Camargo

    2009-09-01

    The aim of this study was to estimate the prevalence of regular use of dental services by adults and identify groups in which this behavior is more frequent. A cross-sectional population-based study was carried out in Pelotas, Rio Grande do Sul, southern Brazil, including 2,961 individuals who answered a standardized questionnaire. Overall prevalence of regular use of dental services was 32.8%. The following variables were positively associated with regular use: female gender, age > 60 years, no partner, high educational level, high economic status, private service use, good/excellent self-rated oral health, and no perceived need for dental treatment. Those who had received orientation on prevention and expressed a favorable view of the dentist had higher odds of being regular users. Especially among lower-income individuals, regular use was infrequent (15%). When the analysis was restricted to users of public dental services, schooling was still positively associated with the outcome. Dental services, especially in the public sector, should develop strategies to increase regular and preventive use.

  9. Maternal sensitivity: a concept analysis.

    Science.gov (United States)

    Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae

    2008-11-01

    The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.

  10. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs.

    In this contribution, the potential of variational methods for distributed catchment-scale hydrology is examined. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest computational effort (~6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
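
    The adjoint machinery itself fits in a few lines for a toy model. The sketch below uses a linear reservoir s_{t+1} = a*s_t + r_t with cost J = 0.5*sum_t (s_t - obs_t)^2, not the kinematic-wave/Green-Ampt model above: one backward sweep returns dJ/da for roughly the cost of one extra model run, and a finite difference checks it.

        import numpy as np

        def forward(a, s0, r):
            s = [s0]
            for rt in r:
                s.append(a * s[-1] + rt)
            return np.array(s)                     # states s[0..T]

        def cost_and_adjoint_grad(a, s0, r, obs):
            """Cost J = 0.5*sum_t (s_t - obs_t)^2 and dJ/da via the adjoint sweep."""
            s = forward(a, s0, r)
            T = len(r)
            J = 0.5 * np.sum((s[1:] - obs) ** 2)
            lam = np.zeros(T + 1)                  # adjoint states lam_t = dJ/ds_t
            lam[T] = s[T] - obs[T - 1]
            for t in range(T - 1, 0, -1):          # backward (adjoint) sweep
                lam[t] = (s[t] - obs[t - 1]) + a * lam[t + 1]
            dJda = np.sum(lam[1:] * s[:-1])        # accumulate dJ/da along the path
            return J, dJda

        # Verify the adjoint gradient against a finite difference.
        rng = np.random.default_rng(1)
        r, obs, a = rng.random(50), rng.random(50), 0.8
        J, g = cost_and_adjoint_grad(a, 0.0, r, obs)
        eps = 1e-6
        J2, _ = cost_and_adjoint_grad(a + eps, 0.0, r, obs)
        print(g, (J2 - J) / eps)                   # the two values should agree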

  11. High sensitivity neutron activation analysis of environmental and biological standard reference materials

    International Nuclear Information System (INIS)

    Greenberg, R.R.; Fleming, R.F.; Zeisler, R.

    1984-01-01

    Neutron activation analysis is a sensitive method with unique capabilities for the analysis of environmental and biological samples. Since it is based upon the nuclear properties of the elements, it does not suffer from many of the chemical effects that plague other methods of analysis. Analyses can be performed either with no chemical treatment of the sample (instrumentally), or with separations of the elements of interest after neutron irradiation (radiochemically). Typical examples of both types of analysis are discussed, and data obtained for a number of environmental and biological SRMs are presented. (author)

  12. Regular algebra and finite machines

    CERN Document Server

    Conway, John Horton

    2012-01-01

    World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, and commutative regular algebras.

  13. 39 CFR 6.1 - Regular meetings, annual meeting.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  14. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Zhao; Gao, Kun; Chen, Jian; Wang, Dajiang; Wang, Shenghao; Chen, Heng; Bao, Yuan; Shao, Qigang; Wang, Zhili, E-mail: wangnsrl@ustc.edu.cn [National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029 (China); Zhang, Kai [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Zhu, Peiping; Wu, Ziyu, E-mail: wuzy@ustc.edu.cn [National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029, China and Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2015-02-15

    Purpose: Grating-based x-ray phase contrast imaging is considered as one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered, due to its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel fast and low dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison between their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations.
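
    In the PS method each detector pixel records a stepping curve I_k ≈ a + b·cos(2πk/N + φ) over N grating positions; the mean, the visibility b/a and the phase φ (the refraction-sensitive signal) all drop out of the first Fourier coefficient. A minimal sketch of that retrieval; the RP method itself is not reproduced here.

        import numpy as np

        def phase_stepping_retrieval(I):
            """Retrieve mean, visibility, and phase from a stepping curve
            I_k = a + b*cos(2*pi*k/N + phi), k = 0..N-1, via one FFT."""
            X = np.fft.fft(I)
            N = len(I)
            a = X[0].real / N                 # mean intensity
            b = 2 * np.abs(X[1]) / N          # modulation amplitude
            phi = np.angle(X[1])              # phase of the stepping curve
            return a, b / a, phi

        # Round trip on a synthetic curve with 8 phase steps.
        k = np.arange(8)
        I = 100 + 30 * np.cos(2 * np.pi * k / 8 + 0.7)
        print(phase_stepping_retrieval(I))    # ~ (100, 0.3, 0.7)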

  15. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    International Nuclear Information System (INIS)

    Wu, Zhao; Gao, Kun; Chen, Jian; Wang, Dajiang; Wang, Shenghao; Chen, Heng; Bao, Yuan; Shao, Qigang; Wang, Zhili; Zhang, Kai; Zhu, Peiping; Wu, Ziyu

    2015-01-01

    Purpose: Grating-based x-ray phase contrast imaging is considered one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve the phase signal, among which the phase stepping (PS) method is widely used. However, its complex scanning mode and high radiation dose hinder further practical implementation. In contrast, the reverse projection (RP) method is a novel, fast, low-dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods both depend on the system parameters, but in different ways. Comparison of their sensitivities reveals that, for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. One can therefore obtain highly sensitive refraction images in grating-based phase contrast imaging, which is very important for future preclinical and clinical implementations.

  16. High-sensitivity C-reactive protein predicts target organ damage in Chinese patients with metabolic syndrome

    DEFF Research Database (Denmark)

    Zhao, Zhigang; Nie, Hai; He, Hongbo

    2007-01-01

    Observational studies established high-sensitivity C-reactive protein as a risk factor for cardiovascular events in the general population. The goal of this study was to determine the relationship between target organ damage and high-sensitivity C-reactive protein in a cohort of Chinese patients with metabolic syndrome. A total of 1082 consecutive patients of Chinese origin were screened for the presence of metabolic syndrome according to the National Cholesterol Education Program's Adult Treatment Panel III. High-sensitivity C-reactive protein and target organ damage, including cardiac hypertrophy, carotid intima-media thickness, and renal impairment, were investigated. The median (25th and 75th percentiles) of high-sensitivity C-reactive protein in 619 patients with metabolic syndrome was 2.42 mg/L (0.75 and 3.66 mg/L) compared with 1.13 mg/L (0.51 and 2.46 mg/L) among 463 control subjects (P

  17. Short-term regular aerobic exercise reduces oxidative stress produced by acute high intravascular pressure in the adipose microvasculature.

    Science.gov (United States)

    Robinson, Austin T; Fancher, Ibra S; Sudhahar, Varadarajan; Bian, Jing Tan; Cook, Marc D; Mahmoud, Abeer M; Ali, Mohamed M; Ushio-Fukai, Masuko; Brown, Michael D; Fukai, Tohru; Phillips, Shane A

    2017-05-01

    High blood pressure has been shown to elicit impaired dilation in the vasculature. The purpose of this investigation was to elucidate the mechanisms through which high pressure may elicit vascular dysfunction and determine the mechanisms through which regular aerobic exercise protects arteries against high pressure. Male C57BL/6J mice were subjected to 2 wk of voluntary running (~6 km/day) for comparison with sedentary controls. Hindlimb adipose resistance arteries were dissected from mice for measurements of flow-induced dilation (FID; with or without high intraluminal pressure exposure) or protein expression of NADPH oxidase II (NOX II) and superoxide dismutase (SOD). Microvascular endothelial cells were subjected to high physiological laminar shear stress (20 dyn/cm²) or static conditions and treated with ANG II + pharmacological inhibitors. Cells were analyzed for the detection of ROS or collected for Western blot determination of NOX II and SOD. Resistance arteries from exercised mice demonstrated preserved FID after high-pressure exposure, whereas FID was impaired in control mouse arteries. Inhibition of ANG II or NOX II restored impaired FID in control mouse arteries. High pressure increased superoxide levels in control mouse arteries but not in exercised mouse arteries, which exhibited a greater ability to convert superoxide to H₂O₂. Arteries from exercised mice exhibited less NOX II protein expression, more SOD isoform expression, and less sensitivity to ANG II. Endothelial cells subjected to laminar shear stress exhibited less NOX II subunit expression. In conclusion, aerobic exercise prevents high pressure-induced vascular dysfunction through an improved redox environment in the adipose microvasculature. NEW & NOTEWORTHY We describe potential mechanisms contributing to aerobic exercise-conferred protection against high intravascular pressure. Subcutaneous adipose microvessels from exercised mice express less NADPH oxidase (NOX) II and more superoxide dismutase.

  18. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In this thesis, we discuss the existence and numerical approximation of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  19. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-04-19

    In this thesis, we discuss the existence and numerical approximation of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  20. Automating InDesign with Regular Expressions

    CERN Document Server

    Kahrel, Peter

    2006-01-01

    If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
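
    The Short Cut's own examples target InDesign's GREP dialect and its JavaScript scripting API; as a neutral sketch of the find-and-replace idea it teaches, here is an equivalent substitution in Python's re module (the pattern and sample text are invented for illustration).

```python
import re

# GREP-style cleanup: replace straight double quotes with typographic ones.
text = 'He said "hello" and then "goodbye".'
cleaned = re.sub(r'"([^"]*)"', '\u201C\\1\u201D', text)
print(cleaned)  # He said “hello” and then “goodbye”.
```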

  1. Highly sensitive wearable strain sensor based on silver nanowires and nanoparticles

    Science.gov (United States)

    Shengbo, Sang; Lihua, Liu; Aoqun, Jian; Qianqian, Duan; Jianlong, Ji; Qiang, Zhang; Wendong, Zhang

    2018-06-01

    Here, we propose a highly sensitive and stretchable strain sensor based on silver nanoparticles and nanowires (Ag NPs and NWs), advancing the rapid development of electronic skin. To improve the sensitivity of strain sensors based on silver nanowires (Ag NWs), Ag NPs and NWs were added to polydimethylsiloxane (PDMS) as auxiliary fillers. The Ag NPs increase the conductive paths for electrons, leading to the low resistance of the resulting sensor (14.9 Ω). The strain sensor based on Ag NPs and NWs showed strong piezoresistivity, with a tunable gauge factor (GF) of 3766 and a linear change in resistance as strain increased from 0% to 28.1%. The high GF demonstrates the irreplaceable role of the Ag NPs in the sensor. Moreover, the applicability of our high-performance strain sensor has been demonstrated by its ability to sense movements caused by human talking, finger bending, wrist raising and walking.
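
    The gauge factor quoted above is the standard piezoresistive figure of merit, GF = (ΔR/R0)/ε. A small sketch using the numbers from the abstract; the resistance change computed here is an inference for illustration, not a reported value.

```python
# Gauge factor of a piezoresistive strain sensor: GF = (dR/R0) / strain.
R0 = 14.9       # ohm, unstrained resistance quoted in the abstract
strain = 0.281  # 28.1 %, the upper end of the quoted linear range
GF = 3766       # gauge factor quoted in the abstract

# Resistance change implied over the full linear range (illustrative only).
delta_R = GF * strain * R0
print(f"implied resistance change: {delta_R / 1000:.1f} kOhm")
```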

  2. A Variance Minimization Criterion to Feature Selection Using Laplacian Regularization.

    Science.gov (United States)

    He, Xiaofei; Ji, Ming; Zhang, Chiyuan; Bao, Hujun

    2011-10-01

    In many information processing tasks, one is often confronted with very high-dimensional data. Feature selection techniques are designed to find the meaningful feature subset of the original features which can facilitate clustering, classification, and retrieval. In this paper, we consider the feature selection problem in unsupervised learning scenarios, which is particularly difficult due to the absence of class labels that would guide the search for relevant information. Based on Laplacian regularized least squares, which finds a smooth function on the data manifold and minimizes the empirical loss, we propose two novel feature selection algorithms which aim to minimize the expected prediction error of the regularized regression model. Specifically, we select those features such that the size of the parameter covariance matrix of the regularized regression model is minimized. Motivated from experimental design, we use trace and determinant operators to measure the size of the covariance matrix. Efficient computational schemes are also introduced to solve the corresponding optimization problems. Extensive experimental results over various real-life data sets have demonstrated the superiority of the proposed algorithms.
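
    As a hedged sketch of the experimental-design idea described above (select features so that the parameter covariance of the regularized model stays small), the following uses greedy forward selection under a simplified A-optimality-style criterion, trace((X_S^T X_S + λI)^{-1}), with plain ridge regularization standing in for the paper's Laplacian-regularized model; data and parameters are synthetic.

```python
import numpy as np

def greedy_trace_selection(X, n_select, lam=1e-2):
    """Greedy forward selection minimizing trace((X_S^T X_S + lam*I)^-1),
    an A-optimality-style stand-in for the paper's covariance criterion."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_select):
        best_j, best_score = None, np.inf
        for j in remaining:
            Xs = X[:, selected + [j]]
            cov = np.linalg.inv(Xs.T @ Xs + lam * np.eye(Xs.shape[1]))
            if np.trace(cov) < best_score:
                best_j, best_score = j, np.trace(cov)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20)) * rng.uniform(0.2, 2.0, size=20)  # varied scales
print("selected:", greedy_trace_selection(X, n_select=5))
```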

  3. Optimal behaviour can violate the principle of regularity.

    Science.gov (United States)

    Trimmer, Pete C

    2013-07-22

    Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory--based on axioms including transitivity, regularity and the independence of irrelevant alternatives--is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.

  4. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

    Dimensional regularization is introduced in configuration space by Fourier transforming the perturbative momentum-space Green functions in D dimensions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs
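
    For a rotationally invariant function, the Bochner reduction invoked above expresses the D-dimensional Fourier transform as a one-dimensional Hankel-type integral; in a standard normalization (quoted from the general literature, not from this paper):

```latex
\[
  \tilde f(k)
  = \int d^{D}x \, e^{-i k \cdot x} f(|x|)
  = \frac{(2\pi)^{D/2}}{k^{\,D/2-1}}
    \int_{0}^{\infty} f(r)\, J_{D/2-1}(kr)\, r^{D/2}\, dr .
\]
```

    Continuing D to a complex parameter ν in this integral is what produces the ν-dependent singularities and the poles mentioned in the abstract.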

  5. Matrix regularization of 4-manifolds

    OpenAIRE

    Trzetrzelewski, M.

    2012-01-01

    We consider products of two 2-manifolds, such as S^2 x S^2, embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N), i.e. functions on the manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and, as a byproduct, of the 3-algebra, which yields a regularization of S^3).

  6. Sensitivity enhancement by chromatographic peak concentration with ultra-high performance liquid chromatography-nuclear magnetic resonance spectroscopy for minor impurity analysis.

    Science.gov (United States)

    Tokunaga, Takashi; Akagi, Ken-Ichi; Okamoto, Masahiko

    2017-07-28

    High performance liquid chromatography can be coupled with nuclear magnetic resonance (NMR) spectroscopy to give a powerful analytical method known as liquid chromatography-nuclear magnetic resonance (LC-NMR) spectroscopy, which can be used to determine the chemical structures of the components of complex mixtures. However, intrinsic limitations in the sensitivity of NMR spectroscopy have restricted the scope of this procedure, and resolving these limitations remains a critical problem. In this study, we coupled ultra-high performance liquid chromatography (UHPLC) with NMR to give a simple and versatile analytical method with higher sensitivity than conventional LC-NMR. UHPLC separation enabled individual peaks to be concentrated to a volume similar to that of the NMR flow cell, thereby maximizing the sensitivity to its theoretical upper limit. Concentrating compound peaks present at typical impurity levels (5.0-13.1 nmol) in a mixture by UHPLC led to as much as a three-fold increase in the signal-to-noise ratio compared with LC-NMR. Furthermore, we demonstrated the use of UHPLC-NMR for obtaining structural information on a minor impurity in a reaction mixture during actual laboratory-scale development of a synthetic process. With UHPLC-NMR, the experimental run times for chromatography and NMR were greatly reduced compared with LC-NMR. UHPLC-NMR thus successfully overcomes the difficulties associated with LC-NMR analyses of minor components in complex mixtures, which are problematic even when an ultra-high-field magnet and cryogenic probe are used.

  7. Regular Breakfast and Blood Lead Levels among Preschool Children

    Directory of Open Access Journals (Sweden)

    Needleman Herbert

    2011-04-01

    Background: Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children, using data from the China Jintan Child Cohort Study. Methods: Parents completed a questionnaire regarding children's breakfast-eating habits (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurement of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb ≥ 10 μg/dL) was examined using logistic regression modeling. Results: Median B-Pb among children who ate breakfast regularly and those who did not were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater blood zinc levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb compared with children who did not eat breakfast regularly, p = 0.02). Conclusion: The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.
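
    A minimal sketch of the two models the abstract describes (ordinary least squares on log-transformed B-Pb, and logistic regression for B-Pb ≥ 10 μg/dL), run on synthetic stand-in data since the cohort data are not public; covariates and effect sizes below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1344  # cohort size taken from the abstract

# Synthetic stand-ins; the true covariates and coefficients are not public.
breakfast = rng.integers(0, 2, n)              # 1 = eats breakfast regularly
age = rng.uniform(3.0, 6.0, n)                 # preschool age in years
log_bpb = 1.9 - 0.10 * breakfast + 0.02 * age + rng.normal(0, 0.4, n)

# Linear regression of log-transformed B-Pb on breakfast habit and age
X = np.column_stack([np.ones(n), breakfast, age])
beta, *_ = np.linalg.lstsq(X, log_bpb, rcond=None)
print(f"breakfast coefficient on log B-Pb: {beta[1]:+.3f}")  # ~ -0.10

# Logistic regression for elevated lead (B-Pb >= 10 ug/dL)
elevated = (np.exp(log_bpb) >= 10).astype(int)
logit = LogisticRegression().fit(X[:, 1:], elevated)
print("log-odds coefficients (breakfast, age):", logit.coef_[0])
```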

  8. Design of a high-sensitivity classifier based on a genetic algorithm: application to computer-aided diagnosis

    International Nuclear Information System (INIS)

    Sahiner, Berkman; Chan, Heang-Ping; Petrick, Nicholas; Helvie, Mark A.; Goodsitt, Mitchell M.

    1998-01-01

    A genetic algorithm (GA) based feature selection method was developed for the design of high-sensitivity classifiers, which were tailored to yield high sensitivity with high specificity. The fitness function of the GA was based on the receiver operating characteristic (ROC) partial area index, which is defined as the average specificity above a given sensitivity threshold. The GA evolved towards the selection of feature combinations which yielded high specificity in the high-sensitivity region of the ROC curve, regardless of the performance at low sensitivity. This is a desirable quality of a classifier used for breast lesion characterization, since the focus there is to diagnose correctly as many benign lesions as possible without missing malignancies. The high-sensitivity classifier, formulated as Fisher's linear discriminant using GA-selected feature variables, was employed to classify 255 biopsy-proven mammographic masses as malignant or benign. The mammograms were digitized at a pixel size of 0.1 mm x 0.1 mm, and regions of interest (ROIs) containing the biopsied masses were extracted by an experienced radiologist. A recently developed image transformation technique, referred to as the rubber-band straightening transform, was applied to the ROIs. Texture features extracted from the spatial grey-level dependence and run-length statistics matrices of the transformed ROIs were used to distinguish malignant and benign masses. The classification accuracy of the high-sensitivity classifier was compared with that of linear discriminant analysis with stepwise feature selection (LDA-sfs). With proper GA training, the ROC partial area of the high-sensitivity classifier above a true-positive fraction of 0.95 was significantly larger than that of LDA-sfs, although the latter provided a higher total area (A_z) under the ROC curve. By setting an appropriate decision threshold, the high-sensitivity classifier and LDA-sfs correctly
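
    A hedged sketch of the fitness function described above: the ROC partial area index, i.e. the average specificity above a sensitivity threshold (TPF = 0.95 here), wrapped in a deliberately minimal mutation-only search standing in for the paper's full GA. The data, population size and mutation rate are invented, and fitness is evaluated on the training set for brevity.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

def partial_area_index(y, scores, tpf0=0.95):
    """Average specificity above a sensitivity threshold: a uniform-grid
    approximation of the normalized ROC partial area above TPF = tpf0."""
    fpr, tpr, _ = roc_curve(y, scores)
    grid = np.linspace(tpf0, 1.0, 200)
    fpr_at = np.interp(grid, tpr, fpr)  # tpr from roc_curve is nondecreasing
    return float((1.0 - fpr_at).mean())

# Toy stand-in for the 255 masses x texture features (synthetic data)
X = rng.normal(size=(255, 12))
w = np.array([1.0, 0.8, -0.6, 0.5] + [0.0] * 8)
y = (X @ w + rng.normal(0, 1.5, size=255) > 0).astype(int)

def fitness(mask):
    """Partial-area fitness of a feature subset under a Fisher discriminant."""
    if mask.sum() == 0:
        return 0.0
    lda = LinearDiscriminantAnalysis().fit(X[:, mask], y)
    return partial_area_index(y, lda.decision_function(X[:, mask]))

# Minimal mutation-only search standing in for the full GA
pop = rng.integers(0, 2, size=(30, 12)).astype(bool)
for generation in range(20):
    scores = np.array([fitness(m) for m in pop])
    best = pop[scores.argmax()]
    pop = best ^ (rng.random((30, 12)) < 0.1)  # mutate copies of the elite
    pop[0] = best                              # elitism: keep the best as-is
print("selected features:", np.flatnonzero(best), "fitness:", fitness(best))
```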

  9. On the equivalence of different regularization methods

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1985-01-01

    The R̂ (R-circumflex) operation preceded by the regularization procedure is discussed. Some arguments are given according to which the results may depend on the method of regularization introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)

  10. Accreting fluids onto regular black holes via Hamiltonian approach

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)

    2017-08-15

    We investigate the accretion of test fluids onto regular black holes, such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process as different test fluids fall onto these regular black holes. The accreting fluids are classified through their equations of state, according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are examined for these regular black holes. It is noted that the three-velocity depends on the critical points and on the equation-of-state parameter in phase space. (orig.)

  11. Regularization of the Fourier series of discontinuous functions by various summation methods

    Energy Technology Data Exchange (ETDEWEB)

    Ahmad, S.S.; Beghi, L. (Padua Univ. (Italy). Seminario Matematico)

    1983-07-01

    In this paper the regularization, by various summation methods, of the Fourier series of functions containing discontinuities of the first and second kind is studied, and the results of numerical analyses referring to some typical periodic functions are presented. In addition to the Cesaro and Lanczos weightings, a new (cosine) weighting for accelerating the convergence rate is proposed. A comparison is also carried out with the results obtained by Garibotti and Massaro with the punctual Pade approximants (PPA) technique in the case of a periodic step function.
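
    A sketch of the two classical weightings named above, applied to the Fourier series of a unit square wave (the paper's proposed cosine weighting is not reproduced, since its exact form is not given in the abstract): Cesàro weights taper the harmonics linearly, Lanczos σ-factors apply sinc damping, and both suppress the Gibbs overshoot of the raw partial sum.

```python
import numpy as np

def square_wave_partial_sum(x, N, weights=None):
    """Weighted Fourier partial sum of the unit square wave sign(sin x):
    4/pi * sum over odd k of w(k, N) * sin(k x) / k."""
    s = np.zeros_like(x)
    for k in range(1, N + 1, 2):  # square wave has odd harmonics only
        w = 1.0 if weights is None else weights(k, N)
        s += w * 4.0 / (np.pi * k) * np.sin(k * x)
    return s

cesaro = lambda k, N: 1.0 - k / (N + 1.0)  # Fejer/Cesaro taper
lanczos = lambda k, N: np.sinc(k / N)      # Lanczos sigma factors

x = np.linspace(-np.pi, np.pi, 2001)
for name, w in [("raw", None), ("Cesaro", cesaro), ("Lanczos", lanczos)]:
    overshoot = square_wave_partial_sum(x, 64, w).max() - 1.0
    print(f"{name:8s} max overshoot: {overshoot:+.3f}")
```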

  12. Impacts of clustering on noise-induced spiking regularity in the excitatory neuronal networks of subnetworks.

    Science.gov (United States)

    Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua

    2015-01-01

    In this paper, we investigate how clustering factors influence the spiking regularity of neuronal networks of subnetworks. To do so, we fix the averaged coupling probability and the averaged coupling strength, and take as control parameters the cluster number M, the ratio of intra-connection probability to inter-connection probability R, and the ratio of intra-coupling strength to inter-coupling strength S. From the simulation results, we find that the spiking regularity of the neuronal networks varies little with R and S when M is fixed. However, the cluster number M can reduce the spiking regularity to a low level even when the spiking regularity of the corresponding uniform neuronal network is at a high level. Taken together, these results show that clustering factors have little influence on spiking regularity when the total energy, controlled by the averaged coupling strength and the averaged connection probability, is fixed.
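
    The abstract does not state its regularity measure; a common choice in this literature is the coefficient of variation (CV) of interspike intervals, where lower CV means more regular spiking. A minimal sketch under that assumption:

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of interspike intervals: ~1 for Poisson
    (irregular) spiking, ~0 for clock-like (regular) spiking."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

rng = np.random.default_rng(0)
poisson_train = np.cumsum(rng.exponential(10.0, size=500))        # irregular
jittered_clock = np.arange(500) * 10.0 + rng.normal(0, 1.0, 500)  # regular
print(f"Poisson-like  CV: {isi_cv(poisson_train):.2f}")
print(f"near-periodic CV: {isi_cv(jittered_clock):.2f}")
```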

  13. Highly sensitive detection of urinary cadmium to assess personal exposure

    Energy Technology Data Exchange (ETDEWEB)

    Argun, Avni A.; Banks, Ashley M.; Merlen, Gwendolynne; Tempelman, Linda A. [Giner, Inc., 89 Rumford Ave., Newton 02466, MA United States (United States); Becker, Michael F.; Schuelke, Thomas [Fraunhofer USA – CCL, 1449 Engineering Research Ct., East Lansing 48824, MI (United States); Dweik, Badawi M., E-mail: bdweik@ginerinc.com [Giner, Inc., 89 Rumford Ave., Newton 02466, MA United States (United States)

    2013-04-22

    Highlights: ► An electrochemical sensor capable of detecting cadmium at parts-per-billion levels in urine. ► A novel fabrication method for Boron-Doped Diamond (BDD) ultramicroelectrode (UME) arrays. ► Unique combination of BDD UME arrays and a differential pulse voltammetry algorithm. ► High sensitivity, high reproducibility, and very low noise levels. ► Opportunity for portable operation to assess on-site personal exposure. -- Abstract: A series of Boron-Doped Diamond (BDD) ultramicroelectrode arrays were fabricated and investigated for their performance as electrochemical sensors to detect trace level metals such as cadmium. The steady-state diffusion behavior of these sensors was validated using cyclic voltammetry followed by electrochemical detection of cadmium in water and in human urine to demonstrate high sensitivity (>200 μA ppb{sup −1} cm{sup −2}) and low background current (<4 nA). When an array of ultramicroelectrodes was positioned with optimal spacing, these BDD sensors showed a sigmoidal diffusion behavior. They also demonstrated high accuracy with linear dose dependence for quantification of cadmium in a certified reference river water sample from the U.S. National Institute of Standards and Technology (NIST) as well as in a human urine sample spiked with 0.25–1 ppb cadmium.
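
    A generic calibration sketch of the dose-dependence analysis described above: fit current versus concentration over the 0.25-1 ppb range the abstract mentions, read the sensitivity off the slope, and estimate a detection limit as 3σ_blank/slope. The current values and blank noise are invented; only the concentration range and the >200 μA ppb⁻¹ cm⁻² sensitivity scale come from the abstract.

```python
import numpy as np

# Hypothetical calibration points (currents are synthetic, for illustration).
conc = np.array([0.25, 0.50, 0.75, 1.00])        # ppb
current = np.array([55.0, 108.0, 162.0, 214.0])  # uA/cm^2

slope, intercept = np.polyfit(conc, current, 1)
print(f"sensitivity (slope): {slope:.0f} uA ppb^-1 cm^-2")

sigma_blank = 1.2  # uA/cm^2, assumed blank noise level
print(f"detection limit (3*sigma/slope): {3 * sigma_blank / slope:.3f} ppb")
```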

  14. Online high sensitivity measurement system for transuranic aerosols

    International Nuclear Information System (INIS)

    Kordas, J.F.; Phelps, P.L.

    1976-01-01

    A measurement system for transuranic aerosols has been designed that can withstand the corrosive nature of stack effluents and yet has extremely high sensitivity. It will be capable of measuring 1 maximum permissible concentration (MPC) of plutonium or americium in 30 minutes with a fractional standard deviation of less than 0.33. Background resulting from ²¹⁸Po is eliminated by alpha energy discrimination and a decay-scheme analysis. A microprocessor controls all data acquisition, data reduction, and instrument calibration.
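
    Assuming pure Poisson counting statistics (an assumption, since the abstract does not give the noise model), the quoted fractional standard deviation fixes the number of net alpha counts needed, and hence the minimum count rate at 1 MPC over the 30-minute measurement:

```python
import math

# Pure Poisson counting: the fractional standard deviation of N counts
# is 1/sqrt(N), so fsd < 0.33 needs N > (1/0.33)^2 ~ 9.2 counts.
target_fsd = 0.33
n_required = math.ceil(1.0 / target_fsd ** 2)
print(f"net alpha counts needed: {n_required}")

# Minimum mean count rate at 1 MPC over the quoted 30-minute measurement
print(f"required rate: {n_required / 30.0:.2f} counts/min")
```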

  15. High Excitation Transfer Efficiency from Energy Relay Dyes in Dye-Sensitized Solar Cells

    KAUST Repository

    Hardin, Brian E.

    2010-08-11

    The energy relay dye 4-(dicyanomethylene)-2-methyl-6-(4-dimethylaminostyryl)-4H-pyran (DCM) was used with a near-infrared sensitizing dye, TT1, to increase the overall power conversion efficiency of a dye-sensitized solar cell (DSC) from 3.5% to 4.5%. The unattached DCM dyes exhibit an average excitation transfer efficiency (ETE) of 96% inside TT1-covered, mesostructured TiO2 films. Further performance increases were limited by the solubility of DCM in an acetonitrile-based electrolyte. This demonstration shows that energy relay dyes can be efficiently implemented in optimized dye-sensitized solar cells, but also highlights the need to design highly soluble energy relay dyes with high molar extinction coefficients.

  16. Highly Sensitive Liquid Core Temperature Sensor Based on Multimode Interference Effects

    Directory of Open Access Journals (Sweden)

    Miguel A. Fuentes-Fuentes

    2015-10-01

    A novel fiber optic temperature sensor based on a liquid-core multimode interference device is demonstrated. The advantage of such a structure is that the thermo-optic coefficient (TOC) of the liquid is at least one order of magnitude larger than that of silica; this, combined with the fact that the TOCs of silica and the liquid have opposite signs, provides a liquid-core multimode fiber (MMF) highly sensitive to temperature. Since the refractive index of the liquid can be easily modified, the modal properties of the liquid-core MMF can be controlled at will, and the sensor sensitivity can be tuned by selecting the refractive index of the liquid in the core of the device. The maximum sensitivity measured in our experiments is 20 nm/°C in the low-temperature regime up to 60 °C. To the best of our knowledge, this is to date the largest sensitivity reported for fiber-based MMI temperature sensors.

  17. Peer review of HEDR uncertainty and sensitivity analyses plan

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, F.O.

    1993-06-01

    This report documents the writings and deliberations of the peer review panel that met on May 24-25, 1993 in Richland, Washington to evaluate the draft report "Uncertainty/Sensitivity Analysis Plan" (PNWD-2124 HEDR). The fact that uncertainties are being considered in temporally and spatially varying parameters, through the use of alternative time histories and spatial patterns, deserves special commendation. It is important to identify early those model components and parameters that will have the most influence on the magnitude and uncertainty of the dose estimates. These are the items that should be investigated most intensively prior to committing to a final set of results.

  18. Nitrogen detected TROSY at high field yields high resolution and sensitivity for protein NMR

    Energy Technology Data Exchange (ETDEWEB)

    Takeuchi, Koh [National Institute for Advanced Industrial Science and Technology, Molecular Profiling Research Center for Drug Discovery (Japan); Arthanari, Haribabu [Harvard Medical School, Department of Biochemistry and Molecular Pharmacology (United States); Shimada, Ichio, E-mail: shimada@iw-nmr.f.u-tokyo.ac.jp [National Institute for Advanced Industrial Science and Technology, Molecular Profiling Research Center for Drug Discovery (Japan); Wagner, Gerhard, E-mail: gerhard-wagner@hms.harvard.edu [Harvard Medical School, Department of Biochemistry and Molecular Pharmacology (United States)

    2015-12-15

    Detection of {sup 15}N in multidimensional NMR experiments of proteins has sparsely been utilized because of the low gyromagnetic ratio (γ) of nitrogen and the presumed low sensitivity of such experiments. Here we show that selecting the TROSY components of proton-attached {sup 15}N nuclei (TROSY {sup 15}N{sub H}) yields high quality spectra in high field magnets (>600 MHz) by taking advantage of the slow {sup 15}N transverse relaxation and compensating for the inherently low {sup 15}N sensitivity. The {sup 15}N TROSY transverse relaxation rates increase modestly with molecular weight but the TROSY gain in peak heights depends strongly on the magnetic field strength. Theoretical simulations predict that the narrowest line width for the TROSY {sup 15}N{sub H} component can be obtained at 900 MHz, but sensitivity reaches its maximum around 1.2 GHz. Based on these considerations, a {sup 15}N-detected 2D {sup 1}H–{sup 15}N TROSY-HSQC ({sup 15}N-detected TROSY-HSQC) experiment was developed and high-quality 2D spectra were recorded at 800 MHz in 2 h for 1 mM maltose-binding protein at 278 K (τ{sub c} ∼ 40 ns). Unlike for {sup 1}H detected TROSY, deuteration is not mandatory to benefit {sup 15}N detected TROSY due to reduced dipolar broadening, which facilitates studies of proteins that cannot be deuterated, especially in cases where production requires eukaryotic expression systems. The option of recording {sup 15}N TROSY of proteins expressed in H{sub 2}O media also alleviates the problem of incomplete amide proton back exchange, which often hampers the detection of amide groups in the core of large molecular weight proteins that are expressed in D{sub 2}O culture media and cannot be refolded for amide back exchange. These results illustrate the potential of {sup 15}N{sub H}-detected TROSY experiments as a means to exploit the high resolution offered by high field magnets near and above 1 GHz.

  19. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2-regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
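
    BPR's parameter selection couples a nonlinear constraint with the MSE criterion; those details are in the paper, but the estimator it converges to is the ordinary ℓ2-regularized least-squares solution. The sketch below shows that estimator with an oracle grid search over λ scored against known ground truth, purely to illustrate why a well-chosen regularizer helps on an ill-conditioned problem; it is not BPR's actual selection rule, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 30
A = rng.normal(size=(n, d)) @ np.diag(np.logspace(0, -3, d))  # ill-conditioned
x_true = rng.normal(size=d)
b = A @ x_true + rng.normal(0, 0.05, size=n)

def ridge(A, b, lam):
    """l2-regularized least squares: x = (A^T A + lam*I)^-1 A^T b."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

lams = np.logspace(-8, 2, 60)
mses = [np.mean((ridge(A, b, lam) - x_true) ** 2) for lam in lams]
i = int(np.argmin(mses))
print(f"oracle lambda: {lams[i]:.2e}  MSE: {mses[i]:.4f}  (lam~0 MSE: {mses[0]:.4f})")
```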

  20. High spatial precision nano-imaging of polarization-sensitive plasmonic particles

    Science.gov (United States)

    Liu, Yunbo; Wang, Yipei; Lee, Somin Eunice

    2018-02-01

    Precise polarimetric imaging of polarization-sensitive nanoparticles is essential for resolving their accurate spatial positions beyond the diffraction limit. However, conventional technologies currently suffer from beam deviation errors which cannot be corrected beyond the diffraction limit. To overcome this issue, we experimentally demonstrate a spatially stable nano-imaging system for polarization-sensitive nanoparticles. In this study, we show that by integrating a voltage-tunable imaging variable polarizer with optical microscopy, we are able to suppress beam deviation errors. We expect that this nano-imaging system should allow for acquisition of accurate positional and polarization information from individual nanoparticles in applications where real-time, high precision spatial information is required.