Verifying Temporal Properties of Reactive Systems by Transformation
Hamilton, Geoff
2015-01-01
We show how program transformation techniques can be used for the verification of both safety and liveness properties of reactive systems. In particular, we show how the program transformation technique distillation can be used to transform reactive systems specified in a functional language into a simplified form that can subsequently be analysed to verify temporal properties of the systems. Example systems which are intended to model mutual exclusion are analysed using these techniques with...
Boosting Maintenance in Working Memory with Temporal Regularities
Plancher, Gaën; Lévêque, Yohana; Fanuel, Lison; Piquandet, Gaëlle; Tillmann, Barbara
2018-01-01
Music cognition research has provided evidence for the benefit of temporally regular structures guiding attention over time. The present study investigated whether maintenance in working memory can benefit from an isochronous rhythm. Participants were asked to remember series of 6 letters for serial recall. In the rhythm condition of Experiment…
Infants use temporal regularities to chunk objects in memory.
Kibbe, Melissa M; Feigenson, Lisa
2016-01-01
whether infants also remembered the specific identities of the objects in each chunk. In Experiment 4, we confirmed that infants remembered objects' identities in smaller arrays that did not require chunking. Next, in Experiment 5, we asked whether infants also remembered objects' identities in larger arrays that had been chunked on the basis of temporal regularities. Following a familiarization phase identical to that in Experiment 2a, we hid all four objects and then revealed either these same four objects, or four objects of which two had unexpectedly changed shape and color. Surprisingly, infants failed to look longer at the identity change outcome. Taken together, our results suggest that infants can use temporal regularities between objects to increase memory for objects' existence, but not necessarily for objects' identities. Copyright © 2015 Elsevier B.V. All rights reserved.
Temporal regularity of the environment drives time perception
van Rijn, H; Rhodes, D; Di Luca, M
2016-01-01
It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be var- ied. In one experiment, we asked whether the last stim...
International Nuclear Information System (INIS)
Verhaeghe, Jeroen; D'Asseler, Yves; Vandenberghe, Stefaan; Staelens, Steven; Lemahieu, Ignace
2007-01-01
The use of a temporal B-spline basis for the reconstruction of dynamic positron emission tomography data was investigated. Maximum likelihood (ML) reconstructions using an expectation maximization framework and maximum A-posteriori (MAP) reconstructions using the generalized expectation maximization framework were evaluated. Different parameters of the B-spline basis of such as order, number of basis functions and knot placing were investigated in a reconstruction task using simulated dynamic list-mode data. We found that a higher order basis reduced both the bias and variance. Using a higher number of basis functions in the modeling of the time activity curves (TACs) allowed the algorithm to model faster changes of the TACs, however, the TACs became noisier. We have compared ML, Gaussian postsmoothed ML and MAP reconstructions. The noise level in the ML reconstructions was controlled by varying the number of basis functions. The MAP algorithm penalized the integrated squared curvature of the reconstructed TAC. The postsmoothed ML was always outperformed in terms of bias and variance properties by the MAP and ML reconstructions. A simple adaptive knot placing strategy was also developed and evaluated. It is based on an arc length redistribution scheme during the reconstruction. The free knot reconstruction allowed a more accurate reconstruction while reducing the noise level especially for fast changing TACs such as blood input functions. Limiting the number of temporal basis functions combined with the adaptive knot placing strategy is in this case advantageous for regularization purposes when compared to the other regularization techniques
EEG/MEG Source Reconstruction with Spatial-Temporal Two-Way Regularized Regression
Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin
2013-01-01
In this work, we propose a spatial-temporal two-way regularized regression method for reconstructing neural source signals from EEG/MEG time course measurements. The proposed method estimates the dipole locations and amplitudes simultaneously
Attention increases the temporal precision of conscious perception: verifying the Neural-ST Model.
Directory of Open Access Journals (Sweden)
Srivas Chennu
2009-11-01
Full Text Available What role does attention play in ensuring the temporal precision of visual perception? Behavioural studies have investigated feature selection and binding in time using fleeting sequences of stimuli in the Rapid Serial Visual Presentation (RSVP paradigm, and found that temporal accuracy is reduced when attentional control is diminished. To reduce the efficacy of attentional deployment, these studies have employed the Attentional Blink (AB phenomenon. In this article, we use electroencephalography (EEG to directly investigate the temporal dynamics of conscious perception. Specifically, employing a combination of experimental analysis and neural network modelling, we test the hypothesis that the availability of attention reduces temporal jitter in the latency between a target's visual onset and its consolidation into working memory. We perform time-frequency analysis on data from an AB study to compare the EEG trials underlying the P3 ERPs (Event-related Potential evoked by targets seen outside vs. inside the AB time window. We find visual differences in phase-sorted ERPimages and statistical differences in the variance of the P3 phase distributions. These results argue for increased variation in the latency of conscious perception during the AB. This experimental analysis is complemented by a theoretical exploration of temporal attention and target processing. Using activation traces from the Neural-ST(2 model, we generate virtual ERPs and virtual ERPimages. These are compared to their human counterparts to propose an explanation of how target consolidation in the context of the AB influences the temporal variability of selective attention. The AB provides us with a suitable phenomenon with which to investigate the interplay between attention and perception. The combination of experimental and theoretical elucidation in this article contributes to converging evidence for the notion that the AB reflects a reduction in the temporal acuity of
Graph regularized nonnegative matrix factorization for temporal link prediction in dynamic networks
Ma, Xiaoke; Sun, Penggang; Wang, Yu
2018-04-01
Many networks derived from society and nature are temporal and incomplete. The temporal link prediction problem in networks is to predict links at time T + 1 based on a given temporal network from time 1 to T, which is essential to important applications. The current algorithms either predict the temporal links by collapsing the dynamic networks or collapsing features derived from each network, which are criticized for ignoring the connection among slices. to overcome the issue, we propose a novel graph regularized nonnegative matrix factorization algorithm (GrNMF) for the temporal link prediction problem without collapsing the dynamic networks. To obtain the feature for each network from 1 to t, GrNMF factorizes the matrix associated with networks by setting the rest networks as regularization, which provides a better way to characterize the topological information of temporal links. Then, the GrNMF algorithm collapses the feature matrices to predict temporal links. Compared with state-of-the-art methods, the proposed algorithm exhibits significantly improved accuracy by avoiding the collapse of temporal networks. Experimental results of a number of artificial and real temporal networks illustrate that the proposed method is not only more accurate but also more robust than state-of-the-art approaches.
Bedoin, Nathalie; Brisseau, Lucie; Molinier, Pauline; Roch, Didier; Tillmann, Barbara
2016-01-01
Children with developmental language disorders have been shown to be also impaired in rhythm and meter perception. Temporal processing and its link to language processing can be understood within the dynamic attending theory. An external stimulus can stimulate internal oscillators, which orient attention over time and drive speech signal segmentation to provide benefits for syntax processing, which is impaired in various patient populations. For children with Specific Language Impairment (SLI) and dyslexia, previous research has shown the influence of an external rhythmic stimulation on subsequent language processing by comparing the influence of a temporally regular musical prime to that of a temporally irregular prime. Here we tested whether the observed rhythmic stimulation effect is indeed due to a benefit provided by the regular musical prime (rather than a cost subsequent to the temporally irregular prime). Sixteen children with SLI and 16 age-matched controls listened to either a regular musical prime sequence or an environmental sound scene (without temporal regularities in event occurrence; i.e., referred to as "baseline condition") followed by grammatically correct and incorrect sentences. They were required to perform grammaticality judgments for each auditorily presented sentence. Results revealed that performance for the grammaticality judgments was better after the regular prime sequences than after the baseline sequences. Our findings are interpreted in the theoretical framework of the dynamic attending theory (Jones, 1976) and the temporal sampling (oscillatory) framework for developmental language disorders (Goswami, 2011). Furthermore, they encourage the use of rhythmic structures (even in non-verbal materials) to boost linguistic structure processing and outline perspectives for rehabilitation.
Pârvu, Ovidiu; Gilbert, David
2016-01-01
Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour
Detecting violations of temporal regularities in waking and sleeping two-month-old infants
Otte, R.A.; Winkler, I.; Braeken, M.A.K.A.; Stekelenburg, J.J.; van der Stelt, O.; Van den Bergh, B.R.H.
2013-01-01
Correctly processing rapid sequences of sounds is essential for developmental milestones, such as language acquisition. We investigated the sensitivity of two-month-old infants to violations of a temporal regularity, by recording event-related brain potentials (ERPs) in an auditory oddball paradigm
Directory of Open Access Journals (Sweden)
Dong Wang
2017-01-01
Full Text Available Purpose. Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI is used in cancer imaging to probe tumor vascular properties. Compressed sensing (CS theory makes it possible to recover MR images from randomly undersampled k-space data using nonlinear recovery schemes. The purpose of this paper is to quantitatively evaluate common temporal sparsity-promoting regularizers for CS DCE-MRI of the breast. Methods. We considered five ubiquitous temporal regularizers on 4.5x retrospectively undersampled Cartesian in vivo breast DCE-MRI data: Fourier transform (FT, Haar wavelet transform (WT, total variation (TV, second-order total generalized variation (TGVα2, and nuclear norm (NN. We measured the signal-to-error ratio (SER of the reconstructed images, the error in tumor mean, and concordance correlation coefficients (CCCs of the derived pharmacokinetic parameters Ktrans (volume transfer constant and ve (extravascular-extracellular volume fraction across a population of random sampling schemes. Results. NN produced the lowest image error (SER: 29.1, while TV/TGVα2 produced the most accurate Ktrans (CCC: 0.974/0.974 and ve (CCC: 0.916/0.917. WT produced the highest image error (SER: 21.8, while FT produced the least accurate Ktrans (CCC: 0.842 and ve (CCC: 0.799. Conclusion. TV/TGVα2 should be used as temporal constraints for CS DCE-MRI of the breast.
Motion-aware temporal regularization for improved 4D cone-beam computed tomography
Mory, Cyril; Janssens, Guillaume; Rit, Simon
2016-09-01
Four-dimensional cone-beam computed tomography (4D-CBCT) of the free-breathing thorax is a valuable tool in image-guided radiation therapy of the thorax and the upper abdomen. It allows the determination of the position of a tumor throughout the breathing cycle, while only its mean position can be extracted from three-dimensional CBCT. The classical approaches are not fully satisfactory: respiration-correlated methods allow one to accurately locate high-contrast structures in any frame, but contain strong streak artifacts unless the acquisition is significantly slowed down. Motion-compensated methods can yield streak-free, but static, reconstructions. This work proposes a 4D-CBCT method that can be seen as a trade-off between respiration-correlated and motion-compensated reconstruction. It builds upon the existing reconstruction using spatial and temporal regularization (ROOSTER) and is called motion-aware ROOSTER (MA-ROOSTER). It performs temporal regularization along curved trajectories, following the motion estimated on a prior 4D CT scan. MA-ROOSTER does not involve motion-compensated forward and back projections: the input motion is used only during temporal regularization. MA-ROOSTER is compared to ROOSTER, motion-compensated Feldkamp-Davis-Kress (MC-FDK), and two respiration-correlated methods, on CBCT acquisitions of one physical phantom and two patients. It yields streak-free reconstructions, visually similar to MC-FDK, and robust information on tumor location throughout the breathing cycle. MA-ROOSTER also allows a variation of the lung tissue density during the breathing cycle, similar to that of planning CT, which is required for quantitative post-processing.
Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric
2018-02-01
Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model along with a
EEG/MEG Source Reconstruction with Spatial-Temporal Two-Way Regularized Regression
Tian, Tian Siva
2013-07-11
In this work, we propose a spatial-temporal two-way regularized regression method for reconstructing neural source signals from EEG/MEG time course measurements. The proposed method estimates the dipole locations and amplitudes simultaneously through minimizing a single penalized least squares criterion. The novelty of our methodology is the simultaneous consideration of three desirable properties of the reconstructed source signals, that is, spatial focality, spatial smoothness, and temporal smoothness. The desirable properties are achieved by using three separate penalty functions in the penalized regression framework. Specifically, we impose a roughness penalty in the temporal domain for temporal smoothness, and a sparsity-inducing penalty and a graph Laplacian penalty in the spatial domain for spatial focality and smoothness. We develop a computational efficient multilevel block coordinate descent algorithm to implement the method. Using a simulation study with several settings of different spatial complexity and two real MEG examples, we show that the proposed method outperforms existing methods that use only a subset of the three penalty functions. © 2013 Springer Science+Business Media New York.
Honing, H.; Bouwer, F.L.; Háden, G.P.; Merchant, H.; de Lafuente, V.
2014-01-01
The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in humans adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). Next to a review of the recent literature on the perception of temporal regularity in
Directory of Open Access Journals (Sweden)
Chen Zhong
Full Text Available To discover regularities in human mobility is of fundamental importance to our understanding of urban dynamics, and essential to city and transport planning, urban management and policymaking. Previous research has revealed universal regularities at mainly aggregated spatio-temporal scales but when we zoom into finer scales, considerable heterogeneity and diversity is observed instead. The fundamental question we address in this paper is at what scales are the regularities we detect stable, explicable, and sustainable. This paper thus proposes a basic measure of variability to assess the stability of such regularities focusing mainly on changes over a range of temporal scales. We demonstrate this by comparing regularities in the urban mobility patterns in three world cities, namely London, Singapore and Beijing using one-week of smart-card data. The results show that variations in regularity scale as non-linear functions of the temporal resolution, which we measure over a scale from 1 minute to 24 hours thus reflecting the diurnal cycle of human mobility. A particularly dramatic increase in variability occurs up to the temporal scale of about 15 minutes in all three cities and this implies that limits exist when we look forward or backward with respect to making short-term predictions. The degree of regularity varies in fact from city to city with Beijing and Singapore showing higher regularity in comparison to London across all temporal scales. A detailed discussion is provided, which relates the analysis to various characteristics of the three cities. In summary, this work contributes to a deeper understanding of regularities in patterns of transit use from variations in volumes of travellers entering subway stations, it establishes a generic analytical framework for comparative studies using urban mobility data, and it provides key points for the management of variability by policy-makers intent on for making the travel experience more
Zhong, Chen; Batty, Michael; Manley, Ed; Wang, Jiaqiu; Wang, Zijia; Chen, Feng; Schmitt, Gerhard
2016-01-01
To discover regularities in human mobility is of fundamental importance to our understanding of urban dynamics, and essential to city and transport planning, urban management and policymaking. Previous research has revealed universal regularities at mainly aggregated spatio-temporal scales but when we zoom into finer scales, considerable heterogeneity and diversity is observed instead. The fundamental question we address in this paper is at what scales are the regularities we detect stable, explicable, and sustainable. This paper thus proposes a basic measure of variability to assess the stability of such regularities focusing mainly on changes over a range of temporal scales. We demonstrate this by comparing regularities in the urban mobility patterns in three world cities, namely London, Singapore and Beijing using one-week of smart-card data. The results show that variations in regularity scale as non-linear functions of the temporal resolution, which we measure over a scale from 1 minute to 24 hours thus reflecting the diurnal cycle of human mobility. A particularly dramatic increase in variability occurs up to the temporal scale of about 15 minutes in all three cities and this implies that limits exist when we look forward or backward with respect to making short-term predictions. The degree of regularity varies in fact from city to city with Beijing and Singapore showing higher regularity in comparison to London across all temporal scales. A detailed discussion is provided, which relates the analysis to various characteristics of the three cities. In summary, this work contributes to a deeper understanding of regularities in patterns of transit use from variations in volumes of travellers entering subway stations, it establishes a generic analytical framework for comparative studies using urban mobility data, and it provides key points for the management of variability by policy-makers intent on for making the travel experience more amenable.
Zhong, Chen; Batty, Michael; Manley, Ed; Wang, Jiaqiu; Wang, Zijia; Chen, Feng; Schmitt, Gerhard
2016-01-01
To discover regularities in human mobility is of fundamental importance to our understanding of urban dynamics, and essential to city and transport planning, urban management and policymaking. Previous research has revealed universal regularities at mainly aggregated spatio-temporal scales but when we zoom into finer scales, considerable heterogeneity and diversity is observed instead. The fundamental question we address in this paper is at what scales are the regularities we detect stable, explicable, and sustainable. This paper thus proposes a basic measure of variability to assess the stability of such regularities focusing mainly on changes over a range of temporal scales. We demonstrate this by comparing regularities in the urban mobility patterns in three world cities, namely London, Singapore and Beijing using one-week of smart-card data. The results show that variations in regularity scale as non-linear functions of the temporal resolution, which we measure over a scale from 1 minute to 24 hours thus reflecting the diurnal cycle of human mobility. A particularly dramatic increase in variability occurs up to the temporal scale of about 15 minutes in all three cities and this implies that limits exist when we look forward or backward with respect to making short-term predictions. The degree of regularity varies in fact from city to city with Beijing and Singapore showing higher regularity in comparison to London across all temporal scales. A detailed discussion is provided, which relates the analysis to various characteristics of the three cities. In summary, this work contributes to a deeper understanding of regularities in patterns of transit use from variations in volumes of travellers entering subway stations, it establishes a generic analytical framework for comparative studies using urban mobility data, and it provides key points for the management of variability by policy-makers intent on for making the travel experience more amenable. PMID:26872333
Why movement is captured by music, but less by speech: role of temporal regularity.
Dalla Bella, Simone; Białuńska, Anita; Sowiński, Jakub
2013-01-01
Music has a pervasive tendency to rhythmically engage our body. In contrast, synchronization with speech is rare. Music's superiority over speech in driving movement probably results from isochrony of musical beats, as opposed to irregular speech stresses. Moreover, the presence of regular patterns of embedded periodicities (i.e., meter) may be critical in making music particularly conducive to movement. We investigated these possibilities by asking participants to synchronize with isochronous auditory stimuli (target), while music and speech distractors were presented at one of various phase relationships with respect to the target. In Exp. 1, familiar musical excerpts and fragments of children poetry were used as distractors. The stimuli were manipulated in terms of beat/stress isochrony and average pitch to achieve maximum comparability. In Exp. 2, the distractors were well-known songs performed with lyrics, on a reiterated syllable, and spoken lyrics, all having the same meter. Music perturbed synchronization with the target stimuli more than speech fragments. However, music superiority over speech disappeared when distractors shared isochrony and the same meter. Music's peculiar and regular temporal structure is likely to be the main factor fostering tight coupling between sound and movement.
Energy Technology Data Exchange (ETDEWEB)
O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J. [Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS foundation Trust, Sutton, London SM2 5PT (United Kingdom)
2016-01-15
Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer leads to improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking
Honing, Henkjan; Bouwer, Fleur L; Háden, Gábor P
2014-01-01
The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in humans adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). Next to a review of the recent literature on the perception of temporal regularity in music, we will discuss in how far ERPs, and especially the component called mismatch negativity (MMN), can be instrumental in probing beat perception. We conclude with a discussion on the pitfalls and prospects of using ERPs to probe the perception of a regular beat, in which we present possible constraints on stimulus design and discuss future perspectives.
Directory of Open Access Journals (Sweden)
E. S. Staricov
2016-01-01
Full Text Available Objectives. The presented research problem concerns data regularities for an unspecified time series based on an approach to the expert formalisation of knowledge integrated into a decision-making mechanism. Method. A context-free grammar, consisting of a modification of universal temporal grammar, is used to describe regularities. Using the rules of the developed grammar, an expert can describe patterns in the group of time series. A multi-dimensional matrix pattern of the behaviour of a group of time series is used in a real-time decision-making regime in the expert system to implements a universal approach to the description of the dynamics of these changes in the expert system. The multidimensional matrix pattern is specifically intended for decision-making in an expert system; the modified temporal grammar is used to identify patterns in the data. Results. It is proposed to use the temporal relations of the series and fix observation values in the time interval as ―From-To‖, ―Before‖, ―After‖, ―Simultaneously‖ and ―Duration‖. A syntactically oriented converter of descriptions is developed. A schema for the creation and application of matrix patterns in expert systems is drawn up. Conclusion. The advantage of the implementation of the proposed hybrid approaches consists in a reduction of the time taken for identifying temporal patterns and an automation of the matrix pattern of the decision-making system based on expert descriptions verified using live data derived from relationships in the monitoring data.
The Temporal Dynamics of Regularity Extraction in Non-Human Primates
Minier, Laure; Fagot, Joël; Rey, Arnaud
2016-01-01
Extracting the regularities of our environment is one of our core cognitive abilities. To study the fine-grained dynamics of the extraction of embedded regularities, a method combining the advantages of the artificial language paradigm (Saffran, Aslin, & Newport, [Saffran, J. R., 1996]) and the serial response time task (Nissen & Bullemer,…
Directory of Open Access Journals (Sweden)
Kuan Hsien Lee
2016-02-01
Full Text Available How a stimulus impacts spinal cord function depends upon temporal relations. When intermittent noxious stimulation (shock is applied and the interval between shock pulses is varied (unpredictable, it induces a lasting alteration that inhibits adaptive learning. If the same stimulus is applied in a temporally regular (predictable manner, the capacity to learn is preserved and a protective/restorative effect is engaged that counters the adverse effect of variable stimulation. Sensitivity to temporal relations implies a capacity to encode time. This study explores how spinal neurons discriminate variable and fixed spaced stimulation. Communication with the brain was blocked by means of a spinal transection and adaptive capacity was tested using an instrumental learning task. In this task, subjects must learn to maintain a hind limb in a flexed position to minimize shock exposure. To evaluate the possibility that a distinct class of afferent fibers provide a sensory cue for regularity, we manipulated the temporal relation between shocks given to two dermatomes (leg and tail. Evidence for timing emerged when the stimuli were applied in a coherent manner across dermatomes, implying that a central (spinal process detects regularity. Next, we show that fixed spaced stimulation has a restorative effect when half the physical stimuli are randomly omitted, as long as the stimuli remain in phase, suggesting that stimulus regularity is encoded by an internal oscillator Research suggests that the oscillator that drives the tempo of stepping depends upon neurons within the rostral lumbar (L1-L2 region. Disrupting communication with the L1-L2 tissue by means of a L3 transection eliminated the restorative effect of fixed spaced stimulation. Implications of the results for step training and rehabilitation after injury are discussed.
Energy Technology Data Exchange (ETDEWEB)
Mory, Cyril, E-mail: cyril.mory@philips.com [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Auvray, Vincent; Zhang, Bo [Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Grass, Michael; Schäfer, Dirk [Philips Research, Röntgenstrasse 24–26, D-22335 Hamburg (Germany); Chen, S. James; Carroll, John D. [Department of Medicine, Division of Cardiology, University of Colorado Denver, 12605 East 16th Avenue, Aurora, Colorado 80045 (United States); Rit, Simon [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Centre Léon Bérard, 28 rue Laënnec, F-69373 Lyon (France); Peyrin, Françoise [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); X-ray Imaging Group, European Synchrotron, Radiation Facility, BP 220, F-38043 Grenoble Cedex (France); Douek, Philippe; Boussel, Loïc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Hospices Civils de Lyon, 28 Avenue du Doyen Jean Lépine, 69500 Bron (France)
2014-02-15
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.
International Nuclear Information System (INIS)
Mory, Cyril; Auvray, Vincent; Zhang, Bo; Grass, Michael; Schäfer, Dirk; Chen, S. James; Carroll, John D.; Rit, Simon; Peyrin, Françoise; Douek, Philippe; Boussel, Loïc
2014-01-01
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection
Directory of Open Access Journals (Sweden)
V. G. Margaryan
2017-12-01
Full Text Available The regularities of the space-temporal distribution of the radiation balance of the underlying surface for the conditions of the mountainous territory of the Republic of Armenia were discussed and analyzed.
Analysis of absence seizure generation using EEG spatial-temporal regularity measures.
Mammone, Nadia; Labate, Domenico; Lay-Ekuakille, Aime; Morabito, Francesco C
2012-12-01
Epileptic seizures are thought to be generated and to evolve through an underlying anomaly of synchronization in the activity of groups of neuronal populations. The related dynamic scenario of state transitions is revealed by detecting changes in the dynamical properties of Electroencephalography (EEG) signals. The recruitment procedure ending with the crisis can be explored through a spatial-temporal plot from which to extract suitable descriptors that are able to monitor and quantify the evolving synchronization level from the EEG tracings. In this paper, a spatial-temporal analysis of EEG recordings based on the concept of permutation entropy (PE) is proposed. The performance of PE are tested on a database of 24 patients affected by absence (generalized) seizures. The results achieved are compared to the dynamical behavior of the EEG of 40 healthy subjects. Being PE a feature which is dependent on two parameters, an extensive study of the sensitivity of the performance of PE with respect to the parameters' setting was carried out on scalp EEG. Once the optimal PE configuration was determined, its ability to detect the different brain states was evaluated. According to the results here presented, it seems that the widely accepted model of "jump" transition to absence seizure should be in some cases coupled (or substituted) by a gradual transition model characteristic of self-organizing networks. Indeed, it appears that the transition to the epileptic status is heralded before the preictal state, ever since the interictal stages. As a matter of fact, within the limits of the analyzed database, the frontal-temporal scalp areas appear constantly associated to PE levels higher compared to the remaining electrodes, whereas the parieto-occipital areas appear associated to lower PE values. The EEG of healthy subjects neither shows any similar dynamic behavior nor exhibits any recurrent portrait in PE topography.
Analysis of Regularly and Irregularly Sampled Spatial, Multivariate, and Multi-temporal Data
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
1994-01-01
This thesis describes different methods that are useful in the analysis of multivariate data. Some methods focus on spatial data (sampled regularly or irregularly), others focus on multitemporal data or data from multiple sources. The thesis covers selected and not all aspects of relevant data......-variograms are described. As a new way of setting up a well-balanced kriging support the Delaunay triangulation is suggested. Two case studies show the usefulness of 2-D semivariograms of geochemical data from areas in central Spain (with a geologist's comment) and South Greenland, and kriging/cokriging of an undersampled...... are considered as repetitions. Three case studies show the strength of the methods; one uses SPOT High Resolution Visible (HRV) multispectral (XS) data covering economically important pineapple and coffee plantations near Thika, Kiambu District, Kenya, the other two use Landsat Thematic Mapper (TM) data covering...
International Nuclear Information System (INIS)
Casanova, R; Yang, L; Hairston, W D; Laurienti, P J; Maldjian, J A
2009-01-01
Recently we have proposed the use of Tikhonov regularization with temporal smoothness constraints to estimate the BOLD fMRI hemodynamic response function (HRF). The temporal smoothness constraint was imposed on the estimates by using second derivative information while the regularization parameter was selected based on the generalized cross-validation function (GCV). Using one-dimensional simulations, we previously found this method to produce reliable estimates of the HRF time course, especially its time to peak (TTP), being at the same time fast and robust to over-sampling in the HRF estimation. Here, we extend the method to include simultaneous temporal and spatial smoothness constraints. This method does not need Gaussian smoothing as a pre-processing step as usually done in fMRI data analysis. We carried out two-dimensional simulations to compare the two methods: Tikhonov regularization with temporal (Tik-GCV-T) and spatio-temporal (Tik-GCV-ST) smoothness constraints on the estimated HRF. We focus our attention on quantifying the influence of the Gaussian data smoothing and the presence of edges on the performance of these techniques. Our results suggest that the spatial smoothing introduced by regularization is less severe than that produced by Gaussian smoothing. This allows more accurate estimates of the response amplitudes while producing similar estimates of the TTP. We illustrate these ideas using real data. (note)
Houborg, Rasmus
2015-10-14
Accurate retrieval of canopy biophysical and leaf biochemical constituents from space observations is critical to diagnosing the functioning and condition of vegetation canopies across spatio-temporal scales. Retrieved vegetation characteristics may serve as important inputs to precision farming applications and as constraints in spatially and temporally distributed model simulations of water and carbon exchange processes. However significant challenges remain in the translation of composite remote sensing signals into useful biochemical, physiological or structural quantities and treatment of confounding factors in spectrum-trait relations. Bands in the red-edge spectrum have particular potential for improving the robustness of retrieved vegetation properties. The development of observationally based vegetation retrieval capacities, effectively constrained by the enhanced information content afforded by bands in the red-edge, is a needed investment towards optimizing the benefit of current and future satellite sensor systems. In this study, a REGularized canopy reFLECtance model (REGFLEC) for joint leaf chlorophyll (Chll) and leaf area index (LAI) retrieval is extended to sensor systems with a band in the red-edge region for the first time. Application to time-series of 5 m resolution multi-spectral RapidEye data is demonstrated over an irrigated agricultural region in central Saudi Arabia, showcasing the value of satellite-derived crop information at this fine scale for precision management. Validation against in-situ measurements in fields of alfalfa, Rhodes grass, carrot and maize indicate improved accuracy of retrieved vegetation properties when exploiting red-edge information in the model inversion process. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Pereira, Marcelo Alves; Martinez, Alexandre Souto
2009-01-01
The Prisoner's Dilemma (PD) game is used in several fields due to the emergence of cooperation among selfish players. Here, we have considered a one-dimensional lattice, where each cell represents a player, that can cooperate or defect. This one-dimensional geometry allows us to retrieve the results obtained for regular lattices and to keep track of the system spatio-temporal evolution. Players play PD with their neighbors and update their state using the Pavlovian Evolutionary Strategy. If t...
Externally Verifiable Oblivious RAM
Directory of Open Access Journals (Sweden)
Gancher Joshua
2017-04-01
Full Text Available We present the idea of externally verifiable oblivious RAM (ORAM. Our goal is to allow a client and server carrying out an ORAM protocol to have disputes adjudicated by a third party, allowing for the enforcement of penalties against an unreliable or malicious server. We give a security definition that guarantees protection not only against a malicious server but also against a client making false accusations. We then give modifications of the Path ORAM [15] and Ring ORAM [9] protocols that meet this security definition. These protocols both have the same asymptotic runtimes as the semi-honest original versions and require the external verifier to be involved only when the client or server deviates from the protocol. Finally, we implement externally verified ORAM, along with an automated cryptocurrency contract to use as the external verifier.
Verifier Theory and Unverifiability
Yampolskiy, Roman V.
2016-01-01
Despite significant developments in Proof Theory, surprisingly little attention has been devoted to the concept of proof verifier. In particular, the mathematical community may be interested in studying different types of proof verifiers (people, programs, oracles, communities, superintelligences) as mathematical objects. Such an effort could reveal their properties, their powers and limitations (particularly in human mathematicians), minimum and maximum complexity, as well as self-verificati...
Verifiably Truthful Mechanisms
DEFF Research Database (Denmark)
Branzei, Simina; Procaccia, Ariel D.
2015-01-01
the computational sense). Our approach involves three steps: (i) specifying the structure of mechanisms, (ii) constructing a verification algorithm, and (iii) measuring the quality of verifiably truthful mechanisms. We demonstrate this approach using a case study: approximate mechanism design without money...
Verifying versus falsifying banknotes
van Renesse, Rudolf L.
1998-04-01
A series of counterfeit Dutch, German, English, and U.S. banknotes was examined with respect to the various modi operandi to imitate paper based, printed and post-printed security features. These features provide positive evidence (verifiability) as well as negative evidence (falsifiability). It appears that the positive evidence provided in most cases is insufficiently convincing: banknote inspection mainly rests on negative evidence. The act of falsifying (to prove to be false), however, is an inefficacious procedure. Ergonomic verificatory security features are demanded. This demand is increasingly met by security features based on nano- technology. The potential of nano-security has a twofold base: (1) the unique optical effects displayed allow simple, fast and unambiguous inspection, and (2) the nano-technology they are based on, makes successful counterfeit or simulation extremely improbable.
International Nuclear Information System (INIS)
Bullinger, M.G.
1982-01-01
In this essay, the author attempts to enlighten the reader as to the meaning of the term ''verified scientific findings'' in section 13, sub-section 1, sentence 2 of the new Chemicals Control Law. The examples given here are the generally accepted regulations in regards to technology (that is sections 7a and 18b of the WHG (law on water economy), section 3, sub-section 1 of the machine- and engine protection laws) and to the status of technology (section 3, sub-section 6 of the BImSchG (Fed. law on prevention of air-borne pollution)), and to the status of science (section 5, sub-section 2 of the AMG (drug legislation). The ''status of science and technology'' as defined in sections 4 ff of the Atomic Energy Law (AtomG) and in sections 3, 4, 12, 2) of the First Radiation Protection Ordinance (1.StrlSch. VO), is also being discussed. The author defines the in his opinion ''dynamic term'' as the generally recognized result of scientific research, and the respective possibilities of practical utilization of technology. (orig.) [de
Verified OS Interface Code Synthesis
2016-12-01
results into the larger proof framework of the seL4 microkernel to be directly usable in practice. Beyond the stated project goals, the solution...CakeML, can now also be used in the Isabelle/HOL system that was used for the verified seL4 microkernel. This combination increases proof productivity...were used for the verified ML compiler CakeML, can now also be used in the Isabelle/HOL system that was used for the verified seL4 microkernel. This
Status of personnel identity verifiers
International Nuclear Information System (INIS)
Maxwell, R.L.
1985-01-01
Identity verification devices based on the interrogation of six different human biometric features or actions now exist and in general have been in development for about ten years. The capability of these devices to meet the cost and operational requirements of speed, accuracy, ease of use and reliability has generally increased although the verifier industry is still immature. Sandia Laboratories makes a continuing effort to stay abreast of identity verifier developments and to assess the capabilities and improvements of each device. Operating environment and procedures more typical of field use can often reveal performance results substantially different from laboratory tests. An evaluation of several recently available verifiers is herein reported
DEFF Research Database (Denmark)
Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.
1994-01-01
Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...
Directory of Open Access Journals (Sweden)
Xinpei Wang
2018-01-01
Full Text Available The acceleration and deceleration patterns in heartbeat fluctuations distribute asymmetrically, which is known as heart rate asymmetry (HRA. It is hypothesized that HRA reflects the balancing regulation of the sympathetic and parasympathetic nervous systems. This study was designed to examine whether altered autonomic balance during exercise can lead to HRA changes. Sixteen healthy college students were enrolled, and each student undertook two 5-min ECG measurements: one in a resting seated position and another while walking on a treadmill at a regular speed of 5 km/h. The two measurements were conducted in a randomized order, and a 30-min rest was required between them. RR interval time series were extracted from the 5-min ECG data, and HRA (short-term was estimated using four established metrics, that is, Porta’s index (PI, Guzik’s index (GI, slope index (SI, and area index (AI, from both raw RR interval time series and the time series after wavelet detrending that removes the low-frequency component of <~0.03 Hz. Our pilot data showed a reduced PI but unchanged GI, SI, and AI during walking compared to resting seated position based on the raw data. Based on the wavelet-detrended data, reduced PI, SI, and AI were observed while GI still showed no significant changes. The reduced PI during walking based on both raw and detrended data which suggests less short-term HRA may underline the belief that vagal tone is withdrawn during low-intensity exercise. GI may not be sensitive to short-term HRA. The reduced SI and AI based on detrended data suggest that they may capture both short- and long-term HRA features and that the expected change in short-term HRA is amplified after removing the trend that is supposed to link to long-term component. Further studies with more subjects and longer measurements are warranted to validate our observations and to examine these additional hypotheses.
Unconditionally verifiable blind quantum computation
Fitzsimons, Joseph F.; Kashefi, Elham
2017-07-01
Blind quantum computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output, and computation remain private. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system leading to consequences for complexity theory. We previously proposed [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science, Atlanta, 2009 (IEEE, Piscataway, 2009), p. 517] a universal and unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. In this paper we extend that protocol with additional functionality allowing blind computational basis measurements, which we use to construct another verifiable BQC protocol based on a different class of resource states. We rigorously prove that the probability of failing to detect an incorrect output is exponentially small in a security parameter, while resource overhead remains polynomial in this parameter. This resource state allows entangling gates to be performed between arbitrary pairs of logical qubits with only constant overhead. This is a significant improvement on the original scheme, which required that all computations to be performed must first be put into a nearest-neighbor form, incurring linear overhead in the number of qubits. Such an improvement has important consequences for efficiency and fault-tolerance thresholds.
UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA
Directory of Open Access Journals (Sweden)
IONIŢĂ Elena
2015-06-01
This paper proposes a presentation of unfolding regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are equal regular polygons with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra whose faces are regular polygons of several types, with equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The modeling and unfolding of Platonic and Archimedean polyhedra are done using the 3ds Max program. This paper is intended as an example of descriptive geometry applications.
Coordinate-invariant regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-01-01
A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc
Verifying design patterns in Hoare Type Theory
DEFF Research Database (Denmark)
Svendsen, Kasper; Buisse, Alexandre; Birkedal, Lars
In this technical report we document our experiments formally verifying three design patterns in Hoare Type Theory.
USCIS E-Verify Program Reports
Department of Homeland Security — The report builds on the last comprehensive evaluation of the E-Verify Program and demonstrates that E-Verify produces accurate results and that accuracy rates have...
Energy functions for regularization algorithms
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
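The curvature-underestimation point is easy to reproduce numerically. The sketch below (our own illustration, not the paper's proposed stabilizer) evaluates the common squared-second-difference smoothness energy on a discrete curve; under this energy a straight line, not a circle, is "smoothest", which is exactly the bias the proposed stabilizers are designed to remove:

```python
import numpy as np

def bending_energy(points):
    """Sum of squared second differences along an open polyline: the
    standard quadratic smoothness stabilizer. A straight line has zero
    cost while a circular arc is penalized, causing the systematic
    curvature underestimation discussed in the abstract."""
    d2 = points[2:] - 2 * points[1:-1] + points[:-2]
    return float(np.sum(d2 ** 2))

t = np.linspace(0, np.pi, 50)
arc = np.c_[np.cos(t), np.sin(t)]                  # circular arc
line = np.c_[t, 2 * t]                             # straight segment
print(bending_energy(arc), bending_energy(line))   # positive vs ~0
```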
Regularized Statistical Analysis of Anatomy
DEFF Research Database (Denmark)
Sjöstrand, Karl
2007-01-01
This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....
Verifying FreeRTOS; a feasibility study
Pronk, C.
2010-01-01
This paper presents a study on modeling and verifying the kernel of Real-Time Operating Systems (RTOS). The study will show advances in formally verifying such an RTOS both by refinement and by model checking approaches. This work fits in the context of Hoare’s verification challenge. Several
Distance-regular graphs
van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime
2016-01-01
This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,
LL-regular grammars
Nijholt, Antinus
1980-01-01
Culik II and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular
The status of personnel identity verifiers
International Nuclear Information System (INIS)
Maxwell, R.L.
1985-01-01
Identity verification devices based on the interrogation of six different human biometric features or actions now exist and in general have been in development for about ten years. The capability of these devices to meet the cost and operational requirements of speed, accuracy, ease of use and reliability has generally increased, although the verifier industry is still immature. Sandia Laboratories makes a continuing effort to stay abreast of identity verifier developments and to assess the capabilities and improvements of each device. Operating environments and procedures more typical of field use can often reveal performance results substantially different from laboratory tests. An evaluation of several recently available verifiers is herein reported.
Regular Expression Pocket Reference
Stubblebine, Tony
2007-01-01
This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
E-Verify
Department of Homeland Security — E-Verify is an internet based system that contains datasets to compare information from an employee's Form I-9, Employment Eligibility Verification, to data from the...
Auto-identification fiberoptical seal verifier
International Nuclear Information System (INIS)
Yamamoto, Yoichi; Mukaiyama, Takehiro
1998-08-01
An auto COBRA seal verifier was developed by the Japan Atomic Energy Research Institute (JAERI) to provide more efficient and simpler inspection measures for IAEA safeguards. The verifier is designed to provide a means of simple, quantitative and objective judgment on in-situ verification of the COBRA seal. The equipment is a portable, hand-held unit that can be operated on battery or AC power. The verifier reads a COBRA seal signature using a built-in CCD camera and carries out the signature comparison procedure automatically on a digital basis. The result of the signature comparison is given as a YES/NO answer. The production model of the verifier was completed in July 1996. The development was carried out in collaboration with Mitsubishi Heavy Industries, Ltd. This report describes the design and functions of the COBRA seal verifier and the results of environmental and functional tests. The development of the COBRA seal verifier was carried out in the framework of the Japan Support Programme for Agency Safeguards (JASPAS) as project JD-4 since 1981. (author)
Regularization by External Variables
DEFF Research Database (Denmark)
Bossolini, Elena; Edwards, R.; Glendinning, P. A.
2016-01-01
Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization...
Regular Expressions Cookbook
Goyvaerts, Jan
2009-01-01
This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a
An IBM 370 assembly language program verifier
Maurer, W. D.
1977-01-01
The paper describes a program written in SNOBOL which verifies the correctness of programs written in assembly language for the IBM 360 and 370 series of computers. The motivation for using assembly language as a source language for a program verifier was the realization that many errors in programs are caused by misunderstanding or ignorance of the characteristics of specific computers. The proof of correctness of a program written in assembly language must take these characteristics into account. The program has been compiled and is currently running at the Center for Academic and Administrative Computing of The George Washington University.
Regularities of Multifractal Measures
Indian Academy of Sciences (India)
First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...
Stochastic analytic regularization
International Nuclear Information System (INIS)
Alfaro, J.
1984-07-01
Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)
Classroom Experiment to Verify the Lorentz Force
Indian Academy of Sciences (India)
Somnath Basu, Anindita Bose, Sumit Kumar Sinha, Pankaj Vishe and S Chatterjee. Resonance – Journal of Science Education, Volume 8, Issue 3, March 2003, pp. 81-86.
On alternative approach for verifiable secret sharing
Kulesza, Kamil; Kotulski, Zbigniew; Pieprzyk, Joseph
2002-01-01
Secret sharing allows split/distributed control over a secret (e.g., a master key). Verifiable secret sharing (VSS) is secret sharing extended with a verification capacity. Usually verification comes at a price. We propose a "free lunch" approach that allows this inconvenience to be overcome.
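As background for what "verification capacity" adds, the sketch below implements a classical Feldman-style VSS check (not the paper's "free lunch" construction; p, q, g are toy parameters and real deployments need large safe primes and secure randomness):

```python
# Toy Feldman verifiable secret sharing: each shareholder can check
# their share against the dealer's public commitments.
import random

p, q, g = 23, 11, 4          # g generates the subgroup of order q in Z_p*

def deal(secret, k, n):
    coeffs = [secret % q] + [random.randrange(q) for _ in range(k - 1)]
    shares = {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
              for i in range(1, n + 1)}
    commitments = [pow(g, c, p) for c in coeffs]      # broadcast values
    return shares, commitments

def verify(i, share, commitments):
    lhs = pow(g, share, p)
    rhs = 1
    for j, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(i, j), p) % p          # g^(sum a_j i^j)
    return lhs == rhs

shares, comm = deal(secret=7, k=3, n=5)
print(all(verify(i, s, comm) for i, s in shares.items()))   # True
```

The verification cost (one exponentiation per commitment) is the "price" the abstract refers to.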
Verified compilation of Concurrent Managed Languages
2017-11-01
Final technical report, Purdue University, November 2017. Approved for public release.
A Verifiable Secret Shuffle of Homomorphic Encryptions
DEFF Research Database (Denmark)
Groth, Jens
2003-01-01
We show how to prove in honest-verifier zero-knowledge the correctness of a shuffle of homomorphic encryptions (or homomorphic commitments). A shuffle consists of a rearrangement of the input ciphertexts and a re-encryption of them so that the permutation is not revealed.
Testing Library Specifications by Verifying Conformance Tests
DEFF Research Database (Denmark)
Kiniry, Joseph Roland; Zimmerman, Daniel; Hyland, Ralph
2012-01-01
of client programs. Specification and verification researchers regularly face the question of whether the library specifications we use are correct and useful, and we have collectively provided no good answers. Over the past few years we have created and refined a software engineering process, which we call...
A performance evaluation of personnel identity verifiers
International Nuclear Information System (INIS)
Maxwell, R.L.; Wright, L.J.
1987-01-01
Personnel identity verification devices, which are based on the examination and assessment of a body feature or a unique repeatable personal action, are steadily improving. These biometric devices are becoming more practical with respect to accuracy, speed, user compatibility, reliability and cost, but more development is necessary to satisfy the varied and sometimes ill-defined future requirements of the security industry. In an attempt to maintain an awareness of the availability and the capabilities of identity verifiers for the DOE security community, Sandia Laboratories continues to comparatively evaluate the capabilities and improvements of developing devices. An evaluation of several recently available verifiers is discussed in this paper. Operating environments and procedures more typical of physical access control use can reveal performance substantially different from the basic laboratory tests
Optimised resource construction for verifiable quantum computation
International Nuclear Information System (INIS)
Kashefi, Elham; Wallden, Petros
2017-01-01
Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. (paper)
A Practical Voter-Verifiable Election Scheme.
Chaum, D; Ryan, PYA; Schneider, SA
2005-01-01
We present an election scheme designed to allow voters to verify that their vote is accurately included in the count. The scheme provides a high degree of transparency whilst ensuring the secrecy of votes. Assurance is derived from close auditing of all the steps of the vote recording and counting process with minimal dependence on the system components. Thus, assurance arises from verification of the election rather than having to place trust in the correct behaviour of components of the vot...
Regular expression containment
DEFF Research Database (Denmark)
Henglein, Fritz; Nielsen, Lasse
2011-01-01
We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...
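For readers who want the axioms spelled out, here is a compact LaTeX restatement of the idempotent-semiring laws plus the fixed-point rule named in the abstract (standard Kleene-algebra notation; the coinduction rule itself is omitted, since its side condition is the technical heart of the paper):

```latex
% Idempotent-semiring axioms and the Kleene-star fixed-point rule.
\begin{align*}
E + E &= E, & E + F &= F + E, & (E + F) + G &= E + (F + G),\\
E + 0 &= E, & (E\,F)\,G &= E\,(F\,G), & 1\,E &= E = E\,1,\\
0\,E &= 0 = E\,0, & E\,(F + G) &= E\,F + E\,G, & (E + F)\,G &= E\,G + F\,G,\\
E^{*} &= 1 + E\,E^{*}.
\end{align*}
% Containment is the induced semiring order: E <= F iff E + F = F.
```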
Supersymmetric dimensional regularization
International Nuclear Information System (INIS)
Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.
1980-01-01
There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) First do all algebra exactly as in D = 4; (2) Then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
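A gradient-ascent toy version of this objective is easy to write down. The sketch below is our simplification (the paper optimizes its objective alternately; mcc_train, sigma and lam are our names): it maximizes the Gaussian-kernel correntropy of the residuals minus an L2 penalty, and shows the characteristic robustness to grossly mislabeled samples:

```python
import numpy as np

def mcc_train(X, y, sigma=1.0, lam=0.1, lr=0.005, iters=2000):
    """Gradient ascent on J(w) = sum_i exp(-(y_i - w.x_i)^2 / (2 sigma^2))
    - lam * ||w||^2: correntropy of the residuals plus an L2 penalty.
    Outlying samples get exponentially small gradient weight."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]      # least-squares warm start
    for _ in range(iters):
        r = y - X @ w
        weights = np.exp(-r ** 2 / (2 * sigma ** 2)) * r / sigma ** 2
        w += lr * (X.T @ weights - 2 * lam * w)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])
y[:5] = 10.0                       # a few grossly corrupted labels
print(mcc_train(X, y))             # close to the clean coefficients
```

A squared loss weights all residuals equally and would be dragged by the corrupted labels; the correntropy weights vanish for large residuals, which is the noise robustness the abstract claims.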
Descriptional complexity of non-unary self-verifying symmetric difference automata
CSIR Research Space (South Africa)
Marais, Laurette
2017-09-01
Previously, self-verifying symmetric difference automata were defined and a tight bound of 2^(n-1) - 1 was shown for state complexity in the unary case. We now consider the non-unary case and show that, for every n at least 2, there is a regular...
Analytic stochastic regularization: gauge and supersymmetry theories
International Nuclear Information System (INIS)
Abdalla, M.C.B.
1988-01-01
Analytic stochastic regularization for gauge and supersymmetric theories is considered. Gauge invariance in spinor and scalar QCD is verified to break down by an explicit one-loop computation of the two-, three- and four-point vertex functions of the gluon field. As a result, non-gauge-invariant counterterms must be added. However, in the supersymmetric multiplets there is a cancellation rendering the counterterms gauge invariant. The calculation is considered at one-loop order. (author)
Verified Subtyping with Traits and Mixins
Directory of Open Access Journals (Sweden)
Asankhaya Sharma
2014-07-01
Traits allow decomposing programs into smaller parts, and mixins are a form of composition that resembles multiple inheritance. Unfortunately, in the presence of traits, programming languages like Scala give up on the subtyping relation between objects. In this paper, we present a method to check subtyping between objects based on entailment in separation logic. We implement our method as a domain-specific language in Scala and apply it to the Scala standard library. We have verified that 67% of mixins used in the Scala standard library do indeed conform to subtyping between the traits that are used to build them.
Unary self-verifying symmetric difference automata
CSIR Research Space (South Africa)
Marais, Laurette
2016-07-01
Presented at the 18th International Workshop on Descriptional Complexity of Formal Systems, 5-8 July 2016, Bucharest, Romania. Laurette Marais and Lynette van Zijl, Department of Computer Science, Stellenbosch University.
Online Manifold Regularization by Dual Ascending Procedure
Directory of Open Access Journals (Sweden)
Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui
2013-01-01
We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transfer manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle the settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves a way to the design and analysis of online manifold regularization algorithms.
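The regularizer being transferred online is the usual graph-Laplacian penalty. As orientation, here is a batch Laplacian-regularized least-squares sketch of the same objective family (our own minimal version with a kNN heat-kernel graph; the paper's contribution, the dual-ascent online solver with buffering, is not reproduced):

```python
import numpy as np

def laplacian_rls(X, y_labeled, labeled_idx, gamma=1.0, k=5):
    """Batch manifold-regularized least squares: solve
    (S + gamma * L) f = y_ext, where L is a kNN graph Laplacian and S
    selects the labeled points. Unlabeled points receive labels by
    smoothness over the graph."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:       # k nearest neighbours
            W[i, j] = W[j, i] = np.exp(-d2[i, j])
    L = np.diag(W.sum(1)) - W                      # graph Laplacian
    S = np.zeros((n, n)); y_ext = np.zeros(n)
    S[labeled_idx, labeled_idx] = 1.0
    y_ext[labeled_idx] = y_labeled
    return np.linalg.solve(S + gamma * L + 1e-9 * np.eye(n), y_ext)

rng = np.random.default_rng(1)
X = np.r_[rng.normal(0, .3, (20, 2)), rng.normal(3, .3, (20, 2))]
f = laplacian_rls(X, np.array([-1., 1.]), np.array([0, 20]))
print((f[:20] < 0).mean(), (f[20:] > 0).mean())   # ~1.0, ~1.0
```

With one labeled point per cluster, the Laplacian term propagates the labels across each connected component, which is the behaviour the online algorithms maintain as examples stream in.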
Manifold Regularized Reinforcement Learning.
Li, Hongliang; Liu, Derong; Wang, Ding
2018-04-01
This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
Verifiable process monitoring through enhanced data authentication
International Nuclear Information System (INIS)
Goncalves, Joao G.M.; Schwalbach, Peter; Schoeneman, Barry Dale; Ross, Troy D.; Baldwin, George Thomas
2010-01-01
To ensure the peaceful intent for production and processing of nuclear fuel, verifiable process monitoring of the fuel production cycle is required. As part of a U.S. Department of Energy (DOE)-EURATOM collaboration in the field of international nuclear safeguards, the DOE Sandia National Laboratories (SNL), the European Commission Joint Research Centre (JRC) and Directorate General-Energy (DG-ENER) developed and demonstrated a new concept in process monitoring, enabling the use of operator process information by branching a second, authenticated data stream to the Safeguards inspectorate. This information would be complementary to independent safeguards data, improving the understanding of the plant's operation. The concept is called the Enhanced Data Authentication System (EDAS). EDAS transparently captures, authenticates, and encrypts communication data that is transmitted between operator control computers and connected analytical equipment utilized in nuclear processes controls. The intent is to capture information as close to the sensor point as possible to assure the highest possible confidence in the branched data. Data must be collected transparently by the EDAS: Operator processes should not be altered or disrupted by the insertion of the EDAS as a monitoring system for safeguards. EDAS employs public key authentication providing 'jointly verifiable' data and private key encryption for confidentiality. Timestamps and data source are also added to the collected data for analysis. The core of the system hardware is in a security enclosure with both active and passive tamper indication. Further, the system has the ability to monitor seals or other security devices in close proximity. This paper will discuss the EDAS concept, recent technical developments, intended application philosophy and the planned future progression of this system.
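The data path is easier to see in miniature. The sketch below is a stand-in, not the EDAS design: it uses a shared-key HMAC where EDAS uses public-key authentication, omits the encryption layer, and the names branch_record and flow-meter-7 are invented for illustration:

```python
import hmac, hashlib, json, time

KEY = b"shared-inspectorate-key"      # stand-in; EDAS signs with a private key

def branch_record(sensor_id, payload):
    """Capture a process-monitoring record as close to the sensor as
    possible, add timestamp and data source, and append an
    authentication tag so the branched inspectorate copy is verifiable."""
    record = {"src": sensor_id, "t": time.time(), "data": payload}
    msg = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}

def verify_record(wrapped):
    msg = json.dumps(wrapped["record"], sort_keys=True).encode()
    expected = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, wrapped["tag"])

r = branch_record("flow-meter-7", {"kg_per_h": 3.2})
print(verify_record(r))               # True
r["record"]["data"]["kg_per_h"] = 9.9
print(verify_record(r))               # False: tampering detected
```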
Analyser Framework to Verify Software Components
Directory of Open Access Journals (Sweden)
Rolf Andreas Rasenack
2009-01-01
Today, it is important for software companies to build software systems in a short time interval, to reduce costs and to maintain a good market position. Therefore, well organized and systematic development approaches are required. Reusing well-tested software components can be a good way to develop software applications effectively, since reuse of software components is less expensive and less time consuming than development from scratch. But it is dangerous to assume that software components can be combined without any problems. The components themselves may be well tested, but problems can still occur when they are composed, and most of these problems are based on interaction and communication. To avoid such errors, a framework for analysing software components has to be developed, one that determines the compatibility of corresponding software components. The promising approach discussed here presents a novel technique for analysing software components by applying an Abstract Syntax Language Tree (ASLT). A supportive environment is designed that checks the compatibility of black-box software components. This article addresses the question of how coupled software components can be verified by using an analyser framework, and determines the usage of the ASLT. Black-box software components and the Abstract Syntax Language Tree are the basis for the proposed framework and are discussed here to provide the background knowledge. The practical implementation of this framework is discussed, and results obtained in a test environment are shown.
Towards Verifying National CO2 Emissions
Fung, I. Y.; Wuerth, S. M.; Anderson, J. L.
2017-12-01
With the Paris Agreement, nations around the world have pledged their voluntary reductions in future CO2 emissions. Satellite observations of atmospheric CO2 have the potential to verify self-reported emission statistics around the globe. We present a carbon-weather data assimilation system, wherein raw weather observations together with satellite observations of the mixing ratio of column CO2 from the Orbiting Carbon Observatory-2 are assimilated every 6 hours into the NCAR carbon-climate model CAM5 coupled to the Ensemble Kalman Filter of DART. In an OSSE, we reduced the fossil fuel emissions from a country, and estimated the emissions innovations demanded by the atmospheric CO2 observations. The uncertainties in the innovation are analyzed with respect to the uncertainties in the meteorology to determine the significance of the result. The work follows from "On the use of incomplete historical data to infer the present state of the atmosphere" (Charney et al. 1969), which maps the path for continuous data assimilation for weather forecasting and the five decades of progress since.
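The assimilation step at the heart of such a system is an ensemble Kalman update. A toy scalar version (our illustration, not the CAM5/DART configuration; the observation operator h and all numbers are made up) shows how observed column CO2 pulls an over-reported emission prior toward the truth:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err, h):
    """One stochastic EnKF analysis step for a scalar emission state.
    h maps the state to the observed quantity (column CO2 here)."""
    rng = np.random.default_rng(0)
    hx = np.array([h(x) for x in ensemble])
    x_mean, hx_mean = ensemble.mean(), hx.mean()
    cov_xh = ((ensemble - x_mean) * (hx - hx_mean)).mean()
    var_hh = ((hx - hx_mean) ** 2).mean()
    k = cov_xh / (var_hh + obs_err ** 2)           # Kalman gain
    perturbed = obs + rng.normal(0, obs_err, ensemble.size)
    return ensemble + k * (perturbed - hx)

truth = 7.0                                        # actual emission rate
ens = np.random.default_rng(1).normal(10.0, 2.0, 50)   # self-reported prior
obs = truth * 1.8                                  # observed CO2 response
ens = enkf_update(ens, obs, obs_err=0.5, h=lambda x: 1.8 * x)
print(ens.mean())                                  # pulled toward ~7
```

The gap between the prior mean and the analysis mean is, in spirit, the "emissions innovation" the OSSE estimates; its significance then has to be judged against the meteorological uncertainty.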
Diverse Regular Employees and Non-regular Employment (Japanese)
MORISHIMA Motohiro
2011-01-01
Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...
Sparse structure regularized ranking
Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin
2014-04-17
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
'Regular' and 'emergency' repair
International Nuclear Information System (INIS)
Luchnik, N.V.
1975-01-01
Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)
Regularization of divergent integrals
Felder, Giovanni; Kazhdan, David
2016-01-01
We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.
EIT image reconstruction with four dimensional regularization.
Dai, Tao; Soleimani, Manuchehr; Adler, Andy
2008-09-01
Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although data are actually highly correlated especially in high speed EIT systems. This paper proposes a 4-D EIT image reconstruction for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance, in comparison to simpler image models.
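The augmented formulation can be written in a few lines for a linearized toy problem. The sketch below is our simplification (identity spatial prior, first-difference temporal prior, dense solve; real EIT uses problem-specific priors and iterative solvers) and jointly reconstructs a short frame sequence:

```python
import numpy as np

def temporal_tikhonov(J, Y, lam=0.1, mu=1.0):
    """Reconstruct T frames jointly: the usual spatial Tikhonov penalty
    is augmented with a temporal smoothness term coupling consecutive
    frames, in the spirit of the paper's 4-D prior."""
    m, n = J.shape            # measurements x image elements, per frame
    T = Y.shape[1]            # number of frames
    Jb = np.kron(np.eye(T), J)                       # block-diagonal system
    Rs = np.eye(n * T)                               # spatial prior
    D = np.diff(np.eye(T), axis=0)                   # (T-1) x T differences
    Rt = np.kron(D, np.eye(n))                       # temporal prior
    A = Jb.T @ Jb + lam ** 2 * Rs.T @ Rs + mu ** 2 * Rt.T @ Rt
    x = np.linalg.solve(A, Jb.T @ Y.T.ravel())
    return x.reshape(T, n)                           # one row per frame

rng = np.random.default_rng(2)
J = rng.normal(size=(30, 16))
X_true = np.vstack([np.sin(np.linspace(0, 3, 16)) * s for s in (1., 1.1, 1.2)])
Y = J @ X_true.T + 0.01 * rng.normal(size=(30, 3))
print(np.round(temporal_tikhonov(J, Y)[1] - X_true[1], 2))   # small residuals
```

Setting mu = 0 recovers frame-by-frame Tikhonov reconstruction; a nonzero mu is what lets correlated frames stabilize each other, as the paper's results illustrate.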
Regularizing portfolio optimization
International Nuclear Information System (INIS)
Still, Susanne; Kondor, Imre
2010-01-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
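The effect of the L2 "diversification pressure" is visible even in the simplest member of this family, the regularized minimum-variance portfolio (a sketch with a variance objective rather than the paper's expected shortfall; regularized_min_variance is our name):

```python
import numpy as np

def regularized_min_variance(returns, lam=0.1):
    """Minimum-variance weights with an L2 penalty: minimize
    w' C w + lam * ||w||^2 subject to sum(w) = 1; the closed-form
    solution is proportional to (C + lam*I)^{-1} 1."""
    C = np.cov(returns, rowvar=False)
    n = C.shape[0]
    inv = np.linalg.solve(C + lam * np.eye(n), np.ones(n))
    return inv / inv.sum()

rng = np.random.default_rng(3)
R = rng.normal(0, 0.01, size=(60, 8))       # 60 days, 8 assets: pure noise
print(np.round(regularized_min_variance(R, lam=0.0), 2))   # erratic weights
print(np.round(regularized_min_variance(R, lam=0.5), 2))   # near-uniform
```

With no true structure in the returns, the unregularized weights are artifacts of estimation error, while the penalized solution is pushed toward the diversified uniform portfolio, which is the stabilization the abstract describes.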
Regular Single Valued Neutrosophic Hypergraphs
Directory of Open Access Journals (Sweden)
Muhammad Aslam Malik
2016-12-01
In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.
The geometry of continuum regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-03-01
This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations
Annotation of Regular Polysemy
DEFF Research Database (Denmark)
Martinez Alonso, Hector
Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words...... and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empiric evidence for the underspecified sense, even...
Regularity of Minimal Surfaces
Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht
2010-01-01
"Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t
Regularities of radiation heredity
International Nuclear Information System (INIS)
Skakov, M.K.; Melikhov, V.D.
2001-01-01
Regularities of radiation heredity in metals and alloys are analyzed. It is concluded that irradiation causes thermodynamically irreversible changes in the structure of materials. Possible ways in which radiation effects are inherited through high-temperature transformations in the materials are proposed. The phenomenon of radiation heredity may be turned to practical use to control the structure of liquid metal and, respectively, the structure of the ingot via preliminary radiation treatment of the charge. Concentration microheterogeneities in the material defect structure induced by preliminary irradiation represent the genetic factor of radiation heredity.
A control system verifier using automated reasoning software
International Nuclear Information System (INIS)
Smith, D.E.; Seeman, S.E.
1985-08-01
An on-line, automated reasoning software system for verifying the actions of other software or human control systems has been developed. It was demonstrated by verifying the actions of an automated procedure generation system. The verifier uses an interactive theorem prover as its inference engine with the rules included as logical axioms. Operation of the verifier is generally transparent except when the verifier disagrees with the actions of the monitored software. Testing with an automated procedure generation system demonstrates the successful application of automated reasoning software for verification of logical actions in a diverse, redundant manner. A higher degree of confidence may be placed in the verified actions of the combined system
Effective field theory dimensional regularization
International Nuclear Information System (INIS)
Lehmann, Dirk; Prezeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.
2010-12-07
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...
USCIS E-Verify Customer Satisfaction Survey, January 2013
Department of Homeland Security — This report focuses on the customer satisfaction of companies currently enrolled in the E-Verify program. Satisfaction with E-Verify remains high and follows up a...
New concepts in nuclear arms control: verified cutoff and verified disposal
International Nuclear Information System (INIS)
Donnelly, W.H.
1990-01-01
Limiting the numbers of nuclear warheads by reducing military production and stockpiles of fissionable materials has been a constant item on the nuclear arms control agenda for the last 45 years. It has become more salient recently, however, because of two events: the enforced closure for safety reasons of the current United States military plutonium production facilities; and the possibility that the US and USSR may soon conclude an agreement providing for the verified destruction of significant numbers of nuclear warheads and the recovery of the fissionable material they contain with the option of transferring these materials to peaceful uses. A study has been made of the practical problems of verifying the cut-off of fissionable material production for military purposes in the nuclear weapon states, as well as providing assurance that material recovered from warheads is not re-used for proscribed military purposes and facilitating its transfer to civil uses. Implementation of such measures would have important implications for non-proliferation. The resultant paper was presented to a meeting of the PPNN Core Group held in Baden, close to Vienna, over the weekend of 18/19th November 1989 and is reprinted in this booklet. (author)
Verified Interval Orbit Propagation in Satellite Collision Avoidance
Römgens, B.A.; Mooij, E.; Naeije, M.C.
2011-01-01
Verified interval integration methods enclose a solution set corresponding to interval initial values and parameters, and bound integration and rounding errors. Verified methods suffer from overestimation of the solution, i.e., non-solutions are also included in the solution enclosure. Two verified
20 CFR 401.45 - Verifying your identity.
2010-04-01
Title 20 (Employees' Benefits), The Privacy Act, § 401.45 Verifying your identity. (a) When required. Unless you are making a... representative, you must verify your identity in accordance with paragraph (b) of this section if: (1) You make a...
28 CFR 802.13 - Verifying your identity.
2010-07-01
Title 28 (Judicial Administration), District of Columbia, Disclosure of Records, Privacy Act, § 802.13 Verifying your identity. (a) Requests for your own records. When you make a request for access to records about yourself, you must verify your identity. You...
Selection of regularization parameter for l1-regularized damage detection
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
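The second strategy is straightforward to prototype. The sketch below is our toy version (plain ISTA as the l1 solver, a random sensing matrix standing in for the structural sensitivity matrix, and a simple grid over lambda): it picks the regularization parameter whose residual variance best matches the assumed noise variance:

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Plain ISTA for min 0.5 * ||Ax - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

def pick_lambda(A, y, noise_var, grid):
    """Discrepancy-principle selection: choose the lambda whose residual
    variance is closest to the measurement-noise variance."""
    return min(grid,
               key=lambda lam: abs(np.var(y - A @ ista(A, y, lam)) - noise_var))

rng = np.random.default_rng(4)
A = rng.normal(size=(50, 100))
x_true = np.zeros(100); x_true[[3, 40]] = [1.0, -0.8]     # sparse 'damage'
y = A @ x_true + 0.05 * rng.normal(size=50)
lam = pick_lambda(A, y, noise_var=0.05 ** 2, grid=np.logspace(-3, 1, 20))
print(lam, np.nonzero(np.round(ista(A, y, lam), 2))[0])   # expect ~{3, 40}
```

Consistent with the abstract, a whole interval of lambda values near the selected one recovers the same sparse support, so the selection only needs to land inside that range.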
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
Adaptive Regularization of Neural Classifiers
DEFF Research Database (Denmark)
Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai
1997-01-01
We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...
Application of automated reasoning software: procedure generation system verifier
International Nuclear Information System (INIS)
Smith, D.E.; Seeman, S.E.
1984-09-01
An on-line, automated reasoning software system for verifying the actions of other software or human control systems has been developed. It was demonstrated by verifying the actions of an automated procedure generation system. The verifier uses an interactive theorem prover as its inference engine with the rules included as logic axioms. Operation of the verifier is generally transparent except when the verifier disagrees with the actions of the monitored software. Testing with an automated procedure generation system demonstrates the successful application of automated reasoning software for verification of logical actions in a diverse, redundant manner. A higher degree of confidence may be placed in the verified actions gathered by the combined system
A Finite Equivalence of Verifiable Multi-secret Sharing
Directory of Open Access Journals (Sweden)
Hui Zhao
2012-02-01
We give an abstraction of verifiable multi-secret sharing schemes that is accessible to a fully mechanized analysis. This abstraction is formalized within the applied pi-calculus by using an equational theory which characterizes the cryptographic semantics of secret shares. We also present an encoding from the equational theory into a convergent rewriting system, which is suitable for the automated protocol verifier ProVerif. Based on that, we verify the threshold certificate protocol in ProVerif.
2010-09-02
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks
Deblina Datta, S; Tangermann, Rudolf H; Reef, Susan; William Schluter, W; Adams, Anthony
2017-07-01
The Global Certification Commission (GCC), Regional Certification Commissions (RCCs), and National Certification Committees (NCCs) provide a framework of independent bodies to assist the Global Polio Eradication Initiative (GPEI) in certifying and maintaining polio eradication in a standardized, ongoing, and credible manner. Their members meet regularly to comprehensively review population immunity, surveillance, laboratory, and other data to assess polio status in the country (NCC), World Health Organization (WHO) region (RCC), or globally (GCC). These highly visible bodies provide a framework to be replicated to independently verify measles and rubella elimination in the regions and globally. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.
Mathematical Modeling the Geometric Regularity in Proteus Mirabilis Colonies
Zhang, Bin; Jiang, Yi; Minsu Kim Collaboration
Proteus mirabilis colonies exhibit striking spatiotemporal regularity: concentric ring patterns with alternating high and low bacterial density in space, and a periodic repetition of growth and swarming phases in time. We present a simple mathematical model to explain the spatiotemporal regularity of P. mirabilis colonies. We study a one-dimensional system. Using a reaction-diffusion model with thresholds in cell density and nutrient concentration, we recreated periodic growth and spread patterns, suggesting that the nutrient constraint and cell density regulation might be sufficient to explain the spatiotemporal periodicity in P. mirabilis colonies. We further verify this result using a cell-based model.
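A minimal version of the threshold mechanism fits in a short script. The sketch below is our illustration, not the authors' model: rates, thresholds and the periodic boundary handling are arbitrary choices, and reproducing the full concentric-ring pattern would need the complete model, including the cell-based details:

```python
import numpy as np

def proteus_front(nx=200, steps=20000, dt=0.01, dx=1.0,
                  b_c=0.5, n_c=0.2, D=1.0, g=1.0):
    """Threshold reaction-diffusion sketch of the abstract's mechanism:
    cells diffuse (swarm) only where density exceeds b_c and nutrient
    exceeds n_c, and grow by consuming nutrient; elsewhere the colony
    consolidates in place."""
    b = np.zeros(nx); b[:5] = 1.0          # inoculum at one edge
    n = np.ones(nx)                        # uniform initial nutrient
    for _ in range(steps):
        motile = (b > b_c) & (n > n_c)     # swarm vs consolidation phase
        lap = np.roll(b, -1) - 2 * b + np.roll(b, 1)
        b += dt * (D * motile * lap / dx ** 2 + g * b * n)
        n -= dt * g * b * n
    return b

print(np.round(proteus_front()[::20], 2))  # density profile left by the front
```

The two thresholds implement exactly the constraint named in the abstract: swarming is possible only where both cell density and nutrient are high enough, so the front repeatedly advances and stalls rather than spreading uniformly.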
Appraising the value of independent EIA follow-up verifiers
Energy Technology Data Exchange (ETDEWEB)
Wessels, Jan-Albert, E-mail: janalbert.wessels@nwu.ac.za [School of Geo and Spatial Sciences, Department of Geography and Environmental Management, North-West University, C/O Hoffman and Borcherd Street, Potchefstroom, 2520 (South Africa); Retief, Francois, E-mail: francois.retief@nwu.ac.za [School of Geo and Spatial Sciences, Department of Geography and Environmental Management, North-West University, C/O Hoffman and Borcherd Street, Potchefstroom, 2520 (South Africa); Morrison-Saunders, Angus, E-mail: A.Morrison-Saunders@murdoch.edu.au [School of Geo and Spatial Sciences, Department of Geography and Environmental Management, North-West University, C/O Hoffman and Borcherd Street, Potchefstroom, 2520 (South Africa); Environmental Assessment, School of Environmental Science, Murdoch University, Australia. (Australia)
2015-01-15
Independent Environmental Impact Assessment (EIA) follow-up verifiers such as monitoring agencies, checkers, supervisors and control officers are active on various construction sites across the world. There are, however, differing views on the value that these verifiers add and very limited learning in EIA has been drawn from independent verifiers. This paper aims to appraise how and to what extent independent EIA follow-up verifiers add value in major construction projects in the developing country context of South Africa. A framework for appraising the role of independent verifiers was established and four South African case studies were examined through a mixture of site visits, project document analysis, and interviews. Appraisal results were documented in the performance areas of: planning, doing, checking, acting, public participating and integration with other programs. The results indicate that independent verifiers add most value to major construction projects when involved with screening EIA requirements of new projects, allocation of financial and human resources, checking legal compliance, influencing implementation, reporting conformance results, community and stakeholder engagement, integration with self-responsibility programs such as environmental management systems (EMS), and controlling records. It was apparent that verifiers could be more creatively utilized in pre-construction preparation, providing feedback of knowledge into assessment of new projects, giving input to the planning and design phase of projects, and performance evaluation. The study confirms the benefits of proponent and regulator follow-up, specifically in having independent verifiers that disclose information, facilitate discussion among stakeholders, are adaptable and proactive, aid in the integration of EIA with other programs, and instill trust in EIA enforcement by conformance evaluation. Overall, the study provides insight on how to harness the learning opportunities
Continuum-regularized quantum gravity
International Nuclear Information System (INIS)
Chan Huesum; Halpern, M.B.
1987-01-01
The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)
New regular black hole solutions
International Nuclear Information System (INIS)
Lemos, Jose P. S.; Zanchin, Vilson T.
2011-01-01
In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.
Regular variation on measure chains
Czech Academy of Sciences Publication Activity Database
Řehák, Pavel; Vitovec, J.
2010-01-01
Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475
Manifold Regularized Correlation Object Tracking
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2017-01-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...
On geodesics in low regularity
Sämann, Clemens; Steinbauer, Roland
2018-02-01
We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
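To make the idea concrete, the sketch below clips the eigenvalues of a sample covariance so that its condition number stays below a chosen bound. This is a simplified stand-in, not the authors' estimator: Won et al. derive the maximum likelihood truncation level, whereas here the floor is a fixed heuristic and `kappa_max` is an assumed input.

```python
import numpy as np

def condition_regularized_cov(X, kappa_max=100.0):
    """Shrink a sample covariance so its condition number is at most kappa_max.

    Illustrative eigenvalue-clipping heuristic; the paper derives the maximum
    likelihood choice of the truncation level, which is not done here.
    """
    S = np.cov(X, rowvar=False)              # sample covariance (p x p)
    eigvals, eigvecs = np.linalg.eigh(S)     # eigenvalues in ascending order
    tau = eigvals.max() / kappa_max          # floor so that lmax/lmin <= kappa_max
    clipped = np.clip(eigvals, tau, eigvals.max())
    return eigvecs @ np.diag(clipped) @ eigvecs.T

# "large p small n": 20 samples in 50 dimensions, so the raw estimate is singular
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))
S_reg = condition_regularized_cov(X, kappa_max=50.0)
w = np.linalg.eigvalsh(S_reg)
print(w.max() / w.min())   # ~50: the estimate is well-conditioned and invertible
```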
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
NOS CO-OPS Water Level Data, Verified, High Low
National Oceanic and Atmospheric Administration, Department of Commerce — This dataset has verified (quality-controlled), daily, high low water level (tide) data from NOAA NOS Center for Operational Oceanographic Products and Services...
NOS CO-OPS Water Level Data, Verified, 6-Minute
National Oceanic and Atmospheric Administration, Department of Commerce — This dataset has verified (quality-controlled), 6-minute, water level (tide) data from NOAA NOS Center for Operational Oceanographic Products and Services (CO-OPS)....
Verifying Correct Usage of Atomic Blocks and Typestate: Technical Companion
National Research Council Canada - National Science Library
Beckman, Nels E; Aldrich, Jonathan
2008-01-01
In this technical report, we present a static and dynamic semantics as well as a proof of soundness for a programming language presented in the paper entitled, 'Verifying Correct Usage of Atomic Blocks and Typestate...
NOS CO-OPS Water Level Data, Verified, Hourly
National Oceanic and Atmospheric Administration, Department of Commerce — This dataset has verified (quality-controlled), hourly, water level (tide) data from NOAA NOS Center for Operational Oceanographic Products and Services (CO-OPS)....
Saramäki, Jari
2013-01-01
The concept of temporal networks is an extension of complex networks as a modeling framework to include information on when interactions between nodes happen. Many studies of the last decade examine how the static network structure affects dynamic systems on the network. In this traditional approach the temporal aspects are pre-encoded in the dynamic system model. Temporal-network methods, on the other hand, lift the temporal information from the level of system dynamics to the mathematical representation of the contact network itself. This framework becomes particularly useful for cases where there is a lot of structure and heterogeneity both in the timings of interaction events and the network topology. The advantage compared to common static network approaches is the ability to design more accurate models in order to explain and predict large-scale dynamic phenomena (such as, e.g., epidemic outbreaks and other spreading phenomena). On the other hand, temporal network methods are mathematically and concept...
Regularization of Instantaneous Frequency Attribute Computations
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computation of a temporally local frequency: 1) a stabilized instantaneous frequency using the theory of the analytic signal; 2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. "Time-Frequency Analysis: Theory and Applications." USA: Prentice Hall (1995). Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
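A minimal numpy/scipy sketch of the first method: the instantaneous frequency is the derivative of the phase of the analytic signal, and the division by the envelope term is stabilized with a small damping constant. That constant stands in for the L-curve/GCV-selected regularization described in the abstract; all parameter values are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(trace, dt, eps_frac=1e-3):
    """Stabilized instantaneous frequency (Hz) of a real-valued trace.

    Uses f = (x*y' - y*x') / (2*pi*(x^2 + y^2 + eps)), where x + i*y is the
    analytic signal. The damping eps is a crude stabilizer standing in for
    the roughness-penalized regularization discussed in the abstract.
    """
    z = hilbert(trace)
    x, y = z.real, z.imag
    dx = np.gradient(x, dt)
    dy = np.gradient(y, dt)
    eps = eps_frac * np.max(x**2 + y**2)
    return (x * dy - y * dx) / (2.0 * np.pi * (x**2 + y**2 + eps))

# 30 Hz test wavelet: the estimate should hover near 30 Hz inside the envelope
dt = 0.002
t = np.arange(0, 1, dt)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) / 0.2) ** 2)
f_inst = instantaneous_frequency(trace, dt)
print(f_inst[200:300].mean())   # ~30
```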
Geometric continuum regularization of quantum field theory
International Nuclear Information System (INIS)
Halpern, M.B.
1989-01-01
An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs
van der Aa, J.; Honing, H.; ten Cate, C.
2015-01-01
Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no go paradigm we show that zebra finches are able to distinguish between an isochronous
Reasoning about knowledge: Children's evaluations of generality and verifiability.
Koenig, Melissa A; Cole, Caitlin A; Meyer, Meredith; Ridge, Katherine E; Kushnir, Tamar; Gelman, Susan A
2015-12-01
In a series of experiments, we examined 3- to 8-year-old children's (N=223) and adults' (N=32) use of two properties of testimony to estimate a speaker's knowledge: generality and verifiability. Participants were presented with a "Generic speaker" who made a series of 4 general claims about "pangolins" (a novel animal kind), and a "Specific speaker" who made a series of 4 specific claims about "this pangolin" as an individual. To investigate the role of verifiability, we systematically varied whether the claim referred to a perceptually-obvious feature visible in a picture (e.g., "has a pointy nose") or a non-evident feature that was not visible (e.g., "sleeps in a hollow tree"). Three main findings emerged: (1) young children showed a pronounced reliance on verifiability that decreased with age. Three-year-old children were especially prone to credit knowledge to speakers who made verifiable claims, whereas 7- to 8-year-olds and adults credited knowledge to generic speakers regardless of whether the claims were verifiable; (2) children's attributions of knowledge to generic speakers were not detectable until age 5, and only when those claims were also verifiable; (3) children often generalized speakers' knowledge outside of the pangolin domain, indicating a belief that a person's knowledge about pangolins likely extends to new facts. Findings indicate that young children may be inclined to doubt speakers who make claims they cannot verify themselves, as well as a developmentally increasing appreciation for speakers who make general claims. Copyright © 2015 Elsevier Inc. All rights reserved.
Sparsity-regularized HMAX for visual recognition.
Directory of Open Access Journals (Sweden)
Xiaolin Hu
About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.
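The two computational ingredients the abstract names can be sketched compactly: local max pooling between layers, followed by standard sparse coding on the pooled output. The toy below is our illustration, not the authors' implementation: the dictionary is random rather than learned, and ISTA is used as the sparse-coding solver.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=100):
    """Sparse coding min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)              # gradient of the smooth part
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def max_pool(features, pool=2):
    """Spatial max pooling; this is the step that introduces the higher-order
    statistical regularities that make sparse coding useful at upper layers."""
    h, w = features.shape
    f = features[: h - h % pool, : w - w % pool]
    return f.reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))

rng = np.random.default_rng(1)
layer1 = rng.normal(size=(16, 16))         # stand-in for lower-layer responses
pooled = max_pool(layer1)                  # C-layer: local max pooling
D = rng.normal(size=(pooled.size, 32))     # random dictionary (would be learned)
D /= np.linalg.norm(D, axis=0)
codes = ista_sparse_code(D, pooled.ravel())
print(np.count_nonzero(codes), "of", codes.size, "units active")  # sparse output
```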
Verifying three-dimensional skull model reconstruction using cranial index of symmetry.
Kung, Woon-Man; Chen, Shuo-Tsung; Lin, Chung-Hsiang; Lu, Yu-Mei; Chen, Tzu-Hsuan; Lin, Muh-Shi
2013-01-01
Difficulty exists in scalp adaptation for cranioplasty with customized computer-assisted design/manufacturing (CAD/CAM) implant in situations of excessive wound tension and sub-cranioplasty dead space. To solve this clinical problem, the CAD/CAM technique should include algorithms to reconstruct a depressed contour to cover the skull defect. Satisfactory CAM-derived alloplastic implants are based on highly accurate three-dimensional (3-D) CAD modeling. Thus, it is quite important to establish a symmetrically regular CAD/CAM reconstruction prior to depressing the contour. The purpose of this study is to verify the aesthetic outcomes of CAD models with regular contours using cranial index of symmetry (CIS). From January 2011 to June 2012, decompressive craniectomy (DC) was performed for 15 consecutive patients in our institute. 3-D CAD models of skull defects were reconstructed using commercial software. These models were checked in terms of symmetry by CIS scores. CIS scores of CAD reconstructions were 99.24±0.004% (range 98.47-99.84). CIS scores of these CAD models were statistically significantly greater than 95%, identical to 99.5%, but lower than 99.6% (p<0.001, p = 0.064, p = 0.021 respectively, Wilcoxon matched pairs signed rank test). These data evidenced the highly accurate symmetry of these CAD models with regular contours. CIS calculation is beneficial to assess aesthetic outcomes of CAD-reconstructed skulls in terms of cranial symmetry. This enables further accurate CAD models and CAM cranial implants with depressed contours, which are essential in patients with difficult scalp adaptation.
Metric regularity and subdifferential calculus
International Nuclear Information System (INIS)
Ioffe, A D
2000-01-01
The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces
Manifold Regularized Correlation Object Tracking.
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2018-05-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
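The block-circulant trick this abstract relies on is worth seeing in miniature: ridge regression over all cyclic shifts of a base sample diagonalizes under the DFT and reduces to elementwise operations. The sketch below shows only that core (the manifold regularization term and the semisupervised extension of the paper are omitted); names and parameters are illustrative.

```python
import numpy as np

def train_filter(x, y, lam=1e-2):
    """Ridge regression over ALL cyclic shifts of base sample x at once.

    The data matrix of cyclic shifts is circulant, so the solution is
    elementwise in the Fourier domain:
        w_hat = x_hat * y_hat / (|x_hat|^2 + lam).
    """
    x_hat, y_hat = np.fft.fft(x), np.fft.fft(y)
    return x_hat * y_hat / (np.abs(x_hat) ** 2 + lam)

def response_map(w_hat, z):
    """Filter response for every cyclic shift of test sample z (correlation theorem)."""
    return np.real(np.fft.ifft(np.fft.fft(z) * np.conj(w_hat)))

rng = np.random.default_rng(2)
n = 64
x = rng.normal(size=n)
d = np.minimum(np.arange(n), n - np.arange(n))   # circular distance from shift 0
y = np.exp(-0.5 * (d / 2.0) ** 2)                # Gaussian regression target at shift 0
w_hat = train_filter(x, y)
print(response_map(w_hat, np.roll(x, 5)).argmax())   # 5: the peak follows the shift
```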
Incentivizing Verifiable Privacy-Protection Mechanisms for Offline Crowdsensing Applications.
Sun, Jiajun; Liu, Ningzhong
2017-09-04
Incentive mechanisms for crowdsensing have recently been intensively explored. Most of these mechanisms mainly focus on standard economic goals like truthfulness and utility maximization. However, enormous privacy and security challenges need to be faced directly in real-life environments, such as cost privacy. In this paper, we investigate offline verifiable privacy-protection crowdsensing issues. We first present a general verifiable privacy-protection incentive mechanism for the offline homogeneous and heterogeneous sensing job model. In addition, we also propose a more complex verifiable privacy-protection incentive mechanism for the offline submodular sensing job model. The two mechanisms not only address the privacy protection of users and the platform, but also ensure the verifiable correctness of payments between platform and users. Finally, we demonstrate that the two mechanisms achieve privacy protection, verifiable correctness of payments, and the same revenue as the generic mechanism without privacy protection. Our experiments also validate that the two mechanisms are both scalable and efficient, and applicable for mobile devices in crowdsensing applications based on auctions, where the main incentive for the user is the remuneration.
Dimensional regularization in configuration space
International Nuclear Information System (INIS)
Bollini, C.G.; Giambiagi, J.J.
1995-09-01
Dimensional regularization is introduced in configuration space by Fourier transforming in D-dimensions the perturbative momentum space Green functions. For this transformation, Bochner theorem is used, no extra parameters, such as those of Feynman or Bogoliubov-Shirkov are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several example are discussed. (author). 9 refs
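For orientation, the Bochner step for a rotationally invariant function reduces the D-dimensional Fourier transform to a one-dimensional Hankel-type integral; the following is a standard textbook form (our paraphrase, not a formula quoted from the paper):

```latex
\tilde f(p) \;=\; \frac{(2\pi)^{D/2}}{p^{\,D/2-1}} \int_0^\infty f(r)\, J_{D/2-1}(pr)\, r^{D/2}\, \mathrm{d}r
```

The right-hand side is analytic in D, which is what allows the Green functions to be continued to complex dimension and makes the ultraviolet divergences reappear as poles in ν, as the abstract describes.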
Regular algebra and finite machines
Conway, John Horton
2012-01-01
World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, commutative regular alg
Matrix regularization of 4-manifolds
Trzetrzelewski, M.
2012-01-01
We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...
DEFF Research Database (Denmark)
Tryggestad, Kjell; Justesen, Lise; Mouritsen, Jan
2013-01-01
Purpose – The purpose of this paper is to explore how animals can become stakeholders in interaction with project management technologies and what happens with project temporalities when new and surprising stakeholders become part of a project and a recognized matter of concern to be taken into account. Design/methodology/approach – The paper is based on a qualitative case study of a project in the building industry. The authors use actor-network theory (ANT) to analyze the emergence of animal stakeholders, stakes and temporalities. Findings – The study shows how project temporalities can multiply in interaction with project management technologies and how conventional linear conceptions of project time may be contested with the emergence of new non-human stakeholders and temporalities. Research limitations/implications – The study draws on ANT to show how animals can become stakeholders...
(2+1)-dimensional regular black holes with nonlinear electrodynamics sources
Directory of Open Access Journals (Sweden)
Yun He
2017-11-01
On the basis of two requirements: the avoidance of the curvature singularity and the Maxwell theory as the weak field limit of the nonlinear electrodynamics, we find two restricted conditions on the metric function of a (2+1)-dimensional regular black hole in general relativity coupled with nonlinear electrodynamics sources. By the use of the two conditions, we obtain a general approach to construct (2+1)-dimensional regular black holes. In this manner, we construct four (2+1)-dimensional regular black holes as examples. We also study the thermodynamic properties of the regular black holes and verify the first law of black hole thermodynamics.
DEFF Research Database (Denmark)
Lumaca, Massimo; Haumann, Niels Trusbak; Brattico, Elvira
2017-01-01
A core design feature of human communication systems and expressive behaviours is their temporal organization. The cultural evolutionary origins of this feature remain unclear. Here, we test the hypothesis that regularities in the temporal organization of signalling sequences arise in the course...
Verifiable Distribution of Material Goods Based on Cryptology
Directory of Open Access Journals (Sweden)
Radomír Palovský
2015-12-01
Counterfeiting of material goods is a general problem. In this paper an architecture for verifiable distribution of material goods is presented. This distribution is based on printing a QR code on goods which would contain a digitally signed serial number of the product, and the validity of this digital signature could be verified by a customer. An extension consisting of adding digital signatures to revenue stamps used for state-controlled goods is also presented. Discussion of the possibilities of making copies leads to the conclusion that cryptographic security needs to be complemented by technical difficulties of copying.
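A minimal sketch of the signing and verification flow the abstract describes. The paper does not specify a signature scheme; Ed25519 via the Python `cryptography` package and the payload layout are our assumptions. Note that the signature proves the serial number is authentic but does not by itself prevent copying the printed code, which is why the paper discusses the technical difficulty of copying.

```python
import base64
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Manufacturer side: sign the product serial number; the base64 payload
# (serial + signature) is what would be encoded into the printed QR code.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()          # published for customers

serial = b"PRODUCT-000123"                     # hypothetical serial number
signature = private_key.sign(serial)
qr_payload = base64.b64encode(serial + b"|" + base64.b64encode(signature))

# Customer side: decode the QR payload and verify the signature against
# the manufacturer's published public key.
decoded = base64.b64decode(qr_payload)
claimed_serial, sig_b64 = decoded.split(b"|", 1)
try:
    public_key.verify(base64.b64decode(sig_b64), claimed_serial)
    print("genuine serial:", claimed_serial.decode())
except InvalidSignature:
    print("counterfeit or tampered code")
```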
Regularization of Nonmonotone Variational Inequalities
International Nuclear Information System (INIS)
Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.
2006-01-01
In this paper we extend the Tikhonov-Browder regularization scheme from monotone to rather a general class of nonmonotone multivalued variational inequalities. We show that their convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems
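As a reminder of the scheme being extended (a textbook single-valued form, not quoted from the paper): given a map F on a closed convex set K, the Tikhonov-Browder regularization replaces the original variational inequality by the perturbed problems

```latex
\text{find } x_\varepsilon \in K \text{ such that } \langle F(x_\varepsilon) + \varepsilon x_\varepsilon,\; y - x_\varepsilon \rangle \ge 0 \quad \text{for all } y \in K,
```

and studies the limit of x_ε as ε → 0⁺. Monotonicity of F is the classical condition for convergence; the paper's contribution is identifying nonmonotone multivalued classes for which the convergence conditions still hold.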
Lattice regularized chiral perturbation theory
International Nuclear Information System (INIS)
Borasoy, Bugra; Lewis, Randy; Ouimet, Pierre-Philippe A.
2004-01-01
Chiral perturbation theory can be defined and regularized on a spacetime lattice. A few motivations are discussed here, and an explicit lattice Lagrangian is reviewed. A particular aspect of the connection between lattice chiral perturbation theory and lattice QCD is explored through a study of the Wess-Zumino-Witten term
2011-01-20
... Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... meeting of the Board will be open to the...
Forcing absoluteness and regularity properties
Ikegami, D.
2010-01-01
For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.
Globals of Completely Regular Monoids
Institute of Scientific and Technical Information of China (English)
Wu Qian-qian; Gan Ai-ping; Du Xian-kun
2015-01-01
An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.
Fluid queues and regular variation
Boxma, O.J.
1996-01-01
This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail gives rise to an even
Fluid queues and regular variation
O.J. Boxma (Onno)
1996-01-01
This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail
Empirical laws, regularity and necessity
Koningsveld, H.
1973-01-01
In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject.
I am referring especially to two well-known views, viz. the regularity and
Interval matrices: Regularity generates singularity
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Shary, S.P.
2018-01-01
Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016
Regularization in Matrix Relevance Learning
Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael
In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can
Verifying different-modality properties for concepts produces switching costs.
Pecher, Diane; Zeelenberg, René; Barsalou, Lawrence W
2003-03-01
According to perceptual symbol systems, sensorimotor simulations underlie the representation of concepts. It follows that sensorimotor phenomena should arise in conceptual processing. Previous studies have shown that switching from one modality to another during perceptual processing incurs a processing cost. If perceptual simulation underlies conceptual processing, then verifying the properties of concepts should exhibit a switching cost as well. For example, verifying a property in the auditory modality (e.g., BLENDER-loud) should be slower after verifying a property in a different modality (e.g., CRANBERRIES-tart) than after verifying a property in the same modality (e.g., LEAVES-rustling). Only words were presented to subjects, and there were no instructions to use imagery. Nevertheless, switching modalities incurred a cost, analogous to the cost of switching modalities in perception. A second experiment showed that this effect was not due to associative priming between properties in the same modality. These results support the hypothesis that perceptual simulation underlies conceptual processing.
Elements of a system for verifying a Comprehensive Test Ban
International Nuclear Information System (INIS)
Hannon, W.J.
1987-01-01
The paper discusses the goals of a monitoring system for a CTB, its functions, the challenges to verification, discrimination techniques, and some recent developments. It is concluded that technical, military and political efforts are required to establish and verify test ban treaties which will contribute to stability in the long term. It currently appears there will be a significant number of unidentified events.
An experiment designed to verify the general theory of relativity
International Nuclear Information System (INIS)
Surdin, Maurice
1960-01-01
The project is presented for an experiment which uses the effect of gravitation on Maser-type clocks placed on the ground at two different heights and which is designed to verify the general theory of relativity. Reprint of a paper published in Comptes rendus des seances de l'Academie des Sciences, t. 250, p. 299-301, sitting of 11 January 1960 [fr]
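For a sense of scale (the height difference below is assumed for illustration; the note does not state one), the gravitational frequency shift between clocks separated by a height Δh is

```latex
\frac{\Delta\nu}{\nu} \;=\; \frac{g\,\Delta h}{c^2} \;\approx\; \frac{9.81 \times 10}{(3\times 10^{8})^2} \;\approx\; 1.1\times 10^{-15} \qquad (\Delta h = 10\ \mathrm{m}),
```

which shows why clocks of Maser-grade stability were proposed for the measurement.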
Building Program Verifiers from Compilers and Theorem Provers
2015-05-14
Checking with SMT: UFO • LLVM-based front-end (partially reused in SeaHorn) • Combines Abstract Interpretation with Interpolation-Based Model Checking • (no... assertions. Counter-examples are long; it is hard to determine (from main) what is relevant. (Gurfinkel, 2015)
Verifying a smart design of TCAP : a synergetic experience
T. Arts; I.A. van Langevelde
1999-01-01
An optimisation of the SS No. 7 Transport Capabilities Procedures is verified by specifying both the original and the optimised TCAP in μCRL, generating transition systems for both using the μCRL tool set, and checking weak bisimulation
A Trustworthy Internet Auction Model with Verifiable Fairness.
Liao, Gen-Yih; Hwang, Jing-Jang
2001-01-01
Describes an Internet auction model achieving verifiable fairness, a requirement aimed at enhancing the trust of bidders in auctioneers. Analysis results demonstrate that the proposed model satisfies various requirements regarding fairness and privacy. Moreover, in the proposed model, the losing bids remain sealed. (Author/AEF)
The Guided System Development Framework: Modeling and Verifying Communication Systems
DEFF Research Database (Denmark)
Carvalho Quaresma, Jose Nuno; Probst, Christian W.; Nielson, Flemming
2014-01-01
the verified specification. The refinement process thus carries security properties from the model to the implementation. Our approach also supports verification of systems previously developed and deployed. Internally, the reasoning in our framework is based on the Beliefs and Knowledge tool, a verification tool based on belief logics and explicit attacker knowledge.
Making Digital Artifacts on the Web Verifiable and Reliable
Kuhn, T.; Dumontier, M.
2015-01-01
The current Web has no general mechanisms to make digital artifacts - such as datasets, code, texts, and images - verifiable and permanent. For digital artifacts that are supposed to be immutable, there is moreover no commonly accepted method to enforce this immutability. These shortcomings have a
Analyzing Interaction Patterns to Verify a Simulation/Game Model
Myers, Rodney Dean
2012-01-01
In order for simulations and games to be effective for learning, instructional designers must verify that the underlying computational models being used have an appropriate degree of fidelity to the conceptual models of their real-world counterparts. A simulation/game that provides incorrect feedback is likely to promote misunderstanding and…
Regular and conformal regular cores for static and rotating solutions
Energy Technology Data Exchange (ETDEWEB)
Azreg-Aïnou, Mustapha
2014-03-07
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
Regular and conformal regular cores for static and rotating solutions
International Nuclear Information System (INIS)
Azreg-Aïnou, Mustapha
2014-01-01
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
Verifying Embedded Systems using Component-based Runtime Observers
DEFF Research Database (Denmark)
Guan, Wei; Marian, Nicolae; Angelov, Christo K.
against formally specified properties. This paper presents a component-based design method for runtime observers, which are configured from instances of prefabricated reusable components---Predicate Evaluator (PE) and Temporal Evaluator (TE). The PE computes atomic propositions for the TE; the latter is a reconfigurable component processing a data structure, representing the state transition diagram of a non-deterministic state machine, i.e. a Buchi automaton derived from a system property specified in Linear Temporal Logic (LTL). Observer components have been implemented using design models and design patterns...
Physical model of dimensional regularization
Energy Technology Data Exchange (ETDEWEB)
Schonfeld, Jonathan F.
2016-12-15
We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
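The regularizer's key quantity is easy to compute for discrete responses: a plug-in estimate of the mutual information between classifier outputs and labels. The sketch below shows only that MI term (the paper couples it with classification loss and a complexity penalty when learning a linear classifier); data and names are illustrative.

```python
import numpy as np

def empirical_mutual_information(responses, labels):
    """Plug-in estimate of I(response; label) from co-occurrence counts.

    A simple stand-in for the entropy-based MI estimation in the paper.
    """
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    l_vals, l_idx = np.unique(labels, return_inverse=True)
    joint = np.zeros((r_vals.size, l_vals.size))
    for i, j in zip(r_idx, l_idx):
        joint[i, j] += 1.0
    joint /= joint.sum()
    # product of marginals, for the log-ratio inside the MI sum
    marg = joint.sum(axis=1, keepdims=True) @ joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / marg[nz])))

labels = np.array([0, 0, 1, 1, 1, 0, 1, 0])
informative = labels.copy()                       # responses match the labels
independent = np.array([0, 1, 0, 1, 0, 0, 1, 1])  # independent of the labels
print(empirical_mutual_information(informative, labels))  # ln 2 ~ 0.693 (maximal)
print(empirical_mutual_information(independent, labels))  # 0.0 (uninformative)
```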
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Regularized strings with extrinsic curvature
International Nuclear Information System (INIS)
Ambjoern, J.; Durhuus, B.
1987-07-01
We analyze models of discretized string theories, where the path integral over world sheet variables is regularized by summing over triangulated surfaces. The inclusion of curvature in the action is a necessity for the scaling of the string tension. We discuss the physical properties of models with extrinsic curvature terms in the action and show that the string tension vanishes at the critical point where the bare extrinsic curvature coupling tends to infinity. Similar results are derived for models with intrinsic curvature. (orig.)
Circuit complexity of regular languages
Czech Academy of Sciences Publication Activity Database
Koucký, Michal
2009-01-01
Roč. 45, č. 4 (2009), s. 865-879 ISSN 1432-4350 R&D Projects: GA ČR GP201/07/P276; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : regular languages * circuit complexity * upper and lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.726, year: 2009
Getting What We Paid for: a Script to Verify Full Access to E-Resources
Directory of Open Access Journals (Sweden)
Kristina M. Spurgin
2014-07-01
Libraries regularly pay for packages of e-resources containing hundreds to thousands of individual titles. Ideally, library patrons could access the full content of all titles in such packages. In reality, library staff and patrons inevitably stumble across inaccessible titles, but no library has the resources to manually verify full access to all titles, and basic URL checkers cannot check for access. This article describes the E-Resource Access Checker—a script that automates the verification of full access. With the Access Checker, library staff can identify all inaccessible titles in a package and bring these problems to content providers’ attention to ensure we get what we pay for.
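A minimal sketch of the approach (not the published Access Checker itself, which applies provider-specific rules): fetch each title's URL and look for paywall phrases in the page body, since a plain URL checker only sees the HTTP status. The marker strings and function names here are illustrative assumptions.

```python
import requests

# Hypothetical markers; real providers differ, so a production checker would
# carry per-provider rules rather than one generic list.
PAYWALL_MARKERS = ["purchase this article", "sign in to access", "get access"]

def check_full_access(url, timeout=15):
    """Best-effort check that a URL serves full text rather than a paywall.

    Returns (accessible, reason). Unlike a basic URL checker, this also
    inspects the response body for paywall phrases.
    """
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException as exc:
        return False, f"request failed: {exc}"
    if resp.status_code != 200:
        return False, f"HTTP {resp.status_code}"
    body = resp.text.lower()
    for marker in PAYWALL_MARKERS:
        if marker in body:
            return False, f"paywall marker found: {marker!r}"
    return True, "OK"

# for title_url in titles_from_package:   # e.g. parsed from a KBART title list
#     print(title_url, check_full_access(title_url))
```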
General inverse problems for regular variation
DEFF Research Database (Denmark)
Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan
2014-01-01
Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
Software Platform Evaluation - Verifiable Fuel Cycle Simulation (VISION) Model
International Nuclear Information System (INIS)
J. J. Jacobson; D. E. Shropshire; W. B. West
2005-01-01
The purpose of this Software Platform Evaluation (SPE) is to document the top-level evaluation of potential software platforms on which to construct a simulation model that satisfies the requirements for a Verifiable Fuel Cycle Simulation Model (VISION) of the Advanced Fuel Cycle (AFC). See the Software Requirements Specification for Verifiable Fuel Cycle Simulation (VISION) Model (INEEL/EXT-05-02643, Rev. 0) for a discussion of the objective and scope of the VISION model. VISION is intended to serve as a broad systems analysis and study tool applicable to work conducted as part of the AFCI (including cost estimates) and Generation IV reactor development studies. This document will serve as a guide for selecting the most appropriate software platform for VISION. This is a "living document" that will be modified over the course of the execution of this work.
Verifiable Measurement-Only Blind Quantum Computing with Stabilizer Testing.
Hayashi, Masahito; Morimae, Tomoyuki
2015-11-27
We introduce a simple protocol for verifiable measurement-only blind quantum computing. Alice, a client, can perform only single-qubit measurements, whereas Bob, a server, can generate and store entangled many-qubit states. Bob generates copies of a graph state, which is a universal resource state for measurement-based quantum computing, and sends Alice each qubit of them one by one. Alice adaptively measures each qubit according to her program. If Bob is honest, he generates the correct graph state, and, therefore, Alice can obtain the correct computation result. Regarding the security, whatever Bob does, Bob cannot get any information about Alice's computation because of the no-signaling principle. Furthermore, malicious Bob does not necessarily send the copies of the correct graph state, but Alice can check the correctness of Bob's state by directly verifying the stabilizers of some copies.
Phenotype in 18 Danish subjects with genetically verified CHARGE syndrome
DEFF Research Database (Denmark)
Husu, E; Hove, Hd; Farholt, Stense
2013-01-01
problems (12/15) were other frequent cranial nerve dysfunctions. Three-dimensional reconstructions of MRI scans showed temporal bone abnormalities in >85%. CHARGE syndrome present a broad phenotypic spectrum, although some clinical features are more frequently occurring than others. Here, we suggest...
Verifying the gravitational shift due to the earth's rotation
International Nuclear Information System (INIS)
Briatore, L.; Leschiutta, S.
1976-01-01
Data on various independent time scales kept in different laboratories are elaborated in order to verify the gravitational shift due to the earth's rotation. It is shown that the state of the art in the measurement of time now makes it possible to measure Δt/t ≈ 10⁻¹³. Moreover, experimental evidence of the relativistic effects of the earth's rotation is shown.
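A back-of-the-envelope magnitude (illustrative numbers, not the paper's data): a laboratory at latitude φ moves with speed v = ωR cos φ due to the earth's rotation, so the special-relativistic rate difference between two laboratories is at most about

```latex
\frac{\Delta t}{t} \;\simeq\; \frac{v_1^2 - v_2^2}{2c^2} \;\lesssim\; \frac{(465\ \mathrm{m/s})^2}{2c^2} \;\approx\; 1.2\times 10^{-12},
```

ignoring the partially compensating gravitational potential term on the geoid. A timing capability of Δt/t ≈ 10⁻¹³ is therefore just sufficient to resolve such effects between distant laboratories.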
Building and Verifying a Predictive Model of Interruption Resumption
2012-03-01
the gardener to remember those plants (and whether they need to be removed), and so will not commit resources to remember that information. The overall... camera), the storyteller needed help much less often. This result suggests that when there is no one to help them remember the last thing they said... INVITED PAPER: Building and Verifying a Predictive Model of Interruption Resumption. Help from a robot, to allow a human storyteller to continue
Verifying a nuclear weapon's response to radiation environments
Energy Technology Data Exchange (ETDEWEB)
Dean, F.F.; Barrett, W.H.
1998-05-01
The process described in the paper is being applied as part of the design verification of a replacement component designed for a nuclear weapon currently in the active stockpile. This process is an adaptation of the process successfully used in nuclear weapon development programs. The verification process concentrates on evaluating system response to radiation environments, verifying system performance during and after exposure to radiation environments, and assessing system survivability.
TrustGuard: A Containment Architecture with Verified Output
2017-01-01
that the TrustGuard system has minimal performance decline, despite restrictions such as high communication latency and limited available bandwidth... design are the availability of high bandwidth and low delays between the host and the monitoring chip. 3-D integration provides an alternate way of...
Large test rigs verify Clinch River control rod reliability
International Nuclear Information System (INIS)
Michael, H.D.; Smith, G.G.
1983-01-01
The purpose of the Clinch River control test programme was to use multiple full-scale prototypic control rod systems for verifying the system's ability to perform reliably during simulated reactor power control and emergency shutdown operations. Two major facilities, the Shutdown Control Rod and Maintenance (Scram) facility and the Dynamic and Seismic Test (Dast) facility, were constructed. The test programme of each facility is described. (UK)
Holme, Petter; Saramäki, Jari
2012-10-01
A great variety of systems in nature, society and technology - from the web of sexual contacts to the Internet, from the nervous system to power grids - can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via e-mail, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names - temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. This review covers different fields where temporal graphs are considered
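The point that transitivity can fail in temporal networks is easy to make concrete: a contact sequence only transmits along time-respecting paths. The sketch below (our illustration; the event format and function names are assumptions) computes earliest-arrival reachability with a single time-ordered scan.

```python
from collections import defaultdict

def temporal_reachable(events, source, t_start=0):
    """Vertices reachable from `source` via time-respecting paths.

    `events` is a list of (u, v, t) contacts (undirected here). A path
    a->b->c is usable only if the b-c contact comes after the a-b contact,
    which this forward pass over time-ordered events respects.
    """
    earliest = defaultdict(lambda: float("inf"))
    earliest[source] = t_start
    for u, v, t in sorted(events, key=lambda e: e[2]):   # scan contacts in time order
        if earliest[u] <= t and t < earliest[v]:
            earliest[v] = t
        if earliest[v] <= t and t < earliest[u]:
            earliest[u] = t
    return {node for node, t in earliest.items() if t < float("inf")}

# The a-b contact happens AFTER b-c, so spreading from `a` never reaches c or d,
# even though the static (time-aggregated) graph connects all four vertices.
events = [("b", "c", 1), ("a", "b", 2), ("c", "d", 3)]
print(temporal_reachable(events, "a"))   # {'a', 'b'}
```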
Robustness and device independence of verifiable blind quantum computing
International Nuclear Information System (INIS)
Gheorghiu, Alexandru; Kashefi, Elham; Wallden, Petros
2015-01-01
Recent advances in theoretical and experimental quantum computing bring us closer to scalable quantum computing devices. This makes the need for protocols that verify the correct functionality of quantum operations timely and has led to the field of quantum verification. In this paper we address key challenges to make quantum verification protocols applicable to experimental implementations. We prove the robustness of the single server verifiable universal blind quantum computing protocol of Fitzsimons and Kashefi (2012 arXiv:1203.5217) in the most general scenario. This includes the case where the purification of the deviated input state is in the hands of an adversarial server. The proved robustness property allows the composition of this protocol with a device-independent state tomography protocol that we give, which is based on the rigidity of CHSH games as proposed by Reichardt et al (2013 Nature 496 456–60). The resulting composite protocol has lower round complexity for the verification of entangled quantum servers with a classical verifier and, as we show, can be made fault tolerant. (paper)
Regularization methods in Banach spaces
Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S
2012-01-01
Regularization methods aimed at finding stable approximate solutions are a necessary tool to tackle inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based rather on conventions than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B
Academic Training Lecture - Regular Programme
PH Department
2011-01-01
Regular Lecture Programme 9 May 2011 ACT Lectures on Detectors - Inner Tracking Detectors by Pippa Wells (CERN) 10 May 2011 ACT Lectures on Detectors - Calorimeters (2/5) by Philippe Bloch (CERN) 11 May 2011 ACT Lectures on Detectors - Muon systems (3/5) by Kerstin Hoepfner (RWTH Aachen) 12 May 2011 ACT Lectures on Detectors - Particle Identification and Forward Detectors by Peter Krizan (University of Ljubljana and J. Stefan Institute, Ljubljana, Slovenia) 13 May 2011 ACT Lectures on Detectors - Trigger and Data Acquisition (5/5) by Dr. Brian Petersen (CERN) from 11:00 to 12:00 at CERN ( Bldg. 222-R-001 - Filtration Plant )
Verifying three-dimensional skull model reconstruction using cranial index of symmetry.
Directory of Open Access Journals (Sweden)
Woon-Man Kung
BACKGROUND: Difficulty exists in scalp adaptation for cranioplasty with customized computer-assisted design/manufacturing (CAD/CAM) implant in situations of excessive wound tension and sub-cranioplasty dead space. To solve this clinical problem, the CAD/CAM technique should include algorithms to reconstruct a depressed contour to cover the skull defect. Satisfactory CAM-derived alloplastic implants are based on highly accurate three-dimensional (3-D) CAD modeling. Thus, it is quite important to establish a symmetrically regular CAD/CAM reconstruction prior to depressing the contour. The purpose of this study is to verify the aesthetic outcomes of CAD models with regular contours using cranial index of symmetry (CIS). MATERIALS AND METHODS: From January 2011 to June 2012, decompressive craniectomy (DC) was performed for 15 consecutive patients in our institute. 3-D CAD models of skull defects were reconstructed using commercial software. These models were checked in terms of symmetry by CIS scores. RESULTS: CIS scores of CAD reconstructions were 99.24±0.004% (range 98.47-99.84). CIS scores of these CAD models were statistically significantly greater than 95%, identical to 99.5%, but lower than 99.6% (p<0.001, p = 0.064, p = 0.021 respectively, Wilcoxon matched pairs signed rank test). These data evidenced the highly accurate symmetry of these CAD models with regular contours. CONCLUSIONS: CIS calculation is beneficial to assess aesthetic outcomes of CAD-reconstructed skulls in terms of cranial symmetry. This enables further accurate CAD models and CAM cranial implants with depressed contours, which are essential in patients with difficult scalp adaptation.
Smolin, Lee
2015-11-01
Two people may claim both to be naturalists, but have divergent conceptions of basic elements of the natural world which lead them to mean different things when they talk about laws of nature, or states, or the role of mathematics in physics. These disagreements do not much affect the ordinary practice of science which is about small subsystems of the universe, described or explained against a background, idealized to be fixed. But these issues become crucial when we consider including the whole universe within our system, for then there is no fixed background to reference observables to. I argue here that the key issue responsible for divergent versions of naturalism and divergent approaches to cosmology is the conception of time. One version, which I call temporal naturalism, holds that time, in the sense of the succession of present moments, is real, and that laws of nature evolve in that time. This is contrasted with timeless naturalism, which holds that laws are immutable and the present moment and its passage are illusions. I argue that temporal naturalism is empirically more adequate than the alternatives, because it offers testable explanations for puzzles its rivals cannot address, and is likely a better basis for solving major puzzles that presently face cosmology and physics. This essay also addresses the problem of qualia and experience within naturalism and argues that only temporal naturalism can make a place for qualia as intrinsic qualities of matter.
An alternative test for verifying electronic balance linearity
International Nuclear Information System (INIS)
Thomas, I.R.
1998-02-01
This paper presents an alternative method for verifying electronic balance linearity and accuracy. This method is being developed for safeguards weighings (weighings for the control and accountability of nuclear material) at the Idaho National Engineering and Environmental Laboratory (INEEL). With regard to balance linearity and accuracy, DOE Order 5633.3B, Control and Accountability of Nuclear Materials, Paragraph 2, 4, e, (1), (a) Scales and Balances Program, states: "All scales and balances used for accountability purposes shall be maintained in good working condition, recalibrated according to an established schedule, and checked for accuracy and linearity on each day that the scale or balance is used for accountability purposes." Various tests have been proposed for testing accuracy and linearity. In the 1991 Measurement Science Conference, Dr. Walter E. Kupper presented a paper entitled "Validation of High Accuracy Weighing Equipment." Dr. Kupper emphasized that tolerance checks for calibrated, state-of-the-art electronic equipment need not be complicated, and he presented four easy steps for verifying that a calibrated balance is operating correctly. These tests evaluate the standard deviation of successive weighings (of the same load), the off-center error, the calibration error, and the error due to nonlinearity. This method of balance validation is undoubtedly an authoritative means of ensuring balance operability, yet it could have two drawbacks: one, the test for linearity is not intuitively obvious, especially from a statistical viewpoint; and two, there is an absence of definitively defined testing limits. Hence, this paper describes an alternative means of verifying electronic balance linearity and accuracy that is being developed for safeguards measurements at the INEEL
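Kupper's four checks reduce to simple statistics over a handful of weighings. The sketch below is illustrative only: the readings, the half-load linearity convention, and the absence of tolerance limits are assumptions, not values or rules from the INEEL program.

```python
import statistics

def balance_checks(repeat_readings, corner_readings, ref_reading, ref_mass,
                   half_load_reading, full_load_reading):
    """Kupper's four quick balance checks, as simple statistics.

    All conventions here are illustrative; a real safeguards program would
    take its tolerance limits from a measurement control plan.
    """
    # 1. repeatability: standard deviation of successive weighings of one load
    repeatability = statistics.stdev(repeat_readings)
    # 2. off-center error: spread when the same load is moved corner to corner
    off_center = max(corner_readings) - min(corner_readings)
    # 3. calibration error: reading of a calibrated standard minus its value
    calibration_error = ref_reading - ref_mass
    # 4. nonlinearity: mid-scale deviation from the line through 0 and full load
    nonlinearity = half_load_reading - full_load_reading / 2.0
    return repeatability, off_center, calibration_error, nonlinearity

checks = balance_checks(
    repeat_readings=[100.0002, 100.0001, 100.0003, 100.0002],  # grams
    corner_readings=[100.0001, 100.0004, 100.0002, 100.0003],
    ref_reading=100.0002, ref_mass=100.0000,
    half_load_reading=50.0004, full_load_reading=100.0002,
)
print(checks)
```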
Verifying the integrity of hardcopy document using OCR
CSIR Research Space (South Africa)
Mthethwa, Sthembile
2018-03-01
Full Text Available stream_source_info Mthethwa_20042_2018.pdf.txt stream_content_type text/plain stream_size 7349 Content-Encoding UTF-8 stream_name Mthethwa_20042_2018.pdf.txt Content-Type text/plain; charset=UTF-8 Verifying the Integrity...) of the document to be defined. Each text in the meta-template is labelled with a unique identifier, which makes it easier for the process of validation. The meta-template consist of two types of text; normal text and validation text (important text that must...
Verifying Architectural Design Rules of the Flight Software Product Line
Ganesan, Dharmalingam; Lindvall, Mikael; Ackermann, Chris; McComas, David; Bartholomew, Maureen
2009-01-01
This paper presents experiences of verifying architectural design rules of the NASA Core Flight Software (CFS) product line implementation. The goal of the verification is to check whether the implementation is consistent with the CFS architectural rules derived from the developer's guide. The results indicate that consistency checking helps a) identifying architecturally significant deviations that were eluded during code reviews, b) clarifying the design rules to the team, and c) assessing the overall implementation quality. Furthermore, it helps connecting business goals to architectural principles, and to the implementation. This paper is the first step in the definition of a method for analyzing and evaluating product line implementations from an architecture-centric perspective.
From Operating-System Correctness to Pervasively Verified Applications
Daum, Matthias; Schirmer, Norbert W.; Schmidt, Mareike
Though program verification is known and has been used for decades, the verification of a complete computer system still remains a grand challenge. Part of this challenge is the interaction of application programs with the operating system, which is usually entrusted with retrieving input data from and transferring output data to peripheral devices. In this scenario, the correct operation of the applications inherently relies on operating-system correctness. Based on the formal correctness of our real-time operating system Olos, this paper describes an approach to pervasively verify applications running on top of the operating system.
Psychological and social aspects verified after the Goiania's radioactive accident
International Nuclear Information System (INIS)
Helou, Suzana
1995-01-01
Psychological and social aspects verified after the radioactive accident occurred in 1987 in Goiania - brazilian city - are discussed. With this goal was going presented a public opinion research in order to retract the Goiania's radioactive accident residual psychological effects. They were going consolidated data obtained in 1.126 interviews. Four involvement different levels groups with the accident are compared with regard to the event. The research allowed to conclude that the accident affected psychologically somehow all Goiania's population. Besides, the research allowed to analyze the professionals performance quality standard in terms of the accident
Flux wire measurements in Cavalier for verifying computer code applications
International Nuclear Information System (INIS)
Fehr, M.; Stubbs, J.; Hosticka, B.
1988-01-01
The Cavalier and UVAR research reactors are to be converted from high-enrichment uranium (HEU) to low-enrichment uranium (LEU) fuel. As a first step, an extensive set of gold wire activation measurements has been taken on the Cavalier reactor. Axial traverses show internal consistency to the order of ±5%, while horizontal traverses show somewhat larger deviations. The activation measurements will be converted to flux measurements via the Thermos code and will then be used to verify the Leopard-2DB codes. The codes will ultimately be used to design an upgraded LEU core for the UVAR
Verifying Galileo's discoveries: telescope-making at the Collegio Romano
Reeves, Eileen; van Helden, Albert
The Jesuits of the Collegio Romano in Rome, especially the mathematicians Clavius and Grienberger, were very interested in Galilei's discoveries. After they had failed to recognize with telescopes of own construction the celestial phenomena, they expressed serious doubts. But from November 1610 onward, after they had built a better telescope and had obtained from Venice another one in addition, and could verify Galilei's observations, they completely accepted them. Clavius, who stuck to the Ptolemaic system till his death in 1612, even pointed out these facts in his last edition of Sacrobosco's Sphaera. He as well as his conpatres, however, avoided any conclusions with respect to the planetary system.
ASTUS system for verifying the transport seal TITUS 1
International Nuclear Information System (INIS)
Barillaux; Monteil, D.; Destain, G.D.
1991-01-01
ASTUS, a system for acquisition and processing ultrasonic signatures of TITUS 1 seals has been developed. TITUS seals are used to verify the integrity of the fissile material's container sealing after transport. An autonomous portable reading case permit to take seals signatures at the starting point and to transmit these reference signatures to a central safeguards computer by phonic modem. Then, at the terminal point with a similar reading case, an authority takes again the signature of seals and immediately transmit these signatures to the central safeguards computer. The central computer processes the data in real time by autocorrelation and return its verdict to the terminal point
Verifying real-time systems against scenario-based requirements
DEFF Research Database (Denmark)
Larsen, Kim Guldstrand; Li, Shuhao; Nielsen, Brian
2009-01-01
We propose an approach to automatic verification of real-time systems against scenario-based requirements. A real-time system is modeled as a network of Timed Automata (TA), and a scenario-based requirement is specified as a Live Sequence Chart (LSC). We define a trace-based semantics for a kernel...... subset of the LSC language. By equivalently translating an LSC chart into an observer TA and then non-intrusively composing this observer with the original system model, the problem of verifying a real-time system against a scenario-based requirement reduces to a classical real-time model checking...
Spin temperature concept verified by optical magnetometry of nuclear spins
Vladimirova, M.; Cronenberger, S.; Scalbert, D.; Ryzhov, I. I.; Zapasskii, V. S.; Kozlov, G. G.; Lemaître, A.; Kavokin, K. V.
2018-01-01
We develop a method of nonperturbative optical control over adiabatic remagnetization of the nuclear spin system and apply it to verify the spin temperature concept in GaAs microcavities. The nuclear spin system is shown to exactly follow the predictions of the spin temperature theory, despite the quadrupole interaction that was earlier reported to disrupt nuclear spin thermalization. These findings open a way for the deep cooling of nuclear spins in semiconductor structures, with the prospect of realizing nuclear spin-ordered states for high-fidelity spin-photon interfaces.
RES: Regularized Stochastic BFGS Algorithm
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both, the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian egeinvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
Regularized Label Relaxation Linear Regression.
Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu
2018-04-01
Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.
Verifying competence of operations personnel in nuclear power plants
International Nuclear Information System (INIS)
Farber, G.H.
1986-01-01
To ensure that only competent people are authorized to fill positions in a nuclear power plant, both the initial competence of personnel and the continuous maintenance of competence have to be verified. Two main methods are normally used for verifying competence, namely evaluation of a person's performance over a period of time, and evaluation of his knowledge and skills at a particular time by means of an examination. Both methods have limitations, and in practice they are often used together to give different and to some extent complementary evaluations of a person's competence. Verification of competence itself is a problem area, because objective judging of human competence is extremely difficult. Formal verification methods, such as tests and examinations, are particularly or exclusively applied for the direct operating personnel in the control room (very rarely for management personnel). Out of the many elements contributing to a person's competence, the knowledge which is needed and the intellectual skills are the main subjects of the formal verification methods. Therefore the presentation will concentrate on the proof of the technical qualification of operators by means of examinations. The examination process in the Federal Republic of Germany for the proof of knowledge and skills will serve as an example to describe and analyze the important aspects. From that recommendations are derived regarding standardization of the procedure as well as validation. (orig./GL)
People consider reliability and cost when verifying their autobiographical memories.
Wade, Kimberley A; Nash, Robert A; Garry, Maryanne
2014-02-01
Because memories are not always accurate, people rely on a variety of strategies to verify whether the events that they remember really did occur. Several studies have examined which strategies people tend to use, but none to date has asked why people opt for certain strategies over others. Here we examined the extent to which people's beliefs about the reliability and the cost of different strategies would determine their strategy selection. Subjects described a childhood memory and then suggested strategies they might use to verify the accuracy of that memory. Next, they rated the reliability and cost of each strategy, and the likelihood that they might use it. Reliability and cost each predicted strategy selection, but a combination of the two ratings provided even greater predictive value. Cost was significantly more influential than reliability, which suggests that a tendency to seek and to value "cheap" information more than reliable information could underlie many real-world memory errors. Copyright © 2013 Elsevier B.V. All rights reserved.
A record and verify system for radiotherapy treatment
International Nuclear Information System (INIS)
Koens, M.L.; Vroome, H. de
1984-01-01
The Record and Verify system developed for the radiotherapy department of the Leiden University Hospital is described. The system has been in use since 1980 and will now be installed in at least four of the Dutch University Hospitals. The system provides the radiographer with a powerful tool for checking the set-up of the linear accelerator preceeding the irradiation of a field. After the irradiation of a field the machine settings are registered in the computer system together with the newly calculated cumulative dose. These registrations are used by the system to produce a daily report which provides the management of the department with insight into the established differences between treatment and treatment planning. Buying a record and verify system from the manufacturer of the linear accelerator is not an optimal solution especially for a department with more than one accelerator from different manufacturers. Integration in a Hospital Information System (HIS) has important advantages over the development of a dedicated departmental system. (author)
Characterizing Verified Head Impacts in High School Girls' Lacrosse.
Caswell, Shane V; Lincoln, Andrew E; Stone, Hannah; Kelshaw, Patricia; Putukian, Margot; Hepburn, Lisa; Higgins, Michael; Cortes, Nelson
2017-12-01
Girls' high school lacrosse players have higher rates of head and facial injuries than boys. Research indicates that these injuries are caused by stick, player, and ball contacts. Yet, no studies have characterized head impacts in girls' high school lacrosse. To characterize girls' high school lacrosse game-related impacts by frequency, magnitude, mechanism, player position, and game situation. Descriptive epidemiology study. Thirty-five female participants (mean age, 16.2 ± 1.2 years; mean height, 1.66 ± 0.05 m; mean weight, 61.2 ± 6.4 kg) volunteered during 28 games in the 2014 and 2015 lacrosse seasons. Participants wore impact sensors affixed to the right mastoid process before each game. All game-related impacts recorded by the sensors were verified using game video. Data were summarized for all verified impacts in terms of frequency, peak linear acceleration (PLA), and peak rotational acceleration (PRA). Descriptive statistics and impact rates were calculated. Fifty-eight verified game-related impacts ≥20 g were recorded (median PLA, 33.8 g; median PRA, 6151.1 rad/s 2 ) during 467 player-games. The impact rate for all game-related verified impacts was 0.12 per athlete-exposure (AE) (95% CI, 0.09-0.16), equivalent to 2.1 impacts per team game, indicating that each athlete suffered fewer than 2 head impacts per season ≥20 g. Of these impacts, 28 (48.3%) were confirmed to directly strike the head, corresponding with an impact rate of 0.05 per AE (95% CI, 0.00-0.10). Overall, midfielders (n = 28, 48.3%) sustained the most impacts, followed by defenders (n = 12, 20.7%), attackers (n = 11, 19.0%), and goalies (n = 7, 12.1%). Goalies demonstrated the highest median PLA and PRA (38.8 g and 8535.0 rad/s 2 , respectively). The most common impact mechanisms were contact with a stick (n = 25, 43.1%) and a player (n = 17, 29.3%), followed by the ball (n = 7, 12.1%) and the ground (n = 7, 12.1%). One hundred percent of ball impacts occurred to goalies. Most impacts
From inactive to regular jogger
DEFF Research Database (Denmark)
Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup
study was conducted using individual semi-structured interviews on how a successful long-term behavior change had been achieved. Ten informants were purposely selected from participants in the DANO-RUN research project (7 men, 3 women, average age 41.5). Interviews were performed on the basis of Theory...... of Planned Behavior (TPB) and The Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results TPB: During the behavior change process, the intention to jogging shifted from a focus on weight loss and improved fitness to both physical health, psychological......Title From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers Authors Pernille Lund-Cramer & Vibeke Brinkmann Løite Purpose Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven...
Gallistel, C.R.; Craig, Andrew R.; Shahan, Timothy A.
2015-01-01
Contingency, and more particularly temporal contingency, has often figured in thinking about the nature of learning. However, it has never been formally defined in such a way as to make it a measure that can be applied to most animal learning protocols. We use elementary information theory to define contingency in such a way as to make it a measurable property of almost any conditioning protocol. We discuss how making it a measurable construct enables the exploration of the role of different contingencies in the acquisition and performance of classically and operantly conditioned behavior. PMID:23994260
Gallistel, C R; Craig, Andrew R; Shahan, Timothy A
2014-01-01
Contingency, and more particularly temporal contingency, has often figured in thinking about the nature of learning. However, it has never been formally defined in such a way as to make it a measure that can be applied to most animal learning protocols. We use elementary information theory to define contingency in such a way as to make it a measurable property of almost any conditioning protocol. We discuss how making it a measurable construct enables the exploration of the role of different contingencies in the acquisition and performance of classically and operantly conditioned behavior. Copyright © 2013 Elsevier B.V. All rights reserved.
Tessellating the Sphere with Regular Polygons
Soto-Johnson, Hortensia; Bechthold, Dawn
2004-01-01
Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tesellate the sphere are spherical triangles, squares and pentagons.
On the equivalence of different regularization methods
International Nuclear Information System (INIS)
Brzezowski, S.
1985-01-01
The R-circunflex-operation preceded by the regularization procedure is discussed. Some arguments are given, according to which the results may depend on the method of regularization, introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)
The uniqueness of the regularization procedure
International Nuclear Information System (INIS)
Brzezowski, S.
1981-01-01
On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)
van der Aa, Jeroen; Honing, Henkjan; ten Cate, Carel
2015-06-01
Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no go paradigm we show that zebra finches are able to distinguish between an isochronous and an irregular stimulus. However, when the tempo of the isochronous stimulus is changed, it is no longer treated as similar to the training stimulus. Training with three isochronous and three irregular stimuli did not result in improvement of the generalization. In contrast, humans, exposed to the same stimuli, readily generalized across tempo changes. Our results suggest that zebra finches distinguish the different stimuli by learning specific local temporal features of each individual stimulus rather than attending to the global structure of the stimuli, i.e., to the temporal regularity. Copyright © 2015 Elsevier B.V. All rights reserved.
[The development and evaluation of software to verify diagnostic accuracy].
Jensen, Rodrigo; de Moraes Lopes, Maria Helena Baena; Silveira, Paulo Sérgio Panse; Ortega, Neli Regina Siqueira
2012-02-01
This article describes the development and evaluation of software that verifies the accuracy of diagnoses made by nursing students. The software was based on a model that uses fuzzy logic concepts, including PERL, the MySQL database for Internet accessibility, and the NANDA-I 2007-2008 classification system. The software was evaluated in terms of its technical quality and usability through specific instruments. The activity proposed in the software involves four stages in which students establish the relationship values between nursing diagnoses, defining characteristics/risk factors and clinical cases. The relationship values determined by students are compared to those of specialists, generating performance scores for the students. In the evaluation, the software demonstrated satisfactory outcomes regarding the technical quality and, according to the students, helped in their learning and may become an educational tool to teach the process of nursing diagnosis.
Calling Out Cheaters : Covert Security with Public VerifiabilitySecurity
DEFF Research Database (Denmark)
Asharov, Gilad; Orlandi, Claudio
2012-01-01
We introduce the notion of covert security with public verifiability, building on the covert security model introduced by Aumann and Lindell (TCC 2007). Protocols that satisfy covert security guarantee that the honest parties involved in the protocol will notice any cheating attempt with some...... constant probability ε. The idea behind the model is that the fear of being caught cheating will be enough of a deterrent to prevent any cheating attempt. However, in the basic covert security model, the honest parties are not able to persuade any third party (say, a judge) that a cheating occurred. We...... propose (and formally define) an extension of the model where, when an honest party detects cheating, it also receives a certificate that can be published and used to persuade other parties, without revealing any information about the honest party’s input. In addition, malicious parties cannot create fake...
Developing a flexible and verifiable integrated dose assessment capability
International Nuclear Information System (INIS)
Parzyck, D.C.; Rhea, T.A.; Copenhaver, E.D.; Bogard, J.S.
1987-01-01
A flexible yet verifiable system of computing and recording personnel doses is needed. Recent directions in statutes establish the trend of combining internal and external doses. We are developing a Health Physics Information Management System (HPIMS) that will centralize dosimetry calculations and data storage; integrate health physics records with other health-related disciplines, such as industrial hygiene, medicine, and safety; provide a more auditable system with published algorithms and clearly defined flowcharts of system operation; readily facilitate future changes dictated by new regulations, new dosimetric models, and new systems of units; and address ad-hoc inquiries regarding worker/workplace interactions, including potential synergisms with non-radiation exposures. The system is modular and provides a high degree of isolation from low-level detail, allowing flexibility for changes without adversely affecting other parts of the system. 10 refs., 3 figs
Design of a verifiable subset for HAL/S
Browne, J. C.; Good, D. I.; Tripathi, A. R.; Young, W. D.
1979-01-01
An attempt to evaluate the applicability of program verification techniques to the existing programming language, HAL/S is discussed. HAL/S is a general purpose high level language designed to accommodate the software needs of the NASA Space Shuttle project. A diversity of features for scientific computing, concurrent and real-time programming, and error handling are discussed. The criteria by which features were evaluated for inclusion into the verifiable subset are described. Individual features of HAL/S with respect to these criteria are examined and justification for the omission of various features from the subset is provided. Conclusions drawn from the research are presented along with recommendations made for the use of HAL/S with respect to the area of program verification.
A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories
Narkawicz, Anthony; Munoz, Cesar
2015-01-01
In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
Leveraging Parallel Data Processing Frameworks with Verified Lifting
Directory of Open Access Journals (Sweden)
Maaz Bin Safeer Ahmad
2016-11-01
Full Text Available Many parallel data frameworks have been proposed in recent years that let sequential programs access parallel processing. To capitalize on the benefits of such frameworks, existing code must often be rewritten to the domain-specific languages that each framework supports. This rewriting–tedious and error-prone–also requires developers to choose the framework that best optimizes performance given a specific workload. This paper describes Casper, a novel compiler that automatically retargets sequential Java code for execution on Hadoop, a parallel data processing framework that implements the MapReduce paradigm. Given a sequential code fragment, Casper uses verified lifting to infer a high-level summary expressed in our program specification language that is then compiled for execution on Hadoop. We demonstrate that Casper automatically translates Java benchmarks into Hadoop. The translated results execute on average 3.3x faster than the sequential implementations and scale better, as well, to larger datasets.
Developing an Approach for Analyzing and Verifying System Communication
Stratton, William C.; Lindvall, Mikael; Ackermann, Chris; Sibol, Deane E.; Godfrey, Sally
2009-01-01
This slide presentation reviews a project for developing an approach for analyzing and verifying the inter system communications. The motivation for the study was that software systems in the aerospace domain are inherently complex, and operate under tight constraints for resources, so that systems of systems must communicate with each other to fulfill the tasks. The systems of systems requires reliable communications. The technical approach was to develop a system, DynSAVE, that detects communication problems among the systems. The project enhanced the proven Software Architecture Visualization and Evaluation (SAVE) tool to create Dynamic SAVE (DynSAVE). The approach monitors and records low level network traffic, converting low level traffic into meaningful messages, and displays the messages in a way the issues can be detected.
Verifying atom entanglement schemes by testing Bell's inequality
International Nuclear Information System (INIS)
Angelakis, D.G.; Knight, P.L.; Tregenna, B.; Munro, W.J.
2001-01-01
Recent experiments to test Bell's inequality using entangled photons and ions aimed at tests of basic quantum mechanical principles. Interesting results have been obtained and many loopholes could be closed. In this paper we want to point out that tests of Bell's inequality also play an important role in verifying atom entanglement schemes. We describe as an example a scheme to prepare arbitrary entangled states of N two-level atoms using a leaky optical cavity and a scheme to entangle atoms inside a photonic crystal. During the state preparation no photons are emitted, and observing a violation of Bell's inequality is the only way to test whether a scheme works with a high precision or not. (orig.)
Noninteractive Verifiable Outsourcing Algorithm for Bilinear Pairing with Improved Checkability
Directory of Open Access Journals (Sweden)
Yanli Ren
2017-01-01
Full Text Available It is well known that the computation of bilinear pairing is the most expensive operation in pairing-based cryptography. In this paper, we propose a noninteractive verifiable outsourcing algorithm of bilinear pairing based on two servers in the one-malicious model. The outsourcer need not execute any expensive operation, such as scalar multiplication and modular exponentiation. Moreover, the outsourcer could detect any failure with a probability close to 1 if one of the servers misbehaves. Therefore, the proposed algorithm improves checkability and decreases communication cost compared with the previous ones. Finally, we utilize the proposed algorithm as a subroutine to achieve an anonymous identity-based encryption (AIBE scheme with outsourced decryption and an identity-based signature (IBS scheme with outsourced verification.
Modelling and Verifying Communication Failure of Hybrid Systems in HCSP
DEFF Research Database (Denmark)
Wang, Shuling; Nielson, Flemming; Nielson, Hanne Riis
2016-01-01
Hybrid systems are dynamic systems with interacting discrete computation and continuous physical processes. They have become ubiquitous in our daily life, e.g. automotive, aerospace and medical systems, and in particular, many of them are safety-critical. For a safety-critical hybrid system......, in the presence of communication failure, the expected control from the controller will get lost and as a consequence the physical process cannot behave as expected. In this paper, we mainly consider the communication failure caused by the non-engagement of one party in communication action, i.......e. the communication itself fails to occur. To address this issue, this paper proposes a formal framework by extending HCSP, a formal modeling language for hybrid systems, for modeling and verifying hybrid systems in the absence of receiving messages due to communication failure. We present two inference systems...
A detailed and verified wind resource atlas for Denmark
Energy Technology Data Exchange (ETDEWEB)
Mortensen, N G; Landberg, L; Rathmann, O; Nielsen, M N [Risoe National Lab., Roskilde (Denmark); Nielsen, P [Energy and Environmental Data, Aalberg (Denmark)
1999-03-01
A detailed and reliable wind resource atlas covering the entire land area of Denmark has been established. Key words of the methodology are wind atlas analysis, interpolation of wind atlas data sets, automated generation of digital terrain descriptions and modelling of local wind climates. The atlas contains wind speed and direction distributions, as well as mean energy densities of the wind, for 12 sectors and four heights above ground level: 25, 45, 70 and 100 m. The spatial resolution is 200 meters in the horizontal. The atlas has been verified by comparison with actual wind turbine power productions from over 1200 turbines. More than 80% of these turbines were predicted to within 10%. The atlas will become available on CD-ROM and on the Internet. (au)
Verifying reciprocal relations for experimental diffusion coefficients in multicomponent mixtures
DEFF Research Database (Denmark)
Medvedev, Oleg; Shapiro, Alexander
2003-01-01
The goal of the present study is to verify the agreement of the available data on diffusion in ternary mixtures with the theoretical requirement of linear non-equilibrium thermodynamics consisting in symmetry of the matrix of the phenomenological coefficients. A common set of measured diffusion...... coefficients for a three-component mixture consists of four Fickian diffusion coefficients, each being reported separately. However, the Onsager theory predicts the existence of only three independent coefficients, as one of them disappears due to the symmetry requirement. Re-calculation of the Fickian...... extended sets of experimental data and reliable thermodynamic models were available. The sensitivity of the symmetry property to different thermodynamic parameters of the models was also checked. (C) 2003 Elsevier Science B.V. All rights reserved....
How to Verify and Manage the Translational Plagiarism?
Wiwanitkit, Viroj
2016-01-01
The use of Google translator as a tool for determining translational plagiarism is a big challenge. As noted, plagiarism of the original papers written in Macedonian and translated into other languages can be verified after computerised translation in other languages. Attempts to screen the translational plagiarism should be supported. The use of Google Translate tool might be helpful. Special focus should be on any non-English reference that might be the source of plagiarised material and non-English article that might translate from an original English article, which cannot be detected by simple plagiarism screening tool. It is a hard job for any journal to detect the complex translational plagiarism but the harder job might be how to effectively manage the case. PMID:27703588
Application of Turchin's method of statistical regularization
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of the Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.
Regular extensions of some classes of grammars
Nijholt, Antinus
Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammers, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular
High-resolution seismic data regularization and wavefield separation
Cao, Aimin; Stump, Brian; DeShon, Heather
2018-04-01
We present a new algorithm, non-equispaced fast antileakage Fourier transform (NFALFT), for irregularly sampled seismic data regularization. Synthetic tests from 1-D to 5-D show that the algorithm may efficiently remove leaked energy in the frequency wavenumber domain, and its corresponding regularization process is accurate and fast. Taking advantage of the NFALFT algorithm, we suggest a new method (wavefield separation) for the detection of the Earth's inner core shear wave with irregularly distributed seismic arrays or networks. All interfering seismic phases that propagate along the minor arc are removed from the time window around the PKJKP arrival. The NFALFT algorithm is developed for seismic data, but may also be used for other irregularly sampled temporal or spatial data processing.
Automated measurement and control of concrete properties in a ready mix truck with VERIFI.
2014-02-01
In this research, twenty batches of concrete with six different mixture proportions were tested with VERIFI to evaluate 1) accuracy : and repeatability of VERIFI measurements, 2) ability of VERIFI to adjust slump automatically with water and admixtur...
A survey of temporal data mining
Indian Academy of Sciences (India)
Data mining is concerned with analysing large volumes of (often unstructured) data to automatically discover interesting regularities or relationships which in turn lead to better understanding of the underlying processes. The ﬁeld of temporal data mining is concerned with such analysis in the case of ordered data streams ...
Class of regular bouncing cosmologies
Vasilić, Milovan
2017-06-01
In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.
A Hybrid Verifiable and Delegated Cryptographic Model in Cloud Computing
Directory of Open Access Journals (Sweden)
Jaber Ibrahim Naser
2018-02-01
Full Text Available Access control is very important in cloud data sharing. Especially in the domains like healthcare, it is essential to have access control mechanisms in place for confidentiality and secure data access. Attribute based encryption has been around for many years to secure data and provide controlled access. In this paper, we proposed a framework that supports circuit and attributes based encryption mechanism that involves multiple parties. They are data owner, data user, cloud server and attribute authority. An important feature of the proposed system is the verifiable delegation of the decryption process to cloud server. Data owner encrypts data and delegates decryption process to cloud. Cloud server performs partial decryption and then the final decrypted data are shared for users as per the privileges. Data owner thus reduces computational complexity by delegating decryption process cloud server. We built a prototype application using the Microsoft.NET platform for proof of the concept. The empirical results revealed that there is controlled access with multiple user roles and access control rights for secure and confidential data access in cloud computing.
Verifying the Simulation Hypothesis via Infinite Nested Universe Simulacrum Loops
Sharma, Vikrant
2017-01-01
The simulation hypothesis proposes that local reality exists as a simulacrum within a hypothetical computer's dimension. More specifically, Bostrom's trilemma proposes that the number of simulations an advanced 'posthuman' civilization could produce makes the proposition very likely. In this paper a hypothetical method to verify the simulation hypothesis is discussed using infinite regression applied to a new type of infinite loop. Assign dimension n to any computer in our present reality, where dimension signifies the hierarchical level in nested simulations our reality exists in. A computer simulating known reality would be dimension (n-1), and likewise a computer simulating an artificial reality, such as a video game, would be dimension (n +1). In this method, among others, four key assumptions are made about the nature of the original computer dimension n. Summations show that regressing such a reality infinitely will create convergence, implying that the verification of whether local reality is a grand simulation is feasible to detect with adequate compute capability. The action of reaching said convergence point halts the simulation of local reality. Sensitivities to the four assumptions and implications are discussed.
Verifying operator fitness - an imperative not an option
International Nuclear Information System (INIS)
Scott, A.B. Jr.
1987-01-01
In the early morning hours of April 26, 1986, whatever credence those who operate nuclear power plants around the world could then muster, suffered a jarring reversal. Through an incredible series of personal errors, the operators at what was later to be termed one of the best operated plants in the USSR systematically stripped away the physical and procedural safeguards inherent to their installation and precipitated the worst reactor accident the world has yet seen. This challenge to the adequacy of nuclear operators comes at a time when many companies throughout the world - not only those that involve nuclear power - are grappling with the problem of how to assure the fitness for duty of those in their employ, specifically those users of substances that have an impact on the ability to function safely and productively in the workplace. In actuality, operator fitness for duty is far more than the lack of impairment from substance abuse, which many today consider it. Full fitness for duty implies mental and moral fitness, as well, and physical fitness in a more general sense. If we are to earn the confidence of the public, credible ways to verify total fitness on an operator-by-operator basis must be considered
A credit card verifier structure using diffraction and spectroscopy concepts
Sumriddetchkajorn, Sarun; Intaravanne, Yuttana
2008-04-01
We propose and experimentally demonstrate an angle-multiplexing based optical structure for verifying a credit card. Our key idea comes from the fact that the fine detail of the embossed hologram stamped on the credit card is hard to duplicate and therefore its key color features can be used for distinguishing between the real and counterfeit ones. As the embossed hologram is a diffractive optical element, we choose to shine one at a time a number of broadband lightsources, each at different incident angle, on the embossed hologram of the credit card in such a way that different color spectra per incident angle beam is diffracted and separated in space. In this way, the number of pixels of each color plane is investigated. Then we apply a feed forward back propagation neural network configuration to separate the counterfeit credit card from the real one. Our experimental demonstration using two off-the-shelf broadband white light emitting diodes, one digital camera, a 3-layer neural network, and a notebook computer can identify all 69 counterfeit credit cards from eight real credit cards.
AUTOMATIC ESTIMATION OF SIZE PARAMETERS USING VERIFIED COMPUTERIZED STEREOANALYSIS
Directory of Open Access Journals (Sweden)
Peter R Mouton
2011-05-01
Full Text Available State-of-the-art computerized stereology systems combine high-resolution video microscopy and hardwaresoftware integration with stereological methods to assist users in quantifying multidimensional parameters of importance to biomedical research, including volume, surface area, length, number, their variation and spatial distribution. The requirement for constant interactions between a trained, non-expert user and the targeted features of interest currently limits the throughput efficiency of these systems. To address this issue we developed a novel approach for automatic stereological analysis of 2-D images, Verified Computerized Stereoanalysis (VCS. The VCS approach minimizes the need for user interactions with high contrast [high signal-to-noise ratio (S:N] biological objects of interest. Performance testing of the VCS approach confirmed dramatic increases in the efficiency of total object volume (size estimation, without a loss of accuracy or precision compared to conventional computerized stereology. The broad application of high efficiency VCS to high-contrast biological objects on tissue sections could reduce labor costs, enhance hypothesis testing, and accelerate the progress of biomedical research focused on improvements in health and the management of disease.
Verifying large modular systems using iterative abstraction refinement
International Nuclear Information System (INIS)
Lahtinen, Jussi; Kuismin, Tuomas; Heljanko, Keijo
2015-01-01
Digital instrumentation and control (I&C) systems are increasingly used in the nuclear engineering domain. The exhaustive verification of these systems is challenging, and the usual verification methods such as testing and simulation are typically insufficient. Model checking is a formal method that is able to exhaustively analyse the behaviour of a model against a formally written specification. If the model checking tool detects a violation of the specification, it will give out a counter-example that demonstrates how the specification is violated in the system. Unfortunately, sometimes real life system designs are too big to be directly analysed by traditional model checking techniques. We have developed an iterative technique for model checking large modular systems. The technique uses abstraction based over-approximations of the model behaviour, combined with iterative refinement. The main contribution of the work is the concrete abstraction refinement technique based on the modular structure of the model, the dependency graph of the model, and a refinement sampling heuristic similar to delta debugging. The technique is geared towards proving properties, and outperforms BDD-based model checking, the k-induction technique, and the property directed reachability algorithm (PDR) in our experiments. - Highlights: • We have developed an iterative technique for model checking large modular systems. • The technique uses BDD-based model checking, k-induction, and PDR in parallel. • We have tested our algorithm by verifying two models with it. • The technique outperforms classical model checking methods in our experiments
DEFF Research Database (Denmark)
Ritschel, Tobias; Ihrke, Matthias; Frisvad, Jeppe Revall
2009-01-01
Glare is a consequence of light scattered within the human eye when looking at bright light sources. This effect can be exploited for tone mapping since adding glare to the depiction of high-dynamic range (HDR) imagery on a low-dynamic range (LDR) medium can dramatically increase perceived contra...... to initially static HDR images. By conducting psychophysical studies, we validate that our method improves perceived brightness and that dynamic glare-renderings are often perceived as more attractive depending on the chosen scene.......Glare is a consequence of light scattered within the human eye when looking at bright light sources. This effect can be exploited for tone mapping since adding glare to the depiction of high-dynamic range (HDR) imagery on a low-dynamic range (LDR) medium can dramatically increase perceived contrast....... Even though most, if not all, subjects report perceiving glare as a bright pattern that fluctuates in time, up to now it has only been modeled as a static phenomenon. We argue that the temporal properties of glare are a strong means to increase perceived brightness and to produce realistic...
Relative clock verifies endogenous bursts of human dynamics
Zhou, Tao; Zhao, Zhi-Dan; Yang, Zimo; Zhou, Changsong
2012-01-01
Temporal bursts are widely observed in many human-activated systems, which may result from both endogenous mechanisms like the highest-priority-first protocol and exogenous factors like the seasonality of activities. To distinguish the effects from different mechanisms is thus of theoretical significance. This letter reports a new timing method by using a relative clock, namely the time length between two consecutive events of an agent is counted as the number of other agents' events appeared during this interval. We propose a model, in which agents act either in a constant rate or with a power-law inter-event time distribution, and the global activity either keeps unchanged or varies periodically vs. time. Our analysis shows that the bursts caused by the heterogeneity of global activity can be eliminated by setting the relative clock, yet the bursts from real individual behaviors still exist. We perform extensive experiments on four large-scale systems, the search engine by AOL, a social bookmarking system —Delicious, a short-message communication network, and a microblogging system —Twitter. Seasonality of global activity is observed, yet the bursts cannot be eliminated by using the relative clock.
Scenarios for exercising technical approaches to verified nuclear reductions
International Nuclear Information System (INIS)
Doyle, James
2010-01-01
Presidents Obama and Medvedev in April 2009 committed to a continuing process of step-by-step nuclear arms reductions beyond the new START treaty that was signed April 8, 2010 and to the eventual goal of a world free of nuclear weapons. In addition, the US Nuclear Posture review released April 6, 2010 commits the US to initiate a comprehensive national research and development program to support continued progress toward a world free of nuclear weapons, including expanded work on verification technologies and the development of transparency measures. It is impossible to predict the specific directions that US-RU nuclear arms reductions will take over the 5-10 years. Additional bilateral treaties could be reached requiring effective verification as indicated by statements made by the Obama administration. There could also be transparency agreements or other initiatives (unilateral, bilateral or multilateral) that require monitoring with a standard of verification lower than formal arms control, but still needing to establish confidence to domestic, bilateral and multilateral audiences that declared actions are implemented. The US Nuclear Posture Review and other statements give some indication of the kinds of actions and declarations that may need to be confirmed in a bilateral or multilateral setting. Several new elements of the nuclear arsenals could be directly limited. For example, it is likely that both strategic and nonstrategic nuclear warheads (deployed and in storage), warhead components, and aggregate stocks of such items could be accountable under a future treaty or transparency agreement. In addition, new initiatives or agreements may require the verified dismantlement of a certain number of nuclear warheads over a specified time period. Eventually procedures for confirming the elimination of nuclear warheads, components and fissile materials from military stocks will need to be established. This paper is intended to provide useful background information
Adaptive regularization of noisy linear inverse problems
DEFF Research Database (Denmark)
Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue
2006-01-01
In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: T......: The expectation of the regularization function, i.e., takes the same value in the posterior and prior distribution. We present three examples: two simulations, and application in fMRI neuroimaging....
Higher derivative regularization and chiral anomaly
International Nuclear Information System (INIS)
Nagahama, Yoshinori.
1985-02-01
A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)
Regularity effect in prospective memory during aging
Directory of Open Access Journals (Sweden)
Geoffrey Blondelle
2016-10-01
Full Text Available Background: Regularity effect can affect performance in prospective memory (PM, but little is known on the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30, 16 intermediate adults (40–55, and 25 older adults (65–80. The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities. We examine the role of several cognitive functions including certain dimensions of executive functions (planning, inhibition, shifting, and binding, short-term memory, and retrospective episodic memory to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performances was found for irregular activities (older < young, but not for regular activities. All participants recalled more regular activities than irregular ones with no age effect. It appeared that recalling of regular activities only involved planning for both intermediate and older adults, while recalling of irregular ones were linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical
Regularity effect in prospective memory during aging
Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique
2016-01-01
Background: Regularity effect can affect performance in prospective memory (PM), but little is known on the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults.Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...
Some Proxy Signature and Designated verifier Signature Schemes over Braid Groups
Lal, Sunder; Verma, Vandani
2009-01-01
Braids groups provide an alternative to number theoretic public cryptography and can be implemented quite efficiently. The paper proposes five signature schemes: Proxy Signature, Designated Verifier, Bi-Designated Verifier, Designated Verifier Proxy Signature And Bi-Designated Verifier Proxy Signature scheme based on braid groups. We also discuss the security aspects of each of the proposed schemes.
Regularization and error assignment to unfolded distributions
Zech, Gunter
2011-01-01
The commonly used approach to present unfolded data only in graphical formwith the diagonal error depending on the regularization strength is unsatisfac-tory. It does not permit the adjustment of parameters of theories, the exclusionof theories that are admitted by the observed data and does not allow the com-bination of data from different experiments. We propose fixing the regulariza-tion strength by a p-value criterion, indicating the experimental uncertaintiesindependent of the regularization and publishing the unfolded data in additionwithout regularization. These considerations are illustrated with three differentunfolding and smoothing approaches applied to a toy example.
Iterative Regularization with Minimum-Residual Methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES their success......We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov...... as regularization methods is highly problem dependent....
Iterative regularization with minimum-residual methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2006-01-01
subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES - their success......We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov...... as regularization methods is highly problem dependent....
Verifying cell loss requirements in high-speed communication networks
Directory of Open Access Journals (Sweden)
Kerry W. Fendick
1998-01-01
Full Text Available In high-speed communication networks it is common to have requirements of very small cell loss probabilities due to buffer overflow. Losses are measured to verify that the cell loss requirements are being met, but it is not clear how to interpret such measurements. We propose methods for determining whether or not cell loss requirements are being met. A key idea is to look at the stream of losses as successive clusters of losses. Often clusters of losses, rather than individual losses, should be regarded as the important loss events. Thus we propose modeling the cell loss process by a batch Poisson stochastic process. Successive clusters of losses are assumed to arrive according to a Poisson process. Within each cluster, cell losses do not occur at a single time, but the distance between losses within a cluster should be negligible compared to the distance between clusters. Thus, for the purpose of estimating the cell loss probability, we ignore the spaces between successive cell losses in a cluster of losses. Asymptotic theory suggests that the counting process of losses initiating clusters often should be approximately a Poisson process even though the cell arrival process is not nearly Poisson. The batch Poisson model is relatively easy to test statistically and fit; e.g., the batch-size distribution and the batch arrival rate can readily be estimated from cell loss data. Since batch (cluster sizes may be highly variable, it may be useful to focus on the number of batches instead of the number of cells in a measurement interval. We also propose a method for approximately determining the parameters of a special batch Poisson cell loss with geometric batch-size distribution from a queueing model of the buffer content. For this step, we use a reflected Brownian motion (RBM approximation of a G/D/1/C queueing model. We also use the RBM model to estimate the input burstiness given the cell loss rate. In addition, we use the RBM model to
VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model
International Nuclear Information System (INIS)
Jacobson, Jacob J.; Jeffers, Robert F.; Matthern, Gretchen E.; Piet, Steven J.; Baker, Benjamin A.; Grimm, Joseph
2009-01-01
The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R and D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating 'what if' scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level for U.S. nuclear power. The model is not intended as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., 'reactor types' not individual reactors and 'separation types' not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separation of disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. Note that recovered uranium is itself often partitioned: some RU flows with recycled transuranic elements, some flows with wastes, and the rest is designated RU. RU comes out of storage if needed to correct the U/TRU ratio in new recycled fuel. Neither RU nor DU are designated as wastes. VISION is comprised of several Microsoft
Directory of Open Access Journals (Sweden)
Bo Chen
2018-05-01
Full Text Available Electrical resistance tomography (ERT has been considered as a data collection and image reconstruction method in many multi-phase flow application areas due to its advantages of high speed, low cost and being non-invasive. In order to improve the quality of the reconstructed images, the Total Variation algorithm attracts abundant attention due to its ability to solve large piecewise and discontinuous conductivity distributions. In industrial processing tomography (IPT, techniques such as ERT have been used to extract important flow measurement information. For a moving object inside a pipe, a velocity profile can be calculated from the cross correlation between signals generated from ERT sensors. Many previous studies have used two sets of 2D ERT measurements based on pixel-pixel cross correlation, which requires two ERT systems. In this paper, a method for carrying out flow velocity measurement using a single ERT system is proposed. A novel spatiotemporal total variation regularization approach is utilised to exploit sparsity both in space and time in 4D, and a voxel-voxel cross correlation method is adopted for measurement of flow profile. Result shows that the velocity profile can be calculated with a single ERT system and that the volume fraction and movement can be monitored using the proposed method. Both semi-dynamic experimental and static simulation studies verify the suitability of the proposed method. For in plane velocity profile, a 3D image based on temporal 2D images produces velocity profile with accuracy of less than 1% error and a 4D image for 3D velocity profiling shows an error of 4%.
Regularity and predictability of human mobility in personal space.
Directory of Open Access Journals (Sweden)
Daniel Austin
Full Text Available Fundamental laws governing human mobility have many important applications such as forecasting and controlling epidemics or optimizing transportation systems. These mobility patterns, studied in the context of out of home activity during travel or social interactions with observations recorded from cell phone use or diffusion of money, suggest that in extra-personal space humans follow a high degree of temporal and spatial regularity - most often in the form of time-independent universal scaling laws. Here we show that mobility patterns of older individuals in their home also show a high degree of predictability and regularity, although in a different way than has been reported for out-of-home mobility. Studying a data set of almost 15 million observations from 19 adults spanning up to 5 years of unobtrusive longitudinal home activity monitoring, we find that in-home mobility is not well represented by a universal scaling law, but that significant structure (predictability and regularity is uncovered when explicitly accounting for contextual data in a model of in-home mobility. These results suggest that human mobility in personal space is highly stereotyped, and that monitoring discontinuities in routine room-level mobility patterns may provide an opportunity to predict individual human health and functional status or detect adverse events and trends.
Pilot study to verify the calibration of electrometers
International Nuclear Information System (INIS)
Becker, P.; Meghzifene, A.
2002-01-01
National Laboratory for Electrical Measurements has not yet developed its capability for the standardization of small electrical charge produced by DC, the IRD is trying to verify its standardization procedures of the electrical charge through a comparison programme. This subject was discussed with a major electrometer manufacturer that has offered to provide free of charge, three of their electrometer calibration standards for a pilot run. The model to be provided consists of four calibrated resistors and two calibrated capacitors, covering the charge/current range of interest. For producing charge or current a standard DC voltage must be applied to these components. Since practically all-modern electrometers measure using virtual ground, this methodology is viable. The IRD, in collaboration with the IAEA, wishes to invite interested laboratories to participate in this pilot comparison programme. This exercise is expected to be useful for all participants and will hopefully open the way for the establishment of routine comparisons in this area. The results will be discussed and published in an appropriate journal. Interested institutions should contact directly Mr. Paulo H. B. Becker through e-mail (pbecker at ird.gov.br) or fax +55 21 24421950 informing him of the model and manufacturer of the electrometer to be used for the pilot study and discuss all practical details. (author)
A regularized stationary mean-field game
Yang, Xianjin
2016-01-01
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
A regularized stationary mean-field game
Yang, Xianjin
2016-04-19
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
On infinite regular and chiral maps
Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán
2015-01-01
We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.
From recreational to regular drug use
DEFF Research Database (Denmark)
Järvinen, Margaretha; Ravn, Signe
2011-01-01
This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...
Automating InDesign with Regular Expressions
Kahrel, Peter
2006-01-01
If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
Regularization modeling for large-eddy simulation
Geurts, Bernardus J.; Holm, D.D.
2003-01-01
A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of
2010-07-01
... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...
Tsang, Mankei; Psaltis, Demetri
2006-01-01
The concept of quantum temporal imaging is proposed to manipulate the temporal correlation of entangled photons. In particular, we show that time correlation and anticorrelation can be converted to each other using quantum temporal imaging.
Predicted and verified deviations from Zipf's law in ecology of competing products.
Hisano, Ryohei; Sornette, Didier; Mizuno, Takayuki
2011-08-01
Zipf's power-law distribution is a generic empirical statistical regularity found in many complex systems. However, rather than universality with a single power-law exponent (equal to 1 for Zipf's law), there are many reported deviations that remain unexplained. A recently developed theory finds that the interplay between (i) one of the most universal ingredients, namely stochastic proportional growth, and (ii) birth and death processes, leads to a generic power-law distribution with an exponent that depends on the characteristics of each ingredient. Here, we report the first complete empirical test of the theory and its application, based on the empirical analysis of the dynamics of market shares in the product market. We estimate directly the average growth rate of market shares and its standard deviation, the birth rates and the "death" (hazard) rate of products. We find that temporal variations and product differences of the observed power-law exponents can be fully captured by the theory with no adjustable parameters. Our results can be generalized to many systems for which the statistical properties revealed by power-law exponents are directly linked to the underlying generating mechanism.
UTP and Temporal Logic Model Checking
Anderson, Hugh; Ciobanu, Gabriel; Freitas, Leo
In this paper we give an additional perspective to the formal verification of programs through temporal logic model checking, which uses Hoare and He Unifying Theories of Programming (UTP). Our perspective emphasizes the use of UTP designs, an alphabetised relational calculus expressed as a pre/post condition pair of relations, to verify state or temporal assertions about programs. The temporal model checking relation is derived from a satisfaction relation between the model and its properties. The contribution of this paper is that it shows a UTP perspective to temporal logic model checking. The approach includes the notion of efficiency found in traditional model checkers, which reduced a state explosion problem through the use of efficient data structures
An iterative method for Tikhonov regularization with a general linear regularization operator
Hochstenbach, M.E.; Reichel, L.
2010-01-01
Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan
2012-11-19
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-01-01
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
Hierarchical regular small-world networks
International Nuclear Information System (INIS)
Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan
2008-01-01
Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d w = 1.306..., but possesses only a trivial fixed point T c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2 √(log 2 N 2 ) , exhibits 'ballistic' diffusion (d w = 1), and a non-trivial ferromagnetic transition, T c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Coupling regularizes individual units in noisy populations
International Nuclear Information System (INIS)
Ly Cheng; Ermentrout, G. Bard
2010-01-01
The regularity of a noisy system can modulate in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
Multiple graph regularized protein domain ranking
Directory of Open Access Journals (Sweden)
Wang Jim
2012-11-01
Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Diagrammatic methods in phase-space regularization
International Nuclear Information System (INIS)
Bern, Z.; Halpern, M.B.; California Univ., Berkeley
1987-11-01
Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the nogrowth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)
J-regular rings with injectivities
Shen, Liang
2010-01-01
A ring $R$ is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.
Privacy-Preserving Verifiability: A Case for an Electronic Exam Protocol
DEFF Research Database (Denmark)
Giustolisi, Rosario; Iovino, Vincenzo; Lenzini, Gabriele
2017-01-01
We introduce the notion of privacy-preserving verifiability for security protocols. It holds when a protocol admits a verifiability test that does not reveal, to the verifier that runs it, more pieces of information about the protocol’s execution than those required to run the test. Our definition...... of privacy-preserving verifiability is general and applies to cryptographic protocols as well as to human security protocols. In this paper we exemplify it in the domain of e-exams. We prove that the notion is meaningful by studying an existing exam protocol that is verifiable but whose verifiability tests...... are not privacy-preserving. We prove that the notion is applicable: we review the protocol using functional encryption so that it admits a verifiability test that preserves privacy according to our definition. We analyse, in ProVerif, that the verifiability holds despite malicious parties and that the new...
Towards General Temporal Aggregation
DEFF Research Database (Denmark)
Boehlen, Michael H.; Gamper, Johann; Jensen, Christian Søndergaard
2008-01-01
associated with the management of temporal data. Indeed, temporal aggregation is complex and among the most difficult, and thus interesting, temporal functionality to support. This paper presents a general framework for temporal aggregation that accommodates existing kinds of aggregation, and it identifies...
Entanglement in coined quantum walks on regular graphs
International Nuclear Information System (INIS)
Carneiro, Ivens; Loo, Meng; Xu, Xibai; Girerd, Mathieu; Kendon, Viv; Knight, Peter L
2005-01-01
Quantum walks, both discrete (coined) and continuous time, form the basis of several recent quantum algorithms. Here we use numerical simulations to study the properties of discrete, coined quantum walks. We investigate the variation in the entanglement between the coin and the position of the particle by calculating the entropy of the reduced density matrix of the coin. We consider both dynamical evolution and asymptotic limits for coins of dimensions from two to eight on regular graphs. For low coin dimensions, quantum walks which spread faster (as measured by the mean square deviation of their distribution from uniform) also exhibit faster convergence towards the asymptotic value of the entanglement between the coin and particle's position. For high-dimensional coins, the DFT coin operator is more efficient at spreading than the Grover coin. We study the entanglement of the coin on regular finite graphs such as cycles, and also show that on complete bipartite graphs, a quantum walk with a Grover coin is always periodic with period four. We generalize the 'glued trees' graph used by Childs et al (2003 Proc. STOC, pp 59-68) to higher branching rate (fan out) and verify that the scaling with branching rate and with tree depth is polynomial
A general framework for regularized, similarity-based image restoration.
Kheradmand, Amin; Milanfar, Peyman
2014-12-01
Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and associated regularization term are obtained using fast symmetry preserving matrix balancing. This results in some desired spectral properties for the normalized Laplacian such as being symmetric, positive semidefinite, and returning zero vector when applied to a constant image. Our algorithm comprises of outer and inner iterations, where in each outer iteration, the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to render the spectral analysis for the solutions of the corresponding linear equations. In addition, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
International Nuclear Information System (INIS)
Lowry, D.; Holmes, C.W.; Nisbet, E.G.; Rata, N.D.
2002-01-01
The main anthropogenic sources of methane in industrialised countries (landfill/waste treatment, gas storage and distribution, coal) are far easier to reduce than CO 2 sources and the implementation of reduction strategies is potentially profitable. Statistical databases of methane emissions need independent external verification and carbon isotope data provide one way of estimating the expected source mix for each country if the main source types have been characterised isotopically. Using this method each country participating in the CORINAIR 94 database has been assigned an expected isotopic value for its emissions. The averaged δ 13 C of methane emitted from the CORINAIR region of Europe, based on total emissions of each country is -55.4 per mille for 1994. This European source mix can be verified using trajectory analysis for air samples collected at background stations. Methane emissions from the UK, and particularly the London region, have undergone more detailed analysis using data collected at the Royal Holloway site on the western fringe of London. If the latest emissions inventory figures are correct then the modelled isotopic change in the UK source mix is from -48.4 per mille in 1990 to -50.7 per mille in 1997. This represents a reduction in emissions of 25% over a 7-year period, important in meeting proposed UK greenhouse gas reduction targets. These changes can be tested by the isotopic analysis of air samples at carefully selected coastal background and interior sites. Regular sampling and isotopic analysis coupled with back trajectory analysis from a range of sites could provide an important tool for monitoring and verification of EC and UK methane emissions in the run-up to 2010. (author)
Generalized regular genus for manifolds with boundary
Directory of Open Access Journals (Sweden)
Paola Cristofori
2003-05-01
Full Text Available We introduce a generalization of the regular genus, a combinatorial invariant of PL manifolds ([10], which is proved to be strictly related, in dimension three, to generalized Heegaard splittings defined in [12].
Geometric regularizations and dual conifold transitions
International Nuclear Information System (INIS)
Landsteiner, Karl; Lazaroiu, Calin I.
2003-01-01
We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)
2012-11-26
...-1294, ``Preoperational Testing of On-Site Electric Power Systems to Verify Proper Load Group... entitled ``Preoperational Testing of On- Site Electric Power Systems to Verify Proper Load Group... Electric Power Systems to Verify Proper Load Group Assignments, Electrical Separation, and Redundancy...
31 CFR 363.14 - How will you verify my identity?
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How will you verify my identity? 363... you verify my identity? (a) Individual. When you establish an account, we may use a verification service to verify your identity using information you provide about yourself on the online application. At...
Fast and compact regular expression matching
DEFF Research Database (Denmark)
Bille, Philip; Farach-Colton, Martin
2008-01-01
We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how...... to improve the space and/or remove a dependency on the alphabet size for each problem using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way....
Regular-fat dairy and human health
DEFF Research Database (Denmark)
Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas
2016-01-01
In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular fat dairy products and human health. In an effort to......, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted....
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
Full Text Available In this work we present the algorithms to produce deterministic finite automaton (DFA for extended operators in regular expressions like intersection, subtraction and complement. The method like “overriding” of the source NFA(NFA not defined with subset construction rules is used. The past work described only the algorithm for AND-operator (or intersection of regular languages; in this paper the construction for the MINUS-operator (and complement is shown.
Regularities of intermediate adsorption complex relaxation
International Nuclear Information System (INIS)
Manukova, L.A.
1982-01-01
The experimental data, characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N 2 system at 77 K are given. The method of molecular beam has been used in the investigation. The analytical expressions of change regularity in the relaxation process of full and specific rates - of transition from intermediate state into ''non-reversible'', of desorption into the gas phase and accumUlation of the particles in the intermediate state are obtained
International Nuclear Information System (INIS)
Cao, Xiaoqing; Xie, Qingguo; Xiao, Peng
2015-01-01
List mode format is commonly used in modern positron emission tomography (PET) for image reconstruction due to certain special advantages. In this work, we proposed a list mode based regularized relaxed ordered subset (LMROS) algorithm for static PET imaging. LMROS is able to work with regularization terms which can be formulated as twice differentiable convex functions. Such a versatility would make LMROS a convenient and general framework for fulfilling different regularized list mode reconstruction methods. LMROS was applied to two simulated undersampling PET imaging scenarios to verify its effectiveness. Convex quadratic function, total variation constraint, non-local means and dictionary learning based regularization methods were successfully realized for different cases. The results showed that the LMROS algorithm was effective and some regularization methods greatly reduced the distortions and artifacts caused by undersampling. (paper)
Directory of Open Access Journals (Sweden)
Tinghua Zhang
2018-02-01
Full Text Available Coded Aperture Compressive Temporal Imaging (CACTI can afford low-cost temporal super-resolution (SR, but limits are imposed by noise and compression ratio on reconstruction quality. To utilize inter-frame redundant information from multiple observations and sparsity in multi-transform domains, a robust reconstruction approach based on maximum a posteriori probability and Markov random field (MAP-MRF model for CACTI is proposed. The proposed approach adopts a weighted 3D neighbor system (WNS and the coordinate descent method to perform joint estimation of model parameters, to achieve the robust super-resolution reconstruction. The proposed multi-reconstruction algorithm considers both total variation (TV and ℓ 2 , 1 norm in wavelet domain to address the minimization problem for compressive sensing, and solves it using an accelerated generalized alternating projection algorithm. The weighting coefficient for different regularizations and frames is resolved by the motion characteristics of pixels. The proposed approach can provide high visual quality in the foreground and background of a scene simultaneously and enhance the fidelity of the reconstruction results. Simulation results have verified the efficacy of our new optimization framework and the proposed reconstruction approach.
Improvements in GRACE Gravity Fields Using Regularization
Save, H.; Bettadpur, S.; Tapley, B. D.
2008-12-01
The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or
Applying the Water Vapor Radiometer to Verify the Precipitable Water Vapor Measured by GPS
Directory of Open Access Journals (Sweden)
Ta-Kang Yeh
2014-01-01
Full Text Available Taiwan is located at the land-sea interface in a subtropical region. Because the climate is warm and moist year round, there is a large and highly variable amount of water vapor in the atmosphere. In this study, we calculated the Zenith Wet Delay (ZWD of the troposphere using the ground-based Global Positioning System (GPS. The ZWD measured by two Water Vapor Radiometers (WVRs was then used to verify the ZWD that had been calculated using GPS. We also analyzed the correlation between the ZWD and the precipitation data of these two types of station. Moreover, we used the observational data from 14 GPS and rainfall stations to evaluate three cases. The offset between the GPS-ZWD and the WVR-ZWD ranged from 1.31 to 2.57 cm. The correlation coefficient ranged from 0.89 to 0.93. The results calculated from GPS and those measured using the WVR were very similar. Moreover, when there was no rain, light rain, moderate rain, or heavy rain, the flatland station ZWD was 0.31, 0.36, 0.38, or 0.40 m, respectively. The mountain station ZWD exhibited the same trend. Therefore, these results have demonstrated that the potential and strength of precipitation in a region can be estimated according to its ZWD values. Now that the precision of GPS-ZWD has been confirmed, this method can eventually be expanded to the more than 400 GPS stations in Taiwan and its surrounding islands. The near real-time ZWD data with improved spatial and temporal resolution can be provided to the city and countryside weather-forecasting system that is currently under development. Such an exchange would fundamentally improve the resources used to generate weather forecasts.
Gravitational Quasinormal Modes of Regular Phantom Black Hole
Directory of Open Access Journals (Sweden)
Jin Li
2017-01-01
Full Text Available We investigate the gravitational quasinormal modes (QNMs for a type of regular black hole (BH known as phantom BH, which is a static self-gravitating solution of a minimally coupled phantom scalar field with a potential. The studies are carried out for three different spacetimes: asymptotically flat, de Sitter (dS, and anti-de Sitter (AdS. In order to consider the standard odd parity and even parity of gravitational perturbations, the corresponding master equations are derived. The QNMs are discussed by evaluating the temporal evolution of the perturbation field which, in turn, provides direct information on the stability of BH spacetime. It is found that in asymptotically flat, dS, and AdS spacetimes the gravitational perturbations have similar characteristics for both odd and even parities. The decay rate of perturbation is strongly dependent on the scale parameter b, which measures the coupling strength between phantom scalar field and the gravity. Furthermore, through the analysis of Hawking radiation, it is shown that the thermodynamics of such regular phantom BH is also influenced by b. The obtained results might shed some light on the quantum interpretation of QNM perturbation.
Regular Expression Matching and Operational Semantics
Directory of Open Access Journals (Sweden)
Asiri Rathnayake
2011-08-01
Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
Regularities, Natural Patterns and Laws of Nature
Directory of Open Access Journals (Sweden)
Stathis Psillos
2014-02-01
Full Text Available The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology. Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.
DEFF Research Database (Denmark)
Kiniry, Joseph Roland; Zimmerman, Daniel
2011-01-01
---falls every year and any mention of mathematics in the classroom seems to frighten students away. So the question is: How do we attract new students in computing to the area of dependable software systems? Over the past several years at three universities we have experimented with the use of computer games......In recent years, several Grand Challenges (GCs) of computing have been identified and expounded upon by various professional organizations in the U.S. and England. These GCs are typically very difficult problems that will take many hundreds, or perhaps thousands, of man-years to solve. Researchers...
Fractional Regularization Term for Variational Image Registration
Directory of Open Access Journals (Sweden)
Rafael Verdú-Monedero
2009-01-01
Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, being applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration which is best suited to some applications and it is not possible in the spatial domain. Results with 3D actual images show the validity of this approach.
International Nuclear Information System (INIS)
Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.
2004-01-01
We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)
Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms
Mohan, K. Aditya
2017-10-01
4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.
Regular transport dynamics produce chaotic travel times.
Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro
2014-06-01
In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.
Regularity of difference equations on Banach spaces
Agarwal, Ravi P; Lizama, Carlos
2014-01-01
This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advance knowledge of and interest in functional analysis.
PET regularization by envelope guided conjugate gradients
International Nuclear Information System (INIS)
Kaufman, L.; Neumaier, A.
1996-01-01
The authors propose a new way to iteratively solve large scale ill-posed problems and in particular the image reconstruction problem in positron emission tomography by exploiting the relation between Tikhonov regularization and multiobjective optimization to obtain iteratively approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows us to adjust the regularization parameter adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations
Matrix regularization of embedded 4-manifolds
International Nuclear Information System (INIS)
Trzetrzelewski, Maciej
2012-01-01
We consider products of two 2-manifolds such as S 2 ×S 2 , embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N 2 ×N 2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S 3 also possible).
Virta, L J; Saarinen, M M; Kolho, K-L
2017-12-01
The frequency of coeliac disease (CD) has been on the rise over the past decades, especially in Western Europe, but current trends are unclear. To research the recent temporal changes in the incidence of adult, biopsy-verified coeliac disease and dermatitis herpetiformis (DH) in Finland, a country with a high frequency of coeliac disease. All coeliac disease and DH cases diagnosed at age 20-79 years during 2005-2014 were retrieved from a nationwide database documenting all applicants for monthly compensation to cover the extra cost of maintaining a gluten-free diet. This benefit is granted on the basis of histology, not socioeconomic status. Temporal trends in the annual incidences were estimated using Poisson regression analyses. The total incidence of coeliac disease decreased from 33/100 000 during the years 2005-2006 to 29/100 000 during 2013-2014. The mean annual incidence of coeliac disease was nearly twice as high among women as among men, 42 vs 22 per 100 000, respectively. For middle- and old-aged women, the average rate of decrease in incidence was 4.8% (95% CI 3.9-5.7) per year and for men 3.0% (1.8-4.1) (P for linear trend adults, the rate of change remained low and nonsignificant throughout the period 2005-2014. Although the awareness of coeliac disease has increased during the past decades, the incidence of biopsy-verified diagnoses is not increasing, which suggests that exposure to yet unidentified triggering factors for coeliac disease has plateaued among the Finnish adult population. © 2017 John Wiley & Sons Ltd.
On a correspondence between regular and non-regular operator monotone functions
DEFF Research Database (Denmark)
Gibilisco, P.; Hansen, Frank; Isola, T.
2009-01-01
We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....
Evaluation of verifiability in HAL/S. [programming language for aerospace computers
Young, W. D.; Tripathi, A. R.; Good, D. I.; Browne, J. C.
1979-01-01
The ability to write verifiable programs in HAL/S, a characteristic highly desirable in aerospace applications, is limited because many features of HAL/S do not lend themselves to existing verification techniques. The methods of language evaluation are described, along with the means by which language features are evaluated for verifiability. These methods are applied in this study to various features of HAL/S to identify specific areas in which the language falls short with respect to verifiability. Some conclusions are drawn for the design of programming languages for aerospace applications, and ongoing work to identify a verifiable subset of HAL/S is described.
Regularity and irreversibility of weekly travel behavior
Kitamura, R.; van der Hoorn, A.I.J.M.
1987-01-01
Dynamic characteristics of travel behavior are analyzed in this paper using weekly travel diaries from two waves of panel surveys conducted six months apart. An analysis of activity engagement indicates the presence of significant regularity in weekly activity participation between the two waves.
Regular and context-free nominal traces
DEFF Research Database (Denmark)
Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca
2017-01-01
Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...
Faster 2-regular information-set decoding
Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.
2011-01-01
Fix positive integers B and w. Let C be a linear code over F₂ of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and
Complexity in union-free regular languages
Czech Academy of Sciences Publication Activity Database
Jirásková, G.; Masopust, Tomáš
2011-01-01
Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html
Regular Gleason Measures and Generalized Effect Algebras
Dvurečenskij, Anatolij; Janda, Jiří
2015-12-01
We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.
Regularization of finite temperature string theories
International Nuclear Information System (INIS)
Leblanc, Y.; Knecht, M.; Wallet, J.C.
1990-01-01
The tachyonic divergences occurring in the free energy of various string theories at finite temperature are eliminated through the use of regularization schemes and analytic continuations. For closed strings, we obtain finite expressions which, however, develop an imaginary part above the Hagedorn temperature, whereas open string theories are still plagued with dilatonic divergences. (orig.)
A Sim(2)-invariant dimensional regularization
Directory of Open Access Journals (Sweden)
J. Alfaro
2017-09-01
We introduce a Sim(2)-invariant dimensional regularization of loop integrals. Then we can compute the one-loop quantum corrections to the photon self-energy, electron self-energy and vertex in the Electrodynamics sector of the Very Special Relativity Standard Model (VSRSM).
Continuum regularized Yang-Mills theory
International Nuclear Information System (INIS)
Sadun, L.A.
1987-01-01
Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.
Gravitational lensing by a regular black hole
International Nuclear Information System (INIS)
Eiroa, Ernesto F; Sendra, Carlos M
2011-01-01
In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.
Analytic stochastic regularization and gauge invariance
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1986-05-01
A proof that analytic stochastic regularization breaks gauge invariance is presented. This is done by an explicit one-loop calculation of the vacuum polarization tensor in scalar electrodynamics, which turns out not to be transversal. The counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization are also analysed. (Author) [pt
Annotation of regular polysemy and underspecification
DEFF Research Database (Denmark)
Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria
2013-01-01
We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...
Stabilization, pole placement, and regular implementability
Belur, MN; Trentelman, HL
In this paper, we study control by interconnection of linear differential systems. We give necessary and sufficient conditions for regular implementability of a given linear differential system. We formulate the problems of stabilization and pole placement as problems of finding a suitable,
12 CFR 725.3 - Regular membership.
2010-01-01
... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit....5(b) of this part, and forwarding with its completed application funds equal to one-half of this... 1, 1979, is not required to forward these funds to the Facility until October 1, 1979. (3...
Supervised scale-regularized linear convolutionary filters
DEFF Research Database (Denmark)
Loog, Marco; Lauze, Francois Bernard
2017-01-01
also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...
On regular Riesz operators | Raubenheimer | Quaestiones ...
African Journals Online (AJOL)
The r-asymptotically quasi finite rank operators on Banach lattices are examples of regular Riesz operators. We characterise Riesz elements in a subalgebra of a Banach algebra in terms of Riesz elements in the Banach algebra. This enables us to characterise r-asymptotically quasi finite rank operators in terms of adjoint ...
Regularized Discriminant Analysis: A Large Dimensional Study
Yang, Xiaoke
2018-04-28
In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis is assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for the application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only reveals some mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the underlying statistics are unknown. We benchmark the performance of our proposed consistent estimator against classical estimators on synthetic data. The observations demonstrate that the general estimator outperforms the others in terms of mean squared error (MSE).
Bit-coded regular expression parsing
DEFF Research Database (Denmark)
Nielsen, Lasse; Henglein, Fritz
2011-01-01
the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli's greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...
Tetravalent one-regular graphs of order 4p²
DEFF Research Database (Denmark)
Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan
2014-01-01
A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.
Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications
Chaki, Sagar; Gurfinkel, Arie
2010-01-01
We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm. We show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules.
The role of multisensory interplay in enabling temporal expectations.
Ball, Felix; Michels, Lara E; Thiele, Carsten; Noesselt, Toemme
2018-01-01
Temporal regularities can guide our attention to focus on a particular moment in time and to be especially vigilant just then. Previous research provided evidence for the influence of temporal expectation on perceptual processing in unisensory auditory, visual, and tactile contexts. However, in real life we are often exposed to a complex and continuous stream of multisensory events. Here we tested - in a series of experiments - whether temporal expectations can enhance perception in multisensory contexts and whether this enhancement differs from enhancements in unisensory contexts. Our discrimination paradigm contained near-threshold targets (subject-specific 75% discrimination accuracy) embedded in a sequence of distractors. The likelihood of target occurrence (early or late) was manipulated block-wise. Furthermore, we tested whether spatial and modality-specific target uncertainty (i.e. predictable vs. unpredictable target position or modality) would affect temporal expectation (TE) measured with perceptual sensitivity (d′) and response times (RT). In all our experiments, hidden temporal regularities improved performance for expected multisensory targets. Moreover, multisensory performance was unaffected by spatial and modality-specific uncertainty, whereas unisensory TE effects on d′ but not RT were modulated by spatial and modality-specific uncertainty. Additionally, the size of the temporal expectation effect, i.e. the increase in perceptual sensitivity and decrease of RT, scaled linearly with the likelihood of expected targets. Finally, temporal expectation effects were unaffected by varying target position within the stream. Together, our results strongly suggest that participants quickly adapt to novel temporal contexts, that they benefit from multisensory (relative to unisensory) stimulation and that multisensory benefits are maximal if the stimulus-driven uncertainty is highest. We propose that enhanced informational content (i.e. multisensory
Stephen M Ogle; Kenneth Davis; Thomas Lauvaux; Andrew Schuh; Dan Cooley; Tristram O West; Linda S Heath; Natasha L Miles; Scott Richardson; F Jay Breidt; James E Smith; Jessica L McCarty; Kevin R Gurney; Pieter Tans; A Scott. Denning
2015-01-01
Verifying national greenhouse gas (GHG) emissions inventories is a critical step to ensure that reported emissions data to the United Nations Framework Convention on Climate Change (UNFCCC) are accurate and representative of a country's contribution to GHG concentrations in the atmosphere. Furthermore, verifying biogenic fluxes provides a check on estimated...
Verifying Safety Messages Using Relative-Time and Zone Priority in Vehicular Ad Hoc Networks
Banani, Sam; Thiemjarus, Surapa; Kittipiyakul, Somsak
2018-01-01
In high-density road networks, with each vehicle broadcasting multiple messages per second, the arrival rate of safety messages can easily exceed the rate at which digital signatures can be verified. Since not all messages can be verified, algorithms for selecting which messages to verify are required to ensure that each vehicle receives appropriate awareness about neighbouring vehicles. This paper presents a novel scheme to select important safety messages for verification in vehicular ad hoc networks (VANETs). The proposed scheme uses location and direction of the sender, as well as proximity and relative-time between vehicles, to reduce the number of irrelevant messages verified (i.e., messages from vehicles that are unlikely to cause an accident). Compared with other existing schemes, the analysis results show that the proposed scheme can verify messages from nearby vehicles with lower inter-message delay and reduced packet loss and thus provides high level of awareness of the nearby vehicles. PMID:29652840
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that Tikhonov regularization produces spherical harmonic solutions from GRACE that have very few residual stripes while capturing all of the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
Extreme values, regular variation and point processes
Resnick, Sidney I
1987-01-01
Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...
Stream Processing Using Grammars and Regular Expressions
DEFF Research Database (Denmark)
Rasmussen, Ulrik Terp
disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs...... as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present...... Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...
Describing chaotic attractors: Regular and perpetual points
Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz
2018-03-01
We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have recently been introduced in theoretical investigations, is thoroughly discussed and extended to new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points to locate attractors is indicated, along with its potential cause. The location of chaotic trajectories and sets of the considered points is investigated, and a study of the stability of the systems is presented. A statistical analysis of observing the desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.
Chaos regularization of quantum tunneling rates
International Nuclear Information System (INIS)
Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward
2011-01-01
Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. By contrast, shapes that lead to completely chaotic trajectories yield tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in a sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
Contour Propagation With Riemannian Elasticity Regularization
DEFF Research Database (Denmark)
Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.
2011-01-01
Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of the fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image...... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations and volumetric changes, was used. Regularization parameters were defined...... on the original delineation, and tissue deformation in the time course between scans forms a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...
Thin accretion disk around regular black hole
Directory of Open Access Journals (Sweden)
QIU Tianqi
2014-08-01
Full Text Available The Penrose′s cosmic censorship conjecture says that naked singularities do not exist in nature.So,it seems reasonable to further conjecture that not even a singularity exists in nature.In this paper,a regular black hole without singularity is studied in detail,especially on its thin accretion disk,energy flux,radiation temperature and accretion efficiency.It is found that the interaction of regular black hole is stronger than that of the Schwarzschild black hole. Furthermore,the thin accretion will be more efficiency to lost energy while the mass of black hole decreased. These particular properties may be used to distinguish between black holes.
Convex nonnegative matrix factorization with manifold regularization.
Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong
2015-03-01
Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
Representation and management of temporal and uncertain knowledge
International Nuclear Information System (INIS)
Chen, Ziqiang
1993-01-01
This thesis contributes to the investigation of uncertain temporal knowledge representation and management, especially for process verification and supervisory system design. The evolution of process behaviour is time-dependent, and the information describing this temporal evolution is uncertain/imprecise. In Artificial Intelligence, time and uncertainty have long been considered two of the most difficult research fields. Furthermore, these two fields, though different, may be present in an interactive way. We deal here with this special kind of uncertainty: temporal uncertainty. Integrating time and uncertainty brings out the study issues of temporal information representation, event ordering and temporal reasoning under uncertainty. The investigation of these problems has been guided by preserving the intrinsic properties of time. The main contributions of this thesis can be summarised as follows: (1) unified representation of uncertainty and imprecision over temporal information; (2) formal structuring of time under uncertainty; (3) formalising a fuzzy temporal reasoning system; (4) modelling the temporal evolution of a process, providing an associated reasoning mechanism to verify the process evolution, and modelling fuzzy temporal Petri nets; (5) design and implementation of SURTEL, a programming tool for dealing with uncertain temporal information and knowledge. (author) [fr
A short proof of increased parabolic regularity
Directory of Open Access Journals (Sweden)
Stephen Pankavich
2015-08-01
We present a short proof of the increased regularity obtained by solutions to uniformly parabolic partial differential equations. Though this setting is fairly introductory, our new method of proof, which uses a priori estimates and an inductive method, can be extended to prove analogous results for problems with time-dependent coefficients, advection-diffusion or reaction-diffusion equations, and nonlinear PDEs, even when other tools, such as semigroup methods or the use of explicit fundamental solutions, are unavailable.
Regular black hole in three dimensions
Myung, Yun Soo; Yoon, Myungseok
2008-01-01
We find a new black hole in three-dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare the thermodynamics of this black hole with that of the non-rotating BTZ black hole. The first law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.
Sparse regularization for force identification using dictionaries
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
Analytic stochastic regularization and gauge theories
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1987-04-01
We prove that analytic stochastic regularization breaks gauge invariance. This is done by an explicit one-loop calculation of the two-, three- and four-point vertex functions of the gluon field in scalar chromodynamics, which turn out not to be gauge invariant. We analyse the counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization. (author) [pt
Preconditioners for regularized saddle point matrices
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2011-01-01
Roč. 19, č. 2 (2011), s. 91-112 ISSN 1570-2820 Institutional research plan: CEZ:AV0Z30860518 Keywords: saddle point matrices * preconditioning * regularization * eigenvalue clustering Subject RIV: BA - General Mathematics Impact factor: 0.533, year: 2011 http://www.degruyter.com/view/j/jnma.2011.19.issue-2/jnum.2011.005/jnum.2011.005.xml
Regularized forecasting of chaotic dynamical systems
International Nuclear Information System (INIS)
Bollt, Erik M.
2017-01-01
While local models of dynamical systems have been highly successful in using extensive data sets observing even a chaotic dynamical system to produce useful forecasts, a typical problem arises, as follows. With the k-nearest neighbors (kNN) method, local observations occur due to recurrences in a chaotic system, and this allows local models to be built by regression to low-dimensional polynomial approximations of the underlying system, estimating a Taylor series. This has been a popular approach, particularly in the context of scalar data observations represented by time-delay embedding methods. However, such local models can allow for spatial discontinuities of forecasts when considered globally, meaning jumps in predictions, because the collected near neighbors vary from point to point. The source of these discontinuities is that the set of near neighbors varies discontinuously with respect to the position of the sample point, and so therefore does the model built from the near neighbors. It is possible to utilize local information inferred from near neighbors as usual but at the same time to impose a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. We then show how this perspective allows us to impose prior presumed regularity into the model by invoking Tikhonov regularity theory, since this classic perspective of optimization in ill-posed problems naturally balances fitting an objective with some prior assumed form of the result, such as continuity or derivative regularity. This all reduces to matrix manipulations, which we demonstrate on a simple data set, with the implication that it may find much broader context.
Minimal length uncertainty relation and ultraviolet regularization
Kempf, Achim; Mangano, Gianpiero
1997-06-01
Studies in string theory and quantum gravity suggest the existence of a finite lower limit Δx₀ to the possible resolution of distances, at the latest on the scale of the Planck length of 10⁻³⁵ m. Within the framework of the Euclidean path integral we explicitly show ultraviolet regularization in field theory through this short-distance structure. Both rotation and translation invariance can be preserved. An example is studied in detail.
Regularity and chaos in cavity QED
International Nuclear Information System (INIS)
Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G
2017-01-01
The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified by calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)
Solution path for manifold regularized semisupervised classification.
Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H
2012-04-01
Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.
Regularizations: different recipes for identical situations
International Nuclear Information System (INIS)
Gambin, E.; Lobo, C.O.; Battistel, O.A.
2004-03-01
We present a discussion where the choice of the regularization procedure and the routing of the internal-line momenta are put at the same level of arbitrariness in the analysis of Ward identities involving simple and well-known problems in QFT. These are the complex self-interacting scalar field and two simple models where the SVV and AVV processes are pertinent. We show that, in all these problems, the conditions for the preservation of symmetry relations are put in terms of the same combination of divergent Feynman integrals, which are evaluated in the context of a very general calculational strategy concerning the manipulations and calculations involving divergences. Within the adopted strategy, all the arbitrariness intrinsic to the problem is still maintained in the final results and, consequently, a perfect map can be obtained with the corresponding results of the traditional regularization techniques. We show that, when we require a universal interpretation for the arbitrariness involved, in order to get consistency with all stated physical constraints, a strong condition is imposed on regularizations which automatically eliminates the ambiguities associated with the routing of the internal-line momenta of loops. The conclusion is clean and sound: the association between ambiguities and unavoidable symmetry violations in Ward identities cannot be maintained if a unique recipe is required for identical situations in the evaluation of divergent physical amplitudes. (author)
Fisher, Michael; Gabbay, Dov; Gough, Graham
2000-01-01
Time is a fascinating subject that has captured mankind's imagination from ancient times to the present. It has been, and continues to be studied across a wide range of disciplines, from the natural sciences to philosophy and logic. More than two decades ago, Pnueli in a seminal work showed the value of temporal logic in the specification and verification of computer programs. Today, a strong, vibrant international research community exists in the broad community of computer science and AI. This volume presents a number of articles from leading researchers containing state-of-the-art results in such areas as pure temporal/modal logic, specification and verification, temporal databases, temporal aspects in AI, tense and aspect in natural language, and temporal theorem proving. Earlier versions of some of the articles were given at the most recent International Conference on Temporal Logic, University of Manchester, UK. Readership: Any student of the area - postgraduate, postdoctoral or even research professor ...
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed at combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (the sum of data fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
Supersymmetric Regularization, Two-Loop QCD Amplitudes and Coupling Shifts
International Nuclear Information System (INIS)
Dixon, Lance
2002-01-01
We present a definition of the four-dimensional helicity (FDH) regularization scheme valid for two or more loops. This scheme was previously defined and utilized at one loop. It amounts to a variation on the standard 't Hooft-Veltman scheme and is designed to be compatible with the use of helicity states for "observed" particles. It is similar to dimensional reduction in that it maintains an equal number of bosonic and fermionic states, as required for preserving supersymmetry. Supersymmetry Ward identities relate different helicity amplitudes in supersymmetric theories. As a check that the FDH scheme preserves supersymmetry, at least through two loops, we explicitly verify a number of these identities for gluon-gluon scattering (gg → gg) in supersymmetric QCD. These results also cross-check recent non-trivial two-loop calculations in ordinary QCD. Finally, we compute the two-loop shift between the FDH coupling and the standard MS-bar coupling α_s. The FDH shift is identical to the one for dimensional reduction. The two-loop coupling shifts are then used to obtain the three-loop QCD β function in the FDH and dimensional reduction schemes.
Sparsity regularization for parameter identification problems
International Nuclear Information System (INIS)
Jin, Bangti; Maass, Peter
2012-01-01
The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓ p -penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓ p sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
Indeterministic Temporal Logic
Directory of Open Access Journals (Sweden)
Trzęsicki Kazimierz
2015-09-01
The questions of determinism, causality, and freedom have been the main philosophical problems debated since the beginning of temporal logic. The issue of the logical value of sentences about the future was stated by Aristotle in the famous passage on tomorrow's sea-battle. The question inspired Łukasiewicz's idea of many-valued logics and was a motive for A. N. Prior's considerations on the logic of tenses. Within the scheme of temporal logic there are different solutions to the problem. In the paper we consider indeterministic temporal logic based on the idea of temporal worlds and the relation of accessibility between them.
Spiking Regularity and Coherence in Complex Hodgkin–Huxley Neuron Networks
International Nuclear Information System (INIS)
Zhi-Qiang, Sun; Ping, Xie; Wei, Li; Peng-Ye, Wang
2010-01-01
We study the effects of the strength of coupling between neurons on the spiking regularity and coherence in a complex network with randomly connected Hodgkin–Huxley neurons driven by colored noise. It is found that for a given topology realization and colored noise correlation time, there exists an optimal strength of coupling at which the spiking regularity of the network reaches its best level. Moreover, when the temporal regularity reaches its best level, the spatial coherence of the system has already increased to a relatively high level. In addition, for a given number of neurons and noise correlation time, the values of average regularity and spatial coherence at the optimal strength of coupling are nearly independent of the topology realization. Furthermore, there exists an optimal value of the colored noise correlation time at which the spiking regularity can reach its best level. These results may be helpful for understanding the real neuron world. (cross-disciplinary physics and related areas of science and technology)
Learning Sparse Visual Representations with Leaky Capped Norm Regularizers
Wangni, Jianqiao; Lin, Dahua
2017-01-01
Sparsity-inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper we investigate the use of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly than those above, therefore imposing strong sparsity and...
Directory of Open Access Journals (Sweden)
Dustin Kai Yan Lau
2014-03-01
Background: Unlike alphabetic languages, Chinese uses a logographic script. However, the phonetic radical of many characters has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions, resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method: Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement in Chinese) were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject
Energy Technology Data Exchange (ETDEWEB)
Yilmaz, Ergin, E-mail: erginyilmaz@yahoo.com [Department of Biomedical Engineering, Engineering Faculty, Bülent Ecevit University, 67100 Zonguldak (Turkey); Ozer, Mahmut [Department of Electrical and Electronics Engineering, Engineering Faculty, Bülent Ecevit University, 67100 Zonguldak (Turkey)
2013-08-01
We consider a scale-free network of stochastic HH neurons driven by a subthreshold periodic stimulus and investigate how the collective spiking regularity or the collective temporal coherence changes with the stimulus frequency, the intrinsic noise (or the cell size), the network average degree and the coupling strength. We show that the best temporal coherence is obtained for a certain level of the intrinsic noise when the frequencies of the external stimulus and the subthreshold oscillations of the network elements match. We also find that the collective regularity exhibits a resonance-like behavior depending on both the coupling strength and the network average degree at the optimal values of the stimulus frequency and the cell size, indicating that the best temporal coherence also requires an optimal coupling strength and an optimal average degree of the connectivity.
Energy Technology Data Exchange (ETDEWEB)
Surdin, Maurice [Commissariat a l' energie atomique et aux energies alternatives - CEA (France)
1960-07-01
The project for an experiment which uses the effect of gravitation on Maser-type clocks placed on the ground at two different heights and which is designed to verify the general theory of relativity. Reprint of a paper published in Comptes rendus des seances de l'Academie des Sciences, t. 250, p. 299-301, sitting of 11 January 1960.
Convergence and fluctuations of Regularized Tyler estimators
Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim
2015-01-01
This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and, second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem posed by the use of RTEs in practice is the question of setting the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results exist concerning the regime of n going to infinity with N fixed, even though the investigation of this assumption has usually predated the analysis of the most difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the parameter.
Chondroblastoma of temporal bone
Energy Technology Data Exchange (ETDEWEB)
Tanohta, K.; Noda, M.; Katoh, H.; Okazaki, A.; Sugiyama, S.; Maehara, T.; Onishi, S.; Tanida, T.
1986-07-01
The case of a 55-year-old female with chondroblastoma arising from the left temporal bone is presented. Although 10 cases of temporal chondroblastoma have been reported, this is the first in which plain radiography, pluridirectional tomography, computed tomography (CT) and angiography were performed. We discuss the clinical and radiological aspects of this rare tumor.
Chondroblastoma of temporal bone
International Nuclear Information System (INIS)
Tanohta, K.; Noda, M.; Katoh, H.; Okazaki, A.; Sugiyama, S.; Maehara, T.; Onishi, S.; Tanida, T.
1986-01-01
The case of a 55-year-old female with chondroblastoma arising from the left temporal bone is presented. Although 10 cases of temporal chondroblastoma have been reported, this is the first in which plain radiography, pluridirectional tomography, computed tomography (CT) and angiography were performed. We discuss the clinical and radiological aspects of this rare tumor. (orig.)
The use of regularization in inferential measurements
International Nuclear Information System (INIS)
Hines, J. Wesley; Gribok, Andrei V.; Attieh, Ibrahim; Uhrig, Robert E.
1999-01-01
Inferential sensing is the prediction of a plant variable through the use of correlated plant variables. A correct prediction of the variable can be used to monitor sensors for drift or other failures, making periodic instrument calibrations unnecessary. This move from periodic to condition-based maintenance can reduce costs and increase the reliability of the instrument. Having accurate, reliable measurements is important for signals that may impact safety or profitability. This paper investigates how collinearity adversely affects inferential sensing by making the results inconsistent and unrepeatable, and presents regularization as a potential solution. (author)
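To see concretely why collinearity makes inferential predictions unrepeatable and how Tikhonov (ridge) regularization stabilizes them, consider the toy example below; the signals, noise levels, and penalty weight are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = rng.normal(size=n)
# two nearly collinear "plant" signals used to infer a third variable
x1 = t + 0.01 * rng.normal(size=n)
x2 = t + 0.01 * rng.normal(size=n)
X = np.column_stack([x1, x2])
y = t + 0.1 * rng.normal(size=n)

# ordinary least squares: coefficients vary wildly between reruns under collinearity
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# ridge (Tikhonov) regularization: the penalized normal equations are well conditioned
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
print(w_ols, w_ridge)  # w_ridge stays near (0.5, 0.5) and is stable across reruns
```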
Regularization ambiguities in loop quantum gravity
International Nuclear Information System (INIS)
Perez, Alejandro
2006-01-01
One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation, which is free of ultraviolet divergences. However, ambiguities associated with the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem--the existence of well-behaved regularizations of the constraints--is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated with the SU(2) unitary representation used in the diffeomorphism-covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and is referred to here as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory that conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity, exhibiting the existence of spurious solutions for higher-representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions - due to the difficulties associated with the definition of the physical inner product - it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we find
Effort variation regularization in sound field reproduction
DEFF Research Database (Denmark)
Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis
2010-01-01
In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths......), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, improving thus the reproduction accuracy...
New regularities in mass spectra of hadrons
International Nuclear Information System (INIS)
Kajdalov, A.B.
1989-01-01
The properties of bosonic and baryonic Regge trajectories for hadrons composed of light quarks are considered. Experimental data agree with the existence of daughter trajectories consistent with string models. It is pointed out that the parity doubling for baryonic trajectories, observed experimentally, is not understood in the existing quark models. The mass spectrum of bosons and baryons indicates an approximate supersymmetry in the mass region M > 1 GeV. These regularities indicate a high degree of symmetry for the dynamics in the confinement region. 8 refs.; 5 figs
Total-variation regularization with bound constraints
International Nuclear Information System (INIS)
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
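The decoupling idea can be caricatured in 1-D: take a gradient step on a smoothed TV objective, then enforce the bounds by projection. This is not the authors' algorithm (their splitting lets existing TV solvers be reused unchanged); it is a toy projected-gradient sketch with parameter values of our choosing.

```python
import numpy as np

def tv_denoise_bounded(y, lam=0.5, lo=0.0, hi=1.0, n_iter=500, step=0.05, eps=1e-3):
    """Toy bound-constrained TV denoising: gradient step on
    0.5*||x - y||^2 + lam*sum(sqrt(diff(x)^2 + eps)), then clip to [lo, hi]."""
    x = np.clip(y.astype(float), lo, hi)
    for _ in range(n_iter):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)   # derivative of the smoothed |d| terms
        g = np.zeros_like(x)
        g[:-1] -= w
        g[1:] += w
        x -= step * ((x - y) + lam * g)
        x = np.clip(x, lo, hi)         # projection enforces the bound constraints
    return x
```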
Bayesian regularization of diffusion tensor images
DEFF Research Database (Denmark)
Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif
2007-01-01
Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along...... several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...
Indefinite metric and regularization of electrodynamics
International Nuclear Information System (INIS)
Gaudin, M.
1984-06-01
The invariant regularization of Pauli and Villars in quantum electrodynamics can be considered as deriving from a local and causal Lagrangian theory for spin-1/2 bosons, by introducing an indefinite metric and a condition on the allowed states similar to the Lorentz condition. A consequence is the asymptotic freedom of the photon propagator. We present a calculation of the effective charge to fourth order in the coupling as a function of the auxiliary masses; the theory avoids all mass divergences to this order. [fr]
Strategies for regular segmented reductions on GPU
DEFF Research Database (Denmark)
Larsen, Rasmus Wriedt; Henriksen, Troels
2017-01-01
We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...
Regularization of the light-cone gauge gluon propagator singularities using sub-gauge conditions
Energy Technology Data Exchange (ETDEWEB)
Chirilli, Giovanni A.; Kovchegov, Yuri V.; Wertepny, Douglas E. [Department of Physics, The Ohio State University,191 W Woodruff Ave, Columbus, OH 43210 (United States)
2015-12-21
Perturbative QCD calculations in the light-cone gauge have long suffered from the ambiguity associated with the regularization of the poles in the gluon propagator. In this work we study sub-gauge conditions within the light-cone gauge corresponding to several known ways of regulating the gluon propagator. Using the functional integral calculation of the gluon propagator, we rederive the known sub-gauge conditions for the θ-function gauges and identify the sub-gauge condition for the principal value (PV) regularization of the gluon propagator’s light-cone poles. The obtained sub-gauge condition for the PV case is further verified by a sample calculation of the classical Yang-Mills field of two collinear ultrarelativistic point color charges. Our method does not allow one to construct a sub-gauge condition corresponding to the well-known Mandelstam-Leibbrandt prescription for regulating the gluon propagator poles.
NSVZ scheme with the higher derivative regularization for N=1 SQED
International Nuclear Information System (INIS)
Kataev, A.L.; Stepanyantz, K.V.
2013-01-01
The exact NSVZ relation between the β-function of N=1 SQED and the anomalous dimension of the matter superfields is studied within the Slavnov higher derivative regularization approach. It is shown that if the renormalization group functions are defined in terms of the bare coupling constant, this relation is always valid. In the renormalized theory the NSVZ relation is obtained in the momentum subtraction scheme supplemented by a special finite renormalization. Unlike dimensional reduction, the higher derivative regularization allows one to fix this finite renormalization. This is done by imposing the conditions Z_3(α, μ=Λ) = 1 and Z(α, μ=Λ) = 1 on the renormalization constants of N=1 SQED, where Λ is a parameter in the higher derivative term. The results are verified by an explicit three-loop calculation. In this approximation we relate the DR-bar scheme and the NSVZ scheme defined within the higher derivative approach by a finite renormalization.
Emotion regulation deficits in regular marijuana users.
Zimmermann, Kaeli; Walz, Christina; Derckx, Raissa T; Kendrick, Keith M; Weber, Bernd; Dore, Bruce; Ochsner, Kevin N; Hurlemann, René; Becker, Benjamin
2017-08-01
Effective regulation of negative affective states has been associated with mental health. Impaired regulation of negative affect represents a risk factor for dysfunctional coping mechanisms such as drug use and thus could contribute to the initiation and development of problematic substance use. This study investigated behavioral and neural indices of emotion regulation in regular marijuana users (n = 23) and demographically matched nonusing controls (n = 20) by means of an fMRI cognitive emotion regulation (reappraisal) paradigm. Relative to nonusing controls, marijuana users demonstrated increased neural activity in a bilateral frontal network comprising precentral, middle cingulate, and supplementary motor regions during reappraisal of negative affect (P ... marijuana users relative to controls. Together, the present findings could reflect an unsuccessful attempt at compensatory recruitment of additional neural resources in the context of disrupted amygdala-prefrontal interaction during volitional emotion regulation in marijuana users. As such, impaired volitional regulation of negative affect might represent a consequence of, or risk factor for, regular marijuana use. Hum Brain Mapp 38:4270-4279, 2017. © 2017 Wiley Periodicals, Inc.
Efficient multidimensional regularization for Volterra series estimation
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time-invariant systems. To avoid excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd-degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with that of the white-box (physical) models.
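A heavily stripped-down version of the estimation problem looks as follows: build lagged polynomial (Volterra) regressors and solve a penalized least squares. The paper's contribution lies in structured smoothness/decay priors over the multidimensional Volterra kernels and in transient handling; the plain ridge penalty and the helper names below are our simplifications.

```python
import numpy as np

def volterra2_features(u, M):
    """Regressors of a truncated 2nd-degree Volterra model with memory M."""
    n = len(u)
    U = np.column_stack([np.concatenate([np.zeros(k), u[:n - k]]) for k in range(M)])
    quad = np.column_stack([U[:, i] * U[:, j] for i in range(M) for j in range(i, M)])
    return np.column_stack([U, quad])

def penalized_fit(Phi, y, lam=1e-2):
    """Ridge estimate; the paper uses structured kernels instead of the identity."""
    p = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)
```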
Supporting Regularized Logistic Regression Privately and Efficiently
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
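The statistical core being protected is ordinary L2-regularized logistic regression, which fits in a few lines; the paper's actual contribution, distributed computation under strong cryptography, is deliberately not reproduced in this sketch.

```python
import numpy as np

def fit_l2_logreg(X, y, lam=1.0, lr=0.1, n_iter=2000):
    """Gradient descent for L2-regularized logistic regression (y in {0, 1})."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
        grad = X.T @ (mu - y) / n + lam * w   # log-loss gradient plus ridge term
        w -= lr * grad
    return w
```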
Supporting Regularized Logistic Regression Privately and Efficiently.
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
Multiple graph regularized nonnegative matrix factorization
Wang, Jim Jing-Yan
2013-10-01
Non-negative matrix factorization (NMF) has been widely used as a component-based data representation method. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) was proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF variant, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. Factorization matrices and the linear combination coefficients of the graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
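For orientation, the single-graph GrNMF baseline that MultiGrNMF extends admits simple multiplicative updates, sketched below from the formulation summarized in the abstract; the rank, the λ value, and the random initialization are illustrative choices of ours.

```python
import numpy as np

def grnmf(X, W, k=10, lam=1.0, n_iter=200, eps=1e-9):
    """Multiplicative updates for graph-regularized NMF with one affinity graph W.
    X: (m, n) nonnegative data; W: (n, n) sample affinity; returns U (m, k), V (n, k).
    MultiGrNMF additionally learns a convex combination of several graphs."""
    D = np.diag(W.sum(axis=1))                  # degree matrix of the graph
    rng = np.random.default_rng(0)
    U = rng.random((X.shape[0], k))
    V = rng.random((X.shape[1], k))
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V
```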
Accelerating Large Data Analysis By Exploiting Regularities
Moran, Patrick J.; Ellsworth, David
2003-01-01
We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical of Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes, as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model in which we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time series of meshes with a transformed mesh object, in which a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
Supporting Regularized Logistic Regression Privately and Efficiently.
Directory of Open Access Journals (Sweden)
Wenfa Li
Full Text Available As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has not yet been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
Multiview Hessian regularization for image annotation.
Liu, Weifeng; Tao, Dacheng
2013-07-01
The rapid development of computer hardware and Internet technology makes large-scale data-dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smoothes the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, it has been observed that LR biases the classification function toward a constant function, which possibly results in poor generalization. In addition, LR was developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple Hessian regularizers, each of which is obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.
International Nuclear Information System (INIS)
Deufel, Christopher L; Furutani, Keith M
2014-01-01
As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions. (paper)
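One concrete way to build such a simple reference optimizer is nonnegative least squares over dwell times; the paper's exact variance-based algebraic method differs, and the dose-rate matrix A below is assumed to be precomputed from a source model (both names are ours).

```python
import numpy as np
from scipy.optimize import nnls

def simple_dwell_times(A, d):
    """A[i, j]: dose to calculation point i per unit dwell time at position j;
    d[i]: prescribed dose. NNLS gives nonnegative dwell times whose resulting
    dose metrics can be compared against a commercial plan's output."""
    t, residual = nnls(A, d)
    return t
```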
Graph Regularized Meta-path Based Transductive Regression in Heterogeneous Information Network.
Wan, Mengting; Ouyang, Yunbo; Kaplan, Lance; Han, Jiawei
2015-01-01
A number of real-world networks are heterogeneous information networks, which are composed of different types of nodes and links. Numerical prediction in heterogeneous information networks is a challenging but significant area because network-based information for unlabeled objects is usually too limited to make precise estimations. In this paper, we consider a graph regularized meta-path based transductive regression model (Grempt), which combines the principal philosophies of typical graph-based transductive classification methods and of transductive regression models designed for homogeneous networks. The computation of our method is time and space efficient, and the precision of our model can be verified by numerical experiments.
Accretion onto some well-known regular black holes
International Nuclear Information System (INIS)
Jawad, Abdul; Shahzad, M.U.
2016-01-01
In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, and nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)
Accretion onto some well-known regular black holes
Energy Technology Data Exchange (ETDEWEB)
Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)
2016-03-15
In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, and nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)
Accretion onto some well-known regular black holes
Jawad, Abdul; Shahzad, M. Umair
2016-03-01
In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, and nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.
Reasoning about knowledge: Children’s evaluations of generality and verifiability
Koenig, Melissa A.; Cole, Caitlin A.; Meyer, Meredith; Ridge, Katherine E.; Kushnir, Tamar; Gelman, Susan A.
2015-01-01
In a series of experiments, we examined 3- to 8-year-old children's (N = 223) and adults' (N = 32) use of two properties of testimony to estimate a speaker's knowledge: generality and verifiability. Participants were presented with a “Generic speaker” who made a series of 4 general claims about “pangolins” (a novel animal kind), and a “Specific speaker” who made a series of 4 specific claims about “this pangolin” as an individual. To investigate the role of verifiability, we systematically varied whether the claim referred to a perceptually obvious feature visible in a picture (e.g., “has a pointy nose”) or a non-evident feature that was not visible (e.g., “sleeps in a hollow tree”). Three main findings emerged: (1) Young children showed a pronounced reliance on verifiability that decreased with age. Three-year-old children were especially prone to credit knowledge to speakers who made verifiable claims, whereas 7- to 8-year-olds and adults credited knowledge to generic speakers regardless of whether the claims were verifiable; (2) Children's attribution of knowledge to generic speakers was not detectable until age 5, and only when those claims were also verifiable; (3) Children often generalized speakers' knowledge outside of the pangolin domain, indicating a belief that a person's knowledge about pangolins likely extends to new facts. Findings indicate that young children may be inclined to doubt speakers who make claims they cannot verify themselves, and reveal a developmentally increasing appreciation for speakers who make general claims. PMID:26451884
Otosclerosis: Temporal Bone Pathology.
Quesnel, Alicia M; Ishai, Reuven; McKenna, Michael J
2018-04-01
Otosclerosis is pathologically characterized by abnormal bony remodeling, which includes bone resorption, new bone deposition, and vascular proliferation in the temporal bone. Sensorineural hearing loss in otosclerosis is associated with extension of otosclerosis to the cochlear endosteum and deposition of collagen throughout the spiral ligament. Persistent or recurrent conductive hearing loss after stapedectomy has been associated with incomplete footplate fenestration, poor incus-prosthesis connection, and incus resorption in temporal bone specimens. Human temporal bone pathology has helped to define the role of computed tomography imaging for otosclerosis, confirming that computed tomography is highly sensitive for diagnosis, yet limited in assessing cochlear endosteal involvement. Copyright © 2017 Elsevier Inc. All rights reserved.
Laplacian embedded regression for scalable manifold regularization.
Chen, Lin; Tsang, Ivor W; Xu, Dong
2012-06-01
Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small-scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using the ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are twofold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient than for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as to accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large-scale SSL problems. Extensive experiments on both toy and real
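The "original kernel plus graph kernel" structure can be mimicked in a few lines. This is only an illustration of the idea, not the LapESVR/LapERLS derivation: the low-rank construction, the damping constant, and the parameter names are all our assumptions.

```python
import numpy as np

def transformed_kernel(K, L, gamma=1.0, r=10):
    """Add a low-rank 'smooth on the graph' kernel, built from the bottom r
    eigenvectors of the graph Laplacian L, to an original kernel matrix K."""
    vals, vecs = np.linalg.eigh(L)              # eigenvalues in ascending order
    U, s = vecs[:, :r], vals[:r]
    G = U @ np.diag(1.0 / (s + 1e-6)) @ U.T     # favors graph-smooth directions
    return K + gamma * G
```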
The Method of a Standalone Functional Verifying Operability of Sonar Control Systems
Directory of Open Access Journals (Sweden)
A. A. Sotnikov
2014-01-01
Full Text Available This article describes a method for standalone verification of sonar control systems based on functional checking of control system operability. The main features of the realized method are the development of a valid mathematical model for simulating sonar signals at the hydroacoustic antenna, a valid representation of the sonar control system modes as a discrete Markov model, and functional object verification in real time. Some ways are proposed to control computational complexity when the computing resources of the simulation equipment are insufficient, namely reduction of model functionality and reduction of adequacy. Experiments were carried out using testing equipment developed by a department of the Research Institute of Information Control Systems at Bauman Moscow State Technical University to verify the technical validity of industrial sonar complexes. The on-board software was artificially modified to create malfunctions in the functionality of the sonar control systems during the verifying process in order to estimate the performance of the verifying system. The method's efficiency was demonstrated theoretically and experimentally in comparison with the basic methodology for verifying technical systems. This method could also be used in debugging the on-board software of sonar complexes and in the development of new promising algorithms for sonar signal processing.
Constrained least squares regularization in PET
International Nuclear Information System (INIS)
Choudhury, K.R.; O'Sullivan, F.O.
1996-01-01
Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in the background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood estimators, at a fraction of the computational effort.
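In the same spirit of fast, non-iterative approximations, the toy 1-D deconvolution below computes a closed-form Tikhonov estimate and then clips negative values; the paper's constrained least squares construction is more principled than this crude stand-in, and all constants here are ours.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
n = 64
psf = np.exp(-0.5 * (np.arange(n) / 2.0) ** 2)   # Gaussian blur kernel
A = toeplitz(psf)
x_true = np.zeros(n)
x_true[20], x_true[40] = 1.0, 0.5
y = A @ x_true + 0.01 * rng.normal(size=n)

lam = 1e-2
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)  # closed form, no iterations
x_hat = np.maximum(x_tik, 0.0)   # cheap surrogate for the nonnegativity constraint
```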
Regularities of radiorace formation in yeasts
International Nuclear Information System (INIS)
Korogodin, V.I.; Bliznik, K.M.; Kapul'tsevich, Yu.G.; Petin, V.G. (Akademiya Meditsinskikh Nauk SSSR, Obninsk. Nauchno-Issledovatel'skij Inst. Meditsinskoj Radiologii)
1977-01-01
Two strains of diploid yeast, Saccharomyces ellipsoides Megri 139-B, isolated under natural conditions, and Saccharomyces cerevisiae 5a x 3Bα, heterozygous in the genes ade 1 and ade 2, were exposed to γ-quanta of Co-60. The contents of saltant cells forming colonies with changed morphology, of nonviable cells, of respiration-mutant cells, and of recombinant cells for the genes ade 1 and ade 2 were determined. A certain regularity was revealed in the distribution of the four cell types among the colonies: the higher the content of cells of any one type, the higher the content of cells carrying other hereditary changes.
Regularization destriping of remote sensing imagery
Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle
2017-07-01
We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, assigning spatial weights to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set representing the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
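A toy version of the variational idea: keep data fidelity while penalizing variation along the stripe direction, minimized by plain gradient descent. The paper uses spatial weights and an explicit Euler-Lagrange finite-difference scheme; the quadratic penalty and constants below are simplifications of ours.

```python
import numpy as np

def destripe(img, lam=2.0, n_iter=500, step=0.05):
    """Gradient descent on 0.5*||x - y||^2 + (lam/2)*||D x||^2, where D takes
    differences along axis 0; suited to stripes running across the rows."""
    y = img.astype(float)
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x, axis=0)
        g = np.zeros_like(x)
        g[:-1] -= d
        g[1:] += d                # g = D^T D x, gradient of the stripe penalty
        x -= step * ((x - y) + lam * g)
    return x
```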
The Regularity of Optimal Irrigation Patterns
Morel, Jean-Michel; Santambrogio, Filippo
2010-02-01
A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, river basins or trees. Recent approaches to these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. We prove regularity properties in the case where the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.
Singular tachyon kinks from regular profiles
International Nuclear Information System (INIS)
Copeland, E.J.; Saffin, P.M.; Steer, D.A.
2003-01-01
We demonstrate how Sen's singular kink solution of the Born-Infeld tachyon action can be constructed by taking the appropriate limit of initially regular profiles. It is shown that the order in which different limits are taken plays an important role in determining whether or not such a solution is obtained for a wide class of potentials. Indeed, by introducing a small parameter into the action, we are able to circumvent the results of a recent paper which derived two conditions on the asymptotic tachyon potential such that the singular kink could be recovered in the large amplitude limit of periodic solutions. We show that this is explained by the non-commuting nature of two limits, and that Sen's solution is recovered if the order of the limits is chosen appropriately.
Two-pass greedy regular expression parsing
DEFF Research Database (Denmark)
Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse
2013-01-01
We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms...... by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k ... and not requiring it to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, or require substantially more log storage and employ 3 passes where the first consists of reversing the input, or do not or are not known to produce a greedy parse. The performance of our unoptimized C...
Discriminative Elastic-Net Regularized Linear Regression.
Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen
2017-03-01
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations to perform the final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
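The elastic-net penalty itself (an l1 term plus a squared l2 term) is minimized by a short proximal-gradient loop, sketched on a plain regression problem below. The ENLR models of the paper regularize singular values of a projection matrix and relax the zero-one targets, which this sketch does not attempt.

```python
import numpy as np

def elastic_net(X, y, lam1=0.1, lam2=0.1, n_iter=500):
    """Proximal gradient for 0.5*||Xw - y||^2 + lam1*||w||_1 + 0.5*lam2*||w||^2."""
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + lam2)   # 1 / Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ w - y) + lam2 * w              # gradient of the smooth part
        z = w - step * g
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)  # soft threshold
    return w
```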
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.; Franek, M.; Schonlieb, C.-B.
2012-01-01
for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations
Incremental projection approach of regularization for inverse problems
Energy Technology Data Exchange (ETDEWEB)
Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)
2016-10-15
This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
Dimensional regularization and analytical continuation at finite temperature
International Nuclear Information System (INIS)
Chen Xiangjun; Liu Lianshou
1998-01-01
The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed, and a method of regularization of infrared divergent integrals and infrared divergent sums is given.
Bounded Perturbation Regularization for Linear Least Squares Estimation
Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.
2017-01-01
This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded
Regular Generalized Star Star closed sets in Bitopological Spaces
K. Kannan; D. Narasimhan; K. Chandrasekhara Rao; R. Ravikumar
2011-01-01
The aim of this paper is to introduce the concepts of τ1τ2-regular generalized star star closed sets and τ1τ2-regular generalized star star open sets and to study their basic properties in bitopological spaces.
Directory of Open Access Journals (Sweden)
Thibault Warlop
2018-02-01
Full Text Available Variability raises considerable interest as a promising and sensitive marker of dysfunction in physiology, in particular in neuroscience. Both internally (e.g., pathology) and externally (e.g., environment) generated perturbations, and the neuro-mechanical responses to them, contribute to the fluctuating dynamics of locomotion. Defective internal gait control in Parkinson's disease (PD), resulting in typical timing gait disorders, is characterized by the breakdown of the temporal organization of stride duration variability. The influence of external cues on the gait pattern can be detrimental or advantageous depending on the situation (healthy or pathological gait pattern, respectively). As well as being an interesting rehabilitative approach in PD, treadmills are usually used in laboratory settings to perform instrumented gait analysis, including gait variability assessment. However, possibly acting as an external pacemaker, the treadmill could modulate the temporal organization of gait variability in PD patients, which could invalidate any gait variability assessment. This study aimed to investigate the immediate influence of treadmill walking (TW) on the temporal organization of stride duration variability in PD and healthy populations. Here, we analyzed the gait pattern of 20 PD patients and 15 healthy age-matched subjects walking overground and on a motorized treadmill (randomized order) at a self-selected speed. The temporal organization and regularity of the walking time series were assessed on 512 consecutive strides by the application of non-linear mathematical methods (detrended fluctuation analysis and power spectral density for the temporal organization, and sample entropy for the regularity of gait variability). A more temporally organized and regular gait pattern seems to emerge from TW in PD, while no influence was observed on healthy gait patterns. The treadmill could afford the necessary framework to regulate gait
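The detrended fluctuation analysis used above to quantify the temporal organization of stride durations can be sketched compactly; the scale list is illustrative, and a real gait analysis would apply it to the study's 512-stride series with carefully chosen scales.

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """DFA-1 scaling exponent of a series x (alpha near 0.5: uncorrelated noise;
    near 1.0: 1/f-like long-range correlations). Needs len(x) >> max(scales)."""
    y = np.cumsum(x - np.mean(x))          # integrated, mean-centered profile
    F = []
    for s in scales:
        rms = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        F.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]     # slope in log-log
```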
Nontraumatic temporal subcortical hemorrhage
International Nuclear Information System (INIS)
Weisberg, L.A.; Stazio, A.; Shamsnia, M.; Elliott, D.; Charity Hospital, New Orleans, LA
1990-01-01
Thirty patients with temporal hematomas were analyzed. Four with frontal extension survived. Of 6 with ganglionic extension, three had residual deficit. Of 8 with parietal extension, 4 had delayed deterioration and died, two patients recovered, and two with peritumoral hemorrhage due to glioblastoma multiforme died. Five patients with posterior temporal hematomas recovered. In 7 patients with basal-inferior temporal hematomas, angiography showed aneurysms in 3 cases, angiomas in 2 cases and no vascular lesion in 2 cases. Of 23 cases with negative angiography and no systemic cause for temporal hematoma, 12 patients were hypertensive and 11 were normotensive. Ten hypertensive patients without evidence of chronic vascular disease had the largest hematomas, extending into the parietal or ganglionic regions. Seven of these patients died; 3 had residual deficit. Eleven normotensive and two hypertensive patients with evidence of chronic vascular change had smaller hematomas. They survived with good functional recovery. (orig.)
... functions, including having odd feelings — such as euphoria, déjà vu or fear. Temporal lobe seizures are sometimes called ... a sudden sense of unprovoked fear or joy; a déjà vu experience — a feeling that what's happening has happened ...
Directory of Open Access Journals (Sweden)
Kesavan
2016-03-01
Full Text Available INTRODUCTION Human temporal bones are difficult to procure nowadays due to various ethical issues. The sheep temporal bone is a good alternative owing to its morphological similarities, easy procurement and lower cost. Many middle ear exercises can be performed easily, and instrument handling can be practised in procedures such as myringoplasty, tympanoplasty, stapedotomy, facial nerve dissection and some middle ear implants. This is useful for a resident training programme.
Dynamic Symmetric Key Mobile Commerce Scheme Based on Self-Verified Mechanism
Directory of Open Access Journals (Sweden)
Jiachen Yang
2014-01-01
Full Text Available Regarding the security and efficiency of mobile e-commerce, the authors summarize the advantages and disadvantages of several related schemes, especially the self-verified mobile payment scheme based on the elliptic curve cryptosystem (ECC), and then propose a new type of dynamic symmetric key mobile commerce scheme based on a self-verified mechanism. The authors analyze the basic algorithm underlying self-verified mechanisms and detail the complete transaction process of the proposed scheme. The authors analyze the payment scheme with respect to security and efficiency. The analysis shows that the proposed scheme not only meets the premise of highly efficient mobile electronic payment but also takes security into account. The user confirmation mechanism at the end of the proposed scheme further strengthens its security. In brief, the proposed scheme is more efficient and practical than most of the existing schemes.
Evolution of optically nondestructive and data-non-intrusive credit card verifiers
Sumriddetchkajorn, Sarun; Intaravanne, Yuttana
2010-04-01
Since the deployment of the credit card, the number of credit card fraud cases has grown rapidly, with losses amounting to millions of US dollars. Instead of asking the cardholder for more information or taking on risk through payment approval, a nondestructive and data-non-intrusive credit card verifier is highly desirable before a transaction begins. In this paper, we review optical techniques that have been proposed and invented to make a genuine credit card more distinguishable from a counterfeit one. Several optical approaches to the implementation of credit card verifiers are also included. In particular, we highlight our invention of a hyperspectral-imaging-based portable credit card verifier structure that offers a very low false error rate of 0.79%. Other key features include low cost, simplicity in design and implementation, no moving parts, no need for an additional decoding key, and adaptive learning.
What are the ultimate limits to computational techniques: verifier theory and unverifiability
International Nuclear Information System (INIS)
Yampolskiy, Roman V
2017-01-01
Despite significant developments in proof theory, surprisingly little attention has been devoted to the concept of proof verifiers. In particular, the mathematical community may be interested in studying different types of proof verifiers (people, programs, oracles, communities, superintelligences) as mathematical objects. Such an effort could reveal their properties, their powers and limitations (particularly in human mathematicians), minimum and maximum complexity, as well as self-verification and self-reference issues. We propose an initial classification system for verifiers and provide some rudimentary analysis of solved and open problems in this important domain. Our main contribution is a formal introduction of the notion of unverifiability, for which the paper could serve as a general citation in domains of theorem proving, as well as software and AI verification. (invited comment)
What are the ultimate limits to computational techniques: verifier theory and unverifiability
Yampolskiy, Roman V.
2017-09-01
Despite significant developments in proof theory, surprisingly little attention has been devoted to the concept of proof verifiers. In particular, the mathematical community may be interested in studying different types of proof verifiers (people, programs, oracles, communities, superintelligences) as mathematical objects. Such an effort could reveal their properties, their powers and limitations (particularly in human mathematicians), minimum and maximum complexity, as well as self-verification and self-reference issues. We propose an initial classification system for verifiers and provide some rudimentary analysis of solved and open problems in this important domain. Our main contribution is a formal introduction of the notion of unverifiability, for which the paper could serve as a general citation in domains of theorem proving, as well as software and AI verification.
Exclusion of children with intellectual disabilities from regular ...
African Journals Online (AJOL)
This study investigated why teachers exclude children with intellectual disability (ID) from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data; results revealed that 57.4% of regular teachers could not cope with children with ID ...
39 CFR 6.1 - Regular meetings, annual meeting.
2010-07-01
39 CFR § 6.1 — Regular meetings, annual meeting. United States Postal Service, The Board of Governors of the U.S. Postal Service, Meetings (Article VI). The Board shall meet regularly on a schedule...
Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis
Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.
2007-01-01
Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…
5 CFR 551.421 - Regular working hours.
2010-01-01
5 CFR § 551.421 — Regular working hours (Administrative Personnel; ... Activities). (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...
20 CFR 226.35 - Deductions from regular annuity rate.
2010-04-01
20 CFR § 226.35 — Deductions from regular annuity rate (Employees' Benefits; Computing Employee, Spouse, and Divorced Spouse Annuities; Computing a Spouse or Divorced Spouse Annuity). The regular annuity rate of the spouse and divorced...
20 CFR 226.34 - Divorced spouse regular annuity rate.
2010-04-01
20 CFR § 226.34 — Divorced spouse regular annuity rate (Employees' Benefits; Computing Employee, Spouse, and Divorced Spouse Annuities; Computing a Spouse or Divorced Spouse Annuity). The regular annuity rate of a divorced spouse is equal to...
20 CFR 226.14 - Employee regular annuity rate.
2010-04-01
20 CFR § 226.14 — Employee regular annuity rate (Employees' Benefits; Computing Employee, Spouse, and Divorced Spouse Annuities; Computing an Employee Annuity). The regular annuity rate payable to the employee is the total of the employee tier I...
International Nuclear Information System (INIS)
Chaari, L.; Pesquet, J.Ch.; Chaari, L.; Ciuciu, Ph.; Benazza-Benyahia, A.
2011-01-01
To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques using multiple receiver coils have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full FOV image has to be reconstructed from the acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used Sensitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria including more general penalties than a classical l1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5 T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors. (authors)
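Sparsity-promoting wavelet-domain regularization of this kind is typically minimized with proximal iterations; a minimal ISTA loop with the complex soft-thresholding operator is sketched below, with the matrix A standing in for the composed sensitivity/Fourier/wavelet operator (the paper's algorithm handles more general penalties and constraints than this sketch).

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1, safe for complex-valued arrays."""
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of A^H A
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        x = soft_threshold(x - step * (A.conj().T @ (A @ x - y)), step * lam)
    return x
```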
Development of material measures for performance verifying surface topography measuring instruments
International Nuclear Information System (INIS)
Leach, Richard; Giusca, Claudiu; Rickens, Kai; Riemer, Oltmann; Rubert, Paul
2014-01-01
The development of two irregular-geometry material measures for performance verifying surface topography measuring instruments is described. The material measures are designed to be used to performance verify tactile and optical areal surface topography measuring instruments. The manufacture of the material measures using diamond turning followed by nickel electroforming is described in detail. Measurement results are then obtained using a traceable stylus instrument and a commercial coherence scanning interferometer, and the results are shown to agree to within the measurement uncertainties. The material measures are now commercially available as part of a suite of material measures aimed at the calibration and performance verification of areal surface topography measuring instruments.
Association between cotinine-verified smoking status and hypertension in 167,868 Korean adults.
Kim, Byung Jin; Han, Ji Min; Kang, Jung Gyu; Kim, Bum Soo; Kang, Jin Ho
2017-10-01
Previous studies have shown inconsistent results concerning the relationship between chronic smoking and blood pressure, and most of them relied on self-reported smoking status. This study was performed to evaluate the association of urinary cotinine or self-reported smoking status with hypertension and blood pressure in Korean adults. Among individuals enrolled in the Kangbuk Samsung Health Study and Kangbuk Samsung Cohort Study, 167,868 participants (men, 55.7%; age, 37.5 ± 6.9 years) between 2011 and 2013 who had urinary cotinine measurements were included. Individuals with urinary cotinine levels ≥50 ng/mL were defined as cotinine-verified current smokers. The prevalence of hypertension and of cotinine-verified current smoking in the overall population was 6.8% and 22.7%, respectively (10.0% in men and 2.8% in women for hypertension; 37.7% in men and 3.9% in women for cotinine-verified current smoking). In a multivariate regression analysis adjusted for age, sex, body mass index, waist circumference, alcohol drinking, vigorous exercise, and diabetes, cotinine-verified current smoking was associated with a lower prevalence of hypertension compared with cotinine-verified never smoking (OR [95% CI], 0.79 [0.75, 0.84]). Log-transformed cotinine levels and unobserved smoking were each negatively associated with hypertension (0.96 [0.96, 0.97] and 0.55 [0.39, 0.79], respectively). In a multivariate linear regression analysis, cotinine-verified current smoking was inversely associated with systolic and diastolic blood pressure (BP) (regression coefficient [95% CI], -1.23 [-1.39, -1.07] for systolic BP and -0.71 [-0.84, -0.58] for diastolic BP). In subgroup analyses according to sex, the inverse associations between cotinine-verified current smoking and hypertension were observed only in men. This large observational study showed that cotinine-verified current smoking and unobserved smoking were inversely associated with hypertension in Korean adults, especially in men.
Propagation of spiking regularity and double coherence resonance in feedforward networks.
Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok
2012-03-01
We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. We find that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. Interestingly, double coherence resonance (DCR), combining synaptic input correlation and noise intensity, is finally attained after layer-by-layer processing in FFNs. Furthermore, inhibitory connections also play an essential role in shaping DCR phenomena. Several properties of the neuronal network, such as noise intensity, correlation of synaptic inputs, and inhibitory connections, can serve as control parameters in modulating both rate coding and the order of temporal coding.
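As a toy illustration of the core quantity studied (spiking regularity under noise), the sketch below integrates a single noisy FitzHugh-Nagumo neuron with Euler-Maruyama and reports the coefficient of variation (CV) of its interspike intervals; the paper's multilayer feedforward architecture, correlated synaptic inputs and inhibitory connections are not modeled, and all parameter values are illustrative assumptions.

```python
# Toy sketch: spiking regularity of a noisy FitzHugh-Nagumo neuron, measured
# by the coefficient of variation (CV) of interspike intervals. The paper
# studies multilayer feedforward networks; this is only the single-cell core.
import numpy as np

def fhn_isi_cv(noise, I=0.5, a=0.7, b=0.8, eps=0.08,
               dt=0.05, steps=200_000, seed=1):
    rng = np.random.default_rng(seed)
    v, w, last_spike, isis = -1.0, -0.5, None, []
    above = False
    for i in range(steps):
        dv = v - v ** 3 / 3 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv + noise * np.sqrt(dt) * rng.standard_normal()
        w += dt * dw
        if v > 1.0 and not above:              # upward threshold crossing = spike
            above = True
            t = i * dt
            if last_spike is not None:
                isis.append(t - last_spike)
            last_spike = t
        elif v < 0.0:
            above = False
    isis = np.asarray(isis)
    return isis.std() / isis.mean()            # CV: 0 = perfectly regular

for sigma in (0.05, 0.2, 0.8):
    print(f"noise={sigma:.2f}  CV={fhn_isi_cv(sigma):.3f}")
```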
Psychosocial functioning among regular cannabis users with and without cannabis use disorder.
Foster, Katherine T; Arterberry, Brooke J; Iacono, William G; McGue, Matt; Hicks, Brian M
2017-11-27
In the United States, cannabis accessibility has continued to rise as the perception of its harmfulness has decreased. Only about 30% of regular cannabis users develop cannabis use disorder (CUD), but it is unclear if individuals who use cannabis regularly without ever developing CUD experience notable psychosocial impairment across the lifespan. Therefore, psychosocial functioning was compared across regular cannabis users with or without CUD and a non-user control group during adolescence (age 17; early risk) and young adulthood (ages 18-25; peak CUD prevalence). Weekly cannabis users with CUD (n = 311), weekly users without CUD (n = 111), and non-users (n = 996) were identified in the Minnesota Twin Family Study. Groups were compared on alcohol and illicit drug use, psychiatric problems, personality, and social functioning at age 17 and from ages 18 to 25. Self-reported cannabis use and problem use were independently verified using co-twin informant report. In both adolescence and young adulthood, non-CUD users reported significantly higher levels of substance use problems and externalizing behaviors than non-users, but lower levels than CUD users. High agreement between self- and co-twin informant reports confirmed the validity of self-reported cannabis use problems. Even in the absence of CUD, regular cannabis use was associated with psychosocial impairment in adolescence and young adulthood. However, regular users with CUD endorsed especially high psychiatric comorbidity and psychosocial impairment. The need for early prevention and intervention - regardless of CUD status - was highlighted by the presence of these patterns in adolescence.
Accreting fluids onto regular black holes via Hamiltonian approach
Energy Technology Data Exchange (ETDEWEB)
Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)
2017-08-15
We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process when different test fluids fall onto these regular black holes. The accreting fluid is classified through its equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are checked for these regular black holes. It is noted that the three-velocity depends on the critical points, and the equation-of-state parameter on the phase space. (orig.)
On the regularized fermionic projector of the vacuum
Finster, Felix
2008-03-01
We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.
MRI reconstruction with joint global regularization and transform learning.
Tanc, A Korhan; Eksioglu, Ender M
2016-10-01
Sparsity-based regularization has been a popular approach to remedy measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach improves MRI reconstruction performance compared with algorithms that use either the patchwise transform learning or the global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.
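The paper's exact cost function is not reproduced in the abstract; as a hedged sketch, a joint objective of this kind typically takes a form like the following, where the notation is mine rather than the authors': A is the undersampled measurement operator, P_j extracts the j-th patch, W is the learned sparsifying transform with sparse codes α_j, and a global term such as total variation is added on top.

```latex
\min_{x,\,W,\,\{\alpha_j\}}\;
\|A x - y\|_2^2
\;+\; \lambda \sum_j \left( \|W P_j x - \alpha_j\|_2^2 + \gamma\,\|\alpha_j\|_0 \right)
\;+\; \mu\,\mathrm{TV}(x)
```

The first term enforces data fidelity, the middle sum drives transform learning with patchwise sparsity, and the last term is the infused global regularizer.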
Learning rational temporal eye movement strategies.
Hoppe, David; Rothkopf, Constantin A
2016-07-19
During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
Directory of Open Access Journals (Sweden)
Shuo Zheng
2014-08-01
In research on earthquake anomaly recognition, the coupling effect of multiple geosystem spheres can reasonably be expected to explain the correlation between the various anomalous signals observed before a strong earthquake. In particular, the Lithosphere-Atmosphere-Ionosphere (LAI) coupling model has gained support from experimental, thermal and electromagnetic data. However, quasi-synchronous anomalies of multiple parameters, including thermal, radon and electromagnetic data, have not been reported for a single event as verification of the geosystem-sphere coupling effect. In this paper, we first summarize the reported studies on power spectrum density (PSD) in the ELF/VLF band and the radon data recorded at Guza seismic station. Then, historical surface latent heat flux (SLHF) data from the NCEP/NCAR Reanalysis Project are employed to investigate anomalous changes in the month before the April 14, 2010, Ms 7.1 Yushu earthquake, one of the typical intra-continental earthquakes of the Tibetan Plateau. Spatial and temporal analysis reveals that the anomalous fields of the PSD and SLHF data were located close to the epicenter and to the ends of some active faults of the Bayan Har Block, and that all anomalous dates converged between April 8 and 11 (6 to 3 days before the Yushu earthquake). We therefore suggest that the multi-parameter anomalies before the main shock are related to the Yushu earthquake. This paper offers an ideal case study for verifying the geosystem-sphere coupling effect in a single event.
Regressive-transgressive cycle of the Devonian sea in Uruguay verified by palynology
International Nuclear Information System (INIS)
Da Silva, J.
1990-01-01
This work presents the results and conclusions of a study of palynomorph populations carried out in Devonian formations in central Uruguay. The existence of a regressive-transgressive cycle is verified by analyzing the vertical distribution of the palynomorphs, and the presence of chitinozoans of the Cyathochitina type is noted for the section studied.
The verification, refinement and application of lexicographic rulers ...
African Journals Online (AJOL)
Lexicographic rulers for Afrikaans and the African languages are a decade old and are widely used in the compilation of dictionaries. The compilers have so far not considered it necessary to verify or refine these rulers. Criticism has, however, been expressed of the compilation of the Afrikaans ruler, and this is ...
Verifiable Outsourced Decryption of Attribute-Based Encryption with Constant Ciphertext Length
Directory of Open Access Journals (Sweden)
Jiguo Li
2017-01-01
An attribute-based encryption (ABE) system with outsourced decryption largely reduces the computation cost for users who intend to access encrypted files stored in the cloud. However, the correctness of the transformed ciphertext cannot be guaranteed, because the user does not have the original ciphertext. Lai et al. provided an ABE scheme with verifiable outsourced decryption, which helps the user check whether the transformation done by the cloud is correct. In order to improve computation performance and reduce communication overhead, we propose a new verifiable outsourcing scheme with constant ciphertext length. To be specific, our scheme achieves the following goals. (1) Our scheme is verifiable, ensuring that the user can efficiently check whether the transformation is done correctly by the CSP. (2) The size of the ciphertext and the number of expensive pairing operations are constant and do not grow with the complexity of the access structure. (3) The access structure in our scheme is AND gates on multivalued attributes, and we prove that our scheme is verifiable and secure against selective chosen-plaintext attacks in the standard model. (4) We give a performance analysis which indicates that our scheme is suitable for various bandwidth-limited and computation-constrained devices, such as mobile phones.
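The actual scheme relies on pairing-based CP-ABE, which is far beyond a snippet; the toy below illustrates only the verification idea, namely that a short commitment published with the ciphertext lets the user detect a cheating cloud before accepting a transformed result. The XOR "encryption", the commitment format and all names are stand-in assumptions, not the paper's construction.

```python
# Toy illustration of the *verification* idea in outsourced decryption:
# the user keeps a short commitment and checks the cloud's transformation
# result against it. Real CP-ABE verifiability (as in the paper) relies on
# pairing-based cryptography; everything below is a stand-in.
import hashlib, os

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

# --- encryptor side --------------------------------------------------------
message = b"patient record 42"
key = os.urandom(32)                       # stands in for the ABE-protected key
ciphertext = bytes(m ^ k for m, k in zip(message, key * 3))
commitment = H(b"commit", message)         # published alongside the ciphertext

# --- cloud side (outsourced transformation; here it completes decryption) --
def cloud_transform(ct, transform_key, cheat=False):
    partial = bytes(c ^ k for c, k in zip(ct, transform_key * 3))
    return partial[:-1] + b"X" if cheat else partial

# --- user side: verify before accepting ------------------------------------
def user_decrypt_and_verify(partial, commitment):
    if H(b"commit", partial) != commitment:
        raise ValueError("cloud transformation failed verification")
    return partial

honest = cloud_transform(ciphertext, key)
print(user_decrypt_and_verify(honest, commitment))      # accepted
try:
    user_decrypt_and_verify(cloud_transform(ciphertext, key, cheat=True), commitment)
except ValueError as e:
    print("detected:", e)
```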
Methods for verifying compliance with low-level radioactive waste acceptance criteria
Energy Technology Data Exchange (ETDEWEB)
NONE
1993-09-01
This report summarizes the methods that are currently employed and those that can be used to verify compliance with low-level radioactive waste (LLW) disposal facility waste acceptance criteria (WAC). This report presents the applicable regulations representing the Federal, State, and site-specific criteria for accepting LLW. Typical LLW generators are summarized, along with descriptions of their waste streams and final waste forms. General procedures and methods used by the LLW generators to verify compliance with the disposal facility WAC are presented. The report was written to provide an understanding of how a regulator could verify compliance with a LLW disposal facility's WAC. A comprehensive study of the methodology used to verify waste generator compliance with the disposal facility WAC is presented in this report. The study involved compiling the relevant regulations to define the WAC, reviewing regulatory agency inspection programs, and summarizing waste verification technology and equipment. The results of the study indicate that waste generators conduct verification programs that include packaging, classification, characterization, and stabilization elements. The current LLW disposal facilities perform waste verification steps on incoming shipments. A model inspection and verification program, which includes an emphasis on the generator's waste application documentation of their waste verification program, is recommended. The disposal facility verification procedures primarily involve the use of portable radiological survey instrumentation. The actual verification of generator compliance to the LLW disposal facility WAC is performed through a combination of incoming shipment checks and generator site audits.
13 CFR 127.403 - What happens if SBA verifies the concern's eligibility?
2010-01-01
Title 13, Business Credit and Assistance, § 127.403 What happens if SBA verifies the concern's eligibility? SMALL BUSINESS ADMINISTRATION, WOMEN-OWNED SMALL BUSINESS FEDERAL CONTRACT ASSISTANCE PROCEDURES, Eligibility Examinations § 127...
13 CFR 127.404 - What happens if SBA is unable to verify a concern's eligibility?
2010-01-01
Title 13, Business Credit and Assistance, § 127.404 What happens if SBA is unable to verify a concern's eligibility? SMALL BUSINESS ADMINISTRATION, WOMEN-OWNED SMALL BUSINESS FEDERAL CONTRACT ASSISTANCE PROCEDURES, Eligibility Examinations § 127...
40 CFR 8.9 - Measures to assess and verify environmental impacts.
2010-07-01
Title 40, Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY, GENERAL, ENVIRONMENTAL IMPACT ASSESSMENT OF NONGOVERNMENTAL ACTIVITIES IN ANTARCTICA, § 8.9 Measures to assess and verify environmental impacts. (a) The operator shall conduct appropriate monitoring of key environmental indicators as...
Eddy-Current Testing of Welded Stainless Steel Storage Containers to Verify Integrity and Identity
International Nuclear Information System (INIS)
Tolk, Keith M.; Stoker, Gerald C.
1999-01-01
An eddy-current scanning system is being developed to allow the International Atomic Energy Agency (IAEA) to verify the integrity of nuclear material storage containers. Such a system is necessary to detect attempts to remove material from the containers in facilities where continuous surveillance of the containers is not practical. Initial tests have shown that the eddy-current system is also capable of verifying the identity of each container using the electromagnetic signature of its welds. The DOE-3013 containers proposed for use in some US facilities are made of an austenitic stainless steel alloy, which is nonmagnetic in its normal condition. When the material is cold worked by forming or by local stresses experienced in welding, it loses its austenitic grain structure and its magnetic permeability increases. This change in magnetic permeability can be measured using an eddy-current probe specifically designed for this purpose. Initial tests have shown that variations of magnetic permeability and material conductivity in and around welds can be detected, and that they form a pattern unique to the container. The changes in conductivity that are present around a mechanically inserted plug can also be detected. Further development of the system is currently underway to adapt it to verifying the integrity and identity of sealable, tamper-indicating enclosures designed to prevent unauthorized access to measurement equipment used to verify international agreements.
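The IAEA system's signal processing is not described in detail here; as a generic illustration of signature-based identity verification, the sketch below scores a new weld scan against a stored reference trace with peak normalized cross-correlation. The simulated traces and the threshold interpretation are assumptions.

```python
# Sketch: scoring container identity by comparing an eddy-current weld scan
# against a stored reference trace with normalized cross-correlation.
# The real system's signal processing is not public; this is generic.
import numpy as np

def identity_score(reference: np.ndarray, scan: np.ndarray) -> float:
    """Peak normalized cross-correlation in [-1, 1]; ~1 means same weld."""
    r = (reference - reference.mean()) / reference.std()
    s = (scan - scan.mean()) / scan.std()
    xcorr = np.correlate(r, s, mode="full") / len(r)
    return float(xcorr.max())

rng = np.random.default_rng(7)
weld_signature = rng.normal(size=2000).cumsum()          # toy permeability profile
same = weld_signature + 0.05 * rng.normal(size=2000)     # re-scan, sensor noise
other = rng.normal(size=2000).cumsum()                   # different container
print("same container :", identity_score(weld_signature, same))
print("other container:", identity_score(weld_signature, other))
```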
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to alleviate the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective, because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches target the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate, since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation of the samples selected to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
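The exact MRED objective is not reproduced in the abstract; the sketch below conveys the flavor with a toy greedy, D-optimal-style batch selection in which a k-nearest-neighbor graph Laplacian injects manifold structure into the design prior. The criterion, parameters and graph construction are all illustrative assumptions, not the authors' formulation.

```python
# Toy sketch of batch-mode active sample selection with a manifold prior:
# greedily pick the sample with the largest predictive variance under a
# design matrix that includes a graph-Laplacian term. This illustrates the
# flavor of manifold regularized experimental design, not the exact MRED
# objective from the paper.
import numpy as np

def knn_laplacian(X, k=5):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros_like(d2)
    for i, row in enumerate(d2):
        for j in np.argsort(row)[1:k + 1]:   # k nearest neighbors, skip self
            W[i, j] = W[j, i] = 1.0
    return np.diag(W.sum(1)) - W             # unnormalized graph Laplacian

def select_batch(X, n_select, lam=1e-2, mu=1e-2):
    L = knn_laplacian(X)
    A = lam * np.eye(X.shape[1]) + mu * X.T @ L @ X   # manifold-informed prior
    chosen = []
    for _ in range(n_select):
        Ainv = np.linalg.inv(A)
        var = np.einsum("ij,jk,ik->i", X, Ainv, X)    # predictive variances
        var[chosen] = -np.inf                         # don't re-pick
        i = int(np.argmax(var))
        chosen.append(i)
        A += np.outer(X[i], X[i])                     # update the design
    return chosen

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
print("samples to label:", select_batch(X, n_select=8))
```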
Regularization of the Coulomb scattering problem
International Nuclear Information System (INIS)
Baryshevskii, V.G.; Feranchuk, I.D.; Kats, P.B.
2004-01-01
The exact solution of the Schroedinger equation for the Coulomb potential is used within the scope of both stationary and time-dependent scattering theories in order to find the parameters which determine the regularization of the Rutherford cross section when the scattering angle tends to zero but the distance r from the center remains finite. The angular distribution of particles scattered in the Coulomb field is studied at a rather large but finite distance r from the center. It is shown that the standard asymptotic representation of the wave functions is inapplicable when small scattering angles are considered. The unitary property of the scattering matrix is analyzed and the 'optical' theorem for this case is discussed. The total and transport cross sections for scattering of a particle by the Coulomb center prove to be finite and are calculated in analytical form. It is shown that the effects under consideration can be important for the observed characteristics of transport processes in semiconductors which are determined by electron and hole scattering by the field of charged impurity centers.
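For reference, the divergence being regularized is visible in the standard Rutherford formula (a textbook result, not taken from the paper): the differential cross section blows up as the scattering angle tends to zero, so the total and transport cross sections computed from it diverge unless the small-angle behavior is cut off.

```latex
\frac{d\sigma}{d\Omega}
  = \left(\frac{Z_1 Z_2 e^2}{4E}\right)^{2}\frac{1}{\sin^{4}(\theta/2)}
  \;\longrightarrow\; \infty
  \quad\text{as } \theta \to 0,
\qquad
\sigma_{\mathrm{tot}} = \int \frac{d\sigma}{d\Omega}\,d\Omega \;\to\; \infty .
```

The paper's point is that at a finite observation distance r this divergence is regularized, yielding finite total and transport cross sections.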
Color correction optimization with hue regularization
Zhang, Heng; Liu, Huaping; Quan, Shuxue
2011-01-01
Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
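As a hedged simplification of the idea (the paper's hue-regularization term is not reproduced in the abstract), the sketch below fits the 3x3 correction matrix by weighted least squares, upweighting hypothetical memory-color patches so their reproduction dominates the fit; the patch indices and weights are assumptions.

```python
# Minimal sketch: fit a 3x3 color correction matrix by weighted least squares,
# upweighting memory-color patches (skin, grass, sky) as a crude stand-in for
# the hue-regularization penalty described in the paper.
import numpy as np

rng = np.random.default_rng(0)
device_rgb = rng.random((24, 3))                   # e.g. a 24-patch chart
true_M = np.array([[1.6, -0.4, -0.2],
                   [-0.3, 1.5, -0.2],
                   [-0.1, -0.5, 1.6]])
target_rgb = device_rgb @ true_M.T + 0.01 * rng.normal(size=(24, 3))

weights = np.ones(24)
memory_patches = [0, 5, 11]                        # hypothetical skin/grass/sky rows
weights[memory_patches] = 10.0                     # hue fidelity matters most here

# Weighted least squares: M = argmin sum_i w_i ||M d_i - t_i||^2
Wsqrt = np.sqrt(weights)[:, None]
M, *_ = np.linalg.lstsq(Wsqrt * device_rgb, Wsqrt * target_rgb, rcond=None)
M = M.T

err = np.linalg.norm(device_rgb @ M.T - target_rgb, axis=1)
print("mean error (memory colors):", err[memory_patches].mean())
print("mean error (all patches)  :", err.mean())
```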
Wave dynamics of regular and chaotic rays
International Nuclear Information System (INIS)
McDonald, S.W.
1983-09-01
In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space
Regularities and irregularities in order flow data
Theissen, Martin; Krause, Sebastian M.; Guhr, Thomas
2017-11-01
We identify and analyze statistical regularities and irregularities in the recent order flow of different NASDAQ stocks, focusing on the positions where orders are placed in the order book. This includes limit orders being placed outside of the spread, inside the spread and (effective) market orders. Based on the pairwise comparison of the order flow of different stocks, we perform a clustering of stocks into groups with similar behavior. This is useful to assess systemic aspects of stock price dynamics. We find that limit order placement inside the spread is strongly determined by the dynamics of the spread size. Most orders, however, arrive outside of the spread. While for some stocks order placement on or next to the quotes is dominating, deeper price levels are more important for other stocks. As market orders are usually adjusted to the quote volume, the impact of market orders depends on the order book structure, which we find to be quite diverse among the analyzed stocks as a result of the way limit order placement takes place.
Library search with regular reflectance IR spectra
International Nuclear Information System (INIS)
Staat, H.; Korte, E.H.; Lampen, P.
1989-01-01
In situ characterisation of coatings and other surface layers is generally favourable, and a prerequisite for precious items such as art objects. In infrared spectroscopy only reflection techniques are applicable here. However, for attenuated total reflection (ATR) it is difficult to obtain the necessary optical contact between the crystal and the sample when the latter is not perfectly plane or flexible. The measurement of diffuse reflectance demands a scattering sample, and usually the reflectance is very poor. Therefore in most cases one is left with regular reflectance. Such spectra consist of dispersion-like features instead of bands, impeding their interpretation in the way the analyst is used to. Furthermore, for computer searches in common spectral libraries compiled from transmittance or absorbance spectra, a transformation of the reflectance spectra is needed. The correct conversion is based on the Kramers-Kronig transformation. This somewhat time-consuming procedure can be sped up by using appropriate approximations. A coarser conversion may be obtained from the first derivative of the reflectance spectrum, which resembles the second derivative of a transmittance spectrum. The resulting distorted spectra can still be used successfully for searches in peak table libraries. Experiences with both transformations are presented. (author)
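A minimal numerical sketch of the Kramers-Kronig route from a regular reflectance spectrum to optical constants follows: a discrete Hilbert transform approximates the principal-value integral, and the normal-incidence Fresnel relation is inverted. Band-edge truncation makes the result only indicative, and sign/normalization conventions vary between implementations; treat this as qualitative.

```python
# Sketch: approximate Kramers-Kronig phase retrieval from a regular
# (specular) reflectance spectrum, then invert the normal-incidence Fresnel
# relation r = (1 - n~)/(1 + n~) to recover optical constants n and k.
# A discrete Hilbert transform stands in for the KK principal-value integral;
# real implementations must handle band edges and extrapolation carefully.
import numpy as np
from scipy.signal import hilbert

def reflectance_to_nk(R: np.ndarray):
    ln_r = 0.5 * np.log(np.clip(R, 1e-12, None))   # ln|r| = ln sqrt(R)
    phase = -np.imag(hilbert(ln_r))                # KK-conjugate phase (approx.)
    r = np.sqrt(R) * np.exp(1j * phase)
    n_complex = (1 - r) / (1 + r)                  # Fresnel inversion
    return n_complex.real, n_complex.imag

# Synthetic test: a single Lorentz oscillator, so the true n, k are known.
w = np.linspace(0.2, 4.0, 4096)
eps = 1.0 + 1.0 / (1.2 ** 2 - w ** 2 - 1j * 0.05 * w)   # dielectric function
nk_true = np.sqrt(eps)
R = np.abs((1 - nk_true) / (1 + nk_true)) ** 2
n_est, k_est = reflectance_to_nk(R)
i = len(w) // 2
print(f"at w={w[i]:.2f}: n_true={nk_true.real[i]:.3f} n_est={n_est[i]:.3f}")
```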
Regularities of praseodymium oxide dissolution in acids
International Nuclear Information System (INIS)
Savin, V.D.; Elyutin, A.V.; Mikhajlova, N.P.; Eremenko, Z.V.; Opolchenova, N.L.
1989-01-01
The regularities of the interaction of Pr2O3, Pr2O5 and Pr(OH)3 with inorganic acids are studied. The pH of the solution and the oxidation-reduction potential, recorded at 20±1 deg C, are the working parameters of the study. It is found that the amount of each oxide dissolved increases along the series of acids nitric, hydrochloric and sulfuric; for hydrochloric and sulfuric acid it also increases along the series of oxides Pr2O3, Pr2O5 and Pr(OH)3. It is noted that Pr2O5 has a high oxidation-reduction potential with a positive sign over the whole dissolution range. Pr(OH)3 shows a low positive redox potential during dissolution, and in the case of Pr2O3 the redox potential is negative. Schemes of the dissolution processes, which do not agree with classical assumptions, are presented.
Regular expressions compiler and some applications
International Nuclear Information System (INIS)
Saldana A, H.
1978-01-01
We deal with the high-level programming language of a Regular Expressions Compiler (REC). The first chapter is an introduction describing the history of REC's development and the problems related to its numerous applications. The syntactic and semantic rules as well as the language features are discussed just after the introduction. As example applications, one adaptation is given for solving numerical problems and another for data manipulation. The last chapter is an exposition of ideas and techniques of compiler construction. Examples of the adaptation to numerical problems show applications in education, vector analysis, quantum mechanics, physics, mathematics and other sciences. The rudiments of an operating system for a minicomputer are the example of the adaptation to symbolic data manipulation. REC is a programming language that could be applied to solve problems in almost any human activity. Handling of computer graphics, control equipment, research on languages, microprocessors and general research are some of the fields in which this programming language can be applied and developed. (author)
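REC itself is a historical language and its syntax is not reproduced here; as a didactic stand-in for the regular-expression machinery it compiles, the sketch below is a tiny Pike-style recursive matcher supporting literals, '.' and '*'.

```python
# Minimal illustration of the regular-expression machinery REC builds on:
# a Pike-style recursive matcher supporting literals, '.', and '*'.
# This is a didactic sketch, not the REC language described in the report.
def match(pattern: str, text: str) -> bool:
    """Anchored match: does pattern match at the start of text?"""
    if pattern == "":
        return True
    if len(pattern) >= 2 and pattern[1] == "*":
        # zero or more of pattern[0]: try every possible consumption length
        i = 0
        while True:
            if match(pattern[2:], text[i:]):
                return True
            if i < len(text) and pattern[0] in (text[i], "."):
                i += 1
            else:
                return False
    if text and pattern[0] in (text[0], "."):
        return match(pattern[1:], text[1:])
    return False

assert match("ab*c", "abbbc")
assert match("a.c", "axc")
assert not match("ab*c", "adc")
print("all matches behave as expected")
```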
Quantum implications of a scale invariant regularization
Ghilencea, D. M.
2018-04-01
We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at the three-loop level while keeping this symmetry manifest. Spontaneous scale symmetry breaking is transmitted at the quantum level to the visible sector (of φ) by the associated Goldstone mode (the dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible (φ) sectors are classically decoupled in d = 4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ ε, dictated by the scale invariance of the action in d = 4 − 2ε. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators φ^(2n+4)/σ^(2n). These are comparable in size to "standard" loop corrections and are important for values of φ close to ⟨σ⟩. For n = 1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ = constant).
Regularities development of entrepreneurial structures in regions
Directory of Open Access Journals (Sweden)
Julia Semenovna Pinkovetskaya
2012-12-01
We consider regularities and tendencies for three types of entrepreneurial structures: small enterprises, medium enterprises and individual entrepreneurs. The aim of the research was to confirm that indicators of aggregates of entrepreneurial structures can be described using normal distribution functions. We present the author's methodological approach and the results of constructing density distribution functions for the main indicators of various objects: the Russian Federation, its regions, and aggregates of entrepreneurial structures specialized in certain forms of economic activity. All the developed functions, as shown by logical and statistical analysis, are of high quality and approximate the original data well. In general, the proposed methodological approach is versatile and can be used in further studies of aggregates of entrepreneurial structures. The results can be applied to a wide range of problems, such as justifying the need for personnel and financial resources at the federal, regional and municipal levels, and forming plans and forecasts for the development of entrepreneurship and the improvement of this sector of the economy.
Business rescue decision making through verifier determinants – ask the specialists
Directory of Open Access Journals (Sweden)
Marius Pretorius
2013-11-01
Orientation: Business rescue has become a critical part of business strategy decision making, especially during economic downturns and recessions. Past legislation has generally supported creditor-friendly regimes, and that mind-set still applies, which increases the difficulty of such turnarounds. There are many questions and critical issues faced by those involved in rescue. Despite extensive theory in the literature on failure, there is a void regarding practical verifiers of the signs and causes of venture decline, as specialists are not forthcoming about what they regard as their "competitive advantage". Research purpose: This article introduces the concept and role of "verifier determinants" of early warning signs, as a tool to confirm the causes of decline in order to direct rescue strategies and, most importantly, to reduce the time between first observation and implementation of the rescue. Motivation for the study: Knowing how specialist practitioners confirm causes of business decline could assist in deciding on rescue strategies earlier than is possible with traditional due diligence, which is time-consuming. Reducing time is a crucial element of a successful rescue. Research design and approach: The researchers interviewed specialist practitioners with extensive experience in rescue and turnaround. An experimental design was used to ensure that the specialists evaluated the same real cases, from which their experiences and decision bases were extracted. Main findings: The specialists confirmed the use of verifier determinants and identified the determinants they personally use to confirm causes of decline. These verifier determinants were classified into five categories, namely management, finance, strategic, banking, and operations and marketing of the ventures under investigation. The verifier determinants and their use often depend heavily on subconscious (non-factual) information based on previous experiences
International Nuclear Information System (INIS)
Ogle, Stephen M; Davis, Kenneth; Lauvaux, Thomas; Miles, Natasha L; Richardson, Scott; Schuh, Andrew; Cooley, Dan; Breidt, F Jay; West, Tristram O; Heath, Linda S; Smith, James E; McCarty, Jessica L; Gurney, Kevin R; Tans, Pieter; Denning, A Scott
2015-01-01
Verifying national greenhouse gas (GHG) emissions inventories is a critical step to ensure that reported emissions data to the United Nations Framework Convention on Climate Change (UNFCCC) are accurate and representative of a country’s contribution to GHG concentrations in the atmosphere. Furthermore, verifying biogenic fluxes provides a check on estimated emissions associated with managing lands for carbon sequestration and other activities, which often have large uncertainties. We report here on the challenges and results associated with a case study using atmospheric measurements of CO2 concentrations and inverse modeling to verify nationally-reported biogenic CO2 emissions. The biogenic CO2 emissions inventory was compiled for the Mid-Continent region of the United States based on methods and data used by the US government for reporting to the UNFCCC, along with additional sources and sinks to produce a full carbon balance. The biogenic emissions inventory produced an estimated flux of −408 ± 136 Tg CO2 for the entire study region, which was not statistically different from the biogenic flux of −478 ± 146 Tg CO2 that was estimated using the atmospheric CO2 concentration data. At sub-regional scales, the spatial density of atmospheric observations did not appear sufficient to verify emissions in general. However, a difference between the inventory and inversion results was found in one isolated area of West-central Wisconsin. This part of the region is dominated by forestlands, suggesting that further investigation may be warranted into the forest C stock or harvested wood product data from this portion of the study area. The results suggest that observations of atmospheric CO2 concentration data and inverse modeling could be used to verify biogenic emissions, and provide more confidence in biogenic GHG emissions reporting to the UNFCCC. (letter)
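The abstract does not spell out the estimator; for orientation, the standard Bayesian synthesis-inversion update used in atmospheric flux studies of this kind has the form below (notation mine, not the paper's):

```latex
\hat{s} \;=\; s_p \;+\; S_p H^{\mathsf{T}}
  \left( H S_p H^{\mathsf{T}} + R \right)^{-1} \left( z - H s_p \right)
```

Here s_p is the prior flux estimate (the inventory), S_p its error covariance, H the atmospheric transport operator mapping fluxes to CO2 concentrations, z the observed concentrations, and R the model-data mismatch covariance; the inventory is "verified" when the update leaves ŝ statistically indistinguishable from s_p.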
Musical Scales in Tone Sequences Improve Temporal Accuracy.
Li, Min S; Di Luca, Massimiliano
2018-01-01
Predicting the time of stimulus onset is a key component in perception. Previous investigations of perceived timing have focused on the effect of stimulus properties such as rhythm and temporal irregularity, but the influence of non-temporal properties and their role in predicting stimulus timing has not been exhaustively considered. The present study aims to understand how a non-temporal pattern in a sequence of regularly timed stimuli could improve or bias the detection of temporal deviations. We presented interspersed sequences of 3, 4, 5, and 6 auditory tones where only the timing of the last stimulus could slightly deviate from isochrony. Participants reported whether the last tone was 'earlier' or 'later' relative to the expected regular timing. In two conditions, the tones composing the sequence were either organized into musical scales or they were random tones. In one experiment, all sequences ended with the same tone; in the other experiment, each sequence ended with a different tone. Results indicate higher discriminability of anisochrony with musical scales and with longer sequences, irrespective of the knowledge of the final tone. Such an outcome suggests that the predictability of non-temporal properties, as enabled by the musical scale pattern, can be a factor in determining the sensitivity of time judgments.
Holme, Petter
2017-01-01
This book covers recent developments in epidemic process models and related data on temporally varying networks. It is widely recognized that contact networks are indispensable for describing, understanding, and intervening to stop the spread of infectious diseases in human and animal populations; “network epidemiology” is an umbrella term to describe this research field. More recently, contact networks have been recognized as being highly dynamic. This observation, also supported by an increasing amount of new data, has led to research on temporal networks, a rapidly growing area. Changes in network structure are often informed by epidemic (or other) dynamics, in which case they are referred to as adaptive networks. This volume gathers contributions by prominent authors working in temporal and adaptive network epidemiology, a field essential to understanding infectious diseases in real society.
Temporal Concurrent Constraint Programming
DEFF Research Database (Denmark)
Valencia, Frank Dan
Concurrent constraint programming (ccp) is a formalism for concurrency in which agents interact with one another by telling (adding) and asking (reading) information in a shared medium. Temporal ccp extends ccp by allowing agents to be constrained by time conditions. This dissertation studies... temporal ccp by developing a process calculus called ntcc. The ntcc calculus generalizes the tcc model, the latter being a temporal ccp model for deterministic and synchronous timed reactive systems. The calculus is built upon few basic ideas but it captures several aspects of timed systems. As tcc, ntcc... structures, robotic devices, multi-agent systems and music applications. The calculus is provided with a denotational semantics that captures the reactive computations of processes in the presence of arbitrary environments. The denotation is proven to be fully abstract for a substantial fragment...
Recognition memory is improved by a structured temporal framework during encoding
Directory of Open Access Journals (Sweden)
Sathesan eThavabalasingam
2016-01-01
In order to function optimally within our environment, we continuously extract temporal patterns from our experiences and formulate expectations that facilitate adaptive behavior. Given that our memories are embedded within spatiotemporal contexts, an intriguing possibility is that mnemonic processes are sensitive to the temporal structure of events. To test this hypothesis, in a series of behavioral experiments we manipulated the regularity of interval durations at encoding to create temporally structured and unstructured frameworks. Our findings revealed enhanced recognition memory (d′) for stimuli that were explicitly encoded within a temporally structured versus unstructured framework. Encoding information within a temporally structured framework was also associated with a reduction in the negative effects of proactive interference and was linked to greater recollective recognition memory. Furthermore, rhythmic temporal structure was found to enhance recognition memory for incidentally encoded information. Collectively, these results support the possibility that we possess a greater capacity to learn and subsequently remember temporally structured information.
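For readers unfamiliar with the reported statistic: d′ is the standard signal-detection measure of recognition sensitivity (a textbook definition, not specific to this study),

```latex
d' \;=\; z(\mathrm{HR}) \;-\; z(\mathrm{FAR})
```

where HR and FAR are the hit and false-alarm rates and z(·) is the inverse of the standard normal cumulative distribution function.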
Chico Abad, Virginia
2015-01-01
Temporary employment agencies have gained increasing relevance owing to the structure of society and of the economy. The entry into force of Law 14/1994, which regulates temporary employment agencies, incorporated into the Spanish legal system a type of company whose activity had already spread in other European countries. The general idea revolves around the flexibility of a new economic and organizational framework, and demands from companies a capa...
International Nuclear Information System (INIS)
Silver, A.J.; Cross, D.T.; Friedman, D.P.; Bello, J.A.; Hilal, S.K.
1989-01-01
To better define the MR appearance of hippocampal sclerosis, the authors reviewed over 500 coronal MR images of the temporal lobes. Many cysts were noted, which analysis showed to be of choroid-fissure (arachnoid) origin. Their association with seizures was low. A few nontumorous, static, medial temporal lesions, noted on T2-weighted coronal images, were poorly visualized on T1-weighted images and did not enhance with gadolinium. Their margins were irregular, they involved the hippocampus, and they were often associated with focal atrophy. The lesions usually were associated with seizure disorders and specific electroencephalographic changes, and the authors believe they represented hippocampal sclerosis.
TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY
International Nuclear Information System (INIS)
Crotts, Arlin P. S.
2009-01-01
Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if determining factors involve humans, and not reflecting phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ∼50% of reports originate from near Aristarchus, ∼16% from Plato, ∼6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). TLP count consistency for these features indicates that ∼80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.
Elementary Particle Spectroscopy in Regular Solid Rewrite
International Nuclear Information System (INIS)
Trell, Erik
2008-01-01
The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it "is the likely keystone of a fundamental computational foundation" also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)xO(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each
Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin
2018-02-01
In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can account for only very limited classes of images. A more reasonable sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified the proposed reconstruction method, showing promising capabilities over conventional regularization.
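The paper's block-matching prior groups similar patches before sparsifying; as a schematic stand-in, the loop below solves the same shape of problem (transform-domain L1 subject to data fidelity and positivity) with a proximal-gradient iteration, using a random matrix for the CT projector and an orthonormal DCT for the sparsifying transform. All of these substitutions are assumptions for illustration only.

```python
# Schematic of the constrained program in the paper: minimize an l1 norm of
# transform coefficients subject to data fidelity and positivity, solved with
# a proximal-gradient loop. A random matrix stands in for the CT projector,
# and a DCT stands in for the block-matching sparsifying transform.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n, m = 256, 128                                     # image size, #measurements
x_true = np.zeros(n); x_true[40:90] = 1.0; x_true[150:170] = 0.5
A = rng.normal(size=(m, n)) / np.sqrt(m)            # stand-in projection matrix
b = A @ x_true

x, lam = np.zeros(n), 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2              # safe gradient step size
for _ in range(500):
    x = x - step * A.T @ (A @ x - b)                # data-fidelity gradient step
    c = dct(x, norm="ortho")                        # to transform domain
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0)   # soft threshold
    x = np.maximum(idct(c, norm="ortho"), 0)        # back + positivity projection
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```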
Oxidative stress and inflammation: liver responses and adaptations to acute and regular exercise.
Pillon Barcelos, Rômulo; Freire Royes, Luiz Fernando; Gonzalez-Gallego, Javier; Bresciani, Guilherme
2017-02-01
The liver is remarkably important for exercise outcomes due to its contribution to detoxification, synthesis and release of biomolecules, and energy supply to the exercising muscles. Recently, the liver has also been shown to play an important role in redox status and inflammatory modulation during exercise. However, while several studies have described the adaptations of skeletal muscles to acute and chronic exercise, hepatic changes are still scarcely investigated. Indeed, acute intense exercise challenges the liver with increased reactive oxygen species (ROS) and the onset of inflammation, whereas regular training induces hepatic antioxidant and anti-inflammatory improvements. Acute and regular exercise protocols in combination with antioxidant and anti-inflammatory supplementation have also been tested to verify hepatic adaptations to exercise. Although positive results have been reported in some acute models, several studies have shown an increased exercise-related stress upon the liver. A similar trend has been observed during training: while synergistic effects of training and antioxidant/anti-inflammatory supplementation have occasionally been found, other studies reported a blunting of relevant adaptations to exercise, following the patterns described in skeletal muscles. This review discusses current data regarding liver responses and adaptations to acute and regular exercise protocols alone or combined with antioxidant and anti-inflammatory supplementation. Understanding the mechanisms behind these modulations is of interest for both exercise-related health and performance outcomes.
Structural controllability and controlling centrality of temporal networks.
Pan, Yujian; Li, Xiang
2014-01-01
Temporal networks are networks in which nodes and interactions may appear and disappear at various time scales. Given the evidence of the ubiquity of temporal networks in our economy, nature and society, it is urgent and significant to focus on their structural controllability and the corresponding characteristics, which is still an untouched topic. We develop graphic tools to study structural controllability and its characteristics, identifying the intrinsic mechanism behind the ability of individuals to control a dynamic and large-scale temporal network. Classifying the temporal trees of a temporal network into different types, we give (both upper and lower) analytical bounds on the controlling centrality, which are verified by numerical simulations of both artificial and empirical temporal networks. We find that the positive relationship between aggregated degree and controlling centrality, as well as the scale-free distribution of nodes' controlling centrality, are virtually independent of the time scale and the type of dataset, indicating the inherent robustness and heterogeneity of the controlling centrality of nodes within temporal networks.
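The paper's graphic tools for temporal trees are not reproduced here; the sketch below shows the static-network computation such temporal generalizations build on, namely counting driver nodes from a maximum matching in the bipartite out-copy/in-copy representation (the Liu-Slotine-Barabasi construction). networkx is assumed available.

```python
# Sketch of the static structural-controllability computation that temporal
# generalizations build on: the minimum number of driver nodes of a directed
# network equals max(N - |maximum matching|, 1), with the matching taken in
# the bipartite out-copy/in-copy representation. Requires networkx.
import networkx as nx

def n_driver_nodes(edges, n_nodes):
    B = nx.Graph()
    outs = [("out", u) for u in range(n_nodes)]
    B.add_nodes_from(outs, bipartite=0)
    B.add_nodes_from([("in", v) for v in range(n_nodes)], bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in edges)
    matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=outs)
    matched_edges = len(matching) // 2              # dict lists both directions
    return max(n_nodes - matched_edges, 1)

# A directed chain is controllable from a single driver node:
print(n_driver_nodes([(0, 1), (1, 2), (2, 3)], 4))  # -> 1
# An outward star requires driving nearly every leaf independently:
print(n_driver_nodes([(0, 1), (0, 2), (0, 3)], 4))  # -> 3
```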
Losing the beat: deficits in temporal coordination
Palmer, Caroline; Lidji, Pascale; Peretz, Isabelle
2014-01-01
Tapping or clapping to an auditory beat, an easy task for most individuals, reveals precise temporal synchronization with auditory patterns such as music, even in the presence of temporal fluctuations. Most models of beat-tracking rely on the theoretical concept of pulse: a perceived regular beat generated by an internal oscillation that forms the foundation of entrainment abilities. Although tapping to the beat is a natural sensorimotor activity for most individuals, not everyone can track an auditory beat. Recently, the case of Mathieu was documented (Phillips-Silver et al. 2011 Neuropsychologia 49, 961–969. (doi:10.1016/j.neuropsychologia.2011.02.002)). Mathieu presented himself as having difficulty following a beat and exhibited synchronization failures. We examined beat-tracking in normal control participants, Mathieu, and a second beat-deaf individual, who tapped with an auditory metronome in which unpredictable perturbations were introduced to disrupt entrainment. Both beat-deaf cases exhibited failures in error correction in response to the perturbation task while exhibiting normal spontaneous motor tempi (in the absence of an auditory stimulus), supporting a deficit specific to perception–action coupling. A damped harmonic oscillator model was applied to the temporal adaptation responses; the model's parameters of relaxation time and endogenous frequency accounted for differences between the beat-deaf cases as well as the control group individuals. PMID:25385783
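The damped harmonic oscillator model mentioned here has, in its generic form (my notation, not necessarily the authors' parameterization),

```latex
\ddot{x}(t) \;+\; \frac{2}{\tau}\,\dot{x}(t) \;+\; \omega_0^{2}\, x(t) \;=\; 0
```

where x(t) is the tap asynchrony, the relaxation time τ governs how quickly asynchronies decay after a perturbation, and the endogenous frequency ω₀ reflects the internal oscillator's preferred rate; these are the two fitted parameters that separated the beat-deaf cases from the controls.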
Efficient Verifiable Range and Closest Point Queries in Zero-Knowledge
Directory of Open Access Journals (Sweden)
Ghosh Esha
2016-10-01
We present an efficient method for answering one-dimensional range and closest-point queries in a verifiable and privacy-preserving manner. We consider a model where a data owner outsources a dataset of key-value pairs to a server, which answers range and closest-point queries issued by a client and provides proofs of the answers. The client verifies the correctness of the answers while learning nothing about the dataset besides the answers to the current and previous queries. Our work yields for the first time a zero-knowledge privacy assurance for authenticated range and closest-point queries. Previous work leaked the size of the dataset and used an inefficient proof protocol. Our construction is based on hierarchical identity-based encryption. We prove its security and analyze its efficiency both theoretically and with experiments on synthetic and real data (Enron email and Boston taxi datasets).
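The paper's construction uses hierarchical identity-based encryption to additionally achieve zero-knowledge; the toy below illustrates only the weaker verifiability/completeness idea with commitments to consecutive sorted keys, and deliberately leaks structure a real ZK scheme would hide. All formats are stand-in assumptions.

```python
# Toy sketch of *verifiable* (not zero-knowledge!) range queries: the owner
# commits to consecutive sorted keys, and the returned chain of pairs proves
# no key in [lo, hi] was omitted. The paper achieves the much stronger
# zero-knowledge property via hierarchical identity-based encryption.
import hashlib

def H(a, b):
    return hashlib.sha256(f"{a}|{b}".encode()).hexdigest()

# --- owner: commit to consecutive key pairs (with +/- infinity sentinels) ---
keys = sorted([5, 12, 30, 44, 70])
pairs = list(zip([float("-inf")] + keys, keys + [float("inf")]))
commitments = {H(a, b) for a, b in pairs}          # published digest set

# --- server: answer a range query with a proof (the covering pair chain) ---
def answer(lo, hi):
    return [(a, b) for a, b in pairs if b >= lo and a <= hi]

# --- client: verify commitments, chaining, and boundary coverage -----------
def verify(lo, hi, chain):
    assert all(H(a, b) in commitments for a, b in chain), "forged pair"
    assert all(chain[i][1] == chain[i + 1][0] for i in range(len(chain) - 1)), "gap"
    assert chain[0][0] < lo and chain[-1][1] > hi, "boundaries not covered"
    return sorted({k for a, b in chain for k in (a, b) if lo <= k <= hi})

chain = answer(10, 50)
print("verified answer to [10, 50]:", verify(10, 50, chain))
```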
A Secure and Verifiable Outsourced Access Control Scheme in Fog-Cloud Computing.
Fan, Kai; Wang, Junxiong; Wang, Xin; Li, Hui; Yang, Yintang
2017-07-24
With the rapid development of big data and the Internet of Things (IoT), the number of networking devices and the volume of data are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network, can effectively solve the bottleneck problems of data transmission and data storage. However, security and privacy challenges are also arising in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices, and the computation results can be verified by using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method. Finally, analysis and simulation results show that our scheme is both secure and highly efficient. PMID:28737733
DEFF Research Database (Denmark)
Toman, David; Bowman, Ivan Thomas
2003-01-01
Recent research in the area of temporal databases has proposed a number of query languages that vary in their expressive power and the semantics they provide to users. These query languages represent a spectrum of solutions to the tension between clean semantics and efficient evaluation. Often, t...
Temporal Concurrent Constraint Programming
DEFF Research Database (Denmark)
Nielsen, Mogens; Palamidessi, Catuscia; Valencia, Frank Dan
2002-01-01
The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...
DEFF Research Database (Denmark)
Schjøth, Lars; Frisvad, Jeppe Revall; Erleben, Kenny
2010-01-01
The finite frame rate used in computer-animated films also causes adverse temporal aliasing effects. The most noticeable of these is a stroboscopic effect, seen as intermittent movement of fast-moving illumination. This effect can be mitigated using non-zero shutter times, effectively...
Temporal compressive sensing systems
Reed, Bryan W.
2017-12-12
Methods and systems for temporal compressive sensing are disclosed, where within each of one or more sensor array data acquisition periods, one or more sensor array measurement datasets comprising distinct linear combinations of time slice data are acquired, and where mathematical reconstruction allows for calculation of accurate representations of the individual time slice datasets.
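The measurement model in this patent abstract can be sketched in a few lines. Below is a minimal numpy illustration, under the simplifying assumption that the number of mixed measurements equals the number of time slices, so plain least squares inverts the mixing exactly; actual compressive systems take fewer measurements and lean on sparsity priors, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
T, P = 8, 100   # time slices per acquisition period, pixels per slice
M = 8           # mixed measurements per period (M = T here; compressive
                # operation would use M < T plus a sparsity prior)

x = rng.random((T, P))                             # ground-truth time slices
W = rng.integers(0, 2, size=(M, T)).astype(float)  # random binary mixing weights
while np.linalg.matrix_rank(W) < T:                # ensure distinct, independent combinations
    W = rng.integers(0, 2, size=(M, T)).astype(float)

y = W @ x                                      # sensor records M mixed frames, not T slices
x_hat = np.linalg.lstsq(W, y, rcond=None)[0]   # reconstruct the individual slices
print(np.max(np.abs(x - x_hat)))               # ~1e-13: exact up to round-off
```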
Directory of Open Access Journals (Sweden)
Christian Flender
2016-09-01
Full Text Available Being able to give reasons for what the world is and how it works is one of the defining characteristics of modernity. Mathematical reason and empirical observation brought science and engineering to unprecedented success. However, modernity has reached a post-state in which the instrumental view of technology needs revision with reasonable arguments and evidence, i.e. without falling back into superstition and mysticism. Instrumentally, technology bears the potential both to ease and to harm. Easing and harming cannot be controlled in the way that the initial development of technology is a controlled exercise for a specific, mostly easing, purpose. Therefore, a revised understanding of information technology is proposed, based upon mathematical concepts and intuitions as developed in quantum mechanics. Quantum mechanics offers unequaled opportunities because it raises foundational questions in a precise form. Beyond instrumentalism, it enables us to raise the question of essences, as that which remains through time what it is. The essence of information technology is acausality. The time of acausality is temporality. Temporality is not a concept or a category. It is not epistemological. As an existential, and thus more comprehensive and fundamental than a concept or a category, temporality is ontological; it does not simply have ontic properties. Rather, it exhibits general essences. Datability, significance, spannedness and openness are general essences of equiprimordial time (temporality).
Temporal logic motion planning
CSIR Research Space (South Africa)
Seotsanyana, M
2010-01-01
Full Text Available In this paper, a critical review on temporal logic motion planning is presented. The review paper aims to address the following problems: (a) In a realistic situation, the motion planning problem is carried out in real-time, in a dynamic, uncertain...
Experimental temporal quantum steering
Czech Academy of Sciences Publication Activity Database
Bartkiewicz, K.; Černoch, Antonín; Lemr, K.; Miranowicz, A.; Nori, F.
2016-01-01
Vol. 6, Nov (2016), 1-8, article no. 38076. ISSN 2045-2322 R&D Projects: GA ČR GAP205/12/0382 Institutional support: RVO:68378271 Keywords: temporal quantum steering * EPR steering Subject RIV: BH - Optics, Masers, Lasers Impact factor: 4.259, year: 2016
A Strategy for Efficiently Verifying Requirements Specifications Using Composition and Invariants
2003-09-05
Colle-sur-Loup, France, Oct. 1984. Springer-Verlag. [34] J. Ramish. Empirical studies of compositional abstraction. Technical report, Naval Research...global to modular temporal reasoning about programs. In K. R. Apt, editor, Proc. NATO Adv. Study Inst. on Logics and Models of Concurrent Systems, La
Election Verifiability: Cryptographic Definitions and an Analysis of Helios and JCJ
2015-08-06
Computer Society, 2014. To appear. [26] David Chaum. Untraceable electronic mail, return addresses, and digital pseudonyms. Communications of the ACM...24(2):84–88, 1981. [27] David Chaum. Secret-ballot receipts: True voter-verifiable elections. IEEE Security and Privacy, 2(1):38–47, 2004. [28] David Chaum, Richard Carback, Jeremy Clark, Aleksander Essex, Stefan Popoveniuc, Ronald L. Rivest, Peter Y. A. Ryan, Emily Shen, and Alan T. Sherman
Using Concept Space to Verify Hyponymy in Building a Hyponymy Lexicon
Liu, Lei; Zhang, Sen; Diao, Lu Hong; Yan, Shu Ying; Cao, Cun Gen
Verification of hyponymy relations is a basic problem in knowledge acquisition. We present a method of hyponymy verification based on concept space. Firstly, we give the definition of the concept space of a group of candidate hyponymy relations. Secondly, we analyze the concept space and define a set of hyponymy features based on the space structure. Then we use these features to verify the candidate hyponymy relations. Experimental results show that the method can provide adequate verification of hyponymy.
Aguilar, Michael L.; Bonanne, Kevin H.; Favretto, Jeffrey A.; Jackson, Maddalena M.; Jones, Stephanie L.; Mackey, Ryan M.; Sarrel, Marc A.; Simpson, Kimberly A.
2014-01-01
The Exploration Systems Development (ESD) Standing Review Board (SRB) requested the NASA Engineering and Safety Center (NESC) conduct an independent review of the plan developed by Ground Systems Development and Operations (GSDO) for identifying models and emulators to create a tool(s) to verify their command and control software. The NESC was requested to identify any issues or weaknesses in the GSDO plan. This document contains the outcome of the NESC review.
A method of verifying period signals based on a data acquisition card
International Nuclear Information System (INIS)
Zeng Shaoli
2005-01-01
This paper introduces a method of verifying the index voltage of a Period Signal Generator using a data acquisition card; its error is less than 0.5%. A corresponding Win32 program, which uses a self-developed VxD to control the data acquisition card's direct I/O and uses multithreading to obtain the best time-scale precision, was developed on the Windows platform. The program collects index voltage data in real time and automatically measures the period. (authors)
A Benchmark for Comparing Different Approaches for Specifying and Verifying Real-Time Systems
1993-01-01
To be considered correct or useful, real-time systems must deliver results within specified time intervals, either without exception or with high...probability. Recently, a large number of formal methods have been invented for specifying and verifying real-time systems. It has been suggested that...these formal methods need to be tested out on actual real-time systems. Such testing will allow the scalability of the methods to be assessed and also
Communication, Technology, Temporality
Directory of Open Access Journals (Sweden)
Mark A. Martinez
2012-08-01
Full Text Available This paper proposes a media studies that foregrounds technological objects as communicative and historical agents. Specifically, I take the digital computer as a powerful catalyst of crises in communication theories and certain key features of modernity. Finally, the computer is the motor of “New Media”, which is at once a set of technologies, a historical epoch, and a field of knowledge. As such, the computer shapes “the new” and “the future” as History pushes its origins further into the past and its convergent quality pushes its future as a predominant medium. As the treatments of information and interface suggest, communication theories observe computers, and technologies generally, for the mediated languages they either afford or foreclose to us. My project describes the figures of information and interface for the different ways they can be thought of as aspects of communication. I treat information not as semantic meaning, formal or discursive language, but rather as a physical organism. Similarly, an interface is not a relationship between a screen and a human visual intelligence, but is instead a reciprocal, affective and physical process of contact. I illustrate that historically there have been conceptions of information and interface complementary to mine, fleeting as they have been in the face of a dominant temporality of mediation. I begin with a theoretically informed approach to media history, and extend it to a new theory of communication. In doing so I discuss a model of time common to popular, scientific, and critical conceptions of media technologies, especially in theories of computer technology. This is a predominant model with particular rules of temporal change and causality for thinking about mediation, and it limits the conditions of possibility for knowledge production about communication. I suggest a new model of time as integral to any event of observation and analysis, and that human mediation does not exhaust the
Biochemically verified smoking cessation and vaping beliefs among vape store customers.
Tackett, Alayna P; Lechner, William V; Meier, Ellen; Grant, DeMond M; Driskill, Leslie M; Tahirkheli, Noor N; Wagener, Theodore L
2015-05-01
To evaluate biochemically verified smoking status and electronic nicotine delivery systems (ENDS) use behaviors and beliefs among a sample of customers from vapor stores (stores specializing in ENDS). A cross-sectional survey of 215 adult vapor store customers at four retail locations in the Midwestern United States; a subset of participants (n = 181) also completed exhaled carbon monoxide (CO) testing to verify smoking status. Outcomes evaluated included ENDS preferences, harm beliefs, use behaviors, smoking history and current biochemically verified smoking status. Most customers reported starting ENDS as a means of smoking cessation (86%), using newer-generation devices (89%), vaping non-tobacco/non-menthol flavors (72%) and using e-liquid with nicotine strengths of ≤20 mg/ml (72%). There was a high rate of switching (91.4%) to newer-generation ENDS among those who started with a first-generation product. Exhaled CO readings confirmed that 66% of the tested sample had quit smoking. Among those who continued to smoke, mean cigarettes per day decreased from 22.1 to 7.5. Among customers in the United States who use electronic nicotine delivery devices to stop smoking, vaping longer, using newer-generation devices and using non-tobacco and non-menthol flavored e-liquid appear to be associated with higher rates of smoking cessation. © 2015 Society for the Study of Addiction.
A Novel Simple Phantom for Verifying the Dose of Radiation Therapy
Directory of Open Access Journals (Sweden)
J. H. Lee
2015-01-01
Full Text Available A standard protocol of dosimetric measurements is used by the organizations responsible for verifying that the doses delivered in radiation-therapy institutions are within authorized limits. This study evaluated a self-designed simple auditing phantom for use in verifying the dose of radiation therapy; the phantom design, dose audit system, and clinical tests are described. Thermoluminescent dosimeters (TLDs) were used as postal dosimeters, and mailable phantoms were produced for use in postal audits. Correction factors are important for converting TLD readout values from phantoms into the absorbed dose in water. The phantom scatter correction factor was used to quantify the difference in the scattered dose between a solid water phantom and homemade phantoms; its value ranged from 1.084 to 1.031. The energy-dependence correction factor was used to compare the TLD readout of the unit dose irradiated by audit beam energies with 60Co in the solid water phantom; its value was 0.99 to 1.01. The setup-condition factor was used to correct for differences in dose-output calibration conditions. Clinical tests of the device calibrating the dose output revealed that the dose deviation was within 3%. Therefore, our homemade phantoms and dosimetric system can be applied for accurately verifying the doses applied in radiation-therapy institutions.
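The three correction factors named above combine into an absorbed dose; a small worked example follows. The multiplicative combination and all numeric inputs are illustrative assumptions, with only the quoted factor ranges taken from the abstract.

```python
# Illustrative only: converting a TLD readout into absorbed dose in water by
# multiplying the correction factors named in the abstract. The readout value
# is hypothetical; the factor ranges (1.031-1.084 and 0.99-1.01) are quoted.
tld_readout_gy = 1.95        # raw calibrated TLD value (Gy), assumed
k_phantom_scatter = 1.05     # within the quoted 1.031-1.084 range
k_energy = 1.00              # within the quoted 0.99-1.01 range
k_setup = 1.00               # setup-condition factor, assumed nominal

dose_gy = tld_readout_gy * k_phantom_scatter * k_energy * k_setup
prescribed_gy = 2.00
deviation = (dose_gy - prescribed_gy) / prescribed_gy
print(f"dose = {dose_gy:.3f} Gy, deviation = {deviation:+.1%}")  # within 3%
```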
The AutoProof Verifier: Usability by Non-Experts and on Standard Code
Directory of Open Access Journals (Sweden)
Carlo A. Furia
2015-08-01
Full Text Available Formal verification tools are often developed by experts for experts; as a result, their usability by programmers with little formal methods experience may be severely limited. In this paper, we discuss this general phenomenon with reference to AutoProof: a tool that can verify the full functional correctness of object-oriented software. In particular, we present our experiences of using AutoProof in two contrasting contexts representative of non-expert usage. First, we discuss its usability by students in a graduate course on software verification, who were tasked with verifying implementations of various sorting algorithms. Second, we evaluate its usability in verifying code developed for programming assignments of an undergraduate course. The first scenario represents usability by serious non-experts; the second represents usability on "standard code", developed without full functional verification in mind. We report our experiences and lessons learnt, from which we derive some general suggestions for furthering the development of verification tools with respect to improving their usability.
Regularization of plurisubharmonic functions with a net of good points
Li, Long
2017-01-01
The purpose of this article is to present a new regularization technique for quasi-plurisubharmonic functions on a compact Kaehler manifold. The idea is to regularize the function on local coordinate balls first, and then glue each piece together. Therefore, all the higher order terms in the complex Hessian of this regularization vanish at the center of each coordinate ball, and all the centers build a delta-net of the manifold eventually.
International Nuclear Information System (INIS)
Keller, Kai Johannes
2010-04-01
The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)
Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: conductivity changes along selected left and right vertical lines, plotted for the ground truth (GT) and for the TV and TGV reconstructions, together with reconstructed conductivity distributions from the GREIT algorithm.
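For reference, second-order TGV in the standard form of Bredies, Kunisch and Pock (notation assumed here, not taken from the paper): the gradient of u is split into a smooth vector field w and a sparse remainder, so piecewise affine reconstructions become cheap and the staircase effect of plain TV, which corresponds to forcing w = 0, is suppressed.

```latex
\mathrm{TGV}^2_\alpha(u) \;=\; \min_{w}\;
  \alpha_1 \int_\Omega \lvert \nabla u - w \rvert \, dx
  \;+\; \alpha_0 \int_\Omega \lvert \mathcal{E}(w) \rvert \, dx,
\qquad
\mathcal{E}(w) = \tfrac{1}{2}\bigl(\nabla w + \nabla w^{\mathsf{T}}\bigr).
```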
Yuan, Xiangyong; Bi, Cuihua; Huang, Xiting
2015-05-01
Out-of-synchrony experiences can easily recalibrate one's subjective simultaneity point in the direction of the experienced asynchrony. Although temporal adjustment of multiple audiovisual stimuli has recently been demonstrated to be spatially specific, perceptual grouping processes that organize separate audiovisual stimuli into distinctive "objects" may play a more important role in forming the basis for subsequent multiple temporal recalibrations. We investigated whether apparent physical differences between audiovisual pairs that make them distinct from each other can independently drive multiple concurrent temporal recalibrations regardless of spatial overlap. Experiment 1 verified that reducing the physical difference between two audiovisual pairs diminishes the multiple temporal recalibrations, by exposing observers to two utterances with opposing temporal relationships spoken by a single speaker rather than by two distinct speakers at the same location. Experiment 2 found that increasing the physical difference between two stimulus pairs can promote multiple temporal recalibrations by complicating their non-temporal dimensions (e.g., disks composed of two rather than one attribute and tones generated by multiplying two frequencies); however, these recalibration aftereffects were subtle. Experiment 3 further revealed that making the two audiovisual pairs differ in temporal structure (one transient and one gradual) was sufficient to drive concurrent temporal recalibration. These results confirm that the more audiovisual pairs differ physically, especially in temporal profile, the more likely multiple temporal perception adjustments will be content-constrained regardless of spatial overlap. These results indicate that multiple temporal recalibrations are based secondarily on the outcome of perceptual grouping processes.
Two-way regularization for MEG source reconstruction via multilevel coordinate descent
Siva Tian, Tian
2013-12-01
Magnetoencephalography (MEG) source reconstruction refers to the inverse problem of recovering the neural activity from the MEG time course measurements. A spatiotemporal two-way regularization (TWR) method was recently proposed by Tian et al. to solve this inverse problem and was shown to outperform several one-way regularization methods and spatiotemporal methods. This TWR method is a two-stage procedure that first obtains a raw estimate of the source signals and then refines the raw estimate to ensure spatial focality and temporal smoothness using spatiotemporal regularized matrix decomposition. Although proven to be effective, the performance of two-stage TWR depends on the quality of the raw estimate. In this paper we directly solve the MEG source reconstruction problem using a multivariate penalized regression where the number of variables is much larger than the number of cases. A special feature of this regression is that the regression coefficient matrix has a spatiotemporal two-way structure that naturally invites a two-way penalty. Making use of this structure, we develop a computationally efficient multilevel coordinate descent algorithm to implement the method. This new one-stage TWR method has shown its superiority to the two-stage TWR method in three simulation studies with different levels of complexity and a real-world MEG data analysis. © 2013 Wiley Periodicals, Inc., A Wiley Company.
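The one-stage formulation described here, a multivariate penalized regression whose coefficient matrix carries a two-way penalty, can be caricatured with a simpler objective: an entrywise lasso penalty for spatial focality plus squared column differences for temporal smoothness, solved by cyclic coordinate descent. This is a simplified stand-in under assumed penalties, not the authors' spatiotemporal penalty or multilevel algorithm.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the closed-form lasso coordinate update."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def two_way_cd(X, Y, lam_sparse=0.1, lam_smooth=0.5, n_iter=50):
    """Cyclic coordinate descent for
       min_B ||Y - X B||_F^2 + lam_sparse * sum_ij |B_ij|
                             + lam_smooth * sum_j ||B[:, j+1] - B[:, j]||^2.
    Rows of B index sources (sparsity -> spatial focality); columns index
    time points (squared differences -> temporal smoothness)."""
    n, p = X.shape
    T = Y.shape[1]
    B = np.zeros((p, T))
    R = Y - X @ B                          # residual, kept up to date
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for i in range(p):
            for j in range(T):
                nb = [jj for jj in (j - 1, j + 1) if 0 <= jj < T]
                r = R[:, j] + X[:, i] * B[i, j]   # partial residual w/o (i, j)
                lin = 2 * X[:, i] @ r + 2 * lam_smooth * sum(B[i, jj] for jj in nb)
                quad = 2 * col_sq[i] + 2 * lam_smooth * len(nb)
                b_new = soft(lin, lam_sparse) / quad
                R[:, j] += X[:, i] * (B[i, j] - b_new)
                B[i, j] = b_new
    return B

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 10))              # 40 sensors, 10 candidate sources
B_true = np.zeros((10, 30))
B_true[3] = np.sin(np.linspace(0, np.pi, 30))  # one smoothly active source
Y = X @ B_true + 0.1 * rng.standard_normal((40, 30))
B_hat = two_way_cd(X, Y)
print(np.argmax(np.abs(B_hat).sum(axis=1)))    # should identify source 3
```

Each coordinate update is a closed-form soft-threshold, which is what makes coordinate descent attractive when the number of variables far exceeds the number of cases.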
Efficacy of a Respiratory Training System on the Regularity of Breathing
International Nuclear Information System (INIS)
Shin, Eun Hyuk; Park, Hee Chul; Han, Young Yih; Ju, Sang Gyu; Shin, Jung Suk; Ahn, Yong Chan
2008-01-01
In order to enhance the efficiency of respiratory-gated 4-dimensional radiation therapy by making the respiratory period and amplitude more regular and stable, a respiration training system was designed and its efficacy evaluated. Materials and Methods: The experiment was designed to measure the difference in respiration regularity following the use of the training system. A total of 11 subjects (9 volunteers and 2 patients) were included in the experiments. Three different breathing signals, including free breathing (free-breathing), guided breathing that followed the training software (guided-breathing), and free breathing after the guided breathing (post guided-breathing), were consecutively recorded in each subject. The peak-to-peak (PTP) period of the breathing signal and its standard deviation (SD), the peak amplitude and its SD, and the area of one cycle of the breathing wave form and its root mean square (RMS) were measured and computed. Results: Temporal regularity was significantly improved in guided-breathing, as the SD of the breathing period was reduced (free-breathing 0.568 vs. guided-breathing 0.344, p=0.0013). The SD of the breathing period in post guided-breathing was also reduced, but the difference was not statistically significant (free-breathing 0.568 vs. post guided-breathing 0.512, p=ns). The SD of the measured amplitude was likewise reduced in guided-breathing (free-breathing 1.317 vs. guided-breathing 1.068, p=0.187), although not significantly. This indicates that the tidal volume of each breath was kept more even in guided-breathing than in free-breathing. There was no change in breathing pattern between free-breathing and guided-breathing. The average area of the breathing wave form and its RMS in post guided-breathing, however, were reduced by 7% and 5.9%, respectively. Conclusion: Guided breathing was more stable and regular than the other breathing conditions. Therefore, the developed respiratory training system was effective in improving the temporal
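The regularity measures reported above (PTP period and its SD, peak amplitude and its SD) are straightforward to compute from a sampled trace. A sketch on synthetic data follows, with scipy's find_peaks standing in for whatever peak detection the study used; sampling rate, signal, and thresholds are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 20.0                             # sampling rate (Hz), assumed
t = np.arange(0, 120, 1 / fs)         # two minutes of signal
rng = np.random.default_rng(2)
# synthetic breathing trace: ~0.25 Hz sine plus measurement noise
sig = np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.standard_normal(t.size)

peaks, props = find_peaks(sig, height=0.5, distance=fs * 2)  # inhalation peaks
periods = np.diff(t[peaks])           # peak-to-peak (PTP) periods
amps = props["peak_heights"]

print(f"PTP period: mean {periods.mean():.2f} s, SD {periods.std(ddof=1):.3f} s")
print(f"peak amplitude: mean {amps.mean():.2f}, SD {amps.std(ddof=1):.3f}")
# Lower period and amplitude SDs mean more regular breathing, the quantity
# the guided-breathing condition is reported to improve.
```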
Regular Breakfast and Blood Lead Levels among Preschool Children
Directory of Open Access Journals (Sweden)
Needleman Herbert
2011-04-01
Full Text Available Background: Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods: Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurements of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb≥10 μg/dL) was examined using logistic regression modeling. Results: Median B-Pb among children who ate breakfast regularly and those who did not were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater zinc blood levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb compared with children who did not eat breakfast regularly, p = 0.02). Conclusion: The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.
DEFF Research Database (Denmark)
Nielsen, Mikka
According to the official diagnostic manual, ADHD is defined by symptoms of inattention, hyperactivity, and impulsivity, and patterns of behaviour are characterized as failure to pay attention to details, excessive talking, fidgeting, or inability to remain seated in appropriate situations (DSM-5). In this paper, however, I ask whether we can understand what we call ADHD in a different way than through the symptom descriptions, and advocate for a complementary, phenomenological understanding of ADHD as a certain being in the world – more specifically, as a matter of a phenomenological difference in temporal experience and/or rhythm. Inspired both by psychiatry's experiments with people diagnosed with ADHD and their assessment of time, and by phenomenological perspectives on mental disorders and temporal disorientation, I explore the experience of ADHD as a disruption in the phenomenological experience
Causal inference and temporal predictions in audiovisual perception of speech and music.
Noppeney, Uta; Lee, Hwee Ling
2018-03-31
To form a coherent percept of the environment, the brain must integrate sensory signals emanating from a common source but segregate those from different sources. Temporal regularities are prominent cues for multisensory integration, particularly for speech and music perception. In line with models of predictive coding, we suggest that the brain adapts an internal model to the statistical regularities in its environment. This internal model enables cross-sensory and sensorimotor temporal predictions as a mechanism to arbitrate between integration and segregation of signals from different senses. © 2018 New York Academy of Sciences.
Temporal Concurrent Constraint Programming
DEFF Research Database (Denmark)
Nielsen, Mogens; Valencia Posso, Frank Dan
2002-01-01
The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...... reflect the reactive interactions between concurrent constraint processes and their environment, as well as internal interactions between individual processes. Relationships between the suggested notions are studied, and they are all proved to be decidable for a substantial fragment of the calculus...
Multilingual Validation of the Questionnaire for Verifying Stroke-Free Status in West Africa.
Sarfo, Fred; Gebregziabher, Mulugeta; Ovbiagele, Bruce; Akinyemi, Rufus; Owolabi, Lukman; Obiako, Reginald; Akpa, Onoja; Armstrong, Kevin; Akpalu, Albert; Adamu, Sheila; Obese, Vida; Boa-Antwi, Nana; Appiah, Lambert; Arulogun, Oyedunni; Mensah, Yaw; Adeoye, Abiodun; Tosin, Aridegbe; Adeleye, Osimhiarherhuo; Tabi-Ajayi, Eric; Phillip, Ibinaiye; Sani, Abubakar; Isah, Suleiman; Tabari, Nasir; Mande, Aliyu; Agunloye, Atinuke; Ogbole, Godwin; Akinyemi, Joshua; Laryea, Ruth; Melikam, Sylvia; Uvere, Ezinne; Adekunle, Gregory; Kehinde, Salaam; Azuh, Paschal; Dambatta, Abdul; Ishaq, Naser; Saulson, Raelle; Arnett, Donna; Tiwari, Hemnant; Jenkins, Carolyn; Lackland, Dan; Owolabi, Mayowa
2016-01-01
The Questionnaire for Verifying Stroke-Free Status (QVSFS), a method for verifying stroke-free status in participants of clinical, epidemiological, and genetic studies, has not been validated in low-income settings where populations have limited knowledge of stroke symptoms. We aimed to validate the QVSFS in 3 languages, Yoruba, Hausa and Akan, for ascertainment of the stroke-free status of control subjects enrolled in an on-going stroke epidemiological study in West Africa. Data were collected using a cross-sectional study design where 384 participants were consecutively recruited from neurology and general medicine clinics of 5 tertiary referral hospitals in Nigeria and Ghana. Ascertainment of stroke status was by neurologists using structured neurological examination, review of case records, and neuroimaging (gold standard). Relative performance of the QVSFS without and with pictures of stroke symptoms (pictograms) was assessed using sensitivity, specificity, positive predictive value, and negative predictive value. The overall median age of the study participants was 54 years and 48.4% were males. Of 165 stroke cases identified by the gold standard, 98% were determined to have had stroke, whereas of 219 without stroke 87% were determined to be stroke-free by the QVSFS. Negative predictive value of the QVSFS across the 3 languages was 0.97 (range, 0.93-1.00); sensitivity, specificity, and positive predictive value were 0.98, 0.82, and 0.80, respectively. Agreement between the questionnaire with and without the pictogram was excellent/strong, with Cohen's κ=0.92. The QVSFS is a valid tool for verifying stroke-free status across culturally diverse populations in West Africa. © 2015 American Heart Association, Inc.
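The validation statistics quoted above (sensitivity, specificity, PPV, NPV, Cohen's kappa) all derive from a 2x2 table against the neurologist gold standard. The sketch below computes them; the counts are hypothetical, chosen only to land near the reported values, and are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy measures plus Cohen's kappa."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                 # sensitivity
    spec = tn / (tn + fp)                 # specificity
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    po = (tp + tn) / n                    # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

# Hypothetical counts (test vs. gold standard), not the study's:
print(diagnostic_metrics(tp=162, fp=39, fn=3, tn=180))
# -> roughly (0.98, 0.82, 0.81, 0.98, 0.78)
```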
A two-dimensional deformable phantom for quantitatively verifying deformation algorithms
Energy Technology Data Exchange (ETDEWEB)
Kirby, Neil; Chuang, Cynthia; Pouliot, Jean [Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143-1708 (United States)
2011-08-15
Purpose: The incorporation of deformable image registration into the treatment planning process is rapidly advancing. For this reason, the methods used to verify the underlying deformation algorithms must evolve equally fast. This manuscript proposes a two-dimensional deformable phantom, which can objectively verify the accuracy of deformation algorithms, as the next step for improving these techniques. Methods: The phantom represents a single plane of the anatomy for a head and neck patient. Inflation of a balloon catheter inside the phantom simulates tumor growth. CT and camera images of the phantom are acquired before and after its deformation. Nonradiopaque markers reside on the surface of the deformable anatomy and are visible through an acrylic plate, which enables an optical camera to measure their positions, thus establishing the ground-truth deformation. This measured deformation is directly compared to the predictions of deformation algorithms, using several similarity metrics. The ratio of the number of points with more than a 3 mm deformation error over the number that are deformed by more than 3 mm is used as an error metric to evaluate algorithm accuracy. Results: An optical method of characterizing deformation has been successfully demonstrated. For the tests of this method, the balloon catheter deforms 32 out of the 54 surface markers by more than 3 mm. Different deformation errors result from the different similarity metrics. The most accurate deformation predictions had an error of 75%. Conclusions: The results presented here demonstrate the utility of the phantom for objectively verifying deformation algorithms and determining which is the most accurate. They also indicate that the phantom would benefit from more electron density heterogeneity. The reduction of the deformable anatomy to a two-dimensional system allows for the use of nonradiopaque markers, which do not influence deformation algorithms. This is the fundamental advantage of this
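The error metric described above, read literally, is the count of markers mispredicted by more than 3 mm divided by the count of markers that truly move more than 3 mm. A numpy sketch with synthetic marker positions (all values assumed):

```python
import numpy as np

def deformation_error_metric(p0, p1_true, p1_pred, tol_mm=3.0):
    """The abstract's metric, read literally: (# markers whose predicted
    position errs by > tol_mm) / (# markers truly deformed by > tol_mm)."""
    moved = np.linalg.norm(p1_true - p0, axis=1) > tol_mm
    err = np.linalg.norm(p1_pred - p1_true, axis=1) > tol_mm
    return err.sum() / moved.sum()

rng = np.random.default_rng(3)
p0 = rng.uniform(0, 100, size=(54, 2))            # 54 surface markers, 2-D phantom
p1_true = p0 + rng.uniform(-6, 6, size=(54, 2))   # optically measured ground truth
p1_pred = p1_true + rng.normal(0, 2.5, size=(54, 2))  # hypothetical algorithm output
print(deformation_error_metric(p0, p1_true, p1_pred))
```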
Saavalainen, Liisu; Tikka, Tuulia; But, Anna; Gissler, Mika; Haukka, Jari; Tiitinen, Aila; Härkki, Päivi; Heikinheimo, Oskari
2018-01-01
To study the trends in incidence rate, type and surgical treatment, and patient characteristics of surgically verified endometriosis during 1987-2012. This is a register-based cohort study. We identified women receiving their first diagnosis of endometriosis in surgery from the Finnish Hospital Discharge Register (FHDR). Quality of the FHDR records was assessed bidirectionally. The age-standardized incidence rate of first surgically verified endometriosis was assessed by calendar year. The cohort comprises 49 956 women. The quality assessment suggested the FHDR data to be of good quality. The most common diagnosis, ovarian endometriosis (46%), was associated with the highest median age, 38.5 years (interquartile range 31.0-44.8), and the second most common diagnosis, peritoneal endometriosis (40%), with a median age of 34.9 years (28.6-41.7). Between 1987 and 2012, a decrease was observed in the median age, from 38.8 (32.3-43.6) to 34.0 (28.9-41.0) years, and in the age-standardized incidence rate from 116 [95% confidence interval (CI) 112-121] to 45 (42-48) per 100 000 women. The proportion of hysterectomy as a first surgical treatment decreased from 38 to 19%, whereas that of laparoscopy increased from 42 to 73% when comparing 1987-1995 with 1996-2012. This nationwide cohort of surgically verified endometriosis showed a decrease in the incidence rate and in the patient age at the time of first diagnosis, even though the proportion of laparoscopy has increased. The number of hysterectomies has decreased. These changes are likely to reflect the evolving diagnostics, increasing awareness of endometriosis, and effective use of medical treatment before surgery. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.
Chimeric mitochondrial peptides from contiguous regular and swinger RNA.
Seligmann, Hervé
2016-01-01
Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one of 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein-coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcriptions detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in the results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with > 8 residues in each of the regular and swinger parts, the regular parts of eleven chimeric peptides correspond to six among the thirteen recognized, mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. Present results strengthen hypotheses that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
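The exchange rules are simple enough to state in code. The sketch below applies the abstract's own example rules, one symmetric (A ↔ C) and one asymmetric (A → C → G → A), and builds a chimeric sequence by switching from regular to swinger polymerization at an arbitrary point; the sequence itself is made up.

```python
def swinger(seq, mapping):
    """Apply a bijective nucleotide exchange rule to an RNA sequence."""
    return seq.translate(str.maketrans(mapping))

symmetric = {"A": "C", "C": "A"}             # A <-> C (abstract's example)
asymmetric = {"A": "C", "C": "G", "G": "A"}  # A -> C -> G -> A

rna = "AUGGCA"                               # made-up sequence
print(swinger(rna, symmetric))               # CUGGAC
print(swinger(rna, asymmetric))              # CUAAGC

# A chimeric RNA: regular polymerization up to a switch point, swinger after.
switch = 3
chimera = rna[:switch] + swinger(rna[switch:], asymmetric)
print(chimera)                               # AUGAGC
```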
Turán-type inequalities for regular Coulomb wave functions
Baricz, Árpád
2015-01-01
Turán, Mitrinović-Adamović and Wilker type inequalities are deduced for regular Coulomb wave functions. The proofs are based on a Mittag-Leffler expansion for the regular Coulomb wave function, which may be of independent interest. Moreover, some complete monotonicity results concerning the Coulomb zeta functions and some interlacing properties of the zeros of Coulomb wave functions are given.
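For orientation, a Turán-type inequality for a one-parameter family of functions f_L has the generic shape below; the paper's precise statement for the regular Coulomb wave functions, including its constants and validity ranges, is in the source.

```latex
f_L(x)^2 \;-\; f_{L-1}(x)\, f_{L+1}(x) \;\ge\; 0
```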
Regularization and Complexity Control in Feed-forward Networks
Bishop, C. M.
1995-01-01
In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.
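Of the four approaches compared, regularization is the easiest to sketch: an L2 weight penalty added to the training loss, whose gradient contributes the familiar weight-decay term. Below is a minimal numpy example on synthetic data; the architecture and hyperparameters are arbitrary assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)

# one hidden layer with tanh activation
W1 = 0.1 * rng.standard_normal((5, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal(16); b2 = 0.0
lam, lr = 1e-3, 0.01    # regularization strength and learning rate

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y
    loss = (err ** 2).mean() + lam * ((W1 ** 2).sum() + (W2 ** 2).sum())
    # backprop; the 2*lam*W terms are the gradient of the L2 penalty
    g_pred = 2 * err / len(y)
    gW2 = H.T @ g_pred + 2 * lam * W2
    gb2 = g_pred.sum()
    gH = np.outer(g_pred, W2) * (1 - H ** 2)
    gW1 = X.T @ gH + 2 * lam * W1
    gb1 = gH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(loss)  # penalized training loss after 2000 steps
```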
Optimal Embeddings of Distance Regular Graphs into Euclidean Spaces
F. Vallentin (Frank)
2008-01-01
In this paper we give a lower bound for the least distortion embedding of a distance-regular graph into Euclidean space. We use the lower bound for finding the least distortion for Hamming graphs, Johnson graphs, and all strongly regular graphs. Our technique involves semidefinite programming.
Degree-regular triangulations of torus and Klein bottle
Indian Academy of Sciences (India)
A triangulation of a connected closed surface is called degree-regular if each of its vertices has the same degree. ... In [5], Datta and Nilakantan have classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.
Adaptive Regularization of Neural Networks Using Conjugate Gradient
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
1998-01-01
Andersen et al. (1997) and Larsen et al. (1996, 1997) suggested a regularization scheme which iteratively adapts regularization parameters by minimizing validation error using simple gradient descent. In this contribution we present an improved algorithm based on the conjugate gradient technique... Numerical experiments with feedforward neural networks successfully demonstrate improved generalization ability and lower computational cost...
Strictly-regular number system and data structures
DEFF Research Database (Denmark)
Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki
2010-01-01
We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...
Inclusion Professional Development Model and Regular Middle School Educators
Royster, Otelia; Reglin, Gary L.; Losike-Sedimo, Nonofo
2014-01-01
The purpose of this study was to determine the impact of a professional development model on regular education middle school teachers' knowledge of best practices for teaching inclusive classes and attitudes toward teaching these classes. There were 19 regular education teachers who taught the core subjects. Findings for Research Question 1…