WorldWideScience

Sample records for regularized block matching

  1. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    Science.gov (United States)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions that account for only very limited classes of images. A more suitable sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be used to form sparse and redundant representations that facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. To accelerate convergence, a practical strategy for tuning the BMSR parameter is proposed and applied. Experimental results for various settings, including real CT scanning, verify that the proposed reconstruction method offers promising capabilities over conventional regularization.
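
The constrained L1 program described above can be approached with a proximal-gradient (ISTA-style) iteration. The sketch below is a generic illustration of that idea, not the authors' BMSR implementation: `A` stands in for the projection matrix, and the `T`/`Tinv` callables stand in for the block-matching sparsifying transform and its inverse (here left abstract).

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the L1 norm: shrinks coefficients toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_step(x, A, b, T, Tinv, step, tau):
    """One proximal-gradient iteration for: min ||T(x)||_1  s.t.  Ax = b, x >= 0.
    A is the system (projection) matrix; T/Tinv is a sparsifying transform pair."""
    x = x - step * (A.T @ (A @ x - b))     # gradient step on the data term
    x = Tinv(soft_threshold(T(x), tau))    # shrinkage in the transform domain
    return np.maximum(x, 0.0)              # enforce positivity
```

Repeating `ista_step` with a step size below `1/||A||^2` drives the iterate toward a sparse, nonnegative solution consistent with the data.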

  2. Regular Expression Matching and Operational Semantics

    Directory of Open Access Journals (Sweden)

    Asiri Rathnayake

    2011-08-01

    Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
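
As a concrete illustration of the lockstep idea (a minimal sketch, not the paper's derived machines), the following compiles a small regex AST into a Thompson-style NFA and simulates all active states in parallel over the input, so no backtracking is needed:

```python
class State:
    """NFA state: `char` is the symbol to consume, or None for epsilon."""
    def __init__(self, char=None):
        self.char = char
        self.out = []  # successor states

def compile_ast(node, tail):
    """Thompson construction: compile a regex AST into states ending at `tail`."""
    kind = node[0]
    if kind == 'char':
        s = State(node[1]); s.out = [tail]; return s
    if kind == 'cat':
        return compile_ast(node[1], compile_ast(node[2], tail))
    if kind == 'alt':
        s = State()
        s.out = [compile_ast(node[1], tail), compile_ast(node[2], tail)]
        return s
    if kind == 'star':
        s = State()
        s.out = [tail]
        s.out.insert(0, compile_ast(node[1], s))  # loop back for repetition
        return s
    raise ValueError(kind)

def matches(ast, text):
    """Lockstep simulation: keep the set of all reachable states per input char."""
    accept = State()
    start = compile_ast(ast, accept)

    def closure(states):
        seen, stack = set(), list(states)
        while stack:
            st = stack.pop()
            if st in seen:
                continue
            seen.add(st)
            if st.char is None:           # follow epsilon transitions
                stack.extend(st.out)
        return seen

    current = closure({start})
    for ch in text:
        nxt = {o for st in current if st.char == ch for o in st.out}
        current = closure(nxt)
    return accept in current
```

For example, the AST for `(a|b)*c` is `('cat', ('star', ('alt', ('char','a'), ('char','b'))), ('char','c'))`.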

  3. Fast and compact regular expression matching

    DEFF Research Database (Denmark)

    Bille, Philip; Farach-Colton, Martin

    2008-01-01

    We study four problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem, using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.

  4. BLOCKS - PDB ATOM matching - DB-SPIRE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available DB-SPIRE BLOCKS - PDB ATOM matching Data detail Data name BLOCKS - PDB ATOM matching DOI 10.18908/lsdba.nbdc00411-008 Description of data contents Sequence numbers of PDB entries/chains whose ATOM matches a BLOCKS entry Data file File name: dbspire_blocks_pdb_atom.zip File URL: ftp://ftp.biosciencedbc.jp/archive/dbspire/LATEST/dbspire_blocks_pdb_atom.zip File size: 6.2 MB Simple search URL http://togodb.biosciencedbc.jp/togodb/view/dbspire_blocks_pdb_atom#en Data acquisition method BLOCKS, PD

  5. Automatic block-matching registration to improve lung tumor localization during image-guided radiotherapy

    Science.gov (United States)

    Robertson, Scott Patrick

    To improve relatively poor outcomes for locally-advanced lung cancer patients, many current efforts are dedicated to minimizing uncertainties in radiotherapy. This enables the isotoxic delivery of escalated tumor doses, leading to better local tumor control. The current dissertation specifically addresses inter-fractional uncertainties resulting from patient setup variability. An automatic block-matching registration (BMR) algorithm is implemented and evaluated for the purpose of directly localizing advanced-stage lung tumors during image-guided radiation therapy. In this algorithm, small image sub-volumes, termed "blocks", are automatically identified on the tumor surface in an initial planning computed tomography (CT) image. Each block is independently and automatically registered to daily images acquired immediately prior to each treatment fraction. To improve the accuracy and robustness of BMR, this algorithm incorporates multi-resolution pyramid registration, regularization with a median filter, and a new multiple-candidate-registrations technique. The result of block-matching is a sparse displacement vector field that models local tissue deformations near the tumor surface. The distribution of displacement vectors is aggregated to obtain the final tumor registration, corresponding to the treatment couch shift for patient setup correction. Compared to existing rigid and deformable registration algorithms, the final BMR algorithm significantly improves the overlap between target volumes from the planning CT and registered daily images. Furthermore, BMR results in the smallest treatment margins for the given study population. However, despite these improvements, large residual target localization errors were noted, indicating that purely rigid couch shifts cannot correct for all sources of inter-fractional variability. Further reductions in treatment uncertainties may require the combination of high-quality target localization and adaptive radiotherapy.

  6. A block matching-based registration algorithm for localization of locally advanced lung tumors

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D., E-mail: gdhugo@vcu.edu [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia, 23298 (United States)

    2014-04-15

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0
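
The core per-block step can be sketched as an exhaustive normalized cross-correlation (NCC) search over a displacement window. This is a simplified single-resolution 2-D illustration, not the paper's full algorithm, which adds multiresolution pyramids, multiple candidate displacements, median-filter regularization, and a Procrustes fit:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-shaped blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_block(fixed, moving, center, half, search):
    """Exhaustively search a (2*search+1)^2 window for the displacement that
    maximizes NCC of one block. Returns the best (dy, dx)."""
    y, x = center
    block = fixed[y - half:y + half + 1, x - half:x + half + 1]
    best, best_d = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moving[y + dy - half:y + dy + half + 1,
                          x + dx - half:x + dx + half + 1]
            if cand.shape != block.shape:   # skip windows clipped at the border
                continue
            s = ncc(block, cand)
            if s > best:
                best, best_d = s, (dy, dx)
    return best_d
```

Running `match_block` for many blocks on the tumor surface yields the sparse displacement field that the median filter then regularizes.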

  7. A block matching-based registration algorithm for localization of locally advanced lung tumors

    International Nuclear Information System (INIS)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2014-01-01

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0.001). Left

  8. Support agnostic Bayesian matching pursuit for block sparse signals

    KAUST Repository

    Masood, Mudassir

    2013-05-01

    A fast matching pursuit method using a Bayesian approach is introduced for block-sparse signal recovery. This method performs Bayesian estimation of block-sparse signals even when the distribution of active blocks is non-Gaussian or unknown. It is agnostic to the distribution of active blocks in the signal and utilizes a priori statistics of additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data, with no user intervention required. The method requires a priori knowledge of the block partition and uses a greedy approach with order-recursive updates of its metrics to find the most dominant sparse supports and determine the approximate minimum mean square error (MMSE) estimate of the block-sparse signal. Simulation results demonstrate the power and robustness of the proposed estimator. © 2013 IEEE.
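
A plain (non-Bayesian) block-greedy pursuit conveys the flavor of the order-recursive support search. The sketch below is standard block orthogonal matching pursuit under a known block partition, not the authors' MMSE estimator:

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_pick):
    """Greedy block-sparse recovery: repeatedly pick the block of columns whose
    correlation with the residual has the largest norm, then re-fit by least
    squares over all selected blocks and update the residual."""
    m, n = A.shape
    blocks = [np.arange(i, i + block_size) for i in range(0, n, block_size)]
    chosen, x = [], np.zeros(n)
    r = y.copy()
    for _ in range(n_blocks_to_pick):
        corr = A.T @ r
        scores = [np.linalg.norm(corr[b]) for b in blocks]
        k = int(np.argmax(scores))
        if k not in chosen:
            chosen.append(k)
        idx = np.concatenate([blocks[j] for j in chosen])
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        x = np.zeros(n)
        x[idx] = coef
        r = y - A @ x
    return x
```

The Bayesian variant replaces the norm-of-correlation selection rule with posterior-based metrics and averages over dominant supports.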

  9. Improving Conductivity Image Quality Using Block Matrix-based Multiple Regularization (BMMR) Technique in EIT: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-06-01

    Full Text Available A Block Matrix based Multiple Regularization (BMMR) technique is proposed for improving conductivity image quality in EIT. The response matrix (JᵀJ) is partitioned into several sub-block matrices, and the highest eigenvalue of each sub-block matrix is chosen as the regularization parameter for the nodes contained in that sub-block. Simulated boundary data are generated for a circular domain with circular inhomogeneity, and the conductivity images are reconstructed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm. Conductivity images are reconstructed with the BMMR technique, and the results are compared with the Single-step Tikhonov Regularization (STR) and modified Levenberg-Marquardt Regularization (LMR) methods. It is observed that the BMMR technique reduces the projection error and solution error and improves the conductivity reconstruction in EIT. Results show that the BMMR method also improves the image contrast and inhomogeneity conductivity profile, and hence the reconstructed image quality is enhanced. doi:10.5617/jeb.170 J Electr Bioimp, vol. 2, pp. 33-47, 2011
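
The per-block choice of regularization parameter can be sketched as follows. This is a hedged illustration of the stated idea (largest eigenvalue of each diagonal sub-block of JᵀJ as that block's damping), not the authors' MoBIIR implementation; the function name and interface are hypothetical:

```python
import numpy as np

def bmmr_update(J, r, block_size):
    """One damped Gauss-Newton update where the damping for each node block is
    the largest eigenvalue of the corresponding diagonal sub-block of J^T J."""
    H = J.T @ J
    n = H.shape[0]
    lam = np.empty(n)
    for start in range(0, n, block_size):
        sl = slice(start, min(start + block_size, n))
        sub = H[sl, sl]                          # diagonal sub-block
        lam[sl] = np.linalg.eigvalsh(sub).max()  # per-block regularization parameter
    # Solve (H + diag(lam)) dx = J^T r for the update direction
    return np.linalg.solve(H + np.diag(lam), J.T @ r)
```

Compared with a single global Tikhonov parameter, the damping then adapts to the local sensitivity of each group of nodes.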

  10. Learning SAS’s Perl Regular Expression Matching the Easy Way: By Doing

    Science.gov (United States)

    2015-01-12

    Genovesi, Paul

    The regex_learning_tool allows both beginner and expert to efficiently practice PRX matching by selecting and processing only the match records that the user is interested in for a given Perl regular expression and/or source string.

  11. A Boyer-Moore (or Watson-Watson) type algorithm for regular tree pattern matching

    NARCIS (Netherlands)

    Watson, B.W.; Aarts, E.H.L.; Eikelder, ten H.M.M.; Hemerik, C.; Rem, M.

    1995-01-01

    In this chapter, I outline a new algorithm for regular tree pattern matching. The existence of this algorithm was first mentioned in the statements accompanying my dissertation, [2]. In order to avoid repeating the material in my dissertation, it is assumed that the reader is familiar with Chapters

  12. Reduction of snapshots for MIMO radar detection by block/group orthogonal matching pursuit

    KAUST Repository

    Ali, Hussain El Hosiny

    2014-10-01

    Multiple-input multiple-output (MIMO) radar works on the principle of transmitting independent waveforms at each element of its antenna array and is widely used for surveillance purposes. In this work, we investigate the MIMO radar target localization problem with compressive sensing. Specifically, we estimate target locations in MIMO radar using group and block sparsity algorithms, which reduces the number of snapshots required and achieves better radar resolution. We use group orthogonal matching pursuit (GOMP) and block orthogonal matching pursuit (BOMP) for this problem. © 2014 IEEE.

  13. FPGA-accelerated algorithm for the regular expression matching system

    Science.gov (United States)

    Russek, P.; Wiatr, K.

    2015-01-01

    This article describes an algorithm to support a regular expression matching system. The goal was to achieve an attractive performance system with low energy consumption. The basic idea of the algorithm comes from the concept of the Bloom filter. It starts from the extraction of static sub-strings from the strings of regular expressions. The algorithm is devised to gain from its decomposition into parts intended to be executed by custom hardware and by the central processing unit (CPU). A pipelined custom processor architecture is proposed and the software algorithm is explained accordingly. The software part of the algorithm was coded in C and runs on a processor from the ARM family. The hardware architecture was described in VHDL and implemented in a field programmable gate array (FPGA). The performance results and required resources of the above experiments are given. An example target application for the presented solution is computer and network security systems. The idea was tested on nearly 100,000 body-based viruses from the ClamAV virus database. The solution is intended for the emerging technology of clusters of low-energy computing nodes.
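
The Bloom-filter idea can be sketched in a few lines: static substrings extracted from the signatures populate a bit array, and only input positions whose n-gram possibly matches are handed to the full regex engine. This is a generic software illustration (the hash scheme and sizes are assumptions, not the paper's hardware design):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over fixed-length substrings (n-grams)."""
    def __init__(self, size_bits=1 << 16, n_hashes=3):
        self.size = size_bits
        self.n = n_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k independent bit positions by salting the hash.
        for i in range(self.n):
            h = hashlib.blake2b(item, digest_size=8, salt=bytes([i])).digest()
            yield int.from_bytes(h, 'big') % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        # May report false positives, never false negatives.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def prefilter(data, bloom, k):
    """Return positions whose k-gram may belong to a signature substring;
    only these positions need the expensive full regex matching."""
    return [i for i in range(len(data) - k + 1) if data[i:i + k] in bloom]
```

Because the filter never yields false negatives, the CPU-side regex engine only ever inspects a small, safe superset of candidate positions.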

  14. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim

    2017-01-01

    This article addresses improvements on the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well acknowledged that estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force, by construction, the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high-dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noise while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. Simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.

  15. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla

    2017-10-25

    This article addresses improvements on the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well acknowledged that estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force, by construction, the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high-dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noise while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. Simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.
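
The regularized Tyler estimator itself is computed by a fixed-point iteration; a standard real-valued sketch is below (a generic illustration of the RTE fixed point with trace normalization, under assumed conventions, not the paper's tuned-ρ design):

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=50):
    """Fixed-point iteration for the regularized Tyler estimator (real case).
    X: (n_samples, p) data matrix; rho in (0, 1] shrinks toward the identity,
    keeping all eigenvalues of the estimate above a positive floor."""
    n, p = X.shape
    C = np.eye(p)
    for _ in range(n_iter):
        Cinv = np.linalg.inv(C)
        # Per-sample quadratic forms x_i^T C^{-1} x_i
        w = np.einsum('ij,jk,ik->i', X, Cinv, X)
        S = (X.T * (1.0 / w)) @ X              # reweighted scatter matrix
        C = (1 - rho) * (p / n) * S + rho * np.eye(p)
        C = p * C / np.trace(C)                # fix the (arbitrary) scale
    return C
```

The reweighting by `1/w` is what gives the estimator its robustness to heavy-tailed, impulsive samples: each sample contributes a direction, not a magnitude.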

  16. Modified Three-Step Search Block Matching Motion Estimation and Weighted Finite Automata based Fractal Video Compression

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2017-08-01

    Full Text Available The major challenge with the fractal image/video coding technique is that it requires more encoding time; how to reduce the encoding time remains the open research problem in fractal coding. Block matching motion estimation algorithms are used to reduce the computations performed in the process of encoding. The objective of the proposed work is to develop an approach for video coding using a modified three-step search (MTSS) block matching algorithm and weighted finite automata (WFA) coding, with a specific focus on reducing the encoding time. The MTSS block matching algorithm is used for computing motion vectors between two frames, i.e., the displacement of pixels, and WFA is used for coding as it behaves like fractal coding (FC). WFA represents an image (frame or motion-compensated prediction error) based on the fractal idea that the image has self-similarity in itself. The self-similarity is sought from the symmetry of an image, so the encoding algorithm divides an image into multiple levels of quad-tree segmentation and creates an automaton from the sub-images. The proposed MTSS block matching algorithm is based on the combination of rectangular and hexagonal search patterns and is compared with the existing New Three-Step Search (NTSS), Three-Step Search (TSS), and Efficient Three-Step Search (ETSS) block matching estimation algorithms. The performance of the proposed MTSS block matching algorithm is evaluated using the mean absolute difference (MAD) and the average search points required per frame, with MAD used as the block distortion measure (BDM). Finally, the developed approaches, namely MTSS and WFA, MTSS and FC, and plain FC (applied on every frame), are compared with each other. The experiments are carried out on standard uncompressed video databases, namely akiyo, bus, mobile, suzie, traffic, football, soccer, ice, etc. Developed

  17. Video error concealment using block matching and frequency selective extrapolation algorithms

    Science.gov (United States)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. It is very important to recover distorted video because video is used for various applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the block matching error concealment algorithm is compared with the frequency selective extrapolation algorithm. Both works are based on concealment of manually corrupted video frames given as input. The parameters used for objective quality measurement were PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index). The original video frames along with the error video frames are processed with both error concealment algorithms. According to the simulation results, frequency selective extrapolation shows better quality measures, with 48% improved PSNR and 94% increased SSIM compared to the block matching algorithm.
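
The PSNR objective-quality metric used above is computed from the mean squared error between the original and the concealed frame; a minimal sketch for 8-bit frames:

```python
import numpy as np

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original frame and a
    concealed/reconstructed frame (higher is better; inf for identical frames)."""
    mse = np.mean((orig.astype(np.float64) - recon.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```

SSIM is computed analogously but over local windows of means, variances, and covariances rather than a single global MSE.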

  18. 1 / n Expansion for the Number of Matchings on Regular Graphs and Monomer-Dimer Entropy

    Science.gov (United States)

    Pernici, Mario

    2017-08-01

    Using a 1/n expansion, that is, an expansion in descending powers of n, for the number of matchings in regular graphs with 2n vertices, we study the monomer-dimer entropy for two classes of graphs. We study the difference between the extensive monomer-dimer entropy of a random r-regular graph G (bipartite or not) with 2n vertices and the average extensive entropy of r-regular graphs with 2n vertices, in the limit n → ∞. We find a series expansion for it in the numbers of cycles; with probability 1 it converges for dimer density p < 1 and diverges as |ln(1-p)| for p → 1. In the case of regular lattices, we similarly expand the difference between the specific monomer-dimer entropy on a lattice and the one on the Bethe lattice; we write down its Taylor expansion in powers of p through order 10, expressed in terms of the number of totally reducible walks which are not tree-like. We prove through order 6 that its expansion coefficients in powers of p are non-negative.

  19. Randomized Block Cubic Newton Method

    KAUST Repository

    Doikov, Nikita; Richtarik, Peter

    2018-01-01

    We study the problem of minimizing the sum of three convex functions: a differentiable, twice-differentiable and a non-smooth term in a high dimensional setting. To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice differentiable term, and perfect (proximal) model for the nonsmooth term. Our method in each iteration minimizes the model over a random subset of blocks of the search variable. RBCN is the first algorithm with these properties, generalizing several existing methods, matching the best known bounds in all special cases. We establish ${\\cal O}(1/\\epsilon)$, ${\\cal O}(1/\\sqrt{\\epsilon})$ and ${\\cal O}(\\log (1/\\epsilon))$ rates under different assumptions on the component functions. Lastly, we show numerically that our method outperforms the state-of-the-art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.

  20. Randomized Block Cubic Newton Method

    KAUST Repository

    Doikov, Nikita

    2018-02-12

    We study the problem of minimizing the sum of three convex functions: a differentiable, twice-differentiable and a non-smooth term in a high dimensional setting. To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice differentiable term, and perfect (proximal) model for the nonsmooth term. Our method in each iteration minimizes the model over a random subset of blocks of the search variable. RBCN is the first algorithm with these properties, generalizing several existing methods, matching the best known bounds in all special cases. We establish ${\\cal O}(1/\\epsilon)$, ${\\cal O}(1/\\sqrt{\\epsilon})$ and ${\\cal O}(\\log (1/\\epsilon))$ rates under different assumptions on the component functions. Lastly, we show numerically that our method outperforms the state-of-the-art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.

  1. From Matched Spatial Filtering towards the Fused Statistical Descriptive Regularization Method for Enhanced Radar Imaging

    Directory of Open Access Journals (Sweden)

    Shkvarko Yuriy

    2006-01-01

    Full Text Available We address a new approach to solving the ill-posed nonlinear inverse problem of high-resolution numerical reconstruction of the spatial spectrum pattern (SSP) of the backscattered wavefield sources distributed over the remotely sensed scene. An array or synthesized array radar (SAR) that employs digital data signal processing is considered. By exploiting the idea of combining the statistical minimum risk estimation paradigm with numerical descriptive regularization techniques, we address a new fused statistical descriptive regularization (SDR) strategy for enhanced radar imaging. Pursuing such an approach, we establish a family of SDR-related SSP estimators that encompass a manifold of existing beamforming techniques, ranging from the traditional matched filter to robust and adaptive spatial filtering and minimum variance methods.

  2. Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    Directory of Open Access Journals (Sweden)

    Abdenaceur Boudlal

    2010-01-01

    Full Text Available This article investigates a new method of motion estimation based on a block matching criterion through the modeling of image blocks by a mixture of two or three Gaussian distributions. Mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window on the reference image is measured by minimizing the extended Mahalanobis distance between the clusters of the mixture. Experiments performed on sequences of real images have given good results, with PSNR gains reaching 3 dB.
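
The EM fitting step can be sketched for the simplest case of a two-component 1-D Gaussian mixture (a generic textbook EM loop over pixel intensities, not the article's multivariate block model with Mahalanobis matching):

```python
import numpy as np

def em_gmm2(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture.
    Returns the fitted (weights, means, variances)."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()], dtype=float)   # spread-out initialization
    var = np.array([x.var(), x.var()]) + 1e-9
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return w, mu, var
```

In the block-matching setting the same loop runs per block on pixel vectors, and the fitted cluster parameters feed the Mahalanobis-type similarity measure.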

  3. SU-G-BRA-02: Development of a Learning Based Block Matching Algorithm for Ultrasound Tracking in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, A; Bednarz, B [University of Wisconsin, Madison, WI (United States)

    2016-06-15

    Purpose: To develop an ultrasound learning-based tracking algorithm with the potential to provide real-time motion traces of anatomy-based fiducials that may aid in the effective delivery of external beam radiation. Methods: The algorithm was developed in Matlab R2015a and consists of two main stages: reference frame selection, and localized block matching. Immediately following frame acquisition, a normalized cross-correlation (NCC) similarity metric is used to determine a reference frame most similar to the current frame from a series of training set images that were acquired during a pretreatment scan. Segmented features in the reference frame provide the basis for the localized block matching to determine the feature locations in the current frame. The boundary points of the reference frame segmentation are used as the initial locations for the block matching and NCC is used to find the most similar block in the current frame. The best matched block locations in the current frame comprise the updated feature boundary. The algorithm was tested using five features from two sets of ultrasound patient data obtained from MICCAI 2014 CLUST. Due to the lack of a training set associated with the image sequences, the first 200 frames of the image sets were considered a valid training set for preliminary testing, and tracking was performed over the remaining frames. Results: Tracking of the five vessel features resulted in an average tracking error of 1.21 mm relative to predefined annotations. The average analysis rate was 15.7 FPS with analysis for one of the two patients reaching real-time speeds. Computations were performed on an i5-3230M at 2.60 GHz. Conclusion: Preliminary tests show tracking errors comparable with similar algorithms at close to real-time speeds. Extension of the work onto a GPU platform has the potential to achieve real-time performance, making tracking for therapy applications a feasible option. This work is partially funded by NIH grant R01CA
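
The first stage of the two-stage tracker, reference frame selection, reduces to an argmax over NCC scores against the pretreatment training set. A minimal sketch (the function names are hypothetical, and the real system follows this with localized boundary-point block matching):

```python
import numpy as np

def ncc_score(a, b):
    """Global normalized cross-correlation between two equal-shaped frames."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / d) if d > 0 else 0.0

def select_reference(current, training_frames):
    """Stage 1 of the tracker: pick the index of the pretreatment training
    frame most similar to the freshly acquired frame under NCC."""
    scores = [ncc_score(current, f) for f in training_frames]
    return int(np.argmax(scores))
```

The segmentation stored with the selected reference frame then seeds the per-boundary-point block matching in the current frame.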

  4. SU-G-BRA-02: Development of a Learning Based Block Matching Algorithm for Ultrasound Tracking in Radiotherapy

    International Nuclear Information System (INIS)

    Shepard, A; Bednarz, B

    2016-01-01

    Purpose: To develop an ultrasound learning-based tracking algorithm with the potential to provide real-time motion traces of anatomy-based fiducials that may aid in the effective delivery of external beam radiation. Methods: The algorithm was developed in Matlab R2015a and consists of two main stages: reference frame selection, and localized block matching. Immediately following frame acquisition, a normalized cross-correlation (NCC) similarity metric is used to determine a reference frame most similar to the current frame from a series of training set images that were acquired during a pretreatment scan. Segmented features in the reference frame provide the basis for the localized block matching to determine the feature locations in the current frame. The boundary points of the reference frame segmentation are used as the initial locations for the block matching and NCC is used to find the most similar block in the current frame. The best matched block locations in the current frame comprise the updated feature boundary. The algorithm was tested using five features from two sets of ultrasound patient data obtained from MICCAI 2014 CLUST. Due to the lack of a training set associated with the image sequences, the first 200 frames of the image sets were considered a valid training set for preliminary testing, and tracking was performed over the remaining frames. Results: Tracking of the five vessel features resulted in an average tracking error of 1.21 mm relative to predefined annotations. The average analysis rate was 15.7 FPS with analysis for one of the two patients reaching real-time speeds. Computations were performed on an i5-3230M at 2.60 GHz. Conclusion: Preliminary tests show tracking errors comparable with similar algorithms at close to real-time speeds. Extension of the work onto a GPU platform has the potential to achieve real-time performance, making tracking for therapy applications a feasible option. This work is partially funded by NIH grant R01CA
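The two-stage procedure described above (NCC-based reference selection, then localized block matching seeded at segmentation boundary points) is straightforward to prototype. Below is a minimal numpy sketch of the NCC block-matching step only; the function names, search window, and exhaustive search strategy are our assumptions, not the authors' Matlab implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_block(frame, template, center, search_radius):
    """Find the block in `frame` most similar to `template`, searching a
    window of +/- search_radius around `center` (top-left coordinates)."""
    h, w = template.shape
    cy, cx = center
    best_score, best_pos = -2.0, center
    for y in range(max(0, cy - search_radius), min(frame.shape[0] - h, cy + search_radius) + 1):
        for x in range(max(0, cx - search_radius), min(frame.shape[1] - w, cx + search_radius) + 1):
            score = ncc(frame[y:y + h, x:x + w], template)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

In the paper's setting, `template` would be a patch around one boundary point of the reference-frame segmentation, and the best-matched positions over all boundary points form the updated feature boundary.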

  5. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.

  6. A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2016-12-01

Full Text Available Fractal compression is a lossy compression technique for gray/color image and video compression. It gives a high compression ratio and better image quality with fast decoding time, but improving the encoding time remains a challenge. This review presents an analysis of the most significant existing approaches in the field of fractal-based gray/color image and video compression: different block matching motion estimation approaches for finding the motion vectors in a frame based on inter-frame coding and intra-frame (i.e., individual frame) coding, and automata theory based coding approaches for representing an image or a sequence of images. Though different review papers exist on fractal coding, this paper differs in many respects. One can develop new shape patterns for motion estimation and modify existing block matching motion estimation with automata coding to explore the fractal compression technique, with a specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.

  7. The Effect of Regular-Season Rest on Playoff Performance Among Players in the National Basketball Association.

    Science.gov (United States)

    Belk, John W; Marshall, Hayden A; McCarty, Eric C; Kraeutler, Matthew J

    2017-10-01

There has been speculation that rest during the regular season for players in the National Basketball Association (NBA) improves player performance in the postseason. To determine whether there is a correlation between the amount of regular-season rest among NBA players and playoff performance and injury risk in the same season. Cohort study; Level of evidence, 3. The Basketball Reference and Pro Sports Transactions archives were searched from the 2005 to 2015 seasons. Data were collected on players who missed fewer than 5 regular-season games because of rest (group A) and 5 to 9 regular-season games because of rest (group B) during each season. Inclusion criteria consisted of players who played a minimum of 20 minutes per game and made the playoffs that season. Players were excluded if they missed ≥10 games because of rest or suspension or missed ≥20 games in a season for any reason. Matched pairs were formed between the groups based on the following criteria: position, mean age at the start of the season within 2 years, regular-season minutes per game within 5 minutes, same playoff seeding, and player efficiency rating (PER) within 2 points. The following data from the playoffs were collected and compared between matched pairs at each position (point guard, shooting guard, forward/center): points per game, assists per game, PER, true shooting percentage, blocks, steals, and number of playoff games missed because of injury. A total of 811 players met the inclusion and exclusion criteria (group A: n = 744 players; group B: n = 67 players). Among all eligible players, 27 matched pairs were formed. Within these matched pairs, players in group B missed significantly more regular-season games because of rest than players in group A (6.0 games vs 1.3 games, respectively). Rest during the NBA regular season does not improve playoff performance or affect the injury risk during the playoffs in the same season.
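The study's matching criteria are explicit enough to express as a predicate. A hedged sketch (the dictionary field names `position`, `age`, `mpg`, `seed`, and `per` are our own shorthand for the criteria listed in the abstract, not the study's data schema):

```python
def eligible_pair(a, b):
    """Check the abstract's matched-pair criteria for two player-season
    records: same position, age within 2 years, regular-season minutes per
    game within 5, same playoff seeding, and PER within 2 points."""
    return (a["position"] == b["position"]
            and abs(a["age"] - b["age"]) <= 2
            and abs(a["mpg"] - b["mpg"]) <= 5
            and a["seed"] == b["seed"]
            and abs(a["per"] - b["per"]) <= 2)
```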

  8. Automatic registration of remote sensing images based on SIFT and fuzzy block matching for change detection

    Directory of Open Access Journals (Sweden)

    Cai Guo-Rong

    2011-10-01

    Full Text Available This paper presents an automated image registration approach to detecting changes in multi-temporal remote sensing images. The proposed algorithm is based on the scale invariant feature transform (SIFT and has two phases. The first phase focuses on SIFT feature extraction and on estimation of image transformation. In the second phase, Structured Local Binary Haar Pattern (SLBHP combined with a fuzzy similarity measure is then used to build a new and effective block similarity measure for change detection. Experimental results obtained on multi-temporal data sets show that compared with three mainstream block matching algorithms, the proposed algorithm is more effective in dealing with scale, rotation and illumination changes.

  9. Analysis of Block OMP using Block RIP

    OpenAIRE

    Wang, Jun; Li, Gang; Zhang, Hao; Wang, Xiqin

    2011-01-01

    Orthogonal matching pursuit (OMP) is a canonical greedy algorithm for sparse signal reconstruction. When the signal of interest is block sparse, i.e., it has nonzero coefficients occurring in clusters, the block version of OMP algorithm (i.e., Block OMP) outperforms the conventional OMP. In this paper, we demonstrate that a new notion of block restricted isometry property (Block RIP), which is less stringent than standard restricted isometry property (RIP), can be used for a very straightforw...
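The abstract is truncated, but the core Block-OMP idea (greedily selecting whole blocks of columns rather than single atoms, then refitting by least squares) can be sketched with numpy. This is a generic textbook variant with a known, even block partition; it is not the authors' exact algorithm or their Block-RIP analysis.

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_pick):
    """Greedy Block-OMP sketch: at each step select the block of columns
    whose correlation with the residual is largest, then refit the signal
    on the accumulated support by least squares."""
    m, n = A.shape
    blocks = [list(range(i, i + block_size)) for i in range(0, n, block_size)]
    support, residual = [], y.copy()
    for _ in range(n_blocks_to_pick):
        scores = [np.linalg.norm(A[:, b].T @ residual) for b in blocks]
        best = int(np.argmax(scores))
        if best not in support:
            support.append(best)
        cols = sorted(c for b in support for c in blocks[b])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        residual = y - A[:, cols] @ coef
    x = np.zeros(n)
    cols = sorted(c for b in support for c in blocks[b])
    x[cols] = coef
    return x
```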

  10. Spine labeling in MRI via regularized distribution matching.

    Science.gov (United States)

    Hojjat, Seyed-Parsa; Ayed, Ismail; Garvin, Gregory J; Punithakumar, Kumaradevan

    2017-11-01

This study investigates an efficient (nearly real-time) two-stage spine labeling algorithm that removes the need for external training while being applicable to different types of MRI data and acquisition protocols. Based solely on the image being labeled (i.e., we do not use training data), the first stage aims at detecting potential vertebra candidates following the optimization of a functional containing two terms: (i) a distribution-matching term that encodes contextual information about the vertebrae via a density model learned from a very simple user input, which amounts to a point (mouse click) on a predefined vertebra; and (ii) a regularization constraint, which penalizes isolated candidates in the solution. The second stage removes false positives and identifies all vertebrae and discs by optimizing a geometric constraint, which embeds generic anatomical information on the interconnections between neighboring structures. Based on generic knowledge, our geometric constraint does not require external training. We performed quantitative evaluations of the algorithm over a data set of 90 mid-sagittal MRI images of the lumbar spine acquired from 45 different subjects. To assess the flexibility of the algorithm, we used both T1- and T2-weighted images for each subject. A total of 990 structures were automatically detected/labeled and compared to ground-truth annotations by an expert. On the T2-weighted data, we obtained an accuracy of 91.6% for the vertebrae and 89.2% for the discs. On the T1-weighted data, we obtained an accuracy of 90.7% for the vertebrae and 88.1% for the discs. Our algorithm removes the need for external training while being applicable to different types of MRI data and acquisition protocols. Based on the current testing data, a subject-specific model density, and generic anatomical information, our method can achieve competitive performance when applied to T1- and T2-weighted MRI images.
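One way to read the "distribution-matching term" above is as a similarity between a candidate region's intensity histogram and a model density learned from the user's single click. A hedged numpy sketch using the Bhattacharyya coefficient (our choice of similarity; the paper's exact functional and density model may differ):

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity between two discrete distributions (1 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

def distribution_match_score(region, model_hist, bins=32, range_=(0.0, 1.0)):
    """Score how well a candidate region's intensity histogram matches a
    model density (e.g., learned from a user-clicked vertebra patch)."""
    hist, _ = np.histogram(region, bins=bins, range=range_)
    hist = hist / max(hist.sum(), 1)
    return bhattacharyya(hist, model_hist)
```

Candidates scoring high against the clicked-patch model would then be kept, subject to the regularization term that suppresses isolated detections.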

  11. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
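The "block-circulant structure of matrices resulting from the augmented translated samples" is what lets correlation filters be trained efficiently: ridge regression over all cyclic shifts of a base sample reduces to elementwise operations in the Fourier domain. A minimal 1-D sketch of a generic linear correlation filter (not the authors' manifold-regularized, semisupervised model):

```python
import numpy as np

def train_correlation_filter(x, y, lam=1e-3):
    """Ridge regression with all cyclic shifts of x as training samples and
    y as the desired response, solved elementwise in the Fourier domain."""
    X, Y = np.fft.fft(x), np.fft.fft(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def correlation_response(f_hat, z):
    """Filter response over all cyclic shifts of a new sample z; the argmax
    gives the estimated translation of the target."""
    return np.real(np.fft.ifft(f_hat * np.fft.fft(z)))
```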

  12. Measurement of left ventricular torsion using block-matching-based speckle tracking for two-dimensional echocardiography

    Science.gov (United States)

    Sun, Feng-Rong; Wang, Xiao-Jing; Wu, Qiang; Yao, Gui-Hua; Zhang, Yun

    2013-01-01

Left ventricular (LV) torsion is a sensitive and global index of LV systolic and diastolic function, but measuring it noninvasively is challenging. Two-dimensional echocardiography and a block-matching-based speckle tracking method were used to measure LV torsion. The main advantages of the proposed method over previous ones are summarized as follows: (1) The method is automatic, except for manually selecting some endocardium points on the end-diastolic frame in the initialization step. (2) The diamond search strategy is applied, with a spatial smoothness constraint introduced into the sum of absolute differences matching criterion, and the reference frame during the search is determined adaptively. (3) The method is capable of removing abnormal measurement data automatically. The proposed method was validated against Doppler tissue imaging, and some preliminary clinical experimental studies are presented to illustrate the clinical value of the proposed method.
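The diamond search strategy mentioned in advantage (2) is a classic fast block-matching pattern: probe a large diamond around the current best position, recenter until the best stays at the centre, then refine with a small diamond. A hedged numpy sketch using plain SAD (without the paper's smoothness constraint or adaptive reference frame):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two blocks."""
    return float(np.abs(a.astype(float) - b.astype(float)).sum())

def diamond_search(ref, cur, top_left, size=8, max_iter=20):
    """Estimate the motion vector of the `size` x `size` block at `top_left`
    in `ref` by diamond search over `cur`. Returns (dy, dx)."""
    h = w = size
    template = ref[top_left[0]:top_left[0] + h, top_left[1]:top_left[1] + w]
    ldp = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2),
           (-1, -1), (-1, 1), (1, -1), (1, 1)]   # large diamond pattern
    sdp = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # small diamond pattern

    def cost(y, x):
        if y < 0 or x < 0 or y + h > cur.shape[0] or x + w > cur.shape[1]:
            return float("inf")
        return sad(cur[y:y + h, x:x + w], template)

    y, x = top_left
    for _ in range(max_iter):
        best_cost, (by, bx) = min((cost(y + dy, x + dx), (y + dy, x + dx))
                                  for dy, dx in ldp)
        if (by, bx) == (y, x):   # best stayed at centre: switch to refinement
            break
        y, x = by, bx
    _, (y, x) = min((cost(y + dy, x + dx), (y + dy, x + dx)) for dy, dx in sdp)
    return (y - top_left[0], x - top_left[1])
```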

  13. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer search grid and 1000 for a non-integer search grid. The additional speedup for the non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas-Kanade optical flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.
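Full grid search with SAD is conceptually simple, which is also why it maps so well onto a GPU: every candidate displacement is evaluated independently, so each can be assigned to its own thread. A CPU-side numpy sketch of the search itself (our own illustration, not the paper's CUDA kernel):

```python
import numpy as np

def full_search_sad(ref, cur, top_left, size, search_range):
    """Exhaustive (full grid) block matching: evaluate the SAD cost at every
    integer displacement within +/- search_range and keep the minimum.
    Each (dy, dx) evaluation is independent, hence trivially parallel."""
    h = w = size
    y0, x0 = top_left
    template = ref[y0:y0 + h, x0:x0 + w].astype(float)
    best_cost, best_mv = float("inf"), (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > cur.shape[0] or x + w > cur.shape[1]:
                continue   # candidate block falls outside the frame
            cost = np.abs(cur[y:y + h, x:x + w].astype(float) - template).sum()
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```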

  14. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    Science.gov (United States)

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown their great performance in application to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves a better performance than several state-of-the-art algorithms.
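The computational core stated in the abstract, solving the weighted-nuclear-norm model by a weighted singular-value thresholding algorithm, is compact to sketch: soft-threshold each singular value by its own weight. A minimal numpy version of just that proximal step (the grouping, graph construction, and weight selection of the paper are not reproduced):

```python
import numpy as np

def weighted_svt(M, weights):
    """Weighted singular-value thresholding: shrink each singular value of M
    by its corresponding weight (the proximal operator of a weighted
    nuclear norm), then reassemble the matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```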

  15. Study on the Matching Relationship between Polymer Hydrodynamic Characteristic Size and Pore Throat Radius of Target Block S Based on the Microporous Membrane Filtration Method

    Directory of Open Access Journals (Sweden)

    Li Yiqiang

    2014-01-01

Full Text Available The concept of the hydrodynamic characteristic size of a polymer was proposed in this study to characterize the size of aggregates of many polymer molecules in the polymer percolation process. The hydrodynamic characteristic sizes of the polymers used in target block S were examined by employing the microporous membrane filtration method, and the influencing factors were studied. Natural core flow experiments were conducted in order to set up the flow matching relationship plate. According to the flow matching plate, the relationship between the hydrodynamic characteristic size of the polymer and the pore throat radius obtained from core mercury injection data was found, and several suitable polymers for reservoirs of different permeability were given. The experimental results of microporous membrane filtration indicated that the hydrodynamic characteristic size of the polymer maintained a good nonlinear relationship with polymer viscosity; the value increased as the molecular weight and concentration of the polymer increased, and increased as the salinity of the dilution water decreased. Additionally, the hydrodynamic characteristic size decreased as the pressure increased, so the hydrodynamic characteristic size ought to be determined based on the pressure of the target block. In the core flow studies, good matching of polymer and formation was identified as a polymer flow pressure gradient lower than the fracture pressure gradient of the formation. In this case, good matching meant that the pore throat radius should be larger than 10 times the hydrodynamic characteristic size of the polymer. Using this relationship, the matching between the hydrodynamic characteristic sizes of the polymer solutions and the pore throat radii of the target block was determined.
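The study's matching rule of thumb (pore throat radius larger than roughly 10 times the polymer's hydrodynamic characteristic size) reduces to a one-line check. A hedged sketch; the function name, units, and the idea of parameterizing the factor are ours:

```python
def good_polymer_match(pore_throat_radius_um, hydrodynamic_size_um, factor=10.0):
    """Matching criterion from the study: a polymer/reservoir pair is well
    matched when the pore throat radius exceeds ~10x the polymer's
    hydrodynamic characteristic size (both in the same length unit)."""
    return pore_throat_radius_um > factor * hydrodynamic_size_um
```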

  16. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    Science.gov (United States)

    Seligmann, Hervé

    2016-01-01

Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcriptions detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in the results. Chimeric peptides are 200× rarer than swinger peptides (3/100,000 versus 6/1,000). Among 186 peptides with >8 residues for each of the regular and swinger parts, the regular parts of eleven chimeric peptides correspond to six among the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. The present results strengthen the hypothesis that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
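The exchange rules above are simple bijections on the nucleotide alphabet, so the transformation itself is a one-liner. A sketch using the DNA alphabet for illustration (the paper concerns RNA; the rule names and helper are ours):

```python
def swinger_transform(seq, rule):
    """Apply one bijective nucleotide exchange rule to a sequence string.
    `rule` maps each nucleotide to its replacement; unmapped bases pass
    through unchanged."""
    return "".join(rule.get(base, base) for base in seq)

# A symmetric exchange X <-> Y (here A <-> C): its own inverse.
SYMMETRIC_AC = {"A": "C", "C": "A"}
# An asymmetric cyclic exchange X -> Y -> Z -> X (here A -> C -> G -> A):
# applying it three times returns the original sequence.
ASYMMETRIC_ACG = {"A": "C", "C": "G", "G": "A"}
```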

  17. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...

  18. Support agnostic Bayesian matching pursuit for block sparse signals

    KAUST Repository

    Masood, Mudassir; Al-Naffouri, Tareq Y.

    2013-01-01

    priori knowledge of block partition and utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports to determine the approximate minimum mean square error (MMSE) estimate of the block-sparse signal

  19. Oncoplastic round block technique has comparable operative parameters as standard wide local excision: a matched case-control study.

    Science.gov (United States)

    Lim, Geok-Hoon; Allen, John Carson; Ng, Ruey Pyng

    2017-08-01

    Although oncoplastic breast surgery is used to resect larger tumors with lower re-excision rates compared to standard wide local excision (sWLE), criticisms of oncoplastic surgery include a longer-albeit, well concealed-scar, longer operating time and hospital stay, and increased risk of complications. Round block technique has been reported to be very suitable for patients with relatively smaller breasts and minimal ptosis. We aim to determine if round block technique will result in operative parameters comparable with sWLE. Breast cancer patients who underwent a round block procedure from 1st May 2014 to 31st January 2016 were included in the study. These patients were then matched for the type of axillary procedure, on a one to one basis, with breast cancer patients who had undergone sWLE from 1st August 2011 to 31st January 2016. The operative parameters between the 2 groups were compared. 22 patients were included in the study. Patient demographics and histologic parameters were similar in the 2 groups. No complications were reported in either group. The mean operating time was 122 and 114 minutes in the round block and sWLE groups, respectively (P=0.64). Length of stay was similar in the 2 groups (P=0.11). Round block patients had better cosmesis and lower re-excision rates. A higher rate of recurrence was observed in the sWLE group. The round block technique has comparable operative parameters to sWLE with no evidence of increased complications. Lower re-excision rate and better cosmesis were observed in the round block patients suggesting that the round block technique is not only comparable in general, but may have advantages to sWLE in selected cases.

  20. Role model and prototype matching

    DEFF Research Database (Denmark)

    Lykkegaard, Eva; Ulriksen, Lars

    2016-01-01

    ’ meetings with the role models affected their thoughts concerning STEM students and attending university. The regular self-to-prototype matching process was shown in real-life role-models meetings to be extended to a more complex three-way matching process between students’ self-perceptions, prototype...

  1. Ipsilateral Brachial Plexus Block and Hemidiaphragmatic Paresis as Adverse Effect of a High Thoracic Paravertebral Block

    NARCIS (Netherlands)

    Renes, Steven H.; van Geffen, Geert J.; Snoeren, Miranda M.; Gielen, Matthieu J.; Groen, Gerbrand J.

    Background: Thoracic paravertebral block is regularly used for unilateral chest and abdominal surgery and is associated with a low complication rate. Case Reports: We describe 2 patients with an ipsilateral brachial plexus block with Horner syndrome after a high continuous thoracic paravertebral

  2. Quality-aware features-based noise level estimator for block matching and three-dimensional filtering algorithm

    Science.gov (United States)

    Xu, Shaoping; Hu, Lingyan; Yang, Xiaohui

    2016-01-01

    The performance of conventional denoising algorithms is usually controlled by one or several parameters whose optimal settings depend on the contents of the processed images and the characteristics of the noises. Among these parameters, noise level is a fundamental parameter that is always assumed to be known by most of the existing denoising algorithms (so-called nonblind denoising algorithms), which largely limits the applicability of these nonblind denoising algorithms in many applications. Moreover, these nonblind algorithms do not always achieve the best denoised images in visual quality even when fed with the actual noise level parameter. To address these shortcomings, in this paper we propose a new quality-aware features-based noise level estimator (NLE), which consists of quality-aware features extraction and optimal noise level parameter prediction. First, considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we utilize the marginal statistics of two local contrast operators, i.e., the gradient magnitude and the Laplacian of Gaussian (LOG), to extract quality-aware features. The proposed quality-aware features have very low computational complexity, making them well suited for time-constrained applications. Then we propose a learning-based framework where the noise level parameter is estimated based on the quality-aware features. Based on the proposed NLE, we develop a blind block matching and three-dimensional filtering (BBM3D) denoising algorithm which is capable of effectively removing additive white Gaussian noise, even coupled with impulse noise. The noise level parameter of the BBM3D algorithm is automatically tuned according to the quality-aware features, guaranteeing the best performance. As such, the classical block matching and three-dimensional algorithm can be transformed into a blind one in an unsupervised manner. Experimental results demonstrate that the
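The feature-extraction stage described above, marginal statistics of gradient-magnitude and Laplacian-of-Gaussian responses, can be sketched in a few lines of numpy. Here a plain 3x3 Laplacian stands in for the LoG operator, and only simple mean/std marginals are taken; the paper's exact operators and the learned regression from features to a noise level are not reproduced:

```python
import numpy as np

def quality_aware_features(img):
    """Cheap quality-aware features: marginal statistics (mean, std) of the
    gradient-magnitude map and of a Laplacian response map."""
    img = img.astype(float)
    gy, gx = np.gradient(img)          # finite-difference gradients
    gm = np.hypot(gx, gy)              # gradient magnitude
    # 3x3 Laplacian on the interior, a crude stand-in for the LoG operator
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return np.array([gm.mean(), gm.std(), lap.mean(), lap.std()])
```

In the paper's framework, features like these would feed a learned regressor that predicts the optimal noise-level parameter for the BM3D-style denoiser.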

  3. CONJUGATED BLOCK-COPOLYMERS FOR ELECTROLUMINESCENT DIODES

    NARCIS (Netherlands)

    Hilberer, A; Gill, R.E; Herrema, J.K; Malliaras, G.G; Wildeman, J.; Hadziioannou, G

    In this article we review results obtained in our laboratory on the design and study of new light-emitting polymers. We are interested in the synthesis and characterisation of block copolymers with regularly alternating conjugated and non conjugated sequences. The blocks giving rise to luminescence

  4. GENETIC SOURCES AND TECTONOPHYSICAL REGULARITIES OF DIVISIBILITY OF THE LITHOSPHERE INTO BLOCKS OF VARIOUS RANKS AT DIFFERENT STAGES OF ITS FORMATION: TECTONOPHYSICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Semen I. Sherman

    2015-01-01

those with minimum sizes, such as rock lumps. They reflect primarily the degradation of megablocks as a result of their destruction due to high stresses in excess of the tensile strength of the medium. This group may also include blocks whose formation is related to convection in the upper mantle layer, the asthenosphere. There are grounds to assume that through the vast intermediate interval of geologic time, including the supercycles of Kenorland, Rodinia, and partially Pangea, the formation of the large lithospheric blocks was controlled by convection, and later on they were 'fragmented' under the physical laws of destruction of solid bodies. However, it is difficult to clearly distinguish between the processes that predetermine the hierarchy of formation of the block structures of various origins, and the sizes of ancient lithospheric blocks cannot be estimated unambiguously. Thus, mantle convection is a genetic endogenous source of the initial divisibility of the cooling upper cover of the Earth and of the megablock divisibility of the lithosphere in the subsequent and recent geodynamic development stages. At the present stage, regular patterns of the lithospheric block divisibility of various scales are observed at all the hierarchic levels. The areas of the lithospheric megaplates result from regular changes of convective processes in the mantle, which influenced the formation of plates and plate kinematics. Fragmentation of the megaplates into smaller ones is a result of destruction of the solid lithosphere under the impact of high stresses, following the physical laws of destruction of solid bodies.

  5. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, many characters' phonetic radicals have the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement in Chinese) were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject

  6. Numerical experiment on finite element method for matching data

    International Nuclear Information System (INIS)

    Tokuda, Shinji; Kumakura, Toshimasa; Yoshimura, Koichi.

    1993-03-01

Numerical experiments are presented on the finite element method of Pletzer-Dewar for matching data of an ordinary differential equation with regular singular points, using a model equation. Matching data play an important role in nonideal MHD stability analysis of a magnetically confined plasma. In the Pletzer-Dewar method, the Frobenius series for the 'big solution', the fundamental solution which is not square-integrable at the regular singular point, is prescribed. The experiments include studies of the convergence rate of the matching data obtained by the finite element method and of the effect of truncating the Frobenius series at finite terms on the results of computation. It is shown from the present study that the finite element method is an effective method for obtaining the matching data with high accuracy. (author)

  7. Sleep patterns and match performance in elite Australian basketball athletes.

    Science.gov (United States)

    Staunton, Craig; Gordon, Brett; Custovic, Edhem; Stanger, Jonathan; Kingsley, Michael

    2017-08-01

To assess sleep patterns and associations between sleep and match performance in elite Australian female basketball players. Prospective cohort study. Seventeen elite female basketball players were monitored across two consecutive in-season competitions (30 weeks). Total sleep time and sleep efficiency were determined using triaxial accelerometers for Baseline, Pre-match, Match-day and Post-match timings. Match performance was determined using the basketball efficiency statistic (EFF). The effects of match schedule (Regular versus Double-Header; Home versus Away) and sleep on EFF were assessed. The Double-Header condition changed the pattern of sleep when compared with the Regular condition (F(3,48)=3.763, P=0.017), where total sleep time Post-match was 11% less for Double-Header (mean±SD; 7.2±1.4h) compared with Regular (8.0±1.3h; P=0.007). Total sleep time for Double-Header was greater Pre-match (8.2±1.7h) compared with Baseline (7.1±1.6h; P=0.022) and Match-day (7.3±1.5h; P=0.007). Small correlations existed between sleep metrics at Pre-match and EFF for pooled data (r=-0.39 to -0.22; P≥0.238). Relationships between total sleep time and EFF ranged from moderate negative to large positive correlations for individual players (r=-0.37 to 0.62) and reached significance for one player (r=0.60; P=0.025). Match schedule can affect the sleep patterns of elite female basketball players. A large degree of inter-individual variability existed in the relationship between sleep and match performance; nevertheless, sleep monitoring might assist in the optimisation of performance for some athletes. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  8. USING DISPERSION MEASURES FOR DETERMINING BLOCK-SIZE IN-MOTION ESTIMATION

    Directory of Open Access Journals (Sweden)

    CARLOS MERA

    2012-01-01

Video compression techniques reduce the temporal redundancy between video frames in order to compress the video efficiently. This reduction is achieved through motion compensation, which is based on motion estimation. Block matching is perhaps the most robust and reliable technique for motion estimation in video compression. However, block matching is a computationally expensive process. Different approaches have been proposed in order to improve the accuracy and efficiency of block matching. This work presents a block matching strategy for motion estimation in which the block size is determined by taking into account the variations of light intensity within the regions of the frames. The intensity variations are evaluated using two dispersion measures: the variance and the mean absolute deviation. Experimental results show better performance of block matching algorithms using the proposed approach.
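
The dispersion-based rule described above can be sketched as follows (an illustrative reconstruction, not the authors' code; the candidate sizes and the thresholds `var_thresh` and `mad_thresh` are hypothetical parameters):

```python
import numpy as np

def block_variability(frame, y, x, size):
    """Variance and mean absolute deviation of a square block of pixels."""
    block = frame[y:y + size, x:x + size].astype(np.float64)
    return block.var(), np.abs(block - block.mean()).mean()

def choose_block_size(frame, y, x, sizes=(16, 8, 4),
                      var_thresh=100.0, mad_thresh=8.0):
    """Pick the largest candidate size whose intensity dispersion is low;
    highly textured regions fall through to the smallest block size."""
    for size in sizes:
        var, mad = block_variability(frame, y, x, size)
        if var <= var_thresh and mad <= mad_thresh:
            return size
    return sizes[-1]
```

Smooth regions thus get large blocks (cheap to match), while detailed regions get small blocks (accurate motion boundaries).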

  9. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Science.gov (United States)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing matching methods in dealing with multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm firstly divides two settlement scenes at different scales into blocks by the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  10. Space-Frequency Block Code with Matched Rotation for MIMO-OFDM System with Limited Feedback

    Directory of Open Access Journals (Sweden)

    Thushara D. Abhayapala

    2009-01-01

This paper presents a novel matched rotation precoding (MRP) scheme to design a rate-one space-frequency block code (SFBC) and a multirate SFBC for MIMO-OFDM systems with limited feedback. The proposed rate-one MRP and multirate MRP can always achieve full transmit diversity and optimal system performance for an arbitrary number of antennas, subcarrier intervals, and subcarrier groupings, with limited channel knowledge required by the transmit antennas. The optimization process of the rate-one MRP is simple and easily visualized, so that the optimal rotation angle can be derived explicitly, or even intuitively for some cases. The multirate MRP has a more complex optimization process, but it has a better spectral efficiency and provides a relatively smooth balance between system performance and transmission rate. Simulations show that the proposed SFBC with MRP can overcome the diversity loss for specific propagation scenarios, always improve the system performance, and demonstrate flexible performance with large performance gain. Therefore, the proposed SFBCs with MRP demonstrate flexibility and feasibility, making them more suitable for a practical MIMO-OFDM system with dynamic parameters.

  11. A review of a scientific work A SCHOOL TO MATCH ANY CHILD: A MANUAL FOR WORKING WITH PUPILS WITH DEVELOPMENTAL DIFFICULTIES IN REGULAR SCHOOLS

    Directory of Open Access Journals (Sweden)

    Sofija АRNAUDOVA

    2005-12-01

The presented text is a review of the scientific work A School to Match Any Child, a manual for working with children with physical disabilities in regular schools. The author and editor of this work is Sulejman Hrnjica, in cooperation with Vera Rajovikj, Tatjana Cholin, Ksenija Krtikj and Dijana Dopunovikj. The manual is part of the project Inclusion of Students with Physical Disabilities in Regular Primary Education, which was approved and financed by the Ministry of Education and Sport of the Republic of Serbia, and published by the Institute of Psychology at the Faculty of Philosophy in Belgrade, with the assistance of the foundation "Save the Children" from Great Britain, office in Belgrade, 2004. This work can hardly be found in bookshops throughout the country, but it is available in the library of the Faculty of Philosophy, and certainly at the Book Fair held every year in Skopje.

  12. On the regularization of extremal three-point functions involving giant gravitons

    Directory of Open Access Journals (Sweden)

    Charlotte Kristjansen

    2015-11-01

In the AdS5/CFT4 set-up, extremal three-point functions involving two giant 1/2 BPS gravitons and one point-like 1/2 BPS graviton, when calculated using semi-classical string theory methods, match the corresponding three-point functions obtained in the tree-level gauge theory. The string theory computation relies on a certain regularization procedure whose justification is based on the match between gauge and string theory. We revisit the regularization procedure and reformulate it in a way which allows a generalization to the ABJM set-up where three-point functions of 1/2 BPS operators are not protected and where a match between tree-level gauge theory and semi-classical string theory is hence not expected.

  13. Integral-fuel blocks

    International Nuclear Information System (INIS)

    Cunningham, C.; Simpkin, S.D.

    1975-01-01

    A prismatic moderator block is described which has fuel-containing channels and coolant channels disposed parallel to each other and to edge faces of the block. The coolant channels are arranged in rows on an equilateral triangular lattice pattern and the fuel-containing channels are disposed in a regular lattice pattern with one fuel-containing channel between and equidistant from each of the coolant channels in each group of three mutually adjacent coolant channels. The edge faces of the block are parallel to the rows of coolant channels and the channels nearest to each edge face are disposed in two rows parallel thereto, with one of the rows containing only coolant channels and the other row containing only fuel-containing channels. (Official Gazette)

  14. Radial motion of the carotid artery wall: A block matching algorithm approach

    Directory of Open Access Journals (Sweden)

    Effat Soleimani

    2012-06-01

Introduction: During recent years, evaluating the relation between mechanical properties of the arterial wall and cardiovascular diseases has been of great importance. On the other hand, motion estimation of the arterial wall using a sequence of noninvasive ultrasonic images and convenient processing methods might provide useful information related to biomechanical indexes and elastic properties of the arteries and assist doctors to discriminate between healthy and diseased arteries. In the present study, a block matching based algorithm was introduced to extract radial motion of the carotid artery wall during cardiac cycles. Materials and Methods: The program was applied to consecutive ultrasonic images of the common carotid artery of 10 healthy men, and the maximum and mean radial movement of the posterior wall of the artery was extracted. Manual measurements were carried out to validate the automatic method, and the results of the two methods were compared. Results: Paired t-test analysis showed no significant differences between the automatic and manual methods (P>0.05). There was a significant correlation between the changes in the instantaneous radial movement of the common carotid artery measured with the manual and automatic methods (with correlation coefficient 0.935 and P<0.05). Conclusion: Results of the present study showed that by using a semi-automated computer analysis method, with minimal user intervention and no dependence on user experience or skill, arterial wall motion in the radial direction can be extracted from consecutive ultrasonic frames

  15. Thermo-responsive block copolymers

    NARCIS (Netherlands)

    Mocan Cetintas, Merve

    2017-01-01

Block copolymers (BCPs) are remarkable materials because of their self-assembly behavior into nano-sized regular structures and highly tunable properties. BCPs are used in various applications such as surfactants, nanolithography, biomedicine and nanoporous membranes. In this thesis, we aimed to

  16. Faster 2-regular information-set decoding

    NARCIS (Netherlands)

    Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.

    2011-01-01

Fix positive integers B and w. Let C be a linear code over F_2 of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and

  17. Comparison of Kalman-filter-based approaches for block matching in arterial wall motion analysis from B-mode ultrasound

    International Nuclear Information System (INIS)

    Gastounioti, A; Stoitsis, J; Nikita, K S; Golemati, S

    2011-01-01

Block matching (BM) has been previously used to estimate motion of the carotid artery from B-mode ultrasound image sequences. In this paper, Kalman filtering (KF) was incorporated in this conventional method in two distinct scenarios: (a) as an adaptive strategy, by renewing the reference block and (b) by renewing the displacements estimated by BM or adaptive BM. All methods resulting from combinations of BM and KF with the two scenarios were evaluated on synthetic image sequences by computing the warping index, defined as the mean squared error between the real and estimated displacements. Adaptive BM, followed by an update through the second scenario at the end of tracking, ABM_KF-K2, minimized the warping index and yielded average displacement error reductions of 24% with respect to BM. The same method decreased estimation bias and jitter over varying center frequencies by 30% and 64%, respectively, with respect to BM. These results demonstrated the increased accuracy and robustness of ABM_KF-K2 in motion tracking of the arterial wall from B-mode ultrasound images, which is crucial in the study of mechanical properties of normal and diseased arterial segments
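
Scenario (b) above, smoothing the displacements estimated by BM with a Kalman filter, can be sketched as a scalar filter with a random-walk state model (an illustrative reconstruction, not the paper's design; the noise parameters `q` and `r` are hypothetical):

```python
import numpy as np

def kalman_smooth_displacements(measurements, q=1e-3, r=0.05):
    """Scalar Kalman filter over a sequence of block-matching displacement
    estimates, modelling the true displacement as a random walk (process
    noise q) observed through noisy BM measurements (noise r)."""
    x, p = float(measurements[0]), 1.0  # initial state and covariance
    filtered = []
    for z in measurements:
        p = p + q                   # predict (random-walk state model)
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # correct with the BM measurement
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)
```

Scenario (a), renewing the reference block, would instead feed the filtered estimate back into the matcher rather than post-processing its output.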

  18. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue

    2016-06-01

Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing matching methods in dealing with multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm firstly divides two settlement scenes at different scales into blocks by the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  19. Block matching 3D random noise filtering for absorption optical projection tomography

    International Nuclear Information System (INIS)

    Fumene Feruglio, P; Vinegoni, C; Weissleder, R; Gros, J; Sbarbati, A

    2010-01-01

Absorption and emission optical projection tomography (OPT), alternatively referred to as optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT), are recently developed three-dimensional imaging techniques with value for developmental biology and ex vivo gene expression studies. The techniques' principles are similar to the ones used for x-ray computed tomography and are based on the approximation of negligible light scattering in optically cleared samples. The optical clearing is achieved by a chemical procedure which aims at substituting the cellular fluids within the sample with a solution matching the refractive index of the cell membranes. Once cleared, the sample presents very low scattering and is then illuminated with a collimated light beam whose intensity is captured in transillumination mode by a CCD camera. Different projection images of the sample are subsequently obtained over a full 360° rotation, and a standard backprojection algorithm can be used in a similar fashion as for x-ray tomography in order to obtain absorption maps. Because not all biological samples present significant absorption contrast, it is not always possible to obtain projections with a good signal-to-noise ratio, a condition necessary to achieve high-quality tomographic reconstructions. Such is the case, for example, for early-stage embryos. In this work we demonstrate how, through the use of a random noise removal algorithm, the image quality of the reconstructions can be considerably improved even when the noise is strongly present in the acquired projections. Specifically, we implemented a block matching 3D (BM3D) filter, applying it separately to each acquired transillumination projection before performing a complete three-dimensional tomographic reconstruction. To test the efficiency of the adopted filtering scheme, a phantom and a real biological sample were processed. In both cases, the BM3D filter led to a signal-to-noise ratio increment of over 30 d

  20. Carotid artery wall motion analysis from B-mode ultrasound using adaptive block matching: in silico evaluation and in vivo application

    International Nuclear Information System (INIS)

    Gastounioti, A; Stoitsis, J S; Nikita, K S; Golemati, S

    2013-01-01

Valid risk stratification for carotid atherosclerotic plaques represents a crucial public health issue toward preventing fatal cerebrovascular events. Although motion analysis (MA) provides useful information about arterial wall dynamics, the identification of motion-based risk markers remains a significant challenge. Considering that the ability of a motion estimator (ME) to handle changes in the appearance of motion targets has a major effect on accuracy in MA, we investigated the potential of adaptive block matching (ABM) MEs, which consider changes in image intensities over time. To assure the validity in MA, we optimized and evaluated the ABM MEs in the context of a specially designed in silico framework. ABM_FIRF2, which takes advantage of the periodicity characterizing the arterial wall motion, was the most effective ABM algorithm, yielding a 47% accuracy increase with respect to the conventional block matching. The in vivo application of ABM_FIRF2 revealed five potential risk markers: low movement amplitude of the normal part of the wall adjacent to the plaques in the radial (RMA_PWL) and longitudinal (LMA_PWL) directions, high radial motion amplitude of the plaque top surface (RMA_PTS), and high relative movement, expressed in terms of radial strain (RSI_PL) and longitudinal shear strain (LSSI_PL), between plaque top and bottom surfaces. The in vivo results were reproduced by OF_LK(WLS) and ABM_KF-K2, MEs previously proposed by the authors and with remarkable in silico performances, thereby reinforcing the clinical value of the markers and the potential of those MEs. Future in vivo studies will elucidate with confidence the full potential of the markers. (paper)

  1. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    Science.gov (United States)

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnosis process. Usually, diagnosis is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting ROMP for OMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization step leads to a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than that of other state-of-the-art schemes.
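
A toy version of the ROMP selection rule (greedy support growth where candidate atoms whose correlations lie within a factor of two of each other are added as a group, followed by a least-squares refit) might look like this; it is a simplified illustration, not the authors' implementation:

```python
import numpy as np

def _regularized_group(cand, corr):
    """Among candidate atoms (sorted by |correlation|, descending), pick the
    maximal-energy group whose entries are within a factor of 2 of the
    group's largest entry -- the ROMP 'regularization' rule."""
    best, best_energy = [cand[0]], corr[cand[0]] ** 2
    for s in range(len(cand)):
        top = corr[cand[s]]
        group = [i for i in cand[s:] if corr[i] >= top / 2.0]
        energy = sum(corr[i] ** 2 for i in group)
        if energy > best_energy:
            best, best_energy = group, energy
    return best

def romp(D, y, sparsity, tol=1e-8):
    """Sparse-code y over dictionary D using roughly `sparsity` atoms."""
    support, coef = [], None
    residual = y.astype(float).copy()
    while len(support) < sparsity:
        corr = np.abs(D.T @ residual)
        if support:
            corr[support] = 0.0           # never reselect chosen atoms
        order = np.argsort(corr)[::-1][:sparsity]
        cand = [int(i) for i in order if corr[i] > tol]
        if not cand:
            break
        support += _regularized_group(cand, corr)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x_hat = np.zeros(D.shape[1])
    if support:
        x_hat[support] = coef
    return x_hat
```

In the K-SVD pipeline described above, a call like this would replace the OMP step used to compute each patch's sparse coefficient vector.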

  2. Parent-reported problem behavior among children with sensory disabilities attending elementary regular schools

    NARCIS (Netherlands)

    Maes, B; Grietens, H

    2004-01-01

    Parent-reported problem behaviors of 94 children with visual and auditory disabilities, attending elementary regular schools, were compared with problems reported in a general population sample of nondisabled children. Both samples were matched by means of a pairwise matching procedure, taking into

  3. Fingerprint Recognition Using Minutia Score Matching

    OpenAIRE

    J, Ravi.; Raja, K. B.; R, Venugopal. K.

    2010-01-01

The popular biometric used to authenticate a person is the fingerprint, which is unique and permanent throughout a person's life. Minutiae matching is widely used for fingerprint recognition; minutiae can be classified as ridge endings and ridge bifurcations. In this paper we propose the Fingerprint Recognition using Minutia Score Matching method (FRMSM). For fingerprint thinning, the Block Filter is used, which scans the image at the boundary to preserve the quality of the image and extract the minutiae ...

  4. Too Deep or Not Too Deep?: A Propensity-Matched Comparison of the Analgesic Effects of a Superficial Versus Deep Serratus Fascial Plane Block for Ambulatory Breast Cancer Surgery.

    Science.gov (United States)

    Abdallah, Faraj W; Cil, Tulin; MacLean, David; Madjdpour, Caveh; Escallon, Jaime; Semple, John; Brull, Richard

    2018-07-01

    Serratus fascial plane block can reduce pain following breast surgery, but the question of whether to inject the local anesthetic superficial or deep to the serratus muscle has not been answered. This cohort study compares the analgesic benefits of superficial versus deep serratus plane blocks in ambulatory breast cancer surgery patients at Women's College Hospital between February 2014 and December 2016. We tested the joint hypothesis that deep serratus block is noninferior to superficial serratus block for postoperative in-hospital (pre-discharge) opioid consumption and pain severity. One hundred sixty-six patients were propensity matched among 2 groups (83/group): superficial and deep serratus blocks. The cohort was used to evaluate the effect of blocks on postoperative oral morphine equivalent consumption and area under the curve for rest pain scores. We considered deep serratus block to be noninferior to superficial serratus block if it were noninferior for both outcomes, within 15 mg morphine and 4 cm·h units margins. Other outcomes included intraoperative fentanyl requirements, time to first analgesic request, recovery room stay, and incidence of postoperative nausea and vomiting. Deep serratus block was associated with postoperative morphine consumption and pain scores area under the curve that were noninferior to those of the superficial serratus block. Intraoperative fentanyl requirements, time to first analgesic request, recovery room stay, and postoperative nausea and vomiting were not different between blocks. The postoperative in-hospital analgesia associated with deep serratus block is as effective (within an acceptable margin) as superficial serratus block following ambulatory breast cancer surgery. These new findings are important to inform both current clinical practices and future prospective studies.

  5. Tolerating Correlated Failures for Generalized Cartesian Distributions via Bipartite Matching

    International Nuclear Information System (INIS)

    Ali, Nawab; Krishnamoorthy, Sriram; Halappanavar, Mahantesh; Daily, Jeffrey A.

    2011-01-01

    Faults are expected to play an increasingly important role in how algorithms and applications are designed to run on future extreme-scale systems. A key ingredient of any approach to fault tolerance is effective support for fault tolerant data storage. A typical application execution consists of phases in which certain data structures are modified while others are read-only. Often, read-only data structures constitute a large fraction of total memory consumed. Fault tolerance for read-only data can be ensured through the use of checksums or parities, without resorting to expensive in-memory duplication or checkpointing to secondary storage. In this paper, we present a graph-matching approach to compute and store parity data for read-only matrices that are compatible with fault tolerant linear algebra (FTLA). Typical approaches only support blocked data distributions with each process holding one block with the parity located on additional processes. The matrices are assumed to be blocked by a cartesian grid with each block assigned to a process. We consider a generalized distribution in which each process can be assigned arbitrary blocks. We also account for the fact that multiple processes might be part of the same failure unit, say an SMP node. The flexibility enabled by our novel application of graph matching extends fault tolerance support to data distributions beyond those supported by prior work. We evaluate the matching implementations and cost to compute the parity and recover lost data, demonstrating the low overhead incurred by our approach.
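
The abstract's contribution is the bipartite-matching placement of parity blocks across failure units; the parity arithmetic it builds on can be illustrated with a minimal sum-parity sketch (function names are ours, not the FTLA API):

```python
import numpy as np

def make_parity(blocks):
    """Parity block: elementwise sum of all read-only data blocks."""
    return np.sum(blocks, axis=0)

def recover(survivors, parity):
    """Rebuild the single lost block from the parity and the survivors."""
    return parity - np.sum(survivors, axis=0)
```

The matching step in the paper then decides *where* such parity blocks live, so that no parity is co-located with the data it protects on the same failure unit (e.g. the same SMP node).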

  6. Nearest Neighbour Corner Points Matching Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Changlong

    2015-01-01

Accurate detection of corners plays an important part in camera calibration. To deal with the instability and inaccuracy of present corner detection algorithms, a nearest neighbour corner matching detection algorithm is put forward. First, it dilates the binary image of the photographed pictures, then searches for and reserves the quadrilateral outlines in the image. Second, the blocks which accord with chessboard corners are grouped into a class; if a class contains too many blocks, blocks are deleted, and if too few, blocks are added, and the midpoint of the two vertex coordinates is then taken as the rough position of the corner. At last, the algorithm precisely locates the position of the corners. Experimental results have shown that the algorithm has obvious advantages in the accuracy and validity of corner detection, and it can provide a reliable basis for camera calibration in traffic accident measurement.

  7. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

This paper presents the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. Modeling and unfolding of the Platonic and Archimedean polyhedra is carried out using the 3ds Max program. This paper is intended as an example of descriptive geometry applications.

  8. Object Detection and Tracking using Modified Diamond Search Block Matching Motion Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Apurva Samdurkar

    2018-06-01

Object tracking is one of the main fields within computer vision. Among the various approaches to object detection and tracking, the background subtraction approach makes the detection of objects easier. The proposed block matching algorithm is applied to the detected object to generate the motion vectors. The existing diamond search (DS) and cross diamond search (CDS) algorithms are studied, and experiments are carried out on various standard video data sets and user-defined data sets. Based on the study and analysis of these two existing algorithms, a modified diamond search pattern (MDS) algorithm is proposed, using a small diamond-shaped search pattern in the initial step and a large diamond shape (LDS) in further steps for motion estimation. The initial search pattern consists of five points in a small diamond shape and gradually grows into a large diamond-shaped pattern, based on the point with the minimum cost function; the algorithm ends with the small-shape pattern in the last step. The proposed MDS algorithm finds smaller motion vectors and uses fewer search points than the existing DS and CDS algorithms. Further, object detection is carried out using the background subtraction approach, and finally the MDS motion estimation algorithm is used for tracking the object in color video sequences. The experiments are carried out using different video data sets containing a single object. The results are evaluated and compared using evaluation parameters such as average search points per frame and average computation time per frame. The experimental results show that MDS performs better than DS and CDS on average search points and average computation time.
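
For reference, the classic diamond search that MDS modifies can be sketched as follows (SAD cost, large diamond iterated until its centre wins, then one small-diamond refinement). The helper names and test pattern are illustrative; the MDS variant described above would additionally begin, as well as end, with the small pattern:

```python
import numpy as np

LDSP = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2),
        (1, 1), (1, -1), (-1, 1), (-1, -1)]        # large diamond (9 points)
SDSP = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # small diamond (5 points)

def sad(ref, cur, by, bx, dy, dx, bs):
    """Sum of absolute differences between the current block and the
    reference block displaced by (dy, dx); inf if out of bounds."""
    h, w = ref.shape
    if not (0 <= by + dy and by + dy + bs <= h and
            0 <= bx + dx and bx + dx + bs <= w):
        return np.inf
    a = cur[by:by + bs, bx:bx + bs].astype(np.float64)
    b = ref[by + dy:by + dy + bs, bx + dx:bx + dx + bs].astype(np.float64)
    return float(np.abs(a - b).sum())

def _best(ref, cur, by, bx, dy, dx, bs, pattern):
    # strict '<' so the centre (listed first) wins ties and the walk halts
    best_cost, best = np.inf, (0, 0)
    for py, px in pattern:
        c = sad(ref, cur, by, bx, dy + py, dx + px, bs)
        if c < best_cost:
            best_cost, best = c, (py, px)
    return best

def diamond_search(ref, cur, by, bx, bs=8, max_iter=50):
    """Motion vector (dy, dx) for the block of `cur` at (by, bx)."""
    dy = dx = 0
    for _ in range(max_iter):
        py, px = _best(ref, cur, by, bx, dy, dx, bs, LDSP)
        if (py, px) == (0, 0):
            break
        dy, dx = dy + py, dx + px
    py, px = _best(ref, cur, by, bx, dy, dx, bs, SDSP)
    return dy + py, dx + px
```

The reported "average search points per frame" metric counts how many `sad` evaluations a pattern spends before the centre wins.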

  9. An enhanced block matching algorithm for fast elastic registration in adaptive radiotherapy

    International Nuclear Information System (INIS)

    Malsch, U; Thieke, C; Huber, P E; Bendl, R

    2006-01-01

Image registration has many medical applications in diagnosis, therapy planning and therapy. Especially for time-adaptive radiotherapy, an efficient and accurate elastic registration of images acquired for treatment planning, and at the time of the actual treatment, is highly desirable. Therefore, we developed a fully automatic and fast block matching algorithm which identifies a set of anatomical landmarks in a 3D CT dataset and relocates them in another CT dataset by maximization of local correlation coefficients in the frequency domain. To transform the complete dataset, a smooth interpolation between the landmarks is calculated by modified thin-plate splines with local impact. The concept of the algorithm allows separate processing of image discontinuities like temporally changing air cavities in the intestinal tract or rectum. The result is a fully transformed 3D planning dataset (planning CT as well as delineations of tumour and organs at risk) mapped to a verification CT, allowing evaluation and, if necessary, changes of the treatment plan based on the current patient anatomy without time-consuming manual re-contouring. Typically the total calculation time is less than 5 min, which allows the use of the registration tool between acquiring the verification images and delivering the dose fraction for online corrections. We present verifications of the algorithm for five different patient datasets with different tumour locations (prostate, paraspinal and head-and-neck) by comparing the results with manually selected landmarks, visual assessment and consistency testing. It turns out that the mean error of the registration is better than the voxel resolution (2 × 2 × 3 mm³). In conclusion, we present an algorithm for fully automatic elastic image registration that is precise and fast enough for online corrections in an adaptive fractionated radiation treatment course
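
The landmark relocation step maximizes local correlation coefficients in the frequency domain; a closely related, simpler illustration is phase correlation via the FFT (our sketch, not the authors' implementation):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer translation that maps patch b onto patch a, found as the
    peak of the inverse FFT of the normalized cross-power spectrum."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12          # keep only phase information
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak indices to signed shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```

If `b` is a circular shift of `a`, then `np.roll(b, shift, axis=(0, 1))` with the returned `shift` reproduces `a`; the full method additionally normalizes by local intensity statistics to get true correlation coefficients.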

  10. Image Relaxation Matching Based on Feature Points for DSM Generation

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shunyi; ZHANG Zuxun; ZHANG Jianqing

    2004-01-01

In photogrammetry and remote sensing, image matching is a basic and crucial process for automatic DEM generation. In this paper we present an image relaxation matching method based on feature points. This method can be considered an extension of regular grid-point-based matching. It avoids the shortcomings of grid-point-based matching: for example, with this method we can avoid areas with low or even no texture, where errors frequently appear in cross-correlation matching. Meanwhile, it makes full use of mature techniques such as probability relaxation, image pyramids and the like, which have already been successfully used in the grid-point matching process. Application of the technique to DEM generation in different regions proved that it is more reasonable and reliable.

  11. Conformal blocks from Wilson lines with loop corrections

    Science.gov (United States)

    Hikida, Yasuaki; Uetoko, Takahiro

    2018-04-01

    We compute the conformal blocks of the Virasoro minimal model or its WN extension with large central charge from Wilson line networks in a Chern-Simons theory including loop corrections. In our previous work, we offered a prescription to regularize divergences from loops attached to Wilson lines. In this paper, we generalize our method with the prescription by dealing with more general operators for N =3 and apply it to the identity W3 block. We further compute general light-light blocks and heavy-light correlators for N =2 with the Wilson line method and compare the results with known ones obtained using a different prescription. We briefly discuss general W3 blocks.

  12. Blocking landing techniques in volleyball and the possible association with anterior cruciate ligament injury.

    Science.gov (United States)

    Zahradnik, David; Jandacka, Daniel; Holcapek, Michal; Farana, Roman; Uchytil, Jaroslav; Hamill, Joseph

    2018-04-01

    The number and type of landings performed after blocking during volleyball matches have been related to the potential risk of ACL injury. The aim of the present study was to determine whether gender affects the frequency of specific blocking landing techniques with a potential risk of ACL injury, from the perspective of foot contact and subsequent movement after the block, in volleyball players during competitive matches. Three matches involving four female volleyball teams (fourteen sets) and three matches involving four male volleyball teams (thirteen sets) in the Czech Republic were analyzed for this study. A Pearson chi-square test of independence was used to detect the relationship between gender and different blocking techniques. The results of the present study showed that gender affected single-leg landings with subsequent movement in the lateral direction and double-leg landings. Although the total number of landings was lower for male athletes than for female athletes, a larger portion of male athletes demonstrated single-leg landings with a subsequent movement than female athletes. Single-leg landings with a subsequent movement have a higher potential risk of ACL injury.

  13. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    Science.gov (United States)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    An image block matching algorithm based on motion vectors of correlated pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique which obtains the relative motion among frames of dynamic image sequences by means of digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction. These parameters simultaneously carry the vector information of the transverse and vertical directions in the image blocks, so better matching information can be obtained after performing the correlation operation in the oblique direction. An iterative weighted least squares method is used to eliminate block matching errors, with the weights related to each pixel's rotational angle. The center of rotation and the global motion estimate of the shaking image are obtained by weighted least squares from the estimates of blocks chosen evenly across the image. The shaking image can then be stabilized using the center of rotation and the global motion estimate. The algorithm also runs in real time by applying simulated annealing to the block matching search. An image processing system based on a DSP was used to test this algorithm. The core processor of the DSP system is TI's TMS320C6416, and a CCD camera with a resolution of 720×576 pixels was chosen as the video input. Experimental results show that the algorithm can be performed on the real-time processing system with accurate matching precision.
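The global motion estimate from per-block vectors can be sketched as a single unweighted least-squares step under a small-angle rotation model (the paper uses an iterative weighted scheme with rotation-dependent weights; this linearized, one-shot version and its names are assumptions):

```python
import numpy as np

def global_motion(points, vectors):
    """Least-squares fit of a translation (tx, ty) and small rotation
    angle theta to per-block motion vectors, using the linearized model
    dx = tx - theta * y,  dy = ty + theta * x."""
    rows, rhs = [], []
    for (x, y), (dx, dy) in zip(points, vectors):
        rows.append([1.0, 0.0, -y]); rhs.append(dx)
        rows.append([0.0, 1.0, x]);  rhs.append(dy)
    (tx, ty, theta), *_ = np.linalg.lstsq(np.array(rows), np.array(rhs),
                                          rcond=None)
    return tx, ty, theta
```

With noise-free vectors generated from a known translation and angle, the fit recovers them exactly; the iterative weighted variant of the paper would additionally down-weight outlier blocks between re-fits.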

  14. Continuous vs. blocks of physiotherapy for motor development in children with cerebral palsy and similar syndromes: A prospective randomized study.

    Science.gov (United States)

    Brunner, Anne-Louise; Rutz, Erich; Juenemann, Stephanie; Brunner, Reinald

    2014-12-01

    To determine whether physiotherapy is more effective when applied in blocks or continuously in children with cerebral palsy (CP). A prospective randomized cross-over study compared the effect of regular physiotherapy (baseline) with blocks of physiotherapy alternating with no physiotherapy over one year. Thirty-nine institutionalized children with CP and clinically similar syndromes (6-16 years old, Gross Motor Function Classification Scale II-IV) were included. During the first scholastic year, group A received regular physiotherapy and group B blocks of physiotherapy, and vice versa in the second year. The Gross Motor Function Measure 66 (GMFM-66) was the outcome measure. Thirteen children in each group completed the study. GMFM-66 improved (p …). Physiotherapy may be more effective when provided regularly rather than in blocks.

  15. Experimental scheme and restoration algorithm of block compression sensing

    Science.gov (United States)

    Zhang, Linxia; Zhou, Qun; Ke, Jun

    2018-01-01

    Compressed sensing (CS) can exploit the sparseness of a target to obtain its image with much less data than that required by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation (TV) minimization, are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
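A minimal version of one of the two reconstruction algorithms, orthogonal matching pursuit, might look as follows (a textbook sketch, not the authors' implementation):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of the
    sensing matrix A, re-fitting all active coefficients by least
    squares after each selection."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

For an orthogonal sensing matrix and a k-sparse signal, OMP recovers the signal exactly; in block compressed sensing, this routine would be run once per image block.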

  16. Role model and prototype matching: Upper-secondary school students’ meetings with tertiary STEM students

    DEFF Research Database (Denmark)

    Lykkegaard, Eva; Ulriksen, Lars

    2016-01-01

    concerning STEM students and attending university. The regular self-to-prototype matching process was shown in real-life role-models meetings to be extended to a more complex three-way matching process between students’ self-perceptions, prototype images and situation-specific conceptions of role models...

  17. Improving blood transfusion practice by regular education in the United Arab Emirates.

    Science.gov (United States)

    Sajwani, F H

    2012-07-01

    A cross-match to transfused unit (C:T) ratio of less than 2.0 is frequently used to assess performance in many hospital blood banks. This brief report was initiated to evaluate the practice at a local hospital and to emphasize the importance of regular educational sessions in improving blood transfusion practice. Retrospective data on the C:T ratio of all departments were collected, and educational sessions were given to improve practice. Thereafter, a new set of data was collected and the change in practice was assessed. Initial data showed a total C:T ratio of 1.95. After medical staff education, analysis showed a clinically significant improvement in blood utilization practice, with a C:T ratio of 1.60. This brief report indicates the importance of regular physician education, the potential role of the blood transfusion committee, and the need to implement clear guidelines for blood transfusion. © 2012 American Association of Blood Banks.

  18. A machine learning approach to create blocking criteria for record linkage.

    Science.gov (United States)

    Giang, Phan H

    2015-03-01

    Record linkage, a part of data cleaning, is recognized as one of the most expensive steps in data warehousing. Most record linkage (RL) systems employ a strategy of using blocking filters to reduce the number of pairs to be matched. A blocking filter consists of a number of blocking criteria. Until recently, blocking criteria were selected manually by domain experts. This paper proposes a new method to automatically learn efficient blocking criteria for record linkage. Our method addresses the lack of sufficient labeled data for training. Unlike previous works, we do not consider a blocking filter in isolation but in the context of an accompanying matcher which is employed after the blocking filter. We show that given such a matcher, the labels (assigned to record pairs) that are relevant for learning are the labels assigned by the matcher (link/nonlink), not the labels assigned objectively (match/unmatch). This conclusion allows us to generate an unlimited amount of labeled data for training. We formulate the problem of learning a blocking filter as a Disjunctive Normal Form (DNF) learning problem and use Probably Approximately Correct (PAC) learning theory to guide the development of an algorithm to search for blocking filters. We test the algorithm on a real patient master file of 2.18 million records. The experimental results show that, compared with filters obtained by educated guess, the optimal learned filters have comparable recall but reduce throughput (runtime) by an order of magnitude.
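The two quantities a learned blocking filter trades off, recall over matcher-assigned links and reduction of the candidate pairs the matcher must score, can be evaluated with a small sketch (the toy records and the `same_initial` criterion are hypothetical):

```python
def blocking_stats(pairs, links, criterion):
    """Evaluate a blocking criterion: recall over matcher-assigned links,
    and the fraction of candidate pairs the criterion filters out before
    the expensive matcher runs."""
    kept = [p for p in pairs if criterion(*p)]
    recall = sum(1 for p in kept if p in links) / len(links)
    reduction = 1.0 - len(kept) / len(pairs)
    return recall, reduction
```

A good filter keeps recall near 1.0 while pushing the reduction as high as possible; the paper's DNF learner searches for criteria that optimize exactly this trade-off.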

  19. Kangaroo – A pattern-matching program for biological sequences

    Directory of Open Access Journals (Sweden)

    Betel Doron

    2002-07-01

    Full Text Available Abstract Background Biologists are often interested in performing a simple database search to identify proteins or genes that contain a well-defined sequence pattern. Many databases do not provide straightforward or readily available query tools to perform simple searches, such as identifying transcription binding sites, protein motifs, or repetitive DNA sequences. However, in many cases simple pattern-matching searches can reveal a wealth of information. We present in this paper a regular expression pattern-matching tool that was used to identify short repetitive DNA sequences in human coding regions for the purpose of identifying potential mutation sites in mismatch repair deficient cells. Results Kangaroo is a web-based regular expression pattern-matching program that can search for patterns in DNA, protein, or coding region sequences in ten different organisms. The program is implemented to facilitate a wide range of queries with no restriction on the length or complexity of the query expression. The program is accessible on the web at http://bioinfo.mshri.on.ca/kangaroo/ and the source code is freely distributed at http://sourceforge.net/projects/slritools/. Conclusion A low-level simple pattern-matching application can prove to be a useful tool in many research settings. For example, Kangaroo was used to identify potential genetic targets in a human colorectal cancer variant that is characterized by a high frequency of mutations in coding regions containing mononucleotide repeats.
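A search in the spirit of Kangaroo's use case, locating mononucleotide repeats with a backreference pattern, can be written directly with Python's `re` module (a sketch; Kangaroo's own engine and query options are not reproduced):

```python
import re

def mono_repeats(seq, min_len=6):
    """Find mononucleotide repeats of at least `min_len` bases: one
    nucleotide captured, then the same character repeated via the
    backreference \\1."""
    pattern = re.compile(r"([ACGT])\1{%d,}" % (min_len - 1))
    return [(m.start(), m.group()) for m in pattern.finditer(seq)]
```

Such runs in coding regions are exactly the mutation-prone targets the abstract mentions for mismatch repair deficient cells.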

  20. On the MSE Performance and Optimization of Regularized Problems

    KAUST Repository

    Alrashdi, Ayed

    2016-11-01

    The amount of data measured, transmitted/received, and stored in recent years has dramatically increased. So, today, we are in the world of big data. Fortunately, in many applications, we can take advantage of possible structures and patterns in the data to overcome the curse of dimensionality. The most well-known structures include sparsity, low-rankness, and block sparsity. This covers a wide range of applications such as machine learning, medical imaging, signal processing, social networks and computer vision. It has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function. This gives rise to a potential interest in regularized inverse problems, where the process of reconstructing the structured signal can be modeled as a regularized problem. This thesis particularly focuses on finding the optimal regularization parameter for such problems, such as ridge regression, LASSO, square-root LASSO and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT), which has been used recently to precisely predict performance errors.
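For the simplest of the listed problems, ridge regression, the regularized solution has a closed form, and the shrinking effect of the regularization parameter is easy to see (a generic sketch; the thesis's CGMT-based optimal tuning is not reproduced):

```python
import numpy as np

def ridge(A, y, lam):
    """Ridge regression: minimize ||A x - y||^2 + lam * ||x||^2, via the
    normal equations (A'A + lam I) x = A'y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

At `lam = 0` this reduces to ordinary least squares, and increasing `lam` monotonically shrinks the solution norm; tuning `lam` to minimize the MSE of the recovered signal, without knowing the noise variance, is the problem the thesis addresses.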

  1. Efficient image duplicated region detection model using sequential block clustering

    Czech Academy of Sciences Publication Activity Database

    Sekeh, M. A.; Maarof, M. A.; Rohani, M. F.; Mahdian, Babak

    2013-01-01

    Roč. 10, č. 1 (2013), s. 73-84 ISSN 1742-2876 Institutional support: RVO:67985556 Keywords : Image forensic * Copy–paste forgery * Local block matching Subject RIV: IN - Informatics, Computer Science Impact factor: 0.986, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/mahdian-efficient image duplicated region detection model using sequential block clustering.pdf

  2. Protograph based LDPC codes with minimum distance linearly growing with block size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform that of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  3. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  4. Multiple Learning Strategies Project. Small Engine Repair Service. Regular Vocational. [Vol. 1.

    Science.gov (United States)

    Pitts, Jim; And Others

    This instructional package is one of two designed for use by regular vocational students in the vocational area of small engine repair service. Contained in this document are forty-four learning modules organized into ten units: engine block; air cleaner; starters; fuel tanks; lines, filters, and pumps; carburetors; electrical; magneto systems;…

  5. Robust and Adaptive Block Tracking Method Based on Particle Filter

    Directory of Open Access Journals (Sweden)

    Bin Sun

    2015-10-01

    Full Text Available In the field of video analysis and processing, object tracking is attracting more and more attention, especially in traffic management, digital surveillance and so on. However, problems such as objects' abrupt motion, occlusion and complex target structures bring difficulties to academic study and engineering application. In this paper, a fragments-based tracking method using a block relationship coefficient is proposed. In this method, we use a particle filter algorithm and the object region is initially divided into blocks. The contribution of this method is that object features are not extracted from a single block alone; the relationship between the current block and its neighbor blocks is extracted to describe the variation of the block. Each block is weighted according to the block relationship coefficient when it votes on the best-matched region in the next frame. This method makes full use of the relationship between blocks. The experimental results demonstrate that our method provides good performance under occlusion and abrupt posture variation.

  6. SU-C-207B-07: Deep Convolutional Neural Network Image Matching for Ultrasound Guidance in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, N; Najafi, M; Hancock, S; Hristov, D [Stanford University Cancer Center, Palo Alto, CA (United States)

    2016-06-15

    Purpose: Robust matching of ultrasound images is a challenging problem as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Thus our objective is to overcome this obstacle by designing and evaluating an image blocks matching framework based on a two channel deep convolutional neural network. Methods: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]. To obtain the similarity between two 3D image blocks A and B, the 3D image blocks are divided into 2D patches Ai and Bi. The similarity is then calculated as the average similarity score of Ai and Bi. The neural network was then trained with public non-medical image pairs, and subsequently evaluated on ultrasound image blocks for the following scenarios: (S1) same image blocks with/without shifts (A and A-shift-x); (S2) non-related random block pairs; (S3) ground truth registration matched pairs of different ultrasound images with/without shifts (A-i and A-reg-i-shift-x). Results: For S1 the similarity scores of A and A-shift-x were 32.63, 18.38, 12.95, 9.23, 2.15 and 0.43 for x ranging from 0 mm to 10 mm in 2 mm increments. For S2 the average similarity score for non-related block pairs was −1.15. For S3 the average similarity score of ground truth registration matched blocks A-i and A-reg-i-shift-0 (1≤i≤5) was 12.37. After translating A-reg-i-shift-0 by 0 mm, 2 mm, 4 mm, 6 mm, 8 mm, and 10 mm, the average similarity scores of A-i and A-reg-i-shift-x were 11.04, 8.42, 4.56, 2.27, and 0.29 respectively. Conclusion: The proposed method correctly assigns highest similarity to corresponding 3D ultrasound image blocks despite differences in image content and thus can form the basis for ultrasound image registration and tracking. [1] Zagoruyko, Komodakis, “Learning to compare image patches via convolutional neural networks”, IEEE CVPR 2015, pp. 4353–4361.

  7. SU-C-207B-07: Deep Convolutional Neural Network Image Matching for Ultrasound Guidance in Radiotherapy

    International Nuclear Information System (INIS)

    Zhu, N; Najafi, M; Hancock, S; Hristov, D

    2016-01-01

    Purpose: Robust matching of ultrasound images is a challenging problem as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Thus our objective is to overcome this obstacle by designing and evaluating an image blocks matching framework based on a two channel deep convolutional neural network. Methods: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]. To obtain the similarity between two 3D image blocks A and B, the 3D image blocks are divided into 2D patches Ai and Bi. The similarity is then calculated as the average similarity score of Ai and Bi. The neural network was then trained with public non-medical image pairs, and subsequently evaluated on ultrasound image blocks for the following scenarios: (S1) same image blocks with/without shifts (A and A-shift-x); (S2) non-related random block pairs; (S3) ground truth registration matched pairs of different ultrasound images with/without shifts (A-i and A-reg-i-shift-x). Results: For S1 the similarity scores of A and A-shift-x were 32.63, 18.38, 12.95, 9.23, 2.15 and 0.43 for x ranging from 0 mm to 10 mm in 2 mm increments. For S2 the average similarity score for non-related block pairs was −1.15. For S3 the average similarity score of ground truth registration matched blocks A-i and A-reg-i-shift-0 (1≤i≤5) was 12.37. After translating A-reg-i-shift-0 by 0 mm, 2 mm, 4 mm, 6 mm, 8 mm, and 10 mm, the average similarity scores of A-i and A-reg-i-shift-x were 11.04, 8.42, 4.56, 2.27, and 0.29 respectively. Conclusion: The proposed method correctly assigns highest similarity to corresponding 3D ultrasound image blocks despite differences in image content and thus can form the basis for ultrasound image registration and tracking. [1] Zagoruyko, Komodakis, “Learning to compare image patches via convolutional neural networks”, IEEE CVPR 2015, pp. 4353–4361.
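The block-to-patch decomposition described in the Methods, scoring a 3D block pair as the average similarity of its 2D slices, can be sketched as follows, with normalized cross correlation standing in for the trained two-channel CNN (the stand-in similarity and all names are assumptions):

```python
import numpy as np

def ncc(a, b):
    # stand-in 2D patch similarity; the paper uses a trained two-channel CNN
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    d = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / d) if d > 0 else 0.0

def block_similarity(A, B, patch_sim=ncc):
    """Similarity of two 3D blocks as the average similarity of their
    corresponding 2D slices (patches Ai, Bi in the abstract)."""
    return float(np.mean([patch_sim(a, b) for a, b in zip(A, B)]))
```

Identical blocks score the maximum and unrelated random blocks score much lower, mirroring the separation between scenarios S1 and S2 reported above (with different absolute scales, since the CNN's score is unnormalized).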

  8. A MATCHING METHOD TO REDUCE THE INFLUENCE OF SAR GEOMETRIC DEFORMATION

    Directory of Open Access Journals (Sweden)

    C. Gao

    2018-04-01

    Full Text Available There are large geometric deformations in SAR images, including foreshortening, layover and shadow, which lead to low matching accuracy. Especially in complex terrain, control points are difficult to obtain and matching is hard to achieve. Considering the impact of geometric distortions in SAR image pairs, a matching algorithm combining speeded-up robust features (SURF) and summed normalized cross correlation (SNCC) is proposed, which can avoid the influence of SAR geometric deformation. Firstly, the SURF algorithm is utilized to predict the search area. Then the matching point pairs are selected based on summed normalized cross correlation. Finally, false matches are eliminated by a bidirectional consistency check. The SURF algorithm constrains the range of matching points, the matching points extracted from deformation areas are eliminated, and matching points with a stable and even distribution are obtained. The experimental results demonstrate that the proposed algorithm has high precision, can effectively avoid the effect of geometric distortion on SAR image matching, and meets the accuracy requirements of block adjustment with sparse control points.
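The final filtering step, bidirectional consistency, can be sketched on a similarity matrix between two point sets: a pair survives only if each point is the other's best match (a generic sketch, not the authors' code):

```python
import numpy as np

def bidirectional_matches(sim):
    """Keep (i, j) only when j is i's best match (row-wise argmax) and
    i is j's best match (column-wise argmax); one-sided, likely false,
    correspondences are discarded."""
    fwd = sim.argmax(axis=1)   # best match in image B for each point of A
    bwd = sim.argmax(axis=0)   # best match in image A for each point of B
    return [(i, int(j)) for i, j in enumerate(fwd) if int(bwd[j]) == i]
```

When the forward and backward preferences disagree, as happens around SAR layover and shadow regions, the conflicting pair is simply dropped.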

  9. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc

  10. Reduction of snapshots for MIMO radar detection by block/group orthogonal matching pursuit

    KAUST Repository

    Ali, Hussain El Hosiny; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2014-01-01

    localization problem with compressive sensing. Specifically, we try to solve the problem of estimation of target location in MIMO radar by group and block sparsity algorithms. It will lead us to a reduced number of snapshots required and also we can achieve

  11. A novel fractal image compression scheme with block classification and sorting based on Pearson's correlation coefficient.

    Science.gov (United States)

    Wang, Jianji; Zheng, Nanning

    2013-09-01

    Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from the high computational complexity in encoding. Although many schemes are published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
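The quantity the scheme is built on, the absolute value of Pearson's correlation coefficient (APCC), is simple to compute, and an exact affine match b = s·a + o always scores 1.0, which is the equivalence the paper exploits for classification and sorting (a sketch; the class construction and preset-block sorting machinery are not reproduced):

```python
import numpy as np

def apcc(a, b):
    """Absolute value of Pearson's correlation coefficient between two
    image blocks; a score of 1.0 means b is an exact affine (contrast/
    brightness) transform of a, i.e. a perfect fractal-coding match."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    d = np.sqrt((a @ a) * (b @ b))
    return float(abs(a @ b) / d) if d > 0 else 0.0
```

Sorting the domain blocks of each class by their APCC to a preset block then lets the encoder search only the neighborhood where a range block's best affine match can lie.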

  12. Processing SPARQL queries with regular expressions in RDF databases

    Science.gov (United States)

    2011-01-01

    Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225

  13. Processing SPARQL queries with regular expressions in RDF databases.

    Science.gov (United States)

    Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon

    2011-03-29

    As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
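The semantics being supported, an unanchored regular-expression filter over RDF object values, can be illustrated on a toy in-memory triple list (the triples, Uniprot-style identifiers and function are hypothetical; this is not the paper's engine):

```python
import re

def filter_regex(triples, pattern, flags=""):
    """Mimic SPARQL's FILTER regex(?o, pattern, flags) over a list of
    (subject, predicate, object) string triples: keep triples whose
    object matches the unanchored regular expression."""
    f = re.IGNORECASE if "i" in flags else 0
    rx = re.compile(pattern, f)
    return [t for t in triples if rx.search(t[2])]
```

A real engine, as the paper argues, cannot afford this linear scan over a large RDF store and instead needs index support and a cost model for regex-bearing query plans.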

  14. Block Fusion on Dynamically Adaptive Spacetree Grids for Shallow Water Waves

    KAUST Repository

    Weinzierl, Tobias

    2014-09-01

    © 2014 World Scientific Publishing Company. Spacetrees are a popular formalism to describe dynamically adaptive Cartesian grids. Even though they directly yield a mesh, it is often computationally reasonable to embed regular Cartesian blocks into their leaves. This promotes stencils working on homogeneous data chunks. The choice of a proper block size is sensitive. While large block sizes foster loop parallelism and vectorisation, they restrict the adaptivity's granularity and hence increase the memory footprint and lower the numerical accuracy per byte. In the present paper, we therefore use a multiscale spacetree-block coupling admitting blocks on all spacetree nodes. We propose to find sets of blocks on the finest scale throughout the simulation and to replace them by fused big blocks. Such a replacement strategy can pick up hardware characteristics, i.e. which block size yields the highest throughput, while the dynamic adaptivity of the fine grid mesh is not constrained - applications can work with fine granular blocks. We study the fusion with a state-of-the-art shallow water solver on an Intel Sandy Bridge and a Xeon Phi processor, where we anticipate their reaction to selected block optimisation and vectorisation.

  15. SAD PROCESSOR FOR MULTIPLE MACROBLOCK MATCHING IN FAST SEARCH VIDEO MOTION ESTIMATION

    Directory of Open Access Journals (Sweden)

    Nehal N. Shah

    2015-02-01

    Full Text Available Motion estimation is a very important but computationally complex task in video coding. The process of determining motion vectors based on the temporal correlation of consecutive frames is used for video compression. In order to reduce the computational complexity of motion estimation and maintain the quality of encoding during motion compensation, different fast search techniques are available. These block-based motion estimation algorithms use the sum of absolute differences (SAD) between the corresponding macroblock in the current frame and all the candidate macroblocks in the reference frame to identify the best match. Existing implementations can perform the SAD between two blocks using a sequential or pipelined approach, but performing a multi-operand SAD in a single clock cycle with optimized resources is state of the art. In this paper various parallel architectures for computation of the fixed-block-size SAD are evaluated, and a fast parallel SAD architecture with optimized resources is proposed. Further, a SAD processor is described with 9 processing elements which can be configured for any existing fast-search block matching algorithm. The proposed SAD processor consumes 7% fewer adders per processing element than existing implementations. Using nine PEs it can process 84 HD frames per second in the worst case, which is a good outcome for real-time implementation. In the average case the architecture processes 325 HD frames per second.
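The SAD criterion and the full-search block matching it accelerates can be sketched in a few lines (a software reference model; the paper's contribution is the parallel multi-operand hardware architecture, not this loop):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two macroblocks (widened to
    int32 so uint8 pixel differences cannot wrap around)."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def best_match(cur, ref, top_left, size=16, search=4):
    """Full search: SAD of the current macroblock against every candidate
    in a +/- `search` window of the reference frame; returns the motion
    vector (dy, dx) with minimum SAD."""
    y, x = top_left
    block = cur[y:y + size, x:x + size]
    best, best_off = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = ref[y + dy:y + dy + size, x + dx:x + dx + size]
            s = sad(block, cand)
            if best is None or s < best:
                best, best_off = s, (dy, dx)
    return best_off, best
```

Fast search techniques and the paper's 9-PE processor both aim to evaluate many of these candidate SADs with far fewer sequential operations than this exhaustive double loop.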

  16. The Presence of Thyroid-Stimulation Blocking Antibody Prevents High Bone Turnover in Untreated Premenopausal Patients with Graves' Disease.

    Science.gov (United States)

    Cho, Sun Wook; Bae, Jae Hyun; Noh, Gyeong Woon; Kim, Ye An; Moon, Min Kyong; Park, Kyoung Un; Song, Junghan; Yi, Ka Hee; Park, Do Joon; Chung, June-Key; Cho, Bo Youn; Park, Young Joo

    2015-01-01

    Osteoporosis-related fractures are one of the complications of Graves' disease. This study hypothesized that the different actions of thyroid-stimulating hormone receptor (TSHR) antibodies, both stimulating and blocking activities in Graves' disease patients might oppositely impact bone turnover. Newly diagnosed premenopausal Graves' disease patients were enrolled (n = 93) and divided into two groups: patients with TSHR antibodies with thyroid-stimulating activity (stimulating activity group, n = 83) and patients with TSHR antibodies with thyroid-stimulating activity combined with blocking activity (blocking activity group, n = 10). From the stimulating activity group, patients who had matched values for free T4 and TSH binding inhibitor immunoglobulin (TBII) to the blocking activity group were further classified as stimulating activity-matched control (n = 11). Bone turnover markers BS-ALP, Osteocalcin, and C-telopeptide were significantly lower in the blocking activity group than in the stimulating activity or stimulating activity-matched control groups. The TBII level showed positive correlations with BS-ALP and osteocalcin levels in the stimulating activity group, while it had a negative correlation with the osteocalcin level in the blocking activity group. In conclusion, the activation of TSHR antibody-activated TSH signaling contributes to high bone turnover, independent of the actions of thyroid hormone, and thyroid-stimulation blocking antibody has protective effects against bone metabolism in Graves' disease.

  17. The Presence of Thyroid-Stimulation Blocking Antibody Prevents High Bone Turnover in Untreated Premenopausal Patients with Graves' Disease.

    Directory of Open Access Journals (Sweden)

    Sun Wook Cho

    Full Text Available Osteoporosis-related fractures are one of the complications of Graves' disease. This study hypothesized that the different actions of thyroid-stimulating hormone receptor (TSHR) antibodies, both stimulating and blocking activities in Graves' disease patients might oppositely impact bone turnover. Newly diagnosed premenopausal Graves' disease patients were enrolled (n = 93) and divided into two groups: patients with TSHR antibodies with thyroid-stimulating activity (stimulating activity group, n = 83) and patients with TSHR antibodies with thyroid-stimulating activity combined with blocking activity (blocking activity group, n = 10). From the stimulating activity group, patients who had matched values for free T4 and TSH binding inhibitor immunoglobulin (TBII) to the blocking activity group were further classified as stimulating activity-matched control (n = 11). Bone turnover markers BS-ALP, Osteocalcin, and C-telopeptide were significantly lower in the blocking activity group than in the stimulating activity or stimulating activity-matched control groups. The TBII level showed positive correlations with BS-ALP and osteocalcin levels in the stimulating activity group, while it had a negative correlation with the osteocalcin level in the blocking activity group. In conclusion, the activation of TSHR antibody-activated TSH signaling contributes to high bone turnover, independent of the actions of thyroid hormone, and thyroid-stimulation blocking antibody has protective effects against bone metabolism in Graves' disease.

  18. LDPC Codes with Minimum Distance Proportional to Block Size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. 
Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low
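For intuition about the minimum-distance property discussed above, here is a hedged, brute-force illustration in Python: it enumerates the codewords of a small binary code from its parity-check matrix and reports the minimum Hamming distance. The (7,4) Hamming code used here is only a convenient example; it is unrelated to the paper's LDPC ensembles, whose distances are analyzed via ensemble-average weight enumerators rather than enumeration.

```python
from itertools import product

# Parity-check matrix of the (7,4) Hamming code: column j is the binary
# representation of j, so all columns are distinct and nonzero.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def min_distance(H):
    """Minimum Hamming weight over all nonzero codewords c with H c = 0 (mod 2)."""
    n = len(H[0])
    best = None
    for c in product((0, 1), repeat=n):
        if not any(c):
            continue  # skip the all-zero codeword
        if all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H):
            w = sum(c)
            best = w if best is None else min(best, w)
    return best

print(min_distance(H))  # → 3 for the (7,4) Hamming code
```

For a linear code, minimum distance equals minimum nonzero codeword weight, which is why the enumeration above suffices; this brute force is exponential in block size and only viable for toy codes.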

  19. Pre-operative brachial plexus block compared with an identical block performed at the end of surgery: a prospective, double-blind, randomised clinical trial.

    Science.gov (United States)

    Holmberg, A; Sauter, A R; Klaastad, Ø; Draegni, T; Raeder, J C

    2017-08-01

    We evaluated whether pre-emptive analgesia with a pre-operative ultrasound-guided infraclavicular brachial plexus block resulted in better postoperative analgesia than an identical block performed postoperatively. Fifty-two patients undergoing fixation of a fractured radius were included. All patients received general anaesthesia with remifentanil and propofol. Patients were randomly allocated into two groups: a pre-operative block or a postoperative block with 0.5 ml.kg-1 ropivacaine 0.75%. After surgery, all patients received regular paracetamol plus opioids for breakthrough pain. Mean (SD) time to first rescue analgesic after emergence from general anaesthesia was 544 (217) min in the pre-operative block group compared with 343 (316) min in the postoperative block group (p = 0.015). Postoperative pain scores were higher and more patients required rescue analgesia during the first 4 h after surgery in the postoperative block group. There were no significant differences in plasma stress mediators between the groups. Analgesic consumption was lower at day seven in the pre-operative block group. Pain was described as very strong at block resolution in 27 (63%) patients and 26 (76%) had episodes of mild pain after 6 months. We conclude that a pre-operative ultrasound-guided infraclavicular brachial plexus block provides longer and better analgesia in the acute postoperative period compared with an identical postoperative block in patients undergoing surgery for fractured radius. © 2017 The Association of Anaesthetists of Great Britain and Ireland.

  20. Emotion regulation deficits in regular marijuana users.

    Science.gov (United States)

    Zimmermann, Kaeli; Walz, Christina; Derckx, Raissa T; Kendrick, Keith M; Weber, Bernd; Dore, Bruce; Ochsner, Kevin N; Hurlemann, René; Becker, Benjamin

    2017-08-01

    Effective regulation of negative affective states has been associated with mental health. Impaired regulation of negative affect represents a risk factor for dysfunctional coping mechanisms such as drug use and thus could contribute to the initiation and development of problematic substance use. This study investigated behavioral and neural indices of emotion regulation in regular marijuana users (n = 23) and demographically matched nonusing controls (n = 20) by means of an fMRI cognitive emotion regulation (reappraisal) paradigm. Relative to nonusing controls, marijuana users demonstrated increased neural activity in a bilateral frontal network comprising precentral, middle cingulate, and supplementary motor regions during reappraisal of negative affect (P marijuana users relative to controls. Together, the present findings could reflect an unsuccessful attempt of compensatory recruitment of additional neural resources in the context of disrupted amygdala-prefrontal interaction during volitional emotion regulation in marijuana users. As such, impaired volitional regulation of negative affect might represent a consequence of, or risk factor for, regular marijuana use. Hum Brain Mapp 38:4270-4279, 2017. © 2017 Wiley Periodicals, Inc.

  1. Block versus Random Amphiphilic Glycopolymer Nanoparticles as Glucose-Responsive Vehicles.

    Science.gov (United States)

    Guo, Qianqian; Zhang, Tianqi; An, Jinxia; Wu, Zhongming; Zhao, Yu; Dai, Xiaomei; Zhang, Xinge; Li, Chaoxing

    2015-10-12

    To explore the effect of polymer structure on self-assembled aggregates and their unique characteristics, this study developed a series of amphiphilic block and random phenylboronic acid-based glycopolymers by RAFT polymerization. The amphiphilic glycopolymers successfully self-assembled into spherically shaped nanoparticles with narrow size distribution in aqueous solution. For block and random copolymers with similar monomer compositions, the block copolymer nanoparticles exhibited a more regular transmittance change with increasing glucose level, while a more evident variation in size and a quicker decreasing tendency in I/I0 in different glucose media were observed for the random copolymer nanoparticles. Cell viability of all the polymer nanoparticles investigated by MTT assay was higher than 80%, indicating that both block and random copolymers had good cytocompatibility. Insulin could be encapsulated into both nanoparticles, and the insulin release rate for the random glycopolymer was slightly quicker than that for the block one. We speculate that the different chain conformations of block and random glycopolymers play an important role in the self-assembled nanoaggregates and their underlying glucose-sensitive behavior.

  2. A novel approach of ensuring layout regularity correct by construction in advanced technologies

    Science.gov (United States)

    Ahmed, Shafquat Jahan; Vaderiya, Yagnesh; Gupta, Radhika; Parthasarathy, Chittoor; Marin, Jean-Claude; Robert, Frederic

    2017-03-01

    In advanced technology nodes, layout regularity has become a mandatory prerequisite for creating robust designs that are less sensitive to variations in the manufacturing process, in order to improve yield and minimize electrical variability. In this paper we describe a method for designing regular full-custom layouts based on design and process co-optimization. The method includes various design rule checks that can be used on the fly during leaf-cell layout development. We extract a Layout Regularity Index (LRI) from the layouts based on the jogs, alignments and pitches used in the design for any given metal layer. The regularity index of a layout is a direct indicator of manufacturing yield and is used to compare the relative health of different layout blocks in terms of process friendliness. The method has been deployed for the 28 nm and 40 nm technology nodes for Memory IP and is being extended to other IPs (IO, standard-cell). We have quantified the gain of layout regularity with the deployed method on printability and electrical characteristics by process-variation (PV) band simulation analysis and have achieved up to 5 nm reduction in PV band.

  3. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection by employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size in the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies for selecting the regularization parameter in l1-regularized damage detection. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses be close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity of the damage identification problem to the regularization parameter.
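The second strategy (the discrepancy principle) can be sketched on the simplest possible l1 problem: an orthonormal design, where the l1 solution reduces to soft-thresholding of the data. The code below is an illustrative toy, not the paper's method: it sweeps a grid of regularization parameters and keeps the smallest one whose mean squared residual reaches an assumed known noise variance. Data and noise level are made up.

```python
def soft(v, lam):
    """Soft-thresholding operator, the closed-form l1 proximal step."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def discrepancy_lambda(y, sigma2, grid):
    """Smallest lambda on `grid` whose mean squared residual >= sigma2,
    for the identity-design problem min_x 0.5*||y - x||^2 + lam*||x||_1."""
    for lam in sorted(grid):
        x = [soft(v, lam) for v in y]                       # l1 solution
        mse = sum((a - b) ** 2 for a, b in zip(y, x)) / len(y)
        if mse >= sigma2:                                   # discrepancy reached
            return lam
    return max(grid)

# Sparse truth [5, 0, 0, -4, 0] observed with small perturbations:
y = [5.1, 0.2, -0.1, -3.9, 0.1]
lam = discrepancy_lambda(y, sigma2=0.015, grid=[0.05, 0.1, 0.15, 0.2, 0.3])
print(lam)  # → 0.15
```

Too small a lambda leaves the noise in the solution (residual below the noise floor); too large a lambda erases signal; the discrepancy principle picks the crossover.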

  4. Introducing passive matched field acoustic tomography

    International Nuclear Information System (INIS)

    Gasparini, O.; Camporeale, C.; Crise, A.

    1997-01-01

    In acoustic tomography, sea-basin environmental parameters such as temperature profiles and current velocities are derived, when ray propagation models are adopted, from the travel-time estimates of the identifiable ray paths. The transmitted signals are either single-frequency, or impulsive, or intermittent and deterministic. When the wavelength is comparable with the scale lengths present in the propagation scenario, Matched Field Tomography (MFT) is used, entailing the consideration of waveguide modes instead of rays. This paper introduces a new concept in tomography that passively employs the noise emitted by ships of opportunity (cargoes, ferries) as source signals. The passive technique is free of acoustic pollution, and if a basin with regular ship traffic is selected, data can be received on a regular schedule, with no transmission cost. A novel array pre-processor for passive tomography is introduced, such that the signal structure at the pre-processor output is nearly the same as that obtainable in the case of single-frequency source signals

  5. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Full Text Available Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  6. Processing SPARQL queries with regular expressions in RDF databases

    Directory of Open Access Journals (Sweden)

    Cho Hune

    2011-03-01

    Full Text Available Abstract Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing many online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query language for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. (1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. (2) We propose a cost model in order to adapt the proposed framework to existing query optimizers. (3) We build a prototype of the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
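The matching semantics of a SPARQL FILTER regex clause can be illustrated, very naively and with none of the paper's indexing or cost-model machinery, over an in-memory list of triples using Python's re module. All identifiers below are made up:

```python
import re

# (subject, predicate, object) triples, illustrative only.
triples = [
    ("uniprot:P01308", "rdfs:label", "Insulin"),
    ("uniprot:P68871", "rdfs:label", "Hemoglobin subunit beta"),
    ("uniprot:P00533", "rdfs:label", "Epidermal growth factor receptor"),
]

def filter_regex(triples, predicate, pattern):
    """Subjects whose object matches `pattern`, mimicking
    SELECT ?s WHERE { ?s <predicate> ?o . FILTER regex(?o, pattern, "i") }"""
    rx = re.compile(pattern, re.IGNORECASE)
    return [s for s, p, o in triples if p == predicate and rx.search(o)]

print(filter_regex(triples, "rdfs:label", "^hemo"))  # → ['uniprot:P68871']
```

An RDF engine cannot afford this full scan, which is exactly the motivation for the framework described in the abstract.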

  7. Technique to match mantle and para-aortic fields

    International Nuclear Information System (INIS)

    Lutz, W.R.; Larsen, R.D.

    1983-01-01

    A technique is described to match the mantle and para-aortic fields used in treatment of Hodgkin's disease, when the patient is treated alternately in supine and prone position. The approach is based on referencing the field edges to a point close to the vertebral column, where uncontrolled motion is minimal and where accurate matching is particularly important. Fiducial surface points are established in the simulation process to accomplish the objective. Dose distributions have been measured to study the combined effect of divergence differences, changes in body angulation and setup errors. Even with the most careful technique, the use of small cord blocks of 50% transmission is an advisable precaution for the posterior fields
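The abstract's technique references field edges to an internal point; a related, standard piece of match-line geometry (not taken from this paper) is the skin-gap calculation that makes the diverging edges of two adjacent fields meet at a chosen depth. A minimal sketch, with illustrative field sizes:

```python
def skin_gap(L1, L2, d, ssd1=100.0, ssd2=100.0):
    """Surface gap (cm) so that two adjacent fields of lengths L1, L2 (cm),
    at source-skin distances ssd1, ssd2 (cm), abut at depth d (cm):
    gap = (L1/2)*(d/ssd1) + (L2/2)*(d/ssd2)."""
    return 0.5 * L1 * d / ssd1 + 0.5 * L2 * d / ssd2

# 30 cm and 15 cm adjacent fields matched at 5 cm depth, 100 cm SSD:
print(skin_gap(30, 15, 5))  # → 1.125 (cm)
```

The formula is pure similar-triangle geometry of beam divergence; the paper's contribution is the referencing and setup procedure that makes such a match reproducible when the patient alternates between supine and prone positions.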

  8. Impedance-matched Marx generators

    Directory of Open Access Journals (Sweden)

    W. A. Stygar

    2017-04-01

    Full Text Available We have conceived a new class of prime-power sources for pulsed-power accelerators: impedance-matched Marx generators (IMGs). The fundamental building block of an IMG is a brick, which consists of two capacitors connected electrically in series with a single switch. An IMG comprises a single stage or several stages distributed axially and connected in series. Each stage is powered by a single brick or several bricks distributed azimuthally within the stage and connected in parallel. The stages of a multistage IMG drive an impedance-matched coaxial transmission line with a conical center conductor. When the stages are triggered sequentially to launch a coherent traveling wave along the coaxial line, the IMG achieves electromagnetic-power amplification by triggered emission of radiation. Hence a multistage IMG is a pulsed-power analogue of a laser. To illustrate the IMG approach to prime power, we have developed conceptual designs of two ten-stage IMGs with LC time constants on the order of 100 ns. One design includes 20 bricks per stage, and delivers a peak electrical power of 1.05 TW to a matched-impedance 1.22-Ω load. The design generates 113 kV per stage and has a maximum energy efficiency of 89%. The other design includes a single brick per stage, delivers 68 GW to a matched-impedance 19-Ω load, generates 113 kV per stage, and has a maximum energy efficiency of 90%. For a given electrical-power-output time history, an IMG is less expensive and slightly more efficient than a linear transformer driver, since an IMG does not use ferromagnetic cores.
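As a quick sanity check on the quoted figures, an idealized matched-load estimate P ≈ V²/Z, with V the erected voltage (10 stages × 113 kV), reproduces the stated peak powers. This is back-of-envelope arithmetic, not the authors' circuit analysis:

```python
# Idealized lossless estimate: peak power into a matched load is V^2 / Z.
stages, v_stage = 10, 113e3      # volts per stage, from the abstract
v = stages * v_stage             # ~1.13 MV erected voltage

p_tw = v * v / 1.22 / 1e12       # 1.22-ohm design, in TW
p_gw = v * v / 19.0 / 1e9        # 19-ohm design, in GW

print(round(p_tw, 2), round(p_gw, 1))  # → 1.05 67.2
```

The 1.22-Ω figure matches the quoted 1.05 TW exactly, and 67.2 GW is within rounding of the quoted 68 GW, consistent with a near-ideal (89-90% efficient) design.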

  9. Impedance-matched Marx generators

    Science.gov (United States)

    Stygar, W. A.; LeChien, K. R.; Mazarakis, M. G.; Savage, M. E.; Stoltzfus, B. S.; Austin, K. N.; Breden, E. W.; Cuneo, M. E.; Hutsel, B. T.; Lewis, S. A.; McKee, G. R.; Moore, J. K.; Mulville, T. D.; Muron, D. J.; Reisman, D. B.; Sceiford, M. E.; Wisher, M. L.

    2017-04-01

    We have conceived a new class of prime-power sources for pulsed-power accelerators: impedance-matched Marx generators (IMGs). The fundamental building block of an IMG is a brick, which consists of two capacitors connected electrically in series with a single switch. An IMG comprises a single stage or several stages distributed axially and connected in series. Each stage is powered by a single brick or several bricks distributed azimuthally within the stage and connected in parallel. The stages of a multistage IMG drive an impedance-matched coaxial transmission line with a conical center conductor. When the stages are triggered sequentially to launch a coherent traveling wave along the coaxial line, the IMG achieves electromagnetic-power amplification by triggered emission of radiation. Hence a multistage IMG is a pulsed-power analogue of a laser. To illustrate the IMG approach to prime power, we have developed conceptual designs of two ten-stage IMGs with LC time constants on the order of 100 ns. One design includes 20 bricks per stage, and delivers a peak electrical power of 1.05 TW to a matched-impedance 1.22-Ω load. The design generates 113 kV per stage and has a maximum energy efficiency of 89%. The other design includes a single brick per stage, delivers 68 GW to a matched-impedance 19-Ω load, generates 113 kV per stage, and has a maximum energy efficiency of 90%. For a given electrical-power-output time history, an IMG is less expensive and slightly more efficient than a linear transformer driver, since an IMG does not use ferromagnetic cores.

  10. IMPROVING SEMI-GLOBAL MATCHING: COST AGGREGATION AND CONFIDENCE MEASURE

    Directory of Open Access Journals (Sweden)

    P. d’Angelo

    2016-06-01

    Full Text Available Digital elevation models are one of the basic products that can be generated from remotely sensed imagery. The Semi Global Matching (SGM) algorithm is a robust and practical algorithm for dense image matching. The connection between SGM and Belief Propagation was recently developed, and based on it, improvements such as correction of over-counting the data term and a new confidence measure have been proposed. Later, the MGM algorithm was proposed; it aims at improving the regularization step of SGM, but has only been evaluated on the Middlebury stereo benchmark so far. This paper evaluates these proposed improvements on the ISPRS satellite stereo benchmark, using a Pleiades triplet and a Cartosat-1 stereo pair. The over-counting correction slightly improves matching density, at the expense of adding a few outliers. The MGM cost aggregation leads to a slight increase in accuracy.
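SGM's regularization step, which MGM aims to improve, is a per-path cost aggregation. The following Python sketch shows the standard aggregation recurrence along a single scanline (one of the several path directions SGM sums over); the toy cost volume and penalty values are illustrative:

```python
# C[p][d] is the matching cost of pixel p at disparity d; P1 and P2 are the
# small- and large-jump penalties that implement SGM's smoothness prior.

def aggregate_path(C, P1=1.0, P2=4.0):
    n, D = len(C), len(C[0])
    L = [C[0][:]]                       # first pixel: raw costs
    for p in range(1, n):
        prev = L[-1]
        m = min(prev)                   # best previous cost over all disparities
        row = []
        for d in range(D):
            candidates = [prev[d]]                      # same disparity
            if d > 0:
                candidates.append(prev[d - 1] + P1)     # jump of 1, small penalty
            if d < D - 1:
                candidates.append(prev[d + 1] + P1)
            candidates.append(m + P2)                   # larger jump, big penalty
            row.append(C[p][d] + min(candidates) - m)   # -m keeps values bounded
        L.append(row)
    return L

C = [[3, 0, 3], [3, 0, 3], [3, 3, 0]]   # toy costs: disparity jumps 1 -> 2
L = aggregate_path(C)
print([min(range(3), key=r.__getitem__) for r in L])  # → [1, 1, 2]
```

Full SGM runs this recurrence along 8 or 16 directions and sums the aggregated volumes before the winner-take-all disparity selection; the penalties turn noisy per-pixel costs into a piecewise-smooth disparity map.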

  11. Optimal adaptive normalized matched filter for large antenna arrays

    KAUST Repository

    Kammoun, Abla

    2016-09-13

    This paper focuses on the problem of detecting a target in the presence of a compound Gaussian clutter with unknown statistics. To this end, we focus on the design of the adaptive normalized matched filter (ANMF) detector which uses the regularized Tyler estimator (RTE) built from N-dimensional observations x_1, · · ·, x_n in order to estimate the clutter covariance matrix. The choice for the RTE is motivated by its possessing two major attributes: first its resilience to the presence of outliers, and second its regularization parameter that makes it more suitable to handle the scarcity in observations. In order to facilitate the design of the ANMF detector, we consider the regime in which n and N are both large. This allows us to derive closed-form expressions for the asymptotic false alarm and detection probabilities. Based on these expressions, we propose an asymptotically optimal setting for the regularization parameter of the RTE that maximizes the asymptotic detection probability while keeping the asymptotic false alarm probability below a certain threshold. Numerical results are provided in order to illustrate the gain of the proposed detector over a recently proposed setting of the regularization parameter.
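The regularized Tyler estimator at the heart of the detector can be sketched by its standard fixed-point iteration. The NumPy toy below is a hedged illustration: it implements the commonly used RTE recursion with a fixed shrinkage parameter rho, whereas the paper's contribution, the asymptotically optimal choice of rho, is not reproduced here.

```python
import numpy as np

# Standard RTE fixed-point iteration:
#   R <- (1 - rho) * (N/n) * sum_i x_i x_i^T / (x_i^T R^{-1} x_i) + rho * I
# Shrinkage toward the identity (rho) keeps R invertible when samples are scarce.

def tyler_rte(X, rho, iters=50):
    """X: n x N data matrix (n observations of dimension N)."""
    n, N = X.shape
    R = np.eye(N)
    for _ in range(iters):
        Rinv = np.linalg.inv(R)
        q = np.einsum("ij,jk,ik->i", X, Rinv, X)    # q_i = x_i^T R^-1 x_i
        S = (X.T / q) @ X                           # sum_i x_i x_i^T / q_i
        R = (1 - rho) * (N / n) * S + rho * np.eye(N)
    return R

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5))
R = tyler_rte(X, rho=0.3)
print(R.shape, np.allclose(R, R.T))  # a symmetric 5 x 5 estimate
```

Because each observation is normalized by its own quadratic form q_i, the estimate is insensitive to heavy-tailed scaling of individual samples, which is the robustness property the abstract refers to.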

  12. Optimal adaptive normalized matched filter for large antenna arrays

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frédéric; Alouini, Mohamed-Slim

    2016-01-01

    This paper focuses on the problem of detecting a target in the presence of a compound Gaussian clutter with unknown statistics. To this end, we focus on the design of the adaptive normalized matched filter (ANMF) detector which uses the regularized Tyler estimator (RTE) built from N-dimensional observations x_1, · · ·, x_n in order to estimate the clutter covariance matrix. The choice for the RTE is motivated by its possessing two major attributes: first its resilience to the presence of outliers, and second its regularization parameter that makes it more suitable to handle the scarcity in observations. In order to facilitate the design of the ANMF detector, we consider the regime in which n and N are both large. This allows us to derive closed-form expressions for the asymptotic false alarm and detection probabilities. Based on these expressions, we propose an asymptotically optimal setting for the regularization parameter of the RTE that maximizes the asymptotic detection probability while keeping the asymptotic false alarm probability below a certain threshold. Numerical results are provided in order to illustrate the gain of the proposed detector over a recently proposed setting of the regularization parameter.

  13. Interference by clustered regularly interspaced short palindromic repeat (CRISPR) RNA is governed by a seed sequence

    NARCIS (Netherlands)

    Semenova, E.V.; Jore, M.M.; Westra, E.R.; Oost, van der J.; Brouns, S.J.J.

    2011-01-01

    Prokaryotic clustered regularly interspaced short palindromic repeat (CRISPR)/Cas (CRISPR-associated sequences) systems provide adaptive immunity against viruses when a spacer sequence of small CRISPR RNA (crRNA) matches a protospacer sequence in the viral genome. Viruses that escape CRISPR/Cas

  14. Posterior Probability Matching and Human Perceptual Decision Making.

    Directory of Open Access Journals (Sweden)

    Richard F Murray

    2015-06-01

    Full Text Available Probability matching is a classic theory of decision making that was first developed in models of cognition. Posterior probability matching, a variant in which observers match their response probabilities to the posterior probability of each response being correct, is being used increasingly often in models of perception. However, little is known about whether posterior probability matching is consistent with the vast literature on vision and hearing that has developed within signal detection theory. Here we test posterior probability matching models using two tools from detection theory. First, we examine the models' performance in a two-pass experiment, where each block of trials is presented twice, and we measure the proportion of times that the model gives the same response twice to repeated stimuli. We show that at low performance levels, posterior probability matching models give highly inconsistent responses across repeated presentations of identical trials. We find that practised human observers are more consistent across repeated trials than these models predict, and we find some evidence that less practised observers are more consistent as well. Second, we compare the performance of posterior probability matching models on a discrimination task to the performance of a theoretical ideal observer that achieves the best possible performance. We find that posterior probability matching is very inefficient at low-to-moderate performance levels, and that human observers can be more efficient than is ever possible according to posterior probability matching models. These findings support classic signal detection models, and rule out a broad class of posterior probability matching models for expert performance on perceptual tasks that range in complexity from contrast discrimination to symmetry detection. 
However, our findings leave open the possibility that inexperienced observers may show posterior probability matching behaviour, and our methods
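The two-pass consistency argument has a simple back-of-envelope form: an observer who is correct with probability p and probability-matches repeats the same response on an identical trial with probability p² + (1−p)², which approaches chance-level consistency (0.5) as performance drops toward p = 0.5, whereas a deterministic rule is perfectly consistent. The numbers below are illustrative, not the paper's data:

```python
# Consistency of a probability-matching observer across two passes of the
# same trial: the two responses are independent draws with P(correct) = p,
# so P(same response twice) = p^2 + (1 - p)^2.

def matching_consistency(p):
    return p * p + (1 - p) * (1 - p)

for p in (0.55, 0.75, 0.95):
    print(p, round(matching_consistency(p), 3))
```

At p = 0.55 the matching observer repeats itself barely more than half the time, which is the "highly inconsistent responses at low performance levels" that human observers were found to exceed.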

  15. Analgesic Choice in Management of Rib Fractures: Paravertebral Block or Epidural Analgesia?

    Science.gov (United States)

    Malekpour, Mahdi; Hashmi, Ammar; Dove, James; Torres, Denise; Wild, Jeffrey

    2017-06-01

    Rib fractures are commonly encountered in the setting of trauma. The aim of this study was to assess the association between the clinical outcome of rib fracture and epidural analgesia (EA) versus paravertebral block (PVB) using the National Trauma Data Bank (NTDB). Using the 2011 and 2012 versions of the NTDB, we retrieved completed records for all patients above 18 years of age who were admitted with rib fractures. Primary outcome was in-hospital mortality. Secondary outcomes were length of stay (LOS), intensive care unit (ICU) admission, ICU LOS, mechanical ventilation, duration of mechanical ventilation, development of pneumonia, and development of any other complication. Clinical outcomes were first compared between propensity score-matched EA and PVB patients. Then, EA and PVB patients were combined into the procedure group and the outcomes were compared with propensity score-matched patients that received neither intervention (no-procedure group). A total of 194,766 patients were included in the study with 1073 patients having EA, 1110 patients having PVB, and 192,583 patients having neither procedure. After propensity score matching, comparison of primary and secondary outcomes between EA and PVB patients showed no difference. Comparison of propensity score-matched procedure and no-procedure patients showed prolonged LOS and more frequent ICU admissions in patients receiving a procedure (both P rib fractures. There was an association between use of a block and improved outcome, but this could be explained by selection of healthier patients to receive a block. Prospective study of this association is recommended.

  16. Automatic UAV Image Geo-Registration by Matching UAV Images to Georeferenced Image Data

    Directory of Open Access Journals (Sweden)

    Xiangyu Zhuo

    2017-04-01

    Full Text Available Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge for matching UAV and aerial images lies in the large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method is comprised of a dense feature detection scheme, a one-to-many matching strategy and a global geometric verification scheme. The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. Those matches can be used to geo-register the whole UAV image block towards the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also comparable global accuracy as the reference images.

  17. Emplacement of small and large buffer blocks

    International Nuclear Information System (INIS)

    Saari, H.; Nikula, M.; Suikki, M.

    2010-05-01

    The report describes emplacement of a buffer structure encircling a spent fuel canister to be deposited in a vertical hole. The report deals with the installability of various block sizes and with the emplacement gear, and evaluates the achieved quality of emplacement and the time needed for installing the buffer. Two block assemblies of unequal size were chosen for examination. The first option involved small blocks, the use of which resulted in a buffer structure consisting of small sector blocks 200 mm in height. The second option involved large blocks, resulting in a buffer structure consisting of eight blocks. In these tests, the material chosen for both block options was concrete instead of bentonite. The emplacement test was a three-phase process. The first phase included stacking a two meter high buffer structure with small blocks to ensure the operation of the test equipment and blocks. The second phase included installing buffer structures with both block options to a height matching that of the canister-encircling cylindrical component. The third phase included testing the installability of the blocks to be placed above the canister, using small blocks. In the emplacement tests, special attention was paid to the installability of the blocks as well as to the time required for emplacement. The lifters for both blocks worked well. Due to the mass to be lifted, the lifter for large blocks had a more heavy-duty frame structure (and other lifting gear). The lifters were suspended in the tests on a single steel wire rope. Stacking was achieved with both block sizes at adequate precision, and the stacked-up towers were steady. The stacking of large blocks was considerably faster; therefore it is probable that the overall handling of the large blocks will be more convenient at a final disposal site. From the standpoint of lifting reliability, the small blocks were safer to install above the canister. In large blocks, there are strict shape-related requirements which are

  18. Long-Term Interference at the Semantic Level: Evidence from Blocked-Cyclic Picture Matching

    Science.gov (United States)

    Wei, Tao; Schnur, Tatiana T.

    2016-01-01

    Processing semantically related stimuli creates interference across various domains of cognition, including language and memory. In this study, we identify the locus and mechanism of interference when retrieving meanings associated with words and pictures. Subjects matched a probe stimulus (e.g., cat) to its associated target picture (e.g., yarn)…

  19. Matching the quasiparton distribution in a momentum subtraction scheme

    Science.gov (United States)

    Stewart, Iain W.; Zhao, Yong

    2018-03-01

The quasiparton distribution is a spatial correlation of quarks or gluons along the z direction in a moving nucleon which enables direct lattice calculations of parton distribution functions. It can be defined with a nonperturbative renormalization in a regularization independent momentum subtraction scheme (RI/MOM), which can then be perturbatively related to the collinear parton distribution in the MS-bar scheme. Here we carry out a direct matching from the RI/MOM scheme for the quasi-PDF to the MS-bar PDF, determining the non-singlet quark matching coefficient at next-to-leading order in perturbation theory. We find that the RI/MOM matching coefficient is insensitive to the ultraviolet region of the convolution integral, exhibits improved perturbative convergence when converting between the quasi-PDF and PDF, and is consistent with a quasi-PDF that vanishes in the unphysical region as the proton momentum P_z → ∞, unlike other schemes. This direct approach therefore has the potential to improve the accuracy for converting quasidistribution lattice calculations to collinear distributions.

  20. Image super-resolution reconstruction based on regularization technique and guided filter

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Gu, Pei-ting; Liu, Pei-zhong; Luo, Yan-min

    2017-06-01

    In order to improve the accuracy of sparse representation coefficients and the quality of reconstructed images, an improved image super-resolution algorithm based on sparse representation is presented. In the sparse coding stage, the autoregressive (AR) regularization and the non-local (NL) similarity regularization are introduced to improve the sparse coding objective function. A group of AR models which describe the image local structures are pre-learned from the training samples, and one or several suitable AR models can be adaptively selected for each image patch to regularize the solution space. Then, the image non-local redundancy is obtained by the NL similarity regularization to preserve edges. In the process of computing the sparse representation coefficients, the feature-sign search algorithm is utilized instead of the conventional orthogonal matching pursuit algorithm to improve the accuracy of the sparse coefficients. To restore image details further, a global error compensation model based on weighted guided filter is proposed to realize error compensation for the reconstructed images. Experimental results demonstrate that compared with Bicubic, L1SR, SISR, GR, ANR, NE + LS, NE + NNLS, NE + LLE and A + (16 atoms) methods, the proposed approach has remarkable improvement in peak signal-to-noise ratio, structural similarity and subjective visual perception.

  1. SAD-Based Stereo Matching Using FPGAs

    Science.gov (United States)

    Ambrosch, Kristian; Humenberger, Martin; Kubinger, Wilfried; Steininger, Andreas

    In this chapter we present a field-programmable gate array (FPGA) based stereo matching architecture. This architecture uses the sum of absolute differences (SAD) algorithm and is targeted at automotive and robotics applications. The disparity maps are calculated using 450×375 input images and a disparity range of up to 150 pixels. We discuss two different implementation approaches for the SAD and analyze their resource usage. Furthermore, block sizes ranging from 3×3 up to 11×11 and their impact on the consumed logic elements as well as on the disparity map quality are discussed. The stereo matching architecture enables a frame rate of up to 600 fps by calculating the data in a highly parallel and pipelined fashion. This way, a software solution optimized by using Intel's Open Source Computer Vision Library running on an Intel Pentium 4 with 3 GHz clock frequency is outperformed by a factor of 400.
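
The SAD cost at the heart of this architecture is easy to sketch in software. Below is a minimal brute-force version (function and parameter names are illustrative, not from the chapter); the FPGA design computes the same absolute-difference sums for all candidate disparities in parallel rather than in nested loops.

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=16):
    """Brute-force SAD stereo matching: for each left-image block, find the
    horizontal shift d that minimizes the sum of absolute differences."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1,
                             x - d - r:x - d + r + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()  # sum of absolute differences
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The three nested candidate loops make the parallelization opportunity obvious: each pixel's disparity search is independent, which is what the pipelined hardware exploits.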

  2. An efficient, block-by-block algorithm for inverting a block tridiagonal, nearly block Toeplitz matrix

    International Nuclear Information System (INIS)

    Reuter, Matthew G; Hill, Judith C

    2012-01-01

    We present an algorithm for computing any block of the inverse of a block tridiagonal, nearly block Toeplitz matrix (defined as a block tridiagonal matrix with a small number of deviations from the purely block Toeplitz structure). By exploiting both the block tridiagonal and the nearly block Toeplitz structures, this method scales independently of the total number of blocks in the matrix and linearly with the number of deviations. Numerical studies demonstrate this scaling and the advantages of our method over alternatives.
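
For orientation, the generic block Thomas elimination for block tridiagonal systems can be sketched as follows (an assumption-laden illustration: the paper's method goes further by exploiting the nearly-block-Toeplitz repetition so that stretches of identical blocks need not be revisited, which this sketch does not do).

```python
import numpy as np

def block_thomas_solve(D, U, Lo, rhs):
    """Solve T x = rhs where T is block tridiagonal with diagonal blocks
    D[0..n-1], superdiagonal blocks U[0..n-2] and subdiagonal blocks
    Lo[0..n-2]; rhs is a list of right-hand-side blocks."""
    n = len(D)
    Dp, rp = [D[0]], [rhs[0]]
    for i in range(1, n):  # forward elimination
        m = Lo[i - 1] @ np.linalg.inv(Dp[i - 1])
        Dp.append(D[i] - m @ U[i - 1])
        rp.append(rhs[i] - m @ rp[i - 1])
    x = [None] * n
    x[n - 1] = np.linalg.solve(Dp[n - 1], rp[n - 1])
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = np.linalg.solve(Dp[i], rp[i] - U[i] @ x[i + 1])
    return np.concatenate(x)
```

This sweep costs O(n) block operations; the paper's scaling claim is stronger, since with a Toeplitz structure the eliminated blocks repeat and only deviations contribute extra work.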

  3. Constrained least squares regularization in PET

    International Nuclear Information System (INIS)

    Choudhury, K.R.; O'Sullivan, F.O.

    1996-01-01

Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches, and usually dominates, that of more elaborate maximum likelihood estimators, at a fraction of the computational effort.

  4. Interactive Block Games for Assessing Children's Cognitive Skills: Design and Preliminary Evaluation

    Directory of Open Access Journals (Sweden)

    Kiju Lee

    2018-05-01

Background: This paper presents the design and results from a preliminary evaluation of Tangible Geometric Games (TAG-Games) for cognitive assessment in young children. The TAG-Games technology employs a set of sensor-integrated cube blocks, called SIG-Blocks, and graphical user interfaces for test administration and real-time performance monitoring. TAG-Games were administered to children from 4 to 8 years of age to evaluate the preliminary efficacy of this new technology-based approach. Methods: Five different sets of SIG-Blocks, comprised of geometric shapes, segmented human faces, segmented animal faces, emoticons, and colors, were used for three types of TAG-Games: Assembly, Shape Matching, and Sequence Memory. Computational task difficulty measures were defined for each game and used to generate items of varying difficulty. For preliminary evaluation, TAG-Games were tested on 40 children. To explore the clinical utility of the information assessed by TAG-Games, three subtests of the age-appropriate Wechsler tests (i.e., Block Design, Matrix Reasoning, and Picture Concept) were also administered. Results: Internal consistency of TAG-Games was evaluated by the split-half reliability test. Weak to moderate correlations between Assembly and Block Design, Shape Matching and Matrix Reasoning, and Sequence Memory and Picture Concept were found. The computational measure of task complexity for each TAG-Game showed a significant correlation with participants' performance. In addition, age correlations on TAG-Game scores were found, implying the potential use of TAG-Games for assessing children's cognitive skills autonomously.

  5. Osmosis and thermodynamics explained by solute blocking.

    Science.gov (United States)

    Nelson, Peter Hugo

    2017-01-01

    A solute-blocking model is presented that provides a kinetic explanation of osmosis and ideal solution thermodynamics. It validates a diffusive model of osmosis that is distinct from the traditional convective flow model of osmosis. Osmotic equilibrium occurs when the fraction of water molecules in solution matches the fraction of pure water molecules that have enough energy to overcome the pressure difference. Solute-blocking also provides a kinetic explanation for why Raoult's law and the other colligative properties depend on the mole fraction (but not the size) of the solute particles, resulting in a novel kinetic explanation for the entropy of mixing and chemical potential of ideal solutions. Some of its novel predictions have been confirmed; others can be tested experimentally or by simulation.

  6. Osmosis and thermodynamics explained by solute blocking

    Science.gov (United States)

    Nelson, Peter Hugo

    2016-01-01

A solute-blocking model is presented that provides a kinetic explanation of osmosis and ideal solution thermodynamics. It validates a diffusive model of osmosis that is distinct from the traditional convective flow model of osmosis. Osmotic equilibrium occurs when the fraction of water molecules in solution matches the fraction of pure water molecules that have enough energy to overcome the pressure difference. Solute-blocking also provides a kinetic explanation for why Raoult's law and the other colligative properties depend on the mole fraction (but not the size) of the solute particles, resulting in a novel kinetic explanation for the entropy of mixing and chemical potential of ideal solutions. Some of its novel predictions have been confirmed; others can be tested experimentally or by simulation. PMID:27225298

  7. Block assembly for global registration of building scans

    KAUST Repository

    Yan, Feilong; Nan, Liangliang; Wonka, Peter

    2016-01-01

We propose a framework for global registration of building scans. The first contribution of our work is to detect and use portals (e.g., doors and windows) to improve the local registration between two scans. Our second contribution is an optimization based on a linear integer programming formulation. We abstract each scan as a block and model the registration of the blocks as an optimization problem that aims at maximizing the overall matching score of the entire scene. We propose an efficient solution to this optimization problem by iteratively detecting and adding local constraints. We demonstrate the effectiveness of the proposed method on buildings of various styles and that our approach is superior to the current state of the art.

  8. Block assembly for global registration of building scans

    KAUST Repository

    Yan, Feilong

    2016-11-11

We propose a framework for global registration of building scans. The first contribution of our work is to detect and use portals (e.g., doors and windows) to improve the local registration between two scans. Our second contribution is an optimization based on a linear integer programming formulation. We abstract each scan as a block and model the registration of the blocks as an optimization problem that aims at maximizing the overall matching score of the entire scene. We propose an efficient solution to this optimization problem by iteratively detecting and adding local constraints. We demonstrate the effectiveness of the proposed method on buildings of various styles and that our approach is superior to the current state of the art.

  9. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,
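
A graph is distance-regular when, for any two vertices at distance i, the number of neighbours of one that lie one step closer to the other (c_i) and one step farther (b_i) depends only on i, giving the intersection array {b_0, …, b_{d-1}; c_1, …, c_d}. A small self-contained check (illustrative, not from the survey) verifies this for the Petersen graph:

```python
from collections import deque

def petersen():
    # Outer 5-cycle, spokes, and inner pentagram (standard construction).
    edges = [(i, (i + 1) % 5) for i in range(5)]
    edges += [(i, i + 5) for i in range(5)]
    edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
    adj = {v: set() for v in range(10)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def intersection_array(adj):
    """Return (b, c) if the graph is distance-regular, else None."""
    dist = {v: bfs_dist(adj, v) for v in adj}
    d = max(max(dv.values()) for dv in dist.values())  # diameter
    b = [None] * d  # b_i for i = 0..d-1
    c = [None] * d  # c_i for i = 1..d
    for u in adj:
        for v in adj:
            i = dist[u][v]
            bi = sum(1 for w in adj[v] if dist[u][w] == i + 1)
            ci = sum(1 for w in adj[v] if dist[u][w] == i - 1)
            if i < d:
                if b[i] is None:
                    b[i] = bi
                elif b[i] != bi:
                    return None
            if i >= 1:
                if c[i - 1] is None:
                    c[i - 1] = ci
                elif c[i - 1] != ci:
                    return None
    return b, c
```

For the Petersen graph this yields the well-known intersection array {3, 2; 1, 1}.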

  10. Effect of cyclic block loading on character of deformation and strength of structural materials in plane stressed state

    International Nuclear Information System (INIS)

    Kul'chitskij, N.M.; Troshchenko, A.V.; Koval'chuk, B.I.; Khamaza, L.A.; Nikolaev, I.A.

    1982-01-01

The paper is concerned with the choice of conditions for preliminary cyclic block loading, the determination of fatigue failure resistance characteristics for various structural materials under regular and selected block loading, and investigation of the effect of preliminary cyclic loading on the regularities of elastoplastic deformation of the materials concerned in the biaxial stressed state. Under the selected conditions of cyclic block loading, the character of damage accumulation is close to the linear law for the materials of concern, a high-strength doped steel and VT6 alloys. These materials, in the initial state and after preliminary cyclic loading, are anisotropic. The axial direction is characterized by higher plastic strain resistance for the steel, and the tangential direction for the VT6 alloy. The generalized strain curves for the materials in question are not invariant with respect to the type of stressed state. It is stated that the effect of preliminary unsteady cyclic loading on the resistance and general regularities of material deformation in the complex stressed state is insignificant. It is observed that the stress-strain properties of the materials tend to vary in the following way: the plastic strain resistance of the steel lowers and that of VT6 rises, and the anisotropy of the materials decreases somewhat. The variation in material anisotropy may be attributed to a decrease in residual stresses resulting from preliminary cyclic loading.

  11. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a

  12. High Density Aerial Image Matching: State-Of and Future Prospects

    Science.gov (United States)

    Haala, N.; Cavegn, S.

    2016-06-01

Ongoing innovations in matching algorithms are continuously improving the quality of geometric surface representations generated automatically from aerial images. This development motivated the launch of the joint ISPRS/EuroSDR project "Benchmark on High Density Aerial Image Matching", which aims at evaluating photogrammetric 3D data capture in view of the current developments in dense multi-view stereo-image matching. Originally, the test aimed at image-based DSM computation from conventional aerial image flights for different land-use and image block configurations. The second phase then put an additional focus on high-quality, high-resolution 3D geometric data capture in complex urban areas. This includes both the extension of the test scenario to oblique aerial image flights and the generation of filtered point clouds as an additional output of the respective multi-view reconstruction. The paper uses the preliminary outcomes of the benchmark to demonstrate the state of the art in airborne image matching, with a special focus on high-quality geometric data capture in urban scenarios.

  13. LINE-BASED MULTI-IMAGE MATCHING FOR FAÇADE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    T. A. Teo

    2012-07-01

This research integrates existing LOD 2 building models and multiple close-range images for façade structural line extraction. The major tasks are orientation determination and multiple-image matching. In the orientation determination, Speeded Up Robust Features (SURF) is applied to extract tie points automatically. Then, tie points and control points are combined for block adjustment. An object-based multi-image matching method is proposed to extract the façade structural lines. The 2D lines in image space are extracted by the Canny operator followed by the Hough transform. The role of the LOD 2 building models is to correct the tilt displacement of images from different views. The wall of the LOD 2 model is also used to generate hypothesis planes for similarity measurement. Finally, the average normalized cross-correlation is calculated to obtain the best location in object space. The test images were acquired by a nonmetric camera, a Nikon D2X. The total number of images is 33. The experimental results indicate that the accuracy of the orientation determination is about 1 pixel from 2515 tie points and 4 control points. They also indicate that line-based matching is more flexible than point-based matching.

  14. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    Science.gov (United States)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
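
The centroid detection by regular (geometric) moments mentioned above uses m_pq = Σ_x Σ_y x^p y^q I(x, y), with centroid (m10/m00, m01/m00). A minimal sketch (names illustrative, not from the paper):

```python
import numpy as np

def centroid_by_moments(img):
    """Centroid of an intensity image from its zeroth and first regular
    moments: m_pq = sum over pixels of x^p * y^q * I(x, y)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()          # total intensity
    m10 = (xs * img).sum()   # first moment in x
    m01 = (ys * img).sum()   # first moment in y
    return m10 / m00, m01 / m00
```

Because the moments are intensity-weighted, the estimate is robust to the diffuse scattering typical of ISAR imagery, which is presumably why the paper favours it over simple peak detection.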

  15. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cogen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  16. A novel directional asymmetric sampling search algorithm for fast block-matching motion estimation

    Science.gov (United States)

    Li, Yue-e.; Wang, Qiang

    2011-11-01

This paper proposes a novel directional asymmetric sampling search (DASS) algorithm for video compression. Making full use of the error information (block distortions) of the search patterns, eight different direction search patterns are designed for various situations. The strategy of local sampling search is employed for the search of large motion vectors. In order to further speed up the search, an early termination strategy is adopted in the DASS procedure. Compared to conventional fast algorithms, the proposed method has the most satisfactory PSNR values for all test sequences.
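
The abstract's ingredients (SAD block distortion, a bounded search window, early termination) can be sketched as follows. This is an exhaustive search with partial-SAD early termination, shown for clarity; the paper's directional sampling patterns are not reproduced here, and all names are illustrative.

```python
import numpy as np

def best_motion_vector(ref, cur, bx, by, block=8, search=7):
    """SAD block matching over a (2*search+1)^2 window; a candidate is
    abandoned as soon as its running SAD exceeds the best cost so far."""
    h, w = ref.shape
    cur_blk = cur[by:by + block, bx:bx + block].astype(np.int64)
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if not (0 <= x <= w - block and 0 <= y <= h - block):
                continue
            cost = 0
            for row in range(block):  # accumulate SAD row by row
                cost += int(np.abs(ref[y + row, x:x + block].astype(np.int64)
                                   - cur_blk[row]).sum())
                if best_cost is not None and cost >= best_cost:
                    break  # early termination: cannot beat current best
            else:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv
```

Fast algorithms such as DASS keep this cost function but visit only a data-dependent subset of the window, trading a small PSNR loss for a large reduction in SAD evaluations.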

  17. High transmission acoustic focusing by impedance-matched acoustic meta-surfaces

    KAUST Repository

    Al Jahdali, Rasha

    2016-01-19

    Impedance is an important issue in the design of acoustic lenses because mismatched impedance is detrimental to real focusing applications. Here, we report two designs of acoustic lenses that focus acoustic waves in water and air, respectively. They are tailored by acoustic meta-surfaces, which are rigid thin plates decorated with periodically distributed sub-wavelength slits. Their respective building blocks are constructed from the coiling-up spaces in water and the layered structures in air. Analytic analysis based on coupled-mode theory and transfer matrix reveals that the impedances of the lenses are matched to those of the background media. With these impedance-matched acoustic lenses, we demonstrate the acoustic focusing effect by finite-element simulations.

  18. High transmission acoustic focusing by impedance-matched acoustic meta-surfaces

    KAUST Repository

    Al Jahdali, Rasha; Wu, Ying

    2016-01-01

    Impedance is an important issue in the design of acoustic lenses because mismatched impedance is detrimental to real focusing applications. Here, we report two designs of acoustic lenses that focus acoustic waves in water and air, respectively. They are tailored by acoustic meta-surfaces, which are rigid thin plates decorated with periodically distributed sub-wavelength slits. Their respective building blocks are constructed from the coiling-up spaces in water and the layered structures in air. Analytic analysis based on coupled-mode theory and transfer matrix reveals that the impedances of the lenses are matched to those of the background media. With these impedance-matched acoustic lenses, we demonstrate the acoustic focusing effect by finite-element simulations.

  19. Assessing the match between female primary students’ anthropometric dimensions and furniture dimensions in Hamadan schools in 2013

    Directory of Open Access Journals (Sweden)

    R. Heidarimoghadam

    2015-04-01

Conclusion: Despite differences in the body dimensions of primary school students, there is no regularity in the use of school furniture. Overall, the dimensions of the existing benches and desks do not match the anthropometric dimensions of the students.

  20. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    Science.gov (United States)

    Mizutani, Eiji; Demmel, James W

    2003-01-01

This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in the statistical sense) depending on problem scale, so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).

  1. Slip-spring model of entangled rod-coil block copolymers

    Science.gov (United States)

    Wang, Muzhou; Likhtman, Alexei E.; Olsen, Bradley D.

    2015-03-01

    Understanding the dynamics of rod-coil block copolymers is important for optimal design of functional nanostructured materials for organic electronics and biomaterials. Recently, we proposed a reptation theory of entangled rod-coil block copolymers, predicting the relaxation mechanisms of activated reptation and arm retraction that slow rod-coil dynamics relative to coil and rod homopolymers, respectively. In this work, we introduce a coarse-grained slip-spring model of rod-coil block copolymers to further explore these mechanisms. First, parameters of the coarse-grained model are tuned to match previous molecular dynamics simulation results for coils, rods, and block copolymers. For activated reptation, rod-coil copolymers are shown to disfavor configurations where the rod occupies curved portions of the entanglement tube of randomly varying curvature created by the coil ends. The effect of these barriers on diffusion is quantitatively captured by considering one-dimensional motion along an entanglement tube with a rough free energy potential. Finally, we analyze the crossover between the two mechanisms. The resulting dynamics from both mechanisms acting in combination is faster than from each one individually.

  2. Regular use of aspirin and pancreatic cancer risk

    Directory of Open Access Journals (Sweden)

    Mahoney Martin C

    2002-09-01

Background: Regular use of aspirin and other non-steroidal anti-inflammatory drugs (NSAIDs) has been consistently associated with reduced risk of colorectal cancer and adenoma, and there is some evidence for a protective effect for other types of cancer. As experimental studies reveal a possible role for NSAIDs in reducing the risk of pancreatic cancer, epidemiological studies examining similar associations in human populations become more important. Methods: In this hospital-based case-control study, 194 patients with pancreatic cancer were compared to 582 age- and sex-matched patients with non-neoplastic conditions to examine the association between aspirin use and risk of pancreatic cancer. All participants received medical services at the Roswell Park Cancer Institute in Buffalo, NY and completed a comprehensive epidemiologic questionnaire that included information on demographics, lifestyle factors and medical history as well as frequency and duration of aspirin use. Patients using at least one tablet per week for at least six months were classified as regular aspirin users. Unconditional logistic regression was used to compute crude and adjusted odds ratios (ORs) with 95% confidence intervals (CIs). Results: Pancreatic cancer risk in aspirin users was not changed relative to non-users (adjusted OR = 1.00; 95% CI 0.72–1.39). No significant change in risk was found in relation to greater frequency or prolonged duration of use, in the total sample or in either gender. Conclusions: These data suggest that regular aspirin use may not be associated with lower risk of pancreatic cancer.
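
For reference, a crude odds ratio of the kind reported above, with a Wald-type 95% confidence interval, can be computed from a 2×2 exposure table. This is an illustration only; the study's adjusted ORs come from unconditional logistic regression, not this closed form, and the counts below are hypothetical.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table with a Wald 95% CI:
       a = exposed cases,   b = exposed controls,
       c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An OR whose CI straddles 1.0, as in the study's adjusted OR = 1.00 (95% CI 0.72–1.39), indicates no detectable association.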

  3. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
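
In its basic direct form, Tikhonov regularization with a general operator L solves min_x ||Ax − b||² + λ²||Lx||². A minimal dense-matrix sketch follows (the paper's iterative Golub-Kahan-based method targets problems where forming these normal equations is infeasible; names here are illustrative):

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||L x||^2 via the stacked
    normal equations (A^T A + lam^2 L^T L) x = A^T b."""
    lhs = A.T @ A + lam**2 * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)
```

With λ = 0 this reduces to ordinary least squares; increasing λ trades fidelity to the data for smallness of Lx, which is how the regularization operator encodes prior knowledge (e.g., L as a discrete derivative enforces smoothness).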

  4. The match-to-match variation of match-running in elite female soccer.

    Science.gov (United States)

    Trewin, Joshua; Meylan, César; Varley, Matthew C; Cronin, John

    2018-02-01

The purpose of this study was to examine the match-to-match variation of match-running in elite female soccer players utilising GPS, using full-match and rolling period analyses. Longitudinal study. Elite female soccer players (n=45) from the same national team were observed during 55 international fixtures across 5 years (2012-2016). Data were analysed using a custom-built MS Excel spreadsheet as full-matches and using a rolling 5-min analysis period, for all players who played 90-min matches (files=172). Variation was examined using the coefficient of variation and 90% confidence limits, calculated following log transformation. Total distance per minute exhibited the smallest variation when both the full-match and peak 5-min running periods were examined (CV=6.8-7.2%). Sprint efforts were the most variable during a full match (CV=53%), whilst high-speed running per minute exhibited the greatest variation in the post-peak 5-min period (CV=143%). Peak running periods were observed to be slightly more variable than full-match analyses, with the post-peak period very highly variable. Variability of accelerations (CV=17%) and Player Load (CV=14%) was lower than that of high-speed actions. Positional differences were also present, with centre backs exhibiting the greatest variation in high-speed movements (CV=41-65%). Practitioners and researchers should account for within-player variability when examining match performances. Identification of peak running periods should be used to assist worst-case scenario analyses, whilst micro-sensor technology should be further examined as to its viable use within match analyses. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  5. Chemical Interactions and Their Role in the Microphase Separation of Block Copolymer Thin Films

    Directory of Open Access Journals (Sweden)

    Richard A. Farrell

    2009-08-01

The thermodynamics of self-assembling systems are discussed in terms of the chemical interactions and the intermolecular forces between species. It is clear that there are both theoretical and practical limitations on the dimensions and the structural regularity of these systems. These considerations are made with reference to the microphase separation that occurs in block copolymer (BCP) systems. BCP systems self-assemble via a thermodynamically driven process where chemical dis-affinity between the blocks, driving them apart, is balanced by a restorative force deriving from the chemical bond between the blocks. These systems are attracting much interest because of their possible role in nanoelectronic fabrication. This form of self-assembly can yield highly regular nanopatterns in certain circumstances, where the orientation and alignment of chemically distinct blocks can be guided through molecular interactions between the polymer and the surrounding interfaces. However, for this to be possible, great care must be taken to properly engineer the interactions between the surfaces and the polymer blocks. The optimum methods of structure directing are chemical pre-patterning (defining regions on the substrate of different chemistry) and graphoepitaxy (topographical alignment), but both centre on generating alignment through favourable chemical interactions. As in all self-assembling systems, the problems of defect formation must be considered, and the origin of defects in these systems is explored. It is argued that in these nanostructures equilibrium defects are relatively few and largely originate from kinetic effects arising during film growth. Many defects also arise from the confinement of the systems when they are 'directed' by topography. The potential applications of these materials in electronics are discussed.

  6. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp

  7. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    KAUST Repository

    Fsheikh, Ahmed H.

    2013-01-01

    A nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of reservoir models is presented. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the most correlated components of the basis functions with the residual. The discovered basis (aka support) is augmented across the nonlinear iterations. Once the basis functions are selected from the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives to efficiently approximate gradients. In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm.
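
The linear ancestor of NOMP, plain orthogonal matching pursuit, makes the greedy support-discovery step concrete. The sketch below is over a generic linear dictionary and is not the paper's algorithm: NOMP replaces the linear forward model with nonlinear residuals, ensemble-approximated gradients (ISEM), and a Tikhonov-regularized solve on the selected support.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: at each iteration, add the
    dictionary column most correlated with the residual to the support,
    then re-fit the coefficients by least squares on that support."""
    support = []
    residual = y.astype(float).copy()
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

The least-squares re-fit on the whole support (rather than a per-atom update) is what distinguishes *orthogonal* MP from plain matching pursuit, and it is the step the paper replaces with a Tikhonov-regularized solve.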

  8. Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH

    International Nuclear Information System (INIS)

    Volk, Jochen; Herrmann, Torsten; Wuethrich, Kurt

    2008-01-01

    MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness

  9. Ankle Block vs Single-Shot Popliteal Fossa Block as Primary Anesthesia for Forefoot Operative Procedures: Prospective, Randomized Comparison.

    Science.gov (United States)

    Schipper, Oliver N; Hunt, Kenneth J; Anderson, Robert B; Davis, W Hodges; Jones, Carroll P; Cohen, Bruce E

    2017-11-01

    Postoperative pain is often difficult to control with oral medications, requiring large doses of opioid analgesia. Regional anesthesia may be used for primary anesthesia, reducing the need for general anesthetic and postoperative pain medication requirements in the immediate postoperative period. The purpose of this study was to compare the analgesic effects of an ankle block (AB) to a single-shot popliteal fossa block (PFB) for patients undergoing orthopedic forefoot procedures. All patients having elective outpatient orthopedic forefoot procedures were invited to participate in the study. Patients were prospectively randomized to receive either an ultrasound-guided AB or PFB by a board-certified anesthesiologist prior to their procedure. Intraoperative conversion to general anesthesia and postanesthesia care unit (PACU) opioid requirements were recorded. Postoperative pain was assessed using the visual analog scale (VAS) at regular time intervals until 8 am on postoperative day (POD) 2. Patients rated the effectiveness of the block on a 1 to 5 scale, with 5 being very effective. A total of 167 patients participated in the study with 88 patients (53%) receiving an AB and 79 (47%) receiving a single-shot PFB. There was no significant difference in the rate of conversion to general anesthesia between the 2 groups (13.6% [12/88] AB vs 12.7% [10/79] PFB). PACU morphine requirements and doses were significantly reduced in the PFB group ( P = .004) when compared to the AB group. The VAS was also significantly lower for the PFB patients at 10 pm on POD 0 (4.6 vs 1.6, P block site pain and/or erythema (AB 6.9% [6/88] vs PFB 5.1% [4/79], P = .44). The analgesic effect of the PFB lasted significantly longer when compared to the ankle block (AB 14.5 hours vs PFB 20.9 hours, P block between the 2 groups, with both blocks being highly effective (AB 4.79/5 vs PFB 4.82/5, P = .68). Regional anesthesia was a safe and reliable adjunct to perioperative pain management and highly

  10. Efficient Stereoscopic Video Matching and Map Reconstruction for a Wheeled Mobile Robot

    Directory of Open Access Journals (Sweden)

    Oscar Montiel-Ross

    2012-10-01

    Full Text Available This paper presents a novel method to achieve stereoscopic vision for mobile robot (MR) navigation with the advantage of not needing camera calibration for depth (distance) estimation measurements. It uses an adaptive candidate matching window for block-matching stereoscopic correspondence, resulting in improvements in efficiency and accuracy: an average time reduction of 40% in the calculation process is obtained. All the algorithms for navigation, including the stereoscopic vision module, were implemented using an original computer architecture for the Virtex 5 FPGA, where a distributed multicore processor system was embedded and coordinated using the Message Passing Interface.
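The block-matching correspondence idea can be illustrated with a toy sum-of-absolute-differences (SAD) search over candidate disparities; the images, window size, and disparity range below are invented, and the paper's adaptive-window refinement is not reproduced.

```python
def sad(left, right, row, col, disp, win):
    """SAD cost between a window in the left image and the same window
    shifted left by `disp` pixels in the right image."""
    h = win // 2
    cost = 0
    for r in range(row - h, row + h + 1):
        for c in range(col - h, col + h + 1):
            cost += abs(left[r][c] - right[r][c - disp])
    return cost

def best_disparity(left, right, row, col, max_disp=3, win=3):
    """Pick the disparity whose window has the lowest SAD cost."""
    return min(range(max_disp + 1),
               key=lambda d: sad(left, right, row, col, d, win))
```

The winning disparity is inversely proportional to scene depth, which is why block matching yields a depth map without explicit calibration of absolute distances.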

  11. Unsupervised seismic facies analysis with spatial constraints using regularized fuzzy c-means

    Science.gov (United States)

    Song, Chengyun; Liu, Zhining; Cai, Hanpeng; Wang, Yaojun; Li, Xingming; Hu, Guangmin

    2017-12-01

    Seismic facies analysis techniques combine classification algorithms and seismic attributes to generate a map that describes the main reservoir heterogeneities. However, most of the current classification algorithms view the seismic attributes only as isolated data regardless of their spatial locations, and the resulting map is generally sensitive to noise. In this paper, a regularized fuzzy c-means (RegFCM) algorithm is used for unsupervised seismic facies analysis. Due to the regularization term of the RegFCM algorithm, data whose adjacent locations belong to the same class play a more important role in the iterative process than other data. Therefore, this method can reduce the effect of seismic data noise present in discontinuous regions. Synthetic data with different signal-to-noise ratios are used to demonstrate the noise tolerance of the RegFCM algorithm. Meanwhile, the fuzzy factor, the neighbour window size and the regularization weight are tested using various values, to provide a reference for how to set these parameters. The new approach is also applied to a real seismic data set from the F3 block of the Netherlands. The results show improved spatial continuity, with clear facies boundaries and channel morphology, which reveals that the method is an effective seismic facies analysis tool.
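A minimal illustrative sketch of the idea (not the paper's RegFCM): a standard fuzzy c-means membership update on a 1-D attribute trace, followed by a neighbourhood averaging of memberships that mimics the spatial regularization. The data, centers, and fuzzy factor `m` are invented.

```python
def fcm_memberships(data, centers, m=2.0):
    """Standard FCM membership update for fixed cluster centers."""
    u = []
    for x in data:
        d = [abs(x - c) + 1e-9 for c in centers]  # avoid division by zero
        row = []
        for i in range(len(centers)):
            s = sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                    for j in range(len(centers)))
            row.append(1.0 / s)
        u.append(row)
    return u

def smooth_memberships(u, win=1):
    """Average each sample's memberships with its spatial neighbours,
    so spatially coherent samples reinforce each other."""
    out = []
    for k in range(len(u)):
        lo, hi = max(0, k - win), min(len(u), k + win + 1)
        out.append([sum(row[i] for row in u[lo:hi]) / (hi - lo)
                    for i in range(len(u[0]))])
    return out
```

In the paper the spatial term enters the objective function itself; averaging memberships afterwards is only a stand-in that shows why neighbouring samples suppress isolated noisy assignments.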

  12. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  13. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression.

  14. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) First do all algebra exactly as in D = 4; (2) Then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed.

  15. Image degradation characteristics and restoration based on regularization for diffractive imaging

    Science.gov (United States)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend of ultra large aperture and lightweight space camera. However, related investigations on physics-based diffractive imaging degradation characteristics and corresponding image restoration methods are less studied. In this paper, the model of image quality degradation for the diffraction imaging system is first deduced mathematically based on diffraction theory and then the degradation characteristics are analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, the solving approach of the equation with the multi-norm coexistence and multi-regularization parameters (prior's parameters) is presented. Subsequently, the space-variant PSF image restoration method for large aperture diffractive imaging system is proposed combined with block idea of isoplanatic region. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement including MTF enhancing, dispersion correcting, noise and artifact suppressing as well as image's detail preserving, and produce satisfactory visual quality. This can provide scientific basis for applications and possesses potential application prospects on future space applications of diffractive membrane imaging technology.

  16. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization.

  17. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    Full Text Available In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.

  18. Identifying factors affecting the safety of mid-block bicycle lanes considering mixed 2-wheeled traffic flow.

    Science.gov (United States)

    Bai, Lu; Chan, Ching-Yao; Liu, Pan; Xu, Chengcheng

    2017-10-03

    Electric bikes (e-bikes) have been one of the fastest growing trip modes in Southeast Asia over the past 2 decades. The increasing popularity of e-bikes raised some safety concerns regarding urban transport systems. The primary objective of this study was to identify whether and how the generalized linear regression model (GLM) could be used to relate cyclists' safety with various contributing factors when riding in a mid-block bike lane. The types of 2-wheeled vehicles in the study included bicycle-style electric bicycles (BSEBs), scooter-style electric bicycles (SSEBs), and regular bicycles (RBs). Traffic conflict technology was applied as a surrogate measure to evaluate the safety of 2-wheeled vehicles. The safety performance model was developed by adopting a generalized linear regression model for relating the frequency of rear-end conflicts between e-bikes and regular bikes to the operating speeds of BSEBs, SSEBs, and RBs in mid-block bike lanes. The frequency of rear-end conflicts between e-bikes and bikes increased with an increase in the operating speeds of e-bikes and the volume of e-bikes and bikes and decreased with an increase in the width of bike lanes. The large speed difference between e-bikes and bikes increased the frequency of rear-end conflicts between e-bikes and bikes in mid-block bike lanes. A 1% increase in the average operating speed of e-bikes would increase the expected number of rear-end conflicts between e-bikes and bikes by 1.48%. A 1% increase in the speed difference between e-bikes and bikes would increase the expected number of rear-end conflicts between e-bikes/bikes by 0.16%. The conflict frequency in mid-block bike lanes can be modeled using generalized linear regression models. The factors that significantly affected the frequency of rear-end conflicts included the operating speeds of e-bikes, the speed difference between e-bikes and regular bikes, the volume of e-bikes, the volume of bikes, and the width of bike lanes. The
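The quoted elasticities behave as in a log-log (Poisson-type) GLM: with expected conflicts proportional to speed**1.48, a 1% increase in e-bike operating speed raises the expected rear-end conflicts by about 1.48%. The sketch below illustrates this arithmetic; the baseline count and the speed values are invented placeholders, and only the two elasticities come from the abstract.

```python
def expected_conflicts(speed, speed_diff, base=10.0,
                       b_speed=1.48, b_diff=0.16):
    # log E[y] = log(base) + b_speed*log(speed) + b_diff*log(speed_diff)
    return base * speed ** b_speed * speed_diff ** b_diff

# A 1% increase in operating speed, everything else held fixed:
before = expected_conflicts(20.0, 5.0)
after = expected_conflicts(20.0 * 1.01, 5.0)
pct_change = 100.0 * (after / before - 1.0)  # ~1.48% more conflicts
```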

  19. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....

  20. Development of a Blocking ELISA Using a Monoclonal Antibody to a Dominant Epitope in Non-Structural Protein 3A of Foot-and-Mouth Disease Virus, as a Matching Test for a Negative-Marker Vaccine.

    Directory of Open Access Journals (Sweden)

    Yuanfang Fu

    Full Text Available Foot-and-mouth disease (FMD) is a devastating animal disease. Strategies for differentiation of infected from vaccinated animals (DIVA) remain very important for controlling disease. Development of an epitope-deleted marker vaccine and accompanying diagnostic method will improve the efficiency of DIVA. Here, a monoclonal antibody (Mab) was found to recognize a conserved "AEKNPLE" epitope spanning amino acids 109-115 of non-structural protein (NSP) 3A of foot-and-mouth disease virus (FMDV; O/Tibet/CHA/99 strain), which could be deleted by a reverse-genetic procedure. In addition, a blocking ELISA was developed based on this Mab against NSP 3A, which could serve as a matching test for a negative-marker vaccine. The criterion of this blocking ELISA was determined by detecting panels of sera from different origins. The serum samples with a percentage inhibition (PI) equal to or greater than 50% were considered to be from infected animals, and those with <50% PI were considered to be from non-infected animals. This test showed similar performance when compared with 2 other blocking ELISAs based on an anti-NSP 3B Mab. This is the first report of the DIVA test for an NSP antibody based on a Mab against the conserved and predominant "AEKNPLE" epitope in NSP 3A of FMDV.

  1. Matching theory

    CERN Document Server

    Plummer, MD

    1986-01-01

    This study of matching theory deals with bipartite matching, network flows, and presents fundamental results for the non-bipartite case. It goes on to study elementary bipartite graphs and elementary graphs in general. Further discussed are 2-matchings, general matching problems as linear programs, the Edmonds Matching Algorithm (and other algorithmic approaches), f-factors and vertex packing.

  2. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  3. Differences in rheological profile of regular diesel and bio-diesel fuel

    Directory of Open Access Journals (Sweden)

    Jiří Čupera

    2010-01-01

    Full Text Available Biodiesel represents a promising alternative to regular fossil diesel. Fuel viscosity markedly influences injection, spraying and combustion; viscosity is thus a critical factor to be evaluated and monitored. This work focuses on quantifying the differences in temperature-dependent kinematic viscosity between regular diesel fuel and B30 biodiesel fuel. The samples were assumed to be Newtonian fluids. Viscosity was measured on a digital rotary viscometer in a range of 0 to 80 °C. A more significant difference between minimum and maximum values was found for diesel fuel than for biodiesel fuel. The temperature dependence of both fuels was modeled using several mathematical models – polynomial, power and Gaussian equations. The Gaussian fit offers the best match between experimental and computed data. Description of the viscosity behavior of fuels is critically important, e.g. when considering or calculating running efficiency and performance of combustion engines. The models proposed in this work may be used as a tool for precise prediction of the rheological behavior of diesel-type fuels.
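A Gaussian temperature-viscosity model of the form nu(T) = a * exp(-((T - b) / c) ** 2) can be sketched as follows; the parameter values are invented for illustration, and a coarse grid search stands in for a proper nonlinear least-squares fit.

```python
import math

def gaussian_model(T, a, b, c):
    # nu(T) = a * exp(-((T - b) / c) ** 2), decreasing over 0-80 degC
    return a * math.exp(-((T - b) / c) ** 2)

def fit_gaussian(temps, viscs, grid):
    """Brute-force least squares over a small candidate-parameter grid."""
    best, best_err = None, float("inf")
    for a, b, c in grid:
        err = sum((gaussian_model(T, a, b, c) - v) ** 2
                  for T, v in zip(temps, viscs))
        if err < best_err:
            best, best_err = (a, b, c), err
    return best
```

In practice one would use a nonlinear optimizer (e.g. Levenberg-Marquardt) rather than a grid, but the least-squares criterion being minimized is the same.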

  4. Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks

    Science.gov (United States)

    Leube, P.; Nowak, W.; Sanchez-Vila, X.

    2013-12-01

    High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tailings. Adequate direct representation of FPM requires enormous numerical resolutions. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For predicting higher TM-orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. 
We also found that prediction accuracy is sensitive to the choice of
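The block-wise temporal moments (TM) used for matching can be sketched directly: the k-th raw moment of particle arrival times within a block, computed here for an invented sample of arrival times.

```python
def temporal_moment(arrivals, k):
    """k-th raw temporal moment of block-wise particle arrival times."""
    return sum(t ** k for t in arrivals) / len(arrivals)

arrivals = [1.0, 2.0, 4.0, 9.0]  # invented arrival times in one block
mean_arrival = temporal_moment(arrivals, 1)                 # first moment
spread = temporal_moment(arrivals, 2) - mean_arrival ** 2   # central 2nd moment
```

The first moment captures mean arrival time and the second central moment the effective dispersion; matching these low-order moments is what calibrates the coarse-scale MRMT parameters to the sub-block behavior.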

  5. USING THE ADAPTED DLP SYSTEM FOR BLOCKING INFORMATION LEAKS

    Directory of Open Access Journals (Sweden)

    T. A. Andryianava

    2017-01-01

    Full Text Available The importance of using an adapted DLP-system in the «Blocking» mode to prevent leaks of a company's confidential information is investigated. The scheme of interception of information security events in the «Copy» mode is given; its analysis reveals the main drawback of this mode – the DLP-system works only with copies of confidential documents, while the originals are delivered to the recipient. Such cases inflict enormous damage on companies, so the transfer of critical information beyond the corporate network is unacceptable. A solution is proposed for transferring the operation of the DLP-system from the «Copy» mode to the «Blocking» mode. It is important that the operation of the DLP-system does not prevent staff from performing regular operations and does not impede business processes. Therefore, it is mandatory to adapt the standard DLP-system to the specifics of the company’s activities. After that, the transition of the adapted DLP-system to the «Blocking» mode is carried out. The following were developed: the transition procedure of the adapted DLP-system from the «Copy» mode to the «Blocking» mode, and the scheme of event capture by the DLP-system for the two modes. The main channels of data leaks were investigated, and the main leaks were identified by data type and by transmission channel. The analysis of the DLP-system operation in the «Blocking» mode is performed.

  6. Image segmentation with a novel regularized composite shape prior based on surrogate study

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy, when compared to the multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance compared to typical benchmark schemes.

  7. Image segmentation with a novel regularized composite shape prior based on surrogate study

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2016-01-01

    Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy, when compared to the multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance compared to typical benchmark schemes.

  8. Screening synteny blocks in pairwise genome comparisons through integer programming.

    Science.gov (United States)

    Tang, Haibao; Lyons, Eric; Pedersen, Brent; Schnable, James C; Paterson, Andrew H; Freeling, Michael

    2011-04-18

    It is difficult to accurately interpret chromosomal correspondences such as true orthology and paralogy due to significant divergence of genomes from a common ancestor. Analyses are particularly problematic among lineages that have repeatedly experienced whole genome duplication (WGD) events. To compare multiple "subgenomes" derived from genome duplications, we need to relax the traditional requirements of "one-to-one" syntenic matchings of genomic regions in order to reflect "one-to-many" or more generally "many-to-many" matchings. However, this relaxation may result in the identification of synteny blocks that are derived from ancient shared WGDs that are not of interest. For many downstream analyses, we need to eliminate weak, low scoring alignments from pairwise genome comparisons. Our goal is to objectively select a subset of synteny blocks whose total scores are maximized while respecting the duplication history of the genomes in comparison. We call this "quota-based" screening of synteny blocks in order to appropriately fill a quota of syntenic relationships within one genome or between two genomes having WGD events. We have formulated the synteny block screening as an optimization problem known as "Binary Integer Programming" (BIP), which is solved using existing linear programming solvers. The computer program QUOTA-ALIGN performs this task by creating a clear objective function that maximizes the compatible set of synteny blocks under given constraints on overlaps and depths (corresponding to the duplication history in respective genomes). Such a procedure is useful for any pairwise synteny alignments, but is most useful in lineages affected by multiple WGDs, like plants or fish lineages. For example, there should be a 1:2 ploidy relationship between genome A and B if genome B had an independent WGD subsequent to the divergence of the two genomes. We show through simulations and real examples using plant genomes in the rosid superorder that the quota
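A toy brute-force version of the quota-based screening idea (enumeration in place of QUOTA-ALIGN's binary integer programming solver): pick the subset of candidate synteny blocks with maximum total score such that each region on either genome is used at most its quota of times. The blocks, scores, and quotas below are invented.

```python
from itertools import combinations

def quota_screen(blocks, quota_a, quota_b):
    """blocks: list of (score, region_a, region_b) candidate synteny blocks.
    Returns the best-scoring feasible subset (by exhaustive enumeration)."""
    best, best_score = set(), 0
    for r in range(len(blocks) + 1):
        for subset in combinations(range(len(blocks)), r):
            counts_a, counts_b = {}, {}
            for i in subset:
                _, a, b = blocks[i]
                counts_a[a] = counts_a.get(a, 0) + 1
                counts_b[b] = counts_b.get(b, 0) + 1
            # enforce per-region quotas on both genomes
            if any(v > quota_a for v in counts_a.values()):
                continue
            if any(v > quota_b for v in counts_b.values()):
                continue
            score = sum(blocks[i][0] for i in subset)
            if score > best_score:
                best, best_score = set(subset), score
    return best, best_score
```

With quotas 1:2 this mimics the case where genome B had an extra WGD: each B region may anchor up to two blocks while each A region anchors at most one. Real instances are far too large for enumeration, which is why the BIP formulation and a solver are used.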

  9. 31 CFR 595.301 - Blocked account; blocked property.

    Science.gov (United States)

    2010-07-01

    ... (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY TERRORISM SANCTIONS REGULATIONS General Definitions § 595.301 Blocked account; blocked property. The terms blocked account and blocked...

  10. Temporally Regular Musical Primes Facilitate Subsequent Syntax Processing in Children with Specific Language Impairment.

    Science.gov (United States)

    Bedoin, Nathalie; Brisseau, Lucie; Molinier, Pauline; Roch, Didier; Tillmann, Barbara

    2016-01-01

    Children with developmental language disorders have been shown to be also impaired in rhythm and meter perception. Temporal processing and its link to language processing can be understood within the dynamic attending theory. An external stimulus can stimulate internal oscillators, which orient attention over time and drive speech signal segmentation to provide benefits for syntax processing, which is impaired in various patient populations. For children with Specific Language Impairment (SLI) and dyslexia, previous research has shown the influence of an external rhythmic stimulation on subsequent language processing by comparing the influence of a temporally regular musical prime to that of a temporally irregular prime. Here we tested whether the observed rhythmic stimulation effect is indeed due to a benefit provided by the regular musical prime (rather than a cost subsequent to the temporally irregular prime). Sixteen children with SLI and 16 age-matched controls listened to either a regular musical prime sequence or an environmental sound scene (without temporal regularities in event occurrence; i.e., referred to as "baseline condition") followed by grammatically correct and incorrect sentences. They were required to perform grammaticality judgments for each auditorily presented sentence. Results revealed that performance for the grammaticality judgments was better after the regular prime sequences than after the baseline sequences. Our findings are interpreted in the theoretical framework of the dynamic attending theory (Jones, 1976) and the temporal sampling (oscillatory) framework for developmental language disorders (Goswami, 2011). Furthermore, they encourage the use of rhythmic structures (even in non-verbal materials) to boost linguistic structure processing and outline perspectives for rehabilitation.

  11. Determination of methodology of automatic history matching; Determinacao de metodologia de ajuste automatizado de historico

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Jose Pedro Moura dos

    2000-01-01

    In the management of hydrocarbon reservoirs, numerical simulation is a fundamental tool. The validation of a field model against production reports is made through history matching, which is often done by trial-and-error procedures that consume excessive computational time and human effort. The objective of this work is to present a methodology for automation of the history matching using the program UNIPAR with the modules of parallel computing, sensitivity analysis and optimization. Based on an example of an offshore field, analyses of the behavior of the objective function (water production, oil production and average pressure of the reservoir) were accomplished as a function of variations in reservoir parameters. It was verified that this behavior is very regular. After that, several adjustment situations were tested to define a procedure to be used for history matching. (author)

  12. In vivo cross-match by chromium-51 urinary excretion from labeled erythrocytes: A case of anti-Gerbich

    International Nuclear Information System (INIS)

    Mochizuki, T.; Tauxe, W.N.; Ramsey, G.

    1990-01-01

    We studied a patient with an alloantibody to the high-frequency red blood cell (RBC) antigen Gerbich. A nationwide search for rare Gerbich-negative blood (less than 1:45,000 donors) located only seven units, and our supply was quickly exhausted. By using an in vivo cross-matching method, we demonstrated that this anti-Gerbich did not cause RBC destruction. Regular Gerbich-positive transfusions could then proceed without hemolysis. This cross-match test was based on the determination of the urinary excretion rates of injected radioactive chromium-labeled donor erythrocytes by which it was possible to determine compatibility only 24 hr after the test was begun. The procedure provides an easy and accurate means for in vivo cross-matching of conventionally incompatible donor blood

  13. Nerve Blocks

    Science.gov (United States)

Nerve Blocks A nerve block is an injection to ... the limitations of Nerve Block? What is a Nerve Block? A nerve block is an anesthetic and/ ...

  14. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed

  15. Effective field theory dimensional regularization

    Science.gov (United States)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.

  16. Ultraporous films with uniform nanochannels by block copolymer micelles assembly

    KAUST Repository

    Nunes, Suzana Pereira

    2010-10-12

Films with high pore density and regularity that are easy to manufacture by conventional large-scale technology are key components for the fabrication of new generations of magnetic arrays for storage media, medical scaffolds, and artificial membranes. However, potential manufacturing strategies such as the self-assembly of block copolymers, which leads to remarkably regular patterns, could until now hardly be reproduced using commercially feasible methods. Here we report a unique production method of nanoporous films based on the self-assembly of copper(II) ion-polystyrene-b-poly(4-vinylpyridine) complexes and nonsolvent-induced phase separation. Extremely high pore densities and uniformity were achieved. Water fluxes of 890 L m-2 h-1 bar-1 were obtained, which are at least 1 order of magnitude higher than those of commercially available membranes with comparable pore size. The pores are also stimuli (pH)-responsive. © 2010 American Chemical Society.

  17. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2√(log_2 N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  18. Oracle Database 10g: a platform for BLAST search and Regular Expression pattern matching in life sciences.

    Science.gov (United States)

    Stephens, Susie M; Chen, Jake Y; Davidson, Marcel G; Thomas, Shiby; Trute, Barry M

    2005-01-01

    As database management systems expand their array of analytical functionality, they become powerful research engines for biomedical data analysis and drug discovery. Databases can hold most of the data types commonly required in life sciences and consequently can be used as flexible platforms for the implementation of knowledgebases. Performing data analysis in the database simplifies data management by minimizing the movement of data from disks to memory, allowing pre-filtering and post-processing of datasets, and enabling data to remain in a secure, highly available environment. This article describes the Oracle Database 10g implementation of BLAST and Regular Expression Searches and provides case studies of their usage in bioinformatics. http://www.oracle.com/technology/software/index.html.

  19. 75 FR 76006 - Regular Meeting

    Science.gov (United States)

    2010-12-07

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...

  20. Testing block subdivision algorithms on block designs

    Science.gov (United States)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it is likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  1. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...

  2. Illumination invariant feature point matching for high-resolution planetary remote sensing images

    Science.gov (United States)

    Wu, Bo; Zeng, Hai; Hu, Han

    2018-03-01

    Despite its success with regular close-range and remote-sensing images, the scale-invariant feature transform (SIFT) algorithm is essentially not invariant to illumination differences due to the use of gradients for feature description. In planetary remote sensing imagery, which normally lacks sufficient textural information, salient regions are generally triggered by the shadow effects of keypoints, reducing the matching performance of classical SIFT. Based on the observation of dual peaks in a histogram of the dominant orientations of SIFT keypoints, this paper proposes an illumination-invariant SIFT matching method for high-resolution planetary remote sensing images. First, as the peaks in the orientation histogram are generally aligned closely with the sub-solar azimuth angle at the time of image collection, an adaptive suppression Gaussian function is tuned to level the histogram and thereby alleviate the differences in illumination caused by a changing solar angle. Next, the suppression function is incorporated into the original SIFT procedure for obtaining feature descriptors, which are used for initial image matching. Finally, as the distribution of feature descriptors changes after anisotropic suppression, and the ratio check used for matching and outlier removal in classical SIFT may produce inferior results, this paper proposes an improved matching procedure based on cross-checking and template image matching. The experimental results for several high-resolution remote sensing images from both the Moon and Mars, with illumination differences of 20°-180°, reveal that the proposed method retrieves about 40%-60% more matches than the classical SIFT method. The proposed method is of significance for matching or co-registration of planetary remote sensing images for their synergistic use in various applications. It also has the potential to be useful for flyby and rover images by integrating with the affine invariant feature detectors.
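The adaptive suppression step can be illustrated with a toy sketch: damp the bins of a SIFT-style dominant-orientation histogram near the sub-solar azimuth (and its 180° counterpart, matching the observed dual peaks) with an inverted Gaussian. The function name, the 36-bin layout, and the `sigma_deg`/`strength` parameters are illustrative assumptions, not the paper's actual tuning.

```python
import math

def suppress_orientation_histogram(hist, peak_deg, sigma_deg=30.0, strength=0.5):
    """Damp the dominant-orientation peaks of a 36-bin orientation
    histogram (10 degrees per bin) with an inverted Gaussian centred
    on the assumed sub-solar azimuth. Orientations are treated modulo
    180 degrees so that both of the dual peaks are levelled."""
    n = len(hist)
    out = []
    for i, h in enumerate(hist):
        centre = (i + 0.5) * 360.0 / n          # bin centre in degrees
        d = abs(centre - peak_deg) % 180.0      # distance modulo 180
        d = min(d, 180.0 - d)                   # circular distance
        g = math.exp(-0.5 * (d / sigma_deg) ** 2)
        out.append(h * (1.0 - strength * g))    # damp bins near the peaks
    return out

hist = [1.0] * 36
hist[9] = 5.0      # strong peak near 95 degrees (shadow direction)
hist[27] = 4.0     # opposite peak near 275 degrees
flattened = suppress_orientation_histogram(hist, peak_deg=95.0)
```

After suppression both peaks are halved while bins far from the sub-solar direction are essentially untouched, which is the levelling effect the method relies on before descriptor computation.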

  3. Which skills and factors better predict winning and losing in high-level men's volleyball?

    Science.gov (United States)

    Peña, Javier; Rodríguez-Guerra, Jorge; Buscà, Bernat; Serra, Núria

    2013-09-01

    The aim of this study was to determine which skills and factors better predicted the outcomes of regular season volleyball matches in the Spanish "Superliga" and were significant for obtaining positive results in the game. The study sample consisted of 125 matches played during the 2010-11 Spanish men's first division volleyball championship. Matches were played by 12 teams composed of 148 players from 17 different nations from October 2010 to March 2011. The variables analyzed were the result of the game, team category, home/away court factors, points obtained in the break point phase, number of service errors, number of service aces, number of reception errors, percentage of positive receptions, percentage of perfect receptions, reception efficiency, number of attack errors, number of blocked attacks, attack points, percentage of attack points, attack efficiency, and number of blocks performed by both teams participating in the match. The results showed that the variables of team category, points obtained in the break point phase, number of reception errors, and number of blocked attacks by the opponent were significant predictors of winning or losing the matches. Odds ratios indicated that the odds of winning a volleyball match were 6.7 times greater for the teams belonging to higher rankings and that every additional point in Complex II increased the odds of winning a match by 1.5 times. Every reception and blocked ball error decreased the possibility of winning by 0.6 and 0.7 times, respectively.
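Odds ratios such as those reported can be illustrated with a toy 2×2 win/loss table; the counts below are hypothetical and not taken from the study.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
         a = exposed & win,   b = exposed & loss,
         c = unexposed & win, d = unexposed & loss.
    OR = (a/b) / (c/d): how much the odds of winning are multiplied
    by the exposure (e.g. belonging to a higher-ranked team)."""
    return (a / b) / (c / d)

# Hypothetical season: higher-ranked teams won 40 of 50 matches,
# lower-ranked teams won 25 of 75.
or_rank = odds_ratio(40, 10, 25, 50)   # (40/10) / (25/50) = 8.0
```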

  4. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  5. Self-assembly in casting solutions of block copolymer membranes

    KAUST Repository

    Marques, Debora S.; Vainio, Ulla; Moreno Chaparro, Nicolas; Calo, Victor M.; Bezahd, Ali Reza; Pitera, Jed W.; Peinemann, Klaus; Nunes, Suzana Pereira

    2013-01-01

    Membranes with exceptional pore regularity and high porosity were obtained from block copolymer solutions. We demonstrate by small-angle X-ray scattering that the order which gives rise to the pore morphology is already incipient in the casting solution. Hexagonal order was confirmed in PS-b-P4VP 175k-b-65k solutions in DMF/THF/dioxane with concentrations as high as 24 wt%, while lamellar structures were obtained in more concentrated solutions in DMF or DMF/dioxane. The change in order has been understood with the support of dissipative particle dynamic modeling. © 2013 The Royal Society of Chemistry.

  6. 76 FR 5235 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA Internal Match)-Match Number 1014

    Science.gov (United States)

    2011-01-28

    ...; Computer Matching Program (SSA Internal Match)--Match Number 1014 AGENCY: Social Security Administration... regarding protections for such persons. The Privacy Act, as amended, regulates the use of computer matching....C. 552a, as amended, and the provisions of the Computer Matching and Privacy Protection Act of 1988...

  7. Towards a Next-Generation Catalogue Cross-Match Service

    Science.gov (United States)

    Pineau, F.; Boch, T.; Derriere, S.; Arches Consortium

    2015-09-01

We have developed several catalogue cross-match tools in the past. On one hand, the CDS XMatch service (Pineau et al. 2011) is able to perform basic but very efficient cross-matches, scalable to the largest catalogues on a single regular server. On the other hand, as part of the European project ARCHES, we have been developing a generic and flexible tool which performs potentially complex multi-catalogue cross-matches and computes probabilities of association based on a novel statistical framework. Although the two approaches have been managed so far as different tracks, the need for next-generation cross-match services dealing with both efficiency and complexity is becoming pressing with forthcoming projects which will produce huge high-quality catalogues. We are addressing this challenge, which is both theoretical and technical. In ARCHES we generalize to N catalogues the candidate selection criteria - based on the chi-square distribution - described in Pineau et al. (2011). We formulate and test a number of Bayesian hypotheses, whose number necessarily increases dramatically with the number of catalogues. To assign a probability to each hypothesis, we rely on estimated priors which account for local densities of sources. We validated our developments by comparing the theoretical curves we derived with the results of Monte-Carlo simulations. The current prototype is able to take into account heterogeneous positional errors, object extension and proper motion. The technical complexity is managed by OO programming design patterns and SQL-like functionalities. Large tasks are split into smaller independent pieces for scalability. Performances are achieved by resorting to multi-threading, sequential reads and several tree data structures. In addition to kd-trees, we account for heterogeneous positional errors and object extension using M-trees. Proper motions are supported using a modified M-tree we developed, inspired by Time Parametrized R-trees (TPR
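The chi-square-based candidate selection of Pineau et al. (2011) can be sketched for the two-catalogue case: a pair of sources is kept when its error-normalized squared separation falls below a chi-square quantile with 2 degrees of freedom. A flat-sky approximation and circular positional errors are assumed in this sketch; the actual service handles elliptical errors, proper motion, and N catalogues.

```python
import math

def is_candidate(pos1, pos2, sigma1, sigma2, gamma=0.9973):
    """Two-catalogue candidate test (flat-sky sketch). Under the
    'same source' hypothesis the normalized squared separation
        k2 = |p1 - p2|^2 / (sigma1^2 + sigma2^2)
    follows a chi-square law with 2 degrees of freedom, so keep the
    pair if k2 is below the gamma quantile. Positions and errors
    must share the same angular units."""
    dx = pos1[0] - pos2[0]
    dy = pos1[1] - pos2[1]
    k2 = (dx * dx + dy * dy) / (sigma1 ** 2 + sigma2 ** 2)
    # chi2 with 2 dof: F(x) = 1 - exp(-x/2), so quantile x = -2 ln(1 - gamma)
    k2_max = -2.0 * math.log(1.0 - gamma)
    return k2 <= k2_max

close_pair = is_candidate((0.0, 0.0), (0.5, 0.5), 0.3, 0.4)   # accepted
far_pair = is_candidate((0.0, 0.0), (5.0, 0.0), 0.3, 0.4)     # rejected
```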

  8. Technical performance and match-to-match variation in elite football teams.

    Science.gov (United States)

    Liu, Hongyou; Gómez, Miguel-Angel; Gonçalves, Bruno; Sampaio, Jaime

    2016-01-01

    Recent research suggests that match-to-match variation adds important information to performance descriptors in team sports, as it helps measure how players fine-tune their tactical behaviours and technical actions to the extreme dynamical environments. The current study aims to identify the differences in technical performance of players from strong and weak teams and to explore match-to-match variation of players' technical match performance. Performance data of all the 380 matches of season 2012-2013 in the Spanish First Division Professional Football League were analysed. Twenty-one performance-related match actions and events were chosen as variables in the analyses. Players' technical performance profiles were established by unifying count values of each action or event of each player per match into the same scale. Means of these count values of players from Top3 and Bottom3 teams were compared and plotted into radar charts. Coefficient of variation of each match action or event within a player was calculated to represent his match-to-match variation of technical performance. Differences in the variation of technical performances of players across different match contexts (team and opposition strength, match outcome and match location) were compared. All the comparisons were achieved by the magnitude-based inferences. Results showed that technical performances differed between players of strong and weak teams from different perspectives across different field positions. Furthermore, the variation of the players' technical performance is affected by the match context, with effects from team and opposition strength greater than effects from match location and match outcome.
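The match-to-match variation measure described above is the coefficient of variation of a player's per-match counts, which can be sketched as follows (the pass counts below are hypothetical):

```python
import statistics

def coefficient_of_variation(counts):
    """Match-to-match variation of one technical action for one player:
    sample standard deviation divided by the mean of per-match counts."""
    return statistics.stdev(counts) / statistics.mean(counts)

# Hypothetical per-match pass counts for two players with equal means.
steady = coefficient_of_variation([50, 52, 48, 50])
erratic = coefficient_of_variation([20, 80, 35, 65])
```

Both players average 50 passes per match, but the second player's far larger coefficient of variation is exactly the kind of contextual instability the study compares across team strength, match location and outcome.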

  9. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  10. Geometric continuum regularization of quantum field theory

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1989-01-01

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs

  11. The comparison of Co-60 and 4MV photons matching dosimetry during half-beam technique

    International Nuclear Information System (INIS)

    Cakir, Aydin; Bilge, Hatice; Dadasbilge, Alpar; Kuecuecuek, Halil; Okutan, Murat; Merdan Fayda, Emre

    2005-01-01

In this phantom study, we compared matching dosimetry differences between half-beam blocking of Co-60 and asymmetric collimation of 4 MV photons during craniospinal irradiation. The dose distributions are compared and discussed. First, gaps of different sizes were left between the cranial and spinal field borders. Second, the fields were overlapped by the same amounts. We irradiated films located in water-equivalent solid phantoms with Co-60 and 4 MV photon beams. This study indicates that field placement errors within +/- 1 mm are acceptable for both Co-60 and 4 MV photon energies during craniospinal irradiation with the half-beam block technique. Within these limits the dose variations remain within +/- 5%. However, setup errors of more than 1 mm are unacceptable for both asymmetric collimation of 4 MV photons and half-beam blocking of Co-60

  12. Tunneling of the blocked wave in a circular hydraulic jump

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharjee, Jayanta K., E-mail: jkb@hri.res.in

    2017-02-19

    The formation of a circular hydraulic jump in a thin liquid layer involves the creation of a horizon where the incoming wave (surface ripples) is blocked by the fast flowing fluid. That there is a jump at the horizon is due to the viscosity of the fluid which is not relevant for the horizon formation. By using a tunneling formalism developed for the study of the Hawking radiation from black holes, we explicitly show that there will be an exponentially small tunneling of the blocked wave across the horizons as anticipated in studies of “analog gravity”. - Highlights: • A thin layer of radially flowing fluid traveling at high speed sweeps away the incoming ripples. • The speed decreases as the flow spreads out. • At some radius the flow speed decreases to match the ripple speed creating a “horizon”. • The “horizon” blocks out the incoming ripples mimicking a white hole horizon. • The fluctuations around the steady state allows an exponentially small penetration inside the horizon analogous to the Hawking effect.
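The horizon condition in the highlights, the flow speed decreasing to the ripple speed, can be sketched with a constant-depth toy model: the radial speed v(r) = Q/(2πrh) equals the shallow-water ripple speed c = √(gh) at the horizon. The constant film depth h is a simplification made here for illustration only.

```python
import math

def horizon_radius(Q, h, g=9.81):
    """Radius at which the radial flow speed v(r) = Q / (2*pi*r*h)
    drops to the shallow-water ripple speed c = sqrt(g*h), i.e. where
    incoming ripples are just blocked (the 'white hole' horizon).
    Q: volumetric flow rate [m^3/s], h: assumed constant depth [m]."""
    c = math.sqrt(g * h)
    return Q / (2.0 * math.pi * h * c)

r_h = horizon_radius(Q=5e-5, h=1e-3)   # ~50 mL/s through a 1 mm film
```

Inside this radius the flow outruns the ripples (mimicking the white-hole horizon); the tunneling result of the paper concerns the exponentially small wave amplitude that nevertheless penetrates it.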

  13. Traveling waves in a spring-block chain sliding down a slope

    Science.gov (United States)

    Morales, J. E.; James, G.; Tonnelier, A.

    2017-07-01

    Traveling waves are studied in a spring slider-block model. We explicitly construct front waves (kinks) for a piecewise-linear spinodal friction force. Pulse waves are obtained as the matching of two traveling fronts with identical speeds. Explicit formulas are obtained for the wavespeed and the wave form in the anticontinuum limit. The link with localized waves in a Burridge-Knopoff model of an earthquake fault is briefly discussed.

  14. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
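The contrast between ℓ1 (convex) and parameterized non-convex regularization can be sketched with scalar threshold rules: soft thresholding, the prox of the ℓ1 norm, shrinks every surviving value by λ (the under-estimation bias), while firm thresholding, the prox of a standard parameterized non-convex penalty, passes large values through unshrunk. The firm-threshold penalty is a common stand-in here, not necessarily the exact regularizer used in the thesis.

```python
def soft(y, lam):
    """Soft threshold: prox of lam*|x|. Every nonzero output is
    biased toward zero by lam."""
    return max(abs(y) - lam, 0.0) * (1.0 if y >= 0 else -1.0)

def firm(y, lam, mu):
    """Firm threshold (mu > lam): prox of a parameterized non-convex
    penalty. It agrees with soft thresholding near lam but leaves
    values above mu unshrunk, reducing the l1 bias; the scalar cost
    0.5*(y - x)**2 + penalty stays convex for suitable mu."""
    a = abs(y)
    if a <= lam:
        return 0.0
    if a <= mu:
        return (1.0 if y >= 0 else -1.0) * mu * (a - lam) / (mu - lam)
    return y

# Large values: soft thresholding under-estimates, firm does not.
s = soft(3.0, 1.0)          # 2.0 (shrunk by lam)
f = firm(3.0, 1.0, 2.0)     # 3.0 (passed through unshrunk)
```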

  15. Block design reconstruction skills: not a good candidate for an endophenotypic marker in autism research.

    Science.gov (United States)

    de Jonge, Maretha; Kemner, Chantal; Naber, Fabienne; van Engeland, Herman

    2009-04-01

    Superior performance on block design tasks is reported in autistic individuals, although it is not consistently found in high-functioning individuals or individuals with Asperger Syndrome. It is assumed to reflect weak central coherence: an underlying cognitive deficit, which might also be part of the genetic makeup of the disorder. We assessed block design reconstruction skills in high-functioning individuals with autism spectrum disorders (ASD) from multi-incidence families and in their parents. Performance was compared to relevant matched control groups. We used a task that was assumed to be highly sensitive to subtle performance differences. We did not find individuals with ASD to be significantly faster on this task than the matched control group, not even when the difference between reconstruction time of segmented and pre-segmented designs was compared. However, we found individuals with ASD to make fewer errors during the process of reconstruction which might indicate some dexterity in mental segmentation. However, parents of individuals with ASD did not perform better on the task than control parents. Therefore, based on our data, we conclude that mental segmentation ability as measured with a block design reconstruction task is not a neurocognitive marker or endophenotype useful in genetic studies.

  16. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
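The generic Tikhonov idea behind this two-step process can be shown in its simplest scalar form: a damped least-squares solution that shrinks toward zero as the regularization weight grows. This is a toy analogue for illustration, not the CSR processing chain, which operates on large spherical harmonic systems.

```python
def tikhonov_1d(a, b, lam):
    """Closed-form minimizer of sum_i (a_i*x - b_i)^2 + lam*x^2.
    Setting the derivative to zero gives x = (a.b) / (a.a + lam):
    lam = 0 recovers ordinary least squares, larger lam damps the
    estimate toward zero (suppressing noise/stripes at the cost of
    some signal)."""
    num = sum(ai * bi for ai, bi in zip(a, b))
    den = sum(ai * ai for ai in a) + lam
    return num / den

x0 = tikhonov_1d([1.0, 2.0], [1.0, 2.0], 0.0)   # unregularized
x1 = tikhonov_1d([1.0, 2.0], [1.0, 2.0], 5.0)   # shrunk toward zero
```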

  17. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
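The core of the MCC idea can be sketched with the empirical correntropy of a residual vector: a Gaussian kernel gives outlying residuals exponentially small weight, unlike a squared loss, which they dominate. This is a sketch of the criterion only, not the paper's alternating optimization of the full objective.

```python
import math

def correntropy(errors, sigma=1.0):
    """Empirical correntropy of prediction errors: the mean Gaussian
    kernel of the residuals. Maximizing it rewards small errors while
    a single huge error (e.g. from a noisy label) contributes almost
    nothing, instead of dominating as under a squared loss."""
    return sum(math.exp(-(e * e) / (2.0 * sigma * sigma))
               for e in errors) / len(errors)

clean = correntropy([0.1, -0.2, 0.0])
noisy = correntropy([0.1, -0.2, 10.0])   # one outlying (mislabelled) sample
```

The outlier barely lowers the correntropy relative to what a mean-squared error would do, which is why the criterion is robust to noisy labels.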

  18. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  19. Cutaneous Sensory Block Area, Muscle-Relaxing Effect, and Block Duration of the Transversus Abdominis Plane Block

    DEFF Research Database (Denmark)

    Støving, Kion; Rothe, Christian; Rosenstock, Charlotte V

    2015-01-01

BACKGROUND AND OBJECTIVES: The transversus abdominis plane (TAP) block is a widely used nerve block. However, basic block characteristics are poorly described. The purpose of this study was to assess the cutaneous sensory block area, muscle-relaxing effect, and block duration. METHODS: Sixteen healthy volunteers were randomized to receive an ultrasound-guided unilateral TAP block with 20 mL 7.5 mg/mL ropivacaine and placebo on the contralateral side. Measurements were performed at baseline and 90 minutes after performing the block. Cutaneous sensory block area was mapped and separated into a medial and lateral part by a vertical line through the anterior superior iliac spine. We measured muscle thickness of the 3 lateral abdominal muscle layers with ultrasound in the relaxed state and during maximal voluntary muscle contraction. The volunteers reported the duration of the sensory block...

  20. Utility of bronchial lavage fluids for epidermal growth factor receptor mutation assay in lung cancer patients: Comparison between cell pellets, cell blocks and matching tissue specimens

    Science.gov (United States)

    Asaka, Shiho; Yoshizawa, Akihiko; Nakata, Rie; Negishi, Tatsuya; Yamamoto, Hiroshi; Shiina, Takayuki; Shigeto, Shohei; Matsuda, Kazuyuki; Kobayashi, Yukihiro; Honda, Takayuki

    2018-01-01

    The detection of epidermal growth factor receptor (EGFR) mutations is necessary for the selection of suitable patients with non-small cell lung cancer (NSCLC) for treatment with EGFR tyrosine kinase inhibitors. Cytology specimens are known to be suitable for EGFR mutation detection, although tissue specimens should be prioritized; however, there are limited studies that examine the utility of bronchial lavage fluid (BLF) in mutation detection. The purpose of the present study was to investigate the utility of BLF specimens for the detection of EGFR mutations using a conventional quantitative EGFR polymerase chain reaction (PCR) assay. Initially, quantification cycle (Cq) values of cell pellets, cell-free supernatants and cell blocks obtained from three series of 1% EGFR mutation-positive lung cancer cell line samples were compared for mutation detection. In addition, PCR analysis of BLF specimens obtained from 77 consecutive NSCLC patients, detecting EGFR mutations was validated, and these results were compared with those for the corresponding formalin-fixed paraffin-embedded (FFPE) tissue specimens obtained by surgical resection or biopsy of 49 of these patients. The Cq values for mutation detection were significantly lower in the cell pellet group (average, 29.58) compared with the other groups, followed by those in cell-free supernatants (average, 34.15) and in cell blocks (average, 37.12) for all three series (P<0.05). Mutational status was successfully analyzed in 77 BLF specimens, and the results obtained were concordant with those of the 49 matching FFPE tissue specimens. Notably, EGFR mutations were even detected in 10 cytological specimens that contained insufficient tumor cells. EGFR mutation testing with BLF specimens is therefore a useful and reliable method, particularly when sufficient cancer cells are not obtained. PMID:29399190

  1. Technical match characteristics and influence of body anthropometry on playing performance in male elite team handball.

    Science.gov (United States)

    Michalsik, Lars Bojsen; Madsen, Klavs; Aagaard, Per

    2015-02-01

    Modern team handball match-play imposes substantial physical and technical demands on elite players. However, only limited knowledge exists about the specific working requirements in elite team handball. Thus, the purpose of this study was to examine the physical demands imposed on male elite team handball players in relation to playing position and body anthropometry. Based on continuous video recording of individual players during elite team handball match-play (62 tournament games, ∼4 players per game), computerized technical match analysis was performed in male elite team handball players along with anthropometric measurements over a six-season span. Technical match activities were distributed in 6 major types of playing actions (shots, breakthroughs, fast breaks, tackles, technical errors, and defense errors) and further divided into various subcategories (e.g., hard or light tackles, type of shot, claspings, screenings, and blockings). Players showed 36.9 ± 13.1 (group mean ± SD) high-intensity technical playing actions per match with a mean total effective playing time of 53.85 ± 5.87 minutes. In offense, each player performed 6.0 ± 5.2 fast breaks, received 34.5 ± 21.3 tackles in total, and performed in defense 3.7 ± 3.5 blockings, 3.9 ± 3.0 claspings, and 5.8 ± 3.6 hard tackles. Wing players (84.5 ± 5.8 kg, 184.9 ± 5.7 cm) were less heavy and smaller (p handball match-play is characterized by a high number of short-term, high-intensity intermittent technical playing actions. Indications of technical fatigue were observed. Physical demands differed between playing positions, with wing players performing more fast breaks and fewer physical confrontations with opponent players than backcourt players and pivots. Body anthropometry seemed to have an important influence on playing performance because it is highly related to playing position. The present observations suggest that male elite team handball players should implement more position

  2. 31 CFR 594.301 - Blocked account; blocked property.

    Science.gov (United States)

    2010-07-01

    ... (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY GLOBAL TERRORISM SANCTIONS REGULATIONS General Definitions § 594.301 Blocked account; blocked property. The terms blocked account and...

  3. State of otolaryngology match: has competition increased since the "early" match?

    Science.gov (United States)

    Cabrera-Muffly, Cristina; Sheeder, Jeanelle; Abaza, Mona

    2015-05-01

    To examine fluctuations in supply and demand of otolaryngology residency positions after the shift from an "early match" coordinated by the San Francisco match to a "conventional" matching process through the National Residency Matching Program (NRMP). To determine whether competition for otolaryngology residency positions has changed during this time frame. Database analysis. Matching statistics from 1998 to 2013 were obtained for all first-year residency positions through the NRMP. Matching statistics from 1998 to 2005 were obtained for otolaryngology residency positions through the San Francisco match. Univariate analysis was performed, with a P value less than .05 considered significant. The number of otolaryngology positions and applicants remained proportional to the overall number of positions and applicants in the NRMP match. Otolaryngology applicants per position and the matching rate of all applicants did not change between the 2 time periods studied. The overall match rate of US seniors applying to otolaryngology did not change, while the match rate of non-US seniors decreased significantly following initiation of the conventional match. There was no significant change in United States Medical Licensing Exam step 1 scores or percentage of unfilled otolaryngology residency positions between the 2 time periods. When comparing the early versus conventional otolaryngology match time periods, the only major change was the decreased percentage of matching among non-US senior applicants. Despite a significant shift in match timing after 2006, the supply, demand, and competitiveness of otolaryngology residency positions have not changed significantly. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.

  4. Exercising with blocked muscle glycogenolysis

    DEFF Research Database (Denmark)

    Nielsen, Tue L; Pinós, Tomàs; Brull, Astrid

    2018-01-01

    BACKGROUND: McArdle disease (glycogen storage disease type V) is an inborn error of skeletal muscle metabolism, which affects glycogen phosphorylase (myophosphorylase) activity, leading to an inability to break down glycogen. Patients with McArdle disease are exercise intolerant, as muscle glycogen-derived glucose is unavailable during exercise. Metabolic adaptation to blocked muscle glycogenolysis occurs at rest in the McArdle mouse model, but only in highly glycolytic muscle. However, it is unknown what compensatory metabolic adaptations occur during exercise in McArdle disease. METHODS: In this study, 8-week-old McArdle and wild-type mice were exercised on a treadmill until exhausted. Dissected muscles were compared with those of non-exercised, age-matched McArdle and wild-type mice for histology and for activation and expression of proteins involved in glucose uptake and glycogenolysis. RESULTS: Investigation

  5. Cube or block. Statistical analysis, historical review, failure mode and behaviour; Cubo o bloque. Ajuste estadistico, analisis historico, modo de fallo y comportamiento

    Energy Technology Data Exchange (ETDEWEB)

    Negro, V.; Varela, O.; Campo, J. M. del; Lopez Gutierrez, J. S.

    2010-07-01

    Many different concrete shapes have been developed as armour units for rubble mound breakwaters. Nearly all are mass concrete constructions and can be classified as randomly placed or regular-pattern placed. The majority of artificial armour units are massive and placed in two layers; they are intended to function in a similar way to natural rock (cubes, blocks, antifer cubes, ...). More complex armour units were designed to achieve greater stability by obtaining a high degree of interlock (dolosse, accropode, Xbloc, core-loc, ...). Finally, the third group comprises regular-pattern placed units with a greater percentage of voids, giving stronger dissipation of cement hydration (cob, shed, hollow cubes, ...). This research deals with the comparison between two massive concrete units, cubes and blocks, and the analysis of their geometry, porosity, construction process and failure mode. The first stage is the statistical analysis, whose scope is based on the historical record of Spanish breakwaters with a main layer of cubes and blocks (Ministry of Public Works, General Directorate of Ports, 1988). (Author) 9 refs.

  6. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2016-06-01

    Full Text Available A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. The concept of continuous city modeling is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using context-based geometric hashing (CGH) to align a single image with existing 3D building models. The registration process consists of three steps: (1) feature extraction; (2) similarity measurement and matching; and (3) estimation of the exterior orientation parameters (EOPs) of the single image. For feature extraction, we propose two types of matching cues: edged corner features, representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then determined by maximizing a matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on collinearity equations. The results show that acceptable accuracy of the EOPs of a single image is achievable using the proposed registration approach as an alternative to a labor-intensive manual registration process.
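To illustrate the core idea behind the matching step, the following sketch implements classic 2-D geometric hashing on point features (not the paper's CGH method, which adds edge context and EOP estimation): every ordered point pair defines a basis frame, the remaining points are stored under quantized frame coordinates, and a probe votes for the model basis it matches. The point sets and the bin size `scale` are made-up toy values.

```python
import numpy as np
from collections import defaultdict

def build_hash_table(points, scale=0.25):
    """Geometric hashing: for every ordered basis pair (i, j), store the
    quantized coordinates of all other points expressed in that basis frame."""
    pts = np.asarray(points, float)
    table = defaultdict(list)
    for i in range(len(pts)):
        for j in range(len(pts)):
            if i == j:
                continue
            e1 = pts[j] - pts[i]
            e2 = np.array([-e1[1], e1[0]])        # perpendicular axis, same length
            to_frame = np.linalg.inv(np.column_stack([e1, e2]))
            for k in range(len(pts)):
                if k in (i, j):
                    continue
                u = to_frame @ (pts[k] - pts[i])  # frame coords are rigid-invariant
                table[tuple(np.round(u / scale).astype(int))].append((i, j))
    return table

def vote_for_basis(table, probe, scale=0.25):
    """Express probe points in the frame of probe basis (0, 1) and vote for
    every model basis whose hash entries they hit."""
    pts = np.asarray(probe, float)
    e1 = pts[1] - pts[0]
    e2 = np.array([-e1[1], e1[0]])
    to_frame = np.linalg.inv(np.column_stack([e1, e2]))
    votes = defaultdict(int)
    for k in range(2, len(pts)):
        u = to_frame @ (pts[k] - pts[0])
        for basis in table.get(tuple(np.round(u / scale).astype(int)), []):
            votes[basis] += 1
    return votes

# Toy model and a rotated + translated copy of it.
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.4], [0.7, 0.2], [0.2, 0.8]])
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
probe = model @ np.array([[c, s], [-s, c]]) + np.array([2.0, 1.0])
votes = vote_for_basis(build_hash_table(model), probe)
print(votes[(0, 1)])  # matching model basis collects one vote per remaining point
```

Because the frame coordinates are invariant to rotation and translation, the basis corresponding to the probe's first two points accumulates a full vote count despite the transform.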

  7. Regularities of Multifractal Measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  8. ["Habitual" left branch block alternating with 2 "disguised" branch block].

    Science.gov (United States)

    Lévy, S; Jullien, G; Mathieu, P; Mostefa, S; Gérard, R

    1976-10-01

    Two cases of alternating left bundle branch block and "masquerading block" (with left bundle branch morphology in the standard leads and right bundle branch block morphology in the precordial leads) were studied by serial tracings and His bundle electrocardiography. In case 1, the "masquerading" block was associated with first degree AV block related to a prolongation of the HV interval. This case is, to our knowledge, the first case of alternating bundle branch block in which His bundle activity was recorded in man. In case 2, the patient had atrial fibrillation, and His bundle recordings were performed while different degrees of left bundle branch block were present. The mechanism of the alternation and the concept of "masquerading" block are discussed. It is suggested that this type of block represents a right bundle branch block associated with severe lesions of the "left system".

  9. Block copolymer micelles with a dual-stimuli-responsive core for fast or slow degradation.

    Science.gov (United States)

    Han, Dehui; Tong, Xia; Zhao, Yue

    2012-02-07

    We report the design and demonstration of a dual-stimuli-responsive block copolymer (BCP) micelle with increased complexity and control. We have synthesized and studied a new amphiphilic ABA-type triblock copolymer whose hydrophobic middle block contains two types of stimuli-sensitive functionalities regularly and repeatedly positioned in the main chain. Using a two-step click chemistry approach, disulfide and o-nitrobenzyl methyl ester groups are inserted into the main chain, which respond to reducing agents and light, respectively. With the end blocks being poly(ethylene oxide), micelles formed by this BCP possess a core that can be disintegrated either rapidly via photocleavage of the o-nitrobenzyl methyl esters or slowly through cleavage of the disulfide groups by a reducing agent in the micellar solution. This feature makes possible either burst release of an encapsulated hydrophobic species from disintegrated micelles by UV light, or slow release by the action of a reducing agent, or release with combined fast-slow rate profiles using the two stimuli.

  10. Engineering multifunctional protein nanoparticles by in vitro disassembling and reassembling of heterologous building blocks

    Science.gov (United States)

    Unzueta, Ugutz; Serna, Naroa; Sánchez-García, Laura; Roldán, Mónica; Sánchez-Chardi, Alejandro; Mangues, Ramón; Villaverde, Antonio; Vázquez, Esther

    2017-12-01

    The engineering of protein self-assembling at the nanoscale allows the generation of functional and biocompatible materials, which can be produced by easy biological fabrication. The combination of cationic and histidine-rich stretches in fusion proteins promotes oligomerization as stable protein-only regular nanoparticles that are composed by a moderate number of building blocks. Among other applications, these materials are highly appealing as tools in targeted drug delivery once empowered with peptidic ligands of cell surface receptors. In this context, we have dissected here this simple technological platform regarding the controlled disassembling and reassembling of the composing building blocks. By applying high salt and imidazole in combination, nanoparticles are disassembled in a process that is fully reversible upon removal of the disrupting agents. By taking this approach, we accomplish here the in vitro generation of hybrid nanoparticles formed by heterologous building blocks. This fact demonstrates the capability to generate multifunctional and/or multiparatopic or multispecific materials usable in nanomedical applications.

  11. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...

  12. Modelling relationships between match events and match outcome in elite football.

    Science.gov (United States)

    Liu, Hongyou; Hopkins, Will G; Gómez, Miguel-Angel

    2016-08-01

    Identifying match events that are related to match outcome is an important task in football match analysis. Here we have used generalised mixed linear modelling to determine relationships of 16 football match events and 1 contextual variable (game location: home/away) with the match outcome. Statistics of 320 close matches (goal difference ≤ 2) of season 2012-2013 in the Spanish First Division Professional Football League were analysed. Relationships were evaluated with magnitude-based inferences and were expressed as extra matches won or lost per 10 close matches for an increase of two within-team or between-team standard deviations (SD) of the match event (representing effects of changes in team values from match to match and of differences between average team values, respectively). There was a moderate positive within-team effect from shots on target (3.4 extra wins per 10 matches; 99% confidence limits ±1.0), and a small positive within-team effect from total shots (1.7 extra wins; ±1.0). Effects of most other match events were related to ball possession, which had a small negative within-team effect (1.2 extra losses; ±1.0) but a small positive between-team effect (1.7 extra wins; ±1.4). Game location showed a small positive within-team effect (1.9 extra wins; ±0.9). In analyses of nine combinations of team and opposition end-of-season rank (classified as high, medium, low), almost all between-team effects were unclear, while within-team effects varied depending on the strength of team and opposition. Some of these findings will be useful to coaches and performance analysts when planning training sessions and match tactics.

  13. Job Searchers, Job Matches and the Elasticity of Matching

    NARCIS (Netherlands)

    Broersma, L.; van Ours, J.C.

    1998-01-01

    This paper stresses the importance of a specification of the matching function in which the measure of job matches corresponds to the measure of job searchers. In many empirical studies on the matching function this requirement has not been fulfilled because it is difficult to find information about

  14. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
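The condition-number constraint described in the abstract can be illustrated with a simplified eigenvalue-clipping estimator. This is a sketch only: the paper derives the clipping level by maximum likelihood, whereas here the bound `kappa_max` and the floor at `lam_max / kappa_max` are chosen by hand.

```python
import numpy as np

def cond_regularized_cov(X, kappa_max=10.0):
    """Covariance estimate whose condition number is bounded by kappa_max.

    Simplified eigenvalue-clipping sketch: eigenvalues of the sample
    covariance below lam_max / kappa_max are raised to that floor (the paper
    instead derives the clipping level by maximum likelihood).
    """
    S = np.cov(X, rowvar=False)                 # sample covariance (p x p)
    w, V = np.linalg.eigh(S)
    floor = w.max() / kappa_max                 # smallest admissible eigenvalue
    return (V * np.maximum(w, floor)) @ V.T     # recompose with clipped spectrum

# "Large p, small n": 20 samples in 50 dimensions, so S itself is singular.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))
Sigma = cond_regularized_cov(X, kappa_max=10.0)
print(round(float(np.linalg.cond(Sigma)), 2))   # → 10.0
```

Even though the sample covariance here is rank-deficient, the clipped estimator is invertible and well-conditioned by construction.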

  15. Spatial distribution of block falls using volumetric GIS-decision-tree models

    Science.gov (United States)

    Abdallah, C.

    2010-10-01

    Block falls are considered a significant aspect of surficial instability, contributing to losses in land and socio-economic terms through their damaging effects on natural and human environments. This paper predicts and maps the geographic distribution and volumes of block falls in central Lebanon using remote sensing, geographic information systems (GIS) and decision-tree modeling (un-pruned and pruned trees). Eleven terrain parameters (lithology, proximity to fault line, karst type, soil type, distance to drainage line, elevation, slope gradient, slope aspect, slope curvature, land cover/use, and proximity to roads) were generated to statistically explain the occurrence of block falls. The latter were discriminated using SPOT4 satellite imageries, and their dimensions were determined during field surveys. The un-pruned tree model based on all considered parameters explained 86% of the variability in field block fall measurements. Once pruned, it explained 50% of the variability in block fall volumes while selecting just four parameters (lithology, slope gradient, soil type, and land cover/use). Both tree models (un-pruned and pruned) were converted to quantitative 1:50,000 block fall maps with different classes, ranging from nil (no block falls) to more than 4000 m³. These maps match fairly well, with a coincidence value of 45%; however, both can be used to prioritize the choice of specific zones for further measurement and modeling, as well as for land-use management. The proposed tree models are relatively simple, and may also be applied to other areas (i.e. the choice of the un-pruned or pruned model is related to the availability of terrain parameters in a given area).

  16. q-Space Upsampling Using x-q Space Regularization.

    Science.gov (United States)

    Chen, Geng; Dong, Bin; Zhang, Yong; Shen, Dinggang; Yap, Pew-Thian

    2017-09-01

    Acquisition time in diffusion MRI increases with the number of diffusion-weighted images that need to be acquired. Particularly in clinical settings, scan time is limited and only a sparse coverage of the vast q-space is possible. In this paper, we show how non-local self-similar information in the x-q space of diffusion MRI data can be harnessed for q-space upsampling. More specifically, we establish the relationships between signal measurements in x-q space using a patch matching mechanism that caters to unstructured data. We then encode these relationships in a graph and use it to regularize an inverse problem associated with recovering a high q-space resolution dataset from its low-resolution counterpart. Experimental results indicate that the high-resolution datasets reconstructed using the proposed method exhibit greater quality, both quantitatively and qualitatively, than those obtained using conventional methods, such as interpolation using spherical radial basis functions (SRBFs).
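In simplified quadratic form, graph-regularized recovery of the kind described above has a closed-form solution. The sketch below is generic Laplacian-regularized least squares, not the paper's actual x-q formulation; the sampling mask, the chain graph standing in for the patch-similarity graph, and the parameter `lam` are made up for illustration.

```python
import numpy as np

def graph_regularized_recovery(M, y, W, lam=1.0):
    """Solve min_x ||M x - y||^2 + lam * x^T L x, where L = D - W is the
    combinatorial graph Laplacian of the affinity matrix W."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.solve(M.T @ M + lam * L, M.T @ y)

# Toy problem: 6 signal values, 3 observed; a chain graph 0-1-2-3-4-5
# plays the role of the patch-similarity graph and enforces smoothness.
n = 6
M = np.zeros((3, n))
M[0, 0] = M[1, 2] = M[2, 5] = 1.0          # sampling (observation) mask
y = np.array([1.0, 3.0, 6.0])              # observed values at nodes 0, 2, 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
x = graph_regularized_recovery(M, y, W, lam=0.5)
print(np.round(x, 2))  # unobserved entries are interpolated by the graph prior
```

The unobserved nodes are filled in smoothly between the observed values, which is the role the x-q similarity graph plays for the missing q-space samples.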

  17. E-Block: A Tangible Programming Tool with Graphical Blocks

    Directory of Open Access Journals (Sweden)

    Danli Wang

    2013-01-01

    Full Text Available This paper presents a tangible programming tool, E-Block, for children aged 5 to 9 to gain a preliminary understanding of programming by building blocks. With embedded artificial intelligence, the tool defines programming blocks with sensors as the input and enables children to write programs to complete tasks in the computer. The symbol on each programming block's surface helps children understand the function of the block. The sequence information is transferred to the computer by microcomputers and then translated into semantic information. The system applies wireless and infrared technologies and provides users with feedback on both the screen and the programming blocks. Preliminary user studies using observation and user interview methods are reported for E-Block's prototype. The test results show that E-Block is attractive to children and easy to learn and use. The project also highlights potential advantages of using single chip microcomputer (SCM) technology to develop tangible programming tools for children.

  18. E-Block: A Tangible Programming Tool with Graphical Blocks

    OpenAIRE

    Danli Wang; Yang Zhang; Shengyong Chen

    2013-01-01

    This paper presents a tangible programming tool, E-Block, for children aged 5 to 9 to gain a preliminary understanding of programming by building blocks. With embedded artificial intelligence, the tool defines programming blocks with sensors as the input and enables children to write programs to complete tasks in the computer. The symbol on each programming block's surface helps children understand the function of the block. The sequence information is transfer...

  19. 78 FR 73195 - Privacy Act of 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching...

    Science.gov (United States)

    2013-12-05

    ... 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching Program Match No. 1312 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS... Privacy Act of 1974 (5 U.S.C. 552a), as amended, this notice announces the renewal of a CMP that CMS plans...

  20. Moving force identification based on redundant concatenated dictionary and weighted l1-norm regularization

    Science.gov (United States)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng

    2018-01-01

    Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals, due to bridge vibration and bumps on the bridge deck, respectively. Therefore, the interaction forces are usually hard to express completely and sparsely using a single basis function set. Based on a redundant concatenated dictionary and a weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions, used for matching the harmonic and impact signal features of unknown moving forces. The weighted l1-norm regularization method is introduced for the formulation of the MFI equation, so that the signal features of moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and the feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in the laboratory. Both numerical and experimental results show that the proposed method can accurately identify moving forces with strong robustness, and that it performs better than the Tikhonov regularization method. Some related issues are discussed as well.
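The weighted l1-regularized formulation solved by FISTA can be sketched as follows. This is a generic solver only: the redundant trigonometric/rectangular dictionary, the weighting scheme, and the BIC parameter choice from the study are not reproduced, and the dictionary `A`, the parameter `lam`, and the toy sparse signal are illustrative.

```python
import numpy as np

def fista_weighted_l1(A, b, weights, lam=0.1, n_iter=500):
    """FISTA for min_x 0.5*||A x - b||^2 + lam * sum_i weights[i] * |x[i]|."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y_k, t = x.copy(), 1.0
    for _ in range(n_iter):
        z = y_k - A.T @ (A @ y_k - b) / L          # gradient step at the momentum point
        step = lam * weights / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - step, 0.0)   # soft-thresholding
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y_k = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy problem: recover a 3-sparse "force" vector from 60 noiseless measurements.
rng = np.random.default_rng(1)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]
b = A @ x_true
x_hat = fista_weighted_l1(A, b, weights=np.ones(100), lam=0.05)
print(np.sort(np.argsort(-np.abs(x_hat))[:3]))  # indices of the three largest coefficients
```

Per-coefficient weights let the threshold differ between dictionary atoms, which is what allows the harmonic and impact parts of the signal to be penalized differently.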

  1. Does semantic impairment explain surface dyslexia? VLSM evidence for a double dissociation between regularization errors in reading and semantic errors in picture naming

    Directory of Open Access Journals (Sweden)

    Sara Pillay

    2014-04-01

    Full Text Available The correlation between semantic deficits and exception word regularization errors ("surface dyslexia") in semantic dementia has been taken as strong evidence for involvement of semantic codes in exception word pronunciation. Rare cases with semantic deficits but no exception word reading deficit have been explained as due to individual differences in reading strategy, but this account is hotly debated. Semantic dementia is a diffuse process that always includes semantic impairment, making lesion localization difficult and independent assessment of semantic deficits and reading errors impossible. We addressed this problem using voxel-based lesion symptom mapping in 38 patients with left hemisphere stroke. Patients were all right-handed, native English speakers and at least 6 months from stroke onset. Patients performed an oral reading task that included 80 exception words (words with inconsistent orthographic-phonologic correspondence, e.g., pint, plaid, glove). Regularization errors were defined as plausible but incorrect pronunciations based on application of spelling-sound correspondence rules (e.g., 'plaid' pronounced as "played"). Two additional tests examined explicit semantic knowledge and retrieval. The first measured semantic substitution errors during naming of 80 standard line drawings of objects. This error type is generally presumed to arise at the level of concept selection. The second test (semantic matching) required patients to match a printed sample word (e.g., bus) with one of two alternative choice words (e.g., car, taxi) on the basis of greater similarity of meaning. Lesions were labeled on high-resolution T1 MRI volumes using a semi-automated segmentation method, followed by diffeomorphic registration to a template. VLSM used an ANCOVA approach to remove variance due to age, education, and total lesion volume. Regularization errors during reading were correlated with damage in the posterior half of the middle temporal gyrus and

  2. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    Science.gov (United States)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered, in which the unknown parameters are sensitive to data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds a stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
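As a point of reference, plain Orthogonal Matching Pursuit — the greedy baseline that SOMP stabilizes — can be sketched as below. The stabilization and the automatic sparsity-level selection described in the abstract are not reproduced, and the random dictionary and 3-sparse signal are toy values.

```python
import numpy as np

def omp(A, b, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the atom most correlated
    with the residual, then re-fit by least squares on the chosen support."""
    residual = b.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef          # project b off the support
    x[support] = coef
    return x

# Toy problem: 3-sparse signal, 50 noiseless measurements, 80 unit-norm atoms.
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 80))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(80)
x_true[[3, 17, 62]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = omp(A, b, sparsity=3)
print(np.count_nonzero(x_hat))  # → 3
```

Because the residual is re-orthogonalized against the selected atoms at every step, no atom is picked twice; choosing the stopping sparsity automatically is exactly the part SOMP addresses.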

  3. 15 CFR 50.40 - Fee structure for statistics for city blocks in the 1980 Census of Population and Housing.

    Science.gov (United States)

    2010-01-01

    ... blocks in the 1980 Census of Population and Housing. 50.40 Section 50.40 Commerce and Foreign Trade... the 1980 Census of Population and Housing. (a) As part of the regular program of the 1980 census, the Census Bureau will publish printed reports containing certain summary population and housing statistics...

  4. Matching IMRT fields with static photon field in the treatment of head-and-neck cancer

    International Nuclear Information System (INIS)

    Li, Jonathan G.; Liu, Chihray; Kim, Siyong; Amdur, Robert J.; Palta, Jatinder R.

    2005-01-01

    Radiation treatment with intensity-modulated radiation therapy (IMRT) for head-and-neck cancer usually involves treating the superior aspects of the target volume with intensity-modulated (IM) fields, and the inferior portion of the target volume (the low neck nodes) with a static anterior-posterior field (commonly known as the low anterior neck, or LAN field). A match line between the IM and the LAN fields is created with possibly large dose inhomogeneities, which are clinically undesirable. We propose a practical method to properly match these fields with minimal dependence on patient setup errors. The method requires mono-isocentric setup of the IM and LAN fields with half-beam blocks as defined by the asymmetric jaws. The inferior jaws of the IM fields, which extend ∼1 cm inferiorly past the isocenter, are changed manually before patient treatment, so that they match the superior jaw of the LAN field at the isocenter. The matching of these fields therefore does not depend on the particular treatment plan of IMRT and depends only on the matching of the asymmetric jaws. Measurements in solid water phantom were performed to verify the field-matching technique. Dose inhomogeneities of less than 5% were obtained in the match-line region. Feathering of the match line is done twice during the course of a treatment by changing the matching jaw positions superiorly at 3-mm increments each time, which further reduces the dose inhomogeneity. Compared to the method of including the lower neck nodes in the IMRT fields, the field-matching technique increases the delivery efficiency and significantly reduces the total treatment time

  5. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  6. Evaluation of Isoprene Chain Extension from PEO Macromolecular Chain Transfer Agents for the Preparation of Dual, Invertible Block Copolymer Nanoassemblies.

    Science.gov (United States)

    Bartels, Jeremy W; Cauët, Solène I; Billings, Peter L; Lin, Lily Yun; Zhu, Jiahua; Fidge, Christopher; Pochan, Darrin J; Wooley, Karen L

    2010-09-14

    Two RAFT-capable PEO macro-CTAs, 2 and 5 kDa, were prepared and used for the polymerization of isoprene, which yielded well-defined block copolymers of varied lengths and compositions. GPC analysis of the PEO macro-CTAs and block copolymers showed remaining unreacted PEO macro-CTA. Mathematical deconvolution of the GPC chromatograms allowed for the estimation of the blocking efficiency, about 50% for the 5 kDa PEO macro-CTA and 64% for the 2 kDa CTA. Self-assembly of the block copolymers in both water and decane was investigated, and the resulting regular and inverse assemblies, respectively, were analyzed with DLS, AFM, and TEM to ascertain their dimensions and properties. Assembly of PEO-b-PIp block copolymers in aqueous solution resulted in well-defined micelles of varying sizes, while the assembly in hydrophobic, organic solvent resulted in the formation of different morphologies including large aggregates and well-defined cylindrical and spherical structures.

  7. Evaluation of social-cognitive versus stage-matched, self-help physical activity interventions at the workplace.

    Science.gov (United States)

    Griffin-Blake, C Shannon; DeJoy, David M

    2006-01-01

    To compare the effectiveness of stage-matched vs. social-cognitive physical activity interventions in a work setting. Both interventions were designed as minimal-contact, self-help programs suitable for large-scale application. Randomized trial. Participants were randomized into one of the two intervention groups at baseline; the follow-up assessment was conducted 1 month later. A large, public university in the southeastern region of the United States. Employees from two academic colleges within the participating institution were eligible to participate: 366 employees completed the baseline assessment; 208 of these completed both assessments (baseline and follow-up) and met the compliance criteria. Printed, self-help exercise booklets (12 to 16 pages in length) either (1) matched to the individual's stage of motivational readiness for exercise adoption at baseline or (2) derived from social-cognitive theory but not matched by stage. Standard questionnaires were administered to assess stage of motivational readiness for physical activity; physical activity participation; and exercise-related processes of change, decisional balance, self-efficacy, outcome expectancy, and goal satisfaction. The two interventions were equally effective in moving participants to higher levels of motivational readiness for regular physical activity. Among participants not already in maintenance at baseline, 34.9% in the stage-matched condition progressed, while 33.9% in the social-cognitive group did so (chi2 = not significant). Analyses of variance showed that the two treatment groups did not differ in terms of physical activity participation, cognitive and behavioral process use, decisional balance, or the other psychological constructs. For both treatment groups, cognitive process use remained high across all stages, while behavioral process use increased at the higher stages. The pros component of decisional balance did not vary across stage, whereas cons decreased significantly

  8. Factors associated with regular consumption of obesogenic foods: National School-Based Student Health Survey, 2012

    Directory of Open Access Journals (Sweden)

    Giovana LONGO-SILVA

    Full Text Available ABSTRACT Objective: To investigate the frequency of consumption of obesogenic foods among adolescents and its association with sociodemographic, family, behavioral, and environmental variables. Methods: Secondary data from the National School-Based Student Health Survey were analyzed from a representative sample of 9th grade Brazilian students (high school). A self-administered questionnaire, organized into thematic blocks, was used. The dependent variables were the consumption of deep-fried snacks, packaged snacks, sugar candies, and soft drinks; consumption frequency for the seven days preceding the study was analyzed. Bivariate analysis was carried out to determine the empirical relationship between the regular consumption of these foods (≥3 days/week) with sociodemographic, family, behavioral, and school structural variables. A p-value <0.20 was used as the criterion for initial inclusion in the multivariate logistic analysis, which was conducted using the "Enter" method, and the results were expressed as adjusted odds ratios with 95% confidence intervals, with p<0.05 indicating statistical significance. Results: Regular food consumption ranged from 27.17% to 65.96%. The variables female gender, mobile phone ownership, Internet access at home, tobacco use, alcohol consumption, regular physical activity, eating while watching television or studying, watching television for at least 2 hours a day, and not willing to lose weight were associated in the final logistic models of all foods analyzed. Conclusion: It was concluded that fried snacks, packaged snacks, sugar candies, and soft drinks are regularly consumed by adolescents and that such consumption was associated with the sociodemographic, family, behavioral, and school structural variables.

  9. Collective firing regularity of a scale-free Hodgkin–Huxley neuronal network in response to a subthreshold signal

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, Ergin, E-mail: erginyilmaz@yahoo.com [Department of Biomedical Engineering, Engineering Faculty, Bülent Ecevit University, 67100 Zonguldak (Turkey); Ozer, Mahmut [Department of Electrical and Electronics Engineering, Engineering Faculty, Bülent Ecevit University, 67100 Zonguldak (Turkey)

    2013-08-01

    We consider a scale-free network of stochastic HH neurons driven by a subthreshold periodic stimulus and investigate how the collective spiking regularity or the collective temporal coherence changes with the stimulus frequency, the intrinsic noise (or the cell size), the network average degree and the coupling strength. We show that the best temporal coherence is obtained for a certain level of the intrinsic noise when the frequencies of the external stimulus and the subthreshold oscillations of the network elements match. We also find that the collective regularity exhibits a resonance-like behavior depending on both the coupling strength and the network average degree at the optimal values of the stimulus frequency and the cell size, indicating that the best temporal coherence also requires an optimal coupling strength and an optimal average degree of the connectivity.

  10. Matching Intensity-Modulated Radiation Therapy to an Anterior Low Neck Field

    International Nuclear Information System (INIS)

    Amdur, Robert J.; Liu, Chihray; Li, Jonathan; Mendenhall, William; Hinerman, Russell

    2007-01-01

    When using intensity-modulated radiation therapy (IMRT) to treat head and neck cancer with the primary site above the level of the larynx, there are two basic options for the low neck lymphatics: to treat the entire neck with IMRT, or to match the IMRT plan to a conventional anterior 'low neck' field. In view of the potential advantages of using a conventional low neck field, it is important to look for ways to minimize or manage the problems of matching IMRT to a conventional radiotherapy field. Treating the low neck with a single anterior field and the standard larynx block decreases the dose to the larynx and often results in a superior IMRT plan at the primary site. The purpose of this article is to review the most applicable studies and to discuss our experience with implementing a technique that involves moving the position of the superior border of the low neck field several times during a single treatment fraction

  11. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Full Text Available Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions including certain dimensions of executive functions (planning, inhibition, shifting), binding, short-term memory, and retrospective episodic memory to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performances was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones with no age effect. It appeared that recalling of regular activities only involved planning for both intermediate and older adults, while recalling of irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical

  12. J-regular rings with injectivities

    OpenAIRE

    Shen, Liang

    2010-01-01

    A ring $R$ is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.

  13. Effects of visual information regarding allocentric processing in haptic parallelity matching.

    Science.gov (United States)

    Van Mier, Hanneke I

    2013-10-01

    Research has revealed that haptic perception of parallelity deviates from physical reality. Large and systematic deviations have been found in haptic parallelity matching most likely due to the influence of the hand-centered egocentric reference frame. Providing information that increases the influence of allocentric processing has been shown to improve performance on haptic matching. In this study allocentric processing was stimulated by providing informative vision in haptic matching tasks that were performed using hand- and arm-centered reference frames. Twenty blindfolded participants (ten men, ten women) explored the orientation of a reference bar with the non-dominant hand and subsequently matched (task HP) or mirrored (task HM) its orientation on a test bar with the dominant hand. Visual information was provided by means of informative vision with participants having full view of the test bar, while the reference bar was blocked from their view (task VHP). To decrease the egocentric bias of the hands, participants also performed a visual haptic parallelity drawing task (task VHPD) using an arm-centered reference frame, by drawing the orientation of the reference bar. In all tasks, the distance between and orientation of the bars were manipulated. A significant effect of task was found; performance improved from task HP, to VHP to VHPD, and HM. Significant effects of distance were found in the first three tasks, whereas orientation and gender effects were only significant in tasks HP and VHP. The results showed that stimulating allocentric processing by means of informative vision and reducing the egocentric bias by using an arm-centered reference frame led to most accurate performance on parallelity matching. © 2013 Elsevier B.V. All rights reserved.

  14. Poly(ferrocenylsilane)-block-Polylactide Block Copolymers

    NARCIS (Netherlands)

    Roerdink, M.; van Zanten, Thomas S.; Hempenius, Mark A.; Zhong, Zhiyuan; Feijen, Jan; Vancso, Gyula J.

    2007-01-01

    A PFS/PLA block copolymer was studied to probe the effect of strong surface interactions on pattern formation in PFS block copolymer thin films. Successful synthesis of PFS-b-PLA was demonstrated. Thin films of these polymers show phase separation to form PFS microdomains in a PLA matrix, and

  15. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
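The regularizing effect of early termination can be illustrated with SciPy's `minres` on a small synthetic symmetric ill-posed problem (a Gaussian smoothing kernel with a noisy right-hand side; the problem and parameters are illustrative, not from the paper):

```python
import numpy as np
from scipy.sparse.linalg import minres

# Symmetric, severely ill-conditioned test problem: a discretized
# Gaussian smoothing operator and a noisy right-hand side.
n = 100
t = np.linspace(0, 1, n)
A = np.exp(-(t[:, None] - t[None, :])**2 / 0.01) / n   # symmetric kernel matrix
x_true = np.sin(2 * np.pi * t)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

# Early termination acts as the regularizer: a small iteration count keeps
# the iterate inside a low-dimensional Krylov subspace of smooth vectors.
errors = []
for k in (2, 5, 200):
    x_k, _ = minres(A, b, maxiter=k)
    errors.append(np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true))
print(errors)   # relative errors as the iteration count grows
```

The iteration count plays the role of the regularization parameter; choosing it is the usual semiconvergence trade-off.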

  16. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.

  17. Detection of regularities in variation in geomechanical behavior of rock mass during multi-roadway preparation and mining of an extraction panel

    Science.gov (United States)

    Tsvetkov, AB; Pavlova, LD; Fryanov, VN

    2018-03-01

    The results of numerical simulation of the stress–strain state in a rock block and the surrounding rock mass under multi-roadway preparation for mining are presented. The numerical solutions obtained by nonlinear modeling and using the constitutive relations of the theory of elasticity are compared. The regularities of the stress distribution in the vicinity of the pillars located in the zone of the abutment pressure are found.

  18. Effects of regular away travel on training loads, recovery, and injury rates in professional Australian soccer players.

    Science.gov (United States)

    Fowler, Peter; Duffield, Rob; Waterson, Adam; Vaile, Joanna

    2015-07-01

    The current study examined the acute and longitudinal effects of regular away travel on training load (TL), player wellness, and injury surrounding competitive football (soccer) matches. Eighteen male professional football players, representing a team competing in the highest national competition in Australia, volunteered to participate in the study. Training loads, player wellness and injury incidence, rate, severity, and type, together with the activity at the time of injury, were recorded on the day before, the day of, and for 4 d after each of the 27 matches of the 2012-13 season. This included 14 home and 13 away matches, further subdivided based on the midpoint of the season into early (1-13) and late competition (14-27) phases. While TLs were significantly greater on day 3 at home compared with away during the early competition phase (P=.03), no other significant effects of match location were identified (P>.05). Total TL and mean wellness over the 6 d surrounding matches and TL on day 3 were significantly reduced during the late compared with the early competition phase at home and away (P<.05), and training missed due to injury was 60% and 50% greater during the late than during the early competition phase at home and away, respectively. In conclusion, no significant interactions between match location and competition phase were evident during the late competition phase, which suggests that away travel had negligible cumulative effects on the reduction in player wellness in the latter half of the season.

  19. Technical match characteristics and influence of body anthropometry on playing performance in male elite team handball

    DEFF Research Database (Denmark)

    Michalsik, Lars Bojsen; Madsen, Klavs; Aagaard, Per

    2015-01-01

    ... players along with anthropometric measurements over a 6 season time span. Technical match activities were distributed in 6 major types of playing actions (shots, breakthroughs, fast breaks, tackles, technical errors, and defense errors) and further divided into various subcategories (e.g., hard or light tackles, type of shot, claspings, screenings, and blockings). Players showed 36.9 ± 13.1 (group mean ± SD) high-intense technical playing actions per match with a mean total effective playing time of 53.85 ± 5.87 minutes. In offense, each player performed 6.0 ± 5.2 fast breaks, received 34.5 ± 21 ... In conclusion, modern male elite team handball match-play is characterized by a high number of short-term, high-intense intermittent technical playing actions. Indications of technical fatigue were observed. Physical demands differed between playing positions with wing players performing more fast breaks ...

  20. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
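A single-graph version of the ranking scheme (manifold ranking, the building block that MultiG-Rank combines across several graphs with learned weights) can be sketched as follows; the affinity matrix and query below are toy values, not protein-domain data:

```python
import numpy as np

def graph_ranking(W, y, alpha=0.9):
    """Graph-regularized ranking on a single graph (sketch).

    W: symmetric affinity matrix; y: query indicator vector.
    Solves f = (I - alpha*S)^{-1} y with S the symmetrically
    normalized affinity, so ranking scores diffuse along the graph
    (the graph acts as the regularizer)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)

# Toy affinities: items {0,1,2} form one cluster, {3,4} another,
# with a weak 0.1 link between the clusters.
W = np.array([[0.0, 1.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.1, 0.0],
              [0.0, 0.0, 0.1, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0, 0.0]])
y = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # query: item 0
f = graph_ranking(W, y)
print(np.argsort(-f))   # items in the query's cluster rank first
```

MultiG-Rank replaces the single `S` with a weighted combination of several candidate graphs, learning the weights jointly with `f`.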

  1. History Matching in Parallel Computational Environments

    Energy Technology Data Exchange (ETDEWEB)

    Steven Bryant; Sanjay Srinivasan; Alvaro Barrera; Sharad Yadav

    2005-10-01

    A novel methodology for delineating multiple reservoir domains for the purpose of history matching in a distributed computing environment has been proposed. A fully probabilistic approach to perturb permeability within the delineated zones is implemented. The combination of robust schemes for identifying reservoir zones and distributed computing significantly increases the accuracy and efficiency of the probabilistic approach. The information pertaining to the permeability variations in the reservoir that is contained in dynamic data is calibrated in terms of a deformation parameter rD. This information is merged with the prior geologic information in order to generate permeability models consistent with the observed dynamic data as well as the prior geology. The relationship between dynamic response data and reservoir attributes may vary in different regions of the reservoir due to spatial variations in reservoir attributes, well configuration, flow constraints, etc. The probabilistic approach then has to account for multiple rD values in different regions of the reservoir. In order to delineate reservoir domains that can be characterized with different rD parameters, principal component analysis (PCA) of the Hessian matrix has been done. The Hessian matrix summarizes the sensitivity of the objective function at a given step of the history matching to model parameters. It also measures the interaction of the parameters in affecting the objective function. The basic premise of PC analysis is to isolate the most sensitive and least correlated regions. The eigenvectors obtained during the PCA are suitably scaled and appropriate grid block volume cut-offs are defined such that the resultant domains are neither too large (which increases interactions between domains) nor too small (implying ineffective history matching). The delineation of domains requires calculation of the Hessian, which could be computationally costly and restricts the current approach to
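The PCA-of-the-Hessian idea can be sketched as follows, assuming a Gauss-Newton approximation J^T J of the Hessian on a mock 30-block grid; the sensitivity data, the top-3 component choice, and the median cut-off are illustrative assumptions, not the report's actual scheme:

```python
import numpy as np

# Sketch: delineate reservoir domains from the PCA of a sensitivity
# (Hessian) matrix.  H[i, j] approximates the interaction of grid-block
# parameters i and j in the history-matching objective; its leading
# eigenvectors isolate the most sensitive, least correlated regions.
rng = np.random.default_rng(2)
n_blocks = 30
J = rng.standard_normal((8, n_blocks))      # mock sensitivity (Jacobian)
J[:, :15] *= 5.0                            # blocks 0-14: high sensitivity
H = J.T @ J                                 # Gauss-Newton Hessian approx.

eigvals, eigvecs = np.linalg.eigh(H)        # eigenvalues ascending
top = eigvecs[:, -3:]                       # leading principal components
score = np.abs(top).max(axis=1)             # per-block loading magnitude
domain = (score > np.median(score)).astype(int)   # crude 2-domain cut-off
print(domain)   # 1 = high-sensitivity domain, 0 = the rest
```

In the report the cut-offs would instead be tuned so that domains are neither too large nor too small; the median split above is only a stand-in.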

  2. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation), and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  3. Explaining Match Outcome During The Men’s Basketball Tournament at The Olympic Games

    Directory of Open Access Journals (Sweden)

    Anthony S. Leicht, Miguel A. Gómez, Carl T. Woods

    2017-12-01

    Full Text Available In preparation for the Olympics, there is limited opportunity for coaches and athletes to interact regularly, with team performance indicators providing important guidance to coaches for enhanced match success at the elite level. This study examined the relationship between match outcome and team performance indicators during men’s basketball tournaments at the Olympic Games. Twelve team performance indicators were collated from all men’s teams and matches during the basketball tournament of the 2004-2016 Olympic Games (n = 156). Linear and non-linear analyses examined the relationship between match outcome and team performance indicator characteristics; namely, binary logistic regression and a conditional inference (CI) classification tree. The most parsimonious logistic regression model retained ‘assists’, ‘defensive rebounds’, ‘field-goal percentage’, ‘fouls’, ‘fouls against’, ‘steals’ and ‘turnovers’ (delta AIC <0.01; Akaike weight = 0.28) with a classification accuracy of 85.5%. Conversely, four performance indicators were retained with the CI classification tree with an average classification accuracy of 81.4%. However, it was the combination of ‘field-goal percentage’ and ‘defensive rebounds’ that provided the greatest probability of winning (93.2%). Match outcome during the men’s basketball tournaments at the Olympic Games was identified by a unique combination of performance indicators. Despite the average model accuracy being marginally higher for the logistic regression analysis, the CI classification tree offered a greater practical utility for coaches through its resolution of non-linear phenomena to guide team success.
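The modelling approach, binary logistic regression linking match outcome (win = 1) to team performance indicators, can be sketched with synthetic data. The indicator names and coefficients below are invented for illustration and are not the Olympic data or the fitted model:

```python
import numpy as np

# Synthetic example: outcome depends positively on field-goal % and
# defensive rebounds, negatively on turnovers (assumed effect sizes).
rng = np.random.default_rng(3)
n = 156
X = rng.standard_normal((n, 3))            # standardized indicators
true_beta = np.array([1.5, 1.0, -0.8])     # fg%, def. rebounds, turnovers
p = 1 / (1 + np.exp(-(X @ true_beta)))
y = (rng.random(n) < p).astype(float)      # simulated win/loss

Xb = np.hstack([np.ones((n, 1)), X])       # add intercept column
beta = np.zeros(4)
for _ in range(5000):                      # plain gradient ascent on the
    mu = 1 / (1 + np.exp(-(Xb @ beta)))    # logistic log-likelihood
    beta += 0.1 * Xb.T @ (y - mu) / n

odds_ratios = np.exp(beta[1:])             # adjusted odds ratios
accuracy = ((1 / (1 + np.exp(-(Xb @ beta))) > 0.5) == y).mean()
print(odds_ratios.round(2), round(float(accuracy), 3))
```

An odds ratio above 1 marks an indicator associated with winning, below 1 with losing, mirroring how the retained indicators in the study are interpreted.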

  4. Regularized non-stationary morphological reconstruction algorithm for weak signal detection in microseismic monitoring: methodology

    Science.gov (United States)

    Huang, Weilin; Wang, Runqiu; Chen, Yangkang

    2018-05-01

    Microseismic signals are typically weak compared with the strong background noise. In order to effectively detect the weak signal in microseismic data, we propose a mathematical morphology based approach. We decompose the initial data into several morphological multiscale components. For detection of the weak signal, a non-stationary weighting operator is proposed and introduced into the process of reconstruction of data by morphological multiscale components. The non-stationary weighting operator can be obtained by solving an inversion problem. The regularized non-stationary method can be understood as a non-stationary matching filtering method, where the matching filter has the same size as the data to be filtered. In this paper, we provide detailed algorithmic descriptions and analysis. The detailed algorithm framework, parameter selection and computational issues for the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. Then the proposed technique is applied to a field project, where the signals induced from hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The result demonstrates that the RNMR can improve the detectability of the weak microseismic signals. Using the processed data, the short-term-average over long-term-average (STA/LTA) picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only in microseismic data but also in reflection seismic data to detect the weak signal. We also discuss the extension of RNMR from 1-D to 2-D or a higher dimensional version.
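The STA/LTA picking step mentioned above can be sketched as follows; the window lengths, threshold, and synthetic trace are illustrative choices, not the paper's settings:

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    """Classic STA/LTA characteristic function (sketch).

    Ratio of short-term to long-term moving averages of signal energy;
    a pick is declared where the ratio crosses a threshold."""
    e = x ** 2
    csum = np.concatenate(([0.0], np.cumsum(e)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta   # short window averages
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta   # long window averages
    m = min(len(sta), len(lta))
    # align both averages so they end on the same samples
    return sta[-m:] / (lta[-m:] + 1e-12)

# Synthetic trace: noise with a weak "microseismic" arrival at sample 700.
rng = np.random.default_rng(4)
x = 0.1 * rng.standard_normal(1000)
x[700:740] += 0.5 * np.sin(np.linspace(0, 20 * np.pi, 40))
ratio = sta_lta(x, n_sta=20, n_lta=200)
pick = np.argmax(ratio > 4.0)   # first threshold crossing; index is offset
print(pick)                     # from the trace by the LTA warm-up (199 samples)
```

After RNMR-style denoising, the onset contrast in the characteristic function would be sharper, which is what improves the pick locations fed to Geiger's method.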

  5. Block and sub-block boundary strengthening in lath martensite

    NARCIS (Netherlands)

    Du, C.; Hoefnagels, J.P.M.; Vaes, R.; Geers, M.G.D.

    2016-01-01

    Well-defined uniaxial micro-tensile tests were performed on lath martensite single-block specimens and multi-block specimens with different numbers of block boundaries parallel to the loading direction. Detailed slip trace analyses consistently revealed that in the {110}<111> slip system with the

  6. On the Eigenvalues and Eigenvectors of Block Triangular Preconditioned Block Matrices

    KAUST Repository

    Pestana, Jennifer

    2014-01-01

    Block lower triangular matrices and block upper triangular matrices are popular preconditioners for 2×2 block matrices. In this note we show that a block lower triangular preconditioner gives the same spectrum as a block upper triangular preconditioner and that the eigenvectors of the two preconditioned matrices are related. © 2014 Society for Industrial and Applied Mathematics.
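The note's claim is easy to check numerically for a small saddle-point system. The matrix sizes and the diagonal Schur-complement approximation below are arbitrary illustrative choices:

```python
import numpy as np

# Check: for a symmetric 2x2 block (saddle-point) matrix K, block lower
# and block upper triangular preconditioners with the same diagonal
# blocks yield preconditioned operators with the same spectrum.
rng = np.random.default_rng(5)
n, m = 4, 3
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                            # SPD (1,1) block
B = rng.standard_normal((m, n))
K = np.block([[A, B.T], [B, np.zeros((m, m))]])        # saddle-point matrix
S_hat = np.diag(np.diag(B @ np.linalg.inv(A) @ B.T))   # approx. Schur compl.

P_lower = np.block([[A, np.zeros((n, m))], [B, -S_hat]])
P_upper = np.block([[A, B.T], [np.zeros((m, n)), -S_hat]])

ev_l = np.linalg.eigvals(np.linalg.solve(P_lower, K))
ev_u = np.linalg.eigvals(np.linalg.solve(P_upper, K))
# every eigenvalue of one preconditioned matrix appears in the other
same = all(np.abs(ev_u - lam).min() < 1e-8 for lam in ev_l)
print(same)
```

Here `P_upper.T == P_lower` and `K` is symmetric, which is one way to see why the two spectra must coincide; the eigenvectors differ, as the note discusses.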

  7. Higher derivative regularization and chiral anomaly

    International Nuclear Information System (INIS)

    Nagahama, Yoshinori.

    1985-02-01

    A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)

  8. Regular paths in SparQL: querying the NCI Thesaurus.

    Science.gov (United States)

    Detwiler, Landon T; Suciu, Dan; Brinkley, James F

    2008-11-06

    OWL, the Web Ontology Language, provides syntax and semantics for representing knowledge for the semantic web. Many of the constructs of OWL have a basis in the field of description logics. While the formal underpinnings of description logics have led to a highly computable language, it has come at a cognitive cost. OWL ontologies are often unintuitive to readers lacking a strong logic background. In this work we describe GLEEN, a regular path expression library, which extends the RDF query language SparQL to support complex path expressions over OWL and other RDF-based ontologies. We illustrate the utility of GLEEN by showing how it can be used in a query-based approach to defining simpler, more intuitive views of OWL ontologies. In particular we show how relatively simple GLEEN-enhanced SparQL queries can create views of the OWL version of the NCI Thesaurus that match the views generated by the web-based NCI browser.
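The essence of regular path evaluation, composing a finite automaton for the path expression with the graph, can be sketched for a toy labeled graph and the concrete pattern a*b. This is a minimal illustration of the idea, not GLEEN's implementation:

```python
# Toy edge-labeled graph as (source, label, target) triples.
edges = {("u", "a", "v"), ("v", "a", "w"), ("w", "b", "x"), ("u", "b", "y")}

# Hand-built NFA for the path expression a*b:
# state 0 --a--> 0, state 0 --b--> 1 (accepting).
nfa = {(0, "a"): 0, (0, "b"): 1}
accepting = {1}

def reachable(start):
    """Explore (graph node, NFA state) pairs; a node is an answer when
    some path from `start` to it spells a word the NFA accepts."""
    seen, frontier = {(start, 0)}, [(start, 0)]
    while frontier:
        node, state = frontier.pop()
        for (s, label, t) in edges:
            if s == node and (state, label) in nfa:
                nxt = (t, nfa[(state, label)])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return {n for (n, q) in seen if q in accepting}

print(sorted(reachable("u")))   # nodes reachable from "u" via a*b
```

A real engine would build the automaton from the parsed path expression; the product construction above is what keeps evaluation polynomial even when paths are unbounded.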

  9. Determine point-to-point networking interactions using regular expressions

    Directory of Open Access Journals (Sweden)

    Konstantin S. Deev

    2015-06-01

    Full Text Available As the Internet grows and becomes more popular, the number of concurrent data flows keeps increasing, which raises the bandwidth required. Providers and corporate customers need the ability to identify point-to-point interactions. The best approach is to use dedicated software and hardware implementations that distribute the load internally, using the principles and approaches described, in particular, in this paper. This paper presents the principles of building a system that searches for regular expression matches using computation on the graphics adapter of a server station. The significant computing power and parallel execution capability of modern graphics processors allow inspection of large amounts of data against sets of rules. Using these characteristics can increase computing throughput by a factor of 30–40 compared to the same setups on the central processing unit. The potential increase in bandwidth capacity could be used in systems that provide packet analysis, firewalls, and network anomaly detectors.
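The data-parallel matching idea can be sketched on the CPU, with a thread pool standing in for GPU thread blocks; the rules and packets below are invented for illustration:

```python
import re
from concurrent.futures import ThreadPoolExecutor

# CPU sketch of the data-parallel scheme: each worker scans its own
# slice of traffic against the shared rule set, the way GPU thread
# blocks would each take a chunk of packets.  Rules and packets are
# illustrative, not from the paper.
rules = [re.compile(p) for p in
         (rb"GET /admin", rb"\x90{8,}", rb"SELECT .+ FROM")]
packets = [b"GET /admin HTTP/1.1",
           b"GET /index.html",
           b"\x90" * 16 + b"payload",
           b"SELECT name FROM users",
           b"ping"]

def scan(packet):
    """Return the indices of the rules that the packet matches."""
    return [i for i, rule in enumerate(rules) if rule.search(packet)]

with ThreadPoolExecutor(max_workers=4) as pool:
    hits = list(pool.map(scan, packets))
print(hits)   # per-packet list of matching rule indices
```

On a GPU the same structure applies, but with thousands of lightweight threads and the rule set compiled into automata resident in device memory.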

  10. Powder wastes confinement block and manufacturing process of this block

    International Nuclear Information System (INIS)

    Dagot, L.; Brunel, G.

    1996-01-01

    This invention concerns a powder waste containment block and a manufacturing process for this block. In this block, the waste powder is encapsulated in a thermo-hardening polymer, for example an epoxy resin, and the encapsulated resin is then spread into cement. This block can contain between 45 and 55% by mass of wastes, between 18 and 36% by mass of polymer and between 14 and 32% by mass of cement. Such a containment block can be used for radioactive waste storage. (O.M.). 4 refs

  11. Blocked Randomization with Randomly Selected Block Sizes

    Directory of Open Access Journals (Sweden)

    Jimmy Efird

    2010-12-01

    Full Text Available When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
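The procedure the abstract describes can be sketched directly: pick a block size at random from a small set, fill the block with an equal number of slots per arm, shuffle within the block, and repeat. A minimal sketch (arm names, candidate block sizes, and the seed are illustrative):

```python
import random

def blocked_randomization(n_participants, arms=("treatment", "control"),
                          block_sizes=(2, 4, 6), seed=42):
    """Assign participants to arms in blocks whose sizes are chosen at random.

    Each block holds an equal number of slots per arm (so block sizes must be
    multiples of the number of arms); shuffling within the block keeps the
    allocation balanced while making the sequence unpredictable.
    """
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        size = rng.choice(block_sizes)            # randomly selected block size
        block = list(arms) * (size // len(arms))  # balanced slots for this block
        rng.shuffle(block)                        # hide the order within the block
        assignments.extend(block)
    return assignments[:n_participants]

alloc = blocked_randomization(24)
print(alloc.count("treatment"), alloc.count("control"))
```

Truncating the final block can leave the two arms off by at most half a block, which is the price paid for the unpredictability that defeats selection bias.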

  12. Population Blocks.

    Science.gov (United States)

    Smith, Martin H.

    1992-01-01

    Describes an educational game called "Population Blocks" that is designed to illustrate the concept of exponential growth of the human population and some potential effects of overpopulation. The game material consists of wooden blocks; 18 blocks are painted green (representing land), 7 are painted blue (representing water); and the remaining…

  13. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
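As a rough illustration of the alternating scheme the abstract outlines (not the authors' actual objective or updates, which are in the paper), one can combine several graph Laplacians with learned weights: solve for ranking scores with the weights fixed, then reweight each graph by how smooth the scores are on it:

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian of a symmetric adjacency matrix."""
    return np.diag(W.sum(axis=1)) - W

def multi_graph_rank(y, adjacencies, alpha=1.0, beta=1.0, iters=20):
    """Alternate between the score update (weights fixed) and a weight update.
    The objective form and the softmax-style reweighting are assumptions made
    here for illustration, not taken from the paper."""
    Ls = [laplacian(W) for W in adjacencies]
    n, m = len(y), len(Ls)
    mu = np.full(m, 1.0 / m)          # start from uniform graph weights
    f = y.astype(float)
    for _ in range(iters):
        L = sum(w * Li for w, Li in zip(mu, Ls))
        f = np.linalg.solve(np.eye(n) + alpha * L, y)  # score update
        costs = np.array([f @ Li @ f for Li in Ls])    # smoothness of f per graph
        mu = np.exp(-costs / beta)                     # smoother graph -> larger weight
        mu /= mu.sum()
    return f, mu

# Two candidate graphs over 4 domains; the query relevance y marks domains 0 and 1.
W_consistent = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,1,0]], float)
W_inconsistent = np.array([[0,0,1,0],[0,0,0,1],[1,0,0,0],[0,1,0,0]], float)
y = np.array([1.0, 1.0, 0.0, 0.0])
f, mu = multi_graph_rank(y, [W_consistent, W_inconsistent])
print(mu)  # the graph consistent with y receives the larger weight
```

The point of the exercise is the one the abstract makes: the graph whose structure agrees with the ranking ends up dominating the combination, so no single graph model has to be chosen in advance.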

  14. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-01-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  15. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  16. 75 FR 53966 - Regular Meeting

    Science.gov (United States)

    2010-09-02

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  17. Work and family life of childrearing women workers in Japan: comparison of non-regular employees with short working hours, non-regular employees with long working hours, and regular employees.

    Science.gov (United States)

    Seto, Masako; Morimoto, Kanehisa; Maruyama, Soichiro

    2006-05-01

    This study assessed the working and family life characteristics, and the degree of domestic and work strain of female workers with different employment statuses and weekly working hours who are rearing children. Participants were the mothers of preschoolers in a large Japanese city. We classified the women into three groups according to the hours they worked and their employment conditions. The three groups were: non-regular employees working less than 30 h a week (n=136); non-regular employees working 30 h or more per week (n=141); and regular employees working 30 h or more a week (n=184). We compared among the groups the subjective values of work, financial difficulties, childcare and housework burdens, psychological effects, and strains such as work and family strain, work-family conflict, and work dissatisfaction. Regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than non-regular employees. Non-regular employees were more likely to be facing financial difficulties. In particular, non-regular employees working longer hours tended to encounter socioeconomic difficulties and often lacked support from family and friends. Female workers with children may have different social backgrounds and different stressors according to their working hours and work status.

  18. Reading comprehension of deaf students in regular education.

    Science.gov (United States)

    Luccas, Marcia Regina Zemella; Chiari, Brasília Maria; Goulart, Bárbara Niegia Garcia de

    2012-01-01

    To evaluate and compare the reading comprehension of deaf students included in regular classrooms of public schools with and without specialized educational support. Observational analytic study with 35 students with sensorineural hearing loss, with and without educational support. All subjects were assessed with the Word Reading Competence Test (WRCT), the Picture-Print Matching Test by Choice (PPMT-C), and the Sentence Reading Comprehension Test (SRCT). In the tests regarding comprehension of words (WRCT and PPMT-C), the results showed no difference in the performance of deaf students who attend and do not attend educational support. Regarding reading comprehension of sentences, the application of the SRCT also did not show differences between the groups of deaf students. A significant correlation was found between age and grade, indicating that the older the students and the higher their educational level, the better their performance in reading sentences. The results indicate that deaf students, regardless of attending educational support, read words better than sentences. There is no difference in reading comprehension between deaf students who receive and do not receive specialized pedagogical monitoring.

  19. Cerenkov luminescence tomography based on preconditioning orthogonal matching pursuit

    Science.gov (United States)

    Liu, Haixiao; Hu, Zhenhua; Wang, Kun; Tian, Jie; Yang, Xin

    2015-03-01

    Cerenkov luminescence imaging (CLI) is a novel optical imaging method and has been proved to be a potential substitute for traditional radionuclide imaging such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT). This imaging method inherits the high sensitivity of nuclear medicine and the low cost of optical molecular imaging. To obtain the depth information of the radioactive isotope, Cerenkov luminescence tomography (CLT) is established and the 3D distribution of the isotope is reconstructed. However, because of the strong absorption and scattering, the reconstruction of the CLT sources is always converted to an ill-posed linear system which is hard to solve. In this work, the sparse nature of the light source was taken into account and the preconditioning orthogonal matching pursuit (POMP) method was established to effectively reduce the ill-posedness and obtain better reconstruction accuracy. To prove the accuracy and speed of this algorithm, a heterogeneous numerical phantom experiment and an in vivo mouse experiment were conducted. Both the simulation result and the mouse experiment showed that our reconstruction method can provide a more accurate reconstruction result compared with the traditional Tikhonov regularization method and the ordinary orthogonal matching pursuit (OMP) method. Our reconstruction method will provide technical support for biological applications of Cerenkov luminescence.
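Plain orthogonal matching pursuit, the baseline the POMP method builds on, is short enough to sketch: greedily add the dictionary atom most correlated with the residual, then re-fit all selected coefficients by least squares. A toy dense version (the dictionary and sparse signal are synthetic; this is not the paper's preconditioned variant):

```python
import numpy as np

def omp(A, b, k):
    """Ordinary orthogonal matching pursuit: pick the column of A most
    correlated with the current residual, re-fit the selected coefficients
    by least squares on the support, and update the residual."""
    support, residual = [], b.copy()
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

# Synthetic well-conditioned test: orthonormal dictionary, 3-sparse signal.
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((30, 30)))
x_true = np.zeros(30)
x_true[[3, 11, 27]] = [2.0, -1.5, 0.7]
b = A @ x_true
x_hat = omp(A, b, k=3)
print(np.allclose(x_hat, x_true))  # exact recovery in this easy case
```

With an orthonormal dictionary, the correlations equal the true coefficients, so three greedy steps recover the support exactly; the ill-posed CLT system is far harder, which is what motivates the preconditioning step in the paper.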

  20. Magnetic safety matches

    Science.gov (United States)

    Lindén, J.; Lindberg, M.; Greggas, A.; Jylhävuori, N.; Norrgrann, H.; Lill, J. O.

    2017-07-01

    In addition to the main ingredients (sulfur, potassium chlorate and carbon), ordinary safety matches contain various dyes, glues etc., giving the head of the match an even texture and an appealing color. Among the common reddish-brown matches there are several types which, after ignition, can be attracted by a strong magnet. Before ignition the match head is generally not attracted by the magnet. An elemental analysis based on proton-induced x-ray emission was performed to single out iron as the element responsible for the observed magnetism. 57Fe Mössbauer spectroscopy was used to identify the various types of iron compounds, present before and after ignition, responsible for the macroscopic magnetism: Fe2O3 before and Fe3O4 after. The reaction was verified by mixing the main chemicals in the match head with Fe2O3 in glue and mounting the mixture on a match stick. The ash residue after igniting the mixture was magnetic.

  1. Relationship between physical fitness and game-related statistics in elite professional basketball players: Regular season vs. playoffs

    Directory of Open Access Journals (Sweden)

    João Henrique Gomes

    2017-05-01

    Full Text Available Abstract AIMS This study aimed to verify the relationship between anthropometric and physical performance variables and game-related statistics in professional elite basketball players during a competition. METHODS Eleven male basketball players were evaluated during 10 weeks at two distinct moments (regular season and playoffs). Overall, 11 variables of physical fitness and 13 variables of game-related statistics were analysed. RESULTS The following significant Pearson's correlations were found in the regular season: percentage of fat mass with assists (r = -0.62) and steals (r = -0.63); height (r = 0.68), lean mass (r = 0.64), and maximum strength (r = 0.67) with blocks; squat jump with steals (r = 0.63); and time in the T-test with successful two-point field-goals (r = -0.65), successful free-throws (r = -0.61), and steals (r = -0.62). However, in the playoffs, only stature and lean mass maintained these correlations (p ≤ 0.05). CONCLUSIONS The anthropometric and physical characteristics of the players showed few correlations with the game-related statistics in the regular season, and these correlations were even lower in the playoff games of a professional elite championship; they are therefore not good predictors of technical performance.

  2. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
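The contrast the abstract draws can be made concrete on a toy least-squares problem: a penalty formulation adds a Tikhonov term to the objective, while a projection formulation keeps the plain data term and projects each iterate onto a convex set of acceptable solutions. Here the set is an L2 ball; the problem sizes and step rule are illustrative, not the paper's motion-estimation setup:

```python
import numpy as np

def solve_penalty(A, b, lam):
    """Penalty formulation: minimize ||Ax - b||^2 + lam * ||x||^2 (closed form)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def solve_projected(A, b, radius, steps=500):
    """Projection formulation: minimize ||Ax - b||^2 over the ball ||x|| <= radius
    by projected gradient descent."""
    lr = 1.0 / (2.0 * np.linalg.eigvalsh(A.T @ A).max())  # stable step size
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - lr * 2.0 * A.T @ (A @ x - b)   # gradient step on the data term only
        nrm = np.linalg.norm(x)
        if nrm > radius:                        # project the iterate back onto the set
            x = x * (radius / nrm)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x_pen = solve_penalty(A, b, lam=1.0)
x_proj = solve_projected(A, b, radius=np.linalg.norm(x_pen))
```

With the radius set to the penalty solution's norm, both answers are equally "regular", but the projected one attains a data misfit at least as small: the projection uses the regularization information without biasing the data term, which is the idea the paper develops.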

  3. Geometric regularizations and dual conifold transitions

    International Nuclear Information System (INIS)

    Landsteiner, Karl; Lazaroiu, Calin I.

    2003-01-01

    We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)

  4. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...

  5. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
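A much simpler cousin of the paper's program shows how an L2 regularizer acts as "diversification pressure": minimize portfolio variance plus a ridge term under the budget constraint. Setting the Lagrangian's gradient to zero gives w ∝ (Σ + λI)⁻¹1. This sketch uses variance rather than the paper's expected-shortfall risk measure, and the covariance matrix is made up:

```python
import numpy as np

def regularized_min_variance(Sigma, lam):
    """Minimize w' Sigma w + lam * ||w||^2 subject to sum(w) = 1.
    Solve (Sigma + lam*I) w = c*1 and normalize to satisfy the budget."""
    n = Sigma.shape[0]
    raw = np.linalg.solve(Sigma + lam * np.eye(n), np.ones(n))
    return raw / raw.sum()

# Illustrative 3-asset covariance matrix (invented numbers).
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.08, 0.03],
                  [0.01, 0.03, 0.12]])

w_plain = regularized_min_variance(Sigma, lam=0.0)    # pure minimum variance
w_ridge = regularized_min_variance(Sigma, lam=100.0)  # heavy diversification pressure
print(w_plain, w_ridge)
```

As λ grows, the solution is pulled toward the equal-weight portfolio, the most diversified point on the simplex, which stabilizes the weights against estimation error in Σ in the way the abstract describes.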

  6. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  7. Identification of tumor specimens by DNA analysis in a case of histocytological paraffin tissue block swapping

    Science.gov (United States)

    Raina, Anupuma; Yadav, Bhuvnesh; Ali, Sher; Das Dogra, Tirath

    2011-01-01

    We report on a patient who was diagnosed with high-grade breast carcinoma by all the pre-surgery clinical evidence of malignancy, but histopathological reports did not reveal any such tumor residue in the post-surgical tissue block. This raised a suspicion that either an exchange of blocks, a labeling error, or a technical error took place during gross examination of the tissue. The mastectomy residue was unprocurable to sort out the problem, so two doubtful paraffin blocks were sent for DNA fingerprinting analysis. Partial DNA profiles (8-9 of 15 loci) were obtained from the histocytological blocks. The random matching probabilities for the two paraffin blocks and the patient's blood were found to be 1 in 4.43E4, 1 in 1.89E6, and 1 in 8.83E13, respectively, for the Asian population. Multiplex short tandem repeat analysis applied in this case determined that the cause of the tumor's absence was an error in gross examination of the post-surgical tissue. Moreover, the analysis helped in justifying the therapy given to the patient. Thus, with the DNA fingerprinting technique, it was concluded that there was no exchange of blocks between the two patients operated on the same day and that the treatment given to the concerned patient was in the right direction. PMID:21674839
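Random match probabilities of this kind are conventionally computed with the product rule: multiply the population frequency of the observed genotype at each typed locus, assuming independent loci. A sketch with invented frequencies:

```python
from math import prod

# Hypothetical genotype frequencies for the 8 loci of a partial STR profile
# (illustrative numbers only, not from any population database).
locus_genotype_freqs = [0.12, 0.08, 0.21, 0.09, 0.15, 0.11, 0.18, 0.07]

# Product rule: the chance that a random person shares the whole profile is
# the product of the per-locus genotype frequencies.
rmp = prod(locus_genotype_freqs)
print(f"random match probability: about 1 in {1 / rmp:,.0f}")
```

Fewer typed loci means fewer small factors in the product, which is why the partial profiles from the paraffin blocks give far less extreme figures than the full profile from the patient's blood.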

  8. New poly(dimethylsiloxane)/poly(perfluorooctylethyl acrylate) block copolymers: structure and order across multiple length scales in thin films

    KAUST Repository

    Martinelli, Elisa; Galli, Giancarlo; Krishnan, Sitaraman; Paik, Marvin Y.; Ober, Christopher K.; Fischer, Daniel A.

    2011-01-01

    Three sets of a new class of low surface tension block copolymers were synthesized consisting of a poly(dimethylsiloxane) (PDMS) block and a poly(perfluorooctylethyl acrylate) (AF8) block. The polymers were prepared using a bromo-terminated PDMS macroinitiator, to which was attached an AF8 block grown using atom transfer radical polymerization (ATRP) in such a designed way that the molecular weight and composition of the two polymer blocks were regularly varied. The interplay of both the phase separated microstructure and the mesomorphic character of the fluorinated domains with their effect on surface structure was evaluated using a suite of analytical tools. Surfaces of spin-coated and thermally annealed films were assessed using a combination of X-ray photoelectron spectroscopy (XPS) and near-edge X-ray absorption fine structure (NEXAFS) studies. Both atomic force microscopy (AFM) measurements and grazing incidence small angle X-ray scattering (GISAXS) studies were carried out to evaluate the microstructure of the thin films. Even in block copolymers in which the PDMS block was the majority component, a significant presence of the lower surface energy AF8 block was detected at the film surface. Moreover, the perfluorooctyl helices of the AF8 repeat units were highly oriented at the surface in an ordered, tilted smectic structure, which was compared with those of the bulk powder samples using wide-angle X-ray powder diffraction (WAXD) studies. © 2011 The Royal Society of Chemistry.

  9. Tessellating the Sphere with Regular Polygons

    Science.gov (United States)

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
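The standard counting argument behind this result (assumed here; the abstract does not spell it out) is that a {p,q} tessellation, with q regular p-gons meeting at each vertex, fits on the sphere exactly when the flat corner angles at a vertex sum to less than 2π:

```latex
% q regular p-gons meet at each vertex; their flat corner angles (p-2)*pi/p
% must total less than 2*pi so that the polygons can bulge onto the sphere:
\[
q \cdot \frac{(p-2)\pi}{p} < 2\pi
\quad\Longleftrightarrow\quad
\frac{1}{p} + \frac{1}{q} > \frac{1}{2}.
\]
```

The integer solutions with p, q ≥ 3 are {3,3}, {3,4}, {3,5}, {4,3} and {5,3}: tilings by spherical triangles, squares and pentagons, matching the polygons listed in the abstract.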

  10. Evaluation of Effect of Pudendal Nerve Block on Post Hemmorrhoidectomy Pain

    Directory of Open Access Journals (Sweden)

    M.H. Sarmast Shoshtari

    2008-10-01

    Full Text Available Introduction & Objective: Hemorrhoid is one of the most common anorectal diseases, presenting with pain, bleeding and mass protrusion from the anus. One of the most important reasons patients avoid operation is fear of the pain. Pain control, especially during the first 24 hours after operation, results in decreased urinary retention and constipation as well as increased patient satisfaction. In this study we assessed the effect of pudendal nerve block on pain in the post-hemorrhoidectomy period and compared it with that in patients without pudendal nerve block. Materials & Methods: We randomized 120 patients with an average age of 37.7 years, who were referred to the hospitals of Ahwaz university for hemorrhoidectomy, into 2 groups (N1: 60, N2: 60. In the first group pudendal nerve block was done; in the second group it was not. Pain scores were then calculated by the analogue scale method in each group at 2, 6, 12 and 24 hours after operation. The scores were compared with the Chi-square test. We also calculated and compared the dosages of injected narcotics. Results: The average pain scores at 2, 6, 12, 24 hours after operation in the first group (with nerve block were 2.53, 2.4, 1.91, 2.7, 2.38, and in the second group (without nerve block were 3.43, 3.23, 2.98, 2.81, 3.11. The average narcotic dosage in the first group was 0.69 and in the second group was 1.3. The p-values for the two groups at those times were 0.001, 0.002, 0.001, 0.66. The p-value for comparison of the two groups was 0.01; the p-value for comparison of narcotic consumption was 0.003. Conclusions: In this study, we showed that pudendal nerve block in the post-hemorrhoidectomy period reduced pain significantly and decreased narcotic consumption as well.

  11. Accretion onto some well-known regular black holes

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2016-01-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sftesos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  12. Accretion onto some well-known regular black holes

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)

    2016-03-15

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sftesos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  13. Accretion onto some well-known regular black holes

    Science.gov (United States)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sftesos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  14. A binary-decision-diagram-based two-bit arithmetic logic unit on a GaAs-based regular nanowire network with hexagonal topology

    International Nuclear Information System (INIS)

    Zhao Hongquan; Kasai, Seiya; Shiratori, Yuta; Hashizume, Tamotsu

    2009-01-01

    A two-bit arithmetic logic unit (ALU) was successfully fabricated on a GaAs-based regular nanowire network with hexagonal topology. This fundamental building block of central processing units can be implemented on a regular nanowire network structure with simple circuit architecture based on graphical representation of logic functions using a binary decision diagram and topology control of the graph. The four-instruction ALU was designed by integrating subgraphs representing each instruction, and the circuitry was implemented by transferring the logical graph structure to a GaAs-based nanowire network formed by electron beam lithography and wet chemical etching. A path switching function was implemented in nodes by Schottky wrap gate control of nanowires. The fabricated circuit integrating 32 node devices exhibits the correct output waveforms at room temperature allowing for threshold voltage variation.
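The core idea, evaluating a logic function by letting each input switch the path taken through a graph of decision nodes, can be sketched in a few lines (a software toy, not the GaAs nanowire implementation; the node layout is invented):

```python
# Minimal binary decision diagram (BDD): each node is (variable, low, high);
# terminals are the integers 0 and 1. Evaluation simply follows the path
# selected by the input values, mirroring the path-switching nodes on the
# nanowire network.

def bdd_eval(node, assignment):
    """Follow the path chosen by the inputs until a terminal is reached."""
    while not isinstance(node, int):
        var, low, high = node
        node = high if assignment[var] else low
    return node

# BDD for the sum bit of a half adder, s = a XOR b:
XOR = ("a",
       ("b", 0, 1),   # a = 0: output follows b
       ("b", 1, 0))   # a = 1: output follows NOT b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, bdd_eval(XOR, {"a": a, "b": b}))
```

Each instruction of the ALU corresponds to such a subgraph, and integrating them amounts to merging the graphs, which is why the architecture maps naturally onto a regular nanowire network.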

  15. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  16. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  17. Minimum description length block finder, a method to identify haplotype blocks and to compare the strength of block boundaries.

    Science.gov (United States)

    Mannila, H; Koivisto, M; Perola, M; Varilo, T; Hennah, W; Ekelund, J; Lukk, M; Peltonen, L; Ukkonen, E

    2003-07-01

    We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the significance of each block boundary. We have applied the method to the published data of Daly and colleagues. The results expose some problems that exist in the current methods for the evaluation of the significance of predicted block boundaries. Our method, MDL block finder, can be used to compare block borders in different sample sets, and we demonstrate this by applying the MDL-based method to define the block structure in chromosomes from population isolates.
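The optimal-segmentation step can be illustrated with a small dynamic program. The cost function below (block variance plus a constant per-block penalty) is a stand-in for the paper's MDL score, invented only to keep the sketch self-contained:

```python
# Sketch of the dynamic-programming segmentation at the heart of MDL-style
# block finding. `block_cost` is a hypothetical score, not the paper's MDL
# description length.

def block_cost(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) + 1.0  # +1.0 = per-block penalty

def optimal_segmentation(data):
    n = len(data)
    best = [0.0] + [float("inf")] * n   # best[i] = min cost of data[:i]
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):              # last block is data[j:i]
            c = best[j] + block_cost(data[j:i])
            if c < best[i]:
                best[i], back[i] = c, j
    cuts, i = [], n                     # recover boundaries via back-pointers
    while i > 0:
        cuts.append(i)
        i = back[i]
    return best[n], sorted(cuts)

data = [0, 0, 0, 0, 5, 5, 5, 9, 9]
cost, cuts = optimal_segmentation(data)
print(cuts)  # → [4, 7, 9]: block boundaries after the three constant runs
```

The quadratic-time recurrence is the same shape as the paper's algorithm; only the per-block score differs.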

  18. Temporal regularity of the environment drives time perception

    OpenAIRE

    van Rijn, H; Rhodes, D; Di Luca, M

    2016-01-01

It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...

  19. Explaining Match Outcome During The Men’s Basketball Tournament at The Olympic Games

    Science.gov (United States)

    Leicht, Anthony S.; Gómez, Miguel A.; Woods, Carl T.

    2017-01-01

In preparation for the Olympics, there is a limited opportunity for coaches and athletes to interact regularly, with team performance indicators providing important guidance to coaches for enhanced match success at the elite level. This study examined the relationship between match outcome and team performance indicators during men’s basketball tournaments at the Olympic Games. Twelve team performance indicators were collated from all men’s teams and matches during the basketball tournament of the 2004-2016 Olympic Games (n = 156). Linear and non-linear analyses examined the relationship between match outcome and team performance indicator characteristics; namely, binary logistic regression and a conditional inference (CI) classification tree. The most parsimonious logistic regression model retained ‘assists’, ‘defensive rebounds’, ‘field-goal percentage’, ‘fouls’, ‘fouls against’, ‘steals’ and ‘turnovers’ (delta AIC winning (93.2%). Match outcome during the men’s basketball tournaments at the Olympic Games was identified by a unique combination of performance indicators. Despite the average model accuracy being marginally higher for the logistic regression analysis, the CI classification tree offered a greater practical utility for coaches through its resolution of non-linear phenomena to guide team success. Key points A unique combination of team performance indicators explained 93.2% of winning observations in men’s basketball at the Olympics. Monitoring of these team performance indicators may provide coaches with the capability to devise multiple game plans or strategies to enhance their likelihood of winning. Incorporation of machine learning techniques with team performance indicators may provide a valuable and strategic approach to explain patterns within multivariate datasets in sport science. PMID:29238245
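As a rough illustration of the modelling side only (not the study's data or its twelve indicators), a binary logistic regression over two invented performance-indicator differentials can be fitted with plain gradient descent:

```python
# Toy logistic regression, fitted by batch gradient descent. The feature
# values and win/loss labels are invented for this sketch.
import math

def fit_logistic(X, y, lr=0.1, iters=5000):
    w = [0.0] * (len(X[0]) + 1)          # weights + intercept (last entry)
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[-1] + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                  # gradient of the log-loss
            for j, xj in enumerate(xi):
                grad[j] += err * xj
            grad[-1] += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad)]
    return w

def predict(w, xi):
    z = w[-1] + sum(wj * xj for wj, xj in zip(w, xi))
    return 1 if z > 0 else 0

# Hypothetical indicators per match: (assists margin, steals margin)
X = [(5, 2), (3, 1), (4, 3), (-2, -1), (-4, 0), (-1, -3), (2, -1), (-3, 1)]
y = [1, 1, 1, 0, 0, 0, 1, 0]             # 1 = win
w = fit_logistic(X, y)
accuracy = sum(predict(w, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(accuracy)  # → 1.0 on this linearly separable toy set
```

A conditional inference tree would instead split these indicators recursively on significance tests, which is what gives the tree its practical readability for coaches.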

  20. Best matching theory & applications

    CERN Document Server

    Moghaddam, Mohsen

    2017-01-01

    Mismatch or best match? This book demonstrates that best matching of individual entities to each other is essential to ensure smooth conduct and successful competitiveness in any distributed system, natural and artificial. Interactions must be optimized through best matching in planning and scheduling, enterprise network design, transportation and construction planning, recruitment, problem solving, selective assembly, team formation, sensor network design, and more. Fundamentals of best matching in distributed and collaborative systems are explained by providing: § Methodical analysis of various multidimensional best matching processes § Comprehensive taxonomy, comparing different best matching problems and processes § Systematic identification of systems’ hierarchy, nature of interactions, and distribution of decision-making and control functions § Practical formulation of solutions based on a library of best matching algorithms and protocols, ready for direct applications and apps development. Design...

  1. Many-to-one blind matching for device-to-device communications

    KAUST Repository

    Hamza, Doha R.

    2018-01-23

We formulate a two-sided many-to-one abstract matching problem defined by a collection of agreement functions. We then propose a blind matching algorithm (BLMA) to solve the problem. Our solution concept is a modified notion of pairwise stability whereby no pair of agents can ε-improve their aspiration levels. We show that the random and decentralized process of BLMA converges to ε-pairwise stable solutions with probability one. Next, we consider the application of BLMA in the resource and power allocation problem of device-to-device (D2D) links underlaying a cellular network. Only one D2D link can be assigned to a given resource block (RB), while a D2D link may occupy many RBs at any given time. We cast the D2D allocation problem within our many-to-one matching problem. We then consider a specific instance of BLMA with limited information exchange so that agents know nothing about the value (utility) of their mutual offers. Offers are simply declared and then either accepted or rejected. Despite the market and information decentralization characteristic of the BLMA, we show that agreement of aspiration levels can still be ascertained and that attaining ε-pairwise stability is feasible. Numerical results further demonstrate the convergence properties of the BLMA and that the total utility attained by the BLMA is almost equal to the total utility attained by a centralized controller.
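A toy offer/accept loop conveys the flavor of decentralized many-to-one matching. This is not the authors' BLMA (there are no aspiration-level dynamics, and the utilities, capacities, and agent names are invented); it only shows blind offers being declared and then accepted or rejected:

```python
# Toy decentralized matching: proposers make random "blind" offers; a
# receiver accepts an offer if it has spare capacity or if the offer beats
# its worst current match; a proposer only moves if its own utility improves.
import random

def blind_match(proposers, receivers, capacity, utility, rounds=1000, seed=0):
    rng = random.Random(seed)
    match = {p: None for p in proposers}          # proposer -> receiver
    accepted = {r: [] for r in receivers}         # receiver -> proposers
    for _ in range(rounds):
        p, r = rng.choice(proposers), rng.choice(receivers)
        if match[p] == r:
            continue
        if len(accepted[r]) < capacity[r]:
            evicted = None
        else:
            worst = min(accepted[r], key=lambda q: utility[(q, r)])
            if utility[(p, r)] <= utility[(worst, r)]:
                continue                          # receiver rejects
            evicted = worst
        if match[p] is not None and utility[(p, r)] <= utility[(p, match[p])]:
            continue                              # proposer declines to move
        if evicted is not None:
            accepted[r].remove(evicted)
            match[evicted] = None                 # bumped back to the market
        if match[p] is not None:
            accepted[match[p]].remove(p)
        accepted[r].append(p)
        match[p] = r
    return match, accepted

links = ["d1", "d2", "d3"]                        # hypothetical D2D links
rbs = ["rb1", "rb2"]                              # hypothetical resource blocks
cap = {"rb1": 2, "rb2": 1}
util = {("d1", "rb1"): 3, ("d1", "rb2"): 5, ("d2", "rb1"): 4,
        ("d2", "rb2"): 2, ("d3", "rb1"): 1, ("d3", "rb2"): 6}
match, accepted = blind_match(links, rbs, cap, util)
assert all(len(accepted[r]) <= cap[r] for r in rbs)   # capacities respected
```

With these invented utilities the process settles with "d3" holding the contested "rb2", since no offer can dislodge it afterwards.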

  2. Development and evaluation of an observational system for goalball match analysis

    Directory of Open Access Journals (Sweden)

    Márcio Pereira Morato

Full Text Available Abstract Our purpose was to develop and evaluate an observational system for goalball match analysis. We used a non-participant systematic game observation method including eight elite games that were video recorded and randomly chosen. Observational categories and performance indicators were determined for each offensive principle (i.e., ball control, attack preparation, and throwing) and each defensive principle (i.e., defensive balance, throw reading, and blocking). The comprehensive method of development and the ideal reliability levels (kappa coefficient of 0.81–1.00) of this protocol ensure the generation of quantitative and qualitative information for players and coaches and the rigor required for scientific use.

  3. The uniqueness of the regularization procedure

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1981-01-01

    On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)

  4. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
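For the linear O-U case the claim can be checked in closed form. Assuming the symmetric diffusively coupled pair dx = -x dt + k(y - x) dt + s1 dW1 and dy = -y dt + k(x - y) dt + s2 dW2 (a guessed model for the sketch, not necessarily the paper's exact setup), the stationary covariance solves a 2x2 Lyapunov equation:

```python
# Back-of-the-envelope check: solve A*Sigma + Sigma*A^T = diag(s1^2, s2^2)
# for the stationary covariance, where the drift matrix of the coupled pair
# is A = [[1+k, -k], [-k, 1+k]]. Solving the three scalar equations by hand
# gives the closed forms used below.

def stationary_var_x(s1, s2, k):
    total = s1**2 + s2**2
    cross = k * total / (4.0 * (1.0 + 2.0 * k))       # Cov(x, y)
    return (s1**2 + 2.0 * k * cross) / (2.0 * (1.0 + k))

s1, s2 = 1.0, 1.5              # the partner process (s2) is noisier than x
uncoupled = s1**2 / 2.0        # stationary variance of an isolated O-U
coupled = stationary_var_x(s1, s2, k=2.0)
print(coupled < uncoupled)     # → True: coupling regularizes x despite the noisier partner
```

At k = 0 the formula reduces to the isolated value s1**2 / 2, and for this choice of noise levels the coupled variance stays below it for any k > 0, matching the record's counterintuitive conclusion.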

  5. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
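The general-form Tikhonov problem min ||Ax - b||^2 + lam^2 ||Lx||^2 itself is easy to sketch: for small dense systems it reduces to the normal equations (A^T A + lam^2 L^T L) x = A^T b. The matrices below are toy data, and this says nothing about the paper's learning of the parameter:

```python
# Minimal general-form Tikhonov solve via the normal equations, with a tiny
# pure-Python Gaussian elimination. All data here is a toy example.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def solve(M, v):
    """Gaussian elimination with partial pivoting (M square, v a vector)."""
    n = len(M)
    aug = [row[:] + [v[i]] for i, row in enumerate(M)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(aug[r][i]))
        aug[i], aug[p] = aug[p], aug[i]
        for r in range(i + 1, n):
            f = aug[r][i] / aug[i][i]
            aug[r] = [x - f * y for x, y in zip(aug[r], aug[i])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (aug[i][n] - sum(aug[i][j] * x[j]
                                for j in range(i + 1, n))) / aug[i][i]
    return x

def tikhonov(A, b, L, lam):
    M = matmul(transpose(A), A)
    P = matmul(transpose(L), L)
    M = [[m + lam**2 * p for m, p in zip(mr, pr)] for mr, pr in zip(M, P)]
    rhs = [sum(a * bi for a, bi in zip(col, b)) for col in zip(*A)]  # A^T b
    return solve(M, rhs)

A = [[1.0, 0.0], [0.0, 1e-3]]        # ill-conditioned toy operator
b = [1.0, 1.0]
L = [[1.0, 0.0], [0.0, 1.0]]         # identity: standard-form Tikhonov
x = tikhonov(A, b, L, lam=0.1)       # the lam=0 solution would be [1, 1000]
```

With lam = 0 the unstable second component blows up to 1000; with lam = 0.1 it is damped to about 0.1, which is exactly the filtering behavior the regularization-parameter choice controls.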

  6. 5 CFR 551.421 - Regular working hours.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...

  7. Regular extensions of some classes of grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular

  8. Intramolecular structures in a single copolymer chain consisting of flexible and semiflexible blocks: Monte Carlo simulation of a lattice model

    International Nuclear Information System (INIS)

    Martemyanova, Julia A; Ivanov, Victor A; Paul, Wolfgang

    2014-01-01

    We study conformational properties of a single multiblock copolymer chain consisting of flexible and semiflexible blocks. Monomer units of different blocks are equivalent in the sense of the volume interaction potential, but the intramolecular bending potential between successive bonds along the chain is different. We consider a single flexible-semiflexible regular multiblock copolymer chain with equal content of flexible and semiflexible units and vary the length of the blocks and the stiffness parameter. We perform flat histogram type Monte Carlo simulations based on the Wang-Landau approach and employ the bond fluctuation lattice model. We present here our data on different non-trivial globular morphologies which we have obtained in our model for different values of the block length and the stiffness parameter. We demonstrate that the collapse can occur in one or in two stages depending on the values of both these parameters and discuss the role of the inhomogeneity of intraglobular distributions of monomer units of both flexible and semiflexible blocks. For short block length and/or large stiffness the collapse occurs in two stages, because it goes through intermediate (meta-)stable structures, like a dumbbell shaped conformation. In such conformations the semiflexible blocks form a cylinder-like core, and the flexible blocks form two domains at both ends of such a cylinder. For long block length and/or small stiffness the collapse occurs in one stage, and in typical conformations the flexible blocks form a spherical core of a globule while the semiflexible blocks are located on the surface and wrap around this core.
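The role of the bending potential can be conveyed by a much smaller Metropolis sketch (not the Wang-Landau/bond-fluctuation machinery of the paper): sampling a single bond angle theta with energy E = kappa * (1 - cos theta) shows how the stiffness parameter kappa biases successive bonds toward alignment:

```python
# Minimal Metropolis sampler for one bond angle with a bending potential
# E(theta) = kappa * (1 - cos(theta)), at kT = 1. The proposal width and
# step counts are arbitrary choices for the sketch.
import math, random

def sample_mean_cos(kappa, steps=20000, seed=1):
    rng = random.Random(seed)
    theta = math.pi / 2
    total = 0.0
    for _ in range(steps):
        trial = theta + rng.uniform(-0.5, 0.5)
        dE = kappa * ((1 - math.cos(trial)) - (1 - math.cos(theta)))
        if dE <= 0 or rng.random() < math.exp(-dE):  # Metropolis rule
            theta = trial
        total += math.cos(theta)
    return total / steps

flexible = sample_mean_cos(kappa=0.5)   # bonds nearly uncorrelated
stiff = sample_mean_cos(kappa=5.0)      # bonds stay close to aligned
assert stiff > flexible
```

Larger kappa drives the mean of cos(theta) toward 1, which is the one-angle analogue of the semiflexible blocks in the copolymer preferring straight, cylinder-forming conformations.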

  9. Regular non-twisting S-branes

    International Nuclear Information System (INIS)

    Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.

    2004-01-01

    We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)

  10. Near-Regular Structure Discovery Using Linear Programming

    KAUST Repository

    Huang, Qixing

    2014-06-02

    Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.

  11. Tetravalent one-regular graphs of order 4p2

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p2, where p is a prime, are classified.

  12. Regularization and error assignment to unfolded distributions

    CERN Document Server

    Zech, Gunter

    2011-01-01

The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.

  13. [Propensity score matching in SPSS].

    Science.gov (United States)

    Huang, Fuqiang; DU, Chunlin; Sun, Menghui; Ning, Bing; Luo, Ying; An, Shengli

    2015-11-01

To realize propensity score matching in the PS Matching module of SPSS and interpret the analysis results. The R software, a plug-in linking it with the corresponding version of SPSS, and the propensity score matching package were installed. A PS matching module was added to the SPSS interface, and its use was demonstrated with test data. Score estimation and nearest neighbor matching were achieved with the PS matching module, and the results of qualitative and quantitative statistical description and evaluation were presented graphically. Propensity score matching can be accomplished conveniently using SPSS software.
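Outside SPSS, the nearest-neighbor step of propensity score matching is straightforward to sketch. The propensity scores below are invented, and score estimation (normally a logistic model of treatment assignment) is assumed to have happened already:

```python
# Greedy nearest-neighbor matching on precomputed propensity scores with a
# caliper. Unit IDs and scores are hypothetical.

def nearest_neighbor_match(treated, control, caliper=0.1):
    """Match each treated unit to the closest unused control within the
    caliper. Inputs are {unit_id: propensity_score} dicts."""
    available = dict(control)
    pairs = {}
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs[t_id] = c_id
            del available[c_id]         # each control used at most once
    return pairs

treated = {"t1": 0.31, "t2": 0.52, "t3": 0.90}
control = {"c1": 0.30, "c2": 0.55, "c3": 0.48, "c4": 0.10}
pairs = nearest_neighbor_match(treated, control)
print(pairs)  # → {'t1': 'c1', 't2': 'c2'}; t3 has no control within the caliper
```

Dropping t3 rather than forcing a distant match is the usual caliper behavior; it trades sample size for better covariate balance.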

  14. Pooled Open Blocks Shorten Wait Times for Nonelective Surgical Cases.

    Science.gov (United States)

    Zenteno, Ana C; Carnes, Tim; Levi, Retsef; Daily, Bethany J; Price, Devon; Moss, Susan C; Dunn, Peter F

    2015-07-01

    Assess the impact of the implementation of a data-driven scheduling strategy that aimed to improve the access to care of nonelective surgical patients at Massachusetts General Hospital (MGH). Between July 2009 and June 2010, MGH experienced increasing throughput challenges in its perioperative environment: approximately 30% of the nonelective patients were waiting more than the prescribed amount of time to get to surgery, hampering access to care and aggravating the lack of inpatient beds. This work describes the design and implementation of an "open block" strategy: operating room (OR) blocks were reserved for nonelective patients during regular working hours (prime time) and their management centralized. Discrete event simulation showed that 5 rooms would decrease the percentage of delayed patients from 30% to 2%, assuming that OR availability was the only reason for preoperative delay. Implementation began in January 2012. We compare metrics for June through December of 2012 against the same months of 2011. The average preoperative wait time of all nonelective surgical patients decreased by 25.5% (P reason for delay. Rigorous metrics were developed to evaluate its performance. Strong managerial leadership was crucial to enact the new practices and turn them into organizational change.

  15. Oxidative stress and antioxidants in athletes undertaking regular exercise training.

    Science.gov (United States)

    Watson, Trent A; MacDonald-Wicks, Lesley K; Garg, Manohar L

    2005-04-01

    Exercise has been shown to increase the production of reactive oxygen species to a point that can exceed antioxidant defenses to cause oxidative stress. Dietary intake of antioxidants, physical activity levels, various antioxidants and oxidative stress markers were examined in 20 exercise-trained "athletes" and 20 age- and sex-matched sedentary "controls." Plasma F2-isoprostanes, antioxidant enzyme activities, and uric acid levels were similar in athletes and sedentary controls. Plasma alpha-tocopherol and beta-carotene were higher in athletes compared with sedentary controls. Total antioxidant capacity tended to be lower in athletes, with a significant difference between male athletes and male controls. Dietary intakes of antioxidants were also similar between groups and well above recommended dietary intakes for Australians. These findings suggest that athletes who consume a diet rich in antioxidants have elevated plasma alpha-tocopherol and beta-carotene that were likely to be brought about by adaptive processes resulting from regular exercise.

  16. Spectra of primordial fluctuations in two-perfect-fluid regular bounces

    International Nuclear Information System (INIS)

    Finelli, Fabio; Peter, Patrick; Pinto-Neto, Nelson

    2008-01-01

We introduce analytic solutions for a class of two-component bouncing models, where the bounce is triggered by a negative energy density perfect fluid. The equations of state of the two components are constant in time, but otherwise unrelated. By numerically integrating regular equations for scalar cosmological perturbations, we find that the (would-be) growing mode of the Newtonian potential before the bounce never matches with the growing mode in the expanding stage. For the particular case of a negative energy density component with a stiff equation of state we give a detailed analytic study, which is in complete agreement with the numerical results. We also perform analytic and numerical calculations for long wavelength tensor perturbations, obtaining that, in most cases of interest, the tensor spectral index is independent of the negative energy fluid and given by the spectral index of the growing mode in the contracting stage. We compare our results with previous investigations in the literature

  17. Jordan blocks and Gamow-Jordan eigenfunctions associated to a double pole of the S-matrix

    International Nuclear Information System (INIS)

    Hernandez, E.; Mondragon, A.; Jauregui, A.

    2002-01-01

An accidental degeneracy of resonances gives rise to a double pole in the scattering matrix, a double zero in the Jost function and a Jordan chain of length two of generalized Gamow-Jordan eigenfunctions of the radial Schrödinger equation. The generalized Gamow-Jordan eigenfunctions are basis elements of an expansion in bound and resonant energy eigenfunctions plus a continuum of scattering wave functions of complex wave number. In this biorthonormal basis, any operator f(H_r^(l)) which is a regular function of the Hamiltonian is represented by a complex matrix which is diagonal except for a Jordan block of rank two. The occurrence of a double pole in the Green's function, as well as the non-exponential time evolution of the Gamow-Jordan generalized eigenfunctions, is associated with the Jordan block in the complex energy representation. (Author)

  18. Matching Students to Schools

    Directory of Open Access Journals (Sweden)

    Dejan Trifunovic

    2017-08-01

Full Text Available In this paper, we present the problem of matching students to schools by using different matching mechanisms. This market is specific since public schools are free and the price mechanism cannot be used to determine the optimal allocation of children to schools. It is therefore necessary to use matching algorithms that mimic the market mechanism and enable us to determine the core of the cooperative game. We show that cooperative game theory can be applied to matching problems. This review is based on illustrative examples that compare matching algorithms in terms of the incentive compatibility, stability and efficiency of the matching. We also present some specific problems that may occur in matching, such as improving the quality of schools, favoring minority students, the limited length of the list of preferences and generating strict priorities from weak priorities.
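The canonical mechanism in this literature, student-proposing deferred acceptance (Gale-Shapley), fits in a few lines; the preferences and capacities below are invented for the example:

```python
# Student-proposing deferred acceptance for school choice. Students propose
# down their preference lists; schools tentatively hold the best proposals
# up to capacity and reject the rest, who then try their next choice.

def deferred_acceptance(student_prefs, school_prefs, capacity):
    rank = {s: {st: i for i, st in enumerate(p)}
            for s, p in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}      # next school to try
    tentative = {s: [] for s in school_prefs}
    free = list(student_prefs)
    while free:
        st = free.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue                                   # list exhausted: unmatched
        school = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        tentative[school].append(st)
        if len(tentative[school]) > capacity[school]:
            worst = max(tentative[school], key=lambda x: rank[school][x])
            tentative[school].remove(worst)
            free.append(worst)                         # rejected, tries next choice
    return tentative

student_prefs = {"amy": ["s1", "s2"], "bob": ["s1", "s2"], "cal": ["s1"]}
school_prefs = {"s1": ["cal", "amy", "bob"], "s2": ["amy", "bob"]}
capacity = {"s1": 1, "s2": 2}
print(deferred_acceptance(student_prefs, school_prefs, capacity))
# → {'s1': ['cal'], 's2': ['bob', 'amy']}
```

The resulting matching is stable and, on the proposing side, strategy-proof, which is why deferred acceptance is the benchmark against which the mechanisms surveyed in the paper are compared.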

  19. Generalized Block Failure

    DEFF Research Database (Denmark)

    Jönsson, Jeppe

    2015-01-01

Block tearing is considered in several codes as a pure block tension or a pure block shear failure mechanism. However in many situations the load acts eccentrically and involves the transfer of a substantial moment in combination with the shear force and perhaps a normal force. A literature study shows that no readily available tests with a well-defined substantial eccentricity have been performed. This paper presents theoretical and experimental work leading towards generalized block failure capacity methods. Simple combination of normal force, shear force and moment stress distributions along yield lines around the block leads to simple interaction formulas similar to other interaction formulas in the codes.

  20. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images, as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for the total variation method, and TGV stands for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
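The behavior being discussed comes from the TV term itself, which can be seen in one dimension. The sketch below (smoothed TV minimized by plain gradient descent, not the paper's FEM/TGV machinery; all constants are arbitrary choices) shows TV regularization suppressing oscillations while lowering total variation:

```python
# 1D illustration of TV regularization: gradient descent on
#   J(x) = 0.5 * ||x - y||^2 + lam * sum_i sqrt((x[i+1] - x[i])^2 + eps),
# where eps smooths the non-differentiable absolute value.
import math

def tv_denoise(y, lam=0.5, eps=1e-2, step=0.01, iters=2000):
    x = list(y)
    for _ in range(iters):
        g = [xi - yi for xi, yi in zip(x, y)]           # data-fit gradient
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            t = lam * d / math.sqrt(d * d + eps)        # smoothed-TV gradient
            g[i] -= t
            g[i + 1] += t
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

def tv(x):
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

# Piecewise-constant signal with deterministic "noise"
y = [1 + 0.3 * math.sin(7 * i) for i in range(10)] + \
    [5 + 0.3 * math.sin(7 * i) for i in range(10)]
x = tv_denoise(y)
print(tv(x) < tv(y))  # → True: the reconstruction has lower total variation
```

Flat regions with preserved jumps are exactly the "staircase" tendency of TV; TGV adds a second-order term so that smooth gradients are no longer penalized into such plateaus.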

  1. Application of Turchin's method of statistical regularization

    Science.gov (United States)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of the Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.

  2. Preconditioning of matrices partitioned in 2 x 2 block form: Eigenvalue estimates and Schwarz DD for mixed FEM

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Blaheta, Radim

    2010-01-01

    Roč. 17, č. 5 (2010), s. 787-810 ISSN 1070-5325 R&D Projects: GA ČR GA105/09/1830 Institutional research plan: CEZ:AV0Z30860518 Keywords : iterative solution methods * saddle point problems * preconditioning block matrices * domain decomposition * heterogeneous problems * regularization Subject RIV: JC - Computer Hardware ; Software Impact factor: 1.163, year: 2010 http://onlinelibrary.wiley.com/doi/10.1002/nla.v17:5/issuetoc

  3. On the regularized fermionic projector of the vacuum

    Science.gov (United States)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  4. On the regularized fermionic projector of the vacuum

    International Nuclear Information System (INIS)

    Finster, Felix

    2008-01-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed

  5. Epidural block

    Science.gov (United States)

An epidural block is a numbing medicine given by injection (shot) ...

  6. Simulation of car collision with an impact block

    Science.gov (United States)

    Kostek, R.; Aleksandrowicz, P.

    2017-10-01

This article presents the experimental results of a crash test of a Fiat Cinquecento performed by the Allgemeiner Deutscher Automobil-Club (ADAC) and the simulation results obtained with the program V-SIM using default settings. At the next stage, a wheel was blocked and the parameters of contact between the vehicle and the barrier were changed to better match the results. The following contact parameters were identified: stiffness in the compression phase, stiffness in the restitution phase, and the coefficients of restitution and friction. The changes led to various post-impact positions, which shows the sensitivity of the results to the contact parameters. V-SIM is commonly used by expert witnesses, who tend to use default settings; therefore, the companies offering simulation programs should identify those parameters with due diligence.
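The compression- and restitution-phase stiffnesses mentioned above can be related to the coefficient of restitution in a toy one-dimensional contact model (all values invented, and this is not V-SIM's actual contact law): for a bilinear load/unload law with unloading stiffness kr >= loading stiffness kc, the returned energy fraction is kc/kr, so e = sqrt(kc/kr).

```python
# Toy 1D barrier impact: load along f = kc*x, unload along a steeper line
# of slope kr through the point of maximum penetration. Integrated with
# semi-implicit Euler; mass, speed and stiffness values are invented.

def simulate_impact(m=1000.0, v0=10.0, kc=1e5, kr=4e5, dt=1e-5):
    x, v = 0.0, v0                # penetration depth and inward velocity
    x_max = 0.0
    while True:
        if v > 0 or x >= x_max:   # compression phase
            x_max = max(x_max, x)
            f = kc * x
        else:                     # restitution: steeper unloading line
            f = max(0.0, kc * x_max + kr * (x - x_max))
        v -= (f / m) * dt         # semi-implicit Euler step
        x += v * dt
        if x <= 0 and v < 0:
            return -v / v0        # measured coefficient of restitution

e = simulate_impact()
print(round(e, 2))  # → 0.5, matching sqrt(kc/kr) = sqrt(1e5 / 4e5)
```

Raising the restitution stiffness relative to the compression stiffness therefore lowers the rebound, which is one way such parameter changes shift the simulated post-impact positions.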

  7. Stability analysis of resistive MHD modes via a new numerical matching technique

    International Nuclear Information System (INIS)

    Furukawa, M.; Tokuda, S.; Zheng, L.-J.

    2009-01-01

    Full text: Asymptotic matching technique is one of the principal methods for calculating linear stability of resistive magnetohydrodynamics (MHD) modes such as tearing modes. In applying the asymptotic method, the plasma region is divided into two regions: a thin inner layer around the mode-resonant surface and ideal MHD regions except for the layer. If we try to solve this asymptotic matching problem numerically, we meet practical difficulties. Firstly, the inertia-less ideal MHD equation or the Newcomb equation has a regular singular point at the mode-resonant surface, leading to the so-called big and small solutions. Since the big solution is not square-integrable, it needs sophisticated treatment. Even if such a treatment is applied, the matching data or the ratio of small solution to the big one, has been revealed to be sensitive to local MHD equilibrium accuracy and grid structure at the mode-resonant surface by numerical experiments. Secondly, one of the independent solutions in the inner layer, which should be matched onto the ideal MHD solution, is not square-integrable. The response formalism has been adopted to resolve this problem. In the present paper, we propose a new method for computing the linear stability of resistive MHD modes via matching technique, where the plasma region is divided into ideal MHD regions and an inner region with finite width. The matching technique using an inner region with finite width was recently developed for ideal MHD modes in cylindrical geometry, and good performance was shown. Our method extends this idea to resistive MHD modes. In the inner region, the low-beta reduced MHD equations are solved, and the solution is matched onto the solution of the Newcomb equation by using boundary conditions such that the parallel electric field vanishes properly as approaching the computational boundaries. If we use the inner region with finite width, the practical difficulties raised above can be avoided from the beginning. 

  8. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  9. Regional anesthesia for small incision cataract surgery: Comparison of subtenon and peribulbar block

    Directory of Open Access Journals (Sweden)

    Oyebola Olubodun Adekola

    2018-01-01

Full Text Available Background and Objective: The recent trend in cataract surgery is the use of regional ophthalmic nerve blocks or topical anesthesia. We determined and compared the effect of peribulbar and subtenon block on pain and patients' satisfaction following small incision cataract surgery (SICS). Methods: This was an age- and sex-matched comparative study involving 462 ASA I-III patients, aged 18 years and above, scheduled for SICS. They were assigned to receive either a peribulbar block (Group P) or a subtenon block (Group ST). The pain score and patients' satisfaction with the anesthetic experience were recorded by a study-masked anesthesiologist during surgery and postoperatively at 30 min and 1, 2, 4, and 24 h. Results: The median numeric rating score was significantly lower in the subtenon group than in the peribulbar group: during surgery, Group ST 1 (1) versus Group P 1.5 (2.25), P < 0.001; at 30 min after surgery, Group ST 0 (1) versus Group P 1 (2.5), P < 0.001; and at 1 h after surgery, Group ST 0 (1) versus Group P 1 (2), P = 0.002. Ten patients had akinesia in the peribulbar group compared with one in the subtenon group. Chemosis was significantly more frequent in the subtenon group, 10 (3.2%), than in the peribulbar group, 0 (0%), P = 0.035. In contrast, the difference in subconjunctival hemorrhage was not significant: subtenon 14 (4.5%) versus peribulbar 2 (1.3%), P = 0.105. Conclusion: The use of subtenon block resulted in lower pain scores and higher patient satisfaction than peribulbar block. However, subconjunctival hemorrhage and chemosis were more common with subtenon block.

  10. Spatially-Variant Tikhonov Regularization for Double-Difference Waveform Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Huang, Lianjie [Los Alamos National Laboratory; Zhang, Zhigang [Los Alamos National Laboratory

    2011-01-01

Double-difference waveform inversion is a potential tool for quantitative monitoring of geologic carbon storage. It jointly inverts time-lapse seismic data for changes in reservoir geophysical properties. Because of the ill-posedness of waveform inversion, it is a great challenge to obtain reservoir changes accurately and efficiently, particularly when using time-lapse seismic reflection data. Regularization techniques can be utilized to address the issue of ill-posedness. The regularization parameter controls the smoothness of inversion results. A constant regularization parameter is normally used in waveform inversion, and an optimal regularization parameter has to be selected. The resulting inversion results are a trade-off among regions with different smoothness or noise levels; the images are therefore over-regularized in some regions and under-regularized in others. In this paper, we employ a spatially-variant parameter in the Tikhonov regularization scheme used in double-difference waveform tomography to improve the inversion accuracy and robustness. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter and those produced without any regularization. We observe that, with a spatially-variant regularization scheme, the target regions are well reconstructed while the noise is reduced in the other regions. We show that the spatially-variant regularization scheme provides the flexibility to regularize local regions based on a priori information without increasing computational costs or computer memory requirements.
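The idea of a spatially-variant Tikhonov weight can be sketched in a few lines. The example below is not the authors' waveform-inversion code; it applies a diagonal, spatially varying zeroth-order Tikhonov penalty to a generic linear inverse problem, with all matrices, regions and parameter values chosen as illustrative assumptions:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||Ax - b||^2 + ||diag(lam) x||^2 via the normal equations."""
    L = np.diag(lam)
    return np.linalg.solve(A.T @ A + L.T @ L, A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))
x_true = np.zeros(40)
x_true[15:25] = 1.0                      # hypothetical "target region"
b = A @ x_true + 0.05 * rng.standard_normal(60)

lam_const = np.full(40, 2.0)             # constant regularization everywhere
lam_var = np.full(40, 2.0)
lam_var[15:25] = 0.2                     # weaker regularization in the target region
x_const = tikhonov_solve(A, b, lam_const)
x_var = tikhonov_solve(A, b, lam_var)

err_const = np.linalg.norm(x_const - x_true)
err_var = np.linalg.norm(x_var - x_true)
```

Lowering the weight only where structure is expected reduces the shrinkage bias in the target region while the strong weight elsewhere still suppresses noise, which is the trade-off the abstract describes.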

  11. Predictability of blocking

    International Nuclear Information System (INIS)

    Tosi, E.; Ruti, P.; Tibaldi, S.; D'Andrea, F.

    1994-01-01

    Tibaldi and Molteni (1990, hereafter referred to as TM) had previously investigated operational blocking predictability by the ECMWF model and the possible relationships between model systematic error and blocking in the winter season of the Northern Hemisphere, using seven years of ECMWF operational archives of analyses and day 1 to 10 forecasts. They showed that fewer blocking episodes than in the real atmosphere were generally simulated by the model, and that this deficiency increased with increasing forecast time. As a consequence of this, a major contribution to the systematic error in the winter season was shown to derive from the inability of the model to properly forecast blocking. In this study, the analysis performed in TM for the first seven winter seasons of the ECMWF operational model is extended to the subsequent five winters, during which model development, reflecting both resolution increases and parametrisation modifications, continued unabated. In addition the objective blocking index developed by TM has been applied to the observed data to study the natural low frequency variability of blocking. The ability to simulate blocking of some climate models has also been tested

  12. From recreational to regular drug use

    DEFF Research Database (Denmark)

    Järvinen, Margaretha; Ravn, Signe

    2011-01-01

    This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...

  13. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  14. New regular black hole solutions

    International Nuclear Information System (INIS)

    Lemos, Jose P. S.; Zanchin, Vilson T.

    2011-01-01

In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, whose exterior region is Reissner-Nordstroem, and with a charged thin layer in between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.

  15. On geodesics in low regularity

    Science.gov (United States)

    Sämann, Clemens; Steinbauer, Roland

    2018-02-01

    We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.

  16. A method to assess the influence of individual player performance distribution on match outcome in team sports.

    Science.gov (United States)

    Robertson, Sam; Gupta, Ritu; McIntosh, Sam

    2016-10-01

    This study developed a method to determine whether the distribution of individual player performances can be modelled to explain match outcome in team sports, using Australian Rules football as an example. Player-recorded values (converted to a percentage of team total) in 11 commonly reported performance indicators were obtained for all regular season matches played during the 2014 Australian Football League season, with team totals also recorded. Multiple features relating to heuristically determined percentiles for each performance indicator were then extracted for each team and match, along with the outcome (win/loss). A generalised estimating equation model comprising eight key features was developed, explaining match outcome at a median accuracy of 63.9% under 10-fold cross-validation. Lower 75th, 90th and 95th percentile values for team goals and higher 25th and 50th percentile values for disposals were linked with winning. Lower 95th and higher 25th percentile values for Inside 50s and Marks, respectively, were also important contributors. These results provide evidence supporting team strategies which aim to obtain an even spread of goal scorers in Australian Rules football. The method developed in this investigation could be used to quantify the importance of individual contributions to overall team performance in team sports.
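The feature-extraction step described above (player-recorded values converted to a percentage of the team total, then summarized by percentiles) can be sketched as follows; the goal counts and percentile set are hypothetical, not the AFL dataset:

```python
import numpy as np

def percentile_features(player_values, percentiles=(25, 50, 75, 90, 95)):
    """Convert player-recorded values to a share of the team total,
    then summarise the distribution by a set of percentiles."""
    v = np.asarray(player_values, dtype=float)
    shares = 100.0 * v / v.sum()          # each player's % of the team total
    return {p: float(np.percentile(shares, p)) for p in percentiles}

# Hypothetical goals-per-player lines for two teams in one match.
even_spread = [2, 2, 2, 2, 2, 2]          # goals shared evenly
top_heavy   = [8, 2, 1, 1, 0, 0]          # goals concentrated in one player

f_even = percentile_features(even_spread)
f_top  = percentile_features(top_heavy)
```

Here the even-spread team has a much lower 95th-percentile goal share than the top-heavy team, which is the kind of feature the study links with winning.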

  17. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.

  18. Homogeneous bilateral block shifts

    Indian Academy of Sciences (India)

    Douglas class were classified in [3]; they are unilateral block shifts of arbitrary block size (i.e. dim H(n) can be anything). However, no examples of irreducible homogeneous bilateral block shifts of block size larger than 1 were known until now.

  19. Temporal regularization of ultrasound-based liver motion estimation for image-guided radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J. [Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS foundation Trust, Sutton, London SM2 5PT (United Kingdom)

    2016-01-15

    Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer leads to improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking
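The abstract does not give the ABST equations, but the general pattern of an α–β state observer gated by a similarity threshold can be sketched as follows; the 1-D state and all parameter values are illustrative assumptions, not the published method:

```python
def alpha_beta_track(measurements, similarities, dt=1.0,
                     alpha=0.5, beta=0.1, sim_threshold=0.8):
    """Generic alpha-beta filter: predict position/velocity, then correct
    with the template-matching measurement ONLY if its similarity score
    (e.g. a normalized cross-correlation peak) passes the threshold."""
    x, v = measurements[0], 0.0       # initial state
    out = []
    for z, s in zip(measurements, similarities):
        x_pred = x + v * dt           # predict from the motion model
        if s >= sim_threshold:        # correct only on trustworthy matches
            r = z - x_pred            # innovation
            x = x_pred + alpha * r
            v = v + (beta / dt) * r
        else:
            x = x_pred                # coast through low-similarity frames
        out.append(x)
    return out

# A slowly drifting vessel position with one spurious measurement (frame 3).
zs   = [0.0, 0.1, 0.2, 5.0, 0.4, 0.5]
sims = [0.9, 0.9, 0.9, 0.3, 0.9, 0.9]  # the outlier has a low similarity score
track = alpha_beta_track(zs, sims)
```

Because frame 3 fails the similarity gate, the filter keeps its prediction instead of jumping to the spurious match, which is the temporal-regularization behaviour the paper exploits.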

  20. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparsity nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performed better than the comparative ℓ1-minimization algorithm in both spatial aggregation and location accuracy.
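A joint ℓ1-plus-Laplacian regularized least-squares problem of the kind described can be sketched with a plain proximal-gradient (ISTA) loop. This is a generic illustration under assumed parameters, not the authors' gradient-projection algorithm, and the chain-graph Laplacian stands in for a real FMT mesh:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_l1_laplacian(A, b, Lap, lam1=0.05, lam2=0.5, n_iter=300):
    """Minimise 0.5||Ax-b||^2 + (lam2/2) x^T Lap x + lam1 ||x||_1 by
    proximal gradient: a gradient step on the smooth part, then the
    soft-thresholding prox of the l1 term."""
    x = np.zeros(A.shape[1])
    # Step size from a Lipschitz bound on the smooth part.
    Lip = np.linalg.norm(A, 2) ** 2 + lam2 * np.linalg.norm(Lap, 2)
    t = 1.0 / Lip
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + lam2 * (Lap @ x)
        x = soft_threshold(x - t * grad, t * lam1)
    return x

# Chain-graph Laplacian: encourages spatially smooth, sparse solutions.
n = 30
Lap = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rng = np.random.default_rng(1)
A = rng.standard_normal((20, n))            # underdetermined system
x_true = np.zeros(n); x_true[10:15] = 1.0   # a compact "fluorescent target"
b = A @ x_true
x_hat = ista_l1_laplacian(A, b, Lap)
```

The ℓ1 term promotes sparsity while the Laplacian term couples neighbouring unknowns, mirroring the joint model in the abstract.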

  1. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

Sparsity-inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly than those above, and therefore imposes strong sparsity and...
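The excerpt does not give the exact LCNR formula; one piecewise-linear penalty consistent with the description (full slope below a threshold, leaky slope above it) might look like the following. Both the functional form and the parameter values are assumptions for illustration:

```python
import numpy as np

def leaky_capped_norm(w, theta=1.0, lam=1.0, eps=0.1):
    """Illustrative piecewise-linear penalty in the spirit of LCNR:
    slope lam for |w| <= theta (strong shrinkage of small weights),
    leaky slope eps*lam above theta (large weights barely penalised).
    The exact functional form in the paper may differ."""
    a = np.abs(np.asarray(w, dtype=float))
    return np.where(a <= theta,
                    lam * a,
                    lam * theta + eps * lam * (a - theta))

vals = leaky_capped_norm([0.5, 2.0])  # small weight at full slope, large weight leaky
```

Unlike plain $\ell_1$, the marginal penalty drops once a weight clears the threshold, so large, informative weights are not shrunk as aggressively.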

  2. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value under the posterior and the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
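Specialized to a Gaussian prior over linear-regression weights, the stated relation (prior and posterior expectations of the regularizer agree) yields a simple fixed-point update for the prior precision. The sketch below is my illustration under an assumed, known noise precision, not necessarily the paper's algorithm:

```python
import numpy as np

def adaptive_ridge(A, y, beta=25.0, n_iter=50):
    """Fixed-point iteration for the prior precision alpha of a Gaussian
    prior w ~ N(0, alpha^{-1} I): iterate until the prior expectation of
    the regularizer ||w||^2, which is d/alpha, matches its posterior
    expectation ||m||^2 + tr(S). beta is an assumed, known noise precision."""
    d = A.shape[1]
    alpha = 1.0
    for _ in range(n_iter):
        S = np.linalg.inv(alpha * np.eye(d) + beta * A.T @ A)  # posterior covariance
        m = beta * S @ A.T @ y                                  # posterior mean
        alpha = d / (m @ m + np.trace(S))   # enforce E_prior = E_posterior
    return alpha, m

# Illustrative synthetic data: noise std 0.2 corresponds to beta = 25.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 5))
w_true = rng.standard_normal(5)
y = A @ w_true + 0.2 * rng.standard_normal(40)
alpha, m = adaptive_ridge(A, y)
```

At convergence the regularization strength is chosen by the data rather than by hand, which is the point of the adaptive scheme.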

  3. Building block method: a bottom-up modular synthesis methodology for distributed compliant mechanisms

    Directory of Open Access Journals (Sweden)

    G. Krishnan

    2012-03-01

Full Text Available Approaches for synthesizing topologies of compliant mechanisms are based on rigid-link kinematic designs or completely automated optimization techniques. These designs yield mechanisms that match the kinematic specifications as a whole, but seldom yield user insight on how each constituent member contributes towards the overall mechanism performance. This paper reviews recent developments in building-block-based design of compliant mechanisms. A key aspect of such a methodology is formulating a representation of compliance at (i) a single unique point of interest in terms of geometric quantities such as ellipses and vectors, and (ii) relative compliance between distinct input(s) and output(s) in terms of load flow. This geometric representation provides a direct mapping between the mechanism geometry and its behavior, and is used to characterize simple deformable members that form a library of building blocks. The design space spanned by the building block library guides the decomposition of a given problem specification into tractable sub-problems that can each be solved from an entry in the library. The effectiveness of this geometric representation aids user insight in design, and enables discovery of trends and guidelines to obtain practical conceptual designs.

  4. Exclusion of children with intellectual disabilities from regular ...

    African Journals Online (AJOL)

This study investigated why teachers exclude children with intellectual disability from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data. Results revealed that 57.4% of regular teachers could not cope with children with ID ...

  5. Detection block

    International Nuclear Information System (INIS)

    Bezak, A.

    1987-01-01

    A diagram is given of a detection block used for monitoring burnup of nuclear reactor fuel. A shielding block is an important part of the detection block. It stabilizes the fuel assembly in the fixing hole in front of a collimator where a suitable gamma beam is defined for gamma spectrometry determination of fuel burnup. The detector case and a neutron source case are placed on opposite sides of the fixing hole. For neutron measurement for which the water in the tank is used as a moderator, the neutron detector-fuel assembly configuration is selected such that neutrons from spontaneous fission and neutrons induced with the neutron source can both be measured. The patented design of the detection block permits longitudinal travel and rotation of the fuel assembly to any position, and thus more reliable determination of nuclear fuel burnup. (E.S.). 1 fig

  6. On infinite regular and chiral maps

    OpenAIRE

    Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán

    2015-01-01

    We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.

  7. 29 CFR 779.18 - Regular rate.

    Science.gov (United States)

    2010-07-01

    ... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  8. Latent palmprint matching.

    Science.gov (United States)

    Jain, Anil K; Feng, Jianjiang

    2009-06-01

    The evidential value of palmprints in forensic applications is clear as about 30 percent of the latents recovered from crime scenes are from palms. While biometric systems for palmprint-based personal authentication in access control type of applications have been developed, they mostly deal with low-resolution (about 100 ppi) palmprints and only perform full-to-full palmprint matching. We propose a latent-to-full palmprint matching system that is needed in forensic applications. Our system deals with palmprints captured at 500 ppi (the current standard in forensic applications) or higher resolution and uses minutiae as features to be compatible with the methodology used by latent experts. Latent palmprint matching is a challenging problem because latent prints lifted at crime scenes are of poor image quality, cover only a small area of the palm, and have a complex background. Other difficulties include a large number of minutiae in full prints (about 10 times as many as fingerprints), and the presence of many creases in latents and full prints. A robust algorithm to reliably estimate the local ridge direction and frequency in palmprints is developed. This facilitates the extraction of ridge and minutiae features even in poor quality palmprints. A fixed-length minutia descriptor, MinutiaCode, is utilized to capture distinctive information around each minutia and an alignment-based minutiae matching algorithm is used to match two palmprints. Two sets of partial palmprints (150 live-scan partial palmprints and 100 latent palmprints) are matched to a background database of 10,200 full palmprints to test the proposed system. Despite the inherent difficulty of latent-to-full palmprint matching, rank-1 recognition rates of 78.7 and 69 percent, respectively, were achieved in searching live-scan partial palmprints and latent palmprints against the background database.
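A toy version of alignment-based minutiae matching can illustrate the idea: hypothesize an alignment from each latent/full minutia pair, transform the latent minutiae, and count how many then agree within a distance and angle tolerance. This miniature sketch omits descriptors such as MinutiaCode and all of the robustness machinery a real system needs; the minutiae and tolerances are invented:

```python
import math

def align_and_count(latent, full, tol_xy=8.0, tol_theta=math.radians(15)):
    """Toy alignment-based minutiae matcher: try aligning each latent
    minutia (x, y, angle) with each full-print minutia via a rigid
    translation + rotation, and return the best number of latent
    minutiae that then agree with some full-print minutia."""
    def transform(p, ref_from, ref_to):
        dth = ref_to[2] - ref_from[2]
        c, s = math.cos(dth), math.sin(dth)
        x, y = p[0] - ref_from[0], p[1] - ref_from[1]
        return (ref_to[0] + c * x - s * y, ref_to[1] + s * x + c * y, p[2] + dth)

    best = 0
    for a in latent:
        for b in full:                       # hypothesized correspondence a <-> b
            count = 0
            for p in latent:
                q = transform(p, a, b)
                if any(math.hypot(q[0] - f[0], q[1] - f[1]) <= tol_xy and
                       abs((q[2] - f[2] + math.pi) % (2 * math.pi) - math.pi) <= tol_theta
                       for f in full):
                    count += 1
            best = max(best, count)
    return best

# The "latent" is a translated copy of part of the full print.
full = [(10, 10, 0.0), (40, 15, 0.5), (25, 40, 1.0), (60, 60, 2.0)]
latent = [(x + 100, y + 50, th) for (x, y, th) in full[:3]]
score = align_and_count(latent, full)  # best alignment recovers all 3 latent minutiae
```

Real palmprint matchers must also cope with creases, distortion, and the thousand-plus minutiae of a full print, which is why descriptor-based pre-filtering matters.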

  9. Continuum regularized Yang-Mills theory

    International Nuclear Information System (INIS)

    Sadun, L.A.

    1987-01-01

Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions

  10. Comparisons of clustered regularly interspaced short palindromic repeats and viromes in human saliva reveal bacterial adaptations to salivary viruses.

    Science.gov (United States)

    Pride, David T; Salzman, Julia; Relman, David A

    2012-09-01

    Explorations of human microbiota have provided substantial insight into microbial community composition; however, little is known about interactions between various microbial components in human ecosystems. In response to the powerful impact of viral predation, bacteria have acquired potent defences, including an adaptive immune response based on the clustered regularly interspaced short palindromic repeats (CRISPRs)/Cas system. To improve our understanding of the interactions between bacteria and their viruses in humans, we analysed 13 977 streptococcal CRISPR sequences and compared them with 2 588 172 virome reads in the saliva of four human subjects over 17 months. We found a diverse array of viruses and CRISPR spacers, many of which were specific to each subject and time point. There were numerous viral sequences matching CRISPR spacers; these matches were highly specific for salivary viruses. We determined that spacers and viruses coexist at the same time, which suggests that streptococcal CRISPR/Cas systems are under constant pressure from salivary viruses. CRISPRs in some subjects were just as likely to match viral sequences from other subjects as they were to match viruses from the same subject. Because interactions between bacteria and viruses help to determine the structure of bacterial communities, CRISPR-virus analyses are likely to provide insight into the forces shaping the human microbiome. © 2012 Society for Applied Microbiology and Blackwell Publishing Ltd.
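The core computational step, screening CRISPR spacers against virome reads, reduces to substring search on both strands. The following is a minimal sketch with invented sequences (exact matching only; real analyses tolerate mismatches and operate on the actual sequencing data):

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def match_spacers(spacers, reads):
    """Report which virome reads each CRISPR spacer occurs in,
    checking both strands via exact substring match."""
    hits = {}
    for name, sp in spacers.items():
        targets = (sp, revcomp(sp))
        hits[name] = [i for i, r in enumerate(reads)
                      if any(t in r for t in targets)]
    return hits

# Toy data: one spacer with a forward and a reverse-complement hit.
spacers = {"sp1": "ACGTACGTAC", "sp2": "GGGGCCCCGG"}
reads = ["TTTACGTACGTACTTT",          # contains sp1 on the forward strand
         "AAAGTACGTACGTAAA"]          # contains revcomp(sp1)
hits = match_spacers(spacers, reads)
```

Spacers with many read hits are candidates for the virus-targeting immunity the study tracks over time.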

  11. Electron/photon matched field technique for treatment of orbital disease

    International Nuclear Information System (INIS)

    Arthur, Douglas W.; Zwicker, Robert D.; Garmon, Pamela W.; Huang, David T.; Schmidt-Ullrich, Rupert K.

    1997-01-01

Purpose: A number of approaches have been described in the literature for irradiation of malignant and benign diseases of the orbit. Techniques described to date do not deliver a homogeneous dose to the orbital contents while sparing the cornea and lens of excessive dose. This is a result of the geometry encountered in this region and the fact that the target volume, which includes the periorbital and retroorbital tissues but excludes the cornea, anterior chamber, and lens, cannot be readily accommodated by photon beams alone. To improve the dose distribution for these treatments, we have developed a technique that combines a low-energy electron field carefully matched with modified photon fields to achieve acceptable dose coverage and uniformity. Methods and Materials: An anterior electron field and a lateral photon field setup is used to encompass the target volume. Modification of these fields permits accurate matching as well as conformation of the dose distribution to the orbit. A flat-surfaced wax compensator assures uniform electron penetration across the field, and a sunken lead alloy eye block prevents excessive dose to the central structures of the anterior segment. The anterior edge of the photon field is modified by broadening the penumbra using a form of pseudodynamic collimation. Direct measurements using film and ion chamber dosimetry were used to study the characteristics of the fall-off region of the electron field and the penumbra of the photon fields. From the data collected, the technique for accurate field matching and dose uniformity was generated. Results: The isodose curves produced with this treatment technique demonstrate homogeneous dose coverage of the orbit, including the paralenticular region, and sufficient dose sparing of the anterior segment. The posterior lens accumulates less than 40% of the prescribed dose, and the lateral aspect of the lens receives less than 30%. A dose variation in the match region of ±12% is confronted when

  12. Physical characteristics of elite adolescent female basketball players and their relationship to match performance.

    Science.gov (United States)

    Fort-Vanmeerhaeghe, Azahara; Montalvo, Alicia; Latinjak, Alexander; Unnithan, Viswanath

    2016-12-01

There were two aims of this study: first, to investigate physical fitness and match performance differences between under-16 (U16) and under-18 (U18) female basketball players, and second, to evaluate the relationship between physical fitness and game-related performance. Twenty-three young, elite, female Spanish basketball players (16.2 ± 1.2 years) participated in the study. The sample was divided into two groups: U16 and U18 players. The average scores from pre- and post-season physical fitness measurements were used for subsequent analyses. Anthropometric variables were also measured. To evaluate game performance, game-related statistics, including the number of games and minutes played and points, rebounds, assists, steals and blocks per game, were recorded for every competitive match in one season. When anthropometric and physical performance variables were compared between groups, the U18 group demonstrated significantly better agility, anaerobic power, repeated sprint ability and aerobic power (p ≤ 0.005). These findings can help optimize training programs for young, elite female basketball players.

  13. Quantity precommitment and price matching

    DEFF Research Database (Denmark)

    Tumennasan, Norovsambuu

We revisit the question of whether price matching is anti-competitive in a capacity-constrained duopoly setting. We show that the effect of price matching depends on capacity. Specifically, price matching has no effect when capacity is relatively low, but it benefits the firms when capacity is relatively high. Interestingly, when capacity is in an intermediate range, price matching benefits only the small firm but does not affect the large firm in any way. Therefore, one has to consider capacity seriously when evaluating if price matching is anti-competitive. If the firms choose their capacities simultaneously before pricing decisions, then the effect of price matching is either pro-competitive or ambiguous. We show that if the cost of capacity is high, then price matching can only (weakly) decrease the market price. On the other hand, if the cost of capacity is low, then the effect of price matching

  14. Oblique Multi-Camera Systems - Orientation and Dense Matching Issues

    Science.gov (United States)

    Rupnik, E.; Nex, F.; Remondino, F.

    2014-03-01

The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users, allowing the use of oblique images in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating and administration services, etc. The paper reports an overview of the current commercial oblique systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  15. Synthesis of amylose-block-polystyrene rod-coil block copolymers

    NARCIS (Netherlands)

    Loos, Katja; Stadler, Reimund

    1997-01-01

    In the present communication we demonstrate the synthesis of a hybrid block copolymer based on the combination of a biopolymer (amylose) with a synthetic block (polystyrene). To obtain such materials, amino-functionalized polymers were modified with maltoheptaose moieties that serve as initiators

  16. Regularity effect in prospective memory during aging

    OpenAIRE

    Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique

    2016-01-01

Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...

  17. Paravertebral Block Plus Thoracic Wall Block versus Paravertebral Block Alone for Analgesia of Modified Radical Mastectomy: A Retrospective Cohort Study.

    Directory of Open Access Journals (Sweden)

    Nai-Liang Li

Full Text Available Paravertebral block placement was the main anesthetic technique for modified radical mastectomy in our hospital until February 2014, when its combination with blocks targeting the pectoral musculature was initiated. We compared the analgesic effects of paravertebral blocks with or without blocks targeting the pectoral musculature for modified radical mastectomy. We retrospectively collected data from a single surgeon and anesthesiologist from June 1, 2012, to May 31, 2015. Intraoperative sedative and analgesic requirements, time to the first analgesic request, postoperative analgesic doses, patient satisfaction, and complications were compared. Fifty-four patients received a paravertebral block alone (PECS 0), and 46 received a paravertebral block combined with blocks targeting the pectoral musculature (PECS 1). The highest intraoperative effect-site concentration of propofol was significantly lower in the PECS 1 group than in the PECS 0 group [2.3 (1.5, 2.8) vs 2.5 (1.5, 4) μg/mL, p = 0.0014]. The intraoperative rescue analgesic dose was significantly lower in the PECS 1 group [0 (0, 25) vs 0 (0, 75) mg of ketamine, p = 0.0384]. Furthermore, the PECS 1 group had a significantly longer time to the first analgesic request [636.5 (15, 720) vs 182.5 (14, 720) min, p = 0.0001]. After further adjustment for age, body mass index, American Society of Anesthesiologists Physical Status classification, chronic pain history, incidence of superficial cervical plexus block placement, and operation duration, blocks targeting the pectoral musculature were determined to be the only significant factor (hazard ratio, 0.36; 95% confidence interval, 0.23-0.58; p < 0.0001). Very few patients used potent analgesics including morphine and ketorolac; the cumulative use of morphine or ketorolac was similar in the study groups. However, the incidence of all analgesic use, namely morphine, ketorolac, acetaminophen, and celecoxib, was significantly lower in the PECS 1 group [3

  18. 20 CFR 226.14 - Employee regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  19. Effect of dexamethasone in low volume supraclavicular brachial plexus block: A double-blinded randomized clinical study

    Directory of Open Access Journals (Sweden)

    Arun Kumar Alarasan

    2016-01-01

    Full Text Available Background and Aims: With the use of ultrasound, a minimal effective volume of 20 ml has been described for supraclavicular brachial plexus block. However, achieving a long duration of analgesia with this minimal volume remains a challenge. We aimed to determine the effect of dexamethasone on the onset and duration of analgesia in low volume supraclavicular brachial plexus block. Material and Methods: Sixty patients were randomly divided into two groups of 30 each. Group C received saline (2 ml) + 20 ml of 0.5% bupivacaine and Group D received dexamethasone (8 mg) + 20 ml of 0.5% bupivacaine in supraclavicular brachial plexus block. Hemodynamic variables and visual analog scale (VAS) scores were noted at regular intervals until 450 min. The onset and duration of sensory and motor block were measured. The incidence of a "halo" around the brachial plexus was observed. Student's t-test and Chi-square test were used for statistical analysis. Results: The onset of sensory and motor block was significantly earlier in the dexamethasone group (10.36 ± 1.99 and 12 ± 1.64 minutes) compared to the control group (12.9 ± 2.23 and 18.03 ± 2.41 minutes). The duration of sensory and motor block was significantly prolonged in the dexamethasone group (366 ± 28.11 and 337.33 ± 28.75 minutes) compared to the control group (242.66 ± 26.38 and 213 ± 26.80 minutes). The VAS score was significantly lower in the dexamethasone group after 210 min. A "halo" was present around the brachial plexus in all patients in both groups. Conclusion: The addition of dexamethasone significantly increases the duration of analgesia in patients receiving low volume supraclavicular brachial plexus block. No significant side effects were seen in patients receiving dexamethasone as an adjunct.

  20. The Interaction Between Schema Matching and Record Matching in Data Integration

    KAUST Repository

    Gu, Binbin; Li, Zhixu; Zhang, Xiangliang; Liu, An; Liu, Guanfeng; Zheng, Kai; Zhao, Lei; Zhou, Xiaofang

    2016-01-01

    Schema Matching (SM) and Record Matching (RM) are two necessary steps in integrating multiple relational tables of different schemas, where SM unifies the schemas and RM detects records referring to the same real-world entity. The two processes have

  1. Effect of blocking Rac1 expression in cholangiocarcinoma QBC939 cells

    Directory of Open Access Journals (Sweden)

    Liu Xudong

    2011-05-01

    Full Text Available Cholangiocarcinomas (CCs) are malignant tumors that originate from epithelial cells lining the biliary tree and gallbladder. Ras-related C3 botulinum toxin substrate 1 (Rac1), a small guanosine triphosphatase, is a critical mediator of various aspects of endothelial cell function. The objective of the present investigation was to study the effect of blocking Rac1 expression in CCs. Seventy-four extrahepatic CC (ECC) specimens and matched adjacent normal mucosa were obtained from the Department of Pathology, Inner Mongolia Medicine Hospital, between 2007 and 2009. Our results showed that the expression of Rac1 was significantly higher (53.12%) in tumor tissues than in normal tissues. Western blotting data indicated a significant reduction in Rac1-miRNA cell protein levels. The Rac1-miRNA cell growth rate was significantly different at 24, 48, and 72 h after transfection. Flow cytometry analysis showed that Rac1-miRNA cells undergo apoptosis more effectively than control QBC939 cells. Blocking Rac1 expression by RNAi effectively inhibits the growth of CCs. miRNA silencing of the Rac1 gene suppresses proliferation and induces apoptosis of QBC939 cells. These results suggest that Rac1 may be a new gene therapy target for CC. Blocking Rac1 expression in CC cells induces apoptosis of these tumor cells and may thus represent a new therapeutic approach.

  2. The expected value of possession in professional rugby league match-play.

    Science.gov (United States)

    Kempton, Thomas; Kennedy, Nicholas; Coutts, Aaron J

    2016-01-01

    This study estimated the expected point value for starting possessions in different field locations during rugby league match-play and calculated the mean expected points for each subsequent play during the possession. It also examined the origin of tries scored according to the method of gaining possession. Play-by-play data were taken from all 768 regular-season National Rugby League (NRL) matches during 2010-2013. A probabilistic model estimated the expected point outcome based on the net difference in points scored by a team in possession in a given situation. An iterative method was used to approximate the value of each situation based on actual scoring outcomes. Possessions commencing close to the opposition's goal-line had the highest expected point equity, which decreased as the location of the possession moved towards the team's own goal-line. Possessions following an opposition error, penalty or goal-line dropout had the highest likelihood of a try being scored on the set subsequent to their occurrence. In contrast, possessions that follow an opposition completed set or a restart were least likely to result in a try. The expected point values framework from our model has applications for informing playing strategy and assessing individual and team performance in professional rugby league.
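
    The iterative approximation of situation values described above can be sketched as a small fixed-point computation. The zone granularity, try probability, and point value below are invented for illustration (they are not the NRL estimates); the core idea is only that a possession's value is its immediate scoring chance minus the value the opponent inherits after a turnover.

```python
# Toy expected-points model: 10 field zones, 0 = own goal-line,
# 9 = opposition goal-line. All numbers are illustrative, not NRL data.
TRY_POINTS = 5.8   # assumed average value of a try plus conversion
ZONES = 10

def try_prob(zone):
    # Assumed scoring chance, rising with proximity to the opposition line.
    return 0.02 + 0.04 * zone

def expected_points(iterations=200):
    """Iteratively approximate each zone's value: the chance of scoring now,
    minus the value the opponent inherits at the mirrored zone after a turnover."""
    value = [0.0] * ZONES
    for _ in range(iterations):
        value = [try_prob(z) * TRY_POINTS
                 - (1 - try_prob(z)) * value[ZONES - 1 - z]
                 for z in range(ZONES)]
    return value

print([round(v, 2) for v in expected_points()])
```

    Run on real play-by-play outcomes, the same fixed-point idea yields an equity table of the kind the study reports: value rises toward the opposition goal-line and is negative deep inside a team's own half.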

  3. Regular algebra and finite machines

    CERN Document Server

    Conway, John Horton

    2012-01-01

    World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, commutative regular alg

  4. Influence of anchor block size on the thickness of adsorbed block copolymer layers

    NARCIS (Netherlands)

    Belder, G.F; ten Brinke, G.; Hadziioannou, G

    1997-01-01

    We present surface force data on three different polystyrene/poly(2-vinylpyridine) block copolymers (PS/P2VP) with a fixed size of the nonadsorbing PS block but widely varying sizes of the adsorbing P2VP block. With respect to the sizes of the two blocks, they range from moderately to highly

  5. Preliminary results on organization on the court, physical and technical performance of Brazilian professional futsal players: comparison between friendly pre-season and official match

    Directory of Open Access Journals (Sweden)

    Luiz Henrique Palucci Vieira

    2016-06-01

    Full Text Available Abstract The main aim of this study was to verify possible differences between a friendly pre-season match (FM) and an official in-season match (OM) regarding the physical, technical, and organizational performances of a professional Brazilian futsal team. Ten professional futsal athletes participated in this study. The matches were monitored with video cameras (30 Hz) and athlete trajectories were obtained with automatic tracking. The values obtained for distance covered per minute, percentage of distance covered at moderate intensity, team coverage area, spread, passes, possessions, ball touches, and successful passes per minute were greater for the OM than the FM. Conversely, the percentage of distance covered standing and walking was greater for the FM than the OM. We concluded that physical, technical, and tactical performances differ between a FM and an OM in futsal, and that these parameters influence one another in distinct ways. Future studies should verify whether pre-season tournaments reproduce demands similar to those of a regular-season official match.

  6. Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy

    Science.gov (United States)

    Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris

    2018-04-01

    We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver operating characteristics.
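
    As a point of reference for what the network is emulating, matched filtering itself reduces to sliding a known template over the whitened data and reading off the peak correlation. The toy chirp-like template and injection below are illustrative stand-ins, not the study's actual waveforms or code:

```python
import math
import random

def matched_filter_snr(data, template):
    """Slide a known template over whitened data and return the peak
    normalized correlation, a simple proxy for matched-filter SNR."""
    norm_t = math.sqrt(sum(t * t for t in template))
    best = 0.0
    for off in range(len(data) - len(template) + 1):
        s = sum(d * t for d, t in zip(data[off:off + len(template)], template))
        best = max(best, s / norm_t)
    return best

random.seed(0)
# Hypothetical chirp-like template and unit-variance "whitened" noise.
template = [math.sin(0.2 * i * i) * math.exp(-0.004 * (i - 80) ** 2)
            for i in range(160)]
noise = [random.gauss(0.0, 1.0) for _ in range(1024)]
signal = list(noise)
for i, t in enumerate(template):
    signal[300 + i] += 4.0 * t  # inject the template at a known offset

print(matched_filter_snr(noise, template), matched_filter_snr(signal, template))
```

    The injected data produce a far larger peak than noise alone; a classifier that matches this detector's receiver operating characteristics must, in effect, learn an internal analogue of this correlation.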

  7. Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy.

    Science.gov (United States)

    Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris

    2018-04-06

    We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver operating characteristics.

  8. 39 CFR 6.1 - Regular meetings, annual meeting.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  9. 78 FR 42080 - Privacy Act of 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match...

    Science.gov (United States)

    2013-07-15

    ... 1974; CMS Computer Match No. 2013-07; HHS Computer Match No. 1303; DoD-DMDC Match No. 18 AGENCY: Centers for Medicare & Medicaid Services (CMS), Department of Health and Human Services (HHS). ACTION... Act of 1974, as amended, this notice announces the establishment of a CMP that CMS plans to conduct...

  10. Combined KHFAC + DC nerve block without onset or reduced nerve conductivity after block

    Science.gov (United States)

    Franke, Manfred; Vrabec, Tina; Wainright, Jesse; Bhadra, Niloy; Bhadra, Narendra; Kilgore, Kevin

    2014-10-01

    Objective. Kilohertz frequency alternating current (KHFAC) waveforms have been shown to provide peripheral nerve conductivity block in many acute and chronic animal models. KHFAC nerve block could be used to address multiple disorders caused by neural over-activity, including blocking pain and spasticity. However, one drawback of KHFAC block is a transient activation of nerve fibers during the initiation of the nerve block, called the onset response. The objective of this study is to evaluate the feasibility of using charge-balanced direct current (CBDC) waveforms to temporarily block motor nerve conductivity distal to the KHFAC electrodes to mitigate the block onset response. Approach. A total of eight animals were used in this study. A set of four animals was used to assess the feasibility and reproducibility of a combined KHFAC + CBDC block. A subsequent randomized study, conducted on a second set of four animals, compared the onset response resulting from KHFAC alone and from combined KHFAC + CBDC waveforms. To quantify the onset, peak forces and the force-time integral were measured during KHFAC block initiation. Nerve conductivity was monitored throughout the study by comparing muscle twitch forces evoked by supra-maximal stimulation proximal and distal to the block electrodes. Each animal of the randomized study received at least 300 s (range: 318-1563 s) of cumulative DC to investigate the impact of combined KHFAC + CBDC on nerve viability. Main results. The peak onset force was reduced significantly from 20.73 N (range: 18.6-26.5 N) with KHFAC alone to 0.45 N (range: 0.2-0.7 N) with the combined CBDC and KHFAC block waveform (p < 0.05). No change in nerve conductivity was observed after application of the combined KHFAC + CBDC block relative to KHFAC waveforms. Significance. The distal application of CBDC can significantly reduce or even completely prevent the KHFAC onset response without a change in nerve conductivity.

  11. Long-term interference at the semantic level: Evidence from blocked-cyclic picture matching.

    Science.gov (United States)

    Wei, Tao; Schnur, Tatiana T

    2016-01-01

    Processing semantically related stimuli creates interference across various domains of cognition, including language and memory. In this study, we identify the locus and mechanism of interference when retrieving meanings associated with words and pictures. Subjects matched a probe stimulus (e.g., cat) to its associated target picture (e.g., yarn) from an array of unrelated pictures. Across trials, probes were either semantically related or unrelated. To test the locus of interference, we presented probes as either words or pictures. If semantic interference occurs at the stage common to both tasks, that is, access to semantic representations, then interference should occur in both probe presentation modalities. Results showed clear semantic interference effects independent of presentation modality and lexical frequency, confirming a semantic locus of interference in comprehension. To test the mechanism of interference, we repeated trials across 4 presentation cycles and manipulated the number of unrelated intervening trials (zero vs. two). We found that semantic interference was additive across cycles and survived 2 intervening trials, demonstrating interference to be long-lasting as opposed to short-lived. However, interference was smaller with zero versus 2 intervening trials, which we interpret to suggest that short-lived facilitation counteracted the long-lived interference. We propose that retrieving meanings associated with words/pictures from the same semantic category yields both interference due to long-lasting changes in connection strength between semantic representations (i.e., incremental learning) and facilitation caused by short-lived residual activation.

  12. How players exploit variability and regularity of game actions in female volleyball teams.

    Science.gov (United States)

    Ramos, Ana; Coutinho, Patrícia; Silva, Pedro; Davids, Keith; Mesquita, Isabel

    2017-05-01

    Variability analysis has been used to understand how competitive constraints shape different behaviours in team sports. In this study, we analysed and compared variability of tactical performance indices in players within complex I at two different competitive levels in volleyball. We also examined whether variability was influenced by set type and period. Eight matches from the 2012 Olympics competition and from the Portuguese national league in the 2014-2015 season were analysed (1496 rallies). Variability of setting conditions, attack zone, attack tempo and block opposition was assessed using Shannon entropy measures. Magnitude-based inferences were used to analyse the practical significance of compared values of selected variables. Results showed differences between elite and national teams for all variables, which were co-adapted to the competitive constraints of set type and set periods. Elite teams exploited system stability in setting conditions and block opposition, but greater unpredictability in zone and tempo of attack. These findings suggest that uncertainty in attacking actions was a key factor that could only be achieved with greater performance stability in other game actions. Data suggested how coaches could help setters develop the capacity to play at faster tempos, diversifying attack zones, especially at critical moments in competition.
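
    Shannon entropy, the measure used here for variability, can be computed directly from the frequencies of observed actions. The zone sequences below are invented to illustrate the contrast the authors describe between unpredictable and stable game actions:

```python
import math
from collections import Counter

def shannon_entropy(events):
    """Shannon entropy (bits) of a sequence of discrete game actions;
    higher values mean less predictable behaviour."""
    counts = Counter(events)
    n = len(events)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Illustrative (invented) attack-zone sequences: one team spreads its
# attacks across zones, the other favours a single zone.
varied      = ["z1", "z2", "z3", "z4", "z1", "z2", "z3", "z4"]
predictable = ["z4", "z4", "z4", "z4", "z4", "z4", "z1", "z2"]

print(round(shannon_entropy(varied), 3), round(shannon_entropy(predictable), 3))
```

    Comparing such entropy values across set types and set periods is, in essence, what the study's magnitude-based inference step operates on.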

  13. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  14. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-04-19

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  15. Automating InDesign with Regular Expressions

    CERN Document Server

    Kahrel, Peter

    2006-01-01

    If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
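
    As a minimal illustration of the kind of pattern-based find-and-replace the booklet covers (sketched here with Python's re module; InDesign's GREP dialect differs in details):

```python
import re

# Collapse runs of spaces, then normalize "fig. 3"-style references
# to "Figure 3". The sample text and conventions are made up.
text = "See  fig. 3 and   fig. 12 for details."
text = re.sub(r" {2,}", " ", text)                     # collapse multiple spaces
text = re.sub(r"\bfig\.\s*(\d+)", r"Figure \1", text)  # rewrite figure references
print(text)  # → See Figure 3 and Figure 12 for details.
```

    The capture group `(\d+)` and backreference `\1` are exactly the features that take such cleanups beyond basic search and replace.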

  16. 78 FR 48169 - Privacy Act of 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match...

    Science.gov (United States)

    2013-08-07

    ... 1974; CMS Computer Match No. 2013-02; HHS Computer Match No. 1306; DoD-DMDC Match No. 12 AGENCY: Department of Health and Human Services (HHS), Centers for Medicare & Medicaid Services (CMS). ACTION: Notice... of 1974, as amended, this notice establishes a CMP that CMS plans to conduct with the Department of...

  17. Abdominal wall blocks in adults

    DEFF Research Database (Denmark)

    Børglum, Jens; Gögenür, Ismail; Bendtsen, Thomas F

    2016-01-01

    Purpose of review: Abdominal wall blocks in adults have evolved much during the last decade, particularly with the introduction of ultrasound-guided (USG) blocks. This review highlights recent advances in block techniques within this field and proposes directions for future research. Recent findings: Ultrasound guidance is now considered the gold standard for abdominal wall blocks in adults, even though some landmark-based blocks are still being investigated. The efficiency of USG transversus abdominis plane blocks in relation to many surgical procedures involving the abdominal wall ... been introduced with success. Future research should also investigate the effect of specific abdominal wall blocks on the neuroendocrine and inflammatory stress response after surgery. Summary: USG abdominal wall blocks in adults are commonplace techniques today. Most abdominal wall blocks are assigned ...

  18. Role model and prototype matching: Upper-secondary school students’ meetings with tertiary STEM students

    Directory of Open Access Journals (Sweden)

    Eva Lykkegaard

    2016-04-01

    Full Text Available Previous research has found that young people's prototypes of science students and scientists affect their inclination to choose tertiary STEM programs (Science, Technology, Engineering and Mathematics). Consequently, many recruitment initiatives include role models to challenge these prototypes. The present study followed 15 STEM-oriented upper-secondary school students from university-distant backgrounds during and after their participation in an 18-month-long university-based recruitment and outreach project involving tertiary STEM students as role models. The analysis focusses on how the students' meetings with the role models affected their thoughts concerning STEM students and attending university. The regular self-to-prototype matching process was shown, in real-life role-model meetings, to be extended to a more complex three-way matching process between students' self-perceptions, prototype images and situation-specific conceptions of role models. Furthermore, the study underlined the positive effect of prolonged role-model contact, the importance of using several role models, and that traditional school subjects fostered more resistant prototype images than unfamiliar ones did.

  19. Self-consistent EXAFS PDF Projection Method by Matched Correction of Fourier Filter Signal Distortion

    International Nuclear Information System (INIS)

    Lee, Jay Min; Yang, Dong-Seok

    2007-01-01

    Inverse problem solving computation was performed to recover the PDF (pair distribution function) from EXAFS data simulated with FEFF. For a realistic comparison with experimental data, we chose a model of the first sub-shell Mn-O pair showing the Jahn-Teller distortion in crystalline LaMnO3. To restore the Fourier filtering signal distortion involved in the first sub-shell information isolated from higher shell contents, the relevant distortion matching function was computed initially from the proximity model, and iteratively from the prior guess during consecutive regularization computation. Adaptive computation of the EXAFS background correction is an issue for algorithm development, but our preliminary test was performed under a simulated background correction that perfectly excludes the higher shell interference. In our numerical results, efficient convergence of the iterative solution indicates a self-consistent tendency in which the true PDF solution is confirmed as a counterpart of the genuine chi-data, provided that the background correction function is iteratively solved using an extended algorithm of MEPP (Matched EXAFS PDF Projection) under development

  20. Comparative study between ultrasound guided TAP block and paravertebral block in upper abdominal surgeries

    Directory of Open Access Journals (Sweden)

    Ruqaya M Elsayed Goda

    2017-01-01

    Conclusion: We concluded that ultrasound-guided transversus abdominis plane block and thoracic paravertebral block are safe and effective anesthetic techniques for upper abdominal surgery, with longer and more potent postoperative analgesia from the thoracic paravertebral block than from the transversus abdominis plane block.

  1. Optimal behaviour can violate the principle of regularity.

    Science.gov (United States)

    Trimmer, Pete C

    2013-07-22

    Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory, based on axioms including transitivity, regularity and the independence of irrelevant alternatives, is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.

  2. Automatic Matching of Multi-Source Satellite Images: A Case Study on ZY-1-02C and ETM+

    Directory of Open Access Journals (Sweden)

    Bo Wang

    2017-10-01

    Full Text Available The ever-growing number of applications for satellites is being compromised by their poor direct positioning precision. Existing orthoimages, such as enhanced thematic mapper (ETM+) orthoimages, can provide georeferences or improve the geo-referencing accuracy of satellite images, such as ZY-1-02C images, that have unsatisfactory positioning precision, thus enhancing their processing efficiency and application. In this paper, a feasible image matching approach using multi-source satellite images is proposed on the basis of an experiment carried out with ZY-1-02C Level 1 images and ETM+ orthoimages. The proposed approach overcame differences in rotation angle, scale, and translation between images. The rotation and scale variances were evaluated on the basis of rational polynomial coefficients. The translation vectors were generated after blocking the overall phase correlation. Then, normalized cross-correlation and least-squares matching were applied for matching. Finally, the gross errors of the corresponding points were eliminated by local statistic vectors in a TIN structure. Experimental results showed a matching precision of less than two pixels (root-mean-square error), and comparison results indicated that the proposed method outperforms Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), and Affine-Scale Invariant Feature Transform (A-SIFT) in terms of reliability and efficiency.
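
    The normalized cross-correlation step can be illustrated in isolation. The sketch below works on a 1-D brightness profile for brevity (real image patches are 2-D) with made-up values; NCC's insensitivity to gain and offset changes is what makes it suitable for multi-source imagery:

```python
import math

def ncc(window, template):
    """Normalized cross-correlation of two equally sized patches (range [-1, 1])."""
    n = len(window)
    mw = sum(window) / n
    mt = sum(template) / n
    num = sum((w - mw) * (t - mt) for w, t in zip(window, template))
    den = math.sqrt(sum((w - mw) ** 2 for w in window)
                    * sum((t - mt) ** 2 for t in template))
    return num / den if den else 0.0

def best_match(image, template):
    """Slide the template over the profile and return the offset with the
    highest NCC score."""
    scores = [ncc(image[i:i + len(template)], template)
              for i in range(len(image) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

# Illustrative: the template's pattern appears at offset 5, but with a
# gain/offset change that plain correlation would be sensitive to.
template = [0, 2, 5, 2, 0]
image = [1, 1, 1, 1, 1, 10, 30, 60, 30, 10, 1, 1]
print(best_match(image, template))  # → 5
```

    In the paper's pipeline, candidates found this way are then refined by least-squares matching and filtered for gross errors.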

  3. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

    Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum-space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs

  4. Matrix regularization of 4-manifolds

    OpenAIRE

    Trzetrzelewski, M.

    2012-01-01

    We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...

  5. Regular Breakfast and Blood Lead Levels among Preschool Children

    Directory of Open Access Journals (Sweden)

    Needleman Herbert

    2011-04-01

    Full Text Available Abstract Background Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurements of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb ≥ 10 μg/dL) was examined using logistic regression modeling. Results Median B-Pb among children who ate breakfast regularly and those who did not eat breakfast regularly were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater zinc blood levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb) compared with children who did not eat breakfast regularly, p = 0.02. Conclusion The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.
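
    With a single binary predictor, a log-linear association of the kind reported above (beta in units of log-transformed B-Pb) reduces to a difference of group means on the log scale. The sketch below uses invented blood lead values and omits the covariate adjustment of the actual model:

```python
import math

def simple_log_regression(y, x):
    """OLS slope of log(y) on a binary predictor x (1 = regular breakfast):
    with one binary covariate this is just the difference of group means
    on the log scale."""
    g1 = [math.log(v) for v, b in zip(y, x) if b == 1]
    g0 = [math.log(v) for v, b in zip(y, x) if b == 0]
    return sum(g1) / len(g1) - sum(g0) / len(g0)

# Hypothetical blood lead levels (μg/dL) for breakfast / no-breakfast children.
bpb       = [5.8, 6.3, 6.0, 6.4, 7.1, 7.4, 7.0, 7.3]
breakfast = [1,   1,   1,   1,   0,   0,   0,   0]
beta = simple_log_regression(bpb, breakfast)
print(round(beta, 3))
```

    A beta of about -0.16 on these toy data corresponds to a roughly 15% lower geometric-mean B-Pb in the regular-breakfast group, since exp(-0.16) ≈ 0.85.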

  6. On the equivalence of different regularization methods

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1985-01-01

    The R̂-operation preceded by the regularization procedure is discussed. Some arguments are given, according to which the results may depend on the method of regularization, introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)

  7. Fermion-scalar conformal blocks

    Energy Technology Data Exchange (ETDEWEB)

    Iliesiu, Luca [Joseph Henry Laboratories, Princeton University,Washington Road, Princeton, NJ 08544 (United States); Kos, Filip [Department of Physics, Yale University,217 Prospect Street, New Haven, CT 06520 (United States); Poland, David [Department of Physics, Yale University,217 Prospect Street, New Haven, CT 06520 (United States); School of Natural Sciences, Institute for Advanced Study,1 Einstein Dr, Princeton, New Jersey 08540 (United States); Pufu, Silviu S. [Joseph Henry Laboratories, Princeton University,Washington Road, Princeton, NJ 08544 (United States); Simmons-Duffin, David [School of Natural Sciences, Institute for Advanced Study,1 Einstein Dr, Princeton, New Jersey 08540 (United States); Yacoby, Ran [Joseph Henry Laboratories, Princeton University,Washington Road, Princeton, NJ 08544 (United States)

    2016-04-13

    We compute the conformal blocks associated with scalar-scalar-fermion-fermion 4-point functions in 3D CFTs. Together with the known scalar conformal blocks, our result completes the task of determining the so-called ‘seed blocks’ in three dimensions. Conformal blocks associated with 4-point functions of operators with arbitrary spins can now be determined from these seed blocks by using known differential operators.

  8. Accreting fluids onto regular black holes via Hamiltonian approach

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)

    2017-08-15

    We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process when different test fluids fall onto these regular black holes. The accreting fluid is classified through the equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are checked for these regular black holes. It is noted that the three-velocity depends on the critical points and on the equation-of-state parameter in phase space. (orig.)

  9. Block Cipher Analysis

    DEFF Research Database (Denmark)

    Miolane, Charlotte Vikkelsø

    ensure that no attack violates the security bounds specified by generic attacks, namely exhaustive key search and table lookup attacks. This thesis contains a general introduction to cryptography with focus on block ciphers and important block cipher designs, in particular the Advanced Encryption Standard (AES...... on small scale variants of AES. In the final part of the thesis we present a new block cipher proposal, Present, and examine its security against algebraic and differential cryptanalysis in particular....

  10. Motives and barriers to safer sex and regular STI testing among MSM soon after HIV diagnosis.

    Science.gov (United States)

    Heijman, Titia; Zuure, Freke; Stolte, Ineke; Davidovich, Udi

    2017-03-07

    Understanding why some men who have sex with men (MSM) recently diagnosed with HIV opt for safer sex and regular STI testing, whereas others do not, is important for the development of interventions that aim to improve the sexual health of those newly infected. To gain insight into motives and barriers to condom use and regular STI testing among MSM soon after HIV diagnosis, 30 HIV-positive MSM participated in semi-structured qualitative interviews on sexual health behaviours in the first year after HIV diagnosis. Typical barriers to condom use soon after diagnosis were emotions such as anger, relief, and feelings of vulnerability. Additional barriers were related to pre-diagnosis patterns of sexual-social behaviour that were difficult to change, communication difficulties, and substance use. Barriers to STI testing revolved around perceptions of low STI risk, faulty beliefs, and burdensome testing procedures. The great diversity of motives and barriers to condom use and STI testing creates a challenge to accommodate newly infected men with information, motivation, and communication skills that match their personal needs. An adaptive, tailored intervention can be a promising tool of support.

  11. Mild or borderline intellectual disability as a risk for alcohol consumption in adolescents - A matched-pair study.

    Science.gov (United States)

    Reis, Olaf; Wetzel, Britta; Häßler, Frank

    2017-04-01

    Studies that investigate the association between mild or borderline intellectual disability (MBID) and alcohol use in adolescents have not examined whether MBID is an independent risk factor for drinking. It is important to examine whether MBID is a risk factor for alcohol consumption by controlling concomitant factors in a matched-pair design. Overall, 329 students from two schools for children with MBID self-reported their drinking behavior via questionnaires, and 329 students from regular schools were matched to this group by gender, age, family composition, and parental drinking behavior. Matched pairs were compared based on alcohol consumption and motivation to drink. MBID is a protective factor, as disabled adolescents drink less on average. This effect is mainly due to larger proportions of youth with MBID who are abstinent. When male adolescents with MBID begin to drink, they are at an increased risk for intoxication and subsequent at-risk behaviors. Motivations to drink were explained by an interaction between MBID and consumption patterns. For male adolescents with MBID, there appears to be an "all-or-nothing" principle that guides alcohol consumption, which suggests a need for special interventions for this group. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Analysis of jumping in the spike, block and set skills of female volleyball players

    Directory of Open Access Journals (Sweden)

    Valdir José Barbanti

    2007-09-01

    Full Text Available The purpose of this study was to quantify the different types of jump observed during volleyball matches. Jumps were classified as block jumps, spike jumps or set jumps. The sample was 12 video-taped National Women’s Volleyball League matches. They were analyzed for specific types of jumping, such as spike jumps with and without approach; block jumps with and without step movement; and set jumps. Matches were recorded by two video cameras placed at the back court on each side of the net. Data were collected from the video tapes and each variable was recorded on a sheet of paper for subsequent statistical analysis. The results demonstrated that the highest mean numbers of any jump type per game performed by setters were of the type set jump: 39.0 ± 5.51, 57.3 ± 32.23 and 33 ± 8.49, in games of 3, 4 and 5 sets respectively. For outside hitter players the greatest number of jumps were of the type spike jump with approach, in games of 3, 4 and 5 sets respectively (20.44 ± 5.15, 29.23 ± 7.16 and 35.67 ± 13.21). Middle block players exhibited mean values for block jumps with step movements of 17.04 ± 8.19, 29.9 ± 10.85 and 34.25 ± 5.62, respectively. These results indicate that there was no difference between outside hitters and middle block players in 5-set games in terms of numbers of spike jumps with approach. There was a significant difference between setters and outside hitters in numbers of spike jumps without approach, in games of 3 and 5 sets. There were no differences between any of the positions in block jumps with step in games of 3, 4 or 5 sets. There was no difference between middle block and outside hitter players in terms of set jumps. It was concluded that setters exhibited the highest average number of set jumps per game, outside hitters exhibited the highest mean number of spike jumps with approach and middle block players exhibited the highest mean numbers per match of block jumps with step movement.

  13. Main-chain supramolecular block copolymers.

    Science.gov (United States)

    Yang, Si Kyung; Ambade, Ashootosh V; Weck, Marcus

    2011-01-01

    Block copolymers are key building blocks for a variety of applications ranging from electronic devices to drug delivery. The material properties of block copolymers can be tuned and potentially improved by introducing noncovalent interactions in place of covalent linkages between polymeric blocks resulting in the formation of supramolecular block copolymers. Such materials combine the microphase separation behavior inherent to block copolymers with the responsiveness of supramolecular materials thereby affording dynamic and reversible materials. This tutorial review covers recent advances in main-chain supramolecular block copolymers and describes the design principles, synthetic approaches, advantages, and potential applications.

  14. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2 -regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
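
    The ℓ2-regularized least-squares problem that the BPR solution converges to has a simple closed form. As a hedged illustration (generic Tikhonov/ridge estimation, not the authors' BPR algorithm; the matrix `A`, data `y`, and parameter `lam` are invented toy quantities):

```python
import numpy as np

def ridge_solution(A, y, lam):
    """Closed-form solution of min_x ||A x - y||^2 + lam * ||x||^2,
    i.e. x = (A^T A + lam * I)^{-1} A^T y (classical Tikhonov / ridge)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Ill-conditioned toy system: two nearly collinear columns make the
# plain least-squares estimate blow up, while regularization tames it.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
A[:, 4] = A[:, 3] + 1e-6 * rng.standard_normal(20)
x_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = A @ x_true + 0.01 * rng.standard_normal(20)

x_ls = ridge_solution(A, y, 0.0)    # unregularized estimate (unstable)
x_rls = ridge_solution(A, y, 0.1)   # regularized estimate
```

    The whole difficulty addressed by BPR and the wider Tikhonov parameter-selection literature is, of course, how to choose `lam`; the value above is arbitrary.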

  15. Ultrasound guided supraclavicular block.

    LENUS (Irish Health Repository)

    Hanumanthaiah, Deepak

    2013-09-01

    Ultrasound guided regional anaesthesia is becoming increasingly popular. The supraclavicular block has been transformed by ultrasound guidance into a potentially safe superficial block. We reviewed the techniques of performing supraclavicular block with special focus on ultrasound guidance.

  16. 2D Sub-Pixel Disparity Measurement Using QPEC / Medicis

    Directory of Open Access Journals (Sweden)

    M. Cournet

    2016-06-01

    Full Text Available In the frame of its earth observation missions, CNES created a library called QPEC, and one of its launchers, called Medicis. QPEC / Medicis is a sub-pixel two-dimensional stereo matching algorithm that works on an image pair. This tool is a block matching algorithm, which means that it is based on a local method. Moreover, it does not regularize the results found. It proposes several matching costs, such as the Zero mean Normalised Cross-Correlation or statistical measures (the Mutual Information being one of them), and different match validation flags. QPEC / Medicis is able to compute a two-dimensional dense disparity map with sub-pixel precision. Hence, it is more versatile than disparity estimation methods found in the computer vision literature, which often assume an epipolar geometry. CNES uses Medicis, among other applications, during the in-orbit image quality commissioning of earth observation satellites. For instance, the Pléiades-HR 1A & 1B and the Sentinel-2 geometric calibrations are based on this block matching algorithm. Over the years, it has become a common tool in ground segments for in-flight monitoring purposes. For these two kinds of applications, the two-dimensional search and the local sub-pixel measure without regularization can be essential. This tool is also used to generate automatic digital elevation models, for which it was not initially dedicated. This paper deals with the QPEC / Medicis algorithm. It also presents some of its CNES applications (in-orbit commissioning, in-flight monitoring or digital elevation model generation). Medicis software is distributed outside the CNES as well. This paper finally describes some of these external applications using Medicis, such as ground displacement measurement, or intra-oral scanners in the dental domain.
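
    As a rough sketch of the local, unregularized block matching with a ZNCC cost that the abstract describes (integer-pixel search on synthetic data; this is not the CNES code, and all names are illustrative):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean Normalised Cross-Correlation between two equal-size blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_disparity(left, right, row, col, half, search):
    """1-D block match: slide a (2*half+1)^2 window along the row and keep
    the integer shift with the highest ZNCC score. Purely local, with no
    regularization of the result, as in the method described above."""
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = []
    for d in range(-search, search + 1):
        c = col + d
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        scores.append(zncc(ref, cand) if cand.shape == ref.shape else -np.inf)
    return int(np.argmax(scores)) - search

# Synthetic pair: the right image is the left image shifted by 3 pixels.
rng = np.random.default_rng(1)
left = rng.standard_normal((40, 40))
right = np.roll(left, 3, axis=1)
d = best_disparity(left, right, row=20, col=20, half=4, search=6)
```

    A sub-pixel refinement step, for example fitting a parabola to the ZNCC scores around the best integer shift, would bring this sketch closer to the sub-pixel measure Medicis provides.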

  17. MRI reconstruction with joint global regularization and transform learning.

    Science.gov (United States)

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for the Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms to the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to the algorithms which use either of the patchwise transform learning or global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.
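
    Schematically, such a joint cost combines a data-fidelity term, patchwise transform-sparsity terms, and a global regularizer. The notation below is an illustrative sketch, not the authors' exact formulation ($F_u$ an undersampled Fourier operator, $P_j$ the $j$-th patch extractor, $W$ the learned sparsifying transform, $\alpha_j$ sparse codes):

```latex
\min_{x,\,W,\,\{\alpha_j\}}\;
\|F_u x - y\|_2^2
\;+\; \lambda \sum_j \left( \|W P_j x - \alpha_j\|_2^2 + \gamma\,\|\alpha_j\|_0 \right)
\;+\; \mu\,\mathrm{TV}(x)
```

    where the total-variation term stands in for the additional global regularization infused into the patch-based transform learning.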

  18. Improving the precision of the keyword-matching pornographic text filtering method using a hybrid model.

    Science.gov (United States)

    Su, Gui-yang; Li, Jian-hua; Ma, Ying-hua; Li, Sheng-hong

    2004-09-01

    With the flooding of pornographic information on the Internet, how to keep people away from that offensive information is becoming one of the most important research areas in network information security. Some applications which can block or filter such information are used. Approaches in those systems can be roughly classified into two kinds: metadata based and content based. With the development of distributed technologies, content based filtering technologies will play a more and more important role in filtering systems. Keyword matching is a content based method used widely in harmful text filtering. Experiments to evaluate the recall and precision of the method showed that the precision of the method is not satisfactory, though the recall of the method is rather high. According to the results, a new pornographic text filtering model based on reconfirming is put forward. Experiments showed that the model is practical, has less loss of recall than the single keyword matching method, and has higher precision.
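
    A minimal sketch of the baseline the abstract evaluates: keyword matching tends to have high recall but imperfect precision, because ambiguous words fire on harmless texts. The keyword list and the toy corpus below are invented for illustration:

```python
def keyword_match(text, keywords):
    """Flag a text as harmful if it contains any blacklisted keyword
    (the simple content-based filter the abstract starts from)."""
    lowered = text.lower()
    return any(k in lowered for k in keywords)

def precision_recall(predictions, labels):
    """Precision and recall of binary predictions against ground truth."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(not p and l for p, l in zip(predictions, labels))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

# The ambiguous word "adult" also fires on a harmless text, so recall
# stays perfect while precision drops; a reconfirming second stage
# would re-examine the flagged texts.
keywords = {"xxx", "adult"}
texts = ["buy xxx videos here", "adult content inside",
         "adult education evening classes", "weather forecast for monday"]
labels = [True, True, False, False]
preds = [keyword_match(t, keywords) for t in texts]
p, r = precision_recall(preds, labels)
```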

  19. Strictly-regular number system and data structures

    DEFF Research Database (Denmark)

    Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki

    2010-01-01

    We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...

  20. Analysis of regularized Navier-Stokes equations, 2

    Science.gov (United States)

    Ou, Yuh-Roung; Sritharan, S. S.

    1989-01-01

    A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found. Regularity properties of these manifolds were analyzed.

  1. Regularities, Natural Patterns and Laws of Nature

    Directory of Open Access Journals (Sweden)

    Stathis Psillos

    2014-02-01

    Full Text Available  The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology.  Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.

  2. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  3. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc, designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  4. Best Practices for NPT Transit Matching

    International Nuclear Information System (INIS)

    Gilligan, Kimberly V.; Whitaker, J. Michael; Oakberg, John A.; Snow, Catherine

    2016-01-01

    Transit matching is the process for relating or matching reports of shipments and receipts submitted to the International Atomic Energy Agency (IAEA). Transit matching is a component used by the IAEA in drawing safeguards conclusions and performing investigative analysis. Transit matching is part of IAEA safeguards activities and the State evaluation process, and it is included in the annual Safeguards Implementation Report (SIR). Annually, the IAEA currently receives reports of ~900,000 nuclear material transactions, of which ~500,000 are for domestic and foreign transfers. Of these the IAEA software can automatically match (i.e., machine match) about 95% of the domestic transfers and 25% of the foreign transfers. Given the increasing demands upon IAEA resources, it is highly desirable for the machine-matching process to match as many transfers as possible. Researchers at Oak Ridge National Laboratory (ORNL) have conducted an investigation funded by the National Nuclear Security Administration through the Next Generation Safeguards Initiative to identify opportunities to strengthen IAEA transit matching. Successful matching, and more specifically machine matching, is contingent on quality data from the reporting States. In February 2016, ORNL hosted representatives from three States, the IAEA, and Euratom to share results from past studies and to discuss the processes, policies, and procedures associated with State reporting for transit matching. Drawing on each entity's experience and knowledge, ORNL developed a best practices document to be shared with the international safeguards community to strengthen transit matching. This paper shares the recommendations that resulted from this strategic meeting and the next steps being taken to strengthen transit matching.

  5. Best Practices for NPT Transit Matching

    Energy Technology Data Exchange (ETDEWEB)

    Gilligan, Kimberly V. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Whitaker, J. Michael [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oakberg, John A. [Tetra Tech, Inc., Oak Ridge, TN (United States); Snow, Catherine [Sno Consulting, LLC, Sandy, UT (United States)

    2016-09-01

    Transit matching is the process for relating or matching reports of shipments and receipts submitted to the International Atomic Energy Agency (IAEA). Transit matching is a component used by the IAEA in drawing safeguards conclusions and performing investigative analysis. Transit matching is part of IAEA safeguards activities and the State evaluation process, and it is included in the annual Safeguards Implementation Report (SIR). Annually, the IAEA currently receives reports of ~900,000 nuclear material transactions, of which ~500,000 are for domestic and foreign transfers. Of these the IAEA software can automatically match (i.e., machine match) about 95% of the domestic transfers and 25% of the foreign transfers. Given the increasing demands upon IAEA resources, it is highly desirable for the machine-matching process to match as many transfers as possible. Researchers at Oak Ridge National Laboratory (ORNL) have conducted an investigation funded by the National Nuclear Security Administration through the Next Generation Safeguards Initiative to identify opportunities to strengthen IAEA transit matching. Successful matching, and more specifically machine matching, is contingent on quality data from the reporting States. In February 2016, ORNL hosted representatives from three States, the IAEA, and Euratom to share results from past studies and to discuss the processes, policies, and procedures associated with State reporting for transit matching. Drawing on each entity's experience and knowledge, ORNL developed a best practices document to be shared with the international safeguards community to strengthen transit matching. This paper shares the recommendations that resulted from this strategic meeting and the next steps being taken to strengthen transit matching.
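
    A toy sketch of what rule-based machine matching of shipment and receipt declarations could look like. The field names, quantity tolerance, and time window below are hypothetical illustrations, not the IAEA's actual reporting formats or matching rules:

```python
from datetime import date

def machine_match(shipments, receipts, window_days=30):
    """Greedily pair shipment and receipt declarations that agree on the
    material code and quantity (within 1%) and whose dates fall inside a
    fixed window. Unpaired declarations are left for manual review."""
    matched, used = [], set()
    for s in shipments:
        for i, r in enumerate(receipts):
            if i in used:
                continue
            if (s["material"] == r["material"]
                    and abs(s["kg"] - r["kg"]) <= 0.01 * s["kg"]
                    and abs((r["date"] - s["date"]).days) <= window_days):
                matched.append((s["id"], r["id"]))
                used.add(i)
                break
    return matched

shipments = [{"id": "S1", "material": "U-235", "kg": 10.0,
              "date": date(2016, 2, 1)}]
receipts = [{"id": "R1", "material": "U-235", "kg": 10.0,
             "date": date(2016, 2, 10)}]
pairs = machine_match(shipments, receipts)
```

    Real matching must cope with inconsistent State reporting, which is why the paper emphasizes data quality: a rule set like this only machine-matches declarations whose fields actually agree.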

  6. Adductor Canal Block versus Femoral Nerve Block and Quadriceps Strength

    DEFF Research Database (Denmark)

    Jæger, Pia Therese; Nielsen, Zbigniew Jerzy Koscielniak; Henningsen, Lene Marianne

    2013-01-01

    The authors hypothesized that the adductor canal block (ACB), a predominant sensory blockade, reduces quadriceps strength compared with placebo (primary endpoint, area under the curve, 0.5-6 h), but less than the femoral nerve block (FNB; secondary endpoint). Other secondary endpoints were...

  7. Asymmetric PS-block-(PS-co-PB)-block-PS block copolymers: morphology formation and deformation behaviour

    International Nuclear Information System (INIS)

    Adhikari, Rameshwar; Huy, Trinh An; Buschnakowski, Matthias; Michler, Goerg H; Knoll, Konrad

    2004-01-01

    Morphology formation and deformation behaviour of asymmetric styrene/butadiene triblock copolymers (total polystyrene (PS) content ∼70%) consisting of PS outer blocks held apart by a styrene-co-butadiene random copolymer block (PS-co-PB) each were investigated. The techniques used were differential scanning calorimetry, transmission electron microscopy, uniaxial tensile testing and Fourier-transform infrared spectroscopy. A significant shift of the phase behaviour relative to that of a neat symmetric triblock copolymer was observed, which can be attributed to the asymmetric architecture and the presence of PS-co-PB as a soft block. The mechanical properties and the microdeformation phenomena were mainly controlled by the nature of their solid-state morphology. Independent of morphology type, the soft phase was found to deform to a significantly higher degree of orientation when compared with the hard phase

  8. Regularization of the Boundary-Saddle-Node Bifurcation

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2018-01-01

    Full Text Available In this paper we treat a particular class of planar Filippov systems which consist of two smooth systems that are separated by a discontinuity boundary. In such systems one vector field undergoes a saddle-node bifurcation while the other vector field is transversal to the boundary. The boundary-saddle-node (BSN bifurcation occurs at a critical value when the saddle-node point is located on the discontinuity boundary. We derive a local topological normal form for the BSN bifurcation and study its local dynamics by applying the classical Filippov’s convex method and a novel regularization approach. In fact, by the regularization approach a given Filippov system is approximated by a piecewise-smooth continuous system. Moreover, the regularization process produces a singular perturbation problem where the original discontinuous set becomes a center manifold. Thus, the regularization enables us to make use of the established theories for continuous systems and slow-fast systems to study the local behavior around the BSN bifurcation.
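
    A standard way to carry out such a regularization is the Sotomayor-Teixeira construction: for a Filippov system given by a field $X$ where $h > 0$ and $Y$ where $h < 0$, the discontinuity is smoothed into a one-parameter family of continuous fields (notation sketched here for illustration):

```latex
Z_\varepsilon(p) \;=\; \frac{1+\varphi\!\left(h(p)/\varepsilon\right)}{2}\,X(p)
\;+\; \frac{1-\varphi\!\left(h(p)/\varepsilon\right)}{2}\,Y(p)
```

    where $\varphi$ is a smooth, monotone transition function with $\varphi(s) = \operatorname{sign}(s)$ for $|s| \ge 1$. Outside the strip $|h| \le \varepsilon$ the regularized field agrees with $X$ or $Y$, and rescaling inside the strip yields the singular perturbation problem in which the discontinuity set becomes a center manifold.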

  9. EEG/MEG Source Reconstruction with Spatial-Temporal Two-Way Regularized Regression

    KAUST Repository

    Tian, Tian Siva

    2013-07-11

    In this work, we propose a spatial-temporal two-way regularized regression method for reconstructing neural source signals from EEG/MEG time course measurements. The proposed method estimates the dipole locations and amplitudes simultaneously through minimizing a single penalized least squares criterion. The novelty of our methodology is the simultaneous consideration of three desirable properties of the reconstructed source signals, that is, spatial focality, spatial smoothness, and temporal smoothness. The desirable properties are achieved by using three separate penalty functions in the penalized regression framework. Specifically, we impose a roughness penalty in the temporal domain for temporal smoothness, and a sparsity-inducing penalty and a graph Laplacian penalty in the spatial domain for spatial focality and smoothness. We develop a computationally efficient multilevel block coordinate descent algorithm to implement the method. Using a simulation study with several settings of different spatial complexity and two real MEG examples, we show that the proposed method outperforms existing methods that use only a subset of the three penalty functions. © 2013 Springer Science+Business Media New York.
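
    Schematically, a single criterion with the three penalties can be written as follows (illustrative notation, not the authors' exact formulation: $Y$ the sensor measurements, $G$ the lead-field matrix, rows $s_i$ of $S$ the source time courses, $L$ a spatial graph Laplacian, $D$ a temporal difference operator):

```latex
\min_{S}\;
\|Y - G S\|_F^2
\;+\; \lambda_1 \sum_i \|s_i\|_2
\;+\; \lambda_2 \operatorname{tr}\!\left(S^{\top} L\, S\right)
\;+\; \lambda_3 \,\|S D^{\top}\|_F^2
```

    where the group-sparsity term promotes spatial focality, the Laplacian term spatial smoothness, and the roughness term temporal smoothness.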

  10. Low-Complexity Regularization Algorithms for Image Deblurring

    KAUST Repository

    Alanazi, Abdulrahman

    2016-11-01

    Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. 
However, we developed algorithms that also work
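
    A hedged 1-D sketch of the non-blind setting described above: the blur operator is known, and a regularized least-squares (RLS) solve recovers the signal. The toy signal and the fixed `lam` are invented for illustration; selecting `lam` automatically is precisely what the proposed methods do:

```python
import numpy as np

def deblur_rls(blurred, H, lam):
    """Regularized least-squares deblurring with a known blur matrix H:
    solve min_x ||H x - y||^2 + lam * ||x||^2 in closed form."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ blurred)

# 1-D toy signal blurred by a 3-tap kernel [0.25, 0.5, 0.25]. (For a
# separable 2-D PSF the blur matrix factors as a Kronecker product,
# which is the computational saving the abstract mentions.)
n = 64
x = np.zeros(n)
x[20:30] = 1.0                      # a box signal to recover
H = np.zeros((n, n))
taps = {-1: 0.25, 0: 0.5, 1: 0.25}
for i in range(n):
    for off, w in taps.items():
        if 0 <= i + off < n:
            H[i, i + off] = w
y = H @ x                           # blurred observation
x_hat = deblur_rls(y, H, 1e-4)
```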

  11. Improvements in GRACE Gravity Fields Using Regularization

    Science.gov (United States)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. 
The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or

  12. Adductor canal block versus femoral nerve block for analgesia after total knee arthroplasty

    DEFF Research Database (Denmark)

    Jaeger, Pia; Zaric, Dusanka; Fomsgaard, Jonna Storm

    2013-01-01

    Femoral nerve block (FNB), a commonly used postoperative pain treatment after total knee arthroplasty (TKA), reduces quadriceps muscle strength essential for mobilization. In contrast, adductor canal block (ACB) is predominately a sensory nerve block. We hypothesized that ACB preserves quadriceps...

  13. Privacy‐Preserving Friend Matching Protocol approach for Pre‐match in Social Networks

    DEFF Research Database (Denmark)

    Ople, Shubhangi S.; Deshmukh, Aaradhana A.; Mihovska, Albena Dimitrova

    2016-01-01

    Social services make extensive use of user profile matching to help users discover friends with similar social attributes (e.g. interests, location, age). However, many privacy concerns prevent enabling this functionality. Privacy-preserving encryption is not suitable...... for use in social networks due to its data sharing problems and information leakage. In this paper, we propose a novel framework for privacy-preserving profile matching. We implement both the client and server portions of the secure match and evaluate its performance on a network dataset. The results show...

  14. Computation of deformations and stresses in graphite blocks for HTR core survey purposes

    International Nuclear Information System (INIS)

    Besdo, Dieter; Theymann, W.

    1975-01-01

    Stresses and deformations in graphite fuel elements for HTRs are caused by the temperature distribution and by irradiation, under the influence of creep, shrinkage, thermal strains, and elastic deformations. The global deformations and the stress distribution in a prismatic fuel element containing regularly distributed axial holes for the coolant flow and the fuel sticks can be computed in the following manner: the block with its holes is treated as an effective homogeneous continuum with equivalent global behaviour. Assuming that the fourth-order tensor of the elastic constants is proportional to the corresponding tensor in the constitutive equations for creep, only the effective strains are of interest. The values of temperature and dose may be given at n points of the block at certain points in time. The inelastic non-thermal strains are then integrated by a Runge-Kutta procedure at the n points. When interpolated and combined with the thermal strains, they are incompatible. Hence, they produce elastic deformations which cause creep and which can be computed with a Ritz polynomial series, using a specific principle of the minimum of potential energy. Excessive computing time can easily be avoided, since the influence of the local variation of the elastic constants within the block is almost negligible and therefore of practically no importance for the determination of the elastic strains. For this reason some matrices can be calculated a priori, and the elastic deformations are obtained by multiplications with these matrices rather than by inversions. This method is therefore particularly suited to the computation of deformations and stresses for reactor core survey purposes, where a large number of blocks (up to 7000) has to be treated.
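
    The pointwise integration of the inelastic, non-thermal strains mentioned above can be illustrated with a classical fourth-order Runge-Kutta step; the creep-rate law below is a hypothetical stand-in, not the paper's constitutive model:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Hypothetical creep-strain rate eps' = c * sigma * (1 + 0.1 * t), with
# the dose-rate dependence absorbed into the drift term; the exact
# integral over t in [0, 100] is c * sigma * 600 = 3.0.
c, sigma = 1e-4, 50.0
eps, t, h = 0.0, 0.0, 1.0
for _ in range(100):
    eps = rk4_step(lambda t, y: c * sigma * (1 + 0.1 * t), t, eps, h)
    t += h
```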

  15. Block That Pain!

    Science.gov (United States)

    Skip Navigation Bar Home Current Issue Past Issues Block That Pain! Past Issues / Fall 2007 Table of ... contrast, most pain relievers used for surgical procedures block activity in all types of neurons. This can ...

  16. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

    Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of "overriding" the source NFA (an NFA not defined with subset construction rules) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and complement) is shown.
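
The abstract leaves the construction itself to the full text; a standard way to obtain a DFA for the AND-operator is the product construction, sketched below. The dict-based automaton encoding is an illustrative assumption, not the paper's representation:

```python
from collections import deque

def intersect(a, b):
    """Product construction: the result accepts exactly L(a) ∩ L(b).
    A DFA is a dict with 'start', 'accept' (set) and 'delta' {(state, sym): state}."""
    alphabet = {sym for (_, sym) in a["delta"]}
    start = (a["start"], b["start"])
    delta, accept, seen = {}, set(), {start}
    queue = deque([start])
    while queue:                      # explore only reachable product states
        p, q = queue.popleft()
        if p in a["accept"] and q in b["accept"]:
            accept.add((p, q))
        for sym in alphabet:
            nxt = (a["delta"][(p, sym)], b["delta"][(q, sym)])
            delta[((p, q), sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return {"start": start, "accept": accept, "delta": delta}

def accepts(dfa, word):
    state = dfa["start"]
    for sym in word:
        state = dfa["delta"][(state, sym)]
    return state in dfa["accept"]

# "even number of a's" ...
even_a = {"start": 0, "accept": {0},
          "delta": {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}}
# ... intersected with "ends in b"
ends_b = {"start": 0, "accept": {1},
          "delta": {(0, "a"): 0, (0, "b"): 1, (1, "a"): 0, (1, "b"): 1}}
both = intersect(even_a, ends_b)
print(accepts(both, "aab"), accepts(both, "ab"))  # True False
```

On a complete DFA, complement is simply flipping the accepting set, and subtraction L(a) − L(b) is the intersection of a with the complement of b.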

  17. Regularities of intermediate adsorption complex relaxation

    International Nuclear Information System (INIS)

    Manukova, L.A.

    1982-01-01

    The experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N2 system at 77 K are given. The molecular beam method has been used in the investigation. Analytical expressions are obtained for the regularities of change, during relaxation, of the full and specific rates of transition from the intermediate state into the ''non-reversible'' state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.

  18. OBLIQUE MULTI-CAMERA SYSTEMS – ORIENTATION AND DENSE MATCHING ISSUES

    Directory of Open Access Journals (Sweden)

    E. Rupnik

    2014-03-01

    Full Text Available The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users, allowing the use of oblique images in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating and administration services, etc. The paper reports an overview of current commercial oblique systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  19. Bundle Branch Block

    Science.gov (United States)

    ... known cause. Causes can include: Left bundle branch block Heart attacks (myocardial infarction) Thickened, stiffened or weakened ... myocarditis) High blood pressure (hypertension) Right bundle branch block A heart abnormality that's present at birth (congenital) — ...

  20. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse

  1. Lenses matching of compound eye for target positioning

    Science.gov (United States)

    Guo, Fang; Zheng, Yan Pei; Wang, Keyi

    2012-10-01

    Compound eye, as a new imaging method that uses multiple lenses to cover a large field of view, can perform target positioning and detection quickly, especially at close range. It therefore has applications in the military, medical treatment and aviation fields, with vast market potential and development prospects. The compound eye imaging system considered here uses a three-layer construction: a multi-lens array arranged on a curved surface, a refractive lens, and a CMOS imaging sensor. In order to simplify the structure and increase the imaging area of every sub-eye, the imaging area of each eye covers the whole CMOS. Therefore, for the several image points of one target, the lens corresponding to each image point is unknown and has to be identified. An algorithm is put forward for this purpose. Firstly, according to the regular geometric relationship of several adjacent lenses, a data organization of seven lenses around a main lens is built. Subsequently, when one target is caught by several unknown lenses, every combination of the receiving lenses is searched using this data organization. For every combination, two lenses are selected and used to calculate a three-dimensional (3D) coordinate of the target. If the 3D coordinates computed from the different lens pairs of a combination agree, in theory the lenses and the image points are matched. Thus, from the error of the 3D coordinates calculated over the different seven-lens combinations, the unknown lenses can be identified. The experimental results show that the presented algorithm is feasible and can complete the matching task for image points and their corresponding lenses.
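
A minimal sketch of the consistency test described above: every assignment of image points to candidate lenses is scored by how well the 3D estimates from its lens pairs agree, and the assignment with the smallest spread wins. The `triangulate` function here is a toy stand-in (the real one would intersect back-projected rays from the calibrated lens geometry):

```python
import itertools

import numpy as np

def match_lenses(lenses, points, triangulate):
    """Return the lens assignment whose pairwise 3D estimates agree best.
    lenses: candidate lens ids; points: image points, one per unknown lens;
    triangulate(l1, p1, l2, p2) -> 3D estimate from one lens pair."""
    best_spread, best_assign = None, None
    for assign in itertools.permutations(lenses, len(points)):
        estimates = [triangulate(l1, p1, l2, p2)
                     for (l1, p1), (l2, p2)
                     in itertools.combinations(list(zip(assign, points)), 2)]
        spread = float(np.ptp(np.asarray(estimates), axis=0).sum())
        if best_spread is None or spread < best_spread:
            best_spread, best_assign = spread, assign
    return best_assign

# toy stand-in: with the correct pairing, every lens pair yields the same estimate
tri = lambda l1, p1, l2, p2: np.array([p1 - l1, p2 - l2, 0.0])
print(match_lenses([0, 1, 2], [5, 6, 7], tri))  # (0, 1, 2)
```

In practice the seven-lens data organization restricts the candidate set, so only neighbouring lenses of the main lens need to be permuted.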

  2. The Brown-Servranckx matching transformer for simultaneous RFQ to DTL H+ and H- matching

    International Nuclear Information System (INIS)

    Wadlinger, E.A.; Garnett, R.W.

    1996-01-01

    The issue involved in the simultaneous matching of H+ and H- beams between an RFQ and DTL lies in the fact that both beams experience the same electric-field forces at a given position in the RFQ. Hence, the two beams are focused to the same correlation. However, matching to a DTL requires correlation of the opposite sign. The Brown-Servranckx quarter-wave (λ/4) matching transformer system, which requires four quadrupoles, provides a method to simultaneously match H+ and H- beams between an RFQ and a DTL. The method requires the use of a special RFQ section to obtain the Twiss parameter conditions βx = βy and αx = αy = 0 at the exit of the RFQ. This matching between the RFQ and DTL is described. (author)

  3. The Brown-Servranckx matching transformer for simultaneous RFQ to DTL H+ and H- matching

    International Nuclear Information System (INIS)

    Wadlinger, E.A.; Garnett, R.W.

    1996-01-01

    The issue involved in simultaneous matching of H+ and H- beams between an RFQ and DTL lies in the fact that both beams experience the same electric-field forces at a given position in the RFQ. Hence, the two beams are focused to the same correlation. However, matching to a DTL requires correlation of the opposite sign. The Brown-Servranckx quarter-wave (λ/4) matching transformer system, which requires four quadrupoles, provides a method to simultaneously match H+ and H- beams between an RFQ and a DTL. The method requires the use of a special RFQ section to obtain the Twiss parameter conditions βx = βy and αx = αy = 0 at the exit of the RFQ. This matching between the RFQ and DTL is described.

  4. A rapid method of detecting motor blocks in patients with Parkinson's disease during volitional hand movements

    Directory of Open Access Journals (Sweden)

    Popović Mirjana B.

    2002-01-01

    Full Text Available INTRODUCTION An algorithm to study hand movements in patients with Parkinson's disease (PD) who experience temporary, involuntary inability to move a hand has been developed. In the literature, this rather enigmatic phenomenon has been described in gait, speech, handwriting and tapping, and noted as motor blocks (MB) or freezing episodes. Freezing refers to transient periods in which the voluntary motor activity being attempted by an individual is paused. It is a sudden, unplanned state of immobility that appears to arise from deficits in initiating or simultaneously and sequentially executing movements, in correcting inappropriate movements or in planning movements. The clinical evaluation of motor blocks is difficult because of variability both within and between individuals and the relationship of blocks to the time of drug ingestion. In the literature the terms freezing, motor block and motor freezing are used in parallel. AIM In clinical settings the classical manifestations of Parkinson's disease (akinesia, bradykinesia, rigidity, tremor, axial motor performance and postural instability) are typically evaluated. Recently, new computerized methods have been suggested in the literature for their objective assessment. We propose that monitoring of motor blocks during hand movements be integrated. For this purpose we have developed a simple method that comprises a PC, a digitizing board and custom-made software. Movement analysis is performed off-line, and the result is data that describe the number, duration and onset of motor blocks. METHOD Hand trajectories are assessed during simple volitional self-paced point-to-point planar hand movements with a cordless magnetic mouse on a digitizing board (Drawing Board III, 305 x 457 mm, GTCO CalComp Inc., Fig. 1). Testing included 8 Parkinsonian patients and 8 normal healthy controls, age matched, with no known neurologic motor or sensory disorders (Table 1). Three kinematic indicators of motor blocks are used: 1) duration (MBTJ); 2) onset (t%); and 3)

  5. A randomized trial comparing surgeon-administered intraoperative transversus abdominis plane block with anesthesiologist-administered transcutaneous block.

    Science.gov (United States)

    Narasimhulu, D M; Scharfman, L; Minkoff, H; George, B; Homel, P; Tyagaraj, K

    2018-04-27

    Injection of local anesthetic into the transversus abdominis plane (TAP block) decreases systemic morphine requirements after abdominal surgery. We compared intraoperative surgeon-administered TAP block (surgical TAP) to anesthesiologist-administered transcutaneous ultrasound-guided TAP block (conventional TAP) for post-cesarean analgesia. We hypothesized that surgical TAP blocks would take less time to perform than conventional TAP blocks. We performed a randomized trial, recruiting 41 women undergoing cesarean delivery under neuraxial anesthesia and assigning them to either surgical TAP block (n=20) or conventional TAP block (n=21). Time taken to perform the block was the primary outcome, while postoperative pain scores and 24-hour opioid requirements were secondary outcomes. Student's t-test was used to compare block time, and the Kruskal-Wallis test to compare opioid consumption and pain scores. Time taken to perform the block was significantly shorter for the surgical TAP block (2.4 vs 12.1 min), while opioid consumption (P=0.17) and postoperative pain scores at 4, 8, 24 and 48 h were not significantly different between the groups. Surgical TAP blocks are feasible and less time consuming than conventional TAP blocks, while providing comparable analgesia after cesarean delivery. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
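
The regularization half of this idea can be sketched in isolation: treating the sparse combination coefficients as fixed similarities (the paper learns them jointly with the scores in an iterative algorithm), the regularized ranking scores have a closed-form solution through the graph Laplacian. A simplified sketch, not the paper's algorithm:

```python
import numpy as np

def regularized_scores(W, y, lam=1.0):
    """Minimize ||f - y||^2 + lam * sum_ij S_ij (f_i - f_j)^2, where S is the
    symmetrized coefficient matrix. Setting the gradient to zero gives
    (I + 2*lam*L) f = y with graph Laplacian L = D - S."""
    S = (np.abs(W) + np.abs(W).T) / 2          # symmetrized similarities
    L = np.diag(S.sum(axis=1)) - S
    return np.linalg.solve(np.eye(len(y)) + 2 * lam * L, y)

# objects {0,1} and {2,3} form two similarity clusters; only object 0 is
# initially relevant -- the regularizer propagates its score to object 1
W = np.zeros((4, 4))
W[0, 1] = W[1, 0] = W[2, 3] = W[3, 2] = 1.0
f = regularized_scores(W, np.array([1.0, 0.0, 0.0, 0.0]))
print(f)  # roughly [0.6, 0.4, 0.0, 0.0]
```

The smoothing effect is the point: object 1, similar to the relevant object 0, is ranked above the unrelated objects 2 and 3 even though its initial score is zero.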

  7. Block and Gradient Copoly(2-oxazoline) Micelles: Strikingly Different on the Inside.

    Science.gov (United States)

    Filippov, Sergey K; Verbraeken, Bart; Konarev, Petr V; Svergun, Dmitri I; Angelov, Borislav; Vishnevetskaya, Natalya S; Papadakis, Christine M; Rogers, Sarah; Radulescu, Aurel; Courtin, Tim; Martins, José C; Starovoytova, Larisa; Hruby, Martin; Stepanek, Petr; Kravchenko, Vitaly S; Potemkin, Igor I; Hoogenboom, Richard

    2017-08-17

    Herein, we provide a direct proof for differences in the micellar structure of amphiphilic diblock and gradient copolymers, thereby unambiguously demonstrating the influence of monomer distribution along the polymer chains on the micellization behavior. The internal structure of amphiphilic block and gradient copoly(2-oxazolines) based on the hydrophilic poly(2-methyl-2-oxazoline) (PMeOx) and the hydrophobic poly(2-phenyl-2-oxazoline) (PPhOx) was studied in water and water-ethanol mixtures by small-angle X-ray scattering (SAXS), small-angle neutron scattering (SANS), static and dynamic light scattering (SLS/DLS), and 1H NMR spectroscopy. Contrast matching SANS experiments revealed that block copolymers form micelles with a uniform density profile of the core. In contrast to popular assumption, the outer part of the core of the gradient copolymer micelles has a distinctly higher density than the middle of the core. We attribute the latter finding to back-folding of chains resulting from hydrophilic-hydrophobic interactions, leading to a new type of micelles that we refer to as micelles with a "bitterball-core" structure.

  8. 20 CFR 226.35 - Deductions from regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...

  9. Cross-Modality Image Synthesis via Weakly Coupled and Geometry Co-Regularized Joint Dictionary Learning.

    Science.gov (United States)

    Huang, Yawen; Shao, Ling; Frangi, Alejandro F

    2018-03-01

    Multi-modality medical imaging is increasingly used for comprehensive assessment of complex diseases in either diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adversary factors, such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. Additionally, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis while considering the fact that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To employ both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. Then, we propose a unified model by integrating such a criterion into the joint dictionary learning and the observed common feature space for associating cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate superior performance of the proposed model over state-of-the-art methods.

  10. Patient satisfaction with health-care professionals and structure is not affected by longer hospital stay and complications after lung resection: a case-matched analysis.

    Science.gov (United States)

    Pompili, Cecilia; Tiberi, Michela; Salati, Michele; Refai, Majed; Xiumé, Francesco; Brunelli, Alessandro

    2015-02-01

    The objective of this investigation was to assess satisfaction with care of patients with long hospital stay (LHS) or complications after pulmonary resection in comparison with case-matched counterparts with a regular postoperative course. This is a prospective observational analysis on 171 consecutive patients submitted to pulmonary resections (78 wedges, 8 segmentectomies, 83 lobectomies, 3 pneumonectomies) for benign (35), primary (93) or secondary malignant (43) diseases. A hospital stay >7 days was defined as long (LHS). Major cardiopulmonary complications were defined according to the ESTS database. Patient satisfaction was assessed by the administration of the EORTC IN-PATSAT32 module at discharge. The questionnaire is a 32-item self-administered survey including different scales, reflecting the perceived level of satisfaction about the care provided by doctors, nurses and other personnel. To minimize selection bias, the propensity score case-matching technique was applied to generate two sets of matched patients: patients with LHS with counterparts without it; patients with complications with counterparts without it. Median length of postoperative stay was 4 days (range 2-43). Forty-one patients (24%) had a hospital stay >7 days and 21 developed cardiopulmonary complications (12%). Propensity score yielded two well-matched groups of 41 patients with and without LHS. There were no significant differences in any patient satisfaction scale between the two groups. The comparison of the results of the patient satisfaction questionnaire between the two matched groups of 21 patients with and without complications did not show significant differences in any scale. Patients experiencing poor outcomes such as long hospital stay or complications have similar perception of quality of care compared with those with regular outcomes. Patient-reported outcome measures are becoming increasingly important in the evaluation of the quality of care and may complement more
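
The case-matching step can be illustrated with a greedy 1:1 nearest-neighbour match on propensity scores. The scores here are assumed to be already estimated (in the study they would come from a model of patient covariates):

```python
import numpy as np

def nn_match(treated_ps, control_ps):
    """Greedy 1:1 nearest-neighbour matching on propensity scores, without
    replacement; returns (treated_index, control_index) pairs."""
    available = list(range(len(control_ps)))
    pairs = []
    for i in np.argsort(treated_ps):   # process treated subjects in score order
        j = min(available, key=lambda c: abs(control_ps[c] - treated_ps[i]))
        available.remove(j)            # each control is used at most once
        pairs.append((int(i), j))
    return sorted(pairs)

# toy scores: each treated subject gets the closest remaining control
print(nn_match([0.8, 0.2], [0.19, 0.5, 0.81]))  # [(0, 2), (1, 0)]
```

Production analyses typically add a caliper (a maximum allowed score distance) and discard treated subjects with no control inside it; that refinement is omitted here.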

  11. Learning About Time Within the Spinal Cord II: Evidence that Temporal Regularity is Encoded by a Spinal Oscillator

    Directory of Open Access Journals (Sweden)

    Kuan Hsien Lee

    2016-02-01

    Full Text Available How a stimulus impacts spinal cord function depends upon temporal relations. When intermittent noxious stimulation (shock) is applied and the interval between shock pulses is varied (unpredictable), it induces a lasting alteration that inhibits adaptive learning. If the same stimulus is applied in a temporally regular (predictable) manner, the capacity to learn is preserved and a protective/restorative effect is engaged that counters the adverse effect of variable stimulation. Sensitivity to temporal relations implies a capacity to encode time. This study explores how spinal neurons discriminate variable and fixed spaced stimulation. Communication with the brain was blocked by means of a spinal transection and adaptive capacity was tested using an instrumental learning task. In this task, subjects must learn to maintain a hind limb in a flexed position to minimize shock exposure. To evaluate the possibility that a distinct class of afferent fibers provides a sensory cue for regularity, we manipulated the temporal relation between shocks given to two dermatomes (leg and tail). Evidence for timing emerged when the stimuli were applied in a coherent manner across dermatomes, implying that a central (spinal) process detects regularity. Next, we show that fixed spaced stimulation has a restorative effect when half the physical stimuli are randomly omitted, as long as the stimuli remain in phase, suggesting that stimulus regularity is encoded by an internal oscillator. Research suggests that the oscillator that drives the tempo of stepping depends upon neurons within the rostral lumbar (L1-L2) region. Disrupting communication with the L1-L2 tissue by means of an L3 transection eliminated the restorative effect of fixed spaced stimulation. Implications of the results for step training and rehabilitation after injury are discussed.

  12. Traveling wave parametric amplifier with Josephson junctions using minimal resonator phase matching

    International Nuclear Information System (INIS)

    White, T. C.; Mutus, J. Y.; Hoi, I.-C.; Barends, R.; Campbell, B.; Chen, Yu; Chen, Z.; Chiaro, B.; Dunsworth, A.; Jeffrey, E.; Kelly, J.; Neill, C.; O'Malley, P. J. J.; Roushan, P.; Sank, D.; Vainsencher, A.; Wenner, J.; Martinis, John M.; Megrant, A.; Chaudhuri, S.

    2015-01-01

    Josephson parametric amplifiers have become a critical tool in superconducting device physics due to their high gain and quantum-limited noise. Traveling wave parametric amplifiers (TWPAs) promise similar noise performance, while allowing for significant increases in both bandwidth and dynamic range. We present a TWPA device based on an LC-ladder transmission line of Josephson junctions and parallel plate capacitors using low-loss amorphous silicon dielectric. Crucially, we have inserted λ/4 resonators at regular intervals along the transmission line in order to maintain the phase matching condition between pump, signal, and idler and increase gain. We achieve an average gain of 12 dB across a 4 GHz span, along with an average saturation power of −92 dBm with noise approaching the quantum limit

  13. Regularization theory for ill-posed problems selected topics

    CERN Document Server

    Lu, Shuai

    2013-01-01

    This monograph is a valuable contribution to the highly topical and extremely productive field of regularisation methods for inverse and ill-posed problems. The author is an internationally outstanding and accepted mathematician in this field. In his book he offers a well-balanced mixture of basic and innovative aspects. He demonstrates new, differentiated viewpoints and important examples for applications. The book demonstrates the current developments in the field of regularization theory, such as multiparameter regularization and regularization in learning theory. The book is written for graduate students and PhDs.

  14. Thinking Outside the Block: An Innovative Alternative to 4X4 Block Scheduling.

    Science.gov (United States)

    Frank, Myra

    2002-01-01

    Introduces a 4x1 block scheduling method that was developed as an alternative to 4x4 block scheduling. Schedules Fridays for summer school, test preparation, and enrichment and elective courses. Includes suggestions on how to alleviate drawbacks of the 4x1 block schedule. (YDS)

  15. 20 CFR 226.34 - Divorced spouse regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Divorced spouse regular annuity rate. 226.34... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...

  16. Distribution of short block copolymer chains in Binary Blends of Block Copolymers Having Hydrogen Bonding

    Science.gov (United States)

    Kwak, Jongheon; Han, Sunghyun; Kim, Jin Kon

    2014-03-01

    A binary mixture of two block copolymers whose blocks are capable of forming hydrogen bonds allows one to obtain various microdomains that could not be expected for a neat block copolymer. For instance, the binary blend of symmetric polystyrene-block-poly(2-vinylpyridine) copolymer (PS-b-P2VP) and polystyrene-block-polyhydroxystyrene copolymer (PS-b-PHS), where hydrogen bonding occurred between P2VP and PHS, showed hexagonally packed (HEX) cylindrical and body centered cubic (BCC) spherical microdomains. To determine the exact location of the short block copolymer chains at the interface, we synthesized deuterated polystyrene-block-polyhydroxystyrene copolymer (dPS-b-PHS) and prepared a binary mixture with PS-b-P2VP. We investigate, via small-angle X-ray scattering (SAXS) and neutron reflectivity (NR), the exact location of the shorter dPS block chains near the interface of the microdomains.

  17. Matched-Filter Thermography

    Directory of Open Access Journals (Sweden)

    Nima Tabatabaei

    2018-04-01

    Full Text Available Conventional infrared thermography techniques, including pulsed and lock-in thermography, have shown great potential for non-destructive evaluation of broad spectrum of materials, spanning from metals to polymers to biological tissues. However, performance of these techniques is often limited due to the diffuse nature of thermal wave fields, resulting in an inherent compromise between inspection depth and depth resolution. Recently, matched-filter thermography has been introduced as a means for overcoming this classic limitation to enable depth-resolved subsurface thermal imaging and improving axial/depth resolution. This paper reviews the basic principles and experimental results of matched-filter thermography: first, mathematical and signal processing concepts related to matched-fileting and pulse compression are discussed. Next, theoretical modeling of thermal-wave responses to matched-filter thermography using two categories of pulse compression techniques (linear frequency modulation and binary phase coding are reviewed. Key experimental results from literature demonstrating the maintenance of axial resolution while inspecting deep into opaque and turbid media are also presented and discussed. Finally, the concept of thermal coherence tomography for deconvolution of thermal responses of axially superposed sources and creation of depth-selective images in a diffusion-wave field is reviewed.
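
The pulse-compression idea reviewed above can be sketched for the linear-frequency-modulation case: cross-correlating a noisy, delayed copy of a known chirp with the chirp itself concentrates the response at the true delay. All parameter values below are illustrative, not taken from the cited experiments:

```python
import numpy as np

fs = 10_000                      # sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)    # 0.5 s chirp
f0, f1 = 10.0, 400.0             # swept band
k = (f1 - f0) / t[-1]
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))  # LFM excitation

def detect_delay(rx, template):
    """Matched filter: cross-correlate and take the peak location."""
    return int(np.argmax(np.correlate(rx, template, mode="valid")))

# received signal: attenuated chirp delayed by 1200 samples, buried in noise
rng = np.random.default_rng(0)
rx = 0.5 * rng.standard_normal(3 * len(chirp))
rx[1200:1200 + len(chirp)] += 0.2 * chirp
print(detect_delay(rx, chirp))   # peak lands close to 1200
```

The compressed peak width scales with the inverse of the swept bandwidth, which is the axial-resolution gain that matched-filter thermography exploits; in thermography the template would be the thermal-wave response to the coded excitation rather than the raw chirp.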

  18. Minimum Description Length Block Finder, a Method to Identify Haplotype Blocks and to Compare the Strength of Block Boundaries

    OpenAIRE

    Mannila, H.; Koivisto, M.; Perola, M.; Varilo, T.; Hennah, W.; Ekelund, J.; Lukk, M.; Peltonen, L.; Ukkonen, E.

    2003-01-01

    We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the ...
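
The dynamic program mentioned in the abstract has a generic shape that is easy to sketch. The block cost used below (within-block variance plus a fixed per-block penalty) is a stand-in for the MDL description length of the paper:

```python
import numpy as np

def optimal_segmentation(n, cost):
    """Min-cost segmentation of positions 0..n-1 into contiguous blocks.
    cost(i, j) scores the block [i, j); O(n^2) cost evaluations."""
    best = [0.0] + [np.inf] * n
    back = [0] * (n + 1)
    for j in range(1, n + 1):
        best[j], back[j] = min((best[i] + cost(i, j), i) for i in range(j))
    cuts, j = [], n
    while j > 0:                       # follow back-pointers to the boundaries
        cuts.append((back[j], j))
        j = back[j]
    return best[n], cuts[::-1]

x = np.array([1.0, 1.0, 1.0, 9.0, 9.0, 9.0])
cost = lambda i, j: float(np.var(x[i:j]) * (j - i) + 1.0)   # fit + block penalty
total, blocks = optimal_segmentation(len(x), cost)
print(blocks)  # [(0, 3), (3, 6)]
```

With an MDL cost, the per-block penalty emerges naturally as the bits needed to encode the block model, so no tuning parameter is required; the boundary-probability computation in the paper additionally sums over all segmentations rather than taking only the optimum.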

  19. Physical characteristics of elite adolescent female basketball players and their relationship to match performance

    Science.gov (United States)

    Montalvo, Alicia; Latinjak, Alexander; Unnithan, Viswanath

    2016-01-01

    Abstract There were two aims of this study: first, to investigate physical fitness and match performance differences between under-16 (U16) and under-18 (U18) female basketball players, and second, to evaluate the relationship between physical fitness and game-related performances. Twenty-three young, female, elite Spanish basketball players (16.2 ± 1.2 years) participated in the study. The sample was divided into two groups: U16 and U18 players. The average scores from pre- and post-season physical fitness measurements were used for subsequent analyses. Anthropometric variables were also measured. To evaluate game performance, game-related statistics, including the number of games and minutes played, points, rebounds, assists, steals and blocks per game, were recorded for every competitive match in one season. When anthropometric and physical performance variables were compared between groups, the U18 group demonstrated significantly (p<0.05) higher values in upper (+21.2%) and lower (+27.11%) limb strength compared to the U16 group. Furthermore, no significant differences between groups were observed in match performance outcomes. Only two performance variables, steals and assists per game, correlated significantly with jump capacity, speed, agility, anaerobic power, repeated sprint ability and aerobic power (p ≤ 0.005). These findings can help optimize training programs for young, elite female basketball players. PMID:28149421

  20. Dimensional regularization and analytical continuation at finite temperature

    International Nuclear Information System (INIS)

    Chen Xiangjun; Liu Lianshou

    1998-01-01

    The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed and a method of regularization of infrared divergent integrals and infrared divergent sums is given

  1. Physical characteristics of elite adolescent female basketball players and their relationship to match performance

    Directory of Open Access Journals (Sweden)

    Fort-Vanmeerhaeghe Azahara

    2016-12-01

    Full Text Available There were two aims of this study: first, to investigate physical fitness and match performance differences between under-16 (U16) and under-18 (U18) female basketball players, and second, to evaluate the relationship between physical fitness and game-related performances. Twenty-three young, female, elite Spanish basketball players (16.2 ± 1.2 years) participated in the study. The sample was divided into two groups: U16 and U18 players. The average scores from pre- and post-season physical fitness measurements were used for subsequent analyses. Anthropometric variables were also measured. To evaluate game performance, game-related statistics, including the number of games and minutes played, points, rebounds, assists, steals and blocks per game, were recorded for every competitive match in one season. When anthropometric and physical performance variables were compared between groups, the U18 group demonstrated significantly (p<0.05) higher values in upper (+21.2%) and lower (+27.11%) limb strength compared to the U16 group. Furthermore, no significant differences between groups were observed in match performance outcomes. Only two performance variables, steals and assists per game, correlated significantly with jump capacity, speed, agility, anaerobic power, repeated sprint ability and aerobic power (p ≤ 0.005). These findings can help optimize training programs for young, elite female basketball players.

  2. Pore pressure control on faulting behavior in a block-gouge system

    Science.gov (United States)

    Yang, Z.; Juanes, R.

    2016-12-01

    Pore fluid pressure in a fault zone can be altered by natural processes (e.g., mineral dehydration and thermal pressurization) and industrial operations involving subsurface fluid injection/extraction for the development of energy and water resources. However, the effects of pore pressure change on the stability and slip motion of a preexisting geologic fault remain poorly understood, yet they are critical for the assessment of seismic risk. In this work, we develop a micromechanical model to investigate the effect of pore pressure on faulting behavior. The model couples pore network fluid flow and mechanics of the solid grains. We conceptualize the fault zone as a gouge layer sandwiched between two blocks; the block material is represented by a group of contact-bonded grains and the gouge is composed of unbonded grains. A pore network is extracted from the particulate pack of the block-gouge system with pore body volumes and pore throat conductivities calculated rigorously based on the geometry of the local pore space. Pore fluid exerts pressure force onto the grains, the motion of which is solved using the discrete element method (DEM). The model updates the pore network regularly in response to deformation of the solid matrix. We study the fault stability in the presence of a pressure inhomogeneity (gradient) across the gouge layer, and compare it with the case of homogeneous pore pressure. We consider both normal and thrust faulting scenarios with a focus on the onset of shear failure along the block-gouge interfaces. Numerical simulations show that the slip behavior is characterized by intermittent dynamics, which is evident in the number of slipping contacts at the block-gouge interfaces and the total kinetic energy of the gouge particles. Numerical results also show that, for the case of pressure inhomogeneity, the onset of slip occurs earlier for the side with higher pressure, and that this onset appears to be controlled by the maximum pressure of both sides.

  3. Effect of Regular Exercise on Anxiety and Self-Esteem Level in College Students

    Directory of Open Access Journals (Sweden)

    Zahra Hamidah

    2015-09-01

Full Text Available Background: Regular exercise is often presented as an effective tool to influence the psychological aspects of a human being. Recent studies show that anxiety and self-esteem are among the most important psychological aspects, especially in college students. This study aimed to determine the differences in anxiety and self-esteem levels between students who joined and did not join a regular exercise program, Pendidikan Dasar XXI Atlas Medical Pioneer (Pendas XXI AMP), in the Faculty of Medicine, Universitas Padjadjaran. Methods: A cross-sectional comparative study was carried out on 64 students who joined and did not join Pendas XXI AMP. Thirty-two students (12 females and 20 males) who joined Pendas XXI AMP participated in aerobic and anaerobic exercise sessions lasting 30 minutes per session, three times in 5 months. The control group consisted of 32 students who did not join Pendas XXI AMP, with a gender composition matching the case group (12 females and 20 males). Two questionnaires, the Zung Self-Rating Anxiety Scale and Rosenberg's Self-Esteem Scale, were administered to both groups. The data were analyzed using the chi-square test (α=0.05). Results: There were statistically significant differences in anxiety level (p=0.016) and self-esteem level (p=0.039) between the case and control groups. The students who joined Pendas XXI AMP had lower anxiety and higher self-esteem levels. Conclusions: Planned, structured, and repeated physical activities have a positive influence on anxiety and self-esteem levels.
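
The chi-square comparison described above can be sketched numerically. The sketch below computes a generic Pearson chi-square statistic for a 2×2 contingency table; the counts are hypothetical and are not the study's data:

```python
import numpy as np

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

# Hypothetical counts: rows = exercise / no exercise, cols = low / high anxiety
stat = chi_square_2x2([[20, 12], [10, 22]])

# At alpha = 0.05 with one degree of freedom, the critical value is 3.841.
significant = stat > 3.841
```

A statistic above the 3.841 threshold would be reported as significant at α=0.05, as in the abstract's p-values.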

  4. Spatial competition with intermediated matching

    NARCIS (Netherlands)

    van Raalte, C.L.J.P.; Webers, H.M.

    1995-01-01

    This paper analyzes the spatial competition in commission fees between two match makers. These match makers serve as middlemen between buyers and sellers who are located uniformly on a circle. The profits of the match makers are determined by their respective market sizes. A limited willingness to

  5. PREDICTING THE MATCH OUTCOME IN ONE DAY INTERNATIONAL CRICKET MATCHES, WHILE THE GAME IS IN PROGRESS

    Directory of Open Access Journals (Sweden)

    Michael Bailey

    2006-12-01

Full Text Available Millions of dollars are wagered on the outcome of one day international (ODI) cricket matches, with a large percentage of bets occurring after the game has commenced. Using match information gathered from all 2200 ODI matches played prior to January 2005, a range of variables that could independently explain statistically significant proportions of variation associated with the predicted run totals and match outcomes were created. Such variables include home ground advantage, past performances, match experience, performance at the specific venue, performance against the specific opposition, experience at the specific venue and current form. Using a multiple linear regression model, prediction variables were numerically weighted according to statistical significance and used to predict the match outcome. With the use of the Duckworth-Lewis method to determine resources remaining at the end of each completed over, the predicted run total of the batting team could be updated to provide a more accurate prediction of the match outcome. By applying this prediction approach to a holdout sample of matches, the efficiency of the "in the run" wagering market could be assessed. Preliminary results suggest that the market is prone to overreact to events occurring throughout the course of the match, thus creating brief inefficiencies in the wagering market.
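
The weighting of prediction variables by multiple linear regression can be sketched on synthetic data. The feature names and effect sizes below are hypothetical, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical prediction variables for 200 past matches:
# home advantage (0/1), recent form, venue experience.
n = 200
X = np.column_stack([rng.integers(0, 2, n), rng.normal(size=n), rng.normal(size=n)])
true_w = np.array([15.0, 20.0, 5.0])             # planted effect sizes (runs)
runs = 230 + X @ true_w + rng.normal(0, 10, n)   # noisy run totals

# A least-squares fit with an intercept weights each variable by its
# contribution to the predicted run total.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, runs, rcond=None)
predicted = A @ coef
```

In the paper's setting, `predicted` would then be revised over-by-over using the Duckworth-Lewis resources remaining; that step is omitted here.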

  6. Regular and conformal regular cores for static and rotating solutions

    Energy Technology Data Exchange (ETDEWEB)

    Azreg-Aïnou, Mustapha

    2014-03-07

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  7. Regular and conformal regular cores for static and rotating solutions

    International Nuclear Information System (INIS)

    Azreg-Aïnou, Mustapha

    2014-01-01

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  8. Low-rank matrix approximation with manifold regularization.

    Science.gov (United States)

    Zhang, Zhenyue; Zhao, Keke

    2013-07-01

    This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization to the matrix factorization. Superior to the graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with small number of points) and an alternate iterative algorithm with inexact inner iteration (for large scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.
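
The paper's model has globally optimal, closed-form solutions; the sketch below instead illustrates the general objective shape (squared reconstruction error plus a graph-Laplacian manifold penalty) with generic alternating minimization, and is not the authors' algorithm. The chain-graph neighborhood structure is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, lam = 20, 12, 3, 0.1
X = rng.random((m, n))

# Chain-graph Laplacian over the n data points (assumed neighborhood structure).
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A

def objective(U, V):
    return np.linalg.norm(X - U @ V) ** 2 + lam * np.trace(V @ L @ V.T)

U = rng.random((m, k))
V = rng.random((k, n))
obj = [objective(U, V)]
for _ in range(30):
    # U-step: ordinary least squares, U = X V^T (V V^T)^{-1}
    U = X @ V.T @ np.linalg.inv(V @ V.T + 1e-8 * np.eye(k))
    # V-step: solve the Sylvester-type equation (U^T U) V + lam * V L = U^T X
    # via vectorization: vec(AVB) = (B^T kron A) vec(V), with column-major vec.
    M = np.kron(np.eye(n), U.T @ U) + lam * np.kron(L.T, np.eye(k))
    V = np.linalg.solve(M, (U.T @ X).flatten(order="F")).reshape(k, n, order="F")
    obj.append(objective(U, V))
```

Each step exactly minimizes the objective in one factor, so the objective is non-increasing across iterations.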

  9. The wild tapered block bootstrap

    DEFF Research Database (Denmark)

    Hounyo, Ulrich

In this paper, a new resampling procedure, called the wild tapered block bootstrap, is introduced as a means of calculating standard errors of estimators and constructing confidence regions for parameters based on dependent heterogeneous data. The method consists in tapering each overlapping block...... of the series first, then applying the standard wild bootstrap for independent and heteroscedastically distributed observations to the overlapping tapered blocks in an appropriate way. It preserves the favorable bias and mean squared error properties of the tapered block bootstrap, which is the state-of-the-art block......-order asymptotic validity of the tapered block bootstrap as well as the wild tapered block bootstrap approximation to the actual distribution of the sample mean is also established when data are assumed to satisfy a near epoch dependent condition. The consistency of the bootstrap variance estimator for the sample...

  10. Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions

    International Nuclear Information System (INIS)

    Lin, Hongxia; Du, Lili

    2013-01-01

In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263–74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier–Stokes equations Indiana Univ. Math. J. 57 2643–61; 2011 Global regularity criterion for the 3D Navier–Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919–32) for 3D incompressible Navier–Stokes equations. (paper)

  11. One feature of the activated southern Ordos block: the Ziwuling small earthquake cluster

    Directory of Open Access Journals (Sweden)

    Li Yuhang

    2014-08-01

Full Text Available Small earthquakes (Ms > 2.0) have been recorded from 1970 to the present day and reveal a significant difference in seismicity between the stable Ordos block and its active surrounding area. The southern Ordos block hosts a conspicuous small earthquake belt, clustered and isolated along the NNW direction, which extends into the inner stable Ordos block; no active fault can be matched to this small earthquake cluster. In this paper, we analyze the dynamic mechanism of this small earthquake cluster based on the GPS velocity field (from 1999 to 2007), mainly from the Crustal Movement Observation Network of China (CMONOC), with respect to the north and south China blocks. The principal direction of the strain rate field, the expansion rate field, the maximum shear strain rate, and the rotation rate were constrained using the GPS velocity field. The results show that the velocity field, which is bounded by the small earthquake cluster from Tongchuan to Weinan, differs from the strain rate field, and the crustal deformation is left-lateral shear. This left-lateral shear belt spatially coincides both with the Neo-tectonic belt in the Weihe Basin and with the NNW small earthquake cluster (the Ziwuling small earthquake cluster). Based on these studies, we speculate that the NNW small earthquake cluster is caused by left-lateral shear slip, which is prone to strain accumulation. When the strain is released along the weak zone of the structure, small earthquakes diffuse within the upper crust. The maximum principal compression stress direction changed from NE-SW to NEE-SWW, and the former reverse faults in the southwestern margin of the Ordos block became left-lateral strike slip due to readjustment of the tectonic stress field after the middle Pleistocene. The NNW Neo-tectonic belt in the Weihe Basin, the different movement character of the inner Weihe Basin (demonstrated through GPS measurements), and the small earthquake cluster belt reflect the activated

  12. Regular-fat dairy and human health

    DEFF Research Database (Denmark)

    Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas

    2016-01-01

    In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular fat dairy products and human health. In an effort to......, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted....

  13. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded

  14. Automated pulmonary lobar ventilation measurements using volume-matched thoracic CT and MRI

    Science.gov (United States)

    Guo, F.; Svenningsen, S.; Bluemke, E.; Rajchl, M.; Yuan, J.; Fenster, A.; Parraga, G.

    2015-03-01

Objectives: To develop and evaluate an automated registration and segmentation pipeline for regional lobar pulmonary structure-function measurements, using volume-matched thoracic CT and MRI in order to guide therapy. Methods: Ten subjects underwent pulmonary function tests and volume-matched 1H and 3He MRI and thoracic CT during a single 2-hr visit. CT was registered to 1H MRI using an affine method that incorporated block-matching, followed by a deformable step using free-form deformation. The resultant deformation field was used to deform the associated CT lobe mask that was generated using commercial software. 3He-1H image registration used the same two-step registration method, and 3He ventilation was segmented using hierarchical k-means clustering. Whole-lung and lobar 3He ventilation and ventilation defect percent (VDP) were generated by mapping ventilation defects to CT-defined whole-lung and lobe volumes. Target CT-3He registration accuracy was evaluated using region-, surface distance-, and volume-based metrics. Automated whole-lung and lobar VDP was compared with semi-automated and manual results using paired t-tests. Results: The proposed pipeline yielded regional spatial agreement of 88.0+/-0.9% and surface distance error of 3.9+/-0.5 mm. Automated and manual whole-lung and lobar ventilation and VDP were not significantly different and they were significantly correlated (r = 0.77, p …) pulmonary structural-functional maps with high accuracy and robustness, providing an important tool for image-guided pulmonary interventions.
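
The ventilation segmentation step can be illustrated with a plain two-cluster 1-D k-means on synthetic intensities. This is a simplification of the hierarchical k-means clustering used in the paper, and the intensity values below are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "ventilation image": defect voxels near 0.2, ventilated voxels
# near 0.8 (arbitrary normalized intensities, 25% defect by construction).
intensities = np.concatenate([rng.normal(0.2, 0.05, 500),
                              rng.normal(0.8, 0.05, 1500)])

def kmeans_1d(x, k=2, iters=20):
    """Lloyd's algorithm on scalars: assign to nearest center, recenter."""
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers

labels, centers = kmeans_1d(intensities)
ventilated = labels == np.argmax(centers)
vdp = 100.0 * (1.0 - ventilated.mean())  # ventilation defect percent of voxels
```

On this synthetic input the recovered defect percentage matches the planted 25% closely; the paper additionally maps such defects onto CT-defined lobe volumes, which is not modeled here.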

  15. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    Science.gov (United States)

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  16. Approaches for Stereo Matching

    Directory of Open Access Journals (Sweden)

    Takouhi Ozanian

    1995-04-01

Full Text Available This review focuses on the last decade's development of computational stereopsis for recovering three-dimensional information. The main components of stereo analysis are exposed: image acquisition and camera modeling, feature selection, feature matching and disparity interpretation. A brief survey is given of the well-known feature selection approaches, and the estimation parameters for this selection are mentioned. The difficulties in identifying correspondent locations in the two images are explained. Methods for effectively constraining the search for correct solutions of the correspondence problem are discussed, as are strategies for the whole matching process. Reasons for the occurrence of matching errors are considered. Some recently proposed approaches, employing new ideas in the modeling of stereo matching in terms of energy minimization, are described. Acknowledging the importance of computation time for real-time applications, special attention is paid to parallelism as a way to achieve the required level of performance. The development of trinocular stereo analysis as an alternative to the conventional binocular one is described. Finally, a classification of the test images used for verifying stereo matching algorithms is supplied.
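
A minimal sum-of-absolute-differences block matcher illustrates the core correspondence search discussed above, here on a synthetic image pair with a known constant disparity:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, B, D = 32, 64, 5, 8          # image size, block size, max disparity
left = rng.random((H, W))

# Synthetic right view: the scene shifted by a constant disparity of 3,
# i.e. left[y, x] corresponds to right[y, x - 3].
d_true = 3
right = np.zeros_like(left)
right[:, :-d_true] = left[:, d_true:]

disparities = []
for y in range(0, H - B):
    for x in range(D, W - B - d_true):
        block = left[y:y + B, x:x + B]
        # Sum of absolute differences over candidate disparities 0..D-1;
        # the best match minimizes the SAD cost.
        sads = [np.abs(block - right[y:y + B, x - d:x - d + B]).sum()
                for d in range(D)]
        disparities.append(int(np.argmin(sads)))
```

On noise-free synthetic data every block recovers the planted disparity exactly; real matchers add the constraints and error handling surveyed in the review (ordering, uniqueness, occlusion handling).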

  17. Paroxysmal atrioventricular block: Electrophysiological mechanism of phase 4 conduction block in the His-Purkinje system: A comparison with phase 3 block.

    Science.gov (United States)

    Shenasa, Mohammad; Josephson, Mark E; Wit, Andrew L

    2017-11-01

Paroxysmal atrioventricular (A-V) block is relatively rare, and due to its transient nature, it is often underrecognized. It is often triggered by atrial, junctional, or ventricular premature beats, and occurs in the presence of a diseased His-Purkinje system (HPS). Here, we present a 45-year-old white male who was admitted for observation due to recurrent syncope and near-syncope, who had paroxysmal A-V block. The likely cellular electrophysiological mechanism(s) of paroxysmal A-V block and its differential diagnosis and management are discussed. Continuous electrocardiographic monitoring was done while the patient was in the cardiac unit. Multiple episodes of paroxysmal A-V block were documented in this case. All episodes were initiated and terminated with atrial/junctional premature beats. The patient underwent permanent pacemaker implantation and has remained asymptomatic since then. Paroxysmal A-V block is rare and often causes syncope or near-syncope. Permanent pacemaker implantation is indicated according to the current guidelines. Paroxysmal A-V block occurs in the setting of diseased HPS and is bradycardia-dependent. The detailed electrophysiological mechanisms, which involve phase 4 diastolic depolarization, and differential diagnosis are discussed. © 2017 Wiley Periodicals, Inc.

  18. Stinging Insect Matching Game

    Science.gov (United States)

Stinging insects can ruin summer fun for those who are ... the difference between the different kinds of stinging insects in order to keep your summer safe and ...

  19. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion enjoyed wide popularity in many areas due to its attractive properties, it appeared to suffer from some shortcomings. Alternative optimization criteria, as a result, have been proposed. These new criteria allowed, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is the regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a better, more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that go in search of minimizing the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
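
The basic regularized least-squares estimator underlying this line of work can be sketched as follows. This is the textbook ridge/RLS solution with a fixed regularization parameter, not the COPRA selection rule itself; the matrix and noise level are made up to exhibit ill-conditioning:

```python
import numpy as np

rng = np.random.default_rng(7)

# Ill-conditioned model matrix: two nearly collinear columns.
n = 50
u = rng.normal(size=n)
A = np.column_stack([u, u + 1e-4 * rng.normal(size=n), rng.normal(size=n)])
x_true = np.array([1.0, 1.0, 2.0])
y = A @ x_true + 0.01 * rng.normal(size=n)

def rls(A, y, alpha):
    """Regularized least squares: argmin ||Ax - y||^2 + alpha * ||x||^2."""
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(k), A.T @ y)

x_ls = rls(A, y, 0.0)    # plain least squares: unstable on this matrix
x_rls = rls(A, y, 0.1)   # regularized: coefficients shrink toward zero
```

Regularization suppresses the wildly oscillating split between the near-collinear columns while barely changing the fit; choosing `alpha` well is exactly the problem the thesis addresses.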

  20. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).
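
One standard way to solve a regularized optimal transport problem numerically is entropic regularization with Sinkhorn iterations. The sketch below is a generic illustration of that technique on two small histograms and is not claimed to be among the formulations analyzed in the paper:

```python
import numpy as np

n = 5
x = np.linspace(0, 1, n)
a = np.ones(n) / n                        # source histogram
b = np.array([0.1, 0.1, 0.2, 0.3, 0.3])  # target histogram
C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost
eps = 0.1                                # entropic regularization strength

# Sinkhorn iterations: alternately rescale rows and columns of the
# Gibbs kernel K so the plan's marginals match a and b.
K = np.exp(-C / eps)
u = np.ones(n)
v = np.ones(n)
for _ in range(1000):
    u = a / (K @ v)
    v = b / (K.T @ u)
P = u[:, None] * K * v[None, :]          # regularized transport plan
```

Smaller `eps` brings the plan closer to unregularized optimal transport at the cost of slower convergence.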

  1. Designers Block 2002

    DEFF Research Database (Denmark)

    Dickson, Thomas

    2002-01-01

The article opens: alongside London's established design fair '100% Design', an underground scene of design exhibitions has grown up. The dominant and best-known initiative is Designers Block, which this year exhibited at two venues in the city. Designers Block is a more informal exhibition forum...

  2. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

Regularization techniques are widely used for inverse problem solving in computer vision, for example in surface reconstruction, edge detection, or optical flow estimation. Energy functions used in regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.

  3. Three regularities of recognition memory: the role of bias.

    Science.gov (United States)

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.

  4. Method of transferring regular shaped vessel into cell

    International Nuclear Information System (INIS)

    Murai, Tsunehiko.

    1997-01-01

The present invention concerns a method of transferring regular shaped vessels from a non-contaminated area into a contaminated cell. A passage hole allowing the regular shaped vessels to pass in the longitudinal direction is formed in a partitioning wall at the bottom of the contaminated cell. A plurality of regular shaped vessels are stacked in multiple stages in the vertical direction from the non-contaminated area below the passage hole, allowed to pass while being urged upward, and transferred successively into the contaminated cell. As a result, since the passage hole is kept substantially closed by the regular shaped vessels during transfer, radiation and contaminated materials are prevented from escaping from the contaminated cell to the non-contaminated area. Since there is no need to open and close an isolation door frequently, the workability of the transfer can be improved remarkably. In addition, since a sealing member for sealing the gap between the regular shaped vessels passing through the passage hole and the partitioning wall at the bottom is disposed in the passage hole, contaminated materials in the contaminated cell can be prevented from escaping through the gap to the non-contaminated area. (N.H.)

  5. Carbohydrate ingestion before and during soccer match play and blood glucose and lactate concentrations.

    Science.gov (United States)

    Russell, Mark; Benton, David; Kingsley, Michael

    2014-01-01

The ingestion of carbohydrate (CHO) before and during exercise and at halftime is commonly recommended to soccer players for maintaining blood glucose concentrations throughout match play. However, an exercise-induced rebound glycemic response has been observed in the early stages of the second half of simulated soccer-specific exercise when CHO-electrolyte beverages were consumed regularly. Therefore, the metabolic effects of CHO beverage consumption throughout soccer match play remain unclear. To investigate the blood glucose and blood lactate responses to CHOs ingested before and during soccer match play. Crossover study. Applied research study. Ten male outfield academy soccer players (age = 15.6 ± 0.2 years, height = 1.74 ± 0.02 m, mass = 65.3 ± 1.9 kg, estimated maximal oxygen consumption = 58.4 ± 0.8 mL·kg(-1)·min(-1)). Players received a 6% CHO-electrolyte solution or an electrolyte (placebo) solution 2 hours before kickoff, before each half (within 10 minutes), and every 15 minutes throughout exercise. Blood samples were obtained at rest, every 15 minutes during the match (first half: 0-15, 15-30, and 30-45 minutes; second half: 45-60, 60-75, and 75-90 minutes) and 10 minutes into the halftime break. Metabolic responses (blood glucose and blood lactate concentrations) and markers of exercise intensity (heart rate) were recorded. Supplementation influenced the blood glucose response to exercise (time × treatment interaction effect: P ≤ .05), such that glucose concentrations were higher at 30 to 45 minutes in the CHO than in the placebo condition. However, in the second half, blood glucose concentrations were similar between conditions because of transient reductions from peak values occurring in both trials at halftime. Blood lactate concentrations were elevated above those at rest in the first 15 minutes of exercise (time-of-sample effect: P …; interaction effect: P = .49). Ingestion of a 6% CHO-electrolyte beverage before and during soccer match

  6. Automatic Constraint Detection for 2D Layout Regularization.

    Science.gov (United States)

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.

  7. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong

    2015-09-18

    In this paper, we address the problem of constraint detection for layout regularization. As layout we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate the layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. In our results, we evaluate the proposed framework on a variety of input layouts from different applications, which demonstrates our method has superior performance to the state of the art.
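
The detect-then-enforce idea can be illustrated in one dimension: group nearly equal edge coordinates (constraint detection), then snap each group to its mean, which is the least-squares solution under the detected equality constraints. The coordinates and tolerance below are made up, and the greedy grouping is a simplification of the paper's quadratic-programming formulation:

```python
import numpy as np

# Hypothetical left-edge x-coordinates of layout elements (bounding boxes).
xs = np.array([10.2, 9.8, 10.1, 40.0, 39.7, 80.5])
tol = 1.0

# Constraint detection: greedily chain sorted edges that lie within `tol`.
order = np.argsort(xs)
groups, current = [], [order[0]]
for i, j in zip(order, order[1:]):
    if xs[j] - xs[i] <= tol:
        current.append(j)
    else:
        groups.append(current)
        current = [j]
groups.append(current)

# Enforcement: minimizing sum_i (x_i - x_i_input)^2 subject to equality
# within each detected group snaps every group to its mean coordinate.
regularized = xs.copy()
for g in groups:
    regularized[g] = xs[g].mean()
```

The full 2D problem additionally detects size and distance constraints and solves for all element coordinates jointly.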

  8. Lavrentiev regularization method for nonlinear ill-posed problems

    International Nuclear Information System (INIS)

    Kinh, Nguyen Van

    2002-10-01

In this paper we shall be concerned with the Lavrentiev regularization method to reconstruct solutions x_0 of nonlinear ill-posed problems F(x) = y_0, where instead of y_0 noisy data y_δ ∈ X with ||y_δ - y_0|| ≤ δ are given and F: X → X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method, regularized solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x - x*) = y_δ with some initial guess x*. Assuming certain conditions concerning the operator F and the smoothness of the element x* - x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly. (author)
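
A one-dimensional numerical sketch of Lavrentiev regularization, with an assumed monotone operator F(x) = x³ + x standing in for an accretive operator on a Banach space:

```python
def F(x):
    return x ** 3 + x   # a monotone (accretive) nonlinear operator on R

x_true, x_star = 1.0, 0.0
y0 = F(x_true)
delta = 1e-3
y_delta = y0 + delta          # noisy data with |y_delta - y0| <= delta
alpha = 1e-2                  # regularization parameter

def solve_regularized(alpha, y, lo=-10.0, hi=10.0):
    """Solve F(x) + alpha * (x - x_star) = y by bisection.

    The left-hand side is strictly increasing, so the root is unique
    and bisection converges.
    """
    def G(x):
        return F(x) + alpha * (x - x_star) - y
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if G(lo) * G(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x_reg = solve_regularized(alpha, y_delta)
```

The perturbation term α(x − x*) stabilizes the equation against the noise in y_δ; choosing α appropriately relative to δ, as the paper's estimates prescribe, keeps x_reg close to x_true.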

  9. Effect of Dietary Weight Loss on Menstrual Regularity in Obese Young Adult Women with Polycystic Ovary Syndrome.

    Science.gov (United States)

    Marzouk, Tayseer M; Sayed Ahmed, Waleed A

    2015-12-01

    To investigate the effect of dietary weight loss on menstrual regularity in obese adolescent women with polycystic ovary syndrome (PCOS). A randomized controlled trial was held at the Faculty of Nursing, Mansoura University, and the Obesity Clinic of the Rheumatology Department at Mansoura University Hospitals between July 2011 and January 2013. Sixty adolescent women with PCOS, body mass index (BMI) greater than 30, and complaints of menstrual irregularities were included in this study. Enrolled women were divided equally and randomly into 2 groups: intervention and control groups. Women in the intervention group (n = 30) were subject to an intensive dietary educational program with instructions to follow a conventional energy restricted diet, whereas women in the control group were instructed to follow the same healthy diet of the first group without calorie restriction. Menstrual regularity, weight loss, the effect on waist circumference, and hirsutism score. The 2 groups were initially matched in average body weight, BMI, hirsutism score, and waist circumference. Six months later, there were significant decreases in all parameters in the weight reduction group. In addition, more menstrual episodes were recorded in the weight reduction compared with the control group (3.1 ± 1.2 vs. 2.3 ± 1.3; P = .010). Also, BMI, waist circumference, and hirsutism score were all significantly decreased at the end of the study. Dietary weight loss in adolescent women with PCOS resulted in significant improvement in menstrual regularity, BMI, waist circumference, and hirsutism score. Copyright © 2015 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.

  10. Online Manifold Regularization by Dual Ascending Procedure

    Directory of Open Access Journals (Sweden)

    Boliang Sun

    2013-01-01

    Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of the hinge function is key to transferring manifold regularization from the offline to the online setting. Our algorithms are derived by gradient ascent on the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way for the design and analysis of online manifold regularization algorithms.

  11. Associations between hypo-HDL cholesterolemia and cardiometabolic risk factors in middle-aged men and women: Independence of habitual alcohol drinking, smoking and regular exercise.

    Science.gov (United States)

    Wakabayashi, Ichiro; Daimon, Takashi

    Hypo-HDL cholesterolemia is a potent cardiovascular risk factor, and HDL cholesterol level is influenced by lifestyle factors including alcohol drinking, smoking and regular exercise. The aim of this study was to clarify the relationships between hypo-HDL cholesterolemia and cardiovascular risk factors and to determine whether or not these relationships depend on the above-mentioned lifestyle factors. The subjects were 3456 men and 2510 women (35-60 years of age) showing low HDL cholesterol levels. Subjects without habits of alcohol drinking, smoking and regular exercise (men, n=333; women, n=1410) and their age-matched control subjects were also analysed. In both men and women, among overall subjects and among subjects without histories of alcohol drinking, smoking and regular exercise, odds ratios of subjects with hypo-HDL cholesterolemia vs. subjects with normo-HDL cholesterolemia for high body mass index, high waist-to-height ratio, high triglycerides, high lipid accumulation product and multiple risk factors (three or more of obesity, hypertension, dyslipidaemia and diabetes) were significantly higher than the reference level of 1.00. These associations in the overall subjects remained when the above habits were adjusted for. Hypo-HDL cholesterolemic men and women have adverse cardiovascular profiles, such as obesity, hypertriglyceridemia and multiple risk factors, independently of age, alcohol drinking, smoking and regular exercise. Copyright © 2016 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.

  12. Association between long work hours and depressive state: a pilot study of propensity score matched Japanese white-collar workers.

    Science.gov (United States)

    Uchida, Mitsuo; Morita, Hiroshi

    2018-06-01

    Although long work hours have been associated with various physical health problems, studies of their association with mental health have yielded inconsistent results, due to differences in study settings, study outcomes and/or unmeasured background factors. In this study, we used a propensity score method to evaluate the association between work hours and depressive state. A total of 467 Japanese white-collar workers were surveyed and divided into long and regular work hour groups according to overtime work records. Propensity score matching was performed based on 32 individual background and workplace factors, yielding 74 pairs of propensity-matched subjects. CES-D score, an indicator of depressive state, did not differ significantly between the two groups (p=0.203). However, work motivation, work control, social support and emotional stability correlated with CES-D score. These findings suggest that work control and social support factors are more strongly associated with depressive state than control of work hours is. The results also suggest that propensity score matching can be used to evaluate the association between work hours and mental health in occupational study settings. Further studies, in larger populations, are required to determine the association between work hours and mental health parameters.
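
    The matching step described in this abstract can be sketched as follows. This is an illustrative greedy 1:1 nearest-neighbour match on precomputed propensity scores, not the authors' implementation; the function name, subject IDs, scores and caliper below are all invented for illustration.

```python
# Illustrative sketch (not the study's code): greedy 1:1 nearest-neighbour
# matching on precomputed propensity scores with a caliper.

def greedy_match(treated, controls, caliper=0.05):
    """Pair each treated subject with the closest unused control.

    treated, controls: lists of (subject_id, propensity_score).
    Subjects with no control within the caliper stay unmatched.
    """
    available = dict(controls)              # id -> score, still unmatched
    pairs = []
    for tid, ts in sorted(treated, key=lambda t: t[1]):
        if not available:
            break
        cid = min(available, key=lambda c: abs(available[c] - ts))
        if abs(available[cid] - ts) <= caliper:
            pairs.append((tid, cid))
            del available[cid]
    return pairs

# invented example: long-hour workers matched to regular-hour workers
long_hours = [("t1", 0.61), ("t2", 0.34), ("t3", 0.90)]
regular_hours = [("c1", 0.33), ("c2", 0.58), ("c3", 0.95), ("c4", 0.10)]
print(greedy_match(long_hours, regular_hours))
```

    In practice the propensity scores would come from a model (commonly logistic regression) of group membership on the 32 background and workplace factors, and outcome comparison would then be restricted to the matched pairs.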

  13. Arthroscopic medial meniscus trimming or repair under nerve blocks: Which nerves should be blocked?

    Science.gov (United States)

    Taha, AM; Abd-Elmaksoud, AM

    2016-01-01

    Background: This study aimed to determine the role of the sciatic and obturator nerve blocks (in addition to femoral block) in providing painless arthroscopic medial meniscus trimming/repair. Materials and Methods: One hundred and twenty patients with medial meniscus tear who had been scheduled for knee arthroscopy were planned for inclusion in this controlled prospective double-blind study. The patients were randomly allocated into three equal groups: FSO, FS, and FO. The femoral, sciatic, and obturator nerves were blocked in the FSO group. The femoral and sciatic nerves were blocked in the FS group, while the femoral and obturator nerves were blocked in the FO group. Intraoperative pain and its causative surgical maneuver were recorded. Results: All the patients in the FO group (n = 7, 100%) had intraoperative pain. The research was terminated in this group but completed in the FS and FSO groups (40 patients each). During valgus positioning of the knee for surgical management of the medial meniscus tear, the patients in the FS group experienced pain more frequently than those in the FSO group (P = 0.005). Conclusion: Adding a sciatic nerve block to the femoral nerve block is important for painless knee arthroscopy. Further addition of an obturator nerve block may be needed when a valgus knee position is required to manage the medial meniscus tear. PMID:27375382

  14. Real-time eSports Match Result Prediction

    OpenAIRE

    Yang, Yifan; Qin, Tian; Lei, Yu-Heng

    2016-01-01

    In this paper, we try to predict the winning team of a match in the multiplayer eSports game Dota 2. To address the weaknesses of previous work, we consider more aspects of prior (pre-match) features from individual players' match history, as well as real-time (during-match) features at each minute as the match progresses. We use logistic regression, the proposed Attribute Sequence Model, and their combinations as the prediction models. In a dataset of 78362 matches where 20631 matches contai...

  15. Regular graph construction for semi-supervised learning

    International Nuclear Information System (INIS)

    Vega-Oliveros, Didier A; Berton, Lilian; Eberle, Andre Mantini; Lopes, Alneu de Andrade; Zhao, Liang

    2014-01-01

    Semi-supervised learning (SSL) stands out for using a small amount of labeled points for data clustering and classification. In this scenario, graph-based methods allow the analysis of local and global characteristics of the available data by identifying classes or groups regardless of data distribution and by representing submanifolds in Euclidean space. Most methods used in the literature for SSL classification pay little attention to graph construction. However, regular graphs can obtain better classification accuracy than traditional methods such as k-nearest neighbor (kNN), since kNN favors the generation of hubs and is not appropriate for high-dimensional data. Nevertheless, methods commonly used for generating regular graphs have high computational cost. We tackle this problem by introducing an alternative method for the generation of regular graphs with better runtime performance than the methods usually found in the area. Our technique is based on the preferential selection of vertices according to some topological measures, like closeness, generating at the end of the process a regular graph. Experiments using the global and local consistency method for label propagation show that our method provides better or equal classification rates in comparison with kNN.
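
    The idea of a degree-constrained similarity graph can be illustrated with a minimal sketch. This is our own greedy degree-capped variant under stated assumptions, not the authors' preferential-selection algorithm: the closest pairs are connected first and no vertex may exceed degree k, which on well-spread data tends to produce a k-regular graph.

```python
import itertools
import math

def regular_graph(points, k):
    """Greedy degree-capped similarity graph: connect the closest pairs
    first, but never let a vertex exceed degree k."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    # all candidate edges, from shortest to longest
    pairs = sorted(itertools.combinations(range(n), 2),
                   key=lambda p: dist(*p))
    degree = [0] * n
    edges = set()
    for i, j in pairs:
        if degree[i] < k and degree[j] < k:
            edges.add((i, j))
            degree[i] += 1
            degree[j] += 1
    return edges, degree
```

    On six points placed on a regular hexagon with k = 2, the greedy pass links each vertex to its two ring neighbours, giving an exactly 2-regular graph; in general the cap only guarantees maximum degree k, which is one reason the authors' preferential selection by topological measures is more involved.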

  16. Bus Stops and Pedestrian-Motor Vehicle Collisions in Lima, Peru: A Matched Case-Control Study

    Science.gov (United States)

    Quistberg, D. Alex; Koepsell, Thomas D.; Johnston, Brian D.; Boyle, Linda Ng; Miranda, J. Jaime; Ebel, Beth E.

    2015-01-01

    Objective To evaluate the relationship between bus stop characteristics and pedestrian-motor vehicle collisions. Design Matched case-control study where the units of study were pedestrian crossings. Setting Random sample of 11 police commissaries in Lima, Peru. Data collection occurred from February, 2011 to September, 2011. Participants 97 intersection cases representing 1,134 collisions and 40 mid-block cases representing 469 collisions that occurred between October, 2010 and January, 2011, and their matched controls. Main Exposures Presence of a bus stop and specific bus stop characteristics. Main Outcome Occurrence of a pedestrian-motor vehicle collision. Results Intersections with bus stops were three times more likely to have a pedestrian-vehicle collision (OR 3.28, 95% CI 1.53-7.03), relative to intersections without bus stops. Both formal and informal bus stops were associated with higher odds of a collision at intersections (OR 6.23, 95% CI 1.76-22.0 and OR 2.98, 95% CI 1.37-6.49, respectively). At mid-block sites, bus stops on a bus-dedicated transit lane were also associated with collision risk (OR 2.36, 95% CI 1.02-5.42). All bus stops were located prior to the intersection, contrary to practices in most high-income countries. Conclusions In urban Lima, the presence of a bus stop was associated with a three-fold increase in the risk of a pedestrian collision. The highly competitive environment among bus companies may provide an economic incentive for risky practices such as dropping off passengers in the middle of traffic and jockeying for position with other buses. Bus stop placement should be considered to improve pedestrian safety. PMID:24357516
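
    For a 1:1 matched case-control design like this one, the conditional (McNemar-type) odds ratio depends only on the discordant pairs. A tiny sketch with invented counts, not the study's data:

```python
# Matched-pair odds ratio: pairs where only the case site has the
# exposure (a bus stop) divided by pairs where only the control site
# has it. Concordant pairs (both or neither exposed) carry no
# information about the odds ratio.

def matched_odds_ratio(case_only_exposed, control_only_exposed):
    return case_only_exposed / control_only_exposed

# invented counts for illustration
print(matched_odds_ratio(30, 10))   # 3.0
```

    A three-fold odds ratio like the study's headline result would correspond to three times as many pairs in which only the collision site had a bus stop as pairs in which only the control site had one.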

  17. Coastal protection using topological interlocking blocks

    Science.gov (United States)

    Pasternak, Elena; Dyskin, Arcady; Pattiaratchi, Charitha; Pelinovsky, Efim

    2013-04-01

    Coastal protection systems mainly rely on the self-weight of armour blocks to ensure their stability. We propose a system of interlocking armour blocks, which form plate-shaped assemblies. The shape and the position of the blocks are chosen in such a way as to impose kinematic constraints that prevent the blocks from being removed from the assembly. The topological interlocking shapes include simple convex blocks such as platonic solids, the most practical being tetrahedra, cubes and octahedra. Another class of topological interlocking blocks is the so-called osteomorphic blocks, which form plate-like assemblies tolerant to random block removal (almost 25% of blocks need to be removed for the assembly to lose integrity). Both classes require a peripheral constraint, which can be provided either by the weight of the blocks or by post-tensioned internal cables. The interlocking assemblies provide increased stability because lifting one block involves lifting (and bending) the whole assembly. We model the effect of interlocking by introducing an equivalent additional self-weight of the armour blocks. This additional self-weight is proportional to the critical pressure needed to cause bending of the interlocking assembly when it loses stability. Using a beam approximation, we find an equivalent stability coefficient for interlocking. It is found to be greater than the stability coefficient of a structure with similar blocks without interlocking. In the case when the peripheral constraint is provided by the weight of the blocks and for a slope angle of 45°, the effective stability coefficient for a structure of 100 blocks is 33% higher than that for a similar structure without interlocking. A further increase in the stability coefficient can be reached with a specially constructed peripheral constraint system, for instance by using post-tensioned cables.

  18. An Improvement on LSB Matching and LSB Matching Revisited Steganography Methods

    OpenAIRE

    Qazanfari, Kazem; Safabakhsh, Reza

    2017-01-01

    The aim of steganography methods is to communicate securely in a completely undetectable manner. LSB Matching and LSB Matching Revisited are two general and simple steganography methods for achieving this aim. Being secure against first-order steganalysis methods is the most important feature of these methods. On the other hand, these methods don't consider inter-pixel dependency. Therefore, recently, several steganalysis methods have been proposed that, by using the co-occurrence matrix, detec...
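
    The LSB Matching embedding discussed above is simple to sketch: when a cover pixel's least significant bit disagrees with the message bit, the pixel is randomly incremented or decremented rather than having its LSB flipped outright, which avoids the pairs-of-values artifact exploited by first-order steganalysis. A minimal toy illustration (our own sketch, not the paper's improved method):

```python
import random

def embed(pixels, bits, rng=random.Random(0)):
    """LSB Matching (+/-1 embedding) of one message bit per pixel."""
    stego = list(pixels)
    for i, b in enumerate(bits):
        p = stego[i]
        if p & 1 != b:
            if p == 0:                      # clamp at the range edges
                p += 1
            elif p == 255:
                p -= 1
            else:
                p += rng.choice((-1, 1))    # +1 and -1 both flip the LSB
            stego[i] = p
    return stego

def extract(pixels, n):
    """The message is simply the LSB plane of the first n pixels."""
    return [p & 1 for p in pixels[:n]]
```

    Because either +1 or -1 corrects the LSB, the embedding changes are symmetric, unlike classic LSB replacement, where even values can only grow and odd values only shrink.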

  19. Nalbuphine as an adjuvant to 0.25% levobupivacaine in ultrasound-guided supraclavicular block provided prolonged sensory block and similar motor block durations (RCT).

    Science.gov (United States)

    Abdelhamid, Bassant Mohamed; Omar, Heba

    2018-05-28

    Prolonged postoperative analgesia with early motor recovery for early rehabilitation is a challenge in regional block. The purpose of this study is to evaluate the effect of adding 20 mg nalbuphine to 25 ml of 0.25% levobupivacaine in supraclavicular brachial plexus block. One hundred thirty-five (135) patients scheduled for hand and forearm surgeries with supraclavicular block were randomly allocated into three equal groups. Group L received 25 ml of 0.5% levobupivacaine + 1 ml normal saline; group H received 25 ml of 0.25% levobupivacaine + 1 ml normal saline; and group N received 25 ml of 0.25% levobupivacaine + 1 ml (20 mg) nalbuphine. Onset time and duration of sensory and motor block, and time to first analgesic dose were recorded. Sensory block onset was comparable between the three groups. Motor block onset in group L and group N was comparable (13.16 ± 3.07 and 13.84 ± 3.05 min, respectively) and was shorter than that in group H (15.71 ± 2 0.91 min). Sensory block duration in group L and group N was comparable (522.22 ± 69.57 and 533.78 ± 66.03 min, respectively) and was longer than that in group H (342.67 ± 92.80 min). Motor block duration in group N and group H was comparable (272.00 ± 59.45 and 249.78 ± 66.01 min, respectively) and was shorter than that in group L (334.67 ± 57.90 min). Time to first analgesic dose was significantly longer in group N (649.78 ± 114.76 min) than that of group L and group H (575.56 ± 96.85 and 375.56 ± 84.49 min, respectively) and longer in group L when compared to group H. Adding 20 mg nalbuphine to 25 ml of 0.25% levobupivacaine in supraclavicular block provided prolonged duration of sensory block with similar duration of motor block.

  20. Physical model of dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Schonfeld, Jonathan F.

    2016-12-15

    We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)

  1. Information-theoretic semi-supervised metric learning via entropy regularization.

    Science.gov (United States)

    Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi

    2014-08-01

    We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.
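
    The object being learned here is a Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)) with M positive semidefinite (M = I recovers the Euclidean distance). A toy illustration with a made-up M, not a metric learned by SERAPH:

```python
import numpy as np

def mahalanobis(x, y, M):
    """Mahalanobis distance for a positive semidefinite matrix M."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ M @ d))

# invented metric for illustration: differences along axis 0 count more
M = np.array([[2.0, 0.0],
              [0.0, 0.5]])
print(mahalanobis([0, 0], [1, 0], M))   # sqrt(2): axis 0 stretched
print(mahalanobis([0, 0], [0, 1], M))   # sqrt(0.5): axis 1 shrunk
```

    Metric learning methods such as SERAPH choose M from data; the trace-norm regularization mentioned above additionally pushes M toward low rank, i.e. toward a low-dimensional projection.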

  2. A Novel Transfer Learning Method Based on Common Space Mapping and Weighted Domain Matching

    KAUST Repository

    Liang, Ru-Ze; Xie, Wei; Li, Weizhi; Wang, Hongqi; Wang, Jim Jing-Yan; Taylor, Lisa

    2017-01-01

    In this paper, we propose a novel learning framework for the problem of domain transfer learning. We map the data of two domains to one single common space, and learn a classifier in this common space. Then we adapt the common classifier to the two domains by adding two adaptive functions to it, respectively. In the common space, the source domain data points are weighted and matched to the target domain in terms of distributions. The weighting terms of source domain data points and the target domain classification responses are also regularized by the local reconstruction coefficients. The novel transfer learning framework is evaluated over some benchmark cross-domain data sets, and it outperforms the existing state-of-the-art transfer learning methods.

  4. Efficient line matching with homography

    Science.gov (United States)

    Shen, Yan; Dai, Yuxing; Zhu, Zhiliang

    2018-03-01

    In this paper, we propose a novel approach to line matching based on homography. The basic idea is to use cheaply obtainable matched points to boost the similarity between two images. Two types of homography, estimated by direct linear transformation, transform the images and extract their similar parts, laying a foundation for the use of optical flow tracking. The merit of this similarity is that rapid matching can be achieved by regionalizing line segments and searching locally. For multiple homography estimation, which can perform better than one global homography, we introduce the rank-one modification method of singular value decomposition to reduce the computation cost. The proposed approach results in point-to-point matches, which can be utilized seamlessly with state-of-the-art point-match-based structure from motion (SfM) frameworks. The outstanding performance and robustness of our approach are demonstrated in this paper.
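
    The direct linear transformation (DLT) estimator named in the abstract can be sketched as follows. This is the textbook DLT without the usual point normalization, not the authors' pipeline: each point correspondence contributes two rows of a homogeneous linear system A h = 0, and h is the right singular vector of A with the smallest singular value.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a 3x3 homography H mapping src points to dst points
    (at least 4 non-degenerate correspondences) via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # rows encode u = (h1.p)/(h3.p) and v = (h2.p)/(h3.p)
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the projective scale ambiguity

def apply_h(H, pt):
    """Apply H to a 2D point in homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

    In practice the correspondences would come from the cheaply obtainable point matches mentioned above, typically inside a RANSAC loop to reject outliers.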

  5. Fluctuations of quantum fields via zeta function regularization

    International Nuclear Information System (INIS)

    Cognola, Guido; Zerbini, Sergio; Elizalde, Emilio

    2002-01-01

    Explicit expressions for the expectation values and the variances of some observables, which are bilinear quantities in the quantum fields on a D-dimensional manifold, are derived making use of zeta function regularization. It is found that the variance, related to the second functional variation of the effective action, requires a further regularization and that the relative regularized variance turns out to be 2/N, where N is the number of the fields, thus being independent of the dimension D. Some illustrating examples are worked through. The issue of the stress tensor is also briefly addressed

  6. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
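
    The generic shape of such sparse-regularized reconstruction can be illustrated with iterative soft-thresholding (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1. This is a plain sparsity-in-the-canonical-basis sketch of our own, not the paper's curvelet-frame ADMM solver:

```python
import numpy as np

def ista(A, b, lam, steps=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)              # gradient of the smooth data term
        z = x - g / L
        # soft-thresholding: the proximal operator of the l1 penalty
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

    In a CT setting A would be the projection operator; replacing canonical-basis coefficients with curvelet coefficients and the simple gradient step with an ADMM splitting yields solvers of the kind evaluated in the paper.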

  7. Geographic Trends in the Plastic Surgery Match.

    Science.gov (United States)

    Silvestre, Jason; Lin, Ines C; Serletti, Joseph M; Chang, Benjamin

    2016-01-01

    The integrated plastic surgery match is among the most competitive residency matches in recent years. Although previous studies have correlated applicant characteristics with successful match outcomes, none have comprehensively investigated the role of geography in the match. This study elucidates regional biases in the match. Plastic surgery residents who matched during 2011-2015 were eligible for study inclusion. Names of residents were obtained from official residency program websites and cross-referenced with data obtained from the Student Doctor Network. For each resident, region of residency program and medical school were compared. From 67 programs, 622 residents were identified. Most graduated from US medical schools (97.9%). A total of 94 residents matched at a home institution (15.1%). Half of the residents matched in the same region as their medical school (48.9%). Programs in the South matched the greatest number of residents from the same region (60.8%), whereas West programs matched the least (30.8%, p < 0.001). No regional differences existed regarding residents matching at their home institution (p = 0.268). More women matched at West programs (43.1%) versus East programs (30.6%, p < 0.05). A significant number of residents matched at their home institution. Roughly, half matched at a program in the same region as their medical school. Whether this regional phenomenon stems from applicant or program factors remains unknown. Yet, given the limited number of interviews and the high costs of interviewing, applicants and programs can use these data to help optimize the match process. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  8. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    Science.gov (United States)

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  9. Regularity and chaos in cavity QED

    International Nuclear Information System (INIS)

    Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G

    2017-01-01

    The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified by calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)
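
    The participation ratio used here as a chaos indicator is straightforward to compute from the expansion coefficients c_k of a state in the eigenenergy basis; a minimal sketch:

```python
import numpy as np

def participation_ratio(c):
    """PR = (sum |c_k|^2)^2 / sum |c_k|^4: the effective number of
    eigenstates over which the state is spread."""
    p = np.abs(np.asarray(c, dtype=complex)) ** 2
    return float(p.sum() ** 2 / (p ** 2).sum())

print(participation_ratio(np.ones(8) / np.sqrt(8)))   # uniform over 8 states: PR ~ 8
print(participation_ratio([1, 0, 0, 0]))              # single eigenstate: PR = 1
```

    A coherent state spread uniformly over N eigenstates gives PR = N, while a single eigenstate gives PR = 1, which is why PR divided by the system size stays finite in chaotic regions and tends to zero in regular ones, as described above.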

  10. Approximate design theory for a simple block design with random block effects

    OpenAIRE

    Christof, Karin

    1985-01-01

    Approximate design theory for a simple block design with random block effects / K. Christof ; F. Pukelsheim. - In: Linear statistical inference / ed. by T. Calinski ... - Berlin u. a. : Springer, 1985. - S. 20-28. - (Lecture notes in statistics ; 35)

  11. Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears

    Science.gov (United States)

    Chen, Sau-Chin; Hu, Jon-Fan

    2015-01-01

    Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…

  12. Ontology Matching Across Domains

    Science.gov (United States)

    2010-05-01

    Approaches to ontology matching include GMO [1], Anchor-Prompt [2], and Similarity Flooding [3]. GMO is an iterative structural matcher, which uses RDF bipartite graphs to... This work was sponsored by AFRL under contract # FA8750-09-C-0058. References: [1] Hu, W., Jian, N., Qu, Y., Wang, Y., "GMO: a graph matching for ontologies", in: Proceedings of...

  13. Electron injection and acceleration in the plasma bubble regime driven by an ultraintense laser pulse combined with using dense-plasma wall and block

    Science.gov (United States)

    Zhao, Xue-Yan; Xie, Bai-Song; Wu, Hai-Cheng; Zhang, Shan; Hong, Xue-Ren; Aimidula, Aimierding

    2012-03-01

    An optimized, alternative scheme for electron injection and acceleration in the wake bubble driven by an ultraintense laser pulse is presented. In this scheme, a dense-plasma wall with an inner diameter matching the expected bubble size is placed along the laser propagation direction. Meanwhile, a dense-plasma block is attached inward transversely at a certain position on the wall. Particle-in-cell simulations are performed, which demonstrate that the block plays an important role in the first electron injection and acceleration. The result shows that a collimated electron bunch with a total number of about 4.04×10^8 μm^-1 can be generated and accelerated stably to 1.61 GeV peak energy with 2.6% energy spread. The block contributes about 50% of the accelerated electron injection bunch, as determined by statistically tracing and sorting the source electrons.

  14. Equation level matching: An extension of the method of matched asymptotic expansion for problems of wave propagation

    Science.gov (United States)

    Faria, Luiz; Rosales, Rodolfo

    2017-11-01

    We introduce an alternative to the method of matched asymptotic expansions. In the "traditional" implementation, approximate solutions, valid in different (but overlapping) regions, are matched by using "intermediate" variables. Here we propose to match at the level of the equations involved, via a "uniform expansion" whose equations enfold those of the approximations to be matched. This has the advantage that one does not need to explicitly solve the asymptotic equations to do the matching, which can be impossible for some problems. In addition, it allows matching to proceed in certain wave situations where the traditional approach fails because the time behaviors differ (e.g., one of the expansions does not include dissipation). On the other hand, this approach does not provide the fairly explicit approximations resulting from standard matching. In fact, this is not even its aim, which is to produce the simplest set of equations that capture the behavior. R. Rosales' work was partially supported by NSF Grants DMS-1614043 and DMS-1719637.

  15. Reducing the likelihood of long tennis matches.

    Science.gov (United States)

    Barnett, Tristan; Brown, Alan; Pollard, Graham

    2006-01-01

    Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed, causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative after-effects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key points: the cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match; a final tiebreak set reduces the length of matches, as currently used in the US Open; a new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.
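
    The game-level calculation underlying such results can be sketched directly. Here p is the probability the server wins a point; the standard game allows deuce, while the 50-40 game is assumed here to be won by the server on reaching 4 points and by the receiver on reaching 3, with no deuce (see the cited paper for the exact rules).

```python
from math import comb

def p_standard_game(p):
    """Probability the server wins a standard (deuce) tennis game."""
    q = 1 - p
    win_before_deuce = p**4 * (1 + 4*q + 10*q**2)   # to 0, 15 or 30
    reach_deuce = comb(6, 3) * p**3 * q**3
    win_from_deuce = p**2 / (p**2 + q**2)           # geometric-series sum
    return win_before_deuce + reach_deuce * win_from_deuce

def p_50_40_game(p):
    """Assumed 50-40 rule: server needs 4 points before receiver gets 3."""
    q = 1 - p
    return sum(comb(3 + k, k) * p**4 * q**k for k in range(3))

print(p_standard_game(0.5))   # 0.5 by symmetry
print(p_50_40_game(0.5))      # 0.34375
```

    Under this rule the server's winning probability drops (0.34 rather than 0.5 for evenly matched points), and because there is no deuce the game length has a hard upper bound of six points, which is how the format shortens matches.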

  16. Pattern recognition and string matching

    CERN Document Server

    Cheng, Xiuzhen

    2002-01-01

    The research and development of pattern recognition have proven to be of importance in science, technology, and human activity. Many useful concepts and tools from different disciplines have been employed in pattern recognition. Among them is string matching, which receives much theoretical and practical attention. String matching is also an important topic in combinatorial optimization. This book is devoted to recent advances in pattern recognition and string matching. It consists of twenty eight chapters written by different authors, addressing a broad range of topics such as those from classification, matching, mining, feature selection, and applications. Each chapter is self-contained, and presents either novel methodological approaches or applications of existing theories and techniques. The aim, intent, and motivation for publishing this book is to provide a reference tool for the increasing number of readers who depend upon pattern recognition or string matching in some way. This includes student...

  17. Matching faces with emotional expressions

    Directory of Open Access Journals (Sweden)

    Wenfeng Chen

    2011-08-01

    Full Text Available There is some evidence that faces with a happy expression are recognized better than faces with other expressions. However, little is known about whether this happy face advantage also applies to perceptual face matching, and whether similar differences exist among other expressions. Using a sequential matching paradigm, we systematically compared the effects of seven basic facial expressions on identity recognition. Identity matching was quickest when a pair of faces had an identical happy/sad/neutral expression, poorer when they had a fearful/surprise/angry expression, and poorest when they had a disgust expression. Faces with a happy/sad/fear/surprise expression were matched faster than those with an anger/disgust expression when the second face in a pair had a neutral expression. These results demonstrate that effects of facial expression on identity recognition are not limited to happy faces when a learned face is immediately tested. The results suggest different influences of expression in perceptual matching and long-term recognition memory.

  18. The Kent Face Matching Test.

    Science.gov (United States)

    Fysh, Matthew C; Bindemann, Markus

    2018-05-01

    This study presents the Kent Face Matching Test (KFMT), which comprises 200 same-identity and 20 different-identity pairs of unfamiliar faces. Each face pair consists of a photograph from a student ID card and a high-quality portrait that was taken at least three months later. The test is designed to complement existing resources for face-matching research, by providing a more ecologically valid stimulus set that captures the natural variability that can arise in a person's appearance over time. Two experiments are presented to demonstrate that the KFMT provides a challenging measure of face matching but correlates with established tests. Experiment 1 compares a short version of this test with the optimized Glasgow Face Matching Test (GFMT). In Experiment 2, a longer version of the KFMT, with infrequent identity mismatches, is correlated with performance on the Cambridge Face Memory Test (CFMT) and the Cambridge Face Perception Test (CFPT). The KFMT is freely available for use in face-matching research. © 2017 The British Psychological Society.

  19. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)]

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as l2 data fidelity and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used

  20. Solving block linear systems with low-rank off-diagonal blocks is easily parallelizable

    Energy Technology Data Exchange (ETDEWEB)

    Menkov, V. [Indiana Univ., Bloomington, IN (United States)

    1996-12-31

    An easily and efficiently parallelizable direct method is given for solving a block linear system Bx = y, where B = D + Q is the sum of a non-singular block diagonal matrix D and a matrix Q with low-rank blocks. This implicitly defines a new preconditioning method with an operation count close to the cost of calculating a matrix-vector product Qw for some w, plus at most twice the cost of calculating Qw for some w. When implemented on a parallel machine the processor utilization can be as good as that of those operations. Order estimates are given for the general case, and an implementation is compared to block SSOR preconditioning.
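
    The underlying mechanism is conveniently illustrated with the Woodbury identity. The NumPy sketch below collapses the paper's block structure to the simplest case, a diagonal D plus one globally low-rank term Q = U Vᵀ, so it is a simplification for illustration rather than the paper's method:

```python
import numpy as np

def solve_diag_plus_lowrank(d, U, V, y):
    """Solve (D + U @ V.T) x = y with D = diag(d) and U, V of shape (n, r), r << n.

    Woodbury identity: only an r x r system is solved directly, so the cost
    is O(n*r + r**3) instead of O(n**3) for a dense factorization.
    """
    Dinv_y = y / d                              # D^{-1} y, elementwise
    Dinv_U = U / d[:, None]                     # D^{-1} U
    S = np.eye(U.shape[1]) + V.T @ Dinv_U       # small r x r "capacitance" matrix
    return Dinv_y - Dinv_U @ np.linalg.solve(S, V.T @ Dinv_y)
```

    Every step is either a diagonal scaling or a product with the thin factors of Q, which is why the operation count stays close to the cost of a few matrix-vector products with Q, and why each step parallelizes as well as those products do.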

  1. Rolling block mazes are PSPACE-complete

    NARCIS (Netherlands)

    Buchin, K.; Buchin, M.

    2012-01-01

    In a rolling block maze, one or more blocks lie on a rectangular board with square cells. In most mazes, the blocks have size k × m × n where k, m, n are integers that determine the size of the block in terms of units of the size of the board cells. The task of a rolling block maze is to roll a

  2. Probability matching and strategy availability

    OpenAIRE

    J. Koehler, Derek; Koehler, Derek J.; James, Greta

    2010-01-01

    Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought...
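
    The payoff gap between the two strategies is simple arithmetic: matching predicts each outcome at its base rate, while maximizing always predicts the more likely outcome. A tiny numerical illustration (the probabilities are generic, not taken from the experiments):

```python
def expected_accuracy(p):
    """Expected hit rate when one outcome occurs with probability p (0 <= p <= 1)."""
    matching = p * p + (1 - p) * (1 - p)   # predict each outcome at its base rate
    maximizing = max(p, 1 - p)             # always predict the more likely outcome
    return matching, maximizing
```

    With p = 0.7, matching yields an expected accuracy of 0.58 versus 0.70 for maximizing, so matching is strictly inferior whenever p differs from 0.5.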

  3. An Incentive Theory of Matching

    OpenAIRE

    Brown, Alessio J. G.; Merkl, Christian; Snower, Dennis J.

    2010-01-01

    This paper examines the labour market matching process by distinguishing its two component stages: the contact stage, in which job searchers make contact with employers and the selection stage, in which they decide whether to match. We construct a theoretical model explaining two-sided selection through microeconomic incentives. Firms face adjustment costs in responding to heterogeneous variations in the characteristics of workers and jobs. Matches and separations are described through firms'...

  4. A PMT-Block test bench

    International Nuclear Information System (INIS)

    Adragna, P.; Antonaki, A.

    2006-01-01

    The front-end electronics of the ATLAS hadronic calorimeter (Tile Cal) is housed in a unit, called PMT-Block. The PMT-Block is a compact instrument comprising a light mixer, a PMT together with its divider and a 3-in-1 card, which provides shaping, amplification and integration for the signals. This instrument needs to be qualified before being assembled on the detector. A PMT-Block test bench has been developed for this purpose. This test bench is a system which allows fast, albeit accurate enough, measurements of the main properties of a complete PMT-Block. The system, both hardware and software, and the protocol used for the PMT-Blocks characterization are described in detail in this report. The results obtained in the test of about 10 000 PMT-Blocks needed for the instrumentation of the ATLAS (LHC-CERN) hadronic Tile Calorimeter are also reported

  5. A PMT-Block test bench

    Energy Technology Data Exchange (ETDEWEB)

    Adragna, P [Dipartimento di Fisica ' E.Fermi' , Universita di Pisa and Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, Pisa 56127 (Italy); Universita degli studi di Siena, via Roma 56, 53100 Siena (Italy); Antonaki, A [Institute of Accelerating Systems and Applications, P.O. Box 17214, Athens 10024 (Greece); National Capodistrian University of Athens, 30 Panepistimiou st., Athens 10679 (Greece)] (and others)

    2006-08-01

    The front-end electronics of the ATLAS hadronic calorimeter (Tile Cal) is housed in a unit, called PMT-Block. The PMT-Block is a compact instrument comprising a light mixer, a PMT together with its divider and a 3-in-1 card, which provides shaping, amplification and integration for the signals. This instrument needs to be qualified before being assembled on the detector. A PMT-Block test bench has been developed for this purpose. This test bench is a system which allows fast, albeit accurate enough, measurements of the main properties of a complete PMT-Block. The system, both hardware and software, and the protocol used for the PMT-Blocks characterization are described in detail in this report. The results obtained in the test of about 10 000 PMT-Blocks needed for the instrumentation of the ATLAS (LHC-CERN) hadronic Tile Calorimeter are also reported.

  6. Masquerading bundle branch block as a presenting manifestation of complete atrioventricular block that caused syncope.

    Science.gov (United States)

    Jiao, Zhenyu; Tian, Ying; Yang, Xinchun; Liu, Xingpeng

    2017-10-01

    A 59-year-old male patient was admitted with the main complaints of stuffiness and shortness of breath. An ECG from precordial leads on admission showed masquerading bundle branch block. Syncope frequently occurred after admission. During syncope episodes, ECG telemetry showed that the syncope was caused by intermittent complete atrioventricular block, with the longest RR interval lasting for 4.36 s. At the gap of syncope, ECG showed complete right bundle branch block accompanied by alternation of left anterior fascicular block and left posterior fascicular block. The patient was implanted with a dual-chamber permanent pacemaker. Follow-up of 9 months showed no reoccurrence of syncope.

  7. The Interaction Between Schema Matching and Record Matching in Data Integration

    KAUST Repository

    Gu, Binbin

    2016-09-20

    Schema Matching (SM) and Record Matching (RM) are two necessary steps in integrating multiple relational tables of different schemas, where SM unifies the schemas and RM detects records referring to the same real-world entity. The two processes have been thoroughly studied separately, but little attention has been paid to the interaction of SM and RM. In this work, we find that, even alternating them in a simple manner, SM and RM can benefit from each other to reach a better integration performance (i.e., in terms of precision and recall). Therefore, combining SM and RM is a promising solution for improving data integration. To this end, we define novel matching rules for SM and RM, respectively, that is, every SM decision is made based on intermediate RM results, and vice versa, such that SM and RM can be performed alternately. The quality of integration is guaranteed by a Matching Likelihood Estimation model and the control of semantic drift, which prevent the effect of mismatch magnification. To reduce the computational cost, we design an index structure based on q-grams and a greedy search algorithm that can reduce around 90 percent of the interaction overhead. Extensive experiments on three data collections show that the combination and interaction between SM and RM significantly outperforms previous works that conduct SM and RM separately.
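
    The q-gram filter mentioned above rests on decomposing strings into overlapping substrings of length q and comparing the resulting sets. A minimal sketch of the primitive (the padding character and Jaccard scoring are illustrative choices, not the paper's exact index):

```python
def qgrams(s, q=2):
    """Set of overlapping length-q substrings, padded so edge characters contribute."""
    s = f"#{s}#"
    return {s[i:i + q] for i in range(len(s) - q + 1)}


def qgram_sim(a, b, q=2):
    """Jaccard similarity of q-gram sets -- a cheap pre-filter for record matching."""
    A, B = qgrams(a, q), qgrams(b, q)
    return len(A & B) / len(A | B) if A | B else 1.0
```

    Because q-gram sets can be inverted into an index, candidate record pairs with low overlap can be discarded without any pairwise comparison, which is the kind of pruning that cuts most of the interaction overhead.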

  8. Optimal Packed String Matching

    DEFF Research Database (Denmark)

    Ben-Kiki, Oren; Bille, Philip; Breslauer, Dany

    2011-01-01

    In the packed string matching problem, each machine word accommodates α characters, thus an n-character text occupies n/α memory words. We extend the Crochemore-Perrin constant-space O(n)-time string matching algorithm to run in optimal O(n/α) time and even in real-time, achieving a factor α speed...

  9. Backfilling of deposition tunnels, block alternative

    International Nuclear Information System (INIS)

    Keto, P.; Roennqvist, P.-E.

    2007-03-01

    This report presents a preliminary process description of backfilling the deposition tunnels with pre-compacted blocks consisting of a mixture of bentonite and ballast (30:70). The process was modified for the Finnish KBS-3V type repository assuming that the amount of spent fuel canisters disposed of yearly is 40. Backfilling blocks (400 x 300 x 300 mm) are prepared in a block production plant with a hydraulic press with an estimated production capacity of 840 blocks per day. Some of the blocks are modified further to fit the profile of the tunnel roof. Prior to the installation of the blocks, the deposition tunnel floor is levelled with a mixture of bentonite and ballast (15:85). The blocks are placed in the tunnel with a modified reach truck. Centrifugal pellet throwing equipment is used to fill the gap between the blocks and the rock surface with bentonite pellets. Based on a preliminary assessment, the average dry density achieved with block backfill is sufficient to fulfil the criteria set for the backfill in order to ensure long-term safety and radiation protection. However, there are uncertainties concerning saturation, homogenisation, erosion, piping and self-healing of the block backfill that need to be studied further with laboratory and field tests. In addition, development efforts and testing concerning block manufacturing and installation are required to verify the technical feasibility of the concept. (orig.)

  10. Matrix regularization of embedded 4-manifolds

    International Nuclear Information System (INIS)

    Trzetrzelewski, Maciej

    2012-01-01

    We consider products of two 2-manifolds such as S²×S², embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N), i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S³ also possible).

  11. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    Science.gov (United States)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD) which also allows for the calculation of complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional tradeoff analysis using surface wave dispersion measurements from global as well as regional studies.
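
    In the SVD framework both the damped solution and the resolution matrix have closed forms, which makes the regularization tradeoff explicit. A compact dense-SVD sketch in NumPy (illustrative, not the study's implementation):

```python
import numpy as np

def tikhonov_svd(G, d, lam):
    """Minimize ||G m - d||^2 + lam^2 ||m||^2 via the SVD G = U S V^T.

    Returns the damped solution m and the resolution matrix R = V F V^T,
    where F is diagonal with filter factors s^2 / (s^2 + lam^2).
    """
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    f = s**2 / (s**2 + lam**2)              # filter factors: 1 keeps, 0 damps
    m = Vt.T @ ((f / s) * (U.T @ d))        # damped pseudoinverse solution
    R = Vt.T @ (f[:, None] * Vt)            # resolution matrix
    return m, R
```

    At lam = 0 the filter factors are all 1, R is the identity, and m is the least-squares solution; increasing lam damps the small singular values that amplify data noise, at the cost of a blurrier resolution matrix.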

  12. Fractional Regularization Term for Variational Image Registration

    Directory of Open Access Journals (Sweden)

    Rafael Verdú-Monedero

    2009-01-01

    Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, being applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration which is best suited to some applications and it is not possible in the spatial domain. Results with 3D actual images show the validity of this approach.
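
    In one dimension the frequency-domain construction is just multiplication of the spectrum by (iω)^α, which reduces to the ordinary derivative at α = 1 and interpolates between integer orders otherwise. A periodic 1-D sketch (the registration itself is multidimensional; this shows only the operator):

```python
import numpy as np

def fractional_derivative(f, alpha, dx=1.0):
    """Order-alpha derivative of a periodic 1-D signal via the symbol (1j*omega)**alpha."""
    omega = 2 * np.pi * np.fft.fftfreq(len(f), d=dx)
    return np.real(np.fft.ifft((1j * omega) ** alpha * np.fft.fft(f)))
```

    Varying alpha continuously is what allows the gradual transition the paper describes between a diffusion-like regularizer (lower order) and a curvature-like one (higher order), something that has no direct spatial-domain counterpart.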

  13. Comparative study between ultrasound guided tap block and paravertebral block in upper abdominal surgeries. Randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Ruqaya M. Elsayed

    2017-01-01

    Conclusion: We concluded that ultrasound guided transversus abdominis plane block and thoracic paravertebral block were safe and effective anesthetic techniques for upper abdominal surgery, with longer and more potent postoperative analgesia from the thoracic paravertebral block than from the transversus abdominis plane block.

  14. Reducing errors in the GRACE gravity solutions using regularization

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. 
A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions.
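
    The L-curve criterion itself is inexpensive to state: over a grid of parameters, plot log residual norm against log solution norm and take the corner of maximal curvature. The dense-SVD sketch below illustrates only the criterion; the study's point is that Lanczos bidiagonalization approximates this curve cheaply enough for a GRACE-sized problem:

```python
import numpy as np

def lcurve_corner(G, d, lams):
    """Tikhonov parameter at the L-curve corner (maximum discrete curvature).

    Dense-SVD stand-in for the Lanczos-bidiagonalization approximation
    required by large problems; lams is an increasing grid of candidates.
    """
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    beta = U.T @ d
    f = s[None, :] ** 2 / (s[None, :] ** 2 + np.asarray(lams)[:, None] ** 2)
    eta = np.linalg.norm((f / s) * beta, axis=1)    # solution norms ||m_lam||
    rho = np.linalg.norm((1 - f) * beta, axis=1)    # data-space residual norms
    x, y = np.log(rho), np.log(eta)
    dx, dy = np.gradient(x), np.gradient(y)
    curvature = (dx * np.gradient(dy) - dy * np.gradient(dx)) / (dx**2 + dy**2) ** 1.5
    return lams[int(np.argmax(curvature))]
```
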

  15. Robust point matching via vector field consensus.

    Science.gov (United States)

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that, in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
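
    The inlier/outlier estimation at the core of the algorithm can be caricatured in one dimension: model the residuals of putative matches as a Gaussian (inliers) mixed with a uniform density (outliers) and run EM. The toy sketch below keeps only that E/M loop (parameter names are ours; the full method additionally interpolates the vector field with Tikhonov-regularized RKHS terms, omitted here):

```python
import numpy as np

def em_inlier_posteriors(residuals, gamma=0.9, a=10.0, iters=50):
    """Posterior inlier probabilities for residuals under a
    Gaussian-inlier / uniform-outlier mixture, fitted by EM."""
    r2 = residuals ** 2
    sigma2 = np.mean(r2)                   # deliberately large initial variance
    for _ in range(iters):
        gauss = gamma * np.exp(-r2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
        p = gauss / (gauss + (1 - gamma) / a)                        # E-step
        sigma2 = max(np.sum(p * r2) / max(np.sum(p), 1e-12), 1e-12)  # M-step
        gamma = np.mean(p)                                           # mixing weight
    return p
```

    Starting the variance large and letting EM shrink it mirrors the initialization described in the abstract: early iterations accept almost everything, and the fit tightens onto the inlier consensus as the variance contracts.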

  16. Likelihood ratio decisions in memory: three implied regularities.

    Science.gov (United States)

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
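
    For the equal-variance Gaussian case the likelihood-ratio rule has a closed form, and the mirror effect follows immediately: at a fixed likelihood-ratio criterion, raising d' increases hits and decreases false alarms simultaneously. A minimal check (equal-variance case only; the article analyzes a wider family of distributions):

```python
from statistics import NormalDist

def rates(dprime, log_beta=0.0):
    """Hit and false-alarm rates when responding "old" iff log LR(x) >= log_beta.

    With new ~ N(0, 1) and old ~ N(dprime, 1), log LR(x) = dprime*x - dprime**2/2,
    so the likelihood-ratio rule is equivalent to the criterion x >= c below.
    """
    c = log_beta / dprime + dprime / 2
    z = NormalDist()
    hit = 1 - z.cdf(c - dprime)
    false_alarm = 1 - z.cdf(c)
    return hit, false_alarm
```

    Strengthening memory from d' = 1 to d' = 2 moves the hit rate from about 0.69 to 0.84 while the false-alarm rate falls from about 0.31 to 0.16, which is the mirror pattern described above.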

  17. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    Science.gov (United States)

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

    In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.

  18. Online Manifold Regularization by Dual Ascending Procedure

    OpenAIRE

    Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui

    2013-01-01

    We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transfer manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purpose, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approache...

  19. Degree-regular triangulations of torus and Klein bottle

    Indian Academy of Sciences (India)

    Proceedings – Mathematical Sciences, Volume 115, Issue 3. A triangulation of a connected closed surface is called degree-regular if each of its vertices has the same degree. ... In [5], Datta and Nilakantan have classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.

  20. The relationship between lifestyle regularity and subjective sleep quality

    Science.gov (United States)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

    In previous work we have developed a diary instrument, the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity, and a questionnaire instrument, the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant negative correlation (rho = -0.4): subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.