WorldWideScience

Sample records for regular step structures

  1. Recursive regularization step for high-order lattice Boltzmann methods

    Science.gov (United States)

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

    A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step makes it possible to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with Reynolds numbers ranging from 10^4 to 10^6, and where a thorough analysis of the case at Re = 3×10^4 is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase in the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.
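    As a rough illustration of the (non-recursive) regularization idea on the standard D2Q9 lattice — a sketch under our own simplifications, not the authors' code — the non-equilibrium populations are projected onto the second-order Hermite basis, which discards the higher-order non-hydrodynamic contributions the abstract refers to:

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and cs^2 = 1/3
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1/3

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium populations."""
    cu = c @ u
    return w*rho*(1 + cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2))

def regularize(f, rho, u):
    """Standard regularization step: rebuild the non-equilibrium part from its
    second-order Hermite coefficient only, filtering higher-order modes."""
    feq = equilibrium(rho, u)
    H2 = np.einsum('ia,ib->iab', c, c) - cs2*np.eye(2)  # Hermite polynomials
    a2 = np.einsum('i,iab->ab', f - feq, H2)            # second-order coefficient
    return feq + w*np.einsum('iab,ab->i', H2, a2)/(2*cs2**2)
```

    The regularized populations keep the density, momentum, and second-order moment of the originals while zeroing the modes the hydrodynamics does not need.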

  2. Near-Regular Structure Discovery Using Linear Programming

    KAUST Repository

    Huang, Qixing

    2014-06-02

    Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements and in their structured arrangement. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.

  3. Intermediate surface structure between step bunching and step flow in SrRuO3 thin film growth

    Science.gov (United States)

    Bertino, Giulia; Gura, Anna; Dawber, Matthew

    We performed a systematic study of SrRuO3 (SRO) thin films grown on TiO2-terminated SrTiO3 substrates using off-axis magnetron sputtering. We investigated step bunching formation and the evolution of the SRO film morphology by varying the step size of the substrate, the growth temperature and the film thickness. The thin films were characterized using Atomic Force Microscopy and X-Ray Diffraction. We identified single and multiple step bunching and step flow growth regimes as a function of the growth parameters. We also observe that the step size of the substrate has a clearly stronger influence on the evolution of the SRO film surface than the other growth parameters. Remarkably, we observe the formation of a smooth, regular and uniform "fish skin" structure at the transition between one regime and another. We believe that the fish skin structure results from the merging of 2D flat islands predicted by previous models. The direct observation of this transition structure allows us to better understand how and when step bunching develops in the growth of SrRuO3 thin films.

  4. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse…

  5. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
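    The graph-regularization half of the idea can be sketched in closed form, assuming the sparse combination coefficients W have already been computed (the paper learns W and the scores jointly; names here are illustrative):

```python
import numpy as np

def rank_scores(y, W, lam=1.0):
    """Minimize ||r - y||^2 + lam*sum_ij W_ij (r_i - r_j)^2, where y holds raw
    query relevances and W holds pairwise similarities (e.g. sparse-coding
    coefficients). Setting the gradient to zero gives (I + 2*lam*L) r = y."""
    W = 0.5*(W + W.T)                    # symmetrize the coefficients
    L = np.diag(W.sum(axis=1)) - W       # graph Laplacian
    return np.linalg.solve(np.eye(len(y)) + 2*lam*L, y)
```

    Objects deemed similar by the sparse representation are pulled toward a common score, which is the regularizing effect the abstract describes.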

  6. Diffusion coefficients for periodically induced multi-step persistent walks on regular lattices

    International Nuclear Information System (INIS)

    Gilbert, Thomas; Sanders, David P

    2012-01-01

    We present a generalization of our formalism for the computation of diffusion coefficients of multi-step persistent random walks on regular lattices to walks which include zero-displacement states. This situation is especially relevant to systems where tracer particles move across potential barriers as a result of the action of a periodic forcing whose period sets the timescale between transitions. (paper)

  7. Effects of the lateral amplitude and regularity of upper body fluctuation on step time variability evaluated using return map analysis.

    Science.gov (United States)

    Chidori, Kazuhiro; Yamamoto, Yuji

    2017-01-01

    The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebra (L3) and the seventh cervical vertebra (C7). The normalized average magnitude of acceleration, the coefficient of determination (R^2) of the return map, and the step time variability were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.
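    The return-map statistic itself is simple to reproduce: plot each step time against the next and measure how well a straight line explains the pairs (synthetic data below; this sketches the statistic only, not the study's protocol):

```python
import numpy as np

def return_map_r2(step_times):
    """R^2 of a linear fit to the return map (t_n, t_{n+1}): values near 1
    mean each step time is well predicted by the previous one."""
    x, y = step_times[:-1], step_times[1:]
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope*x + intercept))**2)
    ss_tot = np.sum((y - y.mean())**2)
    return 1.0 - ss_res/ss_tot
```

    A strictly alternating long-short gait lies exactly on a line (R^2 = 1), while uncorrelated step times give an R^2 near zero.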

  8. Fine structures on zero-field steps in low-loss Josephson tunnel junctions

    DEFF Research Database (Denmark)

    Monaco, Roberto; Barbara, Paola; Mygind, Jesper

    1993-01-01

    The first zero-field step in the current-voltage characteristic of intermediate-length, high-quality, low-loss Nb/Al-AlOx/Nb Josephson tunnel junctions has been carefully investigated as a function of temperature. When decreasing the temperature, a number of structures develop in the form of regular and slightly hysteretic steps whose voltage position depends on the junction temperature and length. This phenomenon is interesting for the study of nonlinear dynamics and for application of long Josephson tunnel junctions as microwave and millimeter-wavelength oscillators.

  9. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    Science.gov (United States)

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  10. Structure of period-2 step-1 accelerator island in area preserving maps

    International Nuclear Information System (INIS)

    Hirose, K.; Ichikawa, Y.H.; Saito, S.

    1996-03-01

    Since the multi-periodic accelerator modes manifest their contribution even in the region of small stochastic parameters, analysis of such regular motion appears to be critical to explore the stochastic properties of the Hamiltonian system. Here, the structure of the period-2 step-1 accelerator mode is analyzed for the systems described by the Harper map and by the standard map. The stability criteria have been analyzed in detail in comparison with numerical analyses. The period-3 squeezing around the period-2 step-1 islands is identified in the standard map. (author)

  11. Identification of moving vehicle forces on bridge structures via moving average Tikhonov regularization

    Science.gov (United States)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin

    2017-08-01

    Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Lots of regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining Tikhonov regularization with moving-average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, this signal feature of a DFS-SAV is quantified and introduced to improve the penalty function (||x||_2^2) defined in classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed for assessing the accuracy and the feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
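    The flavor of the improved penalty can be sketched as follows — an illustrative simplification, not the authors' exact formulation: instead of penalizing the force magnitude ||x||_2^2 directly, penalize its deviation from a sliding-window average, so a force history with a stable local mean is only weakly damped. The averaging operator M and window length are our own choices:

```python
import numpy as np

def ma_tikhonov(A, b, lam=1e-2, win=5):
    """Tikhonov-type solution with penalty ||(I - M)x||^2, where M averages x
    over a sliding window: deviation from the local mean is penalized instead
    of the magnitude of x itself."""
    n = A.shape[1]
    M = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - win//2), min(n, i + win//2 + 1)
        M[i, lo:hi] = 1.0/(hi - lo)          # row-stochastic moving average
    P = np.eye(n) - M
    return np.linalg.solve(A.T @ A + lam*(P.T @ P), A.T @ b)
```

    Because a locally constant signal incurs zero penalty, this avoids biasing the identified forces toward zero, which is one way to address the entry and exit zones the abstract mentions.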

  12. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a…
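    In the book's recipe spirit, a few common tasks using Python's re module (the log line is made up for illustration):

```python
import re

log = "2017-08-01 ERROR disk /dev/sda1 97% full"

# Recipe 1: extract an ISO date
date = re.search(r"\d{4}-\d{2}-\d{2}", log).group()

# Recipe 2: named groups make matches self-documenting
m = re.search(r"(?P<level>ERROR|WARN|INFO)\s+(?P<msg>.*)", log)

# Recipe 3: substitution with backreferences, swapping "key=value" pairs
swapped = re.sub(r"(\w+)=(\w+)", r"\2=\1", "mode=fast cache=on")
```

    Each recipe-style pattern above is portable, with minor syntax changes, to the other languages the book covers.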

  13. Discrete maximal regularity of time-stepping schemes for fractional evolution equations.

    Science.gov (United States)

    Jin, Bangti; Li, Buyang; Zhou, Zhi

    2018-01-01

    In this work, we establish the maximal [Formula: see text]-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order [Formula: see text], [Formula: see text], in time. These schemes include convolution quadratures generated by backward Euler method and second-order backward difference formula, the L1 scheme, explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.

  14. AFM tip characterization by using FFT filtered images of step structures

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Yongda, E-mail: yanyongda@hit.edu.cn [Key Laboratory of Micro-systems and Micro-structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, Heilongjiang 150001 (China); Center For Precision Engineering, Harbin Institute of Technology, Harbin, Heilongjiang 150001 (China); Xue, Bo [Key Laboratory of Micro-systems and Micro-structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, Heilongjiang 150001 (China); Center For Precision Engineering, Harbin Institute of Technology, Harbin, Heilongjiang 150001 (China); Hu, Zhenjiang; Zhao, Xuesen [Center For Precision Engineering, Harbin Institute of Technology, Harbin, Heilongjiang 150001 (China)

    2016-01-15

    The measurement resolution of an atomic force microscope (AFM) is largely dependent on the radius of the tip. Meanwhile, when using AFM to study nanoscale surface properties, the value of the tip radius is needed in calculations. As such, estimation of the tip radius is important for analyzing results taken using an AFM. In this study, a geometrical model created by scanning a step structure with an AFM tip was developed. The tip was assumed to have a hemispherical cone shape. Profiles simulated by tips with different scanning radii were calculated by fast Fourier transform (FFT). By analyzing the influence of tip radius variation on the spectra of simulated profiles, it was found that low-frequency harmonics were more susceptible to tip radius variation, and that the relationship between the tip radius and the low-frequency harmonic amplitude of the step structure varied monotonically. Based on this regularity, we developed a new method to characterize the radius of the hemispherical tip. The tip radii estimated with this approach were comparable to the results obtained using scanning electron microscope imaging and blind reconstruction methods. - Highlights: • AFM tips with different radii were simulated to scan a nano-step structure. • The spectra of the simulation scans under different radii were analyzed. • The functions of tip radius and harmonic amplitude were used for evaluating the tip. • The proposed method has been validated by SEM imaging and blind reconstruction.
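    The simulation geometry can be sketched directly: an AFM trace of a step is the morphological dilation of the surface with the tip shape, after which FFT harmonics of the trace can be compared across tip radii. The dimensions and step profile below are illustrative, not taken from the paper:

```python
import numpy as np

def dilate_with_tip(x, surface, radius):
    """Simulated AFM trace: the recorded height at each position is set by the
    lowest point a spherical tip apex can reach without the tip intersecting
    the surface (morphological dilation with the tip shape)."""
    out = np.empty_like(surface)
    for i, xi in enumerate(x):
        mask = np.abs(x - xi) <= radius
        # a contact at lateral offset d places the apex sqrt(R^2-d^2)-R lower
        out[i] = np.max(surface[mask] + np.sqrt(radius**2 - (x[mask] - xi)**2) - radius)
    return out

x = np.linspace(0.0, 200e-9, 400, endpoint=False)
step = np.where((x % 100e-9) < 50e-9, 0.0, 20e-9)   # periodic 20 nm steps
scan_small = dilate_with_tip(x, step, 10e-9)
scan_large = dilate_with_tip(x, step, 40e-9)
# low-frequency (fundamental) harmonic amplitudes of the two traces
amp_small = np.abs(np.fft.rfft(scan_small))[2]
amp_large = np.abs(np.fft.rfft(scan_large))[2]
```

    A larger tip always produces a trace at least as high as a smaller one, and the low-frequency harmonic amplitude depends on the radius — the sensitivity the characterization method exploits.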

  15. Structural characterization of the packings of granular regular polygons.

    Science.gov (United States)

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.

  16. Chord length distributions between hard disks and spheres in regular, semi-regular, and quasi-random structures

    International Nuclear Information System (INIS)

    Olson, Gordon L.

    2008-01-01

    In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution

  17. Chord length distributions between hard disks and spheres in regular, semi-regular, and quasi-random structures

    Energy Technology Data Exchange (ETDEWEB)

    Olson, Gordon L. [Computer and Computational Sciences Division (CCS-2), Los Alamos National Laboratory, 5 Foxglove Circle, Madison, WI 53717 (United States)], E-mail: olson99@tds.net

    2008-11-15

    In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution.

  18. Applying 4-regular grid structures in large-scale access networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas P.; Patel, Ahmed

    2006-01-01

    4-Regular grid structures have been used in multiprocessor systems for decades due to a number of nice properties with regard to routing, protection, and restoration, together with a straightforward planar layout. These qualities are to an increasing extent demanded also in large-scale access networks, but concerning protection and restoration these demands have been met only to a limited extent by the commonly used ring and tree structures. To deal with the fact that classical 4-regular grid structures are not directly applicable in such networks, this paper proposes a number of extensions concerning restoration, protection, scalability, embeddability, flexibility, and cost. The extensions are presented as a tool case, which can be used for implementing semi-automatic and in the longer term full automatic network planning tools.

  19. Sparse regularization for EIT reconstruction incorporating structural information derived from medical imaging.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-06-01

    Electrical impedance tomography (EIT) reconstructs the conductivity distribution of a domain using electrical data on its boundary. This is an ill-posed inverse problem usually solved on a finite element mesh. For this article, a special regularization method incorporating structural information of the targeted domain is proposed and evaluated. Structural information was obtained either from computed tomography images or from preliminary EIT reconstructions by a modified k-means clustering. The proposed regularization method integrates this structural information into the reconstruction as a soft constraint preferring sparsity in group level. A first evaluation with Monte Carlo simulations indicated that the proposed solver is more robust to noise and the resulting images show fewer artifacts. This finding is supported by real data analysis. The structure based regularization has the potential to balance structural a priori information with data driven reconstruction. It is robust to noise, reduces artifacts and produces images that reflect anatomy and are thus easier to interpret for physicians.
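    The group-level sparsity constraint reduces, in proximal form, to joint shrinkage of each group of conductivity coefficients — a sketch of the mechanism only, not the paper's full solver; the groups would come from the CT-derived or k-means clustering:

```python
import numpy as np

def group_soft_threshold(x, groups, tau):
    """Proximal step for a group-sparsity penalty: the coefficients in each
    group (e.g. mesh elements belonging to one anatomical structure) are
    shrunk jointly and can be zeroed out as a whole."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > tau:
            out[g] = (1 - tau/norm)*x[g]   # shrink the whole group
    return out
```

    Groups with small collective energy vanish entirely, which is how the structural prior suppresses scattered artifacts while preserving anatomically coherent regions.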

  20. SQoS based Planning using 4-regular Grid for Optical Fiber Networks

    DEFF Research Database (Denmark)

    Riaz, Muhammad Tahir; Pedersen, Jens Myrup; Madsen, Ole Brun

    …optical fiber based network infrastructures. In the first step of SQoS based planning, this paper describes how 4-regular Grid structures can be implemented in the physical level of optical fiber network infrastructures. A systematic approach for implementing the Grid structure is presented. We used...

  1. SQoS based Planning using 4-regular Grid for Optical Fiber Networks

    DEFF Research Database (Denmark)

    Riaz, Muhammad Tahir; Pedersen, Jens Myrup; Madsen, Ole Brun

    2005-01-01

    …optical fiber based network infrastructures. In the first step of SQoS based planning, this paper describes how 4-regular Grid structures can be implemented in the physical level of optical fiber network infrastructures. A systematic approach for implementing the Grid structure is presented. We used...

  2. Driver training in steps (DTS).

    NARCIS (Netherlands)

    2010-01-01

    For some years now, it has been possible in the Netherlands to follow a Driver Training in Steps (DTS) as well as the regular driver training. The DTS is a structured training method with clear training objectives which are categorized in four modules. Although the DTS is considerably better than…

  3. Followee recommendation in microblog using matrix factorization model with structural regularization.

    Science.gov (United States)

    Yu, Yan; Qiu, Robin G

    2014-01-01

    Microblog that provides us a new communication and information sharing platform has been growing exponentially since it emerged just a few years ago. To microblog users, recommending followees who can serve as high quality information sources is a competitive service. To address this problem, in this paper we propose a matrix factorization model with structural regularization to improve the accuracy of followee recommendation in microblog. More specifically, we adapt the matrix factorization model in traditional item recommender systems to followee recommendation in microblog and use structural regularization to exploit structure information of social network to constrain matrix factorization model. The experimental analysis on a real-world dataset shows that our proposed model is promising.
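    A toy version of the idea — hypothetical data and hyperparameters, and a simplified objective rather than the paper's exact one — factorizes the user-item adoption matrix while pulling each user's latent factors toward the mean factors of the users they follow:

```python
import numpy as np

def mf_structural(R, S, k=4, lam=0.1, beta=0.5, lr=0.05, iters=500, seed=0):
    """Matrix factorization R ~ U V^T with a structural regularizer: each
    user's factors are pulled toward the mean factors of their followees in
    the social graph S. NaN entries in R are unobserved."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1*rng.standard_normal((n, k))
    V = 0.1*rng.standard_normal((m, k))
    mask = ~np.isnan(R)
    Rz = np.where(mask, R, 0.0)
    deg = S.sum(axis=1, keepdims=True) + 1e-12
    for _ in range(iters):
        E = mask*(U @ V.T) - Rz              # error on observed entries only
        # treat the followees' mean as constant within one update
        # (a common simplification of the structural-regularization gradient)
        U_pull = U - (S @ U)/deg
        U -= lr*(E @ V + lam*U + beta*U_pull)
        V -= lr*(E.T @ U + lam*V)
    return U, V
```

    The structural term encodes the assumption that users behave similarly to the information sources they choose to follow.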

  4. Regularities of structure formation on different stages of WC-Co hard alloys fabrication

    Energy Technology Data Exchange (ETDEWEB)

    Chernyavskij, K S

    1987-03-01

    Some regularities of structural transformations in powder products of hard alloy fabrication have been formulated on the basis of the author's works and those of other domestic and foreign researchers. New data are given confirming the influence of the technological prehistory of a carbide powder on the mechanism of its particle grinding, as well as the influence of the structural-energy state of the WC powder on the course of the WC-Co alloy structure formation processes. Some possibilities for the practical application of the regularities studied are considered.

  5. On Hierarchical Extensions of Large-Scale 4-regular Grid Network Structures

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Patel, A.; Knudsen, Thomas Phillip

    It is studied how the introduction of ordered hierarchies in 4-regular grid network structures decreases distances remarkably, while at the same time allowing for simple topological routing schemes. Both meshes and tori are considered; in both cases non-hierarchical structures have power law depen...

  6. Regularities development of entrepreneurial structures in regions

    Directory of Open Access Journals (Sweden)

    Julia Semenovna Pinkovetskaya

    2012-12-01

    Full Text Available We consider regularities and tendencies for three types of entrepreneurial structures: small enterprises, medium enterprises and individual entrepreneurs. The aim of the research was to confirm that the indicators of aggregates of entrepreneurial structures can be described using normal distribution functions. The author's methodological approach is presented, together with the density distribution functions constructed for the main indicators of various objects: the Russian Federation, its regions, and aggregates of entrepreneurial structures specialized in certain forms of economic activity. All the developed functions, as shown by logical and statistical analysis, are of high quality and approximate the original data well. In general, the proposed methodological approach is versatile and can be used in further studies of aggregates of entrepreneurial structures. The results can be applied to a wide range of problems, such as justifying the personnel and financial resources needed at the federal, regional and municipal levels, as well as forming plans and forecasts for the development of entrepreneurship and the improvement of this sector of the economy.

  7. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    Science.gov (United States)

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown their great performance in application to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves a better performance than several state-of-the-art algorithms.
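    The algorithmic core named in the abstract, weighted singular-value thresholding, is compact: shrink each singular value by its own weight, the proximal step for a weighted nuclear norm (a sketch; for exactness the weights are assumed compatible with the descending order of the singular values):

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular-value thresholding: soft-shrink each singular value
    of Y by its corresponding weight and reconstruct."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)   # per-value soft threshold
    return U @ np.diag(s_shrunk) @ Vt
```

    Small singular values, which mostly carry noise within a group of similar patches, are zeroed out, while the dominant structure survives with reduced magnitude.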

  8. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
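    The discrepancy-principle strategy can be sketched with a basic iterative soft-thresholding (ISTA) solver for the l1 problem: scan a grid of regularization parameters and keep the one whose residual variance best matches the known noise variance. The solver, names, and grid are illustrative, not the paper's implementation:

```python
import numpy as np

def ista(A, b, lam, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2)**2              # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - b)/L          # gradient step on the data term
        x = np.sign(g)*np.maximum(np.abs(g) - lam/L, 0.0)   # soft threshold
    return x

def pick_lambda(A, b, noise_var, lams):
    """Discrepancy principle: choose the lam whose residual variance is
    closest to the measurement-noise variance."""
    return min(lams, key=lambda l: abs(np.var(b - A @ ista(A, b, l)) - noise_var))
```

    Scanning a range rather than a single value mirrors the paper's observation that a whole interval of parameters can yield accurate identification.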

  9. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  10. Mixture models with entropy regularization for community detection in networks

    Science.gov (United States)

    Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang

    2018-04-01

    Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we have proposed an entropy regularized mixture model (called EMM), which is capable of inferring the number of communities and identifying network structure contained in a network simultaneously. In the model, by minimizing the entropy of the mixing coefficients of NMM using an EM (expectation-maximization) solution, small clusters containing little information can be discarded step by step. The empirical study on both synthetic networks and real networks has shown that the proposed model EMM is superior to the state-of-the-art methods.

  11. Influence of the volume ratio of solid phase on carrying capacity of regular porous structure

    Directory of Open Access Journals (Sweden)

    Monkova Katarina

    2017-01-01

    Full Text Available Direct metal laser sintering is a widespread technology today. Its main advantage is the ability to produce parts with very complex geometry that could be produced only with great difficulty by conventional methods. A special category of such components are parts with a porous structure, which can give the product an extraordinary combination of properties. The article deals with some aspects that influence the manufacturing of regular porous structures despite identical input technological parameters across samples. The main goal of the presented research has been to investigate the influence of the volume ratio of the solid phase on the carrying capacity of a regular porous structure. The tests performed indicate that a unit of regular porous structure with a lower volume ratio is able to carry a greater load to failure than a unit with a higher volume ratio.

  12. The significance of the structural regularity for the seismic response of buildings

    International Nuclear Information System (INIS)

    Hampe, E.; Goldbach, R.; Schwarz, J.

    1991-01-01

    The paper gives a state-of-the-art report on international design practice and provides fundamentals for a systematic approach to the problem. Different criteria of regularity are presented and discussed with respect to EUROCODE Nr. 8. Remaining open questions and the main topics of future research activities are outlined. Frame structures with and without additional stiffening wall elements are investigated to illustrate the qualitative differences in the vibrational properties and earthquake response of regular and irregular systems. (orig./HP) [de

  13. Strictly-regular number system and data structures

    DEFF Research Database (Denmark)

    Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki

    2010-01-01

    We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinctive properties. It is superior to the re...

  14. Design and fabrication of a chitosan hydrogel with gradient structures via a step-by-step cross-linking process.

    Science.gov (United States)

    Xu, Yongxiang; Yuan, Shenpo; Han, Jianmin; Lin, Hong; Zhang, Xuehui

    2017-11-15

    The development of scaffolds that mimic the gradient structure of natural tissue is an important consideration for effective tissue engineering. In the present study, a physically cross-linked chitosan hydrogel with gradient structures was fabricated via a step-by-step cross-linking process using sodium tripolyphosphate and sodium hydroxide as sequential cross-linkers. Chitosan hydrogels with different structures (single, double, and triple layers) were prepared by modifying the gelling process. The properties of the hydrogels were further adjusted by varying the gelling conditions, such as gelling time, pH, and composition of the cross-linking solution. In the MTT assay, slight cytotoxicity was observed for hydrogels containing un-cross-linked chitosan solution, and no cytotoxicity was observed for the other hydrogels. The results suggest that step-by-step cross-linking represents a practicable method to fabricate scaffolds with gradient structures. Copyright © 2017. Published by Elsevier Ltd.

  15. Optimal analysis of structures by concepts of symmetry and regularity

    CERN Document Server

    Kaveh, Ali

    2013-01-01

    Optimal analysis is defined as an analysis that creates and uses sparse, well-structured and well-conditioned matrices. The focus is on efficient methods for eigensolution of matrices involved in static, dynamic and stability analyses of symmetric and regular structures, or those general structures containing such components. Powerful tools are also developed for configuration processing, which is an important issue in the analysis and design of space structures and finite element models. Different mathematical concepts are combined to make the optimal analysis of structures feasible. Canonical forms from matrix algebra, product graphs from graph theory and symmetry groups from group theory are some of the concepts involved in the variety of efficient methods and algorithms presented. The algorithms elucidated in this book enable analysts to handle large-scale structural systems by lowering their computational cost, thus fulfilling the requirement for faster analysis and design of future complex systems. The ...

  16. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...

  17. Subharmonic structure of Shapiro steps in frustrated superconducting arrays

    International Nuclear Information System (INIS)

    Kim, S.; Kim, B.J.; Choi, M.Y.

    1995-01-01

    Two-dimensional superconducting arrays with combined direct and alternating applied currents are studied both analytically and numerically. In particular, we investigate in detail the current-voltage characteristics of a square array with 1/2 flux quantum per plaquette and of triangular arrays with 1/2 and 1/4 flux quantum per plaquette. At zero temperature, reduced equations of motion are obtained through the use of the translational symmetry present in the systems. The reduced equations lead to a series of subharmonic steps in addition to the standard integer and fractional giant Shapiro steps, producing a devil's staircase structure. This devil's staircase structure reflects the existence of dynamically generated states in addition to the states originating from degenerate ground states in equilibrium. The widths of the subharmonic steps as functions of the amplitudes of the alternating currents display Bessel-function-type behavior. We also present results of extensive numerical simulations, which indeed reveal the subharmonic steps together with their stability against small thermal fluctuations. Implications for topological invariance are also discussed

  18. A function space framework for structural total variation regularization with applications in inverse problems

    Science.gov (United States)

    Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas

    2018-06-01

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV-type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is not always available, we show that, for a rather general linear inverse problem setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR-guided PET image reconstruction.
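
    A minimal, smoothed version of weighted TV denoising in one dimension can be sketched as follows. This is plain gradient descent on a differentiable surrogate, not the saddle-point formulation or the relaxed functional studied in the paper; the weight array w is where structural information would enter:

```python
import numpy as np

def weighted_tv_denoise(f, w, lam=0.5, eps=0.05, step=0.02, iters=300):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum_i w_i * sqrt((u_{i+1}-u_i)^2 + eps^2),
    a smoothed 1D weighted-TV model."""
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)
        phi = w * du / np.sqrt(du**2 + eps**2)  # derivative of the smoothed |.|
        # Negative divergence of the edge terms, with free boundary conditions.
        div = np.concatenate(([phi[0]], np.diff(phi), [-phi[-1]]))
        u -= step * ((u - f) - lam * div)
    return u

# Noisy step signal; lowering w near a known edge would preserve that edge.
rng = np.random.default_rng(1)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = weighted_tv_denoise(f, np.ones(99))
```

    After denoising, the total variation of u is much smaller than that of the noisy input, while the underlying step is preserved.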

  19. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn; Huang, Jing; Zhang, Hua; Lu, Lijun; Lyu, Wenbing; Feng, Qianjin; Chen, Wufan; Ma, Jianhua, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515 (China); Zhang, Jing [Department of Radiology, Tianjin Medical University General Hospital, Tianjin 300052 (China)

    2016-05-15

    Purpose: Cerebral perfusion computed tomography (PCT) imaging, an accurate and fast examination for acute ischemic stroke, has been widely used in the clinic. However, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the PD-STV approach utilizes the higher-order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on a patient with an old infarction were conducted to validate and evaluate the performance of the PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) demonstrated that the PD-STV approach outperformed existing approaches in reducing noise-induced artifacts and in accurately estimating perfusion hemodynamic maps (PHM). In the patient data study, the PD-STV approach yielded accurate PHM estimation with several noticeable gains over existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation in cerebral PCT imaging in the low-mAs case.

  20. On Line Segment Length and Mapping 4-regular Grid Structures in Network Infrastructures

    DEFF Research Database (Denmark)

    Riaz, Muhammad Tahir; Nielsen, Rasmus Hjorth; Pedersen, Jens Myrup

    2006-01-01

    The paper focuses on mapping the road network into 4-regular grid structures, and a mapping algorithm is proposed. Geographic Information System (GIS) data have been used to model the road network. The GIS data for the road network are composed of line segments of different lengths...

  1. Equilibrium structure of monatomic steps on vicinal Si(001)

    NARCIS (Netherlands)

    Zandvliet, Henricus J.W.; Elswijk, H.B.; van Loenen, E.J.; Dijkkamp, D.

    1992-01-01

    The equilibrium structure of monatomic steps on vicinal Si(001) is described in terms of anisotropic nearest-neighbor and isotropic second-nearest-neighbor interactions between dimers. By comparing scanning-tunneling-microscopy data and this equilibrium structure, we obtained interaction energies of

  2. Process of motion by unit steps over a surface provided with elements regularly arranged

    International Nuclear Information System (INIS)

    Cooper, D.E.; Hendee, L.C. III; Hill, W.G. Jr.; Leshem, Adam; Marugg, M.L.

    1977-01-01

    This invention concerns a process for moving, by unit steps, an apparatus travelling over a surface provided with an array of orifices aligned and evenly spaced in several lines and several parallel rows regularly spaced, the lines and rows being parallel to the x and y axes of Cartesian co-ordinates, each orifice having a distinct address in the Cartesian co-ordinate system. The surface-travelling apparatus has two connected arms arranged in directions transversal to each other, thus forming an angle corresponding to the intersection of the x and y axes. In the inspection and/or repair of nuclear or similar steam generator tubes, it is desirable that such an apparatus be able to move in front of a surface comprising an array of orifices by the selective alternate insertion and retraction of two sets of anchoring claws, on the two respective arms, into and out of the orifices of the array, it being possible to shift the arms in translation, transversally to each other, while one set of claws is withdrawn from the orifices. The invention concerns a process and apparatus, as indicated above, that minimizes the path length of the apparatus between the orifice it currently faces and a given target orifice [fr
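
    Under the simple (hypothetical) assumption that each unit step moves the apparatus by one grid pitch along x or along y, the minimal number of steps between two orifice addresses is the Manhattan distance; the patent's alternating-arm mechanics are not modeled here:

```python
def min_unit_steps(start, target):
    """Minimal number of unit steps between two orifice addresses (x, y),
    assuming one step moves one grid pitch along x or along y."""
    (x0, y0), (x1, y1) = start, target
    return abs(x1 - x0) + abs(y1 - y0)  # Manhattan distance on the grid

print(min_unit_steps((2, 3), (5, 1)))  # → 5
```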

  3. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed
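
    For context, the isolation of divergences in dimensional regularization rests on the standard master integral, quoted here from standard quantum field theory references rather than from this paper's specific scheme:

```latex
\int \frac{d^d \ell}{(2\pi)^d} \, \frac{1}{(\ell^2 - \Delta)^n}
  = \frac{(-1)^n \, i}{(4\pi)^{d/2}} \,
    \frac{\Gamma\!\left(n - \tfrac{d}{2}\right)}{\Gamma(n)} \,
    \Delta^{d/2 - n}
```

    For n ≤ 2 the ultraviolet divergence as d → 4 appears as a pole of Γ(n − d/2), so divergences are isolated without breaking the symmetries of the Lagrangian.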

  4. Effective field theory dimensional regularization

    Science.gov (United States)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.

  5. STRUCTURE OPTIMIZATION OF RESERVATION BY PRECISE QUADRATIC REGULARIZATION

    Directory of Open Access Journals (Sweden)

    KOSOLAP A. I.

    2015-11-01

    Full Text Available The paper addresses the problem of optimizing the structure of redundancy elements in systems. Such problems arise in the design of complex systems. To improve the reliability of such systems, their elements are duplicated; this increases system cost and improves reliability. When optimizing these systems, the probability of failure-free operation of the entire system is maximized subject to a cost constraint, or the cost is minimized for a given probability of failure-free operation. The mathematical model of the redundancy problem is discrete and multiextremal. The methods currently used to search for the global extremum include Lagrange multipliers, coordinate descent, dynamic programming, and random search. These methods guarantee only local solutions and are used for redundancy tasks of small dimension. In this work, a new method of precise quadratic regularization is used to solve the redundancy problem. This method converts the original discrete problem into the maximization of a vector norm on a convex set; that is, the variety of redundancy tasks is reduced to the problem of maximizing a vector norm on a convex set. To solve the transformed problem, primal-dual interior point methods are applied, currently among the best methods for local optimization of nonlinear problems. The transformed task includes a new auxiliary variable, which is determined by dichotomy. Numerous comparative numerical experiments on problems with up to one hundred redundant subsystems confirm the effectiveness of the method of precise quadratic regularization for solving redundancy problems.

  6. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
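
    A toy version of ranking with multiple graph regularizers can be sketched as follows. It alternates a closed-form score update with a softmax reweighting of the graphs; it illustrates the idea of combining graphs but is not the authors' exact MultiG-Rank objective or learning rule:

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian of a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def multi_graph_rank(Ls, y, alpha=1.0, beta=1.0, iters=10):
    """Alternate between solving f = argmin ||f - y||^2 + alpha * f^T (sum_m mu_m L_m) f
    and reweighting the graphs by how smooth f is on each of them."""
    n = len(y)
    mu = np.full(len(Ls), 1.0 / len(Ls))
    for _ in range(iters):
        L = sum(m * Lm for m, Lm in zip(mu, Ls))
        f = np.linalg.solve(np.eye(n) + alpha * L, y)   # ranking-score update
        cost = np.array([f @ Lm @ f for Lm in Ls])      # smoothness on each graph
        mu = np.exp(-beta * cost)
        mu /= mu.sum()                                  # positive weights summing to 1
    return f, mu

# Two toy similarity graphs on four nodes and a query indicator vector.
W1 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
W2 = np.array([[0, 0, 1, 1], [0, 0, 0, 1], [1, 0, 0, 0], [1, 1, 0, 0]], float)
y = np.array([1.0, 0.0, 0.0, 0.0])
f, mu = multi_graph_rank([laplacian(W1), laplacian(W2)], y)
```

    Nodes are then ranked by f; the learned mu indicates which graph the scores are smoothest on.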

  7. Quantum transport with long-range steps on Watts-Strogatz networks

    Science.gov (United States)

    Wang, Yan; Xu, Xin-Jian

    2016-07-01

    We study the transport dynamics of quantum systems with long-range steps on the Watts-Strogatz network (WSN), which is generated by rewiring links of the regular ring. First, we probe physical systems modeled by the discrete nonlinear Schrödinger (DNLS) equation. Using a localized initial condition, we compute the time-averaged occupation probability of the initial site, which depends on the nonlinearity, the long-range steps, and the rewiring of links. Self-trapping transitions occur at large (small) nonlinear parameters for coupling ɛ=-1 (1) as long-range interactions are intensified. The structural disorder induced by random rewiring, however, has dual effects for ɛ=-1 and inhibits the self-trapping behavior for ɛ=1. Second, we investigate continuous-time quantum walks (CTQW) on the regular ring ruled by the discrete linear Schrödinger (DLS) equation. It is found that the presence of long-range steps alone does not affect the efficiency of coherent exciton transport, while random rewiring alone enhances partial localization. If both factors are present simultaneously, localization is greatly strengthened and transport becomes worse.
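
    The CTQW part of such a study reduces to evolving an initial site under a graph Hamiltonian. A minimal sketch on the regular ring (before any rewiring), using the graph Laplacian as the Hamiltonian, is:

```python
import numpy as np

def ctqw_prob(H, j, t):
    """Occupation probabilities |<k| exp(-i H t) |j>|^2 of a continuous-time
    quantum walk with Hamiltonian H, started at node j."""
    vals, vecs = np.linalg.eigh(H)
    U = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T
    return np.abs(U[:, j]) ** 2

# Regular ring of N nodes; WSN rewiring would modify the adjacency matrix A.
N = 8
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
H = np.diag(A.sum(axis=1)) - A   # graph Laplacian
p = ctqw_prob(H, 0, t=2.0)       # probabilities sum to one (unitarity)
```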

  8. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...

  9. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  10. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-01-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  11. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  12. Mechanical properties of regular hexahedral lattice structure formed by selective laser melting

    International Nuclear Information System (INIS)

    Sun, Jianfeng; Yang, Yongqiang; Wang, Di

    2013-01-01

    The Ti–6Al–4V lattice structure is widely used in the aerospace field. This research first designs a regular hexahedral unit and processes the lattice structure composed of the Ti–6Al–4V units by selective laser melting technology. The experimental fracture load and compression deformation are obtained through compression tests, and a simulation of the unit and the lattice structure is conducted in ANSYS to analyze the failure point. Then, according to the force condition at that point, a model of the maximum load is built, from which analytical formulas for the fracture load of the unit and the lattice structure are obtained. Groups of experiments demonstrate an exponential relationship between the practical fracture load and the porosity of the lattice structure, and a trigonometric-function relationship between the compression deformation and the porosity. The fracture analysis indicates that fracture of the units and the lattice structure is brittle, due to cleavage fracture. (paper)

  13. [Clustered regularly interspaced short palindromic repeats: structure, function and application--a review].

    Science.gov (United States)

    Cui, Yujun; Li, Yanjun; Yan, Yanfeng; Yang, Ruifu

    2008-11-01

    CRISPRs (Clustered Regularly Interspaced Short Palindromic Repeats), the basis of spoligotyping technology, can provide prokaryotes with heritable adaptive immunity against phage invasion. Studies on CRISPR loci and their associated elements, including various CAS (CRISPR-associated) proteins and leader sequences, are still in their infancy. We introduce the brief history, structure, function, bioinformatics research, and applications of this remarkable immune system of prokaryotic organisms, to inspire more scientists to take an interest in this developing topic.

  14. Some regularity of the grain size distribution in nuclear fuel with controllable structure

    International Nuclear Information System (INIS)

    Loktev, Igor

    2008-01-01

    It is known that fission gas release from ceramic nuclear fuel depends on the average grain size. To increase the grain size, additives that activate the sintering of pellets are used. However, the grain size distribution also influences fission gas release: fuels with different structures but the same average grain size show different fission gas release. Other structural elements that influence the operational behavior of fuel are pores and inclusions. Earlier, in Kyoto, questions of the grain size distribution for fuel with a 'natural' structure were discussed. Some regularities of the grain size distribution of fuel with a controllable structure and a high average grain size are considered in this report. The influence of inclusions and pores on the error of automated determination of structure parameters is shown. A criterion describing the behavior of fuel with a specific grain size distribution is offered

  15. Traveling in the dark: the legibility of a regular and predictable structure of the environment extends beyond its borders.

    Science.gov (United States)

    Yaski, Osnat; Portugali, Juval; Eilam, David

    2012-04-01

    The physical structure of the surrounding environment shapes the paths of progression, which in turn reflect the structure of the environment and the way that it shapes behavior. A regular and coherent physical structure results in paths that extend over the entire environment. In contrast, irregular structure results in traveling over a confined sector of the area. In this study, rats were tested in a dark arena in which half the area contained eight objects in a regular grid layout, and the other half contained eight objects in an irregular layout. In subsequent trials, a salient landmark was placed first within the irregular half, and then within the grid. We hypothesized that rats would favor travel in the area with regular order, but found that activity in the area with irregular object layout did not differ from activity in the area with grid layout, even when the irregular half included a salient landmark. Thus, the grid impact in one arena half extended to the other half and overshadowed the presumed impact of the salient landmark. This could be explained by mechanisms that control spatial behavior, such as grid cells and odometry. However, when objects were spaced irregularly over the entire arena, the salient landmark became dominant and the paths converged upon it, especially from objects with direct access to the salient landmark. Altogether, three environmental properties: (i) regular and predictable structure; (ii) salience of landmarks; and (iii) accessibility, hierarchically shape the paths of progression in a dark environment. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

    Full Text Available This paper proposes a presentation of unfolding regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The modeling and unfolding of Platonic and Archimedean polyhedra is done using the 3dsMAX program. This paper is intended as an example of descriptive geometry applications.

  17. The structure of stepped surfaces

    International Nuclear Information System (INIS)

    Algra, A.J.

    1981-01-01

    The state-of-the-art of Low Energy Ion Scattering (LEIS) as far as multiple scattering effects are concerned, is discussed. The ion fractions of lithium, sodium and potassium scattered from a copper (100) surface have been measured as a function of several experimental parameters. The ratio of the intensities of the single and double scattering peaks observed in ion scattering spectroscopy has been determined and ion scattering spectroscopy applied in the multiple scattering mode is used to determine the structure of a stepped Cu(410) surface. The average relaxation of the (100) terraces of this surface appears to be very small. The adsorption of oxygen on this surface has been studied with LEIS and it is indicated that oxygen absorbs dissociatively. (C.F.)

  18. Modelling the harmonized tertiary Institutions Salary Structure ...

    African Journals Online (AJOL)

    This paper analyses the Harmonized Tertiary Institution Salary Structure (HATISS IV) used in Nigeria. The irregularities in the structure are highlighted. A model that assumes a polynomial trend for the zero step salary, and exponential trend for the incremental rates, is suggested for the regularization of the structure.
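    The suggested regularization can be sketched numerically. The figures and functional forms below are entirely hypothetical, invented only to illustrate fitting a polynomial trend to zero-step salaries and an exponential trend to incremental rates; they are not the HATISS IV data.

```python
import numpy as np

# Hypothetical illustration of the suggested regularization: fit a quadratic
# polynomial to zero-step salaries across grade levels, and an exponential
# (log-linear) trend to per-step incremental rates. All figures are synthetic.
grades = np.arange(1, 8)
zero_step = 50 + 12 * grades + 1.5 * grades**2   # synthetic zero-step salaries
incr_rate = 0.02 * np.exp(0.15 * grades)         # synthetic incremental rates

poly_coeffs = np.polyfit(grades, zero_step, deg=2)       # quadratic trend
b, log_a = np.polyfit(grades, np.log(incr_rate), deg=1)  # exponential trend via log-linear fit
a = np.exp(log_a)

def salary(grade, step):
    """Regularized salary: polynomial zero-step base grown per step at an
    exponential incremental rate (illustrative model, not HATISS IV)."""
    base = np.polyval(poly_coeffs, grade)
    return base * (1 + a * np.exp(b * grade)) ** step
```

Because the synthetic data follow the assumed trends exactly, the fits recover the generating coefficients, which is the sense in which such a model "regularizes" an irregular salary table.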

  19. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.

  20. Shapes of isolated domains and field induced evolution of regular and random 2D domain structures in LiNbO3 and LiTaO3

    International Nuclear Information System (INIS)

    Chernykh, A.; Shur, V.; Nikolaeva, E.; Shishkin, E.; Shur, A.; Terabe, K.; Kurimura, S.; Kitamura, K.; Gallo, K.

    2005-01-01

    The variety of the shapes of isolated domains, revealed in congruent and stoichiometric LiTaO3 and LiNbO3 by chemical etching and visualized by optical and scanning probe microscopy, was reproduced by computer simulation. The kinetic nature of the domain shape was clearly demonstrated. The kinetics of a domain structure dominated by the growth of steps formed at the domain walls as a result of domain merging was investigated experimentally in a slightly distorted artificial regular two-dimensional (2D) hexagonal domain structure and in a natural random one. The artificial structure has been realized in congruent LiNbO3 by a 2D electrode pattern produced by photolithography. The polarization reversal in congruent LiTaO3 was investigated as an example of natural domain growth limited by merging. The switching process defined by domain merging was studied by computer simulation. The crucial dependence of the switching kinetics on the nuclei concentration has been revealed

  1. Predicting algal growth inhibition toxicity: three-step strategy using structural and physicochemical properties.

    Science.gov (United States)

    Furuhama, A; Hasunuma, K; Hayashi, T I; Tatarazako, N

    2016-05-01

    We propose a three-step strategy that uses structural and physicochemical properties of chemicals to predict their 72 h algal growth inhibition toxicities against Pseudokirchneriella subcapitata. In Step 1, using a log D-based criterion and structural alerts, we produced an interspecies QSAR between algal and acute daphnid toxicities for initial screening of chemicals. In Step 2, we categorized chemicals according to the Verhaar scheme for aquatic toxicity, and we developed QSARs for toxicities of Class 1 (non-polar narcotic) and Class 2 (polar narcotic) chemicals by means of simple regression with a hydrophobicity descriptor and multiple regression with a hydrophobicity descriptor and a quantum chemical descriptor. Using the algal toxicities of the Class 1 chemicals, we proposed a baseline QSAR for calculating their excess toxicities. In Step 3, we used structural profiles to predict toxicity either quantitatively or qualitatively and to assign chemicals to the following categories: Pesticide, Reactive, Toxic, Toxic low and Uncategorized. Although this three-step strategy cannot be used to estimate the algal toxicities of all chemicals, it is useful for chemicals within its domain. The strategy is also applicable as a component of Integrated Approaches to Testing and Assessment.
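    The baseline QSAR of Step 2 is, in spirit, a simple regression of toxicity on a hydrophobicity descriptor. The sketch below fits such a baseline on synthetic data and derives an excess-toxicity measure; the coefficients and data are invented for illustration, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Class 1 (non-polar narcotic) chemicals: baseline narcosis toxicity
# is assumed to follow log(1/EC50) = a*logKow + b (coefficients invented here).
log_kow = rng.uniform(0.0, 6.0, size=40)
log_toxicity = 0.9 * log_kow - 1.2 + rng.normal(0.0, 0.05, size=40)

# Simple regression with a hydrophobicity descriptor, as in Step 2.
a, b = np.polyfit(log_kow, log_toxicity, deg=1)

def excess_toxicity(log_kow_chem, observed_log_tox):
    """Observed minus baseline-predicted toxicity; large positive values flag
    chemicals acting by a more specific mode than narcosis."""
    return observed_log_tox - (a * log_kow_chem + b)
```

A chemical lying on the baseline has zero excess toxicity by construction; reactive or specifically acting chemicals would sit well above it.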

  2. Comment on "Step dynamics and equilibrium structure of monoatomic steps on Si(001)-2x1" by J.R. Sanchez and C.M. Aldao

    NARCIS (Netherlands)

    Zandvliet, Henricus J.W.; Wulfhekel, W.C.U.; Hendriksen, B.; Poelsema, Bene

    1997-01-01

    In contrast to a recent claim by Sánchez and Aldao [Phys. Rev. B 54, R11 058 (1996)] that the relaxation dynamics of attachment processes influences the equilibrium step structure we argue that the step structure in thermodynamic equilibrium is only governed by the configurational free energy

  3. Information-theoretic semi-supervised metric learning via entropy regularization.

    Science.gov (United States)

    Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi

    2014-08-01

    We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.

  4. Regular website transformation to mobile friendly methodology development

    OpenAIRE

    Miščenkov, Ilja

    2017-01-01

    Nowadays, the rate of technological improvement grows faster than ever, which results in increased mobile device usage. Internet users often choose to browse their favorite websites via computers as well as mobile devices; however, not every website is suited to be displayed on both types of technology. One example is the website of Vilnius University's Faculty of Mathematics and Informatics. Therefore the objective of this work is to develop a step-by-step procedure which is used to turn a regular websi...

  5. Formation Mechanism and Binding Energy for Body-Centred Regular Icosahedral Structure of Li13 Cluster

    International Nuclear Information System (INIS)

    Liu Weina; Li Ping; Gou Qingquan; Zhao Yanping

    2008-01-01

    The formation mechanism for the body-centred regular icosahedral structure of the Li13 cluster is proposed. The curve of the total energy versus the separation R between the nucleus at the centre and the nuclei at the apexes for this structure of Li13 has been calculated by using the method of Gou's modified arrangement channel quantum mechanics (MACQM). The result shows that the curve has a minimal energy of -96.95139 a.u. at R = 5.46a0. When R approaches infinity, the total energy of thirteen lithium atoms has the value of -96.56438 a.u. So the binding energy of Li13 with respect to thirteen lithium atoms is 0.38701 a.u. Therefore the binding energy per atom for Li13 is 0.02977 a.u. or 0.810 eV, which is greater than the binding energy per atom of 0.453 eV for Li2, 0.494 eV for Li3, 0.7878 eV for Li4, 0.632 eV for Li5, and 0.674 eV for Li7 calculated by us previously. This means that the Li13 cluster may be formed stably in a body-centred regular icosahedral structure with a greater binding energy
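    The binding-energy arithmetic quoted above can be checked directly, taking 1 a.u. of energy as 1 hartree = 27.2114 eV (the standard conversion, assumed here):

```python
# Check the binding-energy arithmetic quoted for the Li13 cluster.
HARTREE_EV = 27.2114                    # 1 a.u. of energy in eV (assumed conversion)

e_cluster = -96.95139                   # total energy at R = 5.46 a0 (a.u.)
e_separated = -96.56438                 # thirteen isolated Li atoms (a.u.)

binding = e_separated - e_cluster       # binding energy of the cluster (a.u.)
per_atom_au = binding / 13              # binding energy per atom (a.u.)
per_atom_ev = per_atom_au * HARTREE_EV  # binding energy per atom (eV)
```

The computed values reproduce the abstract's figures: 0.38701 a.u. total, 0.02977 a.u. (0.810 eV) per atom.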

  6. Iterative regularization in intensity-modulated radiation therapy optimization

    International Nuclear Information System (INIS)

    Carlsson, Fredrik; Forsgren, Anders

    2006-01-01

    A common way to solve intensity-modulated radiation therapy (IMRT) optimization problems is to use a beamlet-based approach. The approach is usually employed in a three-step manner: first a beamlet-weight optimization problem is solved, then the fluence profiles are converted into step-and-shoot segments, and finally postoptimization of the segment weights is performed. A drawback of beamlet-based approaches is that beamlet-weight optimization problems are ill-conditioned and have to be regularized in order to produce smooth fluence profiles that are suitable for conversion. The purpose of this paper is twofold: first, to explain the suitability of solving beamlet-based IMRT problems by a BFGS quasi-Newton sequential quadratic programming method with diagonal initial Hessian estimate, and second, to empirically show that beamlet-weight optimization problems should be solved in relatively few iterations when using this optimization method. The explanation of the suitability is based on viewing the optimization method as an iterative regularization method. In iterative regularization, the optimization problem is solved approximately by iterating long enough to obtain a solution close to the optimal one, but terminating before too much noise occurs. Iterative regularization requires an optimization method that initially proceeds in smooth directions and makes rapid initial progress. Solving ten beamlet-based IMRT problems with dose-volume objectives and bounds on the beamlet-weights, we find that the considered optimization method fulfills the requirements for performing iterative regularization. After segment-weight optimization, the treatments obtained using 35 beamlet-weight iterations outperform the treatments obtained using 100 beamlet-weight iterations, both in terms of objective value and of target uniformity. We conclude that iterating too long may in fact deteriorate the quality of the deliverable plan
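    The iterative-regularization effect described above, error to the true solution first falling and then rising as noise is fitted, can be reproduced on a toy ill-conditioned least-squares problem. The sketch below uses plain gradient descent (Landweber iteration) on synthetic data; it is not the paper's IMRT setup or its quasi-Newton method.

```python
import numpy as np

# Semi-convergence of plain gradient descent (Landweber iteration) on an
# ill-conditioned least-squares problem: the error to the true solution
# first drops, then grows as noise is fitted -- so stopping early regularizes.
n = 20
sigma = 10.0 ** -np.linspace(0, 3, n)       # rapidly decaying singular values
A = np.diag(sigma)
x_true = np.ones(n)
rng = np.random.default_rng(1)
b = A @ x_true + 0.1 * rng.normal(size=n)   # noisy data

tau = 1.0 / sigma[0] ** 2                   # step size < 2/||A||^2
x = np.zeros(n)
errors = []
for _ in range(30000):
    x = x + tau * A.T @ (b - A @ x)         # gradient step on ||Ax - b||^2 / 2
    errors.append(np.linalg.norm(x - x_true))

best = int(np.argmin(errors))               # the "early stopping" iteration
```

The minimum of `errors` occurs well before the last iteration: iterating too long degrades the solution, exactly the behavior the paper exploits by stopping the beamlet-weight optimization early.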

  7. On the MSE Performance and Optimization of Regularized Problems

    KAUST Repository

    Alrashdi, Ayed

    2016-11-01

    The amount of data that has been measured, transmitted/received, and stored in the recent years has dramatically increased. So, today, we are in the world of big data. Fortunately, in many applications, we can take advantage of possible structures and patterns in the data to overcome the curse of dimensionality. The best-known structures include sparsity, low-rankness, and block sparsity. This includes a wide range of applications such as machine learning, medical imaging, signal processing, social networks and computer vision. This also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function. This gives rise to a potential interest in regularized inverse problems, where the process of reconstructing the structured signal can be modeled as a regularized problem. This thesis particularly focuses on finding the optimal regularization parameter for such problems, such as ridge regression, LASSO, square-root LASSO and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT) that has been used recently to precisely predict performance errors.
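    The core idea of tuning a regularizer to minimize the MSE of the solution can be illustrated with ridge regression on a synthetic near-collinear problem. This is a brute-force parameter sweep, a toy stand-in for the precise CGMT-based analysis of the thesis.

```python
import numpy as np

# Tuning the ridge regularizer to minimize the mean-squared error of the
# estimated coefficients, on a synthetic ill-conditioned problem.
rng = np.random.default_rng(0)
n_samples, n_features = 50, 10
X = rng.normal(size=(n_samples, n_features))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n_samples)   # near-collinear columns
beta_true = rng.normal(size=n_features)
y = X @ beta_true + rng.normal(scale=1.0, size=n_samples)

def ridge(lmbda):
    """Ridge estimate: solve (X^T X + lambda I) beta = X^T y."""
    return np.linalg.solve(X.T @ X + lmbda * np.eye(n_features), X.T @ y)

lambdas = np.logspace(-6, 3, 50)
mses = [np.mean((ridge(l) - beta_true) ** 2) for l in lambdas]
best_lambda = lambdas[int(np.argmin(mses))]
```

With near-collinear columns, the nearly unregularized estimate (smallest lambda) has large variance, so a strictly positive regularization parameter achieves a lower MSE.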

  8. Structural analysis and biological activity of a highly regular glycosaminoglycan from Achatina fulica.

    Science.gov (United States)

    Liu, Jie; Zhou, Lutan; He, Zhicheng; Gao, Na; Shang, Feineng; Xu, Jianping; Li, Zi; Yang, Zengming; Wu, Mingyi; Zhao, Jinhua

    2018-02-01

    Edible snails have been widely used as a health food and medicine in many countries. A unique glycosaminoglycan (AF-GAG) was purified from Achatina fulica. Its structure was analyzed and characterized by chemical and instrumental methods, such as Fourier transform infrared spectroscopy, analysis of monosaccharide composition, and 1D/2D nuclear magnetic resonance spectroscopy. Chemical composition analysis indicated that AF-GAG is composed of iduronic acid (IdoA) and N-acetyl-glucosamine (GlcNAc) and its average molecular weight is 118 kDa. Structural analysis clarified that the uronic acid unit in this glycosaminoglycan (GAG) is fully epimerized and the sequence of AF-GAG is →4)-α-GlcNAc (1→4)-α-IdoA2S (1→. Although its structure with a uniform repeating disaccharide is similar to those of heparin and heparan sulfate, this GAG is structurally highly regular and homogeneous. Anticoagulant activity assays indicated that AF-GAG exhibits no anticoagulant activities, but considering its structural characteristics, other bioactivities such as heparanase inhibition may be worthy of further study. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Analysis of regularized Navier-Stokes equations, 2

    Science.gov (United States)

    Ou, Yuh-Roung; Sritharan, S. S.

    1989-01-01

    A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found, and regularity properties of these manifolds were analyzed.

  10. Simulations of fine structures on the zero field steps of Josephson tunnel junctions

    DEFF Research Database (Denmark)

    Scheuermann, M.; Chi, C. C.; Pedersen, Niels Falsig

    1986-01-01

    Fine structures on the zero field steps of long Josephson tunnel junctions are simulated for junctions with the bias current injected into the junction at the edges. These structures are due to the coupling between self-generated plasma oscillations and the traveling fluxon. The plasma oscillations are generated by the interaction of the bias current with the fluxon at the junction edges. On the first zero field step, the voltages of successive fine structures are given by V_n = (ħ/2e)(2ω_p/n), where n is an even integer. Applied Physics Letters is copyrighted by The American Institute of Physics.

  11. Regularities of radiation heredity

    International Nuclear Information System (INIS)

    Skakov, M.K.; Melikhov, V.D.

    2001-01-01

    Regularities of radiation heredity in metals and alloys are analyzed. It is concluded that irradiation causes thermodynamically irreversible changes in the structure of materials. Possible ways by which radiation effects are inherited through high-temperature transformations in the materials are proposed. The phenomenon of radiation heredity may be put to practical use to control the structure of liquid metal and, correspondingly, the structure of the ingot via preliminary radiation treatment of the charge. Concentration microheterogeneities in the material defect structure induced by preliminary irradiation represent the genetic factor of radiation heredity [ru]

  12. Contribution to regularizing iterative method development for attenuation correction in gamma emission tomography

    International Nuclear Information System (INIS)

    Cao, A.

    1981-07-01

    This study is concerned with transverse axial gamma emission tomography. The problem of self-attenuation of radiation in biological tissues is raised. The regularizing iterative method is developed as a method for reconstructing three-dimensional images. The different steps from acquisition to results that are necessary for its application are described, and organigrams relative to each step are explained. The notion of comparing two reconstruction methods is introduced, and criteria used for the comparison, or to bring out the characteristics of a reconstruction technique, are defined. The studies carried out to test the regularizing iterative method are presented and the results are analyzed [fr]

  13. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  14. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  15. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  16. On the regularized fermionic projector of the vacuum

    Science.gov (United States)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  17. On the regularized fermionic projector of the vacuum

    International Nuclear Information System (INIS)

    Finster, Felix

    2008-01-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed

  18. Neuregulin: First Steps Towards a Structure

    Science.gov (United States)

    Ferree, D. S.; Malone, C. C.; Karr, L. J.

    2003-01-01

    Neuregulins are growth factor domain proteins with diverse bioactivities, such as cell proliferation, receptor binding, and differentiation. Neuregulin-1 binds to two members of the ErbB class I tyrosine kinase receptors, ErbB3 and ErbB4. A number of human cancers overexpress the ErbB receptors, and neuregulin can modulate the growth of certain cancer types. Neuregulin-1 has been shown to promote the migration of invasive gliomas of the central nervous system. Neuregulin has also been implicated in schizophrenia, multiple sclerosis and abortive cardiac abnormalities. The full function of neuregulin-1 is not known. In this study we are inserting a cDNA clone obtained from the American Type Culture Collection into E. coli expression vectors to express neuregulin-1 protein. Metal chelate affinity chromatography is used for recombinant protein purification. Crystallization screening will proceed for X-ray diffraction studies following expression, optimization, and protein purification. In spite of medical and scholarly interest in the neuregulins, there are currently no high-resolution structures available for these proteins. Here we present the first steps toward attaining a high-resolution structure of neuregulin-1, which will help enable us to better understand its function

  19. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    Science.gov (United States)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional tradeoff analysis using surface wave dispersion measurements from global as well as regional studies.
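    The SVD framework mentioned above makes Tikhonov regularization explicit: each singular component is damped by a filter factor, and the same decomposition yields the resolution matrix. A minimal sketch on a toy matrix (not the surface-wave sensitivity matrix):

```python
import numpy as np

# Tikhonov regularization expressed through the SVD: the solution damps each
# singular component by the filter factor f_i = s_i^2 / (s_i^2 + lam^2).
rng = np.random.default_rng(0)
G = rng.normal(size=(30, 15))          # toy sensitivity matrix
d = rng.normal(size=30)                # toy data vector
lam = 0.5                              # Tikhonov regularization parameter

U, s, Vt = np.linalg.svd(G, full_matrices=False)
f = s**2 / (s**2 + lam**2)             # Tikhonov filter factors
m_svd = Vt.T @ (f / s * (U.T @ d))     # regularized model estimate

# Same solution from the normal equations (G^T G + lam^2 I) m = G^T d:
m_ne = np.linalg.solve(G.T @ G + lam**2 * np.eye(15), G.T @ d)

# Resolution matrix R = V diag(f) V^T: how the estimate blurs the true model.
R = Vt.T @ np.diag(f) @ Vt
```

The SVD route and the normal-equations route give the same model, and the trace of R (the sum of the filter factors) measures the effective number of resolved model parameters.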

  20. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  1. Implementation of a variable-step integration technique for nonlinear structural dynamic analysis

    International Nuclear Information System (INIS)

    Underwood, P.; Park, K.C.

    1977-01-01

    The paper presents the implementation of a recently developed unconditionally stable implicit time integration method into a production computer code for the transient response analysis of nonlinear structural dynamic systems. The time integrator has two significant features: the step size is variable and determined automatically, and this is accomplished without additional matrix refactorizations. The equations of motion solved by the time integrator must be cast in the pseudo-force form, and this provides the mechanism for controlling the step size. Step size control is accomplished by extrapolating the pseudo-force to the next time (the predicted pseudo-force), performing the integration step, and then recomputing the pseudo-force based on the current solution (the corrected pseudo-force); from these data an error norm is constructed, the value of which determines the step size for the next step. To avoid refactoring the required matrix with each step size change, a matrix scaling technique is employed, which allows step sizes to change by a factor of 100 without refactoring. If during a computer run the integrator determines it can run with a step size larger than 100 times the original minimum step size, the matrix is refactored to take advantage of the larger step size. The strategies for effecting these features are discussed in detail. (Auth.)
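    The step-size control described above, comparing predicted and corrected quantities to form an error norm, can be sketched for a scalar ODE. The predictor/corrector pair, safety factor, and tolerance below are illustrative choices, not the integrator of the paper:

```python
import math

# A minimal sketch of predictor/corrector step-size control: an explicit Euler
# predictor and a Heun (trapezoidal) corrector give two estimates whose
# difference serves as the error norm that drives the step size.
def integrate(f, x0, t_end, tol=1e-4):
    t, x, dt = 0.0, x0, 1e-3
    while t < t_end:
        dt = min(dt, t_end - t)                 # do not overshoot the end time
        pred = x + dt * f(x)                    # predictor (Euler)
        corr = x + 0.5 * dt * (f(x) + f(pred))  # corrector (Heun)
        err = abs(corr - pred)                  # local error estimate
        if err <= tol:                          # accept the step
            t, x = t + dt, corr
        # grow or shrink the step from the error norm (with a safety factor)
        dt *= 0.9 * math.sqrt(tol / max(err, 1e-15))
    return x

x_final = integrate(lambda x: -x, 1.0, 1.0)     # solves x' = -x, x(0) = 1
```

Rejected steps are simply retried with the reduced dt, and accepted steps let dt grow, so the integrator takes steps as large as the tolerance allows.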

  2. Front propagation in a regular vortex lattice: Dependence on the vortex structure.

    Science.gov (United States)

    Beauvier, E; Bodea, S; Pocheau, A

    2017-11-01

    We investigate the dependence on the vortex structure of the propagation of fronts in stirred flows. For this, we consider a regular set of vortices whose structure is changed by varying both their boundary conditions and their aspect ratios. These configurations are investigated experimentally in autocatalytic solutions stirred by electroconvective flows and numerically from kinematic simulations based on the determination of the dominant Fourier mode of the vortex stream function in each of them. For free lateral boundary conditions, i.e., in an extended vortex lattice, it is found that both the flow structure and the front propagation negligibly depend on vortex aspect ratios. For rigid lateral boundary conditions, i.e., in a vortex chain, vortices involve a slight dependence on their aspect ratios which surprisingly yields a noticeable decrease of the enhancement of front velocity by flow advection. These different behaviors reveal a sensitivity of the mean front velocity on the flow subscales. It emphasizes the intrinsic multiscale nature of front propagation in stirred flows and the need to take into account not only the intensity of vortex flows but also their inner structure to determine front propagation at a large scale. Differences between experiments and simulations suggest the occurrence of secondary flows in vortex chains at large velocity and large aspect ratios.

  3. The Impact of Computerization on Regular Employment (Japanese)

    OpenAIRE

    SUNADA Mitsuru; HIGUCHI Yoshio; ABE Masahiro

    2004-01-01

    This paper uses micro data from the Basic Survey of Japanese Business Structure and Activity to analyze the effects of companies' introduction of information and telecommunications technology on employment structures, especially regular versus non-regular employment. Firstly, examination of trends in the ratio of part-time workers recorded in the Basic Survey shows that part-time worker ratios in manufacturing firms are rising slightly, but that companies with a high proportion of part-timers...

  4. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

    Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparse nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performs better than a comparative ℓ1-minimization algorithm in both spatial aggregation and location accuracy.
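    The ℓ1 part of such a model is commonly solved by proximal-gradient (ISTA-type) iterations, i.e. a gradient step followed by soft thresholding. A minimal sketch on a synthetic sparse-recovery problem (not the FMT system or the authors' gradient-projection algorithm):

```python
import numpy as np

# A minimal ISTA (proximal-gradient) sketch of l1-regularized reconstruction:
# minimize 0.5*||Ax - b||^2 + lam*||x||_1 for a sparse target distribution.
rng = np.random.default_rng(0)
m, n = 60, 30
A = rng.normal(size=(m, n)) / np.sqrt(m)        # toy forward model
x_true = np.zeros(n)
x_true[[2, 11, 17]] = [1.5, -2.0, 1.0]          # sparse "fluorophore" pattern
b = A @ x_true + 0.01 * rng.normal(size=m)      # noisy measurements

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - b)                       # gradient of the data term
    x = x - step * g                            # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
```

The soft-thresholding step is the proximal operator of the ℓ1 penalty; it zeroes out small coefficients, which is what promotes a sparse reconstruction.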

  5. The persistence of the attentional bias to regularities in a changing environment.

    Science.gov (United States)

    Yu, Ru Qi; Zhao, Jiaying

    2015-10-01

    The environment often is stable, but some aspects may change over time. The challenge for the visual system is to discover and flexibly adapt to the changes. We examined how attention is shifted in the presence of changes in the underlying structure of the environment. In six experiments, observers viewed four simultaneous streams of objects while performing a visual search task. In the first half of each experiment, the stream in the structured location contained regularities, the shapes in the random location were randomized, and gray squares appeared in two neutral locations. In the second half, the stream in the structured or the random location may change. In the first half of all experiments, visual search was facilitated in the structured location, suggesting that attention was consistently biased toward regularities. In the second half, this bias persisted in the structured location when no change occurred (Experiment 1), when the regularities were removed (Experiment 2), or when new regularities embedded in the original or novel stimuli emerged in the previously random location (Experiments 3 and 6). However, visual search was numerically but no longer reliably faster in the structured location when the initial regularities were removed and new regularities were introduced in the previously random location (Experiment 4), or when novel random stimuli appeared in the random location (Experiment 5). This suggests that the attentional bias was weakened. Overall, the results demonstrate that the attentional bias to regularities was persistent but also sensitive to changes in the environment.

  6. Rapid decay of vacancy islands at step edges on Ag(111): step orientation dependence

    International Nuclear Information System (INIS)

    Shen, Mingmin; Thiel, P A; Jenks, Cynthia J; Evans, J W

    2010-01-01

    Previous work has established that vacancy islands or pits fill much more quickly when they are in contact with a step edge, such that the common boundary is a double step. The present work focuses on the effect of the orientation of that step, with two possibilities existing for a face-centered cubic (111) surface: A- and B-type steps. We find that the following features can depend on the orientation: (1) the shapes of islands while they shrink; (2) whether the island remains attached to the step edge; and (3) the rate of filling. The first two effects can be explained by the different rates of adatom diffusion along the A- and B-steps that define the pit, enhanced by the different filling rates. The third observation, the difference in the filling rate itself, is explained within the context of the concerted exchange mechanism at the double step. This process is facile at all regular sites along B-steps, but only at kink sites along A-steps, which explains the different rates. We also observe that oxygen can greatly accelerate the decay process, although it has no apparent effect on an isolated vacancy island (i.e. an island that is not in contact with a step).

  7. REGULARITIES AND MECHANISM OF FORMATION OF STRUCTURE OF THE MECHANICALLY ALLOYED COMPOSITIONS GROUND ON THE BASIS OF METAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    F. G. Lovshenko

    2014-01-01

    Full Text Available Experimentally determined regularities and mechanisms of structure formation in mechanically alloyed compositions based on the metals widely applied in mechanical engineering (iron, nickel, aluminum, copper) are given.

  8. Determination of the structures of small gold clusters on stepped magnesia by density functional calculations.

    Science.gov (United States)

    Damianos, Konstantina; Ferrando, Riccardo

    2012-02-21

    The structural modifications of small supported gold clusters caused by realistic surface defects (steps) in the MgO(001) support are investigated by computational methods. The most stable gold cluster structures on a stepped MgO(001) surface are searched for in the size range up to 24 Au atoms, and locally optimized by density-functional calculations. Several structural motifs are found within energy differences of 1 eV: inclined leaflets, arched leaflets, pyramidal hollow cages and compact structures. We show that the interaction with the step clearly modifies the structures with respect to adsorption on the flat defect-free surface. We find that leaflet structures clearly dominate at smaller sizes. These leaflets are either inclined and quasi-horizontal, or arched, at variance with the case of the flat surface, on which vertical leaflets prevail. With increasing cluster size, pyramidal hollow cages begin to compete against leaflet structures. Cage structures become more and more favourable as size increases. The only exception is size 20, at which the tetrahedron is found as the most stable isomer; this tetrahedron is, however, quite distorted. The comparison of two different exchange-correlation functionals (Perdew-Burke-Ernzerhof and local density approximation) shows the same qualitative trends.

  9. Enhancing Low-Rank Subspace Clustering by Manifold Regularization.

    Science.gov (United States)

    Liu, Junmin; Chen, Yijun; Zhang, JiangShe; Xu, Zongben

    2014-07-25

    Recently, the low-rank representation (LRR) method has achieved great success in subspace clustering (SC), which aims to cluster data points that lie in a union of low-dimensional subspaces. Given a set of data points, LRR seeks the lowest-rank representation among the many possible linear combinations of the bases in a given dictionary or in terms of the data itself. However, LRR considers only the global Euclidean structure, while the local manifold structure, which is often important for many real applications, is ignored. In this paper, to exploit the local manifold structure of the data, a manifold regularization characterized by a Laplacian graph has been incorporated into LRR, leading to our proposed Laplacian regularized LRR (LapLRR). An efficient optimization procedure, based on the alternating direction method of multipliers (ADMM), is developed for LapLRR. Experimental results on synthetic and real data sets are presented to demonstrate that the performance of LRR has been enhanced by using the manifold regularization.

  10. Neighborhood Regularized Logistic Matrix Factorization for Drug-Target Interaction Prediction.

    Science.gov (United States)

    Liu, Yong; Wu, Min; Miao, Chunyan; Zhao, Peilin; Li, Xiao-Li

    2016-02-01

    In pharmaceutical sciences, a crucial step of the drug discovery process is the identification of drug-target interactions. However, only a small portion of the drug-target interactions have been experimentally validated, as the experimental validation is laborious and costly. To improve the drug discovery efficiency, there is a great need for the development of accurate computational approaches that can predict potential drug-target interactions to direct the experimental verification. In this paper, we propose a novel drug-target interaction prediction algorithm, namely neighborhood regularized logistic matrix factorization (NRLMF). Specifically, the proposed NRLMF method focuses on modeling the probability that a drug would interact with a target by logistic matrix factorization, where the properties of drugs and targets are represented by drug-specific and target-specific latent vectors, respectively. Moreover, NRLMF assigns higher importance levels to positive observations (i.e., the observed interacting drug-target pairs) than negative observations (i.e., the unknown pairs). Because the positive observations are already experimentally verified, they are usually more trustworthy. Furthermore, the local structure of the drug-target interaction data has also been exploited via neighborhood regularization to achieve better prediction accuracy. We conducted extensive experiments over four benchmark datasets, and NRLMF demonstrated its effectiveness compared with five state-of-the-art approaches.
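
    A minimal sketch of the NRLMF idea follows: logistic matrix factorization with higher weight on observed positives and Laplacian (neighborhood) regularization on the latent vectors. The toy interaction matrix, the `chain_laplacian` helper and all hyperparameter values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def chain_laplacian(pairs, n):
    W = np.zeros((n, n))
    for i, j in pairs:
        W[i, j] = W[j, i] = 1.0
    return np.diag(W.sum(axis=1)) - W

def nrlmf_sketch(Y, Ld, Lt, k=5, c=5.0, lam=0.1, beta=0.1, lr=0.05, iters=300, seed=0):
    """Gradient ascent on a weighted logistic likelihood with Laplacian
    neighborhood regularization on the drug/target latent vectors."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    W = 1.0 + (c - 1.0) * Y              # observed interactions weighted c, unknowns 1
    for _ in range(iters):
        P = sigmoid(U @ V.T)
        E = W * (Y - P)                  # weighted likelihood residual
        U += lr * (E @ V - lam * U - beta * (Ld @ U))
        V += lr * (E.T @ U - lam * V - beta * (Lt @ V))
    return sigmoid(U @ V.T)

# toy data: two blocks of drugs/targets; one true pair (0, 0) held out as unknown
Y = np.zeros((6, 6))
Y[:3, :3] = 1.0
Y[0, 0] = 0.0
Ld = chain_laplacian([(0, 1), (1, 2), (3, 4), (4, 5)], 6)
Lt = chain_laplacian([(0, 1), (1, 2), (3, 4), (4, 5)], 6)
P = nrlmf_sketch(Y, Ld, Lt)
```

The neighborhood terms pull similar drugs (and targets) toward nearby latent vectors, so the held-out pair inherits a high score from its neighbors.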

  11. Mathematic Model of Digital Control System with PID Regulator and Regular Step of Quantization with Information Transfer via the Channel of Plural Access

    Science.gov (United States)

    Abramov, G. V.; Emeljanov, A. E.; Ivashin, A. L.

    Theoretical bases for modeling a digital control system with information transfer via a multiple-access channel and a regular quantization cycle are presented. The theory of dynamic systems with random changes of structure, including elements of the theory of Markov random processes, is used for the mathematical description of a networked control system. The characteristics of such control systems are obtained. Experimental research on these control systems has been carried out.

  12. Lifshitz anomalies, Ward identities and split dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Arav, Igal; Oz, Yaron; Raviv-Moshe, Avia [Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University,55 Haim Levanon street, Tel-Aviv, 69978 (Israel)

    2017-03-16

    We analyze the structure of the stress-energy tensor correlation functions in Lifshitz field theories and construct the corresponding anomalous Ward identities. We develop a framework for calculating the anomaly coefficients that employs a split dimensional regularization and the pole residues. We demonstrate the procedure by calculating the free scalar Lifshitz scale anomalies in 2+1 spacetime dimensions. We find that the analysis of the regularization dependent trivial terms requires a curved spacetime description without a foliation structure. We discuss potential ambiguities in Lifshitz scale anomaly definitions.

  13. Lifshitz anomalies, Ward identities and split dimensional regularization

    International Nuclear Information System (INIS)

    Arav, Igal; Oz, Yaron; Raviv-Moshe, Avia

    2017-01-01

    We analyze the structure of the stress-energy tensor correlation functions in Lifshitz field theories and construct the corresponding anomalous Ward identities. We develop a framework for calculating the anomaly coefficients that employs a split dimensional regularization and the pole residues. We demonstrate the procedure by calculating the free scalar Lifshitz scale anomalies in 2+1 spacetime dimensions. We find that the analysis of the regularization dependent trivial terms requires a curved spacetime description without a foliation structure. We discuss potential ambiguities in Lifshitz scale anomaly definitions.

  14. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed.

  15. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by earlier work, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
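
    The increasingly refined grid search described above can be sketched generically; `snr_fn` is a stand-in for evaluating the model SNR at a given kernel parameter and regularization parameter, and the refinement scheme (shrink the grid around the best point each round) is an assumption for illustration:

```python
import numpy as np

def refined_grid_search(snr_fn, s_range, l_range, n=5, rounds=3):
    """Maximize snr_fn(s, l) by repeated grid refinement around the best point."""
    (s_lo, s_hi), (l_lo, l_hi) = s_range, l_range
    best = None
    for _ in range(rounds):
        S = np.linspace(s_lo, s_hi, n)
        L = np.linspace(l_lo, l_hi, n)
        best = max((snr_fn(s, l), s, l) for s in S for l in L)
        _, s0, l0 = best
        ds, dl = (s_hi - s_lo) / n, (l_hi - l_lo) / n
        s_lo, s_hi = s0 - ds, s0 + ds        # shrink the search box around the best point
        l_lo, l_hi = l0 - dl, l0 + dl
    return best[1], best[2]

# demo with a synthetic unimodal "SNR" peaking at (2.0, 0.5)
s_opt, l_opt = refined_grid_search(lambda s, l: -(s - 2.0) ** 2 - (l - 0.5) ** 2,
                                   (0.0, 4.0), (0.0, 1.0))
```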

  16. Tailoring structures through two-step annealing process in nanostructured aluminum produced by accumulative roll-bonding

    DEFF Research Database (Denmark)

    Kamikawa, Naoya; Huang, Xiaoxu; Hansen, Niels

    2008-01-01

    temperature before annealing at high temperature. By this two-step process, the structure is homogenized and the stored energy is reduced significantly during the first annealing step. As an example, high-purity aluminum has been deformed to a total reduction of 98.4% (equivalent strain of 4.8) by accumulative roll-bonding at room temperature. Isochronal annealing for 0.5 h of the deformed samples shows the occurrence of recrystallization at 200 °C and above. However, when introducing an annealing step for 6 h at 175 °C, no significant recrystallization is observed and relatively homogeneous structures are obtained when the samples afterwards are annealed at higher temperatures up to 300 °C. To underpin these observations, the structural evolution has been characterized by transmission electron microscopy, showing that significant annihilation of high-angle boundaries, low-angle dislocation boundaries...

  17. Manifold regularization for sparse unmixing of hyperspectral images.

    Science.gov (United States)

    Liu, Junmin; Zhang, Chunxia; Zhang, Jiangshe; Li, Huirong; Gao, Yuelin

    2016-01-01

    Recently, sparse unmixing has been successfully applied to spectral mixture analysis of remotely sensed hyperspectral images. Based on the assumption that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance, unmixing of each mixed pixel in the scene is to find an optimal subset of signatures in a very large spectral library, which is cast into the framework of sparse regression. However, traditional sparse regression models, such as collaborative sparse regression, ignore the intrinsic geometric structure in the hyperspectral data. In this paper, we propose a novel model, called manifold regularized collaborative sparse regression, by introducing a manifold regularization to the collaborative sparse regression model. The manifold regularization utilizes a graph Laplacian to incorporate the locally geometrical structure of the hyperspectral data. An algorithm based on the alternating direction method of multipliers has been developed for the manifold regularized collaborative sparse regression model. Experimental results on both the simulated and real hyperspectral data sets have demonstrated the effectiveness of our proposed model.

  18. An experimentalists view on the analogy between step edges and quantum mechanical particles

    NARCIS (Netherlands)

    Zandvliet, Henricus J.W.

    1995-01-01

    Guided by scanning tunnelling microscopy images of regularly stepped surfaces it will be illustrated that there is a striking similarity between the behaviour of monoatomic step edges and quantum mechanical particles (spinless fermions). The direction along the step edge is equivalent to the time,

  19. A lattice Boltzmann model for substrates with regularly structured surface roughness

    Science.gov (United States)

    Yagub, A.; Farhat, H.; Kondaraju, S.; Singh, T.

    2015-11-01

    Superhydrophobic surface characteristics are important in many industrial applications, ranging from the textile industry to the military. It has been observed that surfaces fabricated with nano/micro roughness can manipulate the droplet contact angle, thus providing an opportunity to control the droplet wetting characteristics. The Shan and Chen (SC) lattice Boltzmann model (LBM) is a good numerical tool with strong potential for simulating droplet wettability, owing to its realistic prediction of the droplet contact angle (CA) on flat, smooth surfaces. However, the SC-LBM has not been able to replicate the CA on rough surfaces because it lacks a true representation of the physics at work under these conditions. By using a correction factor to influence the interfacial tension within the asperities, the physical forces acting on the droplet at its contact lines were mimicked. This approach allowed the model to replicate some experimentally confirmed Wenzel and Cassie wetting cases. Regular roughness structures with different spacing were used to validate the study using the classical Wenzel and Cassie equations. The present work highlights the strengths and weaknesses of the SC model and attempts to qualitatively conform it to the fundamental physics that causes a change in the droplet apparent contact angle when placed on nano/micro structured surfaces.

  20. Regular Expression Matching and Operational Semantics

    Directory of Open Access Journals (Sweden)

    Asiri Rathnayake

    2011-08-01

    Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
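
    The lockstep idea, simulating the set of NFA states reachable after each input character instead of backtracking, can be sketched directly. The hand-built NFA below (for the example pattern `a(b|c)*d`, chosen here for illustration) is not from the paper:

```python
def eps_closure(states, eps):
    """All states reachable from `states` via epsilon transitions."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def lockstep_match(s, delta, eps, start, accept):
    """Thompson-style lockstep simulation: advance a whole set of NFA states
    per input character, so matching is linear in the input length."""
    current = eps_closure({start}, eps)
    for ch in s:
        moved = {t for st in current for t in delta.get((st, ch), ())}
        if not moved:
            return False
        current = eps_closure(moved, eps)
    return accept in current

# NFA for a(b|c)*d:  0 -a-> 1,  1 -b-> 1,  1 -c-> 1,  1 -d-> 2 (accept)
delta = {(0, 'a'): {1}, (1, 'b'): {1}, (1, 'c'): {1}, (1, 'd'): {2}}
eps = {}
```

Because the whole state set advances in lockstep, redundant searches never arise, which is the property the paper's pointer-equality optimization and parallel machine build on.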

  1. Steps to preventing Type 2 diabetes: Exercise, walk more, or sit less?

    Directory of Open Access Journals (Sweden)

    Catrine eTudor-Locke

    2012-11-01

    Full Text Available Accumulated evidence supports the promotion of structured exercise for treating prediabetes and preventing Type 2 diabetes. Unfortunately, contemporary societal changes in lifestyle behaviors (occupational, domestic, transportation, and leisure time) have resulted in a notable widespread deficiency of non-exercise physical activity (e.g., ambulatory activity undertaken outside the context of purposeful exercise) that has been simultaneously exchanged for an excess in sedentary behaviors (e.g., desk work, labor saving devices, motor vehicle travel, and screen-based leisure time pursuits). It is possible that the known beneficial effects of more structured forms of exercise are attenuated or otherwise undermined against this backdrop of normalized and ubiquitous slothful living. Although public health guidelines have traditionally focused on promoting a detailed exercise prescription, it is evident that the more pressing need is to revise and expand the message to address this insidious and deleterious lifestyle shift. Specifically, we recommend that adults avoid averaging < 5,000 steps/day and strive to average ≥ 7,500 steps/day, of which ≥ 3,000 steps (representing at least 30 minutes) should be taken at a cadence ≥ 100 steps/min. They should also practice regularly breaking up extended bouts of sitting with ambulatory activity. Simply put, we must consider advocating a whole message to walk more, sit less, and exercise.
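
    The numeric thresholds in the recommendation above can be restated as a simple checker; the function name and return strings are illustrative, not from the article:

```python
def meets_guideline(avg_steps_per_day, brisk_steps_per_day):
    """Classify daily activity against the abstract's thresholds.
    brisk_steps_per_day = steps taken at a cadence of >= 100 steps/min."""
    if avg_steps_per_day < 5000:
        return "sedentary: below the 5,000 steps/day floor"
    if avg_steps_per_day >= 7500 and brisk_steps_per_day >= 3000:
        return "meets guideline"
    return "intermediate: walk more or add brisk walking"
```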

  2. Selective adsorption of a supramolecular structure on flat and stepped gold surfaces

    Science.gov (United States)

    Peköz, Rengin; Donadio, Davide

    2018-04-01

    Halogenated aromatic molecules assemble on surfaces forming both hydrogen and halogen bonds. Even though these systems have been intensively studied on flat metal surfaces, high-index vicinal surfaces remain challenging, as they may induce complex adsorbate structures. The adsorption of 2,6-dibromoanthraquinone (2,6-DBAQ) on flat and stepped gold surfaces is studied by means of van der Waals corrected density functional theory. Equilibrium geometries and corresponding adsorption energies are systematically investigated for various different adsorption configurations. It is shown that bridge sites and step edges are the preferred adsorption sites for single molecules on flat and stepped surfaces, respectively. The role of van der Waals interactions, halogen bonds and hydrogen bonds are explored for a monolayer coverage of 2,6-DBAQ molecules, revealing that molecular flexibility and intermolecular interactions stabilize two-dimensional networks on both flat and stepped surfaces. Our results provide a rationale for experimental observation of molecular carpeting on high-index vicinal surfaces of transition metals.

  3. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    Science.gov (United States)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA

    2018-05-01

    Irregular meshes allow complicated subsurface structures to be included in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculating geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results than the anisotropic smoothness constraints.
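
    One way to realize such a construction is to build a covariance matrix from a correlation model over the cell centroids and take its inverse square root (via eigendecomposition) as the regularization operator, so that the penalty ||Wm||² is the Mahalanobis norm of the model under the prior covariance. The sketch below uses an exponential correlation model with invented parameters and is an illustration of the idea, not the authors' implementation:

```python
import numpy as np

def geostat_operator(centroids, corr_len=1.0, sill=1.0):
    """Regularization operator W = C^(-1/2) from an exponential correlation
    model over irregular-mesh cell centroids, via eigendecomposition of C."""
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    C = sill * np.exp(-d / corr_len)           # a priori covariance between cells
    w, V = np.linalg.eigh(C)                   # C is symmetric positive definite
    return V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T

rng = np.random.default_rng(1)
cells = rng.uniform(0.0, 10.0, size=(30, 2))   # centroids of an irregular mesh
Wop = geostat_operator(cells, corr_len=3.0)
```

A spatially coherent model (e.g. a constant) then incurs a much smaller penalty than an incoherent one, which is exactly what steers the inversion toward geologically plausible structures.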

  4. Structural comparison of anodic nanoporous-titania fabricated from single-step and three-step of anodization using two paralleled-electrodes anodizing cell

    Directory of Open Access Journals (Sweden)

    Mallika Thabuot

    2016-02-01

    Full Text Available Anodization of a Ti sheet in an ethylene glycol electrolyte containing 0.38 wt% NH4F with the addition of 1.79 wt% H2O at room temperature was studied. Applied potentials of 10-60 V and anodizing times of 1-3 h were used for single-step and three-step anodization within the two-parallel-electrode anodizing cell. The structural and textural properties were investigated by X-ray diffraction (XRD) and scanning electron microscopy (SEM). After annealing at 600°C in an air furnace for 3 h, the TiO2 nanotubes transformed to a higher proportion of the anatase crystal phase. Crystallization of the anatase phase was also enhanced as the duration of the final anodization step increased. With single-step anodization, the pore texture of the oxide film began to appear at an applied potential of 30 V. A better-ordered arrangement of the TiO2-nanotube array with larger pore size was obtained as the applied potential increased. An applied potential of 60 V was selected for the three-step anodization, with an anodizing time of 1-3 h. Results showed that well-smoothed surface coverage with a higher density of porous TiO2 was achieved by prolonging the first and second steps; however, tubes discontinuous in length were produced instead of long vertical tubes. The layer thickness of the anodic oxide film depended on the anodizing time of the last anodization step. A more ordered arrangement of nanostructured TiO2 was produced using three-step anodization at 60 V with 3 h for each step.

  5. Structural properties and complexity of a new network class: Collatz step graphs.

    Directory of Open Access Journals (Sweden)

    Frank Emmert-Streib

    Full Text Available In this paper, we introduce a biologically inspired model to generate complex networks. In contrast to many other construction procedures for growing networks introduced so far, our method generates networks from one-dimensional symbol sequences that are related to the so-called Collatz problem from number theory. The major purpose of the present paper is, first, to derive a symbol sequence from the Collatz problem, which we call the step sequence, and to investigate its structural properties. Second, we introduce a construction procedure for growing networks that is based on these step sequences. Third, we investigate the structural properties of this new network class, including their finite-size scaling and the asymptotic behavior of their complexity, average shortest path lengths and clustering coefficients. Interestingly, in contrast to many other network models, including the small-world network of Watts & Strogatz, we find that CS graphs become 'smaller' with increasing size.
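
    The abstract does not spell out the encoding of the step sequence, so the sketch below uses one natural choice as an assumption: one symbol per Collatz step, 0 for a halving step and 1 for a 3n+1 step:

```python
def collatz_step_sequence(n):
    """Symbol sequence of the Collatz trajectory of n:
    0 for n -> n/2 (n even), 1 for n -> 3n+1 (n odd)."""
    seq = []
    while n != 1:
        if n % 2 == 0:
            seq.append(0)
            n //= 2
        else:
            seq.append(1)
            n = 3 * n + 1
    return seq
```

For example, starting from 6 the trajectory 6, 3, 10, 5, 16, 8, 4, 2, 1 yields the sequence [0, 1, 0, 1, 0, 0, 0, 0].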

  6. Basis Expansion Approaches for Regularized Sequential Dictionary Learning Algorithms With Enforced Sparsity for fMRI Data Analysis.

    Science.gov (United States)

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-09-01

    Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness along the column direction. This prior information, which can be converted into a smoothness constraint on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis that account for this prior information. These algorithms differ from existing ones in their dictionary update stage, whose steps are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem in which temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications to synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
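
    The dictionary-update stage described, a regularized rank-one approximation solved by a power-method-like alternation, can be sketched as follows. The second-difference roughness penalty stands in for the paper's basis-expansion regularizer, and all parameter values are assumptions for illustration:

```python
import numpy as np

def smooth_rank_one(E, mu=1.0, iters=50, seed=0):
    """Alternating (power-method-style) minimization of
    ||E - d x^T||_F^2 + mu * d' Omega d,
    where Omega = D2' D2 penalizes temporal roughness of the atom d."""
    T, N = E.shape
    D2 = np.diff(np.eye(T), n=2, axis=0)               # second-difference operator
    Omega = D2.T @ D2
    rng = np.random.default_rng(seed)
    d = rng.standard_normal(T)
    for _ in range(iters):
        x = E.T @ d / (d @ d)                          # code update (least squares)
        d = np.linalg.solve((x @ x) * np.eye(T) + mu * Omega, E @ x)  # regularized atom
    return d / np.linalg.norm(d), x

# toy fMRI-like residual: one smooth temporal atom times random codes, plus noise
t = np.linspace(0.0, np.pi, 50)
s_true = np.sin(t)
rng = np.random.default_rng(2)
E = np.outer(s_true, rng.standard_normal(20)) + 0.05 * rng.standard_normal((50, 20))
d, x = smooth_rank_one(E)
```

Without the mu-term this reduces to plain power iteration for the leading singular pair; the regularizer biases the recovered atom toward temporally smooth profiles.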

  7. DESIGN OF STRUCTURAL ELEMENTS IN THE EVENT OF THE PRE-SET RELIABILITY, REGULAR LOAD AND BEARING CAPACITY DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Tamrazyan Ashot Georgievich

    2012-10-01

    Full Text Available An accurate and adequate description of external influences and of the bearing capacity of the structural material requires the use of probability-theory methods. In this regard, a characteristic that describes the probability of failure-free operation is required. Reliability here means that the maximum stress caused by the action of the load will not exceed the bearing capacity. In this paper, the author presents a solution to a design problem, namely, the determination of the reliability of pre-set design parameters, in particular, cross-sectional dimensions. If the load distribution pattern is available, employing the regularities of the distribution functions makes it possible to find the pattern of distribution of maximum stresses over the structure. Similarly, we can proceed to the design of structures of pre-set rigidity, reliability and stability in the case of regular load distribution. We consider a design element (a monolithic concrete slab) whose maximum stress S depends linearly on the load q. Within a pre-set period of time, the number of load exceedances follows the Poisson law. The analysis demonstrates that the variability of the bearing capacity produces a stronger effect on the relative cross-sectional dimensions of a slab than the variability of the loads. It is therefore particularly important to reduce the coefficient of variation of the load capacity. One method contemplates truncation of the bearing-capacity distribution by pre-culling the construction material.
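
    For the normal-normal special case of the stress-versus-capacity comparison, the failure probability has a standard closed form via the reliability index (this textbook formula is not specific to the paper). It also illustrates why capacity variability matters more when the mean capacity exceeds the mean load:

```python
from math import erf, sqrt

def failure_probability(mu_R, cov_R, mu_S, cov_S):
    """P(S > R) for independent normal capacity R and load effect S,
    parameterized by means and coefficients of variation (c.o.v.)."""
    sig_R, sig_S = mu_R * cov_R, mu_S * cov_S
    beta = (mu_R - mu_S) / sqrt(sig_R ** 2 + sig_S ** 2)   # reliability index
    return 0.5 * (1.0 - erf(beta / sqrt(2.0)))             # Phi(-beta)

# with mu_R = 2*mu_S, raising the capacity c.o.v. hurts more than raising the
# load c.o.v. by the same amount, because sigma_R scales with the larger mean
p_base = failure_probability(2.0, 0.1, 1.0, 0.1)
p_capvar = failure_probability(2.0, 0.2, 1.0, 0.1)
p_loadvar = failure_probability(2.0, 0.1, 1.0, 0.2)
```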

  8. Enhanced manifold regularization for semi-supervised classification.

    Science.gov (United States)

    Gan, Haitao; Luo, Zhizeng; Fan, Yingle; Sang, Nong

    2016-06-01

    Manifold regularization (MR) has become one of the most widely used approaches in the semi-supervised learning field. It has shown superiority by exploiting the local manifold structure of both labeled and unlabeled data. The manifold structure is modeled by constructing a Laplacian graph and then incorporated in learning through a smoothness regularization term. Hence the labels of labeled and unlabeled data vary smoothly along the geodesics on the manifold. However, MR has ignored the discriminative ability of the labeled and unlabeled data. To address this problem, we propose an enhanced MR framework for semi-supervised classification in which the local discriminative information of the labeled and unlabeled data is explicitly exploited. To make full use of the labeled data, we first employ a semi-supervised clustering method to discover the underlying data space structure of the whole dataset. Then we construct a local discrimination graph to model the discriminative information of the labeled and unlabeled data according to the discovered intrinsic structure. Therefore, data points that may be from different clusters, though similar on the manifold, are forced apart. Finally, the discrimination graph is incorporated into the MR framework. In particular, we utilize semi-supervised fuzzy c-means and Laplacian regularized kernel minimum squared error for semi-supervised clustering and classification, respectively. Experimental results on several benchmark datasets and on face recognition demonstrate the effectiveness of our proposed method.
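
    The Laplacian-smoothness mechanism at the core of MR can be shown in a few lines: solve (J + gamma*L) f = J y, so that the few known labels propagate along a similarity graph. This is a generic minimal sketch (Gaussian-weighted graph, toy clusters), not the paper's enhanced method with its discrimination graph:

```python
import numpy as np

def gaussian_laplacian(X, sigma=1.0):
    """Graph Laplacian of a fully connected Gaussian-weighted similarity graph."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def laplacian_ssl(X, y, labeled, gamma=0.1):
    """Minimize sum over labeled i of (f_i - y_i)^2 + gamma * f' L f."""
    L = gaussian_laplacian(X)
    J = np.diag(labeled.astype(float))      # selects the labeled points
    return np.linalg.solve(J + gamma * L, J @ y)

# two tight clusters, one labeled point in each
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.2, (10, 2)), rng.normal(5.0, 0.2, (10, 2))])
y = np.zeros(20)
y[0], y[10] = 1.0, -1.0
labeled = np.zeros(20, dtype=bool)
labeled[0] = labeled[10] = True
f = laplacian_ssl(X, y, labeled)
```

The smoothness term spreads each label across its cluster, which is the behavior the proposed discrimination graph then sharpens between clusters.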

  9. Graph theoretical ordering of structures as a basis for systematic searches for regularities in molecular data

    International Nuclear Information System (INIS)

    Randic, M.; Wilkins, C.L.

    1979-01-01

    Selected molecular data on alkanes have been reexamined in a search for general regularities in isomeric variations. In contrast to the prevailing approaches concerned with fitting data by searching for optimal parameterization, the present work is primarily aimed at establishing trends, i.e., searching for relative magnitudes and their regularities among the isomers. Such an approach is complementary to curve fitting or correlation seeking procedures. It is particularly useful when there are incomplete data which allow trends to be recognized but no quantitative correlation to be established. One proceeds by first ordering structures. One way is to consider molecular graphs and enumerate paths of different length as the basic graph invariant. It can be shown that, for several thermodynamic molecular properties, the number of paths of length two (p2) and length three (p3) are critical. Hence, an ordering based on p2 and p3 indicates possible trends and behavior for many molecular properties, some of which relate to others, some which do not. By considering a grid graph derived by attributing to each isomer coordinates (p2, p3) and connecting points along the coordinate axis, one obtains a simple presentation useful for isomer structural interrelations. This skeletal frame is one upon which possible trends for different molecular properties may be conveniently represented. The significance of the results and their conceptual value is discussed. 16 figures, 3 tables
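
    Counting p2 and p3 for a molecular graph is straightforward; for acyclic (alkane) skeletons the degree-based formulas below are exact. The adjacency-list input format is an assumption for this sketch:

```python
def path_counts(adj):
    """p2 and p3 path counts for an acyclic molecular graph given as an
    adjacency list {vertex: [neighbours]} (both directions listed)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    # each path of length two is a pair of edges meeting at a centre vertex
    p2 = sum(d * (d - 1) // 2 for d in deg.values())
    # each path of length three extends one edge at both of its endpoints
    p3 = sum((deg[u] - 1) * (deg[v] - 1)
             for u, ns in adj.items() for v in ns if u < v)
    return p2, p3

butane = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}          # n-butane carbon skeleton
isobutane = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}       # 2-methylpropane skeleton
```

n-Butane gives (p2, p3) = (2, 1) while its branched isomer gives (3, 0), illustrating how the (p2, p3) coordinates separate isomers on the grid graph.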

  10. Cardiac C-arm computed tomography using a 3D + time ROI reconstruction method with spatial and temporal regularization

    Energy Technology Data Exchange (ETDEWEB)

    Mory, Cyril, E-mail: cyril.mory@philips.com [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Auvray, Vincent; Zhang, Bo [Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Grass, Michael; Schäfer, Dirk [Philips Research, Röntgenstrasse 24–26, D-22335 Hamburg (Germany); Chen, S. James; Carroll, John D. [Department of Medicine, Division of Cardiology, University of Colorado Denver, 12605 East 16th Avenue, Aurora, Colorado 80045 (United States); Rit, Simon [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Centre Léon Bérard, 28 rue Laënnec, F-69373 Lyon (France); Peyrin, Françoise [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); X-ray Imaging Group, European Synchrotron, Radiation Facility, BP 220, F-38043 Grenoble Cedex (France); Douek, Philippe; Boussel, Loïc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Hospices Civils de Lyon, 28 Avenue du Doyen Jean Lépine, 69500 Bron (France)

    2014-02-15

    Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.
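
As a rough illustration of the four regularization steps this abstract lists, here is a minimal numpy sketch on a toy 1D+time volume. All function names are ours, the conjugate-gradient data-fidelity step and true 3D spatial TV are elided, and the TV solver is a crude smoothed gradient descent rather than the authors' implementation:

```python
import numpy as np

def enforce_positivity(f):
    """Regularization step 1: clip negative values (attenuation is nonnegative)."""
    return np.maximum(f, 0.0)

def average_outside_mask(f, motion_mask):
    """Regularization step 2: outside the motion mask (heart and vessels),
    replace each voxel's time series by its temporal mean (static background)."""
    out = f.copy()
    mean = np.broadcast_to(f.mean(axis=1, keepdims=True), f.shape)
    out[~motion_mask] = mean[~motion_mask]
    return out

def tv_denoise(f, lam=0.1, axis=0, n_iter=200, step=0.2, eps=1e-8):
    """Regularization steps 3 and 4: total variation minimization along one
    axis (axis=0 standing in for 3D spatial TV, axis=1 for 1D temporal TV).
    Crude smoothed gradient descent on 0.5*||u - f||^2 + lam*TV(u); real
    implementations use proximal TV solvers."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(u, axis=axis)
        g = d / np.sqrt(d * d + eps)          # smoothed subgradient of |d|
        dtg = np.zeros_like(u)                # D^T g, gradient of TV(u)
        lo = [slice(None)] * u.ndim; lo[axis] = slice(0, -1)
        hi = [slice(None)] * u.ndim; hi[axis] = slice(1, None)
        dtg[tuple(lo)] -= g
        dtg[tuple(hi)] += g
        u -= step * (u - f + lam * dtg)
    return u

# One outer iteration on a toy 1D+time "volume" (the conjugate-gradient
# reconstruction step that alternates with these is omitted):
rng = np.random.default_rng(0)
vol = rng.normal(1.0, 0.3, size=(64, 10))       # 64 voxels x 10 cardiac phases
mask = np.zeros(64, dtype=bool); mask[20:40] = True
vol = enforce_positivity(vol)
vol = average_outside_mask(vol, mask)
vol = tv_denoise(vol, lam=0.05, axis=0)         # "spatial" TV
vol = tv_denoise(vol, lam=0.05, axis=1)         # temporal TV
```

Because each regularizer is a separate operator applied between reconstruction steps, swapping one in or out does not touch the projection/backprojection code, which is the decoupling the conclusion refers to.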

  11. Cardiac C-arm computed tomography using a 3D + time ROI reconstruction method with spatial and temporal regularization

    International Nuclear Information System (INIS)

    Mory, Cyril; Auvray, Vincent; Zhang, Bo; Grass, Michael; Schäfer, Dirk; Chen, S. James; Carroll, John D.; Rit, Simon; Peyrin, Françoise; Douek, Philippe; Boussel, Loïc

    2014-01-01

    Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection

  12. Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology

    Science.gov (United States)

    Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang

    2018-03-01

    In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models no matter whether a forcing term is considered or not. The obtained expressions for the nonequilibrium part are merely related to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including the single-phase and two-phase layered power-law fluid flows between two parallel plates, and the droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase the computing efficiency by around 15%, while keeping the same accuracy and stability. Also, the present model is found to be capable of reasonably predicting the critical capillary number of droplet breakup.
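
The regularization step mentioned here can be illustrated in its simplest standard form for a single-phase D2Q9 lattice. This is a generic sketch, not the authors' code: the nonequilibrium populations are rebuilt from their second-order Hermite moment only, filtering out higher-order nonhydrodynamic content; the paper's pseudo-inverse formulation, color-gradient and power-law machinery are omitted.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and sound speed squared.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0

def regularize(f_neq):
    """Rebuild the nonequilibrium part from its second-order moment:
    f_i^reg = w_i/(2*cs^4) * (Q_i : Pi1), with Q_i = c_i c_i - cs2*I
    and Pi1 = sum_i c_i c_i f_i^neq."""
    Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)   # (9, 2, 2)
    Pi1 = np.einsum('ia,ib,i->ab', c, c, f_neq)           # momentum-flux moment
    return w * np.einsum('iab,ab->i', Q, Pi1) / (2.0 * cs2 ** 2)
```

By construction the regularized populations carry zero spurious mass and momentum while preserving the second-order (momentum-flux) moment exactly, which is what makes the filter hydrodynamically harmless.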

  13. Structures of adsorbed CO on atomically smooth and on stepped single crystal surfaces

    International Nuclear Information System (INIS)

    Madey, T.E.; Houston, J.E.

    1980-01-01

    The structures of molecular CO adsorbed on atomically smooth surfaces and on surfaces containing monatomic steps have been studied using the electron stimulated desorption ion angular distribution (ESDIAD) method. For CO adsorbed on the close packed Ru(001) and W(110) surfaces, the dominant bonding mode is via the carbon atom, with the CO molecular axis perpendicular to the plane of the surface. For CO on atomically rough Pd(210), and for CO adsorbed at step sites on four different surfaces vicinal to W(110), the axis of the molecule is tilted or inclined away from the normal to the surface. The ESDIAD method, in which ion desorption angles are related to surface bond angles, provides a direct determination of the structures of adsorbed molecules and molecular complexes on surfaces

  14. Manifold Based Low-rank Regularization for Image Restoration and Semi-supervised Learning

    OpenAIRE

    Lai, Rongjie; Li, Jia

    2017-01-01

    Low-rank structures play important role in recent advances of many problems in image science and data science. As a natural extension of low-rank structures for data with nonlinear structures, the concept of the low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold based low-rank regularization as a linear approximation of manifold dimension. This regularization is less restricted than the global low-rank regu...

  15. One-step sol-gel imprint lithography for guided-mode resonance structures.

    Science.gov (United States)

    Huang, Yin; Liu, Longju; Johnson, Michael; C Hillier, Andrew; Lu, Meng

    2016-03-04

    Guided-mode resonance (GMR) structures consisting of sub-wavelength periodic gratings are capable of producing narrow-linewidth optical resonances. This paper describes a sol-gel-based imprint lithography method for the fabrication of submicron 1D and 2D GMR structures. This method utilizes a patterned polydimethylsiloxane (PDMS) mold to fabricate the grating coupler and waveguide for a GMR device using a sol-gel thin film in a single step. An organic-inorganic hybrid sol-gel film was selected as the imprint material because of its relatively high refractive index. The optical responses of several sol-gel GMR devices were characterized, and the experimental results were in good agreement with the results of electromagnetic simulations. The influence of processing parameters was investigated in order to determine how finely the spectral response and resonant wavelength of the GMR devices could be tuned. As an example potential application, refractometric sensing experiments were performed using a 1D sol-gel device. The results demonstrated a refractive index sensitivity of 50 nm/refractive index unit. This one-step fabrication process offers a simple, rapid, and low-cost means of fabricating GMR structures. We anticipate that this method can be valuable in the development of various GMR-based devices as it can readily enable the fabrication of complex shapes and allow the doping of optically active materials into sol-gel thin film.

  16. One-step sol–gel imprint lithography for guided-mode resonance structures

    International Nuclear Information System (INIS)

    Huang, Yin; Liu, Longju; Lu, Meng; Johnson, Michael; C Hillier, Andrew

    2016-01-01

    Guided-mode resonance (GMR) structures consisting of sub-wavelength periodic gratings are capable of producing narrow-linewidth optical resonances. This paper describes a sol–gel-based imprint lithography method for the fabrication of submicron 1D and 2D GMR structures. This method utilizes a patterned polydimethylsiloxane (PDMS) mold to fabricate the grating coupler and waveguide for a GMR device using a sol–gel thin film in a single step. An organic–inorganic hybrid sol–gel film was selected as the imprint material because of its relatively high refractive index. The optical responses of several sol–gel GMR devices were characterized, and the experimental results were in good agreement with the results of electromagnetic simulations. The influence of processing parameters was investigated in order to determine how finely the spectral response and resonant wavelength of the GMR devices could be tuned. As an example potential application, refractometric sensing experiments were performed using a 1D sol–gel device. The results demonstrated a refractive index sensitivity of 50 nm/refractive index unit. This one-step fabrication process offers a simple, rapid, and low-cost means of fabricating GMR structures. We anticipate that this method can be valuable in the development of various GMR-based devices as it can readily enable the fabrication of complex shapes and allow the doping of optically active materials into sol–gel thin film. (paper)

  17. A Regularization SAA Scheme for a Stochastic Mathematical Program with Complementarity Constraints

    Directory of Open Access Journals (Sweden)

    Yu-xin Li

    2014-01-01

    Full Text Available To reflect uncertain data in practical problems, stochastic versions of the mathematical program with complementarity constraints (MPCC have drawn much attention in the recent literature. Our concern is the detailed analysis of convergence properties of a regularization sample average approximation (SAA method for solving a stochastic mathematical program with complementarity constraints (SMPCC. The analysis of this regularization method is carried out in three steps: First, the almost sure convergence of optimal solutions of the regularized SAA problem to that of the true problem is established by the notion of epiconvergence in variational analysis. Second, under MPCC-MFCQ, which is weaker than MPCC-LICQ, we show that any accumulation point of Karush-Kuhn-Tucker points of the regularized SAA problem is almost surely a kind of stationary point of SMPCC as the sample size tends to infinity. Finally, some numerical results are reported to show the efficiency of the method proposed.

  18. A two-step FEM-SEM approach for wave propagation analysis in cable structures

    Science.gov (United States)

    Zhang, Songhan; Shen, Ruili; Wang, Tao; De Roeck, Guido; Lombaert, Geert

    2018-02-01

    Vibration-based methods are among the most widely studied in structural health monitoring (SHM). It is well known, however, that the low-order modes, characterizing the global dynamic behaviour of structures, are relatively insensitive to local damage. Such local damage may be easier to detect by methods based on wave propagation which involve local high frequency behaviour. The present work considers the numerical analysis of wave propagation in cables. A two-step approach is proposed which allows taking into account the cable sag and the distribution of the axial forces in the wave propagation analysis. In the first step, the static deformation and internal forces are obtained by the finite element method (FEM), taking into account geometric nonlinear effects. In the second step, the results from the static analysis are used to define the initial state of the dynamic analysis which is performed by means of the spectral element method (SEM). The use of the SEM in the second step of the analysis allows for a significant reduction in computational costs as compared to a FE analysis. This methodology is first verified by means of a full FE analysis for a single stretched cable. Next, simulations are made to study the effects of damage in a single stretched cable and a cable-supported truss. The results of the simulations show how damage significantly affects the high frequency response, confirming the potential of wave propagation based methods for SHM.

  19. A matrix-free, implicit, incompressible fractional-step algorithm for fluid–structure interaction applications

    CSIR Research Space (South Africa)

    Oxtoby, Oliver F

    2012-05-01

    Full Text Available In this paper we detail a fast, fully-coupled, partitioned fluid–structure interaction (FSI) scheme. For the incompressible fluid, new fractional-step algorithms are proposed which make possible the fully implicit, but matrixfree, parallel solution...

  20. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2-regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
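
Since the abstract states that the BPR solution converges to an ℓ2-regularized (Tikhonov/ridge) least-squares estimate, the final estimator can be sketched as below. This is only that closed form, under our own toy data; the min-max derivation and the paper's MSE-based rule for picking the regularizer are omitted.

```python
import numpy as np

def ridge_solution(A, y, gamma):
    """argmin_x ||A x - y||^2 + gamma ||x||^2  (Tikhonov / ridge regression)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ y)

# Regularization tames an ill-conditioned system (illustrative data):
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
A[:, 4] = A[:, 3] + 1e-6 * rng.normal(size=20)   # near-collinear columns
y = A @ np.ones(5) + 0.01 * rng.normal(size=20)
x_reg = ridge_solution(A, y, gamma=0.1)
```

The whole point of methods like BPR is choosing `gamma` in a principled way rather than by hand, as is done here for illustration.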

  1. A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

    DEFF Research Database (Denmark)

    Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano

    2014-01-01

    We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly named as the EEG inverse problem. We propose a new method to induce neurophysiological meaningful solutions, which takes into account the smoothness, structured...... sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding...... matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios...

  2. A Two-Step Strategy for System Identification of Civil Structures for Structural Health Monitoring Using Wavelet Transform and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Carlos Andres Perez-Ramirez

    2017-01-01

    Full Text Available Nowadays, the accurate identification of natural frequencies and damping ratios plays an important role in smart civil engineering, since they can be used for seismic design, vibration control, and condition assessment, among others. To achieve this in a practical way, it is required to instrument the structure and apply techniques able to deal with noise-corrupted and non-linear signals, as these are common features of real-life civil structures. In this article, a two-step strategy is proposed for performing accurate modal parameter identification in an automated manner. In the first step, the measured signals are obtained and decomposed using the natural excitation technique and the synchrosqueezed wavelet transform, respectively. Then, the second step estimates the modal parameters by solving an optimization problem employing a genetic algorithm-based approach, where the micropopulation concept is used to improve the convergence speed as well as the accuracy of the estimated values. The accuracy and effectiveness of the proposal are tested using both the simulated response of a benchmark structure and the measurements of a real eight-story building. The obtained results show that the proposed strategy can estimate the modal parameters accurately, indicating that it can be considered as an alternative to perform the abovementioned task.

  3. Fish mouths as engineering structures for vortical cross-step filtration

    Science.gov (United States)

    Sanderson, S. Laurie; Roberts, Erin; Lineburg, Jillian; Brooks, Hannah

    2016-03-01

    Suspension-feeding fishes such as goldfish and whale sharks retain prey without clogging their oral filters, whereas clogging is a major expense in industrial crossflow filtration of beer, dairy foods and biotechnology products. Fishes' abilities to retain particles that are smaller than the pore size of the gill-raker filter, including extraction of particles despite large holes in the filter, also remain unexplained. Here we show that unexplored combinations of engineering structures (backward-facing steps forming d-type ribs on the porous surface of a cone) cause fluid dynamic phenomena distinct from current biological and industrial filter operations. This vortical cross-step filtration model prevents clogging and explains the transport of tiny concentrated particles to the oesophagus using a hydrodynamic tongue. Mass transfer caused by vortices along d-type ribs in crossflow is applicable to filter-feeding duck beak lamellae and whale baleen plates, as well as the fluid mechanics of ventilation at fish gill filaments.

  4. Step dynamics and terrace-width distribution on flame-annealed gold films: The effect of step-step interaction

    International Nuclear Information System (INIS)

    Shimoni, Nira; Ayal, Shai; Millo, Oded

    2000-01-01

    Dynamics of atomic steps and the terrace-width distribution within step bunches on flame-annealed gold films are studied using scanning tunneling microscopy. The distribution is narrower than commonly observed for vicinal planes and has a Gaussian shape, indicating a short-range repulsive interaction between the steps, with an apparently large interaction constant. The dynamics of the atomic steps, on the other hand, appear to be influenced, in addition to these short-range interactions, also by a longer-range attraction of steps towards step bunches. Both types of interactions promote self-ordering of terrace structures on the surface. When current is driven through the films, a step-fingering instability sets in, reminiscent of the Bales-Zangwill instability

  5. Elastic-net regularization approaches for genome-wide association studies of rheumatoid arthritis.

    Science.gov (United States)

    Cho, Seoae; Kim, Haseong; Oh, Sohee; Kim, Kyunga; Park, Taesung

    2009-12-15

    The current trend in genome-wide association studies is to identify regions where the true disease-causing genes may lie by evaluating thousands of single-nucleotide polymorphisms (SNPs) across the whole genome. However, many challenges exist in detecting disease-causing genes among the thousands of SNPs. Examples include multicollinearity and multiple testing issues, especially when a large number of correlated SNPs are simultaneously tested. Multicollinearity can often occur when predictor variables in a multiple regression model are highly correlated, and can cause imprecise estimation of association. In this study, we propose a simple stepwise procedure that identifies disease-causing SNPs simultaneously by employing elastic-net regularization, a variable selection method that allows one to address multicollinearity. At Step 1, the single-marker association analysis was conducted to screen SNPs. At Step 2, the multiple-marker association was scanned based on the elastic-net regularization. The proposed approach was applied to the rheumatoid arthritis (RA) case-control data set of Genetic Analysis Workshop 16. While the selected SNPs at the screening step are located mostly on chromosome 6, the elastic-net approach identified putative RA-related SNPs on other chromosomes in an increased proportion. For some of those putative RA-related SNPs, we identified the interactions with sex, a well known factor affecting RA susceptibility.
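
The two-step procedure described above can be sketched generically: screen markers by single-marker association strength, then fit an elastic net on the survivors. This is a minimal numpy illustration with a continuous response and a plain coordinate-descent solver, not the authors' GAW16 analysis code; all names and tuning values are ours.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net(X, y, lam=0.1, alpha=0.5, n_iter=200):
    """Coordinate descent for
    (1/2n)||y - X b||^2 + lam*(alpha*||b||_1 + (1 - alpha)/2*||b||_2^2)."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    r = y.copy()                              # residual y - X b
    for _ in range(n_iter):
        for j in range(p):
            rho = X[:, j] @ r / n + col_ss[j] * b[j]
            new = soft(rho, lam * alpha) / (col_ss[j] + lam * (1 - alpha))
            r += X[:, j] * (b[j] - new)       # keep the residual in sync
            b[j] = new
    return b

def two_step_select(X, y, keep=20, lam=0.1, alpha=0.5):
    """Step 1: screen markers by absolute marginal association.
    Step 2: run the elastic net on the screened set only."""
    score = np.abs(X.T @ (y - y.mean()))
    idx = np.argsort(score)[::-1][:keep]
    b = np.zeros(X.shape[1])
    b[idx] = elastic_net(X[:, idx], y, lam=lam, alpha=alpha)
    return b
```

The ridge part of the penalty (the `1 - alpha` term) is what lets correlated markers enter together instead of arbitrarily dropping all but one, which is the multicollinearity argument made in the abstract.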

  6. Passive control of coherent structures in a modified backwards-facing step flow

    Science.gov (United States)

    Ormonde, Pedro C.; Cavalieri, André V. G.; Silva, Roberto G. A. da; Avelar, Ana C.

    2018-05-01

    We study a modified backwards-facing step flow, with the addition of two different plates; one is a baseline, impermeable plate and the second a perforated one. An experimental investigation is carried out for a turbulent reattaching shear layer downstream of the two plates. The proposed setup is a model configuration to study how the plate characteristics affect the separated shear layer and how turbulent kinetic energies and large-scale coherent structures are modified. Measurements show that the perforated plate changes the mean flow field, mostly by reducing the intensity of reverse flow close to the bottom wall. Disturbance amplitudes are significantly reduced up to five step heights downstream of the trailing edge of the plate, more specifically in the recirculation region. A loudspeaker is then used to introduce phase-locked, low-amplitude perturbations upstream of the plates, and phase-averaged measurements allow a quantitative study of large-scale structures in the shear-layer. The evolution of such coherent structures is evaluated in light of linear stability theory, comparing the eigenfunction of the Kelvin-Helmholtz mode to the experimental results. We observe a close match of linear-stability eigenfunctions with phase-averaged amplitudes for the two tested Strouhal numbers. The perforated plate is found to reduce the amplitude of the Kelvin-Helmholtz coherent structures in comparison to the baseline, impermeable plate, a behavior consistent with the predicted amplification trends from linear stability.

  7. A step-by-step experiment of 3C-SiC hetero-epitaxial growth on 4H-SiC by CVD

    Energy Technology Data Exchange (ETDEWEB)

    Xin, Bin [School of Microelectronics, Xidian University, Key Laboratory of Wide Band-Gap Semiconductor Materials and Devices, Xi’an 710071 (China); Jia, Ren-Xu, E-mail: rxjia@mail.xidian.edu.cn [School of Microelectronics, Xidian University, Key Laboratory of Wide Band-Gap Semiconductor Materials and Devices, Xi’an 710071 (China); Hu, Ji-Chao [School of Microelectronics, Xidian University, Key Laboratory of Wide Band-Gap Semiconductor Materials and Devices, Xi’an 710071 (China); Tsai, Cheng-Ying [Graduate Institute of Electronics Engineering, National Taiwan University, 10617 Taipei, Taiwan (China); Lin, Hao-Hsiung, E-mail: hhlin@ntu.edu.tw [Graduate Institute of Electronics Engineering, National Taiwan University, 10617 Taipei, Taiwan (China); Graduate Institute of Photonics and Optoelectronics, National Taiwan University, 10617 Taipei, Taiwan (China); Zhang, Yu-Ming [School of Microelectronics, Xidian University, Key Laboratory of Wide Band-Gap Semiconductor Materials and Devices, Xi’an 710071 (China)

    2015-12-01

    Highlights: • A step-by-step experiment to investigate the growth mechanism of hetero-epitaxial SiC is proposed. • In our experiment the DPB defects showed a protrusive, regular “hill” morphology at much lower density, whereas such defects normally occur at high density with shallow grooves. Based on this defect morphology, the anisotropic migration rate of adatoms is regarded as the origin of the DPB defect morphology, and a new “DPB defects assist epitaxy” growth mode is proposed based on the Frank-van der Merwe growth mode. - Abstract: To investigate the growth mechanism of hetero-epitaxial SiC, a step-by-step experiment on 3C-SiC epitaxial layers grown on 4H-SiC on-axis substrates by the CVD method is reported in this paper. Four step experiments with four one-quarter 4H-SiC wafers were performed. Optical microscopy and atomic force microscopy (AFM) were used to characterize the morphology of the epitaxial layers. It was previously found that the main factor affecting the epilayer morphology was double-positioning boundary (DPB) defects, which normally occur at high density with shallow grooves. In our experiment, however, a protrusive regular “hill” morphology with a much lower defect density was observed under high-temperature growth conditions. The anisotropic migration of adatoms is regarded as the origin of the DPB defect morphology, and a new “DPB defects assist epitaxy” growth mode has been proposed based on the Frank-van der Merwe growth mode. Raman spectroscopy and X-ray diffraction were used to examine the polytypes and the quality of the epitaxial layers.

  8. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.
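
The manifold regularization ingredient used by MRMTL rests on a graph Laplacian built from the data; a minimal sketch of that building block follows (a brute-force kNN graph with 0/1 weights, our own simplification; the multitask subspace learning itself is not reproduced here).

```python
import numpy as np

def graph_laplacian(X, k=5):
    """Unnormalized Laplacian L = D - W of a symmetrized k-nearest-neighbour
    graph over the rows of X; L is the matrix behind the manifold penalty."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                 # skip the point itself
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                                # symmetrize
    return np.diag(W.sum(axis=1)) - W

def manifold_penalty(f, L):
    """Smoothness of a score vector f along the data manifold:
    f^T L f = 0.5 * sum_ij W_ij (f_i - f_j)^2."""
    return float(f @ L @ f)
```

Adding `manifold_penalty` to a classifier's training objective penalizes predictions that vary sharply between neighbouring samples, which is the "smooth along the data manifold" requirement in the abstract.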

  9. Total variation regularization for fMRI-based prediction of behavior

    Science.gov (United States)

    Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand

    2011-01-01

    While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional MRI (fMRI) data, that provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioural variables from brain activation patterns better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so that the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the ℓ1 norm of the image gradient, a.k.a. its Total Variation (TV), as regularization. We apply for the first time this method to fMRI data, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification. PMID:21317080
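
The TV regularizer itself, the ℓ1 norm of the image gradient mentioned above, is simple to state in code. This sketch computes the isotropic 2D version on a coefficient map; the full prediction framework (a loss term plus this penalty, minimized with a proximal solver) is not reproduced.

```python
import numpy as np

def total_variation(w):
    """Isotropic TV of a 2D coefficient map: the l1 norm of the magnitude of
    the forward-difference image gradient. Edges are padded by replicating
    the last row/column, so boundary differences are zero."""
    gx = np.diff(w, axis=0, append=w[-1:, :])
    gy = np.diff(w, axis=1, append=w[:, -1:])
    return float(np.sqrt(gx ** 2 + gy ** 2).sum())
```

Because the penalty depends only on gradients, it is blind to constant maps and charges exactly the jump height times the boundary length for piecewise-constant maps, which is why TV-regularized weight maps come out as compact, interpretable blobs.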

  10. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    Science.gov (United States)

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).

  11. Hessian regularization based non-negative matrix factorization for gene expression data clustering.

    Science.gov (United States)

    Liu, Xiao; Shi, Jun; Wang, Congzhi

    2015-01-01

    Since a key step in the analysis of gene expression data is to detect groups of genes that have similar expression patterns, clustering techniques are commonly used to analyze gene expression data. Data representation plays an important role in clustering analysis. The non-negative matrix factorization (NMF) is a widely used data representation method with great success in machine learning. Although the traditional manifold regularization method, Laplacian regularization (LR), can improve the performance of NMF, LR still suffers from the problem of its weak extrapolating power. Hessian regularization (HR) is a newly developed manifold regularization method, whose natural properties make it more extrapolating, especially for small sample data. In this work, we propose the HR-based NMF (HR-NMF) algorithm, and then apply it to represent gene expression data for further clustering task. The clustering experiments are conducted on five commonly used gene datasets, and the results indicate that the proposed HR-NMF outperforms LR-based NMF and the original NMF, which suggests the potential application of HR-NMF for gene expression data.
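
The NMF core that HR-NMF builds on can be sketched with the classic Lee-Seung multiplicative updates. This is only the unregularized baseline under our own toy data; the Hessian (manifold) penalty the paper adds to the update rules is omitted.

```python
import numpy as np

def nmf(V, r, n_iter=300, eps=1e-9, seed=0):
    """Multiplicative updates for V ≈ W H under the Frobenius loss,
    keeping W and H entrywise non-negative. The manifold term that
    HR-NMF adds to the H update is not included."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H
```

For clustering, each sample (column of V) is typically assigned to the row of H with the largest coefficient, so the quality of the H representation directly drives the clustering result.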

  12. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN' [Brouwer, A.E., Cohen, A.M., Neumaier,

  13. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

Full Text Available We propose a two-step variable selection procedure for high-dimensional quantile regressions, in which the dimension of the covariates, pn, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to a model whose size has the same order as that of the true model, with the selected model covering the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite-sample performance of the proposed approach.

  14. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim

    2017-01-01

This article addresses improvements on the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well acknowledged that the estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force, by construction, the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable than traditional sample covariance estimates for high dimensional problems with a limited number of secondary data samples. The motivation behind this work is to understand the effect of ρ and to set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noise while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. Simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.
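
A minimal sketch of the RTE itself, assuming the common fixed-point form Σ ← (1−ρ)(p/n) Σᵢ xᵢxᵢᵀ/(xᵢᵀΣ⁻¹xᵢ) + ρI with trace normalization; conventions (e.g. the normalization) vary across papers, and the design rule for ρ derived in the article is not reproduced here:

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=100, tol=1e-8):
    """Fixed-point iteration for a regularized Tyler estimator (RTE).

    X: (n, p) array of secondary data samples; rho in (0, 1] is the
    shrinkage/regularization parameter. Returns a p x p estimate normalized
    to trace p (one common convention; details vary by paper).
    """
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        # Mahalanobis-type weights x_i^T Sigma^{-1} x_i for each sample.
        q = np.einsum("ij,jk,ik->i", X, inv, X)
        s = (1 - rho) * (p / n) * (X / q[:, None]).T @ X + rho * np.eye(p)
        s *= p / np.trace(s)
        if np.linalg.norm(s - sigma, "fro") < tol:
            sigma = s
            break
        sigma = s
    return sigma
```

The ρI term keeps every eigenvalue bounded away from zero, which is what makes the estimate usable when n is not much larger than p.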

  16. Regularized Statistical Analysis of Anatomy

    DEFF Research Database (Denmark)

    Sjöstrand, Karl

    2007-01-01

    This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....

  17. Considerations for the independent reaction times and step-by-step methods for radiation chemistry simulations

    Science.gov (United States)

    Plante, Ianik; Devroye, Luc

    2017-10-01

Ionizing radiation interacts with the water molecules of tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H2, H2O2, and e−aq. After their creation, these species diffuse and may chemically react with neighboring species and with the molecules of the medium. Therefore radiation chemistry is of great importance in radiation biology. As the chemical species are not distributed homogeneously, conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. In practice, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, a very fast technique to calculate radiochemical yields, which does not, however, calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are computationally expensive. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms of the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for the modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
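
As a rough illustration of the step-by-step idea (not the GFDE sampling used in RITRACKS), the sketch below propagates two particles with free Brownian steps of a fixed time step and declares a reaction when their separation falls below a reaction radius; all parameter values are placeholders, not the paper's:

```python
import numpy as np

def sbs_two_particles(d1, d2, r_react, dt=1e-12, n_steps=10000, seed=0):
    """Step-by-step diffusion of two particles; returns (reacted, elapsed time).

    d1, d2: diffusion coefficients (m^2/s); r_react: reaction radius (m).
    Each coordinate takes a Gaussian step of std sqrt(2*D*dt); a reaction is
    declared as soon as the separation drops below r_react. This distance
    check is a crude stand-in for sampling the exact Green's function of the
    diffusion equation, which the SBS codes in the paper do.
    """
    rng = np.random.default_rng(seed)
    p1 = np.zeros(3)
    p2 = np.array([1e-9, 0.0, 0.0])  # initial 1 nm separation (placeholder)
    for step in range(1, n_steps + 1):
        p1 += rng.normal(0.0, np.sqrt(2 * d1 * dt), 3)
        p2 += rng.normal(0.0, np.sqrt(2 * d2 * dt), 3)
        if np.linalg.norm(p1 - p2) < r_react:
            return True, step * dt
    return False, n_steps * dt
```

Unlike IRT, this scheme keeps the full particle positions at every time step, which is exactly the extra information the abstract credits the SBS method with providing.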

  18. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cogen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  19. Periodic vortex pinning by regular structures in Nb thin films: magnetic vs. structural effects

    Science.gov (United States)

    Montero, Maria Isabel; Jonsson-Akerman, B. Johan; Schuller, Ivan K.

    2001-03-01

The defects present in a superconducting material can lead to a great variety of static and dynamic vortex phases. In particular, the interaction of the vortex lattice with regular arrays of pinning centers such as holes or magnetic dots gives rise to commensurability effects. These commensurability effects can be observed in the magnetoresistance and in the dependence of the critical current on the applied field. In recent years, experimental results have shown that the periodic pinning effect depends on the properties of the vortex lattice (i.e. vortex-vortex interactions, elastic energy and vortex velocity) and also on the dot characteristics (i.e. dot size, distance between dots, magnetic character of the dot material, etc). However, there is still no good understanding of the nature of the main pinning mechanisms of the magnetic dots. To clarify this important issue, we have studied and compared the periodic pinning effects in Nb films with rectangular arrays of Ni, Co and Fe dots, as well as the pinning effects in a Nb film deposited on a hole-patterned substrate without any magnetic material. We will discuss the differences in pinning energies arising from magnetic effects as compared to structural effects of the superconducting film. This work was supported by NSF and DOE. M.I. Montero acknowledges postdoctoral fellowship by the Secretaria de Estado de Educacion y Universidades (Spain).

  20. Comparison of structural re-organisations observed on pre-patterned vicinal Si(1 1 1) and Si(1 0 0) surfaces during heat treatment

    International Nuclear Information System (INIS)

    Kraus, A.; Neddermeyer, H.; Wulfhekel, W.; Sander, D.; Maroutian, T.; Dulot, F.; Martinez-Gil, A.; Hanbuecken, M.

    2004-01-01

The creation of distinct, periodically structured vicinal Si(1 1 1) and (1 0 0) substrates has been studied using scanning tunnelling microscopy at various temperatures. The vicinal Si(1 1 1) and (1 0 0) surfaces transform under heat treatment in a self-organised way into flat and stepped regions. Optical and electron beam lithography are used to produce a regular hole pattern on the surfaces, which interferes with the temperature-driven morphological changes. The step motions are strongly influenced by this pre-patterning. Pre-patterned Si(1 1 1) surfaces transform into regular one-dimensional (1D) and two-dimensional (2D) morphologies, which consist of terraces and arrangements of step bunches and facets. On pre-patterned Si(1 0 0) substrates, different re-organisations were observed, where checkerboard-like 2D structures are obtained.

  1. Move-step structures of literature Ph.D. theses in the Japanese and UK higher education

    Directory of Open Access Journals (Sweden)

    Masumi Ono

    2017-02-01

Full Text Available This study investigates the move-step structures of Japanese and English introductory chapters of literature Ph.D. theses and the perceptions of Ph.D. supervisors in the Japanese and UK higher education contexts. In this study, 51 Japanese and 48 English introductory chapters of literature Ph.D. theses written by first-language writers of Japanese or English were collected from three Japanese and three British universities. Genre analysis of the 99 introductory chapters was conducted using a revised "Create a Research Space" (CARS) model (Swales, 1990, 2004). Semi-structured interviews were also carried out with seven Japanese supervisors and ten British supervisors. The findings showed that the introductory chapters of literature Ph.D. theses had 13 move-specific steps and five move-independent steps, each of which presented different cyclical patterns, indicating cross-cultural similarities and differences between the two language groups. The perceptions of supervisors varied in terms of the importance and the sequence of individual steps in the introductory chapters. Based on the textual and interview analyses, a discipline-oriented Open-CARS model is proposed for pedagogical purposes of teaching and writing about this genre in Japanese or English in the field of literature and related fields.

  2. Topology and Control of Transformerless High Voltage Grid-connected PV System Based on Cascade Step-up Structure

    DEFF Research Database (Denmark)

    Yang, Zilong; Wang, Zhe; Zhang, Ying

    2017-01-01

    -up structure, instead of applying line-frequency step-up transformer, is proposed to connect PV directly to the 10 kV medium voltage grid. This series-connected step-up PV system integrates with multiple functions, including separated maximum power point tracking (MPPT), centralized energy storage, power...

  3. A quadratically regularized functional canonical correlation analysis for identifying the global structure of pleiotropy with NGS data.

    Science.gov (United States)

    Lin, Nan; Zhu, Yun; Fan, Ruzong; Xiong, Momiao

    2017-10-01

Investigating the pleiotropic effects of genetic variants can increase statistical power, provide important information for a deep understanding of the complex genetic structures of disease, and offer powerful tools for designing effective treatments with fewer side effects. However, the current multiple-phenotype association analysis paradigm lacks breadth (the number of phenotypes and genetic variants jointly analyzed at the same time) and depth (the hierarchical structure of phenotypes and genotypes). A key issue for high dimensional pleiotropic analysis is to effectively extract informative internal representations and features from high dimensional genotype and phenotype data. To explore correlation information of genetic variants, effectively reduce data dimensions, and overcome critical barriers in advancing the development of novel statistical methods and computational algorithms for genetic pleiotropic analysis, we propose a new statistical method, referred to as quadratically regularized functional CCA (QRFCCA), for association analysis, which combines three approaches: (1) quadratically regularized matrix factorization, (2) functional data analysis and (3) canonical correlation analysis (CCA). Large-scale simulations show that QRFCCA has a much higher power than the ten competing statistics while retaining appropriate type I error rates. To further evaluate performance, QRFCCA and the ten other statistics are applied to the whole genome sequencing dataset from the TwinsUK study. We identify a total of 79 genes with rare variants and 67 genes with common variants significantly associated with the 46 traits using QRFCCA. The results show that QRFCCA substantially outperforms the ten other statistics.
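
The CCA building block with a quadratic (ridge) regularizer can be sketched as follows; this is a generic regularized CCA for the first canonical pair, not the full QRFCCA pipeline (no functional-data smoothing or matrix-factorization stage):

```python
import numpy as np
from scipy.linalg import eigh

def regularized_cca(X, Y, lam=1e-3):
    """First canonical correlation with ridge-regularized covariance blocks.

    Solves Sxy Syy^{-1} Syx a = rho^2 Sxx a as a symmetric-definite
    generalized eigenproblem; lam * I keeps Sxx and Syy well conditioned.
    """
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    sxx = X.T @ X / n + lam * np.eye(X.shape[1])
    syy = Y.T @ Y / n + lam * np.eye(Y.shape[1])
    sxy = X.T @ Y / n
    m = sxy @ np.linalg.solve(syy, sxy.T)   # symmetric by construction
    vals, _ = eigh(m, sxx)                  # generalized eigenvalues = rho^2
    return float(np.sqrt(max(vals[-1], 0.0)))
```

When Y is an exact linear image of X the leading canonical correlation approaches 1 (slightly damped by λ), while for independent blocks it stays near the sampling-noise floor.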

  4. Stepping to the Beat: Feasibility and Potential Efficacy of a Home-Based Auditory-Cued Step Training Program in Chronic Stroke

    Directory of Open Access Journals (Sweden)

    Rachel L. Wright

    2017-08-01

Full Text Available Background: Hemiparesis after stroke typically results in a reduced walking speed, an asymmetrical gait pattern and a reduced ability to make gait adjustments. The purpose of this pilot study was to investigate the feasibility and preliminary efficacy of home-based training involving auditory cueing of stepping in place. Methods: Twelve community-dwelling participants with chronic hemiparesis completed two 3-week blocks of home-based stepping to music overlaid with an auditory metronome. The tempo of the metronome was increased 5% each week. One 3-week block used a regular metronome, whereas the other 3-week block had phase-shift perturbations randomly inserted to cue stepping adjustments. Results: All participants reported that they enjoyed training, with 75% completing all training blocks. No adverse events were reported. Walking speed, Timed Up and Go (TUG) time and Dynamic Gait Index (DGI) scores (median [inter-quartile range]) significantly improved between baseline (speed = 0.61 [0.32, 0.85] m⋅s−1; TUG = 20.0 [16.0, 39.9] s; DGI = 14.5 [11.3, 15.8]) and post stepping training (speed = 0.76 [0.39, 1.03] m⋅s−1; TUG = 16.3 [13.3, 35.1] s; DGI = 16.0 [14.0, 19.0]), and improvements were maintained at follow-up (speed = 0.75 [0.41, 1.03] m⋅s−1; TUG = 16.5 [12.9, 34.1] s; DGI = 16.5 [13.5, 19.8]). Conclusion: This pilot study suggests that auditory-cued stepping conducted at home was feasible and well-tolerated by participants post-stroke, with improvements in walking and functional mobility. No differences were detected between regular and phase-shift training with the metronome at each assessment point.

  5. Analytic stochastic regularization and gauge invariance

    International Nuclear Information System (INIS)

    Abdalla, E.; Gomes, M.; Lima-Santos, A.

    1986-05-01

A proof that analytic stochastic regularization breaks gauge invariance is presented. This is done by an explicit one-loop calculation of the vacuum polarization tensor in scalar electrodynamics, which turns out not to be transverse. The counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization are also analysed. (Author) [pt

  6. Further investigation on "A multiplicative regularization for force reconstruction"

    Science.gov (United States)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  7. Structural Studies of Silver Nanoparticles Obtained Through Single-Step Green Synthesis

    Science.gov (United States)

    Prasad Peddi, Siva; Abdallah Sadeh, Bilal

    2015-10-01

Green synthesis of silver nanoparticles (AgNPs) has been the most prominent research topic among metallic nanoparticles for over a decade and a half, owing both to the simplicity of preparation and to the applicability of biological species, with extensive applications in medicine and biotechnology, to reduce and trap the particles. The current article uses Eclipta prostrata leaf extract as the biological species to cap the AgNPs through a single-step process. The characterization data obtained were used to analyze the sample structure. The article examines their shape, size and lattice parameters, and proposes a general scheme and a mathematical model for the analysis of their dependence. The data of the synthesized AgNPs have been used to advantage through the introduction of a structural shape factor for the crystalline nanoparticles. The structural properties of the AgNPs proposed and evaluated through the theoretical model were consistent with the experimental results. This approach opens the way for structural studies of ultrafine particles prepared using biological methods.

  8. Convex nonnegative matrix factorization with manifold regularization.

    Science.gov (United States)

    Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong

    2015-03-01

    Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
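
For the manifold-regularized ingredient, the classical graph-regularized NMF multiplicative updates of Cai et al. (on which GCNMF builds, adding the convex-combination constraint on the basis) can be sketched as follows; the ring-graph affinity and parameter values below are illustrative:

```python
import numpy as np

def gnmf(V, A, k, lam=1.0, n_iter=200, seed=0):
    """Graph-regularized NMF: V ≈ W H with graph affinity A on the columns of V.

    Multiplicative updates for ||V - WH||_F^2 + lam * tr(H L H^T), L = D - A:
    the Laplacian term pulls the encodings of neighboring samples together.
    Returns W (m x k) and H (k x n), both nonnegative.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    D = np.diag(A.sum(axis=1))
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ V + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H
```

Because every update is a ratio of nonnegative quantities, nonnegativity of W and H is preserved automatically at each iteration.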

  9. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
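
For small dense problems, general-form Tikhonov, min ||Ax − b||² + λ²||Lx||², can be solved directly by stacking the operator and the scaled regularization matrix; this is a useful reference point for the paper's iterative Golub-Kahan-type method, not the method itself:

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Direct solution of min ||A x - b||^2 + lam^2 ||L x||^2 via the stacked
    least-squares problem [A; lam*L] x ≈ [b; 0]. Fine for small dense
    problems; large-scale discrete ill-posed problems need iterative methods."""
    K = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x
```

The stacked form is algebraically identical to the normal equations (AᵀA + λ²LᵀL)x = Aᵀb but numerically better behaved, since it avoids squaring the condition number.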

  10. Regular Topographic Patterning of Karst Depressions Suggests Landscape Self-Organization

    Science.gov (United States)

    Quintero, C.; Cohen, M. J.

    2017-12-01

Thousands of wetland depressions that are commonly host to cypress domes dot the sub-tropical limestone landscape of South Florida. The origin of these depression features has been the topic of debate. Here we build upon the work of previous surveyors of this landscape to analyze the morphology and spatial distribution of depressions on the Big Cypress landscape. We took advantage of the emergence and availability of high resolution Light Detection and Ranging (LiDAR) technology and ArcMap GIS software to analyze the structure and regularity of landscape features with methods unavailable to past surveyors. Six 2.25 km2 LiDAR plots within the preserve were selected for remote analysis and one depression feature within each plot was selected for more intensive sediment and water depth surveying. Depression features on the Big Cypress landscape were found to show strong evidence of regular spatial patterning. Periodicity, a feature of regularly patterned landscapes, is apparent in both variograms and radial spectrum analyses. Size class distributions of the identified features indicate constrained feature sizes, while Average Nearest Neighbor analyses support the inference of dispersed features with non-random spacing. The presence of regular patterning on this landscape strongly implies biotic reinforcement of spatial structure by way of scale-dependent feedbacks. In characterizing the structure of this wetland landscape we add to the growing body of work dedicated to documenting how water, life and geology may interact to shape the natural landscapes we see today.
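
Dispersion versus randomness in such point patterns is commonly screened with the Clark-Evans nearest-neighbor index (R ≈ 1 random, R > 1 dispersed/regular, R < 1 clustered); a sketch ignoring edge corrections, and not the variogram/radial-spectrum analysis the study actually uses:

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans(points, area):
    """Clark-Evans index: observed mean nearest-neighbor distance divided by
    its expectation 0.5/sqrt(density) under complete spatial randomness.
    R ~ 1: random; R > 1: dispersed/regular; R < 1: clustered.
    (Edge effects are ignored in this sketch.)"""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)  # k=2: nearest neighbor besides the point itself
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(len(points) / area)
    return observed / expected

# A square grid is strongly dispersed: R approaches 2.
grid = np.array([[i, j] for i in range(10) for j in range(10)], dtype=float)
```

For a unit-spacing square lattice the observed nearest-neighbor distance is 1 while the random expectation is 0.5, so R ≈ 2, near the theoretical maximum (≈ 2.15 for a hexagonal lattice).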

  11. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
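
For instance, the kind of idiom such a reference covers (character classes, quantifiers, named groups, alternation) looks like this in Python's re module:

```python
import re

log = "2017-01-01 ERROR disk full; 2017-01-02 INFO ok; 2017-01-03 ERROR timeout"

# Named groups + alternation: pull out the date and severity of each event.
event = re.compile(r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<level>ERROR|INFO) (?P<msg>[^;]+)")
errors = [m.group("date") for m in event.finditer(log) if m.group("level") == "ERROR"]
# errors == ['2017-01-01', '2017-01-03']
```

The same pattern ports almost verbatim to the other engines the book covers (PCRE, Java, .NET, JavaScript), which is exactly why a cross-flavor reference is useful.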

  12. Recurrence of random walks with long-range steps generated by fractional Laplacian matrices on regular networks and simple cubic lattices

    Science.gov (United States)

    Michelitsch, T. M.; Collet, B. A.; Riascos, A. P.; Nowakowski, A. F.; Nicolleau, F. C. G. A.

    2017-12-01

We analyze a Markovian random walk strategy on undirected regular networks involving power matrix functions of the type L^{α/2}, where L indicates a 'simple' Laplacian matrix. We refer to such walks as 'fractional random walks', with admissible interval 0 < α ≤ 2. From these analytical results we establish a generalization of Polya's recurrence theorem for fractional random walks on d-dimensional infinite lattices: the fractional random walk is transient for lattice dimensions d > α (recurrent for d ≤ α). As a consequence, for 0 < α < 1 the fractional random walk is transient for all lattice dimensions d = 1, 2, …, and in the range 1 ≤ α < 2 the fractional random walk is transient only for lattice dimensions d ≥ 3. The generalization of Polya's recurrence theorem remains valid for the class of random walks with Lévy flight asymptotics for long-range steps. We also analyze the mean first passage probabilities, mean residence times, mean first passage times and global mean first passage times (Kemeny constant) for the fractional random walk. For an infinite 1D lattice (infinite ring) we obtain results for the transient regime 0 < α < 1. The transient behavior of the fractional random walk is generated by the non-diagonality of the fractional Laplacian matrix, with Lévy-type heavy-tailed inverse power-law decay for the probability of long-range moves. This non-local and asymptotic behavior of the fractional random walk introduces small-world properties, with the emergence of Lévy flights on large (infinite) lattices.
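
A sketch of the construction on a finite ring, assuming the transition rule π_ij = −(L^{α/2})_ij / (L^{α/2})_ii for i ≠ j (the off-diagonal entries of L^{α/2} are non-positive for 0 < α ≤ 2, so each row normalizes to a probability distribution):

```python
import numpy as np

def fractional_walk_matrix(n_nodes, alpha):
    """Transition matrix of a fractional random walk on a ring of n_nodes.

    L is the cycle-graph Laplacian; its matrix power L^{alpha/2} is computed
    spectrally. Off-diagonal entries of L^{alpha/2} are <= 0 for 0 < alpha <= 2,
    and its rows sum to zero, so P below is row-stochastic.
    """
    # Cycle-graph Laplacian: 2 on the diagonal, -1 to each ring neighbor.
    A = np.roll(np.eye(n_nodes), 1, axis=1) + np.roll(np.eye(n_nodes), -1, axis=1)
    L = 2 * np.eye(n_nodes) - A
    w, U = np.linalg.eigh(L)
    Lfrac = U @ np.diag(np.clip(w, 0, None) ** (alpha / 2)) @ U.T
    P = -Lfrac / np.diag(Lfrac)[:, None]
    np.fill_diagonal(P, 0.0)
    return P
```

For α = 2 this reduces to the ordinary nearest-neighbor walk on the ring, while for α < 2 every pair of nodes is connected with a heavy-tailed (Lévy-type) jump probability.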

  13. Scanning moiré and spatial-offset phase-stepping for surface inspection of structures

    Science.gov (United States)

    Yoneyama, S.; Morimoto, Y.; Fujigaki, M.; Ikeda, Y.

    2005-06-01

    In order to develop a high-speed and accurate surface inspection system of structures such as tunnels, a new surface profile measurement method using linear array sensors is studied. The sinusoidal grating is projected on a structure surface. Then, the deformed grating is scanned by linear array sensors that move together with the grating projector. The phase of the grating is analyzed by a spatial offset phase-stepping method to perform accurate measurement. The surface profile measurements of the wall with bricks and the concrete surface of a structure are demonstrated using the proposed method. The change of geometry or fabric of structures and the defects on structure surfaces can be detected by the proposed method. It is expected that the surface profile inspection system of tunnels measuring from a running train can be constructed based on the proposed method.
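
The spatial-offset variant draws its phase-shifted samples from neighboring sensor positions rather than from successive exposures, but the recovery step is the classic phase-stepping formula; a sketch assuming four evenly spaced π/2 offsets, I_n = a + b·cos(φ + nπ/2), which gives φ = atan2(I₃ − I₁, I₀ − I₂):

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Recover the phase of a sinusoidal fringe from four samples offset by
    pi/2 each: I_n = a + b*cos(phi + n*pi/2). The background a and
    modulation b cancel out of the differences."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic fringe: constant background a, modulation b, known phase map.
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 50)
a, b = 3.0, 1.5
samples = [a + b * np.cos(phi + n * np.pi / 2) for n in range(4)]
phi_hat = four_step_phase(*samples)
```

Because both the background intensity and the modulation depth cancel, the recovered phase (and hence the surface profile) is insensitive to illumination variations across the inspected surface.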

  14. Total variation regularization in measurement and image space for PET reconstruction

    KAUST Repository

    Burger, M

    2014-09-18

    © 2014 IOP Publishing Ltd. The aim of this paper is to test and analyse a novel technique for image reconstruction in positron emission tomography, which is based on (total variation) regularization on both the image space and the projection space. We formulate our variational problem considering both total variation penalty terms on the image and on an idealized sinogram to be reconstructed from a given Poisson distributed noisy sinogram. We prove existence, uniqueness and stability results for the proposed model and provide some analytical insight into the structures favoured by joint regularization. For the numerical solution of the corresponding discretized problem we employ the split Bregman algorithm and extensively test the approach in comparison to standard total variation regularization on the image. The numerical results show that an additional penalty on the sinogram performs better on reconstructing images with thin structures.
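
The effect of the TV penalty alone (without the joint sinogram term or the split Bregman solver used in the paper) can be illustrated on a 1D signal with a smoothed-TV gradient descent; λ, the smoothing parameter ε and the step size below are illustrative choices:

```python
import numpy as np

def tv_denoise_1d(f, lam=0.2, step=0.05, n_iter=1000, eps=0.1):
    """Minimize 0.5*||u - f||^2 + lam * sum_i sqrt((Du)_i^2 + eps^2) by
    gradient descent, a smoothed stand-in for total variation regularization.
    D is the forward-difference operator; eps rounds off the kink in |.|."""
    u = f.copy()
    for _ in range(n_iter):
        du = np.diff(u)
        g = du / np.sqrt(du**2 + eps**2)  # derivative of the smoothed |Du|
        # D^T g: adjoint of the forward-difference operator.
        div = np.concatenate([[-g[0]], g[:-1] - g[1:], [g[-1]]])
        u -= step * ((u - f) + lam * div)
    return u
```

The TV term penalizes the total amount of oscillation rather than its smoothness, which is why it suppresses noise while keeping the jumps of a piecewise-constant signal, the "thin structures" behavior the paper exploits.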

  15. Wavelet domain image restoration with adaptive edge-preserving regularization.

    Science.gov (United States)

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
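
The flavor of wavelet-domain regularization can be seen in a one-level Haar transform with soft thresholding of the detail band; the paper's scheme adapts the regularization to local scale and orientation, which this global-threshold sketch does not:

```python
import numpy as np

def haar_soft_denoise(x, thresh):
    """One-level Haar wavelet shrinkage on a signal of even length: keep the
    smooth (approximation) band, soft-threshold the detail band. Because
    edges concentrate into few large detail coefficients, thresholding
    removes noise while preserving jumps better than linear smoothing."""
    s2 = np.sqrt(2.0)
    approx = (x[0::2] + x[1::2]) / s2
    detail = (x[0::2] - x[1::2]) / s2
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    # Inverse orthonormal Haar transform.
    y = np.empty_like(x)
    y[0::2] = (approx + detail) / s2
    y[1::2] = (approx - detail) / s2
    return y
```

With a zero threshold the transform pair reconstructs the signal exactly (it is orthonormal); with a threshold near the noise level, the detail-band noise is removed while edges survive.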

  16. Multiple graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-10-01

Non-negative matrix factorization (NMF) has been widely used as a data representation method based on components. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) was proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF variant, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. The factorization matrices and the linear combination coefficients of the graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, thus resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.

  17. Crowdsourcing step-by-step information extraction to enhance existing how-to videos

    OpenAIRE

    Nguyen, Phu Tran; Weir, Sarah; Guo, Philip J.; Miller, Robert C.; Gajos, Krzysztof Z.; Kim, Ju Ho

    2014-01-01

    Millions of learners today use how-to videos to master new skills in a variety of domains. But browsing such videos is often tedious and inefficient because video player interfaces are not optimized for the unique step-by-step structure of such videos. This research aims to improve the learning experience of existing how-to videos with step-by-step annotations. We first performed a formative study to verify that annotations are actually useful to learners. We created ToolScape, an interac...

  18. Non destructive testing of heterogeneous structures with a step frequency radar

    International Nuclear Information System (INIS)

    Cattin, V.; Chaillout, J.J.

    1998-01-01

Ground-penetrating radar has shown increasing potential for the diagnosis of soils and concrete, but the realisation of such a system and the interpretation of the data produced by this technique require a clear understanding of the electromagnetic processes that arise between media and waves. In this paper, the performance of a step-frequency radar is studied as a non-destructive technique for evaluating different heterogeneous laboratory-scale structures. Some critical points are examined, such as material properties, antenna effects and the image reconstruction algorithm, to determine its ability to distinguish the smallest regions of interest.

  19. Analytic stochastic regularization and gauge theories

    International Nuclear Information System (INIS)

    Abdalla, E.; Gomes, M.; Lima-Santos, A.

    1987-04-01

    We prove that analytic stochastic regularization breaks gauge invariance. This is done by an explicit one-loop calculation of the two-, three- and four-point vertex functions of the gluon field in scalar chromodynamics, which turn out not to be gauge invariant. We analyse the counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization. (author)

  20. Low-Complexity Regularization Algorithms for Image Deblurring

    KAUST Repository

    Alanazi, Abdulrahman

    2016-11-01

    Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low- and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, we developed algorithms that also work...
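The basic ingredient above, a regularized least-squares solve with a data-driven choice of the regularization parameter, can be sketched generically. The thesis' specific parameter rule and bootstrap procedure are not reproduced here; this sketch instead picks the parameter by minimizing the standard generalized cross-validation (GCV) score over a grid, and the blur matrix and noise level are made up for illustration.

```python
import numpy as np

def rls_solve(A, b, lam):
    """Regularized least-squares (ridge) solution via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam)) * (U.T @ b))

def gcv_lambda(A, b, grid):
    """Pick the regularization parameter minimizing the GCV score
    G(lam) = ||residual||^2 / trace(I - influence matrix)^2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    best, best_lam = np.inf, grid[0]
    for lam in grid:
        filt = lam / (s**2 + lam)            # residual filter factors
        score = np.sum((filt * beta) ** 2) / np.sum(filt) ** 2
        if score < best:
            best, best_lam = score, lam
    return best_lam
```

A near-zero parameter amplifies noise through the small singular values of the blur matrix; the GCV choice avoids that without knowing the noise level.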

  1. Metal-assisted etch combined with regularizing etch

    Energy Technology Data Exchange (ETDEWEB)

    Yim, Joanne; Miller, Jeff; Jura, Michael; Black, Marcie R.; Forziati, Joanne; Murphy, Brian; Magliozzi, Lauren

    2018-03-06

    In an aspect of the disclosure, a process for forming nanostructuring on a silicon-containing substrate is provided. The process comprises (a) performing metal-assisted chemical etching on the substrate, (b) performing a clean, including partial or total removal of the metal used to assist the chemical etch, and (c) performing an isotropic or substantially isotropic chemical etch subsequently to the metal-assisted chemical etch of step (a). In an alternative aspect of the disclosure, the process comprises (a) performing metal-assisted chemical etching on the substrate, (b) cleaning the substrate, including removal of some or all of the assisting metal, and (c) performing a chemical etch which results in regularized openings in the silicon substrate.

  2. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  3. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...
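The coinductive reading of containment can be made concrete with Brzozowski derivatives: L(E) ⊆ L(F) holds iff every reachable pair of derivatives preserves nullability. The sketch below is the language-level decision procedure only, not the paper's axiomatization or its coercion interpretation; the tuple encoding of regular expressions is ad hoc.

```python
# Regular expressions as tuples: ('0',) empty set, ('1',) empty string,
# ('c', a) literal, ('+', r, s) alternation, ('.', r, s) concatenation,
# ('*', r) Kleene star.
E, EPS = ('0',), ('1',)
def ch(a): return ('c', a)
def alt(r, s):
    # Smart constructors keep derivatives small so the search terminates
    if r == E: return s
    if s == E or r == s: return r
    return ('+', r, s)
def cat(r, s):
    if r == E or s == E: return E
    if r == EPS: return s
    if s == EPS: return r
    return ('.', r, s)
def star(r):
    if r in (E, EPS): return EPS
    return r if r[0] == '*' else ('*', r)

def nullable(r):
    t = r[0]
    if t in ('1', '*'): return True
    if t in ('0', 'c'): return False
    if t == '+': return nullable(r[1]) or nullable(r[2])
    return nullable(r[1]) and nullable(r[2])

def deriv(r, a):
    """Brzozowski derivative: { w | a·w in L(r) }."""
    t = r[0]
    if t in ('0', '1'): return E
    if t == 'c': return EPS if r[1] == a else E
    if t == '+': return alt(deriv(r[1], a), deriv(r[2], a))
    if t == '*': return cat(deriv(r[1], a), r)
    d = cat(deriv(r[1], a), r[2])
    return alt(d, deriv(r[2], a)) if nullable(r[1]) else d

def alphabet(r):
    t = r[0]
    if t == 'c': return {r[1]}
    if t in ('0', '1'): return set()
    if t == '*': return alphabet(r[1])
    return alphabet(r[1]) | alphabet(r[2])

def contains(r, s):
    """True iff L(r) is a subset of L(s), by coinduction over derivative pairs."""
    sigma = alphabet(r) | alphabet(s)
    seen, todo = set(), [(r, s)]
    while todo:
        pair = todo.pop()
        if pair in seen:
            continue
        seen.add(pair)
        pr, ps = pair
        if nullable(pr) and not nullable(ps):
            return False           # a witness string lies in L(r) \ L(s)
        todo.extend((deriv(pr, a), deriv(ps, a)) for a in sigma)
    return True
```

A containment proof in the paper's calculus corresponds to the (finite) set of derivative pairs this loop visits.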

  4. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) First do all algebra exactly as in D = 4; (2) Then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed
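Rule (2) means the spinor and Lorentz algebra is carried out at D = 4, while momentum integrals use the standard dimensionally regularized formulas. A representative Euclidean master integral (a textbook identity, not specific to this paper) is

```latex
\int \frac{d^{D}k}{(2\pi)^{D}} \, \frac{1}{(k^{2}+m^{2})^{n}}
  = \frac{\Gamma\!\left(n-\tfrac{D}{2}\right)}{(4\pi)^{D/2}\,\Gamma(n)} \,
    (m^{2})^{\tfrac{D}{2}-n},
```

whose Gamma-function poles as D → 4 (e.g. for n = 2) expose the logarithmic divergences that the continuation in D regulates.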

  5. Adaptive discretizations for the choice of a Tikhonov regularization parameter in nonlinear inverse problems

    International Nuclear Information System (INIS)

    Kaltenbacher, Barbara; Kirchner, Alana; Vexler, Boris

    2011-01-01

    Parameter identification problems for partial differential equations usually lead to nonlinear inverse problems. A typical property of such problems is their instability, which requires regularization techniques, like, e.g., Tikhonov regularization. The main focus of this paper will be on efficient methods for determining a suitable regularization parameter by using adaptive finite element discretizations based on goal-oriented error estimators. A well-established method for the determination of a regularization parameter is the discrepancy principle, where the residual norm, considered as a function i of the regularization parameter, should equal an appropriate multiple of the noise level. We suggest solving the resulting scalar nonlinear equation by an inexact Newton method, where in each iteration step, a regularized problem is solved at a different discretization level. The proposed algorithm is an extension of the method suggested in Griesbaum A et al (2008 Inverse Problems 24 025025) for linear inverse problems, where goal-oriented error estimators for i and its derivative are used for adaptive refinement strategies in order to keep the discretization level as coarse as possible to save computational effort but fine enough to guarantee global convergence of the inexact Newton method. This concept leads to a highly efficient method for determining the Tikhonov regularization parameter for nonlinear ill-posed problems. Moreover, we prove that with the so-obtained regularization parameter and an also adaptively discretized Tikhonov minimizer, usual convergence and regularization results from the continuous setting can be recovered. As a matter of fact, it is shown that it suffices to use stationary points of the Tikhonov functional. The efficiency of the proposed method is demonstrated by means of numerical experiments. (paper)
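The scalar equation of the discrepancy principle can be illustrated on a linear toy problem. Here plain bisection on the monotone residual function stands in for the paper's adaptive inexact Newton method, and all problem data are made up for illustration.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov-regularized solution x_alpha via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s**2 + alpha)) * (U.T @ b))

def discrepancy_alpha(A, b, delta, tau=1.1, lo=1e-14, hi=1e6, iters=200):
    """Find alpha with ||A x_alpha - b|| = tau * delta.

    The residual norm is monotonically increasing in alpha, so bisection
    on log(alpha) converges; the paper instead applies an inexact Newton
    method on this scalar equation with adaptive discretizations.
    """
    target = tau * delta
    for _ in range(iters):
        mid = np.sqrt(lo * hi)            # geometric midpoint
        r = np.linalg.norm(A @ tikhonov(A, b, mid) - b)
        if r < target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

The returned parameter makes the data misfit match the noise level up to the safety factor tau, which is the defining property of the discrepancy principle.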

  6. One-Step Solvent Evaporation-Assisted 3D Printing of Piezoelectric PVDF Nanocomposite Structures.

    Science.gov (United States)

    Bodkhe, Sampada; Turcot, Gabrielle; Gosselin, Frederick P; Therriault, Daniel

    2017-06-21

    Development of a 3D printable material system possessing inherent piezoelectric properties to fabricate integrable sensors in a single-step printing process without poling is of importance to the creation of a wide variety of smart structures. Here, we study the effect of addition of barium titanate nanoparticles in nucleating piezoelectric β-polymorph in 3D printable polyvinylidene fluoride (PVDF) and fabrication of the layer-by-layer and self-supporting piezoelectric structures on a micro- to millimeter scale by solvent evaporation-assisted 3D printing at room temperature. The nanocomposite formulation obtained after a comprehensive investigation of composition and processing techniques possesses a piezoelectric coefficient, d31, of 18 pC/N, which is comparable to that of typical poled and stretched commercial PVDF film sensors. A 3D contact sensor that generates up to 4 V upon gentle finger taps demonstrates the efficacy of the fabrication technique. Our one-step 3D printing of piezoelectric nanocomposites can form ready-to-use, complex-shaped, flexible, and lightweight piezoelectric devices. When combined with other 3D printable materials, they could serve as stand-alone or embedded sensors in aerospace, biomedicine, and robotic applications.

  7. Constrained least squares regularization in PET

    International Nuclear Information System (INIS)

    Choudhury, K.R.; O'Sullivan, F.O.

    1996-01-01

    Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood, at a fraction of the computational effort
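The non-iterative flavor of such estimators can be illustrated by the classical constrained least-squares deconvolution filter, computed in one shot in the Fourier domain. This is a generic sketch of the "no iterative projection or backprojection steps" idea, not the authors' tomography-specific algorithm; the kernel and regularization weight below are illustrative.

```python
import numpy as np

def cls_deconvolve(y, h, lam=1e-2):
    """One-shot constrained least-squares deconvolution (circular blur).

    Minimizes ||h * x - y||^2 + lam * ||c * x||^2 with c the discrete
    Laplacian (a smoothness constraint), solved directly in the Fourier
    domain with no iteration.
    """
    n = len(y)
    H = np.fft.fft(h, n)
    C = np.fft.fft([-2.0, 1.0] + [0.0] * (n - 3) + [1.0], n)  # circular Laplacian
    X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam * np.abs(C) ** 2)
    return np.real(np.fft.ifft(X))
```

The Laplacian penalty suppresses the noise-amplifying frequencies where the blur response is nearly zero, which is what keeps the direct inverse stable.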

  8. Structural Elucidation and Biological Activity of a Highly Regular Fucosylated Glycosaminoglycan from the Edible Sea Cucumber Stichopus herrmanni.

    Science.gov (United States)

    Li, Xiaomei; Luo, Lan; Cai, Ying; Yang, Wenjiao; Lin, Lisha; Li, Zi; Gao, Na; Purcell, Steven W; Wu, Mingyi; Zhao, Jinhua

    2017-10-25

    Edible sea cucumbers are widely used as a health food and medicine. A fucosylated glycosaminoglycan (FG) was purified from the high-value sea cucumber Stichopus herrmanni. Its physicochemical properties and structure were analyzed and characterized by chemical and instrumental methods. Chemical analysis indicated that this FG with a molecular weight of ∼64 kDa is composed of N-acetyl-d-galactosamine, d-glucuronic acid (GlcA), and l-fucose. Structural analysis clarified that the FG contains the chondroitin sulfate E-like backbone, with mostly 2,4-di-O-sulfated (85%) and some 3,4-di-O-sulfated (10%) and 4-O-sulfated (5%) fucose side chains that link to the C3 position of GlcA. This FG is structurally highly regular and homogeneous, differing from the FGs of other sea cucumbers, for its sulfation patterns are simpler. Biological activity assays indicated that it is a strong anticoagulant, inhibiting thrombin and intrinsic factor Xase. Our results expand the knowledge on structural types of FG and illustrate its biological activity as a functional food material.

  9. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well-known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization, by external variables...

  10. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.
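In the crisp (non-neutrosophic) setting the underlying notions are easy to state: the degree of a vertex is the number of hyperedges containing it, and a hypergraph is regular when all degrees coincide. A minimal sketch of that crisp analogue follows; the neutrosophic definitions in the paper additionally weight vertices and edges by truth, indeterminacy and falsity memberships, which is not modeled here.

```python
def degrees(vertices, hyperedges):
    """Map each vertex to the number of hyperedges containing it."""
    return {v: sum(1 for e in hyperedges if v in e) for v in vertices}

def is_regular(vertices, hyperedges):
    """True iff every vertex has the same degree."""
    return len(set(degrees(vertices, hyperedges).values())) <= 1
```

For example, a 4-cycle viewed as a hypergraph is 2-regular, while a hypergraph mixing a 3-edge and a 2-edge generally is not.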

  11. Guangxi crustal structural evolution and the formation and distribution regularities of U-rich strata

    International Nuclear Information System (INIS)

    Kang Zili.

    1989-01-01

    Based on a summary of Guangxi's geotectonic features and evolutionary regularities, this paper discusses the occurrence features, formation conditions and time-space distribution regularities of various U-rich strata during the development of the geosyncline, platform and diwa stages. In particular, during the diwa stage all these U-rich strata might have been reworked to a certain degree, resulting in the mobilization of uranium and its subsequent enrichment to form polygenetic composite uranium ore deposits with stratabound features. This study will be helpful for prospecting in the region

  12. Void Structures in Regularly Patterned ZnO Nanorods Grown with the Hydrothermal Method

    Directory of Open Access Journals (Sweden)

    Yu-Feng Yao

    2014-01-01

    The void structures and related optical properties after thermal annealing with ambient oxygen in regularly patterned ZnO nanorod (NR) arrays grown with the hydrothermal method are studied. As the thermal annealing temperature increases, void distribution starts from the bottom and extends to the top of an NR in the vertical (c-axis) growth region. When the annealing temperature is higher than 400°C, void distribution spreads into the lateral (m-axis) growth region. Photoluminescence measurement shows that the ZnO band-edge emission, in contrast to defect emission in the yellow-red range, is the strongest under the n-ZnO NR process conditions of 0.003 M Ga-doping concentration and 300°C thermal annealing temperature with ambient oxygen. Energy dispersive X-ray spectroscopy data indicate that the concentration of hydroxyl groups in the vertical growth region is significantly higher than that in the lateral growth region. During thermal annealing, hydroxyl groups are desorbed from the NR, leaving anion vacancies that react with cation vacancies to form voids.

  13. A structured four-step curriculum in basic laparoscopy

    DEFF Research Database (Denmark)

    Strandbygaard, Jeanett; Bjerrum, Flemming; Maagaard, Mathilde

    2014-01-01

    The objective of this study was to develop a 4-step curriculum in basic laparoscopy consisting of validated modules integrating a cognitive component, a practical component and a procedural component.

  14. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.
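The effect the perturbation is designed to achieve, taming the small singular values of the model matrix, can be seen in a generic comparison between plain and regularized least squares on an ill-conditioned random system. The regularizer value below is arbitrary; the paper derives a near-optimal one from random matrix theory, which is not reproduced here.

```python
import numpy as np

def least_squares(A, y):
    """Ordinary least-squares estimate (min-norm solution)."""
    return np.linalg.lstsq(A, y, rcond=None)[0]

def regularized_ls(A, y, lam):
    """Ridge-type estimate; lam plays the role of the derived regularizer."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

On a well-conditioned system the two estimates nearly coincide; the gap opens up exactly when small singular values amplify the noise, which is the regime the proposed approach targets.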

  15. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman

    2017-11-02

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.

  16. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman; Ballal, Tarig; Masood, Mudassir; Al-Naffouri, Tareq Y.

    2017-01-01

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.

  17. Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2017-03-01

    With rapid urbanization, highly accurate and semantically rich 3D virtualization of building assets becomes more critical for supporting various applications, including urban planning, emergency response and location-based services. Many research efforts have been conducted to automatically reconstruct building models at city scale from remotely sensed data. However, developing a fully automated photogrammetric computer vision system enabling the massive generation of highly accurate building models still remains a challenging task. One of the most challenging tasks in 3D building model reconstruction is to regularize the noise introduced in the boundary of a building object retrieved from raw data without knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from given noisy information of building boundaries in a progressive manner. This study covers a full chain of 3D building modeling from low-level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method. The performance of the proposed method is tested over the International...

  18. Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data.

    Science.gov (United States)

    Jung, Jaewook; Jwa, Yoonseok; Sohn, Gunho

    2017-03-19

    With rapid urbanization, highly accurate and semantically rich 3D virtualization of building assets becomes more critical for supporting various applications, including urban planning, emergency response and location-based services. Many research efforts have been conducted to automatically reconstruct building models at city scale from remotely sensed data. However, developing a fully automated photogrammetric computer vision system enabling the massive generation of highly accurate building models still remains a challenging task. One of the most challenging tasks in 3D building model reconstruction is to regularize the noise introduced in the boundary of a building object retrieved from raw data without knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from given noisy information of building boundaries in a progressive manner. This study covers a full chain of 3D building modeling from low-level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method. The performance of the proposed method is tested over the International Society for...

  19. Step driven competitive epitaxial and self-limited growth of graphene on copper surface

    Directory of Open Access Journals (Sweden)

    Lili Fan

    2011-09-01

    The existence of surface steps was found to have a significant influence on the growth of graphene on copper via chemical vapor deposition. The two typical growth modes involved were found to be influenced by the step morphologies on the copper surface, which led to our proposed step-driven competitive growth mechanism. We also discovered a protective role of graphene in preserving steps on the copper surface. Our results showed that wide and high steps promoted epitaxial growth and yielded multilayer graphene domains with regular shape, while dense and low steps favored self-limited growth and led to large-area monolayer graphene films. We have demonstrated that controllable growth of graphene domains of specific shape and of large-area continuous graphene films is feasible.

  20. Application of the thermal step method to space charge measurements in inhomogeneous solid insulating structures: A theoretical approach

    International Nuclear Information System (INIS)

    Cernomorcenco, Andrei; Notingher, Petru Jr.

    2008-01-01

    The thermal step method is a nondestructive technique for determining electric charge distribution across solid insulating structures. It consists in measuring and analyzing a transient capacitive current due to the redistribution of influence charges when the sample is crossed by a thermal wave. This work concerns the application of the technique to inhomogeneous insulating structures. A general equation of the thermal step current appearing in such a sample is established. It is shown that this expression is close to the one corresponding to a homogeneous sample and allows using similar techniques for calculating electric field and charge distribution

  1. SparseBeads data: benchmarking sparsity-regularized computed tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer; Coban, Sophia B.; Lionheart, William R. B.

    2017-01-01

    A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking sparsity-regularized (SR) reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels...

  2. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....

  3. The Evolution of Frequency Distributions: Relating Regularization to Inductive Biases through Iterated Learning

    Science.gov (United States)

    Reali, Florencia; Griffiths, Thomas L.

    2009-01-01

    The regularization of linguistic structures by learners has played a key role in arguments for strong innate constraints on language acquisition, and has important implications for language evolution. However, relating the inductive biases of learners to regularization behavior in laboratory tasks can be challenging without a formal model. In this…
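The formal models the abstract alludes to are typically Bayesian iterated-learning chains, in which regularization emerges as the chain of learners drifts toward the shared prior. A minimal beta-binomial sketch follows; the parameter values are hypothetical and not taken from the paper.

```python
import random

def iterated_learning(generations=50, n=10, a=0.5, b=0.5, seed=0):
    """Chain of Bayesian learners estimating the probability of one of
    two competing linguistic variants.

    Each learner observes n utterances from its predecessor, forms a
    Beta(a + k, b + n - k) posterior over the variant probability theta,
    samples theta from that posterior, and produces data for the next
    learner. With a symmetric prior concentrated near 0 and 1 (a, b < 1),
    the chain tends toward regularized, near-deterministic frequencies.
    """
    rng = random.Random(seed)
    theta = 0.5
    thetas = []
    for _ in range(generations):
        k = sum(rng.random() < theta for _ in range(n))  # n utterances
        theta = rng.betavariate(a + k, b + n - k)        # posterior sample
        thetas.append(theta)
    return thetas
```

Running the chain with different priors makes the abstract's point concrete: the stationary distribution of frequencies mirrors the learners' inductive bias, not the initial data.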

  4. One-Step Synthesis of Hierarchical ZSM-5 Using Cetyltrimethylammonium as Mesoporogen and Structure-Directing Agent

    OpenAIRE

    Meng, Lingqian; Mezari, Brahim; Goesten, Maarten G.; Hensen, Emiel J. M.

    2017-01-01

    Hierarchical ZSM-5 zeolite is hydrothermally synthesized in a single step with cetyltrimethylammonium (CTA) hydroxide acting as mesoporogen and structure-directing agent. Essential to this synthesis is the replacement of NaOH with KOH. An in-depth solid-state NMR study reveals that, after early electrostatic interaction between condensed silica and the head group of CTA, ZSM-5 crystallizes around the structure-directing agent. The crucial aspect of using KOH instead of NaOH lies in the faster...

  5. Image super-resolution reconstruction based on regularization technique and guided filter

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Gu, Pei-ting; Liu, Pei-zhong; Luo, Yan-min

    2017-06-01

    In order to improve the accuracy of sparse representation coefficients and the quality of reconstructed images, an improved image super-resolution algorithm based on sparse representation is presented. In the sparse coding stage, the autoregressive (AR) regularization and the non-local (NL) similarity regularization are introduced to improve the sparse coding objective function. A group of AR models which describe the image local structures are pre-learned from the training samples, and one or several suitable AR models can be adaptively selected for each image patch to regularize the solution space. Then, the image non-local redundancy is obtained by the NL similarity regularization to preserve edges. In the process of computing the sparse representation coefficients, the feature-sign search algorithm is utilized instead of the conventional orthogonal matching pursuit algorithm to improve the accuracy of the sparse coefficients. To restore image details further, a global error compensation model based on weighted guided filter is proposed to realize error compensation for the reconstructed images. Experimental results demonstrate that compared with Bicubic, L1SR, SISR, GR, ANR, NE + LS, NE + NNLS, NE + LLE and A + (16 atoms) methods, the proposed approach has remarkable improvement in peak signal-to-noise ratio, structural similarity and subjective visual perception.
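The sparse-coding subproblem at the heart of such methods, min over x of 0.5·||y − Dx||² + λ||x||₁, can be sketched with the simple ISTA iteration. This is a stand-in for the feature-sign search algorithm the paper actually uses, and the dictionary and signal below are synthetic.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise shrinkage operator for the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam, iters=500):
    """Iterative shrinkage-thresholding for min 0.5||y - Dx||^2 + lam||x||_1."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + D.T @ (y - D @ x) / L, lam / L)
    return x
```

The iteration monotonically decreases the objective from the zero initialization, and the l1 penalty drives most coefficients exactly to zero, which is the sparsity the reconstruction step relies on.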

  6. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  7. SparseBeads data: benchmarking sparsity-regularized computed tomography

    Science.gov (United States)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT, however empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on expected sample sparsity level as an aid in planning of dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
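The predictor variable in the reported relationship, gradient sparsity, is straightforward to compute. A sketch of how one might measure it for a 2D image follows; the threshold and test images are illustrative, not the paper's processing pipeline.

```python
import numpy as np

def gradient_sparsity(img, tol=1e-6):
    """Fraction of pixels with non-negligible finite-difference gradient."""
    gx = np.diff(img, axis=1, prepend=img[:, :1])   # horizontal differences
    gy = np.diff(img, axis=0, prepend=img[:1, :])   # vertical differences
    mag = np.hypot(gx, gy)
    return np.count_nonzero(mag > tol) / mag.size
```

A piecewise-constant beadpack-like image has gradients only at phase boundaries and hence low gradient sparsity, while a noisy texture is close to fully dense; the paper's near-linear rule relates this number to the required projection count.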

  8. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
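The Soft-Impute iteration described above can be sketched directly in NumPy (a minimal illustration, not the authors' implementation; a full dense SVD is used here instead of the low-rank SVD the paper exploits, and `lam` and the iteration count are placeholder choices):

```python
import numpy as np

def soft_impute(X, mask, lam, n_iters=100):
    """Iteratively fill missing entries from a soft-thresholded SVD.
    `mask` is True where X is observed; `lam` is the nuclear-norm
    regularization parameter."""
    Z = np.where(mask, X, 0.0)           # start with zeros in the gaps
    for _ in range(n_iters):
        # Complete the matrix with the current estimate, then shrink.
        U, s, Vt = np.linalg.svd(np.where(mask, X, Z), full_matrices=False)
        s = np.maximum(s - lam, 0.0)     # soft-threshold the singular values
        Z = (U * s) @ Vt
    return Z

# Recover a rank-1 matrix with roughly half its entries missing.
rng = np.random.default_rng(0)
A = np.outer(rng.normal(size=20), rng.normal(size=15))
mask = rng.random(A.shape) < 0.5
Z = soft_impute(A, mask, lam=0.1)
```

Warm-starting, as the abstract notes, amounts to reusing the final `Z` of one `lam` as the initial `Z` for the next value on the regularization path.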

  9. Selective confinement of vibrations in composite systems with alternate quasi-regular sequences

    International Nuclear Information System (INIS)

    Montalban, A.; Velasco, V.R.; Tutor, J.; Fernandez-Velicia, F.J.

    2007-01-01

    We have studied the atom displacements and the vibrational frequencies of 1D systems formed by combinations of Fibonacci, Thue-Morse and Rudin-Shapiro quasi-regular stacks and their alternate ones. The materials are described by nearest-neighbor force constants and the corresponding atom masses, particularized to the Al, Ag systems. These structures exhibit differences in the frequency spectrum as compared to the original simple quasi-regular generations, but the most important feature is the presence of separate confinement of the atom displacements in one of the sequences forming the total composite structure for different frequency ranges.

  10. Selective confinement of vibrations in composite systems with alternate quasi-regular sequences

    Energy Technology Data Exchange (ETDEWEB)

    Montalban, A. [Departamento de Ciencia y Tecnologia de Materiales, Division de Optica, Universidad Miguel Hernandez, 03202 Elche (Spain); Velasco, V.R. [Instituto de Ciencia de Materiales de Madrid, CSIC, Sor Juana Ines de la Cruz 3, 28049 Madrid (Spain)]. E-mail: vrvr@icmm.csic.es; Tutor, J. [Departamento de Fisica Aplicada, Universidad Autonoma de Madrid, Cantoblanco, 28049 Madrid (Spain); Fernandez-Velicia, F.J. [Departamento de Fisica de los Materiales, Facultad de Ciencias, Universidad Nacional de Educacion a Distancia, Senda del Rey 9, 28080 Madrid (Spain)

    2007-01-01

    We have studied the atom displacements and the vibrational frequencies of 1D systems formed by combinations of Fibonacci, Thue-Morse and Rudin-Shapiro quasi-regular stacks and their alternate ones. The materials are described by nearest-neighbor force constants and the corresponding atom masses, particularized to the Al, Ag systems. These structures exhibit differences in the frequency spectrum as compared to the original simple quasi-regular generations but the most important feature is the presence of separate confinement of the atom displacements in one of the sequences forming the total composite structure for different frequency ranges.

  11. Regularization of Hamilton-Lagrangian guiding center theories

    International Nuclear Information System (INIS)

    Correa-Restrepo, D.; Wimmel, H.K.

    1985-04-01

    The Hamilton-Lagrangian guiding-center (G.C.) theories of Littlejohn, Wimmel, and Pfirsch show a singularity for B-fields with non-vanishing parallel curl at a critical value of v_parallel, which complicates applications. The singularity is related to a sudden breakdown, at a critical v_parallel, of gyration in the exact particle mechanics. While the latter is a real effect, the G.C. singularity can be removed. To this end a regularization method is defined that preserves the Hamilton-Lagrangian structure and the conservation theorems. For demonstration this method is applied to the standard G.C. theory (without polarization drift). Liouville's theorem and G.C. kinetic equations are also derived in regularized form. The method could equally well be applied to the case with polarization drift and to relativistic G.C. theory. (orig.)

  12. Boosting Maintenance in Working Memory with Temporal Regularities

    Science.gov (United States)

    Plancher, Gaën; Lévêque, Yohana; Fanuel, Lison; Piquandet, Gaëlle; Tillmann, Barbara

    2018-01-01

    Music cognition research has provided evidence for the benefit of temporally regular structures guiding attention over time. The present study investigated whether maintenance in working memory can benefit from an isochronous rhythm. Participants were asked to remember series of 6 letters for serial recall. In the rhythm condition of Experiment…

  13. A new approach to nonlinear constrained Tikhonov regularization

    KAUST Repository

    Ito, Kazufumi

    2011-09-16

    We present a novel approach to nonlinear constrained Tikhonov regularization from the viewpoint of optimization theory. A second-order sufficient optimality condition is suggested as a nonlinearity condition to handle the nonlinearity of the forward operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a general class of parameter identification problems, for which (new) source and nonlinearity conditions are derived and the structural property of the nonlinearity term is revealed. A number of examples including identifying distributed parameters in elliptic differential equations are presented. © 2011 IOP Publishing Ltd.
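For the linear special case, the discrepancy principle mentioned above reads: pick the largest regularization parameter whose residual matches the noise level. A minimal sketch (the operator here is a plain matrix, not the nonlinear forward operators the paper treats; the safety factor `tau = 1.1` is an illustrative choice):

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov-regularized least squares: argmin ||Ax - b||^2 + alpha ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def discrepancy_principle(A, b, delta, alphas, tau=1.1):
    """A posteriori choice rule: the largest alpha whose residual stays
    within tau * delta, where delta is the noise level."""
    for alpha in sorted(alphas, reverse=True):
        x = tikhonov(A, b, alpha)
        if np.linalg.norm(A @ x - b) <= tau * delta:
            return alpha, x
    alpha = min(alphas)                  # fall back to the weakest smoothing
    return alpha, tikhonov(A, b, alpha)

# Noisy overdetermined system with known noise norm delta.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = rng.normal(size=20)
e = 0.05 * rng.normal(size=50)
b = A @ x_true + e
delta = np.linalg.norm(e)
alpha, x = discrepancy_principle(A, b, delta, np.logspace(-6, 2, 30))
```

The balancing principle mentioned alongside it trades the residual against the penalty term instead of comparing the residual to `delta` directly.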

  14. Facile one-step synthesis and photoluminescence properties of Ag–ZnO core–shell structure

    International Nuclear Information System (INIS)

    Zhai, HongJu; Wang, LiJing; Han, DongLai; Wang, Huan; Wang, Jian; Liu, XiaoYan; Lin, Xue; Li, XiuYan; Gao, Ming; Yang, JingHai

    2014-01-01

    Graphical abstract: The PL of the Ag–ZnO core–shell nanostructure showed an obvious increase of UV emission and a slight decrease of visible light emission compared to that of pure ZnO. With the calcination temperature increasing from 300 to 600 °C, the primary peak located at 380 nm became stronger and sharper, indicating that the increasing calcination temperature made the samples crystallize better. - Highlights: • Ag–ZnO core–shell structure was obtained via a simple one-step solvothermal process. • The approach was simple, mild, low cost, reproducible and easy to handle. • An obvious enhancement of UV luminescence was observed. • Effects of the calcination temperature on luminescence were investigated in detail. - Abstract: Ag–ZnO core–shell structures were obtained via a one-step solvothermal process. The products were characterized by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), Raman spectroscopy, photoluminescence (PL) and UV–vis spectroscopy. The PL and Raman spectra showed that the properties were greatly changed compared to pure ZnO, which indicated a strong interfacial interaction between ZnO and Ag. The work provides a feasible method to synthesize an Ag–ZnO core–shell structure photocatalyst, which is promising for the further practical application of ZnO-based photocatalytic materials.

  15. Multi-view clustering via multi-manifold regularized non-negative matrix factorization.

    Science.gov (United States)

    Zong, Linlin; Zhang, Xianchao; Zhao, Long; Yu, Hong; Zhao, Qianli

    2017-04-01

    Non-negative matrix factorization based multi-view clustering algorithms have shown their competitiveness among different multi-view clustering algorithms. However, non-negative matrix factorization fails to preserve the locally geometrical structure of the data space. In this paper, we propose a multi-manifold regularized non-negative matrix factorization framework (MMNMF) which can preserve the locally geometrical structure of the manifolds for multi-view clustering. MMNMF incorporates consensus manifold and consensus coefficient matrix with multi-manifold regularization to preserve the locally geometrical structure of the multi-view data space. We use two methods to construct the consensus manifold and two methods to find the consensus coefficient matrix, which leads to four instances of the framework. Experimental results show that the proposed algorithms outperform existing non-negative matrix factorization based algorithms for multi-view clustering. Copyright © 2017 Elsevier Ltd. All rights reserved.
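A single-view building block of this idea, graph-regularized NMF with multiplicative updates, can be sketched as follows (an illustration of the manifold-regularization term only, not the full MMNMF framework; the Laplacian here comes from a toy path graph):

```python
import numpy as np

def graph_regularized_nmf(X, L, k, lam=0.1, n_iters=200, seed=0):
    """Minimize ||X - W H||_F^2 + lam * tr(H L H^T) with multiplicative
    updates, where L = D - S is the Laplacian of a data-affinity graph;
    the tr(H L H^T) term preserves the local geometry of the data."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    D = np.diag(np.diag(L))      # degree matrix
    S = D - L                    # nonnegative affinity matrix
    eps = 1e-10
    for _ in range(n_iters):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ S) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H

# Toy data: 8 samples (columns) linked in a path graph.
rng = np.random.default_rng(1)
X = rng.random((10, 8))
S = np.zeros((8, 8))
for i in range(7):
    S[i, i + 1] = S[i + 1, i] = 1.0
L = np.diag(S.sum(axis=1)) - S
W, H = graph_regularized_nmf(X, L, k=3)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

MMNMF extends this by coupling one such factorization per view to a consensus manifold and a consensus coefficient matrix.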

  16. Staircase and saw-tooth field emission steps from nanopatterned n-type GaSb surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Kildemo, M.; Levinsen, Y. Inntjore; Le Roy, S.; Søndergård, E. [Department of Physics, Norwegian University of Science and Technology (NTNU), NO-7491 Trondheim (Norway); Department of Physics, Norwegian University of Science and Technology (NTNU), NO-7491 Trondheim, Norway and AB CERN, CH-1211 Geneva 23 (Switzerland); Laboratoire Surface du Verre et Interfaces, UMR 125 Unite Mixte de Recherche CNRS/Saint-Gobain Laboratoire, 39 Quai Lucien Lefranc, F-93303 Aubervilliers Cedex (France)

    2009-09-15

    High resolution field emission experiments from nanopatterned GaSb surfaces consisting of densely packed nanocones prepared by low ion-beam-energy sputtering are presented. Both uncovered and metal-covered nanopatterned surfaces were studied. Surprisingly, the field emission takes place by regular steps in the field emitted current. Depending on the field, the steps are either regular, flat plateaus or saw-tooth shaped. To the author's knowledge, this is the first time that such results have been reported. Each discrete jump in the field emission may be understood in terms of resonant tunneling through an extended surface space charge region in an n-type, high aspect ratio, single GaSb nanocone. The staircase shape may be understood from the spatial distribution of the aspect ratio of the cones.

  17. Staircase and saw-tooth field emission steps from nanopatterned n-type GaSb surfaces

    CERN Document Server

    Kildemo, M.; Le Roy, S.; Søndergård, E.

    2009-01-01

    High resolution field emission experiments from nanopatterned GaSb surfaces consisting of densely packed nanocones prepared by low ion-beam-energy sputtering are presented. Both uncovered and metal-covered nanopatterned surfaces were studied. Surprisingly, the field emission takes place by regular steps in the field emitted current. Depending on the field, the steps are either regular, flat plateaus or saw-tooth shaped. To the author’s knowledge, this is the first time that such results have been reported. Each discrete jump in the field emission may be understood in terms of resonant tunneling through an extended surface space charge region in an n-type, high aspect ratio, single GaSb nanocone. The staircase shape may be understood from the spatial distribution of the aspect ratio of the cones.

  18. Multi-omic data integration enables discovery of hidden biological regularities

    DEFF Research Database (Denmark)

    Ebrahim, Ali; Brunk, Elizabeth; Tan, Justin

    2016-01-01

    Rapid growth in size and complexity of biological data sets has led to the 'Big Data to Knowledge' challenge. We develop advanced data integration methods for multi-level analysis of genomic, transcriptomic, ribosomal profiling, proteomic and fluxomic data. First, we show that pairwise integration of primary omics data reveals regularities that tie cellular processes together in Escherichia coli: the number of protein molecules made per mRNA transcript and the number of ribosomes required per translated protein molecule. Second, we show that genome-scale models, based on genomic and bibliomic data, enable quantitative synchronization of disparate data types. Integrating omics data with models enabled the discovery of two novel regularities: condition invariant in vivo turnover rates of enzymes and the correlation of protein structural motifs and translational pausing. These regularities can

  19. AN AUTOMATED METHOD FOR 3D ROOF OUTLINE GENERATION AND REGULARIZATION IN AIRBONE LASER SCANNER DATA

    Directory of Open Access Journals (Sweden)

    S. N. Perera

    2012-07-01

    Full Text Available In this paper, an automatic approach for the generation and regularization of 3D roof boundaries in airborne laser scanner data is presented. The workflow commences with segmentation of the point clouds. A classification step and a rule-based roof extraction step follow the planar segmentation. Refinement of the roof extraction is performed in order to minimize the effect of urban vegetation. Boundary points of the connected roof planes are extracted and fitted with series of straight line segments. Each line is then regularized with respect to the dominant building orientation. We introduce the usage of cycle graphs for the best use of topological information. Ridge-lines and step-edges are extracted to recognise correct topological relationships among the roof faces. Inner roof corners are geometrically fitted based on the closed cycle graphs. The outer boundary is reconstructed using the same concept but with the outermost cycle graph, taking the union of the sub-cycles. Intermediate line segments (outer bounds) are intersected to reconstruct the roof eave lines. Two test areas with two different point densities are tested with the developed approach. A performance analysis of the test results is provided to demonstrate the applicability of the method.
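The per-segment regularization step (snapping each boundary line to the dominant building orientation) can be sketched as follows (a hypothetical simplification working on segment angles only; the function name and the snapping tolerance are assumptions, not the paper's parameters):

```python
import numpy as np  # noqa: F401  (kept for consistency with the other sketches)

def regularize_angles(angles_deg, dominant_deg, tol_deg=15.0):
    """Snap each boundary segment's orientation to the nearest multiple of
    90 degrees relative to the dominant building orientation, when the
    deviation is within tolerance; leave oblique segments untouched."""
    out = []
    for a in angles_deg:
        rel = (a - dominant_deg) % 90.0          # offset within a quadrant
        snap = min(rel, 90.0 - rel)              # distance to nearest axis
        if snap <= tol_deg:
            a = a - rel if rel <= 45.0 else a + (90.0 - rel)
        out.append(a % 180.0)
    return out

print(regularize_angles([3.0, 92.0, 47.0], dominant_deg=0.0))
# [0.0, 90.0, 47.0] -- segments near 0 and 90 degrees snap, the oblique one stays
```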

  20. Improving Conductivity Image Quality Using Block Matrix-based Multiple Regularization (BMMR) Technique in EIT: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-06-01

    Full Text Available A Block Matrix based Multiple Regularization (BMMR) technique is proposed for improving conductivity image quality in EIT. The response matrix (JTJ) has been partitioned into several sub-block matrices and the highest eigenvalue of each sub-block matrix has been chosen as the regularization parameter for the nodes contained by that sub-block. Simulated boundary data are generated for a circular domain with circular inhomogeneity and the conductivity images are reconstructed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm. Conductivity images are reconstructed with the BMMR technique and the results are compared with the Single-step Tikhonov Regularization (STR) and modified Levenberg-Marquardt Regularization (LMR) methods. It is observed that the BMMR technique reduces the projection error and solution error and improves the conductivity reconstruction in EIT. Results show that the BMMR method also improves the image contrast and inhomogeneity conductivity profile, and hence the reconstructed image quality is enhanced. doi:10.5617/jeb.170 J Electr Bioimp, vol. 2, pp. 33-47, 2011
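The block-wise choice of regularization parameters described above can be sketched as follows (a schematic of the parameter selection only, not the full MoBIIR reconstruction; the block size and matrix shapes are illustrative):

```python
import numpy as np

def bmmr_params(J, block_size):
    """Partition the response matrix J^T J into diagonal sub-blocks and
    take each sub-block's largest eigenvalue as the regularization
    parameter for the nodes that sub-block covers."""
    JTJ = J.T @ J
    n = JTJ.shape[0]
    lambdas = np.empty(n)
    for start in range(0, n, block_size):
        stop = min(start + block_size, n)
        block = JTJ[start:stop, start:stop]
        lambdas[start:stop] = np.linalg.eigvalsh(block)[-1]  # largest eigenvalue
    return lambdas  # used in a step like (J^T J + diag(lambdas))^-1 J^T b

# Example: a 30x12 Jacobian, nodes grouped in blocks of 4.
rng = np.random.default_rng(0)
J = rng.normal(size=(30, 12))
lam = bmmr_params(J, block_size=4)
```

Each node thus gets the parameter of its block, in contrast to the single global parameter of STR.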

  1. Regular Network Class Features Enhancement Using an Evolutionary Synthesis Algorithm

    Directory of Open Access Journals (Sweden)

    O. G. Monahov

    2014-01-01

    Full Text Available This paper investigates a solution of the optimization problem concerning the construction of diameter-optimal regular networks (graphs). Regular networks are of practical interest as graph-theoretical models of reliable communication networks of parallel supercomputer systems, and as a basis of the structure in small-world models in optical and neural networks. It presents a new class of parametrically described regular networks - hypercirculant networks (graphs). An approach that uses evolutionary algorithms for the automatic generation of parametric descriptions of optimal hypercirculant networks is developed. Synthesis of optimal hypercirculant networks is based on optimal circulant networks with a smaller degree of nodes. To construct optimal hypercirculant networks, a template circulant network is taken from the known optimal families of circulant networks with the desired number of nodes and a smaller degree of nodes. Thus, a generating set of the circulant network is used as a generating subset of the hypercirculant network, and the missing generators are synthesized by means of the evolutionary algorithm, which carries out minimization of the diameter (average diameter) of networks. A comparative analysis of the structural characteristics of hypercirculant, toroidal, and circulant networks is conducted. The advantage of hypercirculant networks in such structural characteristics as diameter, average diameter, and bisection width, at comparable costs in the number of nodes and the number of connections, is demonstrated. Notable is the advantage of hypercirculant networks of dimension three over four-dimensional tori. Thus, the optimization of hypercirculant networks of dimension three is more efficient than the introduction of an additional dimension for the corresponding toroidal structures.
The paper also notes the best structural parameters of hypercirculant networks in comparison with iBT-networks previously

  2. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

    Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log_2 N^2), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  3. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The best known among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
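The MSE-based selection criterion can be illustrated with an oracle sweep over candidate parameters (this is only the criterion COPRA tries to approximate; COPRA itself does not have access to the original signal, and the function name and test matrix here are illustrative):

```python
import numpy as np

def mse_oracle_alpha(A, b, x_true, alphas):
    """Among candidate regularization parameters, return the (alpha, MSE)
    pair whose regularized least-squares estimate minimizes the
    mean-squared error against the original signal -- the quantity COPRA
    approximates without knowing x_true."""
    best = None
    for alpha in alphas:
        x = np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)
        mse = np.mean((x - x_true) ** 2)
        if best is None or mse < best[1]:
            best = (alpha, mse)
    return best

# Ill-conditioned model: two tiny singular values amplify the noise,
# so some positive alpha typically beats plain least squares (alpha = 0).
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(30, 5)))
V, _ = np.linalg.qr(rng.normal(size=(5, 5)))
A = (U * np.array([1.0, 0.5, 0.1, 1e-3, 1e-4])) @ V.T
x_true = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=30)
alphas = [0.0, 1e-6, 1e-4, 1e-2, 1e-1]
best = mse_oracle_alpha(A, b, x_true, alphas)
```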

  4. 75 FR 76006 - Regular Meeting

    Science.gov (United States)

    2010-12-07

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...

  5. Effects of Irregular Bridge Columns and Feasibility of Seismic Regularity

    Science.gov (United States)

    Thomas, Abey E.

    2018-05-01

    Bridges with unequal column heights are one of the main irregularities in bridge design, particularly while negotiating steep valleys, making such bridges vulnerable to seismic action. The desirable behaviour of bridge columns towards seismic loading is that they should perform in a regular fashion, i.e. the capacity of each column should be utilized evenly. But this type of behaviour is often missing when the column heights are unequal along the length of the bridge, allowing short columns to bear the maximum lateral load. In the present study, the effects of unequal column height on the global seismic performance of bridges are studied using pushover analysis. Codes such as CalTrans (Engineering service center, earthquake engineering branch, 2013) and EC-8 (EN 1998-2: design of structures for earthquake resistance. Part 2: bridges, European Committee for Standardization, Brussels, 2005) suggest seismic regularity criteria for achieving a regular seismic performance level at all the bridge columns. The feasibility of adopting these seismic regularity criteria, along with those mentioned in the literature, is assessed for bridges designed as per the Indian Standards in the present study.

  6. HTSC-Josephson step contacts

    International Nuclear Information System (INIS)

    Herrmann, K.

    1994-03-01

    In this work the properties of Josephson step contacts are investigated. After a short introduction to Josephson step contacts, the structure, properties and Josephson contacts of YBa2Cu3O7-x high-Tc superconductors are presented. The fabrication of HTSC step contacts and their microstructure are discussed. The electric properties of these contacts are measured, together with the Josephson emission and the magnetic field dependence. The temperature dependence of the stationary transport properties is given. (WL)

  7. Sparse regularization for force identification using dictionaries

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in other basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
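The l1-regularized problem at the core of this approach can be solved with plain iterative soft-thresholding, the same thresholding step SpaRSA builds on (a minimal stand-in for SpaRSA, with a synthetic Gaussian matrix in place of the transfer function and a sparse coefficient vector in place of the force):

```python
import numpy as np

def ista(A, b, lam, n_iters=500):
    """Iterative soft-thresholding for min ||A x - b||^2 / 2 + lam ||x||_1.
    SpaRSA uses the same soft-threshold step with adaptive step sizes."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm
    for _ in range(n_iters):
        g = x - t * A.T @ (A @ x - b)        # gradient step on the residual
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # shrink
    return x

# Sparse "force" coefficient vector: 3 nonzeros out of 50.
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 50))
x_true = np.zeros(50)
x_true[[5, 20, 41]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=80)
x = ista(A, b, lam=0.1)
```

The soft-threshold zeroes small coefficients, so the number of active basis functions is determined by the solver rather than fixed in advance, as the abstract describes.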

  8. One-step synthesis and structural features of CdS/montmorillonite nanocomposites.

    Science.gov (United States)

    Han, Zhaohui; Zhu, Huaiyong; Bulcock, Shaun R; Ringer, Simon P

    2005-02-24

    A novel synthesis method was introduced for the nanocomposites of cadmium sulfide and montmorillonite. This method features the combination of an ion exchange process and an in situ hydrothermal decomposition process of a complex precursor, which is simple in contrast to the conventional synthesis methods that comprise two separate steps for similar nanocomposite materials. Cadmium sulfide species in the composites exist in the forms of pillars and nanoparticles, the crystallized sulfide particles are in the hexagonal phase, and the sizes change when the amount of the complex for the synthesis is varied. Structural features of the nanocomposites are similar to those of the clay host but changed because of the introduction of the sulfide into the clay.

  9. Subcortical processing of speech regularities underlies reading and music aptitude in children

    Science.gov (United States)

    2011-01-01

    Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input

  10. Subcortical processing of speech regularities underlies reading and music aptitude in children.

    Science.gov (United States)

    Strait, Dana L; Hornickel, Jane; Kraus, Nina

    2011-10-17

    Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input. 
Definition of common biological underpinnings

  11. Subcortical processing of speech regularities underlies reading and music aptitude in children

    Directory of Open Access Journals (Sweden)

    Strait Dana L

    2011-10-01

    Full Text Available Abstract Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to

  12. Salt-body Inversion with Minimum Gradient Support and Sobolev Space Norm Regularizations

    KAUST Repository

    Kazei, Vladimir

    2017-05-26

    Full-waveform inversion (FWI) is a technique which solves the ill-posed seismic inversion problem of fitting our model data to the measured ones from the field. FWI is capable of providing high-resolution estimates of the model, and of handling wave propagation of arbitrary complexity (visco-elastic, anisotropic); yet, it often fails to retrieve high-contrast geological structures, such as salt. One of the reasons for the FWI failure is that the updates at earlier iterations are too smooth to capture the sharp edges of the salt boundary. We compare several regularization approaches, which promote sharpness of the edges. Minimum gradient support (MGS) regularization focuses the inversion on blocky models, even more than the total variation (TV) does. However, both approaches try to invert undesirable high wavenumbers in the model too early for a model of complex structure. Therefore, we apply the Sobolev space norm as a regularizing term in order to maintain a balance between sharp and smooth updates in FWI. We demonstrate the application of these regularizations on a Marmousi model, enriched by a chunk of salt. The model turns out to be too complex in some parts to retrieve its full velocity distribution, yet the salt shape and contrast are retrieved.
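The contrast between the TV and MGS penalties described above can be illustrated numerically. The sketch below is not the authors' code: the 1-D velocity model and the stabilization constant `beta` are assumptions for illustration. It shows that TV grows with the size of a jump, while MGS saturates per edge and effectively counts edges, which is why it promotes blockier models:

```python
# Illustrative sketch: total variation (TV) vs. minimum gradient support
# (MGS) regularizers on a 1-D velocity model. The MGS functional penalizes
# the *support* of the gradient, saturating for large jumps.

def gradient(m):
    """Forward-difference gradient of a 1-D model."""
    return [m[i + 1] - m[i] for i in range(len(m) - 1)]

def tv(m):
    """Total variation: sum of absolute gradients (grows with jump size)."""
    return sum(abs(g) for g in gradient(m))

def mgs(m, beta=0.1):
    """Minimum gradient support: each jump contributes at most ~1,
    so the functional counts edges rather than measuring their size."""
    return sum(g * g / (g * g + beta * beta) for g in gradient(m))

# A smooth ramp and a sharp salt-like step with the same total rise:
ramp = [1.5 + 0.1 * i for i in range(11)]   # 1.5 -> 2.5 km/s gradually
step = [1.5] * 5 + [2.5] * 6                # single sharp contrast

print(tv(ramp), tv(step))    # TV cannot distinguish the two (both ~ 1.0)
print(mgs(ramp), mgs(step))  # MGS assigns a far lower penalty to the step
```

Since both models rise by the same amount, TV is identical for the two, whereas MGS strongly prefers the single sharp edge over the smooth ramp.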

  13. Femtosecond laser pulses for fast 3-D surface profilometry of microelectronic step-structures.

    Science.gov (United States)

    Joo, Woo-Deok; Kim, Seungman; Park, Jiyong; Lee, Keunwoo; Lee, Joohyung; Kim, Seungchul; Kim, Young-Jin; Kim, Seung-Woo

    2013-07-01

    Fast, precise 3-D measurement of discontinuous step-structures fabricated on microelectronic products is essential for quality assurance of semiconductor chips, flat panel displays, and photovoltaic cells. Optical surface profilers of low-coherence interferometry have long been used for the purpose, but the vertical scanning range and speed are limited by the micro-actuators available today. Besides, the lateral field-of-view extendable for a single measurement is restricted by the low spatial coherence of broadband light sources. Here, we cope with the limitations of the conventional low-coherence interferometer by exploiting unique characteristics of femtosecond laser pulses, i.e., low temporal but high spatial coherence. By scanning the pulse repetition rate with direct reference to the Rb atomic clock, step heights of ~69.6 μm are determined with a repeatability of 10.3 nm. The spatial coherence of femtosecond pulses provides a large field-of-view with superior visibility, allowing for a high volume measurement rate of ~24,000 mm3/s.

  14. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  15. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
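The GCV criterion used above to pick the regularization parameter can be sketched on a toy problem. The diagonalized model below (singular values, noise levels, and the grid are all assumptions, not the paper's lensing setup) applies the standard GCV formula for Tikhonov filtering:

```python
# Hedged sketch of generalized cross validation (GCV) for choosing the
# Tikhonov regularization parameter in a problem diagonalized by the SVD,
# y_i = s_i * x_i + noise. The Tikhonov filter factors are
# f_i = s_i^2 / (s_i^2 + lam), and
#   GCV(lam) = sum((1 - f_i)^2 * y_i^2) / (sum(1 - f_i))^2
# is minimized over lam to balance data fit against smoothing.

def gcv(lam, s, y):
    f = [si * si / (si * si + lam) for si in s]
    num = sum((1 - fi) ** 2 * yi * yi for fi, yi in zip(f, y))
    den = sum(1 - fi for fi in f) ** 2
    return num / den

# Singular values decaying toward zero make the problem ill-posed; the
# noise swamps the two smallest components.
s = [1.0, 0.5, 0.1, 0.01]
x_true = [1.0, 1.0, 1.0, 1.0]
noise = [0.01, -0.02, 0.1, -0.1]
y = [si * xi + ni for si, xi, ni in zip(s, x_true, noise)]

# Grid search for the GCV-optimal lambda.
grid = [10 ** (k / 4 - 4) for k in range(17)]    # 1e-4 ... 1e0
lam_best = min(grid, key=lambda lam: gcv(lam, s, y))
print("GCV-optimal lambda:", lam_best)
```

The selected lambda sits strictly inside the grid: too little regularization fits the noise-dominated components, too much discards well-resolved ones.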

  16. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.

  17. Geometric continuum regularization of quantum field theory

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1989-01-01

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs

  18. Functional dissociation between regularity encoding and deviance detection along the auditory hierarchy.

    Science.gov (United States)

    Aghamolaei, Maryam; Zarnowiec, Katarzyna; Grimm, Sabine; Escera, Carles

    2016-02-01

    Auditory deviance detection based on regularity encoding appears as one of the basic functional properties of the auditory system. It has traditionally been assessed with the mismatch negativity (MMN) long-latency component of the auditory evoked potential (AEP). Recent studies have found earlier correlates of deviance detection based on regularity encoding. They occur in humans in the first 50 ms after sound onset, at the level of the middle-latency response of the AEP, and parallel findings of stimulus-specific adaptation observed in animal studies. However, the functional relationship between these different levels of regularity encoding and deviance detection along the auditory hierarchy has not yet been clarified. Here we addressed this issue by examining deviant-related responses at different levels of the auditory hierarchy to stimulus changes varying in their degree of deviation regarding the spatial location of a repeated standard stimulus. Auditory stimuli were presented randomly from five loudspeakers at azimuthal angles of 0°, 12°, 24°, 36° and 48° during oddball and reversed-oddball conditions. Middle-latency responses and MMN were measured. Our results revealed that middle-latency responses were sensitive to deviance but not the degree of deviation, whereas the MMN amplitude increased as a function of deviance magnitude. These findings indicated that acoustic regularity can be encoded at the level of the middle-latency response but that it takes a higher step in the auditory hierarchy for deviance magnitude to be encoded, thus providing a functional dissociation between regularity encoding and deviance detection along the auditory hierarchy. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  19. 3D first-arrival traveltime tomography with modified total variation regularization

    Science.gov (United States)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
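The split-Bregman machinery used for the TV subproblem above can be sketched on a small stand-in problem. The code below is an assumption-laden illustration (1-D TV denoising rather than tomography; step sizes `mu`, `lam`, and the Gauss-Seidel inner solve are choices made here, not taken from the paper), but it shows the same decoupling into a smooth subproblem and an L1/TV shrinkage subproblem:

```python
# Illustrative split-Bregman iteration for 1-D TV denoising,
#   min_u 0.5*||u - f||^2 + mu*|grad u|_1,
# mirroring the decoupling of the tomography problem into a smooth
# (Tikhonov-like) subproblem plus a TV shrinkage subproblem.

def shrink(x, t):
    """Soft-thresholding, the closed-form solution of the TV subproblem."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def tv_denoise(f, mu=0.5, lam=1.0, iters=200):
    n = len(f)
    u = list(f)
    d = [0.0] * (n - 1)   # auxiliary variable for grad u
    b = [0.0] * (n - 1)   # Bregman variable
    for _ in range(iters):
        # Smooth subproblem: (I + lam*D^T D) u = f + lam*D^T (d - b),
        # solved approximately with a few Gauss-Seidel sweeps.
        for _ in range(5):
            for i in range(n):
                rhs, diag = f[i], 1.0
                if i > 0:
                    rhs += lam * (u[i - 1] + d[i - 1] - b[i - 1])
                    diag += lam
                if i < n - 1:
                    rhs += lam * (u[i + 1] - d[i] + b[i])
                    diag += lam
                u[i] = rhs / diag
        # TV subproblem: elementwise shrinkage, then Bregman update.
        for i in range(n - 1):
            g = u[i + 1] - u[i]
            d[i] = shrink(g + b[i], mu / lam)
            b[i] += g - d[i]
    return u

# A noisy step: TV denoising recovers a nearly piecewise-constant signal.
f = [0.0 + 0.1 * ((-1) ** i) for i in range(10)] + \
    [1.0 + 0.1 * ((-1) ** i) for i in range(10)]
u = tv_denoise(f)
print([round(v, 2) for v in u])
```

The alternating noise is flattened within each plateau while the sharp contrast between the two halves survives, which is exactly the behavior Tikhonov smoothing alone cannot deliver.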

  20. Differences between flocculating yeast and regular industrial yeast in transcription and metabolite profiling during ethanol fermentation

    Directory of Open Access Journals (Sweden)

    Lili Li

    2017-03-01

Full Text Available Objectives: To improve the ethanolic fermentation performance of self-flocculating yeast, differences between a flocculating yeast strain and a regular industrial yeast strain were analyzed by transcriptional and metabolic approaches. Results: The numbers of down-regulated (industrial yeast YIC10 vs. flocculating yeast GIM2.71) and up-regulated genes were 4503 and 228, respectively. YIC10 regulates its resources economically: non-essential genes were down-regulated, and cells put more “energy” into growth and ethanol production. Hexose transport and phosphorylation were not the limiting steps in ethanol fermentation for GIM2.71 compared to YIC10, whereas the reaction of 1,3-bisphosphoglycerate to 3-phosphoglycerate, the decarboxylation of pyruvate to acetaldehyde and its subsequent reduction to ethanol were the most limiting steps. GIM2.71 had a stronger stress response than the non-flocculating yeast, and much more carbohydrate was diverted to bypass pathways such as glycerol, acetate and trehalose synthesis. Conclusions: Differences between flocculating yeast and regular industrial yeast in transcription and metabolite profiling will provide clues for improving the fermentation performance of GIM2.71.

  1. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
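The underestimation bias of ℓ1 regularization described above is easiest to see at the level of scalar thresholding. The sketch below is an illustration, not the thesis's exact formulation: it compares soft thresholding (the proximal operator of the convex ℓ1 penalty) with firm thresholding, which arises from a parameterized non-convex (minimax-concave) penalty whose non-convexity can be bounded so that the overall denoising objective stays convex:

```python
# Hedged illustration: scalar soft vs. firm thresholding. Soft
# thresholding shrinks every surviving coefficient by t, underestimating
# large values. Firm thresholding leaves large coefficients untouched;
# in the scalar minimax-concave case the denoising objective remains
# convex when the non-convexity parameter is suitably bounded (a <= 1/lam).

def soft(y, t):
    return (abs(y) - t) * (1 if y > 0 else -1) if abs(y) > t else 0.0

def firm(y, t, T):
    """Firm threshold: zero below t, identity above T, linear in between."""
    a = abs(y)
    if a <= t:
        return 0.0
    if a >= T:
        return y
    return (T * (a - t) / (T - t)) * (1 if y > 0 else -1)

for y in [0.5, 1.5, 4.0]:
    print(y, soft(y, 1.0), firm(y, 1.0, 2.0))
```

For a large coefficient such as 4.0, soft thresholding returns 3.0 (biased low by the threshold) while firm thresholding returns 4.0 unchanged, which is precisely the "more accurate non-zero values" motivation in the abstract.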

  2. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
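The robustness of the correntropy criterion to noisy labels can be seen in a minimal sketch. This is an assumption-based illustration, not the paper's implementation: correntropy with a Gaussian kernel yields a bounded, Welsch-type loss that saturates for large errors:

```python
# Minimal sketch: the correntropy between predictions and labels uses a
# Gaussian kernel, V(e) = exp(-e^2 / (2*sigma^2)), so maximizing
# correntropy is equivalent to minimizing a bounded robust loss -- an
# outlying label contributes almost nothing, unlike the squared loss.

import math

def correntropy_loss(errors, sigma=1.0):
    """Negative mean correntropy: bounded, so outliers are capped."""
    return -sum(math.exp(-e * e / (2 * sigma ** 2)) for e in errors) / len(errors)

def squared_loss(errors):
    return sum(e * e for e in errors) / len(errors)

clean = [0.1, -0.2, 0.15, -0.1]
noisy = clean + [10.0]                 # one grossly mislabeled sample

# The squared loss explodes; the correntropy loss barely moves.
print(squared_loss(clean), squared_loss(noisy))
print(correntropy_loss(clean), correntropy_loss(noisy))
```

This capping of each sample's influence is why predictors trained under the MCC framework are less sensitive to mislabeled training points than those trained with a loss applied equally to all samples.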

  3. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  4. Adsorption-induced step formation

    DEFF Research Database (Denmark)

    Thostrup, P.; Christoffersen, Ebbe; Lorensen, Henrik Qvist

    2001-01-01

Through an interplay between density functional calculations, Monte Carlo simulations and scanning tunneling microscopy experiments, we show that an intermediate coverage of CO on the Pt(110) surface gives rise to a new rough equilibrium structure with more than 50% step atoms. CO is shown to bind so strongly to low-coordinated Pt atoms that it can break Pt-Pt bonds and spontaneously form steps on the surface. It is argued that adsorption-induced step formation may be a general effect, in particular at high gas pressures and temperatures.

  5. Minimal length uncertainty relation and ultraviolet regularization

    Science.gov (United States)

    Kempf, Achim; Mangano, Gianpiero

    1997-06-01

Studies in string theory and quantum gravity suggest the existence of a finite lower limit Δx₀ to the possible resolution of distances, at the latest on the scale of the Planck length of 10⁻³⁵ m. Within the framework of the Euclidean path integral we explicitly show ultraviolet regularization in field theory through this short distance structure. Both rotation and translation invariance can be preserved. An example is studied in detail.

  6. REGULAR METHOD FOR SYNTHESIS OF BASIC BENT-SQUARES OF RANDOM ORDER

    Directory of Open Access Journals (Sweden)

    A. V. Sokolov

    2016-01-01

Full Text Available The paper is devoted to the class construction of the most non-linear Boolean bent-functions of any length N = 2^k (k = 2, 4, 6, …, on the basis of their spectral representation – Agievich bent squares. These perfect algebraic constructions are used as a basis to build many new cryptographic primitives, such as generators of pseudo-random key sequences, cryptographic S-boxes, etc. Bent-functions also find their application in the construction of C-codes in systems with code division multiple access (CDMA) to provide the lowest possible value of Peak-to-Average Power Ratio (PAPR) k = 1, as well as for the construction of error-correcting codes and systems of orthogonal biphasic signals. All the numerous applications of bent-functions relate to the theory of their synthesis. However, regular methods for the complete class synthesis of bent-functions of any length N = 2^k are currently unknown. The paper proposes a regular synthesis method for the basic Agievich bent squares of any order n, based on a regular operator of dyadic shift. A classification of the complete set of spectral vectors of lengths l = 8, 16, …, based on the criterion of the maximum absolute value and the set of absolute values of spectral components, has been carried out in the paper. It has been shown that any spectral vector can be a basis for building bent squares. Results of the synthesis of the Agievich bent squares of order n = 8 have been generalized, and it has been revealed that there are only 3 basic bent squares for this order, while the other 5 can be obtained with the help of the step-cyclic shift operation. All the basic bent squares of order n = 16 have been synthesized, which allows constructing bent-functions of length N = 256. The obtained basic bent squares can be used either for direct synthesis of bent-functions and their practical application or for further research in order to synthesize new structures of bent squares of orders n = 16, 32, 64, …
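The spectral (Walsh-Hadamard) characterization underlying bent squares can be verified with a short script. This is an exploratory sketch, not the paper's synthesis method: a Boolean function of k variables is bent iff every Walsh-Hadamard coefficient of its ±1 sequence has absolute value 2^(k/2), and the classic example f(x₁,x₂,x₃,x₄) = x₁x₂ ⊕ x₃x₄ on k = 4 variables (N = 16) is used below:

```python
# Checking bentness via the Walsh-Hadamard spectrum: a k-variable Boolean
# function is bent iff all spectral coefficients have magnitude 2^(k/2).

def walsh_hadamard(seq):
    """Iterative fast Walsh-Hadamard transform of a +/-1 sequence."""
    a = list(seq)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def is_bent(truth_table):
    n = len(truth_table)              # n = 2^k
    k = n.bit_length() - 1
    if k % 2:
        return False                  # bent functions need an even k
    spectrum = walsh_hadamard([(-1) ** b for b in truth_table])
    return all(abs(c) == 2 ** (k // 2) for c in spectrum)

# Truth table of f(x1, x2, x3, x4) = x1*x2 XOR x3*x4 over all 16 inputs.
f = [((x >> 3 & 1) & (x >> 2 & 1)) ^ ((x >> 1 & 1) & (x & 1))
     for x in range(16)]
print(is_bent(f))                     # the classic bent function
print(is_bent([0] * 16))              # a constant function is not bent
```

The flat-magnitude spectrum verified here is exactly the property that makes the spectral vectors of the paper suitable building blocks for bent squares.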

  7. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Al-Naffouri, Tareq Y.

    2016-01-01

    Estimating the values of unknown parameters from corrupted measured data faces a lot of challenges in ill-posed problems. In such problems, many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.
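The distinction the abstract draws, minimizing the estimator's mean-squared error rather than the data residual, can be made concrete on a diagonal toy problem. The model below (singular values, noise, grid) is an assumption for illustration, not the authors' algorithm:

```python
# Hedged sketch: in a diagonalized ill-posed problem y_i = s_i*x_i + noise,
# the Tikhonov estimate is x_hat_i = s_i*y_i / (s_i^2 + lam). Minimizing
# the *data residual* always drives lam -> 0, whereas the estimator's
# mean-squared error is minimized at a strictly positive lam that
# suppresses noise-dominated small singular values.

def estimate(lam, s, y):
    return [si * yi / (si * si + lam) for si, yi in zip(s, y)]

def mse(lam, s, y, x_true):
    xh = estimate(lam, s, y)
    return sum((a - b) ** 2 for a, b in zip(xh, x_true)) / len(x_true)

def residual(lam, s, y):
    xh = estimate(lam, s, y)
    return sum((si * xi - yi) ** 2 for si, xi, yi in zip(s, xh, y))

s = [1.0, 0.3, 0.05, 0.01]            # decaying singular values
x_true = [1.0, -1.0, 1.0, -1.0]
noise = [0.01, -0.02, 0.02, 0.02]
y = [si * xi + ni for si, xi, ni in zip(s, x_true, noise)]

grid = [0.0] + [10 ** (k / 2 - 5) for k in range(11)]   # 0, 1e-5 ... 1
lam_res = min(grid, key=lambda l: residual(l, s, y))
lam_mse = min(grid, key=lambda l: mse(l, s, y, x_true))
print("residual picks lam =", lam_res)   # 0: overfits the noise
print("MSE picks lam =", lam_mse)        # > 0: regularization helps
```

The residual criterion selects lam = 0 (a perfect but useless data fit), while the oracle MSE is minimized at a positive lam; practical methods, like the one in the abstract, must estimate that MSE-optimal point without knowing x_true.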

  8. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-11-29

    Estimating the values of unknown parameters from corrupted measured data faces a lot of challenges in ill-posed problems. In such problems, many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.

  9. Structural and biochemical analysis of nuclease domain of clustered regularly interspaced short palindromic repeat (CRISPR)-associated protein 3 (Cas3).

    Science.gov (United States)

    Mulepati, Sabin; Bailey, Scott

    2011-09-09

    RNA transcribed from clustered regularly interspaced short palindromic repeats (CRISPRs) protects many prokaryotes from invasion by foreign DNA such as viruses, conjugative plasmids, and transposable elements. Cas3 (CRISPR-associated protein 3) is essential for this CRISPR protection and is thought to mediate cleavage of the foreign DNA through its N-terminal histidine-aspartate (HD) domain. We report here the 1.8 Å crystal structure of the HD domain of Cas3 from Thermus thermophilus HB8. Structural and biochemical studies predict that this enzyme binds two metal ions at its active site. We also demonstrate that the single-stranded DNA endonuclease activity of this T. thermophilus domain is activated not by magnesium but by transition metal ions such as manganese and nickel. Structure-guided mutagenesis confirms the importance of the metal-binding residues for the nuclease activity and identifies other active site residues. Overall, these results provide a framework for understanding the role of Cas3 in the CRISPR system.

  10. Properties of regular polygons of coupled microring resonators.

    Science.gov (United States)

    Chremmos, Ioannis; Uzunoglu, Nikolaos

    2007-11-01

The resonant properties of a closed and symmetric cyclic array of N coupled microring resonators (a coupled-microring resonator regular N-gon) are determined analytically for the first time by applying the transfer matrix approach and the Floquet theorem for periodic propagation in cylindrically symmetric structures. By solving the corresponding eigenvalue problem with the field amplitudes in the rings as eigenvectors, it is shown that, for even or odd N, this photonic molecule possesses 1 + N/2 or (1 + N)/2 resonant frequencies, respectively. The condition for resonances is found to be identical to the familiar dispersion equation of the infinite coupled-microring resonator waveguide with a discrete wave vector. This result reveals the so far latent connection between the two optical structures and is based on the fact that, for a regular polygon, the field transfer matrix over two successive rings is independent of the polygon vertex angle. The properties of the resonant modes are discussed in detail using the illustration of Brillouin band diagrams. Finally, the practical application of a channel-dropping filter based on polygons with an even number of rings is also analyzed.

  11. Zinc oxide modified with benzylphosphonic acids as transparent electrodes in regular and inverted organic solar cell structures

    Energy Technology Data Exchange (ETDEWEB)

    Lange, Ilja; Reiter, Sina; Kniepert, Juliane; Piersimoni, Fortunato; Brenner, Thomas; Neher, Dieter, E-mail: neher@uni-potsdam.de [Institute of Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Strasse 24-25, 14476 Potsdam (Germany); Pätzel, Michael; Hildebrandt, Jana; Hecht, Stefan [Department of Chemistry and IRIS Adlershof, Humboldt-Universität zu Berlin, Brook-Taylor-Str. 2, 12489 Berlin (Germany)

    2015-03-16

    An approach is presented to modify the work function of solution-processed sol-gel derived zinc oxide (ZnO) over an exceptionally wide range of more than 2.3 eV. This approach relies on the formation of dense and homogeneous self-assembled monolayers based on phosphonic acids with different dipole moments. This allows us to apply ZnO as charge selective bottom electrodes in either regular or inverted solar cell structures, using poly(3-hexylthiophene):phenyl-C71-butyric acid methyl ester as the active layer. These devices compete with or even surpass the performance of the reference on indium tin oxide/poly(3,4-ethylenedioxythiophene) polystyrene sulfonate. Our findings highlight the potential of properly modified ZnO as electron or hole extracting electrodes in hybrid optoelectronic devices.

  12. Regularities of Multifractal Measures

    Indian Academy of Sciences (India)

First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in ℝ^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  13. One-step fabrication of superhydrophobic hierarchical structures by femtosecond laser ablation

    Energy Technology Data Exchange (ETDEWEB)

    Rukosuyev, Maxym V.; Lee, Jason [Mechanical Engineering, University of Victoria (Canada); Cho, Seong Jin; Lim, Geunbae [Mechanical Engineering, Pohang University of Science and Technology, Pohang (Korea, Republic of); Jun, Martin B.G., E-mail: mbgjun@uvic.ca [Mechanical Engineering, University of Victoria (Canada)

    2014-09-15

Highlights: • Superhydrophobic surface patterns by femtosecond laser ablation in open air. • Micron scale ridge-like structure with superimposed submicron convex features. • Hydrophobic or even superhydrophobic behavior with no additional silanization. - Abstract: Hydrophobic surface properties are sought after in many areas of research, engineering, and consumer product development. Traditionally, hydrophobic surfaces are produced by using various types of coatings. However, introduction of foreign material onto the surface is often undesirable as it changes the surface chemistry and cannot provide a long-lasting solution (i.e. reapplication is needed). Therefore, surface modification by transforming the base material itself can be preferable in many applications. Femtosecond laser ablation is one of the methods that can be used to create structures on the surface that will exhibit hydrophobic behavior. The goal of the presented research was to create micro- and nano-scale patterns that exhibit hydrophobic properties with no additional post-treatment. As a result, dual-scale patterned structures were created on the surfaces of steel, aluminum, and tungsten carbide samples. Ablation was performed in the open air with no subsequent treatment. The resultant surfaces appeared to be strongly hydrophobic or even superhydrophobic, with contact angle values of 140° and higher. In conclusion, the nature of surface hydrophobicity proved to be highly dependent on surface morphology, as the base materials used are intrinsically hydrophilic. It was also proven that the hydrophobicity-inducing structures can be manufactured using femtosecond laser machining in a single step with no subsequent post-treatment.

  14. One-step fabrication of superhydrophobic hierarchical structures by femtosecond laser ablation

    International Nuclear Information System (INIS)

    Rukosuyev, Maxym V.; Lee, Jason; Cho, Seong Jin; Lim, Geunbae; Jun, Martin B.G.

    2014-01-01

Highlights: • Superhydrophobic surface patterns by femtosecond laser ablation in open air. • Micron scale ridge-like structure with superimposed submicron convex features. • Hydrophobic or even superhydrophobic behavior with no additional silanization. - Abstract: Hydrophobic surface properties are sought after in many areas of research, engineering, and consumer product development. Traditionally, hydrophobic surfaces are produced by using various types of coatings. However, introduction of foreign material onto the surface is often undesirable as it changes the surface chemistry and cannot provide a long-lasting solution (i.e. reapplication is needed). Therefore, surface modification by transforming the base material itself can be preferable in many applications. Femtosecond laser ablation is one of the methods that can be used to create structures on the surface that will exhibit hydrophobic behavior. The goal of the presented research was to create micro- and nano-scale patterns that exhibit hydrophobic properties with no additional post-treatment. As a result, dual-scale patterned structures were created on the surfaces of steel, aluminum, and tungsten carbide samples. Ablation was performed in the open air with no subsequent treatment. The resultant surfaces appeared to be strongly hydrophobic or even superhydrophobic, with contact angle values of 140° and higher. In conclusion, the nature of surface hydrophobicity proved to be highly dependent on surface morphology, as the base materials used are intrinsically hydrophilic. It was also proven that the hydrophobicity-inducing structures can be manufactured using femtosecond laser machining in a single step with no subsequent post-treatment.

  15. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method.
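The core idea above, adapting the regularization strength by minimizing validation error, can be sketched on a toy model. The example below is an assumption-heavy stand-in (a one-parameter ridge model with a coarse grid rather than the paper's gradient-based updates on a neural classifier):

```python
# Minimal sketch: choosing a weight-decay (regularization) parameter by
# minimizing held-out validation error, for the one-parameter linear
# model y = w*x fitted by ridge: w(lam) = sum(x*y) / (sum(x^2) + lam).

def fit(train, lam):
    sxy = sum(x * y for x, y in train)
    sxx = sum(x * x for x, _ in train)
    return sxy / (sxx + lam)

def val_error(w, val):
    return sum((w * x - y) ** 2 for x, y in val) / len(val)

# True slope 1.0; the small training set carries correlated noise that
# ordinary least squares (lam = 0) fits too eagerly.
train = [(1.0, 1.4), (2.0, 2.6), (-1.0, -1.5)]
val = [(3.0, 3.0), (-2.0, -2.0), (0.5, 0.5)]

lam_grid = [0.0, 0.1, 0.3, 1.0, 3.0, 10.0]
lam_best = min(lam_grid, key=lambda l: val_error(fit(train, l), val))
print("selected weight decay:", lam_best,
      "slope:", round(fit(train, lam_best), 3))
```

The validation set rejects lam = 0 (an overfit slope of 1.35) in favor of a positive weight decay that pulls the estimate back toward the true slope.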

  16. Application of regularization technique in image super-resolution algorithm via sparse representation

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Huang, Hui; Zheng, Li-xin

    2017-11-01

    To make use of the prior knowledge of the image more effectively and restore more details of the edges and structures, a novel sparse coding objective function is proposed by applying the principle of the non-local similarity and manifold learning on the basis of super-resolution algorithm via sparse representation. Firstly, the non-local similarity regularization term is constructed by using the similar image patches to preserve the edge information. Then, the manifold learning regularization term is constructed by utilizing the locally linear embedding approach to enhance the structural information. The experimental results validate that the proposed algorithm has a significant improvement compared with several super-resolution algorithms in terms of the subjective visual effect and objective evaluation indices.
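The non-local similarity regularization term mentioned above can be sketched at the patch level. The weighting scheme below (Gaussian weights with bandwidth `h`) is an assumption for illustration, not the paper's exact construction:

```python
# Rough sketch of a non-local similarity regularization term: each patch
# is predicted as a weighted combination of its most similar patches, and
# the regularizer penalizes deviation from that prediction, which tends
# to preserve repeating edge structure.

import math

def patch_distance(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def nonlocal_weights(target, candidates, h=10.0):
    """Similarity weights w_j ~ exp(-||p - p_j||^2 / h), normalized."""
    w = [math.exp(-patch_distance(target, c) / h) for c in candidates]
    s = sum(w)
    return [wi / s for wi in w]

def nonlocal_penalty(target, candidates, h=10.0):
    """Regularization term ||p - sum_j w_j * p_j||^2."""
    w = nonlocal_weights(target, candidates, h)
    pred = [sum(wj * c[i] for wj, c in zip(w, candidates))
            for i in range(len(target))]
    return sum((a - b) ** 2 for a, b in zip(target, pred))

edge = [0, 0, 10, 10]                      # a patch containing an edge
similar = [[0, 0, 9, 10], [0, 1, 10, 10]]  # recurring similar patches
flat = [[5, 5, 5, 5], [4, 5, 5, 4]]        # dissimilar patches

# Self-similar patches yield a small penalty; dissimilar ones a large one.
print(nonlocal_penalty(edge, similar), nonlocal_penalty(edge, flat))
```

Because natural images repeat their edge patches, this penalty stays small for faithful reconstructions of edges and grows when an edge is blurred away, which is how it preserves edge information during super-resolution.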

  17. The atomic structure and chemistry of Fe-rich steps on antiphase boundaries in Ti-doped Bi0.85Nd0.15FeO3

    Directory of Open Access Journals (Sweden)

    Ian MacLaren

    2014-06-01

    Full Text Available Stepped antiphase boundaries are frequently observed in Ti-doped Bi0.85Nd0.15FeO3, related to the novel planar antiphase boundaries reported recently. The atomic structure and chemistry of these steps are determined by a combination of high angle annular dark field and bright field scanning transmission electron microscopy imaging, together with electron energy loss spectroscopy. The core of these steps is found to consist of 4 edge-sharing FeO6 octahedra. The structure is confirmed by image simulations using a frozen phonon multislice approach. The steps are also found to be negatively charged and, like the planar boundaries studied previously, result in polarisation of the surrounding perovskite matrix.

  18. Progressive image denoising through hybrid graph Laplacian regularization: a unified framework.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhao, Debin; Zhai, Guangtao; Gao, Wen

    2014-04-01

    Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image by Laplacian pyramid, then progressively recover the degraded image in the scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by implicit kernel is learned, which simultaneously minimizes the least-squares error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges, which could not be recovered at previous scales. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms.

  19. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
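The effect of constraining the condition number can be sketched by clipping the spectrum of the sample covariance. Note this is a simplified illustration: the paper derives the truncation level by maximum likelihood, whereas here it is fixed naively from the largest eigenvalue; the dimensions and the bound kappa_max are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Large p, small n": 30 variables but only 20 samples, so the sample
# covariance S is singular and badly conditioned.
X = rng.normal(size=(20, 30))
S = np.cov(X, rowvar=False)

def cond_reg(S, kappa_max=50.0):
    # Clip the spectrum into [tau, kappa_max * tau], which caps the
    # condition number of the estimate at kappa_max.
    lam, V = np.linalg.eigh(S)
    tau = lam.max() / kappa_max
    return (V * np.clip(lam, tau, kappa_max * tau)) @ V.T

Sig = cond_reg(S)
ev = np.linalg.eigvalsh(Sig)
print(ev.max() / ev.min())  # condition number, at most kappa_max
```

The clipped estimator is always symmetric positive definite, hence invertible, even when the raw sample covariance is singular.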

  20. Mimicking lizard-like surface structures upon ultrashort laser pulse irradiation of inorganic materials

    Science.gov (United States)

    Hermens, U.; Kirner, S. V.; Emonts, C.; Comanns, P.; Skoulas, E.; Mimidis, A.; Mescheder, H.; Winands, K.; Krüger, J.; Stratakis, E.; Bonse, J.

    2017-10-01

    Inorganic materials, such as steel, were functionalized by ultrashort laser pulse irradiation (fs- to ps-range) to modify the surface's wetting behavior. The laser processing was performed by scanning the laser beam across the surface of initially polished flat sample material. A systematic experimental study of the laser processing parameters (peak fluence, scan velocity, line overlap) allowed the identification of different regimes associated with characteristic surface morphologies (laser-induced periodic surface structures, grooves, spikes, etc.). Analyses of the surface using optical as well as scanning electron microscopy revealed morphologies providing the optimum similarity to the natural skin of lizards. For mimicking skin structures of moisture-harvesting lizards towards an optimization of the surface wetting behavior, a two-step laser processing strategy was additionally established for realizing hierarchical microstructures. In this approach, micrometer-scaled capillaries (step 1) were superimposed by a laser-generated regular array of small dimples (step 2). Optical focus variation imaging measurements finally disclosed the three-dimensional topography of the laser processed surfaces derived from lizard skin structures. The functionality of these surfaces was analyzed in view of their wetting properties.

  1. Two-step excitation structure changes of luminescence centers and strong tunable blue emission on surface of silica nanospheres

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Lei, E-mail: nanoyang@qq.com; Jiang, Zhongcheng; Dong, Jiazhang; Zhang, Liuqian [Hunan University, College of Materials Science and Engineering (China); Pan, Anlian, E-mail: anlian.pan@gmail.com; Zhuang, Xiujuan [Hunan University, Key Laboratory for Micro-Nano Physics and Technology of Hunan Province (China)

    2015-10-15

    We report a scheme for investigating two-step stimulated structure change of luminescence centers. Amorphous silica nanospheres with uniform diameters of 9–15 nm have been synthesized by the Stöber method. A strong hydroxyl-related infrared absorption band is observed in the infrared spectrum. The surface hydroxyl groups exert great influence on the luminescent behavior of silica. They provide stable and intermediate energy states to accommodate excited electrons. The existence of these surface states reduces the energy barrier of photochemical reactions, creating conditions for the two-step excitation process. Careful examination of the excitation and emission processes shows that the nearest excitation band is absent in both the optical absorption spectrum and the excitation spectrum. This later generated state confirms the generation of new luminescence centers as well as the existence of photochemical reactions. Stimulated by different energies, the two-step excitation process impels different photochemical reactions, prompting generation of different lattice defects on the surface area of silica. Thereby, tunable luminescence is achieved. After thermal treatment, a strong gap excitation band appears with the disappearance of the strong surface excitation band. The strong blue luminescence also disappears. This research is significant for precisely introducing structural defects and controlling the position of luminescence peaks.

  2. Dose domain regularization of MLC leaf patterns for highly complex IMRT plans

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Dan; Yu, Victoria Y.; Ruan, Dan; Cao, Minsong; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States); O’Connor, Daniel [Department of Mathematics, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2015-04-15

    Purpose: The advent of automated beam orientation and fluence optimization enables more complex intensity modulated radiation therapy (IMRT) planning using an increasing number of fields to exploit the expanded solution space. This has created a challenge in converting complex fluences to robust multileaf collimator (MLC) segments for delivery. A novel method to regularize the fluence map and simplify MLC segments is introduced to maximize delivery efficiency, accuracy, and plan quality. Methods: In this work, we implemented a novel approach to regularize optimized fluences in the dose domain. The treatment planning problem was formulated in an optimization framework to minimize the segmentation-induced dose distribution degradation subject to a total variation regularization to encourage piecewise smoothness in fluence maps. The optimization problem was solved using a first-order primal-dual algorithm known as the Chambolle-Pock algorithm. Plans for 2 GBM, 2 head and neck, and 2 lung patients were created using 20 automatically selected and optimized noncoplanar beams. The fluence was first regularized using Chambolle-Pock and then stratified into equal steps, and the MLC segments were calculated using a previously described level reducing method. Isolated apertures with sizes smaller than preset thresholds of 1–3 bixels, which are square units of an IMRT fluence map from MLC discretization, were removed from the MLC segments. Performance of the dose domain regularized (DDR) fluences was compared to direct stratification and direct MLC segmentation (DMS) of the fluences using level reduction without dose domain fluence regularization. Results: For all six cases, the DDR method increased the average planning target volume dose homogeneity (D95/D5) from 0.814 to 0.878 while maintaining equivalent dose to organs at risk (OARs). Regularized fluences were more robust to MLC sequencing, particularly to the stratification and small aperture removal. The maximum and
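The regularization step rests on total-variation minimization solved with the Chambolle-Pock primal-dual algorithm; a one-dimensional sketch conveys the mechanics. The actual planning problem is two-dimensional and formulated in the dose domain; the signal, lambda, and iteration count below are illustrative only:

```python
import numpy as np

def tv_denoise_cp(f, lam=1.0, n_iter=500):
    """min_x 0.5*||x - f||^2 + lam*||Dx||_1 via Chambolle-Pock,
    where D is the forward-difference operator."""
    n = len(f)
    D = lambda x: np.diff(x)                                       # R^n -> R^{n-1}
    Dt = lambda p: np.concatenate(([-p[0]], -np.diff(p), [p[-1]])) # adjoint of D
    tau = sigma = 0.25                                             # tau*sigma*||D||^2 < 1
    x = f.copy()
    x_bar = f.copy()
    p = np.zeros(n - 1)
    for _ in range(n_iter):
        p = np.clip(p + sigma * D(x_bar), -lam, lam)    # dual prox (projection)
        x_new = (x + tau * (f - Dt(p))) / (1 + tau)     # prox of 0.5*||.-f||^2
        x_bar = 2 * x_new - x
        x = x_new
    return x

rng = np.random.default_rng(2)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.normal(size=100)
x = tv_denoise_cp(f, lam=0.5)
```

The output is encouraged to be piecewise constant, which is exactly the property that makes regularized fluences easier to stratify into MLC segments.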

  3. Brans-Dicke Theory with Λ>0: Black Holes and Large Scale Structures.

    Science.gov (United States)

    Bhattacharya, Sourav; Dialektopoulos, Konstantinos F; Romano, Antonio Enea; Tomaras, Theodore N

    2015-10-30

    A step-by-step approach is followed to study cosmic structures in the context of Brans-Dicke theory with positive cosmological constant Λ and parameter ω. First, it is shown that regular stationary black-hole solutions not only have constant Brans-Dicke field ϕ, but can exist only for ω=∞, which forces the theory to coincide with general relativity. Generalizations of the theory in order to evade this black-hole no-hair theorem are presented. It is also shown that in the absence of a stationary cosmological event horizon in the asymptotic region, a stationary black-hole horizon can support a nontrivial Brans-Dicke hair. Even more importantly, it is shown next that the presence of a stationary cosmological event horizon rules out any regular stationary solution, appropriate for the description of a star. Thus, to describe a star one has to assume that there is no such stationary horizon in the faraway asymptotic region. Under this implicit assumption generic spherical cosmic structures are studied perturbatively and it is shown that only for ω>0 or ω≲-5 their predicted maximum sizes are consistent with observations. We also point out how many of the conclusions of this work differ qualitatively from the Λ=0 spacetimes.

  4. Performance of an attention-demanding task during treadmill walking shifts the noise qualities of step-to-step variation in step width.

    Science.gov (United States)

    Grabiner, Mark D; Marone, Jane R; Wyatt, Marilynn; Sessoms, Pinata; Kaufman, Kenton R

    2018-06-01

    The fractal scaling evident in the step-to-step fluctuations of stepping-related time series reflects, to some degree, neuromotor noise. The primary purpose of this study was to determine the extent to which the fractal scaling of step width, step width, and step width variability are affected by performance of an attention-demanding task. We hypothesized that the attention-demanding task would shift the structure of the step width time series toward white, uncorrelated noise. Subjects performed two 10-min treadmill walking trials, a control trial of undisturbed walking and a trial during which they performed a mental arithmetic/texting task. Motion capture data were converted to step width time series, the fractal scaling of which was determined from their power spectra. Fractal scaling decreased by 22% during the texting condition. Step width and step width variability increased by 19% and five percent, respectively. The change of the fractal scaling of step width is consistent with increased cognitive demand and suggests a transition in the characteristics of the signal noise. This may reflect an important advance toward the understanding of the manner in which neuromotor noise contributes to some types of falls. However, further investigation of the repeatability of the results, the sensitivity of the results to progressive increases in cognitive load imposed by attention-demanding tasks, and the extent to which the results can be generalized to the gait of older adults seems warranted. Copyright © 2018 Elsevier B.V. All rights reserved.
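The spectral route to fractal scaling used here can be sketched as a log-log slope fit of the power spectrum: white (uncorrelated) noise has a flat spectrum, while integrated noise falls off roughly as 1/f². The signals below are synthetic stand-ins, not gait data:

```python
import numpy as np

def spectral_slope(x):
    """Estimate the power-law exponent beta of the power spectrum
    S(f) ~ f^beta by a least-squares fit in log-log coordinates."""
    X = np.fft.rfft(x - x.mean())
    psd = np.abs(X[1:]) ** 2          # drop the zero-frequency bin
    freq = np.fft.rfftfreq(len(x))[1:]
    return np.polyfit(np.log(freq), np.log(psd), 1)[0]

rng = np.random.default_rng(3)
white = rng.normal(size=4096)   # uncorrelated noise: beta near 0
brown = np.cumsum(white)        # integrated noise: beta near -2
print(spectral_slope(white), spectral_slope(brown))
```

A shift of the estimated exponent toward zero is the quantitative signature of "whitening" that the study hypothesizes under an attention-demanding task.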

  5. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    Science.gov (United States)

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Over Convex Sets (POCS) method was used to solve the formulated regularized SPIRiT problem. However, the quality of the reconstructed image still needs to be improved. Though methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, these methods always demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms; the latter is solved by our proposed split-Bregman-based denoising algorithm, and the Barzilai-Borwein method is adopted to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels. In particular, our proposal is 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.
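The Barzilai-Borwein step-size rule the authors adopt can be sketched on a toy quadratic: the step is computed from successive iterate and gradient differences, t = (s·s)/(s·y). The matrix, dimensions, and iteration count here are illustrative, not the SPIRiT problem:

```python
import numpy as np

# Minimize the convex quadratic 0.5*x'Ax - b'x with gradient descent whose
# step size follows the Barzilai-Borwein (BB1) rule.
rng = np.random.default_rng(4)
M = rng.normal(size=(20, 20))
A = M @ M.T + np.eye(20)          # symmetric positive definite
b = rng.normal(size=20)
x_star = np.linalg.solve(A, b)    # exact minimizer, for reference

x = np.zeros(20)
g = A @ x - b
t = 1e-3                          # first step: small fixed size
for _ in range(100):
    x_new = x - t * g
    g_new = A @ x_new - b
    if np.linalg.norm(g_new) < 1e-12:   # converged; avoid a 0/0 step
        x, g = x_new, g_new
        break
    s, y_ = x_new - x, g_new - g
    t = (s @ s) / (s @ y_)        # BB1 step size
    x, g = x_new, g_new

print(np.linalg.norm(x - x_star))
```

The appeal of the rule is that it needs no line search: each step costs one gradient evaluation yet typically converges far faster than a fixed step.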

  6. Strength evaluation code STEP for brittle materials

    International Nuclear Information System (INIS)

    Ishihara, Masahiro; Futakawa, Masatoshi.

    1997-12-01

    In structural design using brittle materials such as graphite and ceramics, it is necessary to evaluate the strength of components under complex stress conditions. The strength of ceramic materials is said to be influenced by the stress distribution. However, structural design criteria have adopted simplified stress limits without taking account of the change of strength with the stress distribution. It is, therefore, important to evaluate the strength of a component on the basis of a fracture model for brittle materials. Consequently, STEP, a strength evaluation program for brittle fracture of ceramic materials based on competing-risk theory, was developed. Two different brittle fracture modes, a surface-layer fracture mode dominated by surface flaws and an internal fracture mode dominated by internal flaws, are treated in the STEP code. The STEP code uses stress calculation results, including those for structures of complex shape, from the generalized FEM stress analysis code ABAQUS, making it possible to evaluate brittle fracture strength for structures with complicated shapes. The code is therefore useful for evaluating the structural integrity of components of arbitrary shape, such as core graphite components in the HTTR, heat exchanger components made of ceramic materials, etc. This paper describes the basic equations applied in the STEP code, the code system combining the STEP and ABAQUS codes, and the result of the verification analysis. (author)
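A competing-risk combination of a surface and an internal fracture mode can be sketched with two-parameter Weibull laws: the survival probabilities of the two modes multiply, so their risks add in the exponent. All Weibull moduli and scale stresses below are invented for illustration and are not values from the STEP code:

```python
import math

def failure_probability(sigma, m_s=10.0, s0_s=120.0, m_v=8.0, s0_v=150.0):
    """Weakest-link failure probability under a uniform stress sigma (MPa)
    with a surface mode (m_s, s0_s) and a volume mode (m_v, s0_v), each a
    two-parameter Weibull law. Parameters are illustrative only."""
    risk = (sigma / s0_s) ** m_s + (sigma / s0_v) ** m_v
    return 1.0 - math.exp(-risk)

for s in (60, 100, 140):
    print(s, failure_probability(s))
```

Because the risks add, whichever mode dominates at a given stress level controls the predicted failure probability, which is the essence of the competing-risk treatment.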

  7. Structural and optical properties of Co-doped ZnO nanocrystallites prepared by a one-step solution route

    International Nuclear Information System (INIS)

    Li Ping; Wang Sha; Li Jibiao; Wei Yu

    2012-01-01

    Zinc oxide (ZnO) nanocrystallites with different Co-doping levels were successfully synthesized by a simple one-step solution route at low temperature (95 °C) in this study. The structure and morphology of the samples thus obtained were characterized by XRD, EDS, XPS and FESEM. Results show that cobalt ions, in the oxidation state of Co²⁺, replace Zn²⁺ ions in the ZnO lattice without changing its wurtzite structure. The dopant content varies from 0.59% to 5.39%, based on Co-doping levels. The pure ZnO particles exhibit well-defined 3D flower-like morphology with an average size of 550 nm, while the particles obtained after Co-doping are mostly cauliflower-like nanoclusters with an average size of 120 nm. Both the flower-like pure ZnO and the cauliflower-like Co:ZnO nanoclusters are composed of densely arrayed nanorods. The optical properties of the ZnO nanocrystallites following Co-doping were also investigated by UV–visible absorption and photoluminescence spectra. Our results indicate that Co-doping can change the energy-band structure and effectively adjust the luminescence properties of ZnO nanocrystallites. - Highlights: → Co-doped ZnO nanocrystallites were synthesized via a simple one-step solution route. → Co²⁺ ions incorporated into the ZnO lattice without changing its wurtzite structure. → Co-doping changed the energy band structure of ZnO. → Co-doping effectively adjusted the luminescence properties of ZnO nanocrystallites.

  8. Structural and optical properties of Co-doped ZnO nanocrystallites prepared by a one-step solution route

    Energy Technology Data Exchange (ETDEWEB)

    Li Ping, E-mail: lipingchina@yahoo.com.cn [Provincial Key Laboratory of Inorganic Nanomaterials, School of Chemistry and Materials Science, Hebei Normal University, 113 Yuhua Road, Shijiazhuang 050016, Hebei (China); Wang Sha; Li Jibiao; Wei Yu [Provincial Key Laboratory of Inorganic Nanomaterials, School of Chemistry and Materials Science, Hebei Normal University, 113 Yuhua Road, Shijiazhuang 050016, Hebei (China)

    2012-01-15

    Zinc oxide (ZnO) nanocrystallites with different Co-doping levels were successfully synthesized by a simple one-step solution route at low temperature (95 °C) in this study. The structure and morphology of the samples thus obtained were characterized by XRD, EDS, XPS and FESEM. Results show that cobalt ions, in the oxidation state of Co²⁺, replace Zn²⁺ ions in the ZnO lattice without changing its wurtzite structure. The dopant content varies from 0.59% to 5.39%, based on Co-doping levels. The pure ZnO particles exhibit well-defined 3D flower-like morphology with an average size of 550 nm, while the particles obtained after Co-doping are mostly cauliflower-like nanoclusters with an average size of 120 nm. Both the flower-like pure ZnO and the cauliflower-like Co:ZnO nanoclusters are composed of densely arrayed nanorods. The optical properties of the ZnO nanocrystallites following Co-doping were also investigated by UV–visible absorption and photoluminescence spectra. Our results indicate that Co-doping can change the energy-band structure and effectively adjust the luminescence properties of ZnO nanocrystallites. - Highlights: → Co-doped ZnO nanocrystallites were synthesized via a simple one-step solution route. → Co²⁺ ions incorporated into the ZnO lattice without changing its wurtzite structure. → Co-doping changed the energy band structure of ZnO. → Co-doping effectively adjusted the luminescence properties of ZnO nanocrystallites.

  9. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    Science.gov (United States)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance to the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
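Bending energy as a quantitative measure of local terrain smoothness can be sketched with finite differences on a height grid: it is the integral of the squared second derivatives, zero for a plane and positive for undulating terrain. The grid and surfaces below are synthetic, not ALS data:

```python
import numpy as np

def bending_energy(z, h=1.0):
    """Approximate thin-plate bending energy integral(z_xx^2 + 2*z_xy^2 + z_yy^2)
    of a height grid z with spacing h, using central differences."""
    zx = np.gradient(z, h, axis=1)
    zy = np.gradient(z, h, axis=0)
    zxx = np.gradient(zx, h, axis=1)
    zxy = np.gradient(zx, h, axis=0)
    zyy = np.gradient(zy, h, axis=0)
    return float(np.sum(zxx ** 2 + 2 * zxy ** 2 + zyy ** 2) * h * h)

y, x = np.meshgrid(np.arange(40.0), np.arange(40.0), indexing="ij")
flat = 2.0 * x + 3.0 * y               # planar terrain: zero bending energy
rough = np.sin(0.3 * x) * np.sin(0.3 * y)  # undulating terrain: positive energy
print(bending_energy(flat), bending_energy(rough))
```

A filter threshold driven by such a measure tightens automatically on smooth terrain and relaxes on rough terrain, which is the self-adaptive behavior the ASF exploits.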

  10. Regularization of the Fourier series of discontinuous functions by various summation methods

    Energy Technology Data Exchange (ETDEWEB)

    Ahmad, S.S.; Beghi, L. (Padua Univ. (Italy). Seminario Matematico)

    1983-07-01

    In this paper the regularization, by various summation methods, of the Fourier series of functions containing discontinuities of the first and second kind is studied, and the results of the numerical analyses referring to some typical periodic functions are presented. In addition to the Cesàro and Lanczos weightings, a new (i.e. cosine) weighting for accelerating the convergence rate is proposed. A comparison with the results obtained by Garibotti and Massaro with the punctual Pade approximants (PPA) technique in the case of a periodic step function is also carried out.
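Weightings of this kind can be illustrated with the Lanczos sigma factors applied to the Fourier partial sum of a square wave, where they suppress the Gibbs overshoot at the discontinuity. The wave, truncation order, and sigma-factor form below are a standard textbook sketch, not the paper's exact scheme:

```python
import numpy as np

def square_wave_partial_sum(t, N, weight=None):
    """Fourier partial sum (4/pi) * sum_k sin((2k-1)t)/(2k-1) of the unit
    square wave, optionally damped by per-harmonic weights sigma(n, N)."""
    s = np.zeros_like(t)
    for k in range(1, N + 1):
        n = 2 * k - 1
        sigma = 1.0 if weight is None else weight(n, N)
        s += sigma * np.sin(n * t) / n
    return 4.0 / np.pi * s

lanczos = lambda n, N: np.sinc(n / (2 * N))   # Lanczos sigma factor

t = np.linspace(0.01, np.pi - 0.01, 2000)
raw = square_wave_partial_sum(t, 25)                      # Gibbs overshoot
smooth = square_wave_partial_sum(t, 25, weight=lanczos)   # damped
print(raw.max(), smooth.max())
```

The unweighted sum overshoots the unit plateau by roughly 9% near the jump; the sigma-weighted sum stays close to 1 at the cost of a slightly wider transition.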

  11. Regularized κ-distributions with non-diverging moments

    Science.gov (United States)

    Scherer, K.; Fichtner, H.; Lazar, M.

    2017-12-01

    For various plasma applications the so-called (non-relativistic) κ-distribution is widely used to reproduce and interpret the suprathermal particle populations exhibiting a power-law distribution in velocity or energy. Despite its reputation, the standard κ-distribution as a concept is still disputable, mainly due to the velocity moments M_l, which make a macroscopic characterization possible but whose existence is restricted to low orders l. Even the definition of the κ-distribution itself is conditioned by the existence of the moment of order l = 2 (i.e., the kinetic temperature), which is satisfied only for κ > 3/2. In order to resolve these critical limitations we introduce the regularized κ-distribution with non-diverging moments. For the evaluation of all velocity moments a general analytical expression is provided, enabling a significant step towards a macroscopic (fluid-like) description of space plasmas, and, in general, any system of κ-distributed particles.
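The divergence of high-order moments, and its cure by regularization, can be checked numerically. The sketch below integrates the l-th moment integrand of an isotropic κ-distribution with and without an exponential cutoff; the cutoff form exp(-(ξv)²) and all parameter values are illustrative assumptions, not the paper's exact normalization:

```python
import numpy as np

def moment(l, kappa, xi, vmax):
    """Trapezoidal integral of v^(l+2) * (1 + v^2/kappa)^(-kappa-1) * exp(-(xi*v)^2)
    from 0 to vmax; xi = 0 gives the standard kappa-distribution integrand,
    xi > 0 the regularized (cut-off) one. Constants are omitted."""
    v = np.linspace(0.0, vmax, 200001)
    f = v ** (l + 2) * (1.0 + v ** 2 / kappa) ** (-kappa - 1.0) * np.exp(-(xi * v) ** 2)
    return float(np.sum((f[1:] + f[:-1]) * 0.5 * (v[1] - v[0])))

# The l = 4 moment with kappa = 2 needs l < 2*kappa - 1 = 3, so the standard
# integral keeps growing with the upper limit, while the regularized one converges.
std_50, std_100 = moment(4, 2.0, 0.0, 50), moment(4, 2.0, 0.0, 100)
reg_50, reg_100 = moment(4, 2.0, 0.1, 50), moment(4, 2.0, 0.1, 100)
print(std_100 / std_50, reg_100 / reg_50)
```

Doubling the integration limit roughly doubles the standard integral but leaves the regularized one essentially unchanged, which is the "non-diverging moments" property in miniature.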

  12. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  13. Dynamics of coherent states in regular and chaotic regimes of the non-integrable Dicke model

    Science.gov (United States)

    Lerma-Hernández, S.; Chávez-Carlos, J.; Bastarrachea-Magnani, M. A.; López-del-Carpio, B.; Hirsch, J. G.

    2018-04-01

    The quantum dynamics of initial coherent states is studied in the Dicke model and correlated with the dynamics, regular or chaotic, of their classical limit. Analytical expressions for the survival probability, i.e. the probability of finding the system in its initial state at time t, are provided in the regular regions of the model. The results for regular regimes are compared with those of the chaotic ones. It is found that initial coherent states in regular regions have a much longer equilibration time than those located in chaotic regions. The properties of the distributions for the initial coherent states in the Hamiltonian eigenbasis are also studied. It is found that for regular states the components with non-negligible contribution are organized in sequences of energy levels distributed according to Gaussian functions. In the case of chaotic coherent states, the energy components do not have a simple structure and the number of participating energy levels is larger than in the regular cases.

  14. Nonlocal discrete regularization on weighted graphs: a framework for image and manifold processing.

    Science.gov (United States)

    Elmoataz, Abderrahim; Lezoray, Olivier; Bougleux, Sébastien

    2008-07-01

    We introduce a nonlocal discrete regularization framework on weighted graphs of arbitrary topologies for image and manifold processing. The approach considers the problem as a variational one, which consists of minimizing a weighted sum of two energy terms: a regularization one that uses a discrete weighted p-Dirichlet energy and an approximation one. This is the discrete analogue of recent continuous Euclidean nonlocal regularization functionals. The proposed formulation leads to a family of simple and fast nonlinear processing methods based on the weighted p-Laplace operator, parameterized by the degree p of regularity, the graph structure and the graph weight function. These discrete processing methods provide a graph-based version of recently proposed semi-local or nonlocal processing methods used in image and mesh processing, such as the bilateral filter, the TV digital filter or the nonlocal means filter. It works with equal ease on regular 2-D and 3-D images, manifolds or any data. We illustrate the abilities of the approach by applying it to various types of images, meshes, manifolds, and data represented as graphs.
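For the p = 2 case the variational problem has a closed-form linear solve, which can be sketched on a path graph. The graph, weights, signal, and fidelity weight mu below are illustrative; the framework itself covers general p and arbitrary weighted graphs:

```python
import numpy as np

def graph_regularize(x0, W, mu=1.0):
    # Minimizer of the p = 2 energy x'Lx + mu*||x - x0||^2, where
    # L = D - W is the graph Laplacian: solve (L + mu*I) x = mu * x0.
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.solve(L + mu * np.eye(len(x0)), mu * x0)

# Path graph with unit weights: regularization smooths a noisy ramp.
n = 50
W = np.zeros((n, n))
i = np.arange(n - 1)
W[i, i + 1] = W[i + 1, i] = 1.0
rng = np.random.default_rng(5)
x0 = np.linspace(0.0, 1.0, n) + 0.2 * rng.normal(size=n)
x = graph_regularize(x0, W, mu=1.0)
```

Nonlocal behavior comes entirely from the weight function: replacing the path-graph weights with patch-similarity weights turns the same solve into a nonlocal-means-like filter.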

  15. High accuracy step gauge interferometer

    Science.gov (United States)

    Byman, V.; Jaakkola, T.; Palosuo, I.; Lassila, A.

    2018-05-01

    Step gauges are convenient transfer standards for the calibration of coordinate measuring machines. A novel interferometer for step gauge calibrations implemented at VTT MIKES is described. The four-pass interferometer follows Abbe’s principle and measures the position of the inductive probe attached to a measuring head. The measuring head of the instrument is connected to a balanced boom above the carriage by a piezo translation stage. A key part of the measuring head is an invar structure on which the inductive probe and the corner cubes of the measuring arm of the interferometer are attached. The invar structure can be elevated so that the probe is raised without breaking the laser beam. During probing, the bending of the probe and the interferometer readings are recorded and the measurement face position is extrapolated to zero force. The measurement process is fully automated and the face positions of the steps can be measured up to a length of 2 m. Ambient conditions are measured continuously and the refractive index of air is compensated for. Before measurements the step gauge is aligned with an integrated 2D coordinate measuring system. The expanded uncertainty of step gauge calibration is U = √((64 nm)² + (88 × 10⁻⁹ · L)²).
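The stated uncertainty budget combines a constant term with a length-proportional term in quadrature, so it can be evaluated directly. A small sketch (L in metres, U returned in nanometres):

```python
import math

def step_gauge_U(L):
    """Expanded uncertainty of the step gauge calibration,
    U = sqrt((64 nm)^2 + (88e-9 * L)^2), with L in metres, result in nm."""
    return math.sqrt(64.0 ** 2 + (88.0 * L) ** 2)

for L in (0.5, 1.0, 2.0):
    print(L, step_gauge_U(L))
```

At L = 0 the constant 64 nm term dominates; at the full 2 m range the length-proportional term (176 nm) dominates and the combined value is about 187 nm.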

  16. Discovery of deep and shallow trap states from step structures of rutile TiO2 vicinal surfaces by second harmonic and sum frequency generation spectroscopy

    International Nuclear Information System (INIS)

    Takahashi, Hiroaki; Watanabe, Ryosuke; Miyauchi, Yoshihiro; Mizutani, Goro

    2011-01-01

    In this report, local electronic structures of steps and terraces on rutile TiO₂ single crystal faces were studied by second harmonic and sum frequency generation (SHG/SFG) spectroscopy. We attained selective measurement of the local electronic states of the step bunches formed on the vicinal (17 18 1) and (15 13 0) surfaces using a recently developed step-selective probing technique. The electronic structures of the flat (110)-(1×1) (the terrace face of the vicinal surfaces) and (011)-(2×1) surfaces were also discussed. The SHG/SFG spectra showed that step structures are mainly responsible for the formation of trap states, since significant resonances from the trap states were observed only from the vicinal surfaces. We detected deep hole trap (DHT) states and shallow electron trap (SET) states selectively from the step bunches on the vicinal surfaces. Detailed analysis of the SHG/SFG spectra showed that the DHT and SET states are more likely to be induced at the top edges of the step bunches than on their hillsides. Unlike the SET states, the DHT states were observed only at the step bunches parallel to [1 1 1] [equivalent to the step bunches formed on the (17 18 1) surface]. Photocatalytic activity for each TiO₂ sample was also measured through methylene blue photodegradation reactions and was found to follow the sequence: (110) < (17 18 1) < (15 13 0) < (011), indicating that steps along [0 0 1] are more reactive than steps along [1 1 1]. This result implies that the presence of the DHT states observed from the step bunches parallel to [1 1 1] did not effectively contribute to the methylene blue photodegradation reactions.

  17. Atomic Step Formation on Sapphire Surface in Ultra-precision Manufacturing

    Science.gov (United States)

    Wang, Rongrong; Guo, Dan; Xie, Guoxin; Pan, Guoshun

    2016-01-01

    Surfaces with controlled atomic step structures as substrates are highly relevant to desirable performances of materials grown on them, such as light emitting diode (LED) epitaxial layers, nanotubes and nanoribbons. However, very limited attention has been paid to the step formation in manufacturing process. In the present work, investigations have been conducted into this step formation mechanism on the sapphire c (0001) surface by using both experiments and simulations. The step evolutions at different stages in the polishing process were investigated with atomic force microscopy (AFM) and high resolution transmission electron microscopy (HRTEM). The simulation of idealized steps was constructed theoretically on the basis of experimental results. It was found that (1) the subtle atomic structures (e.g., steps with different sawteeth, as well as steps with straight and zigzag edges), (2) the periodicity and (3) the degree of order of the steps were all dependent on surface composition and miscut direction (step edge direction). A comparison between experimental results and idealized step models of different surface compositions has been made. It has been found that the structure on the polished surface was in accordance with some surface compositions (the model of single-atom steps: Al steps or O steps). PMID:27444267

  18. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

    Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, in many characters the phonetic radical has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions, resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of their phonetic radicals. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement in Chinese) were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular-, and pseudo-character) as repeated measures (F1 or between subject

  19. Reduction of Nambu-Poisson Manifolds by Regular Distributions

    Science.gov (United States)

    Das, Apurba

    2018-03-01

    The version of Marsden-Ratiu reduction theorem for Nambu-Poisson manifolds by a regular distribution has been studied by Ibáñez et al. In this paper we show that the reduction is always ensured unless the distribution is zero. Next we extend the more general Falceto-Zambon Poisson reduction theorem for Nambu-Poisson manifolds. Finally, we define gauge transformations of Nambu-Poisson structures and show that these transformations commute with the reduction procedure.

  20. Structure analysis of ultra-thin films. STM/AFM. Chousumaku no kozo kaiseki. STM/AFM

    Energy Technology Data Exchange (ETDEWEB)

    Nozoe, H; Yumura, M [National Institute of Materials and Chemical Research, Tsukuba (Japan)

    1994-03-30

    Fullerene (C60) and carbon nanotubes are expected to be new carbon structures. This article describes the observation results of C60 and carbon nanotubes by means of STM (scanning tunneling microscopy). The STM images of C60 thin films obtained by annealing at 290°C are illustrated. It was confirmed that C60 monomolecular thin films are formed which conform to the substrate and have high regularity. The step height of the C60 monomolecular thin films coincided with the step height of the Cu (111) plane, which suggests that the step of the films reflects that of the Cu substrate. In the STM images under various bias voltages, various images of C60 with a three-fold axis of symmetry were observed. On the other hand, STM observation of carbon nanotubes with diameters of about 30 nm, which were separated and purified from the cathode deposits formed during the preparation of C60, showed that they have a concentric multilayer structure. 18 refs., 7 figs.

  1. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Full Text Available Background: Regularity effect can affect performance in prospective memory (PM), but little is known on the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting, and binding), short-term memory, and retrospective episodic memory, to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performances was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities only involved planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical

  2. J-regular rings with injectivities

    OpenAIRE

    Shen, Liang

    2010-01-01

    A ring R is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an n-generated small right ideal of R to R_R can be extended to one from R_R to R_R; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.

  3. Numerical response analysis of a large mat-type floating structure in regular waves; Matogata choogata futai kozobutsu no haro oto kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Yasuzawa, Y.; Kagawa, K.; Kitabayashi, K. [Kyushu University, Fukuoka (Japan); Kawano, D. [Mitsubishi Heavy Industries, Ltd., Tokyo (Japan)

    1997-08-01

    The theory and formulation for the numerical response analysis of a large floating structure in regular waves were given. This paper also reports the comparison between the experiment at the Shipping Research Institute of the Ministry of Transport and the results calculated using the numerical analysis codes developed in this study. The effect of the bending rigidity of a floating structure and of the wave direction on the dynamic response of the structure was examined by numerical calculation. When the ratio of structure length to incident wavelength (L/{lambda}) is lower, the response amplitude on the transmission side becomes higher in a wave-based response. The hydrodynamic elasticity exerts a dominant influence when L/{lambda} becomes higher. For obliquely incident waves, the maximum response does not necessarily appear on the incidence side, and the response distribution is also complicated; for example, portions exist where hardly any flexural amplitude appears. A long-structure response can be predicted from a short-structure response to some degree, but the response properties differ when the rigidity based on the similarity law differs greatly, even for the same L/{lambda}. For higher L/{lambda}, the wave response can be easily predicted when the diffraction force is replaced by a concentrated exciting force on the incidence side. 13 refs., 14 figs., 3 tabs.

  4. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.

  5. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
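
    The semiconvergence phenomenon underlying this work (the Krylov iteration count itself plays the role of the regularization parameter) can be sketched with CGLS, a related Krylov least-squares method used here as a numpy-only stand-in for MINRES and MR-II, which apply to symmetric systems. The test problem and all names below are our own illustration, not taken from the paper:

```python
import numpy as np

def cgls(A, b, n_iter, x_true):
    """CGLS (conjugate gradients on the normal equations); returns the
    error ||x_k - x_true|| after each iteration. For ill-posed problems
    the iteration count itself acts as the regularization parameter."""
    x = np.zeros(A.shape[1])
    r = b.copy()
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    errs = []
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        errs.append(float(np.linalg.norm(x - x_true)))
    return errs

# Ill-posed test problem: a Gaussian blurring matrix with noisy data.
n = 32
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) / 0.1) ** 2)
x_true = np.sin(np.pi * t)
rng = np.random.default_rng(1)
b = A @ x_true
b += 1e-2 * np.linalg.norm(b) / np.sqrt(n) * rng.standard_normal(n)
errs = cgls(A, b, 30, x_true)
# Semiconvergence: the error first decreases, then grows as noise is fitted.
```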

  6. Refined Modeling of Flexural Deformation of Layered Plates with a Regular Structure Made from Nonlinear Hereditary Materials

    Science.gov (United States)

    Yankovskii, A. P.

    2018-01-01

    On the basis of constitutive equations of the Rabotnov nonlinear hereditary theory of creep, the problem on the rheonomic flexural behavior of layered plates with a regular structure is formulated. Equations allowing one to describe, with different degrees of accuracy, the stress-strain state of such plates with account of their weakened resistance to transverse shear were obtained. From them, the relations of the nonclassical Reissner- and Reddy-type theories can be found. For axially loaded annular plates clamped at one edge and loaded quasistatically on the other edge, a simplified version of the refined theory, whose complexity is comparable to that of the Reissner and Reddy theories, is developed. The flexural strains of such metal-composite annular plates in short-term and long-term loadings at different levels of heat action are calculated. It is shown that, for plates with a relative thickness on the order of 1/10, neither the classical theory nor the traditional nonclassical Reissner and Reddy theories guarantee reliable results for deflections even with a rough 10% accuracy. The accuracy of these theories decreases at elevated temperatures and with time under long-term loadings of structures. On the basis of the relations of the refined theory, it is revealed that, in bending of layered metal-composite heat-sensitive plates under elevated temperatures, marked edge effects arise in the neighborhood of the supported edge, which characterize the shear of these structures in the transverse direction.

  7. A structural multidisciplinary approach to depression management in nursing-home residents: a multicentre, stepped-wedge cluster-randomised trial

    NARCIS (Netherlands)

    Leontjevas, R.; Gerritsen, D.L.; Smalbrugge, M.; Teerenstra, S.; Vernooij-Dassen, M.J.F.J.; Koopmans, R.T.C.M.

    2013-01-01

    BACKGROUND: Depression in nursing-home residents is often under-recognised. We aimed to establish the effectiveness of a structural approach to its management. METHODS: Between May 15, 2009, and April 30, 2011, we undertook a multicentre, stepped-wedge cluster-randomised trial in four provinces of

  8. Access to serviced land for the urban poor: the regularization paradox in Mexico

    Directory of Open Access Journals (Sweden)

    Alfonso Iracheta Cenecorta

    2000-01-01

    Full Text Available The insufficient supply of serviced land at affordable prices for the urban poor and the need for regularization of the consequent illegal occupations in urban areas are two of the most important issues on the Latin American land policy agenda. Taking a structural/integrated view of the functioning of the urban land market in Latin America, this paper discusses the nexus between the formal and the informal land markets. It thus exposes the perverse feedback effects that curative regularization policies may have on the process by which irregularity is produced in the first place. The paper suggests that a more effective approach to the provision of serviced land for the poor cannot be resolved within the prevailing (curative) regularization programs. These programs should have the capacity to mobilize the resources that do exist into a comprehensive program that links regularization with fiscal policy, including the exploration of value capture mechanisms.

  9. Regularized Biot-Savart Laws for Modeling Magnetic Flux Ropes

    Science.gov (United States)

    Titov, Viacheslav; Downs, Cooper; Mikic, Zoran; Torok, Tibor; Linker, Jon A.

    2017-08-01

    Many existing models assume that magnetic flux ropes play a key role in solar flares and coronal mass ejections (CMEs). It is therefore important to develop efficient methods for constructing flux-rope configurations constrained by observed magnetic data and the initial morphology of CMEs. As our new step in this direction, we have derived and implemented a compact analytical form that represents the magnetic field of a thin flux rope with an axis of arbitrary shape and a circular cross-section. This form implies that the flux rope carries axial current I and axial flux F, so that the respective magnetic field is a curl of the sum of toroidal and poloidal vector potentials proportional to I and F, respectively. The vector potentials are expressed in terms of Biot-Savart laws whose kernels are regularized at the rope axis. We regularized them in such a way that for a straight-line axis the form provides a cylindrical force-free flux rope with a parabolic profile of the axial current density. So far, we set the shape of the rope axis by tracking the polarity inversion lines of observed magnetograms and estimating its height and other parameters of the rope from a calculated potential field above these lines. In spite of this heuristic approach, we were able to successfully construct pre-eruption configurations for the 2009 February 13 and 2011 October 1 CME events. These applications demonstrate that our regularized Biot-Savart laws are indeed a very flexible and efficient method for energizing initial configurations in MHD simulations of CMEs. We discuss possible ways of optimizing the axis paths and other extensions of the method in order to make it more useful and robust. Research supported by NSF, NASA's HSR and LWS Programs, and AFOSR.
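
    The idea of regularizing a Biot-Savart kernel so the field stays finite on the wire axis can be illustrated with a simple smoothing length; note that this generic smoothing is an assumption for illustration and differs from the specific kernels derived in the paper, which are constructed so that a straight rope is exactly force-free:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def b_field_regularized(r, path, current, a):
    """Magnetic field at point `r` from a closed current path (N x 3 array
    of nodes), via a Biot-Savart sum whose kernel is smoothed on a scale
    `a` so the field stays finite on the wire axis (generic regularization,
    not the paper's kernels)."""
    dl = np.roll(path, -1, axis=0) - path           # segment vectors
    mid = 0.5 * (path + np.roll(path, -1, axis=0))  # segment midpoints
    rel = r - mid                                   # source-to-field vectors
    dist2 = np.sum(rel**2, axis=1) + a**2           # regularized distance^2
    kern = np.cross(dl, rel) / dist2[:, None]**1.5
    return MU0 * current / (4.0 * np.pi) * kern.sum(axis=0)

# Sanity check: circular loop, field at the center vs. analytic mu0*I/(2R).
R, I_amp = 1.0, 1.0e3
th = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
loop = np.column_stack([R * np.cos(th), R * np.sin(th), np.zeros_like(th)])
B_center = b_field_regularized(np.zeros(3), loop, I_amp, a=1e-3)
```

For a smoothing length much smaller than the loop radius, the discretized regularized sum reproduces the analytic center field to well under a tenth of a percent.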

  10. Manufacturing Steps for Commercial Production of Nano-Structure Capacitors Final Report CRADA No. TC02159.0

    Energy Technology Data Exchange (ETDEWEB)

    Barbee, T. W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schena, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-29

    This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and TroyCap LLC, to develop manufacturing steps for commercial production of nano-structure capacitors. The technical objective of this project was to demonstrate high deposition rates of selected dielectric materials which are 2 to 5 times larger than typical using current technology.

  11. Traveling waves of the regularized short pulse equation

    International Nuclear Information System (INIS)

    Shen, Y; Horikis, T P; Kevrekidis, P G; Frantzeskakis, D J

    2014-01-01

    The properties of the so-called regularized short pulse equation (RSPE) are explored with a particular focus on the traveling wave solutions of this model. We theoretically analyze and numerically evolve two sets of such solutions. First, using a fixed point iteration scheme, we numerically integrate the equation to find solitary waves. It is found that these solutions are well approximated by a finite sum of powers of hyperbolic secants. The dependence of the soliton's parameters (height, width, etc.) on the parameters of the equation is also investigated. Second, by developing a multiple scale reduction of the RSPE to the nonlinear Schrödinger equation, we are able to construct (both standing and traveling) envelope wave breather type solutions of the former, based on the solitary wave structures of the latter. Both the regular and the breathing traveling wave solutions identified are found to be robust and should thus be amenable to observations in the form of few optical cycle pulses. (paper)

  12. Teachers' Views about the Education of Gifted Students in Regular Classrooms

    Directory of Open Access Journals (Sweden)

    Neşe Kutlu Abu

    2017-12-01

    Full Text Available The purpose of this study was to investigate classroom teachers’ views about the education of gifted students in regular classrooms. The sample of the study is composed of ten primary school teachers working in the city of Amasya who had gifted students in their classes. In the present study, a phenomenological research design was used. Data were collected through semi-structured interviews and analyzed descriptively in the QSR N-Vivo package program. The findings showed that teachers did not see a need for differentiating the curriculum for gifted students; rather, they expressed that the regular curriculum was enough for gifted students. Based on the findings, it is clear that teachers need training both on the need for differentiated education for gifted students and on strategies and approaches for how to educate gifted students. Teachers’ attitudes towards gifted students in regular classrooms should also be investigated, since unsupportive beliefs about differentiation for gifted students may influence those attitudes.

  13. Two-way regularization for MEG source reconstruction via multilevel coordinate descent

    KAUST Repository

    Siva Tian, Tian

    2013-12-01

    Magnetoencephalography (MEG) source reconstruction refers to the inverse problem of recovering the neural activity from the MEG time course measurements. A spatiotemporal two-way regularization (TWR) method was recently proposed by Tian et al. to solve this inverse problem and was shown to outperform several one-way regularization methods and spatiotemporal methods. This TWR method is a two-stage procedure that first obtains a raw estimate of the source signals and then refines the raw estimate to ensure spatial focality and temporal smoothness using spatiotemporal regularized matrix decomposition. Although proven to be effective, the performance of two-stage TWR depends on the quality of the raw estimate. In this paper we directly solve the MEG source reconstruction problem using a multivariate penalized regression where the number of variables is much larger than the number of cases. A special feature of this regression is that the regression coefficient matrix has a spatiotemporal two-way structure that naturally invites a two-way penalty. Making use of this structure, we develop a computationally efficient multilevel coordinate descent algorithm to implement the method. This new one-stage TWR method has shown its superiority to the two-stage TWR method in three simulation studies with different levels of complexity and a real-world MEG data analysis. © 2013 Wiley Periodicals, Inc., A Wiley Company.
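
    The computational core described here, coordinate descent on a penalized multivariate regression, can be sketched in its simplest form: cyclic coordinate descent for a single L1 (focality-inducing) penalty. The actual method uses a spatiotemporal two-way penalty and a multilevel scheme, which this hypothetical toy version omits:

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Cyclic coordinate descent for (1/2)||y - Xw||^2 + lam*||w||_1.
    Each coordinate has a closed-form soft-thresholding update, which is
    what makes coordinate descent attractive for this kind of penalty."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X**2).sum(axis=0)
    r = y.copy()                      # residual y - Xw
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * w[j]       # remove coordinate j from the fit
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * w[j]
    return w

# Hypothetical example: recover a 2-sparse coefficient vector.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[2], w_true[7] = 3.0, -2.0
w_hat = lasso_cd(X, X @ w_true, lam=0.1)
```

With a small penalty and noiseless data, the recovered vector matches the sparse ground truth up to a small soft-thresholding bias, and all inactive coefficients are driven exactly to zero.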

  14. Higher derivative regularization and chiral anomaly

    International Nuclear Information System (INIS)

    Nagahama, Yoshinori.

    1985-02-01

    A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)

  15. Effect of bur-cut dentin on bond strength using two all-in-one and one two-step adhesive systems.

    Science.gov (United States)

    Koase, Kaori; Inoue, Satoshi; Noda, Mamoru; Tanaka, Toru; Kawamoto, Chiharu; Takahashi, Akiko; Nakaoki, Yasuko; Sano, Hidehiko

    2004-01-01

    To compare the microtensile bond strength (MTBS) of two all-in-one adhesive systems and one experimental two-step self-etching adhesive system to two types of bur-cut dentin. Using one of the three adhesives, Xeno CF Bond (Xeno), Prompt L-Pop (PL), or the experimental two-step system ABF (ABF), resin composite was bonded to flat buccal and root dentin surfaces of eight extracted human premolars. These surfaces were produced using either regular-grit or superfine-grit diamond burs. After storage overnight in 37 degrees C water, the bonded specimens were sectioned into six or seven slices approximately 0.7 mm thick perpendicular to the bonded surface. They were then subjected to microtensile testing. The surfaces of the fractured specimens were observed microscopically to determine the failure mode. In addition, to observe the effect of conditioning, the two types of bur-cut dentin surfaces were conditioned with the adhesives, rinsed with acetone, and observed with SEM. When Xeno and PL were bonded to dentin cut with a regular-grit diamond bur, MTBS values were lower than to superfine bur-cut dentin, and failures occurred adhesively at the interface, whereas the experimental two-step adhesive showed no significant difference in microtensile bond strength between two differently cut surfaces. The all-in-one adhesives tested here improved bond strengths when bonded to superfine bur-cut dentin as a substrate, whereas the experimental two-step adhesive system showed unchanged bonding to both regular and superfine bur-cut dentin surfaces.

  16. Structural Analysis and Anticoagulant Activities of the Novel Sulfated Fucan Possessing a Regular Well-Defined Repeating Unit from Sea Cucumber

    Directory of Open Access Journals (Sweden)

    Mingyi Wu

    2015-04-01

    Full Text Available Sulfated fucans, complex polysaccharides, exhibit various biological activities. Herein, we purified two fucans from the sea cucumbers Holothuria edulis and Ludwigothurea grisea. Their structures were verified by means of HPGPC, FT-IR, GC–MS and NMR. As a result, a novel structural motif for this type of polymer is reported. The fucans have a unique structure composed of a central core of regular (1→2)- and (1→3)-linked tetrasaccharide repeating units. Approximately 50% of the units from L. grisea (100% for the H. edulis fucan) contain oligosaccharide side chains formed by nonsulfated fucose units linked to the O-4 position of the central core. Anticoagulant activity assays indicate that the sea cucumber fucans strongly inhibit human blood clotting through the intrinsic pathway of the coagulation cascade. Moreover, the mechanism of anticoagulant action of the fucans is selective inhibition of thrombin activity by heparin cofactor II. The distinctive tetrasaccharide repeating units contribute to the anticoagulant action. Additionally, unlike fucans from marine algae, the sea cucumber fucans do not induce platelet aggregation despite their high molecular weights and abundant sulfation. Overall, our results may be helpful in understanding the structure-function relationships of well-defined polysaccharides from invertebrates as new types of safer anticoagulants.

  17. Learning About Time Within the Spinal Cord II: Evidence that Temporal Regularity is Encoded by a Spinal Oscillator

    Directory of Open Access Journals (Sweden)

    Kuan Hsien Lee

    2016-02-01

    Full Text Available How a stimulus impacts spinal cord function depends upon temporal relations. When intermittent noxious stimulation (shock) is applied and the interval between shock pulses is varied (unpredictable), it induces a lasting alteration that inhibits adaptive learning. If the same stimulus is applied in a temporally regular (predictable) manner, the capacity to learn is preserved and a protective/restorative effect is engaged that counters the adverse effect of variable stimulation. Sensitivity to temporal relations implies a capacity to encode time. This study explores how spinal neurons discriminate variable and fixed spaced stimulation. Communication with the brain was blocked by means of a spinal transection, and adaptive capacity was tested using an instrumental learning task. In this task, subjects must learn to maintain a hind limb in a flexed position to minimize shock exposure. To evaluate the possibility that a distinct class of afferent fibers provides a sensory cue for regularity, we manipulated the temporal relation between shocks given to two dermatomes (leg and tail). Evidence for timing emerged when the stimuli were applied in a coherent manner across dermatomes, implying that a central (spinal) process detects regularity. Next, we show that fixed spaced stimulation has a restorative effect when half the physical stimuli are randomly omitted, as long as the stimuli remain in phase, suggesting that stimulus regularity is encoded by an internal oscillator. Research suggests that the oscillator that drives the tempo of stepping depends upon neurons within the rostral lumbar (L1-L2) region. Disrupting communication with the L1-L2 tissue by means of an L3 transection eliminated the restorative effect of fixed spaced stimulation. Implications of the results for step training and rehabilitation after injury are discussed.

  18. Coexistence of Two Singularities in Dewetting Flows: Regularizing the Corner Tip

    NARCIS (Netherlands)

    Peters, I.R.; Snoeijer, Jacobus Hendrikus; Daerr, Adrian; Limat, Laurent

    2009-01-01

    Entrainment in wetting and dewetting flows often occurs through the formation of a corner with a very sharp tip. This corner singularity comes on top of the divergence of viscous stress near the contact line, which is only regularized at molecular scales. We investigate the fine structure of corners

  19. Temporally Regular Musical Primes Facilitate Subsequent Syntax Processing in Children with Specific Language Impairment.

    Science.gov (United States)

    Bedoin, Nathalie; Brisseau, Lucie; Molinier, Pauline; Roch, Didier; Tillmann, Barbara

    2016-01-01

    Children with developmental language disorders have been shown to be also impaired in rhythm and meter perception. Temporal processing and its link to language processing can be understood within the dynamic attending theory. An external stimulus can stimulate internal oscillators, which orient attention over time and drive speech signal segmentation to provide benefits for syntax processing, which is impaired in various patient populations. For children with Specific Language Impairment (SLI) and dyslexia, previous research has shown the influence of an external rhythmic stimulation on subsequent language processing by comparing the influence of a temporally regular musical prime to that of a temporally irregular prime. Here we tested whether the observed rhythmic stimulation effect is indeed due to a benefit provided by the regular musical prime (rather than a cost subsequent to the temporally irregular prime). Sixteen children with SLI and 16 age-matched controls listened to either a regular musical prime sequence or an environmental sound scene (without temporal regularities in event occurrence; i.e., referred to as "baseline condition") followed by grammatically correct and incorrect sentences. They were required to perform grammaticality judgments for each auditorily presented sentence. Results revealed that performance for the grammaticality judgments was better after the regular prime sequences than after the baseline sequences. Our findings are interpreted in the theoretical framework of the dynamic attending theory (Jones, 1976) and the temporal sampling (oscillatory) framework for developmental language disorders (Goswami, 2011). Furthermore, they encourage the use of rhythmic structures (even in non-verbal materials) to boost linguistic structure processing and outline perspectives for rehabilitation.

  20. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization

    Directory of Open Access Journals (Sweden)

    Philipp Kainz

    2017-10-01

    Full Text Available Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNNs) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the simultaneously developed other approaches that participated in the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.
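
    The final regularization step can be illustrated with generic smoothed total-variation denoising of a noisy pixel-wise probability map by plain gradient descent. This is a stand-in for the weighted-TV figure-ground segmentation actually used in the paper, and all parameter values below are illustrative:

```python
import numpy as np

def smoothed_tv(u, eps=1e-2):
    """Smoothed isotropic total variation of a 2-D array."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return float(np.sum(np.sqrt(gx**2 + gy**2 + eps)))

def tv_regularize(p, lam=0.15, tau=0.1, n_iter=300, eps=1e-2):
    """Gradient descent on (1/2)||u - p||^2 + lam * TV_eps(u), a smoothed
    total-variation objective; a generic stand-in for the weighted-TV
    figure-ground segmentation of the paper."""
    u = p.copy()
    for _ in range(n_iter):
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps)
        # divergence of the normalized gradient field (negative TV gradient)
        div = np.diff(gx / mag, axis=1, prepend=0.0) \
            + np.diff(gy / mag, axis=0, prepend=0.0)
        u -= tau * ((u - p) - lam * div)
    return u

# Illustrative "CNN probability map": a figure/ground edge plus noise.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = tv_regularize(noisy)
```

The TV term suppresses pixel-scale noise while the quadratic data term keeps the result close to the input map, so the sharp figure/ground edge survives the smoothing far better than under a plain Gaussian blur.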

  1. 75 FR 53966 - Regular Meeting

    Science.gov (United States)

    2010-09-02

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  2. Work and family life of childrearing women workers in Japan: comparison of non-regular employees with short working hours, non-regular employees with long working hours, and regular employees.

    Science.gov (United States)

    Seto, Masako; Morimoto, Kanehisa; Maruyama, Soichiro

    2006-05-01

    This study assessed the working and family life characteristics, and the degree of domestic and work strain, of female workers with different employment statuses and weekly working hours who are rearing children. Participants were the mothers of preschoolers in a large Japanese city. We classified the women into three groups according to the hours they worked and their employment conditions. The three groups were: non-regular employees working less than 30 h a week (n=136); non-regular employees working 30 h or more per week (n=141); and regular employees working 30 h or more a week (n=184). We compared the groups on subjective work values, financial difficulties, childcare and housework burdens, psychological effects, and strains such as work and family strain, work-family conflict, and work dissatisfaction. Regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than non-regular employees. Non-regular employees were more likely to be facing financial difficulties. In particular, non-regular employees working longer hours tended to encounter socioeconomic difficulties and often lacked support from family and friends. Female workers with children may have different social backgrounds and different stressors according to their working hours and work status.

  3. Regularized inversion of controlled source and earthquake data

    International Nuclear Information System (INIS)

    Ramachandran, Kumar

    2012-01-01

    Estimation of the seismic velocity structure of the Earth's crust and upper mantle from travel-time data has advanced greatly in recent years. Forward modelling trial-and-error methods have been superseded by tomographic methods which allow more objective analysis of large two-dimensional and three-dimensional refraction and/or reflection data sets. The fundamental purpose of travel-time tomography is to determine the velocity structure of a medium by analysing the time it takes for a wave generated at a source point within the medium to arrive at a distribution of receiver points. Tomographic inversion of first-arrival travel-time data is a nonlinear problem since both the velocity of the medium and ray paths in the medium are unknown. The solution for such a problem is typically obtained by repeated application of linearized inversion. Regularization of the nonlinear problem reduces the ill-posedness inherent in the tomographic inversion due to the under-determined nature of the problem and the inconsistencies in the observed data. This paper discusses the theory of regularized inversion for joint inversion of controlled source and earthquake data, and results from synthetic data testing and application to real data. The results obtained from tomographic inversion of synthetic data and real data from the northern Cascadia subduction zone show that the velocity model and hypocentral parameters can be efficiently estimated using this approach. (paper)
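    One linearized, regularized update of the kind described above can be sketched in a few lines. This is a generic Tikhonov-style sketch; the paper's actual parameterization, ray tracing, and joint controlled-source/earthquake machinery are not reproduced, and the names are illustrative.

```python
import numpy as np

def regularized_step(G, d, lam, L):
    """One linearized inversion update: solve the damped normal equations
        (G^T G + lam^2 L^T L) m = G^T d
    for model update m, given sensitivity matrix G, data d, and a
    regularization operator L (identity for damping, differences for smoothing)."""
    A = G.T @ G + lam**2 * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)
```

    In a real tomography loop the sensitivity matrix G would be recomputed after each update, because the ray paths depend on the current velocity model; the regularization keeps each linearized step well-posed despite the under-determined, inconsistent data.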

  4. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    Science.gov (United States)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
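    Schnek's ghost-cell exchange uses MPI between neighbouring processes; the serial NumPy sketch below only illustrates the halo-exchange idea on a periodic 2-D grid. The function name is hypothetical and is not Schnek's C++ API.

```python
import numpy as np

def exchange_ghost_cells(grid, width=1):
    """Fill the ghost layers of a 2-D periodic grid from the opposite
    interior edges (a serial stand-in for an MPI halo exchange).
    The interior is grid[width:-width, width:-width]."""
    g = width
    # left/right ghost columns copy the opposite interior edge columns
    grid[:, :g] = grid[:, -2 * g:-g]
    grid[:, -g:] = grid[:, g:2 * g]
    # top/bottom ghost rows (done second so corners become consistent)
    grid[:g, :] = grid[-2 * g:-g, :]
    grid[-g:, :] = grid[g:2 * g, :]
    return grid
```

    In the parallel case each copy becomes a send/receive with the neighbouring rank; performing the two axes in sequence fills the corner ghost cells correctly without extra diagonal messages.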

  5. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in the place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method over regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
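    A minimal sketch of the idea, assuming the convex set of regularized candidates admits a cheap projection: the regularization term is replaced by a projection applied to each iterate. This is generic projected gradient descent, not the authors' incremental variational scheme; the names are illustrative.

```python
import numpy as np

def projected_gradient(grad, project, x0, step=0.1, n_iter=500):
    """Minimize a data term by gradient steps, replacing the
    regularization term with a projection of each iterate onto
    a convex set of admissible (regularized) solutions."""
    x = x0.copy()
    for _ in range(n_iter):
        x = project(x - step * grad(x))
    return x
```

    Each iteration costs one gradient evaluation plus one projection; when the projection is cheap (a clip, a rescaling), this can be far cheaper than carrying an explicit regularization term through the optimization, which is the efficiency argument made in the abstract.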

  6. Geometric regularizations and dual conifold transitions

    International Nuclear Information System (INIS)

    Landsteiner, Karl; Lazaroiu, Calin I.

    2003-01-01

    We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)

  7. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...

  8. Significant internal quantum efficiency enhancement of GaN/AlGaN multiple quantum wells emitting at ~350 nm via step quantum well structure design

    KAUST Repository

    Wu, Feng; Sun, Haiding; Ajia, Idris A.; Roqan, Iman S.; Zhang, Daliang; Dai, Jiangnan; Chen, Changqing; Feng, Zhe Chuan; Li, Xiaohang

    2017-01-01

    Significant internal quantum efficiency (IQE) enhancement of GaN/AlGaN multiple quantum wells (MQWs) emitting at ~350 nm was achieved via a step quantum well (QW) structure design. The MQW structures were grown on AlGaN/AlN/sapphire templates by metal-organic chemical vapor deposition (MOCVD). High resolution x-ray diffraction (HR-XRD) and scanning transmission electron microscopy (STEM) were performed, showing sharp interfaces of the MQWs. Weak beam dark field imaging was conducted, indicating similar dislocation densities in the investigated MQW samples. The IQE of GaN/AlGaN MQWs was estimated by temperature dependent photoluminescence (TDPL). An IQE enhancement of about two times was observed for the GaN/AlGaN step QW structure, compared with the conventional QW structure. Based on theoretical calculations, this IQE enhancement was attributed to the suppressed polarization-induced field, and thus the improved electron-hole wave-function overlap in the step QW.

  9. Significant internal quantum efficiency enhancement of GaN/AlGaN multiple quantum wells emitting at ~350 nm via step quantum well structure design

    KAUST Repository

    Wu, Feng

    2017-05-03

    Significant internal quantum efficiency (IQE) enhancement of GaN/AlGaN multiple quantum wells (MQWs) emitting at ~350 nm was achieved via a step quantum well (QW) structure design. The MQW structures were grown on AlGaN/AlN/sapphire templates by metal-organic chemical vapor deposition (MOCVD). High resolution x-ray diffraction (HR-XRD) and scanning transmission electron microscopy (STEM) were performed, showing sharp interfaces of the MQWs. Weak beam dark field imaging was conducted, indicating similar dislocation densities in the investigated MQW samples. The IQE of GaN/AlGaN MQWs was estimated by temperature dependent photoluminescence (TDPL). An IQE enhancement of about two times was observed for the GaN/AlGaN step QW structure, compared with the conventional QW structure. Based on theoretical calculations, this IQE enhancement was attributed to the suppressed polarization-induced field, and thus the improved electron-hole wave-function overlap in the step QW.

  10. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
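    The diversification 'pressure' of an L2 regularizer can be seen in a setting simpler than the paper's expected-shortfall/support-vector-regression formulation: a ridge-regularized minimum-variance portfolio under the budget constraint has a closed form, and increasing the regularizer pushes the weights toward equal weighting. This is an illustrative sketch, not the authors' optimization problem.

```python
import numpy as np

def regularized_min_variance(cov, lam):
    """Minimize w^T cov w + lam * ||w||^2 subject to sum(w) = 1.
    Solving the KKT system gives w proportional to (cov + lam*I)^{-1} 1,
    rescaled to satisfy the budget constraint."""
    n = cov.shape[0]
    ones = np.ones(n)
    w = np.linalg.solve(cov + lam * np.eye(n), ones)
    return w / (ones @ w)
```

    With lam = 0 the weights follow inverse variances (concentrating on the least risky assets as estimated from the sample); as lam grows they approach 1/n, i.e., maximal diversification, stabilizing the solution against estimation error exactly as the abstract argues.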

  11. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  12. Regularity and predictability of human mobility in personal space.

    Directory of Open Access Journals (Sweden)

    Daniel Austin

    Full Text Available Fundamental laws governing human mobility have many important applications such as forecasting and controlling epidemics or optimizing transportation systems. These mobility patterns, studied in the context of out-of-home activity during travel or social interactions with observations recorded from cell phone use or diffusion of money, suggest that in extra-personal space humans follow a high degree of temporal and spatial regularity - most often in the form of time-independent universal scaling laws. Here we show that mobility patterns of older individuals in their home also show a high degree of predictability and regularity, although in a different way than has been reported for out-of-home mobility. Studying a data set of almost 15 million observations from 19 adults spanning up to 5 years of unobtrusive longitudinal home activity monitoring, we find that in-home mobility is not well represented by a universal scaling law, but that significant structure (predictability and regularity) is uncovered when explicitly accounting for contextual data in a model of in-home mobility. These results suggest that human mobility in personal space is highly stereotyped, and that monitoring discontinuities in routine room-level mobility patterns may provide an opportunity to predict individual human health and functional status or detect adverse events and trends.

  13. Structural analysis of Hanford's single-shell 241-C-106 tank: A first step toward waste-tank remediation

    International Nuclear Information System (INIS)

    Harris, J.P.; Julyk, L.J.; Marlow, R.S.; Moore, C.J.; Day, J.P.; Dyrness, A.D.; Jagadish, P.; Shulman, J.S.

    1993-10-01

    The buried single-shell waste tank 241-C-106, located at the US Department of Energy's Hanford Site, has been a repository for various liquid radioactive waste materials since its construction in 1943. A first step toward waste tank remediation is demonstrating that remediation activities can be performed safely. Determination of the current structural capacity of this high-heat tank is an important element in this assessment. A structural finite-element model of tank 241-C-106 has been developed to assess the tank's structural integrity with respect to in situ conditions and additional remediation surface loads. To predict structural integrity realistically, the model appropriately addresses two complex issues: (1) surrounding soil-tank interaction associated with thermal expansion cycling and surcharge load distribution and (2) concrete-property degradation and creep resulting from exposure to high temperatures generated by the waste. This paper describes the development of the 241-C-106 structural model, analysis methodology, and tank-specific structural acceptance criteria

  14. Tessellating the Sphere with Regular Polygons

    Science.gov (United States)

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.

  15. Multimodal Stepped Care Approach Involving Topical Analgesics for Severe Intractable Neuropathic Pain in CRPS Type 1: A Case Report

    Directory of Open Access Journals (Sweden)

    David J. Kopsky

    2011-01-01

    Full Text Available A multimodal stepped care approach has been successfully applied to a patient with complex regional pain syndrome type 1 and severe intractable pain, not responding to regular neuropathic pain medication. The choice to administer drugs in creams was made because of the intolerable adverse effects of oral medication. With this method, peak-dose adverse effects did not occur. The multimodal stepped care approach resulted in a considerable and clinically relevant decrease in pain after every step, using topical amitriptyline, ketamine, and dimethylsulphoxide.

  16. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    Science.gov (United States)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
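    The iteratively reweighted least-squares (IRLS) idea behind the inversion can be sketched at small scale, using dense normal equations in place of the paper's randomized generalized singular value decomposition; the names, the smoothing constant, and the scaling absorbed into the regularization parameter are all illustrative.

```python
import numpy as np

def irls_tv(G, d, D, lam, eps=1e-6, n_iter=30):
    """IRLS for  ||G m - d||^2 + lam * sum_i sqrt((D m)_i^2 + eps):
    each pass solves a quadratic problem with TV weights updated from
    the previous iterate (constants are absorbed into lam)."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]   # unregularized start
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ m)**2 + eps)    # TV reweighting
        A = G.T @ G + lam * D.T @ (w[:, None] * D)
        m = np.linalg.solve(A, G.T @ d)
    return m
```

    Because the weights penalize small model differences much more heavily than large ones, flat regions are flattened further while genuine jumps survive, which is why TV regularization preserves sharp subsurface discontinuities better than a minimum-structure (smoothness) penalty.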

  17. The Regularity of Optimal Irrigation Patterns

    Science.gov (United States)

    Morel, Jean-Michel; Santambrogio, Filippo

    2010-02-01

    A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, the river basins or the trees. Recent approaches of these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. Here the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.
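    The energy functional favoring wide routes can be written explicitly in the standard Gilbert-Xia model of branched transport (given here as background; the paper's functional may differ in details). For a finite irrigation tree $T$ carrying flow $s_e$ along each edge $e$:

```latex
E^{\alpha}(T) \;=\; \sum_{e \in T} |s_e|^{\alpha}\,\operatorname{length}(e),
\qquad 0 < \alpha < 1 .
```

    Concavity of $s \mapsto s^{\alpha}$ gives $(s_1 + s_2)^{\alpha} < s_1^{\alpha} + s_2^{\alpha}$, so merging two flows onto a shared wide route is strictly cheaper than routing them separately, which is what produces the branched structure.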

  18. Accretion onto some well-known regular black holes

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2016-01-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes obtained using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, as well as the Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  19. Accretion onto some well-known regular black holes

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)

    2016-03-15

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes obtained using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, as well as the Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  20. Accretion onto some well-known regular black holes

    Science.gov (United States)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes obtained using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, as well as the Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  1. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  2. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    W. J. Galyean; A. M. Whaley; D. L. Kelly; R. L. Boring

    2011-05-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
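    The arithmetic of Steps 3 and 4 can be sketched as follows, using the nominal HEPs of NUREG/CR-6883 (1.0E-2 for diagnosis, 1.0E-3 for action) and the adjustment factor applied when three or more negative PSFs are assigned; worksheet details beyond this (PSF rating tables, dependence levels) are omitted.

```python
def spar_h_hep(nominal_hep, psf_multipliers):
    """SPAR-H Step-3: PSF-modified HEP.  When three or more negative
    (multiplier > 1) PSFs are assigned, the adjustment factor
        HEP = NHEP * PSFc / (NHEP * (PSFc - 1) + 1)
    keeps the result bounded by 1.0."""
    psf_composite = 1.0
    for m in psf_multipliers:
        psf_composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1)
    if negative >= 3:
        return (nominal_hep * psf_composite /
                (nominal_hep * (psf_composite - 1) + 1))
    return min(nominal_hep * psf_composite, 1.0)

# Diagnosis HFE (NHEP = 0.01) with three negative PSFs of 10, 2, and 5:
# composite = 100, adjusted HEP = 1.0 / 1.99, about 0.50
```

    Without the adjustment, a composite multiplier of 100 on a nominal 0.01 would give an HEP of exactly 1.0, i.e., guaranteed failure; the adjustment expresses that stacked negative PSFs saturate rather than multiply without bound.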

  3. SPAR-H Step-by-Step Guidance

    International Nuclear Information System (INIS)

    Galyean, W.J.; Whaley, A.M.; Kelly, D.L.; Boring, R.L.

    2011-01-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.

  4. From inactive to regular jogger

    DEFF Research Database (Denmark)

    Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup

    Title: From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers. Authors: Pernille Lund-Cramer & Vibeke Brinkmann Løite. Purpose: Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven ... A study was conducted using individual semi-structured interviews on how a successful long-term behavior change had been achieved. Ten informants were purposely selected from participants in the DANO-RUN research project (7 men, 3 women, average age 41.5). Interviews were performed on the basis of the Theory of Planned Behavior (TPB) and the Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results (TPB): During the behavior change process, the intention to jog shifted from a focus on weight loss and improved fitness to both physical health, psychological ...

  5. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    April M. Whaley; Dana L. Kelly; Ronald L. Boring; William J. Galyean

    2012-06-01

    Step-by-step guidance was developed recently at Idaho National Laboratory for the US Nuclear Regulatory Commission on the use of the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method for quantifying Human Failure Events (HFEs). This work was done to address SPAR-H user needs, specifically requests for additional guidance on the proper application of various aspects of the methodology. This paper overviews the steps of the SPAR-H analysis process and highlights some of the most important insights gained during the development of the step-by-step directions. This supplemental guidance for analysts is applicable when plant-specific information is available, and goes beyond the general guidance provided in existing SPAR-H documentation. The steps highlighted in this paper are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff.

  6. Temporal regularity of the environment drives time perception

    OpenAIRE

    van Rijn, H; Rhodes, D; Di Luca, M

    2016-01-01

    It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...

  7. A Regular k-Shrinkage Thresholding Operator for the Removal of Mixed Gaussian-Impulse Noise

    Directory of Open Access Journals (Sweden)

    Han Pan

    2017-01-01

    Full Text Available The removal of mixed Gaussian-impulse noise plays an important role in many areas, such as remote sensing. However, traditional methods may fail to promote the degree of sparsity adaptively after decomposing the data into a low-rank component and a sparse component. In this paper, a new problem formulation with a regular spectral k-support norm and a regular k-support l1 norm is proposed. A unified framework is developed to capture the intrinsic sparsity structure of both components. To address the resulting problem, an efficient minimization scheme within the framework of accelerated proximal gradient is proposed. This scheme is achieved by alternating a regular k-shrinkage thresholding operator. Experimental comparison with other state-of-the-art methods demonstrates the efficacy of the proposed method.
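    The exact proximal operator of the (spectral) k-support norm is involved; as a simplified stand-in, the sketch below conveys the flavor of a k-shrinkage thresholding operator: the k dominant entries pass through while the remaining entries are soft-thresholded. This is illustrative only and is not the operator derived in the paper.

```python
import numpy as np

def k_shrinkage(x, k, tau):
    """Soft-threshold all but the k largest-magnitude entries of x.
    The k dominant entries are kept intact (no shrinkage), so the
    degree of sparsity promotion adapts to the signal's support size."""
    out = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
    if k > 0:
        idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
        out[idx] = x[idx]                  # leave dominant entries unshrunk
    return out
```

    Compared with plain soft-thresholding (the k = 0 case), sparing the k largest entries avoids the systematic underestimation of strong coefficients, which is the usual motivation for k-support-type penalties over the plain l1 norm.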

  8. New method for minimizing regular functions with constraints on parameter region

    International Nuclear Information System (INIS)

    Kurbatov, V.S.; Silin, I.N.

    1993-01-01

    A new method of function minimization is developed and its main features are considered. It allows the minimization of regular functions of arbitrary structure. For χ2-like functions, simplified second derivatives can be used, with control of their correctness. Constraints of arbitrary structure can be imposed on the parameter region. Means for fast movement along multidimensional valleys are employed. The method was tested on real data on the Kπ2 decay from an experiment on rare K− decays. 6 refs

  9. Structural characterization of epitaxial YBa2Cu3O7 thin films on step-edge substrates by means of high-resolution electron microscopy

    International Nuclear Information System (INIS)

    Jia, C.L.; Kabius, B.; Urban, K.

    1993-01-01

    The microstructure of YBa2Cu3O7 films epitaxially grown on step-edge (001) SrTiO3 and LaAlO3 substrates has been characterized by means of high-resolution electron microscopy. The results indicate a relationship between the microstructure of the film across a step and the angle the step makes with the substrate plane. On a steep, high-angle step, the film grows with its c-axis perpendicular to that of the film on the substrate surface, so that two grain boundaries are formed. In the upper grain boundary, on the average, a (013) habit plane alternates with a (103) habit plane. This alternating structure is caused by twinning in the orthorhombic structure. The lower boundaries consist of a chain of (013)(013) and (010)(001) type segments exhibiting a tendency to tilt the whole habit plane toward the a-b plane of the flank film. Dislocations, stacking faults and misfit strains were also observed in or close to the boundaries. (orig.)

  10. Hessian-regularized co-training for social activity recognition.

    Science.gov (United States)

    Liu, Weifeng; Li, Yang; Lin, Xu; Tao, Dacheng; Wang, Yanjiang

    2014-01-01

    Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-training variants have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. The Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms.
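    For readers unfamiliar with the baseline, a minimal co-training loop (the plain version, without the Hessian regularization proposed in the paper) can be sketched with two toy nearest-centroid learners, one per view; all names are illustrative.

```python
import numpy as np

class CentroidView:
    """Tiny nearest-centroid classifier used as one view's learner."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None], axis=2)
        return self.classes[np.argmin(d, axis=1)]
    def confidence(self, X):
        d = np.sort(np.linalg.norm(X[:, None, :] - self.centroids[None], axis=2), axis=1)
        return d[:, 1] - d[:, 0]   # margin between the two nearest centroids

def co_train(X1, X2, y, labeled, n_rounds=5, per_round=2):
    """Alternately train one learner per view; each round, a learner
    pseudo-labels the unlabeled points it is most confident about,
    growing the shared labeled pool used by both learners.
    Only y[i] for i in `labeled` is consulted as ground truth."""
    labeled = set(labeled)
    y = y.copy()
    known = np.array(sorted(labeled))
    for _ in range(n_rounds):
        for Xa in (X1, X2):
            clf = CentroidView().fit(Xa[known], y[known])
            unl = np.array([i for i in range(len(y)) if i not in labeled])
            if len(unl) == 0:
                break
            conf = clf.confidence(Xa[unl])
            pick = unl[np.argsort(conf)[-per_round:]]   # most confident points
            y[pick] = clf.predict(Xa[pick])             # pseudo-label them
            labeled.update(pick.tolist())
            known = np.array(sorted(labeled))
    return CentroidView().fit(X1[known], y[known]), CentroidView().fit(X2[known], y[known])
```

    The instability the abstract describes is visible in this sketch: if one learner's centroids are poor in the first rounds, its confident pseudo-labels can still be wrong and contaminate the other view's training pool, which is the failure mode the Hessian regularization is designed to dampen.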

  11. Hessian-regularized co-training for social activity recognition.

    Directory of Open Access Journals (Sweden)

    Weifeng Liu

    Full Text Available Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-training algorithms have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. The Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms.
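
    The co-training loop these two records describe can be illustrated with a minimal sketch. The code below is not the Hessian-regularized variant from the paper; it is plain two-view co-training with a toy nearest-centroid learner on synthetic data, and every name, parameter, and dataset in it is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-view data: each view is a noisy 1-D measurement of the same latent z.
n = 300
z = rng.standard_normal(n)
X1 = z + 0.3 * rng.standard_normal(n)   # view 1
X2 = z + 0.3 * rng.standard_normal(n)   # view 2
y = (z > 0).astype(int)

def fit(X, labels):
    """Nearest-centroid 'learner': remember the per-class means."""
    return np.array([X[labels == c].mean() for c in (0, 1)])

def predict(cent, X):
    d = np.abs(X[:, None] - cent[None, :])
    return d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])  # labels, confidences

# A few labeled examples (chosen so both classes are represented).
labeled = np.concatenate([np.argsort(z)[:10], np.argsort(z)[-10:]])
unlabeled = np.setdiff1d(np.arange(n), labeled)
y_pool = np.full(n, -1)
y_pool[labeled] = y[labeled]

for _ in range(10):                      # co-training rounds
    c1 = fit(X1[labeled], y_pool[labeled])
    c2 = fit(X2[labeled], y_pool[labeled])
    # Each learner pseudo-labels its most confident unlabeled example
    # for the shared pool (this is where early mistakes can propagate).
    l1, conf1 = predict(c1, X1[unlabeled])
    l2, conf2 = predict(c2, X2[unlabeled])
    i1, i2 = unlabeled[conf1.argmax()], unlabeled[conf2.argmax()]
    y_pool[i1], y_pool[i2] = l1[conf1.argmax()], l2[conf2.argmax()]
    labeled = np.union1d(labeled, [i1, i2])
    unlabeled = np.setdiff1d(unlabeled, [i1, i2])

pred, _ = predict(fit(X1[labeled], y_pool[labeled]), X1)
print(f"pool grew to {labeled.size} examples")
```

    The Hessian-regularized method adds a per-view manifold penalty to each learner's training objective; the loop structure stays the same.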

  12. FORMATION REGULARITIES OF PHASE COMPOSITION, STRUCTURE AND PROPERTIES DURING MECHANICAL ALLOYING OF BINARY ALUMINUM COMPOSITES

    Directory of Open Access Journals (Sweden)

    F. G. Lovshenko

    2015-01-01

    Full Text Available The paper presents investigation results pertaining to the formation regularities of phase composition and structure during mechanical alloying of binary aluminium composites. The investigations were carried out using a wide range of methods, devices and equipment employed in modern materials science, and the data obtained complement each other. It has been established that the presence of oxide and hydroxide films on aluminium powder, and the introduction of a surface-active substance into the composite, have a significant effect on the mechanically and thermally activated phase transformations and on the properties of semi-finished products. Higher fatty acids were used as the surface-active substance. The mechanism of mechanically activated solid solution formation has been identified. Its essence is the formation of specific quasi-solutions at the initial stage of processing. Mechanical and chemical interaction between components during the formation of other phases takes place along with dissolution in aluminium while processing the powder composites. The granule basis forms by the dynamic recrystallization mechanism and has a submicrocrystalline structure with a granule size below 100 nm; the grains are divided into blocks of no more than 20 nm, with oxide inclusions of 10–20 nm in size. All the compounds with the addition of surface-active substances, including aluminium powder without alloying elements, obtained by processing in a mechanical reactor are dispersion hardened. In some cases dispersion hardening is accompanied by solid-solution hardening. Complex hardening predetermines a high recrystallization temperature in mechanically alloyed compounds; its value exceeds 400 °C.

  13. The uniqueness of the regularization procedure

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1981-01-01

    On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)

  14. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
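
    The diffusive-coupling effect on O-U processes is easy to reproduce numerically. The sketch below (my own illustration, not the authors' code) integrates a symmetrically coupled pair and an uncoupled reference with Euler-Maruyama. For equal noise intensities, decomposing into sum and difference modes gives a stationary variance per unit of (sigma^2/4)(1 + 1/(1 + 2c)), below the uncoupled value sigma^2/2.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, T, c, sigma = 0.01, 2000.0, 2.0, 1.0
steps = int(T / dt)
sq = np.sqrt(dt)

x = y = u = 0.0                 # x, y: diffusively coupled pair; u: uncoupled reference
xs = np.empty(steps)
us = np.empty(steps)
for i in range(steps):
    w1, w2, w3 = rng.standard_normal(3)
    dx = (-x + c * (y - x)) * dt + sigma * sq * w1   # dX = -X dt + c(Y-X) dt + s dW1
    dy = (-y + c * (x - y)) * dt + sigma * sq * w2
    du = -u * dt + sigma * sq * w3                   # same O-U, no coupling
    x, y, u = x + dx, y + dy, u + du
    xs[i], us[i] = x, u

# Stationary theory: Var(x) = (sigma^2/4)(1 + 1/(1+2c)) = 0.3, Var(u) = 0.5.
print(np.var(xs), np.var(us))
```

    The coupled unit's sample variance comes out near 0.3 versus roughly 0.5 for the uncoupled one, matching the mode decomposition.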

  15. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
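
    The idea of learning a regularization parameter from training data can be caricatured in a few lines. This is not the paper's empirical Bayes risk framework; it is a brute-force stand-in that solves the general-form Tikhonov normal equations on an invented toy inverse problem and keeps the parameter that minimizes the error against a known training solution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = np.tril(np.ones((n, n))) / n            # ill-conditioned forward operator (toy)
L = np.diff(np.eye(n), axis=0)              # first-difference regularization matrix
x_train = np.sin(np.linspace(0, 3, n))      # "training" solution, assumed known
b = A @ x_train + 1e-2 * rng.standard_normal(n)

def tikhonov(lam):
    """General-form Tikhonov: argmin ||A x - b||^2 + lam^2 ||L x||^2."""
    return np.linalg.solve(A.T @ A + lam**2 * L.T @ L, A.T @ b)

# "Learning": scan candidate parameters, keep the one minimizing training error.
lams = np.logspace(-6, 1, 30)
errs = [np.linalg.norm(tikhonov(lam) - x_train) for lam in lams]
lam_learned = lams[int(np.argmin(errs))]    # reused later on similar problems
```

    The learned parameter is then applied to new data from the same operator, which is the sense in which the training pays off.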

  16. 5 CFR 551.421 - Regular working hours.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...

  17. Regular extensions of some classes of grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular

  18. Properties of nano-structured Ni/YSZ anodes fabricated from plasma sprayable NiO/YSZ powder prepared by single step solution combustion method

    Energy Technology Data Exchange (ETDEWEB)

    Prakash, B. Shri; Balaji, N.; Kumar, S. Senthil; Aruna, S.T., E-mail: staruna194@gmail.com

    2016-12-15

    Highlights: • Preparation of plasma grade NiO/YSZ powder in a single step. • Fabrication of nano-structured Ni/YSZ coating. • Conductivity of 600 S/cm at 800 °C. - Abstract: NiO/YSZ anode coatings are fabricated by atmospheric plasma spraying at different plasma powers from plasma grade NiO/YSZ powders that are prepared in a single step by the solution combustion method. The process adopted is devoid of the multiple steps that are generally involved in conventional spray drying or fusing and crushing methods. Density of the coating increased and porosity decreased with increasing plasma power of deposition. An ideal nano-structured Ni/YSZ anode encompassing nano YSZ particles, nano Ni particles and nano pores is achieved on reducing the coating deposited at lower plasma powers. The coatings exhibit porosities of around 27%, sufficient for anode functional layers. Electronic conductivity of the coatings is in the range of 600 S/cm at 800 °C.

  19. Regular non-twisting S-branes

    International Nuclear Information System (INIS)

    Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.

    2004-01-01

    We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)

  20. Optimized star sensors laboratory calibration method using a regularization neural network.

    Science.gov (United States)

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer structure neural network is designed to represent the mapping of the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the net structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.

  1. Tetravalent one-regular graphs of order 4p²

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

    A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.

  2. Assessment of risk of fracture in thin-walled fiber reinforced and regular High Performance Concrete sandwich elements

    DEFF Research Database (Denmark)

    Hodicky, Kamil; Hulin, Thomas; Schmidt, Jacob Wittrup

    2013-01-01

    High Performance Concrete Sandwich Elements (HPCSE) are an interesting option for future low or plus energy building construction. Recent research and development work, however, indicates that such elements are prone to structural cracking due to the combined effect of shrinkage and high temperature load. Due to structural restraints, autogenous shrinkage may lead to high self-induced stresses; autogenous shrinkage therefore plays an important role in the design of HPCSE. The present paper assesses the risk of fracture due to autogenous shrinkage-induced stresses in three fiber reinforced and regular High Performance Concrete sandwich elements. … Finally the paper describes the modeling work with HPCSE predicting structural cracking provoked by autogenous shrinkage. It was observed that the risk of cracking due to autogenous shrinkage rises rapidly after 3 days in the case of regular HPC and after 7 days in the case of fiber reinforced HPC.

  3. Regularization and error assignment to unfolded distributions

    CERN Document Server

    Zech, Gunter

    2011-01-01

    The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data, and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization, and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.
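
    The flavor of fixing the regularization strength by a compatibility criterion can be sketched numerically. The code below is an invented toy unfolding, not the authors' procedure: it uses a crude chi-squared window (residual chi2 within a couple of standard deviations of the number of bins) as a stand-in for an exact p-value, and keeps the strongest regularization that still passes.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30
# Toy response (smearing) matrix: each true bin leaks into its neighbors.
R = 0.6 * np.eye(n) + 0.2 * np.eye(n, k=1) + 0.2 * np.eye(n, k=-1)
x_true = 100 * np.exp(-0.5 * ((np.arange(n) - 15) / 5.0) ** 2) + 20
mu = R @ x_true
d = mu + np.sqrt(mu) * rng.standard_normal(n)        # Poisson-like fluctuations

def unfold(tau):
    """Tikhonov-style unfolding with a curvature (second-difference) penalty."""
    D = np.diff(np.eye(n), n=2, axis=0)
    return np.linalg.solve(R.T @ R + tau * D.T @ D, R.T @ d)

# Stand-in for the p-value criterion: the folded-back residual chi2 must stay
# compatible with n degrees of freedom; pick the strongest passing tau.
chi2 = lambda x: np.sum((R @ x - d) ** 2 / mu)
taus = np.logspace(-4, 4, 33)
ok = [t for t in taus if chi2(unfold(t)) <= n + 2 * np.sqrt(2 * n)]
tau_star = max(ok)
```

    Weak regularization always passes (it fits the data), so the criterion bites by ruling out over-smoothed solutions whose residuals are no longer statistically compatible with the observations.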

  4. Multiview vector-valued manifold regularization for multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Xu, Chang; Xu, Chao; Liu, Hong; Wen, Yonggang

    2013-05-01

    In computer vision, image datasets used for classification are naturally associated with multiple labels and comprised of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently, available tools ignore either the label relationship or the view complementarity. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV(3)MR) to integrate multiple features. MV(3)MR exploits the complementary property of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging, but popular, datasets, PASCAL VOC '07 and MIR Flickr, and validate the effectiveness of the proposed MV(3)MR for image classification.

  5. Understanding and controlling the step bunching instability in aqueous silicon etching

    Science.gov (United States)

    Bao, Hailing

    Chemical etching of silicon has been widely used for more than half a century in the semiconductor industry. It not only forms the basis for current wafer cleaning processes, it also serves as a powerful tool to create a variety of surface morphologies for different applications. Its potential for controlling surface morphology at the atomic scale over micron-size regions is especially appealing. In spite of its wide usage, the chemistry of silicon etching is poorly understood. Many seemingly simple but fundamental questions have not been answered. As a result, the development of new etchants and new etching protocols are based on expensive and tedious trial-and-error experiments. A better understanding of the etching mechanism would direct the rational formulation of new etchants that produce controlled etch morphologies. Particularly, micron-scale step bunches spontaneously develop on the vicinal Si(111) surface etched in KOH or other anisotropic aqueous etchants. The ability to control the size, orientation, density and regularity of these surface features would greatly improve the performance of microelectromechanical devices. This study is directed towards understanding the chemistry and step bunching instability in aqueous anisotropic etching of silicon through a combination of experimental techniques and theoretical simulations. To reveal the cause of step-bunching instability, kinetic Monte Carlo simulations were constructed based on an atomistic model of the silicon lattice and a modified kinematic wave theory. The simulations showed that inhomogeneity was the origin of step-bunching, which was confirmed through STM studies of etch morphologies created under controlled flow conditions. To quantify the size of the inhomogeneities in different etchants and to clarify their effects, a five-parallel-trench pattern was fabricated. This pattern used a nitride mask to protect most regions of the wafer; five evenly spaced etch windows were opened to the Si(110

  6. Evaluating Web-Scale Discovery Services: A Step-by-Step Guide

    Directory of Open Access Journals (Sweden)

    Joseph Deodato

    2015-06-01

    Full Text Available Selecting a web-scale discovery service is a large and important undertaking that involves a significant investment of time, staff, and resources. Finding the right match begins with a thorough and carefully planned evaluation process. In order to be successful, this process should be inclusive, goal-oriented, data-driven, user-centered, and transparent. The following article offers a step-by-step guide for developing a web-scale discovery evaluation plan rooted in these five key principles based on best practices synthesized from the literature as well as the author’s own experiences coordinating the evaluation process at Rutgers University. The goal is to offer academic libraries that are considering acquiring a web-scale discovery service a blueprint for planning a structured and comprehensive evaluation process.

  7. Work Integration of People with Disabilities in the Regular Labour Market: What Can We Do to Improve These Processes?

    Science.gov (United States)

    Vila, Montserrat; Pallisera, Maria; Fullana, Judit

    2007-01-01

    Background: It is important to ensure that regular processes of labour market integration are available for all citizens. Method: Thematic content analysis techniques, using semi-structured group interviews, were used to identify the principal elements contributing to the processes of integrating people with disabilities into the regular labour…

  8. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for total variation method, and TGV stands for total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
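
    TGV itself is involved, but the plain TV term that motivates it is easy to sketch. The following 1-D denoising example (my own illustration, with invented data and parameters, using a smoothed-gradient descent rather than a proper proximal solver) shows the basic mechanism: the TV penalty suppresses noise while preserving jumps, which on smooth regions produces the piecewise-constant "staircase" behaviour the abstract mentions.

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, n_iter=2000, eps=1e-2, step=0.01):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps).

    eps smooths the non-differentiable |.| so plain gradient descent applies.
    """
    u = f.copy()
    for _ in range(n_iter):
        du = np.diff(u)
        g = du / np.sqrt(du**2 + eps)        # smoothed sign of each jump
        grad_tv = np.zeros_like(u)           # divergence-style assembly of d(TV)/du
        grad_tv[:-1] -= g
        grad_tv[1:] += g
        u -= step * ((u - f) + lam * grad_tv)
    return u

rng = np.random.default_rng(8)
clean = np.repeat([0.0, 1.0], 50)            # piecewise-constant ground truth
noisy = clean + 0.3 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
```

    TGV replaces the single first-order term with an infimal convolution of first- and second-order terms, which removes the staircase on slopes; the EIT adaptation in the paper carries that penalty onto an FEM mesh.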

  9. Application of Turchin's method of statistical regularization

    Science.gov (United States)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of the Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.
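
    The Bayesian character of Turchin's method is that the unknown function gets a smoothness prior, so the regularized answer is a posterior mean and comes with a posterior covariance (error bars). A minimal Gaussian sketch follows; the apparatus function, prior strength alpha, and all numbers are invented, and alpha is fixed by hand here rather than inferred as in the full method.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
t = np.arange(n)

# Apparatus function: convolution with a Gaussian kernel (forward operator K).
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
K /= K.sum(axis=1, keepdims=True)

phi_true = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)      # unknown signal
sigma = 0.01
f = K @ phi_true + sigma * rng.standard_normal(n)    # measured spectrum

# Smoothness prior: alpha * ||D phi||^2, D = second-difference operator.
D = np.diff(np.eye(n), n=2, axis=0)
alpha = 1.0
Omega = D.T @ D

# Gaussian model => Gaussian posterior, with mean and covariance in closed form.
Sinv = K.T @ K / sigma**2 + alpha * Omega
cov = np.linalg.inv(Sinv)
phi_hat = cov @ (K.T @ f) / sigma**2                 # posterior mean (the "unfolded" signal)
err = np.sqrt(np.diag(cov))                          # pointwise posterior std. deviations
```

    The posterior mean coincides with a Tikhonov solution, but the covariance is what distinguishes the statistical formulation: it quantifies how much the prior, rather than the data, determines each point.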

  10. Air-chemistry "turbulence": power-law scaling and statistical regularity

    Directory of Open Access Journals (Sweden)

    H.-m. Hsu

    2011-08-01

    Full Text Available With the intent to gain further knowledge on the spectral structures and statistical regularities of surface atmospheric chemistry, the chemical gases (NO, NO2, NOx, CO, SO2, and O3) and aerosol (PM10) measured at 74 air quality monitoring stations over the island of Taiwan are analyzed for the year of 2004 at hourly resolution. They represent a range of surface air quality with a mixed combination of geographic settings, and include urban/rural, coastal/inland, plain/hill, and industrial/agricultural locations. In addition to the well-known semi-diurnal and diurnal oscillations, weekly, and intermediate (20 ~ 30 days) peaks are also identified with the continuous wavelet transform (CWT). The spectra indicate power-law scaling regions for the frequencies higher than the diurnal and those lower than the diurnal with the average exponents of −5/3 and −1, respectively. These dual-exponents are corroborated with those with the detrended fluctuation analysis in the corresponding time-lag regions. These exponents are mostly independent of the averages and standard deviations of time series measured at various geographic settings, i.e., the spatial inhomogeneities. In other words, they possess dominant universal structures. After spectral coefficients from the CWT decomposition are grouped according to the spectral bands, and inverted separately, the PDFs of the reconstructed time series for the high-frequency band demonstrate the interesting statistical regularity, −3 power-law scaling for the heavy tails, consistently. Such spectral peaks, dual-exponent structures, and power-law scaling in heavy tails are important structural information, but their relations to turbulence and mesoscale variability require further investigations. This could lead to a better understanding of the processes controlling air quality.
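
    The kind of spectral-exponent estimate reported here (e.g. the −5/3 region) reduces to a log-log fit of the periodogram. The sketch below, with a synthetic signal whose spectrum is prescribed to follow f^(−5/3) by construction, shows the fit recovering the exponent; it is an illustration of the technique, not the paper's wavelet analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2**14
beta = 5.0 / 3.0                         # target spectral exponent, S(f) ~ f^(-beta)

# Synthesize a signal with a prescribed power-law amplitude spectrum
# and random phases, then invert to the time domain.
freqs = np.fft.rfftfreq(N, d=1.0)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-beta / 2.0)     # |X(f)|^2 ~ f^(-beta)
phases = np.exp(2j * np.pi * rng.random(len(freqs)))
x = np.fft.irfft(amp * phases, n=N)

# Estimate the exponent from the periodogram with a log-log linear fit,
# excluding the DC bin and the near-Nyquist end.
P = np.abs(np.fft.rfft(x)) ** 2
band = (freqs > 1e-3) & (freqs < 0.4)
slope, _ = np.polyfit(np.log(freqs[band]), np.log(P[band]), 1)
print(f"estimated exponent: {-slope:.2f}")
```

    On real data the periodogram scatters around the power law and the fit band must be chosen to sit inside a single scaling regime; that choice is exactly what separates the −5/3 and −1 regions in the study.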

  11. System identification via sparse multiple kernel-based regularization using sequential convex optimization techniques

    DEFF Research Database (Denmark)

    Chen, Tianshi; Andersen, Martin Skovgaard; Ljung, Lennart

    2014-01-01

    Model estimation and structure detection with short data records are two issues that receive increasing interest in System Identification. In this paper, a multiple kernel-based regularization method is proposed to handle those issues. Multiple kernels are conic combinations of fixed kernels

  12. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRN. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. The particle swarm optimization (PSO) is applied to optimize the parameters of RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from the studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of RNN using experimental time series having limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often results in more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. Combining by a weighted majority voting rule the networks derived by the RE_RMLP-RNN using different numbers of latent nodes in step one to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series.
    The framework of two-step
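
    The edge-rank-assignment step can be caricatured with a linear surrogate: simulate a tiny GRN, fit next-step expression by least squares, and rank candidate edges by weight magnitude. This is not the paper's RNN/RMLP machinery with PSO; the network size, weights, and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
g = 5
W_true = np.zeros((g, g))                     # W_true[j, i] != 0 : gene i regulates gene j
W_true[0, 1], W_true[1, 2], W_true[3, 4] = 0.8, -0.7, 0.6

# Simulate expression time series x_{t+1} = tanh(W x_t) + noise.
T = 200
X = np.empty((T, g))
X[0] = rng.standard_normal(g)
for t in range(T - 1):
    X[t + 1] = np.tanh(X[t] @ W_true.T) + 0.5 * rng.standard_normal(g)

# Edge ranking: fit a linear surrogate of the dynamics by least squares,
# then rank all g*g candidate edges by estimated weight magnitude.
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
W_hat = B.T                                   # W_hat[j, i] ~ effect of gene i on gene j
order = np.argsort(-np.abs(W_hat), axis=None)
ranked_edges = [tuple(np.unravel_index(i, W_hat.shape)) for i in order]
print(ranked_edges[:3])
```

    The second step in the paper then searches over networks built from the top-ranked edges and re-optimizes the nonlinear model on each candidate, which is where the RNN/RMLP and voting machinery comes in.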

  13. Numerical simulation of complex turbulent Flow over a backward-facing step

    International Nuclear Information System (INIS)

    Silveira Neto, A.

    1991-06-01

    A statistical and topological study of a complex turbulent flow over a backward-facing step is realized by means of Direct and Large-Eddy Simulations. Direct simulations are performed in an isothermal and in a stratified two-dimensional case. In the isothermal case coherent structures have been obtained by the numerical simulation in the mixing layer downstream of the step. In a second step a thermal stratification is imposed on this flow. The coherent structures are in this case produced in the immediate vicinity of the step and disappear downstream for increasing stratification. Afterwards, large-eddy simulations are carried out in the three-dimensional case. The subgrid-scale model is a local adaptation to the physical space of the spectral eddy-viscosity concept. The statistics of turbulence are in good agreement with the experimental data, corresponding to a small step configuration. Furthermore, calculations at higher step configuration show that the eddy structure of the flow presents striking analogies with the plane shear layers, with large billows shed behind the step, and intense longitudinal vortices strained between these billows [fr

  14. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  15. Feature selection and multi-kernel learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-20

    Nonnegative matrix factorization (NMF), a popular part-based representation technique, does not capture the intrinsic local geometric structure of the data space. Graph regularized NMF (GNMF) was recently proposed to avoid this limitation by regularizing NMF with a nearest neighbor graph constructed from the input data set. However, GNMF has two main bottlenecks. First, using the original feature space directly to construct the graph is not necessarily optimal because of the noisy and irrelevant features and nonlinear distributions of data samples. Second, one possible way to handle the nonlinear distribution of data samples is by kernel embedding. However, it is often difficult to choose the most suitable kernel. To solve these bottlenecks, we propose two novel graph-regularized NMF methods, AGNMF_FS and AGNMF_MK, by introducing feature selection and multiple-kernel learning to the graph regularized NMF, respectively. Instead of using a fixed graph as in GNMF, the two proposed methods learn the nearest neighbor graph that is adaptive to the selected features and learned multiple kernels, respectively. For each method, we propose a unified objective function to conduct feature selection/multi-kernel learning, NMF and adaptive graph regularization simultaneously. We further develop two iterative algorithms to solve the two optimization problems. Experimental results on two challenging pattern classification tasks demonstrate that the proposed methods significantly outperform state-of-the-art data representation methods.
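
    The fixed-graph GNMF that both proposed methods build on has compact multiplicative updates. The sketch below implements the standard GNMF update rules (not the adaptive-graph variants introduced in this paper) on synthetic data, with the graph and all parameters invented for illustration.

```python
import numpy as np

def gnmf(X, A, k=2, lam=0.1, n_iter=200, seed=0):
    """Graph-regularized NMF: X ~ W H with penalty lam * Tr(H L H^T),
    where L = D - A is the graph Laplacian over the samples (columns of X)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    Dg = np.diag(A.sum(axis=1))                  # degree matrix
    eps = 1e-9                                   # guards against division by zero
    for _ in range(n_iter):
        # Standard GNMF multiplicative updates; nonnegativity is preserved
        # because every factor in the ratios is nonnegative.
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ Dg + eps)
    return W, H

rng = np.random.default_rng(9)
X = rng.random((10, 8))                          # toy nonnegative data, 8 samples
A = np.ones((8, 8)) - np.eye(8)                  # toy affinity: all samples connected
W, H = gnmf(X, A)
```

    The adaptive methods in the paper replace the fixed A with a graph re-estimated from selected features or learned kernel combinations inside the same alternating scheme.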

  16. A step-by-step guide to systematically identify all relevant animal studies

    Science.gov (United States)

    Leenaars, Marlies; Hooijmans, Carlijn R; van Veggel, Nieky; ter Riet, Gerben; Leeflang, Mariska; Hooft, Lotty; van der Wilt, Gert Jan; Tillema, Alice; Ritskes-Hoitinga, Merel

    2012-01-01

    Before starting a new animal experiment, thorough analysis of previously performed experiments is essential from a scientific as well as from an ethical point of view. The method that is most suitable to carry out such a thorough analysis of the literature is a systematic review (SR). An essential first step in an SR is to search and find all potentially relevant studies. It is important to include all available evidence in an SR to minimize bias and reduce hampered interpretation of experimental outcomes. Despite the recent development of search filters to find animal studies in PubMed and EMBASE, searching for all available animal studies remains a challenge. Available guidelines from the clinical field cannot be copied directly to the situation within animal research, and although there are plenty of books and courses on searching the literature, there is no compact guide available to search and find relevant animal studies. Therefore, in order to facilitate a structured, thorough and transparent search for animal studies (in both preclinical and fundamental science), an easy-to-use, step-by-step guide was prepared and optimized using feedback from scientists in the field of animal experimentation. The step-by-step guide will assist scientists in performing a comprehensive literature search and, consequently, improve the scientific quality of the resulting review and prevent unnecessary animal use in the future. PMID:22037056

  17. Effect of von Karman Vortex Shedding on Regular and Open-slit V-gutter Stabilized Turbulent Premixed Flames

    Science.gov (United States)

    2012-04-01

    Both flame lengths shrink and large scale disruptions occur downstream with vortex shedding carrying reaction zones. Flames in both flameholders … the flame structure changes dramatically for both the regular and the open-slit V-gutter. … reduces the flame length. However, qualitatively the open-slit V-gutter appears to be more sensitive than the regular V-gutter. Both flames remain

  18. Spatially-Variant Tikhonov Regularization for Double-Difference Waveform Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Huang, Lianjie [Los Alamos National Laboratory; Zhang, Zhigang [Los Alamos National Laboratory

    2011-01-01

    Double-difference waveform inversion is a potential tool for quantitative monitoring of geologic carbon storage. It jointly inverts time-lapse seismic data for changes in reservoir geophysical properties. Due to the ill-posedness of waveform inversion, it is a great challenge to obtain reservoir changes accurately and efficiently, particularly when using time-lapse seismic reflection data. Regularization techniques can be utilized to address the issue of ill-posedness. The regularization parameter controls the smoothness of inversion results. A constant regularization parameter is normally used in waveform inversion, and an optimal regularization parameter has to be selected. The resulting inversion results are a trade-off among regions with different smoothness or noise levels; the images are therefore over-regularized in some regions and under-regularized in others. In this paper, we employ a spatially-variant parameter in the Tikhonov regularization scheme used in double-difference waveform tomography to improve the inversion accuracy and robustness. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter and those produced without any regularization. We observe that, using the spatially-variant regularization scheme, the target regions are well reconstructed while noise is reduced in the other regions. We show that the spatially-variant regularization scheme provides the flexibility to regularize local regions based on a priori information without increasing computational costs or memory requirements.
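
    The core idea, choosing the Tikhonov weight per model cell instead of a single global value, can be sketched on a toy 1-D deconvolution problem. This is a hedged illustration: the blur operator, parameter values and variable names are invented for the sketch, not taken from the paper.

```python
import numpy as np

# Toy 1-D inverse problem: blur a piecewise-constant signal, then invert
# with Tikhonov regularization whose strength varies per model cell.
rng = np.random.default_rng(0)
n = 100
x_true = np.zeros(n)
x_true[20:40] = 1.0                                   # "target region"
A = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)]
              for i in range(n)])                     # Gaussian blur operator
b = A @ x_true + 0.01 * rng.standard_normal(n)        # noisy data

def tikhonov(A, b, lam):
    """Minimize ||Ax - b||^2 + sum_i lam_i * x_i^2 in closed form."""
    lam = np.broadcast_to(lam, A.shape[1])
    return np.linalg.solve(A.T @ A + np.diag(lam), A.T @ b)

x_const = tikhonov(A, b, 0.1)          # one global regularization parameter
lam_sv = np.full(n, 0.1)
lam_sv[20:40] = 0.001                  # regularize the known target region less
x_sv = tikhonov(A, b, lam_sv)

print(round(float(np.linalg.norm(x_const - x_true)), 3),
      round(float(np.linalg.norm(x_sv - x_true)), 3))
```

    Note that the spatially-variant weights add essentially no cost: the normal-equation matrix gets a diagonal shift instead of a scalar one, consistent with the paper's remark about unchanged computational and memory requirements.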

  19. Regularized Partial Least Squares with an Application to NMR Spectroscopy

    OpenAIRE

    Allen, Genevera I.; Peterson, Christine; Vannucci, Marina; Maletic-Savatic, Mirjana

    2012-01-01

    High-dimensional data common in genomics, proteomics, and chemometrics often contains complicated correlation structures. Recently, partial least squares (PLS) and Sparse PLS methods have gained attention in these areas as dimension reduction techniques in the context of supervised data analysis. We introduce a framework for Regularized PLS by solving a relaxation of the SIMPLS optimization problem with penalties on the PLS loadings vectors. Our approach enjoys many advantages including flexi...

  20. Protein secondary structure appears to be robust under in silico evolution while protein disorder appears not to be.

    KAUST Repository

    Schaefer, Christian

    2010-01-16

    MOTIVATION: The mutation of amino acids often impacts protein function and structure. Mutations without negative effect sustain evolutionary pressure. We study a particular aspect of structural robustness with respect to mutations: regular protein secondary structure and natively unstructured (intrinsically disordered) regions. Is the formation of regular secondary structure an intrinsic feature of amino acid sequences, or is it a feature that is lost upon mutation and is maintained by evolution against the odds? Similarly, is disorder an intrinsic sequence feature or is it difficult to maintain? To tackle these questions, we in silico mutated native protein sequences into random sequence-like ensembles and monitored the change in predicted secondary structure and disorder. RESULTS: We established that by our coarse-grained measures for change, predictions and observations were similar, suggesting that our results were not biased by prediction mistakes. Changes in secondary structure and disorder predictions were linearly proportional to the change in sequence. Surprisingly, neither the content nor the length distribution for the predicted secondary structure changed substantially. Regions with long disorder behaved differently in that significantly fewer such regions were predicted after a few mutation steps. Our findings suggest that the formation of regular secondary structure is an intrinsic feature of random amino acid sequences, while the formation of long-disordered regions is not an intrinsic feature of proteins with disordered regions. Put differently, helices and strands appear to be maintained easily by evolution, whereas maintaining disordered regions appears difficult. Neutral mutations with respect to disorder are therefore very unlikely.

  1. Protein secondary structure appears to be robust under in silico evolution while protein disorder appears not to be.

    KAUST Repository

    Schaefer, Christian; Schlessinger, Avner; Rost, Burkhard

    2010-01-01

    MOTIVATION: The mutation of amino acids often impacts protein function and structure. Mutations without negative effect sustain evolutionary pressure. We study a particular aspect of structural robustness with respect to mutations: regular protein secondary structure and natively unstructured (intrinsically disordered) regions. Is the formation of regular secondary structure an intrinsic feature of amino acid sequences, or is it a feature that is lost upon mutation and is maintained by evolution against the odds? Similarly, is disorder an intrinsic sequence feature or is it difficult to maintain? To tackle these questions, we in silico mutated native protein sequences into random sequence-like ensembles and monitored the change in predicted secondary structure and disorder. RESULTS: We established that by our coarse-grained measures for change, predictions and observations were similar, suggesting that our results were not biased by prediction mistakes. Changes in secondary structure and disorder predictions were linearly proportional to the change in sequence. Surprisingly, neither the content nor the length distribution for the predicted secondary structure changed substantially. Regions with long disorder behaved differently in that significantly fewer such regions were predicted after a few mutation steps. Our findings suggest that the formation of regular secondary structure is an intrinsic feature of random amino acid sequences, while the formation of long-disordered regions is not an intrinsic feature of proteins with disordered regions. Put differently, helices and strands appear to be maintained easily by evolution, whereas maintaining disordered regions appears difficult. Neutral mutations with respect to disorder are therefore very unlikely.

  2. From recreational to regular drug use

    DEFF Research Database (Denmark)

    Järvinen, Margaretha; Ravn, Signe

    2011-01-01

    This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...

  3. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Vol. 72, No. 1 (2010), pp. 439-448. ISSN 0362-546X. R&D Projects: GA AV ČR KJB100190701. Institutional research plan: CEZ:AV0Z10190503. Keywords: regularly varying function; regularly varying sequence; measure chain; time scale; embedding theorem; representation theorem; second order dynamic equation; asymptotic properties. Subject RIV: BA - General Mathematics. Impact factor: 1.279, year: 2010. http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  4. New regular black hole solutions

    International Nuclear Information System (INIS)

    Lemos, Jose P. S.; Zanchin, Vilson T.

    2011-01-01

    In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter and whose exterior region is Reissner-Nordstroem, with a charged thin layer in between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.

  5. On geodesics in low regularity

    Science.gov (United States)

    Sämann, Clemens; Steinbauer, Roland

    2018-02-01

    We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.

  6. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Microarray studies enable us to obtain hundreds of thousands of expressions of genes or genotypes at once, making microarrays an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently, most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  7. Steps and dislocations in cubic lyotropic crystals

    International Nuclear Information System (INIS)

    Leroy, S; Pieranski, P

    2006-01-01

    It has been shown recently that lyotropic systems are convenient for studies of faceting, growth or anisotropic surface melting of crystals. All these phenomena imply the active contribution of surface steps and bulk dislocations. We show here that steps can be observed in situ and in real time by means of a new method combining hygroscopy with phase contrast. First results raise interesting issues about the consequences of bicontinuous topology on the structure and dynamical behaviour of steps and dislocations

  8. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
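
    The gist of the scheme, learning smooth basis functions from an unsupervised graph Laplacian and then fitting a value function in that basis, can be sketched as follows. This is an illustrative toy: the 1-D state samples and the known smooth target (standing in for sampled returns) are assumptions of the sketch, not the paper's benchmarks.

```python
import numpy as np

# Unsupervised step: build a kNN graph over sampled states and take the
# smoothest Laplacian eigenvectors as data-driven basis functions.
rng = np.random.default_rng(1)
states = np.sort(rng.uniform(0.0, 1.0, 200))      # sampled 1-D state space

k, sigma = 5, 0.05
d = np.abs(states[:, None] - states[None, :])
W = np.exp(-(d / sigma) ** 2)                     # Gaussian edge weights
far = np.argsort(d, axis=1)[:, k + 1:]            # all but self + k nearest
np.put_along_axis(W, far, 0.0, axis=1)            # sparsify to a kNN graph
W = np.maximum(W, W.T)                            # symmetrize the graph
L = np.diag(W.sum(1)) - W                         # graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)
Phi = eigvecs[:, :10]                             # 10 smoothest features

# Supervised step: fit a value-function target (a known smooth function
# stands in for sampled returns) by least squares in the learned basis.
v_target = np.sin(2 * np.pi * states)
w, *_ = np.linalg.lstsq(Phi, v_target, rcond=None)
v_hat = Phi @ w
print(round(float(np.mean((v_hat - v_target) ** 2)), 4))
```

    Because the basis is derived from the sampled geometry of the state space, it can be extended to novel states by interpolation (e.g. Nystrom-style), which is the kind of direct basis extension the abstract refers to.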

  9. Step out - Step in Sequencing Games

    NARCIS (Netherlands)

    Musegaas, M.; Borm, P.E.M.; Quant, M.

    2014-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out - Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order.

  10. From design of activity-based costing systems to their regular use

    Directory of Open Access Journals (Sweden)

    M. Angels Fito

    2011-11-01

    Purpose: To understand why many companies that develop activity-based costing (ABC) systems do not use them on a regular basis. Design/methodology/approach: We review the existing literature on the process of ABC implementation, concentrating specifically on the step from the acceptance of an ABC model to its routine use. We identify key factors for successful uptake of ABC systems as a regular management tool and use these factors to interpret the experience of two companies that illustrate, respectively, a success and a failure. Findings: Sixteen factors are identified that positively or negatively influence the actual use of ABC costing systems. These factors can be grouped into six categories: strategic, individual, organizational, technological, operational and external factors. Originality/value: This paper sheds some light on the paradoxical situation that regular usage of ABC systems is not as common as might be expected given their widespread acceptance on a conceptual level.

  11. Step out-step in sequencing games

    NARCIS (Netherlands)

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2015-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out–Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. First,

  12. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

    Sparsity-inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly, as opposed to those above, and therefore imposes strong sparsity and...

  13. A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.

    Science.gov (United States)

    Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong

    2015-12-01

    Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when the graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which leads to the increase of data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on the cost function, a class of novel graph regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that in the learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to present the efficiency of the new algorithms. Last, the clustering accuracies of different algorithms are also investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
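
    The kind of graph-regularized NMF update described above can be sketched as follows. This is a minimal illustration of standard multiplicative updates with a Laplacian penalty; the toy data, chain graph and parameter values are assumptions of the sketch, not the paper's algorithm.

```python
import numpy as np

# Graph-regularized NMF:  min_{W,H >= 0}  ||X - W H||_F^2 + lam * tr(H L H^T),
# where L = D - A is the graph Laplacian over the n samples (columns of X).
rng = np.random.default_rng(0)
m, n, k, lam = 30, 60, 4, 0.5

X = np.abs(rng.standard_normal((m, n)))          # nonnegative data matrix
A = (np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) == 1).astype(float)
D = np.diag(A.sum(1))                            # chain-graph affinity/degree
L_graph = D - A

W = np.abs(rng.standard_normal((m, k)))
H = np.abs(rng.standard_normal((k, n)))
eps = 1e-9                                       # avoids division by zero

def objective(W, H):
    return np.linalg.norm(X - W @ H) ** 2 + lam * np.trace(H @ L_graph @ H.T)

obj = [objective(W, H)]
for _ in range(100):
    # Multiplicative updates: ratios of the negative and positive parts of
    # the gradient, so nonnegativity is preserved automatically.
    W *= (X @ H.T) / (W @ H @ H.T + eps)
    H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    obj.append(objective(W, H))

print(obj[0] > obj[-1])
```

    Each update only rescales the nonnegative factors, and the Laplacian term pulls the representations of graph-adjacent samples toward each other, which is what keeps same-class samples in nearby subspaces.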

  14. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

    In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value under the posterior and the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
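
    For a concrete instance, take a Gaussian prior p(w) proportional to exp(-alpha * ||w||^2 / 2) on a linear-Gaussian model: requiring that E[||w||^2] agree under the prior (= d/alpha) and the posterior (= ||m||^2 + tr S) yields a simple fixed-point update for alpha. The sketch below is one reading of that relation with invented toy data, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, sigma2 = 10, 50, 0.1
w_true = rng.standard_normal(d)
A = rng.standard_normal((N, d))
b = A @ w_true + np.sqrt(sigma2) * rng.standard_normal(N)

alpha = 1.0                     # initial regularization strength
for _ in range(50):
    # Posterior of w for a linear-Gaussian model with known noise variance.
    S = np.linalg.inv(A.T @ A / sigma2 + alpha * np.eye(d))   # covariance
    m = S @ A.T @ b / sigma2                                  # mean
    # Match E[||w||^2] under the posterior (||m||^2 + tr S)
    # to its prior value (d / alpha).
    alpha = d / (m @ m + np.trace(S))

print(float(alpha))
```

    At convergence the regularizer's expectation is identical under prior and posterior, which is exactly the stated optimality relation specialized to a quadratic regularizer.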

  15. One-step controllable fabrication of superhydrophobic surfaces with special composite structure on zinc substrates.

    Science.gov (United States)

    Ning, Tao; Xu, Wenguo; Lu, Shixiang

    2011-09-01

    Stable superhydrophobic platinum surfaces have been effectively fabricated on zinc substrates through a one-step replacement deposition process without further modification or any other post-treatment procedures. The fabrication process was controllable, as testified by the various morphologies and hydrophobic properties of the different prepared samples. Using SEM and water contact angle (CA) analysis, the effects of reaction conditions on the surface morphology and hydrophobicity of the resulting surfaces were carefully studied. The results show that the optimum condition for superhydrophobic surface fabrication depends largely on the positioning of the zinc plate and the concentrations of the reactants. When the zinc plate was placed vertically and the concentration of the PtCl(4) solution was 5 mmol/L, the zinc substrate was covered by a novel and interesting composite structure. The structure was composed of microscale hexagonal cavities, a densely packed nanoparticle layer, and top micro- and nanoscale flower-like structures, which exhibit great surface roughness and porosity contributing to the superhydrophobicity. A maximal CA value of about 171° was obtained under the same reaction conditions. The XRD, XPS and EDX results indicate that crystalline pure platinum nanoparticles aggregated on the zinc substrates in a free deposition manner. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review.

    Science.gov (United States)

    Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha

    2017-04-01

    To systematically review the current literature on simulation in healthcare, including the structured steps used in the emergency healthcare sector, and to propose a framework for simulation in the emergency department. For the purpose of collecting the data, the PubMed and ACM databases were searched between the years 2003 and 2013. The inclusion criteria were English-language articles available in full text with the closest objectives; from a total of 54 articles retrieved from the databases, 11 articles were selected for further analysis. The studies focused on the reduction of waiting time and patient stay, optimization of resource allocation, creation of crisis and maximum-demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of system operations and functions. Subsequently, 10 simulation steps were derived from the relevant studies after expert evaluation. The 10-step approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as software implementation of simulation problems.

  17. Moving force identification based on redundant concatenated dictionary and weighted l1-norm regularization

    Science.gov (United States)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng

    2018-01-01

    Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals, due to bridge vibration and to bumps on the bridge deck, respectively. Therefore, the interaction forces are usually hard to express completely and sparsely using a single basis function set. Based on a redundant concatenated dictionary and the weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions, used for matching the harmonic and impact signal features of unknown moving forces. The weighted l1-norm regularization method is introduced for the formulation of the MFI equation, so that the signal features of moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and the feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in the laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with strong robustness, and that it performs better than the Tikhonov regularization method. Some related issues are discussed as well.
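
    The FISTA step for a weighted l1-norm regularized least-squares problem, the kind of formulation described above, can be sketched as follows. The data are a toy: the dictionary here is a plain random matrix rather than the paper's trigonometric/rectangular dictionary, and the weight-setting and BIC parameter selection are omitted.

```python
import numpy as np

# Sketch of FISTA for  min_x  0.5 * ||A x - b||^2 + ||w * x||_1  (elementwise w).
rng = np.random.default_rng(0)
n_obs, n_atoms = 80, 200
A = rng.standard_normal((n_obs, n_atoms)) / np.sqrt(n_obs)
x_true = np.zeros(n_atoms)
x_true[[10, 50, 120]] = [2.0, -1.5, 1.0]         # sparse "force" coefficients
b = A @ x_true + 0.01 * rng.standard_normal(n_obs)

w = np.full(n_atoms, 0.02)                       # per-coefficient l1 weights
Lip = np.linalg.norm(A, 2) ** 2                  # Lipschitz const. of gradient

x = np.zeros(n_atoms)
y = x.copy()
t = 1.0
for _ in range(300):
    g = A.T @ (A @ y - b)                        # gradient of the data term
    z = y - g / Lip
    x_new = np.sign(z) * np.maximum(np.abs(z) - w / Lip, 0.0)  # soft threshold
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = x_new + (t - 1) / t_new * (x_new - x)    # Nesterov momentum step
    x, t = x_new, t_new

print(np.flatnonzero(np.abs(x) > 0.5))
```

    Larger entries of w suppress the corresponding coefficients more strongly, which is how the weighting steers the solution toward the expected signal structure.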

  18. Exclusion of children with intellectual disabilities from regular ...

    African Journals Online (AJOL)

    This study investigated why teachers exclude children with intellectual disability (ID) from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data; results revealed that 57.4% of regular teachers could not cope with children with ID ...

  19. On infinite regular and chiral maps

    OpenAIRE

    Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán

    2015-01-01

    We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.

  20. 29 CFR 779.18 - Regular rate.

    Science.gov (United States)

    2010-07-01

    ... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  1. Continuum regularized Yang-Mills theory

    International Nuclear Information System (INIS)

    Sadun, L.A.

    1987-01-01

    Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.

  2. Structural and functional characterization of an archaeal clustered regularly interspaced short palindromic repeat (CRISPR)-associated complex for antiviral defense (CASCADE).

    Science.gov (United States)

    Lintner, Nathanael G; Kerou, Melina; Brumfield, Susan K; Graham, Shirley; Liu, Huanting; Naismith, James H; Sdano, Matthew; Peng, Nan; She, Qunxin; Copié, Valérie; Young, Mark J; White, Malcolm F; Lawrence, C Martin

    2011-06-17

    In response to viral infection, many prokaryotes incorporate fragments of virus-derived DNA into loci called clustered regularly interspaced short palindromic repeats (CRISPRs). The loci are then transcribed, and the processed CRISPR transcripts are used to target invading viral DNA and RNA. The Escherichia coli "CRISPR-associated complex for antiviral defense" (CASCADE) is central in targeting invading DNA. Here we report the structural and functional characterization of an archaeal CASCADE (aCASCADE) from Sulfolobus solfataricus. Tagged Csa2 (Cas7) expressed in S. solfataricus co-purifies with Cas5a-, Cas6-, Csa5-, and Cas6-processed CRISPR-RNA (crRNA). Csa2, the dominant protein in aCASCADE, forms a stable complex with Cas5a. Transmission electron microscopy reveals a helical complex of variable length, perhaps due to substoichiometric amounts of other CASCADE components. A recombinant Csa2-Cas5a complex is sufficient to bind crRNA and complementary ssDNA. The structure of Csa2 reveals a crescent-shaped structure unexpectedly composed of a modified RNA-recognition motif and two additional domains present as insertions in the RNA-recognition motif. Conserved residues indicate potential crRNA- and target DNA-binding sites, and the H160A variant shows significantly reduced affinity for crRNA. We propose a general subunit architecture for CASCADE in other bacteria and Archaea.

  3. ℓ1/2-norm regularized nonnegative low-rank and sparse affinity graph for remote sensing image segmentation

    Science.gov (United States)

    Tian, Shu; Zhang, Ye; Yan, Yiming; Su, Nan

    2016-10-01

    Segmentation of real-world remote sensing images is a challenge due to the complex texture information with high heterogeneity. Thus, graph-based image segmentation methods have been attracting great attention in the field of remote sensing. However, most of the traditional graph-based approaches fail to capture the intrinsic structure of the feature space and are sensitive to noise. An ℓ1/2-norm regularization-based graph segmentation method is proposed to segment remote sensing images. First, we use the occlusion of the random texture model (ORTM) to extract the local histogram features. Then, an ℓ1/2-norm regularized low-rank and sparse representation (LNNLRS) is implemented to construct an ℓ1/2-regularized nonnegative low-rank and sparse graph (LNNLRS-graph) by the union of feature subspaces. Moreover, the LNNLRS-graph has a high ability to discriminate the manifold intrinsic structure of highly homogeneous texture information. Meanwhile, the LNNLRS representation takes advantage of the low-rank and sparse characteristics to remove noise and corrupted data. Last, we introduce the LNNLRS-graph into graph-regularized nonnegative matrix factorization to enhance the segmentation accuracy. The experimental results using remote sensing images show that, when compared to five state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.

  4. Synchronisation phenomenon in three blades rotor driven by regular or chaotic oscillations

    Directory of Open Access Journals (Sweden)

    Szmit Zofia

    2018-01-01

    The goal of the paper is to analyse the influence of different types of excitation on the synchronisation phenomenon in the case of a rotating system composed of a rigid hub and three flexible composite beams. In the model it is assumed that two blades, due to structural differences, are de-tuned. The numerical calculations are divided into two parts: first the rotating system is excited by a torque given by a regular harmonic function; then, in the second part, the torque is produced by a chaotic Duffing oscillator. The synchronisation phenomenon between the beams is analysed for both regular and chaotic motions. Partial differential equations of motion are solved numerically, and resonance curves, time series and Poincaré maps are presented for selected excitation torques.

  5. Age-related patterns of drug use initiation among polydrug using regular psychostimulant users.

    Science.gov (United States)

    Darke, Shane; Kaye, Sharlene; Torok, Michelle

    2012-09-01

    To determine age-related patterns of drug use initiation, drug sequencing and treatment entry among regular psychostimulant users. A cross-sectional study of 269 regular psychostimulant users, who were administered a structured interview examining onset of use for major licit and illicit drugs. The mean age at first intoxication was not associated with age or gender. In contrast, younger age was associated with earlier ages of onset for all of the illicit drug classes. Each additional year of age was associated with a 4 month increase in onset age for methamphetamine, and 3 months for heroin. By the age of 17, those born prior to 1961 had, on average, used only tobacco and alcohol, whereas those born between 1986 and 1990 had used nine different drug classes. The period between initial use and the transition to regular use, however, was stable. Age was also negatively correlated with both age at initial injection and age at regular injecting. Onset sequences, however, remained stable. Consistent with the age-related patterns of drug use, each additional year of age was associated with a 0.47 year increase in the age at first treatment. While the age at first intoxication appeared stable, the trajectory through illicit drug use was substantially truncated. The data indicate that, at least among those who progress to regular illicit drug use, younger users are likely to be exposed to far broader polydrug use in their teens than has previously been the case. © 2012 Australasian Professional Society on Alcohol and other Drugs.

  6. A comparison between star products on regular orbits of compact Lie groups

    International Nuclear Information System (INIS)

    Fioresi, R.; Lledo, M.A.

    2002-01-01

    In this paper, an algebraic and a differential star product defined on a regular coadjoint orbit of a compact semisimple group are compared. It has been proved that there is an injective algebra homomorphism between the algebra of polynomials with the algebraic star product and the algebra of differential functions with the differential star product structure. (author)

  7. Regularity effect in prospective memory during aging

    OpenAIRE

    Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique

    2016-01-01

    Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...

  8. 20 CFR 226.14 - Employee regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  9. Nonnegative Matrix Factorization with Rank Regularization and Hard Constraint.

    Science.gov (United States)

    Shang, Ronghua; Liu, Chiyang; Meng, Yang; Jiao, Licheng; Stolkin, Rustam

    2017-09-01

    Nonnegative matrix factorization (NMF) is well known to be an effective tool for dimensionality reduction in problems involving big data. For this reason, it frequently appears in many areas of scientific and engineering literature. This letter proposes a novel semisupervised NMF algorithm for overcoming a variety of problems associated with NMF algorithms, including poor use of prior information, negative impact on manifold structure of the sparse constraint, and inaccurate graph construction. Our proposed algorithm, nonnegative matrix factorization with rank regularization and hard constraint (NMFRC), incorporates label information into data representation as a hard constraint, which makes full use of prior information. NMFRC also measures pairwise similarity according to geodesic distance rather than Euclidean distance. This results in more accurate measurement of pairwise relationships, resulting in more effective manifold information. Furthermore, NMFRC adopts rank constraint instead of norm constraints for regularization to balance the sparseness and smoothness of data. In this way, the new data representation is more representative and has better interpretability. Experiments on real data sets suggest that NMFRC outperforms four other state-of-the-art algorithms in terms of clustering accuracy.
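
    As background to the NMF variants discussed above, here is a minimal sketch of plain NMF with the classical Lee-Seung multiplicative updates. This is not the NMFRC algorithm itself, which additionally uses label constraints, geodesic-distance graphs, and a rank penalty; the test matrix is an invented example:

    ```python
    import numpy as np

    def nmf(X, r, n_iter=200, eps=1e-9, seed=0):
        """Basic NMF via Lee-Seung multiplicative updates, X ≈ W @ H."""
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = rng.random((m, r)) + eps
        H = rng.random((r, n)) + eps
        for _ in range(n_iter):
            # Updates keep W, H nonnegative and monotonically reduce ||X - WH||_F
            H *= (W.T @ X) / (W.T @ W @ H + eps)
            W *= (X @ H.T) / (W @ H @ H.T + eps)
        return W, H

    X = np.abs(np.random.default_rng(1).random((20, 15)))
    W, H = nmf(X, r=5)
    ```

    Semisupervised variants such as NMFRC replace the plain Frobenius objective with one augmented by label and graph terms, but the alternating nonnegative update structure is the same.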

  10. Gift from statistical learning: Visual statistical learning enhances memory for sequence elements and impairs memory for items that disrupt regularities.

    Science.gov (United States)

    Otsuka, Sachio; Saiki, Jun

    2016-02-01

    Prior studies have shown that visual statistical learning (VSL) enhances familiarity (a type of memory) of sequences. How do statistical regularities influence the processing of each triplet element and inserted distractors that disrupt the regularity? Given the increased attention to triplets induced by VSL and the inhibition of unattended triplets, we predicted that VSL would promote memory for each triplet constituent, and degrade memory for inserted stimuli. Across the first two experiments, we found that objects from structured sequences were more likely to be remembered than objects from random sequences, and that letters (Experiment 1) or objects (Experiment 2) inserted into structured sequences were less likely to be remembered than those inserted into random sequences. In the subsequent two experiments, we examined an alternative account for our results, whereby the difference in memory for inserted items between structured and random conditions is due to individuation of items within random sequences. Our findings replicated even when control letters (Experiment 3A) or objects (Experiment 3B) were presented before or after, rather than inserted into, random sequences. Our findings suggest that statistical learning enhances memory for each item in a regular set and impairs memory for items that disrupt the regularity. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Regular algebra and finite machines

    CERN Document Server

    Conway, John Horton

    2012-01-01

    World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, commutative regular alg

  12. 39 CFR 6.1 - Regular meetings, annual meeting.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  13. Bio-inspired Artificial Intelligence: А Generalized Net Model of the Regularization Process in MLP

    Directory of Open Access Journals (Sweden)

    Stanimir Surchev

    2013-10-01

    Full Text Available Many objects and processes inspired by nature have been recreated by scientists. The inspiration to create a Multilayer Neural Network came from the human brain, which possesses a complicated structure that is difficult to recreate because of the many processes involved, each requiring different solving methods. The aim of the following paper is to describe one of the methods that improve the learning process of an Artificial Neural Network. The proposed generalized net method models the regularization process in a Multilayer Neural Network. Regularization is commonly used in the neural network training process to protect the network from overfitting: it adds a term to the objective function that keeps weights and biases at smaller values.

  14. Phase reconstruction by a multilevel iteratively regularized Gauss–Newton method

    International Nuclear Information System (INIS)

    Langemann, Dirk; Tasche, Manfred

    2008-01-01

    In this paper we consider the numerical solution of a phase retrieval problem for a compactly supported, linear spline f : R → C with the Fourier transform f-circumflex, where values of |f| and |f-circumflex| at finitely many equispaced nodes are given. The unknown phases of complex spline coefficients fulfil a well-structured system of nonlinear equations. Thus the phase reconstruction leads to a nonlinear inverse problem, which is solved by a multilevel strategy and iterative Tikhonov regularization. The multilevel strategy concentrates the main effort of the solution of the phase retrieval problem in the coarse, less expensive levels and provides convenient initial guesses at the next finer level. On each level, the corresponding nonlinear system is solved by an iteratively regularized Gauss–Newton method. The multilevel strategy is motivated by convergence results of IRGN. This method is applicable to a wide range of examples as shown in several numerical tests for noiseless and noisy data
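
    The iteratively regularized Gauss-Newton step used on each level, x_{k+1} = x_k + (J_k^T J_k + α_k I)^{-1} (J_k^T (y − F(x_k)) + α_k (x_0 − x_k)), can be sketched on a toy problem. The two-component nonlinear model below is purely illustrative and is not the paper's phase retrieval setup:

    ```python
    import numpy as np

    def irgn(F, J, y, x0, alphas):
        """Iteratively regularized Gauss-Newton with a decreasing
        regularization sequence alphas (Tikhonov term anchored at x0)."""
        x = x0.copy()
        for a in alphas:
            Jk = J(x)
            r = y - F(x)
            A = Jk.T @ Jk + a * np.eye(len(x))
            x = x + np.linalg.solve(A, Jk.T @ r + a * (x0 - x))
        return x

    # Hypothetical toy problem: recover x from F(x) = (x0^2 + x1, x0*x1)
    F = lambda x: np.array([x[0] ** 2 + x[1], x[0] * x[1]])
    J = lambda x: np.array([[2 * x[0], 1.0], [x[1], x[0]]])
    x_true = np.array([2.0, 1.0])
    y = F(x_true)
    x = irgn(F, J, y, x0=np.array([1.5, 0.5]),
             alphas=[0.5 ** k for k in range(25)])
    ```

    As α_k decreases geometrically, the iterates move from the heavily damped regime toward the plain Gauss-Newton solution, mirroring the role of the regularization parameter on each level of the multilevel strategy.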

  15. Application of Littlewood-Paley decomposition to the regularity of Boltzmann type kinetic equations

    International Nuclear Information System (INIS)

    EL Safadi, M.

    2007-03-01

    We study the regularity of kinetic equations of Boltzmann type. We essentially use the Littlewood-Paley method from harmonic analysis, which consists mainly in working with dyadic annuli. We are mainly concerned with the homogeneous case, where the solution f(t,x,v) depends only on the time t and the velocities v, while working with realistic and singular cross-sections (non-cutoff). In the first part, we study the particular case of Maxwellian molecules, under which hypothesis the structure of the Boltzmann operator and its Fourier transform take a simple form. We show global C ∞ regularity. Then we deal with the case of general cross-sections with 'hard potential'. We are interested in the Landau equation, which is the limiting equation of the Boltzmann equation when grazing collisions are taken into account. We prove that any weak solution belongs to the Schwartz space S. We also demonstrate similar regularity for the Boltzmann equation. Let us note that our method applies directly in all dimensions, and the proofs are often simpler than previous ones. Finally, we finish with the Boltzmann-Dirac equation. In particular, we adapt the regularity result obtained in the work of Alexandre, Desvillettes, Wennberg and Villani, using the dissipation rate connected with the Boltzmann-Dirac equation. (author)

  16. Relativistic time-dependent Fermion-mass renormalization using statistical regularization

    Science.gov (United States)

    Kutnink, Timothy; McMurray, Christian; Santrach, Amelia; Hockett, Sarah; Barcus, Scott; Petridis, Athanasios

    2017-09-01

    The time-dependent electromagnetically self-coupled Dirac equation is solved numerically by means of the staggered-leap-frog algorithm with reflecting boundary conditions. The stability region of the method versus the interaction strength and the spatial-grid size over time-step ratio is established. The expectation values of several dynamic operators are then evaluated as functions of time. These include the fermion and electromagnetic energies and the fermion dynamic mass. There is a characteristic, non-exponential, oscillatory dependence leading to asymptotic constants of these expectation values. In the case of the fermion mass this amounts to renormalization. The dependence of the expectation values on the spatial-grid size is evaluated in detail. Furthermore, the contribution of positive and negative energy states to the asymptotic values and the gauge fields is analyzed. Statistical regularization, employing a canonical ensemble whose temperature is the inverse of the grid size, is used to remove the grid-size and momentum-dependence and produce a finite result in the continuum limit.
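
    The staggered leap-frog idea with reflecting boundaries can be illustrated on a much simpler model. The 1D wave equation below is a stand-in assumption for illustration, not the paper's self-coupled Dirac system:

    ```python
    import numpy as np

    # Leapfrog update for the 1D wave equation u_tt = c^2 u_xx:
    # u^{n+1}_i = 2 u^n_i - u^{n-1}_i + C^2 (u^n_{i+1} - 2 u^n_i + u^n_{i-1})
    c, dx, dt = 1.0, 0.01, 0.005          # Courant number C = c*dt/dx = 0.5 < 1 (stable)
    C2 = (c * dt / dx) ** 2
    x = np.arange(0.0, 1.0 + dx, dx)
    u_prev = np.exp(-200 * (x - 0.5) ** 2)  # initial pulse, zero initial velocity
    u = u_prev.copy()
    for _ in range(200):
        u_next = np.zeros_like(u)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + C2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_next[0] = u_next[-1] = 0.0        # reflecting (Dirichlet) boundaries
        u_prev, u = u, u_next
    ```

    The stability region referred to in the abstract is the analogue, for the coupled Dirac-Maxwell system, of the Courant condition C < 1 that keeps this simple scheme bounded.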

  17. Atypical jobs : stepping stones or dead ends? Evidence from the NLSY79.

    OpenAIRE

    Addison, J.T.; Cotti, C.; Surfield, C.J.

    2015-01-01

    Atypical work arrangements have long been criticized as offering more precarious and lower paid work than regular open-ended employment. An important British paper by Booth et al. (Economic Journal, Vol. 112 (2002), No. 480, pp. F189–F213) was among the first to recognize such jobs also functioned as a stepping stone to permanent work. This conclusion proved prescient, receiving increased support in Europe. Here, we provide a broadly parallel analysis for the USA, where research has been less...

  18. Effects of walking speed on the step-by-step control of step width.

    Science.gov (United States)

    Stimpson, Katy H; Heitkamp, Lauren N; Horne, Joscelyn S; Dean, Jesse C

    2018-02-08

    Young, healthy adults walking at typical preferred speeds use step-by-step adjustments of step width to appropriately redirect their center of mass motion and ensure mediolateral stability. However, it is presently unclear whether this control strategy is retained when walking at the slower speeds preferred by many clinical populations. We investigated whether the typical stabilization strategy is influenced by walking speed. Twelve young, neurologically intact participants walked on a treadmill at a range of prescribed speeds (0.2-1.2 m/s). The mediolateral stabilization strategy was quantified as the proportion of step width variance predicted by the mechanical state of the pelvis throughout a step (calculated as the R² magnitude from a multiple linear regression). Our ability to accurately predict the upcoming step width increased over the course of a step. The strength of the relationship between step width and pelvis mechanics at the start of a step was reduced at slower speeds. However, these speed-dependent differences largely disappeared by the end of a step, other than at the slowest walking speed (0.2 m/s). These results suggest that mechanics-dependent adjustments in step width are a consistent component of healthy gait across speeds and contexts. However, slower walking speeds may ease this control by allowing mediolateral repositioning of the swing leg to occur later in a step, thus encouraging slower walking among clinical populations with limited sensorimotor control. Published by Elsevier Ltd.
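
    The stabilization metric described above, the proportion of step width variance predicted by pelvis mechanics via multiple linear regression, can be sketched on synthetic data. The coefficients and noise level below are invented for illustration, not the study's values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical pelvis state at one phase of the step: ML position and velocity
    pelvis = rng.standard_normal((n, 2))
    step_width = 0.5 * pelvis[:, 0] + 0.3 * pelvis[:, 1] + 0.2 * rng.standard_normal(n)

    # Multiple linear regression via least squares; R^2 = fraction of
    # step-width variance explained by the pelvis state
    X = np.column_stack([np.ones(n), pelvis])
    beta, *_ = np.linalg.lstsq(X, step_width, rcond=None)
    resid = step_width - X @ beta
    r2 = 1 - resid.var() / step_width.var()
    ```

    Repeating this fit at successive phases of the step and plotting R² over the step cycle reproduces the kind of curve the study uses to compare walking speeds.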

  19. Linear step drive

    International Nuclear Information System (INIS)

    Haniger, L.; Elger, R.; Kocandrle, L.; Zdebor, J.

    1986-01-01

    A linear step drive is described developed in Czechoslovak-Soviet cooperation and intended for driving WWER-1000 control rods. The functional principle is explained of the motor and the mechanical and electrical parts of the drive, power control, and the indicator of position are described. The motor has latches situated in the reactor at a distance of 3 m from magnetic armatures, it has a low structural height above the reactor cover, which suggests its suitability for seismic localities. Its magnetic circuits use counterpoles; the mechanical shocks at the completion of each step are damped using special design features. The position indicator is of a special design and evaluates motor position within ±1% of total travel. A drive diagram and the flow chart of both the control electronics and the position indicator are presented. (author) 4 figs

  20. Structural and Functional Characterization of an Archaeal Clustered Regularly Interspaced Short Palindromic Repeat (CRISPR)-associated Complex for Antiviral Defense (CASCADE)

    DEFF Research Database (Denmark)

    Lintner, Nathanael G; Kerou, Melina; Brumfield, Susan K

    2011-01-01

    In response to viral infection, many prokaryotes incorporate fragments of virus-derived DNA into loci called clustered regularly interspaced short palindromic repeats (CRISPRs). The loci are then transcribed, and the processed CRISPR transcripts are used to target invading viral DNA and RNA. The Escherichia coli "CRISPR-associated complex for antiviral defense" (CASCADE) is central in targeting invading DNA. Here we report the structural and functional characterization of an archaeal CASCADE (aCASCADE) from Sulfolobus solfataricus. Tagged Csa2 (Cas7) expressed in S. solfataricus co-purifies with Cas5a, Cas6, Csa5, and Cas6-processed CRISPR-RNA (crRNA). Csa2, the dominant protein in aCASCADE, forms a stable complex with Cas5a. Transmission electron microscopy reveals a helical complex of variable length, perhaps due to substoichiometric amounts of other CASCADE components. A recombinant Csa2

  1. Bayesian estimation of regularization and atlas building in diffeomorphic image registration.

    Science.gov (United States)

    Zhang, Miaomiao; Singh, Nikhil; Fletcher, P Thomas

    2013-01-01

    This paper presents a generative Bayesian model for diffeomorphic image registration and atlas building. We develop an atlas estimation procedure that simultaneously estimates the parameters controlling the smoothness of the diffeomorphic transformations. To achieve this, we introduce a Monte Carlo Expectation Maximization algorithm, where the expectation step is approximated via Hamiltonian Monte Carlo sampling on the manifold of diffeomorphisms. An added benefit of this stochastic approach is that it can successfully solve difficult registration problems involving large deformations, where direct geodesic optimization fails. Using synthetic data generated from the forward model with known parameters, we demonstrate the ability of our model to successfully recover the atlas and regularization parameters. We also demonstrate the effectiveness of the proposed method in the atlas estimation problem for 3D brain images.

  2. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  4. Automating InDesign with Regular Expressions

    CERN Document Server

    Kahrel, Peter

    2006-01-01

    If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.

  5. Internal friction behaviors of Ni-Mn-In magnetic shape memory alloy with two-step structural transformation

    Directory of Open Access Journals (Sweden)

    Zhen-ni Zhou

    2017-06-01

    Full Text Available The internal friction (IF) behaviors of dual-phase Ni52Mn32In16 alloy with a two-step structural transformation were investigated by dynamic mechanical analyzer. The IF peak for the martensite transformation (MT) is an asymmetric shoulder rather than the sharp peaks seen for other shape memory alloys. The intermartensitic transformation (IMT) peak has the maximum IF value. As the heating rate increases, the height of the IMT peak increases and its position shifts to higher temperatures. In comparison with the IMT peak, the MT peak is independent of the heating rate. The starting temperature of the IMT peak is strongly dependent on frequency, while that of the MT peak is only weakly dependent. Meanwhile, the heights of both the MT and IMT peaks rapidly decrease with increasing frequency. This work also throws new light on the structural transformation mechanisms.

  6. Two-Step Proximal Gradient Algorithm for Low-Rank Matrix Completion

    Directory of Open Access Journals (Sweden)

    Qiuyu Wang

    2016-06-01

    Full Text Available In this paper, we propose a two-step proximal gradient algorithm to solve nuclear norm regularized least squares for the purpose of recovering a low-rank data matrix from a sampling of its entries. Each iteration generated by the proposed algorithm is a combination of the latest three points, namely, the previous point, the current iterate, and its proximal gradient point. This algorithm preserves the computational simplicity of the classical proximal gradient algorithm, where a singular value decomposition is involved in the proximal operator. Global convergence follows directly from results in the literature. Numerical results are reported to show the efficiency of the algorithm.
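
    The proximal operator at the heart of such algorithms is singular value soft-thresholding for the nuclear norm. The sketch below is the classical single-step proximal gradient iteration, not the paper's two-step variant, and the rank-3 test matrix is an invented example:

    ```python
    import numpy as np

    def svt(Z, tau):
        """Proximal operator of tau * nuclear norm: soft-threshold singular values."""
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def matrix_completion(M, mask, tau=1.0, step=1.0, n_iter=500):
        """Proximal gradient for min 0.5*||mask*(X-M)||_F^2 + tau*||X||_*."""
        X = np.zeros_like(M)
        for _ in range(n_iter):
            grad = mask * (X - M)          # gradient of the data-fit term
            X = svt(X - step * grad, tau * step)
        return X

    rng = np.random.default_rng(0)
    M = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))  # rank-3 truth
    mask = rng.random(M.shape) < 0.6                                  # observed entries
    X = matrix_completion(M, mask, tau=0.5)
    ```

    The two-step scheme in the paper accelerates this basic iteration by combining the previous point, the current iterate, and its proximal gradient point, while each iteration still costs one SVD.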

  7. Optimal behaviour can violate the principle of regularity.

    Science.gov (United States)

    Trimmer, Pete C

    2013-07-22

    Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled as irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory, based on axioms including transitivity, regularity and the independence of irrelevant alternatives, is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.

  8. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

    Dimensional regularization is introduced in configuration space by Fourier transforming the perturbative momentum-space Green functions in D dimensions. For this transformation Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs

  9. Matrix regularization of 4-manifolds

    OpenAIRE

    Trzetrzelewski, M.

    2012-01-01

    We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...

  10. 11S Storage globulin from pumpkin seeds: regularities of proteolysis by papain.

    Science.gov (United States)

    Rudakova, A S; Rudakov, S V; Kakhovskaya, I A; Shutov, A D

    2014-08-01

    Limited proteolysis of the α- and β-chains and deep cleavage of the αβ-subunits by the cooperative (one-by-one) mechanism was observed in the course of papain hydrolysis of cucurbitin, an 11S storage globulin from seeds of the pumpkin Cucurbita maxima. An independent analysis of the kinetics of the limited and cooperative proteolyses revealed that the reaction occurs in two successive steps. In the first step, limited proteolysis consisting of detachments of short terminal peptides from the α- and β-chains was observed. The cooperative proteolysis, which occurs as a pseudo-first order reaction, started at the second step. Therefore, the limited proteolysis at the first step plays a regulatory role, impacting the rate of deep degradation of cucurbitin molecules by the cooperative mechanism. Structural alterations of cucurbitin induced by limited proteolysis are suggested to generate its susceptibility to cooperative proteolysis. These alterations are tentatively discussed on the basis of the tertiary structure of the cucurbitin subunit pdb|2EVX in comparison with previously obtained data on features of degradation of soybean 11S globulin hydrolyzed by papain.

  11. Regular Breakfast and Blood Lead Levels among Preschool Children

    Directory of Open Access Journals (Sweden)

    Needleman Herbert

    2011-04-01

    Full Text Available Abstract Background Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurements of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb ≥ 10 μg/dL) was examined using logistic regression modeling. Results Median B-Pb among children who ate breakfast regularly and those who did not eat breakfast regularly were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater blood zinc levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb compared with children who did not eat breakfast regularly, p = 0.02). Conclusion The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.

  12. On the equivalence of different regularization methods

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1985-01-01

    The R-circumflex operation preceded by the regularization procedure is discussed. Some arguments are given according to which the results may depend on the method of regularization introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)

  13. Microsoft® SQL Server® 2008 Step by Step

    CERN Document Server

    Hotek, Mike

    2009-01-01

    Teach yourself SQL Server 2008-one step at a time. Get the practical guidance you need to build database solutions that solve real-world business problems. Learn to integrate SQL Server data in your applications, write queries, develop reports, and employ powerful business intelligence systems.Discover how to:Install and work with core components and toolsCreate tables and index structuresManipulate and retrieve dataSecure, manage, back up, and recover databasesApply tuning plus optimization techniques to generate high-performing database applicationsOptimize availability through clustering, d

  14. Accreting fluids onto regular black holes via Hamiltonian approach

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)

    2017-08-15

    We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process when different test fluids fall onto these regular black holes. The accreting fluid is classified through the equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are checked for these regular black holes. It is noted that the three-velocity depends on the critical points and on the equation-of-state parameter on the phase space. (orig.)

  15. Linearized Alternating Direction Method of Multipliers for Constrained Nonconvex Regularized Optimization

    Science.gov (United States)

    2016-11-22

    structure of the graph, we replace the ℓ1-norm by the nonconvex capped-ℓ1 norm and obtain the generalized capped-ℓ1 regularized logistic regression. Nonconvex norms provide better approximations of the ℓ0-norm, both theoretically and computationally, than the ℓ1-norm, for example in compressive sensing (Xiao et al., 2011).

  16. Numerical analysis of control of hard roof's stepped cantilever structure for longwall mining with sublevel caving

    Energy Technology Data Exchange (ETDEWEB)

    Wei, J.; Jin, Z.; Tang, Y. [Taiyuan University of Technology, Taiyuan (China)

    2002-12-01

    Based on field monitoring and simulation tests of strata movement, the hard roof's stepped cantilever structure and its mechanical model are presented. The finite element method is used to analyse the effect of hard coal cracking under the abutment pressure of the hard roof, so that the rational pre-treatment span of the hard roof is determined and the rational working resistance of the support is selected. According to the mechanical model, the transient balance conditions of the hard roof's stepped cantilever structure are studied, and the support-rock relation is explained theoretically. As a result, a basic theory and technique of surrounding-rock control for fully mechanised longwall mining with sub-level caving is formed for hard roof and hard coal conditions. The hard roof is effectively controlled not only to protect the working face but also to promote the caving of the hard top-coal and increase the recovery rate of coal, thus realising safe, highly efficient and productive fully mechanised longwall mining with sub-level caving in extra-thick seams. Finally, the successful practice of hard roof control in the 8914 and 8911 working faces is presented. 10 refs., 5 figs., 4 tabs.

  17. Free Modal Algebras Revisited: The Step-by-Step Method

    NARCIS (Netherlands)

    Bezhanishvili, N.; Ghilardi, Silvio; Jibladze, Mamuka

    2012-01-01

    We review the step-by-step method of constructing finitely generated free modal algebras. First we discuss the global step-by-step method, which works well for rank one modal logics. Next we refine the global step-by-step method to obtain the local step-by-step method, which is applicable beyond

  18. Quadratic contributions of softly broken supersymmetry in the light of loop regularization

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Dong [Chinese Academy of Sciences, Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Beijing (China); University of Chinese Academy of Sciences, School of Physical Sciences, Beijing (China); Wu, Yue-Liang [Chinese Academy of Sciences, Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Beijing (China); International Centre for Theoretical Physics Asia-Pacific (ICTP-AP), Beijing (China); University of Chinese Academy of Sciences, School of Physical Sciences, Beijing (China)

    2017-09-15

    Loop regularization (LORE) is a novel regularization scheme in modern quantum field theories. It makes no change to the spacetime structure and respects both gauge symmetries and supersymmetry. As a result, LORE should be useful in calculating loop corrections in supersymmetry phenomenology. To further demonstrate its power, in this article we revisit in the light of LORE the old issue of the absence of quadratic contributions (quadratic divergences) in softly broken supersymmetric field theories. It is shown explicitly by Feynman diagrammatic calculations that up to two loops the Wess-Zumino model with soft supersymmetry breaking terms (WZ' model), one of the simplest models with the explicit supersymmetry breaking, is free of quadratic contributions. All the quadratic contributions cancel with each other perfectly, which is consistent with results dictated by the supergraph techniques. (orig.)

  19. SYSTEMATIZATION OF THE BASIC STEPS OF THE STEP-AEROBICS

    Directory of Open Access Journals (Sweden)

    Darinka Korovljev

    2011-03-01

Full Text Available Following the development of the powerful sports industry, many new opportunities have appeared for creating new exercise programmes that use specific equipment. One such programme is step-aerobics. Step-aerobics can be defined as a type of aerobics consisting of the basic aerobic steps (basic steps) applied in exercising on a stepper (step bench) whose height can be regulated. Step-aerobics itself can be divided into several groups, depending on the type of music, the working methods and the prior knowledge of the attendants. In this work, the basic steps of step-aerobics were systematized on the basis of the following criteria: the origin of the step, the number of leg motions in stepping, and the body support at the end of the step. This systematization provides a concrete overview of the existing basic steps and thus makes the creation of a step-aerobics lesson easier

  20. Factors associated with regular consumption of obesogenic foods: National School-Based Student Health Survey, 2012

    Directory of Open Access Journals (Sweden)

    Giovana LONGO-SILVA

    Full Text Available ABSTRACT Objective: To investigate the frequency of consumption of obesogenic foods among adolescents and its association with sociodemographic, family, behavioral, and environmental variables. Methods: Secondary data from the National School-Based Student Health Survey were analyzed from a representative sample of 9th grade Brazilian students (high school). A self-administered questionnaire, organized into thematic blocks, was used. The dependent variables were the consumption of deep-fried snacks, packaged snacks, sugar candies, and soft drinks; consumption frequency for the seven days preceding the study was analyzed. Bivariate analysis was carried out to determine the empirical relationship between the regular consumption of these foods (≥3 days/week) and sociodemographic, family, behavioral, and school structural variables. A p-value <0.20 was used as the criterion for initial inclusion in the multivariate logistic analysis, which was conducted using the "Enter" method; the results were expressed as adjusted odds ratios with 95% confidence intervals, with p<0.05 indicating statistical significance. Results: Regular food consumption ranged from 27.17% to 65.96%. The variables female gender, mobile phone ownership, Internet access at home, tobacco use, alcohol consumption, regular physical activity, eating while watching television or studying, watching television for at least 2 hours a day, and not wanting to lose weight were associated in the final logistic models of all foods analyzed. Conclusion: Fried snacks, packaged snacks, sugar candies, and soft drinks are regularly consumed by adolescents, and such consumption was associated with sociodemographic, family, behavioral, and school structural variables.
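
    The bivariate screening described above works on 2×2 tables of exposure versus regular consumption. A minimal sketch (with hypothetical counts, not the survey's data) of computing an odds ratio and its Wald 95% confidence interval:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Wald 95% CI from a 2x2 table:
        a = exposed with outcome, b = exposed without,
        c = unexposed with outcome, d = unexposed without."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of ln(OR)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    # Hypothetical counts (not the survey's data): 30/100 regular consumers
    # among exposed adolescents versus 20/100 among unexposed.
    or_, lo, hi = odds_ratio_ci(30, 70, 20, 80)
    ```

    When the interval contains 1, the association is not significant at p<0.05; under the study's looser p<0.20 screen, such a variable could still enter the multivariate logistic model.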

  1. Investigation of turbulent boundary layer over forward-facing step via direct numerical simulation

    International Nuclear Information System (INIS)

    Hattori, Hirofumi; Nagano, Yasutaka

    2010-01-01

    This paper presents observations and investigations of the detailed turbulent structure of a boundary layer over a forward-facing step. The present DNSs are conducted under conditions with three Reynolds numbers based on step height, or three Reynolds numbers based on momentum thickness so as to investigate the effects of step height and inlet boundary layer thickness. DNS results show the quantitative turbulent statistics and structures of boundary layers over a forward-facing step, where pronounced counter-gradient diffusion phenomena (CDP) are especially observed on the step near the wall. Also, a quadrant analysis is conducted in which the results indicate in detail the turbulence motion around the step.
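
    The quadrant analysis mentioned above decomposes the Reynolds shear stress by the signs of the velocity fluctuations (ejections, sweeps, interactions). A minimal sketch on synthetic anticorrelated data, not the DNS fields themselves:

    ```python
    import numpy as np

    def quadrant_contributions(u, v):
        """Fractional contribution of each quadrant to the mean Reynolds stress <u'v'>."""
        up, vp = u - u.mean(), v - v.mean()
        uv = up * vp
        quads = {
            "Q1 outward interaction": (up > 0) & (vp > 0),
            "Q2 ejection":            (up < 0) & (vp > 0),
            "Q3 inward interaction":  (up < 0) & (vp < 0),
            "Q4 sweep":               (up > 0) & (vp < 0),
        }
        # Each quadrant's share of the total stress; the shares sum to 1.
        return {name: uv[mask].sum() / uv.sum() for name, mask in quads.items()}

    rng = np.random.default_rng(0)
    u = rng.standard_normal(10000)
    v = -0.5 * u + 0.5 * rng.standard_normal(10000)  # anticorrelated, as in shear flow
    shares = quadrant_contributions(u, v)
    ```

    For a boundary layer with negative <u'v'>, ejections (Q2) and sweeps (Q4) carry positive shares while the interaction quadrants carry negative ones.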

  2. MRI reconstruction with joint global regularization and transform learning.

    Science.gov (United States)

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach improves MRI reconstruction performance, when compared to algorithms which use either patchwise transform learning or global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Steps on rutile TiO2(110): Active sites for water and methanol dissociation

    DEFF Research Database (Denmark)

    Martinez, Umberto; Vilhelmsen, Lasse; Kristoffersen, Henrik Høgh

    2011-01-01

    for each step edge, more stable, reconstructed structures were found for the step edge, while the bulk truncated structures were recovered for the step edge. We demonstrate how oxygen vacancies along these defects have lower formation energies than on flat terraces and how water and methanol...

  4. Diffusion of charged particles in strong large-scale random and regular magnetic fields

    International Nuclear Information System (INIS)

    Mel'nikov, Yu.P.

    2000-01-01

    The nonlinear collision integral for the Green's function averaged over a random magnetic field is transformed using an iteration procedure taking account of the strong random scattering of particles on the correlation length of the random magnetic field. Under this transformation the regular magnetic field is assumed to be uniform at distances of the order of the correlation length. The single-particle Green's functions of the scattered particles in the presence of a regular magnetic field are investigated. The transport coefficients are calculated taking account of the broadening of the cyclotron and Cherenkov resonances as a result of strong random scattering. The mean-free path lengths parallel and perpendicular to the regular magnetic field are found for a power-law spectrum of the random field. The analytical results obtained are compared with the experimental data on the transport ranges of solar and galactic cosmic rays in the interplanetary magnetic field. As a result, the conditions for the propagation of cosmic rays in the interplanetary space and a more accurate idea of the structure of the interplanetary magnetic field are determined

  5. Laplacian embedded regression for scalable manifold regularization.

    Science.gov (United States)

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ∈-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real
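
    The transformed kernel described above augments the original kernel with a graph-based term that encodes the manifold structure. A minimal sketch of that idea (RBF base kernel plus a Laplacian-derived graph kernel; the exact combination used in LapESVR/LapERLS is more involved, so treat this as illustrative):

    ```python
    import numpy as np

    def rbf_kernel(X, gamma=1.0):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def graph_laplacian(X, k=3):
        """Unnormalized Laplacian of a symmetrized k-nearest-neighbour graph."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        n = len(X)
        W = np.zeros((n, n))
        for i in range(n):
            W[i, np.argsort(d2[i])[1:k + 1]] = 1.0   # skip self at position 0
        W = np.maximum(W, W.T)                        # symmetrize
        return np.diag(W.sum(axis=1)) - W

    rng = np.random.default_rng(1)
    X = rng.standard_normal((20, 2))
    # "Transformed" kernel: base kernel plus a graph kernel built from the Laplacian.
    K = rbf_kernel(X) + 0.1 * np.linalg.pinv(graph_laplacian(X))
    ```

    Because both summands are symmetric positive semidefinite, the transformed kernel is a valid kernel, and the two parts can be handled separately, which is the computational benefit the abstract points out.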

  6. Regularities, Natural Patterns and Laws of Nature

    Directory of Open Access Journals (Sweden)

    Stathis Psillos

    2014-02-01

    Full Text Available  The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology.  Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.

  7. Facile fabrication of nano-structured silica hybrid film with superhydrophobicity by one-step VAFS approach

    Science.gov (United States)

    Jia, Yi; Yue, Renliang; Liu, Gang; Yang, Jie; Ni, Yong; Wu, Xiaofeng; Chen, Yunfa

    2013-01-01

    Here we report a novel one-step vapor-fed aerosol flame synthesis (VAFS) method to produce silica hybrid films with superhydrophobicity on normal glass and other engineering material substrates, using hexamethyldisiloxane (HMDSO) as the precursor. The deposited nano-structured silica films exhibit excellent superhydrophobicity, with a contact angle larger than 150° and a sliding angle below 5°, without any surface modification or other post-treatments. SEM images showed that the flame-made SiO2 nanoparticles formed a dual-scale surface roughness on the substrates. FTIR and XPS confirmed that organic fragments formed in situ on the particle surface, as species like (CH3)xSiO2-x/2 (x = 1, 2, 3), progressively lowered the surface energy of the fabricated films. The combination of dual-scale roughness and lowered surface energy thus produced superhydrophobic films. An IR camera was used to monitor the real-time flame temperature. The inert dilution gas inflow was found to play a critical role in attaining superhydrophobicity, owing to its cooling and anti-oxidation effects. This method is facile and scalable to diverse substrates, without requiring complex equipment or multiple processing steps, and may contribute to the industrial fabrication of superhydrophobic films.

  8. Novel Ordered Stepped-Wedge Cluster Trial Designs for Detecting Ebola Vaccine Efficacy Using a Spatially Structured Mathematical Model.

    Directory of Open Access Journals (Sweden)

    Ibrahim Diakite

    2016-08-01

    Full Text Available During the 2014 Ebola virus disease (EVD) outbreak, policy-makers were confronted with difficult decisions on how best to test the efficacy of EVD vaccines. On one hand, many were reluctant to withhold a vaccine that might prevent a fatal disease from study participants randomized to a control arm. On the other, regulatory bodies called for rigorous placebo-controlled trials to permit direct measurement of vaccine efficacy prior to approval of the products. A stepped-wedge cluster study (SWCT) was proposed as an alternative to a more traditional randomized controlled vaccine trial to address these concerns. Here, we propose novel "ordered stepped-wedge cluster trial" (OSWCT) designs to further mitigate tradeoffs between ethical concerns, logistics, and statistical rigor. We constructed a spatially structured mathematical model of the EVD outbreak in Sierra Leone. We used the output of this model to simulate and compare a series of stepped-wedge cluster vaccine studies. Our model reproduced the observed order of first case occurrence within districts of Sierra Leone. Depending on the infection risk within the trial population and the trial start dates, the statistical power to detect a vaccine efficacy of 90% varied from 14% to 32% for a standard SWCT, and from 67% to 91% for OSWCTs, at an alpha error of 5%. The model's projection of first case occurrence was robust to changes in disease natural history parameters. Ordering clusters in a stepped-wedge trial based on each cluster's underlying risk of infection, as predicted by a spatial model, can increase the statistical power of a SWCT. In the event of another hemorrhagic fever outbreak, implementation of our proposed OSWCT designs could improve statistical power when a stepped-wedge study is desirable based on either ethical concerns or logistical constraints.

  9. Regularization of the Boundary-Saddle-Node Bifurcation

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2018-01-01

    Full Text Available In this paper we treat a particular class of planar Filippov systems which consist of two smooth systems that are separated by a discontinuity boundary. In such systems one vector field undergoes a saddle-node bifurcation while the other vector field is transversal to the boundary. The boundary-saddle-node (BSN bifurcation occurs at a critical value when the saddle-node point is located on the discontinuity boundary. We derive a local topological normal form for the BSN bifurcation and study its local dynamics by applying the classical Filippov’s convex method and a novel regularization approach. In fact, by the regularization approach a given Filippov system is approximated by a piecewise-smooth continuous system. Moreover, the regularization process produces a singular perturbation problem where the original discontinuous set becomes a center manifold. Thus, the regularization enables us to make use of the established theories for continuous systems and slow-fast systems to study the local behavior around the BSN bifurcation.
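
    The regularization approach described above replaces the discontinuous switch between the two vector fields by a smooth transition across a boundary layer of width of order ε. A minimal sketch with hypothetical constant fields and the boundary h(x, y) = y (the transition function tanh is one common choice, not necessarily the paper's):

    ```python
    import numpy as np

    def f_plus(x, y):
        """Vector field used where h(x, y) > 0 (hypothetical example)."""
        return np.array([1.0, -1.0])

    def f_minus(x, y):
        """Vector field used where h(x, y) < 0 (hypothetical example)."""
        return np.array([1.0, 1.0])

    def regularized_field(x, y, eps=0.01):
        """Blend both fields with a smooth transition across a layer of width ~eps."""
        h = y                           # discontinuity boundary: h(x, y) = y = 0
        phi = np.tanh(h / eps)          # phi -> +/-1 away from the boundary
        return 0.5 * (1 + phi) * f_plus(x, y) + 0.5 * (1 - phi) * f_minus(x, y)
    ```

    Away from the boundary the regularized field coincides with the original piecewise fields; on the boundary it returns their average, mirroring Filippov's convex combination.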

  10. Improvements in GRACE Gravity Fields Using Regularization

    Science.gov (United States)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. 
The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or
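
    Regularized least-squares estimation of the kind discussed above damps the poorly constrained directions of the inverse problem that produce the "stripes". A generic Tikhonov sketch (synthetic data; not the actual GRACE estimation or the L-ribbon parameter choice):

    ```python
    import numpy as np

    def tikhonov(A, y, alpha):
        """Solve min ||A x - y||^2 + alpha ||x||^2 via the normal equations."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

    rng = np.random.default_rng(2)
    A = rng.standard_normal((30, 10))
    A[:, -1] *= 1e-4                    # one nearly unconstrained direction
    x_true = np.ones(10)
    y = A @ x_true + 0.01 * rng.standard_normal(30)
    x_reg = tikhonov(A, y, alpha=1e-3)  # damped solution
    x_ls = tikhonov(A, y, alpha=0.0)    # unconstrained least squares
    ```

    Increasing alpha shrinks the solution norm while growing the data residual; choosing alpha (e.g. by the L-ribbon method mentioned above) trades these off so that the signal is not attenuated.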

  11. Unusual poles of the {zeta}-functions for some regular singular differential operators

    Energy Technology Data Exchange (ETDEWEB)

    Falomir, H [IFLP, Departamento de Fisica-Facultad de Ciencias Exactas, UNLP, CC 67 (1900) La Plata (Argentina); Muschietti, M A [Departamento de Matematica-Facultad de Ciencias Exactas, UNLP, CC 172 (1900) La Plata (Argentina); Pisani, P A G [IFLP, Departamento de Fisica-Facultad de Ciencias Exactas, UNLP, CC 67 (1900) La Plata (Argentina); Seeley, R [University of Massachusetts at Boston, Boston, MA 02125 (United States)

    2003-10-03

    We consider the resolvent of a system of first-order differential operators with a regular singularity, admitting a family of self-adjoint extensions. We find that the asymptotic expansion for the resolvent in the general case presents powers of {lambda} which depend on the singularity, and can take even irrational values. The consequences for the pole structure of the corresponding {zeta}- and {eta}-functions are also discussed.

  12. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

    Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of "overriding" the source NFA (an NFA not defined with the subset construction rules) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and the complement) is shown.
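
    For intersection specifically, the classical product construction on DFAs gives the same language-level result as the paper's NFA-overriding approach (this sketch is the textbook construction, not the paper's algorithm):

    ```python
    from itertools import product

    def intersect_dfas(d1, d2, alphabet):
        """Product-construction DFA accepting the intersection of two DFA languages.
        A DFA here is a triple (start, transitions, accepting) where transitions
        maps (state, symbol) -> state and is total over the alphabet."""
        s1, t1, f1 = d1
        s2, t2, f2 = d2
        states1 = {s1} | {q for q, _ in t1} | set(t1.values())
        states2 = {s2} | {q for q, _ in t2} | set(t2.values())
        trans = {}
        for q1, q2 in product(states1, states2):
            for a in alphabet:
                trans[((q1, q2), a)] = (t1[(q1, a)], t2[(q2, a)])
        accept = {(q1, q2) for q1 in f1 for q2 in f2}
        return (s1, s2), trans, accept

    def accepts(dfa, word):
        q, trans, accept = dfa
        for ch in word:
            q = trans[(q, ch)]
        return q in accept

    # Hypothetical example over {a, b}: even number of a's AND ends with b.
    even_a = (0, {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}, {0})
    ends_b = ("s", {("s", "a"): "s", ("s", "b"): "t",
                    ("t", "a"): "s", ("t", "b"): "t"}, {"t"})
    both = intersect_dfas(even_a, ends_b, "ab")
    ```

    Subtraction and complement work the same way after flipping accepting states: complement a (complete) DFA by accepting exactly the non-accepting states; L1 minus L2 is the product with accepting pairs (accepting, non-accepting).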

  13. Regularities of intermediate adsorption complex relaxation

    International Nuclear Information System (INIS)

    Manukova, L.A.

    1982-01-01

    The experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N2 system at 77 K are given. The molecular beam method has been used in the investigation. Analytical expressions are obtained for the regularities of change, during relaxation, of the full and specific rates of transition from the intermediate state into the "non-reversible" state, of desorption into the gas phase, and of accumulation of particles in the intermediate state

  14. The perception of regularity in an isochronous stimulus in zebra finches (Taeniopygia guttata) and humans.

    Science.gov (United States)

    van der Aa, Jeroen; Honing, Henkjan; ten Cate, Carel

    2015-06-01

    Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no go paradigm we show that zebra finches are able to distinguish between an isochronous and an irregular stimulus. However, when the tempo of the isochronous stimulus is changed, it is no longer treated as similar to the training stimulus. Training with three isochronous and three irregular stimuli did not result in improvement of the generalization. In contrast, humans, exposed to the same stimuli, readily generalized across tempo changes. Our results suggest that zebra finches distinguish the different stimuli by learning specific local temporal features of each individual stimulus rather than attending to the global structure of the stimuli, i.e., to the temporal regularity. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. A nearest-neighbour discretisation of the regularized stokeslet boundary integral equation

    Science.gov (United States)

    Smith, David J.

    2018-04-01

    The method of regularized stokeslets is extensively used in biological fluid dynamics due to its conceptual simplicity and meshlessness. This simplicity carries a degree of cost in computational expense and accuracy because the number of degrees of freedom used to discretise the unknown surface traction is generally significantly higher than that required by boundary element methods. We describe a meshless method based on nearest-neighbour interpolation that significantly reduces the number of degrees of freedom required to discretise the unknown traction, increasing the range of problems that can be practically solved, without excessively complicating the task of the modeller. The nearest-neighbour technique is tested against the classical problem of rigid body motion of a sphere immersed in very viscous fluid, then applied to the more complex biophysical problem of calculating the rotational diffusion timescales of a macromolecular structure modelled by three closely-spaced non-slender rods. A heuristic for finding the required density of force and quadrature points by numerical refinement is suggested. Matlab/GNU Octave code for the key steps of the algorithm is provided, which predominantly use basic linear algebra operations, with a full implementation being provided on github. Compared with the standard Nyström discretisation, more accurate and substantially more efficient results can be obtained by de-refining the force discretisation relative to the quadrature discretisation: a cost reduction of over 10 times with improved accuracy is observed. This improvement comes at minimal additional technical complexity. Future avenues to develop the algorithm are then discussed.
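
    The velocity kernel at the heart of the method replaces the singular stokeslet by a smooth, bounded one. A sketch of the commonly quoted Cortez-type 3D blob kernel (an assumption on the specific blob; the paper's choice may differ):

    ```python
    import numpy as np

    def reg_stokeslet_velocity(x, x0, f, eps, mu=1.0):
        """Velocity at x induced by a regularized point force f at x0 in 3D Stokes flow.
        Cortez-type kernel (assumed form):
          S_ij = delta_ij (r^2 + 2 eps^2) / (r^2 + eps^2)^{3/2}
               + r_i r_j / (r^2 + eps^2)^{3/2}
        """
        r = np.asarray(x, float) - np.asarray(x0, float)
        r2 = r @ r
        denom = (r2 + eps**2) ** 1.5
        S = np.eye(3) * (r2 + 2 * eps**2) / denom + np.outer(r, r) / denom
        return (S @ np.asarray(f, float)) / (8 * np.pi * mu)
    ```

    Far from the force and for small eps this reduces to the classical singular stokeslet; at r = 0 it remains finite, which is what makes collocation at the force points possible.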

  16. l0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction.

    Science.gov (United States)

    Zhang, Lingli; Zeng, Li; Guo, Yumeng

    2018-03-15

    Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which may lead to a limited-angle reconstruction problem, and image quality then usually suffers from slope artifacts. The objective of this study is first to investigate the distorted regions of the reconstructed images that exhibit slope artifacts, and then to present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image, aiming to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue sparsity. The new method comprises four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified non-local means (PNLM); (3) utilize l0 regularization to suppress the slope artifacts and pursue the sparsity of the wavelet coefficients of the transformed image by iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical experiments showed that the presented l0W-PNLM was superior in suppressing the slope artifacts while preserving the edges of some features, as compared to commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize distorted edges in the reconstructed images. Quantitative assessments also showed that the new method achieved the highest image quality compared to the existing algorithms. 
This study demonstrated that the presented l0W-PNLM yielded higher image quality due to a number of unique characteristics, which include that (1) it utilizes
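
    Step (3) above relies on iterative hard thresholding for the l0 penalty. A generic sketch of IHT on a synthetic sparse-recovery problem (not the CT pipeline or the wavelet framelet transform):

    ```python
    import numpy as np

    def hard_threshold(x, k):
        """Keep the k largest-magnitude entries of x, zero out the rest."""
        out = np.zeros_like(x)
        keep = np.argsort(np.abs(x))[-k:]
        out[keep] = x[keep]
        return out

    def iht(A, y, k, n_iter=100, step=1.0):
        """Iterative hard thresholding: x <- H_k(x + step * A^T (y - A x))."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = hard_threshold(x + step * A.T @ (y - A @ x), k)
        return x

    rng = np.random.default_rng(3)
    A, _ = np.linalg.qr(rng.standard_normal((50, 50)))  # orthonormal A keeps the demo stable
    x_true = np.zeros(50)
    x_true[[4, 17, 31]] = [2.0, -1.5, 3.0]
    x_hat = iht(A, A @ x_true, k=3)
    ```

    With a well-conditioned operator and exact k-sparse data, the iteration recovers the signal; in the CT setting the same projected-gradient structure is applied to wavelet coefficients.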

  17. Comparison between time-step-integration and probabilistic methods in seismic analysis of a linear structure

    International Nuclear Information System (INIS)

    Schneeberger, B.; Breuleux, R.

    1977-01-01

    Assuming that earthquake ground motion is a stationary time function, the seismic analysis of a linear structure can be done by probabilistic methods using the 'power spectral density function' (PSD), instead of applying the more traditional time-step integration using earthquake time histories (TH). A given structure was analysed by both the PSD and TH methods, computing and comparing 'floor response spectra'. The analysis using TH was performed for two different TH and different frequency intervals for the 'floor-response-spectra'. The analysis using PSD first produced PSD functions of the responses of the floors, which were then converted into 'floor-response-spectra'. Plots of the resulting 'floor-response-spectra' show: (1) The agreement of TH and PSD results is quite close. (2) The curves produced by PSD are much smoother than those produced by TH and mostly form an envelope of the latter. (3) The curves produced by TH are quite jagged, with the location and magnitude of the peaks depending on the choice of frequencies at which the 'floor-response-spectra' were evaluated and on the choice of TH. (Auth.)

  18. 20 CFR 226.35 - Deductions from regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Deductions from regular annuity rate. 226.35... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...

  19. Manifold regularized discriminative nonnegative matrix factorization with fast gradient descent.

    Science.gov (United States)

    Guan, Naiyang; Tao, Dacheng; Luo, Zhigang; Yuan, Bo

    2011-07-01

    Nonnegative matrix factorization (NMF) has become a popular data-representation method and has been widely used in image processing and pattern-recognition problems. This is because the learned bases can be interpreted as a natural parts-based representation of data, and this interpretation is consistent with the psychological intuition of combining parts to form a whole. For practical classification tasks, however, NMF ignores both the local geometry of data and the discriminative information of different classes. In addition, existing research results show that the learned basis is not necessarily parts-based, because there is neither an explicit nor an implicit constraint to ensure that the representation is parts-based. In this paper, we introduce manifold regularization and margin maximization into NMF and obtain the manifold regularized discriminative NMF (MD-NMF) to overcome the aforementioned problems. The multiplicative update rule (MUR) can be applied to optimizing MD-NMF, but it converges slowly. In this paper, we propose a fast gradient descent (FGD) to optimize MD-NMF. FGD contains a Newton method that searches for the optimal step length, and thus FGD converges much faster than MUR. In addition, FGD includes MUR as a special case and can be applied to optimizing NMF and its variants. For a problem with 165 samples in R^1600, FGD converges in 28 s, while MUR requires 282 s. We also apply FGD in a variant of MD-NMF, and experimental results confirm its efficiency. Experimental results on several face image datasets suggest the effectiveness of MD-NMF.
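
    The multiplicative update rule (MUR) that FGD accelerates is, for standard NMF with Frobenius loss, a pair of elementwise updates. A minimal sketch of plain NMF (without the manifold or discriminative terms of MD-NMF):

    ```python
    import numpy as np

    def nmf_mur(V, r, n_iter=200, eps=1e-9, seed=0):
        """Lee-Seung multiplicative update rules for V ~ W H, W, H >= 0 (Frobenius loss)."""
        rng = np.random.default_rng(seed)
        n, m = V.shape
        W = rng.random((n, r)) + 0.1
        H = rng.random((r, m)) + 0.1
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
            W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
        return W, H

    rng = np.random.default_rng(1)
    V = rng.random((20, 15))
    W, H = nmf_mur(V, r=5)
    rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
    ```

    Because the updates are multiplicative, nonnegative factors stay nonnegative; the slow, ratio-driven steps are also why a line-search method such as FGD can be much faster.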

  20. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    Directory of Open Access Journals (Sweden)

    Yunyi Li

    2017-12-01

    Full Text Available Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than the L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, combined with iterative reweighted algorithms. To further exploit the sparse structure of signals and images, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3}, based on an iterative Lp thresholding algorithm, and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct some applications to sparse image recovery and obtain good results by comparison with related work.

  1. Convergence rates in constrained Tikhonov regularization: equivalence of projected source conditions and variational inequalities

    International Nuclear Information System (INIS)

    Flemming, Jens; Hofmann, Bernd

    2011-01-01

    In this paper, we enlighten the role of variational inequalities for obtaining convergence rates in Tikhonov regularization of nonlinear ill-posed problems with convex penalty functionals under convexity constraints in Banach spaces. Variational inequalities are able to cover solution smoothness and the structure of nonlinearity in a uniform manner, not only for unconstrained but, as we indicate, also for constrained Tikhonov regularization. In this context, we extend the concept of projected source conditions already known in Hilbert spaces to Banach spaces, and we show in the main theorem that such projected source conditions are to some extent equivalent to certain variational inequalities. The derived variational inequalities immediately yield convergence rates measured by Bregman distances

  2. Regularization theory for ill-posed problems selected topics

    CERN Document Server

    Lu, Shuai

    2013-01-01

    This monograph is a valuable contribution to the highly topical and extremely productive field of regularisation methods for inverse and ill-posed problems. The author is an internationally outstanding and accepted mathematician in this field. In his book he offers a well-balanced mixture of basic and innovative aspects. He demonstrates new, differentiated viewpoints, and important examples for applications. The book demonstrates the current developments in the field of regularization theory, such as multiparameter regularization and regularization in learning theory. The book is written for graduate and PhD students

  3. 20 CFR 226.34 - Divorced spouse regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Divorced spouse regular annuity rate. 226.34... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing a Spouse or Divorced Spouse Annuity § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...

  4. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    Science.gov (United States)

    Seligmann, Hervé

    2016-01-01

    Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcriptions detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1,000). Among 186 peptides with > 8 residues for each of the regular and swinger parts, the regular parts of eleven chimeric peptides correspond to six among the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. The present results strengthen the hypothesis that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.

  5. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    International Nuclear Information System (INIS)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A.; Yang, Deshan; Tan, Jun

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
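For readers unfamiliar with the baseline being accelerated, a minimal FISTA for an l1-regularized least-squares toy problem can be sketched as follows (plain gradient step and soft-thresholding; this is the generic algorithm, not the OS-SART-accelerated variant or the weighted proximal problems developed in the paper):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (the "shrinkage" step).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The momentum sequence t is what lifts the convergence rate from O(1/k) for plain iterative shrinkage to O(1/k^2); the paper's acceleration replaces the gradient step with an OS-SART subproblem on top of this scheme.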

  6. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    Science.gov (United States)

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated

  7. How Different Information Sources Interact in the Interpretation of Interleaved Discourse: The Case of Two-Step Enumerative Structures

    Directory of Open Access Journals (Sweden)

    Marianne Vergez-Couret

    2012-12-01

    Full Text Available Little attention has been devoted to interleaved discourse structures despite the challenges they offer to discourse coherence studies. Interleaved structures occur frequently when several dimensions of discourse coherence (semantic, intentional, textual, etc.) are considered simultaneously on relatively large texts. Two-step enumerative structures, a kind of interleaved structure, are enumerative structures in which the items are further developed in an enumerative fashion. We propose in this paper a treatment of the semantic and textual dimensions of such structures. We also propose some generalizations for the treatment of interleaved structures.

  8. Fast and compact regular expression matching

    DEFF Research Database (Denmark)

    Bille, Philip; Farach-Colton, Martin

    2008-01-01

    We study four problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem, either by using an improved tabulation technique on an existing algorithm or by combining known algorithms in a new way.
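Of the four problems, subsequence indexing is the easiest to illustrate. The classic next-occurrence table below is a textbook baseline (not the paper's word-RAM construction): after O(n·|Σ|) preprocessing, each query is answered in O(|pattern|) table lookups.

```python
def build_subsequence_index(text, alphabet):
    """Next-occurrence table: nxt[i][c] = smallest j >= i with text[j] == c."""
    n = len(text)
    INF = n  # sentinel: character does not occur at or after position i
    nxt = [dict.fromkeys(alphabet, INF) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        nxt[i] = dict(nxt[i + 1])
        nxt[i][text[i]] = i
    return nxt

def is_subsequence(nxt, pattern):
    """Answer a subsequence query in O(len(pattern)) table lookups."""
    n = len(nxt) - 1
    i = 0
    for c in pattern:
        j = nxt[i].get(c, n)
        if j >= n:          # character not found at or after i
            return False
        i = j + 1
    return True
```

The tabulation ideas in the paper reduce the space of structures like this and remove the alphabet-size dependency; the sketch above makes no such optimization.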

  9. Reconstruction of signal in plastic scintillator of PET using Tikhonov regularization.

    Science.gov (United States)

    Raczynski, Lech

    2015-08-01

    The new concept of a Time of Flight Positron Emission Tomography (TOF-PET) detection system, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The Jagiellonian-PET (J-PET) detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, a novel front-end electronics allowing for sampling in the voltage domain at four thresholds was developed. To take full advantage of these fast signals, a novel scheme for recovery of the waveform of the signal, based on ideas from the Tikhonov regularization method, is presented. From Bayes theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculation of the signal recovery error. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long plastic scintillator strip. It is shown that using the recovered waveform of the signals, instead of samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction from 1.05 cm to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw signal. The spatial resolution calculated under these conditions is equal to 0.93 cm.
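The flavor of recovering a smooth waveform from a handful of samples can be sketched with a generic Tikhonov-regularized least-squares fit, here with a second-difference smoothness penalty. This is an illustrative toy under assumed notation, not the J-PET scheme or its covariance-based error formula:

```python
import numpy as np

def tikhonov_recover(A, b, lam):
    """Recover a smooth waveform x from sparse samples b = A @ x + noise.

    Classic Tikhonov solution with a second-difference smoothness penalty:
        x* = argmin ||A x - b||^2 + lam * ||D x||^2
    which has the closed form (A^T A + lam D^T D)^{-1} A^T b.
    """
    n = A.shape[1]
    # Second-difference matrix D (penalizes curvature of the waveform).
    D = np.diff(np.eye(n), n=2, axis=0)
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)
```

With A selecting only four sample locations (loosely analogous to the four voltage thresholds), the smoothness term is what makes the underdetermined recovery well posed.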

  10. Dimensional regularization and analytical continuation at finite temperature

    International Nuclear Information System (INIS)

    Chen Xiangjun; Liu Lianshou

    1998-01-01

    The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed and a method of regularization of infrared divergent integrals and infrared divergent sums is given

  11. Organizational commitment and intrinsic motivation of regular and contractual primary health care providers

    Directory of Open Access Journals (Sweden)

    Pawan Kumar

    2016-01-01

    Full Text Available Background: Motivated and committed employees deliver better health care, which results in better outcomes and higher patient satisfaction. Objective: To assess the organizational commitment and intrinsic motivation of primary health care providers (HCPs) in New Delhi, India. Materials and Methods: The study was conducted in 2013 on a sample of 333 HCPs who were selected using a multistage random sampling technique. The sample includes medical officers, auxiliary nurses and midwives, and pharmacists and laboratory technicians/assistants among regular and contractual staff. Data were collected using a pretested structured questionnaire for organizational commitment (OC), job satisfiers, and intrinsic job motivation. Analysis was done using SPSS version 18, and appropriate statistical tests were applied. Results: The mean OC score for the entire regular staff is 1.6 ± 0.39 and for the contractual staff is 1.3 ± 0.45, a statistically significant difference (t = 5.57; P = 0.00). In both regular and contractual staff, none show high emotional attachment with the organization or feel part of the family in the organization. Contractual staff do not feel proud to work in the present organization for the rest of their career. Intrinsic motivation is high in both regular and contractual groups, but the intergroup difference is significant (t = 2.38; P < 0.05). Contractual staff have more dissatisfiers than regular staff, and the difference is significant (P < 0.01). Conclusion: Organizational commitment and intrinsic motivation of contractual staff are lower than those of the permanent staff. Appropriate changes are required in the predictors of organizational commitment and in the factors responsible for satisfaction in the organization to keep contractual human resources motivated and committed to the organization.

  12. Regular and conformal regular cores for static and rotating solutions

    Energy Technology Data Exchange (ETDEWEB)

    Azreg-Aïnou, Mustapha

    2014-03-07

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  13. Regular and conformal regular cores for static and rotating solutions

    International Nuclear Information System (INIS)

    Azreg-Aïnou, Mustapha

    2014-01-01

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  14. Preparation of TiC/W core–shell structured powders by one-step activation and chemical reduction process

    International Nuclear Information System (INIS)

    Ding, Xiao-Yu; Luo, Lai-Ma; Huang, Li-Mei; Luo, Guang-Nan; Zhu, Xiao-Yong; Cheng, Ji-Gui; Wu, Yu-Cheng

    2015-01-01

    Highlights: • A novel wet chemical method was used to prepare TiC/W core–shell structure powders. • TiC nanoparticles were well-encapsulated by W shells. • TiC phase was present in the interior of tungsten grains. - Abstract: In the present study, a one-step activation and chemical reduction process was employed as a novel wet-chemical route for the preparation of TiC/W core–shell structured ultra-fine powders. The XRD, FE-SEM, TEM and EDS results demonstrated that the as-synthesized powders are of high purity and uniform, with a diameter of approximately 500 nm. It is also found that the TiC nanoparticles were well-encapsulated by W shells. Such a unique process suggests a new method for preparing X/W (where X refers to a water-insoluble nanoparticle) core–shell nanoparticles with different cores

  15. Sparsity-regularized HMAX for visual recognition.

    Directory of Open Access Journals (Sweden)

    Xiaolin Hu

    Full Text Available About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.

  16. Regularities And Irregularities Of The Stark Parameters For Single Ionized Noble Gases

    Science.gov (United States)

    Peláez, R. J.; Djurovic, S.; Cirišan, M.; Aparicio, J. A.; Mar, S.

    2010-07-01

    Spectroscopy of ionized noble gases has great importance for laboratory and astrophysical plasmas. Generally, spectra of inert gases are important for many areas of physics, for example laser physics, fusion diagnostics, photoelectron spectroscopy, collision physics, astrophysics, etc. Stark halfwidths as well as shifts of spectral lines are usually employed for plasma diagnostic purposes. For example, atomic data for argon, krypton, and xenon will be useful for the spectral diagnostics of ITER. In addition, the software used for stellar atmosphere simulation, like TMAP and SMART, requires a large amount of atomic and spectroscopic data. Availability of these parameters will be useful for further development of stellar atmosphere and evolution models. Stark parameter data for spectral lines can also be useful for verification of theoretical calculations and investigation of regularities and systematic trends of these parameters within a multiplet, supermultiplet, or transition array. In recent years, different trends and regularities of Stark parameters (halfwidths and shifts of spectral lines) have been analyzed. The conditions related to the atomic structure of the element as well as the plasma conditions are responsible for regular or irregular behavior of the Stark parameters. The absence of very close perturbing levels makes Ne II a good candidate for analysis of the regularities. The other two considered elements, Kr II and Xe II, with complex spectra, present strong perturbations, and in some cases irregularities in the Stark parameters appear. In this work we analyze the influence of the perturbations on Stark parameters within the multiplets.

  17. Low-rank matrix approximation with manifold regularization.

    Science.gov (United States)

    Zhang, Zhenyue; Zhao, Keke

    2013-07-01

    This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization into the matrix factorization. Superior to graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with a small number of points) and an alternating iterative algorithm with inexact inner iteration (for large-scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.
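One common way to set up such a manifold-regularized factorization and solve it by alternating minimization can be sketched as follows. This is a generic formulation under assumed notation, not the paper's closed-form model; the Laplacian-penalized V-update is solved naively via its Kronecker-product linear system, which only scales to tiny examples:

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W of a symmetric affinity matrix."""
    return np.diag(W.sum(axis=1)) - W

def manifold_lowrank(X, W, k, lam=1e-3, n_iter=50, seed=0):
    """Factor X (d x n, columns are data points) as U @ V.T with a manifold
    penalty:  min ||X - U V^T||_F^2 + lam * tr(V^T L V).

    The V-step solves the stationarity condition V (U^T U) + lam L V = X^T U,
    a Sylvester equation, here via its Kronecker-product form.
    """
    d, n = X.shape
    L = graph_laplacian(W)
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((n, k))
    for _ in range(n_iter):
        # U-step: ordinary least squares in U (tiny ridge for numerical safety).
        U = X @ V @ np.linalg.inv(V.T @ V + 1e-9 * np.eye(k))
        # V-step: vec(V (U^T U)) + lam * vec(L V) = vec(X^T U), column-major vec.
        M = np.kron((U.T @ U).T, np.eye(n)) + lam * np.kron(np.eye(k), L)
        v = np.linalg.solve(M, (X.T @ U).flatten(order="F"))
        V = v.reshape(n, k, order="F")
    return U, V
```

The Laplacian term pulls the factor rows of graph-neighboring points together, which is the sense in which the factorization respects the data manifold.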

  18. Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions

    International Nuclear Information System (INIS)

    Lin, Hongxia; Du, Lili

    2013-01-01

    In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263–74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier–Stokes equations Indiana Univ. Math. J. 57 2643–61; 2011 Global regularity criterion for the 3D Navier–Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919–32) for 3D incompressible Navier–Stokes equations. (paper)

  19. Chiral Thirring–Wess model with Faddeevian regularization

    International Nuclear Information System (INIS)

    Rahaman, Anisur

    2015-01-01

    Replacing vector type of interaction of the Thirring–Wess model by the chiral type a new model is presented which is termed here as chiral Thirring–Wess model. Ambiguity parameters of regularization are so chosen that the model falls into the Faddeevian class. The resulting Faddeevian class of model in general does not possess Lorentz invariance. However we can exploit the arbitrariness admissible in the ambiguity parameters to relate the quantum mechanically generated ambiguity parameters with the classical parameter involved in the masslike term of the gauge field which helps to maintain physical Lorentz invariance instead of the absence of manifestly Lorentz covariance of the model. The phase space structure and the theoretical spectrum of this class of model have been determined through Dirac’s method of quantization of constraint system

  20. Regularities in Low-Temperature Phosphatization of Silicates

    Science.gov (United States)

    Savenko, A. V.

    2018-01-01

    The regularities of low-temperature phosphatization of silicates are determined from long-term experiments on the interaction between different silicate minerals and phosphate-bearing solutions over a wide range of medium acidity. It is shown that the parameters of the phosphatization reaction of hornblende, orthoclase, and labradorite have the same values as for clayey minerals (kaolinite and montmorillonite). This effect may appear if phosphatization proceeds not on silicate minerals with different structures and compositions, but on a secondary silicate phase that forms upon interaction between silicates and water and is stable in a certain pH range. The variation in the parameters of the phosphatization reaction at pH ≈ 1.8 is due to the stability of a silicate phase different from that at higher pH values.

  1. Two-Step Classification of Unemployed People in the Czech Republic

    OpenAIRE

    Zdeněk Šulc; Marina Stecenková; Jiří Vild

    2015-01-01

    The paper analyzes the structure and behavior of unemployed people in the Czech Republic by means of latent class analysis (LCA) and CHAID analysis, where the output of LCA serves as the input for CHAID. The unemployed are classified in two steps; for each step different characteristics are used. In the first step, respondents are split into latent classes according to their answers to questions concerning ways of searching for a new job. In the second step, CHAID analysis is performed with result...

  2. Regular-fat dairy and human health

    DEFF Research Database (Denmark)

    Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas

    2016-01-01

    In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular-fat dairy products and human health. In an effort to..., cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted...

  3. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded

  4. Research on test of product based on spatial sampling criteria and variable step sampling mechanism

    Science.gov (United States)

    Li, Ruihong; Han, Yueping

    2014-09-01

    This paper presents an effective approach for online testing of the assembly structures inside products, using a multiple-views technique and an X-ray digital radiography system based on spatial sampling criteria and a variable-step sampling mechanism. Although several objects inside one product may need to be tested, there is a maximal rotary step for each object within which the least structural size to be tested remains resolvable. In the offline learning process, the object is rotated by this step and imaged, and so on until a complete cycle is finished, yielding an image sequence that includes the full structural information for recognition. The maximal rotary step is restricted by the least structural size and the inherent resolution of the imaging system. During the online inspection process, the program first finds the optimum solutions for all the different target parts in the standard sequence, i.e., finds their exact angles in one cycle. Since most of the other targets in the product are larger than the least structure, the paper adopts a variable step-size sampling mechanism that rotates the product by specific angles, with different steps for the different objects inside the product, and performs matching. Experimental results show that the variable step-size method can greatly save time compared with the traditional fixed-step inspection method while the recognition accuracy is guaranteed.

  5. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  6. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance under rotation and parameterization.

  7. Three regularities of recognition memory: the role of bias.

    Science.gov (United States)

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
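The link between likelihood-ratio decisions and the Mirror Effect is easy to verify in the equal-variance Gaussian model (a standard textbook illustration, not the paper's analysis): responding "old" whenever the likelihood ratio exceeds 1 places the criterion at d'/2, so strengthening memory raises the hit rate and lowers the false-alarm rate simultaneously.

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def lr_criterion_rates(d_prime):
    """Hit and false-alarm rates when responding 'old' iff likelihood ratio > 1.

    Equal-variance Gaussian model: new ~ N(0,1), old ~ N(d',1).
    LR(x) = exp(d'*x - d'^2/2) > 1 reduces to x > d'/2.
    """
    c = d_prime / 2.0
    hit = 1 - phi(c - d_prime)   # P(x > c | old)
    fa = 1 - phi(c)              # P(x > c | new)
    return hit, fa
```

Comparing a weak condition (d' = 1) against a strong one (d' = 2) reproduces the mirror pattern: hits increase while false alarms decrease, exactly the regularity that an additive bias shift can obscure.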

  8. A soft double regularization approach to parametric blind image deconvolution.

    Science.gov (United States)

    Chen, Li; Yap, Kim-Hui

    2005-05-01

    This paper proposes a blind image deconvolution scheme based on soft integration of parametric blur structures. Conventional blind image deconvolution methods encounter a difficult dilemma of either imposing stringent and inflexible preconditions on the problem formulation or experiencing poor restoration results due to lack of information. This paper attempts to address this issue by assessing the relevance of parametric blur information, and incorporating the knowledge into the parametric double regularization (PDR) scheme. The PDR method assumes that the actual blur satisfies up to a certain degree of parametric structure, as there are many well-known parametric blurs in practical applications. Further, it can be tailored flexibly to include other blur types if some prior parametric knowledge of the blur is available. A manifold soft parametric modeling technique is proposed to generate the blur manifolds, and estimate the fuzzy blur structure. The PDR scheme involves the development of the meaningful cost function, the estimation of blur support and structure, and the optimization of the cost function. Experimental results show that it is effective in restoring degraded images under different environments.

  9. Method of transferring regular shaped vessel into cell

    International Nuclear Information System (INIS)

    Murai, Tsunehiko.

    1997-01-01

    The present invention concerns a method of transferring regular shaped vessels from a non-contaminated area into a contaminated cell. A passage hole allowing the regular shaped vessels to pass in the longitudinal direction is formed in a partitioning wall at the bottom of the contaminated cell. A plurality of regular shaped vessels are stacked in multiple stages in the vertical direction from the non-contaminated area present below the passage hole, allowed to pass while being urged, and transferred successively into the contaminated cell. As a result, since the regular shaped vessels substantially close the passage hole while being transferred, radiation and contaminated materials are prevented from escaping from the contaminated cell to the non-contaminated area. Since there is no need to open and close an isolation door frequently, the workability of the transfer can be improved remarkably. In addition, since a sealing member for sealing the gap between the regular shaped vessel passing through the passage hole and the partitioning wall at the bottom is disposed in the passage hole, contaminated materials in the contaminated cell can be prevented from escaping through the gap to the non-contaminated area. (N.H.)

  10. Regular scattering patterns from near-cloaking devices and their implications for invisibility cloaking

    International Nuclear Information System (INIS)

    Kocyigit, Ilker; Liu, Hongyu; Sun, Hongpeng

    2013-01-01

    In this paper, we consider invisibility cloaking via the transformation optics approach through a ‘blow-up’ construction. An ideal cloak makes use of singular cloaking material. ‘Blow-up-a-small-region’ and ‘truncation-of-singularity’ constructions are introduced to avoid the singular structure; however, they yield only near-cloaks. Existing studies in the literature develop various mechanisms to achieve high-accuracy approximate near-cloaking devices and, from a practical viewpoint, to nearly cloak arbitrary content. We study the problem from a different viewpoint. It is shown that for those regularized cloaking devices, the corresponding scattering wave fields due to an incident plane wave have regular patterns. The regular patterns are both a curse and a blessing. On the one hand, the regular wave pattern betrays the location of a cloaking device, which is an intrinsic defect of the ‘blow-up’ construction; this is particularly the case for the construction employing a high-loss layer lining. Indeed, our numerical experiments show robust reconstructions of the location, even when implementing phaseless cross-section data. The construction employing a high-density layer lining shows a certain promising feature. On the other hand, it is shown that one can introduce an internal point source to produce a canceling scattering pattern and thereby achieve a near-cloak of an arbitrary order of accuracy. (paper)

  11. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    Science.gov (United States)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large scale linear discrete ill-posed problems with general-form regularization: min ‖Lx‖ subject to ‖Ax − b‖ = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which is suitable only for small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We show how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as those by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
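As a rough illustration of the TRSVD building block described above (not the full MTRSVD/LSQR pipeline), the sketch below computes a rank-(k + q) randomized SVD and truncates it to rank k. The test matrix, singular value decay, and oversampling value are all made up for the example.

```python
import numpy as np

def trsvd(A, k, q=5, rng=None):
    """Rank-k truncated randomized SVD (TRSVD): form a rank-(k + q)
    randomized SVD of A, then truncate it to rank k.  The default
    oversampling q = 5 is an illustrative choice."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Range finder: sample the column space of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + q))
    Q, _ = np.linalg.qr(A @ Omega)
    # Project A onto the sampled subspace and take a small exact SVD.
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    # Truncate the rank-(k + q) factorization to rank k.
    return Q @ Ub[:, :k], s[:k], Vt[:k, :]

# Toy ill-posed matrix with geometrically decaying singular values.
rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.standard_normal((100, 80)))
V0, _ = np.linalg.qr(rng.standard_normal((80, 80)))
A = U0 @ np.diag(0.5 ** np.arange(80)) @ V0.T

U, s, Vt = trsvd(A, k=10)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt, 2)
# With fast singular value decay the TRSVD error stays close to the
# optimal rank-10 error, i.e. the 11th singular value 0.5**10.
print(err)
```

By Eckart–Young, no rank-10 approximation can beat the 11th singular value, and the randomized range finder loses little when the spectrum decays quickly.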

  12. Automatic Constraint Detection for 2D Layout Regularization.

    Science.gov (United States)

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
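As a toy illustration of casting regularization as a quadratic program in the spirit of the record above (a minimal sketch with made-up coordinates and a hypothetical tolerance, not the authors' formulation), one can snap detected near-equal coordinates together by minimizing displacement subject to equality constraints:

```python
import numpy as np

def snap_equal(x0, pairs):
    """Tiny equality-constrained QP: minimize ||x - x0||^2 subject to
    x[a] == x[b] for each detected pair, solved via its KKT system."""
    n, m = len(x0), len(pairs)
    C = np.zeros((m, n))
    for r, (a, b) in enumerate(pairs):
        C[r, a], C[r, b] = 1.0, -1.0
    # KKT system: [I C^T; C 0] [x; mu] = [x0; 0]
    K = np.block([[np.eye(n), C.T], [C, np.zeros((m, m))]])
    x = np.linalg.solve(K, np.concatenate([x0, np.zeros(m)]))[:n]
    return x

# Left edges of four made-up facade elements; coordinates closer than a
# tolerance are detected as "should be aligned".
x0 = np.array([10.0, 10.4, 30.1, 29.8])
pairs = [(a, b) for a in range(4) for b in range(a + 1, 4)
         if abs(x0[a] - x0[b]) < 1.0]
x = snap_equal(x0, pairs)
print(x)   # each detected group is snapped to its mean: 10.2 and 29.95
```

The minimum-displacement solution averages each constrained group, which is why detected alignments move both elements symmetrically rather than forcing one onto the other.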

  13. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong

    2015-09-18

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. We evaluate the proposed framework on a variety of input layouts from different applications; the results demonstrate that our method has superior performance to the state of the art.

  14. Lavrentiev regularization method for nonlinear ill-posed problems

    International Nuclear Information System (INIS)

    Kinh, Nguyen Van

    2002-10-01

    In this paper we shall be concerned with the Lavrentiev regularization method to reconstruct solutions x₀ of nonlinear ill-posed problems F(x) = y₀, where instead of y₀ noisy data y^δ ∈ X with ‖y^δ − y₀‖ ≤ δ are given and F: X → X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x − x*) = y^δ with some initial guess x*. Assuming certain conditions concerning the operator F and the smoothness of the element x* − x₀, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly. (author)
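A one-dimensional sketch of the Lavrentiev scheme F(x) + α(x − x*) = y^δ, using a toy monotone operator as a stand-in for an accretive one and an a-priori choice α ~ √δ (both illustrative assumptions, not the paper's setting):

```python
import numpy as np

def lavrentiev_solve(F, y_delta, alpha, x_star, lo=-10.0, hi=10.0, tol=1e-12):
    """Solve F(x) + alpha * (x - x_star) = y_delta by bisection.  For a
    monotone (1-D stand-in for accretive) F the left-hand side is strictly
    increasing in x, so the root is unique."""
    g = lambda x: F(x) + alpha * (x - x_star) - y_delta
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy monotone operator with exact solution x0 = 1, exact data y0 = F(x0) = 2.
F = lambda x: x + x ** 3
x0, y0, x_star = 1.0, 2.0, 0.0
for delta in (1e-1, 1e-2, 1e-3):
    y_delta = y0 + delta              # worst-case noisy data
    alpha = np.sqrt(delta)            # a-priori choice alpha ~ sqrt(delta)
    x_ad = lavrentiev_solve(F, y_delta, alpha, x_star)
    print(delta, abs(x_ad - x0))      # the error shrinks as delta -> 0
```

The perturbation α(x − x*) makes the operator uniformly monotone, which is what restores stability; the price is a bias toward x* that vanishes as α is driven to zero with δ.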

  15. Long-wave model for strongly anisotropic growth of a crystal step.

    Science.gov (United States)

    Khenner, Mikhail

    2013-08-01

    A continuum model for the dynamics of a single step with strongly anisotropic line energy is formulated and analyzed. The step grows by attachment of adatoms from the lower terrace, onto which atoms adsorb from a vapor phase or from a molecular beam, and desorption is non-negligible (the "one-sided" model). Via a multiscale expansion, we derive a long-wave, strongly nonlinear, and strongly anisotropic evolution PDE for the step profile. Written in terms of the step slope, the PDE can be represented in a form similar to a convective Cahn-Hilliard equation. We perform the linear stability analysis and compute the nonlinear dynamics. Linear stability depends on whether the stiffness is minimum or maximum in the direction of step growth. It also depends nontrivially on the combination of the anisotropy strength parameter and the atomic flux from the terrace to the step. Computations show formation and coarsening of a hill-and-valley structure superimposed onto a long-wavelength profile, which coarsens independently. Coarsening laws for the hill-and-valley structure are computed for two principal orientations of the maximum step stiffness, for increasing anisotropy strength, and for varying atomic flux.
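For orientation, the generic convective Cahn-Hilliard form that the slope equation resembles can be written as (schematic textbook form with a generic convective coefficient, not the paper's exact PDE):

```latex
u_t + \beta\, u\, u_x = -\left( u - u^3 + u_{xx} \right)_{xx},
```

where u plays the role of the step slope and the convective strength β reflects the driving flux; the fourth-order right-hand side produces the hill-and-valley coarsening noted in the abstract.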

  16. L1-norm locally linear representation regularization multi-source adaptation learning.

    Science.gov (United States)

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access to only a small number of labeled examples from the target domain. The success of supervised DAL in this "small sample" regime therefore requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. To this end, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, for robust graph regularization, we replace traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined L1-MSAL. Moreover, to deal with nonlinear learning problems, we generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets covering faces, visual video and objects. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Strategic Planning Information Online: A Step-by-Step Guide through Major Checkpoints. . . What Database to Use When.

    Science.gov (United States)

    Marsh, Sharon G.

    1984-01-01

    This article concentrates on the use of online searching to provide information needed by business for strategy formulation. The following steps are explained: environment analysis (structural analysis, trend analysis); business analysis; strategy formulation; strategic anticipation; strategic plan formalization; and implementation and control.…

  18. Tidal-induced large-scale regular bed form patterns in a three-dimensional shallow water model

    NARCIS (Netherlands)

    Hulscher, Suzanne J.M.H.

    1996-01-01

    The three-dimensional model presented in this paper is used to study how tidal currents form wave-like bottom patterns. Inclusion of vertical flow structure turns out to be necessary to describe the formation, or absence, of all known large-scale regular bottom features. The tide and topography are

  19. Awareness about “Ten Steps for Successful Breastfeeding” among Medical and Nursing Students

    Science.gov (United States)

    Kakrani, Vandana A.; Rathod (Waghela), Hetal K.; Mammulwar, Megha S.; Bhawalkar, Jitendra S.

    2015-01-01

    Background: The Baby-friendly Hospital Initiative (BFHI) is a vital intervention supported by the World Health Organization and UNICEF to reduce infant mortality and has been included as part of the curriculum in nursing and medical courses. The objective was to assess the extent of students' knowledge and understanding of BFHI and to identify gaps in their knowledge about the BFHI steps. Methods: A descriptive cross-sectional study was carried out among nursing (4th year) and medical students (3rd year MBBS) about the ten steps of BFHI using a pretested and predesigned questionnaire. After ethical clearance, information was collected about their awareness and correct understanding of the ten steps. Results: A total of 102 (51.6%) medical and 96 (48.4%) nursing students, comprising 57 (28.8%) males and 141 (71.2%) females, were interviewed; both groups had similar mean scores on the ten steps of BFHI. Female respondents (82.3%) had best understood step 2 (training), as compared to males (80.7%). Step 6 (no supplements) was well understood by 94.3% of females and 86% of males. Step 7 (rooming in) was known to 85.8% of females and 54.4% of males. Step 9 (no pacifiers) was known to 80.1% of females, while among males 56.1% were aware. There was a statistically significant difference in knowledge about steps 2, 4 (skin to skin), 5 (counseling), 7, and 9, as females were more aware of these steps than males. The least understood steps among medical and nursing students were step 1 (written policy) (15.7%, 15.6%), step 3 (prenatal education) (27.5%, 29.2%), step 8 (cues) (10.8%, 24%) and step 10 (community support) (8.8%, 11.5%), respectively. Conclusions: BFHI is one of the successful international efforts undertaken to promote, protect and support breast feeding. Acquiring knowledge about it is a crucial tool for better practices by medical and nursing students in the future. Continued medical education, workshops and seminars by lactation specialists

  20. Online Manifold Regularization by Dual Ascending Procedure

    Directory of Open Access Journals (Sweden)

    Boliang Sun

    2013-01-01

    Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is the key to transferring manifold regularization from the offline to the online setting in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way to the design and analysis of online manifold regularization algorithms.

  1. The exotic heat-trace asymptotics of a regular-singular operator revisited

    OpenAIRE

    Vertman, Boris

    2013-01-01

    We discuss the exotic properties of the heat-trace asymptotics for a regular-singular operator with general boundary conditions at the singular end, as observed by Falomir, Muschietti, Pisani and Seeley as well as by Kirsten, Loya and Park. We explain how their results alternatively follow from the general heat kernel construction by Mooers, a natural question that had not been addressed before, as the latter work did not elaborate explicitly on the singular structure of the heat trace expansion...

  2. Mechanical properties of molybdenum-titanium alloys micro-structurally controlled by multi-step internal nitriding

    International Nuclear Information System (INIS)

    Nagae, M.; Yoshio, T.; Takemoto, Y.; Takada, J.; Hiraoka, Y.

    2001-01-01

    Internally nitrided dilute Mo-Ti alloys having a heavily deformed microstructure near the specimen surface were prepared by a novel two-step nitriding process at 1173 to 1773 K in N₂ gas. Three-point bend tests were performed on the nitrided specimens at temperatures from 77 to 298 K in order to investigate the effect of microstructure control by internal nitriding on the ductile-to-brittle transition temperature (DBTT) of the alloy. The yield strength at 243 K of the specimen that retained the deformed microstructure through the two-step nitriding was about 1.7 times that of the recrystallized specimen. The specimen subjected to the two-step nitriding was bent more than 90 degrees at 243 K, whereas the recrystallized specimen fractured after showing only slight ductility at 243 K. The DBTT of the specimen subjected to the two-step nitriding and of the recrystallized specimen was about 153 K and 203 K, respectively. These results indicate that multi-step internal nitriding is very effective in reducing the recrystallization embrittlement of molybdenum alloys. (author)

  3. REGULAR PATTERN MINING (WITH JITTER) ON WEIGHTED-DIRECTED DYNAMIC GRAPHS

    Directory of Open Access Journals (Sweden)

    A. GUPTA

    2017-02-01

    Full Text Available Real-world graphs are mostly dynamic in nature, exhibiting time-varying behaviour in the structure of the graph, the weights on the edges, and the directions of the edges. Mining regular patterns in the occurrence of edge parameters gives an insight into consumer trends over time in e-commerce co-purchasing networks. But such patterns need not be precise, as when some product goes out of stock or a group of customers becomes unavailable for a short period of time. Ignoring them may lead to loss of useful information, so taking jitter into account becomes vital. To the best of our knowledge, no work has yet been reported that extracts regular patterns considering a jitter of length greater than unity. In this article, we propose a novel method to find quasi-regular patterns on the weight and direction sequences of such graphs. The method involves analysing the dynamic network considering the inconsistencies in the occurrence of edges. It utilizes the relation between the occurrence sequence and the corresponding weight and direction sequences to speed up this process. Further, these patterns are used to determine the most central nodes (such as the most profit-yielding products. To accomplish this we introduce the concepts of dynamic closeness centrality and dynamic betweenness centrality. Experiments on the Enron e-mail dataset and a synthetic dynamic network show that the presented approach is efficient, so it can be used to find patterns in large-scale networks consisting of many timestamps.
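The idea of a regular pattern with jitter can be sketched in a few lines: an edge's occurrence timestamps are quasi-regular when consecutive gaps match a period up to a tolerance. This toy check is far simpler than the paper's miner; the timestamps and parameters are invented.

```python
def quasi_regular(timestamps, period, jitter, min_support=3):
    """Toy check for a quasi-regular occurrence pattern: consecutive gaps
    must equal `period` up to +/- `jitter`, for at least `min_support`
    occurrences in a row.  (Stand-in, not the paper's mining procedure.)"""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    near = [abs(g - period) <= jitter for g in gaps]
    best = run = 0
    for flag in near:                 # longest run of near-periodic gaps
        run = run + 1 if flag else 0
        best = max(best, run)
    return best + 1 >= min_support    # a run of r gaps spans r + 1 occurrences

# Edge observed at times 1, 4, 7, 11, 14 (period ~3 with jitter 1), then a break.
print(quasi_regular([1, 4, 7, 11, 14, 20], period=3, jitter=1))   # True
print(quasi_regular([1, 10, 12], period=3, jitter=1))             # False
```

Allowing the gap to deviate by the jitter is exactly what keeps a pattern alive across a brief stock-out or absence, as the abstract motivates.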

  4. pH-Controlled Two-Step Uncoating of Influenza Virus

    Science.gov (United States)

    Li, Sai; Sieben, Christian; Ludwig, Kai; Höfer, Chris T.; Chiantia, Salvatore; Herrmann, Andreas; Eghiaian, Frederic; Schaap, Iwan A.T.

    2014-01-01

    Upon endocytosis in its cellular host, influenza A virus transits via early to late endosomes. To efficiently release its genome, the composite viral shell must undergo significant structural rearrangement, but the exact sequence of events leading to viral uncoating remains largely speculative. In addition, no change in viral structure has ever been identified at the level of early endosomes, raising a question about their role. We performed AFM indentation on single viruses in conjunction with cellular assays under conditions that mimicked gradual acidification from early to late endosomes. We found that the release of the influenza genome requires sequential exposure to the pH of both early and late endosomes, with each step corresponding to changes in the virus mechanical response. Step 1 (pH 7.5–6) involves a modification of both hemagglutinin and the viral lumen and is reversible, whereas Step 2 (pH pH step or blocking the envelope proton channel M2 precludes proper genome release and efficient infection, illustrating the importance of viral lumen acidification during the early endosomal residence for influenza virus infection. PMID:24703306

  5. Regular graph construction for semi-supervised learning

    International Nuclear Information System (INIS)

    Vega-Oliveros, Didier A; Berton, Lilian; Eberle, Andre Mantini; Lopes, Alneu de Andrade; Zhao, Liang

    2014-01-01

    Semi-supervised learning (SSL) stands out for using a small amount of labeled points for data clustering and classification. In this scenario graph-based methods allow the analysis of local and global characteristics of the available data by identifying classes or groups regardless of data distribution and by representing submanifolds in Euclidean space. Most methods used in the literature for SSL classification pay little attention to graph construction. However, regular graphs can obtain better classification accuracy than traditional methods such as k-nearest neighbor (kNN), since kNN favors the generation of hubs and is not appropriate for high-dimensional data. Nevertheless, methods commonly used for generating regular graphs have high computational cost. We tackle this problem by introducing an alternative method for the generation of regular graphs with better runtime performance than methods usually found in the area. Our technique is based on the preferential selection of vertices according to some topological measures, such as closeness, generating a regular graph at the end of the process. Experiments using the global and local consistency method for label propagation show that our method provides better or equal classification rates in comparison with kNN.
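The hub problem the abstract mentions can be illustrated with a greedy degree-capped construction (a simple stand-in, not the authors' closeness-based preferential selection): candidate edges are scanned by increasing distance and accepted only while both endpoints are unsaturated, so no vertex can accumulate the large in-degree a kNN graph permits.

```python
import numpy as np

def cap_degree_graph(X, k):
    """Greedy stand-in for regular graph construction: scan candidate edges
    by increasing distance and accept one only while both endpoints still
    have degree < k.  Unlike kNN, no vertex can become a hub."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    pairs = sorted((D[a, b], a, b) for a in range(n) for b in range(a + 1, n))
    deg = np.zeros(n, dtype=int)
    adj = np.zeros((n, n), dtype=bool)
    for _, a, b in pairs:
        if deg[a] < k and deg[b] < k:
            adj[a, b] = adj[b, a] = True
            deg[a] += 1
            deg[b] += 1
    return adj, deg

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 5))
adj, deg = cap_degree_graph(X, k=4)
print(deg.min(), deg.max())   # every degree is capped at 4
```

The greedy pass yields a near-regular rather than exactly regular graph (a few vertices may end below k), which is enough to see why capping degrees suppresses hubs.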

  6. The use of assistive technology resources for disabled children in regular schooling: the teachers’ perception

    Directory of Open Access Journals (Sweden)

    2012-12-01

    Full Text Available The national School Census revealed that 702,603 disabled people were enrolled in regular education in 2010. The use of assistive technology resources in the school context has been indicated to favor the execution of tasks and the access to educational content and school environments and, consequently, help disabled individuals' learning. However, there are few studies showing the impact of these resources in the education process of children with physical disabilities. The aim of this study was to identify, from the teacher's viewpoint, the contributions and difficulties in the use of technology resources with students with cerebral palsy, focusing on those with severe motor impairment, attending regular education. The study included five teachers of these students who were using assistive technology resources in the execution of writing and/or communication assignments. Semi-structured interviews were conducted and data were analyzed following the Collective Subject Discourse (CSD) technique. Results indicated that assistive technology resources are already included in regular schools and that they have brought contributions to the education process of children with cerebral palsy in regular class; nevertheless, they are being implemented without systematization, monitoring and/or partnerships. The study pointed to the need to consider the opinions and requirements of the people involved in the context where the use of technology is inserted.

  7. Physical model of dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Schonfeld, Jonathan F.

    2016-12-15

    We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)

  8. Factors associated with regular dental visits among hemodialysis patients

    Science.gov (United States)

    Yoshioka, Masami; Shirayama, Yasuhiko; Imoto, Issei; Hinode, Daisuke; Yanagisawa, Shizuko; Takeuchi, Yuko; Bando, Takashi; Yokota, Narushi

    2016-01-01

    AIM To investigate awareness and attitudes about preventive dental visits among dialysis patients; to clarify the barriers to visiting the dentist. METHODS Subjects included 141 dentate outpatients receiving hemodialysis treatment at two facilities, one with a dental department and the other without a dental department. We used a structured questionnaire to interview participants about their awareness of oral health management issues for dialysis patients, perceived oral symptoms and attitudes about dental visits. Bivariate analysis using the χ2 test was conducted to determine associations between study variables and regular dental check-ups. Binominal logistic regression analysis was used to determine factors associated with regular dental check-ups. RESULTS There were no significant differences in patient demographics between the two participating facilities, including attitudes about dental visits. Therefore, we included all patients in the following analyses. Few patients (4.3%) had been referred to a dentist by a medical doctor or nurse. Although 80.9% of subjects had a primary dentist, only 34.0% of subjects received regular dental check-ups. The most common reasons cited for not seeking dental care were that visits are burdensome and a lack of perceived need. Patients with gum swelling or bleeding were much more likely to be in the group of those not receiving routine dental check-ups (χ2 test, P < 0.01). Logistic regression analysis demonstrated that receiving dental check-ups was associated with awareness that oral health management is more important for dialysis patients than for others and with having a primary dentist (P < 0.05). CONCLUSION Dialysis patients should be educated about the importance of preventive dental care. Medical providers are expected to participate in promoting dental visits among dialysis patients. PMID:27648409

  9. Evaluating the impact of spatio-temporal smoothness constraints on the BOLD hemodynamic response function estimation: an analysis based on Tikhonov regularization

    International Nuclear Information System (INIS)

    Casanova, R; Yang, L; Hairston, W D; Laurienti, P J; Maldjian, J A

    2009-01-01

    Recently we have proposed the use of Tikhonov regularization with temporal smoothness constraints to estimate the BOLD fMRI hemodynamic response function (HRF). The temporal smoothness constraint was imposed on the estimates by using second derivative information, while the regularization parameter was selected based on the generalized cross-validation (GCV) function. Using one-dimensional simulations, we previously found this method to produce reliable estimates of the HRF time course, especially its time to peak (TTP), while being at the same time fast and robust to over-sampling in the HRF estimation. Here, we extend the method to include simultaneous temporal and spatial smoothness constraints. This method does not need Gaussian smoothing as a pre-processing step, as is usually done in fMRI data analysis. We carried out two-dimensional simulations to compare the two methods: Tikhonov regularization with temporal (Tik-GCV-T) and spatio-temporal (Tik-GCV-ST) smoothness constraints on the estimated HRF. We focus our attention on quantifying the influence of Gaussian data smoothing and the presence of edges on the performance of these techniques. Our results suggest that the spatial smoothing introduced by regularization is less severe than that produced by Gaussian smoothing. This allows more accurate estimates of the response amplitudes while producing similar estimates of the TTP. We illustrate these ideas using real data. (note)
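The temporal part of the scheme can be sketched in a few lines: a Tikhonov estimate with a second-derivative penalty, with the regularization parameter chosen by minimizing the GCV function over a grid. The forward operator, the HRF-like signal, and the noise level below are all hypothetical stand-ins, not the paper's fMRI model.

```python
import numpy as np

def second_diff(n):
    """Second-order difference matrix encoding the temporal smoothness prior."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def tikhonov_gcv(A, b, lambdas):
    """Tikhonov estimate x = argmin ||Ax - b||^2 + lam ||Dx||^2, with lam
    picked by minimizing the GCV function ||Ax - b||^2 / (m - tr H)^2,
    where H is the influence (hat) matrix of the regularized estimator."""
    m, n = A.shape
    D = second_diff(n)
    best = None
    for lam in lambdas:
        M = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T)
        x = M @ b
        H = A @ M
        gcv = np.linalg.norm(A @ x - b) ** 2 / (m - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, x)
    return best[2], best[1]

# Hypothetical smooth HRF-like bump observed through an integration operator.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 60)
x_true = 10.0 * t ** 2 * np.exp(-8.0 * t)
A = np.tril(np.ones((60, 60))) / 60.0
b = A @ x_true + 1e-4 * rng.standard_normal(60)
x_est, lam = tikhonov_gcv(A, b, np.logspace(-10, -4, 13))
print(lam, np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))
```

GCV balances the residual against the effective degrees of freedom tr H, so it needs no knowledge of the noise level, which is the property the abstract relies on.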

  10. Fluctuations of quantum fields via zeta function regularization

    International Nuclear Information System (INIS)

    Cognola, Guido; Zerbini, Sergio; Elizalde, Emilio

    2002-01-01

    Explicit expressions for the expectation values and the variances of some observables, which are bilinear quantities in the quantum fields on a D-dimensional manifold, are derived making use of zeta function regularization. It is found that the variance, related to the second functional variation of the effective action, requires a further regularization and that the relative regularized variance turns out to be 2/N, where N is the number of fields, thus being independent of the dimension D. Some illustrative examples are worked through.
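For reference, the zeta-regularization prescription invoked here assigns to a positive elliptic operator L with eigenvalues λₙ the generalized zeta function and regularized determinant

```latex
\zeta_L(s) = \sum_n \lambda_n^{-s} \quad (\mathrm{Re}\, s \ \text{large}), \qquad
\ln \det L = -\zeta_L'(0),
```

with the right-hand sides defined by analytic continuation in s; expectation values of bilinear observables are then read off from values and derivatives of ζ_L at special points.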

  11. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
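The paper's ADMM-with-curvelets reconstruction is elaborate; as a minimal stand-in illustrating sparse regularization in an orthonormal transform, the sketch below runs iterative soft-thresholding with a DCT in place of the curvelet frame. The measurement operator, sizes, and parameters are all invented for the example.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_sparse(A, b, W, lam, step, iters):
    """Iterative soft-thresholding for min 0.5||Ax - b||^2 + lam ||Wx||_1
    with an orthonormal sparsifying transform W."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - b))   # gradient step on the data term
        x = W.T @ soft(W @ x, step * lam)    # shrink transform coefficients
    return x

n = 64
# Orthonormal DCT-II matrix standing in for the curvelet frame.
i = np.arange(n)
W = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
W[0, :] /= np.sqrt(2.0)

rng = np.random.default_rng(3)
A = rng.standard_normal((128, n)) / np.sqrt(128)   # toy measurement operator
c_true = np.zeros(n); c_true[3], c_true[10] = 5.0, 3.0
x_true = W.T @ c_true                              # signal sparse in W
b = A @ x_true + 0.01 * rng.standard_normal(128)
x = ista_sparse(A, b, W, lam=0.005, step=0.25, iters=500)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Because W is orthonormal, the proximal step is exact (threshold the coefficients, transform back), which is the property an orthonormal frame shares with the toy DCT used here.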

  12. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    Science.gov (United States)

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while large numbers of unlabeled examples are available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic program, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programs. Experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  13. Regularity and chaos in cavity QED

    International Nuclear Information System (INIS)

    Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G

    2017-01-01

    The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)
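    Computing a largest Lyapunov exponent for the Dicke model requires integrating its classical limit; as a much simpler illustration of the same diagnostic, the sketch below averages the log-derivative along an orbit of the logistic map, where the exact chaotic value ln 2 at r = 4 is known (the map and parameters are a stand-in, not the paper's system):

```python
import math

def lyapunov_logistic(r, x0=0.4, n_transient=1000, n_iter=100_000):
    """Largest Lyapunov exponent of x -> r*x*(1-x).

    For a 1D map the exponent is the orbit average of ln|f'(x)|,
    with f'(x) = r*(1 - 2x); positive values signal chaos,
    negative values regular (periodic) motion.
    """
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        acc += math.log(abs(r * (1 - 2 * x)))
    return acc / n_iter
```

    At r = 4 the estimate approaches ln 2 ≈ 0.693 (chaotic), while in the period-2 window at r = 3.2 it is negative (regular), mirroring the chaotic/regular distinction drawn in the abstract.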

  14. One-step synthesis of g-C{sub 3}N{sub 4} hierarchical porous structure nanosheets with dramatic ultraviolet light photocatalytic activity

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Jing; Wang, Yong; Huang, Jianfeng, E-mail: huangjfsust@126.com; Cao, Liyun; Li, Jiayin; Hai, Guojuan; Bai, Zhe

    2016-12-15

    Highlights: • g-C{sub 3}N{sub 4} nanosheets with a hierarchical porous structure were synthesized in one step. • The band gap of the nanosheets was wider and investigated in detail. • The nanosheets can degrade almost all of the RhB within 9 min. • The photocurrent of the nanosheets is 5.97 times as high as that of the P-25. - Abstract: Graphitic carbon nitride (g-C{sub 3}N{sub 4}) nanosheets with a hierarchical porous structure were synthesized via a one-step thermal condensation-oxidation process. The microstructure of the g-C{sub 3}N{sub 4} was characterized to explain its dramatic ultraviolet-light photocatalytic activity. The results showed that the g-C{sub 3}N{sub 4} hierarchical aggregates were assembled from nanosheets with a length of 1–2 μm and a thickness of 20–30 nm, and the N{sub 2} adsorption/desorption isotherms further confirmed the presence of a fissure-like mesoporous structure. An enhanced photocurrent of 37.2 μA was obtained, which is almost 5 times higher than that of P-25. In addition, the g-C{sub 3}N{sub 4} nanosheets degraded Rhodamine B with 99.4% removal efficiency in only 9 min. This high photocatalytic activity can be attributed to the nanoplatelet morphology, which improves electron transport along the in-plane direction, and to the hierarchical porous structure, which is associated with a wider band gap of C{sub 3}N{sub 4}; the photoinduced electron-hole pairs therefore have a stronger oxidation-reduction potential for photocatalysis.

  15. Identification of Random Dynamic Force Using an Improved Maximum Entropy Regularization Combined with a Novel Conjugate Gradient

    Directory of Open Access Journals (Sweden)

    ChunPing Ren

    2017-01-01

    Full Text Available We propose a novel mathematical algorithm to offer a solution for inverse random dynamic force identification in practical engineering. To deal with the random dynamic force identification problem, an improved maximum entropy (IME) regularization technique is transformed into an unconstrained optimization problem, and a novel conjugate gradient (NCG) method is applied to solve the objective function; the combination is abbreviated as the IME-NCG algorithm. The result of the IME-NCG algorithm is compared with that of the ME, ME-CG, ME-NCG, and IME-CG algorithms; it is found that the IME-NCG algorithm is well suited to identifying the random dynamic force, with smaller root-mean-square error (RMSE), lower restoration time, and fewer iterative steps. In the engineering application example, the L-curve method, which performs better than the generalized cross-validation (GCV) method, is introduced to select the regularization parameter; thus the proposed algorithm can help alleviate the ill-conditioning in the identification of dynamic force and acquire an optimal solution of the inverse problem in practical engineering.
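    The abstract does not specify which conjugate-gradient variant the NCG scheme uses; as a generic sketch of the ingredient, a standard Polak–Ribière+ nonlinear conjugate gradient with Armijo backtracking can minimize a smooth regularized objective as follows (all names and constants here are illustrative, not the paper's method):

```python
import numpy as np

def ncg_minimize(f, grad, x0, n_iter=500, tol=1e-8):
    """Polak-Ribiere+ nonlinear conjugate gradient with Armijo backtracking."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search along the current direction d
        t, fx, slope = 1.0, f(x), g.dot(d)
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))  # PR+ restart rule
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

    The max(0, ·) in the Polak–Ribière update automatically restarts to steepest descent when the previous direction stops helping, which keeps the iteration robust without an exact line search.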

  16. Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears

    Science.gov (United States)

    Chen, Sau-Chin; Hu, Jon-Fan

    2015-01-01

    Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…

  17. Functional status, physical activity level, and exercise regularity in patients with fibromyalgia after Multidisciplinary treatment: retrospective analysis of a randomized controlled trial.

    Science.gov (United States)

    Salvat, I; Zaldivar, P; Monterde, S; Montull, S; Miralles, I; Castel, A

    2017-03-01

    Multidisciplinary treatments have shown to be effective for fibromyalgia. We report detailed functional outcomes of patients with fibromyalgia who attended a 3-month Multidisciplinary treatment program. The hypothesis was that patients would have increased functional status, physical activity level, and exercise regularity after attending this program. We performed a retrospective analysis of a randomized, simple blinded clinical trial. The inclusion criteria consisted of female sex, a diagnosis of fibromyalgia, age 18-60 years, and 3-8 years of schooling. Measures from the Fibromyalgia Impact Questionnaire (FIQ) and the COOP/WONCA Functional Health Assessment Charts (WONCA) were obtained before and at the end of the treatment and at 3-, 6-, and 12-month follow-ups. Patients recorded their number of steps per day with pedometers. They performed the six-minute walk test (6 MW) before and after treatment. In total, 155 women participated in the study. Their median (interquartile interval) FIQ score was 68.0 (53.0-77.0) at the beginning of the treatment, and the difference between the Multidisciplinary and Control groups was statistically and clinically significant in all of the measures (except the 6-month follow-up). The WONCA charts showed significant clinical improvements in the Multidisciplinary group, with physical fitness in the normal range across almost all values. In that group, steps/day showed more regularity, the 6 MW results showed an improvement of -33.00 (-59.8 to -8.25) m, and the differences from the Control group were statistically significant. The patients who underwent the Multidisciplinary treatment had improved functional status, physical activity level, and exercise regularity. The functional improvements were maintained 1 year after treatment completion.

  18. Formation of Ag nanowires on graphite stepped surfaces. A DFT study

    Science.gov (United States)

    Ambrusi, Rubén E.; García, Silvana G.; Pronsato, María E.

    2015-01-01

    We theoretically investigate the feasibility of obtaining silver nanowires on stepped graphite surfaces using density functional theory calculations. Three-layer slabs are used to model graphite surfaces with and without defects. Adsorption energies for Ag atoms on the graphite surfaces were calculated, showing the preference of Ag adatoms to locate at the steps, forming linear structures such as nanowires. An analysis of the charge densities and projected densities of states for different structures is also performed.

  19. Fabrication and characterization of one- and two-dimensional regular patterns produced employing multiple exposure holographic lithography

    DEFF Research Database (Denmark)

    Tamulevičius, S.; Jurkevičiute, A.; Armakavičius, N.

    2017-01-01

    In this paper we describe fabrication and characterization methods of two-dimensional periodic microstructures in photoresist with a pitch of 1.2 μm and lattice constant 1.2-4.8 μm, formed using two-beam multiple exposure holographic lithography technique. The regular structures were recorded empl...

  20. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China) and School of Life Sciences and Technology, Xidian University, Xi' an 710071 (China)

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. To address these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an l{sub 2} data-fidelity term and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual norm and the regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. 
Simulation experiments were used to illustrate why multispectral data were used

  1. On Hierarchical Extensions of Large-Scale 4-regular Grid Network Structures

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Patel, A.; Knudsen, Thomas Phillip

    2004-01-01

    dependencies between the number of nodes and the distances in the structures. The perfect square mesh is introduced for hierarchies, and it is shown that applying ordered hierarchies in this way results in logarithmic dependencies between the number of nodes and the distances, resulting in better scaling...... structures. For example, in a mesh of 391876 nodes the average distance is reduced from 417.33 to 17.32 by adding hierarchical lines. This is gained by increasing the number of lines by 4.20% compared to the non-hierarchical structure. A similar hierarchical extension of the torus structure also results...
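    The quoted average distance of 417.33 in a 391876-node (626 × 626) mesh is consistent with the closed form for the mean Manhattan distance over ordered node pairs; the hierarchical variants are not reproduced here, but the base figure can be checked directly (function names are illustrative):

```python
from itertools import product

def avg_mesh_distance(n):
    """Mean Manhattan distance between two (ordered) nodes of an n x n mesh.

    Per axis, the mean of |i - j| over i, j in {0..n-1} is (n^2 - 1)/(3n);
    the two axes contribute independently, so the grid mean is twice that.
    """
    return 2 * (n * n - 1) / (3 * n)

def avg_mesh_distance_brute(n):
    """Brute-force check of the closed form for small n."""
    nodes = list(product(range(n), repeat=2))
    total = sum(abs(a - c) + abs(b - d)
                for (a, b) in nodes for (c, d) in nodes)
    return total / len(nodes) ** 2
```

    For n = 626 the formula gives 417.33, matching the non-hierarchical average distance quoted in the record.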

  2. Matrix regularization of embedded 4-manifolds

    International Nuclear Information System (INIS)

    Trzetrzelewski, Maciej

    2012-01-01

    We consider products of two 2-manifolds such as S²×S², embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N), i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and, as a byproduct, of the 3-algebra, which makes the regularization of S³ also possible).

  3. PIV Study of Aeration Efficient of Stepped Spillway System

    Science.gov (United States)

    Abas, M. A.; Jamil, R.; Rozainy, M. R.; Zainol, M. A.; Adlan, M. N.; Keong, C. W.

    2017-06-01

    This paper investigates the three-dimensional (3D) simulation of a cascade aerator system using lattice Boltzmann (LBM) simulation; a laboratory experiment was carried out to investigate the flow, aeration and cavitation in the spillway. Different configurations of stepped spillway are designed in this project in order to investigate the relationship between the configuration of the stepped spillway and cavitation in the flow. The aeration in the stepped spillway is also investigated, and the experimental results are compared with the simulated results. The flow pattern at the 3rd step looks similar between the LBM simulation and the experimental findings for Set 1 and Set 2. This provides a better understanding of the cavitation, aeration and flow in different configurations of the stepped spillway. In addition, the occurrence of a negative-pressure region in the stepped spillway increases the possibility of cavitation, and cavitation will damage the structure of the stepped spillway. Furthermore, it was also found that increasing the barrier thickness of the stepped spillway improves the aeration efficiency and reduces cavitation in the stepped spillway.

  4. STEP: Self-supporting tailored k-space estimation for parallel imaging reconstruction.

    Science.gov (United States)

    Zhou, Zechen; Wang, Jinnan; Balu, Niranjan; Li, Rui; Yuan, Chun

    2016-02-01

    A new subspace-based iterative reconstruction method, termed Self-supporting Tailored k-space Estimation for Parallel imaging reconstruction (STEP), is presented and evaluated in comparison to the existing autocalibrating method SPIRiT and calibrationless method SAKE. In STEP, two tailored schemes including k-space partition and basis selection are proposed to promote spatially variant signal subspace and incorporated into a self-supporting structured low rank model to enforce properties of locality, sparsity, and rank deficiency, which can be formulated into a constrained optimization problem and solved by an iterative algorithm. Simulated and in vivo datasets were used to investigate the performance of STEP in terms of overall image quality and detail structure preservation. The advantage of STEP on image quality is demonstrated by retrospectively undersampled multichannel Cartesian data with various patterns. Compared with SPIRiT and SAKE, STEP can provide more accurate reconstruction images with less residual aliasing artifacts and reduced noise amplification in simulation and in vivo experiments. In addition, STEP has the capability of combining compressed sensing with arbitrary sampling trajectory. Using k-space partition and basis selection can further improve the performance of parallel imaging reconstruction with or without calibration signals. © 2015 Wiley Periodicals, Inc.
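    STEP's structured low-rank model involves k-space partitioning and basis selection beyond what the abstract details; the core rank-deficiency enforcement it shares with such methods can be sketched as a truncated-SVD projection onto the set of low-rank matrices (a generic illustration, not the STEP pipeline):

```python
import numpy as np

def project_rank(M, r):
    """Project M onto the set of rank-<=r matrices.

    By the Eckart-Young theorem, keeping the r largest singular
    values/vectors gives the best Frobenius-norm approximation;
    iterative reconstructions alternate such a projection with
    data-consistency steps on the acquired k-space samples.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

    On a noisy observation of a genuinely low-rank matrix, the projection suppresses the noise that lies outside the dominant subspace.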

  5. Attitudes and actions of asthma patients on regular maintenance therapy: the INSPIRE study

    Directory of Open Access Journals (Sweden)

    Myrseth Sven-Erik

    2006-06-01

    Full Text Available Abstract Background This study examined the attitudes and actions of 3415 physician-recruited adults aged ≥ 16 years with asthma in eleven countries who were prescribed regular maintenance therapy with inhaled corticosteroids or inhaled corticosteroids plus long-acting β2-agonists. Methods Structured interviews were conducted to assess medication use, asthma control, and patients' ability to recognise and self-manage worsening asthma. Results Despite being prescribed regular maintenance therapy, 74% of patients used short-acting β2-agonists daily and 51% were classified by the Asthma Control Questionnaire as having uncontrolled asthma. Even patients with well-controlled asthma reported an average of 6 worsenings/year. The mean period from the onset to the peak symptoms of a worsening was 5.1 days. Although most patients recognised the early signs of worsenings, the most common response was to increase short-acting β2-agonist use; inhaled corticosteroids were increased to a lesser extent at the peak of a worsening. Conclusion Previous studies of this nature have also reported considerable patient morbidity, but in those studies approximately three-quarters of patients were not receiving regular maintenance therapy and not all had a physician-confirmed diagnosis of asthma. This study shows that patients with asthma receiving regular maintenance therapy still have high levels of inadequately controlled asthma. The study also shows that patients recognise deteriorating asthma control and adjust their medication during episodes of worsening. However, they often adjust treatment in an inappropriate manner, which represents a window of missed opportunity.

  6. Organizational commitment and intrinsic motivation of regular and contractual primary health care providers.

    Science.gov (United States)

    Kumar, Pawan; Mehra, Anu; Inder, Deep; Sharma, Nandini

    2016-01-01

    Motivated and committed employees deliver better health care, which results in better outcomes and higher patient satisfaction. The aim was to assess the organizational commitment and intrinsic motivation of primary health care providers (HCPs) in New Delhi, India. The study was conducted in 2013 on a sample of 333 HCPs who were selected using a multistage random sampling technique. The sample includes medical officers, auxiliary nurses and midwives, and pharmacists and laboratory technicians/assistants among regular and contractual staff. Data were collected using a pretested structured questionnaire for organizational commitment (OC), job satisfiers, and intrinsic job motivation. Analysis was done using SPSS version 18, and appropriate statistical tests were applied. The mean OC score for the regular staff is 1.6 ± 0.39 and for the contractual staff 1.3 ± 0.45, a statistically significant difference (t = 5.57; P = 0.00). In both regular and contractual staff, none showed high emotional attachment with the organization or felt part of the family in the organization. Contractual staff do not feel proud to work in the present organization for the rest of their career. Intrinsic motivation is high in both regular and contractual groups, but the intergroup difference is significant (t = 2.38; P < …). Organizational commitment and intrinsic motivation of contractual staff are lower than those of the permanent staff. Appropriate changes are required in the predictors of organizational commitment and the factors responsible for satisfaction in the organization to keep the contractual human resource motivated and committed to the organization.

  7. Fractional Regularization Term for Variational Image Registration

    Directory of Open Access Journals (Sweden)

    Rafael Verdú-Monedero

    2009-01-01

    Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional-order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations that minimize it. The new regularization term leads to a simple formulation and design, and is applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows a real, gradual transition from diffusion registration to curvature registration, which is best suited to some applications and is not possible in the spatial domain. Results with actual 3D images show the validity of this approach.
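    The frequency-domain trick underlying such a regularizer is that a derivative of order α becomes multiplication of the spectrum by (iω)^α; a minimal 1D sketch (the function name and sampling are assumptions, and a periodic signal is assumed so the FFT applies cleanly):

```python
import numpy as np

def fractional_derivative(f, alpha, dx):
    """Order-alpha derivative of a periodic sampled signal via the FFT.

    Differentiation of order alpha acts in the frequency domain as
    multiplication by (i*omega)**alpha, which is how a fractional
    regularizer can be evaluated without spatial-domain stencils.
    """
    n = len(f)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # angular frequencies
    F = np.fft.fft(f)
    return np.fft.ifft(F * (1j * omega) ** alpha).real
```

    For α = 1 this reproduces the classical derivative, and applying the half-derivative (α = 0.5) twice recovers the first derivative, illustrating the gradual transition between derivative orders that the paper exploits.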

  8. Reducing errors in the GRACE gravity solutions using regularization

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. 
    A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
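    GRACE's Lanczos-bidiagonalization implementation is beyond a short sketch, but the L-curve idea itself is compact: sweep the Tikhonov parameter, plot log residual norm against log solution norm, and take the point of maximum curvature (the corner). A small-scale sketch, with all names and the finite-difference curvature estimate as assumptions:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution of A x = b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def lcurve_corner(A, b, lams):
    """Pick lambda at the maximum-curvature point of the L-curve
    (log residual norm vs. log solution norm), via finite differences."""
    rho, eta = [], []
    for lam in lams:
        x = tikhonov(A, b, lam)
        rho.append(np.log(np.linalg.norm(A @ x - b)))
        eta.append(np.log(np.linalg.norm(x)))
    rho, eta = np.array(rho), np.array(eta)
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r ** 2 + d1e ** 2) ** 1.5
    return lams[int(np.argmax(np.abs(kappa)))]
```

    On a badly conditioned system (a Hilbert matrix below), almost any parameter on the sweep beats the unregularized solve, which is the point of the procedure.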

  9. A novel stepped-care approach to weight loss: The role of self-monitoring and health literacy in treatment outcomes.

    Science.gov (United States)

    Carels, Robert A; Selensky, Jennifer C; Rossi, James; Solar, Chelsey; Hlavka, Reid

    2017-08-01

    The aims of the current study were twofold: 1) examine the effectiveness of an innovative three-step, stepped-care behavioral weight loss treatment, and 2) examine factors that contribute to poor weight loss outcomes and the need for more intensive treatment. The total sample for the study consisted of 53 individuals (87% female) with mean BMI = 35.6 (SD = 6.4). A three-step, stepped-care treatment approach was implemented over six months. Step 1 included the Diabetes Prevention Program manual adapted for self-administration, augmented with monitoring technology shown to facilitate weight loss and participant accountability and engagement. Participants who were unsuccessful at achieving established weight loss goals received stepped-up treatments in 2-month increments beginning at month 2. The stepped progression included the addition of meal replacement at Step 2 and individual counseling concurrent with meal replacement at Step 3. Un-stepped and once-stepped participants lost a clinically significant amount of weight (i.e., >5%), while twice-stepped participants lost an insignificant amount of weight. Twice-stepped participants were significantly lower in health literacy and self-monitoring frequency. In this investigation, approximately 60% of the participants were able to lose a clinically significant amount of weight utilizing a minimally intensive intervention with little additional support. Regular self-monitoring and high health literacy proved to be significant correlates of success. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. A randomized trial comparing structured and lifestyle goals in an internet-mediated walking program for people with type 2 diabetes

    Directory of Open Access Journals (Sweden)

    Fortlage Laurie A

    2007-11-01

    Full Text Available Abstract Background The majority of individuals with type 2 diabetes do not exercise regularly. Pedometer-based walking interventions can help; however, pedometer-based interventions targeting only total daily accumulated steps might not yield the same health benefits as physical activity programs specifying a minimum duration and intensity of physical activity bouts. Methods This pilot randomized trial compared two goal-setting strategies: 1) lifestyle goals targeting total daily accumulated step counts and 2) structured goals targeting bout steps, defined as walking that lasts for 10 minutes or longer at a pace of at least 60 steps per minute. We sought to determine which goal-setting strategy was more effective at increasing bout steps. Participants were sedentary adults with type 2 diabetes. All participants: wore enhanced pedometers with embedded USB ports; uploaded detailed, time-stamped step-count data to a website called Stepping Up to Health; and received automated step-count feedback, automatically calculated goals, and tailored motivational messages throughout the six-week intervention. Only the automated goal calculations and step-count feedback differed between the two groups. The primary outcome of interest was the increase in steps taken during the previously defined bouts of walking between baseline and the end of the intervention. Results Thirty-five participants were randomized and 30 (86%) completed the pilot study. Both groups significantly increased bout steps, but there was no statistically significant difference between groups. Among study completers, bout steps increased by 1921 ± 2729 steps a day. Those who received lifestyle goals were more satisfied with the intervention (p = 0.006) and wore the pedometer more often (p < …). Conclusion In this six-week intervention, Lifestyle Goals group participants achieved increases in bout steps comparable to the
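    The structured-goal definition above (runs of at least 10 consecutive minutes at 60+ steps per minute) translates directly into a bout detector over per-minute step counts; a minimal sketch (the function name and thresholds are illustrative, not the study's software):

```python
def bout_steps(per_minute_counts, min_len=10, min_rate=60):
    """Sum the steps occurring inside 'bouts': runs of >= min_len
    consecutive minutes with >= min_rate steps each, matching the
    structured-goal definition of a qualifying walking bout."""
    total, run = 0, []
    for c in list(per_minute_counts) + [0]:   # trailing 0 flushes last run
        if c >= min_rate:
            run.append(c)
        else:
            if len(run) >= min_len:
                total += sum(run)
            run = []
    return total
```

    Minutes below the rate threshold break a run, and runs shorter than 10 minutes contribute nothing, which is exactly why total daily step counts and bout steps can diverge.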

  11. Likelihood ratio decisions in memory: three implied regularities.

    Science.gov (United States)

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
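    One of the cited regularities, the z-ROC length/slope behavior, follows directly from the Gaussian signal detection setup: with noise N(0, 1) and signal N(μ, σ), the z-transformed ROC is a line of slope 1/σ. A sketch verifying this numerically with the standard library's normal distribution (criteria values are arbitrary assumptions):

```python
from statistics import NormalDist

def zroc_slope(mu_s, sigma_s, criteria):
    """Least-squares slope of the z-ROC implied by signal N(mu_s, sigma_s)
    vs. noise N(0, 1); theory predicts exactly 1/sigma_s."""
    std = NormalDist()
    sig = NormalDist(mu_s, sigma_s)
    # z-transformed hit and false-alarm rates at each criterion c
    z_hit = [std.inv_cdf(1 - sig.cdf(c)) for c in criteria]
    z_fa = [std.inv_cdf(1 - std.cdf(c)) for c in criteria]
    mf = sum(z_fa) / len(z_fa)
    mh = sum(z_hit) / len(z_hit)
    num = sum((f - mf) * (h - mh) for f, h in zip(z_fa, z_hit))
    den = sum((f - mf) ** 2 for f in z_fa)
    return num / den
```

    An unequal-variance model (σ > 1) thus yields a z-ROC slope below 1, one of the signatures the paper derives from the likelihood-ratio decision axis.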

  12. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    Science.gov (United States)

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

    In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.

  13. Topological chaos of the spatial prisoner's dilemma game on regular networks.

    Science.gov (United States)

    Jin, Weifeng; Chen, Fangyue

    2016-02-21

    The spatial version of the evolutionary prisoner's dilemma on an infinitely large regular lattice, with purely deterministic strategies and no memories among players, is investigated in this paper. Based on statistical inferences, it is confirmed that the frequency of cooperation characterizing its macroscopic behavior is very sensitive to the initial conditions, which is the most practically significant property of chaos. Its intrinsic complexity is then justified on firm ground from the theory of symbolic dynamics; that is, this game is topologically mixing and possesses positive topological entropy on its subsystems. It is demonstrated therefore that its frequency of cooperation cannot be obtained by simply averaging over several steps after the game reaches the equilibrium state. Furthermore, the chaotically changing spatial patterns observed empirically can be defined and justified in view of symbolic dynamics. It is worth mentioning that the procedure proposed in this work is also applicable to other deterministic spatial evolutionary games. Copyright © 2015 Elsevier Ltd. All rights reserved.
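    The abstract does not spell out the update rule; a common concrete instance of such a deterministic spatial game is the Nowak–May prisoner's dilemma, sketched below with an assumed temptation payoff b = 1.9 and imitate-the-best-neighbor dynamics on a torus (all parameters are illustrative):

```python
import numpy as np

def pd_step(grid, b=1.9):
    """One synchronous update of a Nowak-May-style spatial PD on a torus.

    grid entries: 1 = cooperate, 0 = defect. Each site earns 1 per
    cooperating Moore neighbour if it cooperates, b per cooperating
    neighbour if it defects, then copies the strategy of the
    best-scoring site in its neighbourhood (itself included).
    Purely deterministic, as in the game studied here.
    """
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
              if (di, dj) != (0, 0)]
    coop_nbrs = sum(np.roll(np.roll(grid, di, 0), dj, 1)
                    for di, dj in shifts)
    payoff = np.where(grid == 1, coop_nbrs, b * coop_nbrs)
    best_pay = payoff.copy()
    best_strat = grid.copy()
    for di, dj in shifts:
        p = np.roll(np.roll(payoff, di, 0), dj, 1)
        s = np.roll(np.roll(grid, di, 0), dj, 1)
        better = p > best_pay
        best_pay = np.where(better, p, best_pay)
        best_strat = np.where(better, s, best_strat)
    return best_strat
```

    Starting from all cooperators with a single defector, one step turns the surrounding 3 × 3 block to defection, illustrating how local payoff comparisons drive the pattern dynamics whose sensitivity the paper analyzes.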

  14. Online Manifold Regularization by Dual Ascending Procedure

    OpenAIRE

    Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui

    2013-01-01

    We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transfer manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purpose, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approache...

  15. Degree-regular triangulations of torus and Klein bottle

    Indian Academy of Sciences (India)

    Proceedings – Mathematical Sciences, Volume 115, Issue 3. A triangulation of a connected closed surface is called degree-regular if each of its vertices has the same degree. In [5], Datta and Nilakantan have classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.

  16. Class of regular bouncing cosmologies

    Science.gov (United States)

    Vasilić, Milovan

    2017-06-01

    In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that the graviton mass can be made arbitrarily small.

  17. Effects of attitude, social influence, and self-efficacy model factors on regular mammography performance in life-transition aged women in Korea.

    Science.gov (United States)

    Lee, Chang Hyun; Kim, Young Im

    2015-01-01

    This study analyzed predictors of regular mammography performance in Korea. In addition, we determined factors affecting regular mammography performance in life-transition aged women by applying an attitude, social influence, and self-efficacy (ASE) model. Data were collected from women aged over 40 years residing in province J in Korea. The 178 enrolled subjects provided informed voluntary consent prior to completing a structured questionnaire. The overall regular mammography performance rate of the subjects was 41.6%. Older age, city residency, high income and a part-time job were associated with higher regular mammography performance. Women who had undergone more breast self-examinations (BSE) or more doctors' physical examinations (PE) had higher regular mammography performance rates. All three ASE model factors were significantly associated with regular mammography performance. Women with a high level of positive ASE values had a significantly higher regular mammography performance rate. Within the ASE model, self-efficacy and social influence were particularly important. Logistic regression analysis explained 34.7% of the variance in regular mammography performance; PE experience (β=4.645, p=.003), part-time job (β=4.010, p=.050), self-efficacy (β=1.820, p=.026) and social influence (β=1.509, p=.038) were significant factors. Promotional strategies that improve self-efficacy, reinforce social influence and reduce geographical, time and financial barriers are needed to increase the regular mammography performance rate in life-transition aged women.

  18. Two-Step Amyloid Aggregation: Sequential Lag Phase Intermediates

    Science.gov (United States)

    Castello, Fabio; Paredes, Jose M.; Ruedas-Rama, Maria J.; Martin, Miguel; Roldan, Mar; Casares, Salvador; Orte, Angel

    2017-01-01

    The self-assembly of proteins into fibrillar structures called amyloid fibrils underlies the onset and symptoms of neurodegenerative diseases, such as Alzheimer’s and Parkinson’s. However, the molecular basis and mechanism of amyloid aggregation are not completely understood. For many amyloidogenic proteins, certain oligomeric intermediates that form in the early aggregation phase appear to be the principal cause of cellular toxicity. Recent computational studies have suggested the importance of nonspecific interactions for the initiation of the oligomerization process prior to the structural conversion steps and template seeding, particularly at low protein concentrations. Here, using advanced single-molecule fluorescence spectroscopy and imaging of a model SH3 domain, we obtained direct evidence that nonspecific aggregates are required in a two-step nucleation mechanism of amyloid aggregation. We identified three different oligomeric types according to their sizes and compactness and performed a full mechanistic study that revealed a mandatory rate-limiting conformational conversion step. We also identified the most cytotoxic species, which may be possible targets for inhibiting and preventing amyloid aggregation.

  19. The relationship between lifestyle regularity and subjective sleep quality

    Science.gov (United States)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

    In previous work we have developed a diary instrument, the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity, and a questionnaire instrument, the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant negative correlation (rho = -0.4): subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, in which the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.

  20. Symplectic integrators with adaptive time steps

    Science.gov (United States)

    Richardson, A. S.; Finn, J. M.

    2012-01-01

    In recent decades, there have been many attempts to construct symplectic integrators with variable time steps, with rather disappointing results. In this paper, we identify the causes for this lack of performance, and find that they fall into two categories. In the first, the time step is considered a function of time alone, Δ = Δ(t). In this case, backward error analysis shows that while the algorithms remain symplectic, parametric instabilities may arise because of resonance between oscillations of Δ(t) and the orbital motion. In the second category the time step is a function of phase space variables Δ = Δ(q, p). In this case, the system of equations to be solved is analyzed by introducing a new time variable τ with dt = Δ(q, p) dτ. The transformed equations are no longer in Hamiltonian form, and thus do not benefit from integration methods which would be symplectic for Hamiltonian systems. We analyze two methods for integrating the transformed equations which do, however, preserve the structure of the original equations. The first is an extended phase space method, which has been successfully used in previous studies of adaptive time step symplectic integrators. The second, novel, method is based on a non-canonical mixed-variable generating function. Numerical trials for both of these methods show good results, without parametric instabilities or spurious growth or damping. It is then shown how to adapt the time step to an error estimate found by backward error analysis, in order to optimize the time-stepping scheme. Numerical results are obtained using this formulation and compared with other time-stepping schemes for the extended phase space symplectic method.
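The property at stake in this abstract can be illustrated with a minimal sketch (a hypothetical harmonic-oscillator example, not code from the paper): a fixed-step leapfrog integrator is symplectic, so its energy error stays bounded over arbitrarily long runs with no secular drift — exactly the behavior that naive variable-time-step schemes lose.

```python
def leapfrog(q, p, dt, n_steps, dVdq):
    """Fixed-step leapfrog (Stormer-Verlet), symplectic for H = p^2/2 + V(q)."""
    for _ in range(n_steps):
        p -= 0.5 * dt * dVdq(q)  # half kick
        q += dt * p              # full drift
        p -= 0.5 * dt * dVdq(q)  # half kick
    return q, p

# Harmonic oscillator: V(q) = q^2 / 2, exact energy E = (p^2 + q^2) / 2.
q0, p0 = 1.0, 0.0
E0 = 0.5 * (p0**2 + q0**2)

# Integrate over ~1600 oscillation periods with a fixed step.
q, p = leapfrog(q0, p0, dt=0.1, n_steps=100_000, dVdq=lambda x: x)
drift = abs(0.5 * (p**2 + q**2) - E0)
print(drift)  # remains O(dt^2): bounded oscillation, no secular growth
```

Replacing `dt` with a time-dependent `dt(t)` inside the loop would generally destroy this bounded-error behavior, which is the parametric-resonance pitfall the paper analyzes via backward error analysis.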