Sparse structure regularized ranking
Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin
2014-04-17
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
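As a rough illustration of the idea in the abstract above (not the authors' exact algorithm, which optimizes a single unified objective jointly), the two ingredients can be sketched separately: a coordinate-descent Lasso that codes each object as a sparse combination of the others, and a score propagation that uses the absolute coefficients as similarities. All data, parameter values and function names below are invented for the sketch.

```python
import math

def lasso_cd(cols, x, lam=0.01, iters=200):
    """Coordinate-descent Lasso: min_w ||x - sum_j w_j*cols[j]||^2 + lam*||w||_1."""
    n = len(cols)
    w = [0.0] * n
    for _ in range(iters):
        for j in range(n):
            # residual with column j left out
            r = [xi - sum(w[k] * cols[k][i] for k in range(n) if k != j)
                 for i, xi in enumerate(x)]
            rho = sum(cols[j][i] * ri for i, ri in enumerate(r))
            z = sum(c * c for c in cols[j])
            if z == 0.0:
                continue
            # soft-thresholding update
            w[j] = math.copysign(max(abs(rho) - lam / 2, 0.0), rho) / z
    return w

def sparse_regularized_ranking(X, y, mu=0.1, iters=100):
    """Code each object on the (normalized) other objects, take |coefficients|
    as similarities, then propagate ranking scores regularized by them."""
    n = len(X)
    unit = [[c / math.sqrt(sum(v * v for v in obj)) for c in obj] for obj in X]
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        w = lasso_cd([unit[j] for j in idx], list(X[i]))
        for k, j in enumerate(idx):
            W[i][j] = abs(w[k])
    # symmetrize the coefficients into a similarity measure
    S = [[(W[i][j] + W[j][i]) / 2 for j in range(n)] for i in range(n)]
    f = list(y)
    for _ in range(iters):
        f = [(mu * y[i] + sum(S[i][j] * f[j] for j in range(n)))
             / (mu + sum(S[i])) for i in range(n)]
    return f
```

On a toy query (object 0 relevant), objects in the same cluster as the query end up ranked above the rest.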
Recursive regularization step for high-order lattice Boltzmann methods
Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre
2017-09-01
A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step makes it possible to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with a Reynolds number ranging from 10^4 to 10^6, and where a thorough analysis of the case at Re = 3×10^4 is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase of the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.
Regularities of development of entrepreneurial structures in regions
Directory of Open Access Journals (Sweden)
Julia Semenovna Pinkovetskaya
2012-12-01
The article considers regularities and tendencies for three types of entrepreneurial structures: small enterprises, medium enterprises and individual entrepreneurs. The aim of the research was to confirm that indicators of aggregates of entrepreneurial structures can be described with normal-law distribution functions. The author's methodological approach is presented, together with the density distribution functions constructed for the main indicators of various objects: the Russian Federation, its regions, and aggregates of entrepreneurial structures specialized in certain forms of economic activity. Logical and statistical analysis shows that all the developed functions are of high quality and approximate the original data well. In general, the proposed methodological approach is versatile and can be used in further studies of aggregates of entrepreneurial structures. The results can be applied to a wide range of problems: justifying the need for personnel and financial resources at the federal, regional and municipal levels, as well as forming plans and forecasts for the development of entrepreneurship and the improvement of this sector of the economy.
Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis
Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.
2007-01-01
Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…
Diffusion coefficients for periodically induced multi-step persistent walks on regular lattices
International Nuclear Information System (INIS)
Gilbert, Thomas; Sanders, David P
2012-01-01
We present a generalization of our formalism for the computation of diffusion coefficients of multi-step persistent random walks on regular lattices to walks which include zero-displacement states. This situation is especially relevant to systems where tracer particles move across potential barriers as a result of the action of a periodic forcing whose period sets the timescale between transitions. (paper)
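The paper above derives exact formulas for such diffusion coefficients; as a hedged numerical companion, a Monte Carlo estimate of D for a persistent lattice walk with a zero-displacement (resting) state can be sketched as follows. The parameterization (`p_keep`, `p_rest`) is an assumption made for the sketch, not the paper's formalism.

```python
import random

DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def estimate_D(p_keep=0.0, p_rest=0.0, steps=500, walkers=1000, seed=1):
    """Monte Carlo estimate of D = <x^2 + y^2> / (4 t) for a walk on the
    square lattice: at each tick the walker rests with prob p_rest (a
    zero-displacement state); otherwise it keeps its direction with prob
    p_keep and else draws a fresh direction uniformly."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(walkers):
        x = y = 0
        d = rng.choice(DIRS)
        for _ in range(steps):
            if rng.random() < p_rest:
                continue               # zero-displacement state: no move this tick
            if rng.random() >= p_keep:
                d = rng.choice(DIRS)   # memory lost: any of the 4 directions
            x += d[0]; y += d[1]
        msd += x * x + y * y
    return msd / walkers / (4 * steps)
```

With no persistence and no resting this reproduces the simple-random-walk value D = 1/4 (lattice spacing and tick both 1); resting lowers D, persistence raises it.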
The structure of stepped surfaces
International Nuclear Information System (INIS)
Algra, A.J.
1981-01-01
The state-of-the-art of Low Energy Ion Scattering (LEIS) as far as multiple scattering effects are concerned, is discussed. The ion fractions of lithium, sodium and potassium scattered from a copper (100) surface have been measured as a function of several experimental parameters. The ratio of the intensities of the single and double scattering peaks observed in ion scattering spectroscopy has been determined and ion scattering spectroscopy applied in the multiple scattering mode is used to determine the structure of a stepped Cu(410) surface. The average relaxation of the (100) terraces of this surface appears to be very small. The adsorption of oxygen on this surface has been studied with LEIS and it is indicated that oxygen absorbs dissociatively. (C.F.)
Near-Regular Structure Discovery Using Linear Programming
Huang, Qixing
2014-06-02
Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.
Discrete maximal regularity of time-stepping schemes for fractional evolution equations.
Jin, Bangti; Li, Buyang; Zhou, Zhi
2018-01-01
In this work, we establish the maximal l^p-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order alpha, 0 < alpha < 2, alpha ≠ 1, in time. These schemes include convolution quadratures generated by the backward Euler method and the second-order backward difference formula, the L1 scheme, the explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include the operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
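As a minimal sketch of one scheme analyzed above, the convolution quadrature generated by backward Euler can be applied to the scalar fractional relaxation problem D_t^alpha u = -u (an assumption made here for illustration; the paper treats abstract evolution equations). The weights are the coefficients of (1 - z)^alpha, computed by the standard recurrence.

```python
def cq_backward_euler(alpha, tau, n_steps, u0=1.0):
    """Convolution quadrature generated by backward Euler for the fractional
    relaxation problem  D_t^alpha u = -u,  u(0) = u0,  0 < alpha <= 1.
    The quadrature weights are the coefficients of (1 - z)^alpha."""
    w = [1.0]
    for j in range(1, n_steps + 1):
        w.append(w[-1] * (j - 1 - alpha) / j)   # standard weight recurrence
    u = [u0]
    ta = tau ** alpha
    for n in range(1, n_steps + 1):
        hist = sum(w[j] * u[n - j] for j in range(1, n + 1))
        cumw = sum(w[: n + 1])
        # (1/tau^alpha) * sum_j w_j (u_{n-j} - u0) = -u_n, solved for u_n
        u.append((u0 * cumw - hist) / (1.0 + ta))
    return u
```

For alpha = 1 the weights reduce to (1, -1, 0, ...) and the scheme collapses to plain backward Euler for u' = -u.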
Structural characterization of the packings of granular regular polygons.
Wang, Chuncheng; Dong, Kejun; Yu, Aibing
2015-12-01
By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.
Process of motion by unit steps over a surface provided with elements regularly arranged
International Nuclear Information System (INIS)
Cooper, D.E.; Hendee, L.C. III; Hill, W.G. Jr.; Leshem, Adam; Marugg, M.L.
1977-01-01
This invention concerns a process for moving, by unit steps, an apparatus travelling over a surface provided with an array of orifices aligned and evenly spaced in several lines and several parallel rows regularly spaced, the lines and rows being parallel to the x and y axes of Cartesian co-ordinates, each orifice having a separate address in the Cartesian co-ordinate system. The surface travelling apparatus has two previously connected arms arranged in directions transversal to each other, thus forming an angle corresponding to the intersection of the x and y axes. In the inspection and/or repair of nuclear or similar steam generator tubes, it is desirable that such an apparatus should be able to move in front of a surface comprising an array of orifices by the selective alternate introduction and retraction of two sets of anchoring claws of the two respective arms, in relation to the orifices of the array, it being possible to shift the arms in a movement of translation, transversally to each other, as a set of claws is withdrawn from the orifices. The invention concerns a process and apparatus as indicated above that reduce to a minimum the path length of the apparatus between the orifice it is effectively opposite and a given orifice [fr]
Chidori, Kazuhiro; Yamamoto, Yuji
2017-01-01
The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebra (L3) and the seventh cervical vertebra (C7). The normalized average magnitude of acceleration, the coefficient of determination (R^2) of the return map, and the step time variability were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had a higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.
Optimal analysis of structures by concepts of symmetry and regularity
Kaveh, Ali
2013-01-01
Optimal analysis is defined as an analysis that creates and uses sparse, well-structured and well-conditioned matrices. The focus is on efficient methods for eigensolution of matrices involved in static, dynamic and stability analyses of symmetric and regular structures, or those general structures containing such components. Powerful tools are also developed for configuration processing, which is an important issue in the analysis and design of space structures and finite element models. Different mathematical concepts are combined to make the optimal analysis of structures feasible. Canonical forms from matrix algebra, product graphs from graph theory and symmetry groups from group theory are some of the concepts involved in the variety of efficient methods and algorithms presented. The algorithms elucidated in this book enable analysts to handle large-scale structural systems by lowering their computational cost, thus fulfilling the requirement for faster analysis and design of future complex systems. The ...
STRUCTURE OPTIMIZATION OF RESERVATION BY PRECISE QUADRATIC REGULARIZATION
Directory of Open Access Journals (Sweden)
KOSOLAP A. I.
2015-11-01
The paper addresses optimization of the structure of redundancy elements in systems. Such problems arise in the design of complex systems, where elements are duplicated to improve reliability of operation; this increases system cost while improving reliability. In optimizing these systems, the probability of failure-free operation of the entire system is maximized subject to a limit on its cost, or the cost is minimized for a given probability of failure-free operation. The mathematical model of the redundancy problem is discrete and multiextremal. Methods currently used to search for the global extremum include Lagrange multipliers, coordinate descent, dynamic programming and random search. These methods guarantee only local solutions and are used in redundancy tasks of small dimension. In this work, a new method of precise quadratic regularization is used for solving the redundancy problem. This method transforms the original discrete problem into maximization of a vector norm on a convex set. To solve the transformed problem, primal-dual interior point methods are used; these are currently the best methods for local optimization of nonlinear problems. The transformed task includes a new auxiliary variable, which is determined by dichotomy. Numerous comparative numerical experiments were performed on problems with up to one hundred redundant subsystems. These experiments confirm the effectiveness of the method of precise quadratic regularization for solving redundancy problems.
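For orientation, the underlying combinatorial problem can be stated and solved by brute force on a small instance: choose how many parallel copies of each subsystem to install so that the series-system reliability is maximized under a cost cap. This sketch is exhaustive search, not the paper's quadratic regularization method, and all numbers are invented.

```python
from itertools import product

def best_redundancy(p, c, budget, kmax=6):
    """Exhaustive search (fine for small instances): choose k_i parallel
    copies of each subsystem to maximize the series-system probability of
    failure-free operation under a total cost cap.
    p[i]: reliability of one copy of subsystem i; c[i]: its cost."""
    best_rel, best_ks = 0.0, None
    for ks in product(range(1, kmax + 1), repeat=len(p)):
        if sum(ci * ki for ci, ki in zip(c, ks)) > budget:
            continue
        rel = 1.0
        for pi, ki in zip(p, ks):
            rel *= 1.0 - (1.0 - pi) ** ki   # subsystem survives if any copy does
        if rel > best_rel:
            best_rel, best_ks = rel, ks
    return best_rel, best_ks
```

The search space grows exponentially with the number of subsystems, which is exactly why the paper replaces enumeration with a continuous reformulation.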
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin
2017-08-01
Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter into and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining it with moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of DFS-SAV is quantified and introduced to improve the penalty function (||x||_2^2) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed for assessing the accuracy and feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
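The baseline that the paper improves on is classical Tikhonov regularization with the penalty ||x||_2^2, whose solution solves the damped normal equations (A^T A + lambda*I) x = A^T b. A self-contained sketch of that baseline (not the paper's moving-average variant) in pure Python:

```python
def tikhonov(A, b, lam):
    """Classical Tikhonov solution x = (A^T A + lam*I)^{-1} A^T b, via the
    normal equations and Gaussian elimination with partial pivoting."""
    m, n = len(A), len(A[0])
    # build A^T A + lam*I and A^T b
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for cc in range(col, n):
                M[r][cc] -= f * M[col][cc]
            rhs[r] -= f * rhs[col]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

Increasing lambda shrinks the solution norm, which stabilizes ill-posed problems but biases the estimate; the paper's contribution is a penalty better matched to forces with a stable average value.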
Strictly-regular number system and data structures
DEFF Research Database (Denmark)
Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki
2010-01-01
We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...
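For flavor, a related (but simpler) regular number system with worst-case O(1) increment is skew binary, where digit weights are 2^k - 1 and at most the lowest nonzero digit may equal 2. This is not the strictly-regular system of the paper above, only an illustration of how a digit invariant buys constant-time digit-increment.

```python
def skew_increment(weights):
    """O(1) worst-case increment in the skew binary number system.
    `weights` lists the digit weights (each of the form 2^k - 1) in
    nondecreasing order; the invariant allows at most the two smallest
    weights to be equal (i.e. only the lowest nonzero digit may be 2)."""
    if len(weights) >= 2 and weights[0] == weights[1]:
        # combine the two smallest digits: w + w + 1 = 2w + 1, again 2^k - 1
        return [2 * weights[0] + 1] + weights[2:]
    return [1] + weights

def skew_value(weights):
    return sum(weights)
```

The same invariant is what makes skew binary random-access lists support cons in O(1): each weight corresponds to a complete tree, and the increment merges two equal trees.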
Neuregulin: First Steps Towards a Structure
Ferree, D. S.; Malone, C. C.; Karr, L. J.
2003-01-01
Neuregulins are growth factor domain proteins with diverse bioactivities, such as cell proliferation, receptor binding, and differentiation. Neuregulin-1 binds to two members of the ErbB class I tyrosine kinase receptors, ErbB3 and ErbB4. A number of human cancers overexpress the ErbB receptors, and neuregulin can modulate the growth of certain cancer types. Neuregulin-1 has been shown to promote the migration of invasive gliomas of the central nervous system. Neuregulin has also been implicated in schizophrenia, multiple sclerosis and cardiac abnormalities. The full function of neuregulin-1 is not known. In this study we are inserting a cDNA clone obtained from the American Type Culture Collection into E. coli expression vectors to express neuregulin-1 protein. Metal chelate affinity chromatography is used for recombinant protein purification. Crystallization screening will proceed for X-ray diffraction studies following expression, optimization, and protein purification. In spite of medical and scholarly interest in the neuregulins, there are currently no high-resolution structures available for these proteins. Here we present the first steps toward attaining a high-resolution structure of neuregulin-1, which will enable us to better understand its function.
Energy Technology Data Exchange (ETDEWEB)
Olson, Gordon L. [Computer and Computational Sciences Division (CCS-2), Los Alamos National Laboratory, 5 Foxglove Circle, Madison, WI 53717 (United States)], E-mail: olson99@tds.net
2008-11-15
In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution.
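The random-versus-regular contrast in the abstract above already shows up in a one-dimensional analogue, which is cheap to sketch: gaps between randomly placed points are exponential-like (coefficient of variation near 1), while gaps in a regular lattice are constant (coefficient of variation 0). This 1D sketch is an assumption-laden simplification of the 2D/3D disk and sphere geometries studied in the paper.

```python
import random

def chord_cv(points):
    """Coefficient of variation of the gaps between consecutive sorted points
    (a 1D analogue of background chord lengths between obstacles)."""
    pts = sorted(points)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return var ** 0.5 / mean

rng = random.Random(42)
random_pts = [rng.random() for _ in range(2000)]   # Poisson-like placement
regular_pts = [i / 2000 for i in range(2000)]      # regular lattice placement
```

An exponential distribution has CV exactly 1, so the random case lands near 1; the power-law tails reported for regular 2D/3D structures arise from grazing chords that have no 1D counterpart.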
Intermediate surface structure between step bunching and step flow in SrRuO3 thin film growth
Bertino, Giulia; Gura, Anna; Dawber, Matthew
We performed a systematic study of SrRuO3 thin films grown on TiO2 terminated SrTiO3 substrates using off-axis magnetron sputtering. We investigated the step bunching formation and the evolution of the SRO film morphology by varying the step size of the substrate, the growth temperature and the film thickness. The thin films were characterized using Atomic Force Microscopy and X-Ray Diffraction. We identified single and multiple step bunching and step flow growth regimes as a function of the growth parameters. Also, we clearly observe a stronger influence of the step size of the substrate on the evolution of the SRO film surface with respect to the other growth parameters. Remarkably, we observe the formation of a smooth, regular and uniform "fish skin" structure at the transition between one regime and another. We believe that the fish skin structure results from the merging of 2D flat islands predicted by previous models. The direct observation of this transition structure allows us to better understand how and when step bunching develops in the growth of SrRuO3 thin films.
Analysis of regular structures third degree based on chordal rings
DEFF Research Database (Denmark)
Bujnowski, Slawomir; Dubalski, Bozydar; Pedersen, Jens Myrup
2009-01-01
In the first part of the paper, formulas for the basic parameters, diameter and average path length, were derived using optimal/ideal graphs, and used for indicating transmission properties of the structures. These analytical results were confirmed by comparison to a large number of computations on real graphs. In...
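The two parameters named above, diameter and average path length, can be computed exactly for a small degree-3 chordal ring by breadth-first search. The chord rule used here (each even ring node i also linked to (i + w) mod n, w odd) is one common convention and may differ in detail from the paper's definition.

```python
from collections import deque

def chordal_ring(n, w):
    """Degree-3 chordal ring: an n-cycle (n even) plus one chord from each
    even node i to node (i + w) mod n, with w odd."""
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(0, n, 2):
        adj[i].add((i + w) % n)
        adj[(i + w) % n].add(i)
    return adj

def diameter_and_avg_path(adj):
    """Exact diameter and average shortest-path length via BFS from every node."""
    n, ecc, total = len(adj), 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc = max(ecc, max(dist.values()))
        total += sum(dist.values())
    return ecc, total / (n * (n - 1))
```

Even on 16 nodes the chords cut the diameter well below the plain ring's n/2, which is the effect the paper's closed-form estimates capture.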
Italian Sign Language (LIS) Poetry: Iconic Properties and Structural Regularities.
Russo, Tommaso; Giuranna, Rosaria; Pizzuto, Elena
2001-01-01
Explores and describes, from a crosslinguistic perspective, some of the major structural regularities that characterize poetry in Italian Sign Language and distinguish poetic from nonpoetic texts. Reviews findings of previous studies of signed language poetry, and points out issues that need to be clarified to provide a more accurate description…
Structure of period-2 step-1 accelerator island in area preserving maps
International Nuclear Information System (INIS)
Hirose, K.; Ichikawa, Y.H.; Saito, S.
1996-03-01
Since the multi-periodic accelerator modes manifest their contribution even in the region of small stochastic parameters, analysis of such regular motion appears to be critical for exploring the stochastic properties of Hamiltonian systems. Here, the structure of the period-2 step-1 accelerator mode is analyzed for the systems described by the Harper map and by the standard map. The stability criteria have been analyzed in detail and compared with numerical analyses. The period-3 squeezing around the period-2 step-1 islands is identified in the standard map. (author)
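Accelerator modes are easy to demonstrate numerically in the standard map. The sketch below shows the simplest case, a period-1 step-1 mode (the paper studies the period-2 mode): at K = 6.5 there is a fixed point with K sin x0 = 2*pi whose momentum grows by exactly 2*pi per iterate, and the branch is chosen so that |2 + K cos x0| < 2, making the orbit linearly stable. The specific K and initial condition are choices made for this illustration.

```python
import math

def standard_map_orbit(K, x0, p0, n):
    """Iterate the standard map  p' = p + K sin x,  x' = x + p',
    keeping x in [0, 2*pi) but tracking the unwrapped momentum p."""
    x, p = x0, p0
    for _ in range(n):
        p += K * math.sin(x)
        x = (x + p) % (2 * math.pi)
    return x, p

K = 6.5
# Period-1 step-1 accelerator fixed point: K sin x0 = 2*pi, stable branch.
x0 = math.pi - math.asin(2 * math.pi / K)
```

Because the orbit is elliptic, roundoff stays bounded and the momentum gain stays pinned at 2*pi per iteration; a generic chaotic orbit at the same K shows diffusive rather than ballistic momentum growth.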
Fine structures on zero-field steps in low-loss Josephson tunnel junctions
DEFF Research Database (Denmark)
Monaco, Roberto; Barbara, Paola; Mygind, Jesper
1993-01-01
The first zero-field step in the current-voltage characteristic of intermediate-length, high-quality, low-loss Nb/Al-AlOx/Nb Josephson tunnel junctions has been carefully investigated as a function of temperature. When decreasing the temperature, a number of structures develop in the form of regular and slightly hysteretic steps whose voltage position depends on the junction temperature and length. This phenomenon is interesting for the study of nonlinear dynamics and for application of long Josephson tunnel junctions as microwave and millimeter-wavelength oscillators.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Mueller-Lisse, Ullrich; Moeller, Knut
2016-06-01
Electrical impedance tomography (EIT) reconstructs the conductivity distribution of a domain using electrical data on its boundary. This is an ill-posed inverse problem usually solved on a finite element mesh. In this article, a special regularization method incorporating structural information of the targeted domain is proposed and evaluated. Structural information was obtained either from computed tomography images or from preliminary EIT reconstructions by a modified k-means clustering. The proposed regularization method integrates this structural information into the reconstruction as a soft constraint preferring sparsity at the group level. A first evaluation with Monte Carlo simulations indicated that the proposed solver is more robust to noise and the resulting images show fewer artifacts. This finding is supported by real data analysis. The structure-based regularization has the potential to balance structural a priori information with data-driven reconstruction. It is robust to noise, reduces artifacts and produces images that reflect anatomy and are thus easier for physicians to interpret.
Influence of the volume ratio of solid phase on carrying capacity of regular porous structure
Directory of Open Access Journals (Sweden)
Monkova Katarina
2017-01-01
Direct metal laser sintering is a widespread technology today. Its main advantage is the ability to produce parts with very complex geometry, which could otherwise be produced only in a very complicated way by classical conventional methods. A special category of such components are parts with a porous structure, which can give the product an extraordinary combination of properties. The article deals with some aspects that influence the manufacturing of regular porous structures even when the input technological parameters are the same for the various samples. The main goal of the presented research has been to investigate the influence of the volume ratio of the solid phase on the carrying capacity of a regular porous structure. The tests performed indicate that a unit of regular porous structure with a lower volume ratio is able to carry a greater load to failure than a unit with a higher volume ratio.
Regularities of structure formation on different stages of WC-Co hard alloys fabrication
Energy Technology Data Exchange (ETDEWEB)
Chernyavskij, K S
1987-03-01
Some regularities of structural transformations in powder products of hard alloy fabrication have been formulated on the basis of the results of the author's work and that of other domestic and foreign researchers. New data are given confirming the influence of the technological prehistory of a carbide powder on the mechanism of its particle grinding, as well as the influence of the structural-energy state of the WC powder on the course of the WC-Co alloy structure formation processes. Some possibilities for the practical application of the regularities studied are considered.
On Hierarchical Extensions of Large-Scale 4-regular Grid Network Structures
DEFF Research Database (Denmark)
Pedersen, Jens Myrup; Patel, A.; Knudsen, Thomas Phillip
It is studied how the introduction of ordered hierarchies in 4-regular grid network structures decreases distances remarkably, while at the same time allowing for simple topological routing schemes. Both meshes and tori are considered; in both cases non-hierarchical structures have power law depen...
Equilibrium structure of monatomic steps on vicinal Si(001)
Zandvliet, Henricus J.W.; Elswijk, H.B.; van Loenen, E.J.; Dijkkamp, D.
1992-01-01
The equilibrium structure of monatomic steps on vicinal Si(001) is described in terms of anisotropic nearest-neighbor and isotropic second-nearest-neighbor interactions between dimers. By comparing scanning-tunneling-microscopy data and this equilibrium structure, we obtained interaction energies of
Periodic vortex pinning by regular structures in Nb thin films: magnetic vs. structural effects
Montero, Maria Isabel; Jonsson-Akerman, B. Johan; Schuller, Ivan K.
2001-03-01
The defects present in a superconducting material can lead to a great variety of static and dynamic vortex phases. In particular, the interaction of the vortex lattice with regular arrays of pinning centers such as holes or magnetic dots gives rise to commensurability effects. These commensurability effects can be observed in the magnetoresistance and in the dependence of the critical current on the applied field. In recent years, experimental results have shown that the periodic pinning effect depends on the properties of the vortex lattice (i.e. vortex-vortex interactions, elastic energy and vortex velocity) and also on the dot characteristics (i.e. dot size, distance between dots, magnetic character of the dot material, etc.). However, there is still not a good understanding of the nature of the main pinning mechanisms by the magnetic dots. To clarify this important issue, we have studied and compared the periodic pinning effects in Nb films with rectangular arrays of Ni, Co and Fe dots, as well as the pinning effects in a Nb film deposited on a hole-patterned substrate without any magnetic material. We will discuss the differences in pinning energies arising from magnetic effects as compared to structural effects of the superconducting film. This work was supported by NSF and DOE. M.I. Montero acknowledges a postdoctoral fellowship from the Secretaria de Estado de Educacion y Universidades (Spain).
On Line Segment Length and Mapping 4-regular Grid Structures in Network Infrastructures
DEFF Research Database (Denmark)
Riaz, Muhammad Tahir; Nielsen, Rasmus Hjorth; Pedersen, Jens Myrup
2006-01-01
The paper focuses on mapping the road network into 4-regular grid structures. A mapping algorithm is proposed. To model the road network, GIS data have been used. The Geographic Information System (GIS) data for the road network are composed of line segments of different lengths...
The significance of the structural regularity for the seismic response of buildings
International Nuclear Information System (INIS)
Hampe, E.; Goldbach, R.; Schwarz, J.
1991-01-01
The paper gives a state-of-the-art report on international design practice and provides fundamentals for a systematic approach to the solution of that problem. Different criteria of regularity are presented and discussed with respect to EUROCODE Nr. 8. Remaining open questions and the main topics of future research activities are outlined. Frame structures with or without additional stiffening wall elements are investigated to illustrate the qualitative differences in the vibrational properties and the earthquake response of regular and irregular systems. (orig./HP) [de]
Yu, Yan; Qiu, Robin G
2014-01-01
Microblogs, which provide a new communication and information-sharing platform, have been growing exponentially since they emerged just a few years ago. For microblog users, recommending followees who can serve as high-quality information sources is a competitive service. To address this problem, in this paper we propose a matrix factorization model with structural regularization to improve the accuracy of followee recommendation in microblogs. More specifically, we adapt the matrix factorization model used in traditional item recommender systems to followee recommendation in microblogs, and use structural regularization to exploit the structure of the social network to constrain the matrix factorization model. The experimental analysis on a real-world dataset shows that our proposed model is promising.
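A generic form of the model family described above can be sketched as SGD matrix factorization with a social (structural) regularizer pulling friends' latent vectors together. This is a textbook-style sketch; the paper's exact objective, dataset and hyperparameters are not reproduced, and all values below are invented.

```python
import random

def train_mf(ratings, friends, n_users, n_items, k=2, lr=0.1,
             lam=0.01, beta=0.1, epochs=500, seed=0):
    """SGD matrix factorization with a structural (social) regularizer:
    min sum (r_ui - p_u.q_i)^2 + lam*(|P|^2 + |Q|^2)
        + beta * sum over friend pairs (u, v) of |p_u - p_v|^2."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # structural term pulls p_u toward its friends' factors
                social = sum(pu - P[v][f] for v in friends.get(u, []))
                P[u][f] += lr * (e * qi - lam * pu - beta * social)
                Q[i][f] += lr * (e * pu - lam * qi)
    return P, Q

def rmse(ratings, P, Q):
    se = sum((r - sum(pu * qi for pu, qi in zip(P[u], Q[i]))) ** 2
             for u, i, r in ratings)
    return (se / len(ratings)) ** 0.5
```

On a toy interaction matrix the model fits the observed entries while the beta term keeps connected users' factors close, which is what transfers preference information to users with few observations.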
Applying 4-regular grid structures in large-scale access networks
DEFF Research Database (Denmark)
Pedersen, Jens Myrup; Knudsen, Thomas P.; Patel, Ahmed
2006-01-01
4-Regular grid structures have been used in multiprocessor systems for decades due to a number of nice properties with regard to routing, protection, and restoration, together with a straightforward planar layout. These qualities are to an increasing extent demanded also in large-scale access...... networks, but concerning protection and restoration these demands have been met only to a limited extent by the commonly used ring and tree structures. To deal with the fact that classical 4-regular grid structures are not directly applicable in such networks, this paper proposes a number of extensions...... concerning restoration, protection, scalability, embeddability, flexibility, and cost. The extensions are presented as a tool case, which can be used for implementing semi-automatic and, in the longer term, fully automatic network planning tools....
Subharmonic structure of Shapiro steps in frustrated superconducting arrays
International Nuclear Information System (INIS)
Kim, S.; Kim, B.J.; Choi, M.Y.
1995-01-01
Two-dimensional superconducting arrays with combined direct and alternating applied currents are studied both analytically and numerically. In particular, we investigate in detail current-voltage characteristics of a square array with 1/2 flux quantum per plaquette and triangular arrays with 1/2 and 1/4 flux quantum per plaquette. At zero temperature reduced equations of motion are obtained through the use of the translational symmetry present in the systems. The reduced equations lead to a series of subharmonic steps in addition to the standard integer and fractional giant Shapiro steps, producing devil's staircase structure. This devil's staircase structure reflects the existence of dynamically generated states in addition to the states originating from degenerate ground states in equilibrium. Widths of the subharmonic steps as functions of the amplitudes of alternating currents display Bessel-function-type behavior. We also present results of extensive numerical simulations, which indeed reveal the subharmonic steps together with their stability against small thermal fluctuations. Implications for topological invariance are also discussed
Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.
Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo
2017-07-01
Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown their great performance in application to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves a better performance than several state-of-the-art algorithms.
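The solver step the authors arrive at, weighted singular-value thresholding, is easy to state concretely. The sketch below implements the generic proximal operator of a weighted nuclear norm on synthetic data; the weight values here are arbitrary, whereas in the paper they are derived from the graph regularizer.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular-value thresholding: the proximal operator of a
    weighted nuclear norm. Each singular value s_i is shrunk by its own
    threshold w_i; small (noise) directions are suppressed entirely."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shr = np.maximum(s - weights, 0.0)
    return U @ np.diag(s_shr) @ Vt

# a noisy rank-1 matrix: thresholding removes the small noise directions
rng = np.random.default_rng(1)
L = np.outer(rng.standard_normal(6), rng.standard_normal(6))
Y = L + 0.01 * rng.standard_normal((6, 6))
X = weighted_svt(Y, weights=np.full(6, 0.1))
rank = np.linalg.matrix_rank(X, tol=1e-8)
```

Because the noise singular values fall below the 0.1 threshold while the dominant one does not, the denoised matrix comes back exactly rank 1.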
Cui, Yujun; Li, Yanjun; Yan, Yanfeng; Yang, Ruifu
2008-11-01
CRISPRs (Clustered Regularly Interspaced Short Palindromic Repeats), the basis of spoligotyping technology, can provide prokaryotes with heritable adaptive immunity against phage invasion. Studies on CRISPR loci and their associated elements, including various CAS (CRISPR-associated) proteins and leader sequences, are still in their infancy. We introduce the brief history, structure, function, bioinformatics research and applications of this remarkable immune system in prokaryotic organisms, to inspire more scientists to take an interest in this developing topic.
Some regularity of the grain size distribution in nuclear fuel with controllable structure
International Nuclear Information System (INIS)
Loktev, Igor
2008-01-01
It is known that fission gas release from ceramic nuclear fuel depends on the average grain size. To increase grain size, additives that activate sintering of the pellets are used. However, the grain size distribution also influences fission gas release: fuel with different structures but the same average grain size shows different fission gas release. Other structural elements that influence the operational behavior of fuel are pores and inclusions. Earlier, in Kyoto, questions of the grain size distribution for fuel with a 'natural' structure were discussed. This report considers some regularities of the grain size distribution of fuel with a controllable structure and a high average grain size. The influence of inclusions and pores on the error of automated determination of structure parameters is shown. A criterion describing the behavior of fuel with a specific grain size distribution is offered
Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas
2018-06-01
In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.
AFM tip characterization by using FFT filtered images of step structures
Energy Technology Data Exchange (ETDEWEB)
Yan, Yongda, E-mail: yanyongda@hit.edu.cn [Key Laboratory of Micro-systems and Micro-structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, Heilongjiang 150001 (China); Center For Precision Engineering, Harbin Institute of Technology, Harbin, Heilongjiang 150001 (China); Xue, Bo [Key Laboratory of Micro-systems and Micro-structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, Heilongjiang 150001 (China); Center For Precision Engineering, Harbin Institute of Technology, Harbin, Heilongjiang 150001 (China); Hu, Zhenjiang; Zhao, Xuesen [Center For Precision Engineering, Harbin Institute of Technology, Harbin, Heilongjiang 150001 (China)
2016-01-15
The measurement resolution of an atomic force microscope (AFM) is largely dependent on the radius of the tip. Meanwhile, when using AFM to study nanoscale surface properties, the value of the tip radius is needed in calculations. As such, estimation of the tip radius is important for analyzing results taken using an AFM. In this study, a geometrical model created by scanning a step structure with an AFM tip was developed. The tip was assumed to have a hemispherical cone shape. Profiles simulated by tips with different scanning radii were calculated by fast Fourier transform (FFT). By analyzing the influence of tip radius variation on the spectra of simulated profiles, it was found that low-frequency harmonics were more susceptible, and that the relationship between the tip radius and the low-frequency harmonic amplitude of the step structure varied monotonically. Based on this regularity, we developed a new method to characterize the radius of the hemispherical tip. The tip radii estimated with this approach were comparable to the results obtained using scanning electron microscope imaging and blind reconstruction methods. - Highlights: • The AFM tips with different radii were simulated to scan a nano-step structure. • The spectra of the simulation scans under different radii were analyzed. • The functions of tip radius and harmonic amplitude were used for evaluating tip. • The proposed method has been validated by SEM imaging and blind reconstruction.
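The geometric idea, that a hemispherical tip broadens the step edge and thereby lowers the low-order FFT harmonics monotonically with tip radius, can be reproduced with a small dilation simulation. All dimensions below are arbitrary pixel-scale assumptions, not the values used in the paper.

```python
import numpy as np

def scan_step_with_tip(radius, n=512, step_height=20.0, pixel=1.0):
    """Simulate an AFM scan of a step edge by a hemispherical tip.

    The recorded profile is the morphological dilation of the surface by
    the (inverted) tip shape -- illustrative geometry only."""
    x = np.arange(n) * pixel
    surface = np.where(x < n * pixel / 2, step_height, 0.0)
    k = int(radius / pixel)
    dx = np.arange(-k, k + 1) * pixel
    tip = radius - np.sqrt(np.maximum(radius**2 - dx**2, 0.0))
    img = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        # apex height when the tip touches the surface inside its footprint
        img[i] = np.max(surface[lo:hi] - tip[(lo - i) + k:(hi - i) + k])
    return img

def low_harmonic_amplitude(profile, h=1):
    return np.abs(np.fft.rfft(profile))[h]

amps = [low_harmonic_amplitude(scan_step_with_tip(r)) for r in (5.0, 15.0, 30.0)]
```

A larger radius widens the rounded transition at the edge, so the fundamental amplitude decreases monotonically with radius, which is the regularity the characterization method exploits.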
Abramov, G. V.; Emeljanov, A. E.; Ivashin, A. L.
Theoretical bases for modeling a digital control system with information transfer via the channel of plural access and a regular quantization cycle are submitted. The theory of dynamic systems with random changes of the structure including elements of the Markov random processes theory is used for a mathematical description of a network control system. The characteristics of similar control systems are received. Experimental research of the given control systems is carried out.
International Nuclear Information System (INIS)
Liu Weina; Li Ping; Gou Qingquan; Zhao Yanping
2008-01-01
The formation mechanism for the body-centred regular icosahedral structure of the Li13 cluster is proposed. The curve of total energy versus the separation R between the nucleus at the centre and the nuclei at the apexes for this structure of Li13 has been calculated using Gou's modified arrangement channel quantum mechanics (MACQM). The result shows that the curve has a minimum energy of -96.95139 a.u. at R = 5.46 a0. When R approaches infinity, the total energy of thirteen lithium atoms is -96.56438 a.u. The binding energy of Li13 with respect to thirteen lithium atoms is therefore 0.38701 a.u., i.e. 0.02977 a.u. (0.810 eV) per atom, which is greater than the binding energy per atom of 0.453 eV for Li2, 0.494 eV for Li3, 0.7878 eV for Li4, 0.632 eV for Li5, and 0.674 eV for Li7 calculated by us previously. This means that the Li13 cluster may form stably in a body-centred regular icosahedral structure with a greater binding energy
Energy Technology Data Exchange (ETDEWEB)
Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn; Huang, Jing; Zhang, Hua; Lu, Lijun; Lyu, Wenbing; Feng, Qianjin; Chen, Wufan; Ma, Jianhua, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515 (China); Zhang, Jing [Department of Radiology, Tianjin Medical University General Hospital, Tianjin 300052 (China)
2016-05-15
Purpose: Cerebral perfusion computed tomography (PCT) imaging as an accurate and fast acute ischemic stroke examination has been widely used in clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with the low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifacts reduction and accurate perfusion hemodynamic maps (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.
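Deconvolving a tissue curve into a residue function with a smoothness penalty can be sketched with ordinary Tikhonov regularization. This is a deliberately simplified stand-in for the paper's structure tensor total variation term, and all curves and constants below are synthetic.

```python
import numpy as np

def regularized_deconvolution(aif, c, lam=0.05):
    """Recover a residue function r from c = A r, where A is the AIF
    convolution matrix, by solving min ||A r - c||^2 + lam ||D r||^2
    with D a first-difference operator (plain Tikhonov smoothing)."""
    n = len(c)
    A = np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                  for i in range(n)])
    D = np.eye(n) - np.eye(n, k=1)          # penalizes rough estimates
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ c)

# synthetic curves: exponential residue, exponential arterial input
n = 40
t = np.arange(n, dtype=float)
r_true = np.exp(-t / 8.0)
aif = np.exp(-t / 2.0)
rng = np.random.default_rng(0)
c = np.convolve(aif, r_true)[:n] + 0.01 * rng.standard_normal(n)
r_est = regularized_deconvolution(aif, c)
corr = np.corrcoef(r_true, r_est)[0, 1]
```

Even with noise on the tissue curve, the smoothness penalty keeps the recovered residue function close to the true exponential; the STV regularizer in the paper plays the same role but exploits spatio-temporal structure across voxels.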
Mechanical properties of regular hexahedral lattice structure formed by selective laser melting
International Nuclear Information System (INIS)
Sun, Jianfeng; Yang, Yongqiang; Wang, Di
2013-01-01
The Ti–6Al–4V lattice structure is widely used in the aerospace field. This research first designs a regular hexahedral unit, processes the lattice structure composed of the Ti–6Al–4V units by selective laser melting technology, obtains the experimental fracture load and the compression deformation of them through compression tests, then conducts a simulation of the unit and the lattice structure through ANSYS to analyze the failure point. Later, according to the force condition of the point, the model of maximum load is built, through which the analytical formula of the fracture load of the unit and the lattice structure are obtained. The results of groups of experiments demonstrate that there exists an exponential relationship between the practical fracture load and the porosity of the lattice structure. There also exists a trigonometric function relationship between the compression deformation and the porosity of the lattice structure. The fracture analysis indicates that fracture of the units and lattice structure is brittle fracture due to cleavage fracture. (paper)
International Nuclear Information System (INIS)
Randic, M.; Wilkins, C.L.
1979-01-01
Selected molecular data on alkanes have been reexamined in a search for general regularities in isomeric variations. In contrast to the prevailing approaches concerned with fitting data by searching for optimal parameterization, the present work is primarily aimed at establishing trends, i.e., searching for relative magnitudes and their regularities among the isomers. Such an approach is complementary to curve fitting or correlation seeking procedures. It is particularly useful when there are incomplete data which allow trends to be recognized but no quantitative correlation to be established. One proceeds by first ordering structures. One way is to consider molecular graphs and enumerate paths of different length as the basic graph invariant. It can be shown that, for several thermodynamic molecular properties, the numbers of paths of length two (p2) and length three (p3) are critical. Hence, an ordering based on p2 and p3 indicates possible trends and behavior for many molecular properties, some of which relate to others, some which do not. By considering a grid graph derived by attributing to each isomer coordinates (p2, p3) and connecting points along the coordinate axes, one obtains a simple presentation useful for isomer structural interrelations. This skeletal frame is one upon which possible trends for different molecular properties may be conveniently represented. The significance of the results and their conceptual value is discussed. 16 figures, 3 tables
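The path counts p2 and p3 used here have simple closed forms on a tree such as an alkane's carbon skeleton: p2 sums over vertices, p3 over edges. A minimal sketch (the molecule chosen for the check is an arbitrary example):

```python
def path_counts(adj):
    """Count p2 and p3 (paths on 2 and 3 edges) in a simple acyclic graph.

    adj: dict vertex -> set of neighbours. For trees, p2 = sum over
    vertices of C(deg, 2) and p3 = sum over edges of (deg_u-1)(deg_v-1)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    p2 = sum(d * (d - 1) // 2 for d in deg.values())
    edges = {frozenset((u, v)) for u, ns in adj.items() for v in ns}
    p3 = sum((deg[u] - 1) * (deg[v] - 1) for u, v in map(tuple, edges))
    return p2, p3

# 2-methylbutane: carbon skeleton 1-2-3-4 with carbon 5 attached to 2
adj = {1: {2}, 2: {1, 3, 5}, 3: {2, 4}, 4: {3}, 5: {2}}
p2, p3 = path_counts(adj)
```

For 2-methylbutane this gives (p2, p3) = (4, 2), matching a hand count of its length-2 and length-3 paths, so each isomer can be placed at its (p2, p3) grid coordinates as the abstract describes.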
Xu, Yongxiang; Yuan, Shenpo; Han, Jianmin; Lin, Hong; Zhang, Xuehui
2017-11-15
The development of scaffolds to mimic the gradient structure of natural tissue is an important consideration for effective tissue engineering. In the present study, a physically cross-linked chitosan hydrogel with gradient structures was fabricated via a step-by-step cross-linking process using sodium tripolyphosphate and sodium hydroxide as sequential cross-linkers. Chitosan hydrogels with different structures (single, double, and triple layers) were prepared by modifying the gelling process. The properties of the hydrogels were further adjusted by varying the gelling conditions, such as gelling time, pH, and composition of the cross-linking solution. Slight cytotoxicity was shown in the MTT assay for hydrogels containing uncross-linked chitosan solution, and no cytotoxicity was shown for the other hydrogels. The results suggest that step-by-step cross-linking represents a practicable method to fabricate scaffolds with gradient structures. Copyright © 2017. Published by Elsevier Ltd.
Directory of Open Access Journals (Sweden)
F. G. Lovshenko
2014-01-01
Full Text Available Experimentally determined regularities and the mechanism of structure formation in mechanically alloyed compositions based on metals widely used in mechanical engineering (iron, nickel, aluminium, copper) are given.
Liu, Jie; Zhou, Lutan; He, Zhicheng; Gao, Na; Shang, Feineng; Xu, Jianping; Li, Zi; Yang, Zengming; Wu, Mingyi; Zhao, Jinhua
2018-02-01
Edible snails have been widely used as a health food and medicine in many countries. A unique glycosaminoglycan (AF-GAG) was purified from Achatina fulica. Its structure was analyzed and characterized by chemical and instrumental methods, such as Fourier transform infrared spectroscopy, analysis of monosaccharide composition, and 1D/2D nuclear magnetic resonance spectroscopy. Chemical composition analysis indicated that AF-GAG is composed of iduronic acid (IdoA) and N-acetyl-glucosamine (GlcNAc) and that its average molecular weight is 118 kDa. Structural analysis clarified that the uronic acid unit in the glycosaminoglycan (GAG) is fully epimerized and that the sequence of AF-GAG is →4)-α-GlcNAc (1→4)-α-IdoA2S (1→. Although its structure with a uniform repeating disaccharide is similar to those of heparin and heparan sulfate, this GAG is structurally highly regular and homogeneous. Anticoagulant activity assays indicated that AF-GAG exhibits no anticoagulant activity, but considering its structural characteristics, other bioactivities such as heparanase inhibition may be worthy of further study. Copyright © 2017 Elsevier Ltd. All rights reserved.
Front propagation in a regular vortex lattice: Dependence on the vortex structure.
Beauvier, E; Bodea, S; Pocheau, A
2017-11-01
We investigate the dependence on the vortex structure of the propagation of fronts in stirred flows. For this, we consider a regular set of vortices whose structure is changed by varying both their boundary conditions and their aspect ratios. These configurations are investigated experimentally in autocatalytic solutions stirred by electroconvective flows and numerically from kinematic simulations based on the determination of the dominant Fourier mode of the vortex stream function in each of them. For free lateral boundary conditions, i.e., in an extended vortex lattice, it is found that both the flow structure and the front propagation negligibly depend on vortex aspect ratios. For rigid lateral boundary conditions, i.e., in a vortex chain, vortices involve a slight dependence on their aspect ratios which surprisingly yields a noticeable decrease of the enhancement of front velocity by flow advection. These different behaviors reveal a sensitivity of the mean front velocity on the flow subscales. It emphasizes the intrinsic multiscale nature of front propagation in stirred flows and the need to take into account not only the intensity of vortex flows but also their inner structure to determine front propagation at a large scale. Differences between experiments and simulations suggest the occurrence of secondary flows in vortex chains at large velocity and large aspect ratios.
A structured four-step curriculum in basic laparoscopy
DEFF Research Database (Denmark)
Strandbygaard, Jeanett; Bjerrum, Flemming; Maagaard, Mathilde
2014-01-01
The objective of this study was to develop a 4-step curriculum in basic laparoscopy consisting of validated modules integrating a cognitive component, a practical component and a procedural component....
A lattice Boltzmann model for substrates with regularly structured surface roughness
Yagub, A.; Farhat, H.; Kondaraju, S.; Singh, T.
2015-11-01
Superhydrophobic surface characteristics are important in many industrial applications, ranging from textiles to the military. It has been observed that surfaces fabricated with nano/micro roughness can manipulate the droplet contact angle, thus providing an opportunity to control the droplet wetting characteristics. The Shan and Chen (SC) lattice Boltzmann model (LBM) is a good numerical tool with strong potential for simulating droplet wettability, owing to its realistic prediction of the droplet contact angle (CA) on flat smooth surfaces. But the SC-LBM has not been able to replicate the CA on rough surfaces because it lacks a real representation of the physics at work under these conditions. By using a correction factor to influence the interfacial tension within the asperities, the physical forces acting on the droplet at its contact lines were mimicked. This approach allowed the model to replicate some experimentally confirmed Wenzel and Cassie wetting cases. Regular roughness structures with different spacing were used to validate the study using the classical Wenzel and Cassie equations. The present work highlights the strengths and weaknesses of the SC model and attempts to qualitatively conform it to the fundamental physics that causes a change in the droplet apparent contact angle when placed on nano/micro structured surfaces.
Void Structures in Regularly Patterned ZnO Nanorods Grown with the Hydrothermal Method
Directory of Open Access Journals (Sweden)
Yu-Feng Yao
2014-01-01
Full Text Available The void structures and related optical properties after thermal annealing with ambient oxygen in regularly patterned ZnO nanorod (NR arrays grown with the hydrothermal method are studied. With increasing thermal annealing temperature, void distribution starts from the bottom and extends to the top of an NR in the vertical (c-axis growth region. When the annealing temperature is higher than 400°C, void distribution spreads into the lateral (m-axis growth region. Photoluminescence measurement shows that the ZnO band-edge emission, in contrast to defect emission in the yellow-red range, is the strongest under the n-ZnO NR process conditions of 0.003 M in Ga-doping concentration and 300°C in thermal annealing temperature with ambient oxygen. Energy dispersive X-ray spectroscopy data indicate that the concentration of hydroxyl groups in the vertical growth region is significantly higher than that in the lateral growth region. During thermal annealing, hydroxyl groups are desorbed from the NR, leaving anion vacancies that react with cation vacancies to form voids.
Directory of Open Access Journals (Sweden)
F. G. Lovshenko
2015-01-01
Full Text Available The paper presents investigation results on the regularities of phase composition and structure formation during mechanical alloying of binary aluminium compositions. The investigations were carried out using a wide range of methods, devices and equipment employed in modern materials science, and the data obtained complement each other. It has been established that the presence of oxide and hydroxide films on aluminium powder, and the introduction of a surface-active substance into the composition, have a significant effect on mechanically and thermally activated phase transformations and on the properties of semi-finished products. Higher fatty acids were used as the surface-active substance.The mechanism of mechanically activated solid solution formation has been identified. Its essence is the formation of specific quasi-solutions at the initial stage of processing. Mechanical and chemical interaction between components during the formation of other phases takes place along with dissolution in aluminium while processing the powder composites. The granule basis forms according to the dynamic recrystallization mechanism and possesses a submicrocrystalline structure, with a granule basis dimension of less than 100 nm; the grains are divided into blocks of not more than 20 nm, with oxide inclusions of 10–20 nm in size.All the compositions with added surface-active substances, including aluminium powder without alloying elements, obtained by processing in a mechanical reactor are dispersion hardened. In some cases dispersion hardening is accompanied by dispersive and solid-solution hardening. This complex hardening predetermines a high recrystallization temperature in mechanically alloyed compositions, exceeding 400 °C.
Michelitsch, T. M.; Collet, B. A.; Riascos, A. P.; Nowakowski, A. F.; Nicolleau, F. C. G. A.
2017-12-01
We analyze a Markovian random walk strategy on undirected regular networks involving power matrix functions of the type L^{α/2}, where L indicates a 'simple' Laplacian matrix. We refer to such walks as 'fractional random walks', with admissible interval 0 < α ≤ 2; for α = 2 the normal random walk is recovered. From these analytical results we establish a generalization of Polya's recurrence theorem for fractional random walks on d-dimensional infinite lattices: the fractional random walk is transient for lattice dimensions d > α (recurrent for d ≤ α). As a consequence, for 0 < α < 1 the fractional random walk is transient for all lattice dimensions d = 1, 2, …, in the range 1 ≤ α < 2 it is transient for d ≥ 2, and only the normal random walk (α = 2) is transient solely for lattice dimensions d ≥ 3. The generalization of Polya's recurrence theorem remains valid for the class of random walks with Lévy flight asymptotics for long-range steps. We also analyze the mean first passage probabilities, mean residence times, mean first passage times and global mean first passage times (Kemeny constant) for the fractional random walk. For an infinite 1D lattice (infinite ring) we obtain closed-form expressions for these quantities in the transient regime 0 < α < 1. The transient behavior of the fractional random walk is generated by the non-diagonality of the fractional Laplacian matrix, with Lévy-type heavy-tailed inverse power-law decay for the probability of long-range moves. This non-local and asymptotic behavior of the fractional random walk introduces small-world properties, with the emergence of Lévy flights on large (infinite) lattices.
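The construction named in the abstract, a walk generated by the fractional power L^{α/2} of the graph Laplacian, can be made concrete by eigendecomposition. Below is an illustrative sketch on a small ring; normalizing hop probabilities from the off-diagonal entries follows the generic fractional-walk recipe and is an assumption, not the authors' exact code.

```python
import numpy as np

def fractional_walk_matrix(A, alpha):
    """Transition probabilities of a fractional random walk on a regular graph.

    L = D - A; L^(alpha/2) is computed via eigendecomposition, and hops are
    drawn proportionally to the (nonpositive) off-diagonal entries of
    L^(alpha/2), which for alpha < 2 couple every pair of nodes."""
    L = np.diag(A.sum(axis=1)) - A
    w, V = np.linalg.eigh(L)
    Lfrac = V @ np.diag(np.maximum(w, 0.0) ** (alpha / 2.0)) @ V.T
    P = np.maximum(-Lfrac, 0.0)       # off-diagonal entries give hop weights
    np.fill_diagonal(P, 0.0)
    return P / P.sum(axis=1, keepdims=True)

# an 8-node ring: for alpha < 2 the walk gains long-range (Levy-like) hops
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
P = fractional_walk_matrix(A, alpha=1.0)
```

Unlike the nearest-neighbour walk, P assigns nonzero probability even to the antipodal node, which is exactly the non-diagonality of L^{α/2} that the abstract says generates the Lévy-flight behavior.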
Goyvaerts, Jan
2009-01-01
This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a
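A flavor of the book's recipe style is a commented, verbose-mode pattern plus a small extraction helper; this particular date example is illustrative, not taken from the book.

```python
import re

# Validate ISO-style dates and pull out the named parts, with the pattern
# documented piece by piece in verbose mode.
date_re = re.compile(r"""
    ^(?P<year>\d{4})                  # four-digit year
    -(?P<month>0[1-9]|1[0-2])         # month 01-12
    -(?P<day>0[1-9]|[12]\d|3[01])$    # day 01-31
""", re.VERBOSE)

def extract_date(text):
    """Return (year, month, day) strings, or None if the text is not a date."""
    m = date_re.match(text)
    return (m.group("year"), m.group("month"), m.group("day")) if m else None
```

Named groups and verbose mode are exactly the kind of maintainability habits such recipes teach: the same pattern ports almost verbatim to Java, JavaScript, Perl, PHP, and the other languages the book covers.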
International Nuclear Information System (INIS)
Kang Zili.
1989-01-01
Based on a summary of Guangxi geotectonic features and evolutionary regularities, this paper discusses the occurrence features, formation conditions and spatio-temporal distribution regularities of various U-rich strata during the development of the geosyncline, platform and diwa stages. In particular, during the diwa stage all these U-rich strata might be reworked to a certain degree, resulting in the mobilization of uranium and its enrichment to form polygenetic composite uranium ore deposits with stratabound features. This study will be helpful for prospecting in the region
Yaski, Osnat; Portugali, Juval; Eilam, David
2012-04-01
The physical structure of the surrounding environment shapes the paths of progression, which in turn reflect the structure of the environment and the way that it shapes behavior. A regular and coherent physical structure results in paths that extend over the entire environment. In contrast, irregular structure results in traveling over a confined sector of the area. In this study, rats were tested in a dark arena in which half the area contained eight objects in a regular grid layout, and the other half contained eight objects in an irregular layout. In subsequent trials, a salient landmark was placed first within the irregular half, and then within the grid. We hypothesized that rats would favor travel in the area with regular order, but found that activity in the area with irregular object layout did not differ from activity in the area with grid layout, even when the irregular half included a salient landmark. Thus, the grid impact in one arena half extended to the other half and overshadowed the presumed impact of the salient landmark. This could be explained by mechanisms that control spatial behavior, such as grid cells and odometry. However, when objects were spaced irregularly over the entire arena, the salient landmark became dominant and the paths converged upon it, especially from objects with direct access to the salient landmark. Altogether, three environmental properties: (i) regular and predictable structure; (ii) salience of landmarks; and (iii) accessibility, hierarchically shape the paths of progression in a dark environment. Copyright © 2012 Elsevier B.V. All rights reserved.
Solving large scale structure in ten easy steps with COLA
Energy Technology Data Exchange (ETDEWEB)
Tassev, Svetlin [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08544 (United States); Zaldarriaga, Matias [School of Natural Sciences, Institute for Advanced Study, Olden Lane, Princeton, NJ 08540 (United States); Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu [Center for Astrophysics, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States)
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
Implementation of a variable-step integration technique for nonlinear structural dynamic analysis
International Nuclear Information System (INIS)
Underwood, P.; Park, K.C.
1977-01-01
The paper presents the implementation of a recently developed unconditionally stable implicit time integration method into a production computer code for the transient response analysis of nonlinear structural dynamic systems. The time integrator offers two significant features: a step size that is determined automatically, and step-size changes that are accomplished without additional matrix refactorizations. The equations of motion solved by the time integrator must be cast in the pseudo-force form, and this provides the mechanism for controlling the step size. Step size control is accomplished by extrapolating the pseudo-force to the next time (the predicted pseudo-force), performing the integration step, and then recomputing the pseudo-force based on the current solution (the corrected pseudo-force); from these data an error norm is constructed, the value of which determines the step size for the next step. To avoid refactoring the required matrix with each step size change, a matrix scaling technique is employed, which allows step sizes to change by a factor of 100 without refactoring. If during a computer run the integrator determines it can run with a step size larger than 100 times the original minimum step size, the matrix is refactored to take advantage of the larger step size. The strategies for effecting these features are discussed in detail. (Auth.)
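The predictor/corrector pseudo-force control described above can be sketched as follows. The tolerances, growth/shrink factors, and the relative error norm are placeholder assumptions, not the paper's values; only the overall flow (predicted vs. corrected pseudo-force drives the next step size, bounded by the 100x no-refactor window) follows the abstract.

```python
import math

# Sketch of pseudo-force-based step-size control (hypothetical tolerances).
# f_pred: pseudo-force extrapolated to the next time (predictor)
# f_corr: pseudo-force recomputed from the current solution (corrector)

def next_step_size(f_pred, f_corr, dt, tol=1e-3, dt_min=1e-4, dt_max=1e-2):
    # Relative error norm between predicted and corrected pseudo-force
    err = math.sqrt(sum((p - c) ** 2 for p, c in zip(f_pred, f_corr)))
    ref = math.sqrt(sum(c ** 2 for c in f_corr)) or 1.0
    norm = err / ref
    if norm > tol:            # too inaccurate: shrink the step
        dt = max(dt * 0.5, dt_min)
    elif norm < 0.1 * tol:    # very accurate: grow the step
        dt = min(dt * 2.0, dt_max)
    # stay within the 100x window that needs no matrix refactorization
    return min(dt, 100.0 * dt_min)
```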
Zandvliet, Henricus J.W.; Wulfhekel, W.C.U.; Hendriksen, B.; Poelsema, Bene
1997-01-01
In contrast to a recent claim by Sánchez and Aldao [Phys. Rev. B 54, R11 058 (1996)] that the relaxation dynamics of attachment processes influences the equilibrium step structure we argue that the step structure in thermodynamic equilibrium is only governed by the configurational free energy
Directory of Open Access Journals (Sweden)
Tamrazyan Ashot Georgievich
2012-10-01
Full Text Available Accurate and adequate description of external influences and of the bearing capacity of the structural material requires the employment of probability theory methods. In this regard, a characteristic that describes the probability of failure-free operation is required. The characteristic of reliability means that the maximum stress caused by the action of the load will not exceed the bearing capacity. In this paper, the author presents a solution to the problem of calculation of structures, namely, the identification of the reliability of pre-set design parameters, in particular, cross-sectional dimensions. If the load distribution pattern is available, employment of the regularities of the distribution functions makes it possible to find the pattern of distribution of maximum stresses over the structure. Similarly, we can proceed to the design of structures of pre-set rigidity, reliability and stability in the case of regular load distribution. We consider a design element (a monolithic concrete slab) whose maximum stress S depends linearly on the load q. Within a pre-set period of time, the probability that the stress will not exceed these values follows the Poisson law. The analysis demonstrates that the variability of the bearing capacity produces a stronger effect on the relative sizes of the cross sections of a slab than the variability of loads. It is therefore particularly important to reduce the coefficient of variation of the load capacity. One of the methods contemplates the truncation of the bearing capacity distribution by pre-culling the construction material.
On Hierarchical Extensions of Large-Scale 4-regular Grid Network Structures
DEFF Research Database (Denmark)
Pedersen, Jens Myrup; Patel, A.; Knudsen, Thomas Phillip
2004-01-01
dependencies between the number of nodes and the distances in the structures. The perfect square mesh is introduced for hierarchies, and it is shown that applying ordered hierarchies in this way results in logarithmic dependencies between the number of nodes and the distances, resulting in better scaling...... structures. For example, in a mesh of 391876 nodes the average distance is reduced from 417.33 to 17.32 by adding hierarchical lines. This is gained by increasing the number of lines by 4.20% compared to the non-hierarchical structure. A similar hierarchical extension of the torus structure also results...
DEFF Research Database (Denmark)
Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano
2014-01-01
We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly named as the EEG inverse problem. We propose a new method to induce neurophysiological meaningful solutions, which takes into account the smoothness, structured...... sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding...... matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios...
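The alternating scheme sketched in the abstract can be illustrated as follows. This is a minimal reading of the abstract, not the authors' algorithm: the BES matrix is factored as a coding matrix times a latent source matrix, with an l21 (row-sparsity) penalty on the coding matrix handled by its proximal operator and a squared-Frobenius penalty on the sources handled in closed form. Step sizes, penalty weights, and iteration counts are illustrative assumptions.

```python
import numpy as np

# Sketch of alternating minimization for M ≈ C @ S with an l21 penalty on
# the coding matrix C and a squared-Frobenius penalty on the sources S.

def l21_prox(C, t):
    """Proximal operator of t * ||C||_{2,1}: shrink each row's norm."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return C * scale

def alternate(M, k, lam_c=0.1, lam_s=0.1, iters=50, step=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    C = rng.standard_normal((M.shape[0], k))
    for _ in range(iters):
        # S-update has a closed form (ridge regression):
        S = np.linalg.solve(C.T @ C + lam_s * np.eye(k), C.T @ M)
        # C-update: one proximal-gradient step (smooth part + l21 prox)
        grad = (C @ S - M) @ S.T
        C = l21_prox(C - step * grad, step * lam_c)
    return C, S
```

As in the paper's setting, the joint problem is nonsmooth and nonconvex, so this alternating scheme only seeks a stationary point rather than a global minimum.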
Simulations of fine structures on the zero field steps of Josephson tunnel junctions
DEFF Research Database (Denmark)
Scheuermann, M.; Chi, C. C.; Pedersen, Niels Falsig
1986-01-01
Fine structures on the zero field steps of long Josephson tunnel junctions are simulated for junctions with the bias current injected into the junction at the edges. These structures are due to the coupling between self-generated plasma oscillations and the traveling fluxon. The plasma oscillations...... are generated by the interaction of the bias current with the fluxon at the junction edges. On the first zero field step, the voltages of successive fine structures are given by V_n = (ħ/2e)(2ω_p/n), where n is an even integer. Applied Physics Letters is copyrighted by The American Institute of Physics....
Lin, Nan; Zhu, Yun; Fan, Ruzong; Xiong, Momiao
2017-10-01
Investigating the pleiotropic effects of genetic variants can increase statistical power, provide important information for a deep understanding of the complex genetic structures of disease, and offer powerful tools for designing effective treatments with fewer side effects. However, the current multiple-phenotype association analysis paradigm lacks breadth (the number of phenotypes and genetic variants jointly analyzed at the same time) and depth (the hierarchical structure of phenotypes and genotypes). A key issue for high-dimensional pleiotropic analysis is to effectively extract informative internal representations and features from high-dimensional genotype and phenotype data. To explore the correlation information of genetic variants, effectively reduce data dimensions, and overcome critical barriers in advancing the development of novel statistical methods and computational algorithms for genetic pleiotropic analysis, we propose a new statistical method, referred to as quadratically regularized functional CCA (QRFCCA), for association analysis, which combines three approaches: (1) quadratically regularized matrix factorization, (2) functional data analysis and (3) canonical correlation analysis (CCA). Large-scale simulations show that QRFCCA has a much higher power than the ten competing statistics while retaining appropriate type 1 error rates. To further evaluate performance, QRFCCA and the ten other statistics are applied to the whole-genome sequencing dataset from the TwinsUK study. Using QRFCCA, we identify a total of 79 genes with rare variants and 67 genes with common variants significantly associated with the 46 traits. The results show that QRFCCA substantially outperforms the ten other statistics.
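The regularized-CCA building block of QRFCCA can be sketched generically. The code below is a standard ridge-regularized CCA standing in for the paper's quadratic regularization; the functional-data and matrix-factorization stages of QRFCCA are omitted, and the regularization parameter is an illustrative assumption.

```python
import numpy as np

# Generic ridge-regularized CCA core (a stand-in sketch, not QRFCCA itself).

def ridge_cca(X, Y, lam=1e-2):
    """First canonical correlation between column-centred X and Y,
    with ridge (quadratic) regularization of both covariance blocks."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + lam * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + lam * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Squared canonical correlations are eigenvalues of Cxx^-1 Cxy Cyy^-1 Cyx
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    eigvals = np.linalg.eigvals(M)
    return float(np.sqrt(np.max(eigvals.real)))
```

The ridge terms keep the covariance blocks invertible when variables outnumber samples, which is the regime genomic data typically occupy.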
International Nuclear Information System (INIS)
Chernykh, A.; Shur, V.; Nikolaeva, E.; Shishkin, E.; Shur, A.; Terabe, K.; Kurimura, S.; Kitamura, K.; Gallo, K.
2005-01-01
The variety of the shapes of isolated domains, revealed in congruent and stoichiometric LiTaO3 and LiNbO3 by chemical etching and visualized by optical and scanning probe microscopy, was obtained by computer simulation. The kinetic nature of the domain shape was clearly demonstrated. The kinetics of domain structure with the dominance of the growth of the steps formed at the domain walls as a result of domain merging was investigated experimentally in a slightly distorted artificial regular two-dimensional (2D) hexagonal domain structure and in a random natural one. The artificial structure was realized in congruent LiNbO3 by a 2D electrode pattern produced by photolithography. The polarization reversal in congruent LiTaO3 was investigated as an example of natural domain growth limited by merging. The switching process defined by domain merging was studied by computer simulation. The crucial dependence of the switching kinetics on the nuclei concentration has been revealed.
Regularities of ferritic-pearlitic structure formation during subcooled austenite decomposition
International Nuclear Information System (INIS)
Shkatov, V.V.; Frantsenyuk, L.I.; Bogomolov, I.V.
1997-01-01
Relationships of ferrite-pearlite structure parameters to austenite grain size and cooling conditions during the γ → α transformation are studied for steel 3 sp. A mathematical description has been proposed for grain evolution during cooling of carbon and low-alloy steels after hot rolling. It is shown that the ferrite grain size can be controlled by changing the temperature range of water spraying when the temperatures of rolling completion and strip coiling are kept the same.
DEFF Research Database (Denmark)
Callot, Laurent; Kristensen, Johannes Tang
This paper shows that the parsimoniously time-varying methodology of Callot and Kristensen (2015) can be applied to factor models.We apply this method to study macroeconomic instability in the US from 1959:1 to 2006:4 with a particular focus on the Great Moderation. Models with parsimoniously time...... that the parameters of both models exhibit a higher degree of instability in the period from 1970:1 to 1984:4 relative to the following 15 years. In our setting the Great Moderation appears as the gradual ending of a period of high structural instability that took place in the 1970s and early 1980s....
On the regularities of structural transformations in copper-beryllium alloys during aging
International Nuclear Information System (INIS)
Tkhagapsoev, Kh.G.
1983-01-01
Peculiarities of elastic oscillation damping and of the change in specific electric resistance during isothermal aging of BrB2 bronze have been studied to determine the mechanism and kinetics of mutual transformations of precipitating phases in Cu-Be alloys. It is found that isothermal aging of beryllium bronze BrB2 at 260-400 deg C is accompanied by structural transitions connected with the decomposition of the oversaturated α-solid solution. Formation of α-phase nuclei (or transformation of Guinier-Preston zones), as well as their growth, occurs at the expense of cooperative-shift processes characterized by a low activation energy (19.7-26.3 J/mol) and by a considerable relaxation time (τ ≈ 10^-1 - 10^2 s)
Improvement of surface acidity and structural regularity of Zr-modified mesoporous MCM-41
Energy Technology Data Exchange (ETDEWEB)
Chen, L.F. [Departamento de Ciencias Basicas, Universidad Autonoma Metropolitana-A, Av. San Pablo 180, Col. Reynosa-Tamaulipas, 02200 Mexico D.F. (Mexico)]. E-mail: chenlf2001@yahoo.com; Norena, L.E. [Departamento de Ciencias Basicas, Universidad Autonoma Metropolitana-A, Av. San Pablo 180, Col. Reynosa-Tamaulipas, 02200 Mexico D.F. (Mexico); Navarrete, J. [Grupo de Molecular Ingenieria, Instituto Mexicano del Petroleo, Eje Lazaro Cardenas 152, 07730 Mexico D.F. (Mexico); Wang, J.A. [Laboratorio de Catalisis y Materiales, SEPI-ESIQIE, Instituto Politecnico Nacional, Av. Politecnico S/N, Col. Zacatenco, 07738 Mexico D.F. (Mexico)
2006-06-10
This work reports the synthesis and surface characterization of a Zr-modified mesoporous MCM-41 solid with an ordered hexagonal arrangement, prepared through a templated synthesis route, using cetyltrimethylammonium chloride as the template. The surface features, crystalline structure, textural properties and surface acidity of the materials were characterized by in situ Fourier transform infrared (FT-IR) spectroscopy, X-ray diffraction (XRD), N2 physisorption isotherms, 29Si MAS-NMR and in situ FT-IR of pyridine adsorption. It is evident that the surfactant cations inserted into the network of the solids during the preparation could be removed by calcination of the sample above 500 °C. The resultant material showed a large surface area of 680.6 m^2 g^-1 with a uniform pore diameter distribution in a very narrow range centered at approximately 2.5 nm. Zirconium incorporation into the Si-MCM-41 framework, confirmed by 29Si MAS-NMR analysis, increased not only the wall thickness of the mesopores but also the long-range order of the periodically hexagonal structure. Both Lewis and Brønsted acid sites were formed on the surface of the Zr-modified MCM-41 solid. Compared to Si-MCM-41, on which only very weak Lewis acid sites were formed, the densities of both Lewis and Brønsted acid sites and the strength of the acidity on the Zr-modified sample were significantly increased, indicating that the incorporation of zirconium greatly enhances the acidity of the material.
Damianos, Konstantina; Ferrando, Riccardo
2012-02-21
The structural modifications of small supported gold clusters caused by realistic surface defects (steps) in the MgO(001) support are investigated by computational methods. The most stable gold cluster structures on a stepped MgO(001) surface are searched for in the size range up to 24 Au atoms, and locally optimized by density-functional calculations. Several structural motifs are found within energy differences of 1 eV: inclined leaflets, arched leaflets, pyramidal hollow cages and compact structures. We show that the interaction with the step clearly modifies the structures with respect to adsorption on the flat defect-free surface. We find that leaflet structures clearly dominate for smaller sizes. These leaflets are either inclined and quasi-horizontal, or arched, at variance with the case of the flat surface in which vertical leaflets prevail. With increasing cluster size pyramidal hollow cages begin to compete against leaflet structures. Cage structures become more and more favourable as size increases. The only exception is size 20, at which the tetrahedron is found as the most stable isomer. This tetrahedron is however quite distorted. The comparison of two different exchange-correlation functionals (Perdew-Burke-Ernzerhof and local density approximation) shows the same qualitative trends. This journal is © The Royal Society of Chemistry 2012
International Nuclear Information System (INIS)
Kuczumow, A.; Nowak, J.; ChaLas, R.
2011-01-01
The aim of this paper was to identify the chemical and structural changes in the apatites which form both the enamel and the dentin of the human tooth. The aim was achieved by scrutinizing the linear elemental profiles along the cross-sections of human molar teeth. Essentially, the task was accomplished with the application of the Electron Probe Microanalysis method and with some additional studies by Micro-Raman spectrometry. All the trends in the linear profiles were strictly determined. In the enamel zone they were either increasing or decreasing curves of exponential character. The investigations started at the tooth surface and moved towards the dentin-enamel junction (DEJ). The results of the elemental studies were more visible when the detected material was divided, in an arbitrary way, into the prevailing 'core' enamel (∼93.5% of the total mass) and the remaining 'overbuilt' enamel. The material in the 'core' enamel was fully stable, with clearly determined chemical and mechanical features. However, the case was totally different in the 'overbuilt' enamel, with dynamic changes in the composition. In the 'overbuilt' layer the Ca, P, Cl and F profiles present decaying distribution curves, whereas Mg, Na, K and CO3^2- present growing ones. Close to the surface of the tooth a mixture of hydroxy-, chlor- and fluor-apatite is formed, which is much more resistant than the rest of the enamel. On passing towards the DEJ, the apatite is enriched with Na, Mg and CO3^2-. In this location, three of the six phosphate groups were substituted with carbonate groups. Simultaneously, Mg is associated with the hydroxyl groups around the hexad axis. In this way, the mechanisms of the exchange reactions were established. Crystallographic structures were proposed for the new phases located close to the DEJ. In the dentin zone, the variability of elemental profiles looks different, with the most characteristic changes occurring in the Mg and Na concentrations. Mg
Structural and electrostatic regularities in interactions of homeodomains with operator DNA
International Nuclear Information System (INIS)
Chirgadze, Yu.N.; Ivanov, V.V.; Polozov, R.V.; Zheltukhin, E.I.; Sivozhelezov, V.S.
2008-01-01
Interfaces of five DNA-homeodomain complexes, selected by similarity of structures and patterns of contacting residues, were compared. The long-range stage of the recognition process was characterized by electrostatic potentials about 5 Å away from the molecular surfaces of both protein and DNA. For proteins, a clear positive potential is displayed only at the side contacting DNA, while the grooves of DNA display a strong negative potential. Thus, one functional role of electrostatics is guiding the protein into the DNA major groove. At the close-range stage, neutralization of the phosphate charges by positively charged residues is necessary for decreasing the strong electrostatic potential of DNA, allowing nucleotide bases to participate in the formation of protein-DNA atomic contacts in the interface. The protein's recognizing α-helix was shown to form both invariant and variable contacts with DNA by means of certain specific side groups, with water molecules participating in some of the contacts. The invariant contacts included the highly specific Asn-Ade hydrogen bonds, nonpolar contacts of hydrophobic amino acids serving as barriers for fixing the protein on DNA, and an interface water-molecule cluster providing the local mobility necessary for the dissociation of the protein-DNA complex. One of the water molecules is invariant and located at the center of the interface. Invariant contacts of the proteins are mostly formed with the TAAT motif of the promoter DNA's forward strand. They distinguish the homeodomain family from other DNA-binding proteins. Variable contacts are formed with the reverse strand and are responsible for the binding specificity within the homeodomain family.
Li, Xiaomei; Luo, Lan; Cai, Ying; Yang, Wenjiao; Lin, Lisha; Li, Zi; Gao, Na; Purcell, Steven W; Wu, Mingyi; Zhao, Jinhua
2017-10-25
Edible sea cucumbers are widely used as a health food and medicine. A fucosylated glycosaminoglycan (FG) was purified from the high-value sea cucumber Stichopus herrmanni. Its physicochemical properties and structure were analyzed and characterized by chemical and instrumental methods. Chemical analysis indicated that this FG with a molecular weight of ∼64 kDa is composed of N-acetyl-d-galactosamine, d-glucuronic acid (GlcA), and l-fucose. Structural analysis clarified that the FG contains the chondroitin sulfate E-like backbone, with mostly 2,4-di-O-sulfated (85%) and some 3,4-di-O-sulfated (10%) and 4-O-sulfated (5%) fucose side chains that link to the C3 position of GlcA. This FG is structurally highly regular and homogeneous, differing from the FGs of other sea cucumbers, for its sulfation patterns are simpler. Biological activity assays indicated that it is a strong anticoagulant, inhibiting thrombin and intrinsic factor Xase. Our results expand the knowledge on structural types of FG and illustrate its biological activity as a functional food material.
Furuhama, A; Hasunuma, K; Hayashi, T I; Tatarazako, N
2016-05-01
We propose a three-step strategy that uses structural and physicochemical properties of chemicals to predict their 72 h algal growth inhibition toxicities against Pseudokirchneriella subcapitata. In Step 1, using a log D-based criterion and structural alerts, we produced an interspecies QSAR between algal and acute daphnid toxicities for initial screening of chemicals. In Step 2, we categorized chemicals according to the Verhaar scheme for aquatic toxicity, and we developed QSARs for toxicities of Class 1 (non-polar narcotic) and Class 2 (polar narcotic) chemicals by means of simple regression with a hydrophobicity descriptor and multiple regression with a hydrophobicity descriptor and a quantum chemical descriptor. Using the algal toxicities of the Class 1 chemicals, we proposed a baseline QSAR for calculating their excess toxicities. In Step 3, we used structural profiles to predict toxicity either quantitatively or qualitatively and to assign chemicals to the following categories: Pesticide, Reactive, Toxic, Toxic low and Uncategorized. Although this three-step strategy cannot be used to estimate the algal toxicities of all chemicals, it is useful for chemicals within its domain. The strategy is also applicable as a component of Integrated Approaches to Testing and Assessment.
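The three-step decision flow described above can be sketched schematically. The thresholds and regression coefficients below are placeholders, not the paper's fitted values; only the control flow (alert screening, class-specific QSAR regression, fallback categorization) follows the abstract.

```python
# Schematic of the three-step screening flow (coefficients are hypothetical).

def classify_algal_toxicity(log_d, verhaar_class, has_alert):
    # Step 1: initial screening by structural alerts (and a log D criterion)
    if has_alert:
        return ("Reactive", None)
    # Step 2: class-specific QSAR regressions on hydrophobicity
    if verhaar_class == 1:          # non-polar narcosis (baseline QSAR)
        log_toxicity = 0.9 * log_d - 1.5   # hypothetical slope/intercept
        return ("Class 1 QSAR", log_toxicity)
    if verhaar_class == 2:          # polar narcosis
        log_toxicity = 0.8 * log_d - 1.0   # hypothetical slope/intercept
        return ("Class 2 QSAR", log_toxicity)
    # Step 3: fall back to qualitative structural profiling
    return ("Uncategorized", None)
```

As the abstract notes, such a scheme only applies inside its domain: chemicals outside it remain uncategorized rather than receiving a forced quantitative estimate.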
Mizutani, Eiji; Demmel, James W
2003-01-01
This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in the statistical sense), depending on problem scale, so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).
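The trust-region regularization mentioned above can be illustrated with a dense Levenberg-Marquardt sketch. This is the generic damped Gauss-Newton idea only; the paper's block-angular/block-arrow sparsity exploitation is not reproduced, and the damping schedule is an illustrative assumption.

```python
import numpy as np

# Generic Levenberg-Marquardt / trust-region sketch for nonlinear least
# squares: minimize ||r(w)||^2 with residual r and Jacobian J.

def lm_fit(residual, jacobian, w, mu=1e-2, iters=20):
    for _ in range(iters):
        r, J = residual(w), jacobian(w)
        # Damped Gauss-Newton system: (J^T J + mu I) dw = -J^T r
        dw = np.linalg.solve(J.T @ J + mu * np.eye(len(w)), -J.T @ r)
        if np.sum(residual(w + dw) ** 2) < np.sum(r ** 2):
            w, mu = w + dw, mu * 0.5   # accept step: enlarge trust region
        else:
            mu *= 10.0                  # reject step: shrink trust region
    return w
```

For a linear residual the iteration converges to the ordinary least-squares solution, which makes a convenient correctness check.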
Energy Technology Data Exchange (ETDEWEB)
Lange, Ilja; Reiter, Sina; Kniepert, Juliane; Piersimoni, Fortunato; Brenner, Thomas; Neher, Dieter, E-mail: neher@uni-potsdam.de [Institute of Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Strasse 24-25, 14476 Potsdam (Germany); Pätzel, Michael; Hildebrandt, Jana; Hecht, Stefan [Department of Chemistry and IRIS Adlershof, Humboldt-Universität zu Berlin, Brook-Taylor-Str. 2, 12489 Berlin (Germany)
2015-03-16
An approach is presented to modify the work function of solution-processed sol-gel derived zinc oxide (ZnO) over an exceptionally wide range of more than 2.3 eV. This approach relies on the formation of dense and homogeneous self-assembled monolayers based on phosphonic acids with different dipole moments. This allows us to apply ZnO as charge selective bottom electrodes in either regular or inverted solar cell structures, using poly(3-hexylthiophene):phenyl-C71-butyric acid methyl ester as the active layer. These devices compete with or even surpass the performance of the reference on indium tin oxide/poly(3,4-ethylenedioxythiophene) polystyrene sulfonate. Our findings highlight the potential of properly modified ZnO as electron or hole extracting electrodes in hybrid optoelectronic devices.
Energy Technology Data Exchange (ETDEWEB)
Saide, Pablo (CGRER, Center for Global and Regional Environmental Research, Univ. of Iowa, Iowa City, IA (United States)), e-mail: pablo-saide@uiowa.edu; Bocquet, Marc (Universite Paris-Est, CEREA Joint Laboratory Ecole des Ponts ParisTech and EDF RandD, Champs-sur-Marne (France); INRIA, Paris Rocquencourt Research Center (France)); Osses, Axel (Departamento de Ingeniera Matematica, Universidad de Chile, Santiago (Chile); Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile)); Gallardo, Laura (Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile); Departamento de Geofisica, Universidad de Chile, Santiago (Chile))
2011-07-15
When constraining surface emissions of air pollutants using inverse modelling, one often encounters spurious corrections to the inventory at places where emissions and observations are colocated, referred to here as the colocalization problem. Several approaches have been used to deal with this problem: coarsening the spatial resolution of emissions; adding spatial correlations to the covariance matrices; adding constraints on the spatial derivatives into the functional being minimized; and multiplying the emission error covariance matrix by weighting factors. An intercomparison of methods for a carbon monoxide inversion over a city shows that even though all methods diminish the colocalization problem and produce similar general patterns, the detailed information can change greatly according to the method used, ranging from smooth, isotropic and short-range modifications to less smooth, non-isotropic and long-range modifications. Poisson (non-Gaussian) and Gaussian assumptions both show these patterns, but in the Poisson case the emissions are naturally restricted to be positive and changes are given by means of multiplicative correction factors, producing results closer to the true nature of emission errors. Finally, we propose and test a new two-step, two-scale, fully Bayesian approach that deals with the colocalization problem and can be implemented for any prior density distribution.
CSIR Research Space (South Africa)
Oxtoby, Oliver F
2012-05-01
Full Text Available In this paper we detail a fast, fully-coupled, partitioned fluid–structure interaction (FSI) scheme. For the incompressible fluid, new fractional-step algorithms are proposed which make possible the fully implicit, but matrix-free, parallel solution...
Yankovskii, A. P.
2018-01-01
On the basis of the constitutive equations of the Rabotnov nonlinear hereditary theory of creep, the problem of the rheonomic flexural behavior of layered plates with a regular structure is formulated. Equations allowing one to describe, with different degrees of accuracy, the stress-strain state of such plates with account of their weakened resistance to transverse shear were obtained. From them, the relations of the nonclassical Reissner- and Reddy-type theories can be found. For axially loaded annular plates clamped at one edge and loaded quasistatically on the other edge, a simplified version of the refined theory, whose complexity is comparable to that of the Reissner and Reddy theories, is developed. The flexural strains of such metal-composite annular plates under short-term and long-term loadings at different levels of heat action are calculated. It is shown that, for plates with a relative thickness of order of 1/10, neither the classical theory nor the traditional nonclassical Reissner and Reddy theories guarantee reliable results for deflections even with a rough 10% accuracy. The accuracy of these theories decreases at elevated temperatures and with time under long-term loadings of structures. On the basis of the relations of the refined theory, it is revealed that, in bending of layered metal-composite heat-sensitive plates under elevated temperatures, marked edge effects arise in the neighborhood of the supported edge, which characterize the shear of these structures in the transverse direction.
DEFF Research Database (Denmark)
Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.
1994-01-01
Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...
Driver training in steps (DTS).
2010-01-01
For some years now, it has been possible in the Netherlands to follow a Driver Training in Steps (DTS) as well as the regular driver training. The DTS is a structured training method with clear training objectives which are categorized in four modules. Although the DTS is considerably better than
A two-step FEM-SEM approach for wave propagation analysis in cable structures
Zhang, Songhan; Shen, Ruili; Wang, Tao; De Roeck, Guido; Lombaert, Geert
2018-02-01
Vibration-based methods are among the most widely studied in structural health monitoring (SHM). It is well known, however, that the low-order modes, characterizing the global dynamic behaviour of structures, are relatively insensitive to local damage. Such local damage may be easier to detect by methods based on wave propagation which involve local high frequency behaviour. The present work considers the numerical analysis of wave propagation in cables. A two-step approach is proposed which allows taking into account the cable sag and the distribution of the axial forces in the wave propagation analysis. In the first step, the static deformation and internal forces are obtained by the finite element method (FEM), taking into account geometric nonlinear effects. In the second step, the results from the static analysis are used to define the initial state of the dynamic analysis which is performed by means of the spectral element method (SEM). The use of the SEM in the second step of the analysis allows for a significant reduction in computational costs as compared to a FE analysis. This methodology is first verified by means of a full FE analysis for a single stretched cable. Next, simulations are made to study the effects of damage in a single stretched cable and a cable-supported truss. The results of the simulations show how damage significantly affects the high frequency response, confirming the potential of wave propagation based methods for SHM.
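The second (SEM) step above works with exact frequency-domain element matrices. The sketch below shows the textbook dynamic stiffness of a taut string element, standing in for the paper's cable spectral element: the tension would come from the static FEM step, and sag and bending effects are neglected here as a simplifying assumption.

```python
import numpy as np

# Sketch of the SEM step for a taut cable segment: with axial tension T
# from the static FEM step, transverse waves travel at c = sqrt(T/m), and
# the spectral element uses the frequency-domain wavenumber k = omega / c.

def wavenumber(omega, tension, mass_per_length):
    c = np.sqrt(tension / mass_per_length)   # transverse wave speed
    return omega / c

def taut_string_dynamic_stiffness(omega, L, tension, mass_per_length):
    """Exact 2x2 frequency-domain stiffness of a taut string element."""
    k = wavenumber(omega, tension, mass_per_length)
    s = tension * k / np.sin(k * L)
    return np.array([[s * np.cos(k * L), -s],
                     [-s, s * np.cos(k * L)]])
```

A sanity check: as omega goes to zero the matrix reduces to the static transverse stiffness (T/L) [[1, -1], [-1, 1]].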
Scanning moiré and spatial-offset phase-stepping for surface inspection of structures
Yoneyama, S.; Morimoto, Y.; Fujigaki, M.; Ikeda, Y.
2005-06-01
In order to develop a high-speed and accurate surface inspection system for structures such as tunnels, a new surface profile measurement method using linear array sensors is studied. A sinusoidal grating is projected onto the structure surface. The deformed grating is then scanned by linear array sensors that move together with the grating projector. The phase of the grating is analyzed by a spatial-offset phase-stepping method to achieve accurate measurement. Surface profile measurements of a brick wall and of the concrete surface of a structure are demonstrated using the proposed method. Changes in the geometry or fabric of structures and defects on structure surfaces can be detected by the proposed method. It is expected that a surface profile inspection system for tunnels, measuring from a running train, can be constructed based on the proposed method.
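The phase analysis in such a scheme can be illustrated with the standard 4-sample phase-stepping formula: instead of temporally phase-shifted images, neighbouring samples of the scanned fringe pattern, offset by a quarter fringe period (a pi/2 phase offset), are combined. The sampling geometry here is an illustrative assumption, not the paper's exact optical layout.

```python
import numpy as np

# Sketch of 4-sample phase-stepping: i1..i4 sample the fringe
# a + b*cos(phi + n*pi/2) for n = 0..3, so the phase follows from
# arctan2 of the two quadrature differences.

def phase_from_offsets(i1, i2, i3, i4):
    """Recover the fringe phase from four samples offset by pi/2 each."""
    return np.arctan2(i4 - i2, i1 - i3)
```

With I_n = a + b cos(phi + n*pi/2), the differences give i4 - i2 = 2b sin(phi) and i1 - i3 = 2b cos(phi), so the background a and modulation b cancel out.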
Structural properties and complexity of a new network class: Collatz step graphs.
Directory of Open Access Journals (Sweden)
Frank Emmert-Streib
Full Text Available In this paper, we introduce a biologically inspired model to generate complex networks. In contrast to many other construction procedures for growing networks introduced so far, our method generates networks from one-dimensional symbol sequences that are related to the so called Collatz problem from number theory. The major purpose of the present paper is, first, to derive a symbol sequence from the Collatz problem, we call the step sequence, and investigate its structural properties. Second, we introduce a construction procedure for growing networks that is based on these step sequences. Third, we investigate the structural properties of this new network class including their finite scaling and asymptotic behavior of their complexity, average shortest path lengths and clustering coefficients. Interestingly, in contrast to many other network models including the small-world network from Watts & Strogatz, we find that CS graphs become 'smaller' with an increasing size.
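The step sequence derived from the Collatz map can be sketched directly: each iterate contributes one symbol according to its parity until 1 is reached. The two-letter alphabet here is an illustrative choice; the paper's exact symbol encoding and graph-construction rule from these sequences may differ.

```python
# Sketch of the symbol ("step") sequence derived from the Collatz map.

def collatz_step_sequence(n):
    """One symbol per Collatz step: 'E' for n -> n/2, 'O' for n -> 3n+1."""
    seq = []
    while n != 1:
        if n % 2 == 0:
            seq.append('E')     # even step: halve
            n //= 2
        else:
            seq.append('O')     # odd step: triple and add one
            n = 3 * n + 1
    return seq
```

For example, starting from 6 the iterates are 6, 3, 10, 5, 16, 8, 4, 2, 1, giving the sequence E O E O E E E E.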
Structures of adsorbed CO on atomically smooth and on stepped single crystal surfaces
International Nuclear Information System (INIS)
Madey, T.E.; Houston, J.E.
1980-01-01
The structures of molecular CO adsorbed on atomically smooth surfaces and on surfaces containing monatomic steps have been studied using the electron stimulated desorption ion angular distribution (ESDIAD) method. For CO adsorbed on the close-packed Ru(001) and W(110) surfaces, the dominant bonding mode is via the carbon atom, with the CO molecular axis perpendicular to the plane of the surface. For CO on atomically rough Pd(210), and for CO adsorbed at step sites on four different surfaces vicinal to W(110), the axis of the molecule is tilted or inclined away from the normal to the surface. The ESDIAD method, in which ion desorption angles are related to surface bond angles, provides a direct determination of the structures of adsorbed molecules and molecular complexes on surfaces.
Directory of Open Access Journals (Sweden)
Ramesh Kumar Lama
2017-01-01
Full Text Available Alzheimer’s disease (AD) is a progressive, neurodegenerative brain disorder that attacks neurotransmitters, brain cells, and nerves, affecting brain function, memory, and behavior, and finally causing dementia in elderly people. Despite its significance, there is currently no cure for it. However, there are medicines available on prescription that can help delay the progress of the condition. Thus, early diagnosis of AD is essential for patient care and related research. Major challenges in the proper diagnosis of AD using existing classification schemes are the small number of available training samples and the large number of possible feature representations. In this paper, we present and compare AD diagnosis approaches using structural magnetic resonance (sMR) images to discriminate AD, mild cognitive impairment (MCI), and healthy control (HC) subjects using a support vector machine (SVM), an import vector machine (IVM), and a regularized extreme learning machine (RELM). A greedy score-based feature selection technique is employed to select important feature vectors. In addition, a kernel-based discriminative approach is adopted to deal with complex data distributions. We compare the performance of these classifiers on volumetric sMR image data from Alzheimer’s disease neuroimaging initiative (ADNI) datasets. Experiments on the ADNI datasets showed that RELM with the feature selection approach can significantly improve the classification accuracy of AD versus MCI and HC subjects.
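A regularized extreme learning machine of the kind compared above is simple to sketch: a fixed random hidden layer followed by a ridge-regularized least-squares output layer. The architecture, hyperparameters, and the synthetic two-class data below are illustrative assumptions, not the paper's ADNI pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def relm_train(X, T, n_hidden=50, lam=1e-2, rng=rng):
    """RELM sketch: random input weights, closed-form regularized output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))   # fixed random input weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    # Ridge-regularized least squares: beta = (H^T H + lam I)^-1 H^T T
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def relm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Two well-separated synthetic classes standing in for sMR feature vectors.
X = np.vstack([rng.standard_normal((40, 5)) - 2, rng.standard_normal((40, 5)) + 2])
y = np.array([0] * 40 + [1] * 40)
T = np.eye(2)[y]                                      # one-hot targets
W, b, beta = relm_train(X, T)
acc = np.mean(relm_predict(X, W, b, beta) == y)
```

The regularization term `lam` plays the same trade-off role the abstract attributes to RELM: it stabilizes the output weights when training samples are few relative to the feature dimension.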
Edelman, David B; McMenamin, Mark; Sheesley, Peter; Pivar, Stuart
2016-09-01
We present a plausible account of the origin of the archetypal vertebrate bauplan. We offer a theoretical reconstruction of the geometrically regular structure of the blastula resulting from the sequential subdivision of the egg, followed by mechanical deformations of the blastula in subsequent stages of gastrulation. We suggest that the formation of the vertebrate bauplan during development, as well as fixation of its variants over the course of evolution, has been constrained and guided by global mechanical biases. Arguably, the role of such biases in directing morphology-though all but neglected in previous accounts of both development and macroevolution-is critical to any substantive explanation for the origin of the archetypal vertebrate bauplan. We surmise that the blastula inherently preserves the underlying geometry of the cuboidal array of eight cells produced by the first three cleavages that ultimately define the medial-lateral, dorsal-ventral, and anterior-posterior axes of the future body plan. Through graphical depictions, we demonstrate the formation of principal structures of the vertebrate body via mechanical deformation of predictable geometrical patterns during gastrulation. The descriptive rigor of our model is supported through comparisons with previous characterizations of the embryonic and adult vertebrate bauplane. Though speculative, the model addresses the poignant absence in the literature of any plausible account of the origin of vertebrate morphology. A robust solution to the problem of morphogenesis-currently an elusive goal-will only emerge from consideration of both top-down (e.g., the mechanical constraints and geometric properties considered here) and bottom-up (e.g., molecular and mechano-chemical) influences. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Mallika Thabuot
2016-02-01
Full Text Available Anodization of a Ti sheet in an ethylene glycol electrolyte containing 0.38 wt% NH4F with the addition of 1.79 wt% H2O at room temperature was studied. Applied potentials of 10-60 V and anodizing times of 1-3 h were used for single-step and three-step anodization within a two-parallel-electrode anodizing cell. The structural and textural properties were investigated by X-ray diffraction (XRD) and scanning electron microscopy (SEM). After annealing at 600°C in an air furnace for 3 h, the TiO2 nanotubes were transformed to a higher proportion of the anatase crystal phase. Crystallization of the anatase phase was also enhanced as the duration of the final anodization step increased. Using single-step anodization, the pore texture of the oxide film started to appear at an applied potential of 30 V. A better-ordered arrangement of the TiO2-nanotube array with larger pore size was obtained as the applied potential increased. An applied potential of 60 V was selected for the three-step anodization with anodizing times of 1-3 h. Results showed that well-smoothed surface coverage with a higher density of porous TiO2 was achieved by prolonging the time at the first and second steps; however, tubes discontinuous in length were produced instead of long vertical tubes. The layer thickness of the anodic oxide film depended on the anodizing time at the last anodization step. A better-arranged nanostructured TiO2 was produced using three-step anodization under 60 V with 3 h for each step.
Selective adsorption of a supramolecular structure on flat and stepped gold surfaces
Peköz, Rengin; Donadio, Davide
2018-04-01
Halogenated aromatic molecules assemble on surfaces forming both hydrogen and halogen bonds. Even though these systems have been intensively studied on flat metal surfaces, high-index vicinal surfaces remain challenging, as they may induce complex adsorbate structures. The adsorption of 2,6-dibromoanthraquinone (2,6-DBAQ) on flat and stepped gold surfaces is studied by means of van der Waals corrected density functional theory. Equilibrium geometries and corresponding adsorption energies are systematically investigated for various adsorption configurations. It is shown that bridge sites and step edges are the preferred adsorption sites for single molecules on flat and stepped surfaces, respectively. The roles of van der Waals interactions, halogen bonds, and hydrogen bonds are explored for a monolayer coverage of 2,6-DBAQ molecules, revealing that molecular flexibility and intermolecular interactions stabilize two-dimensional networks on both flat and stepped surfaces. Our results provide a rationale for experimental observation of molecular carpeting on high-index vicinal surfaces of transition metals.
Effective field theory dimensional regularization
International Nuclear Information System (INIS)
Lehmann, Dirk; Prezeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.
Effective field theory dimensional regularization
Lehmann, Dirk; Prézeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.
Passive control of coherent structures in a modified backwards-facing step flow
Ormonde, Pedro C.; Cavalieri, André V. G.; Silva, Roberto G. A. da; Avelar, Ana C.
2018-05-01
We study a modified backwards-facing step flow, with the addition of two different plates; one is a baseline, impermeable plate and the second a perforated one. An experimental investigation is carried out for a turbulent reattaching shear layer downstream of the two plates. The proposed setup is a model configuration to study how the plate characteristics affect the separated shear layer and how turbulent kinetic energies and large-scale coherent structures are modified. Measurements show that the perforated plate changes the mean flow field, mostly by reducing the intensity of reverse flow close to the bottom wall. Disturbance amplitudes are significantly reduced up to five step heights downstream of the trailing edge of the plate, more specifically in the recirculation region. A loudspeaker is then used to introduce phase-locked, low-amplitude perturbations upstream of the plates, and phase-averaged measurements allow a quantitative study of large-scale structures in the shear-layer. The evolution of such coherent structures is evaluated in light of linear stability theory, comparing the eigenfunction of the Kelvin-Helmholtz mode to the experimental results. We observe a close match of linear-stability eigenfunctions with phase-averaged amplitudes for the two tested Strouhal numbers. The perforated plate is found to reduce the amplitude of the Kelvin-Helmholtz coherent structures in comparison to the baseline, impermeable plate, a behavior consistent with the predicted amplification trends from linear stability.
Fish mouths as engineering structures for vortical cross-step filtration
Sanderson, S. Laurie; Roberts, Erin; Lineburg, Jillian; Brooks, Hannah
2016-03-01
Suspension-feeding fishes such as goldfish and whale sharks retain prey without clogging their oral filters, whereas clogging is a major expense in industrial crossflow filtration of beer, dairy foods and biotechnology products. Fishes' abilities to retain particles that are smaller than the pore size of the gill-raker filter, including extraction of particles despite large holes in the filter, also remain unexplained. Here we show that unexplored combinations of engineering structures (backward-facing steps forming d-type ribs on the porous surface of a cone) cause fluid dynamic phenomena distinct from current biological and industrial filter operations. This vortical cross-step filtration model prevents clogging and explains the transport of tiny concentrated particles to the oesophagus using a hydrodynamic tongue. Mass transfer caused by vortices along d-type ribs in crossflow is applicable to filter-feeding duck beak lamellae and whale baleen plates, as well as the fluid mechanics of ventilation at fish gill filaments.
Wasley, David; Gale, Nichola; Roberts, Sioned; Backx, Karianne; Nelson, Annmarie; van Deursen, Robert; Byrne, Anthony
2018-02-01
Patients with advanced cancer frequently suffer a decline in activities associated with involuntary loss of weight and muscle mass (cachexia). This can profoundly affect function and quality of life. Although exercise participation can maintain physical and psychological function in patients with cancer, uptake is low in cachectic patients who are underrepresented in exercise studies. To understand how such patients' experiences are associated with exercise participation, we investigated exercise history, self-confidence, and exercise motivations in patients with established cancer cachexia, and relationships between relevant variables. Lung and gastrointestinal cancer outpatients with established cancer cachexia (n = 196) completed a questionnaire exploring exercise history and key constructs of the Theory of Planned Behaviour relating to perceived control, psychological adjustment, and motivational attitudes. Patients reported low physical activity levels, and few undertook regular structured exercise. Exercise self-efficacy was very low with concerns it could worsen symptoms and cause harm. Patients showed poor perceived control and a strong need for approval but received little advice from health care professionals. Preferences were for low intensity activities, on their own, in the home setting. Regression analysis revealed no significant factors related to the independent variables. Frequently employed higher intensity, group exercise models do not address the motivational and behavioural concerns of cachectic cancer patients in this study. Developing exercise interventions which match perceived abilities and skills is required to address challenges of self-efficacy and perceived control identified. Greater engagement of health professionals with this group is required to explore potential benefits of exercise. Copyright © 2017 John Wiley & Sons, Ltd.
One-step sol-gel imprint lithography for guided-mode resonance structures.
Huang, Yin; Liu, Longju; Johnson, Michael; C Hillier, Andrew; Lu, Meng
2016-03-04
Guided-mode resonance (GMR) structures consisting of sub-wavelength periodic gratings are capable of producing narrow-linewidth optical resonances. This paper describes a sol-gel-based imprint lithography method for the fabrication of submicron 1D and 2D GMR structures. This method utilizes a patterned polydimethylsiloxane (PDMS) mold to fabricate the grating coupler and waveguide for a GMR device using a sol-gel thin film in a single step. An organic-inorganic hybrid sol-gel film was selected as the imprint material because of its relatively high refractive index. The optical responses of several sol-gel GMR devices were characterized, and the experimental results were in good agreement with the results of electromagnetic simulations. The influence of processing parameters was investigated in order to determine how finely the spectral response and resonant wavelength of the GMR devices could be tuned. As an example potential application, refractometric sensing experiments were performed using a 1D sol-gel device. The results demonstrated a refractive index sensitivity of 50 nm/refractive index unit. This one-step fabrication process offers a simple, rapid, and low-cost means of fabricating GMR structures. We anticipate that this method can be valuable in the development of various GMR-based devices as it can readily enable the fabrication of complex shapes and allow the doping of optically active materials into sol-gel thin film.
One-step sol–gel imprint lithography for guided-mode resonance structures
International Nuclear Information System (INIS)
Huang, Yin; Liu, Longju; Lu, Meng; Johnson, Michael; C Hillier, Andrew
2016-01-01
Guided-mode resonance (GMR) structures consisting of sub-wavelength periodic gratings are capable of producing narrow-linewidth optical resonances. This paper describes a sol–gel-based imprint lithography method for the fabrication of submicron 1D and 2D GMR structures. This method utilizes a patterned polydimethylsiloxane (PDMS) mold to fabricate the grating coupler and waveguide for a GMR device using a sol–gel thin film in a single step. An organic–inorganic hybrid sol–gel film was selected as the imprint material because of its relatively high refractive index. The optical responses of several sol–gel GMR devices were characterized, and the experimental results were in good agreement with the results of electromagnetic simulations. The influence of processing parameters was investigated in order to determine how finely the spectral response and resonant wavelength of the GMR devices could be tuned. As an example potential application, refractometric sensing experiments were performed using a 1D sol–gel device. The results demonstrated a refractive index sensitivity of 50 nm/refractive index unit. This one-step fabrication process offers a simple, rapid, and low-cost means of fabricating GMR structures. We anticipate that this method can be valuable in the development of various GMR-based devices as it can readily enable the fabrication of complex shapes and allow the doping of optically active materials into sol–gel thin film. (paper)
Non destructive testing of heterogeneous structures with a step frequency radar
International Nuclear Information System (INIS)
Cattin, V.; Chaillout, J.J.
1998-01-01
Ground-penetrating radar has shown increasing potential for the diagnostics of soils or concrete, but the realisation of such a system and the interpretation of the data produced by this technique require a clear understanding of the physical electromagnetic processes that occur between media and waves. In this paper, the performance of a step frequency radar as a nondestructive technique to evaluate different heterogeneous laboratory-size structures is studied. Some critical points are examined, such as material properties, antenna effects, and the image reconstruction algorithm, to determine its viability for distinguishing small regions of interest.
One-Step Solvent Evaporation-Assisted 3D Printing of Piezoelectric PVDF Nanocomposite Structures.
Bodkhe, Sampada; Turcot, Gabrielle; Gosselin, Frederick P; Therriault, Daniel
2017-06-21
Development of a 3D printable material system possessing inherent piezoelectric properties to fabricate integrable sensors in a single-step printing process without poling is of importance to the creation of a wide variety of smart structures. Here, we study the effect of addition of barium titanate nanoparticles in nucleating piezoelectric β-polymorph in 3D printable polyvinylidene fluoride (PVDF) and fabrication of the layer-by-layer and self-supporting piezoelectric structures on a micro- to millimeter scale by solvent evaporation-assisted 3D printing at room temperature. The nanocomposite formulation obtained after a comprehensive investigation of composition and processing techniques possesses a piezoelectric coefficient, d31, of 18 pC/N, which is comparable to that of typical poled and stretched commercial PVDF film sensors. A 3D contact sensor that generates up to 4 V upon gentle finger taps demonstrates the efficacy of the fabrication technique. Our one-step 3D printing of piezoelectric nanocomposites can form ready-to-use, complex-shaped, flexible, and lightweight piezoelectric devices. When combined with other 3D printable materials, they could serve as stand-alone or embedded sensors in aerospace, biomedicine, and robotic applications.
Structural Studies of Silver Nanoparticles Obtained Through Single-Step Green Synthesis
Prasad Peddi, Siva; Abdallah Sadeh, Bilal
2015-10-01
Green synthesis of silver nanoparticles (AgNPs) has been the most prominent area of metallic-nanoparticle research for over a decade and a half now, owing both to the simplicity of preparation and to the applicability of biological species, with extensive applications in medicine and biotechnology, to reduce and cap the particles. The current article uses Eclipta prostrata leaf extract as the biological species to cap the AgNPs through a single-step process. The characterization data obtained were used for the analysis of the sample structure. The article focuses on the examination of the shape and size of the particles and the lattice parameters, and proposes a general scheme and a mathematical model for the analysis of their dependence. The data of the synthesized AgNPs have been used to advantage through the introduction of a structural shape factor for the crystalline nanoparticles. The structural properties of the AgNPs proposed and evaluated through a theoretical model were consistent with the experimental results. This approach opens scope for the structural studies of ultrafine particles prepared using biological methods.
One-step fabrication of superhydrophobic hierarchical structures by femtosecond laser ablation
International Nuclear Information System (INIS)
Rukosuyev, Maxym V.; Lee, Jason; Cho, Seong Jin; Lim, Geunbae; Jun, Martin B.G.
2014-01-01
Highlights: • Superhydrophobic surface patterns by femtosecond laser ablation in open air. • Micron scale ridge-like structure with superimposed submicron convex features. • Hydrophobic or even superhydrophobic behavior with no additional silanization. - Abstract: Hydrophobic surface properties are sought after in many areas of research, engineering, and consumer product development. Traditionally, hydrophobic surfaces are produced by using various types of coatings. However, introduction of foreign material onto the surface is often undesirable, as it changes the surface chemistry and cannot provide a long-lasting solution (i.e. reapplication is needed). Therefore, surface modification by transforming the base material itself can be preferable in many applications. Femtosecond laser ablation is one of the methods that can be used to create structures on the surface that exhibit hydrophobic behavior. The goal of the presented research was to create micro- and nanoscale patterns that exhibit hydrophobic properties with no additional post-treatment. As a result, dual-scale patterned structures were created on the surfaces of steel, aluminum, and tungsten carbide samples. Ablation was performed in the open air with no subsequent treatment. The resultant surfaces appeared to be strongly hydrophobic or even superhydrophobic, with contact angle values of 140° and higher. In conclusion, the nature of the surface hydrophobicity proved to be highly dependent on surface morphology, as the base materials used are intrinsically hydrophilic. It was also proven that the hydrophobicity-inducing structures can be manufactured using femtosecond laser machining in a single step with no subsequent post-treatment.
One-step fabrication of superhydrophobic hierarchical structures by femtosecond laser ablation
Energy Technology Data Exchange (ETDEWEB)
Rukosuyev, Maxym V.; Lee, Jason [Mechanical Engineering, University of Victoria (Canada); Cho, Seong Jin; Lim, Geunbae [Mechanical Engineering, Pohang University of Science and Technology, Pohang (Korea, Republic of); Jun, Martin B.G., E-mail: mbgjun@uvic.ca [Mechanical Engineering, University of Victoria (Canada)
2014-09-15
Highlights: • Superhydrophobic surface patterns by femtosecond laser ablation in open air. • Micron scale ridge-like structure with superimposed submicron convex features. • Hydrophobic or even superhydrophobic behavior with no additional silanization. - Abstract: Hydrophobic surface properties are sought after in many areas of research, engineering, and consumer product development. Traditionally, hydrophobic surfaces are produced by using various types of coatings. However, introduction of foreign material onto the surface is often undesirable, as it changes the surface chemistry and cannot provide a long-lasting solution (i.e. reapplication is needed). Therefore, surface modification by transforming the base material itself can be preferable in many applications. Femtosecond laser ablation is one of the methods that can be used to create structures on the surface that exhibit hydrophobic behavior. The goal of the presented research was to create micro- and nanoscale patterns that exhibit hydrophobic properties with no additional post-treatment. As a result, dual-scale patterned structures were created on the surfaces of steel, aluminum, and tungsten carbide samples. Ablation was performed in the open air with no subsequent treatment. The resultant surfaces appeared to be strongly hydrophobic or even superhydrophobic, with contact angle values of 140° and higher. In conclusion, the nature of the surface hydrophobicity proved to be highly dependent on surface morphology, as the base materials used are intrinsically hydrophilic. It was also proven that the hydrophobicity-inducing structures can be manufactured using femtosecond laser machining in a single step with no subsequent post-treatment.
DEFF Research Database (Denmark)
Yang, Zilong; Wang, Zhe; Zhang, Ying
2017-01-01
A series-connected step-up structure, instead of a line-frequency step-up transformer, is proposed to connect PV directly to the 10 kV medium-voltage grid. This series-connected step-up PV system integrates multiple functions, including separated maximum power point tracking (MPPT), centralized energy storage, power...
Femtosecond laser pulses for fast 3-D surface profilometry of microelectronic step-structures.
Joo, Woo-Deok; Kim, Seungman; Park, Jiyong; Lee, Keunwoo; Lee, Joohyung; Kim, Seungchul; Kim, Young-Jin; Kim, Seung-Woo
2013-07-01
Fast, precise 3-D measurement of discontinuous step-structures fabricated on microelectronic products is essential for quality assurance of semiconductor chips, flat panel displays, and photovoltaic cells. Optical surface profilers of low-coherence interferometry have long been used for the purpose, but the vertical scanning range and speed are limited by the micro-actuators available today. Besides, the lateral field-of-view extendable for a single measurement is restricted by the low spatial coherence of broadband light sources. Here, we cope with the limitations of the conventional low-coherence interferometer by exploiting unique characteristics of femtosecond laser pulses, i.e., low temporal but high spatial coherence. By scanning the pulse repetition rate with direct reference to the Rb atomic clock, step heights of ~69.6 μm are determined with a repeatability of 10.3 nm. The spatial coherence of femtosecond pulses provides a large field-of-view with superior visibility, allowing for a high volume measurement rate of ~24,000 mm3/s.
Selection of regularization parameter for l1-regularized damage detection
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
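The discrepancy-principle strategy described above can be sketched on a toy sparse recovery problem: scan the regularization parameter and keep the value whose residual variance is closest to the (known) noise variance. The ISTA solver, the toy linear system, and the noise level below are illustrative assumptions, not the paper's beam or frame models.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

m, n = 60, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[2, 7]] = [1.5, -2.0]               # sparse "damage" vector: two nonzeros
sigma = 0.1
y = A @ x_true + sigma * rng.standard_normal(m)

# Discrepancy principle: pick lam whose residual variance best matches sigma^2.
best_lam, best_gap = None, np.inf
for lam in np.logspace(-3, 1, 30):
    r = A @ ista(A, y, lam) - y
    gap = abs(np.mean(r**2) - sigma**2)
    if gap < best_gap:
        best_lam, best_gap = lam, gap
x_hat = ista(A, y, best_lam)
```

At the selected parameter the two "damaged" entries are recovered while the rest stay (near) zero, which is the sparsity behavior the damage-detection formulation relies on.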
One-step synthesis and structural features of CdS/montmorillonite nanocomposites.
Han, Zhaohui; Zhu, Huaiyong; Bulcock, Shaun R; Ringer, Simon P
2005-02-24
A novel synthesis method was introduced for nanocomposites of cadmium sulfide and montmorillonite. This method features the combination of an ion exchange process and an in situ hydrothermal decomposition process of a complex precursor, which is simple in contrast to the conventional synthesis methods that comprise two separate steps for similar nanocomposite materials. Cadmium sulfide species in the composites exist in the forms of pillars and nanoparticles, the crystallized sulfide particles are in the hexagonal phase, and their sizes change when the amount of the complex used for the synthesis is varied. Structural features of the nanocomposites are similar to those of the clay host but are modified by the introduction of the sulfide into the clay.
Ning, Tao; Xu, Wenguo; Lu, Shixiang
2011-09-01
Stable superhydrophobic platinum surfaces have been effectively fabricated on zinc substrates through a one-step replacement deposition process without further modification or any other post-treatment procedures. The fabrication process was controllable, as evidenced by the various morphologies and hydrophobic properties of the different prepared samples. Through SEM and water contact angle (CA) analysis, the effects of the reaction conditions on the surface morphology and hydrophobicity of the resulting surfaces were carefully studied. The results show that the optimum condition for superhydrophobic surface fabrication depends largely on the positioning of the zinc plate and the concentrations of the reactants. When the zinc plate was placed vertically and the concentration of the PtCl(4) solution was 5 mmol/L, the zinc substrate was covered by a novel and interesting composite structure. The structure was composed of microscale hexagonal cavities, a densely packed nanoparticle layer, and top micro- and nanoscale flower-like structures, which exhibit great surface roughness and porosity contributing to the superhydrophobicity. A maximal CA value of about 171° was obtained under this reaction condition. The XRD, XPS and EDX results indicate that crystalline pure platinum nanoparticles aggregated on the zinc substrates via a free deposition process. Copyright © 2011 Elsevier Inc. All rights reserved.
Regularities of radiation heredity
International Nuclear Information System (INIS)
Skakov, M.K.; Melikhov, V.D.
2001-01-01
Regularities of radiation heredity in metals and alloys are analyzed. It is concluded that irradiation causes thermodynamically irreversible changes in the structure of materials. Possible ways in which radiation effects are transmitted by heredity during high-temperature transformations in the materials are proposed. The phenomenon of radiation heredity may be put to practical use to control the structure of liquid metal and, correspondingly, the structure of the ingot via preliminary radiation treatment of the charge. Concentration microheterogeneities in the material defect structure induced by preliminary irradiation represent the genetic factor of radiation heredity [ru]
UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA
Directory of Open Access Journals (Sweden)
IONIŢĂ Elena
2015-06-01
Full Text Available This paper presents the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. Platonic and Archimedean polyhedra are modeled and unfolded using the 3ds Max program. This paper is intended as an example of descriptive geometry applications.
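The notion of a net can be made concrete without any modeling software. Below is a minimal sketch of one cross-shaped net of the unit cube as six unit squares in the plane (one of the cube's eleven distinct nets; the coordinates are an illustrative choice).

```python
# Lower-left corners of six unit squares forming a cross-shaped cube net:
# a vertical strip of four squares plus one square on each side of the second row.
squares = [(1, 0), (0, 1), (1, 1), (2, 1), (1, 2), (1, 3)]

def edge_adjacent(a, b):
    """Two axis-aligned unit squares share a full edge iff their corners differ
    by exactly 1 in one coordinate and 0 in the other."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

# A valid net is edge-connected: every face touches at least one other face.
connected = all(any(edge_adjacent(s, t) for t in squares if t != s) for s in squares)
area = len(squares)  # each unit square has area 1, so the net covers area 6
```

Folding the four strip squares into a ring and closing the two side squares recovers the cube, which is exactly the unfolding relationship the paper illustrates for the Platonic and Archimedean solids.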
International Nuclear Information System (INIS)
Schneeberger, B.; Breuleux, R.
1977-01-01
Assuming that earthquake ground motion is a stationary time function, the seismic analysis of a linear structure can be done by probabilistic methods using the 'power spectral density function' (PSD), instead of applying the more traditional time-step integration using earthquake time histories (TH). A given structure was analysed by both PSD and TH methods, computing and comparing 'floor response spectra'. The analysis using TH was performed for two different TH and different frequency intervals for the 'floor-response-spectra'. The analysis using PSD first produced PSD functions of the responses of the floors, and these were then converted into 'floor-response-spectra'. Plots of the resulting 'floor-response-spectra' show: (1) The agreement of TH and PSD results is quite close. (2) The curves produced by PSD are much smoother than those produced by TH and mostly form an envelope of the latter. (3) The curves produced by TH are quite jagged, with the location and magnitude of the peaks depending on the choice of frequencies at which the 'floor-response-spectra' were evaluated and on the choice of TH. (Auth.)
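The PSD route can be sketched for a single-degree-of-freedom linear oscillator: for stationary input, the response PSD is simply |H(w)|^2 times the input PSD, so response statistics follow without any time-step integration. The white-noise input and the oscillator parameters below are illustrative assumptions, not the analysed structure.

```python
import numpy as np

wn, zeta = 2 * np.pi * 5.0, 0.05          # 5 Hz natural frequency, 5% damping
w = np.linspace(0.1, 2 * np.pi * 20, 2000)  # angular frequency grid up to 20 Hz
S_in = np.ones_like(w)                      # flat (white-noise) input PSD

# |H(w)|^2 for the displacement response of a damped SDOF oscillator
H2 = 1.0 / ((wn**2 - w**2) ** 2 + (2 * zeta * wn * w) ** 2)
S_out = H2 * S_in                           # response PSD, no time integration

# The response PSD peaks near the natural frequency, as a floor spectrum would.
peak_hz = w[np.argmax(S_out)] / (2 * np.pi)
```

The smoothness observation (2) in the abstract follows from this picture: S_out inherits the smooth shape of |H(w)|^2, whereas a spectrum computed from one finite time history carries that record's sample-to-sample jaggedness.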
General inverse problems for regular variation
DEFF Research Database (Denmark)
Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan
2014-01-01
Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
Coordinate-invariant regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-01-01
A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc
DEFF Research Database (Denmark)
Kamikawa, Naoya; Huang, Xiaoxu; Hansen, Niels
2008-01-01
temperature before annealing at high temperature. By this two-step process, the structure is homogenized and the stored energy is reduced significantly during the first annealing step. As an example, high-purity aluminum has been deformed to a total reduction of 98.4% (equivalent strain of 4.8) by accumulative roll-bonding at room temperature. Isochronal annealing for 0.5 h of the deformed samples shows the occurrence of recrystallization at 200 °C and above. However, when introducing an annealing step for 6 h at 175 °C, no significant recrystallization is observed and relatively homogeneous structures are obtained when the samples afterwards are annealed at higher temperatures up to 300 °C. To underpin these observations, the structural evolution has been characterized by transmission electron microscopy, showing that significant annihilation of high-angle boundaries, low-angle dislocation boundaries...
Manifold Regularized Correlation Object Tracking
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2017-01-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions.
International Nuclear Information System (INIS)
Cernomorcenco, Andrei; Notingher, Petru Jr.
2008-01-01
The thermal step method is a nondestructive technique for determining electric charge distribution across solid insulating structures. It consists in measuring and analyzing a transient capacitive current due to the redistribution of influence charges when the sample is crossed by a thermal wave. This work concerns the application of the technique to inhomogeneous insulating structures. A general equation of the thermal step current appearing in such a sample is established. It is shown that this expression is close to the one corresponding to a homogeneous sample and allows using similar techniques for calculating electric field and charge distribution
Brambilla, Luigi; Tommasini, Matteo; Botiz, Ioan; Rahimi, Khosrow; Agumba, John O.; Stingelin, Natalie; Zerbi, Giuseppe
2014-01-01
Comparative analysis of the infrared and Raman spectra of octa(3-hexylthiophene) (3HT)8, trideca(3-hexylthiophene) (3HT)13, and poly(3-hexylthiophene) P3HT recorded in various phases, namely, amorphous, semicrystalline, polycrystalline and single crystal. We have based our analysis on the spectra of the (3HT)8 single crystal (whose structure has been determined by selected area electron diffraction) taken as reference
Brambilla, Luigi
2014-10-14
© 2014 American Chemical Society. In this work, we report a comparative analysis of the infrared and Raman spectra of octa(3-hexylthiophene) (3HT)8, trideca(3-hexylthiophene) (3HT)13, and poly(3-hexylthiophene) P3HT recorded in various phases, namely, amorphous, semicrystalline, polycrystalline and single crystal. We have based our analysis on the spectra of the (3HT)8 single crystal (whose structure has been determined by selected area electron diffraction) taken as reference and on the results of DFT calculations and molecular vibrational dynamics. New and precise spectroscopic markers of the molecular structures show the existence of three phases, namely: hairy (phase 1), ordered (phase 2), and disordered/amorphous (phase 3). Conceptually, the identified markers can be used for the molecular structure analysis of other similar systems.
Energy Technology Data Exchange (ETDEWEB)
Yasuzawa, Y.; Kagawa, K.; Kitabayashi, K. [Kyushu University, Fukuoka (Japan); Kawano, D. [Mitsubishi Heavy Industries, Ltd., Tokyo (Japan)
1997-08-01
The theory and formulation for the numerical response analysis of a large floating structure in regular waves are given. This paper also reports the comparison between the experiment at the Shipping Research Institute of the Ministry of Transport and the results calculated using the numerical analysis codes of this study. The effect of the bending rigidity of a floating structure and of the wave direction on the dynamic response of the structure was examined by numerical calculation. When the ratio of structure length to incident wavelength (L/λ) is low, the response amplitude on the transmission side becomes higher in a wave-dominated response. Hydrodynamic elasticity exerts a dominant influence when L/λ becomes higher. For oblique incident waves, the maximum response does not necessarily appear on the incidence side, and the response distribution is also complicated; for example, there exist portions where hardly any flexural amplitude appears. A long-structure response can be predicted from a short-structure response to some degree. Response properties differ when the rigidity based on the similarity rule differs greatly, irrespective of the same L/λ. For higher L/λ, the wave response can be easily predicted when the diffraction force is replaced by a concentrated exciting force on the incidence side. 13 refs., 14 figs., 3 tabs.
SQoS based Planning using 4-regular Grid for Optical Fiber Networks
DEFF Research Database (Denmark)
Riaz, Muhammad Tahir; Pedersen, Jens Myrup; Madsen, Ole Brun
optical fiber based network infrastructures. In the first step of SQoS based planning, this paper describes how 4-regular Grid structures can be implemented in the physical level of optical fiber network infrastructures. A systematic approach for implementing the Grid structure is presented. We used...
SQoS based Planning using 4-regular Grid for Optical Fiber Networks
DEFF Research Database (Denmark)
Riaz, Muhammad Tahir; Pedersen, Jens Myrup; Madsen, Ole Brun
2005-01-01
optical fiber based network infrastructures. In the first step of SQoS based planning, this paper describes how 4-regular Grid structures can be implemented in the physical level of optical fiber network infrastructures. A systematic approach for implementing the Grid structure is presented. We used...
Energy Technology Data Exchange (ETDEWEB)
Tom, Nathan M.; Madhi, Farshad; Yeung, Ronald W.
2016-07-01
The aim of this paper is to maximize the power-to-load ratio of the Berkeley Wedge: a one-degree-of-freedom, asymmetrical, energy-capturing, floating breakwater of high performance that is relatively free of viscosity effects. Linear hydrodynamic theory was used to calculate bounds on the expected time-averaged power (TAP) and corresponding surge restraining force, pitch restraining torque, and power take-off (PTO) control force when assuming that the heave motion of the wave energy converter remains sinusoidal. This particular device was documented to be an almost-perfect absorber if one-degree-of-freedom motion is maintained. The success of such or similar future wave energy converter technologies would require the development of control strategies that can adapt device performance to maximize energy generation in operational conditions while mitigating hydrodynamic loads in extreme waves to reduce the structural mass and overall cost. This paper formulates the optimal control problem to incorporate metrics that provide a measure of the surge restraining force, pitch restraining torque, and PTO control force. The optimizer must now handle an objective function with competing terms in an attempt to maximize power capture while minimizing structural and actuator loads. A penalty weight is placed on the surge restraining force, pitch restraining torque, and PTO actuation force, thereby allowing the control focus to be placed either on power absorption or load mitigation. Thus, in achieving these goals, a per-unit gain in TAP would not lead to a greater per-unit demand in structural strength, hence yielding a favorable benefit-to-cost ratio. Demonstrative results in the form of TAP, reactive TAP, and the amplitudes of the surge restraining force, pitch restraining torque, and PTO control force are shown for the Berkeley Wedge example.
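The competing-terms objective can be sketched in miniature. The Python sweep below uses a hypothetical one-degree-of-freedom resistive-control model at resonance; the excitation amplitude Fe, radiation damping B, and penalty weight w are illustrative assumptions, not the paper's model of the Berkeley Wedge. With no penalty the optimum PTO damping is impedance matching (c = B); a positive penalty on the PTO force shifts the optimum toward lighter loads.

```python
Fe = 1000.0   # wave excitation force amplitude (N), assumed
B = 50.0      # radiation damping (N*s/m), assumed

def objective(c, w):
    """Time-averaged absorbed power minus a penalty on PTO force amplitude."""
    v = Fe / (B + c)          # velocity amplitude under resistive control
    tap = 0.5 * c * v * v     # time-averaged power
    f_pto = c * v             # PTO force amplitude
    return tap - w * f_pto**2

def best_damping(w):
    """Grid search for the PTO damping maximizing the penalized objective."""
    cs = [0.5 * k for k in range(1, 1000)]
    return max(cs, key=lambda c: objective(c, w))
```

Calling `best_damping(0.0)` recovers c = B (impedance matching), while any positive weight w yields a smaller optimal damping, i.e. the per-unit gain in power no longer justifies the per-unit growth in structural load.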
International Nuclear Information System (INIS)
Khvostyntsev, K.I.; Kuz'mina, T.S.; Kruglov, V.V.; Lukovkin, G.F.
1982-01-01
Effect of electroerosion machining on the surface state of pearlitic-class steel of the 12KhN4MFA type, of the bronzes BrAMts 9-2 and BrAZhNMts 9-4-4-1, and of the alloy PT-3V has been studied. As a result of electroerosion machining (EEM), a transformed layer of overheated and partially melted metal forms on the surface of the metallic materials; its structure and hardness depend on the chemical composition of the treated materials, their tendency to phase transformations, and their saturation with introduced elements
Energy Technology Data Exchange (ETDEWEB)
Khvostyntsev, K.I.; Kuz'mina, T.S.; Kruglov, V.V.; Lukovkin, G.F.
1982-01-01
Effect of electroerosion machining on the surface state of pearlitic-class steel of the 12KhN4MFA type, of the bronzes BrAMts 9-2 and BrAZhNMts 9-4-4-1, and of the alloy PT-3V has been studied. As a result of electroerosion machining (EEM), a transformed layer of overheated and partially melted metal forms on the surface of the metallic materials; its structure and hardness depend on the chemical composition of the treated materials, their tendency to phase transformations, and their saturation with introduced elements.
Two-step values for games with two-level communication structure
Béal, Silvain; Khmelnitskaya, Anna Borisovna; Solal, Philippe
TU games with two-level communication structure, in which a two-level communication structure relates fundamentally to the given coalition structure and consists of a communication graph on the collection of the a priori unions in the coalition structure, as well as a collection of communication
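Two-step values of this kind build on Shapley-type allocation rules. As background only (this is not the paper's two-step value for two-level communication structures), a minimal Python computation of the classical Shapley value of a TU game, averaging each player's marginal contribution over all orderings; the 3-player "glove game" used to exercise it is a standard textbook example:

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value of a TU game with characteristic function v on frozensets:
    average each player's marginal contribution over all player orderings."""
    phi = dict.fromkeys(players, 0.0)
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in players}

# Glove game: player 1 holds a left glove, players 2 and 3 right gloves;
# a coalition is worth 1 if it can form a matched pair.
v = lambda s: 1.0 if 1 in s and (2 in s or 3 in s) else 0.0
phi = shapley([1, 2, 3], v)
```

The values are efficient (they sum to v of the grand coalition), and the scarce left-glove holder receives the larger share (2/3 against 1/6 each).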
Branch structures at the steps of the devil's staircase of the sine circle map
International Nuclear Information System (INIS)
Wen, H.C.; Duong-van, M.
1992-01-01
We have discovered substructures consisting of branches at each step of the devil's staircase of the sine circle map. These substructures are found to follow the hierarchy of the Farey tree. We develop a formalism to relate the rational winding number W=p/q to the number of branches in these substructures
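The winding number W = p/q in question can be estimated numerically. A minimal Python sketch of the sine circle map (parameter values are chosen for illustration; at K = 1, where the staircase is complete, nearby Ω values inside the W = 1/2 step lock to the same rational winding number):

```python
import math

def winding_number(omega, k, n_iter=20000, theta0=0.0):
    """Estimate the winding number W of the sine circle map
    theta_{n+1} = theta_n + omega - (k / (2*pi)) * sin(2*pi*theta_n),
    iterated on the real line (no mod 1), as the mean rotation per step."""
    theta = theta0
    for _ in range(n_iter):
        theta += omega - (k / (2.0 * math.pi)) * math.sin(2.0 * math.pi * theta)
    return (theta - theta0) / n_iter

# Two omega values inside the W = 1/2 step of the devil's staircase at K = 1
w_a = winding_number(0.50, 1.0)
w_b = winding_number(0.51, 1.0)
```

Sweeping omega over [0, 1] and plotting W against omega reproduces the devil's staircase whose step substructure is analyzed in the paper.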
Energy Technology Data Exchange (ETDEWEB)
Kuczumow, A., E-mail: kuczon@kul.lublin.pl [Department of Chemistry, Lublin Catholic University, 20-718 Lublin (Poland); Nowak, J. [Department of Chemistry, Lublin Catholic University, 20-718 Lublin (Poland); ChaLas, R. [Department of Conservative Medicine, Lublin Medical University, 20-081 Lublin (Poland)
2011-10-15
The aim of this paper was to characterize the chemical and structural changes in the apatites that form both the enamel and the dentin of the human tooth. This was achieved by scrutinizing linear elemental profiles along cross-sections of human molar teeth. The task was essentially accomplished with the Electron Probe Microanalysis method, supplemented by Micro-Raman spectrometry. All the trends in the linear profiles were strictly determined. In the enamel zone they were either increasing or decreasing curves of exponential character. The investigations started at the tooth surface and moved towards the dentin-enamel junction (DEJ). The results of the elemental studies were more visible when the detected material was divided, in an arbitrary way, into the prevailing 'core' enamel (≈93.5% of the total mass) and the remaining 'overbuilt' enamel. The material in the 'core' enamel was fully stable, with clearly determined chemical and mechanical features. The case was totally different in the 'overbuilt' enamel, with dynamic changes in composition. In the 'overbuilt' layer the Ca, P, Cl and F profiles present decaying distribution curves, whereas Mg, Na, K and CO₃²⁻ present growing ones. Close to the surface of the tooth a mixture of hydroxy-, chlor- and fluor-apatite is formed, which is much more resistant than the rest of the enamel. On passing towards the DEJ, the apatite is enriched with Na, Mg and CO₃²⁻. In this location, three of the six phosphate groups were substituted with carbonate groups. Simultaneously, Mg is associated with the hydroxyl groups around the hexad axis. In this way, the mechanisms of the exchange reactions were established. Crystallographic structures were proposed for the new phases located close to the DEJ. In the dentin zone, the variability of elemental profiles looks different, with
Directory of Open Access Journals (Sweden)
Mingyi Wu
2015-04-01
Full Text Available Sulfated fucans, complex polysaccharides, exhibit various biological activities. Herein, we purified two fucans from the sea cucumbers Holothuria edulis and Ludwigothurea grisea. Their structures were verified by means of HPGPC, FT-IR, GC–MS and NMR. As a result, a novel structural motif for this type of polymer is reported. The fucans have a unique structure composed of a central core of regular (1→2)- and (1→3)-linked tetrasaccharide repeating units. Approximately 50% of the units from L. grisea (100% for the H. edulis fucan) carry oligosaccharide side chains formed by nonsulfated fucose units linked to the O-4 position of the central core. Anticoagulant activity assays indicate that the sea cucumber fucans strongly inhibit human blood clotting through the intrinsic pathway of the coagulation cascade. Moreover, the mechanism of anticoagulant action of the fucans is selective inhibition of thrombin activity by heparin cofactor II. The distinctive tetrasaccharide repeating units contribute to the anticoagulant action. Additionally, unlike fucans from marine algae, the sea cucumber fucans do not induce platelet aggregation despite their great molecular weights and abundant sulfates. Overall, our results may be helpful in understanding the structure-function relationships of well-defined polysaccharides from invertebrates as new types of safer anticoagulants.
DEFF Research Database (Denmark)
Lintner, Nathanael G; Kerou, Melina; Brumfield, Susan K
2011-01-01
In response to viral infection, many prokaryotes incorporate fragments of virus-derived DNA into loci called clustered regularly interspaced short palindromic repeats (CRISPRs). The loci are then transcribed, and the processed CRISPR transcripts are used to target invading viral DNA and RNA. The Escherichia coli "CRISPR-associated complex for antiviral defense" (CASCADE) is central in targeting invading DNA. Here we report the structural and functional characterization of an archaeal CASCADE (aCASCADE) from Sulfolobus solfataricus. Tagged Csa2 (Cas7) expressed in S. solfataricus co-purifies with Cas5a-, Cas6-, Csa5-, and Cas6-processed CRISPR-RNA (crRNA). Csa2, the dominant protein in aCASCADE, forms a stable complex with Cas5a. Transmission electron microscopy reveals a helical complex of variable length, perhaps due to substoichiometric amounts of other CASCADE components. A recombinant Csa2...
Move-step structures of literature Ph.D. theses in the Japanese and UK higher education
Directory of Open Access Journals (Sweden)
Masumi Ono
2017-02-01
Full Text Available This study investigates the move-step structures of Japanese and English introductory chapters of literature Ph.D. theses and the perceptions of Ph.D. supervisors in the Japanese and UK higher education contexts. In this study, 51 Japanese and 48 English introductory chapters of literature Ph.D. theses written by first-language writers of Japanese or English were collected from three Japanese and three British universities. Genre analysis of the 99 introductory chapters was conducted using a revised "Create a Research Space" (CARS) model (Swales, 1990, 2004). Semi-structured interviews were also carried out with seven Japanese supervisors and ten British supervisors. The findings showed that the introductory chapters of literature Ph.D. theses had 13 move-specific steps and five move-independent steps, each of which presented different cyclical patterns, indicating cross-cultural similarities and differences between the two language groups. The perceptions of supervisors varied in terms of the importance and the sequence of individual steps in the introductory chapters. Based on the textual and interview analyses, a discipline-oriented Open-CARS model is proposed for pedagogical purposes of teaching and writing about this genre in Japanese or English in the field of literature and related fields.
Energy Technology Data Exchange (ETDEWEB)
Barbee, T. W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schena, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-08-29
This was a collaborative effort between Lawrence Livermore National Security, LLC, as manager and operator of Lawrence Livermore National Laboratory (LLNL), and TroyCap LLC, to develop manufacturing steps for commercial production of nano-structure capacitors. The technical objective of this project was to demonstrate deposition rates of selected dielectric materials 2 to 5 times higher than those typical of current technology.
Lintner, Nathanael G; Kerou, Melina; Brumfield, Susan K; Graham, Shirley; Liu, Huanting; Naismith, James H; Sdano, Matthew; Peng, Nan; She, Qunxin; Copié, Valérie; Young, Mark J; White, Malcolm F; Lawrence, C Martin
2011-06-17
In response to viral infection, many prokaryotes incorporate fragments of virus-derived DNA into loci called clustered regularly interspaced short palindromic repeats (CRISPRs). The loci are then transcribed, and the processed CRISPR transcripts are used to target invading viral DNA and RNA. The Escherichia coli "CRISPR-associated complex for antiviral defense" (CASCADE) is central in targeting invading DNA. Here we report the structural and functional characterization of an archaeal CASCADE (aCASCADE) from Sulfolobus solfataricus. Tagged Csa2 (Cas7) expressed in S. solfataricus co-purifies with Cas5a-, Cas6-, Csa5-, and Cas6-processed CRISPR-RNA (crRNA). Csa2, the dominant protein in aCASCADE, forms a stable complex with Cas5a. Transmission electron microscopy reveals a helical complex of variable length, perhaps due to substoichiometric amounts of other CASCADE components. A recombinant Csa2-Cas5a complex is sufficient to bind crRNA and complementary ssDNA. The structure of Csa2 reveals a crescent-shaped structure unexpectedly composed of a modified RNA-recognition motif and two additional domains present as insertions in the RNA-recognition motif. Conserved residues indicate potential crRNA- and target DNA-binding sites, and the H160A variant shows significantly reduced affinity for crRNA. We propose a general subunit architecture for CASCADE in other bacteria and Archaea.
Mulepati, Sabin; Bailey, Scott
2011-09-09
RNA transcribed from clustered regularly interspaced short palindromic repeats (CRISPRs) protects many prokaryotes from invasion by foreign DNA such as viruses, conjugative plasmids, and transposable elements. Cas3 (CRISPR-associated protein 3) is essential for this CRISPR protection and is thought to mediate cleavage of the foreign DNA through its N-terminal histidine-aspartate (HD) domain. We report here the 1.8 Å crystal structure of the HD domain of Cas3 from Thermus thermophilus HB8. Structural and biochemical studies predict that this enzyme binds two metal ions at its active site. We also demonstrate that the single-stranded DNA endonuclease activity of this T. thermophilus domain is activated not by magnesium but by transition metal ions such as manganese and nickel. Structure-guided mutagenesis confirms the importance of the metal-binding residues for the nuclease activity and identifies other active site residues. Overall, these results provide a framework for understanding the role of Cas3 in the CRISPR system.
Simulations geometric structures of the stepped profile bearing surface of the piston
Directory of Open Access Journals (Sweden)
Wroblewski Emil
2017-01-01
Full Text Available The piston-pin-piston ring assembly is the main source of mechanical losses. Reducing friction losses in the piston-cylinder group increases the overall efficiency of the engine and thus reduces fuel consumption. One way to reduce the area covered by the oil film is to modify the bearing surface of the piston by adjusting its profile. In this paper, simulation results for a stepped microgeometry of the piston bearing surface are presented.
One-step synthesis of mesoporous pentasil zeolite with single-unit-cell lamellar structural features
Tsapatsis, Michael; Zhang, Xueyi
2015-11-17
A method for making a pentasil zeolite material includes forming an aqueous solution that includes a structure directing agent and a silica precursor; and heating the solution at a sufficient temperature and for sufficient time to form a pentasil zeolite material from the silica precursor, wherein the structure directing agent includes a quaternary phosphonium ion.
Computer Programming and Biomolecular Structure Studies: A Step beyond Internet Bioinformatics
Likic, Vladimir A.
2006-01-01
This article describes the experience of teaching structural bioinformatics to third year undergraduate students in a subject titled "Biomolecular Structure and Bioinformatics." Students were introduced to computer programming and used this knowledge in a practical application as an alternative to the well established Internet bioinformatics…
Meng, Lingqian; Mezari, Brahim; Goesten, Maarten G.; Hensen, Emiel J. M.
2017-01-01
Hierarchical ZSM-5 zeolite is hydrothermally synthesized in a single step with cetyltrimethylammonium (CTA) hydroxide acting as mesoporogen and structure-directing agent. Essential to this synthesis is the replacement of NaOH with KOH. An in-depth solid-state NMR study reveals that, after early electrostatic interaction between condensed silica and the head group of CTA, ZSM-5 crystallizes around the structure-directing agent. The crucial aspect of using KOH instead of NaOH lies in the faster...
Role of step edges on the structure formation of α-6T on Ag(441)
Wagner, Thorsten; Fritz, Daniel Roman; Rudolfová, Zdena; Zeppenfeld, Peter
2018-01-01
Controlling the orientation of organic molecules on surfaces is important in order to tune the physical properties of organic thin films and, thereby, increase the performance of organic thin film devices. Here, we present a scanning tunneling microscopy (STM) and photoelectron emission microscopy (PEEM) study of the deposition of the organic dye pigment α-sexithiophene (α-6T) on the vicinal Ag(441) surface. In the presence of the steps on the Ag(441) surface, the α-6T molecules exclusively align parallel to the step edges oriented along the [11̄0] direction of the substrate. The STM results further reveal that the adsorption of the α-6T molecules is accompanied by various restructurings of the substrate surface: initially, the molecules prefer the Ag(551) building blocks of the Ag(441) surface. The Ag(551) termination of the terraces then changes to a predominantly Ag(331) one upon completion of the first α-6T monolayer. Upon completion of the two-layer-thick wetting layer, the original ratio of Ag(331) and Ag(551) building blocks (≈1:1) is recovered, but a phase separation into microfacets composed either of Ag(331) or of Ag(551) building blocks is found.
Directory of Open Access Journals (Sweden)
Marianne Vergez-Couret
2012-12-01
Full Text Available Little attention has been devoted to interleaved discourse structures despite the challenges they offer to discourse coherence studies. Interleaved structures occur frequently if several dimensions of discourse coherence (semantic, intentional, textual, etc.) are considered simultaneously on relatively large texts. Two-step enumerative structures, a kind of interleaved structure, are enumerative structures in which the items are further developed in an enumerative fashion. We propose in this paper a treatment of the semantic and textual dimensions of such structures. We also propose some generalizations for the treatment of interleaved structures.
van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime
2016-01-01
This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,
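Distance-regularity can be checked directly from the definition: the intersection numbers b_i and c_i (neighbours of v one step further from, or closer to, u) must depend only on the distance i = d(u, v). A small Python sketch using BFS; the Petersen graph used as input is a classical distance-regular example chosen for illustration, not taken from the survey:

```python
from collections import deque

def distances_from(adj, s):
    """BFS distances from s in an unweighted graph given as an adjacency dict."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def intersection_array(adj):
    """Return (b, c) if the graph is distance-regular, else None.
    For each pair (u, v) at distance i, count neighbours of v at
    distance i+1 (b_i) and i-1 (c_i) from u; they must only depend on i."""
    dist = {u: distances_from(adj, u) for u in adj}
    d = max(max(du.values()) for du in dist.values())
    b, c = {}, {}
    for u in adj:
        for v in adj:
            i = dist[u][v]
            bi = sum(1 for w in adj[v] if dist[u][w] == i + 1)
            ci = sum(1 for w in adj[v] if dist[u][w] == i - 1)
            if b.setdefault(i, bi) != bi or c.setdefault(i, ci) != ci:
                return None
    return [b[i] for i in range(d)], [c[i] for i in range(1, d + 1)]

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5
edges = ([(i, (i + 1) % 5) for i in range(5)] +
         [(i, i + 5) for i in range(5)] +
         [(5 + i, 5 + (i + 2) % 5) for i in range(5)])
adj = {u: set() for u in range(10)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
```

For the Petersen graph this returns the intersection array {3, 2; 1, 1} used in the literature on distance-regular graphs.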
Nijholt, Antinus
1980-01-01
Culik II and Cogen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular
Nam, Ki Hyun; Kurinov, Igor; Ke, Ailong
2011-09-02
Clustered regularly interspaced short palindromic repeats (CRISPR) and their associated protein genes (cas genes) are widespread in bacteria and archaea. They form a line of RNA-based immunity to eradicate invading bacteriophages and malicious plasmids. A key molecular event during this process is the acquisition of new spacers into the CRISPR loci to guide the selective degradation of the matching foreign genetic elements. Csn2 is a Nmeni subtype-specific cas gene required for new spacer acquisition. Here we characterize the Enterococcus faecalis Csn2 protein as a double-stranded (ds-) DNA-binding protein and report its 2.7 Å tetrameric ring structure. The inner circle of the Csn2 tetrameric ring is ∼26 Å wide and populated with conserved lysine residues poised for nonspecific interactions with ds-DNA. Each Csn2 protomer contains an α/β domain and an α-helical domain; significant hinge motion was observed between these two domains. Ca(2+) was located at strategic positions in the oligomerization interface. We further showed that removal of Ca(2+) ions altered the oligomerization state of Csn2, which in turn severely decreased its affinity for ds-DNA. In summary, our results provided the first insight into the function of the Csn2 protein in CRISPR adaptation by revealing that it is a ds-DNA-binding protein functioning at the quaternary structure level and regulated by Ca(2+) ions.
Energy Technology Data Exchange (ETDEWEB)
Nam, Ki Hyun; Kurinov, Igor; Ke, Ailong (Cornell); (NWU)
2012-05-22
Clustered regularly interspaced short palindromic repeats (CRISPR) and their associated protein genes (cas genes) are widespread in bacteria and archaea. They form a line of RNA-based immunity to eradicate invading bacteriophages and malicious plasmids. A key molecular event during this process is the acquisition of new spacers into the CRISPR loci to guide the selective degradation of the matching foreign genetic elements. Csn2 is a Nmeni subtype-specific cas gene required for new spacer acquisition. Here we characterize the Enterococcus faecalis Csn2 protein as a double-stranded (ds-) DNA-binding protein and report its 2.7 Å tetrameric ring structure. The inner circle of the Csn2 tetrameric ring is ≈26 Å wide and populated with conserved lysine residues poised for nonspecific interactions with ds-DNA. Each Csn2 protomer contains an α/β domain and an α-helical domain; significant hinge motion was observed between these two domains. Ca²⁺ was located at strategic positions in the oligomerization interface. We further showed that removal of Ca²⁺ ions altered the oligomerization state of Csn2, which in turn severely decreased its affinity for ds-DNA. In summary, our results provided the first insight into the function of the Csn2 protein in CRISPR adaptation by revealing that it is a ds-DNA-binding protein functioning at the quaternary structure level and regulated by Ca²⁺ ions.
Regular Expression Pocket Reference
Stubblebine, Tony
2007-01-01
This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
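A few of the constructs such a reference covers, shown here with Python's re module (one of the APIs the book documents); the log string is made up for illustration:

```python
import re

log = "2014-01-01 ERROR disk full; 2014-04-17 INFO ok"

# Character classes and quantifiers: find all ISO dates
dates = re.findall(r"\d{4}-\d{2}-\d{2}", log)

# Named groups and alternation: pick out a date together with its level
m = re.search(r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<level>ERROR|INFO)", log)

# Substitution with backreferences: reorder date components
swapped = re.sub(r"(\d{4})-(\d{2})-(\d{2})", r"\3/\2/\1", "2014-01-01")
```

The same patterns carry over with minor syntax differences to Perl, PCRE, Java, and the other engines the book compares.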
Energy Technology Data Exchange (ETDEWEB)
Yang, Lei, E-mail: nanoyang@qq.com; Jiang, Zhongcheng; Dong, Jiazhang; Zhang, Liuqian [Hunan University, College of Materials Science and Engineering (China); Pan, Anlian, E-mail: anlian.pan@gmail.com; Zhuang, Xiujuan [Hunan University, Key Laboratory for Micro-Nano Physics and Technology of Hunan Province (China)
2015-10-15
We report a scheme for investigating two-step stimulated structure changes of luminescence centers. Amorphous silica nanospheres with uniform diameters of 9–15 nm have been synthesized by the Stöber method. A strong hydroxyl-related infrared-absorption band is observed in the infrared spectrum. The surface hydroxyl groups exert great influence on the luminescent behavior of silica: they provide stable, intermediate energy states to accommodate excited electrons. The existence of these surface states reduces the energy barrier of photochemical reactions, creating the conditions for a two-step excitation process. On careful examination of the excitation and emission processes, the nearest excitation band is found to be absent in both the optical absorption spectrum and the excitation spectrum. This later-generated state confirms the creation of new luminescence centers as well as the occurrence of photochemical reactions. Stimulated by different energies, the two-step excitation process drives different photochemical reactions, prompting the generation of different lattice defects in the surface region of the silica. Thereby, tunable luminescence is achieved. After thermal treatment, a strong gap excitation band appears while the strong surface excitation band disappears, and the strong blue luminescence also disappears. This research is significant for precisely introducing structural defects and controlling the position of luminescence peaks.
Manifold Regularized Correlation Object Tracking.
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2018-05-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
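The regularizer described above, assigning similar labels to neighbor samples, is commonly expressed through a graph Laplacian. A minimal sketch under that assumption, using a plain linear predictor and a hand-built adjacency matrix rather than the paper's correlation-filter formulation (the names `manifold_ridge`, `lam1`, `lam2` are illustrative):

```python
import numpy as np

def manifold_ridge(X, y, L, lam1=1.0, lam2=0.1):
    """Ridge regression with a graph-Laplacian manifold penalty.

    Solves  min_w ||Xw - y||^2 + lam1 ||w||^2 + lam2 (Xw)^T L (Xw),
    so samples that are neighbors on the graph (encoded by the
    Laplacian L) receive similar predicted labels.
    """
    A = X.T @ X + lam1 * np.eye(X.shape[1]) + lam2 * X.T @ L @ X
    return np.linalg.solve(A, X.T @ y)

# toy data: 6 samples, 3 features; the last two samples are unlabeled (y = 0)
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
y = np.array([1.0, 1.0, -1.0, -1.0, 0.0, 0.0])

# hand-picked adjacency: unlabeled sample 4 neighbors 0 and 1, sample 5
# neighbors 2 and 3, so the manifold term propagates their labels
W = np.zeros((6, 6))
for i, j in [(4, 0), (4, 1), (5, 2), (5, 3)]:
    W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W   # unnormalized graph Laplacian

w = manifold_ridge(X, y, L)
print(w.shape)   # (3,)
```

With `lam2 = 0` the manifold term vanishes and the solution reduces to ordinary ridge regression, which makes the role of the extra term easy to check.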
Directory of Open Access Journals (Sweden)
Ibrahim Diakite
2016-08-01
During the 2014 Ebola virus disease (EVD) outbreak, policy-makers were confronted with difficult decisions on how best to test the efficacy of EVD vaccines. On one hand, many were reluctant to withhold a vaccine that might prevent a fatal disease from study participants randomized to a control arm. On the other, regulatory bodies called for rigorous placebo-controlled trials to permit direct measurement of vaccine efficacy prior to approval of the products. A stepped-wedge cluster trial (SWCT) was proposed as an alternative to a more traditional randomized controlled vaccine trial to address these concerns. Here, we propose novel "ordered stepped-wedge cluster trial" (OSWCT) designs to further mitigate tradeoffs between ethical concerns, logistics, and statistical rigor. We constructed a spatially structured mathematical model of the EVD outbreak in Sierra Leone and used its output to simulate and compare a series of stepped-wedge cluster vaccine studies. Our model reproduced the observed order of first case occurrence within districts of Sierra Leone. Depending on the infection risk within the trial population and the trial start dates, the statistical power to detect a vaccine efficacy of 90% varied from 14% to 32% for the standard SWCT, and from 67% to 91% for OSWCTs, at an alpha error of 5%. The model's projection of first case occurrence was robust to changes in disease natural history parameters. Ordering clusters in a stepped-wedge trial by the clusters' underlying risk of infection, as predicted by a spatial model, can thus increase the statistical power of a SWCT. In the event of another hemorrhagic fever outbreak, implementation of our proposed OSWCT designs could improve statistical power when a stepped-wedge study is desirable on either ethical or logistical grounds.
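The power figures above come from the authors' spatial simulation. As a hedged illustration of why power rises with the infection risk in the trial population, here is the standard normal-approximation power calculation for comparing attack rates between a vaccinated and a control arm (this is not the paper's SWCT model; all parameter values are invented, and the two-sided test neglects the far tail):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_prop(p_control, efficacy, n_per_arm, ):
    """Approximate power of a two-proportion z-test (alpha = 0.05,
    two-sided) to detect a vaccine with attack rate
    p_v = p_control * (1 - efficacy)."""
    p_c = p_control
    p_v = p_c * (1.0 - efficacy)
    p_bar = (p_c + p_v) / 2.0
    se0 = math.sqrt(2.0 * p_bar * (1.0 - p_bar) / n_per_arm)        # under H0
    se1 = math.sqrt((p_c * (1 - p_c) + p_v * (1 - p_v)) / n_per_arm)  # under H1
    z_alpha = 1.959963984540054  # Phi^{-1}(0.975)
    return phi((p_c - p_v - z_alpha * se0) / se1)

low = power_two_prop(p_control=0.01, efficacy=0.9, n_per_arm=1000)
high = power_two_prop(p_control=0.05, efficacy=0.9, n_per_arm=1000)
print(low < high)  # higher incidence in the trial population -> more power
```

The same monotonicity is what makes ordering clusters by predicted risk pay off: clusters enrolled while their risk is high contribute most of the statistical information.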
Mixture models with entropy regularization for community detection in networks
Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang
2018-04-01
Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures, including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we propose an entropy regularized mixture model (called EMM), which is capable of inferring the number of communities and identifying the network structure contained in a network simultaneously. In the model, by minimizing the entropy of the mixing coefficients of NMM within the EM (expectation-maximization) solution, the small clusters containing little information can be discarded step by step. An empirical study on both synthetic and real networks shows that the proposed EMM is superior to state-of-the-art methods.
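The pruning effect of the entropy penalty can be illustrated in isolation: across EM iterations the penalty drives uninformative mixing coefficients toward zero, after which the corresponding components are discarded and the survivors renormalized. A toy sketch (the threshold and the weight values are invented; the real EMM folds this into the EM updates):

```python
import math

def entropy(weights):
    """Shannon entropy of a vector of mixing coefficients."""
    return -sum(w * math.log(w) for w in weights if w > 0)

def prune_components(weights, threshold=0.05):
    """Discard mixture components whose weight fell below `threshold`
    (the role the entropy penalty plays over EM iterations), then
    renormalize the survivors."""
    kept = [w for w in weights if w >= threshold]
    total = sum(kept)
    return [w / total for w in kept]

pi = [0.50, 0.45, 0.04, 0.01]     # 4 candidate communities
pi_pruned = prune_components(pi)  # 2 survive
print(len(pi_pruned), round(sum(pi_pruned), 6))
print(entropy(pi_pruned) < entropy(pi))  # concentration lowers the entropy
```

Minimizing the entropy of the mixing coefficients favors exactly this kind of concentrated solution, which is how EMM infers the number of communities without fixing it in advance.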
International Nuclear Information System (INIS)
Chela Flores, J.
1995-01-01
Our present understanding of the origin and evolution of chromosomes differs considerably from current understanding of the origin and evolution of the cell itself. Chromosome origins have been less prominent in research, as the emphasis has not shifted so far appreciably from the phenomenon of primeval nucleic acid encapsulation to that of the origin of gene organization, expression, and regulation. In this work we discuss some reasons why preliminary steps in this direction are being taken. We have been led to examine properties that have contributed to raise the ancestral prokaryotic programmes to a level where we can appreciate in eukaryotes a clear departure from earlier themes in the evolution of the cell from the last common ancestor. We shift our point of view from evolution of cell morphology to the point of view of the genes. In particular, we focus attention on possible physical bases for the way transmission of information has evolved in eukaryotes, namely, the inactivation of whole chromosomes. The special case of inactivation of the X chromosome in mammals is discussed, paying particular attention to the physical process of the spread of X inactivation in monotremes (platypus and echidna). When experimental data are unavailable, some theoretical analysis is possible based on the idea that in certain cases collective phenomena in genetics, rather than chemical detail, are better correlates of complex chemical processes. (author). Abstract only
International Nuclear Information System (INIS)
Chela Flores, J.
1995-08-01
Our present understanding of the origin and evolution of chromosomes differs considerably from current understanding of the origin and evolution of the cell itself. Chromosome origins have been less prominent in research, as the emphasis has not shifted so far appreciably from the phenomenon of primeval nucleic acid encapsulation to that of the origin of gene organization, expression, and regulation. In this work we discuss some reasons why preliminary steps in this direction are being taken. We have been led to examine properties that have contributed to raise the ancestral prokaryotic programmes to a level where we can appreciate in eukaryotes a clear departure from earlier themes in the evolution of the cell from the last common ancestor. We shift our point of view from evolution of cell morphology to the point of view of the genes. In particular, we focus attention on possible physical bases for the way transmission of information has evolved in eukaryotes, namely, the inactivation of whole chromosomes. The special case of the inactivation of the X chromosome in mammals is discussed, paying particular attention to the physical process of the spread of X inactivation in monotremes (platypus and echidna). When experimental data are unavailable, some theoretical analysis is possible based on the idea that in certain cases collective phenomena in genetics, rather than chemical detail, are better correlates of complex chemical processes. (author). 65 refs
Structural and kinetic steps of β→α transformation in titanium
International Nuclear Information System (INIS)
Mirzaev, D.A.; Ul'yanov, V.G.; Shtejnberg, M.M.; Protopopov, V.A.
1981-01-01
The α-Ti structure and the temperature of the β→α transformation are studied over the range of cooling rates from 100 to 5×10^5 deg/sec. A stepwise decrease of the β→α transformation temperature in titanium with increasing cooling rate is established. The jump-like drops in transformation temperature observed on exceeding critical cooling rates are accompanied by changes in α-phase morphology.
Milanesi, P; Holderegger, R; Bollmann, K; Gugerli, F; Zellweger, F
2017-02-01
Estimating connectivity among fragmented habitat patches is crucial for evaluating the functionality of ecological networks. However, current estimates of landscape resistance to animal movement and dispersal lack landscape-level data on local habitat structure. Here, we used a landscape genetics approach to show that high-fidelity habitat structure maps derived from Light Detection and Ranging (LiDAR) data critically improve functional connectivity estimates compared to conventional land cover data. We related pairwise genetic distances of 128 Capercaillie (Tetrao urogallus) genotypes to least-cost path distances at multiple scales derived from land cover data. Resulting β values of linear mixed effects models ranged from 0.372 to 0.495, while those derived from LiDAR ranged from 0.558 to 0.758. The identification and conservation of functional ecological networks suffering from habitat fragmentation and homogenization will thus benefit from the growing availability of detailed and contiguous data on three-dimensional habitat structure and associated habitat quality. © 2016 by the Ecological Society of America.
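Least-cost path distances of the kind related to genetic distances above are typically computed with Dijkstra's algorithm over a resistance surface. A minimal sketch on a toy grid (the move-cost convention here, the mean of the two cell resistances, is one common choice and not necessarily the authors'):

```python
import heapq

def least_cost_path(resistance, start, goal):
    """Dijkstra least-cost distance on a 4-connected resistance grid.
    The cost of a move is the mean resistance of the two cells."""
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + (resistance[r][c] + resistance[nr][nc]) / 2.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# open forest (resistance 1) with a high-resistance clearing in the top row
grid = [[1, 9, 1],
        [1, 5, 1],
        [1, 1, 1]]
print(least_cost_path(grid, (0, 0), (0, 2)))  # detours around the clearing
```

The point of LiDAR-derived habitat maps in the abstract is precisely that they change the resistance values in such a grid, and hence the least-cost distances entered into the landscape-genetics regression.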
Magnetic and structural properties of NdFeB thin film prepared by step annealing
International Nuclear Information System (INIS)
Serrona, Leo K.E.B.; Sugimura, A.; Fujisaki, R.; Okuda, T.; Adachi, N.; Ohsato, H.; Sakamoto, I.; Nakanishi, A.; Motokawa, M.
2003-01-01
The crystallization of the amorphous phase into the tetragonal Nd2Fe14B (Φ) phase and the corresponding changes in magnetic properties have been examined by a step-annealing experiment using a 2 μm thick NdFeB film sample. Microstructural and magnetic analyses indicate that the film was magnetically soft as deposited; upon annealing, hard magnetism characterized by the perpendicular coercivity H_ci⊥ and remnant magnetization 4πM_r⊥ was developed, and diffraction analysis showed evidence of Φ-phase (00l) peaks aligned perpendicular to the film plane. At an optimum annealing temperature of 575 deg. C, the remnant magnetization of this anisotropic thin film is around 0.60 T with an intrinsic coercivity of ∼1340 kA m^-1. Annealing the film sample at 200 deg. C ≤ T_ann ≤ 750 deg. C showed variations in magnetic properties that were mostly due to the change in the perpendicular anisotropy. When the 4πM_s⊥ values are plotted against T_ann, a dip is observed as T_ann increases through the soft-to-hard magnetic transition region, followed by a rise as the hard crystalline phase starts to form. The results show that the magnetic properties of the NdFeB film were slightly influenced by the presence of NdO, film surface roughening, and the small increase in crystal size as a consequence of repeated heat treatment. At T_ann ∼ 300 deg. C, the nominal saturation magnetization indicated a certain degree of weak perpendicular magnetic anisotropy in the film sample, considered to be essential in the enhancement of coercivity in crystallized films
Facile one-step synthesis and photoluminescence properties of Ag–ZnO core–shell structure
International Nuclear Information System (INIS)
Zhai, HongJu; Wang, LiJing; Han, DongLai; Wang, Huan; Wang, Jian; Liu, XiaoYan; Lin, Xue; Li, XiuYan; Gao, Ming; Yang, JingHai
2014-01-01
Graphical abstract: The PL of the Ag–ZnO core–shell nanostructure showed an obvious increase in UV emission and a slight decrease in visible-light emission compared to that of pure ZnO. With the calcination temperature increasing from 300 to 600 °C, the primary peak located at 380 nm became stronger and sharper, indicating that the increasing calcination temperature made the samples crystallize better. - Highlights: • Ag–ZnO core–shell structure was obtained via a simple one-step solvothermal process. • The approach was simple, mild, low-cost, reproducible, and easy to handle. • An obvious enhancement of the UV luminescence was observed. • Effects of the calcination temperature on the luminescence were investigated in detail. - Abstract: Ag–ZnO core–shell structures were obtained via a one-step solvothermal process. The products were characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), Raman spectroscopy, photoluminescence (PL) and UV–vis spectroscopy. The PL and Raman spectra showed that the properties were greatly changed compared to pure ZnO, indicating a strong interfacial interaction between ZnO and Ag. The work provides a feasible method to synthesize an Ag–ZnO core–shell structure photocatalyst, which is promising for the further practical application of ZnO-based photocatalytic materials
International Nuclear Information System (INIS)
Harris, J.P.; Julyk, L.J.; Marlow, R.S.; Moore, C.J.; Day, J.P.; Dyrness, A.D.; Jagadish, P.; Shulman, J.S.
1993-10-01
The buried single-shell waste tank 241-C-106, located at the US Department of Energy's Hanford Site, has been a repository for various liquid radioactive waste materials since its construction in 1943. A first step toward waste tank remediation is demonstrating that remediation activities can be performed safely. Determination of the current structural capacity of this high-heat tank is an important element in this assessment. A structural finite-element model of tank 241-C-106 has been developed to assess the tank's structural integrity with respect to in situ conditions and additional remediation surface loads. To predict structural integrity realistically, the model appropriately addresses two complex issues: (1) surrounding soil-tank interaction associated with thermal expansion cycling and surcharge load distribution and (2) concrete-property degradation and creep resulting from exposure to high temperatures generated by the waste. This paper describes the development of the 241-C-106 structural model, analysis methodology, and tank-specific structural acceptance criteria
International Nuclear Information System (INIS)
Li Ping; Wang Sha; Li Jibiao; Wei Yu
2012-01-01
Zinc oxide (ZnO) nanocrystallites with different Co-doping levels were successfully synthesized by a simple one-step solution route at low temperature (95 deg. C) in this study. The structure and morphology of the samples thus obtained were characterized by XRD, EDS, XPS and FESEM. Results show that cobalt ions, in the oxidation state Co²⁺, replace Zn²⁺ ions in the ZnO lattice without changing its wurtzite structure. The dopant content varies from 0.59% to 5.39%, depending on the Co-doping level. The pure ZnO particles exhibit a well-defined 3D flower-like morphology with an average size of 550 nm, while the particles obtained after Co-doping are mostly cauliflower-like nanoclusters with an average size of 120 nm. Both the flower-like pure ZnO and the cauliflower-like Co:ZnO nanoclusters are composed of densely arrayed nanorods. The optical properties of the ZnO nanocrystallites following Co-doping were also investigated by UV-Visible absorption and photoluminescence spectra. Our results indicate that Co-doping can change the energy-band structure and effectively adjust the luminescence properties of ZnO nanocrystallites. - Highlights: → Co-doped ZnO nanocrystallites were synthesized via a simple one-step solution route. → Co²⁺ ions incorporated into the ZnO lattice without changing its wurtzite structure. → Co-doping changed the energy-band structure of ZnO. → Co-doping effectively adjusted the luminescence properties of ZnO nanocrystallites.
Energy Technology Data Exchange (ETDEWEB)
Li Ping, E-mail: lipingchina@yahoo.com.cn [Provincial Key Laboratory of Inorganic Nanomaterials, School of Chemistry and Materials Science, Hebei Normal University, 113 Yuhua Road, Shijiazhuang 050016, Hebei (China); Wang Sha; Li Jibiao; Wei Yu [Provincial Key Laboratory of Inorganic Nanomaterials, School of Chemistry and Materials Science, Hebei Normal University, 113 Yuhua Road, Shijiazhuang 050016, Hebei (China)
2012-01-15
Zinc oxide (ZnO) nanocrystallites with different Co-doping levels were successfully synthesized by a simple one-step solution route at low temperature (95 deg. C) in this study. The structure and morphology of the samples thus obtained were characterized by XRD, EDS, XPS and FESEM. Results show that cobalt ions, in the oxidation state Co²⁺, replace Zn²⁺ ions in the ZnO lattice without changing its wurtzite structure. The dopant content varies from 0.59% to 5.39%, depending on the Co-doping level. The pure ZnO particles exhibit a well-defined 3D flower-like morphology with an average size of 550 nm, while the particles obtained after Co-doping are mostly cauliflower-like nanoclusters with an average size of 120 nm. Both the flower-like pure ZnO and the cauliflower-like Co:ZnO nanoclusters are composed of densely arrayed nanorods. The optical properties of the ZnO nanocrystallites following Co-doping were also investigated by UV-Visible absorption and photoluminescence spectra. Our results indicate that Co-doping can change the energy-band structure and effectively adjust the luminescence properties of ZnO nanocrystallites. - Highlights: > Co-doped ZnO nanocrystallites were synthesized via a simple one-step solution route. > Co²⁺ ions incorporated into the ZnO lattice without changing its wurtzite structure. > Co-doping changed the energy-band structure of ZnO. > Co-doping effectively adjusted the luminescence properties of ZnO nanocrystallites.
First find your stresses. An essential step to structural integrity assessment
International Nuclear Information System (INIS)
Lewis, D.J.
1988-01-01
To ensure the safety and efficiency of modern engineering plant running at high temperature and pressure, it is essential to be able to predict the stresses to which it could be subjected at all times in its working life. Over the last 20 years a revolution has taken place in the methods by which these stresses are calculated. The high-speed digital computer allows numerical methods to be used for analyses of the behaviour of complicated components which could not previously have been attempted. It is no longer necessary to have a deep understanding of the complex mathematics. Use of computer graphics makes it simple to describe to the computer the shape of a component, its loadings, and temperature history. The computer will provide a thorough analysis of the resulting stresses, strains, and deformations. This leaves the design engineer free to spend more time on achieving the best design. The Structural Analysis Centre at the Berkeley Nuclear Laboratories of CEGB has been in the forefront of the development of the finite-element method in structural mechanics. Its BERSAFE system is extensively used both within CEGB and by many other organisations in Britain and overseas. The Centre is also responsible for pioneering advanced applications of these methods and offering training and consultancy in their use. (author)
Directory of Open Access Journals (Sweden)
Carlos Andres Perez-Ramirez
2017-01-01
Nowadays, the accurate identification of natural frequencies and damping ratios plays an important role in smart civil engineering, since they can be used for seismic design, vibration control, and condition assessment, among others. To achieve this in a practical way, the structure must be instrumented and techniques applied that can deal with noise-corrupted and non-linear signals, as these are common features of real-life civil structures. In this article, a two-step strategy is proposed for performing accurate modal parameter identification in an automated manner. In the first step, the measured signals are obtained and decomposed using the natural excitation technique and the synchrosqueezed wavelet transform, respectively. The second step then estimates the modal parameters by solving an optimization problem with a genetic algorithm-based approach, where the micropopulation concept is used to improve both the convergence speed and the accuracy of the estimated values. The accuracy and effectiveness of the proposal are tested using both the simulated response of a benchmark structure and measurements from a real eight-story building. The obtained results show that the proposed strategy can estimate the modal parameters accurately, indicating that the proposal can be considered as an alternative for performing the abovementioned task.
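The genetic-algorithm step with a micropopulation can be sketched in miniature: a tiny elitist population fits the frequency and damping ratio of a synthetic decaying sinusoid. This is a simplified mutation-only stand-in for the authors' GA, with invented parameter bounds and step sizes:

```python
import math
import random

def signal(omega, zeta, ts):
    """Free-decay response of a single damped mode."""
    wd = omega * math.sqrt(1 - zeta ** 2)  # damped natural frequency
    return [math.exp(-zeta * omega * t) * math.sin(wd * t) for t in ts]

ts = [i * 0.01 for i in range(200)]
target = signal(2 * math.pi * 3.0, 0.05, ts)  # "measured" mode: 3 Hz, 5% damping

def error(ind):
    pred = signal(ind[0], ind[1], ts)
    return sum((a - b) ** 2 for a, b in zip(pred, target))

random.seed(1)
BOUNDS = [(2 * math.pi * 1.0, 2 * math.pi * 6.0), (0.005, 0.2)]  # omega, zeta
pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(8)]
best = min(pop, key=error)
initial_err = error(best)

for gen in range(60):
    # micropopulation step: keep the elite, refill by mutating it within bounds
    pop = [best] + [[min(hi, max(lo, g + random.gauss(0, 0.05 * (hi - lo))))
                     for g, (lo, hi) in zip(best, BOUNDS)]
                    for _ in range(7)]
    best = min(pop, key=error)  # elitism: the error never increases

print(error(best) <= initial_err)  # True
```

Because the elite individual always survives, the best error is non-increasing across generations, which is the property that makes tiny populations workable here.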
Regularization by External Variables
DEFF Research Database (Denmark)
Bossolini, Elena; Edwards, R.; Glendinning, P. A.
2016-01-01
Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well-known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization...
International Nuclear Information System (INIS)
Ding, Xiao-Yu; Luo, Lai-Ma; Huang, Li-Mei; Luo, Guang-Nan; Zhu, Xiao-Yong; Cheng, Ji-Gui; Wu, Yu-Cheng
2015-01-01
Highlights: • A novel wet-chemical method was used to prepare TiC/W core–shell structured powders. • TiC nanoparticles were well encapsulated by W shells. • The TiC phase was present in the interior of tungsten grains. - Abstract: In the present study, a one-step activation and chemical reduction process was employed as a novel wet-chemical route for the preparation of TiC/W core–shell structured ultra-fine powders. The XRD, FE-SEM, TEM and EDS results demonstrated that the as-synthesized powders are of high purity and uniform, with a diameter of approximately 500 nm. It is also found that the TiC nanoparticles were well encapsulated by W shells. Such a unique process suggests a new method for preparing X/W (where X denotes water-insoluble nanoparticles) core–shell nanoparticles with different cores
Directory of Open Access Journals (Sweden)
Zhen-ni Zhou
2017-06-01
The internal friction (IF) behaviors of a dual-phase Ni52Mn32In16 alloy with a two-step structural transformation were investigated by dynamic mechanical analyzer. The IF peak for the martensitic transformation (MT) is an asymmetric shoulder rather than the sharp peak seen for other shape-memory alloys. The intermartensitic transformation (IMT) peak has the maximum IF value. As the heating rate increases, the height of the IMT peak increases and its position shifts to higher temperatures. In contrast to the IMT peak, the MT peak is independent of the heating rate. The starting temperatures of the IMT peak are strongly dependent on frequency, while those of the MT peak are only weakly dependent. Meanwhile, the heights of both the MT and IMT peaks decrease rapidly with increasing frequency. This work also throws new light on the alloys' structural transformation mechanisms.
Metric regularity and subdifferential calculus
International Nuclear Information System (INIS)
Ioffe, A D
2000-01-01
The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces
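For reference, the central notion this survey is built on can be stated compactly: a set-valued map F between metric spaces X and Y is metrically regular at a point (x̄, ȳ) of its graph if there exists a modulus κ > 0 such that, for all (x, y) near (x̄, ȳ),

```latex
d\bigl(x,\, F^{-1}(y)\bigr) \;\le\; \kappa \, d\bigl(y,\, F(x)\bigr).
```

The estimate bounds how far x is from solving the inclusion y ∈ F(x) by the residual distance; as the abstract notes, no linear structure is involved, which is what allows the purely metric development.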
Energy Technology Data Exchange (ETDEWEB)
Wei, J.; Jin, Z.; Tang, Y. [Taiyuan University of Technology, Taiyuan (China)
2002-12-01
Based on field monitoring and simulation tests of strata movement, the hard roof's stepped cantilever structure and its mechanical model are presented. The finite element method is used to analyse the effect of hard coal cracking under the abutment pressure of the hard roof, so that the rational pre-treatment span of the hard roof is determined and the rational working resistance of the support is selected. According to the mechanical model, the transient balance conditions of the hard roof's stepped cantilever structure are studied, and the support-rock relation is theoretically explained. As a result, a basic theory and technique of surrounding-rock control for fully mechanised longwall mining with sub-level caving is formed under hard-roof and hard-coal conditions. The hard roof is effectively controlled not only to protect the working face but also to promote the caving of hard top-coal and increase the coal recovery rate, thus realising safe, highly efficient and productive fully mechanised longwall mining with sub-level caving in extra-thick seams. Finally, the successful practice of hard roof control in the 8914 and 8911 working faces is presented in this paper. 10 refs., 5 figs., 4 tabs.
Regularities of Multifractal Measures
Indian Academy of Sciences (India)
First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately and recombine them without affecting density properties. Next, we ...
Stochastic analytic regularization
International Nuclear Information System (INIS)
Alfaro, J.
1984-07-01
Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)
Jia, Yi; Yue, Renliang; Liu, Gang; Yang, Jie; Ni, Yong; Wu, Xiaofeng; Chen, Yunfa
2013-01-01
Here we report a novel one-step vapor-fed aerosol flame synthesis (VAFS) method to deposit superhydrophobic silica hybrid films on normal glass and other engineering substrates, using hexamethyldisiloxane (HMDSO) as the precursor. The deposited nano-structured silica films exhibit excellent superhydrophobicity, with a contact angle larger than 150° and a sliding angle below 5°, without any surface modification or other post-treatment. SEM photographs showed that the flame-made SiO2 nanoparticles formed a dual-scale surface roughness on the substrates. FTIR and XPS confirmed that organic fragments formed in situ on the particle surface as species of the form (CH3)xSiO2-x/2 (x = 1, 2, 3), which progressively lowered the surface energy of the fabricated films. The combination of dual-scale roughness and lowered surface energy thus produced superhydrophobic films. An IR camera was used to monitor the real-time flame temperature, and it was found that the inert dilution gas inflow played a critical role in attaining superhydrophobicity owing to its cooling and anti-oxidation effects. This method is facile and scalable to diverse substrates, requires neither complex equipment nor multiple processing steps, and may contribute to the industrial fabrication of superhydrophobic films.
Peng, Yi; Zhang, Jie; Li, Dong
2018-03-01
A large wastewater treatment plant (WWTP) in China, built on a US treatment technology, could no longer meet the new demands of the urban environment or the need for reclaimed water. A multi-AO reaction process (anaerobic/oxic/anoxic/oxic/anoxic/oxic) WWTP with an underground structure was therefore proposed to carry out the upgrade project. Four main new technologies were applied: (1) multi-AO reaction with step-feed technology; (2) deodorization; (3) new energy-saving technology, such as a water-source heat pump and an optical-fiber lighting system; (4) dependable measures to maintain the old WWTP's effluent quality during construction of the new WWTP. After construction, the upgraded WWTP had saved two thirds of the land occupation, increased treatment capacity by 80%, and improved the effluent standard by more than two times. Moreover, it had become a benchmark for turning an ecologically negative capital into a positive one.
Directory of Open Access Journals (Sweden)
S. I. Sherman
2015-01-01
Studying the locations of strong earthquakes (M≥8) in space and time in Central Asia has been among the top problems for many years and remains challenging for international research teams. The authors propose a new approach that requires changing the paradigm of earthquake focus - solid rock relations, while this paradigm is the basis for practically all known physical models of earthquake foci. This paper describes the first step towards developing a new concept of the seismic process, including the generation of strong earthquakes, with reference to specific geodynamic features of the part of the study region wherein strong earthquakes were recorded in the past two centuries. Our analysis of the locations of M≥8 earthquakes shows that in the past two centuries such earthquakes took place in areas of the dynamic influence of large deep faults in the western regions of Central Asia. In continental Asia, there is a clear submeridional structural boundary (95-105°E) between the western and eastern regions, and this is a factor controlling the localization of strong seismic events in the western regions. Obviously, the Indostan plate's pressure from the south is an energy source for such events. The strong earthquakes are located in a relatively small part of the territory of Central Asia (i.e. the western regions), which is significantly different from the neighbouring areas to the north, east and west, as evidenced by its specific geodynamic parameters. (1) The crust is twice as thick in the western regions as in the eastern regions. (2) In the western regions, the block structures resulting from crust destruction, mainly represented by lens-shaped forms elongated in the submeridional direction, tend to dominate. (3) Active faults bordering large block structures are characterized by significant slip velocities that reach maximum values in the central part of the Tibetan plateau. Further northward, slip velocities decrease
Regular expression containment
DEFF Research Database (Denmark)
Henglein, Fritz; Nielsen, Lasse
2011-01-01
We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...
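The fixed-point rule E* = 1 + E × E* is exactly the unfolding used by Brzozowski derivatives, which give a compact way to decide membership and to test containment on bounded strings. A small sketch (the bounded enumeration is illustrative only; it is not the paper's coinductive proof system):

```python
from itertools import product

# tiny regex AST: ('nul',) empty set, ('eps',) empty string,
# ('chr', c), ('alt', r, s), ('cat', r, s), ('star', r)

def nullable(r):
    tag = r[0]
    if tag == 'eps': return True
    if tag in ('nul', 'chr'): return False
    if tag == 'alt': return nullable(r[1]) or nullable(r[2])
    if tag == 'cat': return nullable(r[1]) and nullable(r[2])
    return True  # star always accepts the empty string

def deriv(r, c):
    """Brzozowski derivative: L(deriv(r, c)) = {s : c+s in L(r)}."""
    tag = r[0]
    if tag in ('nul', 'eps'): return ('nul',)
    if tag == 'chr': return ('eps',) if r[1] == c else ('nul',)
    if tag == 'alt': return ('alt', deriv(r[1], c), deriv(r[2], c))
    if tag == 'star':
        return ('cat', deriv(r[1], c), r)  # E* unfolds as 1 + E.E*
    left = ('cat', deriv(r[1], c), r[2])   # cat
    return ('alt', left, deriv(r[2], c)) if nullable(r[1]) else left

def matches(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)

def contained(r, s, alphabet='ab', maxlen=4):
    """Bounded containment check: L(r) ⊆ L(s) on strings up to maxlen."""
    words = (''.join(w) for n in range(maxlen + 1)
             for w in product(alphabet, repeat=n))
    return all(matches(s, w) for w in words if matches(r, w))

a_star = ('star', ('chr', 'a'))
ab_star = ('star', ('alt', ('chr', 'a'), ('chr', 'b')))
print(contained(a_star, ab_star))   # True:  a* ⊆ (a|b)*
print(contained(ab_star, a_star))   # False: "b" is a witness
```

Bounded enumeration can only refute containment or leave it plausible; deciding it outright needs the coinductive machinery (or a standard automaton product construction) that the paper axiomatizes.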
Supersymmetric dimensional regularization
International Nuclear Information System (INIS)
Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.
1980-01-01
There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) first do all algebra exactly as in D = 4; (2) then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher-order loops are also discussed
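Rule (2) typically amounts to applying, after the D = 4 algebra is done, the standard Euclidean master integral of dimensional regularization:

```latex
\int \frac{d^{D}k}{(2\pi)^{D}} \, \frac{1}{\left(k^{2}+m^{2}\right)^{n}}
  \;=\; \frac{\Gamma\!\left(n-\tfrac{D}{2}\right)}
             {(4\pi)^{D/2}\,\Gamma(n)}\,
        \left(m^{2}\right)^{\tfrac{D}{2}-n},
```

with ultraviolet divergences appearing as poles of the Gamma function as D → 4. In dimensional reduction only the momentum integral is continued in D; the field and spinor index algebra stays four-dimensional, which is what preserves supersymmetry.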
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of the regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
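The alternating optimization the abstract describes (correntropy maximization plus a parameter penalty) can be sketched in half-quadratic form. The sketch below is illustrative only: it uses a 1-D linear predictor and hypothetical parameter choices, not the paper's exact formulation.

```python
import math

def mcc_linear_fit(xs, ys, sigma=1.0, lam=1e-3, iters=20):
    # Half-quadratic sketch of regularized Maximum Correntropy learning
    # for a 1-D linear predictor y ~ w*x (illustrative, not the paper's model).
    w = 0.0
    for _ in range(iters):
        # Correntropy-induced sample weights: large residuals get tiny weight,
        # so noisy/outlying labels barely influence the fit.
        p = [math.exp(-((y - w * x) ** 2) / (2 * sigma ** 2))
             for x, y in zip(xs, ys)]
        # Weighted ridge regression step (closed form in 1-D; lam penalizes w).
        num = sum(pi * x * y for pi, x, y in zip(p, xs, ys))
        den = sum(pi * x * x for pi, x in zip(p, xs)) + lam
        w = num / den
    return w
```

Because the Gaussian correntropy kernel downweights samples with large residuals, a single grossly mislabeled point barely shifts the fit, in contrast to a plain squared loss applied equally to all samples.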
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
International Nuclear Information System (INIS)
Takahashi, Hiroaki; Watanabe, Ryosuke; Miyauchi, Yoshihiro; Mizutani, Goro
2011-01-01
In this report, local electronic structures of steps and terraces on rutile TiO2 single crystal faces were studied by second harmonic and sum frequency generation (SHG/SFG) spectroscopy. We attained selective measurement of the local electronic states of the step bunches formed on the vicinal (17 18 1) and (15 13 0) surfaces using a recently developed step-selective probing technique. The electronic structures of the flat (110)-(1x1) (the terrace face of the vicinal surfaces) and (011)-(2x1) surfaces were also discussed. The SHG/SFG spectra showed that step structures are mainly responsible for the formation of trap states, since significant resonances from the trap states were observed only from the vicinal surfaces. We detected deep hole trap (DHT) states and shallow electron trap (SET) states selectively from the step bunches on the vicinal surfaces. Detailed analysis of the SHG/SFG spectra showed that the DHT and SET states are more likely to be induced at the top edges of the step bunches than on their hillsides. Unlike the SET states, the DHT states were observed only at the step bunches parallel to [1 1 1] [equivalent to the step bunches formed on the (17 18 1) surface]. Photocatalytic activity for each TiO2 sample was also measured through methylene blue photodegradation reactions and was found to follow the sequence: (110) < (17 18 1) < (15 13 0) < (011), indicating that steps along [0 0 1] are more reactive than steps along [1 1 1]. This result implies that the presence of the DHT states observed from the step bunches parallel to [1 1 1] did not effectively contribute to the methylene blue photodegradation reactions.
International Nuclear Information System (INIS)
Forzano, P.; Castagna, P.
1998-01-01
The know-how of plant operation, in normal or emergency situations, and of maintenance is typically written in the form of guidelines. Although technical literature is structured in chapters and paragraphs, this type of formalization is still at the level of a 'good practice'. It is informal and subject to individual ability and competence. For many human activities that intrinsically have a high level of formalization, like procedures, it is advisable to analyze the formalism, extract a set of rules from it, and collect all the rules in a standard that is applicable to a range of specific situations. This paper presents DIAM, a methodology and a tool for procedure management inspired by practical day-by-day issues: it embeds drawing and documentation standards and offers new features such as automatic flow diagramming, the structuring of large diagrams into pages, the congruence of step names and line paths, and the automatic printout of the procedure book. The automation of many activities offered by DIAM leads to a dramatic reduction in drawing production time and allows document management to be approached in a completely new mode, abandoning the old era of document patching and opening a new era of top-quality drawings. (author)
Regularized Statistical Analysis of Anatomy
DEFF Research Database (Denmark)
Sjöstrand, Karl
2007-01-01
This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....
Manifold Regularized Reinforcement Learning.
Li, Hongliang; Liu, Derong; Wang, Ding
2018-04-01
This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan
2012-11-19
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
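The joint learning of graph weights and ranking scores described above alternates two subproblems: solve for scores under the combined graph Laplacian, then reweight the graphs. The sketch below is a minimal illustration under assumed update forms (the weight exponent r and the Jacobi solver are illustrative choices, not the paper's exact derivation).

```python
def multi_graph_rank(y, graphs, alpha=1.0, r=2.0, iters=30):
    # Sketch of multiple-graph regularized ranking: learn ranking scores f
    # and graph combination weights mu by alternating minimization.
    # graphs: list of symmetric n x n adjacency matrices (nested lists).
    n, m = len(y), len(graphs)
    mu = [1.0 / m] * m   # graph combination weights, start uniform
    f = list(y)          # ranking scores, initialized from the queries
    for _ in range(iters):
        # Combined adjacency W = sum_k mu_k * W_k and its node degrees.
        W = [[sum(mu[k] * graphs[k][i][j] for k in range(m))
              for j in range(n)] for i in range(n)]
        d = [sum(row) for row in W]
        # Score step: Jacobi sweeps for (I + alpha*L) f = y, with L = D - W;
        # the system is strictly diagonally dominant, so this converges.
        for _ in range(50):
            f = [(y[i] + alpha * sum(W[i][j] * f[j] for j in range(n)))
                 / (1.0 + alpha * d[i]) for i in range(n)]
        # Weight step: mu_k proportional to (f^T L_k f)^(1/(1-r)), so graphs
        # on which the scores vary smoothly receive larger weights.
        cost = []
        for k in range(m):
            Wk = graphs[k]
            c = sum(Wk[i][j] * (f[i] - f[j]) ** 2
                    for i in range(n) for j in range(n)) / 2.0
            cost.append(max(c, 1e-12))
        raw = [c ** (1.0 / (1.0 - r)) for c in cost]
        s = sum(raw)
        mu = [x / s for x in raw]
    return f, mu
```

On a toy three-node query, the graph whose edges connect similarly-scored nodes ends up with the larger weight, which is the mechanism the abstract credits for robustness to graph model selection.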
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-01-01
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Multiple graph regularized protein domain ranking
Directory of Open Access Journals (Sweden)
Wang Jim
2012-11-01
Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very few residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
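In its generic form, Tikhonov regularization as invoked above reduces to a damped normal-equations solve, min ||Ax - b||^2 + lam*||x||^2. The sketch below shows that generic solve on hypothetical small matrices, not on GRACE spherical harmonic data.

```python
def tikhonov_solve(A, b, lam):
    # Solve min ||Ax - b||^2 + lam*||x||^2 via the regularized normal
    # equations (A^T A + lam*I) x = A^T b, with Gaussian elimination.
    # A: m x n nested list, b: length-m list, lam: damping parameter >= 0.
    n = len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A)))
          + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            t = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= t * M[col][c]
            rhs[r] -= t * rhs[col]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j]
                             for j in range(i + 1, n))) / M[i][i]
    return x
```

Increasing lam damps the solution toward zero, which is the mechanism that suppresses stripe-like noise at the cost of some signal attenuation; lam = 0 recovers the ordinary least-squares solution.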
Cartigny, M.; Postma, G.; Berg, J.H. van den; Mastbergen, D.R.
2011-01-01
Although sediment waves cover many levees and canyon floors of submarine fan systems, their relation to the turbidity currents that formed them is still poorly understood. In recent years, some large erosional sediment waves have been interpreted as cyclic steps. Cyclic steps are a series of
Diverse Regular Employees and Non-regular Employment (Japanese)
MORISHIMA Motohiro
2011-01-01
Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...
'Regular' and 'emergency' repair
International Nuclear Information System (INIS)
Luchnik, N.V.
1975-01-01
Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)
Regularization of divergent integrals
Felder, Giovanni; Kazhdan, David
2016-01-01
We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.
Regularizing portfolio optimization
International Nuclear Information System (INIS)
Still, Susanne; Kondor, Imre
2010-01-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
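The L2 "diversification pressure" described above admits a closed form in the simplest minimum-variance setting: min w^T C w + lam*||w||^2 subject to sum(w) = 1, whose solution is proportional to (C + lam*I)^(-1) 1. The sketch below shows that variant only; it is not the paper's expected-shortfall formulation, which maps to support vector regression.

```python
def regularized_min_variance(cov, lam):
    # L2-regularized minimum-variance portfolio weights (illustrative sketch):
    #   min_w  w^T C w + lam * ||w||^2   subject to  sum(w) = 1
    # Closed form: w proportional to (C + lam*I)^(-1) 1, then normalized.
    n = len(cov)
    M = [[cov[i][j] + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    rhs = [1.0] * n
    # Gaussian elimination with partial pivoting on (C + lam*I) x = 1.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            t = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= t * M[col][c]
            rhs[r] -= t * rhs[col]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (rhs[i] - sum(M[i][j] * w[j]
                             for j in range(i + 1, n))) / M[i][i]
    s = sum(w)
    return [wi / s for wi in w]
```

As lam grows, the weights are pulled toward the equal-weight portfolio, illustrating the trade-off between fitting the sample covariance and stabilizing (diversifying) the solution that the abstract describes.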
Regularizing portfolio optimization
Still, Susanne; Kondor, Imre
2010-07-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
Regular Single Valued Neutrosophic Hypergraphs
Directory of Open Access Journals (Sweden)
Muhammad Aslam Malik
2016-12-01
Full Text Available In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.
The geometry of continuum regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-03-01
This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations
Leontjevas, R.; Gerritsen, D.L.; Smalbrugge, M.; Teerenstra, S.; Vernooij-Dassen, M.J.F.J.; Koopmans, R.T.C.M.
2013-01-01
BACKGROUND: Depression in nursing-home residents is often under-recognised. We aimed to establish the effectiveness of a structural approach to its management. METHODS: Between May 15, 2009, and April 30, 2011, we undertook a multicentre, stepped-wedge cluster-randomised trial in four provinces of
Annotation of Regular Polysemy
DEFF Research Database (Denmark)
Martinez Alonso, Hector
Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words...... and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empiric evidence for the underspecified sense, even...
Regularity of Minimal Surfaces
Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht
2010-01-01
"Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t
International Nuclear Information System (INIS)
Nagae, M.; Yoshio, T.; Takemoto, Y.; Takada, J.; Hiraoka, Y.
2001-01-01
Internally nitrided dilute Mo-Ti alloys having a heavily deformed microstructure near the specimen surface were prepared by a novel two-step nitriding process at 1173 to 1773 K in N2 gas. For the nitrided specimens, three-point bend tests were performed at temperatures from 77 to 298 K in order to investigate the effect of microstructure control by internal nitriding on the ductile-to-brittle transition temperature (DBTT) of the alloy. The yield strength at 243 K of the specimen that retained the deformed microstructure through the two-step nitriding was about 1.7 times that of the recrystallized specimen. The specimen subjected to the two-step nitriding was bent more than 90 degrees at 243 K, whereas the recrystallized specimen fractured after showing only slight ductility at 243 K. The DBTT of the specimen subjected to the two-step nitriding and of the recrystallized specimen was about 153 K and 203 K, respectively. These results indicate that multi-step internal nitriding is very effective in mitigating the embrittlement caused by the recrystallization of molybdenum alloys. (author)
Jing, Long; Huang, Ping; Zhu, Huarui; Gao, Xueyun
2013-01-28
First-principles calculations (generalized gradient approximation, density functional theory (DFT) with dispersion corrections, and DFT plus local atomic potential) are carried out on the stability and electronic structures of superlattice configurations of nitrophenyl diazonium functionalized graphene with different coverage. In the calculations, the stabilities of these structures are strengthened significantly since van der Waals interactions between nitrophenyl groups are taken into account. Furthermore, spin-polarized and wider-bandgap electronic structures are obtained when the nitrophenyl groups break the sublattice symmetry of the graphene. The unpaired quasi-localized p electrons are responsible for this itinerant magnetism. The results provide a novel approach to tune graphene's electronic structures as well as to form ferromagnetic semiconductive graphene. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
2010-12-07
FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...
Wu, Feng; Sun, Haiding; Ajia, Idris A.; Roqan, Iman S.; Zhang, Daliang; Dai, Jiangnan; Chen, Changqing; Feng, Zhe Chuan; Li, Xiaohang
2017-01-01
Significant internal quantum efficiency (IQE) enhancement of GaN/AlGaN multiple quantum wells (MQWs) emitting at ~350 nm was achieved via a step quantum well (QW) structure design. The MQW structures were grown on AlGaN/AlN/sapphire templates by metal-organic chemical vapor deposition (MOCVD). High resolution x-ray diffraction (HR-XRD) and scanning transmission electron microscopy (STEM) were performed, showing sharp interfaces of the MQWs. Weak beam dark field imaging was conducted, indicating a similar dislocation density of the investigated MQW samples. The IQE of the GaN/AlGaN MQWs was estimated by temperature dependent photoluminescence (TDPL). An IQE enhancement of about two times was observed for the GaN/AlGaN step QW structure, compared with the conventional QW structure. Based on the theoretical calculation, this IQE enhancement was attributed to the suppressed polarization-induced field, and thus the improved electron-hole wave-function overlap in the step QW.
Wu, Feng
2017-05-03
Significant internal quantum efficiency (IQE) enhancement of GaN/AlGaN multiple quantum wells (MQWs) emitting at ~350 nm was achieved via a step quantum well (QW) structure design. The MQW structures were grown on AlGaN/AlN/sapphire templates by metal-organic chemical vapor deposition (MOCVD). High resolution x-ray diffraction (HR-XRD) and scanning transmission electron microscopy (STEM) were performed, showing sharp interfaces of the MQWs. Weak beam dark field imaging was conducted, indicating a similar dislocation density of the investigated MQW samples. The IQE of the GaN/AlGaN MQWs was estimated by temperature dependent photoluminescence (TDPL). An IQE enhancement of about two times was observed for the GaN/AlGaN step QW structure, compared with the conventional QW structure. Based on the theoretical calculation, this IQE enhancement was attributed to the suppressed polarization-induced field, and thus the improved electron-hole wave-function overlap in the step QW.
International Nuclear Information System (INIS)
Jia, C.L.; Kabius, B.; Urban, K.
1993-01-01
The microstructure of YBa2Cu3O7 films epitaxially grown on step-edge (0 0 1) SrTiO3 and LaAlO3 substrates has been characterized by means of high-resolution electron microscopy. The results indicate a relationship between the microstructure of the film across a step and the angle the step makes with the substrate plane. On a steep, high-angle step, the film grows with its c-axis perpendicular to that of the film on the substrate surface, so that two grain boundaries are formed. In the upper grain boundary, on average, a (0 1 3) habit plane alternates with a (1 0 3) habit plane. This alternating structure is caused by twinning in the orthorhombic structure. The lower boundaries consist of a chain of (0 1 3)(0 1 3) and (0 1 0)(0 0 1) type segments exhibiting a tendency to tilt the whole habit plane toward the a-b plane of the flank film. Dislocations, stacking faults and misfit strains were also observed in or close to the boundaries. (orig.)
International Nuclear Information System (INIS)
Ovechkin, B.I.; Miklina, N.V.; Blokhin, N.N.; Sorokin, A.F.
1981-01-01
Problems of the structure formation of a magnesium-yttrium alloy of the Mg-G-Mn-Cd system with 7.8 % G over a wide range of temperature-rate parameters of hot working are studied. On the basis of X-ray analysis results, corroborated by metallographic and electron microscopic investigations, a diagram of structural states after hot working of the Mg-G-Mn-Cd system alloy has been plotted. A change in grain size in relation to temperature-rate conditions of hot working
From inactive to regular jogger
DEFF Research Database (Denmark)
Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup
study was conducted using individual semi-structured interviews on how a successful long-term behavior change had been achieved. Ten informants were purposely selected from participants in the DANO-RUN research project (7 men, 3 women, average age 41.5). Interviews were performed on the basis of Theory...... of Planned Behavior (TPB) and The Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results TPB: During the behavior change process, the intention to jogging shifted from a focus on weight loss and improved fitness to both physical health, psychological......Title From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers Authors Pernille Lund-Cramer & Vibeke Brinkmann Løite Purpose Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven...
Directory of Open Access Journals (Sweden)
James C. Anderson
2013-08-01
Full Text Available Piperazirum, isolated from Arum palaestinum Boiss, was originally assigned as r-3,c-5-diisobutyl-c-6-isopropylpiperazin-2-one. The reported structure was synthesised diastereoselectively using a key nitro-Mannich reaction to set up the C5/C6 relative stereochemistry. The structure was unambiguously assigned by single crystal X-ray diffraction but the spectroscopic data did not match those reported for the natural product. The structure of the natural product must therefore be revised.
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
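The weight step for combining candidate manifolds admits a closed form once an L2 penalty on the combination weights is added to keep the solution off a single graph. The sketch below shows that weight update in isolation; the variable names and penalty form are illustrative assumptions, and the full EMR algorithm couples this step with learning the semi-supervised predictor itself.

```python
def emr_weights(smoothness, gamma):
    # Closed-form simplex-constrained graph weights (illustrative EMR-style
    # update): minimize sum_k mu_k * s_k + gamma * ||mu||^2 over the simplex,
    # where s_k = f^T L_k f measures how rough the predictor f is on graph k.
    # KKT solution: mu_k = max(0, (theta - s_k) / (2*gamma)), with theta set
    # so the active weights sum to one.
    ks = sorted(range(len(smoothness)), key=lambda k: smoothness[k])
    s = [smoothness[k] for k in ks]
    # Find the active set: the largest m whose threshold theta clears s[m-1].
    for m in range(len(s), 0, -1):
        theta = (2.0 * gamma + sum(s[:m])) / m
        if theta > s[m - 1]:
            break
    mu = [0.0] * len(s)
    for i in range(m):
        mu[ks[i]] = (theta - s[i]) / (2.0 * gamma)
    return mu
```

Small gamma concentrates the weight on the smoothest candidate graph, while large gamma spreads weight across candidates, which matches the abstract's point that the composite manifold is learned rather than chosen by grid search.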
Barriot, Jean-Pierre; Serafini, Jonathan; Sichoix, Lydie; Benna, Mehdi; Kofman, Wlodek; Herique, Alain
We investigate the inverse problem of imaging the internal structure of comet 67P/Churyumov-Gerasimenko from CONSERT radiotomography data by using a coupled regularized inversion of the Helmholtz equations. A first set of Helmholtz equations, written w.r.t. a basis of 3D Hankel functions, describes the wave propagation outside the comet at large distances; a second set of Helmholtz equations, written w.r.t. a basis of 3D Zernike functions, describes the wave propagation throughout the comet with a variable permittivity. Both sets are connected by continuity equations over a sphere that surrounds the comet. This approach, derived from GPS water vapor tomography of the atmosphere, will permit a full 3D inversion of the internal structure of the comet, contrary to traditional approaches that use a discretization of space at a fraction of the radiowave wavelength.
Ali, Ghafar; Ahmad, Maqsood; Akhter, Javed Iqbal; Maqbool, Muhammad; Cho, Sung Oh
2010-08-01
A simple approach for the growth of long-range highly ordered nanoporous anodic alumina film in H(2)SO(4) electrolyte through a single-step anodization without any additional pre-anodizing procedure is reported. A free-standing porous anodic alumina film of 180 μm thickness with through-hole morphology was obtained. A simple, single-step process was used for the detachment of the alumina from the aluminum substrate. The effect of anodizing conditions, such as anodizing voltage and time, on the pore diameter and pore ordering is discussed. The metal/oxide and oxide/electrolyte interfaces were examined by high-resolution scanning transmission electron microscopy. The arrangement of pores at the metal/oxide interface was well ordered, with smaller diameters than those at the oxide/electrolyte interface. The inter-pore distance was larger at the metal/oxide interface than at the oxide/electrolyte interface. The size of the ordered domains was found to depend strongly upon anodizing voltage and time. (c) 2010 Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Sette, A; Lamont, A; Buus, S
1989-01-01
the binding capacity, but no correlation was found between their effect and their alpha-helical, beta-sheet, or beta-turn conformational propensity as calculated by the Chou and Fasman algorithm. In summary, all the data presented herein suggest that, at least in the case of OVA 323-336 and IAd......, the propensity of the antigen molecule to form secondary structures such as alpha-helices, beta-sheets, or beta-turns does not correlate with its capacity to bind MHC molecules....
On the regularized fermionic projector of the vacuum
Finster, Felix
2008-03-01
We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.
International Nuclear Information System (INIS)
Na, Byung Hoon; Ju, Gun Wu; Cho, Yong Chul; Lee, Yong Tak; Choi, Hee Ju; Jeon, Jin Myeong; Lee, Soo Kyung; Park, Yong Hwa; Park, Chang Young
2015-01-01
In this paper, we propose a transmission-type electro-absorption modulator (EAM) operating at 850 nm with low operating voltage and high absorption change with low insertion loss, using a novel three-step asymmetric coupled quantum well (3 ACQW) structure, which can be used as an optical image shutter for high-definition (HD) three-dimensional (3D) imaging. Theoretical calculations show that the exciton red shift of the 3 ACQW structure is more than two times larger than that of a rectangular quantum well (RQW) structure while maintaining high absorption change. The EAM having coupled cavities with the 3 ACQW structure shows a wide spectral bandwidth and high amplitude modulation at a bias voltage of only -8 V, which is 41% lower in operating voltage than that of the RQW, making the proposed EAM highly attractive as an optical image shutter for HD 3D imaging applications
Analytic stochastic regularization and gauge invariance
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1986-05-01
A proof that analytic stochastic regularization breaks gauge invariance is presented. This is done by an explicit one-loop calculation of the vacuum polarization tensor in scalar electrodynamics, which turns out not to be transversal. The counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization are also analysed. (Author)
DEFF Research Database (Denmark)
Garcia-Aymerich, Judith; Lange, Peter; Serra, Ignasi
2008-01-01
PURPOSE: Results from longitudinal studies about the association between physical activity and chronic obstructive pulmonary disease (COPD) may have been biased because they did not properly adjust for time-dependent confounders. Marginal structural models (MSMs) have been proposed to address...... this type of confounding. We sought to assess the presence of time-dependent confounding in the association between physical activity and COPD development and course by comparing risk estimates between standard statistical methods and MSMs. METHODS: By using the population-based cohort Copenhagen City Heart...... Study, 6,568 subjects selected from the general population in 1976 were followed up until 2004 with three repeated examinations. RESULTS: Moderate to high compared with low physical activity was associated with a reduced risk of developing COPD both in the standard analysis (odds ratio [OR] 0.76, p = 0...
Directory of Open Access Journals (Sweden)
Biryukova Irina V
2008-08-01
Full Text Available Abstract Background The development of modern producer strains with metabolically engineered pathways poses special problems that often require manipulating many genes and expressing them individually at different levels or under separate regulatory controls. The construction of plasmid-less, marker-less strains has many advantages for the further practical exploitation of these bacteria in industry. Such producer strains are usually constructed by sequential chromosome modifications, including deletions and integration of genetic material. For these purposes, complex methods based on in vitro and in vivo recombination processes have been developed. Results Here, we describe a new scheme of insertion of foreign DNA for the step-by-step construction of plasmid-less, marker-less recombinant E. coli strains with a chromosome structure designed in advance. This strategy, entitled Dual-In/Out, is based on the initial Red-driven insertion of artificial φ80-attB sites into desired points of the chromosome, followed by two site-specific recombination processes: first, the φ80 system is used for integration of the recombinant DNA carried by a selective-marker-bearing, conditionally replicated plasmid with a φ80-attP site, and second, the λ system is used for excision of the inserted vector part, including the plasmid ori-replication and the marker, flanked by λ-attL/R sites. Conclusion The developed Dual-In/Out strategy is a rather straightforward but convenient combination of previously developed recombination methods: phage site-specific and general Red/ET-mediated recombination. This new approach allows us to detail the design of future recombinant marker-less strains, carrying, in particular, rather large artificial insertions that could be difficult to introduce by the usually used PCR-based Recombineering procedure. The developed strategy is simple and could be particularly useful for the construction of strains for the biotechnological industry.
Energy Technology Data Exchange (ETDEWEB)
Prakash, B. Shri; Balaji, N.; Kumar, S. Senthil; Aruna, S.T., E-mail: staruna194@gmail.com
2016-12-15
Highlights: • Preparation of plasma-grade NiO/YSZ powder in a single step. • Fabrication of nano-structured Ni/YSZ coating. • Conductivity of 600 S/cm at 800 °C. - Abstract: NiO/YSZ anode coatings are fabricated by atmospheric plasma spraying at different plasma powers from plasma-grade NiO/YSZ powders that are prepared in a single step by the solution combustion method. The process adopted avoids the multiple steps generally involved in conventional spray drying or fusing-and-crushing methods. The density of the coating increased and the porosity decreased with increasing plasma power of deposition. An ideal nano-structured Ni/YSZ anode encompassing nano YSZ particles, nano Ni particles and nano pores is achieved on reducing the coating deposited at lower plasma powers. The coatings exhibit porosities of around 27%, sufficient for anode functional layers. The electronic conductivity of the coatings is about 600 S/cm at 800 °C.
Adaptive Regularization of Neural Classifiers
DEFF Research Database (Denmark)
Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai
1997-01-01
We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore..., we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...
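The central loop of such a scheme can be illustrated in a few lines. This is a minimal sketch only: a greedy multiplicative search on a single ridge penalty stands in for the paper's gradient-based adaptation and neural-network setting, and the data are synthetic.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge solution w = (X^T X + lam*I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def val_error(lam, Xtr, ytr, Xva, yva):
    w = ridge_fit(Xtr, ytr, lam)
    return float(np.mean((Xva @ w - yva) ** 2))

def adapt_lambda(Xtr, ytr, Xva, yva, lam=1.0, step=0.5, iters=25):
    """Iteratively adapt the regularization parameter by minimizing the
    validation error (greedy up/down multiplicative search on lambda)."""
    err = val_error(lam, Xtr, ytr, Xva, yva)
    for _ in range(iters):
        for cand in (lam * (1 + step), lam / (1 + step)):
            e = val_error(cand, Xtr, ytr, Xva, yva)
            if e < err:
                lam, err = cand, e
    return lam, err

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
y = X @ rng.normal(size=10) + 0.5 * rng.normal(size=60)
Xtr, ytr, Xva, yva = X[:40], y[:40], X[40:], y[40:]
lam, err = adapt_lambda(Xtr, ytr, Xva, yva)
```

By construction the search never accepts a step that increases validation error, so the final error is no worse than the starting guess.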
Okada, Takayuki
2013-01-01
The author suggested that it is essential for lawyers and psychiatrists to have a common understanding of the mutual division of roles between them when determining criminal responsibility (CR) and, for this purpose, proposed an 8-step structured CR decision-making process. The 8 steps are: (1) gathering of information related to mental function and condition, (2) recognition of mental function and condition,(3) psychiatric diagnosis, (4) description of the relationship between psychiatric symptom or psychopathology and index offense, (5) focus on capacities of differentiation between right and wrong and behavioral control, (6) specification of elements of cognitive/volitional prong in legal context, (7) legal evaluation of degree of cognitive/volitional prong, and (8) final interpretation of CR as a legal conclusion. The author suggested that the CR decision-making process should proceed not in a step-like pattern from (1) to (2) to (3) to (8), but in a step-like pattern from (1) to (2) to (4) to (5) to (6) to (7) to (8), and that not steps after (5), which require the interpretation or the application of section 39 of the Penal Code, but Step (4), must be the core of psychiatric expert evidence. When explaining the relationship between the mental disorder and offense described in Step (4), the Seven Focal Points (7FP) are often used. The author urged basic precautions to prevent the misuse of 7FP, which are: (a) the priority of each item is not equal and the relative importance differs from case to case; (b) each item is not exclusively independent, there may be overlap between items; (c) the criminal responsibility shall not be judged because one item is applicable or because a number of items are applicable, i. e., 7FP are not "criteria," for example, the aim is not to decide such things as 'the motive is understandable' or 'the conduct is appropriate', but should be to describe how psychopathological factors affected the offense specifically in the context of
Regular Expression Matching and Operational Semantics
Directory of Open Access Journals (Sweden)
Asiri Rathnayake
2011-08-01
Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
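The "virtual machine" view described above can be made concrete. Below is a minimal lockstep simulation in the spirit of Thompson's construction: the current NFA states advance as a set, one input symbol at a time, so no backtracking is ever needed. For brevity the NFA is hand-wired for the regex `(a|b)*abb` rather than produced by a parser.

```python
# Lockstep simulation of a hand-built NFA for the regex (a|b)*abb.
EPS = None
# transitions: state -> list of (symbol, next_state); symbol EPS = epsilon move
NFA = {
    0: [(EPS, 1), (EPS, 3)],   # choose: loop once more, or start the tail
    1: [('a', 2), ('b', 2)],   # the (a|b) of the Kleene star
    2: [(EPS, 0)],             # back to the choice point
    3: [('a', 4)],             # the literal tail "abb"
    4: [('b', 5)],
    5: [('b', 6)],
    6: [],                     # accepting state
}
START, ACCEPT = 0, 6

def eps_closure(states):
    """All states reachable from `states` via epsilon moves."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for sym, t in NFA[s]:
            if sym is EPS and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def matches(text):
    current = eps_closure({START})
    for ch in text:
        current = eps_closure({t for s in current
                               for sym, t in NFA[s] if sym == ch})
    return ACCEPT in current
```

Because the state *set* is advanced in lockstep, the running time is linear in the input length regardless of how ambiguous the pattern is, which is exactly the property the parallel machines in the paper exploit.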
Jo, Joon-Jung; Kim, Min-Ji; Son, Jung-Tae; Kim, Jandi; Shin, Jong-Shik
2009-07-17
Nucleic acid hybridization is one of the essential biological processes involved in storage and transmission of genetic information. Here we quantitatively determined the effect of secondary structure on the hybridization activation energy using structurally defined oligonucleotides. It turned out that activation energy is linearly proportional to the length of a single-stranded region flanking a nucleation site, generating a 0.18 kcal/mol energy barrier per nucleotide. Based on this result, we propose that the presence of single-stranded segments available for non-productive base pairing with a nucleation counterpart extends the searching process for nucleation sites to find a perfect match. This result may provide insights into rational selection of a target mRNA site for siRNA and antisense gene silencing.
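To put the reported number in perspective, here is a back-of-the-envelope Arrhenius reading of the 0.18 kcal/mol per-nucleotide barrier increment. The rate-constant interpretation below is our illustration, not the authors' analysis.

```python
import math

R = 1.987e-3          # gas constant, kcal/(mol*K)
T = 310.15            # 37 °C in kelvin
PER_NT = 0.18         # reported barrier increment, kcal/mol per nucleotide

def relative_rate(extra_nt):
    """Arrhenius rate ratio k(n+extra_nt)/k(n) under the linear barrier model."""
    return math.exp(-PER_NT * extra_nt / (R * T))

# each extra single-stranded nucleotide slows nucleation by roughly a quarter
slowdown_per_nt = 1 - relative_rate(1)
```

At 37 °C the ratio works out to about 0.75 per added nucleotide, i.e. a modest but compounding slowdown as the flanking single-stranded segment lengthens.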
Understanding Regular Expressions
Directory of Open Access Journals (Sweden)
Doug Knox
2013-06-01
Full Text Available In this exercise we will use advanced find-and-replace capabilities in a word processing application in order to make use of structure in a brief historical document that is essentially a table in the form of prose. Without using a general programming language, we will gain exposure to some aspects of computational thinking, especially pattern matching, that can be immediately helpful to working historians (and others using word processors), and can form the basis for subsequent learning with more general programming environments.
High-pressure polymorphism as a step towards high density structures of LiAlH{sub 4}
Energy Technology Data Exchange (ETDEWEB)
Huang, Xiaoli; Duan, Defang; Li, Xin; Li, Fangfei; Huang, Yanping; Wu, Gang; Liu, Yunxian; Zhou, Qiang; Liu, Bingbing; Cui, Tian, E-mail: cuitian@jlu.edu.cn [State Key Laboratory of Superhard Materials, College of Physics, Jilin University, Changchun 130012 (China)
2015-07-27
Two high-density structures, β- and γ-LiAlH{sub 4}, are detected in LiAlH{sub 4}, a promising hydrogen storage compound, upon compression in diamond anvil cells, investigated with synchrotron X-ray diffraction and first-principles calculations. The combination of experimental and theoretical results confirms the sequence of pressure-induced structural phase transitions from α-LiAlH{sub 4} (space group P2{sub 1}/c) to β-LiAlH{sub 4} (P2{sub 1}/c-6C symmetry), and then to γ-LiAlH{sub 4} (space group Pnc2), which were not reported in the previous literature. At the α-to-β transition point for LiAlH{sub 4}, the estimated difference in cell volume is about 20%, while the transformation from the β to the γ phase involves a volume drop smaller than 1%. The α-to-β phase transition is accompanied by a local structure change from an AlH{sub 4} tetrahedron into an AlH{sub 6} octahedron, which contributes to the large volume collapse.
2010-09-02
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.
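The co-regularization step can be sketched in a two-view linear setting. This is a minimal illustration of the idea, not the authors' algorithm: labeled pairs get a squared-loss gradient step, and unlabeled pairs get a step that nudges the two views' predictions toward agreement.

```python
import numpy as np

def online_coreg(labeled, unlabeled, dim, lr=0.1, lam=0.5):
    """Two linear predictors, one per view, updated online.
    labeled:   iterable of (x_view1, x_view2, y)
    unlabeled: iterable of (x_view1, x_view2)"""
    w1, w2 = np.zeros(dim), np.zeros(dim)
    for x1, x2, y in labeled:
        w1 -= lr * (w1 @ x1 - y) * x1      # squared-loss gradient, view 1
        w2 -= lr * (w2 @ x2 - y) * x2      # squared-loss gradient, view 2
    for x1, x2 in unlabeled:
        gap = w1 @ x1 - w2 @ x2            # disagreement between the views
        w1 -= lr * lam * gap * x1          # pull the predictions together
        w2 += lr * lam * gap * x2
    return w1, w2

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
w1, w2 = online_coreg([(e1, e2, 1.0)], [(e1, e1)], dim=2)
```

Each unlabeled update contracts the disagreement at that point by a factor below one (for a small enough learning rate), which is the mechanism behind the reported gains over purely supervised training.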
van Duijn, Esther; Barbu, Ioana M; Barendregt, Arjan; Jore, Matthijs M; Wiedenheft, Blake; Lundgren, Magnus; Westra, Edze R; Brouns, Stan J J; Doudna, Jennifer A; van der Oost, John; Heck, Albert J R
2012-11-01
The CRISPR/Cas (clustered regularly interspaced short palindromic repeats/CRISPR-associated genes) immune system of bacteria and archaea provides acquired resistance against viruses and plasmids, by a strategy analogous to RNA interference. Key components of the defense system are ribonucleoprotein complexes, the composition of which appears highly variable in different CRISPR/Cas subtypes. Previous studies combined mass spectrometry, electron microscopy, and small-angle X-ray scattering to demonstrate that the E. coli Cascade complex (405 kDa) and the P. aeruginosa Csy complex (350 kDa) are similar in that they share a central spiral-shaped hexameric structure, flanked by associating proteins and one CRISPR RNA. Recently, a cryo-electron microscopy structure of Cascade revealed that the CRISPR RNA molecule resides in a groove of the hexameric backbone. For both complexes we here describe the use of native mass spectrometry in combination with ion mobility mass spectrometry to assign a stable core surrounded by more loosely associated modules. Via computational modeling, subcomplex structures were proposed that relate to the experimental IMMS data. Despite the absence of obvious sequence homology between several subunits, detailed analysis of sub-complexes strongly suggests analogy between subunits of the two complexes. Probing the specific association of E. coli Cascade/crRNA with its complementary DNA target reveals a conformational change. Altogether, these findings provide relevant new information about the potential assembly process of the two CRISPR-associated complexes.
Modelling the harmonized tertiary Institutions Salary Structure ...
African Journals Online (AJOL)
This paper analyses the Harmonized Tertiary Institution Salary Structure (HATISS IV) used in Nigeria. The irregularities in the structure are highlighted. A model that assumes a polynomial trend for the zero step salary, and exponential trend for the incremental rates, is suggested for the regularization of the structure.
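The suggested model combines two simple trends and is easy to sketch. All coefficients below are hypothetical, invented purely to illustrate the functional form: the step-0 salary is a polynomial in grade level, and salaries within a grade grow at an exponential (compound) incremental rate.

```python
# Hypothetical illustration of the suggested HATISS-style model:
# zero-step salary s0(g) follows a polynomial in grade g, and each step k
# within a grade compounds at an exponential incremental rate r(g).
POLY = [50_000.0, 12_000.0, 800.0]     # a0 + a1*g + a2*g**2 (made-up values)
BASE_RATE, RATE_DECAY = 0.04, 0.95     # r(g) = BASE_RATE * RATE_DECAY**g

def salary(grade, step):
    s0 = sum(a * grade ** i for i, a in enumerate(POLY))
    r = BASE_RATE * RATE_DECAY ** grade
    return s0 * (1 + r) ** step
```

Fitting the polynomial coefficients and the rate parameters to the published HATISS tables would then "regularize" the structure in the paper's sense, replacing ad hoc increments with a smooth two-parameter family.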
GrowYourIC: A Step Toward a Coherent Model of the Earth's Inner Core Seismic Structure
Lasbleis, Marine; Waszek, Lauren; Day, Elizabeth A.
2017-11-01
A complex inner core structure has been well established from seismic studies, showing radial and lateral heterogeneities at various length scales. Yet no geodynamic model is able to explain all the features observed. One of the main limits for this is the lack of tools to compare seismic observations and numerical models successfully. We use here a new Python tool called GrowYourIC to compare models of inner core structure. We calculate properties of geodynamic models of the inner core along seismic raypaths, for random or user-specified data sets. We test kinematic models which simulate fast lateral translation, superrotation, and differential growth. We explore first the influence on a real inner core data set, which has a sparse coverage of the inner core boundary. Such a data set is however able to successfully constrain the hemispherical boundaries due to a good sampling of latitudes. Combining translation and rotation could explain some of the features of the boundaries separating the inner core hemispheres. The depth shift of the boundaries, observed by some authors, seems unlikely to be modeled by a fast translation but could be produced by slow translation associated with superrotation.
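A hedged sketch of the kind of calculation such a tool automates follows. This is not the GrowYourIC API; the translation "age" model and the raypath bottoming points are illustrative assumptions only.

```python
import numpy as np

# Illustration only: evaluate a kinematic inner-core model (here, steady
# eastward translation) at points sampled by seismic raypaths.
R_ICB = 1221.0  # inner-core radius, km

def to_cartesian(lat_deg, lon_deg, r):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([r * np.cos(lat) * np.cos(lon),
                     r * np.cos(lat) * np.sin(lon),
                     r * np.sin(lat)])

def translation_age(point, rate=1.0, direction=(1.0, 0.0, 0.0)):
    """Age proxy under translation: material crystallizes on the upstream
    face, drifts along `direction`, and melts on the downstream face, so
    age increases linearly along the translation axis."""
    d = np.asarray(direction) / np.linalg.norm(direction)
    return (point @ d + R_ICB) / rate   # zero on the upstream face

# bottoming points of two hypothetical raypaths, on opposite hemispheres
east = to_cartesian(0.0, 0.0, R_ICB)     # +x axis plays the role of "east"
west = to_cartesian(0.0, 180.0, R_ICB)
```

Evaluating such model fields at the bottoming points of an actual data set, and comparing the predicted hemispherical pattern with observed traveltime residuals, is the comparison loop the paper describes.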
Class of regular bouncing cosmologies
Vasilić, Milovan
2017-06-01
In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.
Hattotuwagama, Channa K; Doytchinova, Irini A; Flower, Darren R
2007-01-01
Quantitative structure-activity relationship (QSAR) analysis is a cornerstone of modern informatics. Predictive computational models of peptide-major histocompatibility complex (MHC)-binding affinity based on QSAR technology have now become important components of modern computational immunovaccinology. Historically, such approaches have been built around semiqualitative, classification methods, but these are now giving way to quantitative regression methods. We review three methods: a 2D-QSAR additive partial least squares (PLS) method, a 3D-QSAR comparative molecular similarity index analysis (CoMSIA) method, and an iterative self-consistent (ISC) PLS-based additive method. The first two can identify the sequence dependence of peptide-binding specificity for various class I MHC alleles from the reported binding affinities (IC50) of peptide sets; the third is a recently developed extension to the additive method for the affinity prediction of class II peptides. The QSAR methods presented here have established themselves as immunoinformatic techniques complementary to existing methodology, useful in the quantitative prediction of binding affinity: current methods for the in silico identification of T-cell epitopes (which form the basis of many vaccines, diagnostics, and reagents) rely on the accurate computational prediction of peptide-MHC affinity. We have reviewed various human and mouse class I and class II allele models. Studied alleles comprise HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3101, HLA-A*6801, HLA-A*6802, HLA-B*3501, H2-K(k), H2-K(b), H2-D(b), HLA-DRB1*0101, HLA-DRB1*0401, HLA-DRB1*0701, I-A(b), I-A(d), I-A(k), I-A(S), I-E(d), and I-E(k). In this chapter we give a step-by-step guide to building such predictive models and assessing their reliability; the resulting models represent an advance on existing methods. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method
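The additive method's core step can be sketched without the full PLS machinery: encode each (position, residue) pair of a peptide as an indicator feature, then regress binding affinity on those indicators. In this sketch, ridge least squares stands in for PLS to keep it dependency-free, and the peptides and pIC50 values are invented, not AntiJen data.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(peptide):
    """One indicator feature per (position, residue) pair."""
    v = np.zeros(len(peptide) * len(AA))
    for i, aa in enumerate(peptide):
        v[i * len(AA) + AA.index(aa)] = 1.0
    return v

# Toy 9-mer peptides with made-up pIC50 values
peptides = ["ALAKAAAAM", "ALAKAAAAN", "ALAKAAAAV", "GLAKAAAAM", "ILAKAAAAM"]
pic50 = np.array([7.1, 6.8, 7.4, 6.2, 6.9])

X = np.array([one_hot(p) for p in peptides])
# Ridge-regularized additive model: w holds per-residue affinity contributions
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ pic50)
pred = X @ w
```

The fitted weights are directly interpretable as additive per-position residue contributions, which is what makes the additive method attractive for identifying binding-specificity patterns.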
Yoon, Yeong Keng; Choon, Tan Soo
2016-01-01
Benzimidazole derivatives have been shown to possess sirtuin-inhibitory activity. In the continuous search for potent sirtuin inhibitors, systematic changes on the terminal benzene ring were performed on previously identified benzimidazole-based sirtuin inhibitors, to further investigate their structure-activity relationships. It was demonstrated that the sirtuin activities of these novel compounds followed the trend where meta-substituted compounds possessed markedly weaker potency than ortho- and para-substituted compounds, with the exception of halogenated substituents. Molecular docking studies were carried out to rationalize these observations. Apart from this, the methods used to synthesize the interesting compounds are also discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Wlostowski, A. N.; Harman, C. J.; Molotch, N. P.
2017-12-01
The physical and biological architecture of the Earth's Critical Zone controls hydrologic partitioning, storage, and chemical evolution of precipitated water. The Critical Zone Observatory (CZO) Network provides an ideal platform to explore linkages between catchment structure and hydrologic function across a gradient of geologic and climatic settings. A legacy of hypothesis-motivated research at each site has generated a wealth of data characterizing the architecture and hydrologic function of the critical zone. We will present a synthesis of this data that aims to elucidate and explain (in the sense of making mutually intelligible) variations in hydrologic function across the CZO network. Top-down quantitative signatures of the storage and partitioning of water at catchment scales extracted from precipitation, streamflow, and meteorological data will be compared with each other, and provide quantitative benchmarks to assess differences in perceptual models of hydrologic function at each CZO site. Annual water balance analyses show that CZO sites span a wide gradient of aridity and evaporative partitioning. The aridity index (PET/P) ranges from 0.3 at Luquillo to 4.3 at Reynolds Creek, while the evaporative index (E/P) ranges from 0.3 at Luquillo (Rio Mamayes) to 0.9 at Reynolds Creek (Reynolds Creek Outlet). Snow depth and SWE observations reveal that snowpack is an important seasonal storage reservoir at four sites: Boulder, Jemez, Reynolds Creek, and Southern Sierra. Simple dynamical models are also used to infer seasonal patterns of subsurface catchment storage. A root-zone water balance model reveals unique seasonal variations in plant-available water storage. Seasonal patterns of plant-available storage are driven by the asynchronicity of seasonal precipitation and evaporation cycles. Catchment sensitivity functions are derived at each site to infer relative changes in hydraulic storage (the apparent storage reservoir responsible for modulating streamflow
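The two quoted indices can be placed on the classic Budyko curve, which predicts the evaporative index E/P from the aridity index PET/P alone. The curve comparison below is our illustration of the water-balance framework, not an analysis from the study itself.

```python
import math

def budyko_evap_index(aridity):
    """Budyko (1974) curve: expected E/P as a function of PET/P."""
    return math.sqrt(aridity * math.tanh(1.0 / aridity)
                     * (1.0 - math.exp(-aridity)))

# End-members of the CZO gradient quoted above: (PET/P, observed E/P)
sites = {"Luquillo": (0.3, 0.3), "Reynolds Creek": (4.3, 0.9)}
preds = {name: budyko_evap_index(ai) for name, (ai, _) in sites.items()}
```

Both end-members land close to the curve (about 0.28 predicted vs 0.3 observed at Luquillo; about 0.98 vs 0.9 at Reynolds Creek), consistent with an energy-limited wet site and a water-limited dry site.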
Continuum-regularized quantum gravity
International Nuclear Information System (INIS)
Chan Huesum; Halpern, M.B.
1987-01-01
The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Lu, Jing; Wang, Yong; Huang, Jianfeng, E-mail: huangjfsust@126.com; Cao, Liyun; Li, Jiayin; Hai, Guojuan; Bai, Zhe
2016-12-15
Highlights: • g-C{sub 3}N{sub 4} nanosheets with hierarchical porous structure were synthesized via one step. • The band gap of the nanosheets was wider and investigated in detail. • The nanosheets can degrade almost all of the RhB within 9 min. • The photocurrent of the nanosheets is 5.97 times as high as that of the P-25. - Abstract: Graphitic carbon nitride (g-C{sub 3}N{sub 4}) nanosheets with a hierarchical porous structure were synthesized via a one-step thermal condensation-oxidation process. The microstructure of the g-C{sub 3}N{sub 4} was characterized to explain its dramatic ultraviolet-light photocatalytic activity. The results showed that the g-C{sub 3}N{sub 4} hierarchical aggregates were assembled from nanosheets with a length of 1–2 μm and a thickness of 20–30 nm. The N{sub 2}-adsorption/desorption isotherms further indicated the presence of a fissure-like mesoporous structure. An enhanced photocurrent of 37.2 μA was obtained, which is almost 5 times higher than that of P-25. Besides, the g-C{sub 3}N{sub 4} nanosheets displayed degradation of Rhodamine B with 99.4% removal efficiency in only 9 min. Such high photocatalytic activity could be attributed to the nanoplatelet morphology, which improves electron transport along the in-plane direction. In addition, the hierarchical porous structure gave the g-C{sub 3}N{sub 4} a wider band gap. Therefore, the photoinduced electron-hole pairs have a stronger oxidation-reduction potential for photocatalysis.
Hema, M. K.; Karthik, C. S.; Warad, Ismail; Lokanath, N. K.; Zarrouk, Abdelkader; Kumara, Karthik; Pampa, K. J.; Mallu, P.
2018-04-01
The trans-[Cu(O∩O)2] complex, O∩O = 4,4,4-trifluoro-1-(thiophen-2-yl)butane-1,3-dione, was reported as a high-potential CT-DNA binder. The solved XRD structure of the complex indicated a perfectly regular square-planar geometry around the Cu(II) center. The trans/cis DFT isomerization calculation supported the XRD findings, reflecting the trans-isomer as the kinetically favored isomer. The complex structure was also characterized by conductivity measurement, CHN elemental analyses, MS, EDX, SEM, UV-Vis., FT-IR, HSA and TG/DTG. The solvatochromism behavior of the complex was evaluated using four different polar solvents. MEP and Hirshfeld surface analysis (HSA) agree that the fluorine atoms and thiophene protons have a suitable electro-potential environment to form non-classical H-bonds of the type C(Th)-H⋯F. The DNA-binding properties were investigated by viscosity tests and spectrometric titrations; the results revealed the complex to be a strong calf-thymus DNA binder, with a high intrinsic binding constant of ∼1.8 × 10⁵.
Convex nonnegative matrix factorization with manifold regularization.
Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong
2015-03-01
Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
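The graph-regularized term tr(V^T L V) and its effect on the multiplicative updates can be sketched with the closely related graph-regularized NMF (a simplified relative of the paper's GCNMF, which additionally constrains the basis to be a convex combination of data points). The affinity graph and data below are synthetic.

```python
import numpy as np

def gnmf(X, W, k, lam=0.1, iters=200, seed=0):
    """Graph-regularized NMF (simplified relative of GCNMF):
    minimize ||X - U V^T||_F^2 + lam * tr(V^T L V) with U, V >= 0,
    where L = D - W is the Laplacian of an affinity graph over samples."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    D = np.diag(W.sum(axis=1))
    U = rng.random((n, k))
    V = rng.random((m, k))
    for _ in range(iters):
        # standard multiplicative updates; the lam terms pull neighboring
        # samples' encodings V toward each other
        U *= (X @ V) / (U @ (V.T @ V) + 1e-12)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + 1e-12)
    return U, V

rng = np.random.default_rng(1)
X = rng.random((10, 8))                  # nonnegative data: 8 samples, 10 features
W = np.zeros((8, 8))                     # ring affinity graph over the samples
for i in range(8):
    W[i, (i + 1) % 8] = W[(i + 1) % 8, i] = 1.0
U, V = gnmf(X, W, k=4)
err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
```

The multiplicative form keeps both factors nonnegative throughout, and the Laplacian penalty encourages encodings of graph-adjacent samples to agree, which is the manifold-preserving property the paper builds on.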
Analysis of regularized Navier-Stokes equations, 2
Ou, Yuh-Roung; Sritharan, S. S.
1989-01-01
A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found. Regularity properties of these manifolds are analyzed.
New regular black hole solutions
International Nuclear Information System (INIS)
Lemos, Jose P. S.; Zanchin, Vilson T.
2011-01-01
In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.
Regular variation on measure chains
Czech Academy of Sciences Publication Activity Database
Řehák, Pavel; Vitovec, J.
2010-01-01
Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475
On geodesics in low regularity
Sämann, Clemens; Steinbauer, Roland
2018-02-01
We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.
Analytic stochastic regularization and gauge theories
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1987-04-01
We prove that analytic stochastic regularization breaks gauge invariance. This is done by an explicit one-loop calculation of the two-, three- and four-point vertex functions of the gluon field in scalar chromodynamics, which turn out not to be gauge invariant. We analyse the counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization. (author) [pt
Minimal length uncertainty relation and ultraviolet regularization
Kempf, Achim; Mangano, Gianpiero
1997-06-01
Studies in string theory and quantum gravity suggest the existence of a finite lower limit Δx0 to the possible resolution of distances, at the latest on the scale of the Planck length of 10-35 m. Within the framework of the Euclidean path integral we explicitly show ultraviolet regularization in field theory through this short distance structure. Both rotation and translation invariance can be preserved. An example is studied in detail.
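A standard realization of such a minimal length is a deformed commutation relation of the Kempf type; in this hedged sketch β is a deformation parameter (notation assumed here, not taken from the abstract):

```latex
[\hat{x},\hat{p}] \;=\; i\hbar\left(1+\beta\hat{p}^{2}\right)
\quad\Longrightarrow\quad
\Delta x \,\ge\, \frac{\hbar}{2}\left(\frac{1}{\Delta p}+\beta\,\Delta p\right),
\qquad
\Delta x_{0} \,=\, \hbar\sqrt{\beta},
```

where the lower bound on Δx is minimized at Δp = 1/√β, reproducing the finite resolution limit Δx0 mentioned in the abstract.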
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
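The idea of directly constraining the condition number can be sketched by clipping the sample eigenvalues to an interval [τ, κτ]; the grid search over τ below is a crude stand-in for the maximum likelihood optimal choice derived in the paper, and all names are illustrative:

```python
import numpy as np

def cap_condition_number(S, kappa):
    """Covariance estimate with condition number at most kappa (sketch).

    Eigenvalues of the sample covariance S are clipped to [tau, kappa*tau];
    tau is picked by a crude grid search minimizing the total squared change
    (the paper instead derives the maximum likelihood optimal tau).
    """
    vals, vecs = np.linalg.eigh(S)
    best, best_err = None, np.inf
    for tau in np.linspace(vals.min() / kappa + 1e-12, vals.max(), 200):
        clipped = np.clip(vals, tau, kappa * tau)
        err = np.sum((clipped - vals) ** 2)    # crude proxy for likelihood loss
        if err < best_err:
            best, best_err = clipped, err
    return (vecs * best) @ vecs.T              # re-assemble with the clipped spectrum

# "Small n" toy example: 10 samples of a 4-dimensional vector.
rng = np.random.default_rng(1)
samples = rng.normal(size=(10, 4))
S = np.cov(samples.T)                          # possibly ill-conditioned
Sr = cap_condition_number(S, kappa=10.0)
w = np.linalg.eigvalsh(Sr)
print(w.max() / w.min())                       # bounded by kappa
```

By construction every clipped spectrum lies in [τ, κτ], so the returned estimator is symmetric, positive definite, and has condition number at most κ regardless of how ill-conditioned S is.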
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
Spegazzini, Nicolas; Siesler, Heinz W; Ozaki, Yukihiro
2012-08-02
The doublet of the ν(C=O) carbonyl band in isomeric urethane systems has been extensively discussed in qualitative terms on the basis of FT-IR spectroscopy of the macromolecular structures. Recently, a reaction extent model was proposed as an inverse kinetic problem for the synthesis of diphenylurethane for which hydrogen-bonded and non-hydrogen-bonded C=O functionalities were identified. In this article, the heteronuclear C=O···H-N hydrogen bonding in the isomeric structure of diphenylurethane synthesized from phenylisocyanate and phenol was investigated via FT-IR spectroscopy, using a methodology of regularization for the inverse reaction extent model through an eigenvalue problem. The kinetic and thermodynamic parameters of this system were derived directly from the spectroscopic data. The activation and thermodynamic parameters of the isomeric structures of diphenylurethane linked through a hydrogen bonding equilibrium were studied. The study determined the enthalpy (ΔH = 15.25 kJ/mol), entropy (TΔS = 14.61 kJ/mol), and free energy (ΔG = 0.6 kJ/mol) of heteronuclear C=O···H-N hydrogen bonding by FT-IR spectroscopy through direct calculation from the differences in the kinetic parameters (δΔ(‡)H, -TδΔ(‡)S, and δΔ(‡)G) at equilibrium in the chemical reaction system. The parameters obtained in this study may contribute toward a better understanding of the properties of, and interactions in, supramolecular systems, such as the switching behavior of hydrogen bonding.
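The thermodynamic quantities reported above are mutually consistent with the standard relation between free energy, enthalpy, and entropy (a quick check of the abstract's numbers):

```latex
\Delta G \;=\; \Delta H - T\Delta S
\;=\; 15.25\ \mathrm{kJ/mol} - 14.61\ \mathrm{kJ/mol}
\;\approx\; 0.6\ \mathrm{kJ/mol},
```

i.e. the hydrogen-bonding equilibrium is nearly isoenergetic, with the enthalpic gain almost exactly offset by the entropic cost.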
Regular website transformation to mobile friendly methodology development
Miščenkov, Ilja
2017-01-01
Nowadays, the rate of technological improvement grows faster than ever, which results in increased mobile device usage. Internet users often choose to browse their favorite websites on computers as well as on mobile devices; however, not every website is suited to be displayed on both types of device. One example is the website of Vilnius University's Faculty of Mathematics and Informatics. Therefore, the objective of this work is to develop a step-by-step procedure which is used to turn a regular websi...
Geometric continuum regularization of quantum field theory
International Nuclear Information System (INIS)
Halpern, M.B.
1989-01-01
An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs
Energy Technology Data Exchange (ETDEWEB)
1980-01-01
On the 60th anniversary of the founding of the Chartered Institute of Transport, its past, its current aims and structure, and the role which the Institute should adopt to the year 2000 are discussed. Technological and social change, bureaucracy, industrial relations, worker participation, government and transport, national role, and financial policy are some of the subjects covered. In discussing the energy crisis, the objectives of the European Communities for 1985 and 1990 dealing with energy and energy conservation are noted. Absent was any mention of the role that transport could play in the more efficient and effective utilization of energy. The Institute of Transport has the opportunity to demonstrate leadership and initiative in cooperating with governments in the area of energy use and energy conservation.
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
Jiang, Hanlun
2015-07-16
Argonaute (Ago) proteins and microRNAs (miRNAs) are central components in RNA interference, which is a key cellular mechanism for sequence-specific gene silencing. Despite intensive studies, molecular mechanisms of how Ago recognizes miRNA remain largely elusive. In this study, we propose a two-step mechanism for this molecular recognition: selective binding followed by structural re-arrangement. Our model is based on the results of a combination of Markov State Models (MSMs), large-scale protein-RNA docking, and molecular dynamics (MD) simulations. Using MSMs, we identify an open state of apo human Ago-2 in fast equilibrium with partially open and closed states. Conformations in this open state are distinguished by their largely exposed binding grooves that can geometrically accommodate miRNA, as indicated in our protein-RNA docking studies. miRNA may then selectively bind to these open conformations. Upon the initial binding, the complex may perform further structural re-arrangement, as shown in our MD simulations, and eventually reach the stable binary complex structure. Our results provide novel insights into Ago-miRNA recognition mechanisms, and our methodology holds great potential to be widely applied in the studies of other important molecular recognition systems.
Jiang, Hanlun; Sheong, Fu Kit; Zhu, Lizhe; Gao, Xin; Bernauer, Julie; Huang, Xuhui
2015-01-01
Argonaute (Ago) proteins and microRNAs (miRNAs) are central components in RNA interference, which is a key cellular mechanism for sequence-specific gene silencing. Despite intensive studies, molecular mechanisms of how Ago recognizes miRNA remain largely elusive. In this study, we propose a two-step mechanism for this molecular recognition: selective binding followed by structural re-arrangement. Our model is based on the results of a combination of Markov State Models (MSMs), large-scale protein-RNA docking, and molecular dynamics (MD) simulations. Using MSMs, we identify an open state of apo human Ago-2 in fast equilibrium with partially open and closed states. Conformations in this open state are distinguished by their largely exposed binding grooves that can geometrically accommodate miRNA, as indicated in our protein-RNA docking studies. miRNA may then selectively bind to these open conformations. Upon the initial binding, the complex may perform further structural re-arrangement, as shown in our MD simulations, and eventually reach the stable binary complex structure. Our results provide novel insights into Ago-miRNA recognition mechanisms, and our methodology holds great potential to be widely applied in the studies of other important molecular recognition systems.
International Nuclear Information System (INIS)
1980-10-01
This book is divided into three parts, all concerned with the practical use of stepping motors. The first part has six chapters, covering the stepping motor, classification of stepping motors, basic theory of stepping motors, characteristics and basic terminology, types and characteristics of hybrid stepping motors, and basic control of stepping motors. The second part deals with applications of stepping motors: hardware for stepping motor control, stepping motor control by microcomputer, and software for stepping motor control. The last part covers choice of a stepping motor system, examples of stepping motors, measurement of stepping motors, and practical cases of stepping motor application.
The Impact of Computerization on Regular Employment (Japanese)
SUNADA Mitsuru; HIGUCHI Yoshio; ABE Masahiro
2004-01-01
This paper uses micro data from the Basic Survey of Japanese Business Structure and Activity to analyze the effects of companies' introduction of information and telecommunications technology on employment structures, especially regular versus non-regular employment. Firstly, examination of trends in the ratio of part-time workers recorded in the Basic Survey shows that part-time worker ratios in manufacturing firms are rising slightly, but that companies with a high proportion of part-timers...
Dimensional regularization in configuration space
International Nuclear Information System (INIS)
Bollini, C.G.; Giambiagi, J.J.
1995-09-01
Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum-space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs
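The construction rests on the D-dimensional Fourier transform of the free Euclidean scalar propagator, which has the standard Bochner-type closed form (a textbook result quoted here for orientation, not notation from the abstract):

```latex
\Delta_{D}(x) \;=\; \int \frac{d^{D}p}{(2\pi)^{D}}\,
\frac{e^{\,i p\cdot x}}{p^{2}+m^{2}}
\;=\; \frac{1}{(2\pi)^{D/2}}
\left(\frac{m}{|x|}\right)^{\frac{D-2}{2}} K_{\frac{D-2}{2}}\!\left(m|x|\right).
```

For complex dimension (the ν of the text) the Bessel-function singularity at |x| → 0 is moderated, which is what allows products of such propagators to be formed and transformed back without divergence problems.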
Regular algebra and finite machines
Conway, John Horton
2012-01-01
World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, commutative regular alg
Matrix regularization of 4-manifolds
Trzetrzelewski, M.
2012-01-01
We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...
Regularization of Nonmonotone Variational Inequalities
International Nuclear Information System (INIS)
Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.
2006-01-01
In this paper we extend the Tikhonov-Browder regularization scheme from monotone to rather a general class of nonmonotone multivalued variational inequalities. We show that their convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems
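The Tikhonov-Browder scheme replaces a VI with operator F by a family of strongly monotone problems with operators F + εI and lets ε → 0. A minimal numerical sketch under assumed names (the operator, the set C = [0,1]^2, and the step rule are illustrative and single-valued, whereas the paper treats general multivalued nonmonotone operators):

```python
import numpy as np

def solve_vi_tikhonov(F, project, x0, eps, iters=5000):
    """Projected iteration for the regularized VI with operator F + eps*I (sketch).

    Adding eps*x makes the operator strongly monotone, so the projected
    fixed-point iteration contracts; re-solving while eps -> 0 traces the
    Tikhonov-Browder regularization path.
    """
    step = eps / (1.0 + eps * eps)   # contraction step tuned to this example's operator
    x = x0
    for _ in range(iters):
        x = project(x - step * (F(x) + eps * x))
    return x

# Monotone but not strongly monotone operator: skew-symmetric A.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([-0.5, 0.5])
F = lambda x: A @ x + b
proj = lambda x: np.clip(x, 0.0, 1.0)          # feasible set C = [0,1]^2

for eps in (1.0, 0.3, 0.1):
    x = solve_vi_tikhonov(F, proj, np.zeros(2), eps)
    # natural-map residual x - P_C(x - (F(x) + eps*x)) vanishes at a solution
    r = np.linalg.norm(x - proj(x - (F(x) + eps * x)))
    print(eps, r)
```

The residual of the natural map characterizes VI solutions, so watching it vanish for each ε confirms that every regularized problem is solved before ε is reduced.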
Lattice regularized chiral perturbation theory
International Nuclear Information System (INIS)
Borasoy, Bugra; Lewis, Randy; Ouimet, Pierre-Philippe A.
2004-01-01
Chiral perturbation theory can be defined and regularized on a spacetime lattice. A few motivations are discussed here, and an explicit lattice Lagrangian is reviewed. A particular aspect of the connection between lattice chiral perturbation theory and lattice QCD is explored through a study of the Wess-Zumino-Witten term
2011-01-20
... Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... meeting of the Board will be open to the
Forcing absoluteness and regularity properties
Ikegami, D.
2010-01-01
For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.
Globals of Completely Regular Monoids
Institute of Scientific and Technical Information of China (English)
Wu Qian-qian; Gan Ai-ping; Du Xian-kun
2015-01-01
An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.
Fluid queues and regular variation
Boxma, O.J.
1996-01-01
This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail gives rise to an even
Fluid queues and regular variation
O.J. Boxma (Onno)
1996-01-01
This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail
Empirical laws, regularity and necessity
Koningsveld, H.
1973-01-01
In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject. I am referring especially to two well-known views, viz. the regularity and
Interval matrices: Regularity generates singularity
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Shary, S.P.
2018-01-01
Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016
Regularization in Matrix Relevance Learning
Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael
In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can
Korir, Peter C.; Dejene, Francis B.
2018-04-01
In this work a two-step growth process was used to prepare Cu(In,Ga)Se2 thin films for solar cell applications. The first step involves deposition of Cu-In-Ga precursor films, followed by selenization under vacuum using elemental selenium vapor to form the Cu(In,Ga)Se2 film. The growth was done at a fixed temperature of 515 °C for 45, 60 and 90 min to control film thickness and gallium incorporation into the absorber layer. The X-ray diffraction (XRD) patterns confirm single-phase Cu(In,Ga)Se2 for all three samples, and no secondary phases were observed. A shift in the diffraction peaks to higher 2θ values is observed for the thin films compared to pure CuInSe2. The surface morphology of the film grown for 60 min was characterized by uniform, large-grain particles, which are typical of device-quality material. Photoluminescence spectra show a shift of the emission peaks to higher energies for longer selenization durations, attributed to the incorporation of more gallium into the CuInSe2 crystal structure. Electron probe microanalysis (EPMA) revealed a uniform distribution of the elements over the surface of the film. The elemental ratios Cu/(In + Ga) and Se/(Cu + In + Ga) depend strongly on the selenization time. The Cu/(In + Ga) ratio for the 60 min film is 0.88, which lies within the range (0.75-0.98) reported for the best solar cell device performances.
Constrained least squares regularization in PET
International Nuclear Information System (INIS)
Choudhury, K.R.; O'Sullivan, F.O.
1996-01-01
Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood methods, at a fraction of the computational effort
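The flavor of a non-iterative constrained least squares estimate can be sketched as a one-shot ridge solve followed by a nonnegativity clip; this is a simplified stand-in for the paper's approximate estimator, with the blur matrix, data, and regularization weight all illustrative:

```python
import numpy as np

def approx_constrained_ls(A, y, lam):
    """One-shot regularized least squares with a nonnegativity clip (sketch).

    Solves (A^T A + lam*I) x = A^T y in a single direct solve -- no iterative
    projection/backprojection steps as in ART or EM -- then clips the negative
    background artifacts that plain least squares produces.
    """
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    return np.maximum(x, 0.0)

# Toy 1-D deconvolution: recover a nonnegative spike train from a Gaussian blur.
rng = np.random.default_rng(2)
n = 40
offsets = np.arange(-3, 4)
kernel = np.exp(-0.5 * offsets ** 2)
A = np.zeros((n, n))
for i in range(n):
    for k, wgt in zip(offsets, kernel):
        if 0 <= i + k < n:
            A[i + k, i] += wgt
x_true = np.zeros(n)
x_true[[8, 20, 31]] = [1.0, 2.0, 1.5]
y = A @ x_true + 0.01 * rng.normal(size=n)
x_hat = approx_constrained_ls(A, y, lam=0.05)
print(np.argmax(x_hat), np.all(x_hat >= 0))
```

The single linear solve plus clip keeps the cost at one factorization, in the spirit of the paper's claim of matching iterative estimators at a fraction of the effort.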
International Nuclear Information System (INIS)
Herrmann, K.
1994-03-01
In this work the properties of Josephson step contacts are investigated. After a short introduction to Josephson step contacts, the structure, properties and Josephson contacts of YBa2Cu3O7-x high-Tc superconductors are presented. The fabrication of HTSC step contacts and their microstructure are discussed. The electric properties of these contacts are measured, together with the Josephson emission and the magnetic field dependence. The temperature dependence of the stationary transport properties is given. (WL)
Regular and conformal regular cores for static and rotating solutions
Energy Technology Data Exchange (ETDEWEB)
Azreg-Aïnou, Mustapha
2014-03-07
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
Regular and conformal regular cores for static and rotating solutions
International Nuclear Information System (INIS)
Azreg-Aïnou, Mustapha
2014-01-01
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
Step out - Step in Sequencing Games
Musegaas, M.; Borm, P.E.M.; Quant, M.
2014-01-01
In this paper a new class of relaxed sequencing games is introduced: the class of Step out - Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order.
Step out-step in sequencing games
Musegaas, Marieke; Borm, Peter; Quant, Marieke
2015-01-01
In this paper a new class of relaxed sequencing games is introduced: the class of Step out–Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. First,
Laplacian manifold regularization method for fluorescence molecular tomography
He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei
2017-04-01
Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparsity nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performed better than the comparative ℓ1 minimization method in both spatial aggregation and location accuracy.
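A minimal sketch of the joint model min ||Ax - b||^2 + lam1*||x||_1 + lam2*x'Lx, solved here by a plain ISTA-style iteration (the authors' gradient projection method and its Barzilai-Borwein variant differ in details; all names and the toy problem are illustrative):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the proximal map of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def joint_l1_laplacian(A, b, L, lam1, lam2, iters=500):
    """ISTA-style solver for ||Ax-b||^2 + lam1*||x||_1 + lam2*x'Lx (sketch)."""
    n = A.shape[1]
    x = np.zeros(n)
    # step = 1 / Lipschitz constant of the smooth part's gradient
    step = 1.0 / (2.0 * (np.linalg.norm(A, 2) ** 2 + lam2 * np.linalg.norm(L, 2)))
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ x - b) + 2.0 * lam2 * (L @ x)
        x = soft(x - step * grad, step * lam1)   # gradient step + l1 proximal step
    return x

# Toy problem: sparse, spatially contiguous "source"; chain-graph Laplacian.
rng = np.random.default_rng(3)
n, m = 30, 20
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[10:13] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=m)
adj = np.diag(np.ones(n - 1), 1)
adj = adj + adj.T
L = np.diag(adj.sum(axis=1)) - adj
x_hat = joint_l1_laplacian(A, b, L, lam1=0.05, lam2=0.1)
print(np.round(x_hat[8:15], 2))
```

The ℓ1 term drives most coefficients to exact zero while the Laplacian term encourages neighboring entries to agree, mirroring the sparsity-plus-spatial-structure intent of the joint model.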
Arend, Carlos Frederico; Arend, Ana Amalia; da Silva, Tiago Rodrigues
2014-06-01
The aim of our study was to systematically compare different methodologies to establish an evidence-based approach based on tendon thickness and structure for sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. US was obtained from 164 symptomatic patients with supraspinatus tendinopathy detected at MRI and 42 asymptomatic controls with normal MRI. Diagnostic yield was calculated for either maximal supraspinatus tendon thickness (MSTT) and tendon structure as isolated criteria and using different combinations of parallel and sequential testing at US. Chi-squared tests were performed to assess sensitivity, specificity, and accuracy of different diagnostic approaches. Mean MSTT was 6.68 mm in symptomatic patients and 5.61 mm in asymptomatic controls (P6.0mm provided best results for accuracy (93.7%) when compared to other measurements of tendon thickness. Also as an isolated criterion, abnormal tendon structure (ATS) yielded 93.2% accuracy for diagnosis. The best overall yield was obtained by both parallel and sequential testing using either MSTT>6.0mm or ATS as diagnostic criteria at no particular order, which provided 99.0% accuracy, 100% sensitivity, and 95.2% specificity. Among these parallel and sequential tests that provided best overall yield, additional analysis revealed that sequential testing first evaluating tendon structure required assessment of 258 criteria (vs. 261 for sequential testing first evaluating tendon thickness and 412 for parallel testing) and demanded a mean of 16.1s to assess diagnostic criteria and reach the diagnosis (vs. 43.3s for sequential testing first evaluating tendon thickness and 47.4s for parallel testing). We found that using either MSTT>6.0mm or ATS as diagnostic criteria for both parallel and sequential testing provides the best overall yield for sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. Among these strategies, a two-step sequential approach first assessing tendon
Guerrero, Miguel; Pané, Salvador; Nelson, Bradley J.; Baró, Maria Dolors; Roldán, Mònica; Sort, Jordi; Pellicer, Eva
2013-11-01
Three-dimensional (3D) hierarchically porous composite Cu-BiOCl films have been prepared by a facile one-step galvanostatic electrodeposition process from acidic electrolytic solutions containing Cu(II) and Bi(III) chloride salts and Triton X-100. The films show spherical, micron-sized pores that spread over the whole film thickness. In turn, the pore walls are made of randomly packed BiOCl nanoplates that are assembled leaving micro-nanopore voids beneath. It is believed that Cu grows within the interstitial spaces between the hydrogen bubbles produced from the reduction of H+ ions. Then, the BiOCl sheets accommodate in the porous network defined by the Cu building blocks. The presence of Cu tends to enhance the mechanical stability of the composite material. The resulting porous Cu-BiOCl films exhibit homogeneous and stable-in-time photoluminescent response arising from the BiOCl component that spreads over the entire 3D porous structure, as demonstrated by confocal scanning laser microscopy. A broad-band emission covering the entire visible range, in the wavelength interval 450-750 nm, is obtained. The present work paves the way for the facile and controlled preparation of a new generation of photoluminescent membranes.
Sparse regularization for force identification using dictionaries
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, Sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, namely Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces, including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
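The SpaRSA idea, minimizing ||Phi w - y||^2 + lam*||w||_1 over dictionary coefficients with Barzilai-Borwein spectral step lengths, can be sketched as follows; the monotone safeguard used here is a simplification of SpaRSA's nonmonotone acceptance test, and the random toy dictionary stands in for the structure's transfer function times a wavelet or B-spline dictionary:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the proximal map of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparsa(Phi, y, lam, iters=300):
    """SpaRSA-flavoured solver for ||Phi w - y||^2 + lam*||w||_1 (sketch)."""
    Lip = 2.0 * np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    obj = lambda w: np.sum((Phi @ w - y) ** 2) + lam * np.abs(w).sum()
    w = np.zeros(Phi.shape[1])
    grad = 2.0 * Phi.T @ (Phi @ w - y)
    alpha = Lip
    for _ in range(iters):
        w_new = soft(w - grad / alpha, lam / alpha)
        if obj(w_new) > obj(w):                      # safeguard: fall back to a plain ISTA step
            alpha = Lip
            w_new = soft(w - grad / alpha, lam / alpha)
        grad_new = 2.0 * Phi.T @ (Phi @ w_new - y)
        s, g = w_new - w, grad_new - grad
        if s @ s > 0:
            alpha = min(max((s @ g) / (s @ s), 1e-8), Lip)  # Barzilai-Borwein curvature
        w, grad = w_new, grad_new
    return w

# Toy impact identification: two nonzero coefficients in a random dictionary.
rng = np.random.default_rng(4)
Phi = rng.normal(size=(50, 100))
w_true = np.zeros(100)
w_true[[7, 42]] = [3.0, -2.0]
y = Phi @ w_true + 0.01 * rng.normal(size=50)
w_hat = sparsa(Phi, y, lam=0.5)
print(np.round(w_hat[[7, 42]], 2))
```

The thresholding step produces exact zeros, so the number of active basis functions is determined adaptively by the optimization itself rather than fixed in advance, which is the point the abstract makes against the classical l2 expansion.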
Energy functions for regularization algorithms
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations and invariance under reparameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
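A minimal illustration of such a smoothness energy, assuming a planar curve sampled as a list of points: a discrete second-difference ("thin rod") stabilizer, which is the classical choice, not the circle-preserving one the paper proposes. Under this energy a straight line is maximally smooth, while a sampled circle has nonzero energy, which is the source of the curvature-underestimation bias discussed above.

```python
def bending_energy(pts):
    # sum of squared second differences of a sampled planar curve;
    # a classical (non-circle-preserving) discrete smoothness stabilizer
    e = 0.0
    for i in range(1, len(pts) - 1):
        for d in (0, 1):  # x and y coordinates
            e += (pts[i - 1][d] - 2 * pts[i][d] + pts[i + 1][d]) ** 2
    return e
```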
Physical model of dimensional regularization
Energy Technology Data Exchange (ETDEWEB)
Schonfeld, Jonathan F.
2016-12-15
We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
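The regularizer here is the mutual information between classification responses and true labels. A minimal empirical estimator for discrete responses is sketched below (the paper models MI via entropy estimation for continuous classifier outputs, which this sketch does not attempt; names are illustrative):

```python
from collections import Counter
from math import log

def mutual_information(responses, labels):
    # empirical mutual information (in nats) between discrete
    # classification responses and true class labels
    n = len(responses)
    pr, pl = Counter(responses), Counter(labels)
    joint = Counter(zip(responses, labels))
    # sum over observed joint outcomes of p(r,l) * log(p(r,l)/(p(r)p(l)))
    return sum((c / n) * log(c * n / (pr[r] * pl[l]))
               for (r, l), c in joint.items())
```

When responses perfectly predict labels the MI equals the label entropy; when they are independent it is zero, so maximizing it pushes the classifier to reduce label uncertainty.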
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Regularized strings with extrinsic curvature
International Nuclear Information System (INIS)
Ambjoern, J.; Durhuus, B.
1987-07-01
We analyze models of discretized string theories, where the path integral over world sheet variables is regularized by summing over triangulated surfaces. The inclusion of curvature in the action is a necessity for the scaling of the string tension. We discuss the physical properties of models with extrinsic curvature terms in the action and show that the string tension vanishes at the critical point where the bare extrinsic curvature coupling tends to infinity. Similar results are derived for models with intrinsic curvature. (orig.)
Circuit complexity of regular languages
Czech Academy of Sciences Publication Activity Database
Koucký, Michal
2009-01-01
Roč. 45, č. 4 (2009), s. 865-879 ISSN 1432-4350 R&D Projects: GA ČR GP201/07/P276; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : regular languages * circuit complexity * upper and lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.726, year: 2009
Directory of Open Access Journals (Sweden)
Ian MacLaren
2014-06-01
Stepped antiphase boundaries are frequently observed in Ti-doped Bi0.85Nd0.15FeO3, related to the novel planar antiphase boundaries reported recently. The atomic structure and chemistry of these steps are determined by a combination of high angle annular dark field and bright field scanning transmission electron microscopy imaging, together with electron energy loss spectroscopy. The core of these steps is found to consist of 4 edge-sharing FeO6 octahedra. The structure is confirmed by image simulations using a frozen phonon multislice approach. The steps are also found to be negatively charged and, like the planar boundaries studied previously, result in polarisation of the surrounding perovskite matrix.
Parameter optimization in the regularized kernel minimum noise fraction transformation
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2012-01-01
Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
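The 2-4 rounds of increasingly refined grid search can be sketched as follows. This is a hypothetical 1-D version for illustration; the actual tuning searches jointly over the kernel and regularization parameters:

```python
def refined_grid_search(f, lo, hi, rounds=3, pts=11):
    # maximize f over [lo, hi] by a few rounds of grid search,
    # each round zooming into a window around the current best point
    for _ in range(rounds):
        grid = [lo + (hi - lo) * i / (pts - 1) for i in range(pts)]
        best = max(grid, key=f)
        span = (hi - lo) / (pts - 1)   # one grid spacing
        lo, hi = best - span, best + span
    return best
```

With 11 points per round, three rounds shrink the search window by a factor of 5 each time, locating the maximizer to within a few hundredths of the initial range.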
International Nuclear Information System (INIS)
Yang, Shuo; Wang, Jian; Li, Xiuyan; Zhai, Hongju; Han, Donglai; Wei, Bing; Wang, Dandan; Yang, Jinghai
2014-01-01
Highlights: • ZnO nanocage arrays were synthesized by a one-step etching route. • ZnO nanocages exhibit higher photocatalytic activity than the other samples. • The different photocatalytic activities of the different samples were analyzed. • The formation mechanism of ZnO nanocages is proposed. - Abstract: ZnO nanocages and other nanostructures have been synthesized via a simple one-pot hydrothermal method with different reaction times. It is worth mentioning that this is a completely green method which does not require any chemicals other than the Zn foil that served as the Zn source in the experiment. X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), photoluminescence (PL) and UV–Vis diffuse reflection spectra were used to characterize the crystallinity, morphology and optical properties of the ZnO structures. Growth mechanisms of ZnO were proposed based on these results. Furthermore, ZnO films with different morphologies and crystal growth habits exhibited different activities for rhodamine B degradation. The influence of the reaction time on the morphology of the ZnO films and the effect of the morphologies on the photocatalytic activity are discussed.
International Nuclear Information System (INIS)
Dahmani, M.; McArthur, R.; Kim, B.G.; Kim, S.M.; Seo, H.-B.
2008-01-01
This paper describes the calculation of two-group incremental cross sections for all of the reactivity devices and incore structural materials for an RFSP-IST full-core model of Wolsong NPP Unit 1, in support of the conversion of the reference plant model to two energy groups. This is of particular interest since the calculation used the new standard 'side-step' approach, which is a three-dimensional supercell method that employs the Industry Standard Toolset (IST) codes DRAGON-IST and WIMS-IST with the ENDF/B-VI nuclear data library. In this technique, the macroscopic cross sections for the fuel regions and the device material specifications are first generated using the lattice code WIMS-IST with 89 energy groups. DRAGON-IST then uses this data with a standard supercell modelling approach for the three-dimensional calculations. Incremental cross sections are calculated for the stainless-steel adjuster rods (SS-ADJ), the liquid zone control units (LZCU), the shutoff rods (SOR), the mechanical control absorbers (MCA) and various structural materials, such as guide tubes, springs, locators, brackets, adjuster cables and support bars and the moderator inlet nozzle deflectors. Isotopic compositions of the Zircaloy-2, stainless steel and Inconel X-750 alloys in these items are derived from Wolsong NPP Unit 1 history dockets. Their geometrical layouts are based on applicable design drawings. Mid-burnup fuel with no moderator poison was assumed. The incremental cross sections and key aspects of the modelling are summarized in this paper. (author)
On the MSE Performance and Optimization of Regularized Problems
Alrashdi, Ayed
2016-11-01
The amount of data that has been measured, transmitted/received, and stored in recent years has dramatically increased; today, we are in the world of big data. Fortunately, in many applications, we can take advantage of possible structures and patterns in the data to overcome the curse of dimensionality. The most well-known structures include sparsity, low-rankness, and block sparsity. These arise in a wide range of applications such as machine learning, medical imaging, signal processing, social networks and computer vision. This has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function. This gives rise to a potential interest in regularized inverse problems, where the process of reconstructing the structured signal can be modeled as a regularized problem. This thesis particularly focuses on finding the optimal regularization parameter for such problems, such as ridge regression, LASSO, square-root LASSO and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT) that has been used recently to precisely predict performance errors.
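As a toy version of the tuning problem, the sketch below picks the ridge regularizer that minimizes the true MSE on a known signal, i.e. an oracle that frameworks like the CGMT aim to approximate without knowing the signal. All names and values are illustrative assumptions:

```python
import numpy as np

def ridge(A, y, lam):
    # ridge / Tikhonov-regularized least squares: (A^T A + lam I)^-1 A^T y
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

def best_lambda(A, y, x_true, grid):
    # oracle tuning: choose the regularizer minimizing the true MSE
    return min(grid, key=lambda lam: np.sum((ridge(A, y, lam) - x_true) ** 2))
```

With noisy data the oracle picks a strictly positive regularizer, since shrinking the estimate trades a little bias for a larger reduction in noise variance.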
International Nuclear Information System (INIS)
Haniger, L.; Elger, R.; Kocandrle, L.; Zdebor, J.
1986-01-01
A linear step drive is described developed in Czechoslovak-Soviet cooperation and intended for driving WWER-1000 control rods. The functional principle is explained of the motor and the mechanical and electrical parts of the drive, power control, and the indicator of position are described. The motor has latches situated in the reactor at a distance of 3 m from magnetic armatures, it has a low structural height above the reactor cover, which suggests its suitability for seismic localities. Its magnetic circuits use counterpoles; the mechanical shocks at the completion of each step are damped using special design features. The position indicator is of a special design and evaluates motor position within ±1% of total travel. A drive diagram and the flow chart of both the control electronics and the position indicator are presented. (author) 4 figs
Hippocampus discovery First steps
Directory of Open Access Journals (Sweden)
Eliasz Engelhardt
The first steps of the discovery, and the main discoverers, of the hippocampus are outlined. Arantius was the first to describe a structure he named "hippocampus" or "white silkworm". Despite numerous controversies and alternate designations, the term hippocampus has prevailed until this day as the most widely used term. Duvernoy provided an illustration of the hippocampus and surrounding structures, considered the first by most authors, which appeared more than one and a half century after Arantius' description. Some authors have identified other drawings and texts which they claim predate Duvernoy's depiction, in studies by Vesalius, Varolio, Willis, and Eustachio, albeit unconvincingly. Considering the definition of the hippocampal formation as comprising the hippocampus proper, dentate gyrus and subiculum, Arantius and Duvernoy apparently described the gross anatomy of this complex. The pioneering studies of Arantius and Duvernoy revealed a relatively small hidden formation that would become one of the most valued brain structures.
Multiple graph regularized nonnegative matrix factorization
Wang, Jim Jing-Yan
2013-10-01
Non-negative matrix factorization (NMF) has been widely used as a data representation method based on components. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) has been proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF variant, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. Factorization matrices and linear combination coefficients of graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, thus resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
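The graph-regularized multiplicative updates can be sketched as follows. This is the single-graph GrNMF case (Cai et al.), not the proposed multi-graph MultiGrNMF, and the function names are illustrative:

```python
import numpy as np

def graph_reg_nmf(X, W_adj, k, lam=0.1, iters=200, seed=0):
    # Graph-regularized NMF: min ||X - U V^T||_F^2 + lam * tr(V^T L V),
    # with graph Laplacian L = D - W_adj, via multiplicative updates.
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k))
    V = rng.random((m, k))
    D = np.diag(W_adj.sum(axis=1))
    for _ in range(iters):
        U *= (X @ V) / (U @ V.T @ V + 1e-12)
        V *= (X.T @ U + lam * (W_adj @ V)) / (V @ U.T @ U + lam * (D @ V) + 1e-12)
    return U, V
```

With lam=0 this reduces to plain Lee-Seung NMF; the graph term pulls the representations of strongly connected samples together.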
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA
2018-05-01
Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and makes it possible to incorporate information about geological structures. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
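A sketch of building such an operator from a covariance model by eigendecomposition: starting from cell-centre coordinates, form a covariance matrix C from a correlation model and return C^(-1/2), which can be used as the constraint ||C^(-1/2) m|| in an inversion. The exponential correlation model and function names are illustrative assumptions:

```python
import numpy as np

def geostat_operator(coords, corr_len):
    # coords: (n_cells, dim) array of cell centres (mesh may be irregular).
    # Build C from an exponential correlation model, then C^(-1/2)
    # via eigendecomposition C = Q diag(w) Q^T.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = np.exp(-d / corr_len)
    w, Q = np.linalg.eigh(C)
    return Q @ np.diag(1.0 / np.sqrt(np.clip(w, 1e-12, None))) @ Q.T
```

Because the operator depends only on inter-cell distances (and the correlation model), it is equally well defined on regular and irregular discretizations.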
Regularization methods in Banach spaces
Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S
2012-01-01
Regularization methods aimed at finding stable approximate solutions are a necessary tool to tackle inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based rather on conventions than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B
Academic Training Lecture - Regular Programme
PH Department
2011-01-01
Regular Lecture Programme 9 May 2011 ACT Lectures on Detectors - Inner Tracking Detectors by Pippa Wells (CERN) 10 May 2011 ACT Lectures on Detectors - Calorimeters (2/5) by Philippe Bloch (CERN) 11 May 2011 ACT Lectures on Detectors - Muon systems (3/5) by Kerstin Hoepfner (RWTH Aachen) 12 May 2011 ACT Lectures on Detectors - Particle Identification and Forward Detectors by Peter Krizan (University of Ljubljana and J. Stefan Institute, Ljubljana, Slovenia) 13 May 2011 ACT Lectures on Detectors - Trigger and Data Acquisition (5/5) by Dr. Brian Petersen (CERN) from 11:00 to 12:00 at CERN ( Bldg. 222-R-001 - Filtration Plant )
The Regularity of Optimal Irrigation Patterns
Morel, Jean-Michel; Santambrogio, Filippo
2010-02-01
A branched structure is observable in draining and irrigation systems, in electric power supply systems, and in natural objects like blood vessels, river basins or trees. Recent approaches to these networks derive their branched structure from an energy functional whose essential feature is to favor wide routes. Given a flow s in a river, a road, a tube or a wire, the transportation cost per unit length is supposed in these models to be proportional to s^α with 0 < α < 1. We address here the case where the irrigated measure is the Lebesgue density on a smooth open set and the irrigating measure is a single source. In that case we prove that all branches of optimal irrigation trees satisfy an elliptic equation and that their curvature is a bounded measure. In consequence all branching points in the network have a tangent cone made of a finite number of segments, and all other points have a tangent. An explicit counterexample disproves these regularity properties for non-Lebesgue irrigated measures.
RES: Regularized Stochastic BFGS Algorithm
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
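The regularized curvature update can be sketched as below, following the RES-style modification of BFGS as I read it: the gradient variation y is shifted by -δs before the rank-one corrections, and δI is added back, keeping the curvature estimate's eigenvalues bounded below. The constant δ and names are illustrative:

```python
import numpy as np

def res_update(B, s, y, delta=0.1):
    # One RES-style regularized BFGS matrix update.
    # s: argument variation, y: (stochastic) gradient variation.
    r = y - delta * s                        # regularized gradient variation
    Bs = B @ s
    return (B + np.outer(r, r) / (r @ s)
              - np.outer(Bs, Bs) / (s @ Bs)
              + delta * np.eye(len(s)))
```

A quick check: expanding the update shows the new matrix still satisfies the secant condition B_new s = y exactly, since the -δs shift and the +δI term cancel along s.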
Regularized Label Relaxation Linear Regression.
Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu
2018-04-01
Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.
Leontjevas, Ruslan; Gerritsen, Debby L; Smalbrugge, Martin; Teerenstra, Steven; Vernooij-Dassen, Myrra J F J; Koopmans, Raymond T C M
2013-06-29
Depression in nursing-home residents is often under-recognised. We aimed to establish the effectiveness of a structural approach to its management. Between May 15, 2009, and April 30, 2011, we undertook a multicentre, stepped-wedge cluster-randomised trial in four provinces of the Netherlands. A network of nursing homes was invited to enrol one dementia and one somatic unit per nursing home. In enrolled units, nursing-home staff recruited residents, who were eligible as long as we had received written informed consent. Units were randomly allocated to one of five groups with computer-generated random numbers. A multidisciplinary care programme, Act in Case of Depression (AiD), was implemented at different timepoints in each group: at baseline, no groups were implementing the programme (usual care); the first group implemented it shortly after baseline; and other groups sequentially began implementation after assessments at intervals of roughly 4 months. Residents did not know when the intervention was being implemented or what the programme elements were; research staff were masked to intervention implementation, depression treatment, and results of previous assessments; and data analysts were masked to intervention implementation. The primary endpoint was depression prevalence in units, which was the proportion of residents per unit with a score of more than seven on the proxy-based Cornell scale for depression in dementia. Analyses were by intention to treat. This trial is registered with the Netherlands National Trial Register, number NTR1477. 16 dementia units (403 residents) and 17 somatic units (390 residents) were enrolled in the course of the study. In somatic units, AiD reduced prevalence of depression (adjusted effect size -7·3%, 95% CI -13·7 to -0·9). The effect was not significant in dementia units (0·6, -5·6 to 6·8) and differed significantly from that in somatic units (p=0·031). Adherence to depression assessment procedures was lower in dementia
Tessellating the Sphere with Regular Polygons
Soto-Johnson, Hortensia; Bechthold, Dawn
2004-01-01
Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
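The counting argument behind this claim can be checked directly: a regular tiling {p, q} of the sphere (p-gons, q meeting at each vertex, with p, q ≥ 3) requires the vertex angle sum to exceed the flat case, i.e. 1/p + 1/q > 1/2, equivalently (p-2)(q-2) < 4, leaving only the five Platonic solids:

```python
def sphere_tessellations():
    # Enumerate {p, q} with p, q >= 3 satisfying (p-2)(q-2) < 4,
    # the condition for a regular tiling of the sphere.
    return [(p, q) for p in range(3, 10) for q in range(3, 10)
            if (p - 2) * (q - 2) < 4]
```

The faces that occur are exactly triangles (p=3), squares (p=4) and pentagons (p=5), matching the statement above.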
On the equivalence of different regularization methods
International Nuclear Information System (INIS)
Brzezowski, S.
1985-01-01
The R̂-operation preceded by the regularization procedure is discussed. Some arguments are given, according to which the results may depend on the method of regularization, introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)
The uniqueness of the regularization procedure
International Nuclear Information System (INIS)
Brzezowski, S.
1981-01-01
On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)
Lifshitz anomalies, Ward identities and split dimensional regularization
Energy Technology Data Exchange (ETDEWEB)
Arav, Igal; Oz, Yaron; Raviv-Moshe, Avia [Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University,55 Haim Levanon street, Tel-Aviv, 69978 (Israel)
2017-03-16
We analyze the structure of the stress-energy tensor correlation functions in Lifshitz field theories and construct the corresponding anomalous Ward identities. We develop a framework for calculating the anomaly coefficients that employs a split dimensional regularization and the pole residues. We demonstrate the procedure by calculating the free scalar Lifshitz scale anomalies in 2+1 spacetime dimensions. We find that the analysis of the regularization dependent trivial terms requires a curved spacetime description without a foliation structure. We discuss potential ambiguities in Lifshitz scale anomaly definitions.
Lifshitz anomalies, Ward identities and split dimensional regularization
International Nuclear Information System (INIS)
Arav, Igal; Oz, Yaron; Raviv-Moshe, Avia
2017-01-01
We analyze the structure of the stress-energy tensor correlation functions in Lifshitz field theories and construct the corresponding anomalous Ward identities. We develop a framework for calculating the anomaly coefficients that employs a split dimensional regularization and the pole residues. We demonstrate the procedure by calculating the free scalar Lifshitz scale anomalies in 2+1 spacetime dimensions. We find that the analysis of the regularization dependent trivial terms requires a curved spacetime description without a foliation structure. We discuss potential ambiguities in Lifshitz scale anomaly definitions.
Application of Turchin's method of statistical regularization
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition, this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.
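As a minimal stand-in for the full Bayesian machinery, the sketch below stabilizes the ill-posed deconvolution K φ = f with a Tikhonov term; Turchin's method additionally chooses the smoothing weight from a statistical prior rather than fixing α by hand. Names are illustrative:

```python
import numpy as np

def regularized_deconvolve(K, f, alpha):
    # Stabilized solution of the ill-posed K phi = f:
    # minimize ||K phi - f||^2 + alpha ||phi||^2,
    # i.e. phi = (K^T K + alpha I)^-1 K^T f.
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ f)
```

As alpha → 0 this recovers the (unstable) least-squares inverse; a positive alpha damps the noise-amplifying small singular values of the apparatus function.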
Regular extensions of some classes of grammars
Nijholt, Antinus
Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular
Adsorption-induced step formation
DEFF Research Database (Denmark)
Thostrup, P.; Christoffersen, Ebbe; Lorensen, Henrik Qvist
2001-01-01
Through an interplay between density functional calculations, Monte Carlo simulations and scanning tunneling microscopy experiments, we show that an intermediate coverage of CO on the Pt(110) surface gives rise to a new rough equilibrium structure with more than 50% step atoms. CO is shown to bind so strongly to low-coordinated Pt atoms that it can break Pt-Pt bonds and spontaneously form steps on the surface. It is argued that adsorption-induced step formation may be a general effect, in particular at high gas pressures and temperatures.
Internship guide : Work placements step by step
Haag, Esther
2013-01-01
Internship Guide: Work Placements Step by Step has been written from the practical perspective of a placement coordinator. This book addresses the following questions: what problems do students encounter when they start thinking about the jobs their degree programme prepares them for? How do you
The way to collisions, step by step
2009-01-01
While the LHC sectors cool down and reach the cryogenic operating temperature, spirits are warming up as we all eagerly await the first collisions. No reason to hurry, though. Making particles collide involves the complex manoeuvring of thousands of delicate components. The experts will make it happen using a step-by-step approach.
Iterative regularization in intensity-modulated radiation therapy optimization
International Nuclear Information System (INIS)
Carlsson, Fredrik; Forsgren, Anders
2006-01-01
A common way to solve intensity-modulated radiation therapy (IMRT) optimization problems is to use a beamlet-based approach. The approach is usually employed in a three-step manner: first a beamlet-weight optimization problem is solved, then the fluence profiles are converted into step-and-shoot segments, and finally postoptimization of the segment weights is performed. A drawback of beamlet-based approaches is that beamlet-weight optimization problems are ill-conditioned and have to be regularized in order to produce smooth fluence profiles that are suitable for conversion. The purpose of this paper is twofold: first, to explain the suitability of solving beamlet-based IMRT problems by a BFGS quasi-Newton sequential quadratic programming method with diagonal initial Hessian estimate, and second, to empirically show that beamlet-weight optimization problems should be solved in relatively few iterations when using this optimization method. The explanation of the suitability is based on viewing the optimization method as an iterative regularization method. In iterative regularization, the optimization problem is solved approximately by iterating long enough to obtain a solution close to the optimal one, but terminating before too much noise occurs. Iterative regularization requires an optimization method that initially proceeds in smooth directions and makes rapid initial progress. Solving ten beamlet-based IMRT problems with dose-volume objectives and bounds on the beamlet-weights, we find that the considered optimization method fulfills the requirements for performing iterative regularization. After segment-weight optimization, the treatments obtained using 35 beamlet-weight iterations outperform the treatments obtained using 100 beamlet-weight iterations, both in terms of objective value and of target uniformity. We conclude that iterating too long may in fact deteriorate the quality of the deliverable plan
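The effect described here can be reproduced with the simplest iterative method, Landweber iteration (plain gradient descent on the least-squares objective): stopping early keeps noise-dominated components small, while iterating long fits the noise. This is a hedged toy sketch of iterative regularization, not the BFGS-SQP method of the paper:

```python
import numpy as np

def landweber(A, b, iters, step=0.5):
    # Gradient descent on ||A x - b||^2 from x = 0.
    # Components tied to large singular values converge first;
    # noise-dominated small-singular-value components converge much later,
    # so early termination acts as a regularizer.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * (A.T @ (A @ x - b))
    return x
```

On an ill-conditioned diagonal system, a few dozen iterations recover the well-conditioned component while leaving the noisy one near zero; hundreds of thousands of iterations amplify the noise into the solution.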
Bounded Perturbation Regularization for Linear Least Squares Estimation
Ballal, Tarig
2017-10-18
This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2-regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
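The ℓ2-regularized least-squares problem that the BPR solution converges to has the familiar closed form sketched below. The regularizer value `lam`, the problem sizes, and the noise level are illustrative only; BPR itself selects the regularizer from the perturbation norm bound and an MSE criterion, which is not implemented here.

```python
import numpy as np

# Closed-form l2-regularized (ridge) least squares: the estimator family that
# the BPR solution converges to. All values here are illustrative.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))            # linear transformation matrix
x_true = rng.standard_normal(10)
b = A @ x_true + 0.1 * rng.standard_normal(30)

lam = 0.5                                    # hand-picked, not BPR-selected
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)
```

The regularization term shifts the small singular values of `A.T @ A`, which is exactly the "improved singular-value structure" the perturbation is intended to produce.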
Microsoft Office professional 2010 step by step
Cox, Joyce; Frye, Curtis
2011-01-01
Teach yourself exactly what you need to know about using Office Professional 2010, one step at a time! With STEP BY STEP, you build and practice new skills hands-on, at your own pace. Covering Microsoft Word, PowerPoint, Outlook, Excel, Access, Publisher, and OneNote, this book will help you learn the core features and capabilities needed to: create attractive documents, publications, and spreadsheets; manage your e-mail, calendar, meetings, and communications; put your business data to work; develop and deliver great presentations; organize your ideas and notes in one place; connect, share, and accom…
Consistent Partial Least Squares Path Modeling via Regularization
Directory of Open Access Journals (Sweden)
Sunho Jung
2018-02-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
Laplacian embedded regression for scalable manifold regularization.
Chen, Lin; Tsang, Ivor W; Xu, Dong
2012-06-01
Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real…
SparseBeads data: benchmarking sparsity-regularized computed tomography
DEFF Research Database (Denmark)
Jørgensen, Jakob Sauer; Coban, Sophia B.; Lionheart, William R. B.
2017-01-01
…sparsity-regularized (SR) reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels…
Boosting Maintenance in Working Memory with Temporal Regularities
Plancher, Gaën; Lévêque, Yohana; Fanuel, Lison; Piquandet, Gaëlle; Tillmann, Barbara
2018-01-01
Music cognition research has provided evidence for the benefit of temporally regular structures guiding attention over time. The present study investigated whether maintenance in working memory can benefit from an isochronous rhythm. Participants were asked to remember series of 6 letters for serial recall. In the rhythm condition of Experiment…
Plante, Ianik; Devroye, Luc
2017-10-01
Ionizing radiation interacts with the water molecules of the tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H2, H2O2, and e-aq. After their creation, these species diffuse and may chemically react with the neighboring species and with the molecules of the medium. Therefore radiation chemistry is of great importance in radiation biology. As the chemical species are not distributed homogeneously, the use of conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. Actually, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, which is a very fast technique to calculate radiochemical yields but which does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparingly because they are time-consuming in terms of calculation. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms of the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for the modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
Adaptive regularization of noisy linear inverse problems
DEFF Research Database (Denmark)
Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue
2006-01-01
In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
Higher derivative regularization and chiral anomaly
International Nuclear Information System (INIS)
Nagahama, Yoshinori.
1985-02-01
A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)
Regularity effect in prospective memory during aging
Directory of Open Access Journals (Sweden)
Geoffrey Blondelle
2016-10-01
Background: Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting, and binding), short-term memory, and retrospective episodic memory, to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling of regular activities only involved planning for both intermediate and older adults, while recalling of irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical…
Regularity effect in prospective memory during aging
Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique
2016-01-01
Background: Regularity effect can affect performance in prospective memory (PM), but little is known on the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults.Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...
Sparsity-regularized HMAX for visual recognition.
Directory of Open Access Journals (Sweden)
Xiaolin Hu
About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address the global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.
Regularization and error assignment to unfolded distributions
Zech, Gunter
2011-01-01
The commonly used approach of presenting unfolded data only in graphical form, with the diagonal error depending on the regularization strength, is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data, and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization, and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.
Iterative Regularization with Minimum-Residual Methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
Iterative regularization with minimum-residual methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2006-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
Optimal Tikhonov Regularization in Finite-Frequency Tomography
Fang, Y.; Yao, Z.; Zhou, Y.
2017-12-01
The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory, which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed, and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional tradeoff analysis using surface wave dispersion measurements from global as well as regional studies.
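In the SVD framework mentioned above, the Tikhonov solution damps each singular component by a filter factor s²/(s² + α²). A minimal sketch with an arbitrary matrix and an arbitrary regularization parameter (nothing here is the study's tomographic data):

```python
import numpy as np

# Tikhonov regularization via SVD filter factors: each singular component of
# the naive inverse is damped by f_i = s_i^2 / (s_i^2 + alpha^2).
rng = np.random.default_rng(3)
G = rng.standard_normal((20, 8))    # illustrative sensitivity matrix
d = rng.standard_normal(20)         # illustrative data vector
U, s, Vt = np.linalg.svd(G, full_matrices=False)

alpha = 1.0                         # illustrative regularization parameter
f = s**2 / (s**2 + alpha**2)        # Tikhonov filter factors
m = Vt.T @ (f * (U.T @ d) / s)      # regularized model estimate
```

The SVD form is algebraically identical to solving the normal equations (GᵀG + α²I)m = Gᵀd, and it exposes the filter factors needed to assemble the resolution matrix.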
Wavelet domain image restoration with adaptive edge-preserving regularization.
Belge, M; Kilmer, M E; Miller, E L
2000-01-01
In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
Further investigation on "A multiplicative regularization for force reconstruction"
Aucejo, M.; De Smet, O.
2018-05-01
We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.
Step by Step Microsoft Office Visio 2003
Lemke, Judy
2004-01-01
Experience learning made easy, and quickly teach yourself how to use Visio 2003, the Microsoft Office business and technical diagramming program. With STEP BY STEP, you can take just the lessons you need, or work from cover to cover. Either way, you drive the instruction, building and practicing the skills you need, just when you need them! Produce computer network diagrams, organization charts, floor plans, and more; use templates to create new diagrams and drawings quickly; add text, color, and 1-D and 2-D shapes; insert graphics and pictures, such as company logos; connect shapes to create a basic f…
High accuracy step gauge interferometer
Byman, V.; Jaakkola, T.; Palosuo, I.; Lassila, A.
2018-05-01
Step gauges are convenient transfer standards for the calibration of coordinate measuring machines. A novel interferometer for step gauge calibrations implemented at VTT MIKES is described. The four-pass interferometer follows Abbe’s principle and measures the position of the inductive probe attached to a measuring head. The measuring head of the instrument is connected to a balanced boom above the carriage by a piezo translation stage. A key part of the measuring head is an invar structure on which the inductive probe and the corner cubes of the measuring arm of the interferometer are attached. The invar structure can be elevated so that the probe is raised without breaking the laser beam. During probing, the bending of the probe and the interferometer readings are recorded and the measurement face position is extrapolated to zero force. The measurement process is fully automated and the face positions of the steps can be measured up to a length of 2 m. Ambient conditions are measured continuously and the refractive index of air is compensated for. Before measurements the step gauge is aligned with an integrated 2D coordinate measuring system. The expanded uncertainty of step gauge calibration is U = \sqrt{(64\,\mathrm{nm})^2 + (88 \times 10^{-9}\,L)^2}.
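The quoted expanded uncertainty is straightforward to evaluate numerically; for instance, at the instrument's 2 m maximum measured length:

```python
import math

# Expanded uncertainty of the step gauge calibration,
# U = sqrt((64 nm)^2 + (88e-9 * L)^2), evaluated at the 2 m maximum length.
L = 2.0                                         # metres
U = math.sqrt((64e-9) ** 2 + (88e-9 * L) ** 2)  # metres (about 187 nm)
```

The length-dependent term dominates for gauges longer than roughly 0.7 m, where 88 × 10⁻⁹ L exceeds the fixed 64 nm contribution.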
Manifold Based Low-rank Regularization for Image Restoration and Semi-supervised Learning
Lai, Rongjie; Li, Jia
2017-01-01
Low-rank structures play an important role in recent advances of many problems in image science and data science. As a natural extension of low-rank structures for data with nonlinear structures, the concept of the low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold based low-rank regularization as a linear approximation of manifold dimension. This regularization is less restricted than the global low-rank regu...
Free Modal Algebras Revisited: The Step-by-Step Method
Bezhanishvili, N.; Ghilardi, Silvio; Jibladze, Mamuka
2012-01-01
We review the step-by-step method of constructing finitely generated free modal algebras. First we discuss the global step-by-step method, which works well for rank one modal logics. Next we refine the global step-by-step method to obtain the local step-by-step method, which is applicable beyond
A regularized stationary mean-field game
Yang, Xianjin
2016-01-01
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
A regularized stationary mean-field game
Yang, Xianjin
2016-04-19
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
On infinite regular and chiral maps
Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán
2015-01-01
We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.
From recreational to regular drug use
DEFF Research Database (Denmark)
Järvinen, Margaretha; Ravn, Signe
2011-01-01
This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...
Automating InDesign with Regular Expressions
Kahrel, Peter
2006-01-01
If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
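The kind of regular-expression find-and-change the book describes can be sketched outside InDesign as well; here with Python's re module (InDesign's GREP syntax is similar but not identical, and this snippet is only an illustration of the idea).

```python
import re

# Wrap every run of digits in brackets: a change that plain search-and-replace
# cannot express, but a single regular expression substitution can.
text = "Figures 1, 2 and 10"
result = re.sub(r"\d+", lambda m: f"[{m.group(0)}]", text)
# result == "Figures [1], [2] and [10]"
```

The callable replacement lets each match be transformed individually, which is the regex equivalent of a scripted find/change loop.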
Regularization modeling for large-eddy simulation
Geurts, Bernardus J.; Holm, D.D.
2003-01-01
A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of
2010-07-01
... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...
Diabetes PSA (:30) Step By Step
Centers for Disease Control (CDC) Podcasts
2009-10-24
First steps to preventing diabetes. For Hispanic and Latino American audiences. Created: 10/24/2009 by National Diabetes Education Program (NDEP), a joint program of the Centers for Disease Control and Prevention and the National Institutes of Health. Date Released: 10/24/2009.
Diabetes PSA (:60) Step By Step
Centers for Disease Control (CDC) Podcasts
2009-10-24
First steps to preventing diabetes. For Hispanic and Latino American audiences. Created: 10/24/2009 by National Diabetes Education Program (NDEP), a joint program of the Centers for Disease Control and Prevention and the National Institutes of Health. Date Released: 10/24/2009.
Neighborhood Regularized Logistic Matrix Factorization for Drug-Target Interaction Prediction.
Liu, Yong; Wu, Min; Miao, Chunyan; Zhao, Peilin; Li, Xiao-Li
2016-02-01
In pharmaceutical sciences, a crucial step of the drug discovery process is the identification of drug-target interactions. However, only a small portion of the drug-target interactions have been experimentally validated, as the experimental validation is laborious and costly. To improve the drug discovery efficiency, there is a great need for the development of accurate computational approaches that can predict potential drug-target interactions to direct the experimental verification. In this paper, we propose a novel drug-target interaction prediction algorithm, namely neighborhood regularized logistic matrix factorization (NRLMF). Specifically, the proposed NRLMF method focuses on modeling the probability that a drug would interact with a target by logistic matrix factorization, where the properties of drugs and targets are represented by drug-specific and target-specific latent vectors, respectively. Moreover, NRLMF assigns higher importance levels to positive observations (i.e., the observed interacting drug-target pairs) than negative observations (i.e., the unknown pairs). Because the positive observations are already experimentally verified, they are usually more trustworthy. Furthermore, the local structure of the drug-target interaction data has also been exploited via neighborhood regularization to achieve better prediction accuracy. We conducted extensive experiments over four benchmark datasets, and NRLMF demonstrated its effectiveness compared with five state-of-the-art approaches.
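The logistic interaction model at the core of NRLMF can be sketched in a few lines: the probability that a drug interacts with a target is a logistic function of the inner product of drug-specific and target-specific latent vectors. Dimensions and values below are illustrative, and the neighborhood regularization and weighted likelihood of the full method are omitted.

```python
import numpy as np

# Sketch of logistic matrix factorization for interaction probabilities:
# p_ij = sigmoid(u_i . v_j) for drug i and target j. Toy dimensions only.
rng = np.random.default_rng(2)
n_drugs, n_targets, k = 4, 5, 3
U = rng.standard_normal((n_drugs, k))    # drug-specific latent vectors
V = rng.standard_normal((n_targets, k))  # target-specific latent vectors
P = 1.0 / (1.0 + np.exp(-(U @ V.T)))     # predicted interaction probabilities
```

In the full method, U and V are fit by maximizing a likelihood that upweights the experimentally verified positive pairs, with neighborhood regularization pulling similar drugs (and targets) toward similar latent vectors.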
Rapid decay of vacancy islands at step edges on Ag(111): step orientation dependence
International Nuclear Information System (INIS)
Shen, Mingmin; Thiel, P A; Jenks, Cynthia J; Evans, J W
2010-01-01
Previous work has established that vacancy islands or pits fill much more quickly when they are in contact with a step edge, such that the common boundary is a double step. The present work focuses on the effect of the orientation of that step, with two possibilities existing for a face-centered cubic (111) surface: A- and B-type steps. We find that the following features can depend on the orientation: (1) the shapes of islands while they shrink; (2) whether the island remains attached to the step edge; and (3) the rate of filling. The first two effects can be explained by the different rates of adatom diffusion along the A- and B-steps that define the pit, enhanced by the different filling rates. The third observation, the difference in the filling rate itself, is explained within the context of the concerted exchange mechanism at the double step. This process is facile at all regular sites along B-steps, but only at kink sites along A-steps, which explains the different rates. We also observe that oxygen can greatly accelerate the decay process, although it has no apparent effect on an isolated vacancy island (i.e. an island that is not in contact with a step).
Regularities and irregularities in order flow data
Theissen, Martin; Krause, Sebastian M.; Guhr, Thomas
2017-11-01
We identify and analyze statistical regularities and irregularities in the recent order flow of different NASDAQ stocks, focusing on the positions where orders are placed in the order book. This includes limit orders being placed outside of the spread, inside the spread and (effective) market orders. Based on the pairwise comparison of the order flow of different stocks, we perform a clustering of stocks into groups with similar behavior. This is useful to assess systemic aspects of stock price dynamics. We find that limit order placement inside the spread is strongly determined by the dynamics of the spread size. Most orders, however, arrive outside of the spread. While for some stocks order placement on or next to the quotes is dominating, deeper price levels are more important for other stocks. As market orders are usually adjusted to the quote volume, the impact of market orders depends on the order book structure, which we find to be quite diverse among the analyzed stocks as a result of the way limit order placement takes place.
Manifold regularization for sparse unmixing of hyperspectral images.
Liu, Junmin; Zhang, Chunxia; Zhang, Jiangshe; Li, Huirong; Gao, Yuelin
2016-01-01
Recently, sparse unmixing has been successfully applied to spectral mixture analysis of remotely sensed hyperspectral images. Based on the assumption that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance, unmixing of each mixed pixel in the scene is to find an optimal subset of signatures in a very large spectral library, which is cast into the framework of sparse regression. However, traditional sparse regression models, such as collaborative sparse regression, ignore the intrinsic geometric structure in the hyperspectral data. In this paper, we propose a novel model, called manifold regularized collaborative sparse regression, by introducing a manifold regularization to the collaborative sparse regression model. The manifold regularization utilizes a graph Laplacian to incorporate the locally geometrical structure of the hyperspectral data. An algorithm based on the alternating direction method of multipliers has been developed for the manifold regularized collaborative sparse regression model. Experimental results on both the simulated and real hyperspectral data sets have demonstrated the effectiveness of our proposed model.
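The graph-Laplacian regularizer used in such manifold-regularized models penalizes solutions that vary across neighboring data points; the quadratic form xᵀLx is zero for a graph-constant vector and grows with roughness. A minimal sketch on a toy three-node chain graph (not hyperspectral data):

```python
import numpy as np

# Graph Laplacian regularizer: r(x) = x^T L x with L = D - W.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])          # adjacency of a 3-node chain graph
L = np.diag(W.sum(axis=1)) - W        # graph Laplacian
x_smooth = np.array([1.0, 1.0, 1.0])  # constant on the graph
x_rough = np.array([1.0, -1.0, 1.0])  # oscillates between neighbors
r_smooth = x_smooth @ L @ x_smooth    # 0: no penalty
r_rough = x_rough @ L @ x_rough      # large: heavily penalized
```

Adding such a term to a collaborative sparse regression objective biases the recovered abundances toward agreeing across locally similar pixels, which is the role the manifold regularization plays in the model above.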
Energy Technology Data Exchange (ETDEWEB)
Moon, Byeong-Seok; Kim, Sungwon; Kim, Hyoun-Ee; Jang, Tae-Sik, E-mail: cgamja@snu.ac.kr
2017-04-01
Hierarchical micro-nano (HMN) surface structuring of dental implants is a promising strategy for achieving fast and mechanically stable fixation, owing to the synergistic effect of micro- and nano-scale surface roughness with surrounding tissues. However, introducing a well-defined nanostructure onto a microstructure with complex surface geometry is still challenging. To fabricate an HMN surface on Ti6Al4V-ELI, target-ion induced plasma sputtering (TIPS) was applied to a sand-blasted, large-grit, acid-etched substrate. The HMN surface topography was controlled simply by adjusting the tantalum (Ta) target power of the TIPS technique, which is directly related to the Ta ion flux and the surface chemical composition of the substrate. Characterization using scanning electron microscopy (SEM), transmission electron microscopy (TEM), and laser scanning microscopy (LSM) verified that well-defined nano-patterned surface structures with a depth of ~300 to 400 nm and a width of ~60 to 70 nm were uniformly distributed and followed the complex micron-sized surface geometry. In vitro cellular responses of pre-osteoblast cells (MC3T3-E1) were assessed by attachment and proliferation of cells on flat, nano-roughened, micro-roughened, and HMN surface structures of Ti6Al4V-ELI. Moreover, an in vivo dog mandible defect model was used to investigate the biological effect of the HMN surface structure compared with the micro-roughened surface. The results showed that the surface nanostructure significantly increased cellular activities compared with flat and micro-roughened Ti, and that the bone-to-implant contact area and new bone volume were significantly improved on the HMN surface-structured Ti. These results support the idea that an HMN surface structure on Ti6Al4V-ELI alloy has great potential for enhancing the biological performance of dental implants. - Highlights: • A micro-nano-hierarchical (MNH) surface structure on Ti6Al4V-ELI was fabricated via TIPS
An iterative method for Tikhonov regularization with a general linear regularization operator
Hochstenbach, M.E.; Reichel, L.
2010-01-01
Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
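The Tikhonov problem with a general regularization operator can be sketched in a few lines. This is a minimal dense-matrix illustration via the normal equations, not the iterative Golub-Kahan-based method the paper proposes; the toy problem and names are assumptions.

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Solve min_x ||A x - b||^2 + lam * ||L x||^2 via the normal equations.

    A direct dense solve; the paper's point is an *iterative* method for
    large problems, for which this small sketch merely stands in.
    """
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

# Diagonal toy problem where the solution has a closed form: with L = I,
# x_i = a_i * b_i / (a_i^2 + lam).
A = np.diag([1.0, 1e-4])          # ill-conditioned: tiny second singular value
b = np.array([1.0, 1.0])
x = tikhonov(A, b, np.eye(2), lam=0.1)
# Without regularization the second component would blow up to 1e4;
# here it is damped to roughly 1e-3.
```

Choosing `L` as, e.g., a finite-difference operator instead of the identity penalizes roughness of `x` rather than its magnitude, which is what "a general linear regularization operator" in the title refers to.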
Kahlen, Franz-Josef; Sankaranarayanan, Srikanth; Kar, Aravinda
1997-09-01
The subject of this investigation is a one-step rapid machining process for creating miniaturized 3D parts from the original sample material. An experimental setup was built in which metal powder is fed into the laser beam-material interaction region. The powder is melted and forms planar 2D geometries as the substrate is moved under the laser beam in the XY-direction. After the geometry in the plane is completed, the substrate is displaced in the Z-direction and a new layer of material is placed on top of the just-completed deposit. By continuous repetition of this process, 3D parts were created. In particular, the impact of the focal spot size of the high-power laser beam on the smallest achievable structures was investigated. At a translation speed of 51 mm/s, a minimum material thickness of 590 micrometers was achieved. It was also shown that a small Z-displacement has a negligible influence on the continuity of the material deposition over the investigated power range. A high-power CO2 laser was used as the energy source, and the material powder under investigation was stainless steel SS304L. Helium was used as shield gas at a flow rate of 15 l/min. The incident CO2 laser beam power was varied between 300 W and 400 W, with the laser beam intensity distributed in a donut mode. The laser beam was focused to a focal diameter of 600 μm.
Microsoft Office Word 2007 step by step
Cox, Joyce
2007-01-01
Experience learning made easy, and quickly teach yourself how to create impressive documents with Word 2007. With Step By Step, you set the pace, building and practicing the skills you need just when you need them! Apply styles and themes to your document for a polished look; add graphics and text effects and see a live preview; organize information with new SmartArt diagrams and charts; insert references, footnotes, indexes, and a table of contents; send documents for review and manage revisions; turn your ideas into blogs, Web pages, and more. Your all-in-one learning experience includes: files for building sk
Hierarchical regular small-world networks
International Nuclear Information System (INIS)
Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan
2008-01-01
Two new networks are introduced that exhibit small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. The two networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂ N²), exhibits 'ballistic' diffusion (d_w = 1), and has a non-trivial ferromagnetic transition, T_c > 0. This suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)
Coupling regularizes individual units in noisy populations
International Nuclear Information System (INIS)
Ly Cheng; Ermentrout, G. Bard
2010-01-01
The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual process, even when it is coupled to a noisier one. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula, assuming weak noise and coupling, for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher-dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can thus have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
Elementary Particle Spectroscopy in Regular Solid Rewrite
International Nuclear Information System (INIS)
Trell, Erik
2008-01-01
The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it ''is the likely keystone of a fundamental computational foundation'' also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)xO(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each
Seghouane, Abd-Krim; Iqbal, Asif
2017-09-01
Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with the notions of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldomly been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis by accounting for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications on synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
International Nuclear Information System (INIS)
Cao, A.
1981-07-01
This study is concerned with transverse axial gamma-emission tomography. The problem of self-attenuation of radiation in biological tissues is raised. The regularizing iterative method is developed as a method for reconstructing 3-dimensional images. The different steps, from acquisition to results, necessary for its application are described, and flowcharts for each step are explained. The notion of comparison between two reconstruction methods is introduced, and some methods used for the comparison, or to bring out the characteristics of a reconstruction technique, are defined. The studies carried out to test the regularizing iterative method are presented and the results are analyzed. (Original in French)
Enhanced manifold regularization for semi-supervised classification.
Gan, Haitao; Luo, Zhizeng; Fan, Yingle; Sang, Nong
2016-06-01
Manifold regularization (MR) has become one of the most widely used approaches in the semi-supervised learning field. It has shown superiority by exploiting the local manifold structure of both labeled and unlabeled data. The manifold structure is modeled by constructing a Laplacian graph and then incorporated in learning through a smoothness regularization term. Hence the labels of labeled and unlabeled data vary smoothly along the geodesics on the manifold. However, MR has ignored the discriminative ability of the labeled and unlabeled data. To address the problem, we propose an enhanced MR framework for semi-supervised classification in which the local discriminative information of the labeled and unlabeled data is explicitly exploited. To make full use of labeled data, we firstly employ a semi-supervised clustering method to discover the underlying data space structure of the whole dataset. Then we construct a local discrimination graph to model the discriminative information of labeled and unlabeled data according to the discovered intrinsic structure. Therefore, the data points that may be from different clusters, though similar on the manifold, are enforced far away from each other. Finally, the discrimination graph is incorporated into the MR framework. In particular, we utilize semi-supervised fuzzy c-means and Laplacian regularized Kernel minimum squared error for semi-supervised clustering and classification, respectively. Experimental results on several benchmark datasets and face recognition demonstrate the effectiveness of our proposed method.
Yao, Zhibo; Wang, Wenli; Shen, Heping; Zhang, Ye; Luo, Qiang; Yin, Xuewen; Dai, Xuezeng; Li, Jianbao; Lin, Hong
2017-12-01
Although the two-step deposition (TSD) method is widely adopted for high-performance perovskite solar cells (PSCs), the CH3NH3PbI3 perovskite crystal growth mechanism during the TSD process and the photo-generated charge recombination dynamics in the mesoporous-TiO2 (mp-TiO2)/CH3NH3PbI3/hole transporting material (HTM) system remain unexplored. Herein, we modified the concentration of the PbI2 solution (C(PbI2)) to control the perovskite crystal properties, and observed an abnormal CH3NH3PbI3 grain growth phenomenon atop the mesoporous TiO2 film. To explain this abnormal grain growth mechanism, we propose that a grain ripening process takes place during the transformation from PbI2 to CH3NH3PbI3, and discuss the PbI2 nuclei morphology, the perovskite grain growing stage, as well as the Pb:I atomic ratio differences among CH3NH3PbI3 grains with different morphologies. These C(PbI2)-dependent perovskite morphologies resulted in varied charge carrier transfer properties throughout the mp-TiO2/CH3NH3PbI3/HTM hybrid, as illustrated by photoluminescence measurements. Furthermore, the effect of CH3NH3PbI3 morphology on light absorption and interfacial properties is investigated and correlated with the photovoltaic performance of PSCs.
Guo, Chaozhong; Li, Zhongbin; Niu, Lidan; Liao, Wenli; Sun, Lingtao; Wen, Bixia; Nie, Yunqing; Cheng, Jing; Chen, Changguo
2016-05-01
So far, the development of highly active and stable carbon-based electrocatalysts for oxygen reduction reaction (ORR) to replace commercial Pt/C catalyst is a hot topic. In this study, a new nanoporous nitrogen-doped carbon material was facilely designed by two-step pyrolysis of the renewable Lemna minor enriched in crude protein under a nitrogen atmosphere. Electrochemical measurements show that the onset potential for ORR on this carbon material is around 0.93 V (versus reversible hydrogen electrode), slightly lower than that on the Pt/C catalyst, but its cycling stability is higher compared to the Pt/C catalyst in an alkaline medium. Besides, the ORR at this catalyst approaches to a four-electron transfer pathway. The obtained ORR performance can be basically attributed to the formation of high contents of pyridinic and graphitic nitrogen atoms inside this catalyst. Thus, this work opens up the path in the ORR catalysis for the design of nitrogen-doped carbon materials utilizing aquatic plants as starting precursors.
Diagrammatic methods in phase-space regularization
International Nuclear Information System (INIS)
Bern, Z.; Halpern, M.B.; California Univ., Berkeley
1987-11-01
Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)
J-regular rings with injectivities
Shen, Liang
2010-01-01
A ring $R$ is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.
International Nuclear Information System (INIS)
Shimoni, Nira; Ayal, Shai; Millo, Oded
2000-01-01
Dynamics of atomic steps and the terrace-width distribution within step bunches on flame-annealed gold films are studied using scanning tunneling microscopy. The distribution is narrower than commonly observed for vicinal planes and has a Gaussian shape, indicating a short-range repulsive interaction between the steps, with an apparently large interaction constant. The dynamics of the atomic steps, on the other hand, appear to be influenced, in addition to these short-range interactions, also by a longer-range attraction of steps towards step bunches. Both types of interactions promote self-ordering of terrace structures on the surface. When current is driven through the films a step-fingering instability sets in, reminiscent of the Bales-Zangwill instability
Generalized regular genus for manifolds with boundary
Directory of Open Access Journals (Sweden)
Paola Cristofori
2003-05-01
We introduce a generalization of the regular genus, a combinatorial invariant of PL manifolds ([10]), which is proved to be strictly related, in dimension three, to generalized Heegaard splittings defined in [12].
Geometric regularizations and dual conifold transitions
International Nuclear Information System (INIS)
Landsteiner, Karl; Lazaroiu, Calin I.
2003-01-01
We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)
Reali, Florencia; Griffiths, Thomas L.
2009-01-01
The regularization of linguistic structures by learners has played a key role in arguments for strong innate constraints on language acquisition, and has important implications for language evolution. However, relating the inductive biases of learners to regularization behavior in laboratory tasks can be challenging without a formal model. In this…
Regularization Techniques for Linear Least-Squares Problems
Suliman, Mohamed
2016-04-01
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed, allowing, in one way or another, the incorporation of further prior information into the problem. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix, so that the modified model is expected to provide a more stable solution when used to estimate the original signal by minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
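The role of the regularization parameter in RLS can be sketched with a small oracle experiment: scan candidate values of λ and observe that some intermediate value minimizes the MSE against the true signal. This is an illustration of the MSE criterion only, not the COPRA algorithm (whose point is to approximate such a choice *without* access to the true signal); all names and the toy data are assumptions.

```python
import numpy as np

def rls(A, b, lam):
    """Regularized least squares: argmin_x ||A x - b||^2 + lam * ||x||^2."""
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ b)

# Toy linear model with additive noise.
rng = np.random.default_rng(1)
n, p = 50, 30
A = rng.standard_normal((n, p))
x_true = rng.standard_normal(p)
b = A @ x_true + 0.5 * rng.standard_normal(n)

# Oracle scan over the regularization parameter: pick the lambda that
# minimizes the MSE against the known signal (used here only to illustrate
# what an MSE-oriented selection rule is aiming for).
lams = np.logspace(-3, 2, 30)
mses = [np.mean((rls(A, b, lam) - x_true) ** 2) for lam in lams]
best_lam = lams[int(np.argmin(mses))]
```

In practice `x_true` is unknown, which is exactly why data-driven selection rules such as the ones proposed in this thesis are needed.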
Fast and compact regular expression matching
DEFF Research Database (Denmark)
Bille, Philip; Farach-Colton, Martin
2008-01-01
We study four problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem, using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.
Regular-fat dairy and human health
DEFF Research Database (Denmark)
Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas
2016-01-01
In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular-fat dairy products and human health. In an effort to …, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted.
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of "overriding" the source NFA (an NFA not defined with subset construction rules) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and the complement) is shown.
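The classical way to realize the AND-operator on DFAs is the product construction. This sketch shows that standard construction, not the paper's NFA-overriding method; the DFA encoding and example automata are assumptions.

```python
from itertools import product

def intersect_dfas(d1, d2):
    """Product construction: a DFA accepting L(d1) ∩ L(d2).

    A DFA here is (states, alphabet, delta, start, accepting), with
    delta a dict mapping (state, symbol) -> state.
    """
    s1, alpha, t1, q1, f1 = d1
    s2, _, t2, q2, f2 = d2
    states = set(product(s1, s2))
    # Run both automata in lockstep: the product state tracks both.
    delta = {((a, b), c): (t1[(a, c)], t2[(b, c)])
             for (a, b) in states for c in alpha}
    accepting = {(a, b) for (a, b) in states if a in f1 and b in f2}
    return states, alpha, delta, (q1, q2), accepting

def accepts(dfa, word):
    _, _, delta, q, f = dfa
    for c in word:
        q = delta[(q, c)]
    return q in f

# Example: "even number of a's" intersected with "ends with b".
even_a = ({0, 1}, {"a", "b"},
          {(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1}, 0, {0})
ends_b = ({0, 1}, {"a", "b"},
          {(0, "a"): 0, (1, "a"): 0, (0, "b"): 1, (1, "b"): 1}, 0, {1})
both = intersect_dfas(even_a, ends_b)
assert accepts(both, "aab")      # even a's, ends in b
assert not accepts(both, "ab")   # odd a's
```

Subtraction and complement follow the same pattern: complement a DFA by swapping accepting and non-accepting states, and build MINUS as intersection with a complement.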
Regularities of intermediate adsorption complex relaxation
International Nuclear Information System (INIS)
Manukova, L.A.
1982-01-01
The experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N2 system at 77 K are given. The molecular beam method has been used in the investigation. Analytical expressions are obtained for the regularities of change, during the relaxation process, of the full and specific rates of transition from the intermediate state into the 'non-reversible' state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.
Online Manifold Regularization by Dual Ascending Procedure
Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui
2013-01-01
We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is the key to transferring manifold regularization from the offline to the online setting in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches.
Regularized Partial Least Squares with an Application to NMR Spectroscopy
Allen, Genevera I.; Peterson, Christine; Vannucci, Marina; Maletic-Savatic, Mirjana
2012-01-01
High-dimensional data common in genomics, proteomics, and chemometrics often contain complicated correlation structures. Recently, partial least squares (PLS) and sparse PLS methods have gained attention in these areas as dimension reduction techniques in the context of supervised data analysis. We introduce a framework for regularized PLS by solving a relaxation of the SIMPLS optimization problem with penalties on the PLS loadings vectors. Our approach enjoys many advantages including flexibility…
Reduction of Nambu-Poisson Manifolds by Regular Distributions
Das, Apurba
2018-03-01
The version of Marsden-Ratiu reduction theorem for Nambu-Poisson manifolds by a regular distribution has been studied by Ibáñez et al. In this paper we show that the reduction is always ensured unless the distribution is zero. Next we extend the more general Falceto-Zambon Poisson reduction theorem for Nambu-Poisson manifolds. Finally, we define gauge transformations of Nambu-Poisson structures and show that these transformations commute with the reduction procedure.
Steps and dislocations in cubic lyotropic crystals
International Nuclear Information System (INIS)
Leroy, S; Pieranski, P
2006-01-01
It has been shown recently that lyotropic systems are convenient for studies of faceting, growth or anisotropic surface melting of crystals. All these phenomena imply the active contribution of surface steps and bulk dislocations. We show here that steps can be observed in situ and in real time by means of a new method combining hygroscopy with phase contrast. First results raise interesting issues about the consequences of bicontinuous topology on the structure and dynamical behaviour of steps and dislocations
The persistence of the attentional bias to regularities in a changing environment.
Yu, Ru Qi; Zhao, Jiaying
2015-10-01
The environment often is stable, but some aspects may change over time. The challenge for the visual system is to discover and flexibly adapt to the changes. We examined how attention is shifted in the presence of changes in the underlying structure of the environment. In six experiments, observers viewed four simultaneous streams of objects while performing a visual search task. In the first half of each experiment, the stream in the structured location contained regularities, the shapes in the random location were randomized, and gray squares appeared in two neutral locations. In the second half, the stream in the structured or the random location may change. In the first half of all experiments, visual search was facilitated in the structured location, suggesting that attention was consistently biased toward regularities. In the second half, this bias persisted in the structured location when no change occurred (Experiment 1), when the regularities were removed (Experiment 2), or when new regularities embedded in the original or novel stimuli emerged in the previously random location (Experiments 3 and 6). However, visual search was numerically but no longer reliably faster in the structured location when the initial regularities were removed and new regularities were introduced in the previously random location (Experiment 4), or when novel random stimuli appeared in the random location (Experiment 5). This suggests that the attentional bias was weakened. Overall, the results demonstrate that the attentional bias to regularities was persistent but also sensitive to changes in the environment.
Directory of Open Access Journals (Sweden)
Philipp Kainz
2017-10-01
Full Text Available Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNNs) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the other approaches developed simultaneously for the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm, Soft-Impute, iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
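The core Soft-Impute loop described above (fill the missing entries with the current estimate, soft-threshold the singular values, repeat) can be sketched in a few lines of NumPy. This is an illustrative reimplementation under our own naming, not the authors' code; `lam` plays the role of the nuclear-norm regularization parameter:

```python
import numpy as np

def soft_impute(X, observed, lam, n_iters=100):
    """Soft-Impute sketch: complete the matrix with the current estimate,
    soft-threshold the singular values of the completed matrix, repeat."""
    Z = np.zeros_like(X)
    for _ in range(n_iters):
        filled = np.where(observed, X, Z)          # observed data + current guess
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt    # soft-threshold the spectrum
    return Z

# toy example: hide ~30% of a rank-1 matrix and recover it
rng = np.random.default_rng(0)
A = np.outer(rng.standard_normal(20), rng.standard_normal(15))
mask = rng.random(A.shape) > 0.3                   # True where entries are observed
A_hat = soft_impute(A, mask, lam=0.1)
err = np.linalg.norm((A_hat - A)[~mask]) / np.linalg.norm(A[~mask])
```

In the full algorithm the dense SVD is replaced by a low-rank SVD with warm starts, which is what makes the method scale to Netflix-sized problems.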
Effects of Irregular Bridge Columns and Feasibility of Seismic Regularity
Thomas, Abey E.
2018-05-01
Bridges with unequal column heights exhibit one of the main irregularities in bridge design, particularly when negotiating steep valleys, and this irregularity makes them vulnerable to seismic action. The desirable behaviour of bridge columns under seismic loading is that they perform in a regular fashion, i.e. the capacity of each column is utilized evenly. This type of behaviour is often missing when the column heights are unequal along the length of the bridge, forcing the short columns to bear the maximum lateral load. In the present study, the effects of unequal column height on the global seismic performance of bridges are studied using pushover analysis. Codes such as CalTrans (Engineering service center, earthquake engineering branch, 2013) and EC-8 (EN 1998-2: design of structures for earthquake resistance. Part 2: bridges, European Committee for Standardization, Brussels, 2005) suggest seismic regularity criteria for achieving a regular seismic performance level at all the bridge columns. The feasibility of adopting these seismic regularity criteria, along with those reported in the literature, is assessed in the present study for bridges designed as per the Indian Standards.
Dynamics of coherent states in regular and chaotic regimes of the non-integrable Dicke model
Lerma-Hernández, S.; Chávez-Carlos, J.; Bastarrachea-Magnani, M. A.; López-del-Carpio, B.; Hirsch, J. G.
2018-04-01
The quantum dynamics of initial coherent states is studied in the Dicke model and correlated with the dynamics, regular or chaotic, of their classical limit. Analytical expressions for the survival probability, i.e. the probability of finding the system in its initial state at time t, are provided in the regular regions of the model. The results for regular regimes are compared with those of the chaotic ones. It is found that initial coherent states in regular regions have a much longer equilibration time than those located in chaotic regions. The properties of the distributions of the initial coherent states in the Hamiltonian eigenbasis are also studied. It is found that for regular states the components with non-negligible contributions are organized in sequences of energy levels distributed according to Gaussian functions. In the case of chaotic coherent states, the energy components do not have a simple structure and the number of participating energy levels is larger than in the regular cases.
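For any finite Hamiltonian, the survival probability described above reduces to SP(t) = |Σ_k |c_k|² e^{-iE_k t}|², where c_k are the components of the initial state in the eigenbasis. A minimal sketch (a generic toy Hamiltonian, not the Dicke-model computation itself; all names are ours):

```python
import numpy as np

def survival_probability(H, psi0, ts):
    """SP(t) = |<psi0| exp(-i H t) |psi0>|^2, evaluated in the eigenbasis of H."""
    w, V = np.linalg.eigh(H)                 # eigenenergies and eigenvectors
    c2 = np.abs(V.conj().T @ psi0) ** 2      # |c_k|^2: weights of the initial state
    amp = np.exp(-1j * np.outer(ts, w)) @ c2 # sum_k |c_k|^2 exp(-i E_k t)
    return np.abs(amp) ** 2

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
H = (M + M.T) / 2                            # random symmetric "Hamiltonian"
psi0 = np.zeros(50); psi0[0] = 1.0           # a normalized initial state
sp = survival_probability(H, psi0, np.linspace(0.0, 5.0, 100))
```

The shape of the decay of `sp` is precisely what distinguishes the Gaussian-structured regular states from the chaotic ones in the paper.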
Enhancing Low-Rank Subspace Clustering by Manifold Regularization.
Liu, Junmin; Chen, Yijun; Zhang, JiangShe; Xu, Zongben
2014-07-25
Recently, the low-rank representation (LRR) method has achieved great success in subspace clustering (SC), which aims to cluster data points that lie in a union of low-dimensional subspaces. Given a set of data points, LRR seeks the lowest-rank representation among the many possible linear combinations of the bases in a given dictionary or in terms of the data itself. However, LRR only considers the global Euclidean structure, while the local manifold structure, which is often important for many real applications, is ignored. In this paper, to exploit the local manifold structure of the data, a manifold regularization characterized by a Laplacian graph has been incorporated into LRR, leading to our proposed Laplacian-regularized LRR (LapLRR). An efficient optimization procedure, based on the alternating direction method of multipliers (ADMM), is developed for LapLRR. Experimental results on synthetic and real data sets are presented to demonstrate that the performance of LRR is enhanced by using the manifold regularization.
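The manifold regularizer added to LRR is the graph-Laplacian penalty tr(Z L Zᵀ), which is small when points that are neighbours on the graph receive similar representation codes. A minimal sketch of building L from a k-NN graph (helper names are ours, and this is not the paper's ADMM solver):

```python
import numpy as np

def knn_laplacian(X, k=3):
    """Unnormalized Laplacian L = D - W of a symmetrized k-NN graph over rows of X."""
    n = X.shape[0]
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(D2[i])[1:k + 1]] = 1.0            # skip self at position 0
    W = np.maximum(W, W.T)                                # symmetrize
    return np.diag(W.sum(1)) - W

def manifold_penalty(Z, L):
    """tr(Z L Z^T): penalizes codes that differ across graph edges."""
    return np.trace(Z @ L @ Z.T)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(5, 1, (5, 2))])  # two clusters
L = knn_laplacian(X, k=2)
pen = manifold_penalty(rng.standard_normal((4, 10)), L)
```

In LapLRR this penalty is added to the nuclear-norm objective and the joint problem is solved by ADMM.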
TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY
International Nuclear Information System (INIS)
Crotts, Arlin P. S.
2009-01-01
Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables which should produce differing results if determining factors involve humans, and not reflecting phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ∼50% of reports originate from near Aristarchus, ∼16% from Plato, ∼6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). TLP count consistency for these features indicates that ∼80% of these may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and inexperienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites and those of lunar outgassing correlate strongly, further arguing for the reality of TLPs.
Energy Technology Data Exchange (ETDEWEB)
Mory, Cyril, E-mail: cyril.mory@philips.com [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Auvray, Vincent; Zhang, Bo [Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Grass, Michael; Schäfer, Dirk [Philips Research, Röntgenstrasse 24–26, D-22335 Hamburg (Germany); Chen, S. James; Carroll, John D. [Department of Medicine, Division of Cardiology, University of Colorado Denver, 12605 East 16th Avenue, Aurora, Colorado 80045 (United States); Rit, Simon [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Centre Léon Bérard, 28 rue Laënnec, F-69373 Lyon (France); Peyrin, Françoise [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); X-ray Imaging Group, European Synchrotron, Radiation Facility, BP 220, F-38043 Grenoble Cedex (France); Douek, Philippe; Boussel, Loïc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Hospices Civils de Lyon, 28 Avenue du Doyen Jean Lépine, 69500 Bron (France)
2014-02-15
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.
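The alternation in the Methods (data-fit step, positivity, spatial and temporal TV) can be sketched on a toy signal. The smoothed-gradient TV denoiser and the placeholder data-fit step below are our illustrative stand-ins, not the paper's conjugate-gradient implementation:

```python
import numpy as np

def tv_denoise_1d(x, lam, n_iters=200, step=0.1, eps=1e-6):
    """Gradient descent on 0.5*||u - x||^2 + lam * sum_i sqrt((u_{i+1}-u_i)^2 + eps),
    a smoothed stand-in for 1D total variation minimization."""
    u = x.copy()
    for _ in range(n_iters):
        d = np.diff(u)
        g = d / np.sqrt(d * d + eps)       # gradient of the smoothed absolute value
        grad = u - x
        grad[:-1] -= lam * g
        grad[1:] += lam * g
        u -= step * grad
    return u

def rooster_like_iteration(vol, data_grad, lam_t=0.3):
    """One outer iteration: data-fit gradient step, positivity, temporal TV."""
    vol = np.maximum(vol - 0.1 * data_grad(vol), 0.0)           # stand-in for the CG step
    return np.apply_along_axis(tv_denoise_1d, -1, vol, lam_t)   # 1D TV along time

# a noisy piecewise-constant time curve is flattened by the temporal TV step
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(30), np.ones(30)]) + 0.2 * rng.standard_normal(60)
u = tv_denoise_1d(x, lam=0.3)
```

The key design point carried over from the paper is that the regularization steps are decoupled from projection and back projection, so each step can be swapped independently.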
Improvements in GRACE Gravity Fields Using Regularization
Save, H.; Bettadpur, S.; Tapley, B. D.
2008-12-01
The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or none of the systematic observation residuals that are a frequent consequence of signal suppression under regularization. Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or
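The stabilizing effect of regularization on an ill-conditioned inversion like this can be illustrated with a toy Tikhonov (ridge) least-squares problem. This is not the GRACE processing chain, and the decaying singular spectrum below is invented for illustration:

```python
import numpy as np

def regularized_lsq(A, y, lam):
    """Tikhonov-regularized least squares: argmin_x ||A x - y||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# toy ill-conditioned inversion: tiny singular values amplify the data noise
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((100, 20)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
s = np.logspace(0, -6, 20)                      # rapidly decaying spectrum
A = U @ np.diag(s) @ V.T
x_true = rng.standard_normal(20)
y = A @ x_true + 1e-4 * rng.standard_normal(100)
x_plain = regularized_lsq(A, y, lam=0.0)        # unregularized: noise blows up
x_reg = regularized_lsq(A, y, lam=1e-6)         # small singular values damped
```

Picking `lam` is exactly the role played by the L-ribbon (or the classical L-curve) in the gravity-field context.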
Quantum transport with long-range steps on Watts-Strogatz networks
Wang, Yan; Xu, Xin-Jian
2016-07-01
We study transport dynamics of quantum systems with long-range steps on the Watts-Strogatz network (WSN) which is generated by rewiring links of the regular ring. First, we probe physical systems modeled by the discrete nonlinear Schrödinger (DNLS) equation. Using the localized initial condition, we compute the time-averaged occupation probability of the initial site, which is related to the nonlinearity, the long-range steps and rewiring links. Self-trapping transitions occur at large (small) nonlinear parameters for coupling ɛ=-1 (1), as long-range interactions are intensified. The structure disorder induced by random rewiring, however, has dual effects for ɛ=-1 and inhibits the self-trapping behavior for ɛ=1. Second, we investigate continuous-time quantum walks (CTQW) on the regular ring ruled by the discrete linear Schrödinger (DLS) equation. It is found that only the presence of the long-range steps does not affect the efficiency of the coherent exciton transport, while only the allowance of random rewiring enhances the partial localization. If both factors are considered simultaneously, localization is greatly strengthened, and the transport becomes worse.
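A CTQW on the regular ring (the baseline before rewiring or long-range steps are added) can be sketched with the graph Laplacian as the Hamiltonian; WSN rewiring and long-range couplings would simply modify the adjacency matrix. Illustrative code, not the authors' implementation:

```python
import numpy as np

def ctqw_probabilities(A, t, start=0):
    """CTQW site probabilities |<j| exp(-i L t) |start>|^2 with Laplacian L = D - A."""
    L = np.diag(A.sum(1)) - A
    w, V = np.linalg.eigh(L)
    psi0 = np.zeros(A.shape[0], dtype=complex)
    psi0[start] = 1.0
    psi_t = V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))
    return np.abs(psi_t) ** 2

def ring_adjacency(n):
    """Adjacency matrix of the n-site regular ring."""
    A = np.zeros((n, n))
    idx = np.arange(n)
    A[idx, (idx + 1) % n] = A[idx, (idx - 1) % n] = 1.0
    return A

p = ctqw_probabilities(ring_adjacency(20), t=3.0)
```

Averaging `p[start]` over many times t gives the time-averaged occupation probability used in the paper as the localization diagnostic.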
Focal cryotherapy: step by step technique description
Directory of Open Access Journals (Sweden)
Cristina Redondo
Full Text Available ABSTRACT Introduction and objective: Focal cryotherapy emerged as an efficient option to treat favorable and localized prostate cancer (PCa). The purpose of this video is to describe the procedure step by step. Materials and methods: We present the case of a 68 year-old man with localized PCa in the anterior aspect of the prostate. Results: The procedure is performed under general anesthesia, with the patient in lithotomy position. Briefly, the equipment utilized includes the cryotherapy console coupled with an ultrasound system, argon and helium gas bottles, cryoprobes, temperature probes and an urethral warming catheter. The procedure starts with a real-time trans-rectal prostate ultrasound, which is used to outline the prostate, the urethra and the rectal wall. The cryoprobes are pretested and placed into the prostate through the perineum, following a grid template, along with the temperature sensors under ultrasound guidance. A cystoscopy confirms the correct positioning of the needles, and the urethral warming catheter is installed. Thereafter, the freeze sequence with argon gas is started, achieving extremely low temperatures (-40°C) to induce tumor cell lysis. Sequentially, the thawing cycle is performed using helium gas. This process is repeated once. Results among several series showed a biochemical disease-free survival between 71-93% at 9-70 months of follow-up, incontinence rates between 0-3.6% and erectile dysfunction between 0-42% (1-5). Conclusions: Focal cryotherapy is a feasible procedure to treat anterior PCa that may offer minimal morbidity, allowing good cancer control and better functional outcomes when compared to whole-gland treatment.
Ginsburger, Kévin; Poupon, Fabrice; Beaujoin, Justine; Estournet, Delphine; Matuschke, Felix; Mangin, Jean-François; Axer, Markus; Poupon, Cyril
2018-02-01
White matter is composed of irregularly packed axons leading to a structural disorder in the extra-axonal space. Diffusion MRI experiments using oscillating gradient spin echo sequences have shown that the diffusivity transverse to axons in this extra-axonal space is dependent on the frequency of the employed sequence. In this study, we observe the same frequency-dependence using 3D simulations of the diffusion process in disordered media. We design a novel white matter numerical phantom generation algorithm which constructs biomimicking geometric configurations with few design parameters, and enables control of the level of disorder of the generated phantoms. The influence of various geometrical parameters present in white matter, such as global angular dispersion, tortuosity, presence of Ranvier nodes, and beading, on the extra-cellular perpendicular diffusivity frequency dependence was investigated by simulating the diffusion process in numerical phantoms of increasing complexity and fitting the resulting simulated diffusion MR signal attenuation with an adequate analytical model designed for trapezoidal OGSE sequences. This work suggests that angular dispersion and especially beading have non-negligible effects on these extracellular diffusion metrics that may be measured using standard OGSE DW-MRI clinical protocols.
Regularities, Natural Patterns and Laws of Nature
Directory of Open Access Journals (Sweden)
Stathis Psillos
2014-02-01
Full Text Available The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology. Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.
Directory of Open Access Journals (Sweden)
T. N. Zhilnikova
2016-01-01
Full Text Available Objectives. The formation of the structure of hardened concrete grouting with two-stage expansion is a complex process influenced by many factors, both of a prescriptive nature (composition and additive dosage, mineralogical composition of Portland cement clinker, concrete composition, the presence of chemical additives) and of a process nature (the fineness of cement grinding, curing temperature, etc.). Methods. In order to assess the impact of the above factors, the article proposes a number of integrated indicators, each characterising a process together with the factor generating the influence. For evaluating the influence of different factors on the process of gas generation, the authors propose an effectiveness ratio of gas generation. Results. The article presents the results of an investigation into the influence of the amount of gassing agent and the type and dosage of superplasticiser on the process of gassing, measured by the displacement method on the mortar mix. The authors similarly propose an expansion efficiency coefficient. The article presents the results of the investigation into the influence of the amount of gassing agent, the presence and amount of superplasticiser, the sand/cement ratio, aggregate size and water-cement ratio during the first stage of expansion of the mixture. The authors propose a formula describing the dependence of the relative expansion deformations on the concentration of filler. In order to assess the conditions in which a mixture is confined, the use of a constraint expansion coefficient is proposed. Conclusion. Use of a hardening condition coefficient is proposed as a means of accounting for the effect of curing conditions on the strength of concrete grouting with two-stage expansion. The authors recommend taking the introduction of correction factors into account when considering the impact of
Low-Complexity Regularization Algorithms for Image Deblurring
Alanazi, Abdulrahman
2016-11-01
Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low- and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, we developed algorithms that also work
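A minimal frequency-domain version of the RLS deblurring step is the classical Tikhonov/Wiener filter with periodic boundary conditions. This sketch is a textbook baseline under our own parameter names, not the thesis's algorithm for choosing the regularization parameter:

```python
import numpy as np

def tikhonov_deblur(y, psf, lam):
    """argmin_x ||h * x - y||^2 + lam ||x||^2, solved exactly in the Fourier domain."""
    H = np.fft.fft2(psf, s=y.shape)
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# toy example: blur a square with a centered 5x5 box kernel, add noise, deblur
x = np.zeros((32, 32)); x[8:24, 8:24] = 1.0
psf = np.zeros((32, 32)); psf[:5, :5] = 1.0 / 25
psf = np.roll(psf, (-2, -2), axis=(0, 1))            # center the kernel at (0, 0)
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf)))
y += 0.01 * np.random.default_rng(0).standard_normal(y.shape)
x_hat = tikhonov_deblur(y, psf, lam=1e-3)
```

The RLS and SRTV methods in the thesis differ in how the regularization term is defined and how `lam` is selected, but this closed-form filter is the shape of the underlying non-blind problem.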
An experimentalists view on the analogy between step edges and quantum mechanical particles
Zandvliet, Henricus J.W.
1995-01-01
Guided by scanning tunnelling microscopy images of regularly stepped surfaces it will be illustrated that there is a striking similarity between the behaviour of monoatomic step edges and quantum mechanical particles (spinless fermions). The direction along the step edge is equivalent to the time,
Scofield, David C.; Rytlewski, Jeffrey D.; Childress, Paul; Shah, Kishan; Tucker, Aamir; Khan, Faisal; Peveler, Jessica; Li, Ding; McKinley, Todd O.; Chu, Tien-Min G.; Hickman, Debra L.; Kacena, Melissa A.
2018-05-01
This study was initiated as a component of a larger undertaking designed to study bone healing in microgravity aboard the International Space Station (ISS). Spaceflight experimentation introduces multiple challenges not seen in ground studies, especially with regard to physical space, limited resources, and inability to easily reproduce results. Together, these can lead to diminished statistical power and increased risk of failure. Because of this limited space, and the need to improve statistical power by increasing sample sizes over historical numbers, NASA studies involving mice require housing them at densities higher than recommended in the Guide for the Care and Use of Laboratory Animals (National Research Council, 2011). All previous NASA missions in which mice were co-housed involved female mice; however, in our spaceflight studies examining bone healing, male mice are required for optimal experimentation. Additionally, the logistics associated with spaceflight hardware and our study design necessitated variation of density and cohort make up during the experiment. This required the development of a new method to successfully co-house male mice while varying mouse density and hierarchical structure. For this experiment, male mice in an experimental housing schematic of variable density (Spaceflight Correlate), analogous to previously established NASA spaceflight studies, were compared to a standard ground-based housing schematic (Normal Density Controls) throughout the experimental timeline. We hypothesized that mice in the Spaceflight Correlate group would show no significant difference in activity, aggression, or stress when compared to Normal Density Controls. Activity and aggression were assessed using a novel activity scoring system (based on prior literature, validated in-house) and stress was assessed via body weights, organ weights, and veterinary assessment. No significant differences were detected between the Spaceflight Correlate group and the
Fractional Regularization Term for Variational Image Registration
Directory of Open Access Journals (Sweden)
Rafael Verdú-Monedero
2009-01-01
Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, being applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration, which is best suited to some applications and is not possible in the spatial domain. Results with 3D actual images show the validity of this approach.
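The frequency-domain fractional derivative underlying such a regularizer is simple to sketch: multiply the spectrum by the symbol (iω)^α, which interpolates continuously between first-order (diffusion-like) and second-order (curvature-like) smoothing. Illustrative 1D operator only, not the paper's full registration functional:

```python
import numpy as np

def fractional_derivative(f, alpha, dx=1.0):
    """Fractional derivative of order alpha via the Fourier symbol (i*omega)^alpha."""
    omega = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.real(np.fft.ifft((1j * omega) ** alpha * np.fft.fft(f)))

# sanity check on a sine wave: alpha = 1 gives cos, alpha = 2 gives -sin
x = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
h = x[1] - x[0]
d1 = fractional_derivative(np.sin(x), 1.0, dx=h)
d2 = fractional_derivative(np.sin(x), 2.0, dx=h)
```

Varying `alpha` between 1 and 2 realizes the gradual diffusion-to-curvature transition the paper describes, extended to higher dimensions via the multidimensional FFT.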
International Nuclear Information System (INIS)
Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.
2004-01-01
We construct a family of time- and angular-dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)
Online Manifold Regularization by Dual Ascending Procedure
Directory of Open Access Journals (Sweden)
Boliang Sun
2013-01-01
Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transferring manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle the settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way for the design and analysis of online manifold regularization algorithms.
A Regularization SAA Scheme for a Stochastic Mathematical Program with Complementarity Constraints
Directory of Open Access Journals (Sweden)
Yu-xin Li
2014-01-01
Full Text Available To reflect uncertain data in practical problems, stochastic versions of the mathematical program with complementarity constraints (MPCC have drawn much attention in the recent literature. Our concern is the detailed analysis of convergence properties of a regularization sample average approximation (SAA method for solving a stochastic mathematical program with complementarity constraints (SMPCC. The analysis of this regularization method is carried out in three steps: First, the almost sure convergence of optimal solutions of the regularized SAA problem to that of the true problem is established by the notion of epiconvergence in variational analysis. Second, under MPCC-MFCQ, which is weaker than MPCC-LICQ, we show that any accumulation point of Karush-Kuhn-Tucker points of the regularized SAA problem is almost surely a kind of stationary point of SMPCC as the sample size tends to infinity. Finally, some numerical results are reported to show the efficiency of the method proposed.
Regular Network Class Features Enhancement Using an Evolutionary Synthesis Algorithm
Directory of Open Access Journals (Sweden)
O. G. Monahov
2014-01-01
Full Text Available This paper investigates a solution of the optimization problem concerning the construction of diameter-optimal regular networks (graphs). Regular networks are of practical interest as graph-theoretical models of reliable communication networks in parallel supercomputer systems and as a basis of small-world structure in optical and neural networks. It presents a new class of parametrically described regular networks: hypercirculant networks (graphs). An approach that uses evolutionary algorithms for the automatic generation of parametric descriptions of optimal hypercirculant networks is developed. Synthesis of optimal hypercirculant networks is based on optimal circulant networks with a smaller degree of nodes. To construct an optimal hypercirculant network, a template circulant network is taken from the known optimal families of circulant networks with the desired number of nodes and a smaller degree of nodes. Thus, a generating set of the circulant network is used as a generating subset of the hypercirculant network, and the missing generators are synthesized by means of the evolutionary algorithm, which carries out minimization of the diameter (average diameter) of the networks. A comparative analysis of the structural characteristics of hypercirculant, toroidal, and circulant networks is conducted. The advantage of hypercirculant networks in such structural characteristics as diameter, average diameter, and bisection width, at comparable costs in the number of nodes and the number of connections, is demonstrated. Notable is the advantage of hypercirculant networks of dimension three over four-dimensional tori: the optimization of hypercirculant networks of dimension three is more efficient than the introduction of an additional dimension for the corresponding toroidal structures. The paper also notes the best structural parameters of hypercirculant networks in comparison with iBT-networks previously
A new approach to nonlinear constrained Tikhonov regularization
Ito, Kazufumi
2011-09-16
We present a novel approach to nonlinear constrained Tikhonov regularization from the viewpoint of optimization theory. A second-order sufficient optimality condition is suggested as a nonlinearity condition to handle the nonlinearity of the forward operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a general class of parameter identification problems, for which (new) source and nonlinearity conditions are derived and the structural property of the nonlinearity term is revealed. A number of examples including identifying distributed parameters in elliptic differential equations are presented. © 2011 IOP Publishing Ltd.
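As a concrete, much simpler illustration of the a posteriori choice rules mentioned above, the sketch below applies the discrepancy principle to a standard *linear* Tikhonov problem (the abstract's setting is nonlinear and constrained). The Hilbert-matrix forward operator, the noise level, and the geometric search over the parameter α are our own illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned forward operator (Hilbert matrix) and exact data.
n = 50
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b_exact = A @ x_true

delta = 1e-3                                  # prescribed noise level ||b - b_exact||
noise = rng.standard_normal(n)
b = b_exact + delta * noise / np.linalg.norm(noise)

# Tikhonov filter via the SVD: x_alpha = sum_i s_i/(s_i^2 + alpha) (u_i . b) v_i.
U, s, Vt = np.linalg.svd(A)
beta = U.T @ b

def residual(alpha):
    """||A x_alpha - b|| expressed through the SVD filter factors."""
    return np.linalg.norm((alpha / (s**2 + alpha)) * beta)

# Discrepancy principle: take the largest alpha on a geometric grid with
# residual <= tau * delta.
tau = 1.1
alpha = 1.0
while residual(alpha) > tau * delta:
    alpha *= 0.5

x_rec = Vt.T @ ((s / (s**2 + alpha)) * beta)
print(alpha, residual(alpha))
```

The halving search is the crudest possible realization of the principle; the paper's interest is in proving convergence rates for such rules, not in this numeric recipe.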
Traveling waves of the regularized short pulse equation
International Nuclear Information System (INIS)
Shen, Y; Horikis, T P; Kevrekidis, P G; Frantzeskakis, D J
2014-01-01
The properties of the so-called regularized short pulse equation (RSPE) are explored with a particular focus on the traveling wave solutions of this model. We theoretically analyze and numerically evolve two sets of such solutions. First, using a fixed point iteration scheme, we numerically integrate the equation to find solitary waves. It is found that these solutions are well approximated by a finite sum of powers of hyperbolic secants. The dependence of the soliton's parameters (height, width, etc) on the parameters of the equation is also investigated. Second, by developing a multiple scale reduction of the RSPE to the nonlinear Schrödinger equation, we are able to construct (both standing and traveling) envelope wave breather type solutions of the former, based on the solitary wave structures of the latter. Both the regular and the breathing traveling wave solutions identified are found to be robust and should thus be amenable to observations in the form of few optical cycle pulses. (paper)
Regularization of Hamilton-Lagrangian guiding center theories
International Nuclear Information System (INIS)
Correa-Restrepo, D.; Wimmel, H.K.
1985-04-01
The Hamilton-Lagrangian guiding-center (G.C.) theories of Littlejohn, Wimmel, and Pfirsch show a singularity for B-fields with non-vanishing parallel curl at a critical value of v∥, which complicates applications. The singularity is related to a sudden breakdown, at a critical v∥, of gyration in the exact particle mechanics. While the latter is a real effect, the G.C. singularity can be removed. To this end a regularization method is defined that preserves the Hamilton-Lagrangian structure and the conservation theorems. For demonstration this method is applied to the standard G.C. theory (without polarization drift). Liouville's theorem and G.C. kinetic equations are also derived in regularized form. The method could equally well be applied to the case with polarization drift and to relativistic G.C. theory. (orig.)
Multiview vector-valued manifold regularization for multilabel image classification.
Luo, Yong; Tao, Dacheng; Xu, Chang; Xu, Chao; Liu, Hong; Wen, Yonggang
2013-05-01
In computer vision, image datasets used for classification are naturally associated with multiple labels and comprised of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently available tools ignore either the label relationship or the view complementarity. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV³MR) to integrate multiple features. MV³MR exploits the complementary property of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging, but popular, datasets, PASCAL VOC '07 and MIR Flickr, and validate the effectiveness of the proposed MV³MR for image classification.
Regular transport dynamics produce chaotic travel times.
Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro
2014-06-01
In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.
Regularity of difference equations on Banach spaces
Agarwal, Ravi P; Lizama, Carlos
2014-01-01
This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advanced knowledge of and interest in functional analysis.
PET regularization by envelope guided conjugate gradients
International Nuclear Information System (INIS)
Kaufman, L.; Neumaier, A.
1996-01-01
The authors propose a new way to iteratively solve large scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows us to adjust the regularization parameter adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations.
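The Tikhonov L-curve mentioned here is easy to visualize on a toy linear problem. The sketch below (our own illustration, not the paper's PET setting or its envelope-guided conjugate gradient iteration) traces the curve of (‖Ax_α − b‖, ‖x_α‖) over a grid of regularization parameters and picks a corner with a simple nearest-to-origin heuristic in normalized log-log coordinates, a crude stand-in for a true maximum-curvature corner detector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-posed problem: 1-D Gaussian blur plus noise.
n = 64
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03**2))
A /= A.sum(axis=1, keepdims=True)
x_true = ((t > 0.3) & (t < 0.7)).astype(float)
b = A @ x_true + 1e-2 * rng.standard_normal(n)

# Tikhonov solutions for a grid of parameters, via the SVD filter factors.
U, s, Vt = np.linalg.svd(A)
beta = U.T @ b
alphas = np.logspace(-8, 0, 200)
res = np.array([np.linalg.norm((a / (s**2 + a)) * beta) for a in alphas])
sol = np.array([np.linalg.norm((s / (s**2 + a)) * beta) for a in alphas])

# Corner heuristic: normalize the log-log L-curve to [0, 1]^2 and take the
# point closest to the origin.
lx, ly = np.log(res), np.log(sol)
lx = (lx - lx.min()) / (lx.max() - lx.min())
ly = (ly - ly.min()) / (ly.max() - ly.min())
corner = alphas[np.argmin(lx**2 + ly**2)]
print(corner)
```

The point of the paper is that such a curve can be approximated *during* the iteration rather than by solving the problem for every α as done here.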
Matrix regularization of embedded 4-manifolds
International Nuclear Information System (INIS)
Trzetrzelewski, Maciej
2012-01-01
We consider products of two 2-manifolds such as S²×S², embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N), i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and as a byproduct of the 3-algebra, which makes the regularization of S³ also possible).
Step-by-step cyclic processes scheduling
DEFF Research Database (Denmark)
Bocewicz, G.; Nielsen, Izabela Ewa; Banaszak, Z.
2013-01-01
Automated Guided Vehicles (AGVs) fleet scheduling is one of the big problems in Flexible Manufacturing System (FMS) control. The problem is more complicated when concurrent multi-product manufacturing and resource deadlock avoidance policies are considered. The objective of the research is to provide a declarative model enabling one to state a constraint satisfaction problem aimed at AGVs fleet scheduling subject to assumed itineraries of concurrently manufactured product types. In other words, assuming a given layout of the FMS's material handling and production routes of simultaneously manufactured orders, the main objective is to provide the declarative framework aimed at conditions allowing one to calculate the AGVs fleet schedule in online mode. An illustrative example of the relevant algebra-like driven step-by-step cyclic scheduling is provided.
SparseBeads data: benchmarking sparsity-regularized computed tomography
Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.
2017-12-01
Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; however, empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on expected sample sparsity level as an aid in planning of dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
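The key predictor in the abstract, gradient sparsity, is straightforward to compute. The sketch below builds a synthetic piecewise-constant "bead-like" image (our own stand-in for the SparseBeads scans) and measures the number and fraction of nonzero gradient entries, the quantity the reported near-linear relation ties to the critical number of projections.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic piecewise-constant image of random discs ("beads").
n = 128
yy, xx = np.mgrid[0:n, 0:n]
img = np.zeros((n, n))
for _ in range(30):
    cx = int(rng.integers(10, n - 10))
    cy = int(rng.integers(10, n - 10))
    r = int(rng.integers(4, 9))
    img[(xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2] = 1.0

# Gradient sparsity: number (and fraction) of nonzero finite differences.
gx = np.diff(img, axis=1)
gy = np.diff(img, axis=0)
nonzeros = np.count_nonzero(gx) + np.count_nonzero(gy)
fraction = nonzeros / (gx.size + gy.size)
print(nonzeros, fraction)
```

Under the paper's finding, the number of projections needed for satisfactory TV reconstruction grows roughly linearly in `nonzeros`; the proportionality constant would have to be calibrated for a given scanner and noise level.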
Subcortical processing of speech regularities underlies reading and music aptitude in children
2011-01-01
Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input
Subcortical processing of speech regularities underlies reading and music aptitude in children
Directory of Open Access Journals (Sweden)
Strait Dana L
2011-10-01
Full Text Available Abstract Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to
Subcortical processing of speech regularities underlies reading and music aptitude in children.
Strait, Dana L; Hornickel, Jane; Kraus, Nina
2011-10-17
Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input. Definition of common biological underpinnings
Steps to preventing Type 2 diabetes: Exercise, walk more, or sit less?
Directory of Open Access Journals (Sweden)
Catrine eTudor-Locke
2012-11-01
Full Text Available Accumulated evidence supports the promotion of structured exercise for treating prediabetes and preventing Type 2 diabetes. Unfortunately, contemporary societal changes in lifestyle behaviors (occupational, domestic, transportation, and leisure time) have resulted in a notable widespread deficiency of non-exercise physical activity (e.g., ambulatory activity undertaken outside the context of purposeful exercise) that has been simultaneously exchanged for an excess in sedentary behaviors (e.g., desk work, labor saving devices, motor vehicle travel, and screen-based leisure time pursuits). It is possible that the known beneficial effects of more structured forms of exercise are attenuated or otherwise undermined against this backdrop of normalized and ubiquitous slothful living. Although public health guidelines have traditionally focused on promoting a detailed exercise prescription, it is evident that the more pressing need is to revise and expand the message to address this insidious and deleterious lifestyle shift. Specifically, we recommend that adults avoid averaging < 5,000 steps/day and strive to average ≥ 7,500 steps/day, of which ≥ 3,000 steps (representing at least 30 minutes) should be taken at a cadence ≥ 100 steps/min. They should also practice regularly breaking up extended bouts of sitting with ambulatory activity. Simply put, we must consider advocating a whole message to walk more, sit less, and exercise.
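The quantitative targets above translate directly into a simple daily check. The helper below is our own illustrative encoding of the thresholds stated in the abstract (5,000 and 7,500 steps/day, with 3,000 of them at a cadence of at least 100 steps/min); the function name and category labels are ours, not the authors'.

```python
def classify_day(total_steps, brisk_steps):
    """Classify one day's pedometer totals against the abstract's targets.

    brisk_steps: steps taken at a cadence of >= 100 steps/min.
    """
    if total_steps < 5000:
        return "below 5,000/day - avoid"
    if total_steps >= 7500 and brisk_steps >= 3000:
        return "meets target"
    return "between thresholds"

print(classify_day(4200, 0))      # below 5,000/day - avoid
print(classify_day(8100, 3200))   # meets target
print(classify_day(6500, 3500))   # between thresholds
```

The third recommendation, regularly breaking up extended sitting bouts, would require time-stamped data rather than daily totals and is omitted here.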
Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators
Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim
2017-01-01
This article addresses improvements on the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well-acknowledged that the estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force by construction the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noises while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. Simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.
Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators
Kammoun, Abla
2017-10-25
This article addresses improvements on the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well-acknowledged that the estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force by construction the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noises while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. Simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.
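One widely used fixed-point form of the regularized Tyler estimator (in the Pascal-Chen-Wiesel style; the paper's exact normalization may differ) can be sketched as follows. Every eigenvalue of the resulting estimate is at least ρ, the property the abstract emphasizes. The data model, the value of ρ, and the iteration budget are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 10, 40                       # dimension and number of secondary samples
X = rng.standard_normal((n, p)) @ np.diag(np.sqrt(np.linspace(1.0, 5.0, p)))

# Fixed-point iteration for a regularized Tyler estimator:
#   Sigma <- (1 - rho) * (p/n) * sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i) + rho * I
rho = 0.3
Sigma = np.eye(p)
for _ in range(200):
    Sinv = np.linalg.inv(Sigma)
    w = np.einsum('ij,jk,ik->i', X, Sinv, X)   # quadratic forms x_i^T Sinv x_i
    S_new = (1 - rho) * (p / n) * (X.T / w) @ X + rho * np.eye(p)
    if np.linalg.norm(S_new - Sigma) < 1e-12:
        Sigma = S_new
        break
    Sigma = S_new

# By construction the first term is positive semidefinite, so every
# eigenvalue of Sigma is at least rho.
print(np.linalg.eigvalsh(Sigma).min())
```

The paper's contribution is the random-matrix-theoretic choice of ρ for the ANMF detector, not this iteration itself, which is standard.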
Achieving world class performance step by step.
Kerr, L J
1992-02-01
Bridgestone of Japan acquired Firestone, a United States corporation, in early 1988. This article describes the integration process of the two organizations' cultures. There are many lessons in the approach that should apply to a variety of organizations. The Strategic Improvement Process, a rather highly structured approach, harnesses the strengths of both the Japanese and American organizations and starts the manufacturing and technical departments on the road to excellence.
On a correspondence between regular and non-regular operator monotone functions
DEFF Research Database (Denmark)
Gibilisco, P.; Hansen, Frank; Isola, T.
2009-01-01
We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....
Regularity and irreversibility of weekly travel behavior
Kitamura, R.; van der Hoorn, A.I.J.M.
1987-01-01
Dynamic characteristics of travel behavior are analyzed in this paper using weekly travel diaries from two waves of panel surveys conducted six months apart. An analysis of activity engagement indicates the presence of significant regularity in weekly activity participation between the two waves.
Regular and context-free nominal traces
DEFF Research Database (Denmark)
Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca
2017-01-01
Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...
Faster 2-regular information-set decoding
Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.
2011-01-01
Fix positive integers B and w. Let C be a linear code over F₂ of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and
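For tiny parameters the 2-regular-decoding problem can be solved by brute force over all admissible block patterns, which makes the definition concrete. The parity-check matrix below is our own toy example (arithmetic is mod 2, i.e. over F₂) and has nothing to do with FSB's actual parameters.

```python
import itertools
import numpy as np

B, w = 3, 3                      # block length and number of blocks
H = np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1],
              [1, 0, 1, 0, 1, 0, 1, 0, 1]])  # toy parity checks, length Bw = 9

# Per-block patterns of Hamming weight exactly 0 or 2.
weight2 = [tuple(1 if i in pair else 0 for i in range(B))
           for pair in itertools.combinations(range(B), 2)]
patterns = [(0,) * B] + weight2

# Enumerate all w-tuples of block patterns and keep the nonzero codewords.
solutions = []
for blocks in itertools.product(patterns, repeat=w):
    v = np.array([bit for blk in blocks for bit in blk])
    if v.any() and not (H @ v % 2).any():
        solutions.append(v)

print(len(solutions), solutions[0])
```

Real instances have far too many blocks for enumeration, which is why information-set decoding techniques like the one in this paper are needed.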
Complexity in union-free regular languages
Czech Academy of Sciences Publication Activity Database
Jirásková, G.; Masopust, Tomáš
2011-01-01
Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html
Regular Gleason Measures and Generalized Effect Algebras
Dvurečenskij, Anatolij; Janda, Jiří
2015-12-01
We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.
Regularization of finite temperature string theories
International Nuclear Information System (INIS)
Leblanc, Y.; Knecht, M.; Wallet, J.C.
1990-01-01
The tachyonic divergences occurring in the free energy of various string theories at finite temperature are eliminated through the use of regularization schemes and analytic continuations. For closed strings, we obtain finite expressions which, however, develop an imaginary part above the Hagedorn temperature, whereas open string theories are still plagued with dilatonic divergences. (orig.)
A Sim(2) invariant dimensional regularization
Directory of Open Access Journals (Sweden)
J. Alfaro
2017-09-01
Full Text Available We introduce a Sim(2) invariant dimensional regularization of loop integrals. Then we can compute the one loop quantum corrections to the photon self energy, electron self energy and vertex in the Electrodynamics sector of the Very Special Relativity Standard Model (VSRSM).
Continuum regularized Yang-Mills theory
International Nuclear Information System (INIS)
Sadun, L.A.
1987-01-01
Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.
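The Parisi-Wu Langevin equation underlying stochastic quantization is easy to demonstrate on a free scalar field on a periodic one-dimensional lattice, where the stationary two-point function is known in closed form. The sketch below is a bare (unregulated) Langevin simulation; the lattice size, mass, and fictitious-time step are illustrative, and no continuum regulator of the kind studied in the thesis is applied.

```python
import numpy as np

rng = np.random.default_rng(4)
N, m2 = 32, 1.0                          # lattice sites, mass squared
eps, nsteps, ntherm = 0.01, 100000, 10000

# Langevin evolution in fictitious time tau:
#   phi += -dS/dphi * eps + sqrt(2*eps) * eta,
# for the free action S = sum_i [ (phi_{i+1} - phi_i)^2 / 2 + m2 * phi_i^2 / 2 ].
phi = np.zeros(N)
acc, count = 0.0, 0
for step in range(nsteps):
    lap = np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)
    phi += eps * (lap - m2 * phi) + np.sqrt(2.0 * eps) * rng.standard_normal(N)
    if step >= ntherm:
        acc += np.mean(phi ** 2)
        count += 1

est = acc / count
# Exact lattice result: <phi_i^2> = (1/N) sum_k 1 / (m2 + 2 - 2 cos(2 pi k / N)).
k = np.arange(N)
exact = np.mean(1.0 / (m2 + 2.0 - 2.0 * np.cos(2.0 * np.pi * k / N)))
print(est, exact)
```

The estimate carries an O(eps) discretization bias on top of Monte Carlo error; the regulator discussed in the thesis acts on the noise/dynamics, not on this time-stepping error.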
Gravitational lensing by a regular black hole
International Nuclear Information System (INIS)
Eiroa, Ernesto F; Sendra, Carlos M
2011-01-01
In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.
Gravitational lensing by a regular black hole
Energy Technology Data Exchange (ETDEWEB)
Eiroa, Ernesto F; Sendra, Carlos M, E-mail: eiroa@iafe.uba.ar, E-mail: cmsendra@iafe.uba.ar [Instituto de Astronomia y Fisica del Espacio, CC 67, Suc. 28, 1428, Buenos Aires (Argentina)
2011-04-21
In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.
Annotation of regular polysemy and underspecification
DEFF Research Database (Denmark)
Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria
2013-01-01
We present the result of an annotation task on regular polysemy for a series of seman- tic classes or dot types in English, Dan- ish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...
Stabilization, pole placement, and regular implementability
Belur, MN; Trentelman, HL
In this paper, we study control by interconnection of linear differential systems. We give necessary and sufficient conditions for regular implementability of a given linear, differential system. We formulate the problems of stabilization and pole placement as problems of finding a suitable,
12 CFR 725.3 - Regular membership.
2010-01-01
... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit....5(b) of this part, and forwarding with its completed application funds equal to one-half of this... 1, 1979, is not required to forward these funds to the Facility until October 1, 1979. (3...
Supervised scale-regularized linear convolutionary filters
DEFF Research Database (Denmark)
Loog, Marco; Lauze, Francois Bernard
2017-01-01
also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...
On regular riesz operators | Raubenheimer | Quaestiones ...
African Journals Online (AJOL)
The r-asymptotically quasi finite rank operators on Banach lattices are examples of regular Riesz operators. We characterise Riesz elements in a subalgebra of a Banach algebra in terms of Riesz elements in the Banach algebra. This enables us to characterise r-asymptotically quasi finite rank operators in terms of adjoint ...
Regularized Discriminant Analysis: A Large Dimensional Study
Yang, Xiaoke
2018-04-28
In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis is assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only reveals mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the true class statistics are unknown. We benchmark the performance of our proposed consistent estimator against the classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms the others in terms of mean squared error (MSE).
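As a purely illustrative companion to the RDA abstract above, the sketch below implements a generic RDA-style classifier in which each class covariance is shrunk toward the identity, Sigma_k(gamma) = (1 - gamma) * Sigma_k + gamma * I. The shrinkage form, the parameter name `gamma`, and all defaults are assumptions for illustration, not the thesis's exact parameterization.

```python
import numpy as np

def rda_fit(X, y, gamma=0.5):
    """Fit per-class means and shrinkage-regularized covariances.
    Sigma_k(gamma) = (1 - gamma) * Sigma_k + gamma * I is one common
    RDA-style regularization (assumed form, for illustration)."""
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        mu = Xk.mean(axis=0)
        Sigma = np.cov(Xk, rowvar=False)
        Sigma_reg = (1.0 - gamma) * Sigma + gamma * np.eye(X.shape[1])
        params[k] = (mu, Sigma_reg, len(Xk) / len(X))
    return params

def rda_predict(params, x):
    """Assign x to the class maximizing the Gaussian log-density plus log-prior."""
    best, best_score = None, -np.inf
    for k, (mu, Sigma, prior) in params.items():
        d = x - mu
        _, logdet = np.linalg.slogdet(Sigma)
        score = -0.5 * d @ np.linalg.solve(Sigma, d) - 0.5 * logdet + np.log(prior)
        if score > best_score:
            best, best_score = k, score
    return best
```

With gamma = 1 each class uses the identity covariance (a nearest-mean-like rule); with gamma = 0 it reduces to plain quadratic discriminant analysis.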
Complexity in union-free regular languages
Czech Academy of Sciences Publication Activity Database
Jirásková, G.; Masopust, Tomáš
2011-01-01
Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html
Bit-coded regular expression parsing
DEFF Research Database (Denmark)
Nielsen, Lasse; Henglein, Fritz
2011-01-01
the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli’s greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...
Tetravalent one-regular graphs of order 4p²
DEFF Research Database (Denmark)
Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan
2014-01-01
A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.
Regularized Biot-Savart Laws for Modeling Magnetic Flux Ropes
Titov, Viacheslav; Downs, Cooper; Mikic, Zoran; Torok, Tibor; Linker, Jon A.
2017-08-01
Many existing models assume that magnetic flux ropes play a key role in solar flares and coronal mass ejections (CMEs). It is therefore important to develop efficient methods for constructing flux-rope configurations constrained by observed magnetic data and the initial morphology of CMEs. As our new step in this direction, we have derived and implemented a compact analytical form that represents the magnetic field of a thin flux rope with an axis of arbitrary shape and a circular cross-section. This form implies that the flux rope carries axial current I and axial flux F, so that the respective magnetic field is a curl of the sum of toroidal and poloidal vector potentials proportional to I and F, respectively. The vector potentials are expressed in terms of Biot-Savart laws whose kernels are regularized at the rope axis. We regularized them in such a way that for a straight-line axis the form provides a cylindrical force-free flux rope with a parabolic profile of the axial current density. So far, we set the shape of the rope axis by tracking the polarity inversion lines of observed magnetograms and estimating its height and other parameters of the rope from a calculated potential field above these lines. In spite of this heuristic approach, we were able to successfully construct pre-eruption configurations for the 2009 February 13 and 2011 October 1 CME events. These applications demonstrate that our regularized Biot-Savart laws are indeed a very flexible and efficient method for energizing initial configurations in MHD simulations of CMEs. We discuss possible ways of optimizing the axis paths and other extensions of the method in order to make it more useful and robust. Research supported by NSF, NASA's HSR and LWS Programs, and AFOSR.
Regularized κ-distributions with non-diverging moments
Scherer, K.; Fichtner, H.; Lazar, M.
2017-12-01
For various plasma applications the so-called (non-relativistic) κ-distribution is widely used to reproduce and interpret the suprathermal particle populations exhibiting a power-law distribution in velocity or energy. Despite its reputation the standard κ-distribution as a concept is still disputable, mainly due to the velocity moments M_l, which make a macroscopic characterization possible but exist only for low orders l < 2κ - 1; even the definition of the κ-distribution itself is conditioned by the existence of the moment of order l = 2 (i.e., the kinetic temperature), satisfied only for κ > 3/2. In order to resolve these critical limitations we introduce the regularized κ-distribution with non-diverging moments. For the evaluation of all velocity moments a general analytical expression is provided, enabling a significant step towards a macroscopic (fluid-like) description of space plasmas and, in general, any system of κ-distributed particles.
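The effect of regularization on the moments can be checked numerically. The sketch below assumes an unnormalized isotropic cutoff form f(v) ∝ (1 + v²/(κθ²))^-(κ+1) · exp(-α²v²/θ²); the exact parameterization and normalization of the regularized κ-distribution are given in the paper, so this is only an illustration of the divergence issue: for κ = 2 the fourth-order moment integrand of the standard distribution tends to a constant (the integral grows without bound), while the exponentially cut-off version converges.

```python
import numpy as np

def moment(l, vmax, kappa=2.0, theta=1.0, alpha=0.2, regularized=True, n=200_000):
    """Numerically approximate the l-th velocity moment
    M_l ≈ ∫_0^vmax v^l f(v) 4π v² dv  (rectangle rule) for an
    unnormalized isotropic κ-distribution, optionally with the
    exponential cutoff exp(-(alpha v / theta)²) of the regularized variant.
    The functional form is an illustrative assumption."""
    v = np.linspace(0.0, vmax, n)
    f = (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))
    if regularized:
        f = f * np.exp(-(alpha * v / theta) ** 2)
    dv = v[1] - v[0]
    return np.sum(v**l * f * 4.0 * np.pi * v**2) * dv
```

Doubling the integration domain barely changes the regularized moment, while the standard κ = 2 fourth moment keeps growing with the domain, mirroring the divergence discussed above.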
Extended -Regular Sequence for Automated Analysis of Microarray Images
Directory of Open Access Journals (Sweden)
Jin Hee-Jeong
2006-01-01
Full Text Available Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.
Metal-assisted etch combined with regularizing etch
Energy Technology Data Exchange (ETDEWEB)
Yim, Joanne; Miller, Jeff; Jura, Michael; Black, Marcie R.; Forziati, Joanne; Murphy, Brian; Magliozzi, Lauren
2018-03-06
In an aspect of the disclosure, a process for forming nanostructuring on a silicon-containing substrate is provided. The process comprises (a) performing metal-assisted chemical etching on the substrate, (b) performing a clean, including partial or total removal of the metal used to assist the chemical etch, and (c) performing an isotropic or substantially isotropic chemical etch subsequently to the metal-assisted chemical etch of step (a). In an alternative aspect of the disclosure, the process comprises (a) performing metal-assisted chemical etching on the substrate, (b) cleaning the substrate, including removal of some or all of the assisting metal, and (c) performing a chemical etch which results in regularized openings in the silicon substrate.
Schaffer, Connie
2017-01-01
Many well-intended instructors use Socratic or leveled questioning to facilitate the discussion of an assigned reading. While this engages a few students, most can opt to remain silent. The seven step strategy described in this article provides an alternative to classroom silence and engages all students. Students discuss a single reading as they…
International Nuclear Information System (INIS)
Liu, Xiaoxu; Wang, Yanhui; Dong, Liang; Chen, Xi; Xin, Guoxiang; Zhang, Yan; Zang, Jianbing
2016-01-01
Shell/core structural boron and nitrogen co-doped graphitic carbon/nanodiamond (BN-C/ND) non-noble metal catalyst has been synthesized by a simple one-step heat-treatment of the mixture with nanodiamond, melamine, boric acid and FeCl3. In the process of the surface graphitization of nanodiamond with catalysis by FeCl3, B and N atoms from the decomposition of boric acid and melamine were directly introduced into the graphite lattice to form B, N co-doped graphitic carbon shell, while the core still retained the diamond structure. Electrochemical measurements of the BN-C/ND catalyst show much higher electrocatalytic activities towards oxygen reduction reaction (ORR) in alkaline medium than its analogues doped with B or N alone (B-C/ND or N-C/ND). The high catalytic activity of BN-C/ND is attributed to the synergetic effect caused by co-doping of C/ND with B and N. Meanwhile, the BN-C/ND exhibits an excellent electrochemical stability due to the special shell/core structure. Almost no alteration occurred in the cyclic voltammetry measurements for BN-C/ND before and after 5000 cycles. All experimental results prove that the BN-C/ND may be exploited as a potentially efficient and inexpensive non-noble metal cathode catalyst for ORR to substitute Pt-based catalysts in fuel cells.
GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES
International Nuclear Information System (INIS)
Rogers, Adam; Fiege, Jason D.
2012-01-01
Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
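The GCV criterion mentioned above can be sketched for standard-form Tikhonov regularization via the SVD; this is the generic deconvolution-style formulation, not the lensing-specific semilinear operator of the paper, and the helper names and economy-SVD (square-system) simplification are assumptions.

```python
import numpy as np

def gcv(lmbda, U, s, b):
    """Generalized cross-validation score for Tikhonov regularization,
    GCV(λ) = ||A x_λ - b||² / trace(I - A A_λ⁺)², via the SVD A = U diag(s) Vᵀ.
    Assumes a square (or economy-SVD) system for simplicity."""
    beta = U.T @ b
    filt = s**2 / (s**2 + lmbda**2)      # Tikhonov filter factors
    resid = np.sum(((1.0 - filt) * beta) ** 2)
    dof = len(s) - np.sum(filt)          # effective residual degrees of freedom
    return resid / dof**2

def pick_lambda(A, b, grid):
    """Choose the grid value minimizing the GCV score."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    scores = [gcv(l, U, s, b) for l in grid]
    return grid[int(np.argmin(scores))]
```

On a badly conditioned test matrix with noisy data, the GCV-chosen parameter gives a far smaller reconstruction error than a near-zero regularization parameter.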
International Nuclear Information System (INIS)
Wen, Jian-Wu; Zhang, Da-Wei; Zang, Yong; Sun, Xin; Cheng, Bin; Ding, Chu-Xiong; Yu, Yan; Chen, Chun-Hua
2014-01-01
Highlights: • A one-step sol-gel route with resorcinol-formaldehyde resin is designed to synthesize LiNi0.5Mn1.5O4. • The Fd-3m phase delivers an excellent high rate performance and stable cycling retention. • A double “w”-shape R-V curve is a potential tool to indicate structure transition. - Abstract: Spinel LiNi0.5Mn1.5O4 (Fd-3m) powders are synthesized by a facile one-step sol-gel approach with a resorcinol-formaldehyde (RF) resin as a chelating agent. The cross-linked metal-containing RF xerogel particles are sintered at different high temperatures from 750 to 950 °C to produce several micron-sized LiNi0.5Mn1.5O4 powders. Electrochemical measurements suggest that the 850 °C-sintered (in air) sample (Fd-3m phase) performs the best, with a discharge capacity of 141 mAh g−1 at 0.1 C and 110 mAh g−1 at 10 C, and capacity retention of 96.3% after 60 cycles at 0.25 C and 89% after 200 cycles at 1 C. For comparison, the LiNi0.5Mn1.5O4 sample sintered at 850 °C in O2 (P4332 phase) presents limited rate performance (45 mAh g−1 at 10 C) and higher values in both AC impedance and DC-method derived resistance. A characteristic double “w”-shape curve of DC resistance against cell potential can possibly be considered as an indicator to probe the material structure transition during the charge/discharge process of the cell.
Regularity and predictability of human mobility in personal space.
Directory of Open Access Journals (Sweden)
Daniel Austin
Full Text Available Fundamental laws governing human mobility have many important applications such as forecasting and controlling epidemics or optimizing transportation systems. These mobility patterns, studied in the context of out of home activity during travel or social interactions with observations recorded from cell phone use or diffusion of money, suggest that in extra-personal space humans follow a high degree of temporal and spatial regularity - most often in the form of time-independent universal scaling laws. Here we show that mobility patterns of older individuals in their home also show a high degree of predictability and regularity, although in a different way than has been reported for out-of-home mobility. Studying a data set of almost 15 million observations from 19 adults spanning up to 5 years of unobtrusive longitudinal home activity monitoring, we find that in-home mobility is not well represented by a universal scaling law, but that significant structure (predictability and regularity) is uncovered when explicitly accounting for contextual data in a model of in-home mobility. These results suggest that human mobility in personal space is highly stereotyped, and that monitoring discontinuities in routine room-level mobility patterns may provide an opportunity to predict individual human health and functional status or detect adverse events and trends.
Image deblurring using a perturbation-based regularization approach
Alanazi, Abdulrahman
2017-11-02
The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.
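The baseline problem the abstract builds on (before its perturbation refinement) is a regularized linear least-squares deblurring. The sketch below, for a 1-D signal with a hypothetical 3-tap blur kernel and Tikhonov regularization, is an illustrative stand-in; the paper's specific bounded-norm perturbation construction and bootstrapping-based parameter estimation are not reproduced here.

```python
import numpy as np

def blur_matrix(n, kernel):
    """Dense convolution (blur) matrix for a 1-D signal with zero boundary.
    An illustrative stand-in for the ill-conditioned model matrix."""
    k = len(kernel) // 2
    A = np.zeros((n, n))
    for i in range(n):
        for j, w in enumerate(kernel):
            col = i + j - k
            if 0 <= col < n:
                A[i, col] = w
    return A

def deblur(A, b, lam):
    """Regularized LS solution x = argmin ||Ax - b||² + lam²||x||²,
    via the normal equations (AᵀA + lam² I) x = Aᵀ b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
```

Even with noise, the regularized solve recovers the sharp signal more closely than the blurred observation itself.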
Image deblurring using a perturbation-based regularization approach
Alanazi, Abdulrahman; Ballal, Tarig; Masood, Mudassir; Al-Naffouri, Tareq Y.
2017-01-01
The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.
Hessian-regularized co-training for social activity recognition.
Liu, Weifeng; Li, Yang; Lin, Xu; Tao, Dacheng; Wang, Yanjiang
2014-01-01
Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-trainings have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms.
Hessian-regularized co-training for social activity recognition.
Directory of Open Access Journals (Sweden)
Weifeng Liu
Full Text Available Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-trainings have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms.
International Nuclear Information System (INIS)
Kaltenbacher, Barbara; Kirchner, Alana; Vexler, Boris
2011-01-01
Parameter identification problems for partial differential equations usually lead to nonlinear inverse problems. A typical property of such problems is their instability, which requires regularization techniques, like, e.g., Tikhonov regularization. The main focus of this paper will be on efficient methods for determining a suitable regularization parameter by using adaptive finite element discretizations based on goal-oriented error estimators. A well-established method for the determination of a regularization parameter is the discrepancy principle where the residual norm, considered as a function i of the regularization parameter, should equal an appropriate multiple of the noise level. We suggest solving the resulting scalar nonlinear equation by an inexact Newton method, where in each iteration step, a regularized problem is solved at a different discretization level. The proposed algorithm is an extension of the method suggested in Griesbaum A et al (2008 Inverse Problems 24 025025) for linear inverse problems, where goal-oriented error estimators for i and its derivative are used for adaptive refinement strategies in order to keep the discretization level as coarse as possible to save computational effort but fine enough to guarantee global convergence of the inexact Newton method. This concept leads to a highly efficient method for determining the Tikhonov regularization parameter for nonlinear ill-posed problems. Moreover, we prove that with the so-obtained regularization parameter and an also adaptively discretized Tikhonov minimizer, usual convergence and regularization results from the continuous setting can be recovered. As a matter of fact, it is shown that it suffices to use stationary points of the Tikhonov functional. The efficiency of the proposed method is demonstrated by means of numerical experiments. (paper)
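The scalar equation at the heart of the discrepancy principle — residual norm equal to a multiple τ of the noise level δ — can be sketched for a finite-dimensional linear Tikhonov problem. The paper couples this equation with adaptive finite element discretization and an inexact Newton method; plain bisection on log α is used below purely for clarity, and all parameter names are illustrative.

```python
import numpy as np

def residual_norm(alpha, U, s, beta):
    """||A x_alpha - b|| for Tikhonov filtering s²/(s²+alpha), via the SVD
    A = U diag(s) Vᵀ with beta = Uᵀ b (square system assumed)."""
    filt = s**2 / (s**2 + alpha)
    return np.sqrt(np.sum(((1.0 - filt) * beta) ** 2))

def discrepancy_alpha(A, b, delta, tau=1.1, lo=1e-24, hi=1e6):
    """Solve ||A x_alpha - b|| = tau * delta by bisection on log alpha;
    the residual norm is monotone increasing in alpha, so the root is unique."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    target = tau * delta
    for _ in range(200):
        mid = np.sqrt(lo * hi)               # geometric midpoint
        if residual_norm(mid, U, s, beta) < target:
            lo = mid                          # residual too small: increase alpha
        else:
            hi = mid
    return np.sqrt(lo * hi)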
Extended, regular HI structures around early-type galaxies
Oosterloo, T.; Morganti, R.; Sadler, E. M.; van der Hulst, J. M.; Serra, P.
Abstract: We discuss the morphology and kinematics of the HI of a sample of 30 southern gas-rich early-type galaxies selected from the HI Parkes All-Sky Survey (HIPASS). This is the largest collection of high-resolution HI data of a homogeneously selected sample. Given the sensitivity of HIPASS,
Near-Regular Structure Discovery Using Linear Programming
Huang, Qixing; Guibas, Leonidas J.; Mitra, Niloy J.
2014-01-01
as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both
Computational Abstraction Steps
DEFF Research Database (Denmark)
Thomsen, Lone Leth; Thomsen, Bent; Nørmark, Kurt
2010-01-01
and class instantiations. Our teaching experience shows that many novice programmers find it difficult to write programs with abstractions that materialise to concrete objects later in the development process. The contribution of this paper is the idea of initiating a programming process by creating...... or capturing concrete values, objects, or actions. As the next step, some of these are lifted to a higher level by computational means. In the object-oriented paradigm the target of such steps is classes. We hypothesise that the proposed approach primarily will be beneficial to novice programmers or during...... the exploratory phase of a program development process. In some specific niches it is also expected that our approach will benefit professional programmers....
Extreme values, regular variation and point processes
Resnick, Sidney I
1987-01-01
Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...
Stream Processing Using Grammars and Regular Expressions
DEFF Research Database (Denmark)
Rasmussen, Ulrik Terp
disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs...... as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present...... Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...
Describing chaotic attractors: Regular and perpetual points
Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz
2018-03-01
We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have been recently introduced to theoretical investigations, is thoroughly discussed and extended into new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points in finding attractors is indicated, along with its potential cause. The location of chaotic trajectories and sets of considered points is investigated, and a study of the stability of the systems is presented. A statistical analysis of observing the desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.
Chaos regularization of quantum tunneling rates
International Nuclear Information System (INIS)
Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward
2011-01-01
Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. Contrarily, shapes that lead to completely chaotic trajectories lead to tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency component of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of basic RKHSs. For sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we tradeoff the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
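The sum-space idea above can be sketched using the standard RKHS fact that the sum of two RKHSs has reproducing kernel K1 + K2, so a representer-form ridge solve against the summed Gram matrix combines a large-scale and a small-scale Gaussian kernel. This is a simplified stand-in for the paper's system of linear equations; the kernel widths, ridge weight, and function names are illustrative assumptions.

```python
import numpy as np

def gauss_kernel(X1, X2, sigma):
    """Gram matrix of the Gaussian kernel between two 1-D sample sets."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

def fit_sum_space(X, y, sigmas=(2.0, 0.2), lam=1e-3):
    """Least-squares regression regularized in the sum space of Gaussian RKHSs:
    K = sum_i K_i, coefficients alpha = (K + lam I)^{-1} y (representer form)."""
    K = sum(gauss_kernel(X, X, s) for s in sigmas)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict_sum_space(Xtr, alpha, Xte, sigmas=(2.0, 0.2)):
    K = sum(gauss_kernel(Xte, Xtr, s) for s in sigmas)
    return K @ alpha
```

On a nonflat target such as sin(x) + 0.3 sin(15x), the wide kernel captures the slow trend and the narrow kernel the fast oscillation, so the summed kernel fits where a single wide kernel cannot.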
Contour Propagation With Riemannian Elasticity Regularization
DEFF Research Database (Denmark)
Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.
2011-01-01
Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of the fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image...... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations, and volumetric changes, was used. Regularization parameters were defined...... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...
Thin accretion disk around regular black hole
Directory of Open Access Journals (Sweden)
QIU Tianqi
2014-08-01
Full Text Available Penrose's cosmic censorship conjecture says that naked singularities do not exist in nature. So, it seems reasonable to further conjecture that not even a singularity exists in nature. In this paper, a regular black hole without singularity is studied in detail, especially its thin accretion disk, energy flux, radiation temperature and accretion efficiency. It is found that the interaction of the regular black hole is stronger than that of the Schwarzschild black hole. Furthermore, the thin accretion disk loses energy more efficiently as the mass of the black hole decreases. These particular properties may be used to distinguish between black holes.
A short proof of increased parabolic regularity
Directory of Open Access Journals (Sweden)
Stephen Pankavich
2015-08-01
Full Text Available We present a short proof of the increased regularity obtained by solutions to uniformly parabolic partial differential equations. Though this setting is fairly introductory, our new method of proof, which uses a priori estimates and an inductive method, can be extended to prove analogous results for problems with time-dependent coefficients, advection-diffusion or reaction diffusion equations, and nonlinear PDEs even when other tools, such as semigroup methods or the use of explicit fundamental solutions, are unavailable.
Regular black hole in three dimensions
Myung, Yun Soo; Yoon, Myungseok
2008-01-01
We find a new black hole in three-dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare the thermodynamics of this black hole with that of the non-rotating BTZ black hole. The first law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.
Preconditioners for regularized saddle point matrices
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2011-01-01
Roč. 19, č. 2 (2011), s. 91-112 ISSN 1570-2820 Institutional research plan: CEZ:AV0Z30860518 Keywords: saddle point matrices * preconditioning * regularization * eigenvalue clustering Subject RIV: BA - General Mathematics Impact factor: 0.533, year: 2011 http://www.degruyter.com/view/j/jnma.2011.19.issue-2/jnum.2011.005/jnum.2011.005.xml
Analytic stochastic regularization: gauge and supersymmetry theories
International Nuclear Information System (INIS)
Abdalla, M.C.B.
1988-01-01
Analytic stochastic regularization for gauge and supersymmetric theories is considered. Gauge invariance in spinor and scalar QCD is verified to break down by an explicit one-loop computation of the two-, three- and four-point vertex functions of the gluon field. As a result, non-gauge-invariant counterterms must be added. However, in the supersymmetric multiplets there is a cancellation rendering the counterterms gauge invariant. The calculation is considered at one-loop order. (author) [pt
Regularized forecasting of chaotic dynamical systems
International Nuclear Information System (INIS)
Bollt, Erik M.
2017-01-01
While local models of dynamical systems have been highly successful at turning extensive observations of even a chaotic dynamical system into useful forecasts, a typical problem arises. With the k-nearest-neighbors (kNN) method, recurrences in a chaotic system supply local observations, and this allows local models to be built by regression to low-dimensional polynomial approximations of the underlying system, estimating a Taylor series. This has been a popular approach, particularly in the context of scalar data observations represented by time-delay embedding methods. However, such local models generally allow spatial discontinuities of forecasts when considered globally, meaning jumps in predictions, because the collected near neighbors vary from point to point. The source of these discontinuities is that the set of near neighbors varies discontinuously with respect to the position of the sample point, and so therefore does the model built from the near neighbors. It is possible to utilize local information inferred from near neighbors as usual while at the same time imposing a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. We then show how this perspective allows us to impose presumed prior regularity on the model through Tikhonov regularization theory, since this classic perspective on optimization in ill-posed problems naturally balances fitting an objective against some prior assumed form of the result, such as continuity or derivative regularity. This all reduces to matrix manipulations, which we demonstrate on a simple data set, with the implication that the method may find much broader application.
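The local-modeling step described above can be sketched as kNN regression with a Tikhonov (ridge) penalty on the local linear coefficients, which damps the coefficient jumps caused by discontinuous neighbor sets. The chaotic series, neighbor count and penalty weight below are illustrative choices, not the paper's construction:

```python
import numpy as np

def knn_tikhonov_forecast(history, query, k=10, lam=1e-2):
    """One-step forecast from the k nearest neighbors, with a ridge
    (Tikhonov) penalty on the local linear model's coefficients."""
    x = history[:-1]                     # states
    y = history[1:]                      # successor states
    idx = np.argsort(np.abs(x - query))[:k]
    A = np.column_stack([np.ones(k), x[idx]])
    # Regularized normal equations: (A^T A + lam I) c = A^T y
    c = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ y[idx])
    return c[0] + c[1] * query

# Chaotic test series from the logistic map.
r = 3.9
s = [0.4]
for _ in range(500):
    s.append(r * s[-1] * (1 - s[-1]))
s = np.array(s)
pred = knn_tikhonov_forecast(s[:-1], s[-2])
print(pred, s[-1])
```

The global step the abstract proposes would additionally couple these local solves into one regularized system; the sketch only shows the local ingredient.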
Regularity and chaos in cavity QED
International Nuclear Information System (INIS)
Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G
2017-01-01
The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified by calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (PR) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)
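The chaos quantifier used above, the largest Lyapunov exponent, can be illustrated on a far simpler system than the Dicke model; this sketch uses the logistic map as a stand-in (the choice of map and parameters is mine, not the paper's):

```python
import numpy as np

def largest_lyapunov(r, x0=0.3, n=100_000, burn=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the time average of log|f'(x)| along the orbit."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        acc += np.log(abs(r * (1 - 2 * x)))
    return acc / n

lam_chaotic = largest_lyapunov(4.0)   # fully chaotic regime
lam_regular = largest_lyapunov(3.2)   # stable period-2 regime
print(lam_chaotic, lam_regular)
```

A positive exponent (near ln 2 for r = 4) marks chaos; a negative one marks a regular, periodic regime, the same dichotomy the participation ratio tracks in the quantum study.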
Solution path for manifold regularized semisupervised classification.
Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H
2012-04-01
Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.
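The manifold regularization framework this work builds on has a standard closed-form instance (Laplacian-regularized least squares). The sketch below shows that general framework, not the paper's solution-path algorithm; the toy 1-D clusters, kernel width, graph threshold and hyperparameters are all illustrative:

```python
import numpy as np

def rbf(X, Y, s):
    return np.exp(-((X[:, None] - Y[None, :]) ** 2) / (2 * s * s))

def laprls(X, y, labeled, sigma=0.5, gA=1e-3, gI=1e-2, eps=0.6):
    """LapRLS-style closed form: solve (J K + gA I + gI L K) alpha = J y,
    where J masks the labeled points and L is the graph Laplacian of an
    epsilon-neighborhood graph over all (labeled + unlabeled) points."""
    n = len(X)
    K = rbf(X, X, sigma)
    W = (np.abs(X[:, None] - X[None, :]) < eps).astype(float)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W
    J = np.zeros((n, n))
    J[labeled, labeled] = 1.0
    ytil = np.zeros(n)
    ytil[labeled] = y[labeled]
    alpha = np.linalg.solve(J @ K + gA * np.eye(n) + gI * L @ K, ytil)
    return K @ alpha

# Two well-separated 1-D clusters, one label per cluster: the unlabeled
# points inherit their cluster's label through the Laplacian penalty.
X = np.concatenate([np.linspace(0, 1, 20), np.linspace(3, 4, 20)])
y = np.zeros(40)
y[0], y[20] = 1.0, -1.0
f = laprls(X, y, labeled=[0, 20])
print(f[:20].mean(), f[20:].mean())
```

The hyperparameter gA is exactly the quantity whose entire solution path the paper's algorithm traces.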
Regularizations: different recipes for identical situations
International Nuclear Information System (INIS)
Gambin, E.; Lobo, C.O.; Battistel, O.A.
2004-03-01
We present a discussion where the choice of the regularization procedure and the routing of the internal-line momenta are put at the same level of arbitrariness in the analysis of Ward identities involving simple and well-known problems in QFT. These are the complex self-interacting scalar field and two simple models where the SVV and AVV processes are pertinent. We show that, in all these problems, the conditions for the preservation of symmetry relations are put in terms of the same combination of divergent Feynman integrals, which are evaluated in the context of a very general calculational strategy concerning the manipulations and calculations involving divergences. Within the adopted strategy, all the arbitrariness intrinsic to the problem is still maintained in the final results and, consequently, a perfect map can be obtained onto the corresponding results of the traditional regularization techniques. We show that, when we require a universal interpretation of the arbitrariness involved, in order to achieve consistency with all stated physical constraints, a strong condition is imposed on regularizations which automatically eliminates the ambiguities associated with the routing of the internal-line momenta of loops. The conclusion is clean and sound: the association between ambiguities and unavoidable symmetry violations in Ward identities cannot be maintained if a unique recipe is required for identical situations in the evaluation of divergent physical amplitudes. (author)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
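The scalar version of the convexity-preserving idea described above is well illustrated by the minimax-concave (MC) penalty, whose proximal map is the firm threshold: the scalar cost 0.5*(y-x)^2 + lam*phi(x) stays convex as long as the non-convexity is capped (here, the second threshold mu must exceed lam). A sketch with illustrative threshold values:

```python
import numpy as np

def soft(y, lam):
    """Soft threshold: prox of the convex l1 penalty."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def firm(y, lam, mu):
    """Firm threshold: prox of the MC penalty with its non-convexity
    parameter restricted so the scalar objective remains convex
    (requires mu > lam)."""
    a = np.abs(y)
    out = np.where(a <= lam, 0.0,
          np.where(a <= mu, mu * (a - lam) / (mu - lam), a))
    return np.sign(y) * out

y = np.array([0.5, 1.5, 4.0])
print(soft(y, 1.0))        # the large entry is biased down to 3.0
print(firm(y, 1.0, 2.0))   # the large entry is left at 4.0, unbiased
```

Both maps kill small entries, but only the firm threshold leaves large non-zero values unbiased, which is precisely the under-estimation problem of the l1 norm that the thesis targets.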
Strength evaluation code STEP for brittle materials
International Nuclear Information System (INIS)
Ishihara, Masahiro; Futakawa, Masatoshi.
1997-12-01
In structural design using brittle materials such as graphite and ceramics, it is necessary to evaluate the strength of a component under complex stress conditions. The strength of ceramic materials is said to be influenced by the stress distribution. However, the structural design criteria had adopted simplified stress limits without taking account of how strength changes with the stress distribution. It is therefore important to evaluate component strength on the basis of a fracture model for brittle materials. Consequently, the strength evaluation program STEP, for brittle fracture of ceramic materials based on competing risk theory, has been developed. Two different brittle fracture modes, a surface-layer fracture mode dominated by surface flaws and an internal fracture mode dominated by internal flaws, are treated in the STEP code in order to evaluate the brittle fracture strength. The STEP code uses stress results, including those for structures of complex shape, computed by the generalized FEM stress analysis code ABAQUS, so that the brittle fracture strength of structures with complicated shapes can be evaluated. The code is therefore useful for evaluating the structural integrity of components of arbitrary shape, such as core graphite components in the HTTR, heat exchanger components made of ceramic materials, etc. This paper describes the basic equations used in the STEP code, the code system combining the STEP and ABAQUS codes, and the results of a verification analysis. (author)
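A competing-risk (weakest-link) strength model of the kind described, with one surface-flaw and one internal-flaw Weibull mode, can be sketched as below. The Weibull moduli and scale stresses are made-up illustrative values, not STEP's actual formulation or material data:

```python
import numpy as np

def failure_probability(sigma, m_s, s0_s, m_v, s0_v):
    """Competing-risk failure probability for two Weibull fracture modes:
    the component survives only if both the surface-flaw mode and the
    internal-flaw mode survive, so the survival probabilities multiply."""
    surv_surface = np.exp(-(sigma / s0_s) ** m_s)   # surface-layer mode
    surv_internal = np.exp(-(sigma / s0_v) ** m_v)  # internal-flaw mode
    return 1.0 - surv_surface * surv_internal

# Hypothetical moduli (10, 8) and scale stresses (150, 200 MPa).
pf = failure_probability(100.0, 10.0, 150.0, 8.0, 200.0)
print(pf)
```

In the real code the two exponents are integrals of the stress field over the surface and volume of the FEM model, which is how the stress distribution enters the strength evaluation.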
Sparsity regularization for parameter identification problems
International Nuclear Information System (INIS)
Jin, Bangti; Maass, Peter
2012-01-01
The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓp-penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓp sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
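The basic iterated soft shrinkage approach mentioned above is, for a linear forward operator, only a few lines of code; the random sensing matrix and sparse test signal are illustrative:

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterated soft shrinkage (ISTA) for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L    # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120)) / np.sqrt(60)   # underdetermined operator
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [3.0, -2.0, 4.0]             # sparse true solution
y = A @ x_true
x_hat = ista(A, y, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.5))
```

For nonlinear parameter-to-state mappings the gradient step is replaced by a linearization of the forward operator, which is where the generalized iteration schemes of the review come in.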
REGULAR METHOD FOR SYNTHESIS OF BASIC BENT-SQUARES OF RANDOM ORDER
Directory of Open Access Journals (Sweden)
A. V. Sokolov
2016-01-01
Full Text Available The paper is devoted to the construction of the class of maximally nonlinear Boolean bent functions of any length N = 2^k (k = 2, 4, 6, …) on the basis of their spectral representation, the Agievich bent squares. These perfect algebraic constructions are used as a basis for many new cryptographic primitives, such as generators of pseudo-random key sequences, cryptographic S-boxes, etc. Bent functions also find application in the construction of C-codes in systems with code division multiple access (CDMA) to provide the lowest possible value of the Peak-to-Average Power Ratio (PAPR) k = 1, as well as in the construction of error-correcting codes and systems of orthogonal biphase signals. All the numerous applications of bent functions rely on the theory of their synthesis. However, regular methods for the complete synthesis of the class of bent functions of any length N = 2^k are currently unknown. The paper proposes a regular synthesis method for the basic Agievich bent squares of any order n, based on a regular dyadic shift operator. A classification of the complete set of spectral vectors of lengths l = 8, 16, …, based on the criterion of the maximum absolute value and the set of absolute values of spectral components, has been carried out. It has been shown that any spectral vector can serve as a basis for building bent squares. Results of the synthesis of the Agievich bent squares of order n = 8 have been generalized, revealing that there are only 3 basic bent squares for this order, while the other 5 can be obtained with the help of the step-cyclic shift operation. All the basic bent squares of order n = 16 have been synthesized, which allows the construction of bent functions of length N = 256. The obtained basic bent squares can be used either for direct synthesis of bent functions and their practical application or for further research aimed at synthesizing new structures of bent squares of orders n = 16, 32, 64, …
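The defining "maximally nonlinear" property behind bent squares can be checked numerically: a bent function of k variables has Walsh-Hadamard coefficients of constant magnitude 2^(k/2). A sketch using the classic 4-variable bent function x1x2 XOR x3x4 (the paper's synthesis method itself is not reproduced here):

```python
import numpy as np

def walsh_hadamard(signs):
    """In-place fast Walsh-Hadamard transform of a +/-1 sequence."""
    w = signs.astype(float).copy()
    n, h = len(w), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = w[i:i + h].copy()
            b = w[i + h:i + 2 * h].copy()
            w[i:i + h] = a + b
            w[i + h:i + 2 * h] = a - b
        h *= 2
    return w

# Truth table of f(x1..x4) = x1*x2 XOR x3*x4, a classic bent function.
tt = np.array([((x >> 3 & 1) & (x >> 2 & 1)) ^ ((x >> 1 & 1) & (x & 1))
               for x in range(16)])
spectrum = walsh_hadamard(1 - 2 * tt)   # transform of (-1)^f
print(np.unique(np.abs(spectrum)))      # flat spectrum: |W| = 2^(4/2) = 4
```

This flat-magnitude spectrum is exactly what the bent-square representation organizes for higher orders.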
Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng
2014-06-01
The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance to the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
Crowdsourcing step-by-step information extraction to enhance existing how-to videos
Nguyen, Phu Tran; Weir, Sarah; Guo, Philip J.; Miller, Robert C.; Gajos, Krzysztof Z.; Kim, Ju Ho
2014-01-01
Millions of learners today use how-to videos to master new skills in a variety of domains. But browsing such videos is often tedious and inefficient because video player interfaces are not optimized for the unique step-by-step structure of such videos. This research aims to improve the learning experience of existing how-to videos with step-by-step annotations. We first performed a formative study to verify that annotations are actually useful to learners. We created ToolScape, an interac...
Learning Sparse Visual Representations with Leaky Capped Norm Regularizers
Wangni, Jianqiao; Lin, Dahua
2017-01-01
Sparsity-inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper, we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly as opposed to those above, therefore imposes strong sparsity and...
Temporal regularity of the environment drives time perception
van Rijn, H; Rhodes, D; Di Luca, M
2016-01-01
It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...
Directory of Open Access Journals (Sweden)
Emily Lyle
2012-03-01
Full Text Available Indo-European mythology is known only through written records but it needs to be understood in terms of the preliterate oral-cultural context in which it was rooted. It is proposed that this world was conceptually organized through a memory-capsule consisting of the current generation and the three before it, and that there was a system of alternate generations with each generation taking a step into the future under the leadership of a white or red king.
SYSTEMATIZATION OF THE BASIC STEPS OF THE STEP-AEROBICS
Directory of Open Access Journals (Sweden)
Darinka Korovljev
2011-03-01
Full Text Available Following the development of the powerful sport industry, a lot of new opportunities have appeared for creating new programmes of exercising with certain requisites. One such programme is certainly step-aerobics. Step-aerobics can be defined as a type of aerobics consisting of the basic aerobic steps (basic steps) applied in exercising on a stepper (step bench), with a possibility to regulate its height. Step-aerobics itself can be divided into several groups, depending on the following: type of music, working methods and adopted knowledge of the attendants. In this work, a systematization of the basic steps in step-aerobics was made on the basis of the following criteria: step origin, number of leg motions in stepping, and body support at the end of the step. The systematization of the basic steps of step-aerobics is quite significant for giving a concrete review of the existing basic steps, thus making the creation of a step-aerobics lesson easier.
Directory of Open Access Journals (Sweden)
Dustin Kai Yan Lau
2014-03-01
Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, the phonetic radical of many characters has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions, resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular, 60 irregular, and 60 pseudo-characters (with at least 75% name agreement) in Chinese were matched by initial phoneme, number of strokes and family size. Additionally, regular and irregular characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimulus presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character as repeated measures (F1 or between subject
Two-step variable selection in quantile regression models
Directory of Open Access Journals (Sweden)
FAN Yali
2015-06-01
Full Text Available We propose a two-step variable selection procedure for high dimensional quantile regressions, in which the dimension of the covariates, pn is much larger than the sample size n. In the first step, we perform ℓ1 penalty, and we demonstrate that the first step penalized estimator with the LASSO penalty can reduce the model from an ultra-high dimensional to a model whose size has the same order as that of the true model, and the selected model can cover the true model. The second step excludes the remained irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys the model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
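The two-step idea can be sketched with a squared loss standing in for the quantile loss (a simplification; the paper's procedure uses quantile regression): a plain LASSO screens the ultra-high-dimensional model, then an adaptive LASSO with weights 1/|first-step coefficient| cleans the reduced model. All data and tuning values are illustrative:

```python
import numpy as np

def ista_weighted(A, y, weights, lam, n_iter=400):
    """Proximal gradient for min 0.5*||Ax - y||^2 + lam * sum_j w_j |x_j|."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * weights / L, 0.0)
    return x

rng = np.random.default_rng(2)
n, p = 80, 200                          # p much larger than n
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[[3, 50, 120]] = [2.0, -3.0, 2.5]
y = A @ x_true + 0.05 * rng.standard_normal(n)

# Step 1: plain LASSO reduces the model to a manageable candidate set.
x1 = ista_weighted(A, y, np.ones(p), lam=0.1)
keep = np.flatnonzero(np.abs(x1) > 1e-3)

# Step 2: adaptive LASSO on the reduced model; small first-step
# coefficients get large weights and are excluded.
w2 = 1.0 / np.abs(x1[keep])
x2 = np.zeros(p)
x2[keep] = ista_weighted(A[:, keep], y, w2, lam=0.05)
print(np.flatnonzero(np.abs(x2) > 0.5))
```

The adaptive weights are what give the second step its selection consistency, mirroring the role they play in the paper's quantile setting.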
Properties of regular polygons of coupled microring resonators.
Chremmos, Ioannis; Uzunoglu, Nikolaos
2007-11-01
The resonant properties of a closed and symmetric cyclic array of N coupled microring resonators (coupled-microring resonator regular N-gon) are for the first time determined analytically by applying the transfer matrix approach and Floquet theorem for periodic propagation in cylindrically symmetric structures. By solving the corresponding eigenvalue problem with the field amplitudes in the rings as eigenvectors, it is shown that, for even or odd N, this photonic molecule possesses 1 + N/2 or 1+N resonant frequencies, respectively. The condition for resonances is found to be identical to the familiar dispersion equation of the infinite coupled-microring resonator waveguide with a discrete wave vector. This result reveals the so far latent connection between the two optical structures and is based on the fact that, for a regular polygon, the field transfer matrix over two successive rings is independent of the polygon vertex angle. The properties of the resonant modes are discussed in detail using the illustration of Brillouin band diagrams. Finally, the practical application of a channel-dropping filter based on polygons with an even number of rings is also analyzed.
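The connection to the infinite coupled-resonator waveguide with a discrete wave vector can be seen in a simplified tight-binding analogue (not the paper's transfer-matrix analysis): the circulant coupling matrix of a closed N-ring array has eigenvalues sampling the cosine dispersion curve, and for even N the degeneracies leave 1 + N/2 distinct values, matching the even-N mode count above:

```python
import numpy as np

N, kappa = 8, 0.1
# Circulant nearest-neighbor coupling matrix of a closed N-ring array.
C = np.zeros((N, N))
for i in range(N):
    C[i, (i + 1) % N] = C[i, (i - 1) % N] = kappa
eig = np.sort(np.linalg.eigvalsh(C))
# Periodicity (Floquet) gives eigenvalues 2*kappa*cos(2*pi*m/N): a
# discrete sampling of the infinite waveguide's dispersion curve.
expected = np.sort(2 * kappa * np.cos(2 * np.pi * np.arange(N) / N))
print(np.allclose(eig, expected), len(np.unique(np.round(eig, 6))))
```

The degeneracy between modes m and N - m is the lattice analogue of the clockwise/counterclockwise symmetry of the polygon.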
Regularized inversion of controlled source and earthquake data
International Nuclear Information System (INIS)
Ramachandran, Kumar
2012-01-01
Estimation of the seismic velocity structure of the Earth's crust and upper mantle from travel-time data has advanced greatly in recent years. Forward modelling trial-and-error methods have been superseded by tomographic methods which allow more objective analysis of large two-dimensional and three-dimensional refraction and/or reflection data sets. The fundamental purpose of travel-time tomography is to determine the velocity structure of a medium by analysing the time it takes for a wave generated at a source point within the medium to arrive at a distribution of receiver points. Tomographic inversion of first-arrival travel-time data is a nonlinear problem since both the velocity of the medium and ray paths in the medium are unknown. The solution for such a problem is typically obtained by repeated application of linearized inversion. Regularization of the nonlinear problem reduces the ill posedness inherent in the tomographic inversion due to the under-determined nature of the problem and the inconsistencies in the observed data. This paper discusses the theory of regularized inversion for joint inversion of controlled source and earthquake data, and results from synthetic data testing and application to real data. The results obtained from tomographic inversion of synthetic data and real data from the northern Cascadia subduction zone show that the velocity model and hypocentral parameters can be efficiently estimated using this approach. (paper)
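One linearized, Tikhonov-regularized travel-time step amounts to a damped least-squares solve. The toy straight-ray geometry and first-difference roughening operator below are illustrative, not the paper's parameterization:

```python
import numpy as np

def regularized_inversion(G, d, lam):
    """One linearized step: min ||G m - d||^2 + lam^2 ||D m||^2,
    with D a first-difference operator promoting smooth models."""
    n = G.shape[1]
    D = np.diff(np.eye(n), axis=0)      # roughening (regularization) matrix
    A = np.vstack([G, lam * D])         # stacked damped least-squares system
    b = np.concatenate([d, np.zeros(n - 1)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# Toy straight-ray tomography: each row of G holds ray lengths through
# 5 model cells; d holds the corresponding travel times plus noise.
rng = np.random.default_rng(3)
m_true = np.array([1.0, 1.0, 1.2, 1.2, 1.1])    # cell slownesses
G = rng.uniform(0, 1, (12, 5))
d = G @ m_true + 0.01 * rng.standard_normal(12)
m_hat = regularized_inversion(G, d, lam=0.1)
print(np.round(m_hat, 2))
```

In the real problem the rays bend through the velocity model, so G is rebuilt by ray tracing and the solve is repeated; the regularization term is what keeps each linearized step stable despite the under-determined, inconsistent data.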
Brouilly, Nicolas; Lecroisey, Claire; Martin, Edwige; Pierson, Laura; Mariol, Marie-Christine; Qadota, Hiroshi; Labouesse, Michel; Streichenberger, Nathalie; Mounier, Nicole; Gieseler, Kathrin
2015-11-15
Duchenne muscular dystrophy (DMD) is a genetic disease characterized by progressive muscle degeneration due to mutations in the dystrophin gene. In spite of great advances in the design of curative treatments, most patients currently receive palliative therapies with steroid molecules such as prednisone or deflazacort, thought to act through their immunosuppressive properties. These molecules only slightly slow down the progression of the disease and lead to severe side effects. Fundamental research is still needed to reveal the mechanisms involved in the disease that could be exploited as therapeutic targets. By studying a Caenorhabditis elegans model for DMD, we show here that dystrophin-dependent muscle degeneration is likely to be cell autonomous and affects the muscle cells most involved in locomotion. We demonstrate that muscle degeneration is dependent on exercise and force production. Exhaustive studies by electron microscopy allowed establishing for the first time the chronology of subcellular events occurring during the entire process of muscle degeneration. This chronology highlighted the crucial role of dystrophin in stabilizing sarcomeric anchoring structures and the sarcolemma. Our results suggest that the disruption of sarcomeric anchoring structures and sarcolemma integrity, observed at the onset of the muscle degeneration process, triggers subcellular consequences that lead to muscle cell death. An ultra-structural analysis of muscle biopsies from DMD patients suggested that the chronology of subcellular events established in C. elegans models the pathogenesis in humans. Finally, we found that the loss of sarcolemma integrity was greatly reduced after prednisone treatment, suggesting a role for this molecule in plasma membrane stabilization.
Convergence and fluctuations of Regularized Tyler estimators
Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim
2015-01-01
This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and, second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem raised by the use of RTEs in practice is the setting of the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results concerning the regime of n going to infinity with N fixed exist, even though the investigation of this assumption has usually predated the analysis of the most difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the regularization parameter.
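The RTE fixed-point iteration itself is short; the trace normalization and the toy covariance below are common but illustrative choices, not the paper's exact setup:

```python
import numpy as np

def regularized_tyler(X, p, n_iter=100):
    """Fixed-point iteration for the regularized Tyler estimator:
        S <- (1 - p) * (N/n) * sum_i x_i x_i^T / (x_i^T S^{-1} x_i) + p * I,
    renormalized to trace N at each step. The p*I term keeps every
    eigenvalue bounded away from zero (good conditioning)."""
    n, N = X.shape
    S = np.eye(N)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        q = np.einsum('ij,jk,ik->i', X, Sinv, X)     # x_i^T S^{-1} x_i
        S = (1 - p) * (N / n) * (X / q[:, None]).T @ X + p * np.eye(N)
        S = N * S / np.trace(S)
    return S

rng = np.random.default_rng(4)
N, n = 5, 200
C = np.diag([4.0, 2.0, 1.0, 0.5, 0.25])             # population scatter
X = rng.standard_normal((n, N)) @ np.linalg.cholesky(C).T
S = regularized_tyler(X, p=0.3)
print(np.linalg.eigvalsh(S))
```

Increasing p pushes the eigenvalues of S toward each other, which is the conditioning/bias trade-off the article analyzes.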
The use of regularization in inferential measurements
International Nuclear Information System (INIS)
Hines, J. Wesley; Gribok, Andrei V.; Attieh, Ibrahim; Uhrig, Robert E.
1999-01-01
Inferential sensing is the prediction of a plant variable through the use of correlated plant variables. A correct prediction of the variable can be used to monitor sensors for drift or other failures, making periodic instrument calibrations unnecessary. This move from periodic to condition-based maintenance can reduce costs and increase the reliability of the instrument. Having accurate, reliable measurements is important for signals that may impact safety or profitability. This paper investigates how collinearity adversely affects inferential sensing by making the results inconsistent and unrepeatable, and presents regularization as a potential solution.
Regularization ambiguities in loop quantum gravity
International Nuclear Information System (INIS)
Perez, Alejandro
2006-01-01
One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation, which is free of ultraviolet divergences. However, ambiguities associated with the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem (the existence of well-behaved regularizations of the constraints) is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated with the SU(2) unitary representation used in the diffeomorphism-covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and is here referred to as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory and conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity, exhibiting the existence of spurious solutions for higher-representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions, due to the difficulties associated with the definition of the physical inner product, it provides evidence supporting the definition of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibility. If the gauge group is SO(3) we find
Effort variation regularization in sound field reproduction
DEFF Research Database (Denmark)
Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis
2010-01-01
In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths......), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, improving thus the reproduction accuracy...
New regularities in mass spectra of hadrons
International Nuclear Information System (INIS)
Kajdalov, A.B.
1989-01-01
The properties of bosonic and baryonic Regge trajectories for hadrons composed of light quarks are considered. Experimental data agree with the existence of daughter trajectories consistent with string models. It is pointed out that the parity doubling for baryonic trajectories, observed experimentally, is not understood within existing quark models. The mass spectrum of bosons and baryons indicates an approximate supersymmetry in the mass region M > 1 GeV. These regularities point to a high degree of symmetry in the dynamics of the confinement region. 8 refs.; 5 figs
Total-variation regularization with bound constraints
International Nuclear Information System (INIS)
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
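A toy illustration of the splitting idea, a TV smoothing step followed by projection onto the bounds, might look like the following. This is a plain projected-gradient sketch on a smoothed-TV denoising objective, not the authors' solver; the smoothing parameter `eps`, step size, and weight `lam` are illustrative assumptions:

```python
import numpy as np

def tv_denoise_bounded(f, lam=0.25, lo=0.0, hi=1.0, step=0.05,
                       n_iter=400, eps=1e-2):
    """Bound-constrained TV denoising sketch: a gradient step on
    0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps),
    then projection (clipping) onto [lo, hi]. The projection is what
    decouples the bound constraints from the TV minimization.
    """
    u = f.copy()
    for _ in range(n_iter):
        # forward differences with replicated boundary
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # (approximate) negative divergence of the dual field (px, py)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        grad = (u - f) - lam * div
        u = np.clip(u - step * grad, lo, hi)   # projection enforces the bounds
    return u
```

Because the bound constraint is handled purely by the projection step, any TV minimization scheme could be substituted for the gradient step, which is the flexibility the abstract emphasizes.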
Bayesian regularization of diffusion tensor images
DEFF Research Database (Denmark)
Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif
2007-01-01
Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along...... several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...
Indefinite metric and regularization of electrodynamics
International Nuclear Information System (INIS)
Gaudin, M.
1984-06-01
The invariant regularization of Pauli and Villars in quantum electrodynamics can be considered as deriving from a local and causal Lagrangian theory for spin-1/2 bosons, by introducing an indefinite metric and a condition on the allowed states similar to the Lorentz condition. The consequence is the asymptotic freedom of the photon propagator. We present a calculation of the effective charge to fourth order in the coupling as a function of the auxiliary masses, the theory avoiding all mass divergences to this order.
Strategies for regular segmented reductions on GPU
DEFF Research Database (Denmark)
Larsen, Rasmus Wriedt; Henriksen, Troels
2017-01-01
We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...
International Nuclear Information System (INIS)
Galyean, W.J.; Whaley, A.M.; Kelly, D.L.; Boring, R.L.
2011-01-01
This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
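Steps 3 and 5 above reduce to a short calculation. The sketch below assumes the commonly cited SPAR-H forms: a nominal HEP multiplied by the composite of the PSF multipliers, with an adjustment when three or more PSFs are negative so the result remains a probability. The worksheets in NUREG/CR-6883 remain the authoritative source for the multiplier values:

```python
def spar_h_hep(nhep, psf_multipliers):
    """Sketch of the SPAR-H base-HEP calculation (Step-3 and Step-5 above).

    nhep            : nominal HEP (e.g. 1e-2 for diagnosis, 1e-3 for action;
                      values here are illustrative, see NUREG/CR-6883)
    psf_multipliers : the eight performance-shaping-factor multipliers
    Assumed adjustment when 3+ PSFs are negative (multiplier > 1):
        HEP = nhep * PSFc / (nhep * (PSFc - 1) + 1)
    """
    psfc = 1.0
    for m in psf_multipliers:
        psfc *= m
    negative = sum(1 for m in psf_multipliers if m > 1)
    if negative >= 3:
        hep = nhep * psfc / (nhep * (psfc - 1.0) + 1.0)
    else:
        hep = nhep * psfc
    return min(hep, 1.0)   # Step-5 style cap: HEP is a probability
```

Dependence adjustments (Step-4) would then be applied to this base HEP and are not shown here.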
[First steps in neuronavigation].
Castilla, J M; Martín, V; Fernández-Arconada, O; Delgado, P; Rodríguez-Salazar, A
2003-10-01
We try to evaluate the introduction of a neuronavigation system widely used in a neurosurgical department. We analyze the surgical procedures performed since the introduction of a neuronavigator in our hospital, the advantages, and the problems related to its use. From 21/12/00 to 31/12/01, 64 cranial and 5 spinal procedures were performed in our centre with the aid of the BrainLAB neuronavigation system. They were 19.37% of the elective surgeries: 45.7% of cranial and 2.8% of spinal procedures. The accuracy of registration was 1.6 mm; the number of trials for registration was 2.8 on average, although in 3 cases registration was not possible; there were misalignments during 9 surgical procedures (two of them after the lesions were reached). Magnetic resonance imaging (MRI) was used in 54 instances, computerized tomography (CT) in 5, fluoroscopy (Rx) in 1, CT plus MRI in 8, and CT plus Rx in 1. Since the Z-Touch localization system and software became available, it was used exclusively, disregarding the use of external fiducials. In our experience, neuronavigation needs extra time, but it helps in the selection of the best position for the surgical approach, reduces the time required for scalp incision and craniotomy planning, and is useful for the opening of the dura and the corticectomy. As the operation proceeds, we found it less trustworthy and necessary. The Z-Touch system frees the imaging from the surgery. Its use in spinal operations is scarce and with limited results in our practice. We found neuronavigation useful, and we employ it on a regular basis in every cranial procedure whenever possible.
Emotion regulation deficits in regular marijuana users.
Zimmermann, Kaeli; Walz, Christina; Derckx, Raissa T; Kendrick, Keith M; Weber, Bernd; Dore, Bruce; Ochsner, Kevin N; Hurlemann, René; Becker, Benjamin
2017-08-01
Effective regulation of negative affective states has been associated with mental health. Impaired regulation of negative affect represents a risk factor for dysfunctional coping mechanisms such as drug use and thus could contribute to the initiation and development of problematic substance use. This study investigated behavioral and neural indices of emotion regulation in regular marijuana users (n = 23) and demographically matched nonusing controls (n = 20) by means of an fMRI cognitive emotion regulation (reappraisal) paradigm. Relative to nonusing controls, marijuana users demonstrated increased neural activity in a bilateral frontal network comprising precentral, middle cingulate, and supplementary motor regions during reappraisal of negative affect (P marijuana users relative to controls. Together, the present findings could reflect an unsuccessful attempt of compensatory recruitment of additional neural resources in the context of disrupted amygdala-prefrontal interaction during volitional emotion regulation in marijuana users. As such, impaired volitional regulation of negative affect might represent a consequence of, or risk factor for, regular marijuana use. Hum Brain Mapp 38:4270-4279, 2017. © 2017 Wiley Periodicals, Inc.
Efficient multidimensional regularization for Volterra series estimation
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need of long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in case of long measurements or large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
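The linear building block of this approach, regularized FIR (first-order Volterra kernel) estimation with a decaying smoothness prior, can be sketched as below. The TC-style kernel P_ij = c·λ^max(i,j) and all hyperparameter values are illustrative assumptions, not the paper's tuned choices:

```python
import numpy as np

def regularized_fir(u, y, n_taps, sigma2=0.01, c=1.0, lam=0.9):
    """Regularized FIR (first-order Volterra kernel) estimate:
        theta = (Phi^T Phi + sigma2 * P^{-1})^{-1} Phi^T y
    with an assumed TC ("tuned/correlated") prior P_ij = c * lam**max(i, j)
    that encodes an exponentially decaying, smooth impulse response.
    """
    n = len(y)
    # regressor matrix of delayed inputs: Phi[t, k] = u[t - k]
    Phi = np.zeros((n, n_taps))
    for k in range(n_taps):
        Phi[k:, k] = u[:n - k]
    i = np.arange(n_taps)
    P = c * lam ** np.maximum.outer(i, i)
    theta = np.linalg.solve(Phi.T @ Phi + sigma2 * np.linalg.inv(P),
                            Phi.T @ y)
    return theta
```

Higher-order Volterra kernels extend this by adding product-of-input regressor columns and multidimensional versions of the prior `P`, which is where the memory and gradient-based estimation issues discussed in the abstract arise.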
Supporting Regularized Logistic Regression Privately and Efficiently
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has nevertheless not been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as in the form of research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738
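For reference, the underlying statistical model (without any of the privacy machinery above) is ordinary L2-regularized logistic regression; a minimal gradient-descent fit might look like this:

```python
import numpy as np

def fit_logistic_l2(X, y, lam=0.1, lr=0.1, n_iter=500):
    """Plain (non-private) L2-regularized logistic regression by gradient
    descent -- the model the privacy scheme above is designed to protect.
    Minimizes mean log-loss + (lam/2)*||w||^2; no intercept for brevity.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        grad = X.T @ (p - y) / n + lam * w      # log-loss gradient + ridge
        w -= lr * grad
    return w
```

In the multi-institution setting described above, the per-site gradient contributions `X.T @ (p - y)` are exactly the quantities that must be aggregated without revealing individual-level data.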
Accelerating Large Data Analysis By Exploiting Regularities
Moran, Patrick J.; Ellsworth, David
2003-01-01
We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
Multiview Hessian regularization for image annotation.
Liu, Weifeng; Tao, Dacheng
2013-07-01
The rapid development of computer hardware and Internet technology makes large scale data dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) therefore received intensive attention in recent years and was successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smoothes the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, it has been observed that LR biases the classification function toward a constant function, which can result in poor generalization. In addition, LR is developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR, each of which is obtained from a particular view of instances, and steers the classification function toward one that varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.
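The LR baseline discussed above has a simple closed form. A sketch of graph-Laplacian regularized label propagation follows; this illustrates LR only (not the paper's mHR), and the toy graph in the usage is an assumption for illustration:

```python
import numpy as np

def laplacian_rls(W, y, labeled, lam=1.0):
    """Graph-Laplacian regularized least squares (the LR idea above).
    Solves  min_f  sum_{i in labeled} (f_i - y_i)^2 + lam * f^T L f,
    whose closed form is  (J + lam*L) f = J y,  J = diag(labeled mask).

    W       : (n, n) symmetric nonnegative affinity matrix
    y       : (n,) target values (arbitrary entries where unlabeled)
    labeled : (n,) boolean mask of labeled nodes
    """
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
    J = np.diag(labeled.astype(float))
    f = np.linalg.solve(J + lam * L, J @ y)
    return f
```

The term `f^T L f` is exactly the smoothness penalty that, as noted above, biases `f` toward a constant on each connected component; Hessian regularization replaces `L` with a Hessian-based operator whose null space contains linear functions.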
EIT image reconstruction with four dimensional regularization.
Dai, Tao; Soleimani, Manuchehr; Adler, Andy
2008-09-01
Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although data are actually highly correlated especially in high speed EIT systems. This paper proposes a 4-D EIT image reconstruction for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance, in comparison to simpler image models.
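The augmented formulation can be illustrated with a generic linear-Gaussian stand-in: stack the frames, add a spatial ridge term and a temporal-difference penalty, and solve one normal-equation system. This is only a sketch of the idea with a linearized, fixed Jacobian and generic priors, not the paper's EIT-specific spatial model; `gamma` plays the role of the temporal factor:

```python
import numpy as np

def temporal_tikhonov(J, Y, lam=0.1, gamma=0.5):
    """Jointly reconstruct T frames from Y (m x T, one column per frame)
    with a linear forward model J (m x n):
      min_X sum_t ||J x_t - y_t||^2
            + lam * (sum_t ||x_t||^2 + gamma * sum_t ||x_t - x_{t-1}||^2)
    Solved as one augmented normal-equation system over all frames.
    """
    m, n = J.shape
    T = Y.shape[1]
    D = np.diff(np.eye(T), axis=0)          # (T-1, T) frame-difference operator
    M = (np.kron(np.eye(T), J.T @ J)        # block-diagonal data term
         + lam * np.eye(T * n)              # spatial ridge term
         + lam * gamma * np.kron(D.T @ D, np.eye(n)))  # temporal coupling
    rhs = (J.T @ Y).T.reshape(-1)           # stack J^T y_t frame by frame
    X = np.linalg.solve(M, rhs).reshape(T, n).T
    return X                                 # (n, T): one column per frame
```

Increasing `gamma` corresponds to assuming stronger inter-frame correlation, the quantity the paper estimates objectively from measurement data.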
Steps towards an evolutionary physics
Tiezzi, E
2006-01-01
If thermodynamics is to physics as logic is to philosophy, recent theoretical advancements lend new coherence to the marvel and dynamism of life on Earth. Enzo Tiezzi's "Steps Towards an Evolutionary Physics" is a primer and guide to those who would stand on the shoulders of giants to attain this view: Heisenberg, Planck, Bateson, Varela, and Prigogine, as well as notable contemporary scientists. The adventure of such a free and enquiring spirit thrives not so much on answers as on new questions. The book offers a new gestalt on the uncertainty principle and the concept of probability. A wide range of examples, enigmas, and paradoxes leads one's imagination on an exquisite dance. Among the applications are: songs and shapes of nature, oscillatory reactions, orientors, goal functions and configurations of processes, and "dissipative structures and the city". Ecodynamics is a new science which proposes a cross-fertilization between Charles Darwin and Ilya Prigogine. As an enigma in thermodynamics, Entropy forms ...
Directory of Open Access Journals (Sweden)
María C. Nevárez-Martínez
2017-04-01
V2O5-TiO2 mixed oxide nanotube (NT) layers were successfully prepared via the one-step anodization of Ti-V alloys. The obtained samples were characterized by scanning electron microscopy (SEM), UV-Vis absorption, photoluminescence spectroscopy, energy-dispersive X-ray spectroscopy (EDX), X-ray diffraction (DRX), and micro-Raman spectroscopy. The effect of the applied voltage (30–50 V), vanadium content (5–15 wt %) in the alloy, and water content (2–10 vol %) in an ethylene glycol-based electrolyte was studied systematically to determine their influence on the morphology, and for the first time, on the photocatalytic properties of these nanomaterials. The morphology of the samples varied from sponge-like to highly-organized nanotubular structures. The vanadium content in the alloy was found to have the highest influence on the morphology, and the sample with the lowest vanadium content (5 wt %) exhibited the best auto-alignment and self-organization (length = 1 μm, diameter = 86 nm, and wall thickness = 11 nm). Additionally, a probable growth mechanism of V2O5-TiO2 nanotubes (NTs) over the Ti-V alloys was presented. Toluene, in the gas phase, was effectively removed through photodegradation under visible light (LEDs, λmax = 465 nm) in the presence of the modified TiO2 nanostructures. The highest degradation value was 35% after 60 min of irradiation. V2O5 species were identified as the main structures responsible for the generation of photoactive e− and h+ under Vis light, and a possible excitation mechanism was proposed.
Manifold regularized discriminative nonnegative matrix factorization with fast gradient descent.
Guan, Naiyang; Tao, Dacheng; Luo, Zhigang; Yuan, Bo
2011-07-01
Nonnegative matrix factorization (NMF) has become a popular data-representation method and has been widely used in image processing and pattern-recognition problems. This is because the learned bases can be interpreted as a natural parts-based representation of data, and this interpretation is consistent with the psychological intuition of combining parts to form a whole. For practical classification tasks, however, NMF ignores both the local geometry of data and the discriminative information of different classes. In addition, existing research results show that the learned basis is not necessarily parts-based, because there is neither an explicit nor an implicit constraint to ensure that the representation is parts-based. In this paper, we introduce manifold regularization and margin maximization to NMF and obtain the manifold regularized discriminative NMF (MD-NMF) to overcome the aforementioned problems. The multiplicative update rule (MUR) can be applied to optimizing MD-NMF, but it converges slowly. We therefore propose a fast gradient descent (FGD) method to optimize MD-NMF. FGD contains a Newton method that searches for the optimal step length, and thus FGD converges much faster than MUR. In addition, FGD includes MUR as a special case and can be applied to optimizing NMF and its variants. For a problem with 165 samples in R^1600, FGD converges in 28 s, while MUR requires 282 s. We also apply FGD in a variant of MD-NMF, and experimental results confirm its efficiency. Experimental results on several face image datasets suggest the effectiveness of MD-NMF.
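The MUR referred to above is the standard Lee-Seung multiplicative update; a plain-NMF sketch (omitting the manifold and margin terms of MD-NMF, which modify these updates) is:

```python
import numpy as np

def nmf_mur(V, r, n_iter=500, seed=0, eps=1e-9):
    """Plain NMF with Lee-Seung multiplicative update rules -- the slow
    baseline that FGD accelerates. Factorizes a nonnegative V (m x n)
    as W (m x r) @ H (r x n); eps guards against division by zero.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative H update
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative W update
    return W, H
```

Each update multiplies the current factor elementwise by a nonnegative ratio, which is why nonnegativity is preserved without any projection; the slow, fixed effective step length of this scheme is what motivates the Newton-based step search in FGD.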