WorldWideScience

Sample records for completely regular codes

  1. Bit-coded regular expression parsing

    DEFF Research Database (Denmark)

    Nielsen, Lasse; Henglein, Fritz

    2011-01-01

    the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli’s greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...
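
    As a rough illustration of the idea (a sketch, not the authors' DFA-based algorithm), the following Python fragment greedily parses a string against a small regular expression AST and records the parse choices as bits; the AST encoding and the 0/1 conventions are assumptions made here for illustration.

```python
# Regex AST nodes (an assumed encoding): ('eps',), ('chr', c),
# ('cat', e1, e2), ('alt', e1, e2), ('star', e1).
# Assumed bit convention: alternation emits 0 for the left branch, 1 for the
# right; Kleene star emits 0 for "one more iteration" and 1 for "stop".

def bit_parses(e, s):
    """Yield (bits, remaining_input) for each way e matches a prefix of s,
    in greedy order (star iterates first, alternation prefers the left branch)."""
    kind = e[0]
    if kind == 'eps':
        yield [], s
    elif kind == 'chr':
        if s and s[0] == e[1]:
            yield [], s[1:]
    elif kind == 'cat':
        for b1, r1 in bit_parses(e[1], s):
            for b2, r2 in bit_parses(e[2], r1):
                yield b1 + b2, r2
    elif kind == 'alt':
        for b, r in bit_parses(e[1], s):
            yield [0] + b, r
        for b, r in bit_parses(e[2], s):
            yield [1] + b, r
    elif kind == 'star':
        for b, r in bit_parses(e[1], s):
            if r != s:                          # require progress to avoid looping
                for b2, r2 in bit_parses(e, r):
                    yield [0] + b + b2, r2
        yield [1], s

def greedy_bit_code(e, s):
    """Bit coding of the first full (greedy) parse of s, or None if no match."""
    for bits, rest in bit_parses(e, s):
        if rest == '':
            return bits
    return None

# (a|b)* a b  matched against "aab"
regex = ('cat', ('star', ('alt', ('chr', 'a'), ('chr', 'b'))),
         ('cat', ('chr', 'a'), ('chr', 'b')))
print(greedy_bit_code(regex, 'aab'))            # [0, 0, 1] under these conventions
```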

  2. Globals of Completely Regular Monoids

    Institute of Scientific and Technical Information of China (English)

    Wu Qian-qian; Gan Ai-ping; Du Xian-kun

    2015-01-01

    An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.

  3. Universal Regularizers For Robust Sparse Coding and Modeling

    OpenAIRE

    Ramirez, Ignacio; Sapiro, Guillermo

    2010-01-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding...
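
    For context, a minimal sketch of the baseline that such universal regularizers generalize: conventional sparse coding with a fixed dictionary and an l1 penalty, solved by ISTA. The dictionary, signal and parameter values below are synthetic assumptions, not the authors' setup.

```python
# Minimal sparse coding sketch: minimize 0.5*||x - D a||^2 + lam*||a||_1 by ISTA.
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
a_true = np.zeros(256); a_true[[3, 70, 200]] = [1.0, -0.5, 2.0]
x = D @ a_true                                 # synthetic sparse signal
a_hat = ista(D, x, lam=0.05)
print(np.nonzero(np.abs(a_hat) > 1e-2)[0])     # indices of significant coefficients
```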

  4. Statistical regularities in art: Relations with visual coding and perception.

    Science.gov (United States)

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.

  5. An FPGA Implementation of (3,6)-Regular Low-Density Parity-Check Code Decoder

    Directory of Open Access Journals (Sweden)

    Tong Zhang

    2003-05-01

    Because of their excellent error-correcting performance, low-density parity-check (LDPC) codes have recently attracted a lot of attention. In this paper, we are interested in practical LDPC code decoder hardware implementations. The direct fully parallel decoder implementation usually incurs too high hardware complexity for many real applications, thus partly parallel decoder design approaches that can achieve appropriate trade-offs between hardware complexity and decoding throughput are highly desirable. Applying a joint code and decoder design methodology, we develop a high-speed (3,k)-regular LDPC code partly parallel decoder architecture, based on which we implement a 9216-bit, rate-1/2 (3,6)-regular LDPC code decoder on a Xilinx FPGA device. This partly parallel decoder supports a maximum symbol throughput of 54 Mbps and achieves a BER of 10⁻⁶ at 2 dB over an AWGN channel while performing a maximum of 18 decoding iterations.
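
    As a toy illustration of LDPC-style decoding only (hard-decision bit flipping on a small hand-written parity-check matrix, not the partly parallel decoder architecture of the paper), a minimal sketch follows; the matrix H below is an assumption for demonstration.

```python
# Gallager-style bit-flipping decoding on a toy parity-check matrix.
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Flip the bits involved in the largest number of failed parity checks."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2                 # one entry per parity check
        if not syndrome.any():
            return x                         # all checks satisfied
        unsat = syndrome @ H                 # per-bit count of failed checks
        x = (x + (unsat == unsat.max())) % 2
    return x

# small illustrative parity-check matrix of a length-7 code
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)            # the all-zero word is always a codeword
received = codeword.copy(); received[2] = 1  # single bit error
print(bit_flip_decode(H, received))          # -> [0 0 0 0 0 0 0]
```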

  6. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    Science.gov (United States)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
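
    A minimal sketch of the ghost-cell exchange idea on a regular 1-D grid, written with mpi4py (run e.g. with mpiexec -n 4). This is illustrative only; Schnek itself is a C++ library and its actual API differs, so all names and sizes here are assumptions.

```python
# One-cell ghost exchange between neighbouring ranks on a 1-D regular grid.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 8                                   # interior cells owned by this rank
u = np.zeros(n_local + 2)                     # one ghost cell on each side
u[1:-1] = rank                                # dummy interior data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# send rightmost interior cell to the right neighbour, receive into left ghost
comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
# send leftmost interior cell to the left neighbour, receive into right ghost
comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)

print(rank, u[0], u[-1])                      # ghost cells now hold neighbour data
```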

  7. CONSTRUCTION OF REGULAR LDPC LIKE CODES BASED ON FULL RANK CODES AND THEIR ITERATIVE DECODING USING A PARITY CHECK TREE

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2011-09-01

    Low density parity check (LDPC) codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical Shannon limit for a memoryless channel. LDPC codes are finding increasing use in applications like LTE networks, digital television, high density data storage systems, deep space communication systems, etc. Several algebraic and combinatorial methods are available for constructing LDPC codes. In this paper we discuss a novel low complexity algebraic method for constructing regular LDPC-like codes derived from full rank codes. We demonstrate that by employing these codes over AWGN channels, coding gains in excess of 2 dB over uncoded systems can be realized when soft iterative decoding using a parity check tree is employed.

  8. Retrieval-based Face Annotation by Weak Label Regularized Local Coordinate Coding.

    Science.gov (United States)

    Wang, Dayong; Hoi, Steven C H; He, Ying; Zhu, Jianke; Mei, Tao; Luo, Jiebo

    2013-08-02

    Retrieval-based face annotation is a promising paradigm of mining massive web facial images for automated face annotation. This paper addresses a critical problem of such a paradigm, i.e., how to effectively perform annotation by exploiting the similar facial images and their weak labels, which are often noisy and incomplete. In particular, we propose an effective Weak Label Regularized Local Coordinate Coding (WLRLCC) technique, which exploits the principle of local coordinate coding in learning sparse features, and employs the idea of graph-based weak label regularization to enhance the weak labels of the similar facial images. We present an efficient optimization algorithm to solve the WLRLCC task. We conduct extensive empirical studies on two large-scale web facial image databases: (i) a Western celebrity database with a total of 6,025 persons and 714,454 web facial images, and (ii) an Asian celebrity database with 1,200 persons and 126,070 web facial images. The encouraging results validate the efficacy of the proposed WLRLCC algorithm. To further improve the efficiency and scalability, we also propose a PCA-based approximation scheme and an offline approximation scheme (AWLRLCC), which generally maintain comparable results but significantly reduce the time cost. Finally, we show that WLRLCC can also tackle two existing face annotation tasks with promising performance.

  9. Complete permutation Gray code implemented by finite state machine

    Directory of Open Access Journals (Sweden)

    Li Peng

    2014-09-01

    An enumerating method for complete permutation arrays is proposed. The list of n! permutations, based on a Gray code defined over the finite symbol set Z(n) = {1, 2, …, n}, is implemented by a finite state machine, named n-RPGCF. An RPGCF can be used to search for permutation codes and provide improved lower bounds on the maximum cardinality of a permutation code in some cases.
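
    For comparison, a sketch of one classical permutation Gray code, the "plain changes" (Johnson-Trotter) order, in which successive permutations differ by a single adjacent transposition; this is not the n-RPGCF construction of the paper.

```python
# Plain changes: generate permutations of {1, ..., n} so that consecutive
# permutations differ by one adjacent transposition.
def plain_changes(n):
    if n == 1:
        yield [1]
        return
    down = True
    for p in plain_changes(n - 1):
        positions = range(n - 1, -1, -1) if down else range(n)
        for i in positions:                  # sweep n through p in one direction
            yield p[:i] + [n] + p[i:]
        down = not down

for perm in plain_changes(3):
    print(perm)
# [1, 2, 3], [1, 3, 2], [3, 1, 2], [3, 2, 1], [2, 3, 1], [2, 1, 3]
```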

  10. MODIF-a code for completely reflected cylindrical reactors

    International Nuclear Information System (INIS)

    Gaafar, M.; Mechail, I.; Tadrus, S.

    1981-01-01

    MODIF-Code is a computer program for calculating the reflector saving, material buckling, and effective multiplication constant of completely reflected cylindrical reactors. The calculational method is based on a modified iterative algorithm which has been deduced from the general analytical solution of the two-group diffusion equations. The code has been written in FORTRAN, suited for the ICL-1906 computer facility at Cairo University. The computer time required to solve a problem for an actual reactor is less than 1 minute. The problem converges within five iteration steps. The accuracy in determining the effective multiplication constant lies within ±10⁻⁵. The code has been applied to the case of the UA-RR-1 reactor; the results confirm the validity and accuracy of the calculational method.

  11. Manifold regularized matrix completion for multi-label learning with ADMM.

    Science.gov (United States)

    Liu, Bin; Li, Yingming; Xu, Zenglin

    2018-05-01

    Multi-label learning is a common machine learning problem arising from numerous real-world applications in diverse fields, e.g., natural language processing, bioinformatics, information retrieval and so on. Among various multi-label learning methods, the matrix completion approach has been regarded as a promising approach to transductive multi-label learning. By constructing a joint matrix comprising the feature matrix and the label matrix, the missing labels of test samples are regarded as missing values of the joint matrix. With the low-rank assumption of the constructed joint matrix, the missing labels can be recovered by minimizing its rank. Despite its success, most matrix completion based approaches ignore the smoothness assumption of unlabeled data, i.e., neighboring instances should also share a similar set of labels. Thus they may underexploit the intrinsic structures of data. In addition, the matrix completion problem can be less efficient. To this end, we propose to efficiently solve the multi-label learning problem as an enhanced matrix completion model with manifold regularization, where the graph Laplacian is used to ensure the label smoothness over it. To speed up the convergence of our model, we develop an efficient iterative algorithm, which solves the resulting nuclear norm minimization problem with the alternating direction method of multipliers (ADMM). Experiments on both synthetic and real-world data have shown the promising results of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.
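
    A simplified sketch of the low-rank completion step by iterative singular-value soft-thresholding, omitting the manifold/graph-Laplacian term and the ADMM splitting used in the paper; matrix sizes and the threshold value are illustrative assumptions.

```python
# Low-rank matrix completion by iterative singular-value soft-thresholding.
import numpy as np

def svt_complete(M, observed, tau=1.0, n_iter=200):
    """Fill the unobserved entries of M (observed is a boolean mask)."""
    X = np.where(observed, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt    # shrink singular values
        X[observed] = M[observed]                  # keep the known entries
    return X

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))  # rank-2 matrix
mask = rng.random(M.shape) < 0.5                   # observe half of the entries
M_hat = svt_complete(M, mask, tau=0.5)
print(np.linalg.norm((M_hat - M)[~mask]) / np.linalg.norm(M[~mask]))  # relative error
```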

  12. The Danish Registry on Regular Dialysis and Transplantation:completeness and validity of incident patient registration

    DEFF Research Database (Denmark)

    Hommel, Kristine; Rasmussen, Søren; Madsen, Mette

    2010-01-01

    BACKGROUND: The Danish National Registry on Regular Dialysis and Transplantation (NRDT) provides systematic information on the epidemiology and treatment of end-stage chronic kidney disease in Denmark. It is therefore of major importance that the registry is valid and complete. The aim of the pre...

  13. Scaling Non-Regular Shared-Memory Codes by Reusing Custom Loop Schedules

    Directory of Open Access Journals (Sweden)

    Dimitrios S. Nikolopoulos

    2003-01-01

    In this paper we explore the idea of customizing and reusing loop schedules to improve the scalability of non-regular numerical codes in shared-memory architectures with non-uniform memory access latency. The main objective is to implicitly set up affinity links between threads and data, by devising loop schedules that achieve balanced work distribution within irregular data spaces and reusing them as much as possible along the execution of the program for better memory access locality. This transformation provides a great deal of flexibility in optimizing locality, without compromising the simplicity of the shared-memory programming paradigm. In particular, the programmer does not need to explicitly distribute data between processors. The paper presents practical examples from real applications and experiments showing the efficiency of the approach.

  14. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    Directory of Open Access Journals (Sweden)

    Albertus C. den Brinker

    2007-01-01

    This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, the MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality bit rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of the bit stream scalability does not come at the price of a reduced performance since the coder is competitive with standardized coders (MP3, AAC, SSC).

  15. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    Science.gov (United States)

    Riera-Palou, Felip; den Brinker, Albertus C.

    2007-12-01

    This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality bit rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of the bit stream scalability does not come at the price of a reduced performance since the coder is competitive with standardized coders (MP3, AAC, SSC).

  16. Complete coding sequence of Zika virus from Martinique outbreak in 2015

    Directory of Open Access Journals (Sweden)

    G. Piorkowski

    2016-05-01

    Zika virus is an Aedes-borne Flavivirus causing fever, arthralgia, myalgia and rash, associated with Guillain–Barré syndrome and suspected to induce microcephaly in the fetus. We report here the complete coding sequence of the first characterized Caribbean Zika virus strain, isolated from a patient from Martinique in December 2015.

  17. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.

  18. On the regularity of mild solutions to complete higher order differential equations on Banach spaces

    Directory of Open Access Journals (Sweden)

    Nezam Iraniparast

    2015-09-01

    For the complete higher order differential equation $u^{(n)}(t)=\sum_{k=0}^{n-1}A_k u^{(k)}(t)+f(t)$, $t\in\mathbb{R}$ (*) on a Banach space E, we give a new definition of mild solutions of (*). We then characterize the regular admissibility of a translation-invariant subspace $\mathcal{M}$ of $BUC(\mathbb{R}, E)$ with respect to (*) in terms of solvability of the operator equation $\sum_{j=0}^{n-1}A_j X\mathcal{D}^j - X\mathcal{D}^n = C$. As an application, almost periodicity of mild solutions of (*) is proved.

  19. Completion time reduction in instantly decodable network coding through decoding delay control

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Alouini, Mohamed-Slim; Al-Naffouri, Tareq Y.

    2014-01-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and their overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.

  20. Completion time reduction in instantly decodable network coding through decoding delay control

    KAUST Repository

    Douik, Ahmed S.

    2014-12-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and their overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.

  1. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    Science.gov (United States)

    Seligmann, Hervé

    2016-01-01

    Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying by 24 DNA's protein coding potential. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcriptions, detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with > 8 residues for each regular and swinger parts, regular parts of eleven chimeric peptides correspond to six among the thirteen recognized, mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. Present results strengthen hypotheses that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.

  2. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp

  3. Decoding Delay Controlled Completion Time Reduction in Instantly Decodable Network Coding

    KAUST Repository

    Douik, Ahmed

    2016-06-27

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best-known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users’ and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, the paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Afterward, the paper extends the study to the imperfect feedback scenario in which uncertainties at the sender affect its ability to accurately anticipate the decoding delay increase at each user. The paper formulates the problem in such an environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time compared to the best-known heuristics for completion time reduction in perfect and imperfect feedback. The gap in performance becomes more significant as the erasure of the channel increases.

  4. Decoding Delay Controlled Completion Time Reduction in Instantly Decodable Network Coding

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best-known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users’ and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, the paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Afterward, the paper extends the study to the imperfect feedback scenario in which uncertainties at the sender affect its ability to accurately anticipate the decoding delay increase at each user. The paper formulates the problem in such an environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time compared to the best-known heuristics for completion time reduction in perfect and imperfect feedback. The gap in performance becomes more significant as the erasure of the channel increases.

  5. On-line monitoring and inservice inspection in codes

    International Nuclear Information System (INIS)

    Bartonicek, J.; Zaiss, W.; Bath, H.R.

    1999-01-01

    The relevant regulatory codes determine the ISI tasks and the time intervals for recurrent component testing for evaluation of operation-induced damage or ageing, in order to ensure component integrity on the basis of the last available quality data. In-service quality monitoring is carried out through on-line monitoring and recurrent testing. The requirements defined by the engineering codes elaborated by various institutions are comparable, with the KTA nuclear engineering and safety codes being the most complete provisions for quality evaluation and assurance after different, defined service periods. German conventional codes for assuring component integrity provide exclusively for recurrent inspection regimes (mainly pressure tests and optical testing). The requirements defined in the KTA codes, however, always demanded more specific inspections relying on recurrent testing as well as on-line monitoring. Foreign codes for ensuring component integrity concentrate on NDE tasks at regular time intervals, with the time intervals and scope of testing activities being defined on the basis of the ASME code, section XI. (orig./CB) [de]

  6. Code of Medical Ethics

    Directory of Open Access Journals (Sweden)

    . SZD-SZZ

    2017-03-01

    The Code was approved on December 12, 1992, at the 3rd regular meeting of the General Assembly of the Medical Chamber of Slovenia and revised on April 24, 1997, at the 27th regular meeting of the General Assembly of the Medical Chamber of Slovenia. The Code was updated and harmonized with the Medical Association of Slovenia and approved on October 6, 2016, at the regular meeting of the General Assembly of the Medical Chamber of Slovenia.

  7. Coding completeness and quality of relative survival-related variables in the National Program of Cancer Registries Cancer Surveillance System, 1995-2008.

    Science.gov (United States)

    Wilson, Reda J; O'Neil, M E; Ntekop, E; Zhang, Kevin; Ren, Y

    2014-01-01

    Calculating accurate estimates of cancer survival is important for various analyses of cancer patient care and prognosis. Current US survival rates are estimated based on data from the National Cancer Institute's (NCI's) Surveillance, Epidemiology, and End Results (SEER) program, covering approximately 28 percent of the US population. The National Program of Cancer Registries (NPCR) covers about 96 percent of the US population. Using a population-based database with greater US population coverage to calculate survival rates at the national, state, and regional levels can further enhance the effective monitoring of cancer patient care and prognosis in the United States. The first step is to establish the coding completeness and coding quality of the NPCR data needed for calculating survival rates and conducting related validation analyses. Using data from the NPCR-Cancer Surveillance System (CSS) from 1995 through 2008, we assessed coding completeness and quality on 26 data elements that are needed to calculate cancer relative survival estimates and conduct related analyses. Data elements evaluated consisted of demographic, follow-up, prognostic, and cancer identification variables. Analyses were performed showing trends of these variables by diagnostic year, state of residence at diagnosis, and cancer site. Mean overall percent coding completeness by each NPCR central cancer registry averaged across all data elements and diagnosis years ranged from 92.3 percent to 100 percent. Results showing the mean percent coding completeness for the relative survival-related variables in NPCR data are presented. All data elements but 1 have a mean coding completeness greater than 90 percent, as was the mean completeness by data item group type. Statistically significant differences in coding completeness were found in the ICD revision number, cause of death, vital status, and date of last contact variables when comparing diagnosis years. The majority of data items had a coding...

  8. REGULAR METHOD FOR SYNTHESIS OF BASIC BENT-SQUARES OF RANDOM ORDER

    Directory of Open Access Journals (Sweden)

    A. V. Sokolov

    2016-01-01

    The paper is devoted to the class construction of the most non-linear Boolean bent-functions of any length N = 2^k (k = 2, 4, 6, …), on the basis of their spectral representation – Agievich bent squares. These perfect algebraic constructions are used as a basis to build many new cryptographic primitives, such as generators of pseudo-random key sequences, cryptographic S-boxes, etc. Bent-functions also find their application in the construction of C-codes in systems with code division multiple access (CDMA) to provide the lowest possible value of Peak-to-Average Power Ratio (PAPR k = 1), as well as for the construction of error-correcting codes and systems of orthogonal biphasic signals. All the numerous applications of bent-functions relate to the theory of their synthesis. However, regular methods for complete class synthesis of bent-functions of any length N = 2^k are currently unknown. The paper proposes a regular synthesis method for the basic Agievich bent squares of any order n, based on a regular operator of dyadic shift. A classification of the complete set of spectral vectors of lengths (l = 8, 16, …) based on the criterion of the maximum absolute value and the set of absolute values of spectral components has been carried out in the paper. It has been shown that any spectral vector can be a basis for building bent squares. Results of the synthesis for the Agievich bent squares of order n = 8 have been generalized, and it has been revealed that there are only 3 basic bent squares for this order, while the other 5 can be obtained with the help of the operation of step-cyclic shift. All the basic bent squares of order n = 16 have been synthesized, which allows constructing the bent-functions of length N = 256. The obtained basic bent squares can be used either for direct synthesis of bent-functions and their practical application or for further research in order to synthesize new structures of bent squares of orders n = 16, 32, 64, …
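
    For background, a small self-contained check of bent-ness via the Walsh-Hadamard spectrum (a Boolean function of k variables is bent iff every spectral coefficient has absolute value 2^(k/2)); this is not the bent-square synthesis method of the paper, and the example function is the textbook quadratic bent function.

```python
# Bent-function check via the Walsh-Hadamard transform of (-1)^f.
import numpy as np

def walsh_spectrum(truth_table):
    """Fast Walsh-Hadamard transform of (-1)^f for a truth table of length 2^k."""
    w = 1 - 2 * np.asarray(truth_table, dtype=int)   # 0/1 -> +1/-1
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            a, b = w[i:i + h].copy(), w[i + h:i + 2 * h].copy()
            w[i:i + h], w[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return w

def is_bent(truth_table):
    k = int(np.log2(len(truth_table)))
    return np.all(np.abs(walsh_spectrum(truth_table)) == 2 ** (k // 2))

# f(x1,x2,x3,x4) = x1*x2 XOR x3*x4, a classical bent function of 4 variables
tt = [(x1 & x2) ^ (x3 & x4) for x1 in (0, 1) for x2 in (0, 1)
      for x3 in (0, 1) for x4 in (0, 1)]
print(is_bent(tt))   # True
```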

  9. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: Containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...

  10. Converter of a continuous code into the Grey code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; TrUbnikov, V.R.

    1979-01-01

    Described is a converter of a continuous code into the Grey code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of spectrometer differential nonlinearity to ±0.7% over 98% of the measured range. To construct the converter of a continuous code, corresponding to the input signal amplitude, into the Grey code, use is made of the regularity in the cycling of ones and zeroes in each bit of the Grey code as the number of pulses of the continuous code changes continuously. The converter is built from elements of the 155 series; the pulse rate of the continuous code at the converter input is 25 MHz.
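
    For reference, the standard binary-reflected Gray code mapping in software (the hardware converter above operates on a continuous pulse-count code, which is a different setting):

```python
# Binary-reflected Gray code: successive integers differ in exactly one bit.
def to_gray(n: int) -> int:
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

print([format(to_gray(i), '04b') for i in range(8)])
# ['0000', '0001', '0011', '0010', '0110', '0111', '0101', '0100']
assert all(from_gray(to_gray(i)) == i for i in range(4096))   # 12-bit range
```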

  11. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    Science.gov (United States)

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging that allows a more efficient and effective diagnosis process. Usually, diagnosing is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of sparse representation. ROMP has the transparency and greediness of OMP and the robustness of the L1-minimization that enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting OMP with ROMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than other state-of-the-art schemes.
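
    A hedged sketch of plain orthogonal matching pursuit (OMP) for sparse coding over a dictionary; ROMP, as used in the paper, modifies the atom-selection step to pick groups of atoms with comparable correlations, a refinement omitted here. Dictionary and signal below are synthetic assumptions.

```python
# Greedy sparse approximation x ≈ D @ a with at most n_nonzero coefficients.
import numpy as np

def omp(D, x, n_nonzero=3):
    residual = x.copy()
    support = []
    a = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        correlations = D.T @ residual
        support.append(int(np.argmax(np.abs(correlations))))   # best new atom
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs                   # orthogonal projection
    a[support] = coeffs
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 128))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(128); a_true[[5, 40, 99]] = [1.5, -2.0, 0.7]
x = D @ a_true
print(np.nonzero(omp(D, x))[0])          # typically recovers atoms {5, 40, 99}
```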

  12. Computing quantum discord is NP-complete

    International Nuclear Information System (INIS)

    Huang, Yichen

    2014-01-01

    We study the computational complexity of quantum discord (a measure of quantum correlation beyond entanglement), and prove that computing quantum discord is NP-complete. Therefore, quantum discord is computationally intractable: the running time of any algorithm for computing quantum discord is believed to grow exponentially with the dimension of the Hilbert space so that computing quantum discord in a quantum system of moderate size is not possible in practice. As by-products, some entanglement measures (namely entanglement cost, entanglement of formation, relative entropy of entanglement, squashed entanglement, classical squashed entanglement, conditional entanglement of mutual information, and broadcast regularization of mutual information) and constrained Holevo capacity are NP-hard/NP-complete to compute. These complexity-theoretic results are directly applicable in common randomness distillation, quantum state merging, entanglement distillation, superdense coding, and quantum teleportation; they may offer significant insights into quantum information processing. Moreover, we prove the NP-completeness of two typical problems: linear optimization over classical states and detecting classical states in a convex set, providing evidence that working with classical states is generically computationally intractable. (paper)

  13. APC: A new code for Atmospheric Polarization Computations

    International Nuclear Information System (INIS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2013-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface. Highlights: • A new code, APC, has been developed. • The code was validated against well-known codes. • The BPDF for an arbitrary Mueller matrix is computed.

  14. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

    Sparsity inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper, we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly as opposed to those above, therefore imposes strong sparsity and...

  15. Faster 2-regular information-set decoding

    NARCIS (Netherlands)

    Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.

    2011-01-01

    Fix positive integers B and w. Let C be a linear code over F_2 of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and...

  16. Analysis of reaction cross-section production in neutron induced fission reactions on uranium isotope using computer code COMPLET.

    Science.gov (United States)

    Asres, Yihunie Hibstie; Mathuthu, Manny; Birhane, Marelgn Derso

    2018-04-22

    This study provides current evidence about cross-section production processes in the theoretical and experimental results of neutron-induced reactions on a uranium isotope over the projectile energy range of 1-100 MeV, in order to improve the reliability of nuclear simulation. In such fission reactions of ²³⁵U within nuclear reactors, a large amount of energy is released, able to satisfy worldwide energy needs without the polluting processes of other sources. The main objective of this work is to convey related knowledge of the neutron-induced fission reactions on ²³⁵U by describing, analyzing and interpreting the theoretical cross sections obtained from the computer code COMPLET and comparing them with the experimental data obtained from EXFOR. The cross-section values of ²³⁵U(n,2n)²³⁴U, ²³⁵U(n,3n)²³³U, ²³⁵U(n,γ)²³⁶U and ²³⁵U(n,f) are obtained using the computer code COMPLET, and the corresponding experimental values were retrieved from EXFOR, IAEA. The theoretical results are compared with the experimental data taken from the EXFOR Data Bank. The computer code COMPLET has been used for the analysis with the same set of input parameters, and the graphs were plotted with the help of spreadsheet and Origin-8 software. The quantification of uncertainties stemming from both the experimental data and the computer code calculation plays a significant role in the final evaluated results. The calculated total cross sections were compared with the experimental data taken from EXFOR in the literature, and good agreement was found between the experimental and theoretical data. This comparison of the calculated data was analyzed and interpreted with tabulated and graphical descriptions, and the results are briefly discussed within the text of this research work. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. p-topological Cauchy completions

    Directory of Open Access Journals (Sweden)

    J. Wig

    1999-01-01

    The duality between “regular” and “topological” as convergence space properties extends in a natural way to the more general properties “p-regular” and “p-topological.” Since earlier papers have investigated regular, p-regular, and topological Cauchy completions, we hereby initiate a study of p-topological Cauchy completions. A p-topological Cauchy space has a p-topological completion if and only if it is “cushioned,” meaning that each equivalence class of nonconvergent Cauchy filters contains a smallest filter. For a Cauchy space allowing a p-topological completion, it is shown that a certain class of Reed completions preserve the p-topological property, including the Wyler and Kowalsky completions, which are, respectively, the finest and the coarsest p-topological completions. However, not all p-topological completions are Reed completions. Several extension theorems for p-topological completions are obtained. The most interesting of these states that any Cauchy-continuous map between Cauchy spaces allowing p-topological and p′-topological completions, respectively, can always be extended to a θ-continuous map between any p-topological completion of the first space and any p′-topological completion of the second.

  18. Colors and geometric forms in the work process information coding

    Directory of Open Access Journals (Sweden)

    Čizmić Svetlana

    2006-01-01

    The aim of the research was to establish the meaning of colors and geometric shapes in transmitting information in the work process. A sample of 100 students connected 50 situations, which could be associated with regular tasks in the work process, with 12 colors and 4 geometric forms in a previously chosen color. Based on the chosen color-geometric shape-situation relations, the idea of the research was to find regularities in the coding of information and to examine whether those regularities can provide meaningful data assigned to each individual code and explain which codes are better and more applicable representations of the examined situations.

  19. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2018-01-01

    coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements

  20. A new Fortran 90 program to compute regular and irregular associated Legendre functions (new version announcement)

    Science.gov (United States)

    Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus

    2018-04-01

    This is a revised and updated version of a modern Fortran 90 code to compute the regular P_l^m(x) and irregular Q_l^m(x) associated Legendre functions for all x ∈ (−1, +1) (on the cut) and |x| > 1 and integer degree (l) and order (m). The necessity to revise the code comes as a consequence of some comments of Prof. James Bremer of the UC Davis Mathematics Department, who discovered that there were errors in the code for large integer degree and order for the normalized regular Legendre functions on the cut.
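
    A quick cross-check of regular associated Legendre values on the cut using SciPy (independent of, and much less capable than, the Fortran 90 code described above); the explicit P_3^2 formula used for comparison is standard.

```python
# Evaluate P_l^m(x) on the cut with scipy.special.lpmv and compare with the
# closed form P_3^2(x) = 15*x*(1 - x^2).
import numpy as np
from scipy.special import lpmv

x = np.linspace(-0.9, 0.9, 5)
l, m = 3, 2
print(lpmv(m, l, x))          # regular P_l^m(x) for -1 < x < 1
print(15 * x * (1 - x**2))    # closed-form reference values
```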

  1. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

    This paper presents the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The modeling and unfolding of the Platonic and Archimedean polyhedra are done using the 3dsMAX program. This paper is intended as an example of descriptive geometry applications.

  2. Complete heat transfer solutions of an insulated regular polygonal pipe by using a PWTR model

    International Nuclear Information System (INIS)

    Wong, K.-L.; Chou, H.-M.; Li, Y.-H.

    2004-01-01

    The heat transfer characteristics for insulated long regular polygonal (including circular) pipes are analyzed by using the same PWRT model in the present study as that used by Chou and Wong previously [Energy Convers. Manage. 44 (4) (2003) 629]. The thermal resistance of the inner convection term and the pipe conduction term in the heat transfer rate are not neglected in the present study. Thus, the complete heat transfer solution will be obtained. The present results can be applied more extensively to practical situations, such as heat exchangers. The results of the critical thickness t_cr and the neutral thickness t_e are independent of the values of J (generated by the combined effect of the inner convection term and the pipe conduction term). However, the heat transfer rates are dependent on the values of J. The present study shows that the thermal resistance of the inner convection term and the pipe conduction term cannot be neglected in the heat transfer equation in situations of low to medium inner convection coefficients h_i and/or low to medium pipe conductivities K, especially in situations with large pipe sizes and/or great outer convection coefficients h_0

  3. Procedure and code for calculating black control rods taking into account epithermal absorption, code CAS-1

    International Nuclear Information System (INIS)

    Martinc, R.; Trivunac, N.; Zivkovic, Z.

    1964-12-01

    This report describes the computer code CAS-1 and the calculation method and procedure applied for calculating black control rods taking into account epithermal neutron absorption. Results obtained with the supercell method applied to a regular lattice reflected in the multiplying medium are part of this report, in addition to the computer code manual.

  4. Typical performance of regular low-density parity-check codes over general symmetric channels

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Toshiyuki [Department of Electronics and Information Engineering, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397 (Japan); Saad, David [Neural Computing Research Group, Aston University, Aston Triangle, Birmingham B4 7ET (United Kingdom)

    2003-10-31

    Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. Relationship between the free energy in statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models.

  5. Typical performance of regular low-density parity-check codes over general symmetric channels

    International Nuclear Information System (INIS)

    Tanaka, Toshiyuki; Saad, David

    2003-01-01

    Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. Relationship between the free energy in statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models

  6. Propagation of spiking regularity and double coherence resonance in feedforward networks.

    Science.gov (United States)

    Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok

    2012-03-01

    We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. It is interesting that double coherence resonance (DCR) with the combination of synaptic input correlation and noise intensity is finally attained after the processing layer by layer in FFNs. Furthermore, inhibitory connections also play essential roles in shaping DCR phenomena. Several properties of the neuronal network such as noise intensity, correlation of synaptic inputs, and inhibitory connections can serve as control parameters in modulating both rate coding and the order of temporal coding.

  7. Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications

    Science.gov (United States)

    Chaki, Sagar; Gurfinkel, Arie

    2010-01-01

    We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm. We show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules.

  8. Regularization of fields for self-force problems in curved spacetime: Foundations and a time-domain application

    International Nuclear Information System (INIS)

    Vega, Ian; Detweiler, Steven

    2008-01-01

    We propose an approach for the calculation of self-forces, energy fluxes and waveforms arising from moving point charges in curved spacetimes. As opposed to mode-sum schemes that regularize the self-force derived from the singular retarded field, this approach regularizes the retarded field itself. The singular part of the retarded field is first analytically identified and removed, yielding a finite, differentiable remainder from which the self-force is easily calculated. This regular remainder solves a wave equation which enjoys the benefit of having a nonsingular source. Solving this wave equation for the remainder completely avoids the calculation of the singular retarded field along with the attendant difficulties associated with numerically modeling a delta-function source. From this differentiable remainder one may compute the self-force, the energy flux, and also a waveform which reflects the effects of the self-force. As a test of principle, we implement this method using a 4th-order (1+1) code, and calculate the self-force for the simple case of a scalar charge moving in a circular orbit around a Schwarzschild black hole. We achieve agreement with frequency-domain results to ∼0.1% or better.

  9. Protograph based LDPC codes with minimum distance linearly growing with block size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform that of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  10. Image super-resolution reconstruction based on regularization technique and guided filter

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Gu, Pei-ting; Liu, Pei-zhong; Luo, Yan-min

    2017-06-01

    In order to improve the accuracy of sparse representation coefficients and the quality of reconstructed images, an improved image super-resolution algorithm based on sparse representation is presented. In the sparse coding stage, the autoregressive (AR) regularization and the non-local (NL) similarity regularization are introduced to improve the sparse coding objective function. A group of AR models which describe the image local structures are pre-learned from the training samples, and one or several suitable AR models can be adaptively selected for each image patch to regularize the solution space. Then, the image non-local redundancy is obtained by the NL similarity regularization to preserve edges. In the process of computing the sparse representation coefficients, the feature-sign search algorithm is utilized instead of the conventional orthogonal matching pursuit algorithm to improve the accuracy of the sparse coefficients. To restore image details further, a global error compensation model based on weighted guided filter is proposed to realize error compensation for the reconstructed images. Experimental results demonstrate that compared with Bicubic, L1SR, SISR, GR, ANR, NE + LS, NE + NNLS, NE + LLE and A + (16 atoms) methods, the proposed approach has remarkable improvement in peak signal-to-noise ratio, structural similarity and subjective visual perception.

  11. APC: A New Code for Atmospheric Polarization Computations

    Science.gov (United States)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  12. Complete coding sequence of the human raf oncogene and the corresponding structure of the c-raf-1 gene

    Energy Technology Data Exchange (ETDEWEB)

    Bonner, T I; Oppermann, H; Seeburg, P; Kerby, S B; Gunnell, M A; Young, A C; Rapp, U R

    1986-01-24

    The complete 648 amino acid sequence of the human raf oncogene was deduced from the 2977 nucleotide sequence of a fetal liver cDNA. The cDNA has been used to obtain clones which extend the human c-raf-1 locus by an additional 18.9 kb at the 5' end and contain all the remaining coding exons.

  13. Fast decoding of codes from algebraic plane curves

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Jensen, Helge Elbrønd

    1992-01-01

    Improvement to an earlier decoding algorithm for codes from algebraic geometry is presented. For codes from an arbitrary regular plane curve the authors correct up to d*/2 − m²/8 + m/4 − 9/8 errors, where d* is the designed distance of the code and m is the degree of the curve. The complexity of finding...

  14. Improvement of genome assembly completeness and identification of novel full-length protein-coding genes by RNA-seq in the giant panda genome.

    Science.gov (United States)

    Chen, Meili; Hu, Yibo; Liu, Jingxing; Wu, Qi; Zhang, Chenglin; Yu, Jun; Xiao, Jingfa; Wei, Fuwen; Wu, Jiayan

    2015-12-11

    High-quality and complete gene models are the basis of whole genome analyses. The giant panda (Ailuropoda melanoleuca) genome was the first genome sequenced on the basis of solely short reads, but the genome annotation had lacked the support of transcriptomic evidence. In this study, we applied RNA-seq to globally improve the genome assembly completeness and to detect novel expressed transcripts in 12 tissues from giant pandas, by using a transcriptome reconstruction strategy that combined reference-based and de novo methods. Several aspects of genome assembly completeness in the transcribed regions were effectively improved by the de novo assembled transcripts, including genome scaffolding, the detection of small-size assembly errors, the extension of scaffold/contig boundaries, and gap closure. Through expression and homology validation, we detected three groups of novel full-length protein-coding genes. A total of 12.62% of the novel protein-coding genes were validated by proteomic data. GO annotation analysis showed that some of the novel protein-coding genes were involved in pigmentation, anatomical structure formation and reproduction, which might be related to the development and evolution of the black-white pelage, pseudo-thumb and delayed embryonic implantation of giant pandas. The updated genome annotation will help further giant panda studies from both structural and functional perspectives.

  15. Diabetes Mellitus Coding Training for Family Practice Residents.

    Science.gov (United States)

    Urse, Geraldine N

    2015-07-01

    Although physicians regularly use numeric coding systems such as the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to describe patient encounters, coding errors are common. One of the most complicated diagnoses to code is diabetes mellitus. The ICD-9-CM currently has 39 separate codes for diabetes mellitus; this number will be expanded to more than 50 with the introduction of ICD-10-CM in October 2015. The objective of this study was to assess the effect of a 1-hour focused presentation on ICD-9-CM codes on diabetes mellitus coding. A 1-hour focused lecture on the correct use of diabetes mellitus codes for patient visits was presented to family practice residents at Doctors Hospital Family Practice in Columbus, Ohio. To assess resident knowledge of the topic, a pretest and posttest were given to residents before and after the lecture, respectively. Medical records of all patients with diabetes mellitus who were cared for at the hospital 6 weeks before and 6 weeks after the lecture were reviewed and compared for the use of diabetes mellitus ICD-9 codes. Eighteen residents attended the lecture and completed the pretest and posttest. The mean (SD) percentage of correct answers was 72.8% (17.1%) for the pretest and 84.4% (14.6%) for the posttest, for an improvement of 11.6 percentage points (P≤.035). The percentage of total available codes used did not substantially change from before to after the lecture, but the use of the generic ICD-9-CM code for diabetes mellitus type II controlled (250.00) declined (58 of 176 [33%] to 102 of 393 [26%]) and the use of other codes increased, indicating a greater variety in codes used after the focused lecture. After a focused lecture on diabetes mellitus coding, resident coding knowledge improved. Review of medical record data did not reveal an overall change in the number of diabetic codes used after the lecture but did reveal a greater variety in the codes used.

  16. Uniform emergency codes: will they improve safety?

    Science.gov (United States)

    2005-01-01

    There are pros and cons to uniform code systems, according to emergency medicine experts. Uniformity can be a benefit when ED nurses and other staff work at several facilities. It's critical that your staff understand not only what the codes stand for, but what they must do when codes are called. If your state institutes a new system, be sure to hold regular drills to familiarize your ED staff.

  17. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  18. LDPC Codes with Minimum Distance Proportional to Block Size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors.

  19. 77 FR 76078 - Regular Board of Directors Sunshine Act Meeting

    Science.gov (United States)

    2012-12-26

    ..., DC 20005. STATUS: Open. CONTACT PERSON FOR MORE INFORMATION: Erica Hall, Assistant Corporate... Regular Board of Directors Meeting Minutes IV. Approval of the Finance, Budget & Program Committee Meeting... Corporate Secretary. [FR Doc. 2012-31163 Filed 12-21-12; 4:15 pm] BILLING CODE 7570-02-P ...

  20. A mean field theory of coded CDMA systems

    International Nuclear Information System (INIS)

    Yano, Toru; Tanaka, Toshiyuki; Saad, David

    2008-01-01

    We present a mean field theory of code-division multiple-access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean-field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems

  1. A mean field theory of coded CDMA systems

    Energy Technology Data Exchange (ETDEWEB)

    Yano, Toru [Graduate School of Science and Technology, Keio University, Hiyoshi, Kohoku-ku, Yokohama-shi, Kanagawa 223-8522 (Japan); Tanaka, Toshiyuki [Graduate School of Informatics, Kyoto University, Yoshida Hon-machi, Sakyo-ku, Kyoto-shi, Kyoto 606-8501 (Japan); Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)], E-mail: yano@thx.appi.keio.ac.jp

    2008-08-15

    We present a mean field theory of code-division multiple-access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean-field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems.

  2. Applying Physical-Layer Network Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Liew SoungChang

    2010-01-01

    Full Text Available A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes, simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). This paper shows that the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to "straightforward" network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. And in doing so, PNC can potentially achieve 100% and 50% throughput increases compared with traditional transmission and straightforward network coding, respectively, in 1D regular linear networks with multiple random flows. The throughput improvements are even larger in 2D regular networks: 200% and 100%, respectively.
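
    A toy numerical illustration of the PNC mapping (assumed BPSK signalling and a made-up noise level, not the paper's full protocol): the relay observes the sum of the two transmitted signals and maps it directly to the XOR of the two bits.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        b1 = rng.integers(0, 2, n)            # bits from node A
        b2 = rng.integers(0, 2, n)            # bits from node B
        s1 = 1.0 - 2.0 * b1                   # BPSK: 0 -> +1, 1 -> -1
        s2 = 1.0 - 2.0 * b2
        noise = 0.3 * rng.standard_normal(n)
        y = s1 + s2 + noise                   # superimposed EM waves at the relay

        # PNC mapping: |y| near 2 means the two bits were equal (XOR = 0),
        # |y| near 0 means they differed (XOR = 1).
        xor_hat = (np.abs(y) < 1.0).astype(int)
        print("XOR demapping error rate:", np.mean(xor_hat != (b1 ^ b2)))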

  3. Simulation and transient analyses of a complete passive heat removal system in a downward cooling pool-type material testing reactor against a complete station blackout and long-term natural convection mode using the RELAP5/3.2 code

    Energy Technology Data Exchange (ETDEWEB)

    Hedayat, Afshin [Reactor and Nuclear Safety School, Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of)

    2017-08-15

    In this paper, a complete station blackout (SBO) or complete loss of electrical power supplies is simulated and analyzed in a downward cooling 5-MW pool-type Material Testing Reactor (MTR). The scenario is traced in the absence of active cooling systems and operators. The code nodalization is successfully benchmarked against experimental data of the reactor's operating parameters. The passive heat removal system includes downward water cooling after pump breakdown by the force of gravity (where the coolant streams down to the unfilled portion of the holdup tank), safety flapper opening, flow reversal from a downward to an upward cooling direction, and then the upward free convection heat removal throughout the flapper safety valve, lower plenum, and fuel assemblies. Both short-term and long-term natural core cooling conditions are simulated and investigated using the RELAP5 code. Short-term analyses focus on the safety flapper valve operation and flow reversal mode. Long-term analyses include simulation of both complete SBO and long-term operation of the free convection mode. Results are promising for pool-type MTRs because this allows operators to investigate RELAP code abilities for MTR thermal–hydraulic simulations without any oscillation; moreover, the Tehran Research Reactor is conservatively safe against the complete SBO and long-term free convection operation.

  4. Simulation and transient analyses of a complete passive heat removal system in a downward cooling pool-type material testing reactor against a complete station blackout and long-term natural convection mode using the RELAP5/3.2 code

    Directory of Open Access Journals (Sweden)

    Afshin Hedayat

    2017-08-01

    Full Text Available In this paper, a complete station blackout (SBO) or complete loss of electrical power supplies is simulated and analyzed in a downward cooling 5-MW pool-type Material Testing Reactor (MTR). The scenario is traced in the absence of active cooling systems and operators. The code nodalization is successfully benchmarked against experimental data of the reactor's operating parameters. The passive heat removal system includes downward water cooling after pump breakdown by the force of gravity (where the coolant streams down to the unfilled portion of the holdup tank), safety flapper opening, flow reversal from a downward to an upward cooling direction, and then the upward free convection heat removal throughout the flapper safety valve, lower plenum, and fuel assemblies. Both short-term and long-term natural core cooling conditions are simulated and investigated using the RELAP5 code. Short-term analyses focus on the safety flapper valve operation and flow reversal mode. Long-term analyses include simulation of both complete SBO and long-term operation of the free convection mode. Results are promising for pool-type MTRs because this allows operators to investigate RELAP code abilities for MTR thermal–hydraulic simulations without any oscillation; moreover, the Tehran Research Reactor is conservatively safe against the complete SBO and long-term free convection operation.

  5. Structured LDPC Codes over Integer Residue Rings

    Directory of Open Access Journals (Sweden)

    Mo Elisa

    2008-01-01

    Full Text Available Abstract This paper presents a new class of low-density parity-check (LDPC) codes over ℤ2a represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive-white-Gaussian-noise channel, can outperform their random counterparts of similar length and rate.

  6. Turán type inequalities for regular Coulomb wave functions

    OpenAIRE

    Baricz, Árpád

    2015-01-01

    Turán, Mitrinović-Adamović and Wilker type inequalities are deduced for regular Coulomb wave functions. The proofs are based on a Mittag-Leffler expansion for the regular Coulomb wave function, which may be of independent interest. Moreover, some complete monotonicity results concerning the Coulomb zeta functions and some interlacing properties of the zeros of Coulomb wave functions are given.

  7. Polynomial theory of error correcting codes

    CERN Document Server

    Cancellieri, Giovanni

    2015-01-01

    The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.

  8. Complete genome sequence of Catenulispora acidiphila type strain (ID 139908T)

    Energy Technology Data Exchange (ETDEWEB)

    Copeland, Alex; Lapidus, Alla; Rio, Tijana GlavinaDel; Nolan, Matt; Lucas, Susan; Chen, Feng; Tice, Hope; Cheng, Jan-Fang; Bruce, David; Goodwin, Lynne; Pitluck, Sam; Mikhailova, Natalia; Pati, Amrita; Ivanova, Natalia; Mavromatis, Konstantinos; Chen, Amy; Palaniappan, Krishna; Chain, Patrick; Land, Miriam; Hauser, Loren; Chang, Yun-Juan; Jefferies, Cynthia C.; Chertkov, Olga; Brettin, Thomas; Detter, John C.; Han, Cliff; Ali, Zahid; Tindall, Brian J.; Goker, Markus; Bristow, James; Eisen, Jonathan A.; Markowitz, Victor; Hugenholtz, Philip; Kyrpides, Nikos C.; Klenk, Hans-Peter

    2009-05-20

    Catenulispora acidiphila Busti et al. 2006 is the type species of the genus Catenulispora, and is of interest because of the rather isolated phylogenetic location of the genomically little studied suborder Catenulisporineae within the order Actinomycetales. C. acidiphila is known for its acidophilic, aerobic lifestyle, but can also grow scantly under anaerobic conditions. Under regular conditions C. acidiphila grows in long filaments of relatively short aerial hyphae with marked septation. It is a free-living, non-motile, Gram-positive bacterium isolated from a forest soil sample taken from a wooded area in Gerenzano, Italy. Here we describe the features of this organism, together with the complete genome sequence and annotation. This is the first complete genome sequence of the actinobacterial family Catenulisporaceae, and the 10,467,782 bp long single replicon genome with its 9056 protein-coding and 69 RNA genes is a part of the Genomic Encyclopedia of Bacteria and Archaea project.

  9. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  10. 12 CFR 725.3 - Regular membership.

    Science.gov (United States)

    2010-01-01

    ... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit....5(b) of this part, and forwarding with its completed application funds equal to one-half of this... 1, 1979, is not required to forward these funds to the Facility until October 1, 1979. (3...

  11. Structured LDPC Codes over Integer Residue Rings

    Directory of Open Access Journals (Sweden)

    Marc A. Armand

    2008-07-01

    Full Text Available This paper presents a new class of low-density parity-check (LDPC) codes over ℤ2a represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive-white-Gaussian-noise channel, can outperform their random counterparts of similar length and rate.
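
    A simplified illustration of the Latin-square ingredient (using the units of Z_8 rather than a full Galois ring, purely for demonstration): the Cayley table of a finite multiplicative group is automatically a Latin square, which can then seed a structured Tanner-graph construction.

        from math import gcd

        a = 3
        m = 2 ** a                                            # work in Z_{2^a}, here Z_8
        units = [u for u in range(1, m) if gcd(u, m) == 1]    # multiplicative group

        # Cayley table of the group: every row and column is a permutation of the
        # units, i.e. the table is a Latin square.
        table = [[(u * v) % m for v in units] for u in units]
        for row in table:
            print(row)
        assert all(sorted(row) == sorted(units) for row in table)
        assert all(sorted(col) == sorted(units) for col in zip(*table))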

  12. Nonlinear Chance Constrained Problems: Optimality Conditions, Regularization and Solvers

    Czech Academy of Sciences Publication Activity Database

    Adam, Lukáš; Branda, Martin

    2016-01-01

    Roč. 170, č. 2 (2016), s. 419-436 ISSN 0022-3239 R&D Projects: GA ČR GA15-00735S Institutional support: RVO:67985556 Keywords : Chance constrained programming * Optimality conditions * Regularization * Algorithms * Free MATLAB codes Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.289, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/adam-0460909.pdf

  13. Study of cold neutron sources: Implementation and validation of a complete computation scheme for research reactor using Monte Carlo codes TRIPOLI-4.4 and McStas

    International Nuclear Information System (INIS)

    Campioni, Guillaume; Mounier, Claude

    2006-01-01

    The main goal of this thesis on cold neutron sources (CNS) in research reactors was to create a complete set of tools for designing CNS efficiently. The work addresses the problem of running simulations of experimental devices inside the reactor reflector that are both accurate and fast enough for parametric studies. On one hand, deterministic codes have reasonable computation times but make the geometrical description difficult. On the other hand, Monte Carlo codes can compute on a precise geometry, but require computation times so long that parametric studies become impossible. To decrease this computation time, several developments were made in the Monte Carlo code TRIPOLI-4.4. An uncoupling technique is used to isolate a study zone within the complete reactor geometry. By recording boundary conditions (incoming flux), further simulations can be launched for parametric studies with a computation time reduced by a factor of 60 (in the case of the cold neutron source of the Orphee reactor). This short response time makes parametric studies with a Monte Carlo code practical. Moreover, using biasing methods, the flux can be recorded on the surface of the neutron guide entries (low solid angle) with a further gain in running time. Finally, the implementation of a coupling module between TRIPOLI-4.4 and the Monte Carlo code McStas for condensed matter research makes it possible to obtain the fluxes transmitted through the neutron guides, and thus the neutron flux received by the samples studied by condensed matter scientists. This set of developments, involving TRIPOLI-4.4 and McStas, represents a complete computation scheme for research reactors: from the core, where neutrons are created, to the exit of the neutron guides and the samples of matter. This complete calculation scheme is tested against ILL4 measurements of flux in cold neutron guides. (authors)

  14. Code, standard and specifications

    International Nuclear Information System (INIS)

    Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail

    2008-01-01

    Radiography, like other techniques, requires standards. These standards are widely used, and the methods for applying them are well established. Consequently, radiographic testing is only practical when it is based on the regulations that have been specified and documented. These regulations and guidelines are documented in codes, standards and specifications. In Malaysia, a level-one or basic radiographer may carry out radiography work based on instructions given by a level-two or level-three radiographer. These instructions are produced according to the guidelines stated in the relevant documents, and the level-two radiographer must follow the specifications given in the standard when writing them. This makes clear that radiography is the kind of work in which every step must follow the rules. As for codes, radiography follows the codes of the American Society of Mechanical Engineers (ASME), and the only code currently available in Malaysia is the rule published by the Atomic Energy Licensing Board (AELB), known as the Practical Code for Radiation Protection in Industrial Radiography. With this code in place, all radiography work must automatically follow the regulated rules and standards.

  15. Weight Distribution for Non-binary Cluster LDPC Code Ensemble

    Science.gov (United States)

    Nozaki, Takayuki; Maehara, Masaki; Kasai, Kenta; Sakaniwa, Kohichi

    In this paper, we derive the average weight distributions for the irregular non-binary cluster low-density parity-check (LDPC) code ensembles. Moreover, we give the exponential growth rate of the average weight distribution in the limit of large code length. We show that there exist $(2,d_c)$-regular non-binary cluster LDPC code ensembles whose normalized typical minimum distances are strictly positive.

  16. Effects of Irregular Bridge Columns and Feasibility of Seismic Regularity

    Science.gov (United States)

    Thomas, Abey E.

    2018-05-01

    Unequal column height is one of the main irregularities in bridge design, particularly when negotiating steep valleys, and it makes bridges vulnerable to seismic action. The desirable behaviour of bridge columns under seismic loading is that they perform in a regular fashion, i.e. the capacity of each column is utilized evenly. This type of behaviour is often missing when the column heights are unequal along the length of the bridge, leaving the short columns to bear the maximum lateral load. In the present study, the effects of unequal column height on the global seismic performance of bridges are studied using pushover analysis. Codes such as CalTrans (Engineering service center, earthquake engineering branch, 2013) and EC-8 (EN 1998-2: design of structures for earthquake resistance. Part 2: bridges, European Committee for Standardization, Brussels, 2005) suggest seismic regularity criteria for achieving a regular seismic performance level at all the bridge columns. The feasibility of adopting these seismic regularity criteria, along with those mentioned in the literature, is assessed for bridges designed as per the Indian Standards.

  17. Chaos regularization of quantum tunneling rates

    International Nuclear Information System (INIS)

    Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward

    2011-01-01

    Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. Contrarily, shapes that lead to completely chaotic trajectories lead to tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.

  18. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.

  19. Trellis and turbo coding iterative and graph-based error control coding

    CERN Document Server

    Schlegel, Christian B

    2015-01-01

    This new edition has been extensively revised to reflect the progress in error control coding over the past few years. Over 60% of the material has been completely reworked, and 30% of the material is original. It covers convolutional, turbo, and low-density parity-check (LDPC) coding and polar codes in a unified framework, advanced research-related developments such as spatial coupling, and a focus on algorithmic and implementation aspects of error control coding.

  20. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc

  1. Total variation regularization for fMRI-based prediction of behavior

    Science.gov (United States)

    Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand

    2011-01-01

    While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional MRI (fMRI) data, that provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioural variables from brain activation patterns better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so that the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the ℓ1 norm of the image gradient, a.k.a. its Total Variation (TV), as regularization. We apply for the first time this method to fMRI data, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification. PMID:21317080
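
    A schematic of TV-penalized regression on made-up one-dimensional data (a smoothed TV term and an assumed regularization weight, not the authors' exact solver): the penalty is the ℓ1 norm of the gradient of the weight map, which favours piecewise-constant, spatially coherent weights.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n_samples, n_voxels = 200, 120
        w_true = np.zeros(n_voxels)
        w_true[40:70] = 1.0                         # a piecewise-constant "active region"
        X = rng.standard_normal((n_samples, n_voxels))
        y = X @ w_true + 0.5 * rng.standard_normal(n_samples)

        lam, eps = 5.0, 1e-6                        # assumed regularization strength

        def objective(w):
            resid = X @ w - y
            grad_w = np.diff(w)                     # spatial gradient (1D stand-in)
            tv = np.sum(np.sqrt(grad_w**2 + eps))   # smoothed total variation
            return 0.5 * np.sum(resid**2) + lam * tv

        res = minimize(objective, np.zeros(n_voxels), method="L-BFGS-B",
                       options={"maxiter": 200})
        print("recovered plateau mean:", res.x[40:70].mean().round(2))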

  2. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.; Bensmail, H.; Yao, N.; Gao, Xin

    2013-01-01

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  3. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.

    2013-09-26

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.
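
    A rough sketch of the class-wise codebook idea using scikit-learn's generic dictionary learner (synthetic data and parameters are assumptions; the paper's multi-manifold margin objective is not reproduced): learn one codebook per class, then assign a test sample to the class whose codebook reconstructs it best.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        rng = np.random.default_rng(0)
        # Synthetic two-class data living near different low-dimensional structures.
        X0 = rng.standard_normal((80, 20)) @ rng.standard_normal((20, 30))
        X1 = rng.standard_normal((80, 20)) @ rng.standard_normal((20, 30)) + 2.0

        def fit_codebook(X, n_atoms=10):
            dl = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                                    transform_algorithm="lasso_lars", random_state=0)
            dl.fit(X)
            return dl

        books = [fit_codebook(X0), fit_codebook(X1)]

        def classify(x):
            # Class whose dictionary yields the smallest reconstruction error wins.
            errs = []
            for dl in books:
                code = dl.transform(x[None, :])
                errs.append(np.linalg.norm(x - code @ dl.components_))
            return int(np.argmin(errs))

        test = X1[0] + 0.1 * rng.standard_normal(30)
        print("predicted class:", classify(test))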

  4. Real-Time Fabric Defect Detection Using Accelerated Small-Scale Over-Completed Dictionary of Sparse Coding

    Directory of Open Access Journals (Sweden)

    Tianpeng Feng

    2016-01-01

    Full Text Available An auto fabric defect detection system via computer vision is used to replace manual inspection. In this paper, we propose a hardware accelerated algorithm based on a small-scale over-completed dictionary (SSOCD via sparse coding (SC method, which is realized on a parallel hardware platform (TMS320C6678. In order to reduce computation, the image patches projections in the training SSOCD are taken as features and the proposed features are more robust, and exhibit obvious advantages in detection results and computational cost. Furthermore, we introduce detection ratio and false ratio in order to measure the performance and reliability of the hardware accelerated algorithm. The experiments show that the proposed algorithm can run with high parallel efficiency and that the detection speed meets the real-time requirements of industrial inspection.

  5. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
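
    A minimal illustration of the discrepancy-principle strategy on a generic sparse recovery problem (synthetic data with an assumed known noise level; not the paper's structural model): sweep the regularization parameter and keep the value whose residual variance is closest to the measurement-noise variance.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, p, sigma = 100, 60, 0.1
        A = rng.standard_normal((n, p))
        x_true = np.zeros(p)
        x_true[[5, 17, 42]] = [1.0, -0.8, 0.5]           # sparse "damage" vector
        b = A @ x_true + sigma * rng.standard_normal(n)

        candidates = np.logspace(-4, 0, 30)              # grid of regularization parameters
        resid_vars = []
        for lam in candidates:
            x_hat = Lasso(alpha=lam, max_iter=50000).fit(A, b).coef_
            resid_vars.append(np.var(b - A @ x_hat))

        # Discrepancy principle: residual variance should match the noise variance.
        best = candidates[int(np.argmin(np.abs(np.array(resid_vars) - sigma**2)))]
        print("selected regularization parameter:", best)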

  6. Protograph LDPC Codes for the Erasure Channel

    Science.gov (United States)

    Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes. Simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message passing decoding to proceed.
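
    A compact sketch of the copy-and-permute (lifting) operation: each nonzero entry of a small protograph base matrix is replaced by an N x N permutation matrix and each zero by an all-zero block. The base matrix and lift size below are arbitrary toy choices.

        import numpy as np

        rng = np.random.default_rng(0)
        base = np.array([[1, 1, 1, 0],        # tiny protograph (2 checks, 4 variables)
                         [1, 0, 1, 1]])
        N = 8                                  # lift size (number of protograph copies)

        def lift(base, N):
            H = np.zeros((base.shape[0] * N, base.shape[1] * N), dtype=int)
            for i, j in zip(*np.nonzero(base)):
                perm = np.eye(N, dtype=int)[rng.permutation(N)]   # random permutation block
                H[i * N:(i + 1) * N, j * N:(j + 1) * N] = perm
            return H

        H = lift(base, N)
        print("lifted parity-check matrix:", H.shape,
              "row weights:", set(H.sum(axis=1)))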

  7. Predictions of the spontaneous symmetry-breaking theory for visual code completeness and spatial scaling in single-cell learning rules.

    Science.gov (United States)

    Webber, C J

    2001-05-01

    This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.
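
    A sketch of one member of the class of single-cell learning rules discussed (Oja's Hebbian rule on synthetic correlated inputs; the article's predictions concern the general class, and natural-image statistics are not reproduced here): weights initialized randomly, with zero prior information, self-organize toward a dominant structure of the input statistics.

        import numpy as np

        rng = np.random.default_rng(0)
        dim, n_steps, eta = 64, 30_000, 0.005

        # Toy "image patch" statistics: correlated Gaussian inputs whose leading
        # principal direction plays the role of an oriented structure.
        direction = rng.standard_normal(dim)
        direction /= np.linalg.norm(direction)

        w = rng.standard_normal(dim)           # random initial weights, zero prior information
        w /= np.linalg.norm(w)
        for _ in range(n_steps):
            x = rng.standard_normal(dim) + 3.0 * rng.standard_normal() * direction
            y = w @ x
            w += eta * y * (x - y * w)         # Oja's rule: Hebbian growth + normalization

        print("alignment with dominant input direction:", abs(w @ direction).round(3))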

  8. Multilevel LDPC Codes Design for Multimedia Communication CDMA System

    Directory of Open Access Journals (Sweden)

    Hou Jia

    2004-01-01

    Full Text Available We design multilevel coding (MLC) with a semi-bit interleaved coded modulation (BICM) scheme based on low density parity check (LDPC) codes. Different from the traditional designs, we joined the MLC and BICM together by using the Gray mapping, which is suitable to transmit the data over several equivalent channels with different code rates. To perform well at signal-to-noise ratio (SNR) very close to the capacity of the additive white Gaussian noise (AWGN) channel, a random regular LDPC code and a simple semialgebra LDPC (SA-LDPC) code are discussed in MLC with parallel independent decoding (PID). The numerical results demonstrate that the proposed scheme could achieve both power and bandwidth efficiency.
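
    A small illustration of the Gray mapping that ties MLC and BICM together (generic 8-PSK labels, not the paper's CDMA construction): adjacent constellation points differ in exactly one bit, so each bit level sees an approximately independent equivalent channel.

        # Gray labelling of an 8-PSK constellation: index i gets label i ^ (i >> 1),
        # so neighbouring phases differ in a single bit.
        import numpy as np

        M = 8
        labels = [i ^ (i >> 1) for i in range(M)]
        points = np.exp(2j * np.pi * np.arange(M) / M)

        for i in range(M):
            nxt = (i + 1) % M
            differing_bits = bin(labels[i] ^ labels[nxt]).count("1")
            print(f"phase {i} -> {nxt}: labels {labels[i]:03b} vs {labels[nxt]:03b}, "
                  f"Hamming distance {differing_bits}")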

  9. Coding Theory and Applications : 4th International Castle Meeting

    CERN Document Server

    Malonek, Paula; Vettori, Paolo

    2015-01-01

    The topics covered in this book, written by researchers at the forefront of their field, represent some of the most relevant research areas in modern coding theory: codes and combinatorial structures, algebraic geometric codes, group codes, quantum codes, convolutional codes, network coding and cryptography. The book includes a survey paper on the interconnections of coding theory with constrained systems, written by an invited speaker, as well as 37 cutting-edge research communications presented at the 4th International Castle Meeting on Coding Theory and Applications (4ICMCTA), held at the Castle of Palmela in September 2014. The event’s scientific program consisted of four invited talks and 39 regular talks by authors from 24 different countries. This conference provided an ideal opportunity for communicating new results, exchanging ideas, strengthening international cooperation, and introducing young researchers into the coding theory community.

  10. Complete cumulative index (1963-1983)

    International Nuclear Information System (INIS)

    1983-01-01

    This complete cumulative index covers all regular and special issues and supplements published by Atomic Energy Review (AER) during its lifetime (1963-1983). The complete cumulative index consists of six Indexes: the Index of Abstracts, the Subject Index, the Title Index, the Author Index, the Country Index and the Table of Elements Index. The complete cumulative index supersedes the Cumulative Indexes for Volumes 1-7: 1963-1969 (1970), and for Volumes 1-10: 1963-1972 (1972); this Index also finalizes Atomic Energy Review, the publication of which has recently been terminated by the IAEA

  11. A novel construction method of QC-LDPC codes based on CRT for optical communications

    Science.gov (United States)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10−7, the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on CRT, and the irregular QC-LDPC(3 843, 3 603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed construction method has excellent error-correction performance, and can be more suitable for optical transmission systems.
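
    A hypothetical fragment of the CRT combination step (the exact exponent-matrix recipe is specific to the paper; the shift matrices and circulant sizes below are illustrative assumptions): shift exponents of two component codes with coprime circulant sizes p1 and p2 are merged into exponents modulo p1*p2, lengthening the code without shrinking the girth.

        import numpy as np

        def crt(r1, p1, r2, p2):
            # Smallest x with x ≡ r1 (mod p1) and x ≡ r2 (mod p2), gcd(p1, p2) = 1.
            inv = pow(p1, -1, p2)
            return (r1 + p1 * ((r2 - r1) * inv % p2)) % (p1 * p2)

        p1, p2 = 5, 7                                   # coprime circulant sizes (toy values)
        E1 = np.array([[0, 1, 2], [0, 2, 4]])           # shift-exponent matrices of the
        E2 = np.array([[0, 3, 6], [0, 2, 5]])           # two component codes (illustrative)

        E = np.vectorize(lambda a, b: crt(a, p1, b, p2))(E1, E2)
        print("combined exponents mod", p1 * p2, ":\n", E)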

  12. An RNA-Seq strategy to detect the complete coding and non-coding transcriptome including full-length imprinted macro ncRNAs.

    Directory of Open Access Journals (Sweden)

    Ru Huang

    Full Text Available Imprinted macro non-protein-coding (nc) RNAs are cis-repressor transcripts that silence multiple genes in at least three imprinted gene clusters in the mouse genome. Similar macro or long ncRNAs are abundant in the mammalian genome. Here we present the full coding and non-coding transcriptome of two mouse tissues: differentiated ES cells and fetal head using an optimized RNA-Seq strategy. The data produced is highly reproducible in different sequencing locations and is able to detect the full length of imprinted macro ncRNAs such as Airn and Kcnq1ot1, whose length ranges between 80-118 kb. Transcripts show a more uniform read coverage when RNA is fragmented with RNA hydrolysis compared with cDNA fragmentation by shearing. Irrespective of the fragmentation method, all coding and non-coding transcripts longer than 8 kb show a gradual loss of sequencing tags towards the 3' end. Comparisons to published RNA-Seq datasets show that the strategy presented here is more efficient in detecting known functional imprinted macro ncRNAs and also indicate that standardization of RNA preparation protocols would increase the comparability of the transcriptome between different RNA-Seq datasets.

  13. On Analyzing LDPC Codes over Multiantenna MC-CDMA System

    Directory of Open Access Journals (Sweden)

    S. Suresh Kumar

    2014-01-01

    Full Text Available Multiantenna multicarrier code-division multiple access (MC-CDMA) technique has been attracting much attention for designing future broadband wireless systems. In addition, low-density parity-check (LDPC) code, a promising near-optimal error correction code, is also being widely considered in next generation communication systems. In this paper, we propose a simple method to construct a regular quasicyclic low-density parity-check (QC-LDPC) code to improve the transmission performance over the precoded MC-CDMA system with limited feedback. Simulation results show that the coding gain of the proposed QC-LDPC codes is larger than that of the Reed-Solomon codes, and the performance of the multiantenna MC-CDMA system can be greatly improved by these QC-LDPC codes when the data rate is high.

  14. Code-Switching in Judaeo-Arabic Documents from the Cairo Geniza

    Science.gov (United States)

    Wagner, Esther-Miriam; Connolly, Magdalen

    2018-01-01

    This paper investigates code-switching and script-switching in medieval documents from the Cairo Geniza, written in Judaeo-Arabic (Arabic in Hebrew script), Hebrew, Arabic and Aramaic. Legal documents regularly show a macaronic style of Judaeo-Arabic, Aramaic and Hebrew, while in letters code-switching from Judaeo-Arabic to Hebrew is tied in with…

  15. Regular Breakfast and Blood Lead Levels among Preschool Children

    Directory of Open Access Journals (Sweden)

    Needleman Herbert

    2011-04-01

    Full Text Available Abstract Background Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurements of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb≥10 μg/dL) was examined using logistic regression modeling. Results Median B-Pb among children who ate breakfast regularly and those who did not eat breakfast regularly were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater zinc blood levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb) compared with children who did not eat breakfast regularly, p = 0.02. Conclusion The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.

  16. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    Science.gov (United States)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD) which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional trade-off analysis using surface wave dispersion measurements from global as well as regional studies.
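
    A compact sketch of Tikhonov-filtered SVD inversion with an explicit resolution matrix (a generic synthetic kernel, not the surface-wave sensitivity matrices): sweeping the damping parameter traces the misfit/model-norm trade-off that an optimal-regularization criterion would act on.

        import numpy as np

        rng = np.random.default_rng(0)
        G = rng.standard_normal((80, 40))          # synthetic sensitivity matrix
        m_true = rng.standard_normal(40)
        d = G @ m_true + 0.05 * rng.standard_normal(80)

        U, s, Vt = np.linalg.svd(G, full_matrices=False)

        def tikhonov(lam):
            f = s / (s**2 + lam**2)                # damped inverse singular values
            m = Vt.T @ (f * (U.T @ d))
            R = Vt.T @ np.diag(s**2 / (s**2 + lam**2)) @ Vt   # model resolution matrix
            return m, R

        for lam in (0.01, 0.1, 1.0, 10.0):
            m, R = tikhonov(lam)
            print(f"lam={lam:5.2f}  misfit={np.linalg.norm(G @ m - d):6.3f}  "
                  f"model norm={np.linalg.norm(m):6.3f}  trace(R)={np.trace(R):5.1f}")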

  17. The heat flows and harmonic maps from complete manifolds into generalized regular balls

    International Nuclear Information System (INIS)

    Li Jiayu.

    1993-01-01

    Let M be a complete Riemannian manifold (compact (with or without boundary) or noncompact). Let N be a complete Riemannian manifold. We generalize the existence result for harmonic maps obtained by Hildebrandt-Kaul-Widman using the heat flow method. (author). 21 refs

  18. From inactive to regular jogger

    DEFF Research Database (Denmark)

    Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup

    study was conducted using individual semi-structured interviews on how a successful long-term behavior change had been achieved. Ten informants were purposely selected from participants in the DANO-RUN research project (7 men, 3 women, average age 41.5). Interviews were performed on the basis of Theory...... of Planned Behavior (TPB) and The Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results TPB: During the behavior change process, the intention to jogging shifted from a focus on weight loss and improved fitness to both physical health, psychological......Title From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers Authors Pernille Lund-Cramer & Vibeke Brinkmann Løite Purpose Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven...

  19. Effective leader behaviors in regularly held staff meetings: Surveyed vs. Videotaped and Video-Coded Observations

    NARCIS (Netherlands)

    Hoogeboom, Marcella; Hoogeboom, Marcella; Wilderom, Celeste P.M.; Allen, Joseph A.; Lehmann-Willenbrock, Nale; Rogelberg, Steven G.

    2015-01-01

    In this chapter, we report on two studies that took an exploratory behavioral approach to leaders in regular staff meetings. The goal of both studies, which used a still rarely deployed observation method, was to identify effective behavioral repertoires of leaders in staff meetings; we specifically

  20. Operational reactor physics analysis codes (ORPAC)

    International Nuclear Information System (INIS)

    Kumar, Jainendra; Singh, K.P.; Singh, Kanchhi

    2007-07-01

    For efficient, smooth and safe operation of a nuclear research reactor, many reactor physics evaluations are regularly required. The important reactor core management activities include maintaining the core reactivity status, the core power distribution, xenon estimations, safety evaluation of in-pile irradiation samples and experimental assemblies, and assessment of nuclear safety in fuel handling and storage. In-pile irradiation of samples requires a prior estimation of the reactivity load due to the sample, the heating rate, and the activity developed in it during irradiation. For the safety of personnel handling irradiated samples, the dose rate at the surface of the shielded flask housing the irradiated sample should be less than 200 mR/h; proper shielding and radioactive cooling of the irradiated sample are therefore required to meet this requirement. Knowledge of the xenon load variation with time (the startup curve) helps in estimating the xenon override time. Monitoring the power in individual fuel channels during reactor operation is essential for detecting any abnormal power distribution and avoiding unsafe situations. The complexity of estimating the above-mentioned reactor parameters and the frequency with which they are required make computer codes necessary to avoid possible human errors. For efficient and quick evaluation of parameters related to reactor operations, such as xenon load, critical moderator height, and the nuclear heating and reactivity load of isotope samples and experimental assemblies, a computer code ORPAC (Operational Reactor Physics Analysis Codes) has been developed. This code is being used for regular assessment of reactor physics parameters in Dhruva and Cirus. The code ORPAC, written in the Visual Basic 6.0 environment, incorporates several important operational reactor physics aspects on a single platform with graphical user interfaces (GUI) to make it more user-friendly and presentable. (author)
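
    A hedged sketch of the kind of xenon estimate such a code performs, using the standard I-135/Xe-135 balance equations with representative textbook constants (not ORPAC's data or normalization): integrating the chain after shutdown gives the xenon build-up and the override time scale.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Representative (textbook) constants -- assumptions, not ORPAC values.
        lam_I, lam_Xe = 2.93e-5, 2.11e-5       # decay constants, 1/s
        gamma_I, gamma_Xe = 0.0639, 0.00237    # fission yields
        sigma_Xe = 2.65e-18                    # Xe-135 absorption cross-section, cm^2
        Sigma_f, phi0 = 0.05, 1e13             # macroscopic fission xs (1/cm), flux (n/cm^2/s)

        def rhs(t, y, phi):
            I, Xe = y
            dI = gamma_I * Sigma_f * phi - lam_I * I
            dXe = gamma_Xe * Sigma_f * phi + lam_I * I - lam_Xe * Xe - sigma_Xe * phi * Xe
            return [dI, dXe]

        # Equilibrium at full power, then shutdown (phi = 0) followed for 40 hours.
        I_eq = gamma_I * Sigma_f * phi0 / lam_I
        Xe_eq = (gamma_Xe + gamma_I) * Sigma_f * phi0 / (lam_Xe + sigma_Xe * phi0)
        sol = solve_ivp(rhs, (0.0, 40 * 3600.0), [I_eq, Xe_eq], args=(0.0,),
                        dense_output=True, max_step=600.0)
        t = np.linspace(0.0, 40 * 3600.0, 400)
        Xe = sol.sol(t)[1]
        print(f"post-shutdown Xe peak at ~{t[np.argmax(Xe)] / 3600.0:.1f} h, "
              f"peak/initial ratio {Xe.max() / Xe_eq:.2f}")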

  1. Assessment of the RELAP4/MOD6 thermal-hydraulic transient code for PWR experimental applications. Addendum. Analyses completed and reported in FY 1979. Interim report

    International Nuclear Information System (INIS)

    1979-09-01

    The results of three subtasks that complete the assessment of the RELAP4/MOD6 computer code are reported. These subtasks constitute the remainder of a broadly scoped assessment matrix defined and described in detail in a previously published document. The specific subtasks provide comparisons of code calculations with experimental results from core blowdown and critical-flow separate-effects experiments and from an integral systems-effects loss-of-coolant experiment. The basic emphasis of the comparisons is in the presentation of the study results in error form suitable for statistical analysis

  2. On the sizes of expander graphs and minimum distances of graph codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2014-01-01

    We give lower bounds for the minimum distances of graph codes based on expander graphs. The bounds depend only on the second eigenvalue of the graph and the parameters of the component codes. We also give an upper bound on the size of a degree regular graph with given second eigenvalue....

  3. A Game Theoretic Approach to Minimize the Completion Time of Network Coded Cooperative Data Exchange

    KAUST Repository

    Douik, Ahmed S.

    2014-05-11

    In this paper, we introduce a game theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. Through extensive simulations, our approach is compared to the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.

  4. A Game Theoretic Approach to Minimize the Completion Time of Network Coded Cooperative Data Exchange

    KAUST Repository

    Douik, Ahmed S.; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim; Sorour, Sameh; Tembine, Hamidou

    2014-01-01

    In this paper, we introduce a game theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. Through extensive simulations, our approach is compared to the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.

  5. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  6. An axisymmetric gravitational collapse code

    Energy Technology Data Exchange (ETDEWEB)

    Choptuik, Matthew W [CIAR Cosmology and Gravity Program, Department of Physics and Astronomy, University of British Columbia, Vancouver BC, V6T 1Z1 (Canada); Hirschmann, Eric W [Department of Physics and Astronomy, Brigham Young University, Provo, UT 84604 (United States); Liebling, Steven L [Southampton College, Long Island University, Southampton, NY 11968 (United States); Pretorius, Frans [Theoretical Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States)

    2003-05-07

    We present a new numerical code designed to solve the Einstein field equations for axisymmetric spacetimes. The long-term goal of this project is to construct a code that will be capable of studying many problems of interest in axisymmetry, including gravitational collapse, critical phenomena, investigations of cosmic censorship and head-on black-hole collisions. Our objective here is to detail the (2+1)+1 formalism we use to arrive at the corresponding system of equations and the numerical methods we use to solve them. We are able to obtain stable evolution, despite the singular nature of the coordinate system on the axis, by enforcing appropriate regularity conditions on all variables and by adding numerical dissipation to hyperbolic equations.

  7. An axisymmetric gravitational collapse code

    International Nuclear Information System (INIS)

    Choptuik, Matthew W; Hirschmann, Eric W; Liebling, Steven L; Pretorius, Frans

    2003-01-01

    We present a new numerical code designed to solve the Einstein field equations for axisymmetric spacetimes. The long-term goal of this project is to construct a code that will be capable of studying many problems of interest in axisymmetry, including gravitational collapse, critical phenomena, investigations of cosmic censorship and head-on black-hole collisions. Our objective here is to detail the (2+1)+1 formalism we use to arrive at the corresponding system of equations and the numerical methods we use to solve them. We are able to obtain stable evolution, despite the singular nature of the coordinate system on the axis, by enforcing appropriate regularity conditions on all variables and by adding numerical dissipation to hyperbolic equations

  8. Performance analysis of multiple interference suppression over asynchronous/synchronous optical code-division multiple-access system based on complementary/prime/shifted coding scheme

    Science.gov (United States)

    Nieh, Ta-Chun; Yang, Chao-Chin; Huang, Jen-Fa

    2011-08-01

    A complete complementary/prime/shifted prime (CPS) code family for the optical code-division multiple-access (OCDMA) system is proposed. Based on the ability of complete complementary (CC) code, the multiple-access interference (MAI) can be suppressed and eliminated via spectral amplitude coding (SAC) OCDMA system under asynchronous/synchronous transmission. By utilizing the shifted prime (SP) code in the SAC scheme, the hardware implementation of encoder/decoder can be simplified with a reduced number of optical components, such as arrayed waveguide grating (AWG) and fiber Bragg grating (FBG). This system has a superior performance as compared to previous bipolar-bipolar coding OCDMA systems.

  9. Grasping completions: Towards a new paradigm

    NARCIS (Netherlands)

    Lommertzen, J.; Meulenbroek, R.G.J.; Lier, R.J. van

    2006-01-01

    We studied contextual effects of amodal completion in both a primed-matching task, and a grasping task in a within-subjects design with twenty-nine participants. Stimuli were partly occluded cylindrical objects that could have indentations (or protrusions) at regular intervals along the contour. The

  10. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
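
    A minimal NumPy sketch of the soft-thresholded SVD iteration described above (a single fixed regularization parameter rather than the warm-started path; names are illustrative, not from the paper):

      import numpy as np

      def soft_impute(X, observed_mask, lam, n_iters=100):
          """Fill missing entries of X by iterating a soft-thresholded SVD.

          X             : array with arbitrary values at unobserved positions
          observed_mask : boolean array, True where X is observed
          lam           : nuclear-norm regularization parameter (threshold)
          """
          Z = np.zeros_like(X)                      # current completed matrix
          for _ in range(n_iters):
              # keep observed entries, use the current estimate elsewhere
              filled = np.where(observed_mask, X, Z)
              U, s, Vt = np.linalg.svd(filled, full_matrices=False)
              s_thr = np.maximum(s - lam, 0.0)      # soft-threshold singular values
              Z_new = (U * s_thr) @ Vt
              if np.linalg.norm(Z_new - Z) <= 1e-6 * (np.linalg.norm(Z) + 1e-12):
                  Z = Z_new
                  break
              Z = Z_new
          return Z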

  11. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,

  12. Nonlocal Regularized Algebraic Reconstruction Techniques for MRI: An Experimental Study

    Directory of Open Access Journals (Sweden)

    Xin Li

    2013-01-01

    We attempt to revitalize researchers' interest in algebraic reconstruction techniques (ART) by expanding their capabilities and demonstrating their potential in speeding up the process of MRI acquisition. Using a continuous-to-discrete model, we experimentally study the application of ART to MRI reconstruction, which unifies previous nonuniform-fast-Fourier-transform (NUFFT)-based and gridding-based approaches. Under the framework of ART, we advocate the use of nonlocal regularization techniques, which are leveraged from our previous research on modeling photographic images. It is experimentally shown that nonlocal regularization ART (NR-ART) can often outperform its local counterparts in terms of both subjective and objective qualities of reconstructed images. On one real-world k-space data set, we find that nonlocal regularization can achieve satisfactory reconstruction from as few as one-third of the samples. We also address an issue related to image reconstruction from real-world k-space data but overlooked in the open literature: the consistency of reconstructed images across different resolutions. A resolution-consistent extension of NR-ART is developed and shown to effectively suppress the artifacts arising from frequency extrapolation. Both source codes and experimental results of this work are made fully reproducible.
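
    For reference, the classical ART update that such methods build on is a Kaczmarz-style row projection; a bare-bones sketch without the nonlocal regularization step (which the paper interleaves with these sweeps) might look like:

      import numpy as np

      def art_sweep(A, b, x, relax=1.0):
          """One ART (Kaczmarz) sweep: project x onto each measurement row in turn."""
          for i in range(A.shape[0]):
              a_i = A[i]
              denom = a_i @ a_i
              if denom > 0:
                  x = x + relax * (b[i] - a_i @ x) / denom * a_i
          return x

      def art_reconstruct(A, b, n_sweeps=20):
          x = np.zeros(A.shape[1])
          for _ in range(n_sweeps):
              x = art_sweep(A, b, x)
              # a nonlocal regularization (denoising) step would be applied here
          return x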

  13. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a

  14. Code Team Training: Demonstrating Adherence to AHA Guidelines During Pediatric Code Blue Activations.

    Science.gov (United States)

    Stewart, Claire; Shoemaker, Jamie; Keller-Smith, Rachel; Edmunds, Katherine; Davis, Andrew; Tegtmeyer, Ken

    2017-10-16

    Pediatric code blue activations are infrequent events with a high mortality rate despite the best effort of code teams. The best method for training these code teams is debatable; however, it is clear that training is needed to assure adherence to American Heart Association (AHA) Resuscitation Guidelines and to prevent the decay that invariably occurs after Pediatric Advanced Life Support training. The objectives of this project were to train a multidisciplinary, multidepartmental code team and to measure this team's adherence to AHA guidelines during code simulation. Multidisciplinary code team training sessions were held using high-fidelity, in situ simulation. Sessions were held several times per month. Each session was filmed and reviewed for adherence to 5 AHA guidelines: chest compression rate, ventilation rate, chest compression fraction, use of a backboard, and use of a team leader. After the first study period, modifications were made to the code team including implementation of just-in-time training and alteration of the compression team. Thirty-eight sessions were completed, with 31 eligible for video analysis. During the first study period, 1 session adhered to all AHA guidelines. During the second study period, after alteration of the code team and implementation of just-in-time training, no sessions adhered to all AHA guidelines; however, there was an improvement in percentage of sessions adhering to ventilation rate and chest compression rate and an improvement in median ventilation rate. We present a method for training a large code team drawn from multiple hospital departments and a method of assessing code team performance. Despite subjective improvement in code team positioning, communication, and role completion and some improvement in ventilation rate and chest compression rate, we failed to consistently demonstrate improvement in adherence to all guidelines.

  15. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cogen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  16. Tunable Sparse Network Coding for Multicast Networks

    DEFF Research Database (Denmark)

    Feizi, Soheil; Roetter, Daniel Enrique Lucani; Sørensen, Chres Wiant

    2014-01-01

    This paper shows the potential and key enabling mechanisms for tunable sparse network coding, a scheme in which the density of network coded packets varies during a transmission session. At the beginning of a transmission session, sparsely coded packets are transmitted, which benefits decoding complexity. At the end of a transmission, when receivers have accumulated degrees of freedom, coding density is increased. We propose a family of tunable sparse network codes (TSNCs) for multicast erasure networks with a controllable trade-off between completion time performance and decoding complexity ... a mechanism to perform efficient Gaussian elimination over sparse matrices, going beyond belief propagation but maintaining low decoding complexity. Supporting simulation results are provided showing the trade-off between decoding complexity and completion time.

  17. Application of regularization technique in image super-resolution algorithm via sparse representation

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Huang, Hui; Zheng, Li-xin

    2017-11-01

    To make use of the prior knowledge of the image more effectively and restore more details of the edges and structures, a novel sparse coding objective function is proposed by applying the principle of the non-local similarity and manifold learning on the basis of super-resolution algorithm via sparse representation. Firstly, the non-local similarity regularization term is constructed by using the similar image patches to preserve the edge information. Then, the manifold learning regularization term is constructed by utilizing the locally linear embedding approach to enhance the structural information. The experimental results validate that the proposed algorithm has a significant improvement compared with several super-resolution algorithms in terms of the subjective visual effect and objective evaluation indices.
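
    Schematically, the kind of objective described above combines a sparse-coding data term with the two extra regularizers. The notation below is illustrative, not the authors' exact formulation: y is the low-resolution observation, S downsampling, H blurring, D the dictionary, x = D\alpha the reconstructed image, R_i extracts patch i, N(i) are its non-local similar patches and \Omega(i) its manifold (LLE) neighbours:

      \min_{\alpha}\ \|y - SHD\alpha\|_2^2 + \lambda\|\alpha\|_1
          + \eta \sum_i \Big\| R_i x - \sum_{j\in\mathcal{N}(i)} w_{ij}\, R_j x \Big\|_2^2
          + \gamma \sum_i \Big\| R_i x - \sum_{j\in\Omega(i)} c_{ij}\, R_j x \Big\|_2^2 ,
      \qquad x = D\alpha .

    The second term is the non-local similarity regularizer (edge preservation) and the third the manifold-learning regularizer (structure preservation).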

  18. Regularization of the double period method for experimental data processing

    Science.gov (United States)

    Belov, A. A.; Kalitkin, N. N.

    2017-11-01

    In physical and technical applications, an important task is to process experimental curves measured with large errors. Such problems are solved by applying regularization methods, in which success depends on the mathematician's intuition. We propose an approximation based on the double period method developed for smooth nonperiodic functions. Tikhonov's stabilizer with a squared second derivative is used for regularization. As a result, the spurious oscillations are suppressed and the shape of an experimental curve is accurately represented. This approach offers a universal strategy for solving a broad class of problems. The method is illustrated by approximating cross sections of nuclear reactions important for controlled thermonuclear fusion. Tables recommended as reference data are obtained. These results are used to calculate the reaction rates, which are approximated in a way convenient for gasdynamic codes. These approximations are superior to previously known formulas in the covered temperature range and accuracy.
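
    In generic form (not the authors' exact notation), the regularized fit described above amounts to minimizing a data-misfit term plus Tikhonov's second-derivative stabilizer,

      \Phi[u] \;=\; \sum_{k} \big( u(t_k) - f_k \big)^2 \;+\; \lambda \int \big( u''(t) \big)^2 \, dt ,

    where f_k are the measured values, u is the smooth approximant built from the double-period expansion, and \lambda controls how strongly spurious oscillations are suppressed.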

  19. Self-Esteem of Deaf and Hard of Hearing Students in Regular and Special Schools

    Science.gov (United States)

    Lesar, Irena; Smrtnik Vitulic, Helena

    2014-01-01

    The study focuses on the self-esteem of deaf and hard of hearing (D/HH) students from Slovenia. A total of 80 D/HH students from regular and special primary schools (grades 6-9) and from regular and special secondary schools (grades 1-4) completed the Self-Esteem Questionnaire (Lamovec 1994). For the entire group of D/HH students, the results of…

  20. News from the Library: A new key reference work for the engineer: ASME's Boiler and Pressure Vessel Code at the CERN Library

    CERN Multimedia

    CERN Library

    2011-01-01

    The Library is aiming at offering a range of constantly updated reference books, to cover all areas of CERN activity. A recent addition to our collections strengthens our offer in the Engineering field.   The CERN Library now holds a copy of the complete ASME Boiler and Pressure Vessel Code, 2010 edition. This code establishes rules of safety governing the design, fabrication, and inspection of boilers and pressure vessels, and nuclear power plant components during construction. This document is considered worldwide as a reference for mechanical design and is therefore important for the CERN community. The Code published by ASME (American Society of Mechanical Engineers) is kept current by the Boiler and Pressure Committee, a volunteer group of more than 950 engineers worldwide. The Committee meets regularly to consider requests for interpretations, revision, and to develop new rules. The CERN Library receives updates and includes them in the volumes until the next edition, which is expected to ...

  1. Stream Processing Using Grammars and Regular Expressions

    DEFF Research Database (Denmark)

    Rasmussen, Ulrik Terp

    disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs...... as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present...... Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...

  2. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
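
    For small problems, the general-form Tikhonov solution that such iterative methods approximate can be written down directly from the normal equations; a plain NumPy sketch (the operator L and the parameter value are placeholders):

      import numpy as np

      def tikhonov_general(A, b, L, lam):
          """Solve min ||A x - b||^2 + lam * ||L x||^2 via the normal equations.

          For large-scale problems one would instead use an iterative
          (e.g. Golub-Kahan-based) method, as in the paper above.
          """
          lhs = A.T @ A + lam * (L.T @ L)
          rhs = A.T @ b
          return np.linalg.solve(lhs, rhs)

      def second_difference_operator(n):
          """Discrete second-derivative operator, a common choice for L."""
          L = np.zeros((n - 2, n))
          for i in range(n - 2):
              L[i, i:i + 3] = [1.0, -2.0, 1.0]
          return L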

  3. Procedure and code for calculating black control rods taking into account epithermal absorption, code CAS-1; Postupak i program za proracun crnih kontrolnih sipki, uzimajuci u obzir i epitermalnu apsorpciju, CAS-1

    Energy Technology Data Exchange (ETDEWEB)

    Martinc, R; Trivunac, N; Zivkovic, Z [Boris Kidric Institute of nuclear sciences Vinca, Belgrade (Yugoslavia)

    1964-12-15

    This report describes the computer code CAS-1 and the calculation method and procedure applied for calculating black control rods, taking into account epithermal neutron absorption. Results obtained with the supercell method applied to a regular lattice reflected in the multiplying medium are included in this report, in addition to the computer code manual.

  4. The KFA-Version of the high-energy transport code HETC and the generalized evaluation code SIMPEL

    International Nuclear Information System (INIS)

    Cloth, P.; Filges, D.; Sterzenbach, G.; Armstrong, T.W.; Colborn, B.L.

    1983-03-01

    This document describes the updates that have been made to the high-energy transport code HETC for use in the German spallation-neutron source project SNQ. Performance and purpose of the subsidiary code SIMPEL that has been written for general analysis of the HETC output are also described. In addition means of coupling to low energy transport programs, such as the Monte-Carlo code MORSE is provided. As complete input descriptions for HETC and SIMPEL are given together with a sample problem, this document can serve as a user's manual for these two codes. The document is also an answer to the demand that has been issued by a greater community of HETC users on the ICANS-IV meeting, Oct 20-24 1980, Tsukuba-gun, Japan for a complete description of at least one single version of HETC among the many different versions that exist. (orig.)

  5. Two-pass greedy regular expression parsing

    DEFF Research Database (Denmark)

    Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse

    2013-01-01

    We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms...... by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k ... and not requiring it to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, or require substantially more log storage and employ 3 passes where the first consists of reversing the input, or do not or are not known to produce a greedy parse. The performance of our unoptimized C...
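
    As a toy illustration of what a bit-coded greedy parse is (a naive backtracking sketch, not the two-pass lean-log algorithm of the paper; the bit conventions below are one common choice):

      # Toy regex AST: ('chr', c), ('seq', e1, e2), ('alt', e1, e2), ('star', e), ('eps',)
      def parses(e, s, i):
          """Yield (bits, j): bit-codings of ways e can match s[i:j], in greedy order."""
          kind = e[0]
          if kind == 'eps':
              yield '', i
          elif kind == 'chr':
              if i < len(s) and s[i] == e[1]:
                  yield '', i + 1
          elif kind == 'seq':
              for b1, j in parses(e[1], s, i):
                  for b2, k in parses(e[2], s, j):
                      yield b1 + b2, k
          elif kind == 'alt':
              for b, j in parses(e[1], s, i):   # '0' records the left branch
                  yield '0' + b, j
              for b, j in parses(e[2], s, i):   # '1' records the right branch
                  yield '1' + b, j
          elif kind == 'star':
              for b, j in parses(e[1], s, i):   # '0' records one more iteration
                  if j > i:                     # require progress to avoid looping
                      for rest, k in parses(e, s, j):
                          yield '0' + b + rest, k
              yield '1', i                      # '1' records the end of the star

      def greedy_bitcode(e, s):
          """First bit-coding that matches the whole input, i.e. the greedy parse."""
          for bits, j in parses(e, s, 0):
              if j == len(s):
                  return bits
          return None

      # greedy parse of (a|ab)(c|bc) against "abc" takes the first alternative,
      # then the second, giving the bit string '01'
      ab = ('alt', ('chr', 'a'), ('seq', ('chr', 'a'), ('chr', 'b')))
      cbc = ('alt', ('chr', 'c'), ('seq', ('chr', 'b'), ('chr', 'c')))
      print(greedy_bitcode(('seq', ab, cbc), 'abc'))   # -> '01'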

  6. A Complete Video Coding Chain Based on Multi-Dimensional Discrete Cosine Transform

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2010-09-01

    The paper deals with a video compression method based on the multi-dimensional discrete cosine transform. In the text, the encoder and decoder architectures, including the definitions of all mathematical operations such as the forward and inverse 3-D DCT, quantization and thresholding, are presented. According to the particular number of currently processed pictures, new quantization tables and entropy code dictionaries are proposed in the paper. The practical properties of the 3-D DCT coding chain compared with modern video compression methods (such as H.264 and WebM) and the computing complexity are presented as well. It is shown that the best compression properties are achieved by the complex H.264 codec. On the other hand, the computing complexity - especially on the encoding side - is lower for the 3-D DCT method.
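
    A minimal sketch of the transform/quantization core of such a chain on one group of pictures (SciPy's dctn/idctn; a simple uniform quantizer stands in for the quantization tables and entropy coding proposed in the paper):

      import numpy as np
      from scipy.fft import dctn, idctn

      def encode_group(frames, q_step=1.0):
          """frames: (T, H, W) group of pictures -> quantized 3-D DCT coefficients."""
          coeffs = dctn(frames, norm='ortho')            # forward 3-D DCT
          return np.round(coeffs / q_step)               # uniform quantization (stand-in)

      def decode_group(q_coeffs, q_step=1.0):
          return idctn(q_coeffs * q_step, norm='ortho')  # dequantize + inverse 3-D DCT

      # usage: 8 frames of 16x16 synthetic video
      group = np.random.rand(8, 16, 16)
      rec = decode_group(encode_group(group))
      print(np.max(np.abs(group - rec)))                 # distortion introduced by quantization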

  7. On the Organizational Dynamics of the Genetic Code

    KAUST Repository

    Zhang, Zhang

    2011-06-07

    The organization of the canonical genetic code needs to be thoroughly illuminated. Here we reorder the four nucleotides—adenine, thymine, guanine and cytosine—according to their emergence in evolution, and apply the organizational rules to devising an algebraic representation for the canonical genetic code. Under a framework of the devised code, we quantify codon and amino acid usages from a large collection of 917 prokaryotic genome sequences, and associate the usages with its intrinsic structure and classification schemes as well as amino acid physicochemical properties. Our results show that the algebraic representation of the code is structurally equivalent to a content-centric organization of the code and that codon and amino acid usages under different classification schemes were correlated closely with GC content, implying a set of rules governing composition dynamics across a wide variety of prokaryotic genome sequences. These results also indicate that codons and amino acids are not randomly allocated in the code, where the six-fold degenerate codons and their amino acids have important balancing roles for error minimization. Therefore, the content-centric code is of great usefulness in deciphering its hitherto unknown regularities as well as the dynamics of nucleotide, codon, and amino acid compositions.

  8. On the Organizational Dynamics of the Genetic Code

    KAUST Repository

    Zhang, Zhang; Yu, Jun

    2011-01-01

    The organization of the canonical genetic code needs to be thoroughly illuminated. Here we reorder the four nucleotides—adenine, thymine, guanine and cytosine—according to their emergence in evolution, and apply the organizational rules to devising an algebraic representation for the canonical genetic code. Under a framework of the devised code, we quantify codon and amino acid usages from a large collection of 917 prokaryotic genome sequences, and associate the usages with its intrinsic structure and classification schemes as well as amino acid physicochemical properties. Our results show that the algebraic representation of the code is structurally equivalent to a content-centric organization of the code and that codon and amino acid usages under different classification schemes were correlated closely with GC content, implying a set of rules governing composition dynamics across a wide variety of prokaryotic genome sequences. These results also indicate that codons and amino acids are not randomly allocated in the code, where the six-fold degenerate codons and their amino acids have important balancing roles for error minimization. Therefore, the content-centric code is of great usefulness in deciphering its hitherto unknown regularities as well as the dynamics of nucleotide, codon, and amino acid compositions.

  9. Complete mitochondrial genome of Cynopterus sphinx (Pteropodidae: Cynopterus).

    Science.gov (United States)

    Li, Linmiao; Li, Min; Wu, Zhengjun; Chen, Jinping

    2015-01-01

    We have characterized the complete mitochondrial genome of Cynopterus sphinx (Pteropodidae: Cynopterus) and described its organization in this study. The total length of the C. sphinx complete mitochondrial genome was 16,895 bp, with a base composition of 32.54% A, 14.05% G, 25.82% T and 27.59% C. The complete mitochondrial genome included 13 protein-coding genes, 22 tRNA genes, 2 rRNA genes (12S rRNA and 16S rRNA) and 1 control region (D-loop). The control region was 1435 bp long, with the sequence CATACG repeated 64 times. Three protein-coding genes (ND1, COI and ND4) ended with an incomplete stop codon (TA or T).

  10. Performance Comparisons of Improved Regular Repeat Accumulate (RA and Irregular Repeat Accumulate (IRA Turbo Decoding

    Directory of Open Access Journals (Sweden)

    Ahmed Abdulkadhim Hamad

    2017-08-01

    In this paper, different techniques are used to improve the turbo decoding of regular repeat accumulate (RA) and irregular repeat accumulate (IRA) codes. Adaptive scaling of the a-posteriori information produced by the soft-output Viterbi algorithm (SOVA) decoder is proposed. Encoded pilots are another scheme, applied to short-length RA codes. This work also suggests a simple and fast method to generate a random interleaver whose Tanner graph is free of 4-cycles. The progressive edge growth (PEG) algorithm is also studied and simulated to create Tanner graphs with large girth.

  11. An analysis of electrical impedance tomography with applications to Tikhonov regularization

    KAUST Repository

    Jin, Bangti

    2012-01-16

    This paper analyzes the continuum model/complete electrode model in the electrical impedance tomography inverse problem of determining the conductivity parameter from boundary measurements. The continuity and differentiability of the forward operator with respect to the conductivity parameter in L p-norms are proved. These analytical results are applied to several popular regularization formulations, which incorporate a priori information of smoothness/sparsity on the inhomogeneity through Tikhonov regularization, for both linearized and nonlinear models. Some important properties, e.g., existence, stability, consistency and convergence rates, are established. This provides some theoretical justifications of their practical usage. © EDP Sciences, SMAI, 2012.

  12. An analysis of electrical impedance tomography with applications to Tikhonov regularization

    KAUST Repository

    Jin, Bangti; Maass, Peter

    2012-01-01

    This paper analyzes the continuum model/complete electrode model in the electrical impedance tomography inverse problem of determining the conductivity parameter from boundary measurements. The continuity and differentiability of the forward operator with respect to the conductivity parameter in L p-norms are proved. These analytical results are applied to several popular regularization formulations, which incorporate a priori information of smoothness/sparsity on the inhomogeneity through Tikhonov regularization, for both linearized and nonlinear models. Some important properties, e.g., existence, stability, consistency and convergence rates, are established. This provides some theoretical justifications of their practical usage. © EDP Sciences, SMAI, 2012.

  13. Insurance billing and coding.

    Science.gov (United States)

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, Current Dental Terminology (CDT) codes are most commonly used by dentists to submit claims, whereas Current Procedural Terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  14. The WIMS familly of codes

    International Nuclear Information System (INIS)

    Askew, J.

    1981-01-01

    WIMS-D4 is the latest version of the original form of the Winfrith Improved Multigroup Scheme, developed in 1963-5 for lattice calculations on all types of thermal reactor, whether moderated by graphite, heavy or light water. The code, in earlier versions, has been available from the NEA code centre for a number of years in both IBM and CDC dialects of FORTRAN. An important feature of this code was its rapid, accurate deterministic system for treating resonance capture in heavy nuclides, which is capable of dealing with both regular pin lattices and with cluster geometries typical of pressure tube and gas cooled reactors. WIMS-E is a compatible code scheme in which each calculation step is bounded by standard interfaces on disc or tape. The interfaces contain files of information in a standard form, restricted to numbers representing physically meaningful quantities such as cross-sections and fluxes. Restriction of code intercommunication to this channel limits the possible propagation of errors. A module is capable of transforming WIMS-D output into the standard interface form and hence the two schemes can be linked if required. LWR-WIMS was developed in 1970 as a method of calculating LWR reloads for the fuel fabricators BNFL/GUNF. It uses the WIMS-E library and a number of the same modules

  15. On-line monitoring and inservice inspection in codes; Betriebsueberwachung und wiederkehrende Pruefungen in den Regelwerken

    Energy Technology Data Exchange (ETDEWEB)

    Bartonicek, J.; Zaiss, W. [Gemeinschaftskernkraftwerk Neckar GmbH, Neckarwestheim (Germany); Bath, H.R. [Bundesamt fuer Strahlenschutz, Salzgitter (Germany). Geschaeftsstelle des Kerntechnischen Ausschusses (KTA)

    1999-08-01

    The relevant regulatory codes determine the ISI tasks and the time intervals for recurrent component testing for the evaluation of operation-induced damage or ageing, in order to ensure component integrity on the basis of the last available quality data. In-service quality monitoring is carried out through on-line monitoring and recurrent testing. The requirements defined by the engineering codes elaborated by various institutions are comparable, with the KTA nuclear engineering and safety codes being the most complete provisions for quality evaluation and assurance after different, defined service periods. German conventional codes for assuring component integrity provide exclusively for recurrent inspection regimes (mainly pressure tests and optical testing). The requirements defined in the KTA codes, however, always demanded more specific inspections relying on recurrent testing as well as on-line monitoring. Foreign codes for ensuring component integrity concentrate on NDE tasks at regular time intervals, with the time intervals and scope of testing activities being defined on the basis of the ASME code, section XI. (orig./CB) [German:] For component integrity, the damage mechanisms must be covered with the margin to be maintained according to the codes. The decisive starting point is the quality actually present (as-is quality). The existing quality is secured during further operation by suitable operational monitoring and recurrent inspections. The requirements of the codes are comparable, whereby the determination of the existing quality after a given period of operation, and its assurance during further operation, is most completely possible on the basis of the KTA codes. In German conventional codes, assurance of component integrity during operation rests solely on recurrent inspections (mainly pressure tests and visual inspections). The KTA codes have always demanded qualified ...

  16. Continuous-variable quantum erasure correcting code

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Sabuncu, Metin; Huck, Alexander

    2010-01-01

    We experimentally demonstrate a continuous variable quantum erasure-correcting code, which protects coherent states of light against complete erasure. The scheme encodes two coherent states into a bi-party entangled state, and the resulting 4-mode code is conveyed through 4 independent channels...

  17. Advanced codes and methods supporting improved fuel cycle economics - 5493

    International Nuclear Information System (INIS)

    Curca-Tivig, F.; Maupin, K.; Thareau, S.

    2015-01-01

    AREVA's code development program was practically completed in 2014. The basic codes supporting a new generation of advanced methods are the following. GALILEO is a state-of-the-art fuel rod performance code for PWR and BWR applications. Development is complete, and implementation has started in France and the U.S.A. ARCADIA-1 is a state-of-the-art neutronics/thermal-hydraulics/thermal-mechanics code system for PWR applications. Development is complete, and implementation has started in Europe and in the U.S.A. The system thermal-hydraulic codes S-RELAP5 and CATHARE-2 are not really new but still state-of-the-art in the domain. S-RELAP5 was completely restructured and re-coded such that its life cycle increases by further decades. CATHARE-2 will be replaced in the future by the new CATHARE-3. The new AREVA codes and methods are largely based on first-principles modeling with an extremely broad international verification and validation data base. This enables AREVA and its customers to access more predictable licensing processes in a fast evolving regulatory environment (new safety criteria, requests for enlarged qualification databases, statistical applications, uncertainty propagation...). In this context, the advanced codes and methods and the associated verification and validation represent the key to avoiding penalties on products, on operational limits, or on methodologies themselves

  18. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  19. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) First do all algebra exactly as in D = 4; (2) Then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules needed for superconformal anomalies are discussed. Problems associated with renormalizability and higher order loops are also discussed

  20. Expansion of the CHR bone code system

    International Nuclear Information System (INIS)

    Farnham, J.E.; Schlenker, R.A.

    1976-01-01

    This report describes the coding system used in the Center for Human Radiobiology (CHR) to identify individual bones and portions of bones of a complete skeletal system. It includes illustrations of various bones and bone segments with their respective code numbers. Codes are also presented for bone groups and for nonbone materials

  1. ARES: automated response function code. Users manual

    International Nuclear Information System (INIS)

    Maung, T.; Reynolds, G.M.

    1981-06-01

    This ARES user's manual provides detailed instructions for a general understanding of the Automated Response Function Code and gives step by step instructions for using the complete code package on a HP-1000 system. This code is designed to calculate response functions of NaI gamma-ray detectors, with cylindrical or rectangular geometries

  2. Complex sparse spatial filter for decoding mixed frequency and phase coded steady-state visually evoked potentials.

    Science.gov (United States)

    Morikawa, Naoki; Tanaka, Toshihisa; Islam, Md Rabiul

    2018-07-01

    Mixed frequency and phase coding (FPC) can significantly increase the number of commands in a steady-state visual evoked potential-based brain-computer interface (SSVEP-BCI). However, the inconsistent phases of the SSVEP over channels in a trial and the existence of non-contributing channels due to noise effects can decrease the accurate detection of the stimulus frequency. We propose a novel command detection method based on a complex sparse spatial filter (CSSF), obtained by solving ℓ1- and ℓ2,1-regularization problems, for a mixed-coded SSVEP-BCI. In particular, ℓ2,1-regularization (also known as group sparsification) can lead to the rejection of electrodes that are not contributing to the SSVEP detection. A calibration-data-based canonical correlation analysis (CCA) and the CSSF with ℓ1- and ℓ2,1-regularization were demonstrated on 16-target stimuli with eleven subjects. The results of statistical tests suggest that the proposed method with ℓ1- and ℓ2,1-regularization achieved the highest ITR, with statistical significance. The proposed approaches do not need any reference signals, automatically select prominent channels, and reduce the computational cost compared to other mixed frequency-phase coding (FPC)-based BCIs. The experimental results suggest that the proposed method can be used to implement a BCI effectively with reduced visual fatigue. Copyright © 2018 Elsevier B.V. All rights reserved.
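
    For reference, the group-sparse penalty mentioned above is the ℓ2,1 norm of the spatial filter matrix, which drives whole rows (channels) to zero; in generic notation (illustrative, not the authors' exact formulation):

      \|W\|_{2,1} \;=\; \sum_{c=1}^{C} \Big( \sum_{k} |w_{c,k}|^2 \Big)^{1/2},
      \qquad
      \widehat{W} \;=\; \arg\min_{W}\ \mathcal{L}(W) \;+\; \lambda \|W\|_{2,1},

    so channels whose entire row of coefficients is shrunk to zero are effectively rejected from the SSVEP detection.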

  3. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization...

  4. The Dedekind completion of C ( X ): an interval-valued functions ...

    African Journals Online (AJOL)

    In his paper [1] R. Anguelov described the construction of the Dedekind order completion of C(X), the set of all real-valued continuous functions defined on a completely regular topological space X, using Hausdorff continuous real interval-valued functions. The aim of this paper is to show that Anguelov's construction can be ...

  5. Dedekind σ-complete vector lattice of b-AM-compact operators ...

    African Journals Online (AJOL)

    We give several equivalent conditions characterizing the case when Krb-AM(E,F) is Dedekind σ-complete. Moreover, we describe the case when the space of all regular b-AM-compact operators from E to F is complete under the b-AM-norm. Keywords: Banach lattices, b-AM-compact operator, discrete space ...

  6. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    Science.gov (United States)

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown their great performance in application to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves a better performance than several state-of-the-art algorithms.
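
    Since the graph-based regularization reduces to a weighted nuclear norm, the core update is a weighted singular-value thresholding; a minimal NumPy sketch (the weights and the outer loop over patch groups are placeholders, not the paper's full algorithm):

      import numpy as np

      def weighted_svt(Y, weights):
          """Weighted singular-value thresholding: shrink each singular value
          of Y by its own weight (larger weights shrink more)."""
          U, s, Vt = np.linalg.svd(Y, full_matrices=False)
          s_shrunk = np.maximum(s - weights, 0.0)
          return (U * s_shrunk) @ Vt

      # usage on a noisy patch group stacked column-wise (weights chosen heuristically)
      Y = np.random.randn(64, 20)
      w = np.full(min(Y.shape), 0.5)
      X_hat = weighted_svt(Y, w)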

  7. Technical Note: Regularization performances with the error consistency method in the case of retrieved atmospheric profiles

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2007-01-01

    Full Text Available The retrieval of concentration vertical profiles of atmospheric constituents from spectroscopic measurements is often an ill-conditioned problem and regularization methods are frequently used to improve its stability. Recently a new method, that provides a good compromise between precision and vertical resolution, was proposed to determine analytically the value of the regularization parameter. This method is applied for the first time to real measurements with its implementation in the operational retrieval code of the satellite limb-emission measurements of the MIPAS instrument and its performances are quantitatively analyzed. The adopted regularization improves the stability of the retrieval providing smooth profiles without major degradation of the vertical resolution. In the analyzed measurements the retrieval procedure provides a vertical resolution that, in the troposphere and low stratosphere, is smaller than the vertical field of view of the instrument.

  8. OPR1000 RCP Flow Coastdown Analysis using SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong-Hyuk; Kim, Seyun [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    The Korean nuclear industry developed a thermal-hydraulic analysis code for the safety analysis of PWRs, named SPACE (Safety and Performance Analysis Code for Nuclear Power Plant). Current loss of flow transient analysis of OPR1000 uses the COAST code to calculate the transient RCS (Reactor Coolant System) flow. The COAST code calculates RCS loop flow using pump performance curves and RCP (Reactor Coolant Pump) inertia. In this paper, the SPACE code is used to reproduce the RCS flowrates calculated by the COAST code. A loss of flow transient is a transient initiated by a reduction of forced reactor coolant circulation. Typical loss of flow transients are complete loss of flow (CLOF) and locked rotor (LR). OPR1000 RCP flow coastdown analysis was performed with SPACE using a simplified nodalization. Complete loss of flow (4 RCP trip) was analyzed. The results show good agreement with those from the COAST code, which is the CE code for calculating RCS flow during loss of flow transients. Through this study, we confirmed that the SPACE code can be used instead of the COAST code for RCP flow coastdown analysis.

  9. Complete ML Decoding of the (73,45) PG Code

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Hjaltason, Johan

    2005-01-01

    Our recent proof of the completeness of decoding by list bit flipping is reviewed. The proof is based on an enumeration of all cosets of low weight in terms of their minimum weight and syndrome weight. By using a geometric description of the error patterns we characterize all remaining cosets....

  10. Low Completeness of Bacteraemia Registration in the Danish National Patient Registry

    DEFF Research Database (Denmark)

    Gradel, Kim Oren; Nielsen, Stig Lønberg; Pedersen, Court

    2015-01-01

    Bacteraemia is associated with significant morbidity and mortality and timely access to reliable information is essential for health care administrators. Therefore, we investigated the completeness of bacteraemia registration in the Danish National Patient Registry (DNPR) containing hospital...... according to the International Classification of Diseases, version 10, and surgical procedure codes were retrieved from the DNPR. The codes were categorized into seven groups, ranked a priori according to the likelihood of bacteraemia. Completeness was analysed by contingency tables, for all patients...

  11. Functional interrogation of non-coding DNA through CRISPR genome editing.

    Science.gov (United States)

    Canver, Matthew C; Bauer, Daniel E; Orkin, Stuart H

    2017-05-15

    Methodologies to interrogate non-coding regions have lagged behind coding regions despite comprising the vast majority of the genome. However, the rapid evolution of clustered regularly interspaced short palindromic repeats (CRISPR)-based genome editing has provided a multitude of novel techniques for laboratory investigation including significant contributions to the toolbox for studying non-coding DNA. CRISPR-mediated loss-of-function strategies rely on direct disruption of the underlying sequence or repression of transcription without modifying the targeted DNA sequence. CRISPR-mediated gain-of-function approaches similarly benefit from methods to alter the targeted sequence through integration of customized sequence into the genome as well as methods to activate transcription. Here we review CRISPR-based loss- and gain-of-function techniques for the interrogation of non-coding DNA. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Minimizing The Completion Time Of A Wireless Cooperative Network Using Network Coding

    DEFF Research Database (Denmark)

    Roetter, Daniel Enrique Lucani; Khamfroush, Hana; Barros, João

    2013-01-01

    We consider the performance of network coding for a wireless cooperative network in which a source wants to transmit M data packets to two receivers. We assume that receivers can share their received packets with each other or simply wait to receive the packets from the source. The problem of fin...

  13. Sandia National Laboratories analysis code data base

    Science.gov (United States)

    Peterson, C. W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  14. Sandia National Laboratories analysis code data base

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, C.W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The Laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code "ownership" and release status, and references describing the physical models and numerical implementation.

  15. Memory-efficient decoding of LDPC codes

    Science.gov (United States)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, we have simulated that an optimized 3-bit quantizer operates with 0.2dB implementation loss relative to a floating point decoder, and an optimized 4-bit quantizer operates less than 0.1dB quantization loss.
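
    As a toy illustration of non-uniform message quantization (the 3-bit levels below are arbitrary placeholders, not the mutual-information-optimized values of the study):

      import numpy as np

      def make_quantizer(levels):
          """Return a function mapping real LLR messages to the nearest level."""
          levels = np.asarray(levels)
          def q(llr):
              idx = np.argmin(np.abs(levels[None, :] - np.atleast_1d(llr)[:, None]), axis=1)
              return levels[idx]
          return q

      # a 3-bit (8-level), non-uniformly spaced symmetric quantizer
      levels = [-6.0, -3.0, -1.5, -0.5, 0.5, 1.5, 3.0, 6.0]
      quantize = make_quantizer(levels)
      print(quantize(np.array([0.2, -2.1, 7.3])))   # -> [ 0.5 -1.5  6. ]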

  16. Experimental validation of the HARMONIE code

    International Nuclear Information System (INIS)

    Bernard, A.; Dorsselaere, J.P. van

    1984-01-01

    An experimental program of deformation, in air, of different groups of subassemblies (7 to 41 subassemblies) was performed on a scale-1 mock-up in the SPX1 geometry, in order to achieve a first experimental validation of the code HARMONIE. The agreement between tests and calculations was satisfactory, qualitatively for all the groups and quantitatively for regular groups of at most 19 subassemblies. The differences arise mainly from friction between pads, and secondarily from the foot gaps. (author)

  17. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....

  18. NAGRADATA. Code key. Geology

    International Nuclear Information System (INIS)

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data is retrieved, the translation into plain language of stored coded information is done automatically by computer. Three keys each list the complete set of currently defined codes for the NAGRADATA system, namely codes with appropriate definitions, arranged: (1) according to subject matter (thematically), (2) with the codes listed alphabetically, and (3) with the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval, economies of storage memory requirements, and the standardisation of terminology. The nature of this thesaurian type 'key to codes' makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes to NAGRADATA must be considered to represent the current state of progress of a living system, and future editions will be issued in a loose-leaf ring-book system which can be updated by an organised (updating) service. (author)

  19. Regularization ambiguities in loop quantum gravity

    International Nuclear Information System (INIS)

    Perez, Alejandro

    2006-01-01

    One of the main achievements of loop quantum gravity is the consistent quantization of the analog of the Wheeler-DeWitt equation which is free of ultraviolet divergences. However, ambiguities associated to the intermediate regularization procedure lead to an apparently infinite set of possible theories. The absence of a UV problem--the existence of well-behaved regularization of the constraints--is intimately linked with the ambiguities arising in the quantum theory. Among these ambiguities is the one associated to the SU(2) unitary representation used in the diffeomorphism covariant 'point-splitting' regularization of the nonlinear functionals of the connection. This ambiguity is labeled by a half-integer m and, here, it is referred to as the m ambiguity. The aim of this paper is to investigate the important implications of this ambiguity. We first study 2+1 gravity (and more generally BF theory) quantized in the canonical formulation of loop quantum gravity. Only when the regularization of the quantum constraints is performed in terms of the fundamental representation of the gauge group does one obtain the usual topological quantum field theory as a result. In all other cases unphysical local degrees of freedom arise at the level of the regulated theory that conspire against the existence of the continuum limit. This shows that there is a clear-cut choice in the quantization of the constraints in 2+1 loop quantum gravity. We then analyze the effects of the ambiguity in 3+1 gravity exhibiting the existence of spurious solutions for higher representation quantizations of the Hamiltonian constraint. Although the analysis is not complete in 3+1 dimensions - due to the difficulties associated to the definition of the physical inner product - it provides evidence supporting the definitions of the quantum dynamics of loop quantum gravity in terms of the fundamental representation of the gauge group as the only consistent possibilities. If the gauge group is SO(3) we find

  20. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
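
    The regularization-parameter selection step can be sketched generically. The snippet below is an illustration, not the authors' lensing code: it evaluates the GCV function for a Tikhonov-regularized linear system A x ≈ b over a grid of regularization parameters and picks the minimizer; the matrix, noise level, and grid are made up for the example.

        import numpy as np

        def gcv_curve(A, b, lambdas):
            # GCV(lam) = ||A x_lam - b||^2 / (trace(I - influence matrix))^2, via the SVD
            m, _ = A.shape
            U, s, _ = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b                            # data projected on left singular vectors
            out_of_range = b @ b - beta @ beta        # residual component outside range(A)
            scores = []
            for lam in lambdas:
                f = s**2 / (s**2 + lam)               # Tikhonov filter factors
                resid2 = np.sum(((1.0 - f) * beta) ** 2) + out_of_range
                dof = m - np.sum(f)                   # effective residual degrees of freedom
                scores.append(resid2 / dof**2)
            return np.asarray(scores)

        rng = np.random.default_rng(0)
        A = rng.standard_normal((80, 40))
        x_true = np.zeros(40); x_true[:5] = 1.0
        b = A @ x_true + 0.1 * rng.standard_normal(80)
        lambdas = np.logspace(-4, 2, 50)
        print("GCV-selected lambda:", lambdas[np.argmin(gcv_curve(A, b, lambdas))])

    The UPRE function mentioned in the abstract would replace the GCV score with a risk estimate that additionally requires the noise variance.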

  1. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  2. FPGA-accelerated algorithm for the regular expression matching system

    Science.gov (United States)

    Russek, P.; Wiatr, K.

    2015-01-01

    This article describes an algorithm to support a regular expression matching system. The goal was to achieve a system with attractive performance and low energy consumption. The basic idea of the algorithm comes from the concept of the Bloom filter. It starts from the extraction of static sub-strings from the strings of the regular expressions. The algorithm is devised to gain from its decomposition into parts which are intended to be executed by custom hardware and the central processing unit (CPU). The pipelined custom processor architecture is proposed and a software algorithm is explained accordingly. The software part of the algorithm was coded in C and runs on a processor from the ARM family. The hardware architecture was described in VHDL and implemented in a field programmable gate array (FPGA). The performance results and required resources of the above experiments are given. An example target application for the presented solution is computer and network security systems. The idea was tested on nearly 100,000 body-based viruses from the ClamAV virus database. The solution is intended for the emerging technology of clusters of low-energy computing nodes.
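
    The pre-filtering idea can be illustrated with a small software sketch (not the paper's C/VHDL implementation): static sub-string fragments extracted from the signatures are inserted into a Bloom filter, and only payload windows that hit the filter need to be handed to full regular-expression matching. The fragment length, hash choice, and example signatures below are assumptions.

        import hashlib

        class BloomFilter:
            def __init__(self, n_bits=1 << 16, n_hashes=4):
                self.n_bits, self.n_hashes = n_bits, n_hashes
                self.bits = bytearray(n_bits // 8)

            def _positions(self, item):
                for i in range(self.n_hashes):
                    h = hashlib.blake2b(item, digest_size=8, salt=bytes([i])).digest()
                    yield int.from_bytes(h, "little") % self.n_bits

            def add(self, item):
                for p in self._positions(item):
                    self.bits[p // 8] |= 1 << (p % 8)

            def might_contain(self, item):
                return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

        FRAG = 8                                           # fragment length (assumed)
        signatures = [b"EICAR-STANDARD-ANTIVIRUS-TEST", b"HYPOTHETICAL-MALWARE-FRAGMENT"]
        bf = BloomFilter()
        for sig in signatures:
            bf.add(sig[:FRAG])                             # index a fixed-length fragment

        def needs_full_match(payload):
            # cheap pre-filter: only payloads with a hit go to full regex matching
            return any(bf.might_contain(payload[i:i + FRAG])
                       for i in range(len(payload) - FRAG + 1))

        print(needs_full_match(b"harmless payload"), needs_full_match(b"xxEICAR-STANDARDxx"))

    In the paper's architecture the filter lookup would sit in the FPGA pipeline, with the CPU performing the exact match only on pre-filtered candidates.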

  3. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step.

  4. LAPU2: a laser pulse propagation code with diffraction

    International Nuclear Information System (INIS)

    Goldstein, J.C.; Dickman, D.O.

    1978-03-01

    Complete descriptions of the mathematical models and numerical methods used in the code LAPU2 are presented. This code can be used to study the propagation with diffraction of a temporally finite pulse through a sequence of resonant media and simple optical components. The treatment assumes cylindrical symmetry and allows nonlinear refractive indices. An unlimited number of different media can be distributed along the propagation path of the pulse. A complete users' guide to input data is given, as well as a FORTRAN listing of the code.

  5. Physical Educators' Habitual Physical Activity and Self-Efficacy for Regular Exercise

    Science.gov (United States)

    Zhu, Xihe; Haegele, Justin A.; Davis, Summer

    2018-01-01

    The purpose of this study was to examine physical education teachers' habitual physical activity and self-efficacy for regular exercise. In-service physical education teachers (N = 168) voluntarily completed an online questionnaire that included items to collect demographic information (gender, race/ethnicity, years of teaching experience, and…

  6. Colloquial French the complete course for beginners

    CERN Document Server

    Demouy, Valérie

    2014-01-01

     COLLOQUIAL FRENCH is easy to use and completely up to date! Specially written by experienced teachers for self-study or class use, the course offers a step-by-step approach to written and spoken French. No prior knowledge of the language is required. What makes COLLOQUIAL FRENCH your best choice in personal language learning? Interactive - lots of exercises for regular practice. Clear - concise grammar notes. Practical - useful vocabulary and pronunciation guide. Complete - including answer key and reference section. Whether you're a business traveller, or about to take up a daring challenge in adventu

  7. Universal Fault-Tolerant Gates on Concatenated Stabilizer Codes

    Directory of Open Access Journals (Sweden)

    Theodore J. Yoder

    2016-09-01

    Full Text Available It is an oft-cited fact that no quantum code can support a set of fault-tolerant logical gates that is both universal and transversal. This no-go theorem is generally responsible for the interest in alternative universality constructions including magic state distillation. Widely overlooked, however, is the possibility of nontransversal, yet still fault-tolerant, gates that work directly on small quantum codes. Here, we demonstrate precisely the existence of such gates. In particular, we show how the limits of nontransversality can be overcome by performing rounds of intermediate error correction to create logical gates on stabilizer codes that use no ancillas other than those required for syndrome measurement. Moreover, the logical gates we construct, the most prominent examples being Toffoli and controlled-controlled-Z, often complete universal gate sets on their codes. We detail such universal constructions for the smallest quantum codes, the 5-qubit and 7-qubit codes, and then proceed to generalize the approach. One remarkable result of this generalization is that any nondegenerate stabilizer code with a complete set of fault-tolerant single-qubit Clifford gates has a universal set of fault-tolerant gates. Another is the interaction of logical qubits across different stabilizer codes, which, for instance, implies a broadly applicable method of code switching.

  8. RFQ simulation code

    International Nuclear Information System (INIS)

    Lysenko, W.P.

    1984-04-01

    We have developed the RFQLIB simulation system to provide a means to systematically generate the new versions of radio-frequency quadrupole (RFQ) linac simulation codes that are required by the constantly changing needs of a research environment. This integrated system simplifies keeping track of the various versions of the simulation code and makes it practical to maintain complete and up-to-date documentation. In this scheme, there is a certain standard version of the simulation code that forms a library upon which new versions are built. To generate a new version of the simulation code, the routines to be modified or added are appended to a standard command file, which contains the commands to compile the new routines and link them to the routines in the library. The library itself is rarely changed. Whenever the library is modified, however, this modification is seen by all versions of the simulation code, which actually exist as different versions of the command file. All code is written according to the rules of structured programming. Modularity is enforced by not using COMMON statements, simplifying the relation of the data flow to a hierarchy diagram. Simulation results are similar to those of the PARMTEQ code, as expected, because of the similar physical model. Different capabilities, such as those for generating beams matched in detail to the structure, are available in the new code for help in testing new ideas in designing RFQ linacs

  9. Discriminative Elastic-Net Regularized Linear Regression.

    Science.gov (United States)

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations to make the final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
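
    For orientation, plain elastic-net regression onto zero-one (one-hot) targets, the conventional setup that the ENLR models relax and improve on, can be sketched with scikit-learn. The data, alpha, and l1_ratio below are illustrative, and this is not the authors' MATLAB code.

        import numpy as np
        from sklearn.linear_model import ElasticNet

        rng = np.random.default_rng(0)
        X = rng.standard_normal((150, 20))            # toy image features
        y = X[:, :3].argmax(axis=1)                   # 3 synthetic classes
        Y = np.eye(3)[y]                              # zero-one (one-hot) target matrix

        # One elastic-net regressor per target column; alpha and l1_ratio are illustrative.
        models = [ElasticNet(alpha=0.05, l1_ratio=0.5, max_iter=5000).fit(X, Y[:, k])
                  for k in range(Y.shape[1])]
        scores = np.column_stack([m.predict(X) for m in models])
        print("training accuracy:", (scores.argmax(axis=1) == y).mean())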

  10. Distribution of absorbed dose in human eye simulated by SRNA-2KG computer code

    International Nuclear Information System (INIS)

    Ilic, R.; Pesic, M.; Pavlovic, R.; Mostacci, D.

    2003-01-01

    The rapidly increasing performance of personal computers and the development of codes for proton transport based on Monte Carlo methods will soon allow the introduction of computer-planned proton therapy as a normal activity in regular hospital procedures. A description of the SRNA code used for such applications and results of calculated distributions of proton-absorbed dose in the human eye are given in this paper. (author)

  11. Sparsity regularization for parameter identification problems

    International Nuclear Information System (INIS)

    Jin, Bangti; Maass, Peter

    2012-01-01

    The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓp-penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓp sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
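
    The basic iterated soft shrinkage (ISTA) step mentioned above can be written down in a few lines for the simplest linear, ℓ1-penalized case. The sketch below is generic and illustrative (the operator, penalty weight, and iteration count are assumptions); it does not cover the review's nonlinear or semi-smooth Newton variants.

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ista(A, b, alpha, n_iter=500):
            # minimize 0.5*||A x - b||^2 + alpha*||x||_1 by iterated soft shrinkage
            L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b)
                x = soft_threshold(x - grad / L, alpha / L)
            return x

        rng = np.random.default_rng(1)
        A = rng.standard_normal((60, 120))
        x_true = np.zeros(120)
        x_true[rng.choice(120, 6, replace=False)] = rng.standard_normal(6)
        b = A @ x_true + 0.01 * rng.standard_normal(60)
        x_hat = ista(A, b, alpha=0.05)
        print("entries above 1e-3:", int(np.sum(np.abs(x_hat) > 1e-3)))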

  12. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones...

  13. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    International Nuclear Information System (INIS)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  14. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    Energy Technology Data Exchange (ETDEWEB)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States); Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  15. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.

  16. Effective field theory dimensional regularization

    Science.gov (United States)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.

  17. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

    Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂ N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  18. Verification of reactor safety codes

    International Nuclear Information System (INIS)

    Murley, T.E.

    1978-01-01

    The safety evaluation of nuclear power plants requires the investigation of a wide range of potential accidents that could be postulated to occur. Many of these accidents deal with phenomena that are outside the range of normal engineering experience. Because of the expense and difficulty of full-scale tests covering the complete range of accident conditions, it is necessary to rely on complex computer codes to assess these accidents. The central role that computer codes play in safety analyses requires that the codes be verified, or tested, by comparing the code predictions with a wide range of experimental data chosen to span the physical phenomena expected under potential accident conditions. This paper discusses the plans of the Nuclear Regulatory Commission for verifying the reactor safety codes being developed by NRC to assess the safety of light water reactors and fast breeder reactors. (author)

  19. 75 FR 76006 - Regular Meeting

    Science.gov (United States)

    2010-12-07

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...

  20. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...

  1. ESCADRE and ICARE code systems

    International Nuclear Information System (INIS)

    Reocreux, M.; Gauvain, J.

    1992-01-01

    The French severe accident code development program is following two parallel approaches: the first one deals with 'integral codes' which are designed to give immediate engineering answers, while the second one follows a more mechanistic approach in order to have the capability for detailed analysis of experiments, to get a better understanding of the scaling problem and to reach better confidence in plant calculations. In the first approach a complete system has been developed and is being used for practical cases: this is the ESCADRE system. In the second approach, a set of codes dealing first with the primary circuit is being developed: a mechanistic core degradation code, ICARE, has been issued and is being coupled with the advanced thermalhydraulic code CATHARE. Fission product codes have also been coupled to CATHARE. The 'integral' ESCADRE system and the mechanistic ICARE and associated codes are described. Their main characteristics are reviewed and the status of their development and assessment given. Future studies are finally discussed. 36 refs, 4 figs, 1 tab

  2. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)

  3. Network Coding Protocols for Data Gathering Applications

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    Tunable sparse network coding (TSNC) with various sparsity levels of the coded packets and different feedback mechanisms is analysed in the context of data gathering applications in multi-hop networks. The goal is to minimize the completion time, i.e., the total time required to collect all data ...

  4. An Efficient Construction of Self-Dual Codes

    OpenAIRE

    Lee, Yoonjin; Kim, Jon-Lark

    2012-01-01

    We complete the building-up construction for self-dual codes by resolving the open cases over GF(q) with q ≡ 3 (mod 4), and over Z_{p^m} and Galois rings GR(p^m, r) with an odd prime p satisfying p ≡ 3 (mod 4) with r odd. We also extend the building-up construction for self-dual codes to finite chain rings. Our building-up construction produces many new interesting self-dual codes. In particular, we construct 945 new extremal self-dual ternary [32,16,9] codes, each ...

  5. Thermodynamic Product Relations for Generalized Regular Black Hole

    International Nuclear Information System (INIS)

    Pradhan, Parthapratim

    2016-01-01

    We derive thermodynamic product relations for four-parametric regular black hole (BH) solutions of the Einstein equations coupled with a nonlinear electrodynamics source. The four parameters can be described by the mass (m), charge (q), dipole moment (α), and quadrupole moment (β), respectively. We study its complete thermodynamics. We compute different thermodynamic products, that is, area product, BH temperature product, specific heat product, and Komar energy product, respectively. Furthermore, we show some complicated function of horizon areas that is indeed mass-independent and could turn out to be universal.

  6. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  7. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  8. Geometric continuum regularization of quantum field theory

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1989-01-01

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs

  9. A SciCode web site: building bridges between owners and users

    Energy Technology Data Exchange (ETDEWEB)

    Gaver, C. [Atomic Energy of Canada Ltd., Mississauga, Ontario (Canada)

    2000-07-01

    Web technology is a tool that is gaining in popularity. Properly used, it is a powerful tool that has tremendous potential for providing better communication. It can also be effective as a training tool, an information-sharing tool, and as a means of simplifying work load, and facilitating compliance with Company procedures. The issue is one of communication. The challenge facing many large or geographically-distributed companies is how to communicate information to their staff and to their customers. Procedures overseeing quality-assurance programs and commitment to ensuring the quality of products need to be communicated to customers. Equally important is customer feedback. This information from users becomes the kernel for future product development. The issue is even more important when speaking of scientific analysis computer programs (SciCodes). Regular ongoing communication between Primary Holders and End Users is essential in the development and use of SciCodes. Without this communication, quality assurance is at risk. Quality assurance processes are an integral part in developing any SciCode. End Users also have a role to play. Primary Holders keep End Users informed of improvements or new releases. End Users must ensure they act on this information. Equally important, End Users must communicate problems or suggestions to the Primary Holder to remedy or incorporate in new releases. In other words, quality assurance processes become most effective when both Primary Holder and End Users are involved. This requires communication. Web technology offers AECL a means of providing regular, ongoing communication between its scientific-code (SciCode) Primary Holders-Owner Branches and the End Users of these codes within and outside the Company. Using the experience we have gained by developing the Y2K SciCode Web sites, setting up online documentation systems, and incorporating lessons learned from the Y2K project we have developed a model that is geared to

  10. A SciCode web site: building bridges between owners and users

    International Nuclear Information System (INIS)

    Gaver, C.

    2000-01-01

    Web technology is a tool that is gaining in popularity. Properly used, it is a powerful tool that has tremendous potential for providing better communication. It can also be effective as a training tool, an information-sharing tool, and as a means of simplifying work load, and facilitating compliance with Company procedures. The issue is one of communication. The challenge facing many large or geographically-distributed companies is how to communicate information to their staff and to their customers. Procedures overseeing quality-assurance programs and commitment to ensuring the quality of products need to be communicated to customers. Equally important is customer feedback. This information from users becomes the kernel for future product development. The issue is even more important when speaking of scientific analysis computer programs (SciCodes). Regular ongoing communication between Primary Holders and End Users is essential in the development and use of SciCodes. Without this communication, quality assurance is at risk. Quality assurance processes are an integral part in developing any SciCode. End Users also have a role to play. Primary Holders keep End Users informed of improvements or new releases. End Users must ensure they act on this information. Equally important, End Users must communicate problems or suggestions to the Primary Holder to remedy or incorporate in new releases. In other words, quality assurance processes become most effective when both Primary Holder and End Users are involved. This requires communication. Web technology offers AECL a means of providing regular, ongoing communication between its scientific-code (SciCode) Primary Holders-Owner Branches and the End Users of these codes within and outside the Company. Using the experience we have gained by developing the Y2K SciCode Web sites, setting up online documentation systems, and incorporating lessons learned from the Y2K project we have developed a model that is geared to

  11. MARS code manual volume I: code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on the dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for the light water was unified by replacing the EOS of COBRA-TF by that of the RELAP5. This theory manual provides a complete list of overall information of code structure and major functions of MARS including code architecture, hydrodynamic model, heat structure, trip/control system and point reactor kinetics model. Therefore, this report would be very useful for the code users. The overall structure of the manual is modeled on the structure of the RELAP5 and as such the layout of the manual is very similar to that of the RELAP. This similitude to RELAP5 input is intentional as this input scheme will allow minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  12. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
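
    One common way to parameterize such a non-convex penalty is the minimax-concave (MC) penalty, whose proximal operator is the firm-thresholding rule; whether this is the exact parameterization used in the thesis is an assumption here. The sketch below contrasts it with soft thresholding to show how the non-convex choice leaves large coefficients unbiased. In the convexity-preserving setting the non-convexity parameter must additionally be tied to the data-fidelity term, which this toy example does not enforce.

        import numpy as np

        def soft(y, lam):
            # proximal operator of the l1 penalty: shrinks every coefficient by lam
            return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

        def firm(y, lam, gamma):
            # proximal operator of the minimax-concave penalty (gamma > 1):
            # zero below lam, identity above gamma*lam, linear in between
            a = np.abs(y)
            return np.where(a <= lam, 0.0,
                   np.where(a >= gamma * lam, y,
                            np.sign(y) * gamma * (a - lam) / (gamma - 1.0)))

        y = np.array([-4.0, -1.2, -0.3, 0.3, 1.2, 4.0])
        print("soft:", soft(y, lam=1.0))
        print("firm:", firm(y, lam=1.0, gamma=3.0))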

  13. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  14. The Complete Sequence of a Human Parainfluenzavirus 4 Genome

    Science.gov (United States)

    Yea, Carmen; Cheung, Rose; Collins, Carol; Adachi, Dena; Nishikawa, John; Tellier, Raymond

    2009-01-01

    Although the human parainfluenza virus 4 (HPIV4) has been known for a long time, its genome, alone among the human paramyxoviruses, has not been completely sequenced to date. In this study we obtained the first complete genomic sequence of HPIV4 from a clinical isolate named SKPIV4 obtained at the Hospital for Sick Children in Toronto (Ontario, Canada). The coding regions for the N, P/V, M, F and HN proteins show very high identities (95% to 97%) with previously available partial sequences for HPIV4B. The sequence for the L protein and the non-coding regions represent new information. A surprising feature of the genome is its length, more than 17 kb, making it the longest genome within the genus Rubulavirus, although the length is well within the known range of 15 kb to 19 kb for the subfamily Paramyxovirinae. The availability of a complete genomic sequence will facilitate investigations on a respiratory virus that is still not completely characterized. PMID:21994536

  15. The Complete Sequence of a Human Parainfluenzavirus 4 Genome

    Directory of Open Access Journals (Sweden)

    Carmen Yea

    2009-06-01

    Full Text Available Although the human parainfluenza virus 4 (HPIV4) has been known for a long time, its genome, alone among the human paramyxoviruses, has not been completely sequenced to date. In this study we obtained the first complete genomic sequence of HPIV4 from a clinical isolate named SKPIV4 obtained at the Hospital for Sick Children in Toronto (Ontario, Canada). The coding regions for the N, P/V, M, F and HN proteins show very high identities (95% to 97%) with previously available partial sequences for HPIV4B. The sequence for the L protein and the non-coding regions represent new information. A surprising feature of the genome is its length, more than 17 kb, making it the longest genome within the genus Rubulavirus, although the length is well within the known range of 15 kb to 19 kb for the subfamily Paramyxovirinae. The availability of a complete genomic sequence will facilitate investigations on a respiratory virus that is still not completely characterized.

  16. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  17. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  18. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
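
    A simplified, generic version of the correntropy idea can be sketched as follows. This is an illustration only: a linear predictor, a Gaussian-kernel objective with an ℓ2 penalty, and plain gradient ascent; the paper's alternating optimization and exact formulation are not reproduced, and all parameter values are assumptions.

        import numpy as np

        def mcc_fit(X, y, sigma=1.0, lam=0.1, lr=0.1, n_iter=2000):
            # start from least squares, then refine by gradient ascent on
            # sum_i exp(-(y_i - x_i.w)^2 / (2 sigma^2)) - lam * ||w||^2
            w = np.linalg.lstsq(X, y, rcond=None)[0]
            for _ in range(n_iter):
                r = y - X @ w                              # residuals
                k = np.exp(-r**2 / (2 * sigma**2))         # per-sample kernel weights
                grad = X.T @ (k * r) / sigma**2 - 2 * lam * w
                w += lr * grad / len(y)
            return w

        rng = np.random.default_rng(2)
        X = rng.standard_normal((200, 5))
        w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
        y = X @ w_true + 0.1 * rng.standard_normal(200)
        y[:10] += 8.0                                      # a few grossly mislabeled samples
        print("estimated w:", np.round(mcc_fit(X, y), 2))

    The kernel weights k automatically downweight the mislabeled samples, which is the robustness property the abstract attributes to the correntropy criterion.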

  19. Development of Monte Carlo input code for proton, alpha and heavy ion microdosimetric track structure simulations

    International Nuclear Information System (INIS)

    Douglass, M.; Bezak, E.

    2010-01-01

    Full text: Radiobiology science is important for cancer treatment as it improves our understanding of radiation-induced cell death. Monte Carlo simulations play a crucial role in developing improved knowledge of cellular processes. By modelling the cell response to radiation damage and verifying with experimental data, understanding of cell death through direct radiation hits and bystander effects can be obtained. A Monte Carlo input code was developed using 'Geant4' to simulate cellular level radiation interactions. A physics list which enables physically accurate interactions of heavy ions at energies below 100 eV was implemented. A simple biological cell model was also implemented. Each cell consists of three concentric spheres representing the nucleus, cytoplasm and the membrane. This will enable all critical cell death channels to be investigated (i.e. membrane damage, nucleus/DNA). The current simulation has the ability to predict the positions of ionization events within the individual cell components on a 1 micron scale. We have developed a Geant4 simulation for investigation of radiation damage to cells on a sub-cellular scale (∼1 micron). This code currently allows the positions of the ionisation events within the individual components of the cell to be predicted, enabling a more complete picture of cell death to be developed. The next stage will include expansion of the code to utilise a non-regular cell lattice. (author)

  20. The health benefits following regular ongoing exercise lifestyle in independent community-dwelling older Taiwanese adults.

    Science.gov (United States)

    Wang, Ching-Yi; Yeh, Chih-Jung; Wang, Chia-Wei; Wang, Chun-Feng; Lin, Yen-Ling

    2011-03-01

    To examine the effect of a regular ongoing exercise lifestyle on mental and physical health in a group of independent community-dwelling Taiwanese older adults over a 2-year period. 197 older adults (mean age 72.5 years; 106 men and 91 women) who were independent in walking, instrumental and basic activities of daily living completed the baseline and a 2-year follow-up assessment. Older adults regularly performing exercises during the 2-year study period were grouped into the regular exercise group; the others formed the irregular exercise group. Baseline and follow-up assessments included a face-to-face interview and a battery of performance tests. The regular exercise group showed significantly less depression (P = 0.03) and tended to regress less on the performance tests (P = 0.025-0.410) across 2 years compared to the irregular exercise group. Regular exercise is important for maintaining or even improving mental and functional health, even for independent community-dwelling older adults. © 2010 The Authors. Australasian Journal on Ageing © 2010 ACOTA.

  1. Multilinear Graph Embedding: Representation and Regularization for Images.

    Science.gov (United States)

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.
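
    For reference, the HOSVD baseline that the abstract contrasts MGE with can be sketched in a few lines of NumPy. The shapes, ranks, and the toy tensor are illustrative assumptions; the MGE/MKGE methods themselves are not shown.

        import numpy as np

        def unfold(T, mode):
            # mode-n unfolding: move the given axis first and flatten the rest
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def hosvd(T, ranks):
            # rank-truncated higher-order SVD: factor matrices from each unfolding,
            # then a core tensor obtained by projecting T onto the factors
            factors = []
            for mode, r in enumerate(ranks):
                U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
                factors.append(U[:, :r])
            core = T
            for mode, U in enumerate(factors):
                core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
            return core, factors

        rng = np.random.default_rng(4)
        T = rng.standard_normal((10, 5, 64))          # toy "people x illumination x pixels" tensor
        core, factors = hosvd(T, ranks=(4, 3, 16))

        T_hat = core                                  # reconstruct from the truncated core
        for mode, U in enumerate(factors):
            T_hat = np.moveaxis(np.tensordot(U, np.moveaxis(T_hat, mode, 0), axes=1), 0, mode)
        print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))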

  2. Coding training for medical students: How good is diagnoses coding with ICD-10 by novices?

    Directory of Open Access Journals (Sweden)

    Stausberg, Jürgen

    2005-04-01

    Full Text Available Teaching of knowledge and competence in documentation and coding is an essential part of medical education. Therefore, coding training had been placed within the course of epidemiology, medical biometry, and medical informatics. From this, we can draw conclusions about the quality of coding by novices. One hundred and eighteen students coded diagnoses from 15 nephrological cases in homework. In addition to interrater reliability, validity was calculated by comparison with a reference coding. On the level of terminal codes, 59.3% of the students' results were correct. The completeness was calculated as 58.0%. The results on the chapter level increased up to 91.5% and 87.7% respectively. For the calculation of reliability a new, simple measure was developed that leads to values of 0.46 on the level of terminal codes and 0.87 on the chapter level for interrater reliability. The figures of concordance with the reference coding are quite similar. In contrary, routine data show considerably lower results with 0.34 and 0.63 respectively. Interrater reliability and validity of coding by novices is as good as coding by experts. The missing advantage of experts could be explained by the workload of documentation and a negative attitude to coding on the one hand. On the other hand, coding in a DRG-system is handicapped by a large number of detailed coding rules, which do not end in uniform results but rather lead to wrong and random codes. Anyway, students left the course well prepared for coding.

  3. Performance studies of the parallel VIM code

    International Nuclear Information System (INIS)

    Shi, B.; Blomquist, R.N.

    1996-01-01

    In this paper, the authors evaluate the performance of the parallel version of the VIM Monte Carlo code on the IBM SPx at the High Performance Computing Research Facility at ANL. Three test problems with contrasting computational characteristics were used to assess effects on performance. A statistical method for estimating the inefficiencies due to load imbalance and communication is also introduced. VIM is a large-scale continuous-energy Monte Carlo radiation transport program and was parallelized using history partitioning, the master/worker approach, and the p4 message passing library. Dynamic load balancing is accomplished when the master processor assigns chunks of histories to workers that have completed a previously assigned task, accommodating variations in the lengths of histories, processor speeds, and worker loads. At the end of each batch (generation), the fission sites and tallies are sent from each worker to the master process, contributing to the parallel inefficiency. All communications are between master and workers, and are serial. The SPx is a scalable 128-node parallel supercomputer with high-performance Omega switches of 63 microsec latency and 35 MBytes/sec bandwidth. For uniform and reproducible performance, they used only the 120 identical regular processors (IBM RS/6000) and excluded the remaining eight planet nodes, which may be loaded by others' jobs.
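
    The dynamic load-balancing pattern described here, with idle workers pulling the next chunk of histories, can be illustrated with a toy Python sketch. The chunk sizes, the fake "history" loop, and the process count are assumptions; this is not the VIM/p4 implementation.

        import multiprocessing as mp
        import random

        def simulate_chunk(args):
            # stand-in for a chunk of particle histories with variable lengths
            seed, n_histories = args
            rng = random.Random(seed)
            tally = 0.0
            for _ in range(n_histories):
                while rng.random() > 0.3:       # geometric "history length"
                    tally += 1.0
            return tally

        if __name__ == "__main__":
            chunks = [(seed, 1000) for seed in range(64)]     # 64 chunks of 1000 histories
            with mp.Pool(processes=4) as pool:
                # imap_unordered lets an idle worker grab the next chunk immediately,
                # which is the dynamic load balancing described above
                total = sum(pool.imap_unordered(simulate_chunk, chunks))
            print("aggregate tally:", total)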

  4. Survey of nuclear fuel-cycle codes

    International Nuclear Information System (INIS)

    Thomas, C.R.; de Saussure, G.; Marable, J.H.

    1981-04-01

    A two-month survey of nuclear fuel-cycle models was undertaken. This report presents the information forthcoming from the survey. Of the nearly thirty codes reviewed in the survey, fifteen of these codes have been identified as potentially useful in fulfilling the tasks of the Nuclear Energy Analysis Division (NEAD) as defined in their FY 1981-1982 Program Plan. Six of the fifteen codes are given individual reviews. The individual reviews address such items as the funding agency, the author and organization, the date of completion of the code, adequacy of documentation, computer requirements, history of use, variables that are input and forecast, type of reactors considered, part of fuel cycle modeled and scope of the code (international or domestic, long-term or short-term, regional or national). The report recommends that the Model Evaluation Team perform an evaluation of the EUREKA uranium mining and milling code

  5. HYDRASTAR - a code for stochastic simulation of groundwater flow

    International Nuclear Information System (INIS)

    Norman, S.

    1992-05-01

    The computer code HYDRASTAR was developed as a tool for groundwater flow and transport simulations in the SKB 91 safety analysis project. Its conceptual ideas can be traced back to a report by Shlomo Neuman in 1988, see the reference section. The main idea of the code is the treatment of the rock as a stochastic continuum which separates it from the deterministic methods previously employed by SKB and also from the discrete fracture models. The current report is a comprehensive description of HYDRASTAR including such topics as regularization or upscaling of a hydraulic conductivity field, unconditional and conditional simulation of stochastic processes, numerical solvers for the hydrology and streamline equations and finally some proposals for future developments

  6. Regularities of Multifractal Measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  7. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...

  8. TRISUL- a code for Thorium Reactor Investigations with Segregated Uranium Loading

    International Nuclear Information System (INIS)

    Jagannathan, V.

    2000-09-01

    A code called TRISUL has been developed for fuel cycle studies involving large-scale utilization of thorium. It has been specially developed for the design studies of a thorium breeder reactor (ATBR) core. In this core, a high rate of breeding of 233U is achieved by placing the thoria rods in the ambience of high thermal neutron flux, generated by a combination of enriched uranium or an equivalent seed material and D2O moderator. The core consists of a number of such seed and blanket type fuel assemblies arranged in a regular hexagonal lattice array surrounded by D2O reflector on all sides. At least one batch size of pure thoria clusters without the seed fuel rods is considered to be loaded uniformly in the same core at twice the assembly lattice pitch. TRISUL solves the few-group diffusion theory equations by the center-mesh finite difference method. Regular hexagonal or triangular right-prismatic meshes are considered. Since the ATBR core uses boiling light water as coolant, a thermal hydraulic model is incorporated in the TRISUL code to calculate the void or steam fraction as a function of core height in each fuel assembly. The homogenized two-group lattice parameters have been generated by the PHANTOM code system for the two types of fuel clusters stated above.
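
    The sketch below is not TRISUL; it only illustrates the kind of mesh-centred finite difference diffusion calculation mentioned above, in the simplest possible setting: one energy group, a 1-D slab, made-up cross sections, and power iteration for k-eff.

      import numpy as np

      n, h = 50, 1.0                           # number of mesh cells and cell width (cm)
      D, sig_a, nu_sig_f = 1.2, 0.012, 0.014   # diffusion coefficient and cross sections (assumed)

      # mesh-centred finite differences: leakage couples each cell to its neighbours,
      # with an approximate zero-flux condition at the slab boundaries
      A = np.zeros((n, n))
      for i in range(n):
          A[i, i] = 2 * D / h ** 2 + sig_a
          if i > 0:
              A[i, i - 1] = -D / h ** 2
          if i < n - 1:
              A[i, i + 1] = -D / h ** 2

      phi, k = np.ones(n), 1.0
      for _ in range(200):                     # power iteration on the fission source
          phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
          k *= phi_new.sum() / phi.sum()       # update k-eff from the source ratio
          phi = phi_new
      print("k-eff ~", round(k, 4))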

  9. Interrupting Prolonged Sitting with Regular Activity Breaks does not Acutely Influence Appetite: A Randomised Controlled Trial

    OpenAIRE

    Evelyn M. Mete; Tracy L. Perry; Jillian J. Haszard; Ashleigh R. Homer; Stephen P. Fenemor; Nancy J. Rehrer; C. Murray Skeaff; Meredith C. Peddie

    2018-01-01

    Regular activity breaks increase energy expenditure; however, this may promote compensatory eating behaviour. The present study compared the effects of regular activity breaks and prolonged sitting on appetite. In a randomised, cross-over trial, 36 healthy adults (BMI (Body Mass Index) 23.9 kg/m2 (S.D. = 3.9)) completed four, two-day interventions: two with prolonged sitting (SIT), and two with sitting and 2 min of walking every 30 min (RAB). Standardized meals were provided throughout the in...

  10. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
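
    A minimal sketch of one way to enforce a condition-number bound (clipping the sample eigenvalues into an interval) is shown below. The choice of the lower clipping level here is an assumption for illustration; the paper derives the truncation level from the likelihood itself.

      import numpy as np

      def cond_regularized_cov(X, kappa_max=30.0):
          """Clip the eigenvalues of the sample covariance so that its condition
          number does not exceed kappa_max (illustrative choice of clipping level)."""
          S = np.cov(X, rowvar=False)
          vals, vecs = np.linalg.eigh(S)
          tau = vals.max() / kappa_max          # assumed: keep the top eigenvalue fixed
          clipped = np.clip(vals, tau, None)
          return (vecs * clipped) @ vecs.T

      X = np.random.default_rng(1).normal(size=(30, 50))   # "large p, small n" sample
      vals = np.linalg.eigvalsh(cond_regularized_cov(X))
      print("condition number:", vals.max() / vals.min())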

  11. Sensory Coding by Cerebellar Mossy Fibres through Inhibition-Driven Phase Resetting and Synchronisation

    Science.gov (United States)

    Holtzman, Tahl; Jörntell, Henrik

    2011-01-01

    Temporal coding of spike-times using oscillatory mechanisms allied to spike-time dependent plasticity could represent a powerful mechanism for neuronal communication. However, it is unclear how temporal coding is constructed at the single neuronal level. Here we investigate a novel class of highly regular, metronome-like neurones in the rat brainstem which form a major source of cerebellar afferents. Stimulation of sensory inputs evoked brief periods of inhibition that interrupted the regular firing of these cells leading to phase-shifted spike-time advancements and delays. Alongside phase-shifting, metronome cells also behaved as band-pass filters during rhythmic sensory stimulation, with maximal spike-stimulus synchronisation at frequencies close to the idiosyncratic firing frequency of each neurone. Phase-shifting and band-pass filtering serve to temporally align ensembles of metronome cells, leading to sustained volleys of near-coincident spike-times, thereby transmitting synchronised sensory information to downstream targets in the cerebellar cortex. PMID:22046297

  12. Sensory coding by cerebellar mossy fibres through inhibition-driven phase resetting and synchronisation.

    Directory of Open Access Journals (Sweden)

    Tahl Holtzman

    Full Text Available Temporal coding of spike-times using oscillatory mechanisms allied to spike-time dependent plasticity could represent a powerful mechanism for neuronal communication. However, it is unclear how temporal coding is constructed at the single neuronal level. Here we investigate a novel class of highly regular, metronome-like neurones in the rat brainstem which form a major source of cerebellar afferents. Stimulation of sensory inputs evoked brief periods of inhibition that interrupted the regular firing of these cells leading to phase-shifted spike-time advancements and delays. Alongside phase-shifting, metronome cells also behaved as band-pass filters during rhythmic sensory stimulation, with maximal spike-stimulus synchronisation at frequencies close to the idiosyncratic firing frequency of each neurone. Phase-shifting and band-pass filtering serve to temporally align ensembles of metronome cells, leading to sustained volleys of near-coincident spike-times, thereby transmitting synchronised sensory information to downstream targets in the cerebellar cortex.

  13. The complete mitochondrial genome of the pirarucu (Arapaima gigas, Arapaimidae, Osteoglossiformes)

    OpenAIRE

    Hrbek,Tomas; Farias,Izeni Pires

    2008-01-01

    We sequenced the complete mitochondrial genome of the pirarucu, Arapaima gigas, the largest fish of the Amazon basin, and economically one of the most important species of the region. The total length of the Arapaima gigas mitochondrial genome is 16,433 bp. The mitochondrial genome contains 13 protein-coding genes, two rRNA genes and 22 tRNA genes. Twelve of the thirteen protein-coding genes are coded on the heavy strand, while nad6 is coded on the light strand. The Arapaima gene order and co...

  14. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Smith, Ralph [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Williams, Brian [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Figueroa, Victor [Sandia National Laboratories, Albuquerque, NM 87185 (United States)

    2016-11-01

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
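
    A toy sketch of the sequential design idea is given below, under strong simplifying assumptions that are not from the paper: the low-fidelity model is linear in its parameters, the high-fidelity data are Gaussian, and the information gain is measured as the drop in the log-determinant of the posterior covariance.

      import numpy as np

      def features(d):                      # assumed low-fidelity model: linear in theta
          return np.array([1.0, d, d ** 2])

      def posterior_cov(designs, prior_cov, noise_var=0.01):
          A = np.array([features(d) for d in designs]).reshape(len(designs), 3)
          prec = np.linalg.inv(prior_cov) + A.T @ A / noise_var
          return np.linalg.inv(prec)

      candidates = np.linspace(0.0, 1.0, 21)
      chosen, prior = [], np.eye(3)
      for _ in range(4):                    # pick design conditions one at a time
          gains = [np.linalg.slogdet(posterior_cov(chosen, prior))[1]
                   - np.linalg.slogdet(posterior_cov(chosen + [d], prior))[1]
                   for d in candidates]
          chosen.append(float(candidates[int(np.argmax(gains))]))
      print("selected high-fidelity design conditions:", chosen)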

  15. The Aster code

    International Nuclear Information System (INIS)

    Delbecq, J.M.

    1999-01-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  16. Binary codes with impulse autocorrelation functions for dynamic experiments

    International Nuclear Information System (INIS)

    Corran, E.R.; Cummins, J.D.

    1962-09-01

    A series of binary codes exists whose autocorrelation functions approximate an impulse function. Signals whose behaviour in time can be expressed by such codes have spectra which are 'whiter' over a limited bandwidth and for a finite time than signals from a white noise generator. These codes are used to determine system dynamic responses using the correlation technique. Programmes have been written to compute codes of arbitrary length and to compute 'cyclic' autocorrelation and cross-correlation functions. Complete listings of these programmes are given, and a code of 1019 bits is presented. (author)
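
    As a small illustration (not one of the report's 1019-bit codes), the sketch below generates a 31-bit maximal-length sequence from a 5-stage linear feedback shift register and checks that its cyclic autocorrelation is close to an impulse: N at zero lag and -1 at every other lag once the bits are mapped to ±1.

      def lfsr_msequence(nbits=5):
          """Maximal-length sequence from a 5-stage LFSR (feedback x^5 + x^3 + 1)."""
          state = [1] * nbits
          out = []
          for _ in range(2 ** nbits - 1):
              out.append(state[-1])
              feedback = state[4] ^ state[1]          # taps matching x^5 + x^3 + 1
              state = [feedback] + state[:-1]
          return out

      seq = [1 if b else -1 for b in lfsr_msequence()]
      N = len(seq)
      corr = [sum(seq[i] * seq[(i + lag) % N] for i in range(N)) for lag in range(N)]
      print(corr[:6])   # expected: [31, -1, -1, -1, -1, -1]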

  17. Simplified modeling and code usage in the PASC-3 code system by the introduction of a programming environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.L.; Slobben, J.

    1991-06-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219 group cross section library derived from JEF-1 of which some benchmark results are presented. By the addition of the UNIPASC work environment the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  18. Optimal interference code based on machine learning

    Science.gov (United States)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, taking the m-sequence as a case study. Drawing on coding theory, we introduce the jamming methods and simulate the interference effect and its probability model in MATLAB. Based on the length of decoding time the adversary needs, we derive the optimal formula and optimal coefficients using machine learning, and thus obtain a new optimal interference code. In the recognition phase, the study judges the effect of interference by simulating the time taken over the decoding period of the laser seeker. Laser active deception jamming is then used to simulate the interference process in the tracking phase; this is the jamming method chosen in this study. To improve the performance of the interference, the model is simulated in MATLAB. We determine the least number of pulse intervals that must be received in order to identify the precise interval pattern of the laser pointer for m-sequence encoding. To find the shortest interval, we choose the greatest common divisor method. Then, combining this with the coding regularity found earlier, we restore the pulse intervals of the pseudo-random code that has already been received. Finally, we can control the time period of the laser interference, obtain the optimal interference code, and increase the probability of successful interference.
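
    The greatest-common-divisor step mentioned above can be sketched in a few lines; the pulse arrival times below are made-up numbers, used only to show how the base interval and the coded interval pattern fall out of the observed differences.

      from math import gcd
      from functools import reduce

      pulse_times = [0, 6, 10, 18, 28, 40]                    # illustrative pulse arrival times
      intervals = [b - a for a, b in zip(pulse_times, pulse_times[1:])]
      base = reduce(gcd, intervals)                           # shortest common spacing
      print("base interval:", base, "coded intervals:", [i // base for i in intervals])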

  19. Investigating the Simulink Auto-Coding Process

    Science.gov (United States)

    Gualdoni, Matthew J.

    2016-01-01

    Ramses model that should be further investigated. Several skills were required to be built up over the course of the internship project. First and foremost, my Simulink skills have improved drastically, as much of my experience had been modeling electronic circuits as opposed to software models. Furthermore, I am now comfortable working with the Simulink Auto-coder, a tool I had never used until this summer; this tool also tested my critical thinking and C++ knowledge as I had to interpret the C++ code it was generating and attempt to understand how the Simulink model affected the generated code. I had come into the internship with a solid understanding of Matlab code, but had done very little in using it to automate tasks, particularly Simulink tasks; along the same lines, I had rarely used shell script to automate and interface with programs, which I gained a fair amount of experience with this summer, including how to use regular expressions. Lastly, soft skills are an area everyone can continuously improve on; having never worked with NASA engineers, which to me seem to be a completely different breed than what I am used to (commercial electronic engineers), I learned to utilize the wealth of knowledge present at JSC. I wish I had come into the internship knowing exactly how helpful everyone in my branch would be, as I would have picked up on this sooner. I hope that having gained such a strong foundation in Simulink over this summer will open the opportunity to return to work on this project, or potentially other opportunities within the division. The idea of leaving a project I devoted ten weeks to is a hard one to cope with, so having the chance to pick up where I left off sounds appealing; alternatively, I am interested to see if there are any openings in the future that would allow me to work on a project that is more in-line with my research in estimation algorithms. Regardless, this summer has been a milestone in my professional career, and I hope this has

  20. Low Completeness of Bacteraemia Registration in the Danish National Patient Registry.

    Directory of Open Access Journals (Sweden)

    Kim Oren Gradel

    Full Text Available Bacteraemia is associated with significant morbidity and mortality and timely access to reliable information is essential for health care administrators. Therefore, we investigated the completeness of bacteraemia registration in the Danish National Patient Registry (DNPR) containing hospital discharge diagnoses and surgical procedures for all non-psychiatric patients. As gold standard we identified bacteraemia patients in three defined areas of Denmark (~2.3 million inhabitants) from 2000 through 2011 by use of blood culture data retrieved from electronic microbiology databases. Diagnoses coded according to the International Classification of Diseases, version 10, and surgical procedure codes were retrieved from the DNPR. The codes were categorized into seven groups, ranked a priori according to the likelihood of bacteraemia. Completeness was analysed by contingency tables, for all patients and subgroups. We identified 58,139 bacteraemic episodes in 48,450 patients; 37,740 episodes (64.9%) were covered by one or more discharge diagnoses within the seven diagnosis/surgery groups and 18,786 episodes (32.3%) had a code within the highest priority group. Completeness varied substantially according to speciality (from 17.9% for surgical to 36.4% for medical), place of acquisition (from 26.0% for nosocomial to 36.2% for community), and microorganism (from 19.5% for anaerobic Gram-negative bacteria to 36.8% for haemolytic streptococci). The completeness increased from 25.1% in 2000 to 35.1% in 2011. In conclusion, one third of the bacteraemic episodes did not have a relevant diagnosis in the Danish administrative registry recording all non-psychiatric contacts. This source of information should be used cautiously to identify patients with bacteraemia.

  1. AutoBayes/CC: Combining Program Synthesis with Automatic Code Certification: System Description

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Code certification is a lightweight approach to formally demonstrate software quality. It concentrates on aspects of software quality that can be defined and formalized via properties, e.g., operator safety or memory safety. Its basic idea is to require code producers to provide formal proofs that their code satisfies these quality properties. The proofs serve as certificates which can be checked independently, by the code consumer or by certification authorities, e.g., the FAA. It is the idea underlying such approaches as proof-carrying code [6]. Code certification can be viewed as a more practical version of traditional Hoare-style program verification. The properties to be verified are fairly simple and regular so that it is often possible to use an automated theorem prover to automatically discharge all emerging proof obligations. Usually, however, the programmer must still splice auxiliary annotations (e.g., loop invariants) into the program to facilitate the proofs. For complex properties or larger programs this quickly becomes the limiting factor for the applicability of current certification approaches.

  2. Effectiveness and safety of dydrogesterone in regularization of menstrual cycle: a post-marketing study.

    Science.gov (United States)

    Trivedi, Nilesh; Chauhan, Naveen; Vaidya, Vishal

    2016-08-01

    Oral administration of dydrogesterone during second half of menstrual cycle has been shown to reduce menstrual irregularities. This prospective, observational study aimed to determine continued effectiveness of dydrogesterone (prescribed between 1 and 6 cycles or longer) in menstrual cycle regularization in Indian women aged ≥18 years with irregular menstrual cycle for at least 3 months. Those achieving regular cycles (21 to 35 days, inclusive) during treatment were followed up for 6 months after cessation of dydrogesterone treatment. Of the 910 women completing dydrogesterone treatment, 880 (96.7%) achieved cycle regularization (p<0.0001 for 90% success rate) at end of treatment (EOT). Of the 788 subjects available for follow up at 6 months, 747 (94.8%) reported cycle regularity (p<0.0001 for 90% success rate). At EOT, the mean cycle duration reduced by 16.14 (±24.04) days and mean amount of menstrual bleeding decreased by 0.45 (±1.20) pads/day. While five subjects reported worst pain at baseline, none experienced it at EOT. One serious adverse event (appendicitis) and three non-serious adverse events were reported. Dydrogesterone regularizes and improves the duration of the menstrual cycle, reduces the amount of bleeding, relieves menstrual pain and prevents relapse of irregular cycles at six months after discontinuation of treatment.

  3. Fast algorithm for two-dimensional data table use in hydrodynamic and radiative-transfer codes

    International Nuclear Information System (INIS)

    Slattery, W.L.; Spangenberg, W.H.

    1982-01-01

    A fast algorithm for finding interpolated atomic data in irregular two-dimensional tables with differing materials is described. The algorithm is tested in a hydrodynamic/radiative transfer code and shown to be of comparable speed to interpolation in regularly spaced tables, which require no table search. The concepts presented are expected to have application in any situation with irregular vector lengths. Also, the procedures that were rejected either because they were too slow or because they involved too much assembly coding are described
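
    A sketch of the general idea, not the paper's algorithm: locate the cell of an irregularly spaced two-dimensional table by binary search and then interpolate bilinearly, so the cost of a lookup stays close to that of a regularly spaced table. The axes and the placeholder data below are invented.

      import bisect

      def interp2(xgrid, ygrid, table, x, y):
          """Bilinear interpolation on a table with irregularly spaced, sorted axes."""
          i = min(max(bisect.bisect_right(xgrid, x) - 1, 0), len(xgrid) - 2)
          j = min(max(bisect.bisect_right(ygrid, y) - 1, 0), len(ygrid) - 2)
          tx = (x - xgrid[i]) / (xgrid[i + 1] - xgrid[i])
          ty = (y - ygrid[j]) / (ygrid[j + 1] - ygrid[j])
          return ((1 - tx) * (1 - ty) * table[i][j] + tx * (1 - ty) * table[i + 1][j]
                  + (1 - tx) * ty * table[i][j + 1] + tx * ty * table[i + 1][j + 1])

      rho = [0.1, 0.5, 2.0, 10.0]                      # irregular density axis (invented)
      T = [1.0, 3.0, 10.0]                             # irregular temperature axis (invented)
      data = [[r + 0.1 * t for t in T] for r in rho]   # placeholder tabulated values
      print(interp2(rho, T, data, 1.0, 5.0))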

  4. Condition Number Regularized Covariance Estimation

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  5. Rulemaking efforts on codes and standards

    International Nuclear Information System (INIS)

    Millman, G.C.

    1992-01-01

    Section 50.55a of the NRC regulations provides a mechanism for incorporating national codes and standards into the regulatory process. It incorporates by reference ASME Boiler and Pressure Vessel Code (ASME B and PV Code) Section III rules for construction and Section XI rules for inservice inspection and inservice testing. The regulation is periodically amended to update these references. The rulemaking process, as applied to Section 50.55a amendments, is overviewed to familiarize users with associated internal activities of the NRC staff and the manner in which public comments are integrated into the process. The four ongoing rulemaking actions that would individually amend Section 50.55a are summarized. Two of the actions would directly impact requirements for inservice testing. Benefits accrued with NRC endorsement of the ASME B and PV Code, and possible future endorsement of the ASME Operations and Maintenance Code (ASME OM Code), are identified. Emphasis is placed on the need for code writing committees to be especially sensitive to user feedback on code rules incorporated into the regulatory process to ensure that the rules are complete, technically accurate, clear, practical, and enforceable.

  6. Delay discounting as a function of intrinsic/extrinsic religiousness, religious fundamentalism, and regular church attendance.

    Science.gov (United States)

    Weatherly, Jeffrey N; Plumm, Karyn M

    2012-01-01

    Delay discounting occurs when the subjective value of an outcome decreases because its delivery is delayed. Previous research has suggested that the rate at which some, but not all, outcomes are discounted varies as a function of regular church attendance. In the present study, 509 participants completed measures of intrinsic religiousness, extrinsic religiousness, religious fundamentalism, and whether they regularly attended church services. They then completed a delay-discounting task involving five outcomes. Although religiousness was not a significant predictor of discounting for all outcomes, participants scoring high in intrinsic religiousness tended to display less delay discounting than participants scoring low. Likewise, participants scoring high in religious fundamentalism tended to display more delay discounting than participants scoring low. These results partially replicate previous ones in showing that the process of discounting may vary as a function of religiousness. The results also provide some direction for those interested in altering how individuals discount.

  7. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    Science.gov (United States)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

    A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which offers simpler construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves better error-correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. So it is more suitable for optical communication systems.
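
    The sketch below shows the general quasi-cyclic structure only: a (3,6)-regular parity-check matrix assembled from circulant permutation submatrices. The base matrix of shift values is an illustrative choice, not the finite-field multiplicative-group construction of the paper.

      import numpy as np

      def circulant_permutation(p, shift):
          """p x p identity matrix with its columns cyclically shifted."""
          return np.roll(np.eye(p, dtype=int), shift, axis=1)

      def qc_ldpc_parity_check(base_shifts, p):
          return np.vstack([np.hstack([circulant_permutation(p, s) for s in row])
                            for row in base_shifts])

      base = [[1, 2, 3, 4, 5, 6],        # 3 x 6 base matrix of shifts (illustrative)
              [2, 4, 6, 8, 10, 12],
              [3, 6, 9, 12, 15, 18]]
      H = qc_ldpc_parity_check(base, p=19)
      print(H.shape, "column weight:", H.sum(axis=0).max(), "row weight:", H.sum(axis=1).max())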

  8. Regular use of aspirin and pancreatic cancer risk

    Directory of Open Access Journals (Sweden)

    Mahoney Martin C

    2002-09-01

    Full Text Available Abstract Background Regular use of aspirin and other non-steroidal anti-inflammatory drugs (NSAIDs) has been consistently associated with reduced risk of colorectal cancer and adenoma, and there is some evidence for a protective effect for other types of cancer. As experimental studies reveal a possible role for NSAIDs in reducing the risk of pancreatic cancer, epidemiological studies examining similar associations in human populations become more important. Methods In this hospital-based case-control study, 194 patients with pancreatic cancer were compared to 582 age and sex-matched patients with non-neoplastic conditions to examine the association between aspirin use and risk of pancreatic cancer. All participants received medical services at the Roswell Park Cancer Institute in Buffalo, NY and completed a comprehensive epidemiologic questionnaire that included information on demographics, lifestyle factors and medical history as well as frequency and duration of aspirin use. Patients using at least one tablet per week for at least six months were classified as regular aspirin users. Unconditional logistic regression was used to compute crude and adjusted odds ratios (ORs) with 95% confidence intervals (CIs). Results Pancreatic cancer risk in aspirin users was not changed relative to non-users (adjusted OR = 1.00; 95% CI 0.72–1.39). No significant change in risk was found in relation to greater frequency or prolonged duration of use, in the total sample or in either gender. Conclusions These data suggest that regular aspirin use may not be associated with lower risk of pancreatic cancer.

  9. A Low-Jitter Wireless Transmission Based on Buffer Management in Coding-Aware Routing

    Directory of Open Access Journals (Sweden)

    Cunbo Lu

    2015-08-01

    Full Text Available It is significant to reduce packet jitter for real-time applications in a wireless network. Existing coding-aware routing algorithms use the opportunistic network coding (ONC) scheme in a packet coding algorithm. The ONC scheme never delays packets to wait for the arrival of a future coding opportunity. The loss of some potential coding opportunities may degrade the contribution of network coding to jitter performance. In addition, most of the existing coding-aware routing algorithms assume that all flows participating in the network have equal rate. This is unrealistic, since multi-rate environments often appear. To overcome the above problem and expand coding-aware routing to multi-rate scenarios, from the view of data transmission, we present a low-jitter wireless transmission algorithm based on buffer management (BLJCAR), which decides packets in the coding node according to a queue-length based threshold policy instead of the regular ONC policy as used in existing coding-aware routing algorithms. BLJCAR is a unified framework to merge the single rate case and multiple rate case. Simulation results show that the BLJCAR algorithm embedded in coding-aware routing outperforms the traditional ONC policy in terms of jitter, packet delivery delay, packet loss ratio and network throughput under network congestion at any traffic rate.
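
    One way to read the queue-length threshold policy is sketched below; this is an interpretation for illustration, not the BLJCAR algorithm, and the threshold value is invented. The coding node holds a packet to wait for a coding partner only while its queue is short, and stops waiting once the queue length reaches the threshold.

      from collections import deque

      THRESHOLD = 4                    # assumed queue-length threshold

      def serve(queue, has_coding_partner):
          """Return the packet to transmit now, or None to keep waiting for a coding chance."""
          if not queue:
              return None
          if has_coding_partner or len(queue) >= THRESHOLD:
              return queue.popleft()   # send (coded with the partner if one exists)
          return None                  # queue is short: defer, hoping to code later

      q = deque(["p1", "p2", "p3", "p4", "p5"])
      print(serve(q, has_coding_partner=False))   # queue at threshold -> send uncoded
      print(serve(q, has_coding_partner=True))    # coding opportunity -> send coded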

  10. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint

    Directory of Open Access Journals (Sweden)

    Zhi Gao

    2018-05-01

    Full Text Available Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.

  11. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint.

    Science.gov (United States)

    Gao, Zhi; Lao, Mingjie; Sang, Yongsheng; Wen, Fei; Ramesh, Bharath; Zhai, Ruifang

    2018-05-06

    Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.
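
    A minimal sketch of sparse coding against a fixed, pre-learned dictionary is given below. The random dictionary and the ISTA solver are stand-ins chosen for brevity; the paper's ridge dictionary and its specific solvers are not reproduced here.

      import numpy as np

      def ista(D, x, lam=0.1, n_iter=200):
          """Solve min_a ||x - D a||^2 / 2 + lam * ||a||_1 by iterative soft thresholding."""
          L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the smooth part
          a = np.zeros(D.shape[1])
          for _ in range(n_iter):
              z = a - D.T @ (D @ a - x) / L        # gradient step
              a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return a

      rng = np.random.default_rng(0)
      D = rng.normal(size=(64, 128))
      D /= np.linalg.norm(D, axis=0)               # fixed, pre-learned dictionary (here random)
      x_clean = D[:, :5] @ rng.normal(size=5)      # signal built from 5 atoms
      x_noisy = x_clean + 0.05 * rng.normal(size=64)
      a = ista(D, x_noisy)
      print("nonzero coefficients:", int((np.abs(a) > 1e-6).sum()),
            "relative error:", float(np.linalg.norm(D @ a - x_clean) / np.linalg.norm(x_clean)))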

  12. ARES: automated response function code. Users manual. [HPGAM and LSQVM]

    Energy Technology Data Exchange (ETDEWEB)

    Maung, T.; Reynolds, G.M.

    1981-06-01

    This ARES user's manual provides detailed instructions for a general understanding of the Automated Response Function Code and gives step by step instructions for using the complete code package on a HP-1000 system. This code is designed to calculate response functions of NaI gamma-ray detectors, with cylindrical or rectangular geometries.

  13. Sub-grouping of Plasmodium falciparum 3D7 var genes based on sequence analysis of coding and non-coding regions

    DEFF Research Database (Denmark)

    Lavstsen, Thomas; Salanti, Ali; Jensen, Anja T R

    2003-01-01

    and organization of the 3D7 PfEMP1 repertoire was investigated on the basis of the complete genome sequence. METHODS: Using two tree-building methods we analysed the coding and non-coding sequences of 3D7 var and rif genes as well as var genes of other parasite strains. RESULTS: var genes can be sub...

  14. The Vulnerability Assessment Code for Physical Protection System

    International Nuclear Information System (INIS)

    Jang, Sung Soon; Yoo, Ho Sik

    2007-01-01

    To neutralize the increasing terror threats, nuclear facilities have a strong physical protection system (PPS). A PPS includes detectors, door locks, fences, regular guard patrols, and a hot line to the nearest military force. To design an efficient PPS and to operate it fully, a vulnerability assessment process is required. Evaluating the PPS of a nuclear facility is a complicated process and, hence, several assessment codes have been developed. The estimation of adversary sequence interruption (EASI) code analyzes vulnerability along a single intrusion path. To evaluate many paths to a valuable asset in an actual facility, the systematic analysis of vulnerability to intrusion (SAVI) code was developed. KAERI improved SAVI and produced the Korean analysis of vulnerability to intrusion (KAVI) code. Existing codes (SAVI and KAVI) are limited in how they represent distances within a facility because they use a simplified model of the PPS called the adversary sequence diagram. In the adversary sequence diagram the positions of doors, sensors and fences are described only by the area in which they are located. Thus, the distances between elements are inaccurate and the range effect of sensors cannot be reflected. In this abstract, we suggest an accurate and intuitive vulnerability assessment based on raster map modeling of the PPS. The raster map of the PPS accurately represents the relative positions of elements and, thus, the range effect of sensors can easily be incorporated. Most importantly, the raster map is easy to understand.
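
    The raster-map idea can be illustrated with a short sketch (an interpretation, not the KAVI code): every grid cell stores the detection probability implied by the sensors whose range covers it, so the cumulative detection probability along any candidate intrusion path can be read directly off the raster. Sensor positions, ranges and probabilities below are invented.

      import numpy as np

      grid = np.zeros((20, 20))                               # detection probability per cell
      sensors = [((5, 5), 4.0, 0.9), ((14, 12), 6.0, 0.8)]    # (position, range, P(detect))
      ys, xs = np.mgrid[0:20, 0:20]
      for (sy, sx), radius, p in sensors:
          covered = (ys - sy) ** 2 + (xs - sx) ** 2 <= radius ** 2
          grid[covered] = 1 - (1 - grid[covered]) * (1 - p)   # combine overlapping sensors

      path = [(0, 0), (5, 3), (10, 8), (14, 12), (19, 19)]    # candidate intrusion path
      p_detect = 1 - np.prod([1 - grid[y, x] for y, x in path])
      print("cumulative detection probability along path:", round(float(p_detect), 3))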

  15. Cognitive and personality factors in the regular practice of martial arts.

    Science.gov (United States)

    Fabio, Rosa A; Towey, Giulia E

    2018-06-01

    The effects of regular practice of martial arts are considered controversial, and studies in this field have limited their attention to single psychological benefits. The aim of this study is to examine the relationship between the regular practice of martial arts and cognitive and personality factors, such as attention, creativity and school performance, together with self-esteem, self-efficacy and aggression. The design consists of a factorial design with two independent variables (groups and age levels) and seven dependent variables (attention, creativity, intelligence, school performance, self-esteem, self-efficacy and aggression). Seventy-six people practicing martial arts were compared with a control group (70 participants) not involved in any martial arts training. Martial artists were divided into groups of three levels of experience: beginners, intermediate and experts. Each completed a battery of tests that measured all the cognitive and personality factors. Martial artists presented a better performance in the attentional and creativity tests. All the personality factors analyzed presented a significant difference between the two groups, resulting in higher levels of self-esteem and self-efficacy, and a decrease in aggressiveness. Regular practice of martial arts can influence many functional aspects, leading to positive effects on both personality and cognitive factors, with implications for psychological well-being and for the educational field. The results were discussed with reference to theories claiming that regular activity has a differential positive effect on some aspects of cognition.

  16. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

    Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, the phonetic radical of many characters has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement in Chinese) were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject

  17. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Full Text Available Background: Regularity effect can affect performance in prospective memory (PM), but little is known on the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examine the role of several cognitive functions including certain dimensions of executive functions (planning, inhibition, shifting), and binding, short-term memory, and retrospective episodic memory to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performances was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones with no age effect. It appeared that recalling of regular activities only involved planning for both intermediate and older adults, while recalling of irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical

  18. J-regular rings with injectivities

    OpenAIRE

    Shen, Liang

    2010-01-01

    A ring $R$ is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.

  19. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES - their success as regularization methods is highly problem dependent.

  20. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES - their success as regularization methods is highly problem dependent.
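
    A small illustration of the idea (the iteration count acting as the regularization parameter) is given below, on an ill-conditioned symmetric toy problem; it is only a sketch, and the semi-convergence behaviour it probes depends on the problem and the noise level.

      import numpy as np
      from scipy.linalg import hilbert
      from scipy.sparse.linalg import minres

      n = 64
      A = hilbert(n)                                   # symmetric and severely ill-conditioned
      x_true = np.sin(np.linspace(0, np.pi, n))
      b = A @ x_true + 1e-6 * np.random.default_rng(0).normal(size=n)

      for k in (2, 5, 10, 30, 100):                    # truncation index plays the role of a
          x_k, _ = minres(A, b, maxiter=k)             # regularization parameter
          rel_err = np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true)
          print(k, "iterations -> relative error", round(float(rel_err), 3))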

  1. Evaluation of completeness of selected poison control center data fields.

    Science.gov (United States)

    Jaramillo, Jeanie E; Marchbanks, Brenda; Willis, Branch; Forrester, Mathias B

    2010-08-01

    Poison control center data are used in research and surveillance. Due to the large volume of information, these efforts are dependent on data being recorded in machine readable format. However, poison center records include non-machine readable text fields and machine readable coded fields, some of which are duplicative. Duplicating this data increases the chance of inaccurate/incomplete coding. For surveillance efforts to be effective, coding should be complete and accurate. Investigators identified a convenience sample of 964 records and reviewed the substance code determining if it matched its text field. They also reviewed the coded clinical effects and treatments determining if they matched the notes text field. The substance code matched its text field for 91.4% of the substances. The clinical effects and treatments codes matched their text field for 72.6% and 82.4% of occurrences respectively. This under-reporting of clinical effects and treatments has surveillance and public health implications.

  2. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
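
    A rough sketch of the underlying graph-regularized ranking step is shown below, under stated assumptions: scores solve f = (I - alpha S)^{-1} y for a symmetrically normalized similarity graph S and a query indicator y, and several graphs built at different scales are combined with fixed equal weights. MultiG-Rank additionally learns those weights jointly with the scores, which is omitted here.

      import numpy as np

      def normalize(W):
          d = np.maximum(W.sum(axis=1), 1e-12)
          Dinv = np.diag(1.0 / np.sqrt(d))
          return Dinv @ W @ Dinv

      def graph_rank(W, y, alpha=0.9):
          S = normalize(W)
          return np.linalg.solve(np.eye(len(y)) - alpha * S, y)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(30, 8))                                 # 30 items ("protein domains")
      dist = np.linalg.norm(X[:, None] - X[None], axis=-1)
      graphs = [np.exp(-dist ** 2 / s) for s in (0.5, 2.0, 8.0)]   # several graph models
      y = np.zeros(30)
      y[0] = 1.0                                                   # the query item
      scores = sum(graph_rank(W, y) for W in graphs) / len(graphs)
      print("top-ranked items:", np.argsort(-scores)[:5])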

  3. Final Technical Report for GO17004 Regulatory Logic: Codes and Standards for the Hydrogen Economy

    Energy Technology Data Exchange (ETDEWEB)

    Nakarado, Gary L. [Regulatory Logic LLC, Golden, CO (United States)

    2017-02-22

    The objectives of this project are to: develop a robust supporting research and development program to provide critical hydrogen behavior data and a detailed understanding of hydrogen combustion and safety across a range of scenarios, needed to establish setback distances in building codes and minimize the overall data gaps in code development; support and facilitate the completion of technical specifications by the International Organization for Standardization (ISO) for gaseous hydrogen refueling (TS 20012) and standards for on-board liquid (ISO 13985) and gaseous or gaseous blend (ISO 15869) hydrogen storage by 2007; support and facilitate the effort, led by the NFPA, to complete the draft Hydrogen Technologies Code (NFPA 2) by 2008; with experimental data and input from Technology Validation Program element activities, support and facilitate the completion of standards for bulk hydrogen storage (e.g., NFPA 55) by 2008; facilitate the adoption of the most recently available model codes (e.g., from the International Code Council [ICC]) in key regions; complete preliminary research and development on hydrogen release scenarios to support the establishment of setback distances in building codes and provide a sound basis for model code development and adoption; support and facilitate the development of Global Technical Regulations (GTRs) by 2010 for hydrogen vehicle systems under the United Nations Economic Commission for Europe, World Forum for Harmonization of Vehicle Regulations and Working Party on Pollution and Energy Program (ECE-WP29/GRPE); and to Support and facilitate the completion by 2012 of necessary codes and standards needed for the early commercialization and market entry of hydrogen energy technologies.

  4. Efficient topology optimization in MATLAB using 88 lines of code

    DEFF Research Database (Denmark)

    Andreassen, Erik; Clausen, Anders; Schevenels, Mattias

    2011-01-01

    The paper presents an efficient 88 line MATLAB code for topology optimization. It has been developed using the 99 line code presented by Sigmund (Struct Multidisc Optim 21(2):120–127, 2001) as a starting point. The original code has been extended by a density filter, and a considerable improvement ... of the basic code to include recent PDE-based and black-and-white projection filtering methods. The complete 88 line code is included as an appendix and can be downloaded from the web site www.topopt.dtu.dk.

  5. Provably optimal parallel transport sweeps on regular grids

    International Nuclear Information System (INIS)

    Adams, M. P.; Adams, M. L.; Hawkins, W. D.; Smith, T.; Rauchwerger, L.; Amato, N. M.; Bailey, T. S.; Falgout, R. D.

    2013-01-01

    We have found provably optimal algorithms for full-domain discrete-ordinate transport sweeps on regular grids in 3D Cartesian geometry. We describe these algorithms and sketch a proof that they always execute the full eight-octant sweep in the minimum possible number of stages for a given P_x x P_y x P_z partitioning. Computational results demonstrate that our optimal scheduling algorithms execute sweeps in the minimum possible stage count. Observed parallel efficiencies agree well with our performance model. An older version of our PDT transport code achieves almost 80% parallel efficiency on 131,072 cores, on a weak-scaling problem with only one energy group, 80 directions, and 4096 cells/core. A newer version is less efficient at present - we are still improving its implementation - but achieves almost 60% parallel efficiency on 393,216 cores. These results conclusively demonstrate that sweeps can perform with high efficiency on core counts approaching 10^6. (authors)

  6. Provably optimal parallel transport sweeps on regular grids

    Energy Technology Data Exchange (ETDEWEB)

    Adams, M. P.; Adams, M. L.; Hawkins, W. D. [Dept. of Nuclear Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Smith, T.; Rauchwerger, L.; Amato, N. M. [Dept. of Computer Science and Engineering, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Bailey, T. S.; Falgout, R. D. [Lawrence Livermore National Laboratory (United States)

    2013-07-01

    We have found provably optimal algorithms for full-domain discrete-ordinate transport sweeps on regular grids in 3D Cartesian geometry. We describe these algorithms and sketch a proof that they always execute the full eight-octant sweep in the minimum possible number of stages for a given P_x x P_y x P_z partitioning. Computational results demonstrate that our optimal scheduling algorithms execute sweeps in the minimum possible stage count. Observed parallel efficiencies agree well with our performance model. An older version of our PDT transport code achieves almost 80% parallel efficiency on 131,072 cores, on a weak-scaling problem with only one energy group, 80 directions, and 4096 cells/core. A newer version is less efficient at present - we are still improving its implementation - but achieves almost 60% parallel efficiency on 393,216 cores. These results conclusively demonstrate that sweeps can perform with high efficiency on core counts approaching 10^6. (authors)

  7. The DIT nuclear fuel assembly physics design code

    International Nuclear Information System (INIS)

    Jonsson, A.

    1988-01-01

    The DIT code is the Combustion Engineering, Inc. (C-E) nuclear fuel assembly design code. It belongs to a class of codes, all similar in structure and strategy, that may be characterized by the spectrum and spatial calculations being performed in two dimensions and in a single job step for the entire assembly. The forerunner of this class of codes is the United Kingdom Atomic Energy Authority WIMS code, the first version of which was completed 25 yr ago. The structure and strategy of assembly spectrum codes have remained remarkably similar to the original concept thus proving its usefulness. As other organizations, including C-E, have developed their own versions of the concept, many important variations have been added that significantly influence the accuracy and performance of the resulting computational tool. Those features, which are unique to the DIT code and which might be of interest to the community of fuel assembly physics design code users and developers, are described and discussed

  8. Jovan Hadžić's civil procedure code draft (1845)

    Directory of Open Access Journals (Sweden)

    Stanković Uroš

    2013-01-01

    Full Text Available The article sheds light on the civil procedure code draft prepared by the renowned Serbian lawmaker Jovan Hadžić in 1845. In contrast to Hadžić's work on producing the civil code, his efforts to write a civil procedure code are to a large extent obscure. The original text of the draft has not been found. Therefore, one is forced to reconstruct it with the aid of auxiliary sources, among which the record kept by the commission tasked with revising the draft has prevailing significance. The author made an attempt to determine the structure of the draft, as well as the contents of its provisions, as accurately as possible, wherein the remarks that the Commission had made were of most assistance. Hadžić consigned his draft to Prince Aleksandar Karađorđević on August 4th 1845. The draft consisted of an introduction and three parts. The introduction comprised general provisions. Therein was stipulated the principle of trial by written declaration, determined five kinds of courts before which civil procedure should occur, and established vacatio legis. The first and convincingly the largest part of the draft was most likely entitled 'On principal litigations and litigation evidence' ('O parnicama glavnima i parničnim dokazatelstvima'). It included diverse rules, such as proceedings before peace courts, proceedings before district courts, the ordinary course of litigation (i.e. rules on filing a complaint, scheduling of hearings, subpoena delivery, procurement of evidence, trial, bankruptcy proceedings, forms of evidence, presentation of evidence and deliberation of judgment), appellate procedure, new trial and enforcement procedure. The third part subsumed the title 'likvidacija' (literally 'liquidation'), whose contents are undeterminable on the basis of the existing text of the draft, resolution of bankruptcy proceedings via settlement, resolution of regular litigations via settlement, and the title 'on special proceedings and advantages' ('o osobenom

  9. Some regularities in invertebrate succession in different microhabitats on pine stumps

    OpenAIRE

    Franch, Joan

    1989-01-01

    Sixty-eight pine stumps, felled on known dates from one to sixteen years before the moment of sampling, have been studied in the San Juan de la Peña woodland (province of Huesca). Four microhabitats were distinguished: bark, subcortical space, sapwood and heartwood. The object of the study is to compare the invertebrate macrofauna succession in the different microhabitats in order to find regularities among them. The biocenosis has not been completely studied: ipidae, diptera and annelidae are ...

  10. Mechanical behaviour of PEM fuel cell catalyst layers during regular cell operation

    OpenAIRE

    Maher A.R. Sadiq Al-Baghdadi

    2010-01-01

    Damage mechanisms in a proton exchange membrane fuel cell are accelerated by mechanical stresses arising during fuel cell assembly (bolt assembly) and by stresses that arise during fuel cell operation, because the cell consists of materials with different thermal expansion and swelling coefficients. Therefore, in order to acquire a complete understanding of the mechanical behaviour of the catalyst layers during regular cell operation, mechanical response under steady-state hygro-thermal stresses s...

  11. MED101: a laser-plasma simulation code. User guide

    International Nuclear Information System (INIS)

    Rodgers, P.A.; Rose, S.J.; Rogoyski, A.M.

    1989-12-01

    Complete details for running the 1-D laser-plasma simulation code MED101 are given including: an explanation of the input parameters, instructions for running on the Rutherford Appleton Laboratory IBM, Atlas Centre Cray X-MP and DEC VAX, and information on three new graphics packages. The code, based on the existing MEDUSA code, is capable of simulating a wide range of laser-produced plasma experiments including the calculation of X-ray laser gain. (author)

  12. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  13. Higher derivative regularization and chiral anomaly

    International Nuclear Information System (INIS)

    Nagahama, Yoshinori.

    1985-02-01

    A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)

  14. ASME nuclear codes and standards risk management strategic plan

    International Nuclear Information System (INIS)

    Balkey, Kenneth R.

    2003-01-01

    Over the past 15 years, several risk-informed initiatives have been completed or are under development within the ASME Nuclear Codes and Standards organization. In order to better manage the numerous initiatives in the future, the ASME Board on Nuclear Codes and Standards has recently developed and approved a Risk Management Strategic Plan. This paper presents the latest approved version of the plan beginning with a background of applications completed to date, including the recent issuance of the ASME Standard for Probabilistic Risk Assessment (PRA) for Nuclear Power Plant Applications. The paper discusses potential applications within ASME Nuclear Codes and Standards that may require expansion of the PRA Standard, such as for new generation reactors, or the development of new PRA Standards. A long-term vision for the potential development and evolution to a nuclear systems code that adopts a risk-informed approach across a facility life-cycle (design, construction, operation, maintenance, and closure) is summarized. Finally, near term and long term actions are defined across the ASME Nuclear Codes and Standards organizations related to risk management, and related U.S. regulatory activities are also summarized. (author)

  15. A full-fledged micromagnetic code in fewer than 70 lines of NumPy

    International Nuclear Information System (INIS)

    Abert, Claas; Bruckner, Florian; Vogler, Christoph; Windl, Roman; Thanhoffer, Raphael; Suess, Dieter

    2015-01-01

    We present a complete micromagnetic finite-difference code in fewer than 70 lines of Python. The code makes extensive use of the NumPy library and computes the exchange field by finite differences and the demagnetization field with a fast convolution algorithm. Since the magnetization in finite-difference micromagnetics is represented by a multi-dimensional array and the NumPy library features a rich interface for this data structure, the code we present is an ideal starting point for the development of novel algorithms. - Highlights: • A very concise but complete micromagnetic code written in Python is presented. • An introduction to finite-difference micromagnetics is provided. • The code is a perfect starting point for the development of novel algorithms
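
    The authors' 70-line code is not included in this record. Purely as an illustration of the finite-difference ingredient described above, the sketch below computes only the exchange field as a discrete Laplacian of the unit magnetization with NumPy; the material constants, cell size, and the periodic boundary handling via np.roll are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical material parameters and cell size, for illustration only.
A = 1.3e-11          # exchange constant (J/m)
Ms = 8.0e5           # saturation magnetization (A/m)
mu0 = 4e-7 * np.pi   # vacuum permeability (T*m/A)
dx = 5e-9            # cubic cell size (m)

def exchange_field(m):
    """m: array of shape (nx, ny, nz, 3) of unit magnetization vectors.
    Returns the exchange field on the same grid. np.roll imposes periodic
    boundaries, which differs from the Neumann boundaries commonly used
    in finite-difference micromagnetics."""
    lap = np.zeros_like(m)
    for axis in range(3):   # second difference along x, y, z
        lap += (np.roll(m, 1, axis=axis) + np.roll(m, -1, axis=axis) - 2.0 * m) / dx**2
    return 2.0 * A / (mu0 * Ms) * lap

m = np.zeros((16, 16, 1, 3))
m[..., 2] = 1.0                     # uniform +z magnetization
print(exchange_field(m).max())      # exchange field vanishes for a uniform state
```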

  16. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  17. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-01-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  18. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
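
    The exact MultiG-Rank objective and weight-learning rule are not given in these records. The following sketch only illustrates the general idea of alternating between a score update and a graph-weight update over several Laplacians; the closed-form score update, the exponential weight rule, and all parameters are simplified stand-ins, not the authors' algorithm.

```python
import numpy as np

def multi_graph_rank(Ls, y, lam=1.0, gamma=1.0, iters=20):
    """Ls: list of (n, n) graph Laplacians; y: length-n query/relevance vector.
    Alternates between solving for ranking scores f with the current combined
    Laplacian and re-weighting the graphs so that graphs on which f is smoother
    (smaller f^T L_k f) receive larger weights."""
    n, K = len(y), len(Ls)
    w = np.full(K, 1.0 / K)
    I = np.eye(n)
    f = y.copy()
    for _ in range(iters):
        L = sum(wk * Lk for wk, Lk in zip(w, Ls))
        f = np.linalg.solve(I + lam * L, y)        # score update for fixed weights
        s = np.array([f @ Lk @ f for Lk in Ls])    # smoothness of f on each graph
        w = np.exp(-s / gamma)                     # heuristic weight update
        w /= w.sum()
    return f, w
```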

  19. Regular growth of systems of functions and systems of non-homogeneous convolution equations in convex domains of the complex plane

    International Nuclear Information System (INIS)

    Krivosheev, A S

    2000-01-01

    In this paper we introduce the notion of regular growth for a system of entire functions of finite order and type. This is a direct and natural generalization of the classical completely regular growth of an entire function. We obtain sufficient and necessary conditions for the solvability of a system of non-homogeneous convolution equations in convex domains of the complex plane. These conditions depend on whether the system of Laplace transforms of the analytic functionals that generate the convolution equations has regular growth. In the case of smooth convex domains, these solvability conditions form a criterion.

  20. 75 FR 53966 - Regular Meeting

    Science.gov (United States)

    2010-09-02

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  1. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from the CRAN.

  2. Work and family life of childrearing women workers in Japan: comparison of non-regular employees with short working hours, non-regular employees with long working hours, and regular employees.

    Science.gov (United States)

    Seto, Masako; Morimoto, Kanehisa; Maruyama, Soichiro

    2006-05-01

    This study assessed the working and family life characteristics, and the degree of domestic and work strain, of female workers with different employment statuses and weekly working hours who are rearing children. Participants were the mothers of preschoolers in a large Japanese city. We classified the women into three groups according to the hours they worked and their employment conditions: non-regular employees working less than 30 h a week (n=136); non-regular employees working 30 h or more per week (n=141); and regular employees working 30 h or more a week (n=184). We compared the groups with respect to the subjective value of work, financial difficulties, childcare and housework burdens, psychological effects, and strains such as work and family strain, work-family conflict, and work dissatisfaction. Regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than non-regular employees. Non-regular employees were more likely to be facing financial difficulties. In particular, non-regular employees working longer hours tended to encounter socioeconomic difficulties and often lacked support from family and friends. Female workers with children may have different social backgrounds and different stressors according to their working hours and work status.

  3. DISCURSIVE ACTUALIZATION OF ETHNO-LINGUOCULTURAL CODE IN ENGLISH GLUTTONY

    Directory of Open Access Journals (Sweden)

    Nikishkova Mariya Sergeevna

    2014-11-01

    Full Text Available The article presents an overview of linguistic research on the gastronomic (gluttony) communicative environment as an ethnocultural phenomenon from the standpoint of conceptology, discourse study, and linguosemiotics. The authors study linguosemiotic encoding and decoding in English gastronomic (gluttony) discourse and the ways gastronomic gluttonyms are embedded in everyday communication. Anglophone ethnic specificities are revealed, and different ways of forming gluttony texts (including precedent ones) are investigated. The linguosemiotic parameters of ethnocultural (anglophone) gastronomic coded communication are established and their discursive characteristics identified. It is determined that in English gastronomic communication the discursive actualization of the ethno-linguocultural code has a dynamic nature; the constitutive features of gastronomic discourse have symbolic (semiotic) foundations and are connected with such semiotic categories as code, encoding, and decoding. Food is found to be semiotic in origin and to represent a cultural code. The semiosis of English gastronomic texts is regularly filled with codes of traditional "English-likeness" (an ethnic term after Roland Barthes) expressed by gluttonyms. The "nationality" code is detected through the names of products specific to certain areas; the national identity of the ethnic code is also highlighted by ways of garnishing and serving dishes and by characteristics of particular local preparation methods. The authors analyze the "lingualization" of food images, which is ambivalent in character and determined, first, by food signs (gluttonyms) that structure the common space of gastronomic discourse and provide its ethnic linguocultural food source and, second, by the immersion of the resulting images into a specific ethnic code that is decoded as gastronomic discourse unfolds. The precedent texts accumulate ethnic information supplying an adequate gastronomic worldview

  4. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
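
    The abstract does not specify the convex subspace used for the motion-estimation test case, so the sketch below substitutes a simple l2 ball of radius tau as the set of admissible solutions. It only illustrates the general pattern the paper describes: a gradient step on the data term followed by a projection of the iterate, with no regularization term in the objective.

```python
import numpy as np

def projected_gradient(A, b, tau, iters=500):
    """Approximately solve min ||A x - b||^2 while keeping every iterate inside
    the convex set {x : ||x|| <= tau} (an illustrative stand-in for the
    problem-specific set of regularized candidate solutions)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - b)       # gradient step on the data term
        nrm = np.linalg.norm(x)
        if nrm > tau:                          # projection replaces the penalty term
            x *= tau / nrm
    return x
```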

  5. Development of thermal hydraulic analysis code for IHX of FBR

    International Nuclear Information System (INIS)

    Kumagai, Hiromichi; Naohara, Nobuyuki

    1991-01-01

    In order to obtain flow resistance correlations for a thermal-hydraulic analysis code concerned with an intermediate heat exchanger (IHX) of an FBR, a hydraulic experiment with air was carried out on a bundle of tubes arranged in in-line and staggered fashions. The main results are summarized as follows. (1) Regarding the pressure loss per unit length of a tube bundle in a dense regular-triangle arrangement, the in-line fashion is almost the same as the staggered one. (2) In the case of a 30deg sector model of the IHX tube bundle, the pressure loss is 1/3 of that of the in-line or staggered arrangement. (3) From these experimental data, flow resistance correlations for the thermal-hydraulic analysis code are obtained. (author)

  6. Geometric regularizations and dual conifold transitions

    International Nuclear Information System (INIS)

    Landsteiner, Karl; Lazaroiu, Calin I.

    2003-01-01

    We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)

  7. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...

  8. The computer code system for reactor radiation shielding in design of nuclear power plant

    International Nuclear Information System (INIS)

    Li Chunhuai; Fu Shouxin; Liu Guilian

    1995-01-01

    The computer code system used in reactor radiation shielding design of nuclear power plants includes source term codes, discrete ordinates transport codes, Monte Carlo and albedo Monte Carlo codes, kernel integration codes, an optimization code, a temperature field code, a skyshine code, coupling calculation codes and some processing codes for data libraries. This computer code system offers a satisfactory variety of codes and complete sets of data libraries. It is widely used in reactor radiation shielding design and safety analysis of nuclear power plants and other nuclear facilities.

  9. Entanglement in coined quantum walks on regular graphs

    International Nuclear Information System (INIS)

    Carneiro, Ivens; Loo, Meng; Xu, Xibai; Girerd, Mathieu; Kendon, Viv; Knight, Peter L

    2005-01-01

    Quantum walks, both discrete (coined) and continuous time, form the basis of several recent quantum algorithms. Here we use numerical simulations to study the properties of discrete, coined quantum walks. We investigate the variation in the entanglement between the coin and the position of the particle by calculating the entropy of the reduced density matrix of the coin. We consider both dynamical evolution and asymptotic limits for coins of dimensions from two to eight on regular graphs. For low coin dimensions, quantum walks which spread faster (as measured by the mean square deviation of their distribution from uniform) also exhibit faster convergence towards the asymptotic value of the entanglement between the coin and particle's position. For high-dimensional coins, the DFT coin operator is more efficient at spreading than the Grover coin. We study the entanglement of the coin on regular finite graphs such as cycles, and also show that on complete bipartite graphs, a quantum walk with a Grover coin is always periodic with period four. We generalize the 'glued trees' graph used by Childs et al (2003 Proc. STOC, pp 59-68) to higher branching rate (fan out) and verify that the scaling with branching rate and with tree depth is polynomial
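
    As a small companion to the simulations described above, the following sketch runs a Hadamard-coined walk on a cycle and evaluates the coin-position entanglement as the entropy of the coin's reduced density matrix. The cycle length, number of steps, and initial coin state are illustrative choices, not those of the paper.

```python
import numpy as np

N, steps = 64, 100
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # two-dimensional Hadamard coin

psi = np.zeros((N, 2), dtype=complex)            # amplitudes psi[position, coin]
psi[0] = [1 / np.sqrt(2), 1j / np.sqrt(2)]       # symmetric initial coin state at site 0

for _ in range(steps):
    psi = psi @ H                                # coin flip (H is symmetric)
    shifted = np.empty_like(psi)
    shifted[:, 0] = np.roll(psi[:, 0], -1)       # coin 0 shifts one way around the cycle
    shifted[:, 1] = np.roll(psi[:, 1], +1)       # coin 1 shifts the other way
    psi = shifted

rho_coin = psi.conj().T @ psi                    # reduced coin state (position traced out)
evals = np.linalg.eigvalsh(rho_coin)
entropy = -sum(p * np.log2(p) for p in evals if p > 1e-12)
print("coin-position entanglement entropy:", entropy)
```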

  10. Hypothesis of Lithocoding: Origin of the Genetic Code as a "Double Jigsaw Puzzle" of Nucleobase-Containing Molecules and Amino Acids Assembled by Sequential Filling of Apatite Mineral Cellules.

    Science.gov (United States)

    Skoblikow, Nikolai E; Zimin, Andrei A

    2016-05-01

    The hypothesis of direct coding, which assumes direct contact of pairs of coding molecules with amino acid side chains in hollow unit cells (cellules) of a mineral with a regular crystal structure, is proposed. The coding nucleobase-containing molecules in each cellule (named a "lithocodon") partially shield each other; the remaining free space determines the stereochemical character of the side chain that fills it. Apatite-group minerals are considered the most suitable for this type of coding (named "lithocoding"). A scheme of the cellule with certain stereometric parameters, providing for the isomeric selection of contacting molecules, is proposed. We modelled the filling of cellules with molecules involved in direct coding, with the possibility that a single combination of such molecules codes for a group of stereochemically similar amino acids. The regular ordered arrangement of cellules enables the polymerization of amino acids and nucleobase-containing molecules in the same direction (named "lithotranslation"), preventing a shift of the coding. A table of the presumed "LithoCode" (possible and optimal lithocodon assignments for abiogenically synthesized α-amino acids involved in lithocoding and lithotranslation) is proposed. The magmatic nature of the mineral, abiogenic synthesis of organic molecules and polymerization events are considered within the framework of the proposed "volcanic scenario".

  11. Simple regular black hole with logarithmic entropy correction

    Energy Technology Data Exchange (ETDEWEB)

    Morales-Duran, Nicolas; Vargas, Andres F.; Hoyos-Restrepo, Paulina; Bargueno, Pedro [Universidad de los Andes, Departamento de Fisica, Bogota, Distrito Capital (Colombia)

    2016-10-15

    A simple regular black hole solution satisfying the weak energy condition is obtained within Einstein-non-linear electrodynamics theory. We have computed the thermodynamic properties of this black hole by a careful analysis of the horizons and we have found that the usual Bekenstein-Hawking entropy gets corrected by a logarithmic term. Therefore, in this sense our model realises some quantum gravity predictions which add this kind of correction to the black hole entropy. In particular, we have established some similarities between our model and a quadratic generalised uncertainty principle. This similarity has been confirmed by the existence of a remnant, which prevents complete evaporation, in agreement with the quadratic generalised uncertainty principle case. (orig.)

  12. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  13. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
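
    Putting the pieces of the abstract together, the optimization it describes is an expected-shortfall minimization with a budget constraint and an L2 penalty on the weights. The sketch below writes that down in the standard Rockafellar-Uryasev form using the third-party cvxpy package; the simulated returns, confidence level, and penalty weight are illustrative, and this is not the authors' code.

```python
import numpy as np
import cvxpy as cp   # third-party convex-optimization package, assumed installed

rng = np.random.default_rng(0)
T, n, alpha, lam = 500, 20, 0.95, 0.1
R = rng.normal(0.001, 0.02, size=(T, n))                 # simulated return scenarios

w, t = cp.Variable(n), cp.Variable()
loss = -R @ w                                            # scenario losses
es = t + cp.sum(cp.pos(loss - t)) / ((1 - alpha) * T)    # empirical expected shortfall
objective = cp.Minimize(es + lam * cp.sum_squares(w))    # L2 "diversification pressure"
problem = cp.Problem(objective, [cp.sum(w) == 1])        # budget constraint
problem.solve()
print("optimal weights:", np.round(w.value, 3))
```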

  14. Two-Point Codes for the Generalised GK curve

    DEFF Research Database (Denmark)

    Barelli, Élise; Beelen, Peter; Datta, Mrinmoy

    2017-01-01

    We improve previously known lower bounds for the minimum distance of certain two-point AG codes constructed using a Generalized Giulietti–Korchmaros curve (GGK). Castellanos and Tizziotti recently described such bounds for two-point codes coming from the Giulietti–Korchmaros curve (GK). Our results completely cover and in many cases improve on their results, using different techniques, while also supporting any GGK curve. Our method builds on the order bound for AG codes: to enable this, we study certain Weierstrass semigroups. This allows an efficient algorithm for computing our improved bounds.

  15. The DIT nuclear fuel assembly physics design code

    International Nuclear Information System (INIS)

    Jonsson, A.

    1987-01-01

    DIT is the Combustion Engineering, Inc. (C-E) nuclear fuel assembly design code. It belongs to a class of codes, all similar in structure and strategy, which may be characterized by the spectrum and spatial calculations being performed in 2D and in a single job step for the entire assembly. The forerunner of this class of codes is the U.K.A.E.A. WIMS code, the first version of which was completed 25 years ago. The structure and strategy of assembly spectrum codes have remained remarkably similar to the original concept thus proving its usefulness. As other organizations, including C-E, have developed their own versions of the concept, many important variations have been added which significantly influence the accuracy and performance of the resulting computational tool. This paper describes and discusses those features which are unique to the DIT code and which might be of interest to the community of fuel assembly physics design code users and developers

  16. Tessellating the Sphere with Regular Polygons

    Science.gov (United States)

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
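
    The record does not reproduce the argument, but the standard Euler-characteristic counting (added here for context, not quoted from the article) shows why only these polygons can occur:

```latex
% For a tiling of the sphere by regular p-gons with q faces meeting at each vertex,
\[
V - E + F = 2, \qquad pF = 2E, \qquad qV = 2E
\;\Longrightarrow\;
2E\left(\frac{1}{p} + \frac{1}{q} - \frac{1}{2}\right) = 2 > 0,
\]
% hence 1/p + 1/q > 1/2, and with q >= 3 this forces p in {3, 4, 5}:
% spherical triangles, squares, and pentagons.
```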

  17. Accuracy of Digitally Fabricated Wax Denture Bases and Conventional Completed Complete Dentures

    Directory of Open Access Journals (Sweden)

    Bogna Stawarczyk

    2017-12-01

    Full Text Available The purpose of this investigation was to analyze the accuracy of digitally fabricated wax trial dentures and conventionally finalized complete dentures in comparison to a surface tessellation language (STL) dataset. A generated data set for the denture bases and the tooth sockets was used, converted into STL format, and saved as reference. Five mandibular and 5 maxillary denture bases were milled from wax blanks and denture teeth were waxed into their tooth sockets. Each complete denture was checked for fit, waxed onto the dental cast, and digitized using an optical laboratory scanning device. The complete dentures were completed conventionally using the injection method, finished, and scanned. The resulting STL datasets were exported into the three-dimensional (3D) software GOM Inspect. Each of the 5 mandibular and 5 maxillary complete dentures was aligned with the STL dataset and the wax trial denture dataset. Alignment was performed based on a best-fit algorithm. A three-dimensional analysis of the spatial divergences in the x-, y- and z-axes was performed by the 3D software and visualized in a color-coded illustration. The mean positive and negative deviations between the datasets were calculated automatically. In a direct comparison between maxillary wax trial dentures and complete dentures, the complete dentures showed higher deviations from the STL dataset than the wax trial dentures. The deviations occurred in the area of the teeth as well as in the distal area of the denture bases. In contrast, the highest deviations in both the mandibular wax trial dentures and the mandibular complete dentures were observed in the distal area. The complete dentures showed higher deviations on the occlusal surfaces of the teeth compared to the wax dentures. Computer-aided design/computer-aided manufacturing (CAD/CAM)-fabricated wax dentures exhibited fewer deviations from the STL reference than the complete dentures. The deviations were significantly greater in the

  18. Upgrades to the WIMS-ANL code

    International Nuclear Information System (INIS)

    Woodruff, W. L.

    1998-01-01

    The dusty old source code in WIMS-D4M has been completely rewritten to conform more closely with current FORTRAN coding practices. The revised code contains many improvements in appearance, error checking and in control of the output. The output is now tabulated to fit the typical 80 column window or terminal screen. The Segev method for resonance integral interpolation is now an option. Most of the dimension limitations have been removed and replaced with variable dimensions within a compile-time fixed container. The library is no longer restricted to the 69 energy group structure, and two new libraries have been generated for use with the code. The new libraries are both based on ENDF/B-VI data with one having the original 69 energy group structure and the second with a 172 group structure. The common source code can be used with PCs using both Windows 95 and NT, with a Linux based operating system and with UNIX based workstations. Comparisons of this version of the code to earlier evaluations with ENDF/B-V are provided, as well as, comparisons with the new libraries

  19. Upgrades to the WIMS-ANL code

    International Nuclear Information System (INIS)

    Woodruff, W.L.; Leopando, L.S.

    1998-01-01

    The dusty old source code in WIMS-D4M has been completely rewritten to conform more closely with current FORTRAN coding practices. The revised code contains many improvements in appearance, error checking and in control of the output. The output is now tabulated to fit the typical 80 column window or terminal screen. The Segev method for resonance integral interpolation is now an option. Most of the dimension limitations have been removed and replaced with variable dimensions within a compile-time fixed container. The library is no longer restricted to the 69 energy group structure, and two new libraries have been generated for use with the code. The new libraries are both based on ENDF/B-VI data with one having the original 69 energy group structure and the second with a 172 group structure. The common source code can be used with PCs using both Windows 95 and NT, with a Linux based operating system and with UNIX based workstations. Comparisons of this version of the code to earlier evaluations with ENDF/B-V are provided, as well as, comparisons with the new libraries. (author)

  20. Accretion onto some well-known regular black holes

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2016-01-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  1. Accretion onto some well-known regular black holes

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)

    2016-03-15

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  2. Accretion onto some well-known regular black holes

    Science.gov (United States)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  3. Quantification of fetal heart rate regularity using symbolic dynamics

    Science.gov (United States)

    van Leeuwen, P.; Cysarz, D.; Lange, S.; Geue, D.; Groenemeyer, D.

    2007-03-01

    Fetal heart rate complexity was examined on the basis of RR interval time series obtained in the second and third trimester of pregnancy. In each fetal RR interval time series, short-term beat-to-beat heart rate changes were coded in 8-bit binary sequences. Redundancies of the 2⁸ = 256 different binary patterns were reduced by two different procedures. The complexity of these sequences was quantified using the approximate entropy (ApEn), resulting in discrete ApEn values which were used for classifying the sequences into 17 pattern sets. Also, the sequences were grouped into 20 pattern classes with respect to identity after rotation or inversion of the binary value. There was a specific, nonuniform distribution of the sequences in the pattern sets and this differed from the distribution found in surrogate data. In the course of gestation, the number of sequences increased in seven pattern sets, decreased in four and remained unchanged in six. Sequences that occurred less often over time, both regular and irregular, were characterized by patterns reflecting frequent beat-to-beat reversals in heart rate. They were also predominant in the surrogate data, suggesting that these patterns are associated with stochastic heart beat trains. Sequences that occurred more frequently over time were relatively rare in the surrogate data. Some of these sequences had a high degree of regularity and corresponded to prolonged heart rate accelerations or decelerations which may be associated with directed fetal activity or movement or baroreflex activity. Application of the pattern classes revealed that those sequences with a high degree of irregularity correspond to heart rate patterns resulting from complex physiological activity such as fetal breathing movements. The results suggest that the development of the autonomic nervous system and the emergence of fetal behavioral states lead to increases in not only irregular but also regular heart rate patterns. Using symbolic dynamics to
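
    The study's exact coding and ApEn settings are not given in this record. The sketch below only illustrates the two steps named above: binarize beat-to-beat RR changes, cut them into 8-bit words, and score each word with approximate entropy; the RR values and the ApEn parameters m and r are made up for the example.

```python
import numpy as np

def approximate_entropy(u, m=2, r=0.2):
    """Plain ApEn of a short sequence u with embedding dimension m and tolerance r."""
    u = np.asarray(u, dtype=float)
    N = len(u)
    def phi(mm):
        templates = np.array([u[i:i + mm] for i in range(N - mm + 1)])
        counts = [np.mean(np.max(np.abs(templates - t), axis=1) <= r) for t in templates]
        return np.mean(np.log(counts))
    return phi(m) - phi(m + 1)

rr = np.array([410, 405, 412, 420, 418, 425, 423, 430, 428])  # RR intervals (ms), invented
bits = (np.diff(rr) < 0).astype(int)   # 1 where the interval shortens (heart rate rises)
word = bits[:8]                        # one 8-bit pattern
print(word, approximate_entropy(word))
```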

  4. Regular group exercise contributes to balanced health in older adults in Japan: a qualitative study.

    Science.gov (United States)

    Komatsu, Hiroko; Yagasaki, Kaori; Saito, Yoshinobu; Oguma, Yuko

    2017-08-22

    While community-wide interventions to promote physical activity have been encouraged in older adults, evidence of their effectiveness remains limited. We conducted a qualitative study among older adults participating in regular group exercise to understand their perceptions of the physical, mental, and social changes they underwent as a result of the physical activity. We conducted a qualitative study with purposeful sampling to explore the experiences of older adults who participated in regular group exercise as part of a community-wide physical activity intervention. Four focus group interviews were conducted between April and June of 2016 at community halls in Fujisawa City. The participants in the focus group interviews were 26 older adults with a mean age of 74.69 years (range: 66-86). The interviews were analysed using the constant comparative method in the grounded theory approach. We used qualitative research software NVivo10® to track the coding and manage the data. The finding 'regular group exercise contributes to balanced health in older adults' emerged as an overarching theme with seven categories (regular group exercise, functional health, active mind, enjoyment, social connectedness, mutual support, and expanding communities). Although the participants perceived that they were aging physically and cognitively, the regular group exercise helped them to improve or maintain their functional health and enjoy their lives. They felt socially connected and experienced a sense of security in the community through caring for others and supporting each other. As the older adults began to seek value beyond individuals, they gradually expanded their communities beyond geographical and generational boundaries. The participants achieved balanced health in the physical, mental, and social domains through regular group exercise as part of a community-wide physical activity intervention and contributed to expanding communities through social connectedness and

  5. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  6. Analysis of pipe stress using CAESAR II code

    International Nuclear Information System (INIS)

    Sitandung, Y.B.; Bandriyana, B.

    2002-01-01

    This piping stress analysis was performed with the purpose of determining the stress distribution in the piping system in order to establish the pipe support configuration. As an example of the analysis, the Gas Exchanger to Warm Separator Line was chosen; the input data were first prepared in a document, i.e. a piping analysis specification, whose contents include pipe characteristics, material properties, operating conditions, guide equipment and so on. Analysis results such as stress, load, displacement and the selected support type were verified against the requirements in the applicable codes, standards and regulations for the piping system analyzed. As proof that the piping system is in a safe condition, the analysis results (actual loads) are still below the allowable loads. From the analysis steps that have been carried out, the CAESAR II code fulfils the requirements to be used as a tool for piping stress analysis of nuclear as well as non-nuclear installation piping systems.

  7. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  8. A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

    DEFF Research Database (Denmark)

    Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano

    2014-01-01

    We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly named the EEG inverse problem. We propose a new method to induce neurophysiologically meaningful solutions, which takes into account the smoothness, structured sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios
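
    The paper's exact updates are not reproduced in this record, so the following is only a schematic alternating scheme for the factorization it describes: a ridge update of the dense latent matrix S and a proximal gradient step with row-wise (l21) shrinkage on the coding matrix C, fitted to EEG data M through a lead-field matrix G. All dimensions, penalties, and step sizes are illustrative.

```python
import numpy as np

def row_shrink(C, thr):
    """Row-wise soft-thresholding, the proximal operator of the l21 norm."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    return C * np.maximum(0.0, 1.0 - thr / np.maximum(norms, 1e-12))

def sparse_low_rank_inverse(G, M, k=5, lam1=0.1, lam2=0.1, iters=50):
    """Approximately minimize ||M - G C S||_F^2 + lam1 ||C||_{2,1} + lam2 ||S||_F^2."""
    n_sources = G.shape[1]
    rng = np.random.default_rng(0)
    C = 0.01 * rng.normal(size=(n_sources, k))
    S = 0.01 * rng.normal(size=(k, M.shape[1]))
    for _ in range(iters):
        A = G @ C                                               # fix C: ridge update of S
        S = np.linalg.solve(A.T @ A + lam2 * np.eye(k), A.T @ M)
        step = 1.0 / (np.linalg.norm(G, 2) ** 2 * np.linalg.norm(S, 2) ** 2 + 1e-12)
        grad = G.T @ (G @ C @ S - M) @ S.T                      # fix S: proximal step on C
        C = row_shrink(C - step * grad, step * lam1)
    return C, S
```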

  9. Graphical user interface development for the MARS code

    International Nuclear Information System (INIS)

    Jeong, J.-J.; Hwang, M.; Lee, Y.J.; Kim, K.D.; Chung, B.D.

    2003-01-01

    KAERI has developed the best-estimate thermal-hydraulic system code MARS using the RELAP5/MOD3 and COBRA-TF codes. To exploit the excellent features of the two codes, we consolidated the two codes. Then, to improve the readability, maintainability, and portability of the consolidated code, all the subroutines were completely restructured by employing a modular data structure. At present, a major part of the MARS code development program is underway to improve the existing capabilities. The code couplings with three-dimensional neutron kinetics, containment analysis, and transient critical heat flux calculations have also been carried out. At the same time, graphical user interface (GUI) tools have been developed for user friendliness. This paper presents the main features of the MARS GUI. The primary objective of the GUI development was to provide a valuable aid for all levels of MARS users in their output interpretation and interactive controls. Especially, an interactive control function was designed to allow operator actions during simulation so that users can utilize the MARS code like conventional nuclear plant analyzers (NPAs). (author)

  10. Temporal regularity of the environment drives time perception

    OpenAIRE

    van Rijn, H; Rhodes, D; Di Luca, M

    2016-01-01

    It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...

  11. Comparisons of the complementary effect on exhaled nitric oxide of salmeterol vs montelukast in asthmatic children taking regular inhaled budesonide

    DEFF Research Database (Denmark)

    Buchvald, Frederik; Bisgaard, Hans

    2003-01-01

    OBJECTIVE: To compare the control of FeNO provided by salmeterol or montelukast add-on therapy in asthmatic children undergoing regular maintenance treatment with a daily dose of 400 microg of budesonide. METHODS: The study included children with increased FeNO despite regular treatment with budesonide, 400 microg/d, and normal lung function. Montelukast, 5 mg/d, salmeterol, 50 microg twice daily, or placebo was compared as add-on therapy to budesonide, 400 microg, in a randomized, double-blind, double-dummy, crossover study. RESULTS: Twenty-two children completed the trial. The geometric mean FeNO level ... with placebo in this group of children taking regular budesonide, 400 microg.

  12. Development of EASYQAD version β: A Visualization Code System for QAD-CGGP-A Gamma and Neutron Shielding Calculation Code

    International Nuclear Information System (INIS)

    Kim, Jae Cheon; Lee, Hwan Soo; Ha, Pham Nhu Viet; Kim, Soon Young; Shin, Chang Ho; Kim, Jong Kyung

    2007-01-01

    EASYQAD had previously been developed using the MATLAB GUI (Graphical User Interface) in order to perform gamma and neutron shielding calculations conveniently at Hanyang University. It had been completed as version α of the radiation shielding analysis code. In this study, EASYQAD was upgraded to version β with many additional functions and more user-friendly graphical interfaces. So that general users can run it in a Windows XP environment without any MATLAB installation, this version was developed into a standalone code system.

  13. Embedding QR codes in tumor board presentations, enhancing educational content for oncology information management.

    Science.gov (United States)

    Siderits, Richard; Yates, Stacy; Rodriguez, Arelis; Lee, Tina; Rimmer, Cheryl; Roche, Mark

    2011-01-01

    Quick Response (QR) codes are standard in supply management and seen with increasing frequency in advertisements. They are now present regularly in healthcare informatics and education. These 2-dimensional square bar codes, originally designed by the Toyota car company, are free of license and have a published international standard. The codes can be generated by free online software and the resulting images incorporated into presentations. The images can be scanned by "smart" phones and tablets using either the iOS or Android platforms, which link the device with the information represented by the QR code (a uniform resource locator or URL, online video, text, v-calendar entries, short message service [SMS] and formatted text). Once linked to the device, the information can be viewed at any time after the original presentation, saved in the device or to a Web-based "cloud" repository, printed, or shared with others via email or Bluetooth file transfer. This paper describes how we use QR codes in our tumor board presentations, discusses the benefits, how QR codes differ from Web links, and how QR codes facilitate the distribution of educational content.
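
    As a minimal example of the workflow described above (and not code from the paper), a QR image pointing to supplementary material can be generated with the third-party Python qrcode package and dropped into a slide; the URL below is a placeholder.

```python
import qrcode   # third-party package, assumed installed via "pip install qrcode[pil]"

# Encode a (hypothetical) link to case material and save it as an image
# that can be embedded in a tumor board slide.
img = qrcode.make("https://example.org/tumor-board/case-notes")
img.save("case_notes_qr.png")
```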

  14. The uniqueness of the regularization procedure

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1981-01-01

    On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)

  15. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
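
    The paper's asymptotic formula is not reproduced here; the sketch below only simulates the simplest case mentioned in the abstract, two diffusively coupled Ornstein-Uhlenbeck processes, and compares the stationary variance of the less noisy unit with and without coupling. Whether coupling reduces that variance depends on the parameters; the values below (a partner that is only moderately noisier) are chosen so that the reduction is visible and are not taken from the paper.

```python
import numpy as np

def simulate_x(coupling, sig_x=0.10, sig_y=0.15, dt=1e-2, steps=200_000, seed=1):
    """Euler-Maruyama simulation of dx = (-x + c(y-x))dt + sig_x dW1 and
    dy = (-y + c(x-y))dt + sig_y dW2; returns the x trajectory after a transient."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    xs = np.empty(steps)
    for i in range(steps):
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), 2)
        x, y = (x + (-x + coupling * (y - x)) * dt + sig_x * dW1,
                y + (-y + coupling * (x - y)) * dt + sig_y * dW2)
        xs[i] = x
    return xs[steps // 10:]

print("var(x), uncoupled:", np.var(simulate_x(0.0)))
print("var(x), coupled:  ", np.var(simulate_x(2.0)))
```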

  16. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
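
    The learning methods themselves are not spelled out in this record; as a toy stand-in for the idea, the sketch below picks the general-form Tikhonov parameter by minimizing the average reconstruction error over training pairs with a plain grid search. The forward operator A, regularization matrix L, and training data are assumed to be supplied by the user.

```python
import numpy as np

def tikhonov_solve(A, L, b, lam):
    """General-form Tikhonov solution argmin ||A x - b||^2 + lam^2 ||L x||^2."""
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

def learn_lambda(A, L, B_train, X_train, grid):
    """B_train: columns are observed data vectors; X_train: matching true solutions.
    Returns the grid value with the smallest average training error."""
    errors = []
    for lam in grid:
        X_hat = np.column_stack([tikhonov_solve(A, L, b, lam) for b in B_train.T])
        errors.append(np.mean((X_hat - X_train) ** 2))
    return grid[int(np.argmin(errors))]
```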

  17. 5 CFR 551.421 - Regular working hours.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...

  18. Regular extensions of some classes of grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular

  19. Oil and Gas field code master list 1995

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-12-01

    This is the fourteenth annual edition of the Energy Information Administration's (EIA) Oil and Gas Field Code Master List. It reflects data collected through October 1995 and provides standardized field name spellings and codes for all identified oil and/or gas fields in the US. The Field Code Index, a listing of all field names and the States in which they occur, ordered by field code, has been removed from this year's publications to reduce printing and postage costs. Complete copies (including the Field Code Index) will be available on the EIA CD-ROM and the EIA World-Wide Web Site. Future editions of the complete Master List will be available on CD-ROM and other electronic media. There are 57,400 field records in this year's Oil and Gas Field Code Master List. As it is maintained by EIA, the Master List includes the following: field records for each State and county in which a field resides; field records for each offshore area block in the Gulf of Mexico in which a field resides; field records for each alias field name (see definition of alias below); and fields crossing State boundaries that may be assigned different names by the respective State naming authorities. Taking into consideration the double-counting of fields under such circumstances, EIA identifies 46,312 distinct fields in the US as of October 1995. This count includes fields that no longer produce oil or gas, and 383 fields used in whole or in part for oil or gas storage. 11 figs., 6 tabs.

  20. Bar code usage in nuclear materials accountability

    International Nuclear Information System (INIS)

    Mee, W.T.

    1983-01-01

    The Oak Ridge Y-12 Plant began investigating the use of automated data collection devices in 1979. At this time, bar code and optical-character-recognition (OCR) systems were reviewed with the purpose of directly entering data into DYMCAS (Dynamic Special Nuclear Materials Control and Accountability System). Both of these systems appeared applicable; however, other automated devices already employed for production control made implementing the bar code and OCR seem improbable. However, when the DYMCAS was placed on line for nuclear material accountability, a decision was made to consider the bar code for physical inventory listings. For the past several months a development program has been underway to use a bar code device to collect and input data to the DYMCAS on the uranium recovery operations. Programs have been completed and tested, and are being employed to ensure that data will be compatible and useful. Bar code implementation and expansion of its use for all nuclear material inventory activity in Y-12 is presented

  1. Regular non-twisting S-branes

    International Nuclear Information System (INIS)

    Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.

    2004-01-01

    We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)

  2. ATHENA code manual. Volume 1. Code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Carlson, K.E.; Roth, P.A.; Ransom, V.H.

    1986-09-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation

  3. Lawrence Livermore National Laboratory Probabilistic Seismic Hazard Codes Validation

    International Nuclear Information System (INIS)

    Savy, J B

    2003-01-01

    Probabilistic Seismic Hazard Analysis (PSHA) is a methodology that estimates the likelihood that various levels of earthquake-caused ground motion will be exceeded at a given location in a given future time-period. LLNL has been developing the methodology and codes in support of the Nuclear Regulatory Commission (NRC) needs for reviews of site licensing of nuclear power plants since 1978. A number of existing computer codes have been validated, yet they can still lead to ranges of hazard estimates in some cases. Until now, the seismic hazard community had not agreed on any specific method for evaluation of these codes. The Earthquake Engineering Research Institute (EERI) and the Pacific Earthquake Engineering Research (PEER) Center organized an exercise in testing of existing codes with the aim of developing a series of standard tests that future developers could use to evaluate and calibrate their own codes. Seven code developers participated in the exercise, on a voluntary basis. Lawrence Livermore National Laboratory participated with some support from the NRC. The final product of the study will include a series of criteria for judging the validity of the results provided by a computer code. This EERI/PEER project was first planned to be completed by June of 2003. As the group neared completion of the tests, the managing team decided that new tests were necessary. As a result, the present report documents only the work performed to this point. It demonstrates that the computer codes developed by LLNL perform all calculations correctly and as intended. Differences exist between the results of the codes tested, which are attributed to a series of assumptions, on the parameters and models, that the developers had to make. The managing team is planning a new series of tests to help in reaching a consensus on these assumptions

  4. Sparsity-regularized HMAX for visual recognition.

    Directory of Open Access Journals (Sweden)

    Xiaolin Hu

    Full Text Available About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.
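
    The "standard sparse coding (SSC)" step mentioned above can be sketched in a few lines. The fragment below is not the sparse HMAX model; it is a generic ISTA solver for the l1-penalized coding of a single patch against a random dictionary, included only to make the coding step concrete.

        import numpy as np

        def ista_sparse_code(D, y, lam=0.1, n_iter=200):
            # Encode a single patch y against dictionary D:  min_a 0.5||y - Da||^2 + lam||a||_1
            a = np.zeros(D.shape[1])
            step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant of the gradient
            for _ in range(n_iter):
                z = a - step * (D.T @ (D @ a - y))       # gradient step on the data term
                a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
            return a

        rng = np.random.default_rng(1)
        D = rng.standard_normal((64, 128))
        D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms
        y = D[:, [3, 40]] @ np.array([1.5, -0.7])        # a patch built from two atoms
        code = ista_sparse_code(D, y)
        print("non-zero coefficients:", np.flatnonzero(np.abs(code) > 1e-3))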

  5. Near-Regular Structure Discovery Using Linear Programming

    KAUST Repository

    Huang, Qixing

    2014-06-02

    Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.
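
    As a toy analogue of casting regularity detection as a linear program, the sketch below fits a regular 1D lattice a + i·d to noisy element positions by minimizing the sum of absolute deviations, which is expressible as an LP with slack variables. This illustrates only the LP-formulation idea, not the paper's alternating discrete/continuous framework; the data are invented.

        import numpy as np
        from scipy.optimize import linprog

        x = np.array([0.1, 1.05, 2.0, 2.9, 4.1, 5.02])   # noisy positions of repeated elements
        n = len(x)
        c = np.concatenate(([0.0, 0.0], np.ones(n)))     # variables: [a, d, t_0..t_{n-1}]
        A_ub, b_ub = np.zeros((2 * n, n + 2)), np.zeros(2 * n)
        for i in range(n):
            # x_i - a - i*d <= t_i
            A_ub[2 * i, [0, 1, 2 + i]] = [-1.0, -float(i), -1.0]
            b_ub[2 * i] = -x[i]
            # a + i*d - x_i <= t_i
            A_ub[2 * i + 1, [0, 1, 2 + i]] = [1.0, float(i), -1.0]
            b_ub[2 * i + 1] = x[i]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None), (None, None)] + [(0, None)] * n)
        print("lattice offset and spacing:", res.x[0].round(3), res.x[1].round(3))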

  6. ASME nuclear codes and standards risk management strategic planning

    International Nuclear Information System (INIS)

    Hill, Ralph S. III; Balkey, Kenneth R.; Erler, Bryan A.; Wesley Rowley, C.

    2007-01-01

    This paper is prepared in honor and in memory of the late Professor Emeritus Yasuhide Asada to recognize his contributions to ASME Nuclear Codes and Standards initiatives, particularly those related to risk-informed technology and System Based Code developments. For nearly two decades, numerous risk-informed initiatives have been completed or are under development within the ASME Nuclear Codes and Standards organization. In order to properly manage the numerous initiatives currently underway or planned for the future, the ASME Board on Nuclear Codes and Standards (BNCS) has an established Risk Management Strategic Plan (Plan) that is maintained and updated by the ASME BNCS Risk Management Task Group. This paper presents the latest approved version of the plan beginning with a background of applications completed to date, including the recent probabilistic risk assessment (PRA) standards developments for nuclear power plant applications. The paper discusses planned applications within ASME Nuclear Codes and Standards that will require expansion of the ASME PRA Standard to support new advanced light water reactor and next generation reactor developments, such as for high temperature gas-cooled reactors. Emerging regulatory developments related to risk-informed, performance-based approaches are summarized. A long-term vision for the potential development and evolution to a nuclear systems code that adopts a risk-informed approach across a facility life-cycle (design, construction, operation, maintenance, and closure) is also summarized. Finally, near term and long term actions are defined across the ASME Nuclear Codes and Standards organizations related to risk management, including related U.S. regulatory activities. (author)

  7. Regular Expression Matching and Operational Semantics

    Directory of Open Access Journals (Sweden)

    Asiri Rathnayake

    2011-08-01

    Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
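
    The continuation-based view of matching described above can be made concrete with a very small interpreter. The sketch below matches a regular-expression AST directly, threading an explicit continuation; it is an illustrative toy using naive backtracking, not one of the abstract machines (or Thompson's lockstep construction) derived in the paper.

        # A continuation-passing matcher over a small regex AST.
        def match(re, s, i, k):
            tag = re[0]
            if tag == 'eps':
                return k(i)
            if tag == 'chr':
                return i < len(s) and s[i] == re[1] and k(i + 1)
            if tag == 'seq':                      # match re[1], then continue with re[2]
                return match(re[1], s, i, lambda j: match(re[2], s, j, k))
            if tag == 'alt':                      # naive backtracking over both branches
                return match(re[1], s, i, k) or match(re[2], s, i, k)
            if tag == 'star':                     # consume one more copy (with progress) or stop
                return match(re[1], s, i, lambda j: j > i and match(re, s, j, k)) or k(i)
            raise ValueError(tag)

        def full_match(re, s):
            return bool(match(re, s, 0, lambda i: i == len(s)))

        regex = ('seq', ('star', ('alt', ('chr', 'a'), ('chr', 'b'))), ('chr', 'c'))   # (a|b)*c
        print(full_match(regex, "ababbc"), full_match(regex, "abca"))                  # True False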

  8. Tetravalent one-regular graphs of order 4p²

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

    A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.

  9. Regularization and error assignment to unfolded distributions

    CERN Document Server

    Zech, Gunter

    2011-01-01

    The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.

  10. Puerto Rican kindergartners' self-worth as coded from the Attachment Story Completion Task: correlated with other self-evaluation measures and ratings of child behavior toward mothers and peers.

    Science.gov (United States)

    Gullón-Rivera, Ángel L

    2013-01-01

    This multi-method multi-informant study assessed 105 Puerto Rican kindergartners' sense of self-worth in family relationships as coded from their responses to the Attachment Story Completion Task (ASCT). The ASCT scores were compared with responses to two other age-appropriate self-evaluation measures (the Cassidy Puppet Interview and the Pictorial Scales of Social Acceptance). Correlations of children's scores on the three self-measures with maternal ratings of the mother-child relationship and teacher ratings of the child's prosocial behavior with peers were then compared. ASCT self-worth and Puppet Interview scores were strongly correlated with each other and both were modestly related to the pictorial social acceptance scales. All three measures were significantly associated with maternal and teacher reports of child behavior, but the strongest correlations were obtained with the ASCT. Coding the ASCT in terms of self-worth appears to be a promising approach for evaluating young children's (vicariously expressed) self-worth in family relationships.

  11. Ethical and educational considerations in coding hand surgeries.

    Science.gov (United States)

    Lifchez, Scott D; Leinberry, Charles F; Rivlin, Michael; Blazar, Philip E

    2014-07-01

    To assess treatment coding knowledge and practices among residents, fellows, and attending hand surgeons. Through the use of 6 hypothetical cases, we developed a coding survey to assess coding knowledge and practices. We e-mailed this survey to residents, fellows, and attending hand surgeons. In addition, we asked 2 professional coders to code these cases. A total of 71 participants completed the survey out of 134 people to whom the survey was sent (response rate = 53%). We observed marked disparity in codes chosen among surgeons and among professional coders. Results of this study indicate that coding knowledge, not just its ethical application, had a major role in coding procedures accurately. Surgical coding is an essential part of a hand surgeon's practice and is not well learned during residency or fellowship. Whereas ethical issues such as deliberate unbundling and upcoding may have a role in inaccurate coding, lack of knowledge among surgeons and coders has a major role as well. Coding has a critical role in every hand surgery practice. Inconsistencies among those polled in this study reveal that an increase in education on coding during training and improvement in the clarity and consistency of the Current Procedural Terminology coding rules themselves are needed. Copyright © 2014 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  12. Benchmarking NNWSI flow and transport codes: COVE 1 results

    International Nuclear Information System (INIS)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs

  13. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for total variation method, and TGV stands for total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
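
    For readers unfamiliar with TV-type penalties, the sketch below shows the basic idea in 1D: a smoothed total-variation term minimized by plain gradient descent, which favours piecewise-constant solutions (the origin of the staircase effect). It is only background; it is not the TGV/FEM formulation used for EIT in the paper.

        import numpy as np

        def tv_denoise_1d(y, lam=0.5, eps=1e-3, step=0.01, n_iter=5000):
            # Minimize 0.5*||x - y||^2 + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps)
            # by gradient descent on the smoothed TV objective.
            x = y.copy()
            for _ in range(n_iter):
                d = np.diff(x)
                w = d / np.sqrt(d ** 2 + eps)     # derivative of the smoothed absolute value
                g = np.zeros_like(x)
                g[1:] += w                        # d/dx_{i+1} of term i
                g[:-1] -= w                       # d/dx_i of term i
                x -= step * ((x - y) + lam * g)
            return x

        rng = np.random.default_rng(2)
        truth = np.concatenate([np.zeros(40), np.ones(40), 0.3 * np.ones(40)])
        noisy = truth + 0.15 * rng.standard_normal(truth.size)
        denoised = tv_denoise_1d(noisy)
        print("error before:", np.linalg.norm(noisy - truth).round(3),
              " after:", np.linalg.norm(denoised - truth).round(3))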

  14. Complete mitochondrial genome of the agarophyte red alga Gelidium vagum (Gelidiales).

    Science.gov (United States)

    Yang, Eun Chan; Kim, Kyeong Mi; Boo, Ga Hun; Lee, Jung-Hyun; Boo, Sung Min; Yoon, Hwan Su

    2014-08-01

    We describe the first complete mitochondrial genome of Gelidium vagum (Gelidiales) (24,901 bp, 30.4% GC content), an agar-producing red alga. The circular mitochondrial genome contains 43 genes, including 23 protein-coding, 18 tRNA and 2 rRNA genes. All the protein-coding genes have a typical ATG start codon. No introns were found. Two genes, secY and rps12, were overlapped by 41 bp.

  15. Application of Turchin's method of statistical regularization

    Science.gov (United States)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.
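
    In a similar Bayesian spirit (though not Turchin's actual construction), a Gaussian smoothness prior combined with Gaussian noise gives a closed-form regularized estimate of a signal convolved with an apparatus function. The sketch below is self-contained, with an invented kernel and a hand-picked prior strength alpha.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 80
        t = np.linspace(0.0, 1.0, n)
        signal = np.exp(-0.5 * ((t - 0.4) / 0.05) ** 2) + 0.6 * np.exp(-0.5 * ((t - 0.7) / 0.08) ** 2)

        # Invented apparatus function: smoothing with a Gaussian kernel.
        K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.03) ** 2)
        K /= K.sum(axis=1, keepdims=True)
        sigma = 0.01
        data = K @ signal + sigma * rng.standard_normal(n)

        # Gaussian smoothness prior on the second differences; alpha is the
        # (hand-picked) prior strength. The posterior mean is the regularized estimate.
        D2 = np.diff(np.eye(n), 2, axis=0)
        alpha = 1.0
        estimate = np.linalg.solve(K.T @ K / sigma**2 + alpha * (D2.T @ D2),
                                   K.T @ data / sigma**2)
        print("reconstruction error:", np.linalg.norm(estimate - signal).round(3))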

  16. Performance analysis of wavelength/spatial coding system with fixed in-phase code matrices in OCDMA network

    Science.gov (United States)

    Tsai, Cheng-Mu; Liang, Tsair-Chun

    2011-12-01

    This paper proposes a wavelength/spatial (W/S) coding system with fixed in-phase code (FIPC) matrix in the optical code-division multiple-access (OCDMA) network. A scheme is presented to form the FIPC matrix which is applied to construct the W/S OCDMA network. The encoder/decoder in the W/S OCDMA network is fully able to eliminate the multiple-access-interference (MAI) at the balanced photo-detectors (PD), according to fixed in-phase cross correlation. The phase-induced intensity noise (PIIN), which scales with the square of the received power, is markedly suppressed in the receiver by spreading the received power into each PD while the net signal power is kept the same. Simulation results show that the W/S OCDMA network based on the FIPC matrices can not only completely remove the MAI but also effectively suppress the PIIN to upgrade the network performance.

  17. Towards advanced code simulators

    International Nuclear Information System (INIS)

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  18. On the regularized fermionic projector of the vacuum

    Science.gov (United States)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  19. On the regularized fermionic projector of the vacuum

    International Nuclear Information System (INIS)

    Finster, Felix

    2008-01-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed

  20. Acute electronic cigarette use: nicotine delivery and subjective effects in regular users.

    Science.gov (United States)

    Dawkins, Lynne; Corcoran, Olivia

    2014-01-01

    Electronic cigarettes are becoming increasingly popular among smokers worldwide. Commonly reported reasons for use include the following: to quit smoking, to avoid relapse, to reduce urge to smoke, or as a perceived lower-risk alternative to smoking. Few studies, however, have explored whether electronic cigarettes (e-cigarettes) deliver measurable levels of nicotine to the blood. This study aims to explore in experienced users the effect of using an 18-mg/ml nicotine first-generation e-cigarette on blood nicotine, tobacco withdrawal symptoms, and urge to smoke. Fourteen regular e-cigarette users (three females), who were abstinent from smoking and e-cigarette use for 12 h, each completed a 2.5 h testing session. Blood was sampled, and questionnaires were completed (tobacco-related withdrawal symptoms, urge to smoke, positive and negative subjective effects) at four stages: baseline, 10 puffs, 60 min of ad lib use and a 60-min rest period. Complete sets of blood were obtained from seven participants. Plasma nicotine concentration rose significantly from a mean of 0.74 ng/ml at baseline to 6.77 ng/ml 10 min after 10 puffs, reaching a mean maximum of 13.91 ng/ml by the end of the ad lib puffing period. Tobacco-related withdrawal symptoms and urge to smoke were significantly reduced; direct positive effects were strongly endorsed, and there was very low reporting of adverse effects. These findings demonstrate reliable blood nicotine delivery after the acute use of this brand/model of e-cigarette in a sample of regular users. Future studies might usefully quantify nicotine delivery in relation to inhalation technique and the relationship with successful smoking cessation/harm reduction.

  1. 33 CFR 96.320 - What is involved to complete a safety management audit and when is it required to be completed?

    Science.gov (United States)

    2010-07-01

    ... Safety Management (ISM) Code by Administrations. (3) Make sure the audit is carried out by a team of... safety management audit and when is it required to be completed? 96.320 Section 96.320 Navigation and... SAFE OPERATION OF VESSELS AND SAFETY MANAGEMENT SYSTEMS How Will Safety Management Systems Be...

  2. Impact of Light Salt Substitution for Regular Salt on Blood Pressure of Hypertensive Patients

    Directory of Open Access Journals (Sweden)

    Carolina Lôbo de Almeida Barros

    2015-02-01

    Full Text Available Background: Studies have shown sodium restriction to have a beneficial effect on blood pressure (BP) of hypertensive patients. Objective: To evaluate the impact of light salt substitution for regular salt on BP of hypertensive patients. Methods: Uncontrolled hypertensive patients of both sexes, 20 to 65 years-old, on stable doses of antihypertensive drugs were randomized into Intervention Group (IG - receiving light salt) and Control Group (CG - receiving regular salt). Systolic BP (SBP) and diastolic BP (DBP) were analyzed by using casual BP measurements and Home Blood Pressure Monitoring (HBPM), and sodium and potassium excretion was assessed on 24-hour urine samples. The patients received 3 g of salt for daily consumption for 4 weeks. Results: The study evaluated 35 patients (65.7% women), 19 allocated to the IG and 16 to the CG. The mean age was 55.5 ± 7.4 years. Most participants had completed the Brazilian middle school (up to the 8th grade; n = 28; 80.0%), had a family income of up to US$ 600 (n = 17; 48.6%) and practiced regular physical activity (n = 19; 54.3%). Two patients (5.7%) were smokers and 40.0% consumed alcohol regularly (n = 14). The IG showed a significant reduction in both SBP and DBP on the casual measurements and HBPM (p < 0.05) and in sodium excretion (p = 0.016). The CG showed a significant reduction only in casual SBP (p = 0.032). Conclusions: The light salt substitution for regular salt significantly reduced BP of hypertensive patients.

  3. The complete chloroplast genome sequence of Dendrobium officinale.

    Science.gov (United States)

    Yang, Pei; Zhou, Hong; Qian, Jun; Xu, Haibin; Shao, Qingsong; Li, Yonghua; Yao, Hui

    2016-01-01

    The complete chloroplast sequence of Dendrobium officinale, an endangered and economically important traditional Chinese medicine, was reported and characterized. The genome size is 152,018 bp, with 37.5% GC content. A pair of inverted repeats (IRs) of 26,284 bp are separated by a large single-copy region (LSC, 84,944 bp) and a small single-copy region (SSC, 14,506 bp). The complete cp DNA contains 83 protein-coding genes, 39 tRNA genes and 8 rRNA genes. Fourteen genes contained one or two introns.

  4. The PASC-3 code system and the UNIPASC environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.

    1991-08-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and its associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219 group cross section library derived from JEF-1, of which some benchmark results are presented. By the addition of the UNIPASC work environment the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values, which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  5. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  6. Continuous Non-malleable Codes

    DEFF Research Database (Denmark)

    Faust, Sebastian; Mukherjee, Pratyay; Nielsen, Jesper Buus

    2014-01-01

    or modify it to the encoding of a completely unrelated value. This paper introduces an extension of the standard non-malleability security notion - so-called continuous non-malleability - where we allow the adversary to tamper continuously with an encoding. This is in contrast to the standard notion of non...... is necessary to achieve continuous non-malleability in the split-state model. Moreover, we illustrate that none of the existing constructions satisfies our uniqueness property and hence is not secure in the continuous setting. We construct a split-state code satisfying continuous non-malleability. Our scheme...... is based on the inner product function, collision-resistant hashing and non-interactive zero-knowledge proofs of knowledge and requires an untamperable common reference string. We apply continuous non-malleable codes to protect arbitrary cryptographic primitives against tampering attacks. Previous...

  7. Spatially-Variant Tikhonov Regularization for Double-Difference Waveform Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Huang, Lianjie [Los Alamos National Laboratory; Zhang, Zhigang [Los Alamos National Laboratory

    2011-01-01

    Double-difference waveform inversion is a potential tool for quantitative monitoring for geologic carbon storage. It jointly inverts time-lapse seismic data for changes in reservoir geophysical properties. Due to the ill-posedness of waveform inversion, it is a great challenge to obtain reservoir changes accurately and efficiently, particularly when using time-lapse seismic reflection data. Regularization techniques can be utilized to address the issue of ill-posedness. The regularization parameter controls the smoothness of inversion results. A constant regularization parameter is normally used in waveform inversion, and an optimal regularization parameter has to be selected. The resulting inversion results are a trade-off among regions with different smoothness or noise levels; therefore, the images are over-regularized in some regions while under-regularized in others. In this paper, we employ a spatially-variant parameter in the Tikhonov regularization scheme used in double-difference waveform tomography to improve the inversion accuracy and robustness. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter and those produced without any regularization. We observe that, utilizing a spatially-variant regularization scheme, the target regions are well reconstructed while the noise is reduced in the other regions. We show that the spatially-variant regularization scheme provides the flexibility to regularize local regions based on the a priori information without increasing computational costs and the computer memory requirement.
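
    The mechanics of a spatially-variant regularization weight are easy to show on a toy linear inverse problem: the penalty ||WDx||² uses a diagonal W whose entries differ between a target region and the background. The sketch below uses a generic random forward operator and invented weights; it is not the double-difference waveform-inversion code.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 100
        truth = np.zeros(n)
        truth[45:55] = 1.0                                   # localized "reservoir change"
        A = rng.standard_normal((60, n)) / np.sqrt(n)        # generic underdetermined operator
        b = A @ truth + 0.01 * rng.standard_normal(60)

        D = np.diff(np.eye(n), axis=0)                       # first-difference (smoothing) operator
        weights = np.full(n - 1, 10.0)                       # strong smoothing in the background
        weights[40:60] = 0.5                                 # weak smoothing around the target
        WD = np.diag(weights) @ D

        x = np.linalg.solve(A.T @ A + WD.T @ WD, A.T @ b)    # min ||Ax - b||^2 + ||W D x||^2
        print("data misfit:", np.linalg.norm(A @ x - b).round(4),
              " model error:", np.linalg.norm(x - truth).round(3))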

  8. Conjugacy classes in the Weyl group admitting a regular eigenvector and integrable hierarchies

    International Nuclear Information System (INIS)

    Delduc, F.; Feher, L.

    1994-10-01

    The classification of the integrable hierarchies in the Drinfeld-Sokolov (DS) approach is studied. The DS construction, originally based on the principal Heisenberg subalgebra of an affine Lie algebra, has been recently generalized to arbitrary graded Heisenberg subalgebras. The graded Heisenberg subalgebras of an untwisted loop algebra l(G) are classified by the conjugacy classes in the Weyl group of G, but a complete classification of the hierarchies obtained from generalized DS reductions is still missing. The main result presented here is the complete list of the graded regular elements of l(G) for G a classical Lie algebra or G_2, extending previous results on the gl_n case. (author). 9 refs., 4 tabs.

  9. The menstrual cycle regularization following D-chiro-inositol treatment in PCOS women: a retrospective study.

    Science.gov (United States)

    La Marca, Antonio; Grisendi, Valentina; Dondi, Giulia; Sighinolfi, Giovanna; Cianci, Antonio

    2015-01-01

    Polycystic ovary syndrome is characterized by irregular cycles, hyperandrogenism, polycystic ovary at ultrasound and insulin resistance. The effectiveness of D-chiro-inositol (DCI) treatment in improving insulin resistance in PCOS patients has been confirmed in several reports. The objective of this study was to retrospectively analyze the effect of DCI on menstrual cycle regularity in PCOS women. This was a retrospective study of patients with irregular cycles who were treated with DCI. Of all PCOS women admitted to our centre, 47 were treated with DCI and had complete medical charts. The percentage of women reporting regular menstrual cycles significantly increased with increasing duration of DCI treatment (24% and 51.6% at a mean of 6 and 15 months of treatment, respectively). Serum AMH levels and indexes of insulin resistance significantly decreased during the treatment. Low AMH levels, high HOMA index, and the presence of oligomenorrhea at the first visit were the independent predictors of obtaining regular menstrual cycle with DCI. In conclusion, the use of DCI is associated to clinical benefits for many women affected by PCOS including the improvement in insulin resistance and menstrual cycle regularity. Responders to the treatment may be identified on the basis of menstrual irregularity and hormonal or metabolic markers.

  10. Computer access security code system

    Science.gov (United States)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
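
    The challenge/response idea described in the abstract can be mocked up directly: transmit two cells lying in different rows and columns, and accept the characters at the opposite corners of the rectangle they define. The toy below simplifies the scheme (single characters instead of character subsets, no tracking of previously used challenges).

        import random
        import string

        random.seed(5)
        rows = cols = 4
        chars = random.sample(string.ascii_uppercase + string.digits, rows * cols)
        matrix = [chars[r * cols:(r + 1) * cols] for r in range(rows)]

        def challenge():
            # Two cells in different rows and different columns.
            r1, r2 = random.sample(range(rows), 2)
            c1, c2 = random.sample(range(cols), 2)
            return (r1, c1), (r2, c2)

        def expected_response(cell_a, cell_b):
            (r1, c1), (r2, c2) = cell_a, cell_b
            # The opposite corners of the rectangle defined by the challenge cells.
            return {matrix[r1][c2], matrix[r2][c1]}

        a, b = challenge()
        print("challenge:", matrix[a[0]][a[1]], matrix[b[0]][b[1]])
        print("valid response:", expected_response(a, b))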

  11. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum

  12. Disentangling complete and incomplete fusion for ⁹Be+¹⁸⁷Re system at near barrier energies

    International Nuclear Information System (INIS)

    Kharab, Rajesh; Chahal, Rajiv; Rajiv Kumar

    2015-01-01

    The breakup of the projectile before fusion leads to some unusual fusion mechanisms like incomplete fusion (ICF) and sequential complete fusion (SCF). Experimentally, it is not possible to separate SCF events from direct complete fusion (DCF). However, the complete fusion and incomplete fusion can be measured separately. Theoretically it is very difficult to calculate the complete and incomplete fusion cross sections separately using different models. Very recently A. Diaz-Torres has developed a computer code, platypus, based on a classical dynamical model wherein the complete and incomplete fusion cross sections are calculated separately. But this model is found to work very well at energies above the barrier energy. Here we have attempted to extrapolate the results of the code platypus by using the simple Wong formula in conjunction with the energy dependent Woods-Saxon potential (EDWSP) in the below-barrier energy region

  13. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.

  14. From recreational to regular drug use

    DEFF Research Database (Denmark)

    Järvinen, Margaretha; Ravn, Signe

    2011-01-01

    This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...

  15. Development and Verification of the Computer Codes for the Fast Reactors Nuclear Safety Justification

    International Nuclear Information System (INIS)

    Kisselev, A.E.; Mosunova, N.A.; Strizhov, V.F.

    2015-01-01

    The information on the status of the work on development of the system of nuclear safety codes for fast liquid metal reactors is presented in this paper. The purpose of the work is to create an instrument for NPP neutronic, thermohydraulic and strength justification, including human and environmental radiation safety. The main task to be solved by the system of codes developed is the analysis of the broad spectrum of phenomena taking place at the NPP (including the reactor itself, NPP components, containment rooms, the industrial site and the surrounding area) and analysis of the impact of regular and accidental releases on the environment. The code system is oriented towards fully integrated modeling of the NPP behavior in a coupled formulation, accounting for the wide range of significant phenomena taking place at the NPP under normal and accident conditions. It is based on models that meet the state-of-the-art knowledge level. The codes incorporate advanced numerical methods and modern programming technologies oriented towards high-performance computing systems. The information on the status of the work on verification of the separate codes of the system of codes is also presented. (author)

  16. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  17. New regular black hole solutions

    International Nuclear Information System (INIS)

    Lemos, Jose P. S.; Zanchin, Vilson T.

    2011-01-01

    In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.

  18. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...

  19. On geodesics in low regularity

    Science.gov (United States)

    Sämann, Clemens; Steinbauer, Roland

    2018-02-01

    We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.

  20. Threshold Multi Split-Row algorithm for decoding irregular LDPC codes

    Directory of Open Access Journals (Sweden)

    Chakir Aqil

    2017-12-01

    Full Text Available In this work, we propose a new threshold multi split-row algorithm in order to improve the multi split-row algorithm for decoding irregular LDPC codes. We give a complete description of our algorithm as well as its advantages for LDPC codes. The simulation results over an additive white Gaussian noise channel show an improvement in code error performance of between 0.4 dB and 0.6 dB compared to the multi split-row algorithm.
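
    As background for readers new to LDPC decoding (and emphatically not the split-row min-sum architecture discussed above), the sketch below runs a basic hard-decision bit-flipping decoder on a tiny hand-written parity-check matrix.

        import numpy as np

        # Tiny hand-written parity-check matrix (4 checks, 6 bits).
        H = np.array([[1, 1, 0, 1, 0, 0],
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 0, 0, 1, 1],
                      [0, 0, 1, 1, 0, 1]], dtype=int)

        def bit_flip_decode(H, y, max_iter=20):
            x = y.copy()
            for _ in range(max_iter):
                syndrome = H @ x % 2
                if not syndrome.any():
                    return x, True                 # all parity checks satisfied
                unsat = H.T @ syndrome             # unsatisfied checks touching each bit
                x[unsat == unsat.max()] ^= 1       # flip the most suspicious bit(s)
            return x, False

        received = np.zeros(6, dtype=int)          # the all-zero word is always a codeword
        received[2] ^= 1                           # introduce a single bit error
        decoded, ok = bit_flip_decode(H, received)
        print("decoded:", decoded, "success:", ok)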

  1. Minimal free resolutions over complete intersections

    CERN Document Server

    Eisenbud, David

    2016-01-01

    This book introduces a theory of higher matrix factorizations for regular sequences and uses it to describe the minimal free resolutions of high syzygy modules over complete intersections. Such resolutions have attracted attention ever since the elegant construction of the minimal free resolution of the residue field by Tate in 1957. The theory extends the theory of matrix factorizations of a non-zero divisor, initiated by Eisenbud in 1980, which yields a description of the eventual structure of minimal free resolutions over a hypersurface ring. Matrix factorizations have had many other uses in a wide range of mathematical fields, from singularity theory to mathematical physics.

  2. The complete mitochondrial genome of the Giant Manta ray, Manta birostris.

    Science.gov (United States)

    Hinojosa-Alvarez, Silvia; Díaz-Jaimes, Pindaro; Marcet-Houben, Marina; Gabaldón, Toni

    2015-01-01

    The complete mitochondrial genome of the giant manta ray (Manta birostris) consists of 18,075 bp with rich A + T and low G content. Gene organization and length are similar to other species of ray. It comprises 13 protein-coding genes, 2 rRNA genes, 23 tRNA genes, 1 non-coding sequence, and the control region. We identified an AT tandem repeat region, similar to that reported in Mobula japanica.

  3. Complete mitochondrial genome of the Loligo opalescence.

    Science.gov (United States)

    Jiang, Lihua; Liu, Wei; Zhu, Aiyi; Zhang, Jianshe; Wu, Changwen

    2016-09-01

    In this study, we determined the complete mitochondrial genome of Loligo opalescence. The genome was 17,370 bp in length and contained 13 protein-coding genes, 22 transfer RNA genes, 2 ribosomal RNA genes and 3 main non-coding regions. The composition and order of genes were similar to those of most other invertebrates. The overall base composition of L. opalescence is A 38.62%, C 19.40%, T 32.37% and G 9.61%, with a high A + T bias of 70.99%. All three control regions (CR) contain termination-associated sequences and conserved sequence blocks. This mitogenome sequence data would play an important role in the investigation of phylogenetic relationships, taxonomic resolution and phylogeography of the Loliginidae.
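
    Composition statistics of the kind quoted above (per-base percentages and the A + T bias) are straightforward to recompute once a sequence is in hand; the sketch below uses a short placeholder string rather than the actual GenBank record.

        seq = "ATTACGATTAACCTTAGGATTTAACG"          # placeholder, not the actual mitogenome
        counts = {base: seq.count(base) for base in "ACGT"}
        total = len(seq)
        for base, count in counts.items():
            print(f"{base}: {100 * count / total:.2f}%")
        print(f"A+T bias: {100 * (counts['A'] + counts['T']) / total:.2f}%")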

  4. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.

  5. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

    Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparsity nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and in vivo experiment demonstrate that the proposed Gradient projection-resolved Laplacian manifold regularization method for the joint model performed better than the comparative algorithm for ℓ1 minimization method in both spatial aggregation and location accuracy.

  6. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  7. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    Science.gov (United States)

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
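
    The trace (nuclear) norm enters such methods through its proximal operator, which soft-thresholds singular values. The sketch below applies that operator inside a simple soft-impute style loop for matrix (not tensor) completion, as a simplified stand-in for the per-mode factor terms of TNCP; it is not the paper's ADMM algorithm.

        import numpy as np

        def svt(Y, tau):
            # Proximal operator of the trace (nuclear) norm: soft-threshold singular values.
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            return (U * np.maximum(s - tau, 0.0)) @ Vt

        def soft_impute(M, mask, tau=1.0, n_iter=200):
            # Impute the missing entries, then shrink towards low rank, and repeat.
            X = np.where(mask, M, 0.0)
            for _ in range(n_iter):
                X = svt(np.where(mask, M, X), tau)
            return X

        rng = np.random.default_rng(6)
        M = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))   # rank-3 matrix
        mask = rng.random(M.shape) < 0.6                                   # 60% of entries observed
        X = soft_impute(M, mask)
        err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
        print("relative error on missing entries:", round(float(err), 3))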

  8. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

    In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: T......: The expectation of the regularization function, i.e., takes the same value in the posterior and prior distribution. We present three examples: two simulations, and application in fMRI neuroimaging....

  9. Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes

    International Nuclear Information System (INIS)

    Baratta, A.J.

    1997-01-01

    To perform a complete analysis of a reactor transient, not only the primary system response but the containment response must also be accounted for. Such transients and accidents as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Seimens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing are determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step allowing discrete codes to be coupled together

  10. Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes

    Energy Technology Data Exchange (ETDEWEB)

    Baratta, A.J.

    1997-07-01

    To perform a complete analysis of a reactor transient, not only the primary system response but the containment response must also be accounted for. Such transients and accidents as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Seimens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing are determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step allowing discrete codes to be coupled together.

  11. The completeness of chest X-ray procedure codes in the Danish National Patient Registry

    Directory of Open Access Journals (Sweden)

    Hjertholm P

    2017-03-01

    Peter Hjertholm,1 Kaare Rud Flarup,1 Louise Mahncke Guldbrandt,1 Peter Vedsted1,2 1Research Center for Cancer Diagnosis in Primary Care, Department of Public Health, 2University Clinic for Innovative Health Care Delivery, Diagnostic Centre, Silkeborg Hospital, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark Objective: The aim of this validation study was to assess the completeness of the registrations of chest X-rays (CXR) in two different versions of the Danish National Patient Registry (DNPR). Material and methods: We included electronic record data on CXR performed on patients aged 40 to 99 years from nine radiology departments covering 20 Danish hospitals. From each department, we included data from three randomly selected weeks between 2004 and 2011 (reference standard). In two versions of the DNPR, from the State Serum Institute (SSI) and Statistics Denmark, respectively, we investigated the proportion of registered CXR compared to the reference standard. Furthermore, we compared the completeness of the recorded data according to the responsible department (main department). Results: We identified 11,235 patients and 12,513 CXR in the reference standard. The data from the SSI contained 12,265 (98%) CXR, whereas the data from Statistics Denmark comprised 9,151 (73.1%) CXR. The completeness of the SSI data was fairly constant across years, radiology departments, medical specialties, and age groups. The data from Statistics Denmark were almost complete in 2011 (95.8%). However, for the remaining study period, records with a radiology department registered as the main department were lacking in the version from Statistics Denmark, so the overall completeness was 73.1%. Conclusion: The completeness of CXR registrations varied between 98% and 73% depending on the information source, and this should be considered when investigating radiology services on the basis of the DNPR. Keywords: chest X-ray, Danish National Patient Registry

  12. ADLIB: A simple database framework for beamline codes

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1993-01-01

    There are many well-developed codes available for beamline design and analysis. A significant fraction of each of these codes is devoted to processing its own unique input language for describing the problem. None of these large, complex, and powerful codes does everything. Adding a new bit of specialized physics can be a difficult task whose successful completion makes the code even larger and more complex. This paper describes an attempt to move in the opposite direction, toward a family of small, simple, single-purpose physics and utility modules, linked by an open, portable, public domain database framework. These small specialized physics codes begin with the beamline parameters already loaded in the database, and accessible via the handful of subroutines that constitute ADLIB. Such codes are easier to write, and inherently organized in a manner suitable for incorporation in model based control system algorithms. Examples include programs for analyzing beamline misalignment sensitivities, for simulating and fitting beam steering data, and for translating among MARYLIE, TRANSPORT, and TRACE3D formats.
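
    As a rough illustration of the database-centred style described, the following Python sketch shows a minimal get/put interface for pre-loaded beamline parameters; the class and routine names here are hypothetical and are not ADLIB's actual subroutines.

      # Hypothetical sketch of a small shared-database interface: beamline
      # parameters are loaded once, and each small single-purpose module reads
      # or updates them through a few calls.
      class BeamlineDB:
          def __init__(self):
              self._elements = {}            # element name -> dict of parameters

          def put(self, name, **params):     # store or update one element's parameters
              self._elements.setdefault(name, {}).update(params)

          def get(self, name, key):          # read one parameter of one element
              return self._elements[name][key]

          def elements(self):                # iterate over all element names
              return list(self._elements)

      # A tiny "single-purpose" module: total length of all quadrupoles.
      def total_quad_length(db):
          return sum(db.get(n, "length") for n in db.elements()
                     if db.get(n, "type") == "quad")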

  13. Exclusion of children with intellectual disabilities from regular ...

    African Journals Online (AJOL)

    The study investigated why teachers exclude children with intellectual disability from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data; results revealed that 57.4% of regular teachers could not cope with children with ID ...

  14. On infinite regular and chiral maps

    OpenAIRE

    Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán

    2015-01-01

    We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.

  15. 29 CFR 779.18 - Regular rate.

    Science.gov (United States)

    2010-07-01

    ... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  16. Continuum regularized Yang-Mills theory

    International Nuclear Information System (INIS)

    Sadun, L.A.

    1987-01-01

    Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.
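
    For orientation, the Parisi-Wu Langevin equation underlying stochastic quantization can be written, with the regulator implemented schematically as a smearing of the noise, as

    \[
    \frac{\partial \phi(x,t)}{\partial t} = -\frac{\delta S[\phi]}{\delta \phi(x,t)}
    + \int d^{d}y \, R_{\Lambda}(x-y)\, \eta(y,t),
    \qquad
    \langle \eta(x,t)\,\eta(x',t') \rangle = 2\,\delta^{d}(x-x')\,\delta(t-t'),
    \]

    where t is the fictitious Langevin time and R_Lambda is a smooth kernel that reduces to a delta function as the cutoff is removed. This is a generic textbook form given for illustration, not necessarily the precise regulator studied in the thesis.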

  17. Prevalence of Dental Erosion among the Young Regular Swimmers in Kaunas, Lithuania.

    Science.gov (United States)

    Zebrauskas, Andrius; Birskute, Ruta; Maciulskiene, Vita

    2014-04-01

    To determine the prevalence of dental erosion among competitive swimmers in Kaunas, the second largest city in Lithuania. The study was designed as a cross-sectional survey, with a questionnaire and clinical examination protocols. The participants were 12 - 25 year-old swimmers regularly practicing in the swimming pools of Kaunas. Of the total of 132 participants there were 76 (12 - 17 year-old) and 56 (18 - 25 year-old) individuals, in Groups 1 and 2, respectively. Participants were examined for dental erosion, using a portable dental unit equipped with fibre-optic light, compressed air and suction, and standard dental instruments for oral inspection. The Lussi index was applied for recording dental erosion. The completed questionnaires, which focused on the common erosion risk factors, were returned by all participants. Dental erosion was found in 25% of the 12 - 17 year-olds, and in 50% of the 18 - 25 year-olds. The mean value of surfaces with erosion was 6.31 (SD 4.37). All eroded surfaces were evaluated as grade 1. Swimming training duration and the participants' age correlated positively (Kendall correlation, r = 0.65). No significant relationship between dental erosion and the analyzed risk factors (gastroesophageal reflux disease, frequent vomiting, dry mouth, regular intake of acidic medicines, carbonated drinks) was found in either study group. The prevalence of dental erosion of very low degree was high among the regular swimmers in Kaunas, and was significantly related to swimmers' age.

  18. Completion of the Heysham 2 peripheral manipulator

    International Nuclear Information System (INIS)

    Shipp, R.; Ewen, R.O.

    1996-01-01

    The in-service inspection strategy for the AGR power station at Heysham 2 envisaged a suite of five manipulators to be used for inserting TRIUMPH television cameras into the reactor vessel. Prior to power raising, four of the five had been successfully commissioned and have been in regular use during the subsequent statutory outages. The final device, the Peripheral Manipulator (PM), was eventually completed prior to the 1994 outage and has been successfully deployed on reactor for both the 1994 and 1995 outages. The paper describes the design of the manipulator, its operation and scope of use in the Heysham 2 reactors. (Author)

  19. Complete mitochondrial genome of the Freshwater Catfish Rita rita (Siluriformes, Bagridae).

    Science.gov (United States)

    Lashari, Punhal; Laghari, Muhammad Younis; Xu, Peng; Zhao, Zixia; Jiang, Li; Narejo, Naeem Tariq; Deng, Yulin; Sun, Xiaowen; Zhang, Yan

    2015-01-01

    The complete mitochondrial genome of the catfish Rita rita, a species listed as Critically Endangered on the Red List, was isolated by LA PCR (TaKaRa LA Taq, Dalian, China) and sequenced by Sanger's method. The complete mitogenome was 16,449 bp in length and contains 13 typical vertebrate protein-coding genes, 2 rRNA genes and 22 tRNA genes. The whole-genome base composition was estimated to be 33.40% A, 27.43% C, 14.26% G and 24.89% T. The complete mitochondrial genome of the catfish Rita rita provides a basis for genetic breeding and conservation studies.
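
    Base-composition figures like those quoted above are straightforward to reproduce from a sequence; the short Python sketch below is purely illustrative and uses a made-up fragment rather than the Rita rita sequence.

      # Minimal sketch: percentage of each base in a mitogenome sequence string.
      def base_composition(seq):
          seq = seq.upper()
          n = len(seq)
          return {base: round(100.0 * seq.count(base) / n, 2) for base in "ACGT"}

      # Toy fragment only, not the actual Rita rita mitogenome:
      print(base_composition("ATGCATTACGGATTA"))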

  20. Comparative analysis of design codes for timber bridges in Canada, the United States, and Europe

    Science.gov (United States)

    James Wacker; James (Scott) Groenier

    2010-01-01

    The United States recently completed its transition from the allowable stress design code to the load and resistance factor design (LRFD) reliability-based code for the design of most highway bridges. To provide an international perspective on LRFD-based bridge codes, a comparative analysis was carried out addressing the national codes of the United States, Canada, and...

  1. Development of Regulatory Audit Core Safety Code : COREDAX

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Chae Yong; Jo, Jong Chull; Roh, Byung Hwan [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of); Lee, Jae Jun; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2005-07-01

    Korea Institute of Nuclear Safety (KINS) has developed a core neutronics simulator, the COREDAX code, for verifying the core safety of the SMART-P reactor, with technical support from Korea Advanced Institute of Science and Technology (KAIST). The COREDAX code would be used for regulatory audit calculations of 3-dimensional core neutronics. The COREDAX code solves the steady-state and time-dependent multi-group neutron diffusion equation in hexagonal geometry as well as rectangular geometry by the analytic function expansion nodal (AFEN) method. The AFEN method was developed at KAIST, and it has been internationally verified that its accuracy is excellent. The COREDAX code is originally programmed based on the AFEN method. The accuracy of the code based on the AFEN method was excellent for hexagonal 2-dimensional problems, but there was a need for improvement for hexagonal-z 3-dimensional problems. Hence, several solution routines of the AFEN method were improved, resulting in the advanced AFEN method, on which the COREDAX code is now based. The initial version of the COREDAX code completes a basic framework, performing eigenvalue calculations and kinetics calculations with thermal-hydraulic feedback, for audit calculations of the steady-state core design and reactivity-induced accidents of the SMART-P reactor. This study describes the COREDAX code for hexagonal geometry.
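
    For reference, the steady-state multi-group neutron diffusion eigenvalue problem that such nodal solvers address has the standard generic form

    \[
    -\nabla \cdot D_g(\mathbf{r}) \nabla \phi_g(\mathbf{r})
    + \Sigma_{r,g}(\mathbf{r})\,\phi_g(\mathbf{r})
    = \sum_{g' \neq g} \Sigma_{s,g' \to g}(\mathbf{r})\,\phi_{g'}(\mathbf{r})
    + \frac{\chi_g}{k_{\mathrm{eff}}} \sum_{g'} \nu\Sigma_{f,g'}(\mathbf{r})\,\phi_{g'}(\mathbf{r}),
    \]

    where \phi_g is the group flux, D_g the diffusion coefficient, \Sigma_{r,g} the removal cross section, and k_eff the effective multiplication factor obtained from the eigenvalue calculation; the AFEN approach expands the intranodal flux in analytic solutions of this operator rather than in polynomials.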

  2. Calculation of fluid-structure interaction for reactor safety with the Cassiopee code

    International Nuclear Information System (INIS)

    Graveleau, J.L.; Louvet, P.D.

    1979-01-01

    The Cassiopee code is an Eulerian-Lagrangian coupled code for computations in which the hydrodynamics are coupled with structural domains. It is completely explicit. The fluid zones may be computed either in Lagrangian or in Eulerian coordinates; thin shells can be computed with their flexural behaviour; elastic-plastic zones must be calculated in a Lagrangian way. This code is under development at Cadarache. Its purpose is to compute the hypothetical core disruptive accident of an LMFBR when Lagrangian codes are not sufficient. This paper contains a description of the code and two examples of computations, one of which has been compared with experimental results.

  3. TPASS: a gamma-ray spectrum analysis and isotope identification computer code

    International Nuclear Information System (INIS)

    Dickens, J.K.

    1981-03-01

    The gamma-ray spectral data-reduction and analysis computer code TPASS is described. This computer code is used to analyze complex Ge(Li) gamma-ray spectra to obtain peak areas corrected for detector efficiencies, from which are determined gamma-ray yields. These yields are compared with an isotope gamma-ray data file to determine the contributions to the observed spectrum from decay of specific radionuclides. A complete FORTRAN listing of the code and a complex test case are given
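
    The core data reduction such a code performs rests on the standard relation between a full-energy peak area and the emission rate of the corresponding gamma ray (generic form, not necessarily TPASS's exact algorithm):

    \[
    R_{\gamma} = \frac{N_{\mathrm{peak}}}{\varepsilon(E_{\gamma})\, t_{\mathrm{live}}},
    \qquad
    A = \frac{R_{\gamma}}{I_{\gamma}},
    \]

    where N_peak is the net peak area, \varepsilon(E_\gamma) the full-energy-peak detection efficiency, t_live the live time, I_\gamma the emission probability of the line, and A the activity of the candidate radionuclide; comparing the activities inferred from several lines against an isotope gamma-ray data file is what identifies the contributing decays.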

  4. O2-GIDNC: Beyond instantly decodable network coding

    KAUST Repository

    Aboutorab, Neda

    2013-06-01

    In this paper, we are concerned with extending the graph representation of generalized instantly decodable network coding (GIDNC) to a more general opportunistic network coding (ONC) scenario, referred to as order-2 GIDNC (O2-GIDNC). In the O2-GIDNC scheme, receivers can store non-instantly decodable packets (NIDPs) comprising two of their missing packets, and use them in a systematic way for later decodings. Once this graph representation is found, it can be used to extend the GIDNC graph-based analyses to the proposed O2-GIDNC scheme with a limited increase in complexity. In the proposed O2-GIDNC scheme, the information of the stored NIDPs at the receivers and the decoding opportunities they create can be exploited to improve the broadcast completion time and decoding delay compared to traditional GIDNC scheme. The completion time and decoding delay minimizing algorithms that can operate on the new O2-GIDNC graph are further described. The simulation results show that our proposed O2-GIDNC improves the completion time and decoding delay performance of the traditional GIDNC. © 2013 IEEE.
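
    The packet-level rule behind these schemes is easy to state: an XOR combination of source packets is instantly decodable for a receiver if and only if it contains at most one packet that the receiver is still missing, and a combination with exactly two missing packets is the kind of NIDP that O2-GIDNC stores for later use. A minimal Python sketch of this classification, with made-up packet identifiers, is given below.

      # Minimal sketch of the decodability classification used in IDNC-style
      # schemes.  `combination` is the set of source-packet ids XORed together;
      # `has` is the set of packets the receiver already holds.
      def classify(combination, has):
          missing = combination - has
          if len(missing) == 0:
              return "non-innovative"        # receiver learns nothing new
          if len(missing) == 1:
              return "instantly decodable"   # XOR out known packets, recover the missing one
          if len(missing) == 2:
              return "order-2 NIDP"          # storable for later use in O2-GIDNC
          return "non-decodable"

      print(classify({1, 2, 3}, {1, 2}))     # instantly decodable
      print(classify({1, 2, 3}, {3}))        # order-2 NIDP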

  5. Regularity effect in prospective memory during aging

    OpenAIRE

    Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique

    2016-01-01

    Background: Regularity effect can affect performance in prospective memory (PM), but little is known on the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults.Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...

  6. Generalized instantly decodable network coding for relay-assisted networks

    KAUST Repository

    Elmahdy, Adel M.

    2013-09-01

    In this paper, we investigate the problem of minimizing the frame completion delay for Instantly Decodable Network Coding (IDNC) in relay-assisted wireless multicast networks. We first propose a packet recovery algorithm in the single relay topology which employs generalized IDNC instead of strict IDNC previously proposed in the literature for the same relay-assisted topology. This use of generalized IDNC is supported by showing that it is a super-set of the strict IDNC scheme, and thus can generate coding combinations that are at least as efficient as strict IDNC in reducing the average completion delay. We then extend our study to the multiple relay topology and propose a joint generalized IDNC and relay selection algorithm. This proposed algorithm benefits from the reception diversity of the multiple relays to further reduce the average completion delay in the network. Simulation results show that our proposed solutions achieve much better performance compared to previous solutions in the literature. © 2013 IEEE.

  7. 20 CFR 226.14 - Employee regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  8. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  9. Effects of attitude, social influence, and self-efficacy model factors on regular mammography performance in life-transition aged women in Korea.

    Science.gov (United States)

    Lee, Chang Hyun; Kim, Young Im

    2015-01-01

    This study analyzed predictors of regular mammography performance in Korea. In addition, we determined factors affecting regular mammography performance in life-transition aged women by applying an attitude, social influence, and self-efficacy (ASE) model. Data were collected from women aged over 40 years residing in province J in Korea. The 178 enrolled subjects provided informed voluntary consent prior to completing a structured questionnaire. The overall regular mammography performance rate of the subjects was 41.6%. Older age, city residency, high income and part-time job were associated with a high regular mammography performance. Among women who had undergone more breast self-examinations (BSE) or more doctors' physical examinations (PE), there were higher regular mammography performance rates. All three ASE model factors were significantly associated with regular mammography performance. Women with a high level of positive ASE values had a significantly high regular mammography performance rate. Within the ASE model, self-efficacy and social influence were particularly important. Logistic regression analysis explained 34.7% of regular mammography performance, and PE experience (β=4.645, p=.003), part-time job (β=4.010, p=.050), self-efficacy (β=1.820, p=.026) and social influence (β=1.509, p=.038) were significant factors. Promotional strategies that could improve self-efficacy, reinforce social influence and reduce geographical, time and financial barriers are needed to increase the regular mammography performance rate in life-transition aged women.

  10. Regular algebra and finite machines

    CERN Document Server

    Conway, John Horton

    2012-01-01

    World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians.His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regulator algebras, context-free languages, communicative regular alg

  11. 39 CFR 6.1 - Regular meetings, annual meeting.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  12. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist Hospital, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper and lower level codes of the selected one, which were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into other data-processing programs was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adaptation to other programs. Therefore, this program can be used for automation of routine work in the department of radiology.
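
    The two-step lookup described above (organ code first, then a pathology file selected by the organ code's leading digit) can be sketched as follows; the dictionary entries are invented placeholders, not the real ACR tables, and only the '131.3661' output format is taken from the abstract.

      # Hypothetical sketch of the two-step ACR code lookup.
      organ_dict = {"example_organ": "131"}                 # made-up organ entry
      pathology_dicts = {
          "1": {"example_finding": "3661"},                 # file selected when the organ code starts with '1'
      }

      def acr_code(organ_name, finding_name):
          organ = organ_dict[organ_name]                            # step 1: pick the organ code
          pathology = pathology_dicts[organ[0]][finding_name]       # step 2: file chosen by the first digit
          return organ + "." + pathology

      print(acr_code("example_organ", "example_finding"))   # -> "131.3661", the format cited above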

  13. The complete mitochondrial genome of a spiraling whitefly, Aleurodicus dispersus Russell (Hemiptera: Aleyrodidae).

    Science.gov (United States)

    Ming-Xing, Lu; Zhi-Teng, Chen; Wei-Wei, Yu; Yu-Zhou, Du

    2017-03-01

    We report the complete mitochondrial genome (mitogenome) of a spiraling whitefly, Aleurodicus dispersus (Hemiptera: Aleyrodidae). The 16 170 bp long genome consists of 13 protein-coding genes, 20 transfer RNAs, 2 ribosomal RNAs, and a control region. The A. dispersus mitogenome also includes a cytb-like non-coding region and shows several variations relative to the typical insect mitogenome. A phylogenetic tree has been constructed using the 13 protein-coding genes of 12 related species from Hemiptera. Our results would contribute to further study of phylogeny in Aleyrodidae and Hemiptera.

  14. Linking of the TRAC-BF1/PARCSv2.7 codes in Linux without an external communication interface

    International Nuclear Information System (INIS)

    Barrachina, T.; Garcia-Fenoll, M.; Abarca, A.; Miro, R.; Verdu, G.; Concejal, A.; Solar, A.

    2014-01-01

    The TRAC-BF1 code is still widely used by the nuclear industry for safety analysis. The plant models developed using this code are highly validated, so it is advisable to continue improving this code before migrating to another completely different code. The coupling with the NRC neutronic code PARCSv2.7 increases the simulation capabilities in transients in which the power distribution plays an important role. In this paper, the procedure for the coupling of TRAC-BF1 and PARCSv2.7 codes without PVM and in Linux is presented. (Author)

  15. TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Cullen, D.E

    2000-01-01

    TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input Preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files

  16. TART 2000 A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    CERN Document Server

    Cullen, D

    2000-01-01

    TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input Preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.

  17. Complete Flow Blockage of a Fuel Channel for Research Reactor

    International Nuclear Information System (INIS)

    Lee, Byeonghee; Park, Suki

    2015-01-01

    The CHF correlation suitable for narrow rectangular channels is implemented in the RELAP5/MOD3.3 code for the analyses, and the behavior of the fuel temperatures and the MCHFR (minimum critical heat flux ratio) is compared between the original and modified codes. The complete flow blockage of a fuel channel for a research reactor is analyzed using the original and modified RELAP5/MOD3.3, and the results are compared with each other. The Sudo-Kaminaga CHF correlation is implemented into RELAP5/MOD3.3 for analyzing the behavior of the fuel adjacent to the blocked channel. A flow blockage of fuel channels can be postulated by a foreign object blocking the cooling channels of the fuel. Since a research reactor with plate-type fuel has isolated fuel channels, a complete flow blockage of one fuel channel can cause a failure of adjacent fuel plates by the loss of cooling capability. Although research reactor systems are designed to prevent foreign materials from entering the core, partial flow blockage accidents and subsequent fuel failures have been reported in some old research reactors. In this report, an analysis of a complete flow blockage accident is presented for a 15 MW pool-type research reactor with plate-type fuels. The fuel surface experiences different heat transfer regimes in the results from the original and modified RELAP5/MOD3.3. Owing to this discrepancy in heat transfer mode, fuel melting is predicted by the modified RELAP5/MOD3.3, whereas fuel integrity is ensured by the original code.
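
    For reference, the figure of merit quoted above is defined in the usual way; the Sudo-Kaminaga correlation supplies the predicted critical heat flux for narrow rectangular channels, and the ratio is evaluated along the heated channel:

    \[
    \mathrm{CHFR}(z) = \frac{q''_{\mathrm{CHF}}(z)}{q''_{\mathrm{local}}(z)},
    \qquad
    \mathrm{MCHFR} = \min_{z}\, \mathrm{CHFR}(z),
    \]

    where q''_local is the actual local heat flux on the fuel surface and q''_CHF the critical heat flux predicted by the correlation; an MCHFR approaching unity signals the onset of critical heat flux and hence possible fuel damage.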

  18. Improving coding accuracy in an academic practice.

    Science.gov (United States)

    Nguyen, Dana; O'Mara, Heather; Powell, Robert

    2017-01-01

    Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study populations: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small group case review, and large group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects that were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between the 2 intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M)=26.4%, SD=10%) to accuracy rates after all educational interventions were complete (M=26.8%, SD=12%); t(24)=-0.127, P=.90. Didactic teaching and small group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.

  19. Development and Assessment of CFD Models Including a Supplemental Program Code for Analyzing Buoyancy-Driven Flows Through BWR Fuel Assemblies in SFP Complete LOCA Scenarios

    Science.gov (United States)

    Artnak, Edward Joseph, III

    This work seeks to illustrate the potential benefits afforded by implementing aspects of fluid dynamics, especially the latest computational fluid dynamics (CFD) modeling approach, through numerical experimentation and the traditional discipline of physical experimentation to improve the calibration of the severe reactor accident analysis code, MELCOR, in one of several spent fuel pool (SFP) complete loss-of-coolant accident (LOCA) scenarios. While the scope of experimental work performed by Sandia National Laboratories (SNL) extends well beyond that which is reasonably addressed by our allotted resources and computational time in accordance with initial project allocations to complete the report, these simulated case trials produced a significant array of supplementary high-fidelity solutions and hydraulic flow-field data in support of SNL research objectives. Results contained herein show FLUENT CFD model representations of a 9x9 BWR fuel assembly in conditions corresponding to a complete loss-of-coolant accident scenario. In addition to the CFD model developments, a MATLAB-based control-volume model was constructed to independently assess the 9x9 BWR fuel assembly under similar accident scenarios. The data produced from this work show that FLUENT CFD models are capable of resolving complex flow fields within a BWR fuel assembly in the realm of buoyancy-induced mass flow rates and that characteristic hydraulic parameters from such CFD simulations (or physical experiments) are reasonably employed in corresponding constitutive correlations for developing simplified numerical models of comparable solution accuracy.

  20. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  1. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-04-19

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
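
    Schematically, a stationary second-order mean-field game of the kind discussed couples a Hamilton-Jacobi equation for the value function with a Fokker-Planck equation for the density; the template below is generic and does not reproduce the specific low-order regularization introduced in the thesis:

    \[
    \begin{cases}
    -\Delta u + H(x, Du) + \bar{H} = g(m), \\
    -\Delta m - \operatorname{div}\!\big(m\, D_p H(x, Du)\big) = 0, \\
    \int m \, dx = 1, \qquad m > 0,
    \end{cases}
    \]

    where u is the value function of a representative agent, m the population density, and \bar{H} a constant determined together with (u, m); the regularization is what allows the a priori estimates and the positive density mentioned above to be obtained.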

  2. Automating InDesign with Regular Expressions

    CERN Document Server

    Kahrel, Peter

    2006-01-01

    If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
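
    As a flavour of the kind of pattern-based replacement the book automates, here is a tiny regular-expression example; it is written in Python for self-containment rather than in InDesign's own scripting environment, and the pattern is invented for illustration.

      # Minimal regex sketch: join every number to its unit with a non-breaking
      # space, a typical typographic cleanup one might automate across a document.
      import re

      text = "The plate is 35 mm thick and weighs 2 kg."
      fixed = re.sub(r"(\d+)\s+(mm|cm|kg)", "\\1\u00A0\\2", text)
      print(fixed)  # numbers and units are now separated by a non-breaking space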

  3. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  4. Optimal behaviour can violate the principle of regularity.

    Science.gov (United States)

    Trimmer, Pete C

    2013-07-22

    Understanding decisions is a fundamental aim of behavioural ecology, psychology and economics. The regularity axiom of utility theory holds that a preference between options should be maintained when other options are made available. Empirical studies have shown that animals violate regularity, but this has not been understood from a theoretical perspective; such decisions have therefore been labelled as irrational. Here, I use models of state-dependent behaviour to demonstrate that choices can violate regularity even when behavioural strategies are optimal. I also show that the range of conditions over which regularity should be violated can be larger when options do not always persist into the future. Consequently, utility theory--based on axioms including transitivity, regularity and the independence of irrelevant alternatives--is undermined, because even alternatives that are never chosen by an animal (in its current state) can be relevant to a decision.

  5. Conference on Algebraic Geometry for Coding Theory and Cryptography

    CERN Document Server

    Lauter, Kristin; Walker, Judy

    2017-01-01

    Covering topics in algebraic geometry, coding theory, and cryptography, this volume presents interdisciplinary group research completed for the February 2016 conference at the Institute for Pure and Applied Mathematics (IPAM) in cooperation with the Association for Women in Mathematics (AWM). The conference gathered research communities across disciplines to share ideas and problems in their fields and formed small research groups made up of graduate students, postdoctoral researchers, junior faculty, and group leaders who designed and led the projects. Peer reviewed and revised, each of this volume's five papers achieves the conference’s goal of using algebraic geometry to address a problem in either coding theory or cryptography. Proposed variants of the McEliece cryptosystem based on different constructions of codes, constructions of locally recoverable codes from algebraic curves and surfaces, and algebraic approaches to the multicast network coding problem are only some of the topics covered in this vo...

  6. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

    Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum-space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs.
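
    A standard example of the construction is the D-dimensional Fourier transform of the massless propagator, which yields a configuration-space function with a moderated singularity at the origin:

    \[
    \int \frac{d^{D}p}{(2\pi)^{D}}\, \frac{e^{\,i p \cdot x}}{p^{2}}
    = \frac{\Gamma\!\left(\frac{D}{2}-1\right)}{4\,\pi^{D/2}}\,\left(x^{2}\right)^{1-D/2} .
    \]

    Products of such functions remain well defined for non-integer D (generic ν in the notation above), and the ultraviolet divergences of the theory reappear as poles when the physical dimension is approached.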

  7. Matrix regularization of 4-manifolds

    OpenAIRE

    Trzetrzelewski, M.

    2012-01-01

    We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...

  8. On the equivalence of different regularization methods

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1985-01-01

    The R-circunflex-operation preceded by the regularization procedure is discussed. Some arguments are given, according to which the results may depend on the method of regularization, introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)

  9. Accreting fluids onto regular black holes via Hamiltonian approach

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)

    2017-08-15

    We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with Dagum distribution function. We analyze the accretion process when different test fluids are falling onto these regular black holes. The accreting fluid is being classified through the equation of state according to the features of regular black holes. The behavior of fluid flow and the existence of sonic points is being checked for these regular black holes. It is noted that the three-velocity depends on critical points and the equation of state parameter on phase space. (orig.)

  10. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: 1) Analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; 2) Analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates: - flowrate distribution in parallel channels, coupled or not by conduction across the plates, for imposed pressure-drop or flowrate conditions, variable or not with respect to time; - the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, containing a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement (FLID) a one-channel, two-dimensional code. (authors) [French abstract, translated] This code handles the problems below: 1. Analysis of thermal tests on a water loop, at high or low pressure, in steady-state or transient regime; 2. Thermal and hydraulic studies of plate-fuel water reactors, at high or low pressure, with boiling permitted: - distribution of flow among parallel channels, coupled or not by conduction through the plates, for imposed flowrate or pressure-drop conditions, variable or not in time; - the power can be coupled to the neutronics, and a schematic representation of the safety actions is provided. This code (Cactus), with one space dimension and several channels, is complemented by Flid, which treats a single channel in two dimensions. (authors)

  11. Training and support to improve ICD coding quality: A controlled before-and-after impact evaluation.

    Science.gov (United States)

    Dyers, Robin; Ward, Grant; Du Plooy, Shane; Fourie, Stephanus; Evans, Juliet; Mahomed, Hassan

    2017-05-24

    The proposed National Health Insurance policy for South Africa (SA) requires hospitals to maintain high-quality International Statistical Classification of Diseases (ICD) codes for patient records. While considerable strides had been made to improve ICD coding coverage by digitising the discharge process in the Western Cape Province, further intervention was required to improve data quality. The aim of this controlled before-and-after study was to evaluate the impact of a clinician training and support initiative to improve ICD coding quality. To compare ICD coding quality between two central hospitals in the Western Cape before and after the implementation of a training and support initiative for clinicians at one of the sites. The difference in differences in data quality between the intervention site and the control site was calculated. Multiple logistic regression was also used to determine the odds of data quality improvement after the intervention and to adjust for potential differences between the groups. The intervention had a positive impact of 38.0% on ICD coding completeness over and above changes that occurred at the control site. Relative to the baseline, patient records at the intervention site had a 6.6 (95% confidence interval 3.5 - 16.2) adjusted odds ratio of having a complete set of ICD codes for an admission episode after the introduction of the training and support package. The findings on impact on ICD coding accuracy were not significant. There is sufficient pragmatic evidence that a training and support package will have a considerable positive impact on ICD coding completeness in the SA setting.

  12. The complete mitochondrial genome of the Border Collie dog.

    Science.gov (United States)

    Wu, An-Quan; Zhang, Yong-Liang; Li, Li-Li; Chen, Long; Yang, Tong-Wen

    2016-01-01

    The Border Collie is a famous breed of dog. In the present work we report the complete mitochondrial genome sequence of the Border Collie dog for the first time. The total length of the mitogenome was 16,730 bp, with a base composition of 31.6% A, 28.7% T, 25.5% C, and 14.2% G, showing an A+T-rich (60.3%) feature. It harbored 13 protein-coding genes, two ribosomal RNA genes, 22 transfer RNA genes and one non-coding control region (D-loop region). The arrangement of all genes was identical to that of typical dog mitochondrial genomes.

  13. Cervical vertebral maturation: An objective and transparent code staging system applied to a 6-year longitudinal investigation.

    Science.gov (United States)

    Perinetti, Giuseppe; Bianchet, Alberto; Franchi, Lorenzo; Contardo, Luca

    2017-05-01

    To date, little information is available regarding individual cervical vertebral maturation (CVM) morphologic changes. Moreover, contrasting results regarding the repeatability of the CVM method call for the use of objective and transparent reporting procedures. In this study, we used a rigorous morphometric objective CVM code staging system, called the "CVM code", that was applied to a 6-year longitudinal circumpubertal analysis of individual CVM morphologic changes to find cases outside the reported norms and analyze individual maturation processes. From the files of the Oregon Growth Study, 32 subjects (17 boys, 15 girls) with 6 annual lateral cephalograms taken from 10 to 16 years of age were included, for a total of 221 recordings. A customized cephalometric analysis was used, and each recording was converted into a CVM code according to the concavities of cervical vertebrae (C) C2 through C4 and the shapes of C3 and C4. The retrieved CVM codes, either falling within the reported norms (regular cases) or not (exception cases), were also converted into CVM stages. Overall, 31 exception cases (14%) were seen, with most of them involving the pubertal CVM stage 4. The overall durations of CVM stages 2 to 4 were about 1 year, even though only 4 subjects had regular annual durations of CVM stages 2 to 5. Whereas the overall CVM changes are consistent with previous reports, intersubject variability must be considered when dealing with individual treatment timing. Future research on CVM may take advantage of the CVM code system. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  14. Verification of the MOTIF code version 3.0

    International Nuclear Information System (INIS)

    Chan, T.; Guvanasen, V.; Nakka, B.W.; Reid, J.A.K.; Scheier, N.W.; Stanchell, F.W.

    1996-12-01

    As part of the Canadian Nuclear Fuel Waste Management Program (CNFWMP), AECL has developed a three-dimensional finite-element code, MOTIF (Model Of Transport In Fractured/porous media), for detailed modelling of groundwater flow, heat transport and solute transport in a fractured rock mass. The code solves the transient and steady-state equations of groundwater flow, solute (including one-species radionuclide) transport, and heat transport in variably saturated fractured/porous media. The initial development was completed in 1985 (Guvanasen 1985) and version 3.0 was completed in 1986. This version is documented in detail in Guvanasen and Chan (in preparation). This report describes a series of fourteen verification cases which have been used to test the numerical solution techniques and coding of MOTIF, as well as demonstrate some of the MOTIF analysis capabilities. For each case the MOTIF solution has been compared with a corresponding analytical or independently developed alternate numerical solution. Several of the verification cases were included in Level 1 of the International Hydrologic Code Intercomparison Project (HYDROCOIN). The MOTIF results for these cases were also described in the HYDROCOIN Secretariat's compilation and comparison of results submitted by the various project teams (Swedish Nuclear Power Inspectorate 1988). It is evident from the graphical comparisons presented that the MOTIF solutions for the fourteen verification cases are generally in excellent agreement with known analytical or numerical solutions obtained from independent sources. This series of verification studies has established the ability of the MOTIF finite-element code to accurately model the groundwater flow and solute and heat transport phenomena for which it is intended. (author). 20 refs., 14 tabs., 32 figs

  15. [Quality management and strategic consequences of assessing documentation and coding under the German Diagnostic Related Groups system].

    Science.gov (United States)

    Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M

    2004-10-01

    The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the roles of documentation and coding as factors of economical success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301) and find operative strategies to improve efficiency and strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16%. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow on medical documentation, coding, and data control was developed. Workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.

  16. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  17. QR Codes as Mobile Learning Tools for Labor Room Nurses at the San Pablo Colleges Medical Center

    Science.gov (United States)

    Del Rosario-Raymundo, Maria Rowena

    2017-01-01

    Purpose: The purpose of this paper is to explore the use of QR codes as mobile learning tools and examine factors that impact on their usefulness, acceptability and feasibility in assisting the nurses' learning. Design/Methodology/Approach: Study participants consisted of 14 regular, full-time, board-certified LR nurses. Over a two-week period,…

  18. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2 -regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
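
    In outline, and using generic notation rather than the paper's exact statement, the worst-case problem over a norm-bounded perturbation of the model matrix reduces to a regularized least-squares problem:

    \[
    \min_{x}\; \max_{\|\Delta\|\le \epsilon}\; \big\|(A+\Delta)\,x - y\big\|_{2}
    \;=\; \min_{x}\; \big\|A x - y\big\|_{2} + \epsilon\,\|x\|_{2},
    \]

    whose minimizer takes the familiar form \hat{x} = (A^{T}A + \gamma I)^{-1} A^{T} y, with the regularizer \gamma linked to the perturbation bound \epsilon through a nonlinear constraint; the selection rule described above combines that constraint with an MSE criterion to pick \gamma.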

  19. The Role of Personality in a Regular Cognitive Monitoring Program.

    Science.gov (United States)

    Sadeq, Nasreen A; Valdes, Elise G; Harrison Bush, Aryn L; Andel, Ross

    2018-02-20

    This study examines the role of personality in cognitive performance, adherence, and satisfaction with regular cognitive self-monitoring. One hundred fifty-seven cognitively healthy older adults, age 55+, completed the 44-item Big Five Inventory and were subsequently engaged in online monthly cognitive monitoring using the Cogstate Brief Battery for up to 35 months (M=14 mo, SD=7 mo). The test measures speed and accuracy in reaction time, visual learning, and working memory tasks. Neuroticism, although not related to cognitive performance overall (P>0.05), was related to a greater increase in accuracy (estimate=0.07, P=0.04) and speed (estimate=-0.09, P=0.03) on One Card Learning. Greater conscientiousness was related to faster overall speed on Detection (estimate=-1.62, P=0.02) and a significant rate of improvement in speed on One Card Learning (estimate=-0.10); no further significant effects of conscientiousness were observed. Participants volunteering for regular cognitive monitoring may be quite uniform in terms of personality traits, with personality traits playing a relatively minor role in adherence and satisfaction. The more neurotic may exhibit better accuracy and improve in speed with time, whereas the more conscientious may perform faster overall and improve in speed on some tasks, but the effects appear small.

  20. MRI reconstruction with joint global regularization and transform learning.

    Science.gov (United States)

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for the Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms to the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to the algorithms which use either of the patchwise transform learning or global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.
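
    A schematic version of such a mixed cost, with notation chosen here purely for illustration, combines a data-fidelity term, a patch-wise transform-sparsity term, and a global regularizer:

    \[
    \min_{x,\,W,\,\{\alpha_i\}}\;
    \big\|F_{u}x - y\big\|_{2}^{2}
    + \beta \sum_{i}\Big(\big\|W R_{i} x - \alpha_{i}\big\|_{2}^{2} + \lambda\,\|\alpha_{i}\|_{0}\Big)
    + \mu\, \mathcal{R}_{\mathrm{global}}(x),
    \]

    where F_u is the undersampled Fourier (MRI) operator, R_i extracts the i-th image patch, W is the learned sparsifying transform with sparse codes \alpha_i, and \mathcal{R}_global is the additional global penalty (for example total variation); algorithms of this type typically alternate between updating W, the \alpha_i, and the image x.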

  1. The complete mitochondrial genome of Octopus conispadiceus (Sasaki, 1917) (Cephalopoda: Octopodidae).

    Science.gov (United States)

    Ma, Yuanyuan; Zheng, Xiaodong; Cheng, Rubin; Li, Qi

    2016-01-01

    In this paper, we determined the complete mitochondrial genome of Octopus conispadiceus (Cephalopoda: Octopodidae). The whole mitogenome of O. conispadiceus is 16,027 basepairs (bp) in length with a base composition of 41.4% A, 34.8% T, 16.1% C, 7.7% G and contains 13 protein-coding genes, 2 ribosomal RNA genes, 22 transfer RNA genes, and a major non-coding region (MNR). The gene arrangements of O. conispadiceus showed remarkable similarity to that of O. vulgaris, Amphioctopus fangsiao, Cistopus chinensis and C. taiwanicus.

  2. Improving the accuracy of operation coding in surgical discharge summaries

    Science.gov (United States)

    Martinou, Eirini; Shouls, Genevieve; Betambeau, Nadine

    2014-01-01

    Procedural coding in surgical discharge summaries is extremely important; as well as communicating to healthcare staff which procedures have been performed, it also provides information that is used by the hospital's coding department. The OPCS code (Office of Population, Censuses and Surveys Classification of Surgical Operations and Procedures) is used to generate the tariff that allows the hospital to be reimbursed for the procedure. We felt that the OPCS coding on discharge summaries was often incorrect within our breast and endocrine surgery department. A baseline measurement over two months demonstrated that 32% of operations had been incorrectly coded, resulting in an incorrect tariff being applied and an estimated loss to the Trust of £17,000. We developed a simple but specific OPCS coding table in collaboration with the clinical coding team and breast surgeons that summarised all operations performed within our department. This table was disseminated across the team, specifically to the junior doctors who most frequently complete the discharge summaries. Re-audit showed 100% of operations were accurately coded, demonstrating the effectiveness of the coding table. We suggest that specifically designed coding tables be introduced across each surgical department to ensure accurate OPCS codes are used to produce better quality surgical discharge summaries and to ensure correct reimbursement to the Trust. PMID:26734286

  3. Strictly-regular number system and data structures

    DEFF Research Database (Denmark)

    Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki

    2010-01-01

    We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...

  4. Analysis of regularized Navier-Stokes equations, 2

    Science.gov (United States)

    Ou, Yuh-Roung; Sritharan, S. S.

    1989-01-01

    A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutions was studied. Local as well as global invariant manifolds were found. Regularity properties of these manifolds are analyzed.

  5. Evaluation Codes from an Affine Variety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particular nice examples of affine variety codes. Our study includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes.

  6. Regularities, Natural Patterns and Laws of Nature

    Directory of Open Access Journals (Sweden)

    Stathis Psillos

    2014-02-01

    Full Text Available  The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology.  Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.

  7. Computer codes for tasks in the fields of isotope and radiation research

    International Nuclear Information System (INIS)

    Friedrich, K.; Gebhardt, O.

    1978-11-01

    Concise descriptions of computer codes developed for solving problems in the fields of isotope and radiation research at the Zentralinstitut fuer Isotopen- und Strahlenforschung (ZfI) are compiled. In part two, the structure of the ZfI program library MABIF is outlined and a complete list of all codes available is given.

  8. MARS CODE MANUAL VOLUME III - Programmer's Manual

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Hwang, Moon Kyu; Jeong, Jae Jun; Kim, Kyung Doo; Bae, Sung Won; Lee, Young Jin; Lee, Won Jae

    2010-02-01

    Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 code with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This programmer's manual provides a complete overview of the code structure and the input/output functions of MARS. In addition, brief descriptions of each subroutine and of the major variables used in MARS are included, so that the report should be very useful for code maintenance. The overall structure of the manual is modeled on that of RELAP5, and as such its layout is very similar to that of the RELAP5 manual. This similitude to RELAP5 input is intentional, as the input scheme allows minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  9. The complete mitochondrial genome of Chrysopa pallens (Insecta, Neuroptera, Chrysopidae).

    Science.gov (United States)

    He, Kun; Chen, Zhe; Yu, Dan-Na; Zhang, Jia-Yong

    2012-10-01

    The complete mitochondrial genome of Chrysopa pallens (Neuroptera, Chrysopidae) was sequenced. It consists of 13 protein-coding genes, 22 transfer RNA genes, 2 ribosomal RNA (rRNA) genes, and a control region (AT-rich region). The total length of C. pallens mitogenome is 16,723 bp with 79.5% AT content, and the length of control region is 1905 bp with 89.1% AT content. The non-coding regions of C. pallens include control region between 12S rRNA and trnI genes, and a 75-bp space region between trnI and trnQ genes.

  10. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  11. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  12. Simplified diagnostic coding sheet for computerized data storage and analysis in ophthalmology.

    Science.gov (United States)

    Tauber, J; Lahav, M

    1987-11-01

    A review of currently-available diagnostic coding systems revealed that most are either too abbreviated or too detailed. We have compiled a simplified diagnostic coding sheet based on the International Classification of Diseases, Ninth Revision (ICD-9), which is both complete and easy to use in a general practice. The information is transferred to a computer, which uses the relevant ICD-9 diagnoses as a database, and can be retrieved later for display of patients' problems or analysis of clinical data.

  13. User manual of FUNF code for fissile material data calculation

    International Nuclear Information System (INIS)

    Zhang, Jingshang

    2006-03-01

    The FUNF code (2005 version) is used to calculate fast neutron reaction data of fissile materials with incident energies from about 1 keV up to 20 MeV. The first version of the FUNF code was completed in 1994. The code has been developed continually since that time and has often been used as an evaluation tool for setting up CENDL and for analyzing measurements of fissile materials. During these years many improvements have been made. In this manual, the format of the input parameter files and the output files, as well as the functions of the flags used in the FUNF code, are introduced in detail, and examples of the format of the input parameter files are given. The FUNF code consists of the spherical optical model, the Hauser-Feshbach model, and the unified Hauser-Feshbach and exciton model. (authors)

  14. Sequence of the intron/exon junctions of the coding region of the human androgen receptor gene and identification of a point mutation in a family with complete androgen insensitivity

    International Nuclear Information System (INIS)

    Lubahn, D.B.; Simental, J.A.; Higgs, H.N.; Wilson, E.M.; French, F.S.; Brown, T.R.; Migeon, C.J.

    1989-01-01

    Androgens act through a receptor protein (AR) to mediate sex differentiation and development of the male phenotype. The authors have isolated the eight exons in the amino acid coding region of the AR gene from a human X chromosome library. Nucleotide sequences of the AR gene intron/exon boundaries were determined for use in designing synthetic oligonucleotide primers to bracket coding exons for amplification by the polymerase chain reaction. Genomic DNA was amplified from 46, XY phenotypic female siblings with complete androgen insensitivity syndrome. AR binding affinity for dihydrotestosterone in the affected siblings was lower than in normal males, but the binding capacity was normal. Sequence analysis of amplified exons demonstrated within the AR steroid-binding domain (exon G) a single guanine to adenine mutation, resulting in replacement of valine with methionine at amino acid residue 866. As expected, the carrier mother had both normal and mutant AR genes. Thus, a single point mutation in the steroid-binding domain of the AR gene correlated with the expression of an AR protein ineffective in stimulating male sexual development

  15. The complete mitochondrial genome of the Jacobin pigeon (Columba livia breed Jacobin).

    Science.gov (United States)

    He, Wen-Xiao; Jia, Jin-Feng

    2015-06-01

    The Jacobin is a breed of fancy pigeon developed over many years of selective breeding that originated in Asia. In the present work, we report the complete mitochondrial genome sequence of Jacobin pigeon for the first time. The total length of the mitogenome was 17,245 bp with the base composition of 30.18% for A, 23.98% for T, 31.88% for C, and 13.96% for G and an A-T (54.17 %)-rich feature was detected. It harbored 13 protein-coding genes, 2 ribosomal RNA genes, 22 transfer RNA genes and 1 non-coding control region. The arrangement of all genes was identical to the typical mitochondrial genomes of pigeon. The complete mitochondrial genome sequence of Jacobin pigeon would serve as an important data set of the germplasm resources for further study.

  16. Is a genome a codeword of an error-correcting code?

    Directory of Open Access Journals (Sweden)

    Luzinete C B Faria

    Full Text Available Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
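
    To make the codeword test concrete, here is a minimal sketch of the syndrome check for the binary Hamming (7,4) code; the mapping from nucleotides to bits is an assumption for illustration only and is not the mapping used in the paper.

```python
import numpy as np

# Parity-check matrix of the binary Hamming (7,4) code: the columns are the
# binary representations of 1..7, so a nonzero syndrome points at the
# single flipped position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def is_codeword(bits):
    """A word is a codeword iff its syndrome H @ bits is zero (mod 2)."""
    s = H @ np.asarray(bits) % 2
    return not s.any()

# a DNA window could first be mapped to bits (e.g. A,C,G,T -> 00,01,10,11);
# that choice of mapping is a hypothetical one made here for the sketch.
print(is_codeword([1, 0, 1, 0, 1, 0, 1]))   # True: 1010101 is a codeword
print(is_codeword([1, 0, 1, 0, 1, 0, 0]))   # False: one bit flipped
```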

  17. The complete mitochondrial genome sequence of Diaphorina citri (Hemiptera: Psyllidae)

    Science.gov (United States)

    The first complete mitochondrial genome (mitogenome) sequence of Asian citrus psyllid, Diaphorina citri (Hemiptera: Psyllidae), from Guangzhou, China is presented. The circular mitogenome is 14,996 bp in length with an A+T content of 74.5%, and contains 13 protein-coding genes (PCGs), 22 tRNA genes ...

  18. Regularization of the Boundary-Saddle-Node Bifurcation

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2018-01-01

    Full Text Available In this paper we treat a particular class of planar Filippov systems which consist of two smooth systems that are separated by a discontinuity boundary. In such systems one vector field undergoes a saddle-node bifurcation while the other vector field is transversal to the boundary. The boundary-saddle-node (BSN bifurcation occurs at a critical value when the saddle-node point is located on the discontinuity boundary. We derive a local topological normal form for the BSN bifurcation and study its local dynamics by applying the classical Filippov’s convex method and a novel regularization approach. In fact, by the regularization approach a given Filippov system is approximated by a piecewise-smooth continuous system. Moreover, the regularization process produces a singular perturbation problem where the original discontinuous set becomes a center manifold. Thus, the regularization enables us to make use of the established theories for continuous systems and slow-fast systems to study the local behavior around the BSN bifurcation.
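
    For reference, the classical Sotomayor-Teixeira regularization replaces the discontinuous field by a one-parameter family of continuous fields; whether the paper uses exactly this transition function is an assumption, but the construction typically has the form

\[
Z_\varepsilon(x) \;=\; \frac{1+\varphi\!\big(h(x)/\varepsilon\big)}{2}\,X(x)
\;+\; \frac{1-\varphi\!\big(h(x)/\varepsilon\big)}{2}\,Y(x),
\]

    where \(h=0\) defines the discontinuity boundary, \(X\) and \(Y\) are the two smooth vector fields, and \(\varphi\) is a smooth monotone function with \(\varphi(t)=\pm 1\) for \(\pm t \ge 1\); rescaling with \(\tau = h/\varepsilon\) turns the limit \(\varepsilon \to 0\) into a slow-fast (singular perturbation) problem, as described in the abstract.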

  19. Low-Complexity Regularization Algorithms for Image Deblurring

    KAUST Repository

    Alanazi, Abdulrahman

    2016-11-01

    Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, we developed algorithms that also work
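
    A minimal non-blind example in the spirit of the RLS formulation (a sketch under the assumptions of a known, spatially invariant PSF and periodic boundary conditions; not the thesis algorithm itself):

```python
import numpy as np

def tikhonov_deblur(blurred, psf, lam):
    """Non-blind deblurring by Tikhonov-regularized inversion in the
    Fourier domain: X = conj(H) * Y / (|H|^2 + lam)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# toy example: blur a square with a 5x5 box PSF, add noise, restore
rng = np.random.default_rng(1)
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
noisy = blurred + 0.01 * rng.normal(size=img.shape)
restored = tikhonov_deblur(noisy, psf, lam=1e-2)
print("restoration MSE:", np.mean((restored - img) ** 2))
```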

  20. Improvements in GRACE Gravity Fields Using Regularization

    Science.gov (United States)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or

  1. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can be easily utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  2. Moving force identification based on redundant concatenated dictionary and weighted l1-norm regularization

    Science.gov (United States)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng

    2018-01-01

    Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals due to bridge vibration and bumps on a bridge deck, respectively. Therefore, the interaction forces are usually hard to be expressed completely and sparsely by using a single basis function set. Based on the redundant concatenated dictionary and weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions used for matching the harmonic and impact signal features of unknown moving forces. The weighted l1-norm regularization method is introduced for formulation of MFI equation, so that the signal features of moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is appropriately chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and the feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with a strong robustness, and it has a better performance than the Tikhonov regularization method. Some related issues are discussed as well.
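
    The following sketch illustrates the two ingredients named above: a redundant dictionary concatenating trigonometric and rectangular atoms, and FISTA applied to a weighted l1-regularized least-squares fit. The atom shapes, weights, and sizes are assumptions for illustration, not the paper's actual dictionary or data.

```python
import numpy as np

def fista_weighted_l1(Phi, y, w, lam, n_iter=300):
    """FISTA for min_x 0.5*||Phi x - y||^2 + lam * sum_i w_i |x_i|."""
    L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        u = z - Phi.T @ (Phi @ z - y) / L
        x_new = np.sign(u) * np.maximum(np.abs(u) - lam * w / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

# hypothetical redundant dictionary: harmonic atoms + rectangular pulses
n = 200
tt = np.linspace(0, 1, n)
trig = np.column_stack([np.cos(2 * np.pi * k * tt) for k in range(1, 21)])
rect = np.column_stack([((tt >= a) & (tt < a + 0.05)).astype(float)
                        for a in np.arange(0, 1, 0.05)])
Phi = np.hstack([trig, rect])
w = np.ones(Phi.shape[1])                      # uniform weights for the sketch
y = 2 * trig[:, 2] + rect[:, 7] + 0.01 * np.random.default_rng(0).normal(size=n)
x = fista_weighted_l1(Phi, y, w, lam=0.1)
print("nonzero coefficients:", np.flatnonzero(np.abs(x) > 1e-3))
```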

  3. SAFE: A computer code for the steady-state and transient thermal analysis of LMR fuel elements

    International Nuclear Information System (INIS)

    Hayes, S.L.

    1993-12-01

    SAFE is a computer code developed for both the steady-state and transient thermal analysis of single LMR fuel elements. The code employs a two-dimensional control-volume based finite difference methodology with fully implicit time marching to calculate the temperatures throughout a fuel element and its associated coolant channel for both the steady-state and transient events. The code makes no structural calculations or predictions whatsoever. It does, however, accept as input structural parameters within the fuel such as the distributions of porosity and fuel composition, as well as heat generation, to allow a thermal analysis to be performed on a user-specified fuel structure. The code was developed with ease of use in mind. An interactive input file generator and material property correlations internal to the code are available to expedite analyses using SAFE. This report serves as a complete design description of the code as well as a user's manual. A sample calculation made with SAFE is included to highlight some of the code's features. Complete input and output files for the sample problem are provided
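
    As a generic illustration of the discretization style described (fully implicit time marching on a control-volume grid), the sketch below advances 1-D conduction with fixed boundary temperatures; it is not the SAFE code, and the property values are arbitrary.

```python
import numpy as np

def implicit_heat_step(T, dt, dx, alpha, T_left, T_right):
    """One fully implicit time step of 1-D conduction on a uniform grid,
    with Dirichlet temperatures imposed at both ends (illustrative only)."""
    n = T.size
    r = alpha * dt / dx**2
    A = np.zeros((n, n)); b = T.copy()
    for i in range(n):
        if i == 0 or i == n - 1:
            A[i, i] = 1.0
            b[i] = T_left if i == 0 else T_right
        else:
            A[i, i - 1] = -r; A[i, i] = 1 + 2 * r; A[i, i + 1] = -r
    return np.linalg.solve(A, b)

T = np.full(21, 300.0)                       # initial temperature [K]
for _ in range(100):
    T = implicit_heat_step(T, dt=0.1, dx=0.01, alpha=1e-5,
                           T_left=600.0, T_right=300.0)
print(T[:5])
```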

  4. Complete genome sequence of Gordonia bronchialis type strain (3410T)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, N [U.S. Department of Energy, Joint Genome Institute; Sikorski, Johannes [DSMZ - German Collection of Microorganisms and Cell Cultures GmbH, Braunschweig, Germany; Jando, Marlen [DSMZ - German Collection of Microorganisms and Cell Cultures GmbH, Braunschweig, Germany; Lapidus, Alla L. [U.S. Department of Energy, Joint Genome Institute; Nolan, Matt [U.S. Department of Energy, Joint Genome Institute; Glavina Del Rio, Tijana [U.S. Department of Energy, Joint Genome Institute; Tice, Hope [U.S. Department of Energy, Joint Genome Institute; Copeland, A [U.S. Department of Energy, Joint Genome Institute; Cheng, Jan-Fang [U.S. Department of Energy, Joint Genome Institute; Chen, Feng [U.S. Department of Energy, Joint Genome Institute; Bruce, David [Los Alamos National Laboratory (LANL); Goodwin, Lynne A. [Los Alamos National Laboratory (LANL); Pitluck, Sam [U.S. Department of Energy, Joint Genome Institute; Mavromatis, K [U.S. Department of Energy, Joint Genome Institute; Ovchinnikova, Galina [U.S. Department of Energy, Joint Genome Institute; Pati, Amrita [U.S. Department of Energy, Joint Genome Institute; Chen, Amy [U.S. Department of Energy, Joint Genome Institute; Palaniappan, Krishna [U.S. Department of Energy, Joint Genome Institute; Land, Miriam L [ORNL; Hauser, Loren John [ORNL; Chang, Yun-Juan [ORNL; Jeffries, Cynthia [Oak Ridge National Laboratory (ORNL); Chain, Patrick S. G. [Lawrence Livermore National Laboratory (LLNL); Saunders, Elizabeth H [Los Alamos National Laboratory (LANL); Han, Cliff [Los Alamos National Laboratory (LANL); Detter, J C [U.S. Department of Energy, Joint Genome Institute; Brettin, Thomas S [ORNL; Rohde, Manfred [HZI - Helmholtz Centre for Infection Research, Braunschweig, Germany; Goker, Markus [DSMZ - German Collection of Microorganisms and Cell Cultures GmbH, Braunschweig, Germany; Bristow, James [U.S. Department of Energy, Joint Genome Institute; Eisen, Jonathan [U.S. Department of Energy, Joint Genome Institute; Markowitz, Victor [U.S. Department of Energy, Joint Genome Institute; Hugenholtz, Philip [U.S. Department of Energy, Joint Genome Institute; Klenk, Hans-Peter [DSMZ - German Collection of Microorganisms and Cell Cultures GmbH, Braunschweig, Germany; Kyrpides, Nikos C [U.S. Department of Energy, Joint Genome Institute

    2010-01-01

    Gordonia bronchialis Tsukamura 1971 is the type species of the genus. G. bronchialis is a human-pathogenic organism that has been isolated from a large variety of human tissues. Here we describe the features of this organism, together with the complete genome sequence and annotation. This is the first completed genome sequence of the family Gordoniaceae. The 5,290,012 bp long genome with its 4,944 protein-coding and 55 RNA genes is part of the Genomic Encyclopedia of Bacteria and Archaea project.

  5. Computation of the Genetic Code

    Science.gov (United States)

    Kozlov, Nicolay N.; Kozlova, Olga N.

    2018-03-01

    One of the problems in the development of a mathematical theory of the genetic code (a summary is presented in [1], the details in [2]) is the problem of calculating the genetic code. No similar problems are known elsewhere, and they could be posed only in the 21st century. This work is devoted to one approach to solving this problem. It provides, for the first time, a detailed description of the method of calculating the genetic code, the idea of which was first published earlier [3]; the choice of one of the most important sets for the calculation was based on the article [4]. Such a set of amino acids corresponds to a complete set of representations of the plurality of overlapping triplet genes belonging to the same DNA strand. A separate issue was the initial point triggering the iterative search over all codes admitted by the initial data. Mathematical analysis has shown that the said set contains some ambiguities, which were found thanks to our proposed compressed representation of the set. As a result, the developed method of calculation was limited to two main stages of research, where in the first stage only the ... of the area were used in the calculations. The proposed approach significantly reduces the amount of computation at each step in this complex discrete structure.

  6. Training and support to improve ICD coding quality: A controlled before-and-after impact evaluation

    Directory of Open Access Journals (Sweden)

    Robin Dyers

    2017-06-01

    Full Text Available Background. The proposed National Health Insurance policy for South Africa (SA) requires hospitals to maintain high-quality International Statistical Classification of Diseases (ICD) codes for patient records. While considerable strides had been made to improve ICD coding coverage by digitising the discharge process in the Western Cape Province, further intervention was required to improve data quality. The aim of this controlled before-and-after study was to evaluate the impact of a clinician training and support initiative to improve ICD coding quality. Objective. To compare ICD coding quality between two central hospitals in the Western Cape before and after the implementation of a training and support initiative for clinicians at one of the sites. Methods. The difference in differences in data quality between the intervention site and the control site was calculated. Multiple logistic regression was also used to determine the odds of data quality improvement after the intervention and to adjust for potential differences between the groups. Results. The intervention had a positive impact of 38.0% on ICD coding completeness over and above changes that occurred at the control site. Relative to the baseline, patient records at the intervention site had a 6.6 (95% confidence interval 3.5 - 16.2) adjusted odds ratio of having a complete set of ICD codes for an admission episode after the introduction of the training and support package. The findings on impact on ICD coding accuracy were not significant. Conclusion. There is sufficient pragmatic evidence that a training and support package will have a considerable positive impact on ICD coding completeness in the SA setting.

  7. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

    Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions such as intersection, subtraction and complement. A method of "overriding" the source NFA (an NFA not defined with subset construction rules) is used. The past work described only the algorithm for the AND-operator (or intersection) of regular languages; in this paper the construction for the MINUS-operator (and complement) is shown.
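
    A compact illustration of the standard product construction that such extended operators reduce to (intersection shown; complement would simply flip the accepting set of a complete DFA). The toy languages below are assumptions made for the example.

```python
from collections import deque

def intersect_dfas(d1, d2, alphabet):
    """Product construction: run both DFAs in lock-step; a product state
    accepts iff both component states accept. Each DFA is a tuple
    (start, accepting_set, delta) with delta[(state, symbol)] -> state."""
    (s1, acc1, t1), (s2, acc2, t2) = d1, d2
    start = (s1, s2)
    delta, accepting, seen = {}, set(), {start}
    queue = deque([start])
    while queue:
        q1, q2 = queue.popleft()
        if q1 in acc1 and q2 in acc2:
            accepting.add((q1, q2))
        for a in alphabet:
            nxt = (t1[(q1, a)], t2[(q2, a)])
            delta[((q1, q2), a)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return start, accepting, delta

def accepts(dfa, word):
    start, acc, delta = dfa
    q = start
    for c in word:
        q = delta[(q, c)]
    return q in acc

# toy languages over {a, b}: "even number of a's" and "ends with b"
even_a = (0, {0}, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1})
ends_b = (0, {1}, {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1})
both = intersect_dfas(even_a, ends_b, 'ab')
print(accepts(both, "abab"))   # True: two a's and ends with b
print(accepts(both, "ab"))     # False: only one a
```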

  8. Regularities of intermediate adsorption complex relaxation

    International Nuclear Information System (INIS)

    Manukova, L.A.

    1982-01-01

    The experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N2 system at 77 K are given. The method of molecular beam has been used in the investigation. The analytical expressions of the change regularity in the relaxation process of the full and specific rates - of transition from the intermediate state into the ''non-reversible'' one, of desorption into the gas phase, and of accumulation of the particles in the intermediate state - are obtained

  9. Depth Measurement Based on Infrared Coded Structured Light

    Directory of Open Access Journals (Sweden)

    Tong Jia

    2014-01-01

    Full Text Available Depth measurement is a challenging problem in computer vision research. In this study, we first design a new grid pattern and develop a sequence coding and decoding algorithm to process the pattern. Second, we propose a linear fitting algorithm to derive the linear relationship between the object depth and pixel shift. Third, we obtain depth information on an object based on this linear relationship. Moreover, 3D reconstruction is implemented based on Delaunay triangulation algorithm. Finally, we utilize the regularity of the error curves to correct the system errors and improve the measurement accuracy. The experimental results show that the accuracy of depth measurement is related to the step length of moving object.
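
    A minimal sketch of the linear-fitting step: given calibration pairs of pixel shift and known depth (the numbers below are invented for illustration, not taken from the paper), a least-squares line maps new shifts to depth estimates.

```python
import numpy as np

# hypothetical calibration pairs (pixel shift -> known object depth in mm);
# real values would come from the calibration procedure described in the paper
shift = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
depth = np.array([812.0, 744.0, 679.0, 611.0, 546.0])

a, b = np.polyfit(shift, depth, 1)       # least-squares line depth = a*shift + b
print(f"depth(shift) = {a:.2f} * shift + {b:.2f}")
print("estimated depth at a 22 px shift:", a * 22 + b)
```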

  10. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse

  11. Model for determining the completion and production policy in oil wells

    Energy Technology Data Exchange (ETDEWEB)

    Acurero S, L A

    1983-12-01

    An optimization scheme for reservoir development was examined considering the value of the resource, choice of completion and production techniques, and boundary conditions for the reservoir. A 3-phase semi-analytic single-well model was formulated to determine the reservoir response for any completion and production policy. Second, an optimization scheme based on the discrete version of the maximum principle of Pontryagin and the Fibonacci search method was formulated to determine the optimal production and completion policy. Both models are combined in a general algorithm of solution proposed to solve the optimization problem, and a computer code was developed and tested.

  12. Verification of HYDRASTAR - A code for stochastic continuum simulation of groundwater flow

    International Nuclear Information System (INIS)

    Norman, S.

    1991-07-01

    HYDRASTAR is a code developed at Starprog AB for use in the SKB 91 performance assessment project with the following principal functions: - Reads the actual conductivity measurements from a file created from the data base GEOTAB. - Regularizes the measurements to a user-chosen calculation scale. - Generates three-dimensional unconditional realizations of the conductivity field by using a supplied model of the conductivity field as a stochastic function. - Conditions the simulated conductivity field on the actual regularized measurements. - Reads the boundary conditions from a regional deterministic NAMMU computation. - Calculates the hydraulic head field, Darcy velocity field, stream lines and water travel times by solving the stationary hydrology equation and the streamline equation obtained with the velocities calculated from Darcy's law. - Generates visualizations of the realizations if desired. - Calculates statistics such as semivariograms and expectation values of the output fields by repeating the above procedure in iterations of the Monte Carlo type. When using computer codes for safety assessment purposes, validation and verification of the codes are important. Thus this report describes work performed with the goal of verifying parts of HYDRASTAR. The verification described in this report uses comparisons with two other solutions of related examples: A. Comparison with a so-called perturbation solution of the stochastic stationary hydrology equation. This is an analytical approximation of the stochastic stationary hydrology equation valid in the case of small variability of the unconditional random conductivity field. B. Comparison with the (Hydrocoin, 1988), case 2. This is a classical example of a hydrology problem with a deterministic conductivity field. The principal feature of the problem is the presence of narrow fracture zones with high conductivity. The compared outputs are the hydraulic head field and a number of stream lines originating from a

  13. Regular Cycles of Forward and Backward Signal Propagation in Prefrontal Cortex and in Consciousness

    Directory of Open Access Journals (Sweden)

    Paul John Werbos

    2016-11-01

    Full Text Available This paper addresses two fundamental questions:(1 Is it possible to develop mathematical neural network models which can explain and replicate the way in which higher-order capabilities like intelligence, consciousness, optimization and prediction emerge from the process of learning ?; and(2 How can we use and test such models in a practical way, to track, to analyze and to model high-frequency ( 500 hertz many-channel data from recording the brain, just as econometrics sometimes uses models grounded in the theory of efficient markets to track real-world time-series data ?This paper first reviews some of the prior work addressing question (1, and then reports new work performed in MATLAB analyzing spike-sorted and burst-sorted data on the prefrontal cortex from the Buzsaki lab which is consistent with a regular clock cycle of about 153.4 milliseconds and with regular alternation between a forward pass of network calculations and a backwards pass, as in the general form of the backpropagation algorithm which one of us first developed in the period 1968-1974. In business and finance, it is well known that adjustments for cycles of the year are essential to accurate prediction of time-series data; in a similar way, methods for identifying and using regular clock cycles offer large new opportunities in neural time-series analysis. This paper demonstrates a few initial footprints on the large continent of this type of neural time-series analysis, and discusses a few of the many further possibilities opened up by this new approach to decoding the neural code. Three new general MATLAB functions and eight new numerical measures are discussed.

  14. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the propose algorithm over state-of-the-art ranking score learning algorithms.
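
    A simplified sketch of the two ideas, treated sequentially rather than in the paper's unified objective: sparse coefficients of each object over the others (via ISTA on a lasso problem) serve as a similarity graph that then smooths initial relevance scores. The names, sizes, and parameters are assumptions for illustration.

```python
import numpy as np

def sparse_similarities(X, lam, n_iter=200):
    """Code each column of X as a sparse combination of the other columns
    (ISTA on the lasso) and use |coefficients| as pairwise similarities."""
    n = X.shape[1]
    S = np.zeros((n, n))
    for i in range(n):
        D = np.delete(X, i, axis=1)               # dictionary of the other objects
        L = np.linalg.norm(D, 2) ** 2
        w = np.zeros(n - 1)
        for _ in range(n_iter):
            w = w - D.T @ (D @ w - X[:, i]) / L
            w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
        S[i, np.arange(n) != i] = np.abs(w)
    return (S + S.T) / 2                           # symmetrize

def regularized_scores(S, y, gamma):
    """f = argmin ||f - y||^2 + gamma * sum_ij S_ij (f_i - f_j)^2,
    solved in closed form with the graph Laplacian of S."""
    Lap = np.diag(S.sum(axis=1)) - S
    return np.linalg.solve(np.eye(len(y)) + 2 * gamma * Lap, y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 12))                      # 12 objects, 20-dim features
y = np.zeros(12); y[0] = 1.0                       # query relevance seed
scores = regularized_scores(sparse_similarities(X, lam=0.1), y, gamma=0.5)
print(np.round(scores, 3))
```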

  15. THYDE-P2 code: RCS (reactor-coolant system) analysis code

    International Nuclear Information System (INIS)

    Asahi, Yoshiro; Hirano, Masashi; Sato, Kazuo

    1986-12-01

    THYDE-P2, being characterized by the new thermal-hydraulic network model, is applicable to analysis of RCS behaviors in response to various disturbances including LB (large break)-LOCA (loss-of-coolant accident). In LB-LOCA analysis, THYDE-P2 is capable of through calculation from its initiation to complete reflooding of the core without an artificial change in the methods and models. The first half of the report is the description of the methods and models for use in the THYDE-P2 code, i.e., (1) the thermal-hydraulic network model, (2) the various RCS components models, (3) the heat sources in fuel, (4) the heat transfer correlations, (5) the mechanical behavior of clad and fuel, and (6) the steady state adjustment. The second half of the report is the user's manual for the THYDE-P2 code (version SV04L08A) containing the items: (1) the program control, (2) the input requirements, (3) the execution of a THYDE-P2 job, (4) the output specifications, and (5) the sample problem to demonstrate capability of the thermal-hydraulic network model, among other things. (author)

  16. 20 CFR 226.35 - Deductions from regular annuity rate.

    Science.gov (United States)

    2010-04-01

    20 CFR Part 226 (Employees' Benefits), Computing Employee, Spouse, and Divorced Spouse Annuities; Computing a Spouse or Divorced Spouse Annuity. § 226.35 Deductions from regular annuity rate. The regular annuity rate of the spouse and divorced...

  17. The complete mitochondrial genome of the Feral Rock Pigeon (Columba livia breed feral).

    Science.gov (United States)

    Li, Chun-Hong; Liu, Fang; Wang, Li

    2014-10-01

    Abstract In the present work, we report the complete mitochondrial genome sequence of feral rock pigeon for the first time. The total length of the mitogenome was 17,239 bp with the base composition of 30.3% for A, 24.0% for T, 31.9% for C, and 13.8% for G and an A-T (54.3 %)-rich feature was detected. It harbored 13 protein-coding genes, 2 ribosomal RNA genes, 22 transfer RNA genes and 1 non-coding control region (D-loop region). The arrangement of all genes was identical to the typical mitochondrial genomes of pigeon. The complete mitochondrial genome sequence of feral rock pigeon would serve as an important data set of the germplasm resources for further study.

  18. The transition from regular to irregular motions, explained as travel on Riemann surfaces

    International Nuclear Information System (INIS)

    Calogero, F; Santini, P M; Gomez-Ullate, D; Sommacal, M

    2005-01-01

    We introduce and discuss a simple Hamiltonian dynamical system, interpretable as a three-body problem in the (complex) plane and providing the prototype of a mechanism explaining the transition from regular to irregular motions as travel on Riemann surfaces. The interest of this phenomenology-illustrating the onset in a deterministic context of irregular motions-is underlined by its generality, suggesting its eventual relevance to understand natural phenomena and experimental investigations. Here only some of our main findings are reported, without detailing their proofs: a more complete presentation will be published elsewhere

  19. Development and application of best-estimate LWR safety analysis codes

    International Nuclear Information System (INIS)

    Reocreux, M.

    1997-01-01

    This paper is a review of the status and the future orientations of the development and application of best estimate LWR safety analysis codes. The present status of these codes exhibits a large success and almost a complete fulfillment of the objectives which were assigned in the 70s. The applications of Best Estimate codes are numerous and cover a large variety of safety questions. However these applications raised a number of problems. The first ones concern the need to have a better control of the quality of the results. This means requirements on code assessment and on uncertainties evaluation. The second ones concern needs for code development and specifically regarding physical models, numerics, coupling with other codes and programming. The analysis of the orientations for code developments and applications in the next years, shows that some developments should be made without delay in order to solve today questions whereas some others are more long term and should be tested for example in some pilot programmes before being eventually applied in main code development. Each of these development programmes are analyzed in the paper by detailing their main content and their possible interest. (author)

  20. Regularization theory for ill-posed problems selected topics

    CERN Document Server

    Lu, Shuai

    2013-01-01

    This monograph is a valuable contribution to the highly topical and extremely productive field of regularisation methods for inverse and ill-posed problems. The author is an internationally outstanding and accepted mathematician in this field. In his book he offers a well-balanced mixture of basic and innovative aspects. He demonstrates new, differentiated viewpoints, and important examples for applications. The book demonstrates the current developments in the field of regularization theory, such as multiparameter regularization and regularization in learning theory. The book is written for graduate and PhDs

  1. MARS CODE MANUAL VOLUME IV - Developmental Assessment Report

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Jeong, Jae Jun; Hwang, Moon Kyu; Lee, Won Jae; Lee, Young Jin; Lee, Seung Wook; Kim, Kyung Doo; Bae, Sung Won

    2010-02-01

    Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 code with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This assessment manual provides a complete list of code assessment results of the MARS code for various conceptual problems, separate effect tests and integral effect tests. From these validation procedures, the soundness and accuracy of the MARS code have been confirmed. The overall structure of the input is modeled on that of RELAP5, and as such the layout of the manual is very similar to that of the RELAP5 manual. This similitude to RELAP5 input is intentional, as the input scheme allows minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  2. Characterization of the complete mitochondrial genome of Marshallagia marshalli and phylogenetic implications for the superfamily Trichostrongyloidea.

    Science.gov (United States)

    Sun, Miao-Miao; Han, Liang; Zhang, Fu-Kai; Zhou, Dong-Hui; Wang, Shu-Qing; Ma, Jun; Zhu, Xing-Quan; Liu, Guo-Hua

    2018-01-01

    Marshallagia marshalli (Nematoda: Trichostrongylidae) infection can lead to serious parasitic gastroenteritis in sheep, goat, and wild ruminant, causing significant socioeconomic losses worldwide. Up to now, the study concerning the molecular biology of M. marshalli is limited. Herein, we sequenced the complete mitochondrial (mt) genome of M. marshalli and examined its phylogenetic relationship with selected members of the superfamily Trichostrongyloidea using Bayesian inference (BI) based on concatenated mt amino acid sequence datasets. The complete mt genome sequence of M. marshalli is 13,891 bp, including 12 protein-coding genes, 22 transfer RNA genes, and 2 ribosomal RNA genes. All protein-coding genes are transcribed in the same direction. Phylogenetic analyses based on concatenated amino acid sequences of the 12 protein-coding genes supported the monophylies of the families Haemonchidae, Molineidae, and Dictyocaulidae with strong statistical support, but rejected the monophyly of the family Trichostrongylidae. The determination of the complete mt genome sequence of M. marshalli provides novel genetic markers for studying the systematics, population genetics, and molecular epidemiology of M. marshalli and its congeners.

  3. Bar code usage in nuclear materials accountability

    International Nuclear Information System (INIS)

    Mee, W.T.

    1983-01-01

    The age old method of physically taking an inventory of materials by listing each item's identification number has lived beyond its usefulness. In this age of computerization, which offers the local grocery store a quick, sure, and easy means to inventory, it is time for nuclear materials facilities to automate accountability activities. The Oak Ridge Y-12 Plant began investigating the use of automated data collection devices in 1979. At that time, bar code and optical-character-recognition (OCR) systems were reviewed with the purpose of directly entering data into DYMCAS (Dynamic Special Nuclear Materials Control and Accountability System). Both of these systems appeared applicable; however, other automated devices already employed for production control made implementing the bar code and OCR seem improbable. However, the DYMCAS was placed on line for nuclear material accountability, a decision was made to consider the bar code for physical inventory listings. For the past several months a development program has been underway to use a bar code device to collect and input data to the DYMCAS on the uranium recovery operations. Programs have been completed and tested, and are being employed to ensure that data will be compatible and useful. Bar code implementation and expansion of its use for all nuclear material inventory activity in Y-12 is presented

  4. 20 CFR 226.34 - Divorced spouse regular annuity rate.

    Science.gov (United States)

    2010-04-01

    20 CFR Part 226 (Employees' Benefits), Computing Employee, Spouse, and Divorced Spouse Annuities; Computing a Spouse or Divorced Spouse Annuity. § 226.34 Divorced spouse regular annuity rate. The regular annuity rate of a divorced spouse is equal to...

  5. CH-TRU Waste Content Codes

    Energy Technology Data Exchange (ETDEWEB)

    Washington TRU Solutions LLC

    2008-01-16

    The CH-TRU Waste Content Codes (CH-TRUCON) document describes the inventory of the U.S. Department of Energy (DOE) CH-TRU waste within the transportation parameters specified by the Contact-Handled Transuranic Waste Authorized Methods for Payload Control (CH-TRAMPAC). The CH-TRAMPAC defines the allowable payload for the Transuranic Package Transporter-II (TRUPACT-II) and HalfPACT packagings. This document is a catalog of TRUPACT-II and HalfPACT authorized contents and a description of the methods utilized to demonstrate compliance with the CH-TRAMPAC. A summary of currently approved content codes by site is presented in Table 1. The CH-TRAMPAC describes "shipping categories" that are assigned to each payload container. Multiple shipping categories may be assigned to a single content code. A summary of approved content codes and corresponding shipping categories is provided in Table 2, which consists of Tables 2A, 2B, and 2C. Table 2A provides a summary of approved content codes and corresponding shipping categories for the "General Case," which reflects the assumption of a 60-day shipping period as described in the CH-TRAMPAC and Appendix 3.4 of the CH-TRU Payload Appendices. For shipments to be completed within an approximately 1,000-mile radius, a shorter shipping period of 20 days is applicable as described in the CH-TRAMPAC and Appendix 3.5 of the CH-TRU Payload Appendices. For shipments to WIPP from Los Alamos National Laboratory (LANL), Nevada Test Site, and Rocky Flats Environmental Technology Site, a 20-day shipping period is applicable. Table 2B provides a summary of approved content codes and corresponding shipping categories for "Close-Proximity Shipments" (20-day shipping period). For shipments implementing the controls specified in the CH-TRAMPAC and Appendix 3.6 of the CH-TRU Payload Appendices, a 10-day shipping period is applicable. Table 2C provides a summary of approved content codes and corresponding shipping categories for "Controlled Shipments

  6. Fast and compact regular expression matching

    DEFF Research Database (Denmark)

    Bille, Philip; Farach-Colton, Martin

    2008-01-01

    We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem, using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.

  7. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes (''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Cask,'' R.E. Glass, Sandia National Laboratories, 1985; ''Sample Problem Manual for Benchmarking of Cask Analysis Codes,'' R.E. Glass, Sandia National Laboratories, 1988; ''Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages, R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in ''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  8. Genome-wide identification of coding and non-coding conserved sequence tags in human and mouse genomes

    Directory of Open Access Journals (Sweden)

    Maggi Giorgio P

    2008-06-01

    Full Text Available Abstract Background The accurate detection of genes and the identification of functional regions is still an open issue in the annotation of genomic sequences. This problem affects new genomes but also those of very well studied organisms such as human and mouse where, despite the great efforts, the inventory of genes and regulatory regions is far from complete. Comparative genomics is an effective approach to address this problem. Unfortunately it is limited by the computational requirements needed to perform genome-wide comparisons and by the problem of discriminating between conserved coding and non-coding sequences. This discrimination is often based (thus dependent on the availability of annotated proteins. Results In this paper we present the results of a comprehensive comparison of human and mouse genomes performed with a new high throughput grid-based system which allows the rapid detection of conserved sequences and accurate assessment of their coding potential. By detecting clusters of coding conserved sequences the system is also suitable to accurately identify potential gene loci. Following this analysis we created a collection of human-mouse conserved sequence tags and carefully compared our results to reliable annotations in order to benchmark the reliability of our classifications. Strikingly we were able to detect several potential gene loci supported by EST sequences but not corresponding to as yet annotated genes. Conclusion Here we present a new system which allows comprehensive comparison of genomes to detect conserved coding and non-coding sequences and the identification of potential gene loci. Our system does not require the availability of any annotated sequence thus is suitable for the analysis of new or poorly annotated genomes.

  9. Dimensional regularization and analytical continuation at finite temperature

    International Nuclear Information System (INIS)

    Chen Xiangjun; Liu Lianshou

    1998-01-01

    The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed and a method of regularization of infrared divergent integrals and infrared divergent sums is given

  10. Micromagnetic Code Development of Advanced Magnetic Structures Final Report CRADA No. TC-1561-98

    Energy Technology Data Exchange (ETDEWEB)

    Cerjan, Charles J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shi, Xizeng [Read-Rite Corporation, Fremont, CA (United States)

    2017-11-09

    The specific goals of this project were to: Further develop the previously written micromagnetic code DADIMAG (DOE code release number 980017); Validate the code. The resulting code was expected to be more realistic and useful for simulations of magnetic structures of specific interest to Read-Rite programs. We also planned to develop the code further for use in internal LLNL programs. This project complemented LLNL CRADA TC-840-94 between LLNL and Read-Rite, which allowed for simulations of the advanced magnetic head development completed under the CRADA. TC-1561-98 was effective concurrently with LLNL non-exclusive copyright license (TL-1552-98) to Read-Rite for DADIMAG Version 2 executable code.

  11. Senior military officers' educational concerns, motivators and barriers for healthful eating and regular exercise.

    Science.gov (United States)

    Sigrist, Lori D; Anderson, Jennifer E; Auld, Garry W

    2005-10-01

    The increasing trend of overweight in the military, the high cost of health care associated with overweight, and the failure to meet some Healthy People 2000 objectives related to diet identify the need for more appropriate nutrition and fitness education for military personnel. The purpose of this study was to assess senior military officers' concerns on various health topics, educational preferences for nutrition and health topics, eating habits, and barriers and motivators for eating healthfully and exercising regularly. The survey was completed by 52 resident students at the U.S. Army War College. Fitness, weight, and blood cholesterol were top health concerns, and respondents wanted to know more about eating healthfully on the run. The primary barrier to eating healthfully and exercising regularly was lack of time, whereas health and appearance were top motivators. Health interventions for this population should include their topics of concern and should address perceived barriers and motivators.

  12. Final Technical Report: Hydrogen Codes and Standards Outreach

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Karen I.

    2007-05-12

    This project contributed significantly to the development of new codes and standards, both domestically and internationally. The NHA collaborated with codes and standards development organizations to identify technical areas of expertise that would be required to produce the codes and standards that industry and DOE felt were required to facilitate commercialization of hydrogen and fuel cell technologies and infrastructure. NHA staff participated directly in technical committees and working groups where issues could be discussed with the appropriate industry groups. In other cases, the NHA recommended specific industry experts to serve on technical committees and working groups where the need for this specific industry expertise would be on-going, and where this approach was likely to contribute to timely completion of the effort. The project also facilitated dialog between codes and standards development organizations, hydrogen and fuel cell experts, the government and national labs, researchers, code officials, industry associations, as well as the public regarding the timeframes for needed codes and standards, industry consensus on technical issues, procedures for implementing changes, and general principles of hydrogen safety. The project facilitated hands-on learning, as participants in several NHA workshops and technical meetings were able to experience hydrogen vehicles, witness hydrogen refueling demonstrations, see metal hydride storage cartridges in operation, and view other hydrogen energy products.

  13. Determination of national midwifery ethical values and ethical codes: in Turkey.

    Science.gov (United States)

    Ergin, Ayla; Özcan, Müesser; Acar, Zeynep; Ersoy, Nermin; Karahan, Nazan

    2013-11-01

    It is important to define and practice ethical rules and codes for professionalisation. Several national and international associations have determined midwifery ethical codes. In Turkey, ethical rules and codes that would facilitate midwifery becoming professionalised have not yet been determined. This study was planned to contribute to the professionalisation of midwifery by determining national ethical values and codes. A total of 1067 Turkish midwives completed the survey. The most prevalent values of Turkish midwives were care for mother-child health, responsibility and professional adequacy. The preferred professional codes chosen by Turkish midwives were absence of conflicts of interest, respect for privacy, avoidance of deception, reporting of faulty practices, consideration of mothers and newborns as separate beings and prevention of harm. In conclusion, cultural values, beliefs and expectations of society cannot be underestimated, although the international professional values and codes of ethics contribute significantly to professionalisation of the midwifery profession.

  14. Regular and conformal regular cores for static and rotating solutions

    Energy Technology Data Exchange (ETDEWEB)

    Azreg-Aïnou, Mustapha

    2014-03-07

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  15. Regular and conformal regular cores for static and rotating solutions

    International Nuclear Information System (INIS)

    Azreg-Aïnou, Mustapha

    2014-01-01

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  16. Improving the quality of clinical coding: a comprehensive audit model

    Directory of Open Access Journals (Sweden)

    Hamid Moghaddasi

    2014-04-01

    Full Text Available Introduction: The review of medical records with the aim of assessing the quality of codes has long been conducted in different countries. Auditing medical coding, as an instructive approach, could help to review the quality of codes objectively using defined attributes, and this in turn would lead to improvement of the quality of codes. Method: The current study aimed to present a model for auditing the quality of clinical codes. The audit model was formed after reviewing other audit models, considering their strengths and weaknesses. A clear definition was presented for each quality attribute and more detailed criteria were then set for assessing the quality of codes. Results: The audit tool (based on the quality attributes) included legibility, relevancy, completeness, accuracy, definition and timeliness; it led to the development of an audit model for assessing the quality of medical coding. The Delphi technique was then used to confirm the validity of the model. Conclusion: The inclusive audit model designed could provide a reliable and valid basis for assessing the quality of codes considering more quality attributes and their clear definition. The inter-observer check suggested in the method of auditing is of particular importance to ensure the reliability of coding.

  17. Complete Genome Sequence of the Human Gut Symbiont Roseburia hominis

    DEFF Research Database (Denmark)

    Travis, Anthony J.; Kelly, Denise; Flint, Harry J

    2015-01-01

    We report here the complete genome sequence of the human gut symbiont Roseburia hominis A2-183(T) (= DSM 16839(T) = NCIMB 14029(T)), isolated from human feces. The genome is represented by a 3,592,125-bp chromosome with 3,405 coding sequences. A number of potential functions contributing to host...

  18. Low-rank matrix approximation with manifold regularization.

    Science.gov (United States)

    Zhang, Zhenyue; Zhao, Keke

    2013-07-01

    This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization into the matrix factorization. Superior to the graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with a small number of points) and an alternating iterative algorithm with inexact inner iteration (for large scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.
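
    The closed-form solution is not given in this record; the sketch below only illustrates the general shape of a graph-Laplacian-regularized low-rank factorization and a naive gradient-descent solver (the k-NN graph, rank, step size and weight are arbitrary choices, not those of the paper).

      import numpy as np

      def knn_laplacian(X, k=5):
          """Unnormalized Laplacian of a symmetrized k-nearest-neighbour graph over the rows of X."""
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          W = np.zeros_like(d2)
          for i, row in enumerate(d2):
              for j in np.argsort(row)[1:k + 1]:   # skip the point itself
                  W[i, j] = W[j, i] = 1.0
          return np.diag(W.sum(1)) - W

      def manifold_lowrank(X, r=2, lam=0.1, iters=300, lr=1e-3):
          """Gradient descent on ||X - U V^T||_F^2 + lam * tr(U^T L U)."""
          L = knn_laplacian(X)
          rng = np.random.default_rng(0)
          U = rng.standard_normal((X.shape[0], r))
          V = rng.standard_normal((X.shape[1], r))
          for _ in range(iters):
              R = X - U @ V.T                            # residual
              U += lr * (2 * R @ V - 2 * lam * L @ U)    # data term + manifold term
              V += lr * (2 * R.T @ U)
          return U, V

      X = np.vstack([np.random.default_rng(1).normal(c, 0.1, (20, 6)) for c in (0.0, 3.0)])
      U, V = manifold_lowrank(X)
      print(U.shape, V.shape)   # (40, 2) (6, 2)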

  19. The complete mitochondrial genome of the Fancy Pigeon, Columba livia (Columbiformes: Columbidae).

    Science.gov (United States)

    Zhang, Rui-Hua; Xu, Ming-Ju; Wang, Cun-Lian; Xu, Tong; Wei, Dong; Liu, Bao-Jian; Wang, Guo-Hua

    2015-02-01

    The fancy pigeons are domesticated varieties of the rock pigeon developed over many years of selective breeding. In the present work, we report the complete mitochondrial genome sequence of fancy pigeon for the first time. The total length of the mitogenome was 17,233 bp with the base composition of 30.1% for A, 24.0% for T, 31.9% for C, and 14.0% for G and an A-T (54.2 %)-rich feature was detected. It harbored 13 protein-coding genes, 2 ribosomal RNA genes, 22 transfer RNA genes and 1 non-coding control region (D-loop region). The arrangement of all genes was identical to the typical mitochondrial genomes of pigeon. The complete mitochondrial genome sequence of fancy pigeon would serve as an important data set of the germplasm resources for further study.
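
    The base-composition figures quoted in such records are easy to recompute for any sequence; a minimal sketch (the short fragment below merely stands in for the 17,233-bp mitogenome):

      from collections import Counter

      def base_composition(seq):
          """Percentage of each base and the A+T content of a DNA sequence."""
          counts = Counter(seq.upper())
          total = sum(counts[b] for b in "ACGT")
          pct = {b: 100.0 * counts[b] / total for b in "ACGT"}
          pct["A+T"] = pct["A"] + pct["T"]
          return pct

      print(base_composition("ATGCATTACGGATTAACCTA"))  # toy fragment, not the real mitogenome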

  20. The complete mitochondrial genome of the ice pigeon (Columba livia breed ice).

    Science.gov (United States)

    Zhang, Rui-Hua; He, Wen-Xiao

    2015-02-01

    The ice pigeon is a breed of fancy pigeon developed over many years of selective breeding. In the present work, we report the complete mitochondrial genome sequence of ice pigeon for the first time. The total length of the mitogenome was 17,236 bp with the base composition of 30.2% for A, 24.0% for T, 31.9% for C, and 13.9% for G and an A-T (54.2 %)-rich feature was detected. It harbored 13 protein-coding genes, 2 ribosomal RNA genes, 22 transfer RNA genes and 1 non-coding control region (D-loop region). The arrangement of all genes was identical to the typical mitochondrial genomes of pigeon. The complete mitochondrial genome sequence of ice pigeon would serve as an important data set of the germplasm resources for further study.

  1. Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions

    International Nuclear Information System (INIS)

    Lin, Hongxia; Du, Lili

    2013-01-01

    In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263–74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier–Stokes equations Indiana Univ. Math. J. 57 2643–61; 2011 Global regularity criterion for the 3D Navier–Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919–32) for 3D incompressible Navier–Stokes equations. (paper)
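
    For reference, the 3D incompressible MHD system to which such criteria refer is usually written (up to the normalization of the pressure and of the dissipation coefficients) as

      \begin{aligned}
      &\partial_t u + (u\cdot\nabla)u - \nu\,\Delta u + \nabla p = (b\cdot\nabla)b, \\
      &\partial_t b + (u\cdot\nabla)b - \eta\,\Delta b = (b\cdot\nabla)u, \\
      &\nabla\cdot u = 0, \qquad \nabla\cdot b = 0,
      \end{aligned}

    where u is the velocity field, b the magnetic field, p the total pressure, ν the viscosity and η the magnetic diffusivity; the criteria above give conditions on derivatives of u or of p alone that rule out breakdown of strong solutions.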

  2. Fire-accident analysis code (FIRAC) verification

    International Nuclear Information System (INIS)

    Nichols, B.D.; Gregory, W.S.; Fenton, D.L.; Smith, P.R.

    1986-01-01

    The FIRAC computer code predicts fire-induced transients in nuclear fuel cycle facility ventilation systems. FIRAC calculates simultaneously the gas-dynamic, material transport, and heat transport transients that occur in any arbitrarily connected network system subjected to a fire. The network system may include ventilation components such as filters, dampers, ducts, and blowers. These components are connected to rooms and corridors to complete the network for moving air through the facility. An experimental ventilation system has been constructed to verify FIRAC and other accident analysis codes. The design emphasizes network system characteristics and includes multiple chambers, ducts, blowers, dampers, and filters. A larger industrial heater and a commercial dust feeder are used to inject thermal energy and aerosol mass. The facility is instrumented to measure volumetric flow rate, temperature, pressure, and aerosol concentration throughout the system. Aerosol release rates and mass accumulation on filters also are measured. We have performed a series of experiments in which a known rate of thermal energy is injected into the system. We then simulated this experiment with the FIRAC code. This paper compares and discusses the gas-dynamic and heat transport data obtained from the ventilation system experiments with those predicted by the FIRAC code. The numerically predicted data generally are within 10% of the experimental data

  3. Regular-fat dairy and human health

    DEFF Research Database (Denmark)

    Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas

    2016-01-01

    In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular fat dairy products and human health. In an effort to......, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted....

  4. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded

  5. Severe accident analysis code Sampson for impact project

    International Nuclear Information System (INIS)

    Hiroshi, Ujita; Takashi, Ikeda; Masanori, Naitoh

    2001-01-01

    Phase 1 of the IMPACT project (1994-1997), spanning four years, was completed with financial sponsorship from the Japanese government's Ministry of Economy, Trade and Industry. At the end of the phase, demonstration simulations by combinations of up to 11 analysis modules developed for severe accident analysis in the SAMPSON Code were performed and physical models in the code were verified. The SAMPSON prototype was validated by TMI-2 and Phebus-FP test analyses. Many of the empirical correlations and conventional models have been replaced by mechanistic models during Phase 2 (1998-2000). New models for Accident Management evaluation have also been developed. (author)

  6. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    Science.gov (United States)

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  7. Investigation of complete and incomplete fusion in 20Ne + 51V system using recoil range measurement

    Directory of Open Access Journals (Sweden)

    Ali Sabir

    2015-01-01

    Full Text Available Recoil range distributions of evaporation residues, populated in the 20Ne + 51V reaction at Elab ≈ 145 MeV, have been studied to determine the degree of momentum transferred through the complete and incomplete fusion reactions. Evaporation residues (ERs) populated through the complete and incomplete fusion reactions have been identified on the basis of their recoil range in the Al catcher medium. Measured recoil ranges of evaporation residues have been compared with the theoretical values calculated using the code SRIM. Range-integrated cross sections of observed ERs have been compared with the values predicted by the statistical model code PACE4.

  8. Arbitrary parameters in implicit regularization and democracy within perturbative description of 2-dimensional gravitational anomalies

    International Nuclear Information System (INIS)

    Souza, Leonardo A.M.; Sampaio, Marcos; Nemes, M.C.

    2006-01-01

    We show that the Implicit Regularization Technique is useful to display quantum symmetry breaking in a complete regularization independent fashion. Arbitrary parameters are expressed by finite differences between integrals of the same superficial degree of divergence whose value is fixed on physical grounds (symmetry requirements or phenomenology). We study Weyl fermions on a classical gravitational background in two dimensions and show that, assuming Lorentz symmetry, the Weyl and Einstein Ward identities reduce to a set of algebraic equations for the arbitrary parameters which allows us to study the Ward identities on equal footing. We conclude in a renormalization independent way that the axial part of the Einstein Ward identity is always violated. Moreover whereas we can preserve the pure tensor part of the Einstein Ward identity at the expense of violating the Weyl Ward identities we may as well violate the former and preserve the latter

  9. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  10. Lattice Boltzmann method fundamentals and engineering applications with computer codes

    CERN Document Server

    Mohamad, A A

    2014-01-01

    Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.

  11. Anxiety, Depression and Emotion Regulation Among Regular Online Poker Players.

    Science.gov (United States)

    Barrault, Servane; Bonnaire, Céline; Herrmann, Florian

    2017-12-01

    Poker is a type of gambling that has specific features, including the need to regulate one's emotion to be successful. The aim of the present study is to assess emotion regulation, anxiety and depression in a sample of regular poker players, and to compare the results of problem and non-problem gamblers. 416 regular online poker players completed online questionnaires including sociodemographic data, measures of problem gambling (CPGI), anxiety and depression (HAD scale), and emotion regulation (ERQ). The CPGI was used to divide participants into four groups according to the intensity of their gambling practice (non-problem, low risk, moderate risk and problem gamblers). Anxiety and depression were significantly higher among severe-problem gamblers than among the other groups. Both significantly predicted problem gambling. On the other hand, there was no difference between groups in emotion regulation (cognitive reappraisal and expressive suppression), which was linked neither to problem gambling nor to anxiety and depression (except for cognitive reappraisal, which was significantly correlated to anxiety). Our results underline the links between anxiety, depression and problem gambling among poker players. If emotion regulation is involved in problem gambling among poker players, as strongly suggested by data from the literature, the emotion regulation strategies we assessed (cognitive reappraisal and expressive suppression) may not be those involved. Further studies are thus needed to investigate the involvement of other emotion regulation strategies.

  12. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Science.gov (United States)

    Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong

    2012-01-01

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.
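
    The DV-RBF and FJ-RBF methods themselves are not spelled out in this record; the sketch below only illustrates the general idea of an RBF-kernel classifier over alignment-free k-mer features for species assignment (scikit-learn is assumed to be available, and the sequences and labels are invented).

      import numpy as np
      from itertools import product
      from sklearn.svm import SVC

      KMERS = ["".join(p) for p in product("ACGT", repeat=3)]

      def kmer_features(seq):
          """Simple (non-overlapping) 3-mer count vector, normalized to frequencies."""
          v = np.array([seq.count(k) for k in KMERS], dtype=float)
          return v / max(v.sum(), 1.0)

      train_seqs = ["ATGCGTACGTTAGC" * 3, "ATGCGAACGTTAGC" * 3, "TTTTGGGCCCAAAT" * 3, "TTTAGGGCCCAAAT" * 3]
      labels     = ["species_A", "species_A", "species_B", "species_B"]
      clf = SVC(kernel="rbf", gamma="scale").fit([kmer_features(s) for s in train_seqs], labels)
      print(clf.predict([kmer_features("ATGCGTACGTTAGC" * 3)]))   # -> ['species_A']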

  13. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Directory of Open Access Journals (Sweden)

    Ai-bing Zhang

    Full Text Available Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.

  14. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is the linear least-squares criterion. Although this criterion enjoyed wide popularity in many areas due to its attractive properties, it appeared to suffer from some shortcomings. Alternative optimization criteria, as a result, have been proposed. These new criteria allowed, in one way or another, the incorporation of further prior information into the problem. Among these alternative criteria is the regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular value structure of the matrix. As a result, the new modified model is expected to provide a better, more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that go in search of minimizing the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
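
    COPRA itself is not reproduced here; the snippet below only shows the baseline regularized least-squares estimate whose parameter such methods select, with a few arbitrary values of the regularization parameter for illustration.

      import numpy as np

      def rls(A, y, lam):
          """Regularized least-squares estimate x = argmin ||A x - y||^2 + lam ||x||^2."""
          n = A.shape[1]
          return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 10))
      x_true = rng.standard_normal(10)
      y = A @ x_true + 0.1 * rng.standard_normal(50)
      for lam in (0.0, 0.1, 1.0):                 # lam would be chosen by a COPRA-like rule
          print(lam, np.linalg.norm(rls(A, y, lam) - x_true))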

  15. Benefits of regular walking exercise in advanced pre-dialysis chronic kidney disease.

    Science.gov (United States)

    Kosmadakis, George C; John, Stephen G; Clapp, Emma L; Viana, Joao L; Smith, Alice C; Bishop, Nicolette C; Bevington, Alan; Owen, Paul J; McIntyre, Christopher W; Feehally, John

    2012-03-01

    There is increasing evidence of the benefit of regular physical exercise in a number of long-term conditions including chronic kidney disease (CKD). In CKD, this evidence has mostly come from studies in end stage patients receiving regular dialysis. There is little evidence in pre-dialysis patients with CKD Stages 4 and 5. A prospective study compared the benefits of 6 months regular walking in 40 pre-dialysis patients with CKD Stages 4 and 5. Twenty of them were the exercising group and were compared to 20 patients who were continuing with usual physical activity. In addition, the 40 patients were randomized to receive additional oral sodium bicarbonate (target venous bicarbonate 29 mmol/L) or continue with previous sodium bicarbonate treatment (target 24 mmol/L). Improvements noted after 1 month were sustained to 6 months in the 18 of 20 who completed the exercise study. These included improvements in exercise tolerance (reduced exertion to achieve the same activity), weight loss, improved cardiovascular reactivity, avoiding an increase in blood pressure medication and improvements in quality of health and life and uraemic symptom scores assessed by questionnaire. Sodium bicarbonate supplementation did not produce any significant alterations. This study provides further support for the broad benefits of aerobic physical exercise in CKD. More studies are needed to understand the mechanisms of these benefits, to study whether resistance exercise will add to the benefit and to evaluate strategies to promote sustained lifestyle changes, that could ensure continued increase in habitual daily physical activity levels.

  16. Prevalence of Dental Erosion among the Young Regular Swimmers in Kaunas, Lithuania

    Directory of Open Access Journals (Sweden)

    Andrius Zebrauskas

    2014-07-01

    Full Text Available Objectives: To determine prevalence of dental erosion among competitive swimmers in Kaunas, the second largest city in Lithuania. Material and Methods: The study was designed as a cross-sectional survey, with a questionnaire and clinical examination protocols. The participants were 12 - 25 year-old swimmers regularly practicing in the swimming pools of Kaunas. Of the total of 132 participants, there were 76 (12 - 17 year-old) and 56 (18 - 25 year-old) individuals in Groups 1 and 2, respectively. Participants were examined for dental erosion, using a portable dental unit equipped with fibre-optic light, compressed air and suction, and standard dental instruments for oral inspection. The Lussi index was applied for recording dental erosion. The completed questionnaires, focused on the common erosion risk factors, were returned by all participants. Results: Dental erosion was found in 25% of the 12 - 17 year-olds, and in 50% of the 18 - 25 year-olds. The mean value of the surfaces with erosion was 6.31 (SD 4.37). All eroded surfaces were evaluated as grade 1. Swimming training duration and the participants’ age correlated positively (Kendall correlation, r = 0.65, P < 0.001), meaning that older swimmers had practiced for a longer period. No significant correlation between occurrence of dental erosion and the analyzed risk factors (gastroesophageal reflux disease, frequent vomiting, dry mouth, regular intake of acidic medicines, carbonated drinks) was found in either study group. Conclusions: Prevalence of dental erosion of very low degree was high among the regular swimmers in Kaunas, and was significantly related to swimmers’ age.

  17. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).
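
    Schematically, the variational model described here has the form (with W_2 the Wasserstein distance, ρ_n the empirical measure of the samples, λ > 0, and R a regularization functional such as total variation; the exact functional used in the paper may differ):

      \hat{\rho} \;\in\; \operatorname*{arg\,min}_{\rho \ge 0,\ \int \rho\,dx = 1} \; \tfrac{1}{2}\, W_2^2(\rho, \rho_n) \;+\; \lambda\, R(\rho), \qquad R(\rho) = \int |\nabla\rho|\,dx .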

  18. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
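
    A minimal sketch of the self-complementarity property mentioned above: a trinucleotide code X is self-complementary when the reverse complement of every word of X again lies in X (the toy code below is made up and is not the maximal code identified in genes).

      COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

      def reverse_complement(word):
          return "".join(COMP[b] for b in reversed(word))

      def is_self_complementary(code):
          """True if the reverse complement of every trinucleotide in the code is also in the code."""
          return all(reverse_complement(w) in code for w in code)

      X = {"AAC", "GTT", "GAC", "GTC"}         # toy example: closed under reverse complement
      print(is_self_complementary(X))          # True
      print(is_self_complementary({"AAC"}))    # False (its complement GTT is missing)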

  19. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross correlation (CC) and practical code length to support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are getting more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase induced intensity noise (PIIN). In this paper, we have proposed new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC based on the Jordan block matrix with simple algebraic ways. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and could support long spans with high data rates.

  20. Code compression for VLIW embedded processors

    Science.gov (United States)

    Piccinelli, Emiliano; Sannino, Roberto

    2004-04-01

    The implementation of processors for embedded systems implies various issues: main constraints are cost, power dissipation and die area. On the other side, new terminals perform functions that require more computational flexibility and effort. Long code streams must be loaded into memories, which are expensive and power consuming, to run on DSPs or CPUs. To overcome this issue, the "SlimCode" proprietary algorithm presented in this paper (patent pending technology) can reduce the dimensions of the program memory. It can run offline and work directly on the binary code the compiler generates, by compressing it and creating a new binary file, about 40% smaller than the original one, to be loaded into the program memory of the processor. The decompression unit will be a small ASIC, placed between the Memory Controller and the System bus of the processor, keeping unchanged the internal CPU architecture: this implies that the methodology is completely transparent to the core. We present comparisons versus the state-of-the-art IBM Codepack algorithm, along with its architectural implementation into the ST200 VLIW family core.

  1. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature for planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance under rotation and parameterization.
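
    A typical example of such smoothness energies (not necessarily the exact differential stabilizer proposed in the paper) is the classical first- and second-order curve energy

      E(v) \;=\; \int_0^1 \Bigl( \alpha(s)\,\lVert v'(s)\rVert^2 \;+\; \beta(s)\,\lVert v''(s)\rVert^2 \Bigr)\, ds ,

    the paper's argument being that such an energy should be invariant under Euclidean transformations and under reparameterization, and should make circles the curves of maximum smoothness.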

  2. Three regularities of recognition memory: the role of bias.

    Science.gov (United States)

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.

  3. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list decoding algorithm for matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units.
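
    For orientation, the matrix-product construction referred to here takes codes C_1, ..., C_s of length n over a field F_q (nested, C_1 ⊇ ... ⊇ C_s, in the setting of the abstract) and an s×l matrix A over F_q, and forms

      [C_1 \cdots C_s]\cdot A \;=\; \bigl\{\, [\,c_1 \ \cdots \ c_s\,]\, A \;:\; c_i \in C_i \,\bigr\} \;\subseteq\; \mathbb{F}_q^{\,n l},

    where each c_i is written as a column of length n; the list decoder above applies when A is non-singular by columns.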

  4. The complete mitochondrial genome of the land snail Cornu aspersum (Helicidae: Mollusca): intra-specific divergence of protein-coding genes and phylogenetic considerations within Euthyneura.

    Directory of Open Access Journals (Sweden)

    Juan Diego Gaitán-Espitia

    Full Text Available The complete sequences of three mitochondrial genomes from the land snail Cornu aspersum were determined. The mitogenome has a length of 14050 bp, and it encodes 13 protein-coding genes, 22 transfer RNA genes and two ribosomal RNA genes. It also includes nine small intergene spacers, and a large AT-rich intergenic spacer. The intra-specific divergence analysis revealed that COX1 has the lowest genetic differentiation, while the most divergent genes were NADH1, NADH3 and NADH4. With the exception of Euhadra herklotsi, the structural comparisons showed the same gene order within the family Helicidae, and nearly identical gene organization to that found in the order Pulmonata. Phylogenetic reconstruction recovered Basommatophora as a polyphyletic group, and Eupulmonata and Pulmonata as paraphyletic groups. Bayesian and Maximum Likelihood analyses showed that C. aspersum is a close relative of Cepaea nemoralis and, together with the other Helicidae species, forms a sister group to Albinaria caerulea, supporting the monophyly of the Stylommatophora clade.

  5. Predictive features of persistent activity emergence in regular spiking and intrinsic bursting model neurons.

    Directory of Open Access Journals (Sweden)

    Kyriaki Sidiropoulou

    Full Text Available Proper functioning of working memory involves the expression of stimulus-selective persistent activity in pyramidal neurons of the prefrontal cortex (PFC), which refers to neural activity that persists for seconds beyond the end of the stimulus. The mechanisms which PFC pyramidal neurons use to discriminate between preferred vs. neutral inputs at the cellular level are largely unknown. Moreover, the presence of pyramidal cell subtypes with different firing patterns, such as regular spiking and intrinsic bursting, raises the question as to what their distinct role might be in persistent firing in the PFC. Here, we use a compartmental modeling approach to search for discriminatory features in the properties of incoming stimuli to a PFC pyramidal neuron and/or its response that signal which of these stimuli will result in persistent activity emergence. Furthermore, we use our modeling approach to study cell-type specific differences in persistent activity properties, via implementing a regular spiking (RS) and an intrinsic bursting (IB) model neuron. We identify synaptic location within the basal dendrites as a feature of stimulus selectivity. Specifically, persistent activity-inducing stimuli consist of activated synapses that are located more distally from the soma compared to non-inducing stimuli, in both model cells. In addition, the action potential (AP) latency and the first few inter-spike-intervals of the neuronal response can be used to reliably detect inducing vs. non-inducing inputs, suggesting a potential mechanism by which downstream neurons can rapidly decode the upcoming emergence of persistent activity. While the two model neurons did not differ in the coding features of persistent activity emergence, the properties of persistent activity, such as the firing pattern and the duration of temporally-restricted persistent activity were distinct. Collectively, our results pinpoint to specific features of the neuronal response to a given

  6. Method of transferring regular shaped vessel into cell

    International Nuclear Information System (INIS)

    Murai, Tsunehiko.

    1997-01-01

    The present invention concerns a method of transferring regular shaped vessels from a non-contaminated area to a contaminated cell. A passage hole for allowing the regular shaped vessels to pass in the longitudinal direction is formed in a partitioning wall at the bottom of the contaminated cell. A plurality of regular shaped vessels are stacked in multiple stages in a vertical direction from the non-contaminated area present below the passage hole, allowed to pass while being urged, and transferred successively into the contaminated cell. As a result, since they are transferred while substantially closing the passage hole by the regular shaped vessels, radiation rays or contaminated materials are prevented from discharging from the contaminated cell to the non-contaminated area. Since there is no requirement to open/close an isolation door frequently, the workability upon transfer can be improved remarkably. In addition, since a sealing member for sealing the gap between the regular shaped vessel passing through the passage hole and the partitioning wall of the bottom is disposed at the passage hole, the contaminated materials in the contaminated cell can be prevented from discharging through the gap to the non-contaminated area. (N.H.)

  7. Complete mitochondrial genome of a wild Siberian tiger.

    Science.gov (United States)

    Sun, Yujiao; Lu, Taofeng; Sun, Zhaohui; Guan, Weijun; Liu, Zhensheng; Teng, Liwei; Wang, Shuo; Ma, Yuehui

    2015-01-01

    In this study, the complete mitochondrial genome of Siberian tiger (Panthera tigris altaica) was sequenced, using muscle tissue obtained from a male wild tiger. The total length of the mitochondrial genome is 16,996 bp. The genome structure of this tiger is in accordance with other Siberian tigers and it contains 12S rRNA gene, 16S rRNA gene, 22 tRNA genes, 13 protein-coding genes, and 1 control region.

  8. Automatic Constraint Detection for 2D Layout Regularization.

    Science.gov (United States)

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.

  9. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong

    2015-09-18

    In this paper, we address the problem of constraint detection for layout regularization. As layout we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate the layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. In our results, we evaluate the proposed framework on a variety of input layouts from different applications, which demonstrates our method has superior performance to the state of the art.
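
    As a toy version of the detect-then-enforce idea (far simpler than the quadratic program above), nearly equal x-coordinates of layout elements can be detected by clustering within a tolerance and then snapped to a common value; the tolerance and data below are arbitrary.

      def regularize_coords(xs, tol=2.0):
          """Cluster nearly equal coordinates (within tol) and snap each cluster to its mean."""
          order = sorted(range(len(xs)), key=lambda i: xs[i])
          out, cluster = list(xs), [order[0]]
          for i in order[1:]:
              if xs[i] - xs[cluster[-1]] <= tol:
                  cluster.append(i)
              else:
                  mean = sum(xs[j] for j in cluster) / len(cluster)
                  for j in cluster:
                      out[j] = mean
                  cluster = [i]
          mean = sum(xs[j] for j in cluster) / len(cluster)
          for j in cluster:
              out[j] = mean
          return out

      print(regularize_coords([10.0, 11.5, 30.2, 29.1, 50.0]))  # -> [10.75, 10.75, 29.65, 29.65, 50.0]

    The same one-dimensional snapping can be applied to y-coordinates, widths and spacings; the papers above replace this greedy averaging by a global quadratic program over all automatically detected alignment, size and distance constraints.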

  10. Lavrentiev regularization method for nonlinear ill-posed problems

    International Nuclear Information System (INIS)

    Kinh, Nguyen Van

    2002-10-01

    In this paper we shall be concerned with the Lavrentiev regularization method to reconstruct solutions x_0 of nonlinear ill-posed problems F(x) = y_0, where instead of y_0 only noisy data y^δ ∈ X with ||y^δ - y_0|| ≤ δ are given, and F: X → X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method, regularized solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x - x*) = y^δ with some initial guess x*. Assuming certain conditions concerning the operator F and the smoothness of the element x* - x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly. (author)
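
    A scalar toy illustration of the regularized equation F(x) + α(x − x*) = y^δ (here with the monotone choice F(x) = x³, Newton's method as the inner solver, and an arbitrary noise level and parameter sequence):

      def lavrentiev(F, dF, y_delta, alpha, x_star, x0=0.0, iters=50):
          """Solve F(x) + alpha*(x - x_star) = y_delta by Newton's method."""
          x = x0
          for _ in range(iters):
              g = F(x) + alpha * (x - x_star) - y_delta
              x -= g / (dF(x) + alpha)
          return x

      F, dF = lambda x: x ** 3, lambda x: 3 * x ** 2
      x_true = 1.0
      y_delta = F(x_true) + 0.05          # noisy data, delta = 0.05
      for alpha in (1.0, 0.1, 0.01):      # smaller alpha -> weaker regularization
          print(alpha, lavrentiev(F, dF, y_delta, alpha, x_star=0.5))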

  11. Tri-Lab Co-Design Milestone: In-Depth Performance Portability Analysis of Improved Integrated Codes on Advanced Architecture.

    Energy Technology Data Exchange (ETDEWEB)

    Hoekstra, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Simon David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Richards, David [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bergen, Ben [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-01

    This milestone is a tri-lab deliverable supporting ongoing Co-Design efforts impacting applications in the Integrated Codes (IC) and Advanced Technology Development and Mitigation (ATDM) program elements. In FY14, the tri-labs looked at porting proxy applications to technologies of interest for ATS procurements. In FY15, a milestone was completed evaluating proxy applications in multiple programming models, and in FY16, a milestone was completed focusing on the migration of lessons learned back into production code development. This year, the co-design milestone focuses on extracting the knowledge gained and/or code revisions back into production applications.

  12. Complete genome sequence of Parvibaculum lavamentivorans type strain (DS-1(T)).

    Science.gov (United States)

    Schleheck, David; Weiss, Michael; Pitluck, Sam; Bruce, David; Land, Miriam L; Han, Shunsheng; Saunders, Elizabeth; Tapia, Roxanne; Detter, Chris; Brettin, Thomas; Han, James; Woyke, Tanja; Goodwin, Lynne; Pennacchio, Len; Nolan, Matt; Cook, Alasdair M; Kjelleberg, Staffan; Thomas, Torsten

    2011-12-31

    Parvibaculum lavamentivorans DS-1(T) is the type species of the novel genus Parvibaculum in the novel family Rhodobiaceae (formerly Phyllobacteriaceae) of the order Rhizobiales of Alphaproteobacteria. Strain DS-1(T) is a non-pigmented, aerobic, heterotrophic bacterium and represents the first tier member of environmentally important bacterial communities that catalyze the complete degradation of synthetic laundry surfactants. Here we describe the features of this organism, together with the complete genome sequence and annotation. The 3,914,745 bp long genome with its predicted 3,654 protein coding genes is the first completed genome sequence of the genus Parvibaculum, and the first genome sequence of a representative of the family Rhodobiaceae.

  13. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.

  14. A combined N-body and hydrodynamic code for modeling disk galaxies

    International Nuclear Information System (INIS)

    Schroeder, M.C.

    1989-01-01

    A combined N-body and hydrodynamic computer code for the modeling of two dimensional galaxies is described. The N-body portion of the code is used to calculate the motion of the particle component of a galaxy, while the hydrodynamics portion of the code is used to follow the motion and evolution of the fluid component. A complete description of the numerical methods used for each portion of the code is given. Additionally, the proof tests of the separate and combined portions of the code are presented and discussed. Finally, a discussion of the topics researched with the code and results obtained is presented. These include: the measurement of stellar relaxation times in disk galaxy simulations; the effects of two-armed spiral perturbations on stable axisymmetric disks; the effects of the inclusion of an interstellar medium (ISM) on the stability of disk galaxies; and the effect of the inclusion of stellar evolution on disk galaxy simulations
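
    The particle (N-body) part of such a scheme amounts, in its simplest form, to integrating softened Newtonian gravity for the star particles; a minimal two-dimensional leapfrog sketch in toy units (direct summation, no hydrodynamics):

      import numpy as np

      def accelerations(pos, mass, eps=0.05):
          """Softened direct-summation gravitational accelerations (G = 1)."""
          d = pos[None, :, :] - pos[:, None, :]                # pairwise displacement vectors
          r3 = (np.sum(d ** 2, axis=-1) + eps ** 2) ** 1.5
          np.fill_diagonal(r3, np.inf)                         # no self-force
          return np.sum(mass[None, :, None] * d / r3[:, :, None], axis=1)

      rng = np.random.default_rng(0)
      pos = rng.standard_normal((100, 2))                      # 100 particles in a 2D disk
      vel = np.zeros_like(pos)
      mass = np.full(100, 1.0 / 100)
      dt = 0.01
      for _ in range(1000):                                    # leapfrog (kick-drift-kick)
          vel += 0.5 * dt * accelerations(pos, mass)
          pos += dt * vel
          vel += 0.5 * dt * accelerations(pos, mass)
      print(pos.mean(axis=0))        # total momentum is conserved, so the centre of mass stays put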

  15. Biometrics encryption combining palmprint with two-layer error correction codes

    Science.gov (United States)

    Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang

    2017-07-01

    To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprint with two-layer error correction codes is proposed. Firstly, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding. The first layer uses a convolutional code to correct burst errors. The second layer uses a cyclic code to correct random errors. Then, the palmprint features are extracted from the palmprint images. Next, the encoded keys and the palmprint features are fused together by an XOR operation, and the result is stored in a smart card. Finally, the original keys are recovered by XORing the information in the smart card with the user's palmprint features and then decoding with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor, and has higher accuracy than a single biometric factor.
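
    A toy, fuzzy-commitment-style sketch of the bind/recover idea: a simple repetition code stands in for the convolutional + cyclic two-layer scheme, and random bit vectors stand in for the palmprint features.

      import numpy as np

      REP = 5  # repetition factor; a stand-in for the two-layer error-correcting code

      def encode(key_bits):                  # each key bit repeated REP times
          return np.repeat(key_bits, REP)

      def decode(codeword_bits):             # majority vote per group of REP bits
          return (codeword_bits.reshape(-1, REP).sum(axis=1) > REP // 2).astype(int)

      rng = np.random.default_rng(0)
      key = rng.integers(0, 2, 16)
      features_enroll = rng.integers(0, 2, 16 * REP)         # "palmprint features" at enrolment
      helper = encode(key) ^ features_enroll                 # stored on the smart card

      features_query = features_enroll.copy()
      features_query[[0, 7, 12]] ^= 1                        # a few bit errors in the fresh scan
      recovered = decode(helper ^ features_query)
      print(np.array_equal(recovered, key))                  # True: errors stay within correction capacity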

  16. Online Manifold Regularization by Dual Ascending Procedure

    Directory of Open Access Journals (Sweden)

    Boliang Sun

    2013-01-01

    Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is the key to transferring manifold regularization from the offline to the online setting in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way for the design and analysis of online manifold regularization algorithms.
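
    The paper's algorithms work in the dual via the Fenchel conjugate of the hinge loss; the Python sketch below is instead a simplified primal stochastic-gradient variant of online manifold regularization with the simplest possible buffering strategy, included only to make the objective concrete. The kernel, step sizes, buffer size and regularization weights are assumptions.

      import numpy as np

      def rbf(x, z, gamma=1.0):
          return np.exp(-gamma * np.sum((x - z) ** 2))

      def online_manifold_sgd(stream, dim, lam=0.01, lam_m=0.1, eta=0.1, buffer_size=50):
          """Online manifold regularization for a linear model f(x) = w.x, primal SGD form.

          stream yields (x, y) with y in {-1, +1} for labeled points and y = None for unlabeled ones.
          The manifold term pulls predictions on similar buffered points toward each other.
          """
          w = np.zeros(dim)
          buffer = []
          for t, (x, y) in enumerate(stream, start=1):
              grad = lam * w                                   # Tikhonov part
              if y is not None and y * (w @ x) < 1:            # hinge loss on labeled examples
                  grad -= y * x
              for z in buffer:                                 # manifold part: similarity-weighted (f(x) - f(z))^2
                  grad += 2 * lam_m * rbf(x, z) * (w @ x - w @ z) * (x - z)
              w -= (eta / np.sqrt(t)) * grad
              buffer.append(x)                                 # naive buffering: keep only the most recent points
              if len(buffer) > buffer_size:
                  buffer.pop(0)
          return w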

  17. Experimental validation of the containment codes ASTARTE and SEURBNUK

    International Nuclear Information System (INIS)

    Kendall, K.C.; Arnold, L.A.; Broadhouse, B.J.; Jones, A.; Yerkess, A.; Benuzzi, A.

    1979-10-01

    The fast reactor containment codes ASTARTE and SEURBNUK are being validated against data from the COVA series of small-scale experiments performed jointly by the UKAEA and JRC Ispra. The experimental programme is nearly complete, and data are given. (U.K.)

  18. The complete mitochondrial genome sequence of Oceanic whitetip shark, Carcharhinus longimanus (Carcharhiniformes: Carcharhinidae).

    Science.gov (United States)

    Li, Weiwen; Dai, Xiaojie; Xu, Qianghua; Wu, Feng; Gao, Chunxia; Zhang, Yanbo

    2016-05-01

    The complete mitochondrial DNA sequence of Carcharhinus longimanus was determined and analyzed. The complete mtDNA genome sequence of C. longimanus was 16,706 bp in length. It contained 22 tRNA genes, 2 rRNA genes, 13 protein-coding genes and 2 non-coding regions: the control region (D-loop) and the origin of light-strand replication (OL). The complete mitogenome sequence information of C. longimanus can provide useful data for further studies on molecular systematics, stock evaluation, taxonomic status and conservation genetics.

  19. Documentation and verification of the SHAFT code

    International Nuclear Information System (INIS)

    St John, C.M.

    1991-12-01

    The SHAFT code incorporates equations to compute stresses in a shaft liner when the rock through which a shaft passes is subject to known three-dimensional states of stress or strain. The deformation modes considered are hoop deformation, axial deformation, and shear on a plane normal to the shaft axis. Interaction between the liner and the soil and rock is considered, and it is assumed that the liner is in place before loading is applied. This code is intended to be used interactively but creates a permanent record complete with necessary quality assurance information. The code has been carefully verified for the case of generalized plane strain, in which an arbitrary axial strain can be defined. It may also be used for plane stress analysis. Output is given in the form of stresses at selected sample points in the liner and the rock and a simple graphical representation of the distribution of stress through the liner. 12 figs., 13 tabs
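
    The abstract does not reproduce the liner equations themselves. Purely as an illustration of the kind of closed-form elasticity relation that underlies the hoop-deformation mode of a cylindrical liner, the Python sketch below evaluates the classical Lame thick-walled-cylinder solution for hoop stress; the function name and the tension-positive sign convention are assumptions of this example, not taken from SHAFT.

      def lame_hoop_stress(r, a, b, p_in, p_out):
          """Hoop stress at radius r in a thick-walled cylinder of inner radius a and outer radius b,
          loaded by internal pressure p_in and external pressure p_out (classical Lame solution)."""
          A = (p_in * a**2 - p_out * b**2) / (b**2 - a**2)
          B = (p_in - p_out) * a**2 * b**2 / (b**2 - a**2)
          return A + B / r**2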

  20. A two dimensional code (R,Z) for nuclear reactor analysis and its application to the UAR-RI reactor

    International Nuclear Information System (INIS)

    Bishay, S.T.; Mikhail, I.F.I.; Gaafar, M.A.; Michaiel, M.L.; Nassar, S.F.

    1988-01-01

    A detailed study is given of fuel consumption in completely reflected cylindrical reactors. A two-group, two-dimensional (r,z) code is developed to carry out the required calculations. The code is applied to the UAR-RI reactor and the results are found to be in complete agreement with the experimental observations and with the theoretical interpretations. 7 fig., 12 tab
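
    For context, the steady-state two-group neutron diffusion equations that such an (r,z) code discretizes can be written in the standard textbook form sketched below (fast group 1, thermal group 2, no upscattering); this is a generic form assumed for illustration, not a transcription of the report's own equations.

      % Two-group diffusion in cylindrical (r,z) geometry; generic textbook form, assumed for illustration.
      \begin{aligned}
      -\frac{1}{r}\frac{\partial}{\partial r}\!\left(r D_1 \frac{\partial \phi_1}{\partial r}\right)
      -\frac{\partial}{\partial z}\!\left(D_1 \frac{\partial \phi_1}{\partial z}\right)
      +\left(\Sigma_{a1}+\Sigma_{1\to 2}\right)\phi_1
      &= \frac{1}{k_{\mathrm{eff}}}\left(\nu\Sigma_{f1}\,\phi_1+\nu\Sigma_{f2}\,\phi_2\right),\\[4pt]
      -\frac{1}{r}\frac{\partial}{\partial r}\!\left(r D_2 \frac{\partial \phi_2}{\partial r}\right)
      -\frac{\partial}{\partial z}\!\left(D_2 \frac{\partial \phi_2}{\partial z}\right)
      +\Sigma_{a2}\,\phi_2
      &= \Sigma_{1\to 2}\,\phi_1 .
      \end{aligned}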